Quidquid latine dictum sit, altum sonatur, - Whatever is said in Latin sounds profound.
We will continue having lots of useless documentation to save our asses, tons of pointless meetings, and fruitless and endless discussions until we find out why no work is getting done and the code is unmaintainable. Meanwhile, I will keep using unrelated words and single letters to name functions and variables, recycling variables and making as many of them static as I possibly can, documenting the obvious but never writing a variable declaration or explaining what a section is attempting to accomplish, and avoiding testing like the plague, Apocalypse, Anawim, #justtothepoint.
Recall that a field is a commutative ring with unity in which every nonzero element has a multiplicative inverse, e.g., every finite integral domain, ℤp (p prime), ℚ, ℝ, and ℂ. A field has characteristic zero or characteristic p with p prime. Note that F[x] itself is not a field (x has no multiplicative inverse), but F[x]/⟨p(x)⟩ is a field whenever p(x) is irreducible over F.
Definition. Let F be a field and let V be a set equipped with two operations: vector addition, + : V × V → V, and scalar multiplication, · : F × V → V.
We say V is a vector space over F if (V, +) is an Abelian group under addition, and ∀a, b ∈ F, u, v ∈ V the following conditions hold: a(u + v) = au + av, (a + b)v = av + bv, a(bv) = (ab)v, and 1v = v.
Definition. Suppose V is a vector space over a field F and let S be a set of vectors from V, S = {v1, v2, ···, vn}. We say S is linearly dependent if there are elements α1, α2, ···, αn ∈ F, not all zero, satisfying the equation α1v1 + α2v2 + ··· + αnvn = 0. We say S is linearly independent if α1v1 + α2v2 + ··· + αnvn = 0 ⇒ α1 = α2 = ··· = αn = 0.
Examples:
ℝ2, S = {(1, 2), (3, -4)} is linearly independent: $x(\begin{smallmatrix}1\\ 2\end{smallmatrix}) + y(\begin{smallmatrix}3\\ -4\end{smallmatrix})=(\begin{smallmatrix}0\\ 0\end{smallmatrix}) ⇒ (\begin{smallmatrix}x+3y\\ 2x-4y\end{smallmatrix})=(\begin{smallmatrix}0\\ 0\end{smallmatrix})$ ⇒ x + 3y = 0, 2x - 4y = 0 ⇒ x = y = 0.
ℝ3, S = {(1, 0, 0), (1, 0, 1), (1, 1, 1)} is linearly independent because a(1, 0, 0) + b(1, 0, 1) + c(1, 1, 1) = (a + b + c, c, b + c) = (0, 0, 0) ⇒ a = b = c = 0.
ℝ3, S = {(1, 2, 1), (0, 1, 2), (1, 1, 0)} is linearly independent because a(1, 2, 1) + b(0, 1, 2) + c(1, 1, 0) = (a + c, 2a + b + c, a + 2b) = (0, 0, 0) ⇒ a = b = c = 0.
ℝ3, S = {(1, 1, 0), (0, 1, 1), (2, 3, 1)} is linearly dependent, e.g., a = -2, b = -1, c = 1:
$-2(\begin{smallmatrix}1\\ 1\\ 0\end{smallmatrix}) -1(\begin{smallmatrix}0\\ 1\\ 1\end{smallmatrix}) + (\begin{smallmatrix}2\\ 3\\ 1\end{smallmatrix})=(\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix})$
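These independence checks can be verified computationally: a finite set of vectors in Fn is linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Columns are the vectors of two of the sets S from the examples above.
S_indep = np.array([[1., 1., 1.],
                    [0., 0., 1.],
                    [0., 1., 1.]])   # {(1,0,0), (1,0,1), (1,1,1)}

S_dep = np.array([[1., 0., 2.],
                  [1., 1., 3.],
                  [0., 1., 1.]])     # {(1,1,0), (0,1,1), (2,3,1)}

def is_linearly_independent(M):
    """Column vectors are independent iff rank(M) equals the number
    of columns (only the trivial combination yields the zero vector)."""
    return np.linalg.matrix_rank(M) == M.shape[1]

print(is_linearly_independent(S_indep))      # True
print(is_linearly_independent(S_dep))        # False
# The dependence relation -2*v1 - 1*v2 + 1*v3 = 0 from the example:
print(S_dep @ np.array([-2.0, -1.0, 1.0]))   # [0. 0. 0.]
```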
Proposition. Suppose V is a vector space over a field F and let S be a set of vectors from V, S = {v1, v2, ···, vn}. {v1, v2, ···, vn} is linearly dependent if and only if one of the vi can be written as a linear combination of the others.
Proof.
⇒) Suppose {v1, v2, ···, vn} is linearly dependent ⇒ ∃α1, α2, ···, αn ∈ F, not all zero, i.e., αi ≠ 0 for some 1 ≤ i ≤ n, such that α1v1 + α2v2 + ··· + αnvn = 0 ⇒ αivi = -(α1v1 + α2v2 + ··· + αi-1vi-1 + αi+1vi+1 + ··· + αnvn) ⇒ [F is a field and αi ≠ 0, so it has a multiplicative inverse $α_i^{-1}$] vi = $(-α_1α_i^{-1})v_1 + (-α_2α_i^{-1})v_2 + ··· + (-α_{i-1}α_i^{-1})v_{i-1} + (-α_{i+1}α_i^{-1})v_{i+1} + ··· + (-α_nα_i^{-1})v_n$ ∎
⇐) Suppose vi = β1v1 + β2v2 + ··· + βi-1vi-1 + βi+1vi+1 + ··· + βnvn ⇒ β1v1 + β2v2 + ··· + βi-1vi-1 - vi + βi+1vi+1 + ··· + βnvn = 0, and one of the coefficients, namely the coefficient -1 of vi, is nonzero ∎
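The proposition can be illustrated with the dependent set from the earlier example: since -2v1 - v2 + v3 = 0, we can solve for v3 as a combination of the others. A sketch, assuming NumPy is available:

```python
import numpy as np

# Dependent set from the example: v1 = (1,1,0), v2 = (0,1,1), v3 = (2,3,1).
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([2.0, 3.0, 1.0])

# Solve [v1 v2] @ c = v3; the system is overdetermined but consistent here,
# so least squares recovers the exact coefficients.
A = np.column_stack([v1, v2])
c, *_ = np.linalg.lstsq(A, v3, rcond=None)
print(c)                       # [2. 1.], i.e., v3 = 2*v1 + 1*v2
print(np.allclose(A @ c, v3))  # True
```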
0, 1 and -1 are necessary elements in every field. 0 and 1 are the addition and multiplication identities respectively, and -1 is the additive inverse of 1 in F. However, those elements can be the same, e.g., -1 = 1 in ℤ2.
Definition. A collection of vectors B = {v1, v2, ···, vn} ⊆ V is called a basis for V if B is linearly independent and span{v1, v2, ···, vn} = {α1v1 + α2v2 + ··· + αnvn | αi ∈ F} = V.
Proposition. B is a basis of V if and only if every vector v ∈ V can be written uniquely as a linear combination of elements of the basis.
Proof.
⇒) Let’s take v ∈ V = span{v1, v2, ···, vn} = {α1v1 + α2v2 + ··· + αnvn | αi ∈ F}, and suppose v = α1v1 + α2v2 + ··· + αnvn = β1v1 + β2v2 + ··· + βnvn where αi, βi ∈ F, vi ∈ B. If different basis vectors appeared in the two expressions, we could complete them by giving the missing ones zero coefficients (αl = 0) ⇒ (α1-β1)v1 + (α2-β2)v2 + ··· + (αn-βn)vn = 0 ⇒ [B is a basis for V, so B is linearly independent] αi - βi = 0, 1 ≤ i ≤ n ⇒ αi = βi, 1 ≤ i ≤ n ∎
⇐) Every vector v ∈ V can be written uniquely as a linear combination of elements of the basis ⇒ V = span{v1, v2, ···, vn} = {α1v1 + α2v2 + ··· + αnvn| αi ∈ F}
Let’s prove that B is linearly independent.
α1v1 + α2v2 + ··· + αnvn = 0 = 0v1 + 0v2 + ··· + 0vn ⇒ [by assumption, every vector v ∈ V can be written uniquely as a linear combination of elements of the basis] αi = 0, 1 ≤ i ≤ n ∎
Definition. Suppose V has a basis B = {v1, v2, ···, vn}, so that V = span{v1, v2, ···, vn} = {α1v1 + α2v2 + ··· + αnvn | αi ∈ F}. Let’s define the coordinate map, γB: V → Fn, γB(v) = $(\begin{smallmatrix}α_1\\ α_2\\ ··· \\α_n\end{smallmatrix})$ if v = α1v1 + α2v2 + ··· + αnvn. This map is well defined because, by the previous proposition, the coefficients αi are unique.
Examples.
{1, x, x2} is the standard basis of F2[x] = $\wp_2$ (all polynomials of degree at most two), γB: $\wp_2$ → F3, γB(α0 + α1x + α2x2) = $(\begin{smallmatrix}α_0\\ α_1\\α_2\end{smallmatrix})$.
Is B = {$(\begin{smallmatrix}2\\ 3\end{smallmatrix}), (\begin{smallmatrix}1\\ -1\end{smallmatrix})$} a basis of ℝ2? Yes: x(2, 3) + y(1, -1) = (0, 0) ⇒ 2x + y = 0, 3x - y = 0 ⇒ 5x = 0 ⇒ x = y = 0, so B is linearly independent, and since dim ℝ2 = 2, any two linearly independent vectors form a basis.
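The coordinate map for a basis of ℝn amounts to solving a linear system: if the basis vectors are the columns of an invertible matrix B, then γB(v) is the solution of Bc = v. A sketch for the basis in the question above, assuming NumPy is available; the vector v is an arbitrary choice for illustration:

```python
import numpy as np

# Basis vectors (2,3) and (1,-1) as columns; B is a basis of R^2
# iff this matrix is invertible.
B = np.array([[2.0, 1.0],
              [3.0, -1.0]])
print(np.linalg.det(B))    # -5.0, nonzero => B is a basis of R^2

# Coordinate map gamma_B: solve B @ c = v for c.
v = np.array([5.0, 0.0])   # arbitrary sample vector
c = np.linalg.solve(B, v)
print(c)                   # [1. 3.]: v = 1*(2,3) + 3*(1,-1)
print(B @ c)               # reconstructs v: [5. 0.]
```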
Theorem. Suppose V = span{v1, v2, ···, vn} and {w1, w2, ···, wm} is linearly independent. Then, m ≤ n.
Proof.
w1 ∈ V = span{v1, v2, ···, vn} ⇒ w1 = α1v1 + α2v2 + ··· + αnvn, αi ∈ F, and not all αi are zero, because if every αi were zero then w1 = 0 ⇒ {w1, w2, ···, wm} is linearly dependent.
Rename v1, v2, ···, vn so that α1 ≠ 0 ⇒ v1 = $-α_1^{-1}$(α2v2 + α3v3 + ··· + αnvn - w1) ∈ span{w1, v2, ···, vn} ⇒ V = span{w1, v2, ···, vn}
Let’s repeat the process: write w2 = β1w1 + β2v2 + ··· + βnvn, where β2, ···, βn are not all zero because otherwise w2 = β1w1 and then {w1, w2, ···, wm} would be linearly dependent.
Rename or reorder so that β2 ≠ 0, and v2 = $-β_2^{-1}$(β1w1 + β3v3 + ··· + βnvn - w2) ∈ span{w1, w2, v3, ···, vn} ⇒ V = span{w1, w2, v3, ···, vn}
Let’s continue this process until we “use up” all the wi. If m were greater than n, we would exhaust the vj first and some wi would lie in the span of w1, ···, wi-1, contradicting linear independence; hence m ≤ n and V = span{w1, ···, wm, vm+1, ···, vn}.
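A numerical illustration of the theorem, assuming NumPy is available: any m vectors lying in the span of n generators have rank at most n, so more than n of them can never be linearly independent.

```python
import numpy as np

rng = np.random.default_rng(0)
# V = span of n = 3 generator vectors inside R^5.
generators = rng.standard_normal((5, 3))
# Take m = 4 random linear combinations of the generators.
W = generators @ rng.standard_normal((3, 4))
# More vectors than generators => rank(W) <= 3 < 4,
# so the 4 columns of W cannot be linearly independent.
print(np.linalg.matrix_rank(W))
```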
Definition. If B = {v1, v2, ···, vn} is a basis for V, V is said to have dimension n.
Proposition. The notion of dimension is well defined; in other words, every basis has the same number of vectors. E.g., {1, i} is a basis for ℂ over ℝ, and the dimension is two.
Proof. Suppose B = {v1, v2, ···, vn} and B’ = {w1, w2, ···, wm} are bases for V.
B spans V and B’ is linearly independent ⇒ m ≤ n; likewise, B’ spans V and B is linearly independent ⇒ n ≤ m. Therefore, m = n ∎
Vector space | Basis | Dimension
---|---|---
{0} | ∅ | 0
Fn | e1, e2, ···, en | n
F[x] | 1, x, x2, x3, ··· | +∞
$\wp_n$ | 1, x, x2, x3, ···, xn | n + 1
ℂ(ℝ) | unlistable | +∞
Example. Find a basis of V = {(x1, x2, x3) ∈ ℝ3 | x1 + x2 + x3 = 0}. x1 + x2 + x3 = 0 ⇒ x1 = -x2 - x3 ⇒ $v=(\begin{smallmatrix}-x_2-x_3\\ x_2\\x_3\end{smallmatrix}) = x_2(\begin{smallmatrix}-1\\ 1\\0\end{smallmatrix})+x_3(\begin{smallmatrix}-1\\ 0\\1\end{smallmatrix})$, for some x2, x3 ∈ ℝ. Therefore, V = span{$(\begin{smallmatrix}-1\\ 1\\0\end{smallmatrix}), (\begin{smallmatrix}-1\\ 0\\1\end{smallmatrix})$}
We only need to prove that these two vectors are linearly independent. Suppose α, β ∈ ℝ,
$α(\begin{smallmatrix}-1\\ 1\\ 0\end{smallmatrix})+β(\begin{smallmatrix}-1\\ 0\\ 1\end{smallmatrix}) = (\begin{smallmatrix}0\\ 0\\ 0\end{smallmatrix})$ ⇒ -α-β=0, α=0, β=0, ∎. Therefore, we have a basis and dim(V) = 2.
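The same basis can be recovered symbolically: the plane is the null space of the 1×3 matrix (1 1 1). A sketch, assuming SymPy is available; its nullspace method returns exactly the two vectors found above:

```python
import sympy as sp

# The plane x1 + x2 + x3 = 0 is the null space of the 1x3 matrix [1 1 1].
A = sp.Matrix([[1, 1, 1]])
basis = A.nullspace()
print(basis)       # [Matrix([[-1],[1],[0]]), Matrix([[-1],[0],[1]])]
print(len(basis))  # 2, the dimension found above
```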
Example. Find a basis of W = {f(x) ∈ ℝ[x] | f(3) = 0 and deg(f(x)) ≤ 2} ⊆ $\wp_2$. Let f(x) ∈ W be arbitrary. f(3) = 0 ⇒ [by the factor theorem] f(x) = (x - 3)g(x) where g(x) is a polynomial of degree at most 1.
f(x) = (ax + b)(x - 3) = ax(x - 3) + b(x - 3) ∈ span{x(x-3), x-3}
{f(x) ∈ ℝ[x] | f(3) = 0 and deg(f(x)) ≤ 2} = span{x(x-3), x-3}, so we only need to show that these two polynomials are linearly independent.
ax(x - 3) + b(x - 3) = 0 ∀x ∈ ℝ. Setting x = 0 ⇒ b(-3) = 0 ⇒ b = 0.
Then ax(x - 3) = 0 ∀x ∈ ℝ. In particular, at x = 1, a(-2) = 0 ⇒ a = 0. Therefore, {x(x-3), x-3} is a basis, and the dimension is equal to two.
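These polynomial computations can be checked symbolically, assuming SymPy is available; the sample coefficients a = 2, b = 5 are an arbitrary choice for illustration:

```python
import sympy as sp

x = sp.symbols('x')
b1, b2 = x*(x - 3), x - 3            # the candidate basis from the argument above

# Any f with f(3) = 0 and deg <= 2 factors as (x-3)(a*x + b) = a*b1 + b*b2.
f = sp.expand((x - 3)*(2*x + 5))     # sample choice: a = 2, b = 5
print(sp.expand(2*b1 + 5*b2 - f))    # 0: f is exactly the combination 2*b1 + 5*b2
print(f.subs(x, 3))                  # 0: f vanishes at x = 3, as required

# Independence: a*b1 + b*b2 = 0 as a polynomial forces a = b = 0.
a, b = sp.symbols('a b')
sol = sp.solve(sp.Poly(a*b1 + b*b2, x).coeffs(), [a, b])
print(sol)                           # {a: 0, b: 0}
```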