The more I learn about people, the more I like dogs and I don’t even have a dog, Apocalypse, Anawim, #justtothepoint.
Recall that a field is a commutative ring with unity in which every nonzero element has a multiplicative inverse, e.g., every finite integral domain, ℤ_{p} (p prime), ℚ, ℝ, and ℂ. A field has characteristic zero or characteristic p with p prime, and F[x], the ring of polynomials with coefficients in F, is an integral domain (but not a field).
Definition. Let F be a field and V a set equipped with two operations:
+: V × V → V, ∀v, w ∈ V, v + w ∈ V
·: F × V → V, ∀v ∈ V, a ∈ F, av ∈ V. This operation is called scalar multiplication.
We say V is a vector space over F if (V, +) is an Abelian group under addition, and ∀a, b ∈ F, u, v ∈V the following conditions hold:
a(v + u) = av + au, (a + b)v = av + bv
a(bv) = (ab)v
1v = v.
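As an illustrative sketch (not part of the definition), the axioms above can be checked exhaustively on a small example, say V = ℤ_5 × ℤ_5 over the field F = ℤ_5, where all arithmetic is done modulo 5:

```python
# Brute-force check that V = (Z_5)^2 satisfies the vector space axioms
# over F = Z_5. The names add/smul are ad hoc for this sketch.
p = 5
F = range(p)
V = [(x, y) for x in F for y in F]

def add(v, w):   # vector addition, componentwise mod p
    return ((v[0] + w[0]) % p, (v[1] + w[1]) % p)

def smul(a, v):  # scalar multiplication mod p
    return ((a * v[0]) % p, (a * v[1]) % p)

ok = all(
    smul(a, add(v, u)) == add(smul(a, v), smul(a, u)) and    # a(v+u) = av+au
    smul((a + b) % p, v) == add(smul(a, v), smul(b, v)) and  # (a+b)v = av+bv
    smul(a, smul(b, v)) == smul((a * b) % p, v)              # a(bv) = (ab)v
    for a in F for b in F for v in V for u in V
) and all(smul(1, v) == v for v in V)                        # 1v = v
print(ok)  # True
```

Of course, a finite check like this is only a sanity test; the general proof is the algebra of the axioms themselves.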
Examples:
In physics, a vector represents a quantity that has both magnitude and direction. Vectors can be added together by the parallelogram rule: place both vectors so they share the same initial point, then complete the parallelogram by drawing, from the tip of each vector, a line parallel to the other. The sum is the diagonal of the parallelogram drawn from the common initial point.
The trivial vector space, V = {0} over any field. The scalar multiplication is defined as a0 = 0 ∀a ∈ F.
Let F be ℚ, ℝ, ℂ, etc., and V = F. In other words, a field F is a vector space over itself, where vector addition and scalar multiplication are the addition and multiplication of the field, respectively.
Let F be any field. V = F × F = {(a_{1}, a_{2}) | a_{1}, a_{2} ∈ F} = {$(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix})$| a_{1}, a_{2} ∈ F} is a vector space where v + w = $(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix}) + (\begin{smallmatrix}b_1\\ b_2\end{smallmatrix}) = (\begin{smallmatrix}a_1+b_1\\ a_2+b_2\end{smallmatrix})$ and αv = $α(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix}) = (\begin{smallmatrix}αa_1\\ αa_2\end{smallmatrix})$. We can generalize this to V = F^{n} = {(a_{1}, a_{2}, ···, a_{n}) | a_{1}, a_{2}, ···, a_{n} ∈ F} = {$(\begin{smallmatrix}a_1\\ a_2 \\ …\\ a_n\end{smallmatrix})$| a_{1}, a_{2}, ···, a_{n} ∈ F}. In particular, ℝ^{n} is a vector space over ℝ.
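The componentwise operations on F^{n} can be sketched in a few lines of Python (here F = ℝ, n = 3; the names vadd and scale are ad hoc for this sketch):

```python
# Componentwise operations on F^n, matching the definitions above.
def vadd(v, w):
    return tuple(x + y for x, y in zip(v, w))

def scale(a, v):
    return tuple(a * x for x in v)

v, w = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
print(vadd(v, w))    # (5.0, 7.0, 9.0)
print(scale(2.0, v)) # (2.0, 4.0, 6.0)
```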
Let F be any field. The set of polynomials with coefficients from F, F[x] = {a_{0} + a_{1}x + ··· + a_{n}x^{n} | n ≥ 0, a_{i} ∈ F}, is a vector space over F. In particular, ℤ_{p}[x], the set of polynomials with coefficients from ℤ_{p}, p prime, is a vector space.
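To make the ℤ_{p}[x] case concrete, one can represent a polynomial by its coefficient list [a_0, a_1, ···, a_n] and do all arithmetic mod p; a minimal sketch (the helper names are assumptions of this example):

```python
from itertools import zip_longest

# Polynomials in Z_3[x] as coefficient lists [a_0, a_1, ..., a_n].
p = 3

def poly_add(f, g):
    # Pad the shorter list with zeros, add coefficients mod p.
    return [(a + b) % p for a, b in zip_longest(f, g, fillvalue=0)]

def poly_scale(c, f):
    return [(c * a) % p for a in f]

f = [1, 2]       # 1 + 2x
g = [2, 2, 1]    # 2 + 2x + x^2
print(poly_add(f, g))    # [0, 1, 1], i.e. x + x^2 in Z_3[x]
print(poly_scale(2, g))  # [1, 1, 2]
```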
Let F = ℝ and let F(ℝ) be the set of all real-valued functions f: ℝ → ℝ. It is a vector space over ℝ where (f+g)(x) = f(x)+g(x) and (αf)(x) = αf(x).
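These pointwise operations can be sketched with closures (fadd and fscale are ad hoc names for this illustration):

```python
import math

# Pointwise operations on F(R): (f+g)(x) = f(x)+g(x), (a*f)(x) = a*f(x).
def fadd(f, g):
    return lambda x: f(x) + g(x)

def fscale(a, f):
    return lambda x: a * f(x)

h = fadd(math.sin, math.cos)  # h(x) = sin x + cos x
k = fscale(3.0, abs)          # k(x) = 3|x|
print(h(0.0))   # 1.0
print(k(-2.0))  # 6.0
```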
The set of complex numbers ℂ is a vector space over ℝ.
Let V be the set of linear equations in n variables, say x_{1}, x_{2}, ···, x_{n}, with coefficients c_{1}, c_{2}, ···, c_{n} and constant term b_{1} in a field F, e.g., c_{1}x_{1}+···+c_{n}x_{n}=b_{1}, d_{1}x_{1}+···+d_{n}x_{n}=b_{2}, etc. V is a vector space over F. Let E_{1}, E_{2}, ···, E_{m} be a list of linear equations with a common solution x ∈ F^{n} ⇒ x is obviously a solution to any linear combination a_{1}E_{1} + a_{2}E_{2} + ··· + a_{m}E_{m}.
Proposition. Suppose V is a vector space over a field F, α ∈ F, v ∈ V. Then,
The zero vector 0 is unique. The additive inverse of u is unique.
Suppose there is another zero, say θ, that is, ∀v ∈ V: θ + v = v + θ = v. Then θ = [0 is a zero, ∀v ∈ V: 0 + v = v + 0 = v; take v = θ] θ + 0 = [θ is a zero; take v = 0] 0.
Similarly, suppose u has two inverses, u’ and -u: u’ = u’ + 0 = [-u is an inverse of u] u’ + (u + (-u)) = [Associativity] (u’ + u) + (-u) = [u’ is “another” inverse of u] 0 + (-u) = -u.
αv = 0 ↭ α = 0 or v = 0. Proof: ⇐) Trivial. ⇒) Suppose αv = 0. If α = 0, we are done. Suppose that α ≠ 0, α ∈ F ⇒ [F field, every non-zero element has a multiplicative inverse] ∃α^{-1} ∈ F: α^{-1}α = 1 ∈ F. αv = 0 ⇒ α^{-1}(αv) = α^{-1}0 = 0, and α^{-1}(αv) = (α^{-1}α)v = 1v = v ⇒ v = 0.
Definition. Suppose V is a vector space over a field F and let U be a subset of V (U ⊆ V). We say U is a subspace of V if U is also a vector space over F under the operations of V or equivalently if,
(U, +) is a subgroup of (V, +), that is, 0 ∈ U; ∀u, v ∈ U, u + v ∈ U and -u ∈ U.
∀u ∈ U, ∀α ∈ F, αu ∈ U.
Proposition. A nonempty U ⊆ V is a subspace iff ∀α ∈ F, u, v ∈ U, we have αu, u + v ∈ U.
Examples.
Let F be any field, V = F x F = {(a_{1}, a_{2}) | a_{1}, a_{2} ∈ F} = {$(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix})$| a_{1}, a_{2} ∈ F}, U = {$(\begin{smallmatrix}a\\ 0\end{smallmatrix})$| a ∈ F} is a subspace of V.
Let F be any field, V = F x F = {(x_{1}, x_{2}) | x_{1}, x_{2} ∈ F} = {$(\begin{smallmatrix}x_1\\ x_2\end{smallmatrix})$| x_{1}, x_{2} ∈ F}, U = {$(\begin{smallmatrix}x_1\\ x_2\end{smallmatrix})$| x_{1}, x_{2} ∈ F, ax_{1} + bx_{2} = 0, where a and b are fixed elements of the field, a, b ∈ F} is a subspace of V. In particular, V = ℝ^{2}, U = {(x_{1}, x_{2}) | 2x_{1} - x_{2} = 0}.
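The subspace criterion for the particular case U = {(x_{1}, x_{2}) | 2x_{1} - x_{2} = 0} can be sketched numerically: membership in U is preserved by addition and by scaling (in_U is an ad hoc name for this check).

```python
# U = {(x1, x2) in R^2 : 2*x1 - x2 = 0}. Check closure on sample vectors.
def in_U(v):
    return 2 * v[0] - v[1] == 0

u, v, alpha = (1, 2), (3, 6), 5
s = (u[0] + v[0], u[1] + v[1])    # u + v = (4, 8)
t = (alpha * u[0], alpha * u[1])  # 5u = (5, 10)
print(in_U(u), in_U(v), in_U(s), in_U(t))  # True True True True
```

A few samples do not prove closure, but the general argument is immediate: 2(x_{1}+y_{1}) - (x_{2}+y_{2}) = (2x_{1}-x_{2}) + (2y_{1}-y_{2}) = 0, and similarly for αu.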
The subset of F(ℝ) consisting of the continuous real-valued functions, C^{0}(ℝ), is a vector subspace of F(ℝ). Let C^{n}(ℝ) be the set of real-valued functions that are n-times continuously differentiable. The set of differentiable real-valued functions is a subspace of the vector space of continuous real-valued functions. Furthermore, C^{0}(ℝ) ⊇ C^{1}(ℝ) ⊇ C^{2}(ℝ) ⊇ ···; this is a chain of vector subspaces.
Let F[x] be the vector space of all polynomials with coefficients in F. Let F_{n}[x] be the set of all polynomials of degree less than or equal to n; it is a subspace of F[x]. In particular, the sets {a_{1}x + a_{0} | a_{1}, a_{0} ∈ ℝ} and {a_{2}x^{2} + a_{1}x + a_{0} | a_{2}, a_{1}, a_{0} ∈ ℝ} are subspaces of ℝ[x].
We need to ask for all polynomials of degree less than or equal to n, and not just those of degree exactly n, because the set needs to be closed under addition, e.g., (x^{2} + 2) + (-x^{2} - 7x + 4) = -7x + 6. Furthermore, we need to include the zero polynomial.
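The degree-drop in that example is easy to see with coefficient lists (a quick sketch; coefficients are listed lowest degree first):

```python
# Adding two degree-2 polynomials can drop the degree.
f = [2, 0, 1]    # x^2 + 2
g = [4, -7, -1]  # -x^2 - 7x + 4

s = [a + b for a, b in zip(f, g)]
print(s)  # [6, -7, 0] -> -7x + 6, degree 1, not 2
```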
Let W, W’ ⊆ C^{1}(ℝ) -the continuously differentiable real functions-, W = {f | f(3) = 0}, W’ = {f | f’(x) = f(x)}. W and W’ are subspaces of C^{1}(ℝ).
Definition. Suppose V is a vector space over F, α_{1}, α_{2}, ···, α_{n} ∈ F, v_{1}, v_{2}, ···, v_{n} ∈ V. Then the linear combination of v_{1}, v_{2}, ···, v_{n} with weights or coefficients α_{1}, α_{2}, ···, α_{n} is α_{1}v_{1} + α_{2}v_{2} + ··· + α_{n}v_{n}. The set of all such linear combinations is called the subspace of V spanned by v_{1}, v_{2}, ···, v_{n}, that is, ⟨v_{1}, v_{2}, ···, v_{n}⟩ = span{v_{1}, v_{2}, ···, v_{n}} = {α_{1}v_{1} + α_{2}v_{2} + ··· + α_{n}v_{n} | α_{i} ∈ F}.
Proof that ⟨v_{1}, v_{2}, ···, v_{n}⟩ is a subspace. Let v, w ∈ ⟨v_{1}, v_{2}, ···, v_{n}⟩ and α ∈ F:
v = α_{1}v_{1} + α_{2}v_{2} + ··· + α_{n}v_{n}, w = β_{1}v_{1} + β_{2}v_{2} + ··· + β_{n}v_{n}
v + w = (α_{1} + β_{1})v_{1} + (α_{2} + β_{2})v_{2} + ··· + (α_{n} + β_{n})v_{n} ∈ ⟨v_{1}, v_{2}, ···, v_{n}⟩ because α_{i} + β_{i} ∈ F, F field.
αv = (αα_{1})v_{1} + (αα_{2})v_{2} + ··· + (αα_{n})v_{n} ∈ ⟨v_{1}, v_{2}, ···, v_{n}⟩ because αα_{i} ∈ F, F field.
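The closure computation above can be sketched numerically in ℝ^{3} (lincomb is an ad hoc helper for this illustration):

```python
# Linear combinations of v1, v2 in R^3, computed componentwise.
def lincomb(coeffs, vecs):
    n = len(vecs[0])
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vecs)) for i in range(n))

v1, v2 = (1, 0, 2), (0, 1, -1)
v = lincomb((2, 3), (v1, v2))   # 2*v1 + 3*v2
w = lincomb((1, -1), (v1, v2))  # v1 - v2
# v + w = (2+1)*v1 + (3-1)*v2, exactly the (alpha_i + beta_i) rule above.
vw = tuple(a + b for a, b in zip(v, w))
print(v, w, vw)  # (2, 3, 1) (1, -1, 3) (3, 2, 4)
```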
This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This post relies heavily on the following resources, especially on NPTEL-NOC IITM, Introduction to Galois Theory; Michael Penn; and Contemporary Abstract Algebra, Joseph A. Gallian.