The more I learn about people, the more I like dogs and I don’t even have a dog, Apocalypse, Anawim, #justtothepoint.
Recall that a field is a commutative ring with unity in which every nonzero element has a multiplicative inverse, e.g., every finite integral domain, ℤ_{p} (p prime), ℚ, ℝ, and ℂ. A field has characteristic zero or characteristic p, p prime, and F[x]/⟨p(x)⟩ is a field when p(x) is irreducible over F.
Definition. Let F be a field and V a set equipped with two operations:
 +: V×V → V, ∀v, w ∈ V, v + w ∈ V
 ·: F×V → V, ∀v ∈ V, a ∈ F, av ∈ V. This operation is called scalar multiplication.
We say V is a vector space over F if (V, +) is an Abelian group under addition, and ∀a, b ∈ F, u, v ∈V the following conditions hold:
 a(v + u) = av + au, (a + b)v = av + bv
 a(bv) = (ab)v
 1v = v.
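The axioms are easy to check numerically for F = ℝ and V = ℝ³. A minimal sketch with plain Python tuples (the sample vectors and scalars are arbitrary choices):

```python
# Vector space operations on R^3, modeled as 3-tuples.
def add(v, w):
    return tuple(x + y for x, y in zip(v, w))

def scale(a, v):
    return tuple(a * x for x in v)

u, v = (2, 5, 4), (1, 3, 2)
a, b = 3, -2

# Distributivity over vector addition: a(v + u) = av + au
assert scale(a, add(v, u)) == add(scale(a, v), scale(a, u))
# Distributivity over scalar addition: (a + b)v = av + bv
assert scale(a + b, v) == add(scale(a, v), scale(b, v))
# Compatibility of scalar multiplication: a(bv) = (ab)v
assert scale(a, scale(b, v)) == scale(a * b, v)
# Identity: 1v = v
assert scale(1, v) == v
```

Of course, sampling specific vectors only illustrates the axioms; in ℝⁿ they hold because addition and multiplication of real numbers are associative, commutative, and distributive.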
Examples:
 In physics, a vector represents a quantity that has both magnitude and direction. Vectors can be added together by the parallelogram rule. Place both vectors so they have the same initial point, then draw a parallel line across from the first vector. Next, draw a parallel line across from the second vector and you will form a parallelogram. Finally, draw a straight line from the tails of the two vectors across the diagonal of the parallelogram.
Let $u = (\begin{smallmatrix}2\\ 5\\ 4\end{smallmatrix}), v = (\begin{smallmatrix}1\\ 3\\ 2\end{smallmatrix})∈ ℝ^3.~ u+v=(\begin{smallmatrix}2\\ 5\\ 4\end{smallmatrix})+(\begin{smallmatrix}1\\ 3\\ 2\end{smallmatrix})=(\begin{smallmatrix}3\\ 8\\ 6\end{smallmatrix}).~ 4u=4(\begin{smallmatrix}2\\ 5\\ 4\end{smallmatrix})=(\begin{smallmatrix}8\\ 20\\ 16\end{smallmatrix})$
 The trivial vector space, V = {0} over any field. The scalar multiplication is defined as a0 = 0 ∀a ∈ F.
 Let F be ℚ, ℝ, ℂ, etc. and V = F. In other words, a field F is a vector space over itself, where vector addition and scalar multiplication are the addition and multiplication in the field, respectively.
 Let F be any field, V = F × F = {(a_{1}, a_{2}) | a_{1}, a_{2} ∈ F} = {$(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix})$ | a_{1}, a_{2} ∈ F} is a vector space where v + w = $(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix}) + (\begin{smallmatrix}b_1\\ b_2\end{smallmatrix}) = (\begin{smallmatrix}a_1+b_1\\ a_2+b_2\end{smallmatrix})$ and αv = $α(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix}) = (\begin{smallmatrix}αa_1\\ αa_2\end{smallmatrix})$. We can generalize it with V = F^{n} = {(a_{1}, a_{2}, ···, a_{n}) | a_{1}, a_{2}, ···, a_{n} ∈ F} = {$(\begin{smallmatrix}a_1\\ a_2 \\ …\\ a_n\end{smallmatrix})$ | a_{1}, a_{2}, ···, a_{n} ∈ F}. In particular, ℝ^{n} is a vector space over ℝ.
 Let F be any field, the vector space of polynomials with coefficients from F, F[x] = {a_{0} + a_{1}x + ··· + a_{n}x^{n} | n ≥ 0, a_{i} ∈ F}. In particular, ℤ_{p}[x], the set of polynomials with coefficients from ℤ_{p}, p prime, is a vector space.
 Let F = ℝ and F(ℝ) be the set of all real-valued functions ℝ → ℝ. It is a vector space over ℝ where (f+g)(x) = f(x)+g(x) and (αf)(x) = αf(x).
 The set of complex numbers ℂ is a vector space over ℝ.
 Let V be the set of linear equations in n variables x_{1}, x_{2}, ···, x_{n} with coefficients c_{1}, c_{2}, ···, c_{n} and b_{1} from a field F, e.g., c_{1}x_{1}+···+c_{n}x_{n}=b_{1}, d_{1}x_{1}+···+d_{n}x_{n}=b_{2}, etc. V is a vector space. Let E_{1}, E_{2}, …, E_{m} be a list of linear equations with a common solution x ∈ F^{n} ⇒ x is obviously a solution to any linear combination a_{1}E_{1} + a_{2}E_{2} + ··· + a_{m}E_{m}, a_{i} ∈ F.
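The operations on ℤ_{p}[x] from the examples above can be sketched concretely, representing a polynomial a₀ + a₁x + ··· by its coefficient list and reducing mod p (the helper names `poly_add` and `poly_scale` are my own):

```python
# Z_p[x] as coefficient lists [a0, a1, ...], arithmetic taken mod p.
from itertools import zip_longest

def poly_add(f, g, p):
    # Coefficientwise addition; pad the shorter polynomial with zeros.
    return [(a + b) % p for a, b in zip_longest(f, g, fillvalue=0)]

def poly_scale(c, f, p):
    # Scalar multiplication by c in Z_p.
    return [(c * a) % p for a in f]

p = 5
f = [1, 2, 3]    # 1 + 2x + 3x^2 in Z_5[x]
g = [4, 3]       # 4 + 3x

print(poly_add(f, g, p))    # -> [0, 0, 3], i.e., 3x^2
print(poly_scale(2, f, p))  # -> [2, 4, 1], i.e., 2 + 4x + x^2
```

The same coefficient-list representation works over any field once `%` is replaced by that field's arithmetic.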
Proposition. Suppose V is a vector space over a field F, α ∈ F, v ∈ V. Then,
 The zero vector 0 is unique. The additive inverse of u is unique.
Suppose there is another zero, say θ: ∀v ∈ V, θ + v = v + θ = v. In particular, θ + 0 = θ. But 0 is also a zero, so θ + 0 = 0. Hence θ = 0.
Similarly, suppose u’ and u’’ are both inverses of u: u’ = u’ + 0 = [u’’ is an inverse of u] u’ + (u + u’’) = [Associativity] (u’ + u) + u’’ = [u’ is an inverse of u] 0 + u’’ = u’’.
 α·0 = 0. Proof: α·0 = α·(0 + 0) = α·0 + α·0 ⇒ [Cancellation law] 0 = α·0.
 0·v = 0. The proof is quite similar: 0·v = (0 + 0)·v = 0·v + 0·v ⇒ [Cancellation law] 0 = 0·v.
 (−α)v = −(αv). Proof: αv + (−α)v = (α + (−α))v = 0v = 0 ⇒ αv + (−α)v = 0 ⇒ [add −(αv) to both sides] (−α)v = −(αv).
 αv = 0 ↭ α = 0 or v = 0. Proof: ⇐) Trivial. ⇒) Suppose αv = 0. If α = 0, we are done. Suppose that α ≠ 0, α ∈ F ⇒ [F field, every nonzero element has a multiplicative inverse] ∃α^{−1} ∈ F: αα^{−1} = 1 ∈ F. αv = 0 ⇒ α^{−1}(αv) = α^{−1}·0 = 0 ⇒ (α^{−1}α)v = 1v = v = 0.
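For a finite field the last property can even be verified exhaustively. A sketch over F = ℤ₅, V = ℤ₅², checking every scalar and vector (brute force, for illustration only; the algebraic proof above covers every field):

```python
# In Z_5^2, verify: a*v = 0 (componentwise mod 5) implies a = 0 or v = (0, 0).
p = 5
for a in range(p):
    for v1 in range(p):
        for v2 in range(p):
            if ((a * v1) % p, (a * v2) % p) == (0, 0):
                assert a == 0 or (v1, v2) == (0, 0)
print("verified for Z_5^2")
```

The check works precisely because 5 is prime: in ℤ₆, for instance, 2·3 ≡ 0 with both factors nonzero, and ℤ₆ is not a field.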
Definition. Suppose V is a vector space over a field F and let U be a subset of V (U ⊆ V). We say U is a subspace of V if U is also a vector space over F under the operations of V or equivalently if,
 (U, +) is a subgroup of (V, +), that is, 0 ∈ U; ∀u, v ∈ U, u + v ∈ U and −u ∈ U.
 ∀u ∈ U, ∀α ∈ F, αu ∈ U.
Proposition. A nonempty subset U ⊆ V is a subspace iff ∀α ∈ F, u, v ∈ U, we have αu, u + v ∈ U.
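The criterion reduces a subspace check to two closure computations. A sketch for U = {(x₁, x₂) ∈ ℝ² | 2x₁ − x₂ = 0} (the vectors sampled below are arbitrary choices satisfying the defining equation):

```python
# U = {(x1, x2) in R^2 : 2*x1 - x2 = 0}; check the closure conditions.
def in_U(v):
    return 2 * v[0] - v[1] == 0

u, v = (1, 2), (3, 6)                       # both satisfy 2*x1 - x2 = 0
assert in_U(u) and in_U(v)
assert in_U((u[0] + v[0], u[1] + v[1]))     # closed under addition
assert in_U((5 * u[0], 5 * u[1]))           # closed under scalar multiplication
assert in_U((0, 0))                         # contains the zero vector
```

These checks only sample specific vectors; the general proof is the algebraic identity 2(x₁ + y₁) − (x₂ + y₂) = (2x₁ − x₂) + (2y₁ − y₂) = 0, and similarly 2(αx₁) − (αx₂) = α(2x₁ − x₂) = 0.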
Examples.
 Let F be any field, V = F × F = {(a_{1}, a_{2}) | a_{1}, a_{2} ∈ F} = {$(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix})$ | a_{1}, a_{2} ∈ F}, U = {$(\begin{smallmatrix}a\\ 0\end{smallmatrix})$ | a ∈ F} is a subspace of V.
 Let F be any field, V = F × F = {(x_{1}, x_{2}) | x_{1}, x_{2} ∈ F} = {$(\begin{smallmatrix}x_1\\ x_2\end{smallmatrix})$ | x_{1}, x_{2} ∈ F}, U = {$(\begin{smallmatrix}x_1\\ x_2\end{smallmatrix})$ | x_{1}, x_{2} ∈ F, ax_{1} + bx_{2} = 0, where a and b are fixed elements of the field, a, b ∈ F} is a subspace of V. In particular, V = ℝ^{2}, U = {(x_{1}, x_{2}) | 2x_{1} − x_{2} = 0}.
 The subset of F(ℝ) consisting of the real-valued continuous functions, C^{0}(ℝ), is a vector subspace of F(ℝ). Let C^{n}(ℝ) be the set of real-valued functions that are n-times continuously differentiable. Each C^{n}(ℝ) is a subspace of C^{0}(ℝ). Furthermore, C^{0}(ℝ) ⊇ C^{1}(ℝ) ⊇ C^{2}(ℝ) ⊇ ··· is a chain of vector subspaces.
 Let F[x] be the vector space of all polynomials with coefficients in F. F_{n}[x], the set of all polynomials of degree less than or equal to n, is a subspace of F[x]. In particular, the sets {a_{1}x + a_{0} | a_{1}, a_{0} ∈ ℝ} and {a_{2}x^{2} + a_{1}x + a_{0} | a_{2}, a_{1}, a_{0} ∈ ℝ} are subspaces of ℝ[x].
We need to take all polynomials of degree less than or equal to n, and not just those of degree exactly n, because the set needs to be closed under addition, e.g., (x^{2} + 2) + (−x^{2} − 7x + 4) = −7x + 6. Furthermore, we need to include the zero polynomial.
 Let W, W’ ⊆ C^{1}(ℝ), the space of continuously differentiable real functions, with W = {f | f(3) = 0} and W’ = {f | f’(x) = f(x)}. W and W’ are subspaces of C^{1}(ℝ).
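The closure conditions for W = {f | f(3) = 0} can be sampled with pointwise operations; a minimal sketch (the sample functions are arbitrary choices vanishing at 3):

```python
# W = {f : R -> R | f(3) = 0}, with pointwise (f+g)(x) and (a*f)(x).
f = lambda x: x - 3           # f(3) = 0
g = lambda x: x**2 - 9        # g(3) = 0

h = lambda x: f(x) + g(x)     # pointwise sum
k = lambda x: 7 * f(x)        # scalar multiple

assert f(3) == 0 and g(3) == 0
assert h(3) == 0              # W is closed under addition
assert k(3) == 0              # W is closed under scalar multiplication
```

The general argument is the same computation with symbols: (f+g)(3) = f(3) + g(3) = 0 + 0 = 0 and (αf)(3) = α·f(3) = α·0 = 0, and the zero function clearly lies in W.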
Definition. Suppose V is a vector space over F, α_{1}, α_{2}, ···, α_{n} ∈ F, v_{1}, v_{2}, ···, v_{n} ∈ V, then the linear combination of v_{1}, v_{2}, ···, v_{n} with weights or coefficients α_{1}, α_{2}, ···, α_{n} is α_{1}v_{1} + α_{2}v_{2} + ··· + α_{n}v_{n}. The set of all such linear combinations is called the subspace of V spanned by v_{1}, v_{2}, ···, v_{n}, that is, ⟨v_{1}, v_{2}, ···, v_{n}⟩ = span{v_{1}, v_{2}, ···, v_{n}} = {α_{1}v_{1} + α_{2}v_{2} + ··· + α_{n}v_{n} | α_{i} ∈ F}.
Proof (that ⟨v_{1}, v_{2}, ···, v_{n}⟩ is indeed a subspace of V):
v = α_{1}v_{1} + α_{2}v_{2} + ··· + α_{n}v_{n}, w = β_{1}v_{1} + β_{2}v_{2} + ··· + β_{n}v_{n}
v + w = (α_{1} + β_{1})v_{1} + (α_{2} + β_{2})v_{2} + ··· + (α_{n} + β_{n})v_{n} ∈ ⟨v_{1}, v_{2}, ···, v_{n}⟩ because α_{i} + β_{i} ∈ F, F field.
αv = (αα_{1})v_{1} + (αα_{2})v_{2} + ··· + (αα_{n})v_{n} ∈ ⟨v_{1}, v_{2}, ···, v_{n}⟩ because αα_{i} ∈ F, F field.
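The computation above can be sketched in coordinates: two linear combinations of v₁, v₂ in ℝ³ add to the linear combination whose coefficients are the sums α_i + β_i (the helper `lin_comb` and the sample vectors are my own choices):

```python
def lin_comb(coeffs, vecs):
    # Returns sum_i coeffs[i] * vecs[i], computed componentwise.
    return tuple(
        sum(c * v[j] for c, v in zip(coeffs, vecs))
        for j in range(len(vecs[0]))
    )

v1, v2 = (1, 0, 2), (0, 1, 1)
alpha, beta = (2, 3), (4, -1)

v = lin_comb(alpha, (v1, v2))              # (2, 3, 7)
w = lin_comb(beta, (v1, v2))               # (4, -1, 7)
vw = tuple(a + b for a, b in zip(v, w))    # (6, 2, 14)

# v + w equals the combination with coefficients (alpha_i + beta_i):
assert vw == lin_comb((alpha[0] + beta[0], alpha[1] + beta[1]), (v1, v2))
```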
Exercises.

In ℂ^{2}, 3 $(\begin{smallmatrix}2\\ i\end{smallmatrix})+2i(\begin{smallmatrix}2+i\\ 1+i\end{smallmatrix})=(\begin{smallmatrix}6+2i(2+i)\\ 3i+2i(1+i)\end{smallmatrix})=(\begin{smallmatrix}4(1 + i)\\ 2 + 5 i\end{smallmatrix})$

In $ℤ_3^3,~ (\begin{smallmatrix}0\\ 2\\ 1\end{smallmatrix})+2(\begin{smallmatrix}2\\ 1\\ 1\end{smallmatrix})+2(\begin{smallmatrix}1\\ 0\\ 1\end{smallmatrix})=(\begin{smallmatrix}0+1+2\\ 2+2\\ 1+2+2\end{smallmatrix})=(\begin{smallmatrix}0\\ 1\\ 2\end{smallmatrix})$
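Both exercises can be checked mechanically; a sketch using Python's built-in complex numbers and mod-3 arithmetic:

```python
# Exercise in C^2: 3*(2, i) + 2i*(2+i, 1+i)
u = (2, 1j)
v = (2 + 1j, 1 + 1j)
result = tuple(3 * a + 2j * b for a, b in zip(u, v))
assert result == (4 + 4j, -2 + 5j)   # i.e., (4(1+i), -2+5i)

# Exercise in Z_3^3: (0,2,1) + 2*(2,1,1) + 2*(1,0,1), componentwise mod 3
a, b, c = (0, 2, 1), (2, 1, 1), (1, 0, 1)
result3 = tuple((x + 2 * y + 2 * z) % 3 for x, y, z in zip(a, b, c))
assert result3 == (0, 1, 2)
```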
Bibliography
This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This post relies heavily on the following resources, especially on
NPTEL-NOC IITM, Introduction to Galois Theory; Michael Penn; and Contemporary Abstract Algebra, Joseph A. Gallian.
 NPTEL-NOC IITM, Introduction to Galois Theory.
 Algebra, Second Edition, by Michael Artin.
 LibreTexts, Calculus. Abstract and Geometric Algebra, Abstract Algebra: Theory and Applications (Judson).
 Field and Galois Theory, by Patrick Morandi. Springer.
 Michael Penn, Andrew Misseldine, blackpenredpen, and MathMajor, YouTube’s channels.
 Contemporary Abstract Algebra, Joseph A. Gallian.
 MIT OpenCourseWare, 18.01 Single Variable Calculus, Fall 2007 and 18.02 Multivariable Calculus, Fall 2007, YouTube.
 Calculus Early Transcendentals: Differential & Multi-Variable Calculus for Social Sciences.