To err is human; to blame it on someone else is even more human. (Jacob’s Law)

Definition. A vector $\vec{AB}$ is a geometric object that has magnitude (or length) and direction. Vectors in an n-dimensional Euclidean space can be represented as coordinate vectors in a Cartesian coordinate system.

Definition. The magnitude or length of a vector $\vec{A} = ⟨a_1, a_2, a_3⟩$, denoted $|\vec{A}|$ or $||\vec{A}||$, is given by $||\vec{A}|| = \sqrt{a_1^2+a_2^2+a_3^2}$, e.g., $||⟨3, 2, 1⟩|| = \sqrt{3^2+2^2+1^2}=\sqrt{14}$, $||⟨3, -4, 5⟩|| = \sqrt{3^2+(-4)^2+5^2}=\sqrt{50}=5\sqrt{2}$, or $||⟨1, 0, 0⟩|| = \sqrt{1^2+0^2+0^2}=\sqrt{1}=1$.
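The examples above can be checked numerically. A minimal sketch in plain Python (the function name `magnitude` is just an illustrative choice):

```python
import math

def magnitude(v):
    """Euclidean length of a vector given as a sequence of components."""
    return math.sqrt(sum(c * c for c in v))

print(magnitude((3, 2, 1)))   # sqrt(14) ≈ 3.7417
print(magnitude((3, -4, 5)))  # 5*sqrt(2) ≈ 7.0711
print(magnitude((1, 0, 0)))   # 1.0
```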

The dot or scalar product is a fundamental operation between two vectors. It produces a scalar quantity that is closely related to the projection of one vector onto another. The dot product is defined as follows: $\vec{A}·\vec{B} = \sum_i a_ib_i = a_1b_1 + a_2b_2 + a_3b_3$, e.g., $⟨2, 2, -1⟩·⟨5, -3, 2⟩ = 2·5+2·(-3)+(-1)·2 = 10-6-2 = 2.$
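The component-wise sum above translates directly into code; a sketch in plain Python (`dot` is an illustrative name, not from the text):

```python
def dot(a, b):
    """Dot product of two same-length vectors: sum of component-wise products."""
    assert len(a) == len(b), "vectors must have the same dimension"
    return sum(x * y for x, y in zip(a, b))

print(dot((2, 2, -1), (5, -3, 2)))  # 2, matching the worked example
```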

Definition. The cross product, denoted by $\vec{A}\times\vec{B}$, is a binary operation on two vectors in three-dimensional space. It is a vector that is perpendicular to both of the input vectors (normal to the plane containing them) and has a magnitude equal to the area of the parallelogram formed by the two input vectors.

The direction of the resulting vector is determined by the right-hand rule: point the fingers of your right hand along $\vec{A}$ and curl them toward $\vec{B}$; your thumb then points in the direction of $\vec{A} \times \vec{B}$.
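Both the right-hand rule and the perpendicularity property can be verified with a short sketch in plain Python (function names are illustrative):

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

i, j = (1, 0, 0), (0, 1, 0)
print(cross(i, j))  # (0, 0, 1), i.e. the k unit vector, as the right-hand rule predicts

# The result is perpendicular to both inputs: the dot products vanish.
a, b = (2, 2, -1), (5, -3, 2)
c = cross(a, b)
print(sum(x * y for x, y in zip(a, c)))  # 0
```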

In many applications, it’s necessary to work with different coordinate systems. For instance, in physics, engineering, and computer graphics, we often deal with transformations between Cartesian, polar, cylindrical, and spherical coordinates, among others. These transformations are described by sets of equations, typically linear, e.g., $\begin{cases} u_1 = 2x_1 + 3x_2 +3x_3 \\ u_2 = 2x_1 + 4x_2 +5x_3 \\ u_3 = x_1 +x_2 + 2x_3 \end{cases}$

Matrices provide an efficient way to solve systems of linear equations,

$(\begin{smallmatrix}2 & 3 & 3\\ 2 & 4 & 5\\ 1 & 1 & 2\end{smallmatrix}) (\begin{smallmatrix}x_1\\ x_2\\x_3\end{smallmatrix}) = (\begin{smallmatrix}u_1\\ u_2\\u_3\end{smallmatrix}) ↭ A · X = U$ (a more convenient and concise notation), where we take the dot products between the rows of A (a 3 × 3 matrix) and the column vector X (a 3 × 1 matrix).
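This row-by-row dot product can be sketched in a few lines of plain Python; `matvec` is an illustrative name, and the coefficients are those of the system above:

```python
def matvec(A, x):
    """Multiply a matrix A (list of rows) by a column vector x: one dot product per row."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, 3, 3], [2, 4, 5], [1, 1, 2]]  # coefficient matrix of the system
x = [1, 1, 1]
print(matvec(A, x))  # [8, 11, 4] = (u1, u2, u3) for x1 = x2 = x3 = 1
```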

Given two matrices A and B, where A has dimensions m × n and B has dimensions n × p, the resulting matrix C = A⋅B will have dimensions m × p. The entry in the ith row and jth column of C is computed by taking the dot product of the ith row of A with the jth column of B, i.e., $c_{ij} = \sum_{k=1}^n a_{ik}b_{kj}$

$A = (\begin{smallmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9\end{smallmatrix}), B = (\begin{smallmatrix}1 & 2 \\ 3 & 4 \\ 5 & 6\end{smallmatrix})$

To compute the product AB:

$(\begin{smallmatrix}1\cdot1 + 2\cdot3 + 3\cdot5 & 1\cdot2 + 2\cdot4 + 3\cdot6 \\ 4\cdot1 + 5\cdot3 + 6\cdot5 & 4\cdot2 + 5\cdot4 + 6\cdot6 \\ 7\cdot1 + 8\cdot3 + 9\cdot5 & 7\cdot2 + 8\cdot4 + 9\cdot6\end{smallmatrix})$. Performing the multiplications and additions: AB = $(\begin{smallmatrix}1 + 6 + 15 & 2 + 8 + 18 \\ 4 + 15 + 30 & 8 + 20 + 36 \\ 7 + 24 + 45 & 14 + 32 + 54\end{smallmatrix}) = (\begin{smallmatrix}22 & 28 \\ 49 & 64 \\ 76 & 100 \end{smallmatrix})$
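The worked product can be reproduced with a short sketch in plain Python (no external libraries assumed; `matmul` is an illustrative name):

```python
def matmul(A, B):
    """C[i][j] = sum over k of A[i][k] * B[k][j]; requires len(A[0]) == len(B)."""
    assert len(A[0]) == len(B), "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1, 2], [3, 4], [5, 6]]
print(matmul(A, B))  # [[22, 28], [49, 64], [76, 100]]
```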

⚠️ We need to verify that the number of columns in the first matrix (width) is equal to the number of rows in the second matrix (height).

(AB)X = A(BX): this is the transformation of a vector $\vec{X}$ by first applying matrix (transformation) B, and then matrix (transformation) A.

Matrices have several important properties that are fundamental in linear algebra:

- Addition, and scalar and matrix multiplication, are defined.
- Matrix multiplication is associative, i.e., (AB)C = A(BC).
- There exists an identity matrix I, with ones on the main diagonal and zeros elsewhere, such that AI = IA = A, e.g., $I_{3\times 3} = (\begin{smallmatrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{smallmatrix})$.
- The distributive property holds, i.e., A(B + C) = AB + AC.

In two-dimensional Euclidean space, rotation matrices are used to perform transformations that rotate points around the origin.

To rotate a point (x, y) counterclockwise around the origin by an angle θ, you would multiply the point by the rotation matrix:

$(\begin{smallmatrix}\cos(θ) & -\sin(θ)\\ \sin(θ) & \cos(θ)\end{smallmatrix})(\begin{smallmatrix}x\\ y\end{smallmatrix}) = (\begin{smallmatrix}x\cos(θ)-y\sin(θ)\\ x\sin(θ)+y\cos(θ)\end{smallmatrix})$

Let θ = 90°. To rotate a point (x, y) counterclockwise around the origin by 90°:

$(\begin{smallmatrix}\cos(90°) & -\sin(90°)\\ \sin(90°) & \cos(90°)\end{smallmatrix})(\begin{smallmatrix}x\\ y\end{smallmatrix}) = (\begin{smallmatrix}0 & -1\\ 1 & 0\end{smallmatrix})(\begin{smallmatrix}x\\ y\end{smallmatrix}) = (\begin{smallmatrix}-y\\ x\end{smallmatrix})$, e.g., $(\begin{smallmatrix}0 & -1\\ 1 & 0\end{smallmatrix})(\begin{smallmatrix}1\\ 0\end{smallmatrix}) = (\begin{smallmatrix}0\\ 1\end{smallmatrix}) ↭ R\vec{i} = \vec{j}$. It is easy to check that $R\vec{j} = -\vec{i}$ and $R^2=(\begin{smallmatrix}-1 & 0\\ 0 & -1\end{smallmatrix})=-I_{2\times 2}$.
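As a quick numerical check, a sketch in plain Python (angles in radians, standard library only; `rotate` is an illustrative name):

```python
import math

def rotate(point, theta):
    """Rotate (x, y) counterclockwise about the origin by theta radians."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (x * c - y * s, x * s + y * c)

# Rotating i-hat by 90° gives j-hat, and rotating j-hat gives -i-hat.
print([round(v, 10) for v in rotate((1, 0), math.pi / 2)])  # [0.0, 1.0]
print([round(v, 10) for v in rotate((0, 1), math.pi / 2)])  # [-1.0, 0.0]
```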

Given a square matrix A, an inverse matrix $A^{-1}$ exists if and only if A is non-singular, i.e., its determinant is non-zero ($\det(A) ≠ 0$).

The inverse matrix of A, denoted $A^{-1}$, is a matrix such that, when multiplied by A, it yields the identity matrix, i.e., $A \times A^{-1} = A^{-1} \times A = I$

If you multiply a matrix by its inverse, you get the identity matrix. In other words, the inverse “undoes” the effect of the original matrix.

$A = (\begin{smallmatrix}2 & 3 & 3\\ 2 & 4 & 5\\ 1 & 1 & 2\end{smallmatrix}).$

To calculate the inverse:

- Calculate the cofactor matrix C of A. The element in the ith row and jth column of C is given by $C_{ij} = (-1)^{i+j} \times \text{minor}(A_{ij})$ where $\text{minor}(A_{ij})$ is the determinant of the submatrix obtained by deleting the ith row and jth column of A.
$\text{minors}=(\begin{smallmatrix}3 & -1 & -2\\ 3 & 1 & -1\\ 3 & 4 & 2\end{smallmatrix})$, e.g., $\text{minor}(A_{11}) = \det(\begin{smallmatrix}4 & 5\\1 & 2\end{smallmatrix}) = 8 - 5 = 3$. $C = (\begin{smallmatrix}3 & 1 & -2\\ -3 & 1 & 1\\ 3 & -4 & 2\end{smallmatrix})$. Note that the signs of the cofactors, $(-1)^{i+j}$, follow a “checkerboard” pattern:

$(\begin{smallmatrix}+ & - & +\\ - & + & -\\ + & - & +\end{smallmatrix})$

- Transpose the cofactor matrix C to obtain the adjugate (also known as adjoint) matrix $\text{adj}(A).$ Informally, to transpose a matrix is to build a new matrix by swapping its rows and columns (flip it about its main diagonal).

$\text{adj}(A) = (\begin{smallmatrix}3 & -3 & 3\\ 1 & 1 & -4\\ -2 & 1 & 2\end{smallmatrix})$

- Finally, calculate the inverse matrix $A^{-1}$ using the formula: $A^{-1} = \frac{1}{\text{det}(A)} \times \text{adj}(A)$, where $\text{det}(A)$ is the determinant of A.

Recall that in linear algebra, a determinant is a scalar value that can be computed from the elements of a square matrix and characterizes some properties of the matrix. The determinant of a product of matrices is the product of their determinants.

$det(\begin{smallmatrix}a & b & c\\ d & e & f\\ g & h & i\end{smallmatrix}) = a(ei - fh) - b(di - fg) + c(dh - eg).$

$\text{det}(A) = 2(4 \times 2 - 5 \times 1) - 3(2 \times 2 - 5 \times 1) + 3(2 \times 1 - 4 \times 1) = 2(8 - 5) - 3(4 - 5) + 3(2 - 4) = 2(3) - 3(-1) + 3(-2) = 6 + 3 - 6 = 3.$

$A^{-1} = \frac{1}{\text{det}(A)} \times \text{adj}(A) = \frac{1}{3}(\begin{smallmatrix}3 & -3 & 3\\ 1 & 1 & -4\\ -2 & 1 & 2\end{smallmatrix})$
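The whole procedure (minors, cofactors, adjugate, determinant) can be followed step by step in a short plain-Python sketch. `det3` and `inverse3` are hypothetical helper names, hard-coded to the 3×3 case; `Fraction` keeps the entries exact:

```python
from fractions import Fraction

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def inverse3(M):
    """Inverse via the adjugate: A^{-1} = adj(A) / det(A), requires det(A) != 0."""
    d = det3(M)
    if d == 0:
        raise ValueError("matrix is singular")
    def minor(i, j):  # delete row i and column j, take the 2x2 determinant
        rows = [r for k, r in enumerate(M) if k != i]
        sub = [[v for l, v in enumerate(r) if l != j] for r in rows]
        return sub[0][0]*sub[1][1] - sub[0][1]*sub[1][0]
    # adj(A)[i][j] is the cofactor C[j][i] (transpose of the cofactor matrix)
    return [[Fraction((-1)**(i + j) * minor(j, i), d) for j in range(3)]
            for i in range(3)]

A = [[2, 3, 3], [2, 4, 5], [1, 1, 2]]
print(det3(A))  # 3
Ainv = inverse3(A)
print(Ainv[0])  # [Fraction(1, 1), Fraction(-1, 1), Fraction(1, 1)]
```

Multiplying A by the result row-by-row recovers the identity matrix, which is the defining property of the inverse.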

This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].

- NPTEL-NOC IITM, Introduction to Galois Theory.
- Michael Artin, Algebra, Second Edition.
- LibreTexts: Calculus and Calculus 3e (APEX); Abstract and Geometric Algebra; Abstract Algebra: Theory and Applications (Judson).
- Patrick Morandi, Field and Galois Theory, Springer.
- Michael Penn and MathMajor (YouTube).
- Joseph A. Gallian, Contemporary Abstract Algebra.
- Andrew Misseldine (YouTube): Calculus, College Algebra, and Abstract Algebra.
- MIT OpenCourseWare, 18.01 Single Variable Calculus (Fall 2007) and 18.02 Multivariable Calculus (Fall 2007).
- Calculus Early Transcendentals: Differential & Multi-Variable Calculus for Social Sciences.