"Quantity has its own quality," Joseph Stalin.

"Now I am become Death, the destroyer of worlds," Robert Oppenheimer.

An algebraic equation is a mathematical statement that declares or asserts the equality of two algebraic expressions. These expressions are constructed using:

- **Dependent and independent variables**. Variables represent unknown quantities. The independent variable is chosen freely, while the dependent variable changes in response to the independent variable.
- **Constants**. Fixed numerical values that do not change.
- **Algebraic operations**. Operations such as addition, subtraction, multiplication, division, exponentiation, and root extraction.

Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x + 5y$, $y' + y = 4x\cos(2x)$, $\frac{dy}{dx} = x^2y + y$, etc.

It involves (e.g., $\frac{dy}{dx} = 3x +5y$):

- **Dependent variables**: variables that depend on one or more other variables (y).
- **Independent variables**: variables upon which the dependent variables depend (x).
- **Derivatives**: rates at which the dependent variables change with respect to the independent variables, $\frac{dy}{dx}$.

The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ordinary differential equations (ODEs). It states that if:

- The function f(x, y) (the right-hand side of the ODE y' = f(x, y)) is continuous in a neighborhood around a point (x_{0}, y_{0}), and
- its partial derivative with respect to y, $\frac{∂f}{∂y}$, is also continuous near (x_{0}, y_{0}).

Then the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point (x_{0}, y_{0}).
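As an illustrative check (a sketch using sympy; choosing the document's earlier example y' = 3x + 5y and the initial point (0, 0) is my own decision), both hypotheses hold everywhere, so the initial value problem has exactly one solution:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# f(x, y) = 3x + 5y and ∂f/∂y = 5 are continuous everywhere,
# so a unique solution passes through (0, 0).
sol = sp.dsolve(sp.Eq(y(x).diff(x), 3*x + 5*y(x)), y(x), ics={y(0): 0})
print(sol.rhs)
```

The returned expression is the one and only solution curve through (0, 0).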

A first-order linear differential equation (ODE) has the general form: a(x)y' + b(x)y = c(x) where y' is the derivative of y with respect to x, and a(x), b(x), and c(x) are functions of x. If c(x) = 0, the equation is called homogeneous, i.e., a(x)y' + b(x)y = 0.

The equation can also be written in the standard linear form as: y' + p(x)y = q(x), where $p(x)=\frac{b(x)}{a(x)}\text{ and }q(x) = \frac{c(x)}{a(x)}$
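For instance (a sketch; using sympy here is my own choice), the earlier example $y' + y = 4x\cos(2x)$ is already in standard form with p(x) = 1 and q(x) = 4x·cos(2x), and can be solved directly:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y' + y = 4x*cos(2x): standard linear form with p(x) = 1, q(x) = 4x*cos(2x)
sol = sp.dsolve(sp.Eq(y(x).diff(x) + y(x), 4*x*sp.cos(2*x)), y(x))
print(sol)
```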

A second-order linear homogeneous differential equation (ODE) with constant coefficients is a differential equation of the form: y'' + Ay' + By = 0 where:

- y is the dependent variable (a function of the independent variable t),
- y′ and y′′ are the first and second derivatives of y with respect to t,
- t is the independent variable,
- A and B are constants.

This equation is homogeneous, meaning that there are no external forcing terms (like a function of t) on the right-hand side.
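A quick sketch shows sympy solving such an equation (the coefficient values A = 2, B = 5 are my own illustrative choice, not from the text):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y'' + 2y' + 5y = 0: constant coefficients, zero right-hand side (homogeneous)
sol = sp.dsolve(y(t).diff(t, 2) + 2*y(t).diff(t) + 5*y(t), y(t))
print(sol)
```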

Consider the system of differential equations: $\begin{cases} x' = -x -y \\ y' = 2x -3y \end{cases}$

Our goal is to solve this system, find the general solution, and sketch the trajectories of the solutions to understand the system’s behavior over time.

Step 1: **Representing the System in Matrix Form**: $\vec{x}' = A\vec{x}$, where $\vec{x} = (\begin{smallmatrix}x\\ y\end{smallmatrix})$ and A = $(\begin{smallmatrix}-1 & -1\\ 2 & -3\end{smallmatrix})$

Step 2: **Finding the Eigenvalues of Matrix A**

To solve this system, we first find the eigenvalues and eigenvectors of the matrix A. The characteristic equation is derived from the determinant of A −λI, where I is the identity matrix and λ represents the eigenvalues.

To find the eigenvalues λ of matrix A, we solve the characteristic equation: |A - λI| = $\vert\begin{smallmatrix}-1-λ & -1\\ 2 & -3-λ\end{smallmatrix}\vert = (−1−λ)(−3−λ) + 2 = (λ+1)(λ+3) +2 = λ^2 + 4λ + 3 + 2 = λ^2 + 4λ + 5 = 0 $

Solving the Characteristic Equation: $λ^2 + 4λ + 5 = 0 ⇒[\text{Using the quadratic formula}] λ = \frac{-4±\sqrt{16-20}}{2} = \frac{-4±\sqrt{-4}}{2} = \frac{-4±2i}{2} = -2±i$
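A numerical cross-check of this computation (a sketch using numpy) should return the pair −2 ± i:

```python
import numpy as np

# Eigenvalues of the coefficient matrix of x' = -x - y, y' = 2x - 3y
A = np.array([[-1.0, -1.0],
              [2.0, -3.0]])
eigvals = np.linalg.eigvals(A)
print(np.sort_complex(eigvals))  # [-2.-1.j, -2.+1.j]
```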

Step 3: **Finding the Eigenvectors**

We need to solve: (A −λI)v = 0

- λ_{1} = -2 + i. The matrix A - λ_{1}I is: $(\begin{smallmatrix}−1−(−2+i) & -1\\ 2 & −3−(−2+i)\end{smallmatrix}) = (\begin{smallmatrix}1-i & -1\\ 2 & -1-i\end{smallmatrix})$

For λ_{1} = -2 + i, we solve $(\begin{smallmatrix}1-i & -1\\ 2 & -1-i\end{smallmatrix})(\begin{smallmatrix}a_1\\a_2\end{smallmatrix}) = 0$. The first row gives: (1-i)a_{1} -a_{2} = 0 ⇒ a_{2} = (1-i)a_{1}.

Substituting a_{2} = (1-i)a_{1} into the second equation: $2a_1 + (-1-i)a_2 = 0 ↭ 2a_1 + (-1-i)(1-i)a_1 = 0 ↭ 2a_1 + (-1 + i - i + i^2)a_1 = 0 ↭ 2a_1 - 2a_1 = 0$, which holds for every a_{1}, so the second equation adds no new information. Choosing a_{1} = 1 gives a_{2} = 1-i.

Thus, the eigenvector corresponding to λ_{1}=−2+i is: $\vec{α_1} = (\begin{smallmatrix}a_1\\a_2\end{smallmatrix}) = (\begin{smallmatrix}1\\1-i\end{smallmatrix})$
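The eigenpair can be verified directly (a sketch): Av should equal λv.

```python
import numpy as np

A = np.array([[-1, -1],
              [2, -3]], dtype=complex)
lam = -2 + 1j
v = np.array([1, 1 - 1j])
print(A @ v)    # equals lam * v
print(lam * v)
```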

Step 4: **Constructing the General Solution**

The corresponding complex solution to the system can be written as $x = e^{(-2+i)t}(\begin{smallmatrix}1\\1-i\end{smallmatrix}) = e^{-2t}e^{it}((\begin{smallmatrix}1\\1\end{smallmatrix})+ i(\begin{smallmatrix}0\\ -1 \end{smallmatrix})) = e^{-2t}((\begin{smallmatrix}1\\1\end{smallmatrix})+ i(\begin{smallmatrix}0\\ -1 \end{smallmatrix}))(\cos(t) + i\sin(t))$

so that we get respectively for the real and imaginary parts of x

$x_1 = e^{-2t}((\begin{smallmatrix}1\\1\end{smallmatrix})\cos(t) - (\begin{smallmatrix}0\\ -1\end{smallmatrix})\sin(t))$

$x_2 = e^{-2t}((\begin{smallmatrix}1\\1\end{smallmatrix})\sin(t) + (\begin{smallmatrix}0\\ -1\end{smallmatrix})\cos(t))$

The general solution is $\vec{x}(t) = c_1e^{-2t}((\begin{smallmatrix}1\\1\end{smallmatrix})\cos(t) - (\begin{smallmatrix}0\\ -1 \end{smallmatrix})\sin(t)) + c_2e^{-2t}((\begin{smallmatrix}1\\1\end{smallmatrix})\sin(t) + (\begin{smallmatrix}0\\ -1 \end{smallmatrix})\cos(t))$

The real question is what these trajectories look like. The trigonometric parts $(\begin{smallmatrix}1\\1\end{smallmatrix})\cos(t) - (\begin{smallmatrix}0\\ -1\end{smallmatrix})\sin(t)$ and $(\begin{smallmatrix}1\\1\end{smallmatrix})\sin(t) + (\begin{smallmatrix}0\\ -1 \end{smallmatrix})\cos(t)$ are **bounded** and **periodic**, while the exponential factor e^{-2t} causes the amplitude of the trajectories to decrease over time, resulting in a spiral inward toward the origin. The system describes a spiral sink: as time progresses, the spiraling trajectories move inward toward the origin.

How do we know that the trajectories go around counterclockwise and not clockwise? Evaluating the system's velocity field (our original system equations) at (1, 0) gives (-1, 2), therefore the motion is counterclockwise (refer to Figure v for a visual representation and aid in understanding it).

Given x=1, y=0: x’ = −x−y = −1−0 = −1 (motion to the left). y′ = 2x−3y = 2(1)−0 = 2 (motion upwards). This indicates a counterclockwise rotation around the origin.
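A forward-Euler integration of the system (a rough sketch; the step size and time horizon are my own choices) confirms both behaviors: the first step from (1, 0) moves left and up, and the trajectory decays toward the origin.

```python
import numpy as np

A = np.array([[-1.0, -1.0],
              [2.0, -3.0]])
p = np.array([1.0, 0.0])
dt = 0.001
first = p + dt * (A @ p)          # one Euler step: left (x') and up (y')
for _ in range(10_000):           # integrate up to t = 10
    p = p + dt * (A @ p)
print(first, np.linalg.norm(p))   # final norm is tiny: a spiral sink
```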

- Sketch the following homogeneous linear system

A = $(\begin{smallmatrix}-2 & 3\\ -3 & -2\end{smallmatrix}), λ = -2 ± 3i$

The real part of the eigenvalues is Re(λ) = -2 < 0. Since the real part of the eigenvalues is negative, the trajectories will spiral inward towards the origin.

The presence of the imaginary part ±3i indicates that the trajectories will be spirals (oscillatory bounded behaviour).

To determine the direction of the spiral, you can compute the action of the matrix on a standard basis vector, such as $(\begin{smallmatrix}1\\ 0\end{smallmatrix})$

$(\begin{smallmatrix}-2 & 3\\ -3 & -2\end{smallmatrix})(\begin{smallmatrix}1\\ 0\end{smallmatrix}) = (\begin{smallmatrix} -2 \\ -3 \end{smallmatrix})$. This vector points downwards and to the left, indicating a clockwise rotation if you follow the arrow.

This homogeneous linear system is stable because the trajectories spiral inward towards the origin. The negative real part of the eigenvalues (Re(λ)=−2) causes the trajectories to move towards the origin, and the presence of the imaginary part (±3i) indicates that the trajectories are spirals. The system exhibits a stable spiral in a clockwise direction (Refer to Figure i for a visual representation and aid in understanding it)

- Sketch the following homogeneous linear system

A = $(\begin{smallmatrix}2 & 3\\ -3 & 2\end{smallmatrix}), λ = 2 ± 3i$

The real part of the eigenvalues is Re(λ) = 2 > 0. Since the real part of the eigenvalues is positive, the trajectories will spiral outward, away from the origin.

The presence of the imaginary part ±3i indicates that the trajectories will be spirals (oscillatory behaviour).

To determine the direction of the spiral, you can compute the action of the matrix on a standard basis vector, such as $(\begin{smallmatrix}1\\ 0\end{smallmatrix})$

$(\begin{smallmatrix}2 & 3\\ -3 & 2\end{smallmatrix})(\begin{smallmatrix}1\\ 0\end{smallmatrix}) = (\begin{smallmatrix} 2 \\ -3 \end{smallmatrix})$. This vector points downwards and to the right, indicating a clockwise rotation if you follow the arrow.

This homogeneous linear system is unstable because the trajectories spiral outwards away from the origin. The positive real part of the eigenvalues (Re(λ) = 2) causes the trajectories to move away from the origin, and the presence of the imaginary part (±3i) indicates that the trajectories are spirals. The system exhibits an unstable spiral in a clockwise direction (Refer to Figure ii for a visual representation and aid in understanding it)
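The reasoning used in the two spiral examples above can be sketched in code (a rough helper of my own devising, not a general classifier): the sign of Re(λ) decides stability, and the sign of y' at the point (1, 0) hints at the rotation sense.

```python
import numpy as np

def classify(A):
    # Sign of the real part of an eigenvalue: stable (inward) vs unstable (outward)
    lam = np.linalg.eigvals(A)[0]
    stability = "stable" if lam.real < 0 else "unstable"
    # Velocity at (1, 0): negative y-component means the spiral turns clockwise
    vel = A @ np.array([1.0, 0.0])
    direction = "clockwise" if vel[1] < 0 else "counterclockwise"
    return stability, direction

print(classify(np.array([[-2.0, 3.0], [-3.0, -2.0]])))  # ('stable', 'clockwise')
print(classify(np.array([[2.0, 3.0], [-3.0, 2.0]])))    # ('unstable', 'clockwise')
```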

- Sketch the following homogeneous linear system

A = $(\begin{smallmatrix}0 & 1\\ -5 & 0\end{smallmatrix})$

Step 1: **Finding the Eigenvalues**

We start by calculating the eigenvalues λ of A by solving the characteristic equation: det(A -λI) = 0 ↭ $det(\begin{smallmatrix}-λ & 1\\ -5 & -λ\end{smallmatrix}) = (−λ)(−λ)−(1)(−5)= λ^2+5 = 0$

Solve for λ: $λ^2 = -5 ↭ λ = ±\sqrt{5}i$, so the eigenvalues are purely imaginary.

Eigenvalues that are purely imaginary indicate oscillatory behavior in the system. Since the eigenvalues have no real component, there will be no exponential growth or decay, meaning the trajectories will be closed orbits around the origin. This type of system is known as a center, and the trajectories are expected to be circles or ellipses centered at the origin

Step 2: **Finding the Eigenvector for $λ = +\sqrt{5}i$**

To find the eigenvector corresponding to $λ = ±\sqrt{5}i$, we solve: (A−λI)v = 0

$(A - \sqrt{5}iI)\vec{v} = (\begin{smallmatrix}-\sqrt{5}i & 1\\ -5 & -\sqrt{5}i\end{smallmatrix})(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix}) = (\begin{smallmatrix}0\\ 0\end{smallmatrix})$

This gives the system of equations:

$\begin{cases} -\sqrt{5}ia_1 + a_2 = 0 \\ -5a_1 -\sqrt{5}ia_2 = 0 \end{cases}$

From the first equation, we get: $a_2 = \sqrt{5}ia_1$. Let a_{1} = 1, then $a_2 = \sqrt{5}ia_1 = \sqrt{5}i$. So, an eigenvector corresponding to $λ = \sqrt{5}i$ is v = $(\begin{smallmatrix}1\\ \sqrt{5}i\end{smallmatrix})$

Step 3: **Constructing the Complex Solution**

The complex solution is: $\vec{x}(t) = Ce^{λt}v = C(\begin{smallmatrix} 1 \\ \sqrt{5}i\end{smallmatrix})e^{\sqrt{5}it} =[\text{Using Euler's formula}] C(\begin{smallmatrix} 1 \\ \sqrt{5}i\end{smallmatrix})(\cos(\sqrt{5}t)+i\sin(\sqrt{5}t))$

Step 4: **Separating Real and Imaginary Parts to Form Real Solutions**

Expanding componentwise, $(\begin{smallmatrix} 1 \\ \sqrt{5}i\end{smallmatrix})(\cos(\sqrt{5}t)+i\sin(\sqrt{5}t)) = (\begin{smallmatrix} \cos(\sqrt{5}t) \\ -\sqrt{5}\sin(\sqrt{5}t)\end{smallmatrix}) + i(\begin{smallmatrix} \sin(\sqrt{5}t) \\ \sqrt{5}\cos(\sqrt{5}t)\end{smallmatrix})$, so the real and imaginary parts of x(t) give us two linearly independent solutions:

The real solutions are: $\vec{x_1} = (\begin{smallmatrix} \cos(\sqrt{5}t) \\ -\sqrt{5}\sin(\sqrt{5}t)\end{smallmatrix}), \vec{x_2} = (\begin{smallmatrix} \sin(\sqrt{5}t) \\ \sqrt{5}\cos(\sqrt{5}t)\end{smallmatrix})$

Therefore, the general solution can be written as a linear combination of these two real solutions. The general solution for a system with purely imaginary eigenvalues can be expressed as: $\vec{x}(t) = C_1(\begin{smallmatrix} \cos(\sqrt{5}t) \\ -\sqrt{5}\sin(\sqrt{5}t)\end{smallmatrix}) + C_2(\begin{smallmatrix} \sin(\sqrt{5}t) \\ \sqrt{5}\cos(\sqrt{5}t)\end{smallmatrix})$
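As a sanity check (a sketch using sympy), the real and imaginary parts of the complex solution each satisfy $\vec{x}' = A\vec{x}$:

```python
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[0, 1], [-5, 0]])
w = sp.sqrt(5)
x1 = sp.Matrix([sp.cos(w*t), -w*sp.sin(w*t)])   # real part of e^{√5 it} v
x2 = sp.Matrix([sp.sin(w*t), w*sp.cos(w*t)])    # imaginary part
for sol in (x1, x2):
    residual = sol.diff(t) - A*sol
    print(residual.T)   # the zero row vector: each part solves the system
```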

Step 5: **Interpreting the Solution and Phase Portrait**

The eigenvalues are purely imaginary, which indicates that the system has oscillatory behavior and periodic motion ($cos(\sqrt{5}t)$ and $sin(\sqrt{5}t)$), so there is no exponential growth or decay (λ has no real part). This results in closed trajectories that form circular or elliptical orbits around the origin.

The system represents a center, meaning that trajectories will be closed curves (circles or ellipses) around the origin. They neither converge to nor diverge from the origin. Instead, they form closed orbits. (Refer to Figure iii for a visual representation and aid in understanding it)

Step 6: **Determining the Direction of Rotation**

To determine whether the closed trajectories rotate clockwise or counterclockwise, we examine the effect of A on a standard basis vector, such as $(\begin{smallmatrix}1\\ 0\end{smallmatrix})$

$ (\begin{smallmatrix}0 & 1\\ -5 & 0\end{smallmatrix})(\begin{smallmatrix}1\\ 0\end{smallmatrix}) = (\begin{smallmatrix}0\\ -5\end{smallmatrix})$. The result, $(\begin{smallmatrix}0\\ -5\end{smallmatrix})$, shows that at the point $(\begin{smallmatrix}1\\ 0\end{smallmatrix})$ the velocity points straight down, indicating a clockwise rotation around the origin.
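Numerically (a sketch): the eigenvalues of this A are purely imaginary, and the velocity at (1, 0) points straight down.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-5.0, 0.0]])
lam = np.linalg.eigvals(A)
print(lam)                        # ±√5 i: purely imaginary, a center
print(A @ np.array([1.0, 0.0]))   # [0, -5]: clockwise rotation
```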

In the study of differential equations, particularly in advanced calculus or Calculus III, homogeneous linear systems play a crucial role. Specifically, we focus on systems of the form: $\vec{x}' = A\vec{x}$ where $\vec{x}(t)$ is a vector of unknown functions, $\vec{x} = (\begin{smallmatrix}x(t)\\ y(t)\end{smallmatrix})$, A is a constant 2x2 matrix, and $\vec{x}'(t)$ denotes the derivative of $\vec{x}(t)$ with respect to t.

Our goal is to find the general solution to this system, which involves understanding the concepts of linear independence, the Wronskian, and the fundamental matrix.

Theorem. The general solution to the system $\vec{x'} = A\vec{x}$ can be expressed as a linear combination of two linearly independent solutions: $\vec{x(t)} = c_1\vec{x_1} +c_2\vec{x_2}$ where c_{1} and c_{2} are constants, and $\vec{x_1}(t)$ and $\vec{x_2}(t)$ are linearly independent solutions.

- These solutions $\vec{x_1}(t)$ and $\vec{x_2}(t)$ are called fundamental solutions because they form a basis for the solution space of the differential equation. Any solution $\vec{x}(t)$ can be written as a linear combination of these two solutions.
- Two solutions are linearly independent if one is not a scalar multiple of the other. This means they span the full space of solutions.
- Constants c_{1} and c_{2}. These constants are determined by initial conditions or specific requirements of the problem.

To check whether two solutions $\vec{x_1}$ and $\vec{x_2}$ are linearly independent, we use the Wronskian.

Definition. The Wronskian for two vector-valued functions is defined as: $W(\vec{x_1}(t), \vec{x_2}(t)) := |\vec{x_1} \vec{x_2}| = det(\begin{smallmatrix}x_1(t) & x_2(t)\\ y_1(t) & y_2(t)\end{smallmatrix})$ where $\vec{x_1}(t) = (\begin{smallmatrix}x_1(t)\\y_1(t)\end{smallmatrix})$ and $\vec{x_2} = (\begin{smallmatrix}x_2(t)\\y_2(t)\end{smallmatrix})$

The Wronskian gives us valuable information about the relationship between the two solutions:

- If the Wronskian W(t) ≡ 0 for all t, then the two solutions $\vec{x_1}, \vec{x_2}$ are linearly dependent - that is, one is a scalar multiple of the other.
- If the Wronskian W(t) is never zero, W(t) ≠ 0 for any t in an interval, then the solutions $\vec{x_1}, \vec{x_2}$ are linearly independent, meaning they span the entire solution space.

Note: In linear systems with constant coefficients, the Wronskian is either always zero or never zero. This is due to the uniqueness of solutions in linear differential equations.

A powerful tool in solving linear systems is the fundamental matrix.

A fundamental matrix X(t) for the system $\vec{x'} = A\vec{x}$ is a 2 x 2 matrix whose columns are linearly independent solutions of the system, $X := [\vec{x_1}\ \vec{x_2}] = (\begin{smallmatrix}x_1(t) & x_2(t)\\ y_1(t) & y_2(t)\end{smallmatrix})$. It provides a compact representation of the full solution space.

Each column of X(t) is one of the independent solutions, so the matrix X(t) contains all the information about the general solution to the system.

Properties:

- The determinant of X(t), which is the Wronskian of the two solutions, is never zero. Since $\vec{x_1}(t)$ and $\vec{x_2}(t)$ are linearly independent, then det(X(t)) ≠ 0 for all t.
- The matrix X(t) satisfies the original differential equation: X' = AX.

This property follows from the fact that each column of X(t) is a solution to the differential equation

X' = AX ↭ $[\vec{x_1}'\ \vec{x_2}'] = A[\vec{x_1}\ \vec{x_2}] =[\text{Simple matrix multiplication}] [A\vec{x_1}\ A\vec{x_2}] ↭ \vec{x_1}' = A\vec{x_1}, \vec{x_2}' = A\vec{x_2}$
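Both properties can be checked on the center example from earlier (a sketch; the fundamental matrix is assembled from the two real solutions for A = [[0, 1], [-5, 0]]):

```python
import sympy as sp

t = sp.symbols('t', real=True)
A = sp.Matrix([[0, 1], [-5, 0]])
w = sp.sqrt(5)
# Columns are the two linearly independent real solutions.
X = sp.Matrix([[sp.cos(w*t),    sp.sin(w*t)],
               [-w*sp.sin(w*t), w*sp.cos(w*t)]])
W = sp.simplify(X.det())    # the Wronskian: a nonzero constant, √5
print(W)
print(X.diff(t) - A*X)      # the zero matrix: X satisfies X' = AX
```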

This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].
