"Mathematics is the language in which God has written the universe," Galileo Galilei.
"Mathematics is not about numbers, equations, computations, or algorithms: it is about understanding," William Paul Thurston.
An algebraic equation is a mathematical statement that asserts the equality of two algebraic expressions. These expressions are constructed using variables, constants, and algebraic operations (addition, subtraction, multiplication, division, and exponentiation).
Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x + 5y$, $y' + y = 4x\cos(2x)$, $\frac{dy}{dx} = x^2y + y$, etc.
It involves a dependent variable, an independent variable, and one or more derivatives of the dependent variable (e.g., in $\frac{dy}{dx} = 3x + 5y$, the dependent variable is y and the independent variable is x).
The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ordinary differential equations (ODEs). It states that if f(x, y) and $\frac{∂f}{∂y}$ are continuous in a region of the plane containing the point (x_{0}, y_{0}), then the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point (x_{0}, y_{0}).
A first-order linear differential equation (ODE) has the general form: a(x)y' + b(x)y = c(x) where y′ is the derivative of y with respect to x, and a(x), b(x), and c(x) are functions of x. If c(x) = 0, the equation is called homogeneous, i.e., a(x)y’ + b(x)y = 0.
The equation can also be written in the standard linear form as: y’ + p(x)y = q(x) where $p(x)=\frac{b(x)}{a(x)}\text{ and }q(x) = \frac{c(x)}{a(x)}$ (assuming a(x) ≠ 0).
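As a quick illustration, the standard form can be solved and checked symbolically. The sketch below assumes SymPy is available, and the particular choice p(x) = 2, q(x) = x is a made-up example:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Hypothetical example in standard linear form y' + p(x)y = q(x), with p(x) = 2, q(x) = x
ode = sp.Eq(y(x).diff(x) + 2*y(x), x)
sol = sp.dsolve(ode, y(x))   # general solution, with one arbitrary constant C1

# Verify by substituting the solution back into y' + 2y - x
residual = sp.simplify(sol.rhs.diff(x) + 2*sol.rhs - x)
assert residual == 0
```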
Second-order linear homogeneous ordinary differential equations (ODEs) are fundamental in mathematical modeling and appear frequently in physics, engineering, and other sciences. They describe a wide range of phenomena, such as mechanical vibrations, electrical circuits, and fluid dynamics.
Definition. A second-order linear homogeneous ODE is a differential equation of the form: y'' + p(x)y' + q(x)y = 0 where:
The term linear refers to the fact that y, y’, and y′′ appear to the first power only: there are no squares (y^{2}), cubes (y^{3}), or other nonlinear expressions.
The term homogeneous means the right-hand side of the equation is zero. In other words, the equation is set equal to zero, indicating that there are no external forcing functions.
The goal when solving such ODEs is to find the general solution, which represents all possible solutions.
The general solution to a second-order linear homogeneous ODE is: y = c_{1}y_{1} + c_{2}y_{2} where
Two solutions y_{1} and y_{2} are said to be linearly independent if neither is a constant multiple of the other. That is, there is no constant c such that y_{2} = cy_{1}, and no constant c’ such that y_{1} = c’y_{2}, over the interval of interest.
Why is Linear Independence Important?
Test for Linear Independence
One common method to test for linear independence is using the Wronskian determinant: $W(y_1, y_2)(x) = y_1(x)y_2'(x) -y_1'(x)y_2(x)$. If W(y_{1}, y_{2})(x) ≠ 0 for some x in the interval, then y_{1} and y_{2} are linearly independent.
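The Wronskian test is easy to carry out symbolically. Assuming SymPy is available, here is a sketch for a sample pair of functions (y₁ = x and y₂ = x·ln(x) are just illustrative choices):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Sample pair: y1 = x, y2 = x*ln(x)
y1, y2 = x, x*sp.log(x)

# W(y1, y2) = y1*y2' - y1'*y2
W = sp.simplify(y1*y2.diff(x) - y1.diff(x)*y2)

# W = x, which is nonzero for x > 0, so y1 and y2 are linearly independent
assert sp.simplify(W - x) == 0
```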
The superposition principle applies to linear homogeneous differential equations and states: If y_{1}(x) and y_{2}(x) are solutions to a linear homogeneous ODE, then any linear combination of y_{1} and y_{2}, i.e., y = c_{1}y_{1} + c_{2}y_{2}, is also a solution.
It allows us to construct a general solution from known solutions and underpins the structure of the solution space for linear homogeneous ODEs.
We will present two proofs to demonstrate why linear combinations of solutions are also solutions.
Consider the ODE: y’’ + p(x)y’ + q(x)y = 0. Suppose y_{1} and y_{2} are known solutions to the ODE, meaning:
y_{1}’’ + p(x)y_{1}’ + q(x)y_{1} = 0
and
y_{2}’’ + p(x)y_{2}’ + q(x)y_{2} = 0
Now, take a linear combination y = c_{1}y_{1} + c_{2}y_{2}, where c_{1} and c_{2} are constant. Let’s verify that this y is also a solution.
First derivative: y’ = c_{1}y’_{1} + c_{2}y’_{2}. Second derivative: y’’ = c_{1}y’’_{1} + c_{2}y’’_{2}.
Next, substitute these into the original equation y’’ + p(x)y’ + q(x)y = 0:
(c_{1}y’’_{1} + c_{2}y’’_{2}) + p(x)(c_{1}y’_{1} + c_{2}y’_{2}) + q(x)(c_{1}y_{1} + c_{2}y_{2}) = 0 ↭[🚀]
By distributing c_{1} and c_{2}, we get:
↭[🚀] c_{1}(y’’_{1} + p(x)y’_{1} + q(x)y_{1}) + c_{2}(y’’_{2} + p(x)y’_{2} + q(x)y_{2}) = 0 ↭[🚀]
Since y_{1} and y_{2} are both solutions: y’’_{1} + p(x)y’_{1} + q(x)y_{1} = 0 and y’’_{2} + p(x)y’_{2} + q(x)y_{2} = 0. Therefore,
↭[🚀] c_{1}·0 + c_{2}· 0 = 0. Thus, y = c_{1}y_{1} + c_{2}y_{2} satisfies the original equation, and is indeed a solution ∎
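The algebra in this proof can be checked symbolically on a concrete instance. The sketch below assumes SymPy is available and uses the sample ODE y’’ + y = 0, whose solutions include cos(x) and sin(x):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# y1 = cos(x) and y2 = sin(x) both solve y'' + y = 0
y1, y2 = sp.cos(x), sp.sin(x)

# An arbitrary linear combination with symbolic constants c1, c2
y = c1*y1 + c2*y2

# Substitute into y'' + y: the residual vanishes identically, so y is again a solution
residual = sp.simplify(y.diff(x, 2) + y)
assert residual == 0
```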
Our linear homogeneous ODE is normally written as y’’ + p(x)y’ + q(x)y = 0. We are going to rewrite it using the differentiation operator D as D^{2}y + pDy + qy = 0, then factor out y: (D^{2} + pD + q)·y = 0.
Now, we are going to consider D^{2} + pD + q as a linear operator and abbreviate it as L, so our equation is simplified to L·y = 0 where L = D^{2} + pD + q ⇒ L(y) = y’’ + p(x)y’ + q(x)y. L is kind of a black box or function with inputs u(x) and outputs v(x), the result of applying the operator L to u(x).
L is a linear operator, meaning it satisfies two important properties: additivity, L(u_{1} + u_{2}) = L(u_{1}) + L(u_{2}), and homogeneity (scaling), L(c·u) = c·L(u), where c is a constant and u, u_{1}, and u_{2} are functions.
For example, the differential operator is a linear operator: (u_{1} + u_{2})’ = u_{1}’ + u_{2}’ and (cu)’ = cu’.
Additivity. Suppose u_{1} and u_{2} are two functions. Then, the operator L applied to the sum of u_{1} and u_{2} is:
L(u_{1} + u_{2}) = (u_{1} + u_{2})’’ + p(x)(u_{1} + u_{2})’ + q(x)(u_{1} + u_{2})
Using the basic properties of derivatives and addition, we can split each term:
L(u_{1} + u_{2}) = u_{1}’’ + u_{2}’’ +p(x)(u_{1}’ + u_{2}’) + q(x)(u_{1} + u_{2})
Now, rearrange terms:
L(u_{1} + u_{2}) = (u_{1}’’ + p(x)u_{1}’ + q(x)u_{1}) + (u_{2}’’ + p(x)u_{2}’ + q(x)u_{2}) = L(u_{1}) + L(u_{2}).
Homogeneity (Scaling). Next, we need to show that L satisfies the scaling property. Suppose c is a constant and u is a function. Then applying L to the constant multiple c⋅u gives: L(c·u) = (c⋅u)′′ +p(x)(c⋅u)′ +q(x)(c⋅u)
Since the derivative of a constant multiple of a function is just the constant times the derivative of the function, we can factor out the constant c from each term:
L(c⋅u) = c⋅u′′ + p(x)⋅c⋅u′ + q(x)⋅c⋅u =[Factor c out] c·(u’’ + p(x)u’ + q(x)u) = c·L(u).
Our ODE is expressed as L·y = 0. Because L is a linear operator (it satisfies both additivity and homogeneity), it follows that the superposition principle holds, that is, if y_{1} and y_{2} are solutions to the ODE L·y = 0 or L(y) = 0, then any linear combination of these solutions, i.e., c_{1}y_{1} + c_{2}y_{2}, will also be a solution.
L·(c_{1}y_{1} + c_{2}y_{2}) =[Additivity] L(c_{1}y_{1}) + L(c_{2}y_{2}) =[Homogeneity (Scaling)] c_{1}L(y_{1}) + c_{2}L(y_{2}) =[Since both y_{1} and y_{2} are solutions to the ODE] c_{1}·0 + c_{2}·0 = 0. Thus, y = c_{1}y_{1} +c_{2}y_{2} is also a solution.
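Both defining properties of L can be verified symbolically with completely generic functions. A SymPy sketch, where p, q, u₁, and u₂ are undetermined symbolic functions:

```python
import sympy as sp

x, c = sp.symbols('x c')
p = sp.Function('p')(x)
q = sp.Function('q')(x)
u1 = sp.Function('u1')(x)
u2 = sp.Function('u2')(x)

def L(u):
    # L = D^2 + pD + q applied to a function u
    return u.diff(x, 2) + p*u.diff(x) + q*u

# Additivity and homogeneity hold identically
assert sp.simplify(L(u1 + u2) - (L(u1) + L(u2))) == 0
assert sp.simplify(L(c*u1) - c*L(u1)) == 0
```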
For second-order linear homogeneous ODEs with constant coefficients, consider the example y’’ − 5y’ + 6y = 0.
Characteristic Equation: r^{2} − 5r + 6 = 0. Its roots are: r = 2, r = 3.
Real and distinct roots. In this case, the solution is a combination of two exponential terms. The general solution is y(x) = $c_1e^{2x}+c_2e^{3x}$
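The constant-coefficient recipe is easy to check symbolically. A SymPy sketch for the example y’’ − 5y’ + 6y = 0:

```python
import sympy as sp

r, x = sp.symbols('r x')

# Characteristic equation of y'' - 5y' + 6y = 0
roots = sp.solve(sp.Eq(r**2 - 5*r + 6, 0), r)   # roots r = 2 and r = 3
assert sorted(roots) == [2, 3]

# Verify one instance of the general solution c1*e^{2x} + c2*e^{3x} (taking c1 = c2 = 1)
y = sp.exp(2*x) + sp.exp(3*x)
assert sp.simplify(y.diff(x, 2) - 5*y.diff(x) + 6*y) == 0
```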
This is Euler’s Equation: x^{2}y’’ − xy’ + y = 0, for x > 0.
Assumed Solution: y(x) = x^{r}
Substitute into the ODE: x^{2}(r(r−1)x^{r−2}) − x(rx^{r−1}) + x^{r} = 0 ↭ r(r−1)x^{r} − rx^{r} + x^{r} = 0
Simplify: [r(r-1) -r + 1]x^{r} = 0 ⇒ [r^{2} -2r + 1]x^{r} = 0. Characteristic Equation: r^{2} −2r + 1 = 0. Factoring this, we find: (r−1)^{2} = 0. The roots of the characteristic equation are: r=1 (repeated root).
Since r = 1 is a repeated root, we use reduction of order and seek a second solution of the form y_{2}(x) = x^{r}v(x). We need to find v(x) such that y_{2}(x) is a solution to the original differential equation.
Compute derivatives: y_{2}’(x) = rx^{r-1}v(x) + x^{r}v’(x)
y_{2}’’(x) = r(r-1)x^{r-2}v(x) + rx^{r-1}v’(x) + rx^{r-1}v’(x) + x^{r}v’’(x) = r(r-1)x^{r-2}v(x) + 2rx^{r-1}v’(x) + x^{r}v’’(x)
Substitute y_{2} into the differential equation x^{2}y′′ − xy′ + y = 0: x^{2}(r(r-1)x^{r-2}v(x) + 2rx^{r-1}v’(x) + x^{r}v’’(x)) − x(rx^{r-1}v(x) + x^{r}v’(x)) + x^{r}v(x) = r(r-1)x^{r}v(x) + 2rx^{r+1}v’(x) + x^{r+2}v’’(x) − rx^{r}v(x) − x^{r+1}v’(x) + x^{r}v(x) =[Combine like terms] x^{r+2}v’’(x) + (2r-1)x^{r+1}v’(x) + (r(r-1) − r + 1)x^{r}v(x) =[Given r = 1] x^{3}v’’(x) + x^{2}v’(x) + 0·x·v(x) = 0 ⇒ x^{3}v’’(x) + x^{2}v’(x) = 0 ⇒[Divide the entire equation by x^{2} (since x > 0)] xv’’(x) + v’(x) = 0
This is a first-order linear ODE for v′(x). Let u = v′(x): xu’(x) + u(x) = 0⇒[This can be rewritten as:] $u’(x) + \frac{1}{x}u(x) = 0$.
This is a separable equation: $\frac{du}{u} = \frac{-1}{x}dx⇒[\text{Integrate both sides}] ln|u| = -ln|x| + C ↭ u = \frac{K}{x}$ where K = ±e^{C} is a constant. Thus, $v’(x) = \frac{K}{x} ⇒ v(x) = Kln(x) + C’$. Taking K = 1 and C’ = 0 without loss of generality (these constants are absorbed into the general solution), we get v(x) = ln(x), so the second solution is y_{2}(x) = x^{r}v(x) = x·ln(x).
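As a sanity check, the second solution obtained by reduction of order can be substituted back into the Euler equation. A short SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Reduction of order gave y2 = x*ln(x); check it solves x^2*y'' - x*y' + y = 0
y2 = x*sp.log(x)
residual = sp.simplify(x**2*y2.diff(x, 2) - x*y2.diff(x) + y2)
assert residual == 0
```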
For two functions y_{1}(x) and y_{2}(x) to be linearly independent, the Wronskian W(y_{1}, y_{2})(x) must be non-zero.
y_{1}(x) = x, y_{1}’(x) = 1. y_{2}(x) = xln(x), y_{2}’(x) = $x\cdot\frac{1}{x}+ln(x) = 1 + ln(x)$
Since W(y_{1}, y_{2})(x) = $y_1(x)y_2’(x)-y_1’(x)y_2(x) = x(ln(x)+1)-1·xln(x) = x ≠ 0$ for x > 0, the solutions are linearly independent.
General Solution: y(x) = c_{1}x^{r} + c_{2}x^{r}ln(x) =[Substituting r=1:] c_{1}x + c_{2}xln(x) where:
Given initial conditions y(x_{0}) = a, y’(x_{0}) = b, the general solution is y(x) = c_{1}x + c_{2}xln(x) and its derivative is y’(x) = c_{1} + c_{2}(ln(x) + 1). Substituting the initial conditions gives the system:
$\begin{cases} c_1x_0 + c_2x_0ln(x_0) = a \\ c_1 + c_2(ln(x_0)+1) = b \end{cases}$
This system can be solved for c_{1} and c_{2} provided that the Wronskian(y_{1}, y_{2})(x_{0}) = x_{0} ≠ 0, ensuring a unique solution exists.
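Solving that system symbolically confirms the role of the Wronskian. A SymPy sketch, where a, b, and x₀ are the symbolic initial data from above:

```python
import sympy as sp

a, b, c1, c2 = sp.symbols('a b c1 c2')
x0 = sp.symbols('x0', positive=True)

# The 2x2 linear system for c1, c2 coming from y(x0) = a, y'(x0) = b
eqs = [sp.Eq(c1*x0 + c2*x0*sp.log(x0), a),
       sp.Eq(c1 + c2*(sp.log(x0) + 1), b)]
sol = sp.solve(eqs, [c1, c2])

# c2 = b - a/x0: solving involves dividing by the Wronskian W = x0, hence x0 != 0 is needed
assert sp.simplify(sol[c2] - (b - a/x0)) == 0
```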
Let y_{1}(x) and y_{2}(x) be two linearly independent solutions to the differential equation y′′ +p(x)y′ +q(x)y = 0
Proposition. The family of solutions {c_{1}y_{1} + c_{2}y_{2}} is sufficient to satisfy any initial conditions for the second-order linear homogeneous differential equation y′′ +p(x)y′ +q(x)y = 0.
Proof.
Let y(x_{0}) = a, y’(x_{0}) = b be the initial values at some point x_{0}. The general solution to the homogeneous equation is of the form: y(x) = c_{1}y_{1}(x) + c_{2}y_{2}(x) where y_{1}(x) and y_{2}(x) are two independent solutions to the differential equation.
Taking the derivative of the general solution y(x), we get: y’ = c_{1}y_{1}’ + c_{2}y_{2}’. Substitute the initial conditions y(x_{0})=a and y′(x_{0})=b into these expressions:
$\begin{cases} c_1y_1(x_0) + c_2y_2(x_0) = a \\ c_1y_1’(x_0) + c_2y_2’(x_0) = b \end{cases}$
This forms a system of two linear equations in the unknowns (variables) c_{1} and c_{2}. For the system to have a unique solution for c_{1} and c_{2}, the determinant of the coefficient matrix, which is called the Wronskian, must be non-zero at x_{0}:
$W(y_1, y_2)(x_0) = \Bigl \vert\begin{smallmatrix}y_1(x_0) & y_2(x_0)\\ y_1'(x_0) & y_2'(x_0)\end{smallmatrix}\Bigr \vert ≠ 0$
The Wronskian of two differentiable functions f and g is defined as W(f, g) = fg’ − gf’. More generally, W(f_{1}, ···, f_{n}) = $\Biggl \vert\begin{smallmatrix}f_1(x) & f_2(x) & \cdots & f_n(x)\\ f_1'(x) & f_2'(x) & \cdots & f_n'(x)\\ \vdots & \vdots & \ddots & \vdots\\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x)\end{smallmatrix}\Biggr \vert$. This is the determinant of the matrix constructed by placing the functions in the first row, the first derivatives of the functions in the second row, and so on. In our particular case, W(y_{1}, y_{2}) = $\Bigl \vert\begin{smallmatrix}y_1 & y_2\\ y_1' & y_2'\end{smallmatrix}\Bigr \vert = y_1(x)y_2'(x)-y_1'(x)y_2(x)$
This determinant plays a crucial role in determining whether y_{1} and y_{2} are linearly independent. If W(y_{1}, y_{2}) ≠ 0 for some x, the functions y_{1} and y_{2} are linearly independent, meaning that they form a valid basis for the space of solutions to the differential equation. If W(y_{1}, y_{2}) = 0 for all x, the two functions are linearly dependent, which means one is a constant multiple of the other.
Conclusion: Because the Wronskian W(y_{1}, y_{2})(x_{0}) ≠ 0, we can solve the system of equations for c_{1} and c_{2}, ensuring that the general solution satisfies the given initial conditions. Thus, the family {c_{1}y_{1}(x) + c_{2}y_{2}(x)} is sufficient to satisfy any initial conditions ∎
Theorem on the Wronskian. If y_{1} and y_{2} are solutions to the linear homogeneous ODE y′′ +p(x)y′ +q(x)y = 0, and p(x) and q(x) are continuous on an interval I, then the Wronskian W(y_{1}, y_{2}) either vanishes identically (i.e., W(y_{1}, y_{2}) ≡ 0, W(y_{1}, y_{2})(x) = 0 ∀x ∈ I) or is never zero on I (i.e., W(y_{1}, y_{2})(x) ≠ 0 ∀x ∈ I).
Implications
Since y_{1} and y_{2} are assumed to be linearly independent solutions, their Wronskian W(y_{1}, y_{2}) ≠ 0 on I.
Given any initial condition y(x_{0}) = a and y’(x_{0}) = b, we can always find unique values for c_{1} and c_{2} that satisfy these conditions (the system of equations for c_{1} and c_{2} has a unique solution).
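The theorem's dichotomy is easy to observe on an example: for y’’ + y = 0 with solutions cos(x) and sin(x), the Wronskian is constant and hence never zero. A SymPy check:

```python
import sympy as sp

x = sp.symbols('x')

# Two solutions of y'' + y = 0
y1, y2 = sp.cos(x), sp.sin(x)

# W = y1*y2' - y1'*y2 = cos^2(x) + sin^2(x) = 1: never zero on the real line
W = sp.simplify(y1*y2.diff(x) - y1.diff(x)*y2)
assert W == 1
```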
Claim: The set of solutions {c_{1}y_{1} + c_{2}y_{2}} forms the complete family of solutions to the differential equation. Note: However, y_{1} and y_{2} are not necessarily the only independent solutions (“the only game in town”). It is possible that other pairs of independent solutions, say u_{1}(x) and u_{2}(x), can also form a valid basis for the solution space. In this case, the general solution would still be expressed as: {c_{1}u_{1} + c_{2}u_{2}}
Relationship Between Different Solution Pairs:
However, any other pair u_{1} and u_{2} can be expressed as linear combinations of y_{1} and y_{2}, meaning there is a relationship:
$\begin{cases} u_1 = \bar {c_1}y_1 + \bar {c_2}y_2 \\ u_2 = \bar {\bar {c_1}}y_1 + \bar {\bar {c_2}}y_2 \end{cases}$
We can also define “normalized solutions”, say Y_{1} and Y_{2}, which are particular solutions of the ODE that satisfy specific initial conditions at x_{0} (often x_{0} = 0): Y_{1}(0) = 1, Y_{1}’(0) = 0, and Y_{2}(0) = 0, Y_{2}’(0) = 1.
Purpose of Normalized Solutions: convenience (they simplify the process of solving initial value problems) and direct application (Any solution satisfying initial conditions can be directly expressed as a linear combination of Y_{1}(x) and Y_{2}(x)).
Example. For y’’ + y = 0, the standard solutions are y_{1}(x) = cos(x), y_{2}(x) = sin(x).
The normalized solutions are Y_{1}(x) = cos(x) (since cos(0) = 1 and cos’(0) = 0) and Y_{2}(x) = sin(x) (since sin(0) = 0 and sin’(0) = 1).
Example. For y’’ − y = 0, the standard solutions are y_{1}(x) = e^{x}, y_{2}(x) = e^{-x}, and the general solution is y = c_{1}e^{x} + c_{2}e^{-x}, with y’ = c_{1}e^{x} − c_{2}e^{-x}
Find Normalized Solutions. To find Y_{1}(x):
$\begin{cases}c_1e^0 + c_2e^0 = c_1 + c_2 = 1 \\ c_1e^0 - c_2e^0 = c_1 -c_2 = 0 \end{cases}$
The solution is $c_1 = c_2 = \frac{1}{2}, Y_1 = \frac{e^x+e^{-x}}{2} = cosh(x)$
To find Y_{2}:
$\begin{cases} c_1e^0 + c_2e^0 = c_1 + c_2 = 0 \\ c_1e^0 - c_2e^0 = c_1 -c_2 = 1 \end{cases}$
The solution is $c_1 = \frac{1}{2}, c_2 = \frac{-1}{2}, Y_2 = \frac{e^x-e^{-x}}{2} = sinh(x)$
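The computation of Y₁ and Y₂ can be reproduced symbolically. A SymPy sketch for y’’ − y = 0:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# General solution of y'' - y = 0 and its derivative
y = c1*sp.exp(x) + c2*sp.exp(-x)
yp = y.diff(x)

# Normalized initial conditions: Y1(0) = 1, Y1'(0) = 0; Y2(0) = 0, Y2'(0) = 1
s1 = sp.solve([sp.Eq(y.subs(x, 0), 1), sp.Eq(yp.subs(x, 0), 0)], [c1, c2])
s2 = sp.solve([sp.Eq(y.subs(x, 0), 0), sp.Eq(yp.subs(x, 0), 1)], [c1, c2])
Y1 = y.subs(s1)   # (e^x + e^-x)/2 = cosh(x)
Y2 = y.subs(s2)   # (e^x - e^-x)/2 = sinh(x)

assert sp.simplify((Y1 - sp.cosh(x)).rewrite(sp.exp)) == 0
assert sp.simplify((Y2 - sp.sinh(x)).rewrite(sp.exp)) == 0
```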
Advantages of Normalized Solutions
If we have two normalized solutions, say Y_{1} and Y_{2}, at the point x = 0, then the solution to the Initial Value Problem (IVP): ODE + $\begin{cases} y(0) = a = y_0 \\ y’(0) = b = y’_0 \end{cases}$ is: y = aY_{1} + bY_{2}
This works because the initial conditions are satisfied as follows: y(0) = aY_{1}(0) + bY_{2}(0) = a·1 + b·0 = a, y’(0) = aY’_{1}(0) + bY’_{2}(0) = a·0 + b·1 = b
The Existence and Uniqueness Theorem for second-order equations states that for a second-order linear homogeneous differential equation of the form y’’ + p(x)y’ + q(x)y = 0, where p(x) and q(x) are continuous functions on an interval containing x_{0}, there exists exactly one solution (both existence and uniqueness) that satisfies the given initial conditions y(x_{0}) = a and y'(x_{0}) = b.
This means that the family of solutions {c_{1}Y_{1} + c_{2}Y_{2}} encompasses all possible solutions to the differential equation. Any solution u(x) with initial conditions u(0) = u_{0}, u'(0) = u'_{0}, can be written as: u_{0}Y_{1} + u'_{0}Y_{2}.
Thus, all solutions to the differential equation belong to the family {c_{1}Y_{1} + c_{2}Y_{2}}, and if another solution v(x) satisfies the same initial conditions v(0) = u_{0}, v’(0) = u’_{0}, by the uniqueness theorem, u(x) = v(x).