Mathematics is the language in which God has written the universe, Galileo Galilei
Mathematics is not about numbers, equations, computations, or algorithms: it is about understanding, William Paul Thurston
Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x + 5y$, $y' + y = 4xcos(2x)$, $\frac{dy}{dx} = x^2y + y$, etc.
It involves (e.g., $\frac{dy}{dx} = 3x + 5y$): a dependent variable (y), an independent variable (x), and the derivative of the dependent variable with respect to the independent variable ($\frac{dy}{dx}$).
Definition. A first-order linear ordinary differential equation is an ordinary differential equation (ODE) involving an unknown function y(x), its first derivative y', and functions of the independent variable x, which can be written in the general form: a(x)y' + b(x)y = c(x), where a(x), b(x), and c(x) are given functions of x and a(x) is not identically zero on the interval of interest.
These equations are termed “linear” because the unknown function y and its derivative y’ appear to the first power and are not multiplied together or composed in any nonlinear way.
If the function c(x)=0 for all x in the interval of interest, the equation simplifies to: a(x)y’ + b(x)y = 0. Such an equation is called a homogeneous linear differential equation.
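As a quick illustration, here is a minimal sketch in Python (assuming the SymPy library is available) that symbolically solves one of the example equations above, y' + y = 4xcos(2x):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# First-order linear ODE in the form a(x)y' + b(x)y = c(x),
# here with a(x) = b(x) = 1 and c(x) = 4x*cos(2x).
ode = sp.Eq(y(x).diff(x) + y(x), 4*x*sp.cos(2*x))

# dsolve returns the general solution, containing one arbitrary constant C1.
print(sp.dsolve(ode, y(x)))
```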
The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ODEs. It states that if f(x, y) and its partial derivative ∂f/∂y are continuous in some rectangle containing the point (x0, y0),
then the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point (x0, y0), meaning that it satisfies the initial condition y(x0) = y0.
This theorem ensures that under these conditions, the solution exists and is unique near x = x0.
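For instance, here is a short SymPy sketch (assuming SymPy is available) that solves the initial value problem y' = x²y + y, y(0) = 1, one of the examples above; since f(x, y) = (x² + 1)y and ∂f/∂y = x² + 1 are continuous everywhere, the theorem guarantees the solution returned is the unique one through (0, 1):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y' = f(x, y) with f(x, y) = x**2*y + y; f and df/dy are continuous everywhere,
# so the Existence and Uniqueness Theorem applies at any initial point.
ode = sp.Eq(y(x).diff(x), x**2*y(x) + y(x))

# The ics argument pins down the unique solution through (x0, y0) = (0, 1).
print(sp.dsolve(ode, y(x), ics={y(0): 1}))
```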
A second-order linear homogeneous ordinary differential equation (ODE) with constant coefficients is a differential equation of the form: y'' + Ay' + By = 0, where A and B are constants.
To solve this ODE, we seek two linearly independent solutions y1(t) and y2(t). The general solution is then a linear combination of these solutions: $c_1y_1 + c_2y_2$ where c1 and c2 are two arbitrary constants determined by initial conditions. The key to solving the ODE is the characteristic equation, whose roots determine the behavior of the solutions.
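For a concrete instance, the SymPy sketch below (with A = 3 and B = 2 chosen purely for illustration) computes the roots of the characteristic equation r² + Ar + B = 0 and confirms that dsolve builds the general solution from the corresponding exponentials:

```python
import sympy as sp

x, r = sp.symbols('x r')
y = sp.Function('y')
A, B = 3, 2   # illustrative constants

# Roots of the characteristic equation r^2 + A*r + B = 0.
print(sp.solve(r**2 + A*r + B, r))            # [-2, -1]

# The general solution is a linear combination of e^(r1*x) and e^(r2*x).
ode = sp.Eq(y(x).diff(x, 2) + A*y(x).diff(x) + B*y(x), 0)
print(sp.dsolve(ode, y(x)))                   # C1*exp(-2*x) + C2*exp(-x)
```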
Second-order linear homogeneous ordinary differential equations (ODEs) are fundamental tools in mathematical modeling and appear frequently in physics, engineering, and other sciences. They describe a wide range of phenomena, such as mechanical vibrations, electrical circuits, and fluid dynamics where the rate of change of a quantity depends linearly on the quantity and its derivatives.
Definition. A second-order linear homogeneous ODE is a differential equation of the form: y'' + p(x)y' + q(x)y = 0, where y = y(x) is the unknown function and p(x) and q(x) are given functions of the independent variable x.
The term linear refers to the fact that the unknown y and its derivatives y' and y'' appear to the first power only and are not multiplied together. That is, there are no squares or cubes such as $y^2$ or $(y')^3$, nor any other nonlinear expressions.
The term homogeneous means the right-hand side of the equation is zero. In other words, the equation is set equal to zero, indicating that there are no external forcing functions. This contrasts with nonhomogeneous (or inhomogeneous) equations, where the right-hand side is a non-zero function f(x), leading to an equation of the form: y’’ + p(x)y’ + q(x)y = f(x).
The goal when solving such ODEs is to find the general solution, which represents all possible solutions that satisfy the differential equation.
The general solution to a second-order linear homogeneous ODE is: y = c1y1 + c2y2, where y1 and y2 are two linearly independent solutions of the equation and c1, c2 are arbitrary constants.
Two solutions y1 and y2 are said to be linearly independent if neither is a constant multiple of the other. That is, there is no constant c such that y2 = cy1 and no constant c' such that y1 = c'y2 over the interval of interest.
Why is Linear Independence Important? If y2 were merely a constant multiple of y1, the combination c1y1 + c2y2 would collapse to a one-parameter family of solutions, which cannot satisfy two arbitrary initial conditions y(x0) = a and y'(x0) = b. Two genuinely independent solutions are needed to build the general solution.
One common method to test for linear independence is using the Wronskian determinant. Given two functions y1(x) and y2(x), the Wronskian determinant is defined as $W(y_1, y_2)(x) = \Bigl \vert\begin{smallmatrix}y_1(x) & y_2(x)\\ y_1'(x) & y_2'(x)\end{smallmatrix}\Bigr \vert = y_1(x)y_2'(x) - y_1'(x)y_2(x)$.
Linear Independence Criterion: If the Wronskian W(y1, y2)(x) is not identically zero on the interval of interest (i.e., W(y1, y2)(x) ≠ 0 for some x in the interval), then y1 and y2 are linearly independent on that interval.
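A small SymPy check of this criterion (a sketch, using y1 = cos(x) and y2 = sin(x) as an illustrative pair; they reappear later as solutions of y'' + y = 0):

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.cos(x), sp.sin(x)

# W(y1, y2) = y1*y2' - y1'*y2
W = sp.simplify(y1*y2.diff(x) - y1.diff(x)*y2)
print(W)   # 1, which is never zero, so cos(x) and sin(x) are linearly independent
```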
The superposition principle applies to linear homogeneous differential equations and states: if y1(x) and y2(x) are solutions to a linear homogeneous ODE, then any linear combination of these solutions, y = c1y1 + c2y2, is also a solution, for any constants c1 and c2.
It allows us to construct a general solution from known solutions and underpins the structure of the solution space for linear homogeneous ODEs. The set of all solutions forms a vector space of dimension equal to the order of the differential equation (in this case, two).
We will present two proofs to demonstrate why linear combinations of solutions are also solutions.
Consider the ODE: y’’ + p(x)y’ + q(x)y = 0. Suppose y1 and y2 are known solutions to the ODE, meaning:
y1’’ + p(x)y1’ + q(x)y1 = 0
and
y2’’ + p(x)y2’ + q(x)y2 = 0
Now, let's consider a linear combination y(x) = c1y1 + c2y2, where c1 and c2 are arbitrary constants, and verify that y(x) is also a solution. Substituting into the left-hand side of the ODE and using the linearity of differentiation: y'' + p(x)y' + q(x)y = (c1y1 + c2y2)'' + p(x)(c1y1 + c2y2)' + q(x)(c1y1 + c2y2) = c1(y1'' + p(x)y1' + q(x)y1) + c2(y2'' + p(x)y2' + q(x)y2) = c1·0 + c2·0 = 0. Hence y(x) = c1y1 + c2y2 is indeed a solution.
Our linear homogeneous ODE is normally written as y'' + p(x)y' + q(x)y = 0. We are going to rewrite it using the differentiation operator D (where Dy = y', D²y = y'') as D²y + pDy + qy = 0, then factor out y: (D² + pD + q)·y = 0.
Now, we are going to consider D² + pD + q as a linear operator and abbreviate it as L, so our equation simplifies to L·y = 0, where L = D² + pD + q ⇒ L(y) = y'' + p(x)y' + q(x)y. L is a kind of black box or function that takes an input function u(x) and produces an output function v(x), the result of applying the operator L to u(x).
L is a linear operator, meaning it satisfies two important properties: additivity, L(u1 + u2) = L(u1) + L(u2), and homogeneity (scaling), L(c·u) = c·L(u),
where c is a constant and u, u1, and u2 are functions.
For example, the differential operator is a linear operator: (u1 + u2)’ = u1’ + u2’ and (cu)’ = cu’.
Additivity. Suppose u1 and u2 are two functions. Then, the operator L applied to the sum of u1 and u2 is:
L(u1 + u2) = (u1 + u2)’’ + p(x)(u1 + u2)’ + q(x)(u1 + u2)
Using the basic properties of derivatives and addition, we can split each term:
L(u1 + u2) = u1’’ + u2’’ +p(x)(u1’ + u2’) + q(x)(u1 + u2)
Now, rearrange terms:
L(u1 + u2) = (u1’’ + p(x)u1’ + q(x)u1) + (u2’’ + p(x)u2’ + q(x)u2) = L(u1) + L(u2).
Homogeneity (Scaling). Next, we need to show that L satisfies the scaling property. Suppose c is a constant and u is a function. Then applying L to the constant multiple c⋅u gives: L(c·u) = (c⋅u)′′ +p(x)(c⋅u)′ +q(x)(c⋅u)
Since the derivative of a constant multiple of a function is just the constant times the derivative of the function, we can factor out the constant c from each term:
L(c⋅u) = c⋅u'' + p(x)⋅c⋅u' + q(x)⋅c⋅u =[Factor c out] c·(u'' + p(x)u' + q(x)u) = c·L(u).
Our ODE is expressed as L·y = 0. Because L is a linear operator (it satisfies both additivity and homogeneity), it follows that the superposition principle holds, that is, if y1 and y2 are solutions to the ODE L·y = 0 or L(y) = 0, then any linear combination of these solutions, i.e., c1y1 + c2y2, will also be a solution.
L·(c1y1 + c2y2) =[Additivity] L(c1y1) + L(c2y2) =[Homogeneity (Scaling)] c1L(y1) + c2L(y2) =[Since both y1 and y2 are solutions to the ODE] c1·0 + c2·0 = 0. Thus, y = c1y1 +c2y2 is also a solution.
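The same computation can be carried out symbolically. The sketch below (SymPy, with p, q, y1, y2 left as unspecified functions) expands L(c1y1 + c2y2) − c1L(y1) − c2L(y2) and confirms it is identically zero, which is exactly the linearity used above:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
p, q, y1, y2 = (sp.Function(name) for name in ('p', 'q', 'y1', 'y2'))

def L(u):
    # The operator L = D^2 + p(x)*D + q(x) applied to a function u(x).
    return u.diff(x, 2) + p(x)*u.diff(x) + q(x)*u

combo = c1*y1(x) + c2*y2(x)

# Linearity: L(c1*y1 + c2*y2) - (c1*L(y1) + c2*L(y2)) expands to 0.
print(sp.expand(L(combo) - (c1*L(y1(x)) + c2*L(y2(x)))))   # 0
```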
For second-order linear homogeneous ODEs with variable coefficients, consider the equation (for x > 0): x²y'' − xy' + y = 0.
This is known as an Euler equation or Cauchy-Euler equation: the coefficient of each derivative is a power of x equal to the order of that derivative.
Step 1. For Euler equations, we can assume a solution of the form: $y(x) = x^r$, where r is a constant to be determined.
Step 2. Compute the first and second derivatives, $y' = rx^{r-1}$ and $y'' = r(r-1)x^{r-2}$, and substitute into the ODE: $x^2(r(r-1)x^{r-2}) - x(rx^{r-1}) + x^r = 0 ↭ r(r-1)x^r - rx^r + x^r = 0$
Step 3. Simplify the equation: $[r(r-1) - r + 1]x^r = 0$ ⇒[Since $x^r ≠ 0$ for x > 0, we can divide both sides by $x^r$] $r^2 - 2r + 1 = 0$.
Step 4. Solve the characteristic equation: $r^2 - 2r + 1 = 0$. Factoring this, we find: $(r-1)^2 = 0$. The root of the characteristic equation is r = 1 (repeated root).
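A quick SymPy check of this step (a sketch); sp.roots reports the root together with its multiplicity:

```python
import sympy as sp

r = sp.symbols('r')

# Characteristic equation of the Euler equation: r^2 - 2r + 1 = 0.
print(sp.factor(r**2 - 2*r + 1))    # (r - 1)**2
print(sp.roots(r**2 - 2*r + 1, r))  # {1: 2}: r = 1 is a repeated (double) root
```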
Step 5. Find the General Solution.
Since r = 1 is a repeated root, the trial form $y = x^r$ yields only one solution, $y_1(x) = x$. To obtain a second, linearly independent solution we use reduction of order and look for $y_2(x) = x^rv(x)$. We need to find v(x) such that y2(x) is a solution to the original differential equation.
Compute derivatives: $y_2'(x) = rx^{r-1}v(x) + x^rv'(x)$
$y_2''(x) = r(r-1)x^{r-2}v(x) + rx^{r-1}v'(x) + rx^{r-1}v'(x) + x^rv''(x) = r(r-1)x^{r-2}v(x) + 2rx^{r-1}v'(x) + x^rv''(x)$
Substitute $y_2$ into the differential equation $x^2y'' - xy' + y = 0$: $x^2(r(r-1)x^{r-2}v(x) + 2rx^{r-1}v'(x) + x^rv''(x)) - x(rx^{r-1}v(x) + x^rv'(x)) + x^rv(x) = r(r-1)x^rv(x) + 2rx^{r+1}v'(x) + x^{r+2}v''(x) - rx^rv(x) - x^{r+1}v'(x) + x^rv(x)$ =[Combine like terms] $x^{r+2}v''(x) + (2r-1)x^{r+1}v'(x) + (r(r-1)-r+1)x^rv(x)$ =[Given r = 1] $x^3v''(x) + (2-1)x^2v'(x) + (0-1+1)xv(x) = 0$ ⇒ $x^3v''(x) + x^2v'(x) = 0$ ⇒[Divide the entire equation by $x^2$ (since x > 0)] $xv''(x) + v'(x) = 0$
This is a first-order linear ODE for v'(x). Let u = v'(x): $xu'(x) + u(x) = 0$ ⇒[Rewrite] $u'(x) + \frac{1}{x}u(x) = 0 ↭ \frac{du}{dx} = \frac{-1}{x}u(x)$.
This is a separable equation: $\frac{du}{u} = \frac{-1}{x}dx$ ⇒[Integrate both sides] $ln|u| = -ln|x| + C ↭ u = e^{-ln|x|}e^{C} ↭ u = \frac{K}{x}$ where $K = e^C$ is a constant. Thus, $v'(x) = \frac{K}{x} ⇒ v(x) = Kln(x) + C'$. We can choose K = 1 and C' = 0 without loss of generality (these constants are absorbed into c1 and c2 of the general solution). Therefore, the second solution is $y_2(x) = x^rv(x) = x·ln(x)$.
Step 6. Verify Linear Independence.
For two functions y1(x) and y2(x) to be linearly independent, the Wronskian W(y1, y2)(x) must be non-zero.
$y_1(x) = x$, $y_1'(x) = 1$; $y_2(x) = xln(x)$, $y_2'(x) = x\frac{1}{x} + ln(x) = 1 + ln(x)$.
Since $W(y_1, y_2)(x) = y_1(x)y_2'(x) - y_1'(x)y_2(x) = x(ln(x)+1) - 1·xln(x) = x ≠ 0$ for x > 0, the solutions are linearly independent.
Step 7. Write the General Solution. General solution: $y(x) = c_1x^r + c_2x^rln(x)$ =[Substituting r = 1] $c_1x + c_2xln(x)$, where c1 and c2 are arbitrary constants determined by the initial conditions.
Step 8. Apply Initial Conditions. Given initial conditions y(x0) = a and y'(x0) = b, substitute them into the general solution $y(x) = c_1x + c_2xln(x)$ and its derivative $y'(x) = c_1 + c_2(ln(x) + 1)$:
$\begin{cases} c_1x_0 + c_2x_0ln(x_0) = a \\ c_1 + c_2(ln(x_0)+1) = b \end{cases}$
This system can be solved for c1 and c2 provided that the Wronskian W(y1, y2)(x0) = x0 ≠ 0; hence, the Wronskian ensures that a unique solution to the initial value problem exists.
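To tie Steps 5 through 8 together, here is a SymPy sketch (with the point x0 = 1 chosen purely for illustration and a, b left symbolic) that verifies the general solution and solves the initial-condition system:

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # the Euler equation is considered for x > 0
c1, c2, a, b = sp.symbols('c1 c2 a b')

y = c1*x + c2*x*sp.log(x)            # general solution from Step 7

# Verify that y satisfies x^2*y'' - x*y' + y = 0.
print(sp.simplify(x**2*y.diff(x, 2) - x*y.diff(x) + y))   # 0

# Initial conditions y(x0) = a, y'(x0) = b at the illustrative point x0 = 1,
# where the Wronskian equals x0 = 1 != 0, so the system has a unique solution.
x0 = 1
eqs = [sp.Eq(y.subs(x, x0), a), sp.Eq(y.diff(x).subs(x, x0), b)]
print(sp.linsolve(eqs, c1, c2))      # {(a, b - a)}
```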
Let y1(x) and y2(x) be two linearly independent solutions to a second-order linear homogeneous differential equation of the form: y′′ + p(x)y′ + q(x)y = 0.
Proposition. The family of solutions {c1y1 + c2y2} is sufficient to satisfy any initial conditions for the second-order linear homogeneous differential equation y′′ +p(x)y′ +q(x)y = 0.
Proof.
Let y(x0) = a, y’(x0) = b be the initial values at some point x0. The general solution to the homogeneous equation is of the form: y(x) = c1y1(x) + c2y2(x) where y1(x) and y2(x) are two independent solutions to the differential equation.
Taking the derivative of the general solution y(x), we get: y’ = c1y1’ + c2y2’. Substitute the initial conditions y(x0)=a and y′(x0)=b into these expressions:
$\begin{cases} c_1y_1(x_0) + c_2y_2(x_0) = a \\ c_1y_1’(x_0) + c_2y_2’(x_0) = b \end{cases}$
This is a system of two linear equations with two unknowns (variables) c1 and c2. For the system to have a unique solution for c1 and c2, the determinant of the coefficient matrix, which is called the Wronskian, must be non-zero at x0:
$W(y_1, y_2)(x_0) = \Bigl \vert\begin{smallmatrix}y_1(x_0) & y_2(x_0)\\ y_1’(x_0) & y_2’(x_0)\end{smallmatrix}\Bigr \vert ≠ 0$
The Wronskian of two differentiable functions f and g is defined as W(f, g) = fg' - gf'. More generally, $W(f_1, ···, f_n) = \Biggl \vert\begin{smallmatrix}f_1(x) & f_2(x) & \cdots & f_n(x)\\ f_1'(x) & f_2'(x) & \cdots & f_n'(x)\\ \vdots & \vdots & \ddots & \vdots\\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x)\end{smallmatrix}\Biggr \vert$. This is the determinant of the matrix constructed by placing the functions in the first row, the first derivatives of the functions in the second row, and so on, down to the (n-1)-th derivatives. In our particular case, $W(y_1, y_2) = \Bigl \vert\begin{smallmatrix}y_1 & y_2\\ y_1' & y_2'\end{smallmatrix}\Bigr \vert = y_1(x)y_2'(x)-y_1'(x)y_2(x)$
This determinant plays a crucial role in determining whether y1 and y2 are linearly independent.
Conclusion: Since y1(x) and y2(x) are two linearly independent solutions, the Wronskian W(y1, y2)(x0) ≠ 0, so we can solve the system of equations for c1 and c2, ensuring that the general solution satisfies the given initial conditions. Thus, the family {c1y1(x) + c2y2(x)} is sufficient to satisfy any initial conditions ∎
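In matrix form, the coefficient determinant of this system is exactly the Wronskian at x0. The SymPy sketch below uses purely illustrative symbols for the values y1(x0), y2(x0), y1'(x0), y2'(x0) and the initial data a, b, and inverts the coefficient matrix to recover the unique c1, c2:

```python
import sympy as sp

# Illustrative symbols for the values of y1, y2 and their derivatives at x0.
y1_0, y2_0, dy1_0, dy2_0, a, b = sp.symbols('y1_0 y2_0 dy1_0 dy2_0 a b')

M = sp.Matrix([[y1_0, y2_0],
               [dy1_0, dy2_0]])   # coefficient matrix of the system for c1, c2
rhs = sp.Matrix([a, b])

print(M.det())                    # y1_0*dy2_0 - y2_0*dy1_0 = W(y1, y2)(x0)

# Whenever the Wronskian is nonzero, M is invertible and (c1, c2) is unique.
print(M.inv() * rhs)
```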
Theorem on the Wronskian. If y1 and y2 are solutions to the linear homogeneous ODE y′′ +p(x)y′ +q(x)y = 0, and p(x) and q(x) are continuous on an interval I, then the Wronskian W(y1, y2) either vanishes identically (i.e., W(y1, y2) ≡ 0, W(y1, y2)(x) = 0 ∀x ∈ I) or is never zero on I (i.e., W(y1, y2)(x) ≠ 0 ∀x ∈ I).
Implications
Linear Independence and the Wronskian: Since y1 and y2 are assumed to be linearly independent solutions, their Wronskian W(y1, y2) ≠ 0 for all x ∈ I.
Uniqueness of Solutions: Given any initial condition y(x0) = a and y’(x0) = b, we can always find unique values for c1 and c2 that satisfy these conditions (the system of equations for c1 and c2 has a unique solution) because the Wronskian is non-zero.
Claim: The set of solutions {c1y1 + c2y2} forms the complete family of solutions to the differential equation.
Note: However, y1 and y2 are not necessarily the only independent solutions (“the only game in town”). It is possible that other pairs of independent solutions, say u1(x) and u2(x), can also form a valid basis for the solution space. In this case, the general solution would still be expressed as: {c1u1 + c2u2}
Relationship Between Different Solution Pairs:
However, any other pair u1 and u2 can be expressed as linear combinations of y1 and y2, meaning there is a relationship:
$\begin{cases} u_1 = \bar {c_1}y_1 + \bar {c_2}y_2 \\ u_2 = \bar {\bar {c_1}}y_1 + \bar {\bar {c_2}}y_2 \end{cases}$
We can also define “normalized solutions”, say Y1 and Y2, which are particular solutions of the ODE that satisfy specific initial conditions at x0 (often x0 = 0): Y1(0) = 1, Y1’(0) = 0, and Y2(0) = 0, Y2’(0) = 1.
Purpose of Normalized Solutions: convenience (they simplify the process of solving initial value problems) and direct application (any solution satisfying initial conditions can be directly expressed as a linear combination of Y1(x) and Y2(x)).
The differential equation y'' + y = 0 is a homogeneous linear differential equation with constant coefficients. To find its general solution, we start by solving the characteristic equation: $r^2 + 1 = 0 ⇒ r = ± i$.
These purely imaginary roots yield the standard solutions: y1(x) = cos(x), y2(x) = sin(x). These solutions are linearly independent and form a fundamental set of solutions for the differential equation, so the general solution is y(x) = c1cos(x) + c2sin(x).
Normalized solutions are particular solutions that satisfy specific initial conditions at a chosen point, often x = 0: Y1(x) = cos(x) (since cos(0) = 1 and cos’(0) = 0), Y2(x) = sin(x) (since sin(0) = 0 and sin’(0) = cos(0) = 1).
For the differential equation y'' - y = 0, the characteristic equation is $r^2 - 1 = 0 ⇒ r = ± 1$.
Two real and distinct roots give the standard solutions $y_1(x) = e^x$, $y_2(x) = e^{-x}$; the general solution is $y = c_1e^x + c_2e^{-x}$, with derivative $y' = c_1e^x - c_2e^{-x}$.
Find Normalized Solutions. To find Y1(x):
$\begin{cases}c_1e^0 + c_2e^0 = c_1 + c_2 = 1 \\ c_1e^0 - c_2e^0 = c_1 -c_2 = 0 \end{cases}$
From equation 2: $ c_1 - c_2 = 0 ⇒ c_1 = c_2 $⇒[Substitute c2 = c1 into equation 1] $c_1 + c_1 = 1 ⇒ c_1 = \frac{1}{2}⇒ c_2 = \frac{1}{2}$.
The solution is $c_1 = c_2 = \frac{1}{2}, Y_1 = \frac{e^x+e^{-x}}{2} = cosh(x)$
To find Y2:
$\begin{cases} c_1e^0 + c_2e^0 = c_1 + c_2 = 0 \\ c_1e^0 - c_2e^0 = c_1 -c_2 = 1 \end{cases}$
The solution is $c_1 = \frac{1}{2}, c_2 = \frac{-1}{2}, Y_2 = \frac{e^x-e^{-x}}{2} = sinh(x)$
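A brief SymPy check (a sketch) that cosh and sinh really are the normalized solutions of y'' - y = 0:

```python
import sympy as sp

x = sp.symbols('x')
Y1, Y2 = sp.cosh(x), sp.sinh(x)

# Both satisfy y'' - y = 0.
print(sp.simplify(Y1.diff(x, 2) - Y1), sp.simplify(Y2.diff(x, 2) - Y2))   # 0 0

# Normalization conditions at x = 0.
print(Y1.subs(x, 0), Y1.diff(x).subs(x, 0))   # 1 0
print(Y2.subs(x, 0), Y2.diff(x).subs(x, 0))   # 0 1
```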
Advantages of Normalized Solutions
If we have two normalized solutions, say Y1 and Y2, at the point x = x0, then the solution to the Initial Value Problem (IVP) $\begin{cases} y'' + p(x)y' + q(x)y = 0 \\ y(x_0) = a = y_0 \\ y'(x_0) = b = y'_0 \end{cases}$ is simply: y = aY1 + bY2
This works because the initial conditions are satisfied as follows: y(x0) = aY1(x0) + bY2(x0) = a·1 + b·0 = a, and y'(x0) = aY1'(x0) + bY2'(x0) = a·0 + b·1 = b.
The Existence and Uniqueness Theorem for second-order linear ODEs states that for a second-order linear homogeneous differential equation of the form y'' + p(x)y' + q(x)y = 0, where p(x) and q(x) are continuous functions on an interval containing x0, there exists exactly one solution (both existence and uniqueness) that satisfies the given initial conditions y(x0) = a and y'(x0) = b.
This means that the family of solutions {c1Y1 + c2Y2} encompasses all possible solutions to the differential equation (existence). Any solution u(x) with initial conditions u(0) = u0, u'(0) = u'0, can be written as: u0Y1 + u'0Y2.
Thus, all solutions to the differential equation belong to the family {c1Y1 + c2Y2}, and if another solution v(x) satisfies the same initial conditions v(0) = u0, v’(0) = u’0, by the uniqueness theorem, u(x) = v(x).
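As a closing check, the shortcut y = aY1 + bY2 for y'' - y = 0 with y(0) = a, y'(0) = b agrees with what dsolve produces when given the same initial conditions (a SymPy sketch, with a and b symbolic):

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
y = sp.Function('y')

shortcut = a*sp.cosh(x) + b*sp.sinh(x)   # y = a*Y1 + b*Y2

ode = sp.Eq(y(x).diff(x, 2) - y(x), 0)
sol = sp.dsolve(ode, y(x), ics={y(0): a, y(x).diff(x).subs(x, 0): b})

# Rewrite hyperbolic functions as exponentials and compare: the difference is 0.
print(sp.simplify((sol.rhs - shortcut).rewrite(sp.exp)))
```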