One who asks a question is a fool for a minute; one who does not remains a fool forever, Chinese proverb
Being happy doesn’t mean that everything is perfect. It means you’ve decided to look beyond the imperfections. Anonymous
Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x + 5y$, $y' + y = 4x\cos(2x)$, $\frac{dy}{dx} = x^2y + y$, etc.
It involves (e.g., $\frac{dy}{dx} = 3x +5y$):
Definition. A first-order linear ordinary differential equation is an ordinary differential equation (ODE) involving an unknown function y(x), its first derivative y′, and functions of the independent variable x, which can be written in the general form: a(x)y' + b(x)y = c(x) where:
These equations are termed “linear” because the unknown function y and its derivative y’ appear to the first power and are not multiplied together or composed in any nonlinear way.
If the function c(x) = 0 for all x in the interval of interest, the equation simplifies to: a(x)y' + b(x)y = 0. Such an equation is called a homogeneous linear differential equation.
The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ODEs. It states that if f(x, y) and its partial derivative $\frac{∂f}{∂y}$ are continuous in some rectangle containing the point (x0, y0):
Then, the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point (x0, y0), meaning that it satisfies the initial condition y(x0) = y0.
This theorem ensures that under these conditions, the solution exists and is unique near x = x0.
First-order linear differential equations are fundamental tools in mathematical modeling across various disciplines, including physics, engineering, biology, and economics. They are used to describe systems where the rate of change of a quantity depends linearly on the quantity itself and possibly on external inputs. Understanding how to solve these equations is crucial for predicting system behaviors over time.
Definition. A first-order linear differential equation (ODE) has the general form: a(x)y' + b(x)y = c(x) where y′ = $\frac{dy}{dx}$ is the derivative of y with respect to x, and a(x), b(x), and c(x) are known functions of the independent variable x.
To simplify and standardize the equation, we can divide both sides by a(x), assuming a(x) ≠ 0: y' + p(x)y = q(x) where $p(x)=\frac{b(x)}{a(x)}\text{ and }q(x) = \frac{c(x)}{a(x)}$. This is the standard linear form of a first-order differential equation, which is more convenient for applying solution methods, such as the integrating factor method.
The integrating factor method is an effective technique for solving first-order linear differential equations. The method involves multiplying the entire equation by a carefully chosen function (the integrating factor) to simplify it into a form that can be integrated directly. The steps are as follows:
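As a concrete sketch, the recipe (multiply by the integrating factor $μ(t) = e^{\int p(t)dt}$, integrate $μq$, divide back by $μ$) can be carried out numerically with trapezoidal quadrature. The function name and discretization below are illustrative assumptions, not part of the original text:

```python
import math

def solve_linear_first_order(p, q, y0, t_end, n=10000):
    """Integrating-factor method, discretized:
    mu(t) = exp(integral of p), y(t) = (y0 + integral of mu*q) / mu(t)."""
    dt = t_end / n
    P = 0.0   # running integral of p
    I = 0.0   # running integral of mu*q
    prev_p, prev_mq = p(0.0), math.exp(0.0) * q(0.0)
    for i in range(1, n + 1):
        t = i * dt
        P += 0.5 * (prev_p + p(t)) * dt   # trapezoid step for the exponent
        mu = math.exp(P)
        mq = mu * q(t)
        I += 0.5 * (prev_mq + mq) * dt    # trapezoid step for mu*q
        prev_p, prev_mq = p(t), mq
    return (y0 + I) / math.exp(P)

# example: y' + 2y = e^{-2t}, y(0) = 0; exact solution is t*e^{-2t}
approx = solve_linear_first_order(lambda t: 2.0, lambda t: math.exp(-2 * t), 0.0, 1.0)
assert abs(approx - 1.0 * math.exp(-2.0)) < 1e-6
```

The same routine works for any continuous p(t) and q(t), which is exactly the generality the integrating factor method provides.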
When p(x) and q(x) are constants, the equation simplifies, and finding solutions becomes more straightforward.
Consider the equation: $\frac{dy}{dt} + ky = q(t)$ where:
This solution y(t) consists of two parts:
The total solution is the sum of the particular and homogeneous solutions: y(t) = ypart(t) + yhom(t).
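For a constant input q(t) = q, the two pieces are the steady-state particular solution $y_{part} = \frac{q}{k}$ and the decaying homogeneous solution $y_{hom} = Ce^{-kt}$. A minimal check, with illustrative constants chosen here (not from the original text):

```python
import math

k, q, y0 = 3.0, 6.0, 1.0   # illustrative rate constant, input, initial value

def y(t):
    # particular (steady-state) part q/k plus homogeneous part C*e^{-kt},
    # with C fixed by the initial condition y(0) = y0
    C = y0 - q / k
    return q / k + C * math.exp(-k * t)

def dy(t):  # derivative of y, written out by hand
    C = y0 - q / k
    return -k * C * math.exp(-k * t)

# y satisfies dy/dt + k*y = q at any sample time
for t in [0.0, 0.5, 2.0]:
    assert abs(dy(t) + k * y(t) - q) < 1e-12
```

As t grows, the homogeneous term dies out and y(t) approaches the steady state q/k.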
Sometimes the equation $y' + ky = kq_e(t)$ is expressed as $\frac{1}{k}y' + y = q_e(t)$ where $q_e(t)$ is called the input.
Solve the differential equation: $\frac{dy}{dt}+2y = e^{-2t}$.
The solution consists of two components:
Thus, the solution with the initial condition is $y(t) = e^{-2t}(t + y_0)$
In this example, despite the presence of the factor t, the particular solution also decays to zero because the exponential term dominates, indicating that the entire solution diminishes over time.
Physical Interpretation: The system returns to equilibrium over time. Any initial perturbations (captured by C) decay exponentially.
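The claimed solution $y(t) = e^{-2t}(t + y_0)$ can be checked directly: substitute it back into $\frac{dy}{dt} + 2y = e^{-2t}$ and confirm the residual vanishes. A small sketch using a central-difference derivative (the initial value chosen here is arbitrary):

```python
import math

y0 = 1.5  # arbitrary initial value y(0)

def y(t):
    return math.exp(-2 * t) * (t + y0)

def residual(t, h=1e-6):
    # central-difference estimate of y'(t), then y' + 2y - e^{-2t}; should be ~0
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return dy + 2 * y(t) - math.exp(-2 * t)

for t in [0.0, 1.0, 3.0]:
    assert abs(residual(t)) < 1e-6
assert abs(y(0.0) - y0) < 1e-15   # initial condition holds
```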
The superposition principle is a fundamental concept in linear systems, particularly in the study of linear differential equations. It allows us to break up a problem into simpler, more manageable parts, and then at the end assemble the answer from its simpler pieces. This principle is immensely powerful because it leverages the linearity property of differential equations.
Let’s consider the first-order linear differential equation: y’ + p(t)y = q(t) where:
In this context, we can think of the left-hand side $\frac{dy}{dt}+p(t)y$ as representing the system and the right-hand side q(t) as the input to the system.
For any given input q(t) that has an output y(t) we will write q ↭ y (read input q leads to output y).
The superposition principle states that in a linear system, the response (output) to a sum of inputs (e.g., q1(t) and q2(t)) is the sum of the responses to each input individually. In other words, the sum of solutions corresponding to individual inputs is also a solution corresponding to the sum of those inputs.
Mathematically, if q1(t) leads to solution y1 (q1 ↭ y1) and q2(t) leads to solution y2 (q2 ↭ y2), then the combined input c1q1(t) + c2q2(t) leads to the solution: c1y1(t) + c2y2(t) (c1q1 + c2q2 ↭ c1y1 + c2y2). This principle holds because the differential equation is linear and linear systems allow for the sum of solutions to be a solution itself.
Proof:
The proof takes a few lines.
Compute the derivative of the combined solution: $\frac{d(c_1y_1 + c_2y_2)}{dt} + p(c_1y_1 + c_2y_2) = c_1\frac{dy_1}{dt} + c_2\frac{dy_2}{dt} + c_1py_1 + c_2py_2 =[\text{Group terms:}] c_1(\frac{dy_1}{dt} + py_1) + c_2(\frac{dy_2}{dt} + py_2) =[q_1↭y_1, q_2↭y_2] c_1q_1 + c_2q_2.$
Therefore, y(t) = c1y1(t) + c2y2(t) solves the differential equation with input q(t) = c1q1(t) + c2q2(t).
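The superposition argument can be illustrated numerically: take two known solutions $y_1$ (for input 1) and $y_2$ (for input $e^{-2t}$) of $y' + 2y = q(t)$, form $c_1y_1 + c_2y_2$, and verify it responds to the combined input $c_1q_1 + c_2q_2$. The weights below are arbitrary:

```python
import math

c1, c2 = 2.0, 3.0  # arbitrary weights in the linear combination

def y1(t):  # solves y' + 2y = 1 with y(0) = 0
    return 0.5 * (1 - math.exp(-2 * t))

def y2(t):  # solves y' + 2y = e^{-2t} with y(0) = 0
    return t * math.exp(-2 * t)

def y(t):   # candidate response to the combined input c1*1 + c2*e^{-2t}
    return c1 * y1(t) + c2 * y2(t)

def residual(t, h=1e-6):
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return dy + 2 * y(t) - (c1 * 1.0 + c2 * math.exp(-2 * t))

for t in [0.0, 0.7, 2.5]:
    assert abs(residual(t)) < 1e-6
```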
Step 1: Decompose the Input. We can break this problem into two simpler differential equations: a first input $q_1(t) = 1$ (entering with weight 2) and a second input $q_2(t) = e^{-2t}$ (entering with weight 3).
Step 2: Solve Each Subproblem Individually
Equation 1. $\frac{dx_1}{dt} + 2x_1 = 1$. The solution can be found as follows:
Equation 2. $\frac{dx_2}{dt}+2x_2 = e^{-2t}$,
Step 3: Combine Solutions Using Superposition. The general solution is x(t) = 2·x1(t) + 3·x2(t) = $2(\frac{1}{2} + C_1e^{-2t}) + 3e^{-2t}(t+C_2) = 1 + 2C_1e^{-2t} + 3e^{-2t}t + 3e^{-2t}C_2 =[\text{Simplify}] 1 + e^{-2t}(3t + 2C_1 + 3C_2) = 1 + e^{-2t}(3t + C)$ where C = 2C1 + 3C2.
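The combined solution $x(t) = 1 + e^{-2t}(3t + C)$ should satisfy $x' + 2x = 2 + 3e^{-2t}$ for any constant C. A quick residual check (the value of C below is arbitrary):

```python
import math

C = -4.2  # arbitrary constant, fixed in practice by the initial condition

def x(t):
    return 1 + math.exp(-2 * t) * (3 * t + C)

def residual(t, h=1e-6):
    # x'(t) + 2x(t) - (2 + 3e^{-2t}) via central difference; should be ~0
    dx = (x(t + h) - x(t - h)) / (2 * h)
    return dx + 2 * x(t) - (2 + 3 * math.exp(-2 * t))

for t in [0.0, 1.0, 4.0]:
    assert abs(residual(t)) < 1e-6
```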
Step 4: Interpretation.
First-order linear differential equations with periodic inputs are common in modeling physical systems subjected to oscillatory forces, such as electrical circuits with alternating currents, mechanical systems under periodic forces, or any system responding to cyclical influences.
Consider the differential equation $y' + ky = kq_e(t)$ where:
Our goal is to find the solution y(t) that satisfies this equation.
The idea behind complexifying the equation is that complex exponentials (of the form $e^{iθ}$) are easier to work with than trigonometric functions. Once we solve the complex version of the equation, we can extract the real part to find the solution to the original problem, $cos(wt) = Re(e^{iwt})$.
By representing the cosine function using Euler’s formula, we can convert the equation into one involving exponentials, which are easier to manipulate.
This allows us to consider the complex version of the differential equation: $\tilde{y}'+k\tilde{y} = ke^{iwt}$, where $\tilde{y}$ is a complex-valued function. Our aim is to solve the complex differential equation for $\tilde{y} = y_1 + iy_2$, and then extract the real part to obtain the solution to our original ODE.
The complex differential equation is a first-order linear differential equation and can be solved using the integrating factor method: $\tilde{y}'+k\tilde{y} = ke^{iwt}$.
The integral of $e^{at}$ with respect to t is $\frac{1}{a}e^{at}$. Here, a = k + iw.
Since we are primarily interested in the steady-state solution (long-term behavior), we can ignore the transient solution $Ce^{-kt}$, which decays exponentially and vanishes as t → ∞ due to k > 0. Thus, we can set C = 0 and the solution is: $\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt}$
We want to express the coefficient $\frac{1}{1+i(\frac{w}{k})}$ in a more useful form. We are going to convert this expression to polar form to better understand the magnitude and phase of the solution and extract its real part.
Recall that any complex number α = x + iy can be expressed in polar form α = reiΦ where:
Besides, $\frac{1}{α}·α = 1 ⇒ |\frac{1}{α}|·|α| = |1| = 1 ⇒ |\frac{1}{α}| = \frac{1}{|α|}, arg(\frac{1}{α})+arg(α) = arg(1) = 0 ⇒ arg(\frac{1}{α}) = -arg(α)$
$β = 1 + i(\frac{w}{k})$ has real part $x = 1$ and imaginary part $y = \frac{w}{k}$, so $|β| = \sqrt{1+(\frac{w}{k})^2}$ and $arg(β) = Φ = tan^{-1}(\frac{w}{k})$.
We are searching for the polar form of $\frac{1}{1+i(\frac{w}{k})} = \frac{1}{β}$. In our case, $arg(\frac{1}{β}) = -arg(1+i(\frac{w}{k})) = -Φ$ (refer to Figure A for a visual representation and aid in understanding it); we can rewrite it in polar form: $\frac{1}{1+i(\frac{w}{k})} = Ae^{-iΦ} = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}e^{-iΦ}$
$\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt} = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}e^{iwt-iΦ} = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}e^{i(wt-Φ)}$
Since we are only interested in the real part of $\tilde{y}$, we take the real part of the above expression: $y_1 = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}cos(wt-Φ)$ where $Φ = tan^{-1}(\frac{w}{k})$ is the phase lag of the function (Refer to Figure B for a visual representation and aid in understanding it).
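The amplitude $\frac{1}{\sqrt{1+(w/k)^2}}$ and phase lag $Φ = tan^{-1}(\frac{w}{k})$ fall directly out of complex arithmetic, which makes the derivation easy to sanity-check with `cmath` (the sample values of k and w below are illustrative):

```python
import cmath, math

k, w = 2.0, 3.0  # sample rate constant and forcing frequency

coeff = 1 / (1 + 1j * (w / k))            # complex gain from the derivation
A, phi = abs(coeff), -cmath.phase(coeff)  # amplitude and phase lag

assert abs(A - 1 / math.sqrt(1 + (w / k) ** 2)) < 1e-12
assert abs(phi - math.atan(w / k)) < 1e-12

# steady-state response: Re(coeff * e^{iwt}) equals A*cos(wt - phi)
for t in [0.0, 0.4, 1.3]:
    lhs = (coeff * cmath.exp(1j * w * t)).real
    assert abs(lhs - A * math.cos(w * t - phi)) < 1e-12
```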
A forcing term in a differential equation represents an external influence or input to the system. In the equation y′ + ky = kcos(wt), the term kcos(wt) is the forcing term. If there were no forcing term (i.e., if the equation were y′+ky=0), the system would naturally settle into some kind of equilibrium or decay to zero over time (depending on the sign of k). The forcing term kcos(wt) represents a periodic input or disturbance (e.g., a mechanical vibration, an alternating electric current, or any other cyclical influence on the system) that drives the system. It represents an external oscillation with frequency w that forces the system to oscillate as well.
An alternative method involves expressing the complex coefficient in terms of its real and imaginary parts and then simplifying.
y’ + ky = kcos(wt). While the polar form provides insight into the amplitude and phase, we can also express the solution using trigonometric identities.
The solution is: $\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt}$. The goal is to convert this complex solution into Cartesian form (a form that involves real and imaginary parts explicitly) and extract the real part, which gives the solution to the original differential equation.
The expression $\frac{1}{1+i(\frac{w}{k})}$ is a complex fraction. We want to simplify it into a form that separates the real and imaginary parts.
To simplify, we multiply both the numerator and denominator by the complex conjugate of the denominator.
$\frac{1}{1+i(\frac{w}{k})}·\frac{1-i(\frac{w}{k})}{1-i(\frac{w}{k})} = \frac{1-i(\frac{w}{k})}{1+(\frac{w}{k})^2}$
$\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt} = \frac{1-i(\frac{w}{k})}{1+(\frac{w}{k})^2}(cos(wt)+isin(wt))$
Since we are only interested in the real part of $\tilde{y}=y_1+iy_2$, we take the real part of the above expression: $y_1 = \frac{1}{1+(\frac{w}{k})^2}(cos(wt)+\frac{w}{k}sin(wt)) =$[Using the trigonometry identity a·cos(θ)+b·sin(θ) = c·cos(θ - Φ) where a and b are the two legs or sides of a right triangle, c is the hypotenuse, Φ is the angle between a and c, Φ = $tan^{-1}(\frac{b}{a})$ (Refer to Figure C for a visual representation and aid in understanding it)]
$y_1 = \frac{1}{1+(\frac{w}{k})^2}(cos(wt)+\frac{w}{k}sin(wt)) = \frac{1}{1+(\frac{w}{k})^2}·\sqrt{1+(\frac{w}{k})^2}cos(wt-Φ) = (1+(\frac{w}{k})^2)^{-1+\frac{1}{2}}cos(wt-Φ) = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}cos(wt-Φ)$ where Φ = $tan^{-1}(\frac{b}{a}) = tan^{-1}(\frac{w}{k})$ is the phase shift or phase lag, and $\frac{1}{\sqrt{1+(\frac{w}{k})^2}}$ is the amplitude of the oscillation.
This matches the solution obtained using the polar form.
To justify the formula a·cos(θ)+b·sin(θ) = c·cos(θ - Φ) where a and b are the two legs or sides of a right triangle, c is the hypotenuse, Φ is the angle between a and c, Φ = $tan^{-1}(\frac{b}{a})$, consider the following geometric interpretation using vectors:
Let $\hat{\mathbf{u}} = ⟨cos(θ), sin(θ)⟩$ be a unit vector, $\vec{v}$ = ⟨a, b⟩ be a vector with components a and b. The dot product of these two vectors is equal to the magnitude of $\vec{v}$ multiplied by the cosine of the angle between them: $\hat{\mathbf{u}}·\vec{v} = ⟨a, b⟩·⟨cos(θ), sin(θ)⟩ = |⟨a, b⟩|·1·cos(θ - Φ)$ 🚀 (Refer to Figure D for a visual representation and aid in understanding it).
$\hat{\mathbf{u}}·\vec{v} = ⟨a, b⟩·⟨cos(θ), sin(θ)⟩ = acos(θ) + bsin(θ) =[🚀] ccos(θ - Φ)$ where c = $|⟨a, b⟩| = \sqrt{a^2 + b^2}, Φ = tan^{-1}(\frac{b}{a})$ is the angle between the vector $\vec{v}$ and the positive x-axis.
Another way of proving it is as follows: $(a-bi)(cos(θ) + isin(θ)) =[\text{Polar form}] \sqrt{a^2+b^2}e^{-iΦ}e^{iθ} = \sqrt{a^2+b^2}e^{i(θ-Φ)}$. Taking the real parts on both sides of the equation: $acos(θ)+bsin(θ) = \sqrt{a^2+b^2}cos(θ-Φ)$ ∎