One who asks a question is a fool for a minute; one who does not remains a fool forever (Chinese proverb).

Being happy doesn’t mean that everything is perfect. It means you’ve decided to look beyond the imperfections.

Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x +5y$, $y’ + y = 4x\cos(2x)$, $\frac{dy}{dx} = x^2y+y$, etc.

It involves (e.g., $\frac{dy}{dx} = 3x +5y$):

- **Dependent variables**: Variables that depend on one or more other variables (here, y).
- **Independent variables**: Variables upon which the dependent variables depend (here, x).
- **Derivatives**: Rates at which the dependent variables change with respect to the independent variables, e.g., $\frac{dy}{dx}$.

Definition. A first-order linear ordinary differential equation is an ordinary differential equation (ODE) involving an unknown function y(x), its first derivative y′, and functions of the independent variable x, which can be written in the general form: a(x)y' + b(x)y = c(x) where:

- y = y(x) is the unknown function of the independent variable x.
- y′ = $\frac{dy}{dx}$ is the derivative of y with respect to x.
- a(x), b(x), and c(x) are known functions of x.

These equations are termed **“linear” because the unknown function y and its derivative y’ appear to the first power and are not multiplied together** or composed in any nonlinear way.

If the function c(x)=0 for all x in the interval of interest, the equation simplifies to: a(x)y’ + b(x)y = 0. Such an equation is called a homogeneous linear differential equation.

The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ODEs. It states that if:

- The function f(x, y) (the right-hand side of the ODE) in the equation y’ = f(x, y) is continuous in some neighborhood around a point (x_{0}, y_{0}), and
- its partial derivative with respect to y, denoted as $\frac{∂f}{∂y}$, is also continuous near (x_{0}, y_{0}).

Then, the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point (x_{0}, y_{0}), meaning that it satisfies the initial condition y(x_{0}) = y_{0}.

This theorem ensures that under these conditions, the solution exists and is unique near x = x_{0}.

First-order linear differential equations are fundamental tools in mathematical modeling across various disciplines, including physics, engineering, biology, and economics. They are used to describe systems where the rate of change of a quantity depends linearly on the quantity itself and possibly on external inputs. Understanding how to solve these equations is crucial for predicting system behaviors over time.

Definition. A first-order linear differential equation (ODE) has the general form: a(x)y' + b(x)y = c(x) where y′ = $\frac{dy}{dx}$ is the derivative of y with respect to x, and a(x), b(x), and c(x) are known functions of the independent variable x.

To simplify and standardize the equation, we can divide both sides by a(x), assuming a(x) ≠ 0: y’ + p(x)y = q(x) where $p(x)=\frac{b(x)}{a(x)}\text{ and }q(x) = \frac{c(x)}{a(x)}$ This is the standard linear form of a first-order differential equation, which is more convenient for applying solution methods, such as the integrating factor method.

The integrating factor method is an effective technique for solving first-order linear differential equations. The method involves multiplying the entire equation by a carefully chosen function (the integrating factor) to simplify it into a form that can be integrated directly. The steps are as follows:

1. **Identify p(x) and q(x) in the standard form** $\frac{dy}{dx} +p(x)y = q(x)$.
2. **Calculate the integrating factor μ(x)**. The integrating factor μ(x) is defined as $μ(x)=e^{\int p(x)dx}$. This function is chosen so that when we multiply both sides of the differential equation by μ(x), the left-hand side becomes the derivative of μ(x)y.
3. **Multiply both sides by the integrating factor**: $μ(x)\frac{dy}{dx} +μ(x)p(x)y = μ(x)q(x)$. This simplifies the left-hand side to the derivative of μ(x)y: $\frac{d}{dx}[μ(x)y] = μ(x)q(x)$.
4. **Integrate both sides with respect to x**: $\int \frac{d}{dx}[μ(x)y]dx = \int μ(x)q(x)dx ↭ μ(x)y = \int μ(x)q(x)dx + C$.
5. **Solve for y(x)**: $y(x) = \frac{1}{μ(x)}(\int μ(x)q(x)dx + C)$.
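As a sanity check, the five steps can be carried through on a concrete equation (chosen here purely for illustration): y’ + 2y = $e^x$, for which the method gives $y(x) = \frac{e^x}{3} + Ce^{-2x}$. A minimal Python sketch verifies that this formula satisfies the equation:

```python
import math

# The five steps applied to an illustrative equation: y' + 2y = e^x.
# p(x) = 2, q(x) = e^x  ->  mu(x) = e^{2x}  ->  (e^{2x} y)' = e^{3x}
# ->  e^{2x} y = e^{3x}/3 + C  ->  y(x) = e^x/3 + C e^{-2x}.
C = 0.4   # arbitrary constant of integration
err = 0.0
for i in range(40):
    x = i * 0.05
    y  = math.exp(x)/3 + C*math.exp(-2*x)
    dy = math.exp(x)/3 - 2*C*math.exp(-2*x)      # analytic derivative of the solution
    err = max(err, abs(dy + 2*y - math.exp(x)))  # residual of y' + 2y - q(x)
print(err)   # ~0: the formula from Step 5 satisfies the original equation
```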

When p(x) and q(x) are constants, the equation simplifies, and finding the solution becomes more straightforward.

Consider the equation: $\frac{dy}{dt} + ky = q(t)$ where:

- k is a constant positive coefficient, k > 0.
- q(t) is a given function of t.

1. **Identify p(t) and q(t)**: p(t) = k (constant), and q(t) is the given function of t.
2. **Calculate the integrating factor μ(t)**: $μ(t) = e^{\int p(t)dt} = e^{\int kdt} = e^{kt}$.
3. **Multiply both sides by μ(t)**: $e^{kt}\frac{dy}{dt} + ke^{kt}y = e^{kt}q(t)$.
4. **Simplify the left-hand side**: $e^{kt}\frac{dy}{dt} + ke^{kt}y = \frac{d}{dt}(e^{kt}·y)$, so the equation becomes $\frac{d}{dt}(e^{kt}·y) = e^{kt}q(t)$.
5. **Integrate both sides**: $\int \frac{d}{dt}(e^{kt}·y)dt = \int e^{kt}q(t)dt ↭ e^{kt}·y = \int e^{kt}q(t)dt + C$.
6. **Solve for y**: $y(t) = \frac{1}{μ(t)}(\int μ(t)q(t)dt + C) = e^{-kt}(\int e^{kt}q(t)dt + C)$.
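The closed form can be cross-checked numerically. The sketch below uses the illustrative choices k = 2, q(t) = 1, and y(0) = 3 (not taken from the text), for which the formula gives the exact solution $y(t) = \frac{1}{2} + (y(0) - \frac{1}{2})e^{-2t}$; it integrates y’ = q(t) − ky with classical RK4 and compares the two:

```python
import math

# Cross-check of y(t) = e^{-kt}(int e^{ks} q(s) ds + C) for the illustrative
# choices k = 2, q(t) = 1, y(0) = 3. The exact solution is then
# y(t) = 1/2 + (y(0) - 1/2) e^{-2t}. Integrate y' = q(t) - k*y with classical RK4.
k, y0 = 2.0, 3.0

def f(t, y):
    """Right-hand side of y' = q(t) - k*y with q(t) = 1."""
    return 1.0 - k*y

t, y, h, n = 0.0, y0, 1e-3, 1000   # integrate from t = 0 to t = 1
for _ in range(n):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h*k1/2)
    k3 = f(t + h/2, y + h*k2/2)
    k4 = f(t + h, y + h*k3)
    y += h*(k1 + 2*k2 + 2*k3 + k4)/6
    t += h

exact = 0.5 + (y0 - 0.5)*math.exp(-2.0)   # closed-form value at t = 1
print(abs(y - exact))   # tiny: the numerical and closed-form solutions agree
```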

This solution y(t) consists of two parts:

- **Particular Solution (Steady-State Response)**. The first term $y_{part}(t) = e^{-kt}\int q(t)e^{kt}dt$ is called the **steady-state solution**. It represents the part of the solution that persists over time (how the system responds in the long term; it dominates the behavior of y(t) as t → ∞), especially if q(t) is not zero, and it remains after the transient effects have decayed (Refer to Figure v for a visual representation and aid in understanding it). It represents the **forced response of the system due to the input q(t)**.
- **Homogeneous Solution (Transient Response)**. The second term $y_{hom}(t)=Ce^{-kt}$ is the **transient solution**. As t → ∞, this term decays exponentially to zero when k > 0, meaning it represents the temporary behavior that fades away and the **natural response of the system**. It depends on the initial conditions of the system.

The total solution is the sum of the particular and homogeneous solutions: y(t) = y_{part}(t) + y_{hom}(t).

Sometimes the equation y’ + ky = kq_{e}(t) is expressed as $\frac{1}{k}y’ + y = q_e(t)$ where q_{e}(t) is called the input.

Solve the differential equation: $\frac{dy}{dt}+2y = e^{-2t}$.

1. **Identify p(t) and q(t) in the standard form** $\frac{dy}{dt} +p(t)y = q(t)$: p(t) = 2, q(t) = $e^{-2t}$.
2. **Calculate the integrating factor μ(t)**: $μ(t)=e^{\int p(t)dt} = e^{\int 2dt} = e^{2t}$.
3. **Multiply both sides by the integrating factor**: $e^{2t}\frac{dy}{dt}+2e^{2t}y = e^{2t}e^{-2t} = 1$, so we have $e^{2t}\frac{dy}{dt}+2e^{2t}y =1$.
4. **Recognize the left-hand side as a derivative**: $\frac{d}{dt}[e^{2t}y] = 1$.
5. **Integrate both sides**: $\int \frac{d}{dt}[e^{2t}y]dt = \int 1dt ↭ e^{2t}y = t + C$.
6. **Solve for y(t)**: $y(t) = e^{-2t}(t + C)$.
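A quick numerical spot-check of the result (C below is an arbitrary illustrative constant): substituting y(t) = $e^{-2t}(t + C)$ into y’ + 2y via a central finite difference should reproduce $e^{-2t}$ at every test point:

```python
import math

# Spot-check: y(t) = e^{-2t}(t + C) should satisfy y' + 2y = e^{-2t}.
# C is an arbitrary illustrative constant; the derivative is approximated
# by a central finite difference.
C = 1.5

def y(t):
    return math.exp(-2*t)*(t + C)

h, err = 1e-6, 0.0
for t in (0.0, 0.5, 2.0):
    dy = (y(t + h) - y(t - h))/(2*h)             # numerical derivative
    err = max(err, abs(dy + 2*y(t) - math.exp(-2*t)))
print(err)   # small: the residual is only finite-difference/rounding error
```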

The solution consists of two components:

- The transient solution is $y_{transient}(t) = Ce^{-2t}$, which decays exponentially to zero as t → ∞. It depends on the initial condition: the constant C is determined by the initial value y(0) = $e^{-2·0}(0 + C)$ = C = y_{0}.
- The steady-state solution is $y_{steady}(t) = te^{-2t}$.

Thus, the solution with the initial condition is $y(t) = e^{-2t}(t + y_0)$

In this example, despite the presence of t, the steady-state solution also decays to zero because of the exponential term, indicating that **the entire solution diminishes over time.**

Physical Interpretation: The system returns to equilibrium over time. Any initial perturbations (captured by C) decay exponentially.

The superposition principle is a fundamental concept in linear systems, particularly in the study of linear differential equations. It allows us to break up a problem into simpler, more manageable parts, and then at the end assemble the answer from its simpler pieces. This principle is immensely powerful because it leverages the linearity property of differential equations.

Let’s consider the first-order linear differential equation: y’ + p(t)y = q(t) where:

- y(t) is the unknown function of time t we aim to solve for.
- p(t): A known function of t (coefficient of y).
- q(t): A known function of t (the input or forcing function).

In this context, we can think of the left-hand side $\frac{dy}{dt}+p(t)y$ as representing the system and the right-hand side q(t) as the input to the system.

For any given input q(t) that produces an output y(t), we will write q ↭ y (read: input q leads to output y).

The superposition principle states that in a linear system, the response (output) to a sum of inputs (e.g., q_{1}(t) and q_{2}(t)) is the sum of the responses to each input individually. In other words, the sum of solutions corresponding to individual inputs is also a solution corresponding to the sum of those inputs.

Mathematically, if q_{1}(t) leads to solution y_{1} (q_{1} ↭ y_{1}) and q_{2}(t) leads to solution y_{2} (q_{2} ↭ y_{2}), then the combined input c_{1}q_{1}(t) + c_{2}q_{2}(t) leads to the solution: c_{1}y_{1}(t) + c_{2}y_{2}(t) (c_{1}q_{1} + c_{2}q_{2} ↭ c_{1}y_{1} + c_{2}y_{2}). This principle holds because the differential equation is linear and linear systems allow for the sum of solutions to be a solution itself.

Proof:

The proof takes a few lines.

Compute the derivative of the combined solution: $\frac{d(c_1y_1 + c_2y_2)}{dt} + p(c_1y_1 + c_2y_2) = c_1\frac{dy_1}{dt} + c_2\frac{dy_2}{dt} + c_1py_1 + c_2py_2 =[\text{Group terms:}] c_1(\frac{dy_1}{dt} + py_1) + c_2(\frac{dy_2}{dt} + py_2) =[q_1↭y_1, q_2↭y_2] c_1q_1 + c_2q_2.$

Therefore, y(t) = c_{1}y_{1}(t) + c_{2}y_{2}(t) solves the differential equation with input q(t) = c_{1}q_{1}(t) + c_{2}q_{2}(t).
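The principle can also be illustrated numerically. In the sketch below (an illustrative check with p(t) = 2), $y_1(t) = \frac{1}{2}$ is a particular solution for the input $q_1(t) = 1$ and $y_2(t) = te^{-2t}$ for $q_2(t) = e^{-2t}$ (both easy to verify by hand); their combination should satisfy the combined equation:

```python
import math

# Numeric illustration of superposition for y' + 2y = q(t).
# y1(t) = 1/2 is a particular solution for q1(t) = 1, and y2(t) = t e^{-2t}
# is a particular solution for q2(t) = e^{-2t}.
# The combination c1*y1 + c2*y2 should solve the input c1*q1 + c2*q2.
c1, c2 = 2.0, 3.0

def y(t):
    return c1*0.5 + c2*t*math.exp(-2*t)        # combined solution

def q(t):
    return c1*1.0 + c2*math.exp(-2*t)          # combined input

h, err = 1e-6, 0.0
for t in (0.0, 0.7, 1.5):
    dy = (y(t + h) - y(t - h))/(2*h)           # numerical derivative
    err = max(err, abs(dy + 2*y(t) - q(t)))
print(err)   # small: the combination satisfies the combined equation
```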

- Solve the differential equation: $\frac{dx}{dt} + 2x = 2 + 3e^{-2t}$

Step 1: **Decompose the Input**. We can break this problem into two simpler differential equations. First input q_{1}(t) = 1. Second input: q_{2}(t) = 3e^{-2t}.

Step 2: **Solve Each Subproblem Individually**

**Equation 1.** $\frac{dx_1}{dt} + 2x_1 = 1$.
The solution can be found as follows:

- Calculate the integrating factor: $μ(t)=e^{\int p(t)dt} = e^{\int 2dt} = e^{2t}$.
- Multiply Both Sides by the Integrating Factor: $e^{2t}\frac{dx_1}{dt} + 2x_1e^{2t} = e^{2t}$.
- Recognize the Left-Hand Side as a Derivative: $\frac{d}{dt}[e^{2t}x_1] = e^{2t}$.
- Integrate both sides: $\int \frac{d}{dt}[e^{2t}x_1]dt = \int e^{2t}dt↭e^{2t}x_1 = \frac{e^{2t}}{2} + C_1$.
- Solve for x_{1}: $x_1(t) = \frac{1}{2} + C_1e^{-2t}$ (i)

**Equation 2**. $\frac{dx_2}{dt}+2x_2 = e^{-2t}$,

- Calculate the integrating factor: $μ(t)=e^{\int p(t)dt} = e^{\int 2dt} = e^{2t}$.
- Multiply Both Sides by the Integrating Factor: $e^{2t}\frac{dx_2}{dt} + 2x_2e^{2t} = e^{2t}e^{-2t} = 1$.
- Recognize the Left-Hand Side as a Derivative: $\frac{d}{dt}[e^{2t}x_2] = 1$.
- Integrate both sides: $\int \frac{d}{dt}[e^{2t}x_2]dt = \int dt↭e^{2t}x_2 = t + C_2$.
- Solve for x_{2}: $x_2(t) = e^{-2t}(t+C_2)$ (ii)

Step 3: **Combine Solutions Using Superposition**. The general solution is x(t) = 2·x_{1}(t) + 3·x_{2}(t) = $2(\frac{1}{2} + C_1e^{-2t}) + 3e^{-2t}(t+C_2) = 1 + 2C_1e^{-2t} + 3e^{-2t}t + 3e^{-2t}C_2 =[\text{Simplify}] 1 + e^{-2t}(3t + 2C_1 + 3C_2) = 1 + e^{-2t}(3t + C)$ where C = 2C_{1} + 3C_{2}.
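Since the derivative of x(t) = $1 + e^{-2t}(3t + C)$ can be written down analytically, the combined solution can be checked directly against the original equation (C below is an arbitrary illustrative constant):

```python
import math

# Direct check of the combined solution x(t) = 1 + e^{-2t}(3t + C) against
# dx/dt + 2x = 2 + 3e^{-2t}, using the analytic derivative
# x'(t) = 3e^{-2t} - 2e^{-2t}(3t + C). C is an arbitrary illustrative constant.
C = 0.7
err = 0.0
for i in range(50):
    t = i * 0.1
    x  = 1 + math.exp(-2*t)*(3*t + C)
    dx = 3*math.exp(-2*t) - 2*math.exp(-2*t)*(3*t + C)
    err = max(err, abs(dx + 2*x - (2 + 3*math.exp(-2*t))))
print(err)   # ~0 up to floating-point rounding
```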

Step 4: **Interpretation**.

- The constant term 1 is the **steady-state solution** due to the constant input 2 (since 2q_{1}(t) = 2·1 = 2).
- The term $e^{-2t}(3t + C)$ is the **transient solution**, due to the exponential input $3e^{-2t}$ and the homogeneous solution related to the system’s natural response.
- As t → ∞, the transient term decays to zero ($e^{-2t}(3t + C)$ approaches zero because the exponential decay dominates the linear growth), leaving the steady-state solution x(t) → 1, and the system settles at the steady-state value.

Consider the differential equation y’ + ky = kq_{e}(t) where:

- k is a positive constant.
- w is the angular frequency (the number of complete oscillations in a period of 2π) of the periodic input.
- q_{e}(t) = cos(wt) is a periodic input function with angular frequency w.

Our goal is to find the solution y(t) that satisfies this equation.

The idea behind complexifying the equation is that complex exponentials (of the form e^{iθ}) are easier to work with than trigonometric functions. Once we solve the complex version of the equation, we can extract the real part to find the solution to the original problem.

To simplify the problem, we “complexify” the equation by representing the cosine function using Euler’s formula: cos(wt) = Real(e^{iwt}).

This allows us to consider the complex version of the differential equation: $\tilde{y}’+k\tilde{y} = ke^{iwt}$, where $\tilde{y}$ is a complex-valued function. Our aim is to solve the complex differential equation for $\tilde{y} = y_1 + iy_2$, and then extract the real part, y_{1}(t), as the solution to our original ODE.

The complex differential equation is a first-order linear differential equation and can be solved using the integrating factor method: $\tilde{y’}+k\tilde{y} = ke^{iwt}$.

- It is already in standard form.
- The integrating factor is $μ(t) = e^{\int kdt} = e^{kt}$, derived from the coefficient of $\tilde{y}$ (which is k).
- Multiplying both sides of the differential equation by the integrating factor, we get: $e^{kt}\tilde{y’}+e^{kt}k\tilde{y} = e^{kt}ke^{iwt} ⇒[\text{This simplifies to:}] (\tilde{y}e^{kt})’ = ke^{(k+iw)t}$
- We integrate both sides with respect to t: $\int \frac{d}{dt}(\tilde{y}e^{kt})dt = \int ke^{(k+iw)t}dt ↭ \tilde{y}e^{kt} = \frac{k}{k+iw}e^{(k+iw)t}+C$. The integral of $e^{at}$ with respect to t is $\frac{1}{a}e^{at}$; here, a = k + iw.
- Dividing both sides by $e^{kt}$: $\tilde{y} = \frac{k}{k+iw}e^{iwt}+ Ce^{-kt} ↭ \tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt} +Ce^{-kt}$

Since we are primarily interested in the steady-state solution, we can ignore the transient solution $Ce^{-kt}$, which decays exponentially and vanishes as t → ∞. Thus, the solution is: $\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt}$

We want to express the coefficient $\frac{1}{1+i(\frac{w}{k})}$ in a more useful form. We are going to convert this expression to polar form to better understand its magnitude and phase.

Recall that any complex number α = x + iy can be expressed in polar form α = re^{iΦ} where:

- r = |α| = $\sqrt{x^2+y^2}$ is the magnitude
- Φ = arg(α) = $tan^{-1}(\frac{y}{x})$ is the phase or argument.

Besides, $\frac{1}{α}·α = 1 ⇒ |\frac{1}{α}|·|α| = |1| = 1 ⇒ |\frac{1}{α}| = \frac{1}{|α|}, arg(\frac{1}{α})+arg(α) = arg(1) = 0 ⇒ arg(\frac{1}{α}) = -arg(α)$
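These reciprocal rules are easy to confirm with Python’s built-in complex numbers (the values of w and k below are illustrative):

```python
import cmath

# Check |1/alpha| = 1/|alpha| and arg(1/alpha) = -arg(alpha)
# for alpha = 1 + i(w/k), with illustrative values w = 3, k = 2.
w, k = 3.0, 2.0
alpha = 1 + 1j*(w/k)
inv = 1/alpha

print(abs(abs(inv) - 1/abs(alpha)))                # ~0: magnitudes are reciprocal
print(abs(cmath.phase(inv) + cmath.phase(alpha)))  # ~0: arguments are opposite
```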

Let $α = 1 + i(\frac{w}{k})$, which has real part x = 1 and imaginary part y = $\frac{w}{k}$. We are searching for the polar form of $\frac{1}{α} = \frac{1}{1+i(\frac{w}{k})}$. Since $arg(\frac{1}{α}) = -arg(1+i(\frac{w}{k})) = -Φ$ (Refer to Figure A for a visual representation and aid in understanding it), we can rewrite it in polar form: $\frac{1}{1+i(\frac{w}{k})} = Ae^{-iΦ} = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}e^{-iΦ}$

$\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt} = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}e^{iwt-iΦ} = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}e^{i(wt-Φ)}$

Since we are only interested in the real part of $\tilde{y}$, we take the real part of the above expression: $y_1 = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}cos(wt-Φ)$ where $Φ = tan^{-1}(\frac{w}{k})$ is the **phase lag of the function** (Refer to Figure B for a visual representation and aid in understanding it).
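A short numerical check (with illustrative values k = 2, w = 5) confirms that this steady-state formula satisfies y’ + ky = k·cos(wt), up to floating-point rounding:

```python
import math

# Check that y1(t) = cos(wt - phi)/sqrt(1 + (w/k)^2), phi = atan(w/k),
# satisfies y' + k*y = k*cos(wt). Illustrative values k = 2, w = 5.
k, w = 2.0, 5.0
phi = math.atan(w/k)                       # phase lag
A = 1/math.sqrt(1 + (w/k)**2)              # amplitude

err = 0.0
for i in range(100):
    t = i * 0.05
    y  = A*math.cos(w*t - phi)
    dy = -A*w*math.sin(w*t - phi)          # analytic derivative
    err = max(err, abs(dy + k*y - k*math.cos(w*t)))
print(err)   # ~0: the steady-state formula solves the forced equation
```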

- The amplitude of the solution is $\frac{1}{\sqrt{1+(\frac{w}{k})^2}}$. As k increases (in $\frac{dy}{dt} + ky = k·cos(wt)$, the term ky represents damping; **the constant k determines the strength of the damping effect**), the amplitude approaches 1. In other words, higher values of k result in stronger damping, causing the system to lose energy more quickly.
- The term k·cos(wt) is the forcing term, representing an external periodic input driving the system. If the forcing term were absent (i.e., k·cos(wt) = 0), the system would exhibit natural decay due to damping.
- The phase lag $Φ = tan^{-1}(\frac{w}{k})$ measures how much the solution (the system's response) lags behind the forcing term cos(wt).

A forcing term in a differential equation represents an external influence or input to the system. In the equation y′ + ky = kcos(wt), the term kcos(wt) is the forcing term. If there were no forcing term (i.e., if the equation were y′+ky=0), the system would naturally settle into some kind of equilibrium or decay to zero over time (depending on the sign of k). The forcing term kcos(wt) represents a periodic input or disturbance (e.g., a mechanical vibration, an alternating electric current, or any other cyclical influence on the system) that drives the system. It represents an external oscillation with frequency w that forces the system to oscillate as well.

- Recall that $tan^{-1}$ is a continuous, smooth, increasing function of its argument. As k increases (stronger damping), the phase lag decreases, meaning the solution responds more quickly to the input. As w increases (faster oscillations), the phase lag increases, meaning the solution responds more slowly to the input.

Damping refers to the gradual reduction in the amplitude of oscillations in a system. In the equation y′+ky = kcos(wt), the term ky represents damping, where k is a constant that determines the strength of the damping effect. This term resists changes in y(t) and tends to reduce the amplitude of oscillations. The larger the constant k, the stronger the damping effect: the system loses energy more quickly, and the oscillations become smaller in amplitude.

Consider again y’ + ky = kcos(wt). While the polar form provides insight into the amplitude and phase, we can also express the solution using trigonometric identities.

The solution is: $\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt}$. The goal is to convert this complex solution into Cartesian form (a form that involves real and imaginary parts explicitly) and extract the real part, which gives the solution to the original differential equation.

The expression $\frac{1}{1+i(\frac{w}{k})}$ is a complex fraction. We want to simplify it into a form that separates the real and imaginary parts.

To simplify, we multiply both the numerator and denominator by the complex conjugate of the denominator.

$\frac{1}{1+i(\frac{w}{k})}·\frac{1-i(\frac{w}{k})}{1-i(\frac{w}{k})} = \frac{1-i(\frac{w}{k})}{1+(\frac{w}{k})^2}$

$\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt} = \frac{1-i(\frac{w}{k})}{1+(\frac{w}{k})^2}(cos(wt)+isin(wt))$

Since we are only interested in the real part of $\tilde{y}$, we take the real part of the above expression: $y_1 = \frac{1}{1+(\frac{w}{k})^2}(cos(wt)+\frac{w}{k}sin(wt)) =$[Using the trigonometry identity a·cos(θ)+b·sin(θ) = c·cos(θ - Φ) where a and b are the two legs or sides of a right triangle, c is the hypotenuse, Φ is the angle between a and c, Φ = $tan^{-1}(\frac{b}{a})$ (Refer to Figure C for a visual representation and aid in understanding it)]

$y_1 = \frac{1}{1+(\frac{w}{k})^2}(cos(wt)+\frac{w}{k}sin(wt)) = \frac{1}{1+(\frac{w}{k})^2}·\sqrt{1+(\frac{w}{k})^2}cos(wt-Φ) = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}cos(wt-Φ)$ where Φ = $tan^{-1}(\frac{b}{a}) = tan^{-1}(\frac{w}{k})$ is the phase shift or phase lag, and $\frac{1}{\sqrt{1+(\frac{w}{k})^2}}$ is the amplitude of the oscillation.
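The identity used in this last step can be spot-checked numerically (a and b below are arbitrary illustrative values; atan2 is used so the quadrant is handled correctly):

```python
import math

# Spot-check of a*cos(t) + b*sin(t) = c*cos(t - phi) with c = sqrt(a^2 + b^2)
# and phi = atan2(b, a); a and b are arbitrary illustrative values.
a, b = 1.0, 2.5
c, phi = math.hypot(a, b), math.atan2(b, a)

err = max(abs(a*math.cos(t) + b*math.sin(t) - c*math.cos(t - phi))
          for t in [i*0.1 for i in range(100)])
print(err)   # ~0: both sides agree at every sampled angle
```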

This matches the solution obtained using the polar form.

To justify the formula a·cos(θ)+b·sin(θ) = c·cos(θ - Φ) where a and b are the two legs or sides of a right triangle, c is the hypotenuse, Φ is the angle between a and c, Φ = $tan^{-1}(\frac{b}{a})$, consider the following geometric interpretation using vectors:

Let $\hat{\mathbf{u}} = ⟨cos(θ), sin(θ)⟩$ be a unit vector, $\vec{v}$ = ⟨a, b⟩ be a vector with components a and b. The dot product of these two vectors is equal to the magnitude of $\vec{v}$ multiplied by the cosine of the angle between them: $\hat{\mathbf{u}}·\vec{v} = ⟨a, b⟩·⟨cos(θ), sin(θ)⟩ = |⟨a, b⟩|·1·cos(θ - Φ)$ 🚀 (Refer to Figure D for a visual representation and aid in understanding it).

$\hat{\mathbf{u}}·\vec{v} = ⟨a, b⟩·⟨cos(θ), sin(θ)⟩ = acos(θ) + bsin(θ) =[🚀] ccos(θ - Φ)$ where c = $|⟨a, b⟩| = \sqrt{a^2 + b^2}$, and $Φ = tan^{-1}(\frac{b}{a})$ is the angle between the vector $\vec{v}$ and the positive x-axis.

Another way of proving it is as follows: $(a-bi)(cos(θ) + isin(θ)) =[\text{Polar form}] \sqrt{a^2+b^2}e^{-iΦ}e^{iθ} = \sqrt{a^2+b^2}e^{i(θ-Φ)}$. Taking the real parts on both sides of the equation: $acos(θ)+bsin(θ) = \sqrt{a^2+b^2}cos(θ-Φ)$ ∎

This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].

- NPTEL-NOC IITM, Introduction to Galois Theory.
- Algebra, Second Edition, by Michael Artin.
- LibreTexts, Calculus and Calculus 3e (Apex). Abstract and Geometric Algebra, Abstract Algebra: Theory and Applications (Judson).
- Field and Galois Theory, by Patrick Morandi. Springer.
- Michael Penn, and MathMajor.
- Contemporary Abstract Algebra, by Joseph A. Gallian.
- YouTube’s Andrew Misseldine: Calculus. College Algebra and Abstract Algebra.
- MIT OpenCourseWare [18.03 Differential Equations, Spring 2006], YouTube by MIT OpenCourseWare.
- Calculus Early Transcendentals: Differential & Multi-Variable Calculus for Social Sciences.