
First-order Linear with Constant Coefficients

One who asks a question is a fool for a minute; one who does not ask remains a fool forever, Chinese proverb

Being happy doesn’t mean that everything is perfect. It means you’ve decided to look beyond the imperfections.

Recall

Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x +5y$, $y’ + y = 4xcos(2x)$, $\frac{dy}{dx} = x^2y+y$, etc.

It involves (e.g., $\frac{dy}{dx} = 3x +5y$): a dependent variable (y), its derivative with respect to the independent variable ($\frac{dy}{dx}$), and the independent variable itself (x).

First-Order Linear Ordinary Differential Equations (ODEs)

Definition. A first-order linear ordinary differential equation is an ordinary differential equation (ODE) involving an unknown function y(x), its first derivative y′, and functions of the independent variable x, which can be written in the general form: a(x)y' + b(x)y = c(x), where y′ = $\frac{dy}{dx}$ is the first derivative of y, and a(x), b(x), and c(x) are known functions of x, with a(x) ≠ 0 on the interval of interest.

These equations are termed “linear” because the unknown function y and its derivative y’ appear to the first power and are not multiplied together or composed in any nonlinear way.

If the function c(x)=0 for all x in the interval of interest, the equation simplifies to: a(x)y’ + b(x)y = 0. Such an equation is called a homogeneous linear differential equation.

The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ODEs. It states that if f(x, y) and its partial derivative $\frac{\partial f}{\partial y}$ are both continuous in a rectangle of the xy-plane containing the point (x0, y0):

Then, the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point (x0, y0), meaning that it satisfies the initial condition y(x0) = y0.

This theorem ensures that under these conditions, the solution exists and is unique near x = x0.


First-order Linear with Constant Coefficients

First-order linear differential equations are fundamental tools in mathematical modeling across various disciplines, including physics, engineering, biology, and economics. They are used to describe systems where the rate of change of a quantity depends linearly on the quantity itself and possibly on external inputs. Understanding how to solve these equations is crucial for predicting system behaviors over time.

Definition. A first-order linear differential equation (ODE) has the general form: a(x)y' + b(x)y = c(x) where y′ = $\frac{dy}{dx}$ is the derivative of y with respect to x, and a(x), b(x), and c(x) are known functions of the independent variable x.

To simplify and standardize the equation, we can divide both sides by a(x), assuming a(x) ≠ 0: y’ + p(x)y = q(x) where $p(x)=\frac{b(x)}{a(x)}\text{ and }q(x) = \frac{c(x)}{a(x)}$ This is the standard linear form of a first-order differential equation, which is more convenient for applying solution methods, such as the integrating factor method.

Solving the Equation Using Integrating Factors

The integrating factor method is an effective technique for solving first-order linear differential equations. The method involves multiplying the entire equation by a carefully chosen function (the integrating factor) to simplify it into a form that can be integrated directly. The steps are as follows:

  1. Identify p(x) and q(x) in the standard form $\frac{dy}{dx} +p(x)y = q(x)$
  2. Calculate the integrating Factor μ(x). The integrating factor μ(x) is defined as: $μ(x)=e^{\int p(x)dx}$. This function is chosen so that when we multiply both sides of the differential equation by μ(x), the left-hand side becomes the derivative of μ(x)y.
  3. Multiply Both Sides by the Integrating Factor: $μ(x)\frac{dy}{dx} +μ(x)p(x)y = μ(x)q(x)$. This simplifies the left-hand side to the derivative of μ(x)y: $\frac{d}{dx}[μ(x)y] = μ(x)q(x)$
  4. Integrate both sides with respect to x: $\int \frac{d}{dx}[μ(x)y]dx = \int μ(x)q(x)dx ↭ μ(x)y = \int μ(x)q(x)dx + C$.
  5. Solve for y(x): $y(x) = \frac{1}{μ(x)}(\int μ(x)q(x)dx + C)$
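The recipe above can be checked mechanically with a computer algebra system. The following is a minimal sketch using SymPy (not part of the original text); the concrete choices p(x) = 1/x and q(x) = x² are illustrative assumptions.

```python
# Minimal sketch of the integrating-factor recipe, using SymPy.
# The choices p(x) = 1/x and q(x) = x**2 are illustrative assumptions.
import sympy as sp

x, C = sp.symbols('x C')
p = 1/x
q = x**2

mu = sp.exp(sp.integrate(p, x))        # step 2: mu(x) = e^{∫ p(x) dx}
y = (sp.integrate(mu*q, x) + C)/mu     # steps 4-5: y = (∫ mu*q dx + C)/mu
print(sp.simplify(y))                  # x**3/4 + C/x

# Cross-check against SymPy's own ODE solver (same family of solutions).
f = sp.Function('y')
print(sp.dsolve(sp.Eq(f(x).diff(x) + p*f(x), q), f(x)))
```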

Equations with Constant Coefficients

When p(x) and q(x) are constants, the equation simplifies, and finding the solution becomes more straightforward.

Standard Form with Constant Coefficients

Consider the equation: $\frac{dy}{dt} + ky = q(t)$ where y = y(t) is the unknown function, k is a constant coefficient, and q(t) is a given function of t (the input).

Applying the Integrating Factor Method

  1. Identify p(t) and q(t): p(t) = k (constant), and q(t) is the given function of t.
  2. Calculate the Integrating Factor μ(t). The integrating factor is: $μ(t) = e^{\int p(t)dt} = e^{\int kdt} = e^{kt}$
  3. Multiply Both Sides by μ(t): $e^{kt}\frac{dy}{dt} + ke^{kt}y = e^{kt}q(t)$.
  4. Simplify the Left-Hand Side: $e^{kt}\frac{dy}{dt} + ke^{kt}y = \frac{d}{dt}(e^{kt}·y)$. So the equation becomes: $\frac{d}{dt}(e^{kt}·y) = e^{kt}q(t)$
  5. Integrate Both Sides: $\int \frac{d}{dt}(e^{kt}·y)dt = \int e^{kt}q(t)dt ↭ e^{kt}·y = \int e^{kt}q(t)dt + C$
  6. Solve for y: $y(t) = \frac{1}{μ(t)}(\int μ(t)q(t)dt + C) = e^{-kt}(\int e^{kt}q(t)dt + C)$
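As a sanity check, the formula in step 6 can be verified symbolically for an arbitrary input q(t). Below is a minimal SymPy sketch (not part of the original derivation); q is left as an unspecified function.

```python
# Verify symbolically that y(t) = e^{-kt}( ∫ e^{kt} q(t) dt + C ) satisfies y' + k y = q(t).
import sympy as sp

t, k, C = sp.symbols('t k C')
q = sp.Function('q')

y = sp.exp(-k*t)*(sp.Integral(sp.exp(k*t)*q(t), t) + C)
residual = sp.diff(y, t) + k*y - q(t)
print(sp.simplify(residual))   # 0
```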

Interpretation of the Solution

This solution y(t) consists of two parts: a particular (steady-state) part $y_{part}(t) = e^{-kt}\int e^{kt}q(t)dt$, driven by the input q(t), and a homogeneous (transient) part $y_{hom}(t) = Ce^{-kt}$, which decays to zero as t → ∞ when k > 0.

The total solution is the sum of the particular and homogeneous solutions: $y(t) = y_{part}(t) + y_{hom}(t)$.


Sometimes the equation $y’ + ky = kq_e(t)$ is expressed as $\frac{1}{k}y’ + y = q_e(t)$, where $q_e(t)$ is called the input.

Example: Solving a First-Order Linear ODE with Constant Coefficients

Solve the differential equation: $\frac{dy}{dt}+2y = e^{-2t}$.

  1. Identify p(t) and q(t) in the standard form $\frac{dy}{dt} +p(t)y = q(t)$: p(t) = 2, $q(t) = e^{-2t}$.
  2. Calculate the integrating Factor μ(t): $μ(t)=e^{\int p(t)dt} = e^{\int 2dt} = e^{2t}$
  3. Multiply Both Sides by the Integrating Factor: $e^{2t}\frac{dy}{dt}+2e^{2t}y = e^{2t}e^{-2t} = 1 ⇒[\text{So, we have}] e^{2t}\frac{dy}{dt}+2e^{2t}y =1$.
  4. Recognize the Left-Hand Side as a Derivative: $\frac{d}{dt}[e^{2t}y] = 1$
  5. Integrate both sides: $\int \frac{d}{dt}[e^{2t}y]dt = \int 1dt ↭ e^{2t}y = t + C$.
  6. Solve for y(t): $y(t) = e^{-2t}(t + C)$
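As a quick cross-check of this worked example, SymPy’s ODE solver returns the same family of solutions (a sketch, assuming SymPy is available):

```python
# Cross-check of dy/dt + 2y = e^{-2t} with SymPy's dsolve.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(t).diff(t) + 2*y(t), sp.exp(-2*t)), y(t))
print(sol)   # expected: y(t) = (C1 + t)*exp(-2*t), i.e. e^{-2t}(t + C)
```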

Interpretation of the solution

The solution consists of two components: the particular (steady-state) response $te^{-2t}$, produced by the input $e^{-2t}$, and the homogeneous (transient) term $Ce^{-2t}$.

Applying an initial condition y(0) = y0 gives C = y0 (since y(0) = e^{0}(0 + C) = C). Thus, the solution with the initial condition is $y(t) = e^{-2t}(t + y_0)$.

In this example, despite the presence of the factor t, the steady-state (particular) part $te^{-2t}$ also decays to zero because the exponential factor dominates the linear growth in t, indicating that the entire solution diminishes over time.

Physical Interpretation: The system returns to equilibrium over time. Any initial perturbations (captured by C) decay exponentially.

The Superposition Principle

The superposition principle is a fundamental concept in linear systems, particularly in the study of linear differential equations. It allows us to break up a problem into simpler, more manageable parts, and then at the end assemble the answer from its simpler pieces. This principle is immensely powerful because it leverages the linearity property of differential equations.

Let’s consider the first-order linear differential equation: y’ + p(t)y = q(t), where y(t) is the unknown function, and p(t) and q(t) are known functions of t.

In this context, we can think of the left-hand side $\frac{dy}{dt}+p(t)y$ as representing the system and the right-hand side q(t) as the input to the system.

For any given input q(t) that has an output y(t) we will write q ↭ y (read input q leads to output y).

The Superposition Principle Explained

The superposition principle states that in a linear system, the response (output) to a sum of inputs (e.g., q1(t) and q2(t)) is the sum of the responses to each input individually. In other words, the sum of solutions corresponding to individual inputs is also a solution corresponding to the sum of those inputs.

Mathematically, if q1(t) leads to solution y1 (q1 ↭ y1) and q2(t) leads to solution y2 (q2 ↭ y2), then the combined input c1q1(t) + c2q2(t) leads to the solution: c1y1(t) + c2y2(t) (c1q1 + c2q2 ↭ c1y1 + c2y2). This principle holds because the differential equation is linear and linear systems allow for the sum of solutions to be a solution itself.

Proof:

The proof takes a few lines.

Compute the derivative of the combined solution: $\frac{d(c_1y_1 + c_2y_2)}{dt} + p(c_1y_1 + c_2y_2) = c_1\frac{dy_1}{dt} + c_2\frac{dy_2}{dt} + c_1py_1 + c_2py_2 =[\text{Group terms:}] c_1(\frac{dy_1}{dt} + py_1) + c_2(\frac{dy_2}{dt} + py_2) =[q_1↭y_1, q_2↭y_2] c_1q_1 + c_2q_2.$

Therefore, y(t) = c1y1(t) + c2y2(t) solves the differential equation with input q(t) = c1q1(t) + c2q2(t).
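The cancellation in this proof can also be confirmed symbolically. Here is a minimal SymPy sketch in which p, q1, q2, y1 and y2 are left as unspecified functions (the names are illustrative):

```python
# Symbolic check of the superposition argument for y' + p(t)y = q(t).
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
p, q1, q2, y1, y2 = (sp.Function(n) for n in ('p', 'q1', 'q2', 'y1', 'y2'))

L = lambda y, q: y.diff(t) + p(t)*y - q   # residual of y' + p(t)y = q

combined = L(c1*y1(t) + c2*y2(t), c1*q1(t) + c2*q2(t))
# The combined residual is exactly c1 times the first residual plus c2 times the second:
print(sp.expand(combined - c1*L(y1(t), q1(t)) - c2*L(y2(t), q2(t))))   # 0
```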

Example Using Superposition

Consider the equation $\frac{dx}{dt} + 2x = 2 + 3e^{-2t}$.

Step 1: Decompose the Input. We can break this problem into two simpler differential equations by writing the right-hand side as $2·q_1(t) + 3·q_2(t)$. First input: $q_1(t) = 1$. Second input: $q_2(t) = e^{-2t}$.

Step 2: Solve Each Subproblem Individually

Equation 1. $\frac{dx_1}{dt} + 2x_1 = 1$. The solution can be found as follows:

  1. Calculate the integrating factor: $μ(t)=e^{\int p(t)dt} = e^{\int 2dt} = e^{2t}$.
  2. Multiply Both Sides by the Integrating Factor: $e^{2t}\frac{dx_1}{dt} + 2x_1e^{2t} = e^{2t}$.
  3. Recognize the Left-Hand Side as a Derivative: $\frac{d}{dt}[e^{2t}x_1] = e^{2t}$.
  4. Integrate both sides: $\int \frac{d}{dt}[e^{2t}x_1]dt = \int e^{2t}dt↭e^{2t}x_1 = \frac{e^{2t}}{2} + C_1$.
  5. Solve for $x_1$: $x_1(t) = \frac{1}{2} + C_1e^{-2t}$ (i)

Equation 2. $\frac{dx_2}{dt}+2x_2 = e^{-2t}$. The solution can be found in the same way:

  1. Calculate the integrating factor: $μ(t)=e^{\int p(t)dt} = e^{\int 2dt} = e^{2t}$.
  2. Multiply Both Sides by the Integrating Factor: $e^{2t}\frac{dx_2}{dt} + 2x_2e^{2t} = e^{2t}e^{-2t} = 1$.
  3. Recognize the Left-Hand Side as a Derivative: $\frac{d}{dt}[e^{2t}x_2] = 1$.
  4. Integrate both sides: $\int \frac{d}{dt}[e^{2t}x_2]dt = \int dt↭e^{2t}x_2 = t + C_2$.
  5. Solve for $x_2$: $x_2(t) = e^{-2t}(t+C_2)$ (ii)

Step 3: Combine Solutions Using Superposition. The general solution is x(t) = 2·x1(t) + 3·x2(t) = $2(\frac{1}{2} + C_1e^{-2t}) + 3e^{-2t}(t+C_2) = 1 + 2C_1e^{-2t} + 3e^{-2t}t + 3e^{-2t}C_2 =[\text{Simplify}] 1 + e^{-2t}(3t + 2C_1 + 3C_2) = 1 + e^{-2t}(3t + C)$ where C = 2C1 + 3C2.

Step 4: Interpretation. The constant term 1 is the steady-state response to the constant part of the input, while the term $e^{-2t}(3t + C)$ is transient and decays to zero as t → ∞, so the system settles at x = 1.
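A quick symbolic check (a SymPy sketch) confirms that the combined solution satisfies the reassembled equation $\frac{dx}{dt} + 2x = 2 + 3e^{-2t}$:

```python
# Check that x(t) = 1 + e^{-2t}(3t + C) solves x' + 2x = 2 + 3e^{-2t}.
import sympy as sp

t, C = sp.symbols('t C')
x = 1 + sp.exp(-2*t)*(3*t + C)
print(sp.simplify(x.diff(t) + 2*x))   # 2 + 3*exp(-2*t), the combined input
```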

Differential Equations with Periodic Inputs

First-order linear differential equations with periodic inputs are common in modeling physical systems subjected to oscillatory forces, such as electrical circuits with alternating currents, mechanical systems under periodic forces, or any system responding to cyclical influences.

Consider the differential equation $y’ + ky = kq_e(t)$, where k > 0 is a constant and $q_e(t) = cos(wt)$ is a periodic input (the forcing term) with angular frequency w.

Our goal is to find the solution y(t) that satisfies this equation.

Complexifying the Equation

The idea behind complexifying the equation is that complex exponentials (of the form $e^{iwt}$) are easier to work with than trigonometric functions. Once we solve the complex version of the equation, we can extract the real part to find the solution to the original problem, since $cos(wt) = Re(e^{iwt})$.

By representing the cosine function using Euler’s formula, we can convert the equation into one involving exponentials, which are easier to manipulate.

This allows us to consider the complex version of the differential equation: $\tilde{y’}+k\tilde{y} = ke^{iwt}$, where $\tilde{y}$ is a complex-valued function. Our aim is to solve this complex differential equation for $\tilde{y} = y_1 + iy_2$, and then extract the real part to obtain the solution to our original ODE.

Solving the Complex Differential Equation

The complex differential equation is a first-order linear differential equation and can be solved using the integrating factor method: $\tilde{y’}+k\tilde{y} = ke^{iwt}$.

  1. It is already in standard form: p(t) = k (constant), $q(t) = ke^{iwt}$.
  2. The integrating factor is $μ(t) = e^{\int kdt} = e^{kt}$, derived from the coefficient of $\tilde{y}$ (which is k).
  3. Multiplying both sides of the differential equation by the integrating factor, we get: $e^{kt}\tilde{y’}+e^{kt}k\tilde{y} = e^{kt}ke^{iwt} ⇒[\text{This simplifies to:}] (\tilde{y}e^{kt})’ = ke^{(k+iw)t}$
  4. We integrate both sides with respect to t: $ \int \frac{d}{dt}(\tilde{y}e^{kt})dt = \int ke^{(k+iw)t}dt ↭ \tilde{y}e^{kt} = \frac{k}{k+iw}e^{(k+iw)t}+C$

    The integral of $e^{at}$ with respect to t is $\frac{1}{a}e^{at}$. Here, a = k + iw.

  5. Solve for $\tilde{y}$ by dividing both sides by $e^{kt}$: $\tilde{y} = \frac{k}{k+iw}e^{iwt}+ Ce^{-kt} = \frac{1}{1+i(\frac{w}{k})}e^{iwt} +Ce^{-kt}$ (dividing the numerator and denominator of the first coefficient by k).

Since we are primarily interested in the steady-state solution (long-term behavior), we can ignore the transient solution $Ce^{-kt}$, which decays exponentially and vanishes as t → ∞ due to k > 0. Thus, we can set C = 0 and the solution is: $\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt}$

We want to express the coefficient $\frac{1}{1+i(\frac{w}{k})}$ in a more useful form. We are going to convert this expression to polar form to better understand the magnitude and phase of the solution and extract its real part.

Polar Form

Recall that any complex number α = x + iy can be expressed in polar form $α = re^{iθ}$, where $r = |α| = \sqrt{x^2+y^2}$ is its modulus and $θ = arg(α)$ is its argument, the angle α makes with the positive real axis.

Besides, $\frac{1}{α}·α = 1 ⇒ |\frac{1}{α}|·|α| = |1| = 1 ⇒ |\frac{1}{α}| = \frac{1}{|α|}, arg(\frac{1}{α})+arg(α) = arg(1) = 0 ⇒ arg(\frac{1}{α}) = -arg(α)$

$β = 1 + i(\frac{w}{k})$ has x = 1 (real part) and y = $\frac{w}{k}$ (imaginary part), so $|β| = \sqrt{1+(\frac{w}{k})^2}$ and $arg(β) = tan^{-1}(\frac{w}{k}) = Φ$.

We are searching for the polar form of $\frac{1}{1+i(\frac{w}{k})} = \frac{1}{β}$. By the rules above, $arg(\frac{1}{β}) = -arg(1+i(\frac{w}{k})) = -Φ$ and $|\frac{1}{β}| = \frac{1}{|β|} = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}$ (refer to Figure A for a visual representation), so we can rewrite it in polar form: $\frac{1}{1+i(\frac{w}{k})} = Ae^{-iΦ} = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}e^{-iΦ}$

Figure A: Polar form.
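A quick numerical check of this modulus and argument, using the illustrative value w/k = 2 (a sketch with Python’s standard math and cmath modules):

```python
# The modulus of 1/(1 + i w/k) is 1/sqrt(1 + (w/k)^2) and its argument is -Φ = -arctan(w/k).
import cmath, math

w_over_k = 2.0                                    # illustrative value of w/k
alpha = 1/(1 + 1j*w_over_k)

print(abs(alpha), 1/math.sqrt(1 + w_over_k**2))   # both ≈ 0.4472
print(cmath.phase(alpha), -math.atan(w_over_k))   # both ≈ -1.1071
```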

$\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt} = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}e^{iwt-iΦ} = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}e^{i(wt-Φ)}$

Since we are only interested in the real part of $\tilde{y}$, we take the real part of the above expression: $y_1 = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}cos(wt-Φ)$ where $Φ = tan^{-1}(\frac{w}{k})$ is the phase lag of the function (Refer to Figure B for a visual representation and aid in understanding it).

Figure B: Polar form.
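To see the amplitude and phase lag numerically, one can integrate y′ + ky = k·cos(wt) forward in time and compare the late-time response with the formula above. The following sketch uses SciPy’s solve_ivp with the illustrative values k = 1, w = 2 and initial condition y(0) = 0:

```python
# Numerical check of the steady state y = cos(wt - Φ)/sqrt(1 + (w/k)^2), Φ = arctan(w/k).
import numpy as np
from scipy.integrate import solve_ivp

k, w = 1.0, 2.0
sol = solve_ivp(lambda t, y: k*np.cos(w*t) - k*y, (0, 40), [0.0],
                dense_output=True, rtol=1e-9, atol=1e-12)

t = np.linspace(30, 40, 2000)          # late times: the transient Ce^{-kt} has died out
y = sol.sol(t)[0]
phi = np.arctan(w/k)
predicted = np.cos(w*t - phi)/np.sqrt(1 + (w/k)**2)
print(np.max(np.abs(y - predicted)))   # small (limited only by the integration tolerance)
```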

A forcing term in a differential equation represents an external influence or input to the system. In the equation y′ + ky = kcos(wt), the term kcos(wt) is the forcing term. If there were no forcing term (i.e., if the equation were y′+ky=0), the system would naturally settle into some kind of equilibrium or decay to zero over time (depending on the sign of k). The forcing term kcos(wt) represents a periodic input or disturbance (e.g., a mechanical vibration, an alternating electric current, or any other cyclical influence on the system) that drives the system. It represents an external oscillation with frequency w that forces the system to oscillate as well.

Damping refers to the gradual reduction in the amplitude of oscillations in a system. In the equation y′+ky = kcos(wt), the term ky represents damping, where k is a constant that determines the strength of the damping effect. This term resists changes in y(t) and tends to reduce the amplitude of oscillations. The larger the constant k, the stronger the damping: the system loses energy more quickly, and the oscillations become smaller in amplitude.

Alternative Approach: Cartesian Form

An alternative method involves expressing the complex coefficient in terms of its real and imaginary parts and then simplifying.

y’ + ky = kcos(wt). While the polar form provides insight into the amplitude and phase, we can also express the solution using trigonometric identities.

The solution is: $\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt}$. The goal is to convert this complex solution into Cartesian form (a form that involves real and imaginary parts explicitly) and extract the real part, which gives the solution to the original differential equation.

The expression $\frac{1}{1+i(\frac{w}{k})}$ is a complex fraction. We want to simplify it into a form that separates the real and imaginary parts.

To simplify, we multiply both the numerator and denominator by the complex conjugate of the denominator.

$\frac{1}{1+i(\frac{w}{k})}·\frac{1-i(\frac{w}{k})}{1-i(\frac{w}{k})} = \frac{1-i(\frac{w}{k})}{1+(\frac{w}{k})^2}$

$\tilde{y} = \frac{1}{1+i(\frac{w}{k})}e^{iwt} = \frac{1-i(\frac{w}{k})}{1+(\frac{w}{k})^2}(cos(wt)+isin(wt))$

Since we are only interested in the real part of $\tilde{y}=y_1+iy_2$, we take the real part of the above expression: $y_1 = \frac{1}{1+(\frac{w}{k})^2}(cos(wt)+\frac{w}{k}sin(wt)) =$[Using the trigonometry identity a·cos(θ)+b·sin(θ) = c·cos(θ - Φ) where a and b are the two legs or sides of a right triangle, c is the hypotenuse, Φ is the angle between a and c, Φ = $tan^{-1}(\frac{b}{a})$ (Refer to Figure C for a visual representation and aid in understanding it)]

Figure C: Cartesian form.

$y_1 = \frac{1}{1+(\frac{w}{k})^2}(cos(wt)+\frac{w}{k}sin(wt)) = \frac{1}{1+(\frac{w}{k})^2}·\sqrt{1+(\frac{w}{k})^2}cos(wt-Φ) = (1+(\frac{w}{k})^2)^{-1+\frac{1}{2}}cos(wt-Φ) = \frac{1}{\sqrt{1+(\frac{w}{k})^2}}cos(wt-Φ)$ where Φ = $tan^{-1}(\frac{b}{a}) = tan^{-1}(\frac{w}{k})$ is the phase shift or phase lag, and $\frac{1}{\sqrt{1+(\frac{w}{k})^2}}$ is the amplitude of the oscillation.

This matches the solution obtained using the polar form.
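This Cartesian computation can also be checked symbolically. A short SymPy sketch (the symbols are declared positive so that taking real parts behaves as expected):

```python
# Real part of e^{iwt}/(1 + i w/k) equals (cos(wt) + (w/k) sin(wt))/(1 + (w/k)^2).
import sympy as sp

t, w, k = sp.symbols('t w k', positive=True)
ytilde = sp.exp(sp.I*w*t)/(1 + sp.I*w/k)

y1 = sp.re(sp.expand_complex(ytilde))
expected = (sp.cos(w*t) + (w/k)*sp.sin(w*t))/(1 + (w/k)**2)
print(sp.simplify(y1 - expected))   # 0
```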

Formula’s proof

To justify the formula a·cos(θ)+b·sin(θ) = c·cos(θ - Φ) where a and b are the two legs or sides of a right triangle, c is the hypotenuse, Φ is the angle between a and c, Φ = $tan^{-1}(\frac{b}{a})$, consider the following geometric interpretation using vectors:

Let $\hat{\mathbf{u}} = ⟨cos(θ), sin(θ)⟩$ be a unit vector, $\vec{v}$ = ⟨a, b⟩ be a vector with components a and b. The dot product of these two vectors is equal to the magnitude of $\vec{v}$ multiplied by the cosine of the angle between them: $\hat{\mathbf{u}}·\vec{v} = ⟨a, b⟩·⟨cos(θ), sin(θ)⟩ = |⟨a, b⟩|·1·cos(θ - Φ)$ 🚀 (Refer to Figure D for a visual representation and aid in understanding it).

Figure D: Polar form.

$\hat{\mathbf{u}}·\vec{v} = ⟨a, b⟩·⟨cos(θ), sin(θ)⟩ = acos(θ) + bsin(θ) =[🚀] c·cos(θ - Φ)$ where $c = |⟨a, b⟩| = \sqrt{a^2 + b^2}$ and $Φ = tan^{-1}(\frac{b}{a})$ is the angle between the vector $\vec{v}$ and the positive x-axis.

Another way of proving it is the following: $(a-bi)(cos(θ) + isin(θ)) =[\text{Polar form of } a-bi] \sqrt{a^2+b^2}e^{-iΦ}e^{iθ} = \sqrt{a^2+b^2}e^{i(θ-Φ)}$. Taking the real parts on both sides of the equation: $acos(θ)+bsin(θ) = \sqrt{a^2+b^2}cos(θ-Φ)$ ∎
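The identity can also be verified symbolically for positive a and b (a short SymPy sketch):

```python
# Verify a*cos(θ) + b*sin(θ) = sqrt(a^2 + b^2)*cos(θ - Φ) with Φ = arctan(b/a), a, b > 0.
import sympy as sp

a, b, theta = sp.symbols('a b theta', positive=True)
phi = sp.atan(b/a)
lhs = a*sp.cos(theta) + b*sp.sin(theta)
rhs = sp.sqrt(a**2 + b**2)*sp.cos(theta - phi)
print(sp.simplify(sp.expand_trig(rhs) - lhs))   # 0
```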

Bibliography

This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].