"Mathematics is the music of reason." (James Joseph Sylvester)
"If you're going through hell, keep on going." (Winston Churchill)
Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x + 5y$, $y' + y = 4xcos(2x)$, $\frac{dy}{dx} = x^2y + y$, etc.
For example, $\frac{dy}{dx} = 3x + 5y$ involves: the dependent variable $y$, its derivative $\frac{dy}{dx}$, and the independent variable $x$.
Definition. A first-order linear ordinary differential equation is an ordinary differential equation (ODE) involving an unknown function y(x), its first derivative y', and functions of the independent variable x, which can be written in the general form: a(x)y' + b(x)y = c(x), where a(x), b(x), and c(x) are given functions of x, with a(x) ≠ 0 on the interval of interest.
These equations are termed "linear" because the unknown function y and its derivative y' appear only to the first power and are not multiplied together or composed in any nonlinear way.
If the function c(x) = 0 for all x in the interval of interest, the equation simplifies to: a(x)y' + b(x)y = 0. Such an equation is called a homogeneous linear differential equation.
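As a quick illustration, here is a minimal sketch (assuming SymPy is available; the variable names are arbitrary choices) of solving the example equation y' + y = 4xcos(2x) from above, together with its homogeneous counterpart, with a computer algebra system:

```python
# Minimal sketch: symbolic solution of a first-order linear ODE with SymPy.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Inhomogeneous example from the text: y' + y = 4x cos(2x)
sol = sp.dsolve(sp.Eq(y(x).diff(x) + y(x), 4*x*sp.cos(2*x)), y(x))
print(sol)

# Homogeneous counterpart: y' + y = 0, with the familiar solution y = C1*e^(-x)
sol_h = sp.dsolve(sp.Eq(y(x).diff(x) + y(x), 0), y(x))
print(sol_h)  # Eq(y(x), C1*exp(-x))
```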
The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ODEs. It states that if f(x, y) and its partial derivative $\frac{\partial f}{\partial y}$ are continuous on a rectangle containing the point $(x_0, y_0)$, then the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point $(x_0, y_0)$, meaning that it satisfies the initial condition $y(x_0) = y_0$.
This theorem ensures that under these conditions, the solution exists and is unique near $x = x_0$.
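The existence half of the theorem is constructive: under the stated hypotheses, the Picard iterates $y_{k+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_k(t))dt$ converge to the solution. Below is a minimal sketch of this iteration (assuming SymPy; the test equation y' = y is an illustrative choice):

```python
# Minimal sketch of Picard iteration: y_{k+1}(x) = y0 + integral of f(t, y_k(t)) from x0 to x.
import sympy as sp

x, t = sp.symbols('x t')

def picard(f, x0, y0, steps):
    yk = sp.Integer(y0)                        # start from the constant function y0
    for _ in range(steps):
        yk = y0 + sp.integrate(f(t, yk.subs(x, t)), (t, x0, x))
    return sp.expand(yk)

# For y' = y, y(0) = 1, the iterates are the Taylor partial sums of e^x.
print(picard(lambda s, u: u, 0, 1, 4))         # prints the degree-4 Taylor polynomial of e^x
```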
A second-order linear homogeneous ODE with constant coefficients is a differential equation of the form: y'' + Ay' + By = 0, where A and B are constants.
To solve this ODE, we seek two linearly independent solutions y1(t) and y2(t). The general solution is then a linear combination of these solutions, $c_1y_1 + c_2y_2$, where c1 and c2 are two arbitrary constants determined by initial conditions. The key to solving the ODE is the characteristic equation $r^2 + Ar + B = 0$, obtained by substituting the trial solution $y = e^{rt}$; its roots determine the behavior of the solutions.
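The three possible root patterns (distinct real, repeated real, complex conjugate) give three solution families. A minimal sketch of reading them off (the example values of A and B are illustrative):

```python
# Minimal sketch: read off the general solution of y'' + A y' + B y = 0 from the
# discriminant of its characteristic equation r^2 + A r + B = 0.
import math

def classify(A, B):
    disc = A*A - 4*B
    if disc > 0:                                   # two distinct real roots
        r1 = (-A + math.sqrt(disc)) / 2
        r2 = (-A - math.sqrt(disc)) / 2
        return f"y = c1 e^({r1:g} t) + c2 e^({r2:g} t)"
    if disc == 0:                                  # repeated real root
        return f"y = (c1 + c2 t) e^({-A/2:g} t)"
    alpha, beta = -A/2, math.sqrt(-disc)/2         # complex pair alpha +/- i*beta
    return f"y = e^({alpha:g} t)(c1 cos({beta:g} t) + c2 sin({beta:g} t))"

print(classify(3, 2))   # roots -1, -2: two decaying exponentials
print(classify(2, 1))   # double root -1: (c1 + c2 t) e^{-t}
print(classify(0, 4))   # roots +/-2i: pure oscillation
```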
In this section, we introduce the concept of Fourier series, a powerful mathematical tool used to represent periodic functions as sums of sine and cosine functions. This is particularly useful when dealing with inhomogeneous linear differential equations of the form y'' + ay' + by = f(t), where a and b are constants and f(t) is a periodic input (forcing) function.
Up to this point, we have often encountered input functions f(t) that are exponential, sine, or cosine functions. Although these functions might seem quite limited and special, they are in fact fundamental building blocks, because they allow us to build more complex periodic functions through superposition.
Due to the linear nature of the differential equation, we can decompose more complicated inputs into sums of sines and cosines, solve for their responses individually, and then combine the results to obtain the overall solution. This principle is known as superposition and is a cornerstone in solving linear differential equations.
A Fourier series allows us to write or express any periodic function f(t) (with period 2π) as an infinite sum of sine and cosine functions. The general form of a Fourier series is given by: f(t) = $\frac{a_0}{2} + \sum_{n=1}^\infty [a_ncos(nt)+b_nsin(nt)]$, where the coefficients $a_n$ and $b_n$ quantify the contribution of each cosine and sine term; we derive formulas for them below.
In the Fourier series representation, each sine and cosine term corresponds to a harmonic of the fundamental frequency, capturing different frequency components of the periodic function.
When solving the differential equation y'' + ay' + by = f(t), we can use the principle of superposition to handle more complex input functions f(t). The idea is as follows: if f(t) is a sum of functions, then the response y(t) is the sum of the responses to each function in the sum.
This allows us to: (1) decompose a complicated periodic input f(t) into its Fourier components, (2) solve the equation for each sine and cosine term separately, and (3) sum the individual responses to obtain the full solution.
The table below shows how different input functions lead to specific responses when solving the differential equation y'' + ay' + by = f(t).
Table of Inputs and Responses
Input | Response | Commentary |
---|---|---|
$sin(nt)$ | $y_n^{(s)}(t)$ | Response to sine input |
$cos(nt)$ | $y_n^{(c)}(t)$ | Response to cosine input |
$b_nsin(nt)$ | $b_ny_n^{(s)}(t)$ | Scaled by $b_n$, due to linearity |
$a_ncos(nt)$ | $a_ny_n^{(c)}(t)$ | Scaled by $a_n$, due to linearity |
$f(t)$ | $\frac{a_0}{2b} + \sum_{n=1}^\infty [a_ny_n^{(c)}(t)+b_ny_n^{(s)}(t)]$ | Sum of all individual responses (the constant term assumes b ≠ 0) |
The response y(t) to the input f(t) is the sum of the responses to each term in its Fourier series. Here $y_n^{(s)}(t)$ and $y_n^{(c)}(t)$ denote the particular solutions to the differential equation with inputs $sin(nt)$ and $cos(nt)$, respectively.
This table illustrates the superposition principle: since the differential equation is linear, the response to a sum of inputs is simply the sum of the responses to each input. If we can solve the equation for simple inputs like sin(nt) and cos(nt), we can construct the solution for any periodic input function by adding up the responses corresponding to its Fourier components.
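For the constant-coefficient case there is a standard closed form for these building-block responses: for input $e^{int}$, trying $y = Ce^{int}$ gives the particular solution $\frac{e^{int}}{b - n^2 + ian}$ (assuming the denominator is nonzero, i.e., no resonance), and taking real and imaginary parts yields the responses to $cos(nt)$ and $sin(nt)$. The sketch below (with illustrative values for a, b, and the coefficients) superposes these responses exactly as in the table:

```python
# Minimal sketch of superposition: steady-state response of y'' + a y' + b y = f(t)
# for f(t) = a0/2 + sum of a_n cos(nt) + b_n sin(nt), via the complex gain
# H(n) = 1 / (b - n^2 + i a n). Assumes b != 0 and no resonance (denominator != 0).
import numpy as np

def steady_state(t, a, b, a_coeffs, b_coeffs, a0=0.0):
    y = np.full_like(t, a0 / (2*b))                 # response to the constant a0/2
    for n, (an, bn) in enumerate(zip(a_coeffs, b_coeffs), start=1):
        H = 1.0 / (b - n**2 + 1j*a*n)               # complex gain at frequency n
        y += an * np.real(H * np.exp(1j*n*t))       # response to a_n cos(nt)
        y += bn * np.imag(H * np.exp(1j*n*t))       # response to b_n sin(nt)
    return y

t = np.linspace(-np.pi, np.pi, 5)
print(steady_state(t, a=1.0, b=2.0, a_coeffs=[1.0], b_coeffs=[0.5]))
```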
The goal is to determine the Fourier series of a given function f(t) with period 2π.
Definition. Two functions u(t) and v(t), defined on ℝ and periodic with period 2π, are said to be orthogonal on the interval [-π, π] if $\int_{-π}^{π} u(t)v(t)dt = 0$
Theorem. Orthogonality of Sine and Cosine Functions on [-π, π]. The set of functions $\begin{cases} sin(nt), & n = 1, 2, 3, ··· \\ cos(mt), & m = 0, 1, 2, ··· \end{cases}$ is orthogonal on the interval [-π, π]. This means that, for any two distinct functions from this set, the integral of their product over [-π, π] is zero: $\int_{-π}^{π} sin(nt)cos(mt)dt = 0$ for all n, m, and $\int_{-π}^{π} sin(nt)sin(mt)dt = \int_{-π}^{π} cos(nt)cos(mt)dt = 0$ for n ≠ m.
For completeness, recall the special cases where the functions are identical (n = m ≠ 0): $\int_{-π}^{π} sin^2(nt)dt = \int_{-π}^{π} cos^2(nt)dt = π$. For n = 0, $\int_{-π}^{π} cos^2(0·t)dt = \int_{-π}^{π} 1^2dt = 2π.$
Orthogonality in this context means that the sine and cosine functions are “independent” of each other in a certain sense. In the Fourier series, we decompose a periodic function into a sum of sines and cosines. This property ensures that each Fourier coefficient (associated with a particular sine or cosine function) is independent of the others and can be calculated separately.
Proof of Orthogonality:
Let $U_n$ and $V_m$ be any two functions from the set above (each of the form $sin(nt)$ or $cos(mt)$), where n ≠ m. We will show that the integral of their product over [-π, π] is zero: $\int_{-π}^{π} U_n(t)V_m(t)dt = 0$
We begin by recalling that sine and cosine functions satisfy the second-order differential equation for harmonic oscillators: $U_n''(t) + n^2U_n(t) = 0 \Rightarrow U_n''(t) = -n^2U_n(t)$, where $U_n(t)$ can represent either $sin(nt)$ or $cos(nt)$.
We are interested in proving that for n ≠ m: $\int_{-π}^{π} U_n(t)V_m(t)dt = 0$
Let's compute the integral $\int_{-π}^{π} U_n''(t)V_m(t)dt$ using integration by parts:
$U_n'(t)V_m(t)\bigg|_{-π}^{π}$ This boundary term vanishes: both $U_n'(t)$ and $V_m(t)$ are 2π-periodic, so their product takes the same value at t = -π and at t = π, and the difference is zero. (Note that the functions themselves need not vanish at the endpoints; for example, $sin(nπ) = 0$ but $cos(nπ) = (-1)^n$.)
$-\int_{-π}^{π} U_n'(t)V_m'(t)dt$
This simplifies our expression to: $\int_{-π}^{π} U_n''(t)V_m(t)dt = -\int_{-π}^{π} U_n'(t)V_m'(t)dt$
From the harmonic oscillator equation, $U_n''(t) = -n^2U_n(t)$, we substitute into the integral: $\int_{-π}^{π} U_n''(t)V_m(t)dt = -n^2\int_{-π}^{π} U_n(t)V_m(t)dt$
Similarly, applying the same logic to $V_m(t)$, which satisfies $V_m''(t) = -m^2V_m(t)$, we get: $\int_{-π}^{π} V_m''(t)U_n(t)dt = -m^2\int_{-π}^{π} U_n(t)V_m(t)dt$.
By the same reasoning (integration by parts, with a vanishing boundary term), this also simplifies to: $\int_{-π}^{π} V_m''(t)U_n(t)dt = -\int_{-π}^{π} U_n'(t)V_m'(t)dt$
Since both integrals therefore equal $-\int_{-π}^{π} U_n'(t)V_m'(t)dt$, we can equate the two results: $-n^2\int_{-π}^{π} U_n(t)V_m(t)dt = -m^2\int_{-π}^{π} U_n(t)V_m(t)dt$
Factoring out the common integral: $(n^2-m^2)\int_{-π}^{π} U_n(t)V_m(t)dt = 0$. Since n ≠ m, we have $n^2-m^2 ≠ 0$, so $\int_{-π}^{π} U_n(t)V_m(t)dt = 0$. (The remaining case, $sin(nt)$ against $cos(nt)$ with the same n, follows directly: their product is an odd function, so its integral over the symmetric interval [-π, π] vanishes.) ∎
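The orthogonality relations, including the special cases recalled earlier, are easy to confirm numerically. A quick sanity check, assuming SciPy is available:

```python
# Numerical sanity check of the orthogonality relations on [-pi, pi].
import numpy as np
from scipy.integrate import quad

n, m = 3, 5   # any distinct positive integers

print(quad(lambda t: np.sin(n*t)*np.sin(m*t), -np.pi, np.pi)[0])  # ~0
print(quad(lambda t: np.cos(n*t)*np.cos(m*t), -np.pi, np.pi)[0])  # ~0
print(quad(lambda t: np.sin(n*t)*np.cos(m*t), -np.pi, np.pi)[0])  # ~0
print(quad(lambda t: np.sin(n*t)**2, -np.pi, np.pi)[0])           # ~pi
print(quad(lambda t: np.cos(0*t)**2, -np.pi, np.pi)[0])           # ~2*pi
```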
The Fourier series is a powerful tool that allows us to represent any periodic function f(t) as a sum of sine and cosine functions. Specifically, if a function f(t) is periodic with a period of 2π, we can express it as an infinite series involving cosines and sines: $f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty [a_ncos(nt)+b_nsin(nt)]$
The goal is to determine the Fourier coefficients $a_n$ and $b_n$, which describe or quantify the contribution of each cosine and sine term to the overall function f(t).
Step 1. Multiply Both Sides by cos(nt) or sin(nt)
To isolate the coefficient an, we multiply both sides of the equation by cos(nt):
$f(t)cos(nt) = \frac{a_0}{2}cos(nt) + \sum_{m=1}^\infty [a_mcos(mt)+b_msin(mt)]cos(nt) = \frac{a_0}{2}cos(nt) + \sum_{m=1}^\infty [a_mcos(mt)cos(nt)+b_msin(mt)cos(nt)]$
Similarly, to find bn, multiply both sides by sin(nt):
$f(t)sin(nt) = \frac{a_0}{2}sin(nt) + \sum_{m=1}^\infty [a_mcos(mt)+b_msin(mt)]sin(nt) = \frac{a_0}{2}sin(nt) + \sum_{m=1}^\infty [a_mcos(mt)sin(nt)+b_msin(mt)sin(nt)]$
Step 2. Integrate Over the Interval [-π, π]
Integrate both sides over the interval [-π, π]:
$\int_{-π}^{π} f(t)cos(nt)dt = \frac{a_0}{2}\int_{-π}^{π} cos(nt)dt + \sum_{m=1}^\infty [a_m\int_{-π}^{π}cos(mt)cos(nt)dt + b_m\int_{-π}^{π} sin(mt)cos(nt)dt]$
Because of the orthogonality property of sine and cosine functions on [-π, π], almost all terms vanish: $\int_{-π}^{π} cos(nt)dt = 0$ for n ≥ 1, $\int_{-π}^{π} cos(mt)cos(nt)dt = 0$ for m ≠ n, and $\int_{-π}^{π} sin(mt)cos(nt)dt = 0$ for all m, n.
Thus, only the term where m = n survives.
Step 3. Solve for an and bn
For an: $\int_{-π}^{π} f(t)cos(nt)dt = a_n\int_{-π}^{π}cos^2(nt)dt = a_nπ$. Solving for an, we get: $a_n = \frac{1}{π}\int_{-π}^{π} f(t)cos(nt)dt$
Similarly, for $b_n$: $\int_{-π}^{π} f(t)sin(nt)dt = b_n\int_{-π}^{π}sin^2(nt)dt = b_nπ$. Solving for $b_n$, we get: $b_n = \frac{1}{π}\int_{-π}^{π} f(t)sin(nt)dt$
Step 4. Calculating a0
To find the coefficient a0, integrate f(t) over [-π, π]:
$\int_{-π}^{π} f(t)dt = \frac{a_0}{2}\int_{-π}^{π}1dt +\sum_{n=1}^\infty [a_n\int_{-π}^{π} cos(nt)dt + b_n\int_{-π}^{π} sin(nt)dt]$
Since: $\int_{-π}^{π} cos(nt)dt = 0, \int_{-π}^{π} sin(nt)dt = 0$ for n ≥ 1.
$\int_{-π}^{π} f(t)dt = \frac{a_0}{2}\int_{-π}^{π}1dt = \frac{a_0}{2}(2π)$
Solving for a0: $a_0 = \frac{1}{π}\int_{-π}^{π} f(t)dt$
A Fourier series expands any periodic function f(t) (with period 2π) as an infinite sum of sines and cosines. The general form of the Fourier series of such an f is $f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty [a_ncos(nt)+b_nsin(nt)]$, where $a_n=\frac{1}{π}\int_{-π}^{π} f(t)cos(nt)dt$ and $b_n = \frac{1}{π}\int_{-π}^{π} f(t)sin(nt)dt$.
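These formulas translate directly into code. A minimal sketch (assuming SciPy; the function name fourier_coeffs and the test function are illustrative choices):

```python
# Minimal sketch: compute Fourier coefficients of a 2*pi-periodic f by
# numerical integration of the formulas above.
import numpy as np
from scipy.integrate import quad

def fourier_coeffs(f, N):
    """Return a_0 and the lists (a_1..a_N), (b_1..b_N) for f on [-pi, pi]."""
    a0 = quad(f, -np.pi, np.pi)[0] / np.pi
    a = [quad(lambda t, n=n: f(t)*np.cos(n*t), -np.pi, np.pi)[0] / np.pi for n in range(1, N+1)]
    b = [quad(lambda t, n=n: f(t)*np.sin(n*t), -np.pi, np.pi)[0] / np.pi for n in range(1, N+1)]
    return a0, a, b

# Sanity check: f(t) = cos(2t) + 3 sin(t) should give a_2 = 1 and b_1 = 3.
a0, a, b = fourier_coeffs(lambda t: np.cos(2*t) + 3*np.sin(t), N=3)
print(round(a0, 6), [round(v, 6) for v in a], [round(v, 6) for v in b])
```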
The square-wave function is a classic example used to demonstrate the power of Fourier series in representing periodic functions, even those with discontinuities. The function oscillates between two levels, 0 and 1, over one period [−π, π]. Formally, the square-wave function f(t) is defined as:
$f(t) = \begin{cases} 0, &-π ≤ t < 0 \\ 1, &0 ≤ t < π \end{cases}$
This function is periodic with a period of 2π, meaning f(t+2π) = f(t) for all t.
Since f(t) is periodic, it has a Fourier series representation of the form:
$f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty [a_ncos(nt)+b_nsin(nt)]$
Our task is to compute the Fourier coefficients a0, an, and bn and express the function as a Fourier series.
$a_0 = \frac{1}{π}\int_{-π}^{π} f(t)dt$
Since f(t) = 0 for -π ≤ t < 0 and f(t) = 1 for 0 ≤ t < π, the integral simplifies to:
$a_0 = \frac{1}{π}\int_{0}^{π} dt = \frac{1}{π}(π-0) = \frac{π}{π} = 1$. Thus, the constant term in the Fourier series is: $\frac{a_0}{2} = \frac{1}{2}$
For n ≥ 1: $a_n = \frac{1}{π}\int_{-π}^{π} f(t)cos(nt)dt = \frac{1}{π}\int_{0}^{π} cos(nt)dt = \frac{1}{nπ}sin(nt)\bigg|_{0}^{π} = \frac{1}{nπ}[sin(nπ)-sin(0)] = \frac{1}{nπ}[0-0] = 0$, because $sin(nπ) = 0$ for all integers n and $sin(0) = 0$.
$b_n = \frac{1}{π}\int_{-π}^{π} f(t)sin(nt)dt = \frac{1}{π}\int_{0}^{π} sin(nt)dt = \frac{-1}{nπ}cos(nt)\bigg|_{0}^{π} = \frac{-1}{nπ}((-1)^n-1)$
Recall that $cos(nπ) = (-1)^n$ and $cos(0) = 1$.
$b_n =\begin{cases} 0, & n = 2k \text{ (even)} \\ \frac{-1}{nπ}(-2)=\frac{2}{nπ}, & n = 2k-1 \text{ (odd)} \end{cases}$
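As a quick check on these closed forms, we can compare them against direct numerical integration (assuming SciPy; the points argument simply tells the integrator where the jump is):

```python
# Numerical check: b_n of the square wave equals 2/(n*pi) for odd n, 0 for even n.
import numpy as np
from scipy.integrate import quad

def square(t):
    return 1.0 if 0 <= t < np.pi else 0.0

for n in range(1, 6):
    bn = quad(lambda t: square(t)*np.sin(n*t), -np.pi, np.pi, points=[0.0])[0] / np.pi
    expected = 2/(n*np.pi) if n % 2 == 1 else 0.0
    print(n, round(bn, 6), round(expected, 6))
```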
Constructing the Fourier Series. Now that we have all the Fourier coefficients, we can write the Fourier series for the square-wave function: $f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty [a_ncos(nt)+b_nsin(nt)] = \frac{1}{2} + \sum_{k=1}^\infty \frac{2}{(2k-1)π}sin((2k-1)t)$. Expanding the first few terms: $\frac{1}{2} + \frac{2}{π}sin(t) + \frac{2}{3π}sin(3t) + \frac{2}{5π}sin(5t) + ···$
Final Fourier Series for the Square-Wave Function: f(t) = $\frac{1}{2}+\frac{2}{π}[sin(t) + \frac{1}{3}sin(3t) + \frac{1}{5}sin(5t) + ···]$
This Fourier series consists only of sine terms with odd harmonics and no cosine terms. The absence of cosine terms (i.e., $a_n = 0$ for n ≥ 1) arises because the products of f(t) with $cos(nt)$ integrate to zero over [-π, π]; equivalently, $f(t) - \frac{1}{2}$ is odd (except at the jump points), so its cosine integrals vanish.
This is the Fourier series representation of the square-wave function, valid for all t except at the discontinuities, i.e., at t = kπ, where k ∈ ℤ (refer to Figure i for a visual representation).
Adding more terms (higher harmonics) to the Fourier series improves the approximation of the square-wave function, capturing more of its sudden jumps. The series thus expresses the discontinuous square wave as an infinite sum of continuous sine functions. Near the points of discontinuity, however, the partial sums exhibit the Gibbs phenomenon: persistent overshoots that do not disappear as more terms are added.
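The sketch below evaluates partial sums of the series just derived and shows the Gibbs overshoot directly: the maximum just to the right of the jump stays at roughly 1.0895 rather than approaching 1 (the numbers of harmonics N and the sampling grid are illustrative choices):

```python
# Partial sums of the square-wave Fourier series; the maximum just to the right
# of the jump at t = 0 tends to ~1.0895 (Gibbs phenomenon), not to 1.
import numpy as np

def square_partial_sum(t, N):
    y = 0.5 * np.ones_like(t)
    for k in range(1, N+1):
        n = 2*k - 1                               # odd harmonics only
        y += (2/(n*np.pi)) * np.sin(n*t)
    return y

t = np.linspace(1e-4, 0.5, 10000)                 # just to the right of t = 0
for N in (5, 50, 500):
    print(N, square_partial_sum(t, N).max())
```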
The Fourier series provides a unique way to represent a periodic function as an infinite sum of sine and cosine functions. A fundamental property of Fourier series is their uniqueness, which ensures that each periodic function corresponds to one specific set of Fourier coefficients.
Theorem (Uniqueness of Fourier Series): If two functions f(t) and g(t) are periodic with period 2L (or 2π), and they are equal almost everywhere on the interval [−L, L] (or [−π, π]), then their Fourier series representations are identical. That is, the Fourier coefficients an and bn for both functions must also be the same.
$a_n^{(f)}=\frac{1}{L}\int_{-L}^{L} f(t)cos(\frac{nπt}{L})dt = \frac{1}{L}\int_{-L}^{L} g(t)cos(\frac{nπt}{L})dt = a_n^{(g)}$
Similarly, $b_n^{(f)} = b_n^{(g)}$
This result holds because the formulas for the Fourier coefficients are integrals that depend only on the values of the functions over the interval [-L, L]; if f(t) = g(t) (almost everywhere) on that interval, the integrals, and thus the coefficients, must be equal as well.
The Fourier series of a periodic function is unique. This means that: a given function determines exactly one set of Fourier coefficients, and two periodic functions with the same Fourier coefficients must agree (almost everywhere) on the interval.
For the Fourier coefficients to exist and the uniqueness theorem to hold, the functions f(t) and g(t) must be integrable on the interval (for example, piecewise continuous) and periodic with the stated period.
The uniqueness of the Fourier series is a crucial property that underpins much of its utility in mathematical analysis and in applied fields such as engineering and physics. When we compute the Fourier series of a function, we obtain a representation that is unique to that function (consistency in representation). Moreover, if we know the Fourier coefficients, we can reconstruct the original function (except possibly at points of discontinuity) from its Fourier series (reliable reconstruction).