*Mathematics is the music of reason.* — James Joseph Sylvester

An algebraic equation is a mathematical statement that declares or asserts the equality of two algebraic expressions. These expressions are constructed using:

- **Dependent and independent variables**: Variables represent unknown quantities. The independent variable is chosen freely, while the dependent variable changes in response to the independent variable.
- **Constants**: Fixed numerical values that do not change.
- **Algebraic operations**: Operations such as addition, subtraction, multiplication, division, exponentiation, and root extraction.

Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x + 5y$, $y' + y = 4xcos(2x)$, $\frac{dy}{dx} = x^2y + y$, etc.

It involves (e.g., $\frac{dy}{dx} = 3x +5y$):

- **Dependent variables**: variables that depend on one or more other variables (here, y).
- **Independent variables**: variables upon which the dependent variables depend (here, x).
- **Derivatives**: rates at which the dependent variables change with respect to the independent variables, e.g., $\frac{dy}{dx}$.

The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ordinary differential equations (ODEs). It states that if:

- The function f(x, y) (the right-hand side of the ODE y' = f(x, y)) is continuous in a neighborhood around a point (x_{0}, y_{0}), and
- its partial derivative with respect to y, $\frac{∂f}{∂y}$, is also continuous near (x_{0}, y_{0}),

then the initial value problem y' = f(x, y), y(x_{0}) = y_{0} has a unique solution in some interval around x_{0}; that is, exactly one solution curve passes through the point (x_{0}, y_{0}).
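As a numerical illustration of the theorem, Euler's method applied to the text's example y' = 3x + 5y with the (arbitrarily chosen) initial condition y(0) = 0 tracks the unique solution the theorem guarantees; a minimal sketch, with step count and interval as arbitrary choices:

```python
import math

# Euler's method for the IVP y' = 3x + 5y, y(0) = 0 (the text's example
# f(x, y) = 3x + 5y). Exact solution: y = (3/25)e^{5x} - 3x/5 - 3/25.
def f(x, y):
    return 3 * x + 5 * y

def euler(x0, y0, x_end, steps):
    h = (x_end - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)   # follow the slope field prescribed by f
        x += h
    return y

y_num = euler(0.0, 0.0, 0.5, 50_000)
y_exact = (3 / 25) * math.exp(5 * 0.5) - 3 * 0.5 / 5 - 3 / 25
print(abs(y_num - y_exact))  # small: Euler tracks the unique solution
```

Since f and ∂f/∂y = 5 are continuous everywhere, the hypotheses hold at every point, so the numerical curve cannot drift onto a second solution.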

A first-order linear differential equation (ODE) has the general form: a(x)y' + b(x)y = c(x) where y′ is the derivative of y with respect to x, and a(x), b(x), and c(x) are functions of x. If c(x) = 0, the equation is called homogeneous, i.e., a(x)y' + b(x)y = 0.

The equation can also be written in the standard linear form as: y' + p(x)y = q(x) where $p(x)=\frac{b(x)}{a(x)}\text{ and }q(x) = \frac{c(x)}{a(x)}$ (assuming a(x) ≠ 0).
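A small sketch of the standard form, with illustrative constants p(x) = 2 and q(x) = 4 (not from the text): the general solution y = 2 + Ce^{-2x}, obtained by the standard integrating-factor method, satisfies y' + 2y = 4 for every constant C, which the code checks by finite differences.

```python
import math

# Standard-form sketch: y' + p(x) y = q(x) with illustrative constants
# p = 2, q = 4 (not from the text). The integrating factor is
# mu(x) = e^{2x}, giving the general solution y = 2 + C e^{-2x}.
p, q = 2.0, 4.0

def y(x, C=0.0):
    return q / p + C * math.exp(-p * x)

def residual(x, C=0.0, h=1e-6):
    # Central-difference check that y' + p y - q is ~0.
    dy = (y(x + h, C) - y(x - h, C)) / (2 * h)
    return dy + p * y(x, C) - q

print(max(abs(residual(x, C=3.0)) for x in [0.0, 0.5, 1.0, 2.0]))
```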

A second-order linear homogeneous differential equation (ODE) with constant coefficients is a differential equation of the form: y'' + Ay' + By = 0 where:

- y is the dependent variable (a function of the independent variable t),
- y′ and y′′ are the first and second derivatives of y with respect to t,
- t is the independent variable,
- A and B are constants.

This equation is homogeneous, meaning that there are no external forcing terms (like a function of t) on the right-hand side.
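A standard way to solve such equations (not derived in this section) is via the characteristic equation r² + Ar + B = 0: each root r yields a solution y = e^{rt}. A quick numerical sketch with illustrative constants A and B (not from the text):

```python
import cmath

# Characteristic-equation sketch for y'' + A y' + B y = 0.
# A and B are illustrative constants (not from the text).
A, B = 3.0, 2.0  # r^2 + 3r + 2 = 0 factors as (r + 1)(r + 2)

disc = cmath.sqrt(A * A - 4 * B)       # works for complex roots too
r1, r2 = (-A + disc) / 2, (-A - disc) / 2

# y = e^{rt} solves the ODE exactly when r solves the characteristic
# equation: substituting gives (r^2 + A r + B) e^{rt} = 0.
for r in (r1, r2):
    assert abs(r * r + A * r + B) < 1e-12

print(r1.real, r2.real)  # -1.0 -2.0
```

Using `cmath` keeps the same formula valid when A² - 4B < 0, where the roots are complex and the solutions oscillate.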

In this section, we introduce the concept of Fourier series, a powerful mathematical tool used to represent periodic functions as sums of sine and cosine functions. This is particularly useful when dealing with inhomogeneous differential equations of the form: y'' + ay' + by = f(t) where

- f(t) is the input or forcing function.
- y(t) is the response or solution of the differential equation.
- The coefficients a and b describe characteristics of the system, such as damping or stiffness in physical systems.

The input functions f(t) we often encounter and have studied so far are exponential, sine, and cosine functions. **Although these functions might seem quite limited and special, they are in fact fundamental because they allow us to build more complex periodic functions by superposition**.

Due to the linear nature of the differential equation, we can decompose more complicated inputs into sums of sines and cosines, solve for their responses individually, and then combine the results. This principle is known as **superposition** and is a cornerstone of solving linear differential equations.

A Fourier series expands a periodic function f(t) (with period 2π) as an infinite sum of sines and cosines. The general form of a Fourier series is given by: $f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty [a_ncos(nt)+b_nsin(nt)]$ where

- $\frac{a_0}{2}$ is the **constant term**,
- a_{n} and b_{n} are the **Fourier coefficients**, representing the amplitudes of the cosine and sine terms, respectively,
- n is a positive integer that determines the frequency of each trigonometric term (harmonics of the fundamental frequency).
In the Fourier series representation, each sine and cosine term corresponds to a harmonic of the fundamental frequency.

When solving the differential equation y'' + ay' + by = f(t), we can use the principle of superposition to handle more complex input functions. The idea is as follows: if f(t) is a sum of functions, then the response y(t) is the sum of the responses to each function in the sum.

This allows us to:

- **Decompose** f(t) into a sum of sines and cosines using its Fourier series.
- **Solve the differential equation** for each sine and cosine term individually.
- **Sum the individual solutions** to get the total response.
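The decompose/solve/sum recipe can be sketched numerically. Assuming we take the particular (steady-state) response, the response of y'' + ay' + by = f(t) to the input e^{int} is e^{int}/((in)² + a(in) + b), and the responses to cos(nt) and sin(nt) are its real and imaginary parts; the constants a, b and the input coefficients below are illustrative choices, not from the text:

```python
import cmath, math

# Illustrative constants (not from the text); any a, b with
# (in)^2 + a(in) + b != 0 for the n used below would do.
a, b = 1.0, 4.0

def gain(n):
    s = 1j * n
    return 1.0 / (s * s + a * s + b)   # response to e^{int} is gain(n) e^{int}

def response(t, coeffs):
    # Superpose steady-state responses to each a_n cos(nt) + b_n sin(nt).
    y = 0.0
    for n, an, bn in coeffs:
        z = gain(n) * cmath.exp(1j * n * t)
        y += an * z.real + bn * z.imag   # cos -> real part, sin -> imaginary part
    return y

coeffs = [(1, 2.0, 0.5), (3, 0.0, 1.0)]   # a hypothetical two-term input

def f(t):
    return sum(an * math.cos(n * t) + bn * math.sin(n * t) for n, an, bn in coeffs)

# Finite-difference check that y'' + a y' + b y = f(t) at a sample point.
h, t = 1e-4, 0.7
ypp = (response(t + h, coeffs) - 2 * response(t, coeffs) + response(t - h, coeffs)) / h**2
yp = (response(t + h, coeffs) - response(t - h, coeffs)) / (2 * h)
resid = abs(ypp + a * yp + b * response(t, coeffs) - f(t))
print(resid)  # ~0
```

Linearity is what makes the per-term scaling and the final sum legitimate: each term is solved in isolation and the residual of the sum still vanishes.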

The table below shows how different input functions lead to specific responses when solving the differential equation y'' + ay' + by = f(t).

**Table of Inputs and Responses**

| Input | Response | Commentary |
|---|---|---|
| sin(nt) | y_{n}^{(s)}(t) | Response to sine input |
| cos(nt) | y_{n}^{(c)}(t) | Response to cosine input |
| b_{n}sin(nt) | b_{n}y_{n}^{(s)}(t) | Scaled by b_{n}, due to linearity |
| a_{n}cos(nt) | a_{n}y_{n}^{(c)}(t) | Scaled by a_{n}, due to linearity |
| f(t) = $\frac{a_0}{2} + \sum_{n=1}^\infty [a_ncos(nt)+b_nsin(nt)]$ | $\frac{a_0}{2}y^{(0)}(t) + \sum_{n=1}^\infty [a_ny_n^{(c)}(t)+b_ny_n^{(s)}(t)]$ | Sum of all individual responses (y^{(0)} denotes the response to the constant input 1) |

This table illustrates the superposition principle: since the differential equation is linear, the response to a sum of inputs is simply the sum of the responses to each input. If we can solve the equation for simple inputs like sin(nt) and cos(nt), we can construct the solution for any periodic input function by adding up the responses corresponding to its Fourier components.

The goal is to determine the Fourier series of a given function f(t) with period 2π.

Definition. Two functions u(t), v(t), defined on ℝ and periodic with period 2π, are said to be orthogonal on the interval [-π, π] if $\int_{-π}^{π} u(t)v(t)dt = 0$

Theorem. Orthogonality of Sine and Cosine Functions on [-π, π]. The collection of functions
$\begin{cases} sin(nt), & n = 1, 2, 3, ··· \\ cos(mt), & m = 0, 1, 2, ··· \end{cases}$ is **orthogonal** on the interval [-π, π]. This means that, for any two distinct functions from this collection, the integral of their product over [-π, π] is zero:

- $\int_{-π}^{π} sin(nt)sin(mt)dt = 0$ for n ≠ m
- $\int_{-π}^{π} cos(nt)cos(mt)dt = 0$ for n ≠ m
- $\int_{-π}^{π} sin(nt)cos(mt)dt = 0$ for all n, m (∀n, m)
For completeness, recall the special cases where the functions are identical (n = m ≠ 0): $\int_{-π}^{π} sin^2(nt)dt = \int_{-π}^{π} cos^2(nt)dt = π$. For n = 0, $\int_{-π}^{π} cos^2(0·t)dt = \int_{-π}^{π} 1^2dt = 2π.$
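These orthogonality relations are easy to verify numerically; the sketch below approximates the integrals with a midpoint rule (the frequencies 2 and 3 and the grid size N are arbitrary choices):

```python
import math

# Midpoint-rule check of the orthogonality relations on [-pi, pi].
# The frequencies (2 and 3) and grid size N are arbitrary choices.
N = 20000
h = 2 * math.pi / N
ts = [-math.pi + (k + 0.5) * h for k in range(N)]

def integral(u, v):
    return h * sum(u(t) * v(t) for t in ts)

sin2 = integral(lambda t: math.sin(2 * t), lambda t: math.sin(2 * t))   # = pi
mixed = integral(lambda t: math.sin(2 * t), lambda t: math.sin(3 * t))  # = 0
cross = integral(lambda t: math.sin(2 * t), lambda t: math.cos(2 * t))  # = 0

print(sin2, mixed, cross)
```

The uniform grid over a full period makes the midpoint rule essentially exact for trigonometric polynomials, so the zeros and the value π come out to machine precision.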

Orthogonality in this context means that the sine and cosine functions are “independent” of each other in a certain sense. In the Fourier series, we decompose a periodic function into a sum of sines and cosines. This property ensures that each Fourier coefficient (associated with a particular sine or cosine function) is independent of the others and can be calculated separately.

Proof:

Let U_{n} and V_{m} be any two functions from the collection sin(nt) or cos(mt), where n ≠ m. We will show that the integral of their product over [-π, π] is zero: $\int_{-π}^{π} U_n(t)V_m(t)dt = 0$. (The remaining case, sin(nt) against cos(nt) with the same frequency, follows directly: sin(nt)cos(nt) = ½sin(2nt), which integrates to zero over a full period.)

We begin by recalling that sine and cosine functions satisfy the second-order differential equation for harmonic oscillators: U''_{n}(t) + n^{2}U_{n}(t) = 0 ⇒ U''_{n}(t) = -n^{2}U_{n}(t) where U_{n}(t) can represent either sin(nt) or cos(nt).

We are interested in proving that for n ≠ m: $\int_{-π}^{π} U_n(t)V_m(t)dt = 0$

Let's compute the integral $\int_{-π}^{π} U_n''(t)V_m(t)dt$ using integration by parts:

$\int_{-π}^{π} U_n''(t)V_m(t)dt = U_n'(t)V_m(t)\bigg|_{-π}^{π} - \int_{-π}^{π} U_n'(t)V_m'(t)dt$

The boundary term vanishes: every function in the collection, and every derivative of one, is periodic with period 2π, so U_n'(t)V_m(t) takes the same value at t = -π and t = π, and the difference is zero. (For example, sin(±nπ) = 0, while cos(±nπ) = (-1)^{n} at both endpoints.)

This simplifies our expression to: $\int_{-π}^{π} U_n''(t)V_m(t)dt = -\int_{-π}^{π} U_n'(t)V_m'(t)dt$

From the harmonic oscillator equation, $U_n''(t) = -n^2U_n(t)$, we substitute into the integral: $\int_{-π}^{π} U_n''(t)V_m(t)dt = -n^2\int_{-π}^{π} U_n(t)V_m(t)dt$

Similarly, applying the same logic to V_{m}(t), which satisfies $V_m''(t) = -m^2V_m(t)$, we get: $\int_{-π}^{π} V_m''(t)U_n(t)dt = -m^2\int_{-π}^{π} U_n(t)V_m(t)dt$.

By the same reasoning, this also simplifies to: $\int_{-π}^{π} V_m''(t)U_n(t)dt = -\int_{-π}^{π} U_n'(t)V_m'(t)dt$ (integration by parts again; the boundary term is zero).

Since both integrals equal $-\int_{-π}^{π} U_n'(t)V_m'(t)dt$, we can equate them: $-n^2\int_{-π}^{π} U_n(t)V_m(t)dt = -m^2\int_{-π}^{π} U_n(t)V_m(t)dt$

Factoring out the common integral: $(n^2-m^2)\int_{-π}^{π} U_n(t)V_m(t)dt = 0$. Since n ≠ m, we have n^{2}-m^{2} ≠ 0, and therefore $\int_{-π}^{π} U_n(t)V_m(t)dt = 0$. ∎

The Fourier series is a powerful tool that allows us to represent any periodic function as a sum of sine and cosine functions. Specifically, if a function f(t) is periodic with a period of 2π, we can express it as an infinite series involving cosines and sines: $f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_ncos(nt)+b_nsin(nt)$

The goal is to find the coefficients a_{0}, a_{n}, and b_{n}, which describe the contribution of each cosine and sine term to the overall shape of f(t).

Step 1. **Multiply Both Sides by cos(nt) or sin(nt)**

To isolate the coefficient a_{n}, we multiply both sides of the equation by cos(nt):

$f(t)cos(nt) = \frac{a_0}{2}cos(nt) + \sum_{m=1}^\infty [a_mcos(mt)+b_msin(mt)]cos(nt) = \frac{a_0}{2}cos(nt) + \sum_{m=1}^\infty [a_mcos(mt)cos(nt)+b_msin(mt)cos(nt)]$

Similarly, to find b_{n}, multiply both sides by sin(nt):

$f(t)sin(nt) = \frac{a_0}{2}sin(nt) + \sum_{m=1}^\infty [a_mcos(mt)+b_msin(mt)]sin(nt) = \frac{a_0}{2}sin(nt) + \sum_{m=1}^\infty [a_mcos(mt)sin(nt)+b_msin(mt)sin(nt)]$

Step 2. **Integrate Over the Interval [-π, π]**

Integrate both sides over the interval [-π, π]:

$\int_{-π}^{π} f(t)cos(nt)dt = \frac{a_0}{2}\int_{-π}^{π} cos(nt)dt + \sum_{m=1}^\infty [a_m\int_{-π}^{π}cos(mt)cos(nt)dt + b_m\int_{-π}^{π} sin(mt)cos(nt)dt]$

Because of the orthogonality property of sine and cosine functions on [-π, π], almost all terms vanish:

- $\int_{-π}^{π} cos(mt)cos(nt)dt = 0$ for m ≠ n
- $\int_{-π}^{π} sin(mt)cos(nt)dt = 0$ for m ≠ n
- $\int_{-π}^{π} cos^2(nt)dt = π$
- $\int_{-π}^{π} sin^2(nt)dt = π$

Thus, only the term where m = n survives (the constant term also vanishes, since $\int_{-π}^{π} cos(nt)dt = 0$ for n ≥ 1).

Step 3. **Solve for a_{n} and b_{n}**

For a_{n}: $\int_{-π}^{π} f(t)cos(nt)dt = a_n\int_{-π}^{π}cos^2(nt)dt = a_nπ$. Solving for a_{n}, we get: $a_n = \frac{1}{π}\int_{-π}^{π} f(t)cos(nt)dt$

Similarly, for b_{n}: $\int_{-π}^{π} f(t)sin(nt)dt = b_n\int_{-π}^{π}sin^2(nt)dt = b_nπ$. Solving for b_{n}, we get: $b_n = \frac{1}{π}\int_{-π}^{π} f(t)sin(nt)dt$

Step 4. **Calculating a_{0}**

To find the coefficient a_{0}, integrate f(t) over [-π, π]:

$\int_{-π}^{π} f(t)dt = \frac{a_0}{2}\int_{-π}^{π}1dt + \sum_{n=1}^\infty [a_n\int_{-π}^{π} cos(nt)dt + b_n\int_{-π}^{π} sin(nt)dt]$

Since: $\int_{-π}^{π} cos(nt)dt = 0, \int_{-π}^{π} sin(nt)dt = 0$ for n ≥ 1.

$\int_{-π}^{π} f(t)dt = \frac{a_0}{2}\int_{-π}^{π}1dt = \frac{a_0}{2}(2π)$

Solving for a_{0}: $a_0 = \frac{1}{π}\int_{-π}^{π} f(t)dt$

To summarize: a Fourier series expands a periodic function f(t) (with period 2π) as an infinite sum of sines and cosines, $f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty [a_ncos(nt)+b_nsin(nt)]$ where $a_n=\frac{1}{π}\int_{-π}^{π} f(t)cos(nt)dt$ and $b_n = \frac{1}{π}\int_{-π}^{π} f(t)sin(nt)dt$ (the formula for a_{n} with n = 0 recovers a_{0}).
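As a sanity check, the coefficient formulas can be applied numerically to a function whose expansion is known in advance; the test function and grid size below are arbitrary choices:

```python
import math

# Recover the coefficients of a known trigonometric polynomial
# f(t) = 1 + 2cos(3t) - sin(t), so a_0 = 2, a_3 = 2, b_1 = -1
# (the function and grid size are arbitrary test choices).
N = 20000
h = 2 * math.pi / N
ts = [-math.pi + (k + 0.5) * h for k in range(N)]

def f(t):
    return 1.0 + 2.0 * math.cos(3 * t) - math.sin(t)

def a(n):
    # a_n = (1/pi) * integral of f(t) cos(nt) over [-pi, pi]
    return (h / math.pi) * sum(f(t) * math.cos(n * t) for t in ts)

def b(n):
    # b_n = (1/pi) * integral of f(t) sin(nt) over [-pi, pi]
    return (h / math.pi) * sum(f(t) * math.sin(n * t) for t in ts)

print(a(0), a(3), b(1))  # ≈ 2, 2, -1
```

Orthogonality is what makes each coefficient come out independently: every other term of f integrates against cos(nt) or sin(nt) to zero.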

The square-wave function is a classic example used to demonstrate the power of Fourier series. The function oscillates between two levels, 0 and 1, over one period [−π,π]. Formally, the square-wave function f(t) is defined as:

$f(t) = \begin{cases} 0, &-π ≤ t < 0 \\ 1, &0 ≤ t < π \end{cases}$

This function is periodic with a period of 2π, meaning f(t+2π) = f(t) for all t.

Since f(t) is periodic, it has a Fourier series representation of the form:

$f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_ncos(nt)+b_nsin(nt)$

Our task is to compute the Fourier coefficients a_{0}, a_{n}, and b_{n} and express the function as a Fourier series.

$a_0 = \frac{1}{π}\int_{-π}^{π} f(t)dt$

Since f(t) = 0 for -π ≤ t < 0 and f(t) = 1 for 0 ≤ t < π, the integral simplifies to:

$a_0 = \frac{1}{π}\int_{0}^{π} dt = \frac{1}{π}(π-0) = \frac{π}{π} = 1$. Thus, the constant term in the Fourier series is: $\frac{a_0}{2} = \frac{1}{2}$

$a_n = \frac{1}{π}\int_{-π}^{π} f(t)cos(nt)dt = \frac{1}{π}·\int_{0}^{π} cos(nt)dt = \frac{1}{nπ}sin(nt)\bigg|_{0}^{π} = \frac{1}{nπ}[sin(nπ)-sin(0)] = 0$ because sin(nπ) = 0 for all integers n and sin(0) = 0.

$b_n = \frac{1}{π}\int_{-π}^{π} f(t)sin(nt)dt = \frac{1}{π}\int_{0}^{π} sin(nt)dt = \frac{-1}{nπ}cos(nt)\bigg|_{0}^{π} = \frac{-1}{nπ}((-1)^n-1)$

Recall that cos(nπ) = (-1)^{n} and cos(0) = 1

$b_n =\begin{cases} 0, &n = 2k \text{ (even)} \\ \frac{-1}{nπ}(-2)=\frac{2}{nπ}, &n = 2k-1 \text{ (odd)} \end{cases}$

**Constructing the Fourier Series**
Now that we have all the Fourier coefficients, we can write the Fourier series for the square-wave function: $f(t) = \frac{a_0}{2} + \sum_{n=1}^\infty a_ncos(nt)+b_nsin(nt) = \frac{1}{2} + \sum_{k=1}^\infty \frac{2}{(2k-1)π}sin((2k-1)t) = \frac{1}{2} + \frac{2}{π}sin(t) + \frac{2}{3π}sin(3t) + \frac{2}{5π}sin(5t) + ···$

Final Fourier Series for the Square-Wave Function: $\frac{1}{2}+\frac{2}{π}[sin(t) + \frac{1}{3}sin(3t) + \frac{1}{5}sin(5t) + ···]$

This series consists only of sine terms with odd harmonics, and no cosine terms, because f(t) - 1/2 is an odd function (the constant 1/2 accounts for the average value of the square wave). This is the Fourier series representation of the square-wave function, valid for all t except at the discontinuities, i.e., at t = kπ, where k ∈ ℤ (refer to Figure i for a visual representation and aid in understanding it).

Adding more terms (higher harmonics) improves the approximation of the square-wave function, capturing more details of its sudden jumps. This series allows us to express the discontinuous square-wave function as an infinite sum of continuous sine functions.
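A short numerical sketch of this convergence: the partial sums of the series above approach the square wave at points away from the jumps (the sample points and truncation order are arbitrary choices):

```python
import math

# Partial sums of the square-wave series:
# S_K(t) = 1/2 + (2/pi) * [sin t + sin(3t)/3 + ... + sin(Kt)/K], K odd.
def S(t, K):
    return 0.5 + (2 / math.pi) * sum(math.sin(n * t) / n for n in range(1, K + 1, 2))

def square(t):
    # One period: 0 on [-pi, 0), 1 on [0, pi), extended 2pi-periodically.
    return 1.0 if 0 <= (t % (2 * math.pi)) < math.pi else 0.0

# Away from the jumps at t = k*pi, S_K(t) approaches the square wave.
for t in (1.0, -1.0, 2.5):
    print(square(t), S(t, 2001))
```

Near the jumps the partial sums overshoot by a fixed fraction of the jump height however many terms are taken (the Gibbs phenomenon), which is why the comparison is made away from t = kπ.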

The Fourier series is a way to represent a periodic function as an infinite sum of sine and cosine functions. A fundamental property of Fourier series is their uniqueness: If two periodic functions f(t) and g(t) are equal on a given interval (typically [−L, L] or [−π, π] ), then their Fourier series representations are identical.

**Theorem (Uniqueness of Fourier Series):**
If two functions f(t) and g(t) are periodic with period 2L (or 2π), and they are equal almost everywhere on the interval [−L, L] (or [−π, π]), then their Fourier series representations are identical. That is, the Fourier coefficients a_{n} and b_{n} for both functions must also be the same.

$a_n^{(f)}=\frac{1}{L}\int_{-L}^{L} f(t)cos(\frac{nπt}{L})dt = \frac{1}{L}\int_{-L}^{L} g(t)cos(\frac{nπt}{L})dt = a_n^{(g)}$

Similarly, $b_n^{(f)} = b_n^{(g)}$

This result holds because the formulas used to compute the Fourier coefficients are derived from integrals over the interval [-L, L], and if both functions are equal on that interval, the integrals (and thus the coefficients) will be equal as well.

The Fourier series of a periodic function is unique. This means that:

- **A given periodic function corresponds to one and only one set of Fourier coefficients.**
- If two periodic functions have the same Fourier coefficients, they must be equal almost everywhere on the interval; that is, they can differ at a finite number of points (e.g., points of discontinuity) and still have identical Fourier coefficients.

For the Fourier coefficients to exist, the functions f(t) and g(t) must be:

- **Piecewise continuous**: they can have a finite number of discontinuities, but no infinite discontinuities within the interval.
- **Absolutely integrable**: the integrals used to compute the Fourier coefficients must converge.

This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].

- NPTEL-NOC IITM, Introduction to Galois Theory.
- Algebra, Second Edition, by Michael Artin.
- LibreTexts: Calculus and Calculus 3e (Apex); Abstract and Geometric Algebra; Abstract Algebra: Theory and Applications (Judson).
- Field and Galois Theory, by Patrick Morandi. Springer.
- Michael Penn and MathMajor.
- Contemporary Abstract Algebra, by Joseph A. Gallian.
- YouTube's Andrew Misseldine: Calculus, College Algebra, and Abstract Algebra.
- MIT OpenCourseWare [18.03 Differential Equations, Spring 2006], YouTube by MIT OpenCourseWare.
- Calculus Early Transcendentals: Differential & Multi-Variable Calculus for Social Sciences.