“Beware that, when fighting monsters, you yourself do not become a monster… for when you gaze long into the abyss, the abyss gazes also into you,” Friedrich W. Nietzsche.

An algebraic equation is a mathematical statement that declares or asserts the equality of two algebraic expressions. These expressions are constructed using:

- **Dependent and independent variables**. Variables represent unknown quantities. The independent variable is chosen freely, while the dependent variable changes in response to it.
- **Constants**. Fixed numerical values that do not change.
- **Algebraic operations**. Operations such as addition, subtraction, multiplication, division, exponentiation, and root extraction.

Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x + 5y$, $y' + y = 4x\cos(2x)$, $\frac{dy}{dx} = x^2y + y$, etc.

It involves (e.g., $\frac{dy}{dx} = 3x +5y$):

- **Dependent variables**: variables that depend on one or more other variables (y).
- **Independent variables**: variables upon which the dependent variables depend (x).
- **Derivatives**: rates at which the dependent variables change with respect to the independent variables, $\frac{dy}{dx}$.

The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ordinary differential equations (ODEs). It states that if:

- The function f(x, y) (the right-hand side of the ODE y' = f(x, y)) is continuous in a neighborhood around a point (x_{0}, y_{0}), and
- its partial derivative with respect to y, $\frac{∂f}{∂y}$, is also continuous near (x_{0}, y_{0}).

Then the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point (x_{0}, y_{0}).
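As a quick illustration with SymPy, we can solve an initial value problem that satisfies the theorem's hypotheses — a sketch using the example equation above with an assumed initial condition y(0) = 0 (here f(x, y) = 3x + 5y and ∂f/∂y = 5 are continuous everywhere, so a unique solution exists through (0, 0)):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y' = 3x + 5y with the assumed initial condition y(0) = 0
ode = sp.Eq(y(x).diff(x), 3*x + 5*y(x))
sol = sp.dsolve(ode, y(x), ics={y(0): 0})
print(sol.rhs)  # the unique solution through the origin
```

Substituting the result back into the equation confirms it is a genuine solution of the IVP.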

A first-order linear differential equation (ODE) has the general form: a(x)y' + b(x)y = c(x) where y' is the derivative of y with respect to x, and a(x), b(x), and c(x) are functions of x. If c(x) = 0, the equation is called homogeneous, i.e., a(x)y' + b(x)y = 0.

The equation can also be written in the standard linear form as: y' + p(x)y = q(x) where $p(x)=\frac{b(x)}{a(x)}\text{ and }q(x) = \frac{c(x)}{a(x)}$.
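Equations in standard form are solved with the integrating factor $e^{\int p(x)dx}$. A minimal SymPy sketch, using the assumed example y' + 2y = x (so p(x) = 2, q(x) = x):

```python
import sympy as sp

x = sp.symbols('x')
p, q = sp.Integer(2), x             # assumed example: y' + 2y = x
mu = sp.exp(sp.integrate(p, x))     # integrating factor e^{∫p dx} = e^{2x}
y_p = sp.integrate(mu * q, x) / mu  # a particular solution (add C/mu for the general one)
print(sp.simplify(y_p))
```

Substituting y_p back into y' + 2y verifies it reproduces q(x) = x.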

A second-order linear homogeneous differential equation (ODE) with constant coefficients is a differential equation of the form: y'' + Ay' + By = 0 where:

- y is the dependent variable (a function of the independent variable t),
- y′ and y′′ are the first and second derivatives of y with respect to t,
- t is the independent variable,
- A and B are constants.

This equation is homogeneous, meaning that there are no external forcing terms (like a function of t) on the right-hand side.
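Such equations are solved via the roots of the characteristic polynomial r² + Ar + B = 0. A SymPy sketch with assumed sample constants A = 3, B = 2 (characteristic roots r = -1, -2):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

A, B = 3, 2  # assumed sample constants; roots of r^2 + 3r + 2 are -1, -2
ode = sp.Eq(y(t).diff(t, 2) + A*y(t).diff(t) + B*y(t), 0)
sol = sp.dsolve(ode, y(t))
print(sol.rhs)  # linear combination of exp(-t) and exp(-2*t)
```

The general solution is a linear combination of the two exponential modes, one per characteristic root.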

The Laplace Transform of a function f(t), where t ≥ 0, is defined as $\mathcal{L}(f(t)) = \int_{0}^{∞} f(t)e^{-st}dt = F(s)$.

One of the most important properties of the Laplace Transform is linearity, which states: $\mathcal{L}(af(t)+bg(t)) = a\mathcal{L}(f(t))+b\mathcal{L}(g(t))$

| Function | Laplace Transform |
|---|---|
| u(t) | $\mathcal{L}(u(t)) = \frac{1}{s}, s > 0$ |
| $e^{at}$ | $\mathcal{L}(e^{at}) = \frac{1}{s - a}, s > a$ |
| $e^{(a + bi)t}$ | $\mathcal{L}(e^{(a + bi)t}) = \frac{1}{s - (a + bi)}, s > a$ |
| $\cos(\omega t)$ | $\mathcal{L}(\cos(\omega t)) = \frac{s}{s^2 + \omega^2}, s > 0$ |
| $\sin(\omega t)$ | $\mathcal{L}(\sin(\omega t)) = \frac{\omega}{s^2 + \omega^2}, s > 0$ |
| $t^n$ | $\mathcal{L}(t^n) = \frac{n!}{s^{n+1}}, s > 0$ |
| $u(t-a)$ | $\mathcal{L}(u(t-a)) = \frac{e^{-as}}{s}, s > 0$ |
| $\delta(t-a)$ | $\mathcal{L}(\delta(t-a)) = e^{-as}, a \geq 0$ |
| $\frac{1}{t}$ | $\mathcal{L}\left(\frac{1}{t}\right) = \text{not defined}$ |
| $e^{-bt} \cos(\omega t)$ | $\mathcal{L}(e^{-bt} \cos(\omega t)) = \frac{s + b}{(s + b)^2 + \omega^2}, s > -b$ |
| $e^{-bt} \sin(\omega t)$ | $\mathcal{L}(e^{-bt} \sin(\omega t)) = \frac{\omega}{(s + b)^2 + \omega^2}, s > -b$ |
| $e^{at}f(t)$ | $\mathcal{L}(e^{at}f(t)) = F(s-a)$ |

This is the Exponential Shift Theorem, indicating that multiplying a function by an exponential term shifts its Laplace Transform.

Besides, $\mathcal{L}(f'(t)) = sF(s)-f(0), \mathcal{L}(f''(t)) = s^2F(s)-sf(0)-f'(0)$
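A few table entries can be spot-checked with SymPy's `laplace_transform` — a sketch, assuming s, ω > 0 so the defining integrals converge:

```python
import sympy as sp

t, s, w = sp.symbols('t s omega', positive=True)
L = lambda f: sp.laplace_transform(f, t, s, noconds=True)

F_cos = L(sp.cos(w*t))  # expect s/(s^2 + omega^2)
F_sin = L(sp.sin(w*t))  # expect omega/(s^2 + omega^2)
F_t3  = L(t**3)         # expect 3!/s^4 = 6/s^4
F_exp = L(sp.exp(2*t))  # expect 1/(s - 2)
print(F_cos, F_sin, F_t3, F_exp)
```

`noconds=True` drops the convergence conditions and returns only the transform itself.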

- $\mathcal{L}(f(at))$ and $\mathcal{L}(\sin(bt))$

The Laplace transform of f(at) is given by: $\mathcal{L}(f(at)) = \int_{0}^{∞} e^{-st}f(at)dt$. We perform a change of variables, letting u = at. Then du = a\,dt. When t = 0, u = 0; when t = ∞, u = ∞.

Substituting these into the integral, we get: $\int_{0}^{∞} e^{-s\frac{u}{a}}f(u)\frac{du}{a} = \frac{1}{a} \int_{0}^{∞} e^{-\frac{s}{a}u}f(u)du =[\text{Recognizing that the integral is } F \text{ evaluated at } \frac{s}{a}] \frac{1}{a}F(\frac{s}{a})$

$\mathcal{L}(\sin(bt)) =[\text{Using the scaling result } \mathcal{L}(f(bt)) = \frac{1}{b}F(\frac{s}{b}) \text{ with } f(t) = \sin(t)] \frac{1}{b}F(\frac{s}{b}) =[\sin(t) \leadsto \frac{1}{s^2+1}] \frac{1}{b}·\frac{1}{(\frac{s}{b})^2+1} = \frac{1}{b}·\frac{b^2}{s^2+b^2} = \frac{b}{s^2+b^2}$
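The result can be checked numerically by evaluating the defining integral with SciPy's `quad` at assumed sample values b = 3, s = 2:

```python
import math
from scipy.integrate import quad

# Numeric check of L(sin(bt)) = b/(s^2 + b^2) at assumed sample values.
b, s = 3.0, 2.0
val, _err = quad(lambda t: math.exp(-s*t) * math.sin(b*t), 0, math.inf)
print(val, b / (s**2 + b**2))  # both should be close to 3/13
```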

- $\mathcal{L}(\frac{f(t)}{t})$

We aim to calculate $\mathcal{L}(\frac{f(t)}{t}) = \int_{0}^{∞} e^{-st}\frac{f(t)}{t}dt$

We can express $\frac{e^{-st}}{t}$ as an integral over u: $\int_{s}^{∞} e^{-ut}du = \frac{1}{-t}e^{-ut}\bigg|_{s}^{∞} = 0 -(\frac{1}{-t})e^{-st} = \frac{e^{-st}}{t}$

$\mathcal{L}(f(t)) = F(s) ↭ F(u) = \int_{0}^{∞} e^{-ut}f(t)dt ↭[\text{Integrating both sides from s to ∞}] \int_{s}^{∞} F(u)du = \int_{s}^{∞} \int_{0}^{∞} e^{-ut}f(t)dtdu ↭[\text{Interchange the Order of Integration}] \int_{s}^{∞} F(u)du = \int_{0}^{∞} f(t) (\int_{s}^{∞} e^{-ut}du)dt ↭[\text{Using the previous result}] \int_{s}^{∞} F(u)du = \int_{0}^{∞} f(t)\frac{e^{-st}}{t}dt ↭ \mathcal{L}(\frac{f(t)}{t}) = \int_{s}^{∞} F(u)du$

- $\mathcal{L}(\frac{\sin(t)}{t})$

We know that $\mathcal{L}(\sin(t)) = \frac{1}{s^2+1}$, or, using a dummy variable, $F(u) = \frac{1}{u^2+1}$; from the previous exercise, $\mathcal{L}(\frac{f(t)}{t}) = \int_{s}^{∞} F(u)du$

$\mathcal{L}(\frac{\sin(t)}{t}) = \int_{s}^{∞} \frac{1}{u^2+1}du = \tan^{-1}(u)\bigg|_{s}^{∞} = \tan^{-1}(∞)-\tan^{-1}(s) = \frac{π}{2}-\tan^{-1}(s)$
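A numeric sanity check of this formula with SciPy, at the assumed sample point s = 1 (where π/2 − arctan(1) = π/4):

```python
import math
from scipy.integrate import quad

# Numeric check of L(sin(t)/t) = pi/2 - arctan(s) at the assumed point s = 1.
s = 1.0
# sin(t)/t -> 1 as t -> 0, so guard against division by zero at the endpoint.
integrand = lambda t: math.exp(-s*t) * (math.sin(t)/t if t > 0 else 1.0)
val, _err = quad(integrand, 0, math.inf)
print(val, math.pi/2 - math.atan(s))  # both should be close to pi/4
```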

- We aim to find the inverse Laplace Transform of the following expression: $\frac{1}{s(s+3)}$.

**Step 1: Partial Fraction Decomposition**. Express the rational function as a sum of simpler fractions:
$\frac{1}{s(s+3)} =[\text{Using partial fraction decomposition}] \frac{A}{s} + \frac{B}{s+3}$

Now, we need to find A and B. To do this, we rewrite the expression: $\frac{1}{s(s+3)} = \frac{A(s+3)+Bs}{s(s+3)} = \frac{As + 3A + Bs}{s(s+3)} =[\text{Grouping terms}] \frac{(A + B)s + 3A}{s(s+3)}$

**Step 2: Solve for A and B**

Expand and group like terms: 1 = (A + B)s + 3A ⇒ A + B = 0 (coefficient of s), 3A = 1 (constant term). Solving for A and B: $A = \frac{1}{3}, B = -\frac{1}{3}$

**Step 3. Write the Decomposed Form and Apply Inverse Laplace Transform**

Using standard inverse Laplace Transforms: $\frac{1}{s(s+3)} = \frac{1/3}{s} - \frac{1/3}{s+3} \leadsto_{\mathcal{L}^{-1}} \frac{1}{3} -\frac{1}{3}e^{-3t}$, where we use $1 \leadsto \frac{1}{s}$ and $e^{at} \leadsto \frac{1}{s-a}$ for s > a. The inverse Laplace Transform of $\frac{1}{s(s+3)}$ is: $\frac{1}{s(s+3)} \leadsto_{\mathcal{L}^{-1}} \frac{1}{3}(1 -e^{-3t})$
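Both the partial fraction decomposition and the inverse transform can be checked with SymPy:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
expr = 1 / (s*(s + 3))
print(sp.apart(expr, s))  # partial fractions: 1/(3*s) - 1/(3*(s + 3))
f = sp.inverse_laplace_transform(expr, s, t)
print(sp.simplify(f))     # expect 1/3 - exp(-3*t)/3
```

Declaring t as positive lets SymPy drop the Heaviside step factor that would otherwise appear in the result.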

- Compute the inverse Laplace Transform of the following expression: $\frac{2}{(s+1)^3}$.

Recall that $\mathcal{L}(e^{at}f(t)) = F(s-a), \mathcal{L}(t^{n}) = \frac{n!}{s^{n+1}}$. Our example is a translation with a = -1 of $\frac{2}{s^3} = \mathcal{L}(t^2)$ (n = 2).

$\mathcal{L}^{-1}(\frac{2}{(s+1)^3}) = e^{-t}t^2$

- Compute the inverse Laplace Transform of the following expression: $\frac{s}{s^2-2s+5}$

$\frac{s}{s^2-2s+5} = \frac{s}{(s-1)^2+2^2} = \frac{(s-1)+1}{(s-1)^2+2^2} = \frac{s-1}{(s-1)^2+2^2} + \frac{1}{(s-1)^2+2^2}$

Recall $\mathcal{L}(\sin(\omega t)) = \frac{\omega}{s^2+\omega^2}, \mathcal{L}(\cos(\omega t)) = \frac{s}{s^2+\omega^2}, \mathcal{L}(e^{at}f(t)) = F(s-a)$

Applying the translation property again (with a = 1) and adjusting for the coefficient of the sine term: $\mathcal{L}^{-1}(\frac{s-1}{(s-1)^2+2^2} + \frac{1}{(s-1)^2+2^2})= e^t\cos(2t)+ \frac{1}{2}e^t\sin(2t)$
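We can confirm the answer by transforming it forward with SymPy and recovering the original rational function:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
# Candidate inverse transform from the completing-the-square computation.
f = sp.exp(t)*sp.cos(2*t) + sp.exp(t)*sp.sin(2*t)/2
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F))  # expect s/(s**2 - 2*s + 5)
```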

This is a fundamental result in Laplace Transform theory and is widely used in solving differential equations and analyzing systems.

Let’s calculate the Laplace Transform of t^{n}, where n is a non-negative integer. Recall that the Laplace Transform of a function f(t) is defined as:

$\mathcal{L}(f(t)) = F(s) = \int_{0}^{∞} f(t)e^{-st}dt$ where s is a complex number, typically with Re(s) > 0 to ensure convergence.

For f(t) = t^{n}, the Laplace Transform becomes:

$\mathcal{L}(t^n) = \int_{0}^{∞} t^ne^{-st}dt$

To compute the integral, we will use the method of integration by parts, which is based on the formula: ∫udv =uv − ∫vdu.

A common strategy is to choose u as a function that becomes simpler when differentiated, and dv as a function that remains manageable when integrated, e.g., u = t^{n}. We use integration by parts repeatedly to reduce the power of t.

u = t^{n} ⇒ du = nt^{n-1}dt, and dv = e^{-st}dt, so $v = \frac{e^{-st}}{-s}$.

Using the integration by parts formula ∫udv =[Part A] uv − [Part B] ∫vdu, we get:

[Part A] $t^n\frac{e^{-st}}{-s}\bigg|_{0}^{∞} $

[Part B] $- \int_{0}^{∞} nt^{n-1}\frac{e^{-st}}{-s}dt = [🚀]$

[Part A] $\lim_{t \to ∞} t^n\frac{e^{-st}}{-s} = \frac{1}{-s}\lim_{t \to ∞} \frac{t^n}{e^{st}} = 0$ by n applications of L’Hospital’s rule (s > 0): as t → ∞, e^{st} grows exponentially while t^{n} grows only polynomially. At t = 0, the term is also zero: $0^n·\frac{e^0}{-s} = 0$

[🚀] = $0 + \frac{n}{s}\int_{0}^{∞} t^{n-1}e^{-st}dt = \frac{n}{s}\mathcal{L}(t^{n-1})$, since [Part A] vanishes at both limits. Thus, we have a recursive relation.

We can apply the recursive formula repeatedly: $\mathcal{L}(t^n) = \frac{n}{s}\mathcal{L}(t^{n-1}) = \frac{n}{s}\frac{n-1}{s}\mathcal{L}(t^{n-2}) = ··· = \frac{n(n-1)···1}{s^n}\mathcal{L}(t^0) = \frac{n!}{s^n}\mathcal{L}(1) = \frac{n!}{s^n}\frac{1}{s} = \frac{n!}{s^{n+1}}$ for s > 0.
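The closed form can be confirmed by evaluating the defining integral symbolically, here with an assumed sample exponent n = 4:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
n = 4  # assumed sample exponent
# Evaluate the defining integral of L(t^n) directly.
F = sp.integrate(t**n * sp.exp(-s*t), (t, 0, sp.oo))
print(sp.simplify(F))  # expect 4!/s^5 = 24/s^5
```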

This result is fundamental and provides a direct way to compute the Laplace Transform of any non-negative integer power of t.

Recall that the Gamma function is defined as: $Γ(z) = \int_{0}^{∞} e^{-t}t^{z-1}dt$ for Re(z) > 0

$Γ(1) = \int_{0}^{∞} e^{-t}dt = 1, Γ(n+1) = \int_{0}^{∞} e^{-t}t^{n}dt$

Integrating by parts. Let u = t^{n} ⇒ du = nt^{n-1}dt, dv = e^{-t}dt, v = -e^{-t}

$Γ(n+1) = \int_{0}^{∞} e^{-t}t^{n}dt = -e^{-t}t^n\bigg|_{0}^{∞}$

This first part is zero because the exponential decays exponentially, meaning faster than the polynomial grows.

$-\int_{0}^{∞} -e^{-t}·nt^{n-1}dt = nΓ(n)$.

This can be applied recursively to get Γ(n+1) = nΓ(n) = n(n-1)Γ(n-1) = ··· = n!Γ(1) = n!
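The recursion and the factorial identity are easy to verify numerically with the standard library's `math.gamma`:

```python
import math

# Gamma(n+1) = n*Gamma(n) = n! for positive integers n.
for n in range(1, 8):
    assert math.isclose(math.gamma(n + 1), n * math.gamma(n))
    assert math.isclose(math.gamma(n + 1), math.factorial(n))
print(math.gamma(5))  # 24.0, since Gamma(5) = 4!
```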

$\mathcal{L}(f(t)) = F(s) = \int_{0}^{∞} f(t)e^{-st}dt$ where s is a complex number, typically with Re(s) > 0 to ensure convergence.

By substituting st=u ⇒ sdt = du ⇒$dt = \frac{du}{s} $, we can write the integral as:

$\int_{0}^{∞} f(t)e^{-st}dt = \int_{0}^{∞} t^ne^{-st}dt = \int_{0}^{∞} e^{-u}\frac{u^n}{s^n}\frac{du}{s} = \frac{1}{s^{n+1}}\int_{0}^{∞} e^{-u}·u^ndu = \frac{1}{s^{n+1}}Γ(n+1) = \frac{n!}{s^{n+1}}$

Recall that the Laplace Transform of a function f(t) is given by: $\mathcal{L}(f(t)) = F(s) = \int_{0}^{∞} f(t)e^{-st}dt$

To ensure the Laplace Transform of a function f(t) exists (i.e., the integral converges to a finite value), we need to impose certain conditions on the behavior of f(t) as t grows. Specifically, **the function should not grow too rapidly for large t, otherwise, the exponential decay of e ^{-st} may not be enough to ensure convergence, i.e., the integral involved in the Laplace Transform may not converge**.

A function f(t) is said to be of exponential order α if there exist positive constants M, α, and t_{0} such that |f(t)| ≤ Me^{αt} for all t ≥ t_{0}. This condition means that f(t) does not grow faster than an exponential function.

For $\mathcal{L}(f(t))$ to exist:

- **Piecewise Continuity**: f(t) must be piecewise continuous on the interval [0, ∞). This means that f(t) is continuous except at a finite number of points where it may have finite jump discontinuities (no vertical asymptotes).
- **Exponential order**: f(t) must be of exponential order α, a common condition that ensures the Laplace Transform exists.

Assuming f(t) is of exponential order α, we have: $|f(t)| ≤ Me^{αt}⇒ |f(t)e^{-st}| ≤ Me^{αt}e^{-st} = Me^{-(s-α)t}$ for t ≥ t_{0}

- If s > α, then (s -α)> 0, and $e^{-(s-α)t}$ decays exponentially as t → ∞. Consequently, the integrand $f(t)e^{-st}$ approaches zero, and the improper integral converges.
- However, if s ≤ α, the exponent -(s -α)t is non-negative or zero, and the integrand may not decay, or might even grow, and this can cause the integral to diverge.

$\int_{t_0}^{∞} |e^{-st}f(t)|dt ≤ \int_{t_0}^{∞} e^{-st}Me^{αt}dt = M\frac{e^{(α-s)t}}{α-s}\bigg|_{t_0}^{∞} =[\text{s > α}] \frac{Me^{(α-s)t_0}}{s-α}$ → 0 as s → ∞

$|F(s)| = |\int_{0}^{∞} e^{-st}f(t)dt| ≤ |\int_{0}^{t_0} e^{-st}f(t)dt| + |\int_{t_0}^{∞} e^{-st}f(t)dt| ≤ \text{Finite Constant} + \frac{Me^{(α-s)t_0}}{s-α}$, so F(s) is finite for every s > α.

$|\int_{0}^{t_0} e^{-st}f(t)dt|$ is finite because f(t) is piecewise continuous on [0, t_{0}].

- **Sine Function**: The sine function oscillates between -1 and 1, so it is bounded: |sin(t)| ≤ 1 = 1·e^{0·t}. Hence sine is of exponential order with α = 0, indicating that it does not grow exponentially at all.
- **Polynomial functions**: t^{n} is of exponential order α for any α > 0: t^{n} ≤ Me^{αt} for some M and all t > t_{0}.

Observation: For any positive α, the exponential function e^{αt} eventually outgrows any polynomial t^{n} as t → ∞.

$\frac{t^n}{e^{αt}}≤ M$ for some M, and this is true because $\lim_{t \to ∞} \frac{t^n}{e^{αt}} = 0$ by n applications of L’Hospital’s rule (after n differentiations the numerator is the constant n! while the denominator remains an exponential), since e^{αt} grows faster than any polynomial t^{n}.
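A quick numeric illustration: even a slowly growing exponential (assumed α = 0.1) eventually dominates a high-degree polynomial (assumed n = 10):

```python
import math

# t^n / e^{alpha*t} with assumed sample values n = 10, alpha = 0.1.
ratio = lambda t: t**10 * math.exp(-0.1 * t)

# The ratio rises at first, but collapses toward zero for large t.
for t in (100, 500, 1000, 2000):
    print(t, ratio(t))
```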

**Reciprocal Function**. 1/t fails the piecewise-continuity condition: as t → 0⁺ it grows unboundedly (a vertical asymptote, not a finite jump discontinuity). Near t = 0 the integrand of its Laplace Transform behaves like 1/t, which is not integrable on [0, ε], so the integral $\int_{0}^{∞} e^{-st}\frac{1}{t}dt$ diverges. Because $\frac{1}{t}$ behaves too wildly near t = 0, the Laplace Transform of $\frac{1}{t}$ does not exist.

**Functions Growing Faster Than Exponential**

If f(t) grows faster than any exponential function (e.g., f(t)= $e^{t^2}$), the Laplace Transform may not exist.

$f(t)e^{-st} = e^{t^2-st} = e^{t(t-s)}$. As t → ∞, t(t-s) → ∞, so $f(t)e^{-st}$ grows without bound; hence the Laplace Transform does not exist.

In practice, the exponential growth condition ensures that the Laplace Transform can handle a wide variety of useful functions, including polynomials, exponentials, and trigonometric functions, all of which appear frequently in solving differential equations and modeling physical systems.

This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].

- NPTEL-NOC IITM, Introduction to Galois Theory.
- Algebra, Second Edition, by Michael Artin.
- LibreTexts, Calculus and Calculus 3e (Apex). Abstract and Geometric Algebra, Abstract Algebra: Theory and Applications (Judson).
- Field and Galois Theory, by Patrick Morandi. Springer.
- Michael Penn, and MathMajor.
- Contemporary Abstract Algebra, by Joseph A. Gallian.
- YouTube’s Andrew Misseldine: Calculus. College Algebra and Abstract Algebra.
- MIT OpenCourseWare [18.03 Differential Equations, Spring 2006], YouTube by MIT OpenCourseWare.
- Calculus Early Transcendentals: Differential & Multi-Variable Calculus for Social Sciences.