“The scariest monsters are the ones that lurk within our souls,” Edgar Allan Poe.

An algebraic equation is a mathematical statement that declares or asserts the equality of two algebraic expressions. These expressions are constructed using:

- **Dependent and independent variables**. Variables represent unknown quantities. The independent variable is chosen freely, while the dependent variable changes in response to the independent variable.
- **Constants**. Fixed numerical values that do not change.
- **Algebraic operations**. Operations such as addition, subtraction, multiplication, division, exponentiation, and root extraction.

Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x + 5y$, $y' + y = 4x\cos(2x)$, $\frac{dy}{dx} = x^2y + y$, etc.

It involves (e.g., $\frac{dy}{dx} = 3x +5y$):

- **Dependent variables**: variables that depend on one or more other variables (y).
- **Independent variables**: variables upon which the dependent variables depend (x).
- **Derivatives**: rates at which the dependent variables change with respect to the independent variables, $\frac{dy}{dx}$.

The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order ordinary differential equations (ODEs). It states that if:

- The function f(x, y) (the right-hand side of the ODE y' = f(x, y)) is continuous in a neighborhood around a point (x_{0}, y_{0}), and
- Its partial derivative with respect to y, $\frac{∂f}{∂y}$, is also continuous near (x_{0}, y_{0}).

Then the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point (x_{0}, y_{0}).

A first-order linear differential equation (ODE) has the general form: a(x)y' + b(x)y = c(x) where y′ is the derivative of y with respect to x, and a(x), b(x), and c(x) are functions of x. If c(x) = 0, the equation is called homogeneous, i.e., a(x)y' + b(x)y = 0.

The equation can also be written in the standard linear form as: y’ + p(x)y = q(x) where $p(x)=\frac{b(x)}{a(x)}\text{ and }q(x) = \frac{c(x)}{a(x)}$
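The standard form can be integrated numerically. Below is a minimal sketch (not part of the original notes): forward Euler applied to y' + p(x)y = q(x), rewritten as y' = q − p·y. The choices p(x) = q(x) = 1 and y(0) = 0 are illustrative; that problem has the known exact solution y = 1 − e^{−x}.

```python
import math

# Forward-Euler sketch for the standard form y' + p(x)y = q(x),
# rewritten as y' = q(x) - p(x)*y. Parameters are illustrative.
def euler(p, q, y0, x_end, n=100_000):
    h = x_end / n
    y = y0
    for i in range(n):
        x = i * h
        y += h * (q(x) - p(x) * y)   # one Euler step of y' = q - p*y
    return y

# y' + y = 1, y(0) = 0 has exact solution y = 1 - e^{-x}
y = euler(lambda x: 1.0, lambda x: 1.0, 0.0, 2.0)
print(abs(y - (1 - math.exp(-2.0))) < 1e-4)  # True
```

Forward Euler is first-order accurate, so the tolerance here is deliberately loose.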

A second-order linear homogeneous differential equation (ODE) with constant coefficients is a differential equation of the form: y'' + Ay' + By = 0 where:

- y is the dependent variable (a function of the independent variable t),
- y′ and y′′ are the first and second derivatives of y with respect to t,
- t is the independent variable,
- A and B are constants.

This equation is homogeneous, meaning that there are no external forcing terms (like a function of t) on the right-hand side.

The Laplace Transform of a function f(t), where t ≥ 0, is defined as $\mathcal{L}(f(t)) = \int_{0}^{∞} f(t)e^{-st}dt = F(s)$.

One of the most important properties of the Laplace Transform is linearity, which states: $\mathcal{L}(af(t)+bg(t)) = a\mathcal{L}(f(t))+b\mathcal{L}(g(t))$
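Linearity can be checked numerically. The sketch below (an illustrative choice, not from the original notes) approximates the Laplace integral with a midpoint rule, truncated at T = 60, which is harmless here since e^{−st} decays fast:

```python
import math

# Midpoint-rule approximation of L(f)(s) = integral_0^T f(t) e^{-st} dt.
# T and n are illustrative; the true upper limit is infinity.
def laplace(f, s, T=60.0, n=100_000):
    h = T / n
    return sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h)
               for i in range(n)) * h

a, b, s = 2.0, 3.0, 4.0
lhs = laplace(lambda t: a * math.sin(t) + b * math.exp(t), s)
rhs = a * laplace(math.sin, s) + b * laplace(math.exp, s)
print(abs(lhs - rhs) < 1e-9)                       # linearity holds
print(abs(laplace(math.sin, s) - 1 / 17) < 1e-6)   # L(sin t)(4) = 1/(4^2+1)
```

Note that s = 4 is chosen larger than 1 so that the e^{t} term still converges.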

| Function | Laplace Transform |
|---|---|
| $u(t)$ | $\mathcal{L}(u(t)) = \frac{1}{s},\ s > 0$ |
| $e^{at}$ | $\mathcal{L}(e^{at}) = \frac{1}{s - a},\ s > a$ |
| $e^{(a + bi)t}$ | $\mathcal{L}(e^{(a + bi)t}) = \frac{1}{s - (a + bi)},\ s > a$ |
| $\cos(\omega t)$ | $\mathcal{L}(\cos(\omega t)) = \frac{s}{s^2 + \omega^2},\ s > 0$ |
| $\sin(\omega t)$ | $\mathcal{L}(\sin(\omega t)) = \frac{\omega}{s^2 + \omega^2},\ s > 0$ |
| $t^n$ | $\mathcal{L}(t^n) = \frac{n!}{s^{n+1}},\ s > 0$ |
| $u(t-a)$ | $\mathcal{L}(u(t-a)) = \frac{e^{-as}}{s},\ s > 0$ |
| $\delta(t-a)$ | $\mathcal{L}(\delta(t-a)) = e^{-as},\ a \geq 0$ |
| $\frac{1}{t}$ | $\mathcal{L}\left(\frac{1}{t}\right)$ is not defined (the integral diverges at $t = 0$) |
| $e^{-bt} \cos(\omega t)$ | $\mathcal{L}(e^{-bt} \cos(\omega t)) = \frac{s + b}{(s + b)^2 + \omega^2},\ s > -b$ |
| $e^{-bt} \sin(\omega t)$ | $\mathcal{L}(e^{-bt} \sin(\omega t)) = \frac{\omega}{(s + b)^2 + \omega^2},\ s > -b$ |
| $e^{at}f(t)$ | $\mathcal{L}(e^{at}f(t)) = F(s-a)$ |

This is the Exponential Shift Theorem, indicating that multiplying a function by an exponential term shifts its Laplace Transform.

In addition, $\mathcal{L}(f'(t)) = sF(s)-f(0)$ and $\mathcal{L}(f''(t)) = s^2F(s)-sf(0)-f'(0)$.
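The derivative rule can also be verified numerically. A sketch (illustrative integrator and parameters) with f(t) = sin(t), so f'(t) = cos(t) and f(0) = 0:

```python
import math

# Midpoint-rule Laplace transform; T and n are illustrative choices.
def laplace(f, s, T=60.0, n=100_000):
    h = T / n
    return sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h)
               for i in range(n)) * h

s = 2.0
F = laplace(math.sin, s)            # F(s) for f(t) = sin(t)
lhs = laplace(math.cos, s)          # L(f') with f'(t) = cos(t)
rhs = s * F - math.sin(0.0)         # s*F(s) - f(0)
print(abs(lhs - rhs) < 1e-4)  # True
```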

Convolution is an important operation that combines two functions to produce a third function. The convolution of two functions f(t) and g(t) is denoted by (f∗g)(t) and is defined as: (f * g)(t) = $\int_{0}^{t} f(u)g(t-u)du$. It is widely used in various fields such as engineering, physics, and applied mathematics.

The variable u is a dummy variable of integration that runs from 0 to t. f(u) represents the input function evaluated at time u. g(t−u) represents the system’s response shifted by t−u. The convolution integral sums up the product f(u)·g(t−u) over the interval from 0 to t.
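The integral above translates directly into code. A midpoint-rule sketch (the pair of exponentials is an illustrative example with the known closed form $e^{-t} * e^{-2t} = e^{-t} - e^{-2t}$):

```python
import math

# Midpoint-rule approximation of (f*g)(t) = integral_0^t f(u) g(t-u) du,
# with u as the dummy variable of integration.
def convolve(f, g, t, n=20_000):
    h = t / n
    return sum(f((i + 0.5) * h) * g(t - (i + 0.5) * h)
               for i in range(n)) * h

# Known closed form: (e^{-t} * e^{-2t})(t) = e^{-t} - e^{-2t}
t = 1.2
lhs = convolve(lambda u: math.exp(-u), lambda u: math.exp(-2 * u), t)
print(abs(lhs - (math.exp(-t) - math.exp(-2 * t))) < 1e-8)  # True
```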

Given two functions f(t) and g(t), their Laplace Transforms are: $F(s) = \mathcal{L}(f(t)) = \int_{0}^{∞} e^{-st}f(t)dt$, $G(s) = \mathcal{L}(g(t)) = \int_{0}^{∞} e^{-st}g(t)dt$. A common question arises: is there a direct formula for the Laplace Transform of the product f(t)·g(t) in terms of their individual Laplace Transforms? The answer is no: there is no direct formula for the Laplace Transform of the product of two functions f(t)g(t) solely in terms of their individual Laplace transforms F(s) and G(s).

However, we can find the Laplace transform of their convolution (f * g)(t) using a special rule.

**The Convolution Theorem**. The Laplace Transform of the convolution of two functions f(t) and g(t), (f∗g)(t), is equal to the product of their individual Laplace transforms. Mathematically, $\mathcal{L}((f*g)(t)) = F(s)G(s) ↭ F(s)G(s) = \int_{0}^{∞} e^{-st}(f*g)(t)dt$ where F(s) is the Laplace transform of f(t) and G(s) is the Laplace transform of g(t).

The convolution operation in the time domain corresponds to multiplication in the Laplace (frequency) domain. This theorem provides a powerful method for solving differential equations and analyzing systems.

Proof.

We will prove that: $F(s)G(s) = \int_{0}^{∞} e^{-st}(f * g)(t)dt = \mathcal{L}((f * g)(t))$

We begin with the Laplace Transforms of the two functions f(t) and g(t): $F(s) = \int_{0}^{∞} e^{-su}f(u)du$ and $G(s) = \int_{0}^{∞} e^{-sv}g(v)dv$.

Multiplying these two transforms yields:

$F(s)G(s) = [\text{By definition}] \int_{0}^{∞} e^{-su}f(u)du·\int_{0}^{∞} e^{-sv}g(v)dv = \int_{0}^{∞}\int_{0}^{∞}e^{-s(u+v)}f(u)g(v)dudv$

Next, we perform a change of variables. Let t = u + v (⇒v = t -u), and u = u (remains the same).

du·dv = $\left\vert \frac{∂(u, v)}{∂(u, t)} \right\vert du·dt = \vert \begin{smallmatrix}\frac{∂u}{∂u} & \frac{∂u}{∂t}\\ \frac{∂v}{∂u} & \frac{∂v}{∂t}\end{smallmatrix} \vert du·dt = \vert \begin{smallmatrix}1 & 0\\ -1 & 1\end{smallmatrix} \vert du·dt = 1·du·dt$. Hence, the differentials remain du·dt because the Jacobian determinant of the transformation is 1.

$\int_{0}^{∞}\int_{0}^{∞}e^{-s(u+v)}f(u)g(v)dudv = \int_{0}^{∞}\int_{0}^{t}e^{-st}f(u)g(t-u)dudt = \int_{0}^{∞}e^{-st}\left(\int_{0}^{t} f(u)g(t-u)du\right)dt = \int_{0}^{∞}e^{-st} (f*g)(t)dt$ ∎

Refer to Figure ii for a visual representation and aid in understanding it. t is the outer variable running from 0 to ∞, u is the variable inside the convolution integral and for each t, u ranges from 0 to t.

This means that instead of directly computing the convolution integral, we can compute the Laplace transforms of the two functions separately, multiply them, and then take the inverse Laplace Transform to get back to the time domain.
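The theorem can be checked numerically end to end. The sketch below (illustrative functions f(t) = t, g(t) = e^{−t}, with F(s) = 1/s² and G(s) = 1/(s+1), and illustrative grid sizes) compares L(f∗g)(s) against F(s)G(s):

```python
import math

# Midpoint-rule Laplace transform; T and n are illustrative choices.
def laplace(f, s, T=40.0, n=4000):
    h = T / n
    return sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h)
               for i in range(n)) * h

# Midpoint-rule convolution (f*g)(t) = integral_0^t f(u) g(t-u) du.
def convolve(f, g, t, n=400):
    h = t / n
    return sum(f((i + 0.5) * h) * g(t - (i + 0.5) * h)
               for i in range(n)) * h

f = lambda t: t                    # F(s) = 1/s^2
g = lambda t: math.exp(-t)         # G(s) = 1/(s+1)
s = 2.0
lhs = laplace(lambda t: convolve(f, g, t), s)   # L(f*g)(s)
rhs = laplace(f, s) * laplace(g, s)             # F(s)G(s)
print(abs(lhs - rhs) < 1e-3)  # True
```

The loose tolerance reflects the coarse grids; the exact common value is 1/(s²(s+1)) = 1/12 at s = 2.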

- **Commutativity**: (f ∗ g)(t) = (g ∗ f)(t) (since FG = GF). $f(t)*g(t) = \int_{0}^{t} f(τ)g(t-τ)dτ =$ [Change of variables, u = t − τ, du = −dτ. For τ = 0, u = t. For τ = t, u = 0] $-\int_{t}^{0} f(t-u)g(u)du = \int_{0}^{t} g(u)f(t-u)du = g(t) * f(t)$ ∎
- **Associativity**: f ∗ (g ∗ h) = (f ∗ g) ∗ h.
- **Distributivity over addition**: f ∗ (g + h) = (f ∗ g) + (f ∗ h).
- **The Convolution Theorem**: $\mathcal{L}(f(t)*g(t)) = \mathcal{L}(f(t))·\mathcal{L}(g(t)) ↭ \mathcal{L}^{-1}(F(s)·G(s)) = f(t)*g(t)$

- $(t^2 * t)(t) =$ [Using the convolution integral definition] $\int_{0}^{t} u^2·(t-u)du = \frac{u^3}{3}t-\frac{u^4}{4}\bigg|_{0}^{t} = \frac{t^4}{3} - \frac{t^4}{4} = \frac{t^4}{12}$

Alternative Method: Using the Laplace Transform.

- **Take the Laplace Transform of each function**: $\mathcal{L}(t^2)=\frac{2}{s^3}$, $\mathcal{L}(t) = \frac{1}{s^2}$.
- **Multiply the Laplace Transforms**. By the Convolution Theorem: $t^2*t \leadsto \frac{2}{s^3}·\frac{1}{s^2} = \frac{2}{s^5}$.
- **Take the Inverse Laplace Transform**. Recognize that $\mathcal{L}^{-1}(\frac{n!}{s^{n+1}})=t^n$, so $\mathcal{L}^{-1}(\frac{4!}{s^5}) = t^4$. Therefore, $\mathcal{L}^{-1}(\frac{2}{s^5}) = \frac{1}{4·3}\mathcal{L}^{-1}(\frac{4!}{s^5}) = \frac{t^4}{12}$.

Both methods yield the same result, confirming the validity of the Convolution Theorem: $(t^2*t)(t)=\frac{t^4}{12}$

- Convolution with the Constant Function 1:
(f * 1)(t) =[By definition] $\int_{0}^{t} f(u)·1du = \int_{0}^{t} f(u)du$
Convolution with the constant function 1 results in the cumulative integral (accumulation) of f(t) up to time t. This operation is equivalent to finding the antiderivative (indefinite integral) of f(t) evaluated from 0 to t.
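This accumulation property is easy to check numerically. A sketch (f(t) = cos(t) is an illustrative choice, since $\int_{0}^{t} \cos(u)du = \sin(t)$):

```python
import math

# Midpoint-rule convolution (f*g)(t) = integral_0^t f(u) g(t-u) du.
def convolve(f, g, t, n=20_000):
    h = t / n
    return sum(f((i + 0.5) * h) * g(t - (i + 0.5) * h)
               for i in range(n)) * h

# (cos * 1)(t) should equal the cumulative integral of cos, i.e. sin(t)
t = 1.7
acc = convolve(math.cos, lambda u: 1.0, t)
print(abs(acc - math.sin(t)) < 1e-8)  # True
```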

Convolution can be understood intuitively as a way of combining two processes: the input and the system’s response. In this case, the input is the dumping of radioactive waste, and the system’s response is the decay of that waste over time.

Imagine you’re dumping radioactive waste onto a pile. The dumping rate at time t, measured in years, is represented by the function f(t).

- **Input Function f(t)**: represents the rate at which radioactive waste is dumped onto the pile over time.
- **System’s Response g(t)**: represents the decay function of the radioactive waste, typically e^{-kt}, where k is the decay constant.

To determine the total amount of radioactive waste remaining at time t, we need to account for:

- **Amount of Waste Dumped at an Earlier Time u**: f(u)Δu.
- **Amount of Waste Dumped between times t_{i} and t_{i+1}**: ΔW_{i} = f(t_{i})Δt_{i}, where Δt_{i} = t_{i+1} − t_{i}.
- **Decay of Waste Over Time**. *Waste dumped at time u decays over the period t − u*: f(u)Δu·e^{-k(t − u)}, where e^{-k(t − u)} represents the fraction of that waste remaining after time t − u.

**Total Waste Remaining Over Time**. Now, imagine you are continuously dumping radioactive waste starting from time t = 0. At any later time t, the total amount of radioactive waste left on the pile will depend on the product of two factors: [how much waste was dumped at each earlier time]·[how much has decayed since that earlier time].

For each small time interval [u_{i}, u_{i+1}], the amount of waste dumped is f(u_{i})Δu_{i}, and the amount of that waste left on the pile at time t (after decay) is $f(u_i)Δu_ie^{-k(t-u_i)}$. The total amount of radioactive waste left at time t can be approximated as: $\sum_{i=1}^n f(u_i)Δu_ie^{-k(t-u_i)}$.

As Δu_{i} → 0 (making the approximation exact in the limit), this sum turns into an integral, giving us **the exact total amount of radioactive waste at time t**: $\int_{0}^{t} f(u)e^{-k(t-u)}du = f(t) * e^{-kt}$. This integral is the convolution of the dumping rate function f(t) with the decay function $e^{-kt}$.
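For a constant dumping rate r, the model has a closed form, $(r * e^{-kt})(t) = \frac{r}{k}(1 - e^{-kt})$, which the numerical convolution should reproduce. A sketch (r, k, t are illustrative values):

```python
import math

# Midpoint-rule convolution (f*g)(t) = integral_0^t f(u) g(t-u) du.
def convolve(f, g, t, n=20_000):
    h = t / n
    return sum(f((i + 0.5) * h) * g(t - (i + 0.5) * h)
               for i in range(n)) * h

r, k, t = 3.0, 0.5, 10.0   # constant dumping rate r, decay constant k
remaining = convolve(lambda u: r, lambda u: math.exp(-k * u), t)
closed_form = (r / k) * (1 - math.exp(-k * t))   # exact remaining waste
print(abs(remaining - closed_form) < 1e-6)  # True
```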

Special Cases of Convolution:

- **Constant Decay Function**: Now, let’s consider a simpler case where the decay function is constant, meaning the waste does not decay (i.e., e^{-kt} = 1, that is, k = 0). In this case, the convolution becomes: f(t) * 1 = $\int_{0}^{t} f(u)du$. This means that the total amount of waste on the pile at time t is simply the cumulative amount of all waste dumped up to that time. This is the cumulative sum of all previous dumping.
- **Linear Growth Example (Chicken Farm)**. Imagine a small chicken farm where the number of baby chicks born is increasing linearly over time. Let f(t) represent the production rate of chicks (measured in kilograms). The total mass of chickens produced at time t is a function of the production rate and the growth of production over time.

If the growth is linear, the total mass of chickens at time t would be: f(t) ∗ g(t), where f(t) is the production rate (kg of chickens at time t) and g(t) is the growth function over time (e.g., g(t) = t). Total Mass at time t = (f∗t)(t) = $\int_{0}^{t} f(u)(t-u)du$. It accounts for how much each earlier production contributes to the total at the present time. In other words, convolution sums up how past production rates contribute to the total at a given time.

A jump discontinuity occurs when a function abruptly jumps from one value to another at a certain point. A classic example of a function that exhibits a jump discontinuity is the Heaviside step function (also called the unit step function).

The unit step function, denoted as u(t), is a step function, the value of which is zero for negative arguments, one for positive arguments, and u(0) is undefined. Refer to Figure i for a visual representation and aid in understanding it. Mathematically, it is defined as: $u(t) = \begin{cases} 0, &t < 0 \\ undefined~ or~ 0, &t = 0 \\ 1, &t > 0 \end{cases}$

The Heaviside step function is a function that “jumps” from 0 to 1 at t = 0. There is some ambiguity regarding the value of u(0): some definitions leave it undefined, while others define it as 0, ^{1}⁄_{2}, or 1. This ambiguity is usually inconsequential, as most applications of the Heaviside step function focus on t > 0.

By shifting the Heaviside Step Function, we can control the point at which the jump occurs.

Let’s shift the unit step by a constant a. This creates a new function, u_{a}(t), which “activates” at t = a. The translated or shifted Heaviside step function is defined as: $u_a(t) = u(t-a) =
\begin{cases}
0, &t < a \\
1, &t ≥ a
\end{cases}$

This shift allows us to control when the jump occurs. Instead of jumping from 0 to 1 at t = 0, the function jumps at t = a.

We can use two translated Heaviside functions to create a box-shaped function that is 1 between two points a and b, and 0 outside this interval. This is called the **unit box function** and is denoted by u_{ab}(t). It “turns on” 🔦 at t = a and “turns off” at t = b.

It is defined as:

$u_{ab}(t) = \begin{cases} 0, &t < a \\ 1, &a ≤ t ≤ b \\ 0, &t > b \end{cases}$

When we multiply a function f(t) by u_{a}(t), we effectively “turn on” the function at t = a:

$f(t)u_a(t) = \begin{cases} 0, &t < a \\ f(t), &t ≥ a \end{cases}$

This is useful for modeling systems where an input or force is applied starting at time t = a.

When using the Unit Box Function u_{ab}(t), we “window” the function f(t) between t = a and t = b. In other words, we can express u_{ab}(t) using the difference of two Heaviside step functions: $u_{ab}(t) = u_a(t)-u_b(t) = u(t-a)-u(t-b)$. The main idea is that u_{a}(t) turns on the function at t = a and u_{b}(t) turns it off at t = b, creating a window between a and b where the function is 1.
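The step-difference construction is two lines of code. A sketch (the convention u(0) = 1 is one of the choices discussed above):

```python
def u(t):
    """Heaviside step function; the convention u(0) = 1 is chosen here."""
    return 1.0 if t >= 0 else 0.0

def u_box(t, a, b):
    """Unit box u_ab(t) = u(t-a) - u(t-b): 1 on [a, b), 0 elsewhere."""
    return u(t - a) - u(t - b)

# Sample before, inside, and after the window [1, 3)
print([u_box(t, 1, 3) for t in (0.5, 2.0, 3.5)])  # [0.0, 1.0, 0.0]
```

With the u(0) = 1 convention, the window is half-open: u_box(3, 1, 3) is 0.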

Consider multiplying the unit box function u_{ab}(t) by another function f(t), i.e., u_{ab}(t)f(t). This product has the effect of “windowing” the function f(t), meaning that:

- For t ∈ [a, b], u_{ab}(t)f(t) = f(t), so the function is unchanged.
- For t ∉ [a, b], u_{ab}(t)f(t) = 0, meaning that all values of f(t) outside the interval or window [a, b] are completely wiped out.

$f(t)u_{ab}(t) = \begin{cases} 0, &t < a \\ f(t), &a ≤ t ≤ b \\ 0, &t > b \end{cases}$

Let’s calculate the Laplace transform of the unit step function u(t): $\mathcal{L}(u(t)) = \int_{0}^{∞} e^{-st}u(t)dt =[\text{Since u(t) = 1 for positive values of t}] \int_{0}^{∞} e^{-st}dt = \frac{1}{s}$, s > 0

Thus, the Laplace transform of the Heaviside step function is ^{1}⁄_{s}, which is consistent with the Laplace Transform of the constant function 1 for s > 0.

If we know that $\mathcal{L}(1) = \mathcal{L}(u(t)) = \frac{1}{s}$, s > 0, the question becomes: What is the inverse Laplace transform of ^{1}⁄_{s}, $\mathcal{L}^{-1}(\frac{1}{s})$?

Suppose the Laplace Transform of f is F, $f(t)\leadsto F(s)$. The inverse Laplace transform of F(s) is f(t), but any function that agrees with f(t) for positive arguments (and possibly behaves differently for negative arguments) will also work. This is because the Laplace transform is only concerned with what happens for t ≥ 0.

To ensure uniqueness, we usually agree that f(t) is 0 for all t < 0. This is equivalent to multiplying f(t) by the Heaviside step function u(t), which forces f(t) to be 0 for negative arguments: $F(s) \leadsto_{L^{-1}} u(t)f(t)$ ↭ $\mathcal{L}^{-1}(F(s)) = f(t)u(t)$. By applying the Heaviside step function, we effectively cut off any behavior of f(t) for t < 0 (its tail) and ensure that f(t) is defined only for non-negative time.

There is no direct formula for $\mathcal{L}(f(t-a))$ in terms of $\mathcal{L}(f(t))$. The problem is that the Laplace Transform of f does not care about the function’s behavior for negative t because it is only defined for t ≥ 0 (it loses all information about f on (−a, 0)), but this information is needed to calculate $\mathcal{L}(f(t-a))$ (Refer to Figure ii for a visual representation and aid in understanding it).

We can work around this by wiping out that region. When we shift the function, we effectively change when the function starts: if the original function starts at t = 0, the shifted function starts at t = a. The Heaviside function u(t−a) takes care of this, making the function zero for t < a and nonzero for t ≥ a. In other words, we can get a formula for the function u(t-a)·f(t-a).

In the context of the Laplace Transform, the t-axis translation formula deals with shifting a function in time. When you shift a function f(t) by a constant a (i.e., replace t with t − a), the Laplace Transform of the shifted function is related to the Laplace Transform of the original function through multiplication by an exponential factor. More formally, $u(t-a)f(t-a) \leadsto e^{-as}F(s) ↭ \mathcal{L}(u(t-a)f(t-a)) = e^{-as}F(s)$, or equivalently $u(t-a)f(t) \leadsto e^{-as}\mathcal{L}(f(t+a))$ [observe that t = (t + a) − a] where:

- u(t -a) is the Heaviside step function, ensuring that the function is zero before t = a.
- f(t −a) is the original function, shifted by a.
- F(s) is the Laplace Transform of f(t), i.e., $F(s)=\mathcal{L}(f(t)).$ Refer to Figure iii for a visual representation and aid in understanding it.

Proof

We aim to prove that $u(t-a)f(t-a) \leadsto e^{-as}F(s)$.

The Laplace Transform of u(t-a)f(t-a) is defined as:

$\int_{0}^{∞} e^{-st}u(t-a)f(t-a)dt =[\text{Since u(t−a) = 0 for t < a, the integral from 0 to a is zero. Therefore, we can adjust the limits of integration:}] \int_{a}^{∞} e^{-st}f(t-a)dt$

We make a substitution (change of variables) to simplify the integral. Let τ = t − a. This implies that when t = a, τ = 0; when t → ∞, τ → ∞; and the differential remains the same, i.e., dt = dτ.

$\int_{a}^{∞} e^{-st}f(t-a)dt = \int_{0}^{∞} e^{-s(τ+a)}f(τ)dτ =[\text{Simplify the exponential term}] e^{-sa}\int_{0}^{∞} e^{-sτ}f(τ)dτ =[\text{The remaining integral is exactly the definition of the Laplace Transform of f(t)}] e^{-sa}F(s)$

$u(t-a)f(t-a) \leadsto e^{-as}\mathcal{L}(f(t)) = e^{-as}F(s)$

Similarly, $u(t-a)f(t-a+a) \leadsto e^{-as}\mathcal{L}(f(t+a))↭ u(t-a)f(t) \leadsto e^{-as}\mathcal{L}(f(t+a))$

- The Laplace Transform of the unit box function.

The unit box function u_{ab}(t) is a piecewise function, which is 1 in the interval [a, b] and 0 otherwise. u_{ab}(t) = u(t -a) - u(t -b) where u(t) is the Heaviside Step Function, and a and b are constants with a < b.

$u_{a, b}(t) = \begin{cases} 0, &t < a \\ 1, &a ≤ t < b \\ 0, &t ≥ b \end{cases}$

Recall that $u(t) \leadsto \frac{1}{s}, u(t-a)f(t-a) \leadsto e^{-as}F(s), u(t -a)·u(t -a) = u(t -a)$

Using the Linearity of the Laplace Transform: $u(t -a) - u(t -b) \leadsto \frac{e^{-as}}{s}-\frac{e^{-bs}}{s} = \frac{e^{-as}-e^{-bs}}{s}$
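A numerical spot check of this transform, with illustrative values a = 1, b = 3, s = 2 (the midpoint-rule integrator and truncation T are also illustrative):

```python
import math

# Midpoint-rule Laplace transform; T and n are illustrative choices.
def laplace(f, s, T=60.0, n=200_000):
    h = T / n
    return sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h)
               for i in range(n)) * h

a, b, s = 1.0, 3.0, 2.0
box = lambda t: 1.0 if a <= t < b else 0.0    # unit box u_ab(t)
lhs = laplace(box, s)
rhs = (math.exp(-a * s) - math.exp(-b * s)) / s
print(abs(lhs - rhs) < 1e-3)  # True
```

The tolerance is loose because the midpoint rule only resolves the two jump discontinuities to within one grid cell.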

- Laplace Transform of Shifted Polynomial Functions.

Compute the Laplace Transform of the function: $u(t-1)t^2$

Relevant Laplace Transform Properties:

- Second Shifting Theorem (Time Shift): $u(t-a)f(t) \leadsto e^{-as}F(s)$ where $F(s)=\mathcal{L}(f(t+a))$
- Laplace Transform of Polynomials: $\mathcal{L}(t^n)=\frac{n!}{s^{n+1}},\text{ and } \mathcal{L}(1)=\frac{1}{s}.$

Identify a = 1, f(t) = t^{2}, $F(s)=\mathcal{L}(f(t+a)) = \mathcal{L}((t+1)^2)$

$u(t-1)t^2 \leadsto e^{-s}\mathcal{L}((t+1)^2) = e^{-s}\mathcal{L}(t^2+2t+1) = e^{-s}(\frac{2}{s^3}+ \frac{2}{s^2}+\frac{1}{s})$
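This result can be confirmed numerically at a sample point, say s = 2 (an illustrative choice, as are the integrator parameters):

```python
import math

# Midpoint-rule Laplace transform; T and n are illustrative choices.
def laplace(f, s, T=80.0, n=200_000):
    h = T / n
    return sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h)
               for i in range(n)) * h

s = 2.0
f = lambda t: t * t if t >= 1.0 else 0.0       # u(t-1) * t^2
lhs = laplace(f, s)
rhs = math.exp(-s) * (2 / s**3 + 2 / s**2 + 1 / s)
print(abs(lhs - rhs) < 1e-3)  # True
```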

- $\mathcal{L^{-1}}(\frac{1}{s^2+1}·\frac{1}{s^2+1})$

Recall: $\mathcal{L}^{-1}(F(s)·G(s)) = f(t)*g(t)$, $\mathcal{L}(\sin(t)) = \frac{1}{s^2+1}$

$\mathcal{L}^{-1}(\frac{1}{s^2+1}·\frac{1}{s^2+1}) = \sin(t)*\sin(t) = \int_{0}^{t} \sin(τ)\sin(t-τ)dτ =$

Using the product-to-sum identity $\sin(a)\sin(b) = \frac{1}{2}[\cos(b-a)-\cos(b+a)]$ with a = τ and b = t − τ:

$= \frac{1}{2}\int_{0}^{t} \cos(t-2τ)-\cos(t)dτ$

This can be separated into two integrals:

$\frac{1}{2}\int_{0}^{t} \cos(t-2τ)dτ =[\text{Using the substitution u = t−2τ, du = -2dτ}] \frac{1}{2}·\frac{-1}{2}\sin(t−2τ)\bigg|_{0}^{t} = -\frac{1}{4}(\sin(-t)-\sin(t)) = \frac{-1}{4}(-\sin(t)-\sin(t)) = \frac{1}{2}\sin(t)$

$-\frac{1}{2}\cos(t)\int_{0}^{t}dτ = -\frac{1}{2}t\cos(t)$

$\mathcal{L}^{-1}(\frac{1}{s^2+1}·\frac{1}{s^2+1}) = \frac{1}{2}\sin(t) -\frac{1}{2}t\cos(t)$
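The convolution side of this identity can be evaluated numerically and compared against the closed form (t = 2 and the grid size are illustrative choices):

```python
import math

# Midpoint-rule convolution (f*g)(t) = integral_0^t f(u) g(t-u) du.
def convolve(f, g, t, n=20_000):
    h = t / n
    return sum(f((i + 0.5) * h) * g(t - (i + 0.5) * h)
               for i in range(n)) * h

t = 2.0
lhs = convolve(math.sin, math.sin, t)               # (sin * sin)(t)
rhs = 0.5 * math.sin(t) - 0.5 * t * math.cos(t)     # closed form
print(abs(lhs - rhs) < 1e-6)  # True
```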

- $\mathcal{L^{-1}}(\frac{1+e^{-πs}}{s^2+1})$

Relevant Laplace Transform Properties:

- Second Shifting Theorem (Time Shift): $u(t-a)f(t-a) \leadsto e^{-as}F(s), \mathcal{L}^{-1}(e^{-as}F(s)) = u(t-a)f(t-a)$ where $f(t) = \mathcal{L}^{-1}(F(s))$
- $\mathcal{L}^{-1}(\frac{1}{s^2+a^2}) = \frac{\sin(at)}{a}$, t ≥ 0.

Step 1: **Decompose F(s)**
We can break this into two simpler terms: $\frac{1}{s^2+1}+\frac{e^{-πs}}{s^2+1}$. Then, we apply the inverse Laplace transform of each term.

Step 2: **Compute the Inverse Laplace Transform of Each Term**

$\frac{1}{s^2+1} \leadsto_{\mathcal{L}^{-1}} u(t)\sin(t)$. The step function u(t) appears because, by convention, inverse Laplace transforms are taken to be zero for t < 0.

Step 3: **Combine the Results**

$\frac{e^{-πs}}{s^2+1} \leadsto_{\mathcal{L}^{-1}} u(t-π)\sin(t-π)$, using the Second Shifting Theorem with a = π, f(t) = sin(t), and F(s) = $\frac{1}{s^2+1}$.

$\mathcal{L^{-1}}(\frac{1+e^{-πs}}{s^2+1}) = u(t)sin(t)+u(t-π)sin(t-π)$

Step 4: **Simplify**

$f(t) = \begin{cases} \sin(t), &0 ≤ t < π \\ \sin(t)+\sin(t-π), &t ≥ π \end{cases}$

Since sin(t − π) = −sin(t), the second branch simplifies: sin(t) + sin(t − π) = sin(t) − sin(t) = 0.

$f(t) = \begin{cases} \sin(t), &0 ≤ t < π \\ 0, &t ≥ π \end{cases}$

This is a piecewise function where the sine function is “turned off” at t = π due to the time shift.
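As a final sanity check, the forward transform of this piecewise f should reproduce the original $\frac{1+e^{-πs}}{s^2+1}$. A numerical sketch at the illustrative point s = 1.5 (since f vanishes for t ≥ π, truncating the Laplace integral at π is exact):

```python
import math

# Midpoint-rule Laplace transform; truncation at pi is exact here
# because f(t) = 0 for t >= pi.
def laplace(f, s, T=math.pi, n=200_000):
    h = T / n
    return sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h)
               for i in range(n)) * h

s = 1.5
f = lambda t: math.sin(t) if t < math.pi else 0.0   # the piecewise solution
lhs = laplace(f, s)
rhs = (1 + math.exp(-math.pi * s)) / (s * s + 1)    # original F(s)
print(abs(lhs - rhs) < 1e-6)  # True
```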

This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].

- NPTEL-NOC IITM, Introduction to Galois Theory.
- Algebra, Second Edition, by Michael Artin.
- LibreTexts, Calculus and Calculus 3e (Apex). Abstract and Geometric Algebra, Abstract Algebra: Theory and Applications (Judson).
- Field and Galois Theory, by Patrick Morandi. Springer.
- Michael Penn, and MathMajor.
- Contemporary Abstract Algebra, by Joseph A. Gallian.
- YouTube’s Andrew Misseldine: Calculus. College Algebra and Abstract Algebra.
- MIT OpenCourseWare [18.03 Differential Equations, Spring 2006], YouTube by MIT OpenCourseWare.
- Calculus Early Transcendentals: Differential & Multi-Variable Calculus for Social Sciences.