
Symmetry with Respect to a Line and Circle

“The first principle is that you must not fool yourself – and you are the easiest person to fool,” Richard Feynman.


Definition. Two points $z,z^*\in \mathbb{C}$ are symmetric with respect to a line L if the line through z and $z^*$ is perpendicular to L and L passes through the midpoint of the segment joining z and $z^*$. In other words, L is the perpendicular bisector of the segment connecting z and $z^*$.

If you stand in front of a flat mirror, your reflection is the same distance behind the mirror as you are in front of it. The line joining you and your reflection is perpendicular to the mirror, and the mirror cuts that line in half.

For a circle C with center O and radius R, two points $z$ and $z^*$ are symmetric if they lie on the same ray from the center and their distances to the center satisfy $|z - O| \cdot |z^* - O| = R^2$. Notice that being “symmetric with respect to C” means that one point is the inversion of the other in C. Inversion is a transformation I of the extended plane such that (see the sketch after this list):

  1. Points on the circle stay on the circle (they are their own reflection): if |z - O| = R, then I(z) is also on C.
  2. Points inside C are mapped outside and vice versa, in a way that depends only on the distance to the center along each ray.
  3. The center is special: the center O goes to $\infty$, and $\infty$ goes to O.
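In complex notation, inversion in the circle $|w - O| = R$ is $z^* = O + \frac{R^2}{\overline{z - O}}$. Here is a minimal Python sketch (the helper name `invert` is ours, not from the article) that checks the defining product of distances:

```python
# Inversion in a circle with center O and radius R:
# z* = O + R^2 / conj(z - O) lies on the same ray from O
# and satisfies |z - O| * |z* - O| = R^2.

def invert(z: complex, O: complex = 0j, R: float = 1.0) -> complex:
    """Return the inversion (symmetric point) of z in the circle |w - O| = R."""
    return O + R**2 / (z - O).conjugate()

z = 0.5 + 0.25j                 # a point inside the unit circle
z_star = invert(z)              # its image lands outside, on the same ray
print(z_star)                   # (1.6+0.8j)
print(abs(z) * abs(z_star))     # ~1.0 = R^2, as required
```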

Definition. Let $\Gamma$ be a generalized circle passing through distinct points $z_2, z_3, z_4$. The points z and $z^*$ in $\mathbb{C} \cup \{ \infty \}$ are said to be symmetric with respect to $\Gamma$ if $(z^*, z_2, z_3, z_4) = \overline{(z, z_2, z_3, z_4)}$.

The cross-ratio map $z \mapsto (z, z_2, z_3, z_4)$ sends $\Gamma$ to the real line $\mathbb{R}$. On the real line, symmetry is just complex conjugation, $x+iy \mapsto x-iy$. The definition forces $z$ and $z^*$ to mirror each other relative to that real line.
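A quick numerical illustration, assuming the usual convention $(z_1, z_2, z_3, z_4) = \frac{(z_1-z_3)(z_2-z_4)}{(z_1-z_4)(z_2-z_3)}$ (the image of $z_1$ under the Möbius map sending $z_2 \to 1$, $z_3 \to 0$, $z_4 \to \infty$); the function name `cross_ratio` is ours:

```python
# Symmetry about the real axis via the cross-ratio:
# for z2, z3, z4 real, (conj(z), z2, z3, z4) equals the conjugate
# of (z, z2, z3, z4), so the symmetric point is z* = conj(z).

def cross_ratio(z1: complex, z2: complex, z3: complex, z4: complex) -> complex:
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

z = 3 + 2j                           # Gamma = real axis through 0, 1, 2
lhs = cross_ratio(z.conjugate(), 0, 1, 2)
rhs = cross_ratio(z, 0, 1, 2).conjugate()
print(abs(lhs - rhs) < 1e-12)        # True
```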

Symmetry principle. If a Möbius transformation $T$ takes a generalized circle $\Gamma_1$ onto a generalized circle $\Gamma_2$, then any pair of points $z, z^*$ symmetric with respect to $\Gamma_1$ are mapped to a pair of points $T(z), T(z^*)$ which are symmetric with respect to $\Gamma_2$.

Proof.

Let $z, z^*$ be symmetric with respect to $\Gamma_1$, $z_2, z_3, z_4$ be three distinct points on the source circle $\Gamma_1$ and $w_k = T(z_k)$ for $k=2,3,4$. Since $T$ takes a generalized circle $\Gamma_1$ onto a generalized circle $\Gamma_2$, these points $w_k$ lie on the target circle $\Gamma_2$.

We want to prove that $T(z)$ and $T(z^*)$ satisfy the symmetry definition for $\Gamma_2$, that is, we must show: $(T(z^*), w_2, w_3, w_4) = \overline{(T(z), w_2, w_3, w_4)}$

We use the previously demonstrated property that the cross-ratio is invariant under Möbius transformations:

$$ \begin{aligned} (T(z^*), T(z_2), T(z_3), T(z_4)) &=(z^*, z_2, z_3, z_4) \\[2pt] &\text{Since } z \text{ and } z^* \text{ are symmetric with respect to } \Gamma_1\text{:}\\[2pt] &=\overline{(z, z_2, z_3, z_4)} \\[2pt] &\text{Now, use the invariance property on the right-hand side (inside the conjugate):} \\[2pt] &=\overline{(T(z), T(z_2), T(z_3), T(z_4))} \end{aligned} $$

Combining the steps, we have: $(T(z^*), w_2, w_3, w_4) = \overline{(T(z), w_2, w_3, w_4)}$
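A numerical check of the symmetry principle (our own illustration, not from the article): the Cayley transform $T(z) = \frac{z-i}{z+i}$ maps the real line onto the unit circle. Points $z$ and $\overline{z}$ are symmetric about the real line, so their images must be symmetric about the unit circle, i.e. $T(\overline{z}) = 1/\overline{T(z)}$.

```python
# Symmetry principle check: T maps the real line to the unit circle,
# so it must send the mirror pair (z, conj(z)) to an inversive pair
# (w, 1/conj(w)) with respect to the unit circle.

def T(z: complex) -> complex:
    return (z - 1j) / (z + 1j)

z = 2 + 1j
w, w_star = T(z), T(z.conjugate())
print(abs(w_star - 1 / w.conjugate()) < 1e-12)   # True
```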

Definition. If $\Gamma$ is a circle, then an orientation for $\Gamma$ is an ordered triple of points $(z_1, z_2, z_3)$ such that $z_1, z_2, z_3$ are distinct points on $\Gamma$.

Imagine you are walking along the boundary of a country or region (the circle or line $\Gamma$). The orientation $(z_1, z_2, z_3)$ tells you the direction to walk: start at $z_1$, walk towards $z_2$, then to $z_3$. As you walk, one region is on your left and the other is on your right.

Definition. For an oriented circle $\Gamma$ determined by three distinct points $(z_1, z_2, z_3)$, the right side is the set of points $z$ such that $\text{Im}(z, z_1, z_2, z_3) > 0$. Similarly, the left side is the set of points $z$ such that $\text{Im}(z, z_1, z_2, z_3) < 0$.

Orientation principle. Let $\Gamma_1$ and $\Gamma_2$ be two circles in $\mathbb{C} \cup \{ \infty \}$ and let T be a Möbius transformation such that $T(\Gamma_1) = \Gamma_2$. Let $(z_1, z_2, z_3)$ be an orientation for $\Gamma_1$. Then, T takes the right and left side of $\Gamma_1$ onto the right and left side of $\Gamma_2$ with respect to the orientation of $\Gamma_2$ given by $(T(z_1), T(z_2), T(z_3))$.

If you define “left” based on the direction you walk on the first circle, the map $T$ preserves this. If you walk along the image points $T(z_1) \to T(z_2) \to T(z_3)$ in the second world, the region that was on your left in the first world gets mapped to the region on your left in the second world. Möbius maps preserve angles and their orientation (they don’t flip images like a mirror unless there is a conjugate involved).

Proof:

Let $\Gamma_1$ be determined by $(z_1, z_2, z_3)$ and let its image $\Gamma_2 = T(\Gamma_1)$ be determined by $(w_1, w_2, w_3)$ where $w_k = T(z_k)$.

We know the cross-ratio is invariant under Möbius transformations: $(z, z_1, z_2, z_3) = (T(z), w_1, w_2, w_3)$

Let z be a point on the Right Side of $\Gamma_1$. By definition, $\text{Im}(z, z_1, z_2, z_3) > 0$. Because of the previous equality, the image point $T(z)$ satisfies: $\text{Im}(T(z), w_1, w_2, w_3) > 0$. Since the condition defining the “Right Side” is identical for the pre-image and the image, $T$ maps the Right Side of $\Gamma_1$ to the Right Side of $\Gamma_2$.

Example:

The map $T(z) = \frac{(1+i)(z+1)}{z+i} = \frac{(1+i)z + (1+i)}{z+i}$ is a Möbius transformation that maps $i \to 1, -1 \to 0, -i \to \infty$. In other words, it sends the unit circle to the real line.

$T(i) = \frac{(1+i)(i+1)}{i+i} = \frac{(1+i)^2}{2i} = \frac{2i}{2i} = 1$, $T(-1) = \frac{(1+i)\cdot 0}{-1+i} = 0$, $T(-i) = \frac{(1+i)(-i+1)}{-i+i} = \frac{2}{0} = \infty$ (the denominator vanishes, so $-i$ is the pole of T).

Source. The trip $i \to -1 \to -i$ is a counter-clockwise walk along the unit circle. If you walk counter-clockwise along a circle, its interior is on your left: Left = the open unit disk (the interior of the unit circle).

Image. The trip $1 \to 0 \to \infty$ is a walk along the real axis moving to the left (from positive values toward negative ones). If we stand on 1 and start walking towards 0, the left hand points down (toward the negative imaginary axis, $\text{Im}(w) < 0$): Left = the lower half plane.

Let’s test the center point 0, which is obviously inside the circle. $T(0) = \frac{1+i}{i} = \frac{1}{i} + 1 = -i + 1 = 1 - i$. The point $1-i$ has a negative imaginary part, so it is indeed in the lower half plane.
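The whole example can be verified in a few lines of Python (our own check):

```python
# T(z) = (1+i)(z+1)/(z+i) should send i -> 1, -1 -> 0, -i -> infinity,
# and the interior point 0 to the lower half plane.

def T(z: complex) -> complex:
    return (1 + 1j) * (z + 1) / (z + 1j)

print(T(1j))                 # (1+0j)
print(T(-1))                 # 0j
print(abs(T(-1j - 1e-9)))    # huge: -i is the pole, so T(-i) = infinity
print(T(0))                  # (1-1j): Im < 0, in the lower half plane
```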


The Minimum Modulus Principle. Let f be analytic and non-zero in a region A. Then, |f| has no strict local minima on A.

Picture the graph of $|f(z)|$ as a landscape. According to the Maximum Modulus Principle, “mountain peaks” (local maxima) of analytic functions can only appear on the region’s boundary and never within. The Minimum Modulus Principle says that if the function never touches zero (the “sea level” in our metaphor), then there are no “bottoms of pits” (local minima) inside the region either. The “pits” are also forced out to the boundary.

Proof.

Since f is analytic and non-zero in a region A, then the reciprocal function $g(z) = \frac{1}{f(z)}$ is well-defined and analytic everywhere in A.

There is an inverse relationship between the magnitude of a number and the magnitude of its reciprocal: $|g(z)| = \left| \frac{1}{f(z)} \right| = \frac{1}{|f(z)|}$. If $z_0$ were a strict local minimum of $|f|$, then $z_0$ would be a strict local maximum of $|g|$.

By the Maximum Modulus Theorem, $|g(z)| = \left|\frac{1}{f(z)}\right|$ cannot have a strict local maximum in A; therefore, $|f(z)|$ cannot have a strict local minimum in A.

Recall: Maximum Modulus Theorem. The modulus of a non-constant analytic function cannot have a strict local maximum inside its domain.

Problem. Find the maximum of $|\sin(z)|$ on the square domain $D = [0, 2\pi] \times [0, 2\pi]$.

Solution.

Expand $\sin(z)$ into real and imaginary parts. Let $z = x + iy$.

$$ \begin{aligned} \sin(z) = \sin(x+iy) &= \sin(x)\cos(iy) + \cos(x)\sin(iy) \quad \text{[Addition formula for sine]}\\[2pt] &\text{Recall the hyperbolic identities: } \cos(iy) = \cosh(y), \sin(iy) = i\sinh(y)\\[2pt] &= \sin(x)\cosh(y) + i\cos(x)\sinh(y). \end{aligned} $$

We want to maximize $|\sin(z)|$. It is mathematically easier to maximize the square, $|\sin(z)|^2$, because the square root function is monotonic (if $A > B \ge 0$, then $\sqrt{A} > \sqrt{B}$), so both reach their maximum at the same points.

$$ \begin{aligned} |\sin(z)|^2 = [\sin(x)\cosh(y)]^2 + [\cos(x)\sinh(y)]^2 &= \sin^2(x)\cosh^2(y) + \cos^2(x)\sinh^2(y) \\[2pt] &\text{Use the identity: } \cosh^2(y) = 1 + \sinh^2(y)\\[2pt] &= \sin^2(x)(1 + \sinh^2(y)) + \cos^2(x)\sinh^2(y) \\[2pt] &\text{Factor out } \sinh^2(y) \\[2pt] &= \sin^2(x) + \sinh^2(y)(\underbrace{\sin^2(x) + \cos^2(x)}_{1}). \end{aligned} $$

Therefore, $|\sin(z)|^2 = \sin^2(x) + \sinh^2(y)$
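Before using the identity, we can spot-check it numerically (our own check):

```python
# Spot-check |sin(x+iy)|^2 = sin(x)^2 + sinh(y)^2 at a few points.
import cmath, math

for x, y in [(0.7, 1.3), (2.0, 0.5), (math.pi / 2, 2 * math.pi)]:
    lhs = abs(cmath.sin(complex(x, y))) ** 2
    rhs = math.sin(x) ** 2 + math.sinh(y) ** 2
    print(abs(lhs - rhs) <= 1e-9 * max(1.0, rhs))   # True, True, True
```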


By the Maximum Modulus Principle, the maximum of the modulus of a non-constant analytic function on a compact set (like this closed square) must occur on the boundary.

$\sin(z)$ is one of the classical entire functions, along with $e^z$, $\cos(z)$, polynomials, etc. Writing $\sin z=\sin x\cosh y+i\cos x\sinh y$, we have $u(x,y)=\sin x\cosh y$ and $v(x,y)=\cos x\sinh y$. The partial derivatives ($u_x=\cos x\cosh y$, $v_y=\cos x\cosh y$, $u_y=\sin x\sinh y$, $v_x=-\sin x\sinh y$) satisfy the Cauchy–Riemann equations ($u_x = v_y$, $u_y=-v_x$). Since u and v have continuous partial derivatives everywhere, $\sin(z)$ is analytic everywhere.

We need to maximize the sum of two non-negative terms: $\sin^2(x) + \sinh^2(y)$.

  1. The function $\sinh(y)$ is strictly increasing on the entire real line. On our domain $y \in [0, 2\pi]$, the maximum value occurs at the largest $y, y = 2\pi$.

    Recall: $\sinh y=\frac{e^y-e^{-y}}{2}, \frac{d}{dy}\sinh y=\frac{e^y-(-e^{-y})}{2} = \frac{e^y+e^{-y}}{2} \gt 0$

  2. The function $\sin^2(x)$ oscillates between 0 and 1. The maximum value is 1, it occurs when $\sin(x) = \pm 1$. On our domain $x \in [0, 2\pi]$, the points where sine is $\pm 1$ are: $x = \frac{\pi}{2} \text{ and } x = \frac{3\pi}{2}$

In short, $0 \le \sin^2(x) \le 1$ on the domain, with $\sin^2(x) = 1$ exactly at $x = \frac{\pi}{2}$ and $x = \frac{3\pi}{2}$.

To get the global maximum, we need both terms to be at their peak simultaneously, $z_1 = \frac{\pi}{2} + i2\pi, z_2 = \frac{3\pi}{2} + i2\pi$.

Hence, the maximum value of $|\sin(z)|$ occurs at the points $\frac{\pi}{2} + 2\pi i$ and $\frac{3\pi}{2} + 2\pi i$. The value of the maximum modulus squared is $1 + \sinh^2(2\pi) = \cosh^2(2\pi)$. Thus, the value of the maximum modulus is $\cosh(2\pi)$.
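A brute-force grid search over the square confirms the location and the value of the maximum (a rough numerical check, not a proof):

```python
# Scan |sin(z)| on [0, 2*pi] x [0, 2*pi]; the grid is chosen so that
# x = pi/2, 3*pi/2 and y = 2*pi land exactly on grid nodes.
import cmath, math

N = 400
best_val, best_z = -1.0, 0j
for i in range(N + 1):
    for j in range(N + 1):
        z = complex(i * 2 * math.pi / N, j * 2 * math.pi / N)
        v = abs(cmath.sin(z))
        if v > best_val:
            best_val, best_z = v, z

print(best_val, math.cosh(2 * math.pi))   # both ~267.7448
print(best_z)                             # ~(1.5708+6.2832j) = pi/2 + 2*pi*i
```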

Extended Liouville’s Theorem. Suppose that f is an entire function. If for some integer $k \ge 0$, there are positive constants A and B such that $|f(z)| \le A + B|z|^k, \forall z \in \mathbb{C}$, then f is a polynomial of degree at most k.

Proof.

We proceed by mathematical induction on the integer $k$.

Base Case (k = 0)

If k = 0, $|f(z)| \le A + B|z|^0 = A + B, \forall z \in \mathbb{C}$. This means |f(z)| is bounded by a constant for all $z \in \mathbb{C}$. By the standard Liouville’s Theorem (which states that every bounded entire function is constant), $f(z)$ must be constant. A constant is a polynomial of degree 0. Thus, the statement holds for k = 0.

Inductive Step. Assume the statement is true for the integer $k-1$ (where $k \ge 1$). That is, if an entire function g satisfies $|g(z)| \le C + D|z|^{k-1}$, then g is a polynomial of degree at most $k-1$. We must prove it is true for k. Let f be an entire function satisfying: $|f(z)| \le A + B|z|^k$

Define the function h(z) as $h(z) = \begin{cases} \frac{f(z)-f(0)}{z}, &z \ne 0 \\\\ f'(0), &z = 0 \end{cases}$

Claim 1: h(z) is entire.

Since f is entire, it has a Taylor series expansion at z = 0, $f(z) = f(0) + f'(0)z + \frac{f''(0)}{2!}z^2 + \cdots$, which converges for every complex number z.

If a function is analytic on all of $\mathbb{C}$, then its Taylor series at any point (in particular at 0) has infinite radius of convergence.

Subtracting f(0) and dividing by z (for $z \ne 0$) gives: $h(z) = \frac{f(z)-f(0)}{z} = f'(0) + \frac{f''(0)}{2!}z + \frac{f'''(0)}{3!}z^2 + \cdots, \forall z \in \mathbb{C} \setminus \{ 0 \}$. This power series converges for all z, making $h(z)$ analytic everywhere (entire).

Subtracting f(0) and dividing by z preserves convergence. What about the point z = 0? The series gives h(0) = $f'(0)$ which matches the definition of h(0), so the power series defines h at z = 0 as well. Thus, h is analytic at 0 and everywhere else.

Note: $h(z)=\sum _{n=0}^{\infty }\frac{f^{(n+1)}(0)}{(n+1)!}z^n$. This is a new power series with coefficients $a_n=\frac{f^{(n+1)}(0)}{(n+1)!}$. The radius of convergence of a power series does not change when you drop the first term, divide by z, or reindex, because these operations do not affect the growth rate of the coefficients; since the original series had radius $R = \infty$, the new series also has radius $R = \infty$.

Claim 2: Establish the bound for $h(z)$. We aim to show that $h(z)$ satisfies the condition for the induction hypothesis (degree k-1).

Inside the unit disk ($|z| \le 1$). Since h(z) is continuous and the closed unit disk $\overline{D}(0,1)$ is compact (closed and bounded), $|h(z)|$ is bounded by some constant M on this disk. We can trivially say: $|h(z)| \le M + B|z|^{k-1}$ (since $B|z|^{k-1}$ is non-negative, this inequality holds safely).

Recall: In $\mathbb{R}^n$ or $\mathbb{C}^n$, the Heine–Borel theorem states that a set is compact if and only if it is closed and bounded. Furthermore, a continuous function on a compact set is bounded and attains its maximum and minimum.

Outside the unit disk ($|z| > 1$): $$ \begin{aligned} |h(z)| &=\left|\frac{f(z)-f(0)}{z}\right| \\[2pt] &\text{By the triangle inequality} \\[2pt] &\le \frac{|f(z)| + |f(0)|}{|z|} \\[2pt] &\text{By the growth bound on } f \\[2pt] &\le \frac{A + B|z|^k + |f(0)|}{|z|} \\[2pt] &=\frac{A + |f(0)|}{|z|} + B|z|^{k-1} \\[2pt] &\text{Since } |z| > 1 \\[2pt] &\le A + |f(0)| + B|z|^{k-1} \end{aligned} $$

Combining both cases, there exist new constants A’ and B’ such that $\forall z \in \mathbb{C}, |h(z)| \le A' + B'|z|^{k-1}$.

By our Inductive Hypothesis, since $h(z)$ grows at most like $|z|^{k-1}$, $h(z)$ must be a polynomial of degree at most $k-1$. Let $h(z) = c_0 + c_1 z + \dots + c_{k-1} z^{k-1}$.

Recall the definition of $h(z)$ for $z \ne 0$: $h(z) = \frac{f(z) - f(0)}{z} \implies f(z) = z \cdot h(z) + f(0)$. Multiplying a polynomial of degree at most $k-1$ by z raises the degree by exactly 1, and adding the constant $f(0)$ does not change it. In other words, f(z) is a polynomial of degree at most k.

Problem. Find the power series expansion of the function $f(z) = \frac{1}{z^2-3z+2}$ centered about 0 and determine its radius of convergence.

Solution:

First, we factor the denominator to identify the singularities (poles) of the function: $\frac{1}{z^2-3z+2} = \frac{1}{(z-1)(z-2)}$. The singularities are at z =1 and z = 2.

Next, we use Partial Fraction Decomposition to split the function into simpler terms $\frac{1}{(z-1)(z-2)} = \frac{A}{z-1} + \frac{B}{z-2}$

Solving for A and B (using standard algebraic methods or the cover-up method) gives $A = -1$ and $B = 1$: $f(z) = \frac{1}{z-2} - \frac{1}{z-1} = \frac{1}{1-z} - \frac{1}{2-z}$

We are going to use the standard Geometric Series formula: $\frac{1}{1-w} = \sum_{n=0}^{\infty} w^n, \quad \text{valid for } |w| < 1$.

$\frac{1}{1-z} - \frac{1}{2-z} = \frac{1}{1-z}-\frac{1}{2}\left(\frac{1}{1-\frac{z}{2}}\right)$, since factoring 2 out of the denominator gives $\frac{1}{2-z} = \frac{1}{2(1 - \frac{z}{2})} = \frac{1}{2} \cdot \frac{1}{1 - \frac{z}{2}}$.

Next, we expand both terms using the geometric series formula:

$f(z)=\sum_{n=0}^{\infty}z^n -\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{z}{2}\right)^n$ where these series converge for $|z| < 1$ and $|\frac{z}{2}| \lt 1 \implies |z| \lt 2$, respectively. For the power series to exist, both parts of the sum must converge simultaneously, and the intersection of both regions is $|z| < 1$. (Alternatively, the radius of convergence is always the distance from the center z = 0 to the nearest singularity, in this particular case z = 1.)

$f(z) =\sum_{n=0}^{\infty} z^n\left(1 -\frac{1}{2}\cdot\frac{1}{2^n}\right) =\sum_{n=0}^{\infty} \left(1 - \frac{1}{2^{n+1}}\right)z^n$. Radius of convergence: R = 1 (valid for $|z| < 1$).
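The closed-form coefficients can be checked against the function itself at a point inside the disk of convergence (our own check):

```python
# Compare a truncated series sum (1 - 1/2^(n+1)) z^n with f(z) = 1/(z^2 - 3z + 2).

def f(z: complex) -> complex:
    return 1 / (z * z - 3 * z + 2)

z = 0.4 + 0.2j                        # |z| < 1, inside the radius of convergence
partial = sum((1 - 1 / 2 ** (n + 1)) * z ** n for n in range(60))
print(abs(partial - f(z)) < 1e-12)    # True
```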

Proposition. Let f(t) be a complex-valued continuous function defined on a real interval [a, b]. The Laplace transform $F(z) = \int_a^b e^{-zt}f(t)dt$ is analytic at every $z \in \mathbb{C}$, i.e., entire.

Proof.

We aim to show that F(z) is complex differentiable (holomorphic) for any $z \in \mathbb{C}$.

Fix an arbitrary $z \in \mathbb{C}$. Consider a small non-zero perturbation $h \in \mathbb{C} \setminus \{0\}$. We examine the difference quotient: $\frac{F(z+h) - F(z)}{h}$.

$$ \begin{aligned} \frac{F(z+h) - F(z)}{h} &= \frac{\int_a^b e^{-(z+h)t}f(t)dt - \int_a^b e^{-zt}f(t)dt}{h} \\[2pt] &= \frac{\int_a^b e^{-zt}e^{-ht}f(t)dt - \int_a^b e^{-zt}f(t)dt}{h} \\[2pt] &\text{Combine the integrals using linearity:} \\[2pt] &= \frac{1}{h}\int_a^b e^{-zt}f(t)(e^{-ht} -1)dt \end{aligned} $$

Since h is independent of the variable of integration:

$\frac{1}{h}[\int_a^b e^{-zt}f(t)(e^{-ht} -1)dt] = \int_a^b e^{-zt}f(t)(\frac{e^{-ht} -1}{h})dt$

We need to compute the limit as $h \to 0$: $\lim_{h \to 0}\frac{F(z+h) - F(z)}{h} = \lim_{h \to 0} \int_a^b e^{-zt}f(t)\left(\frac{e^{-ht} -1}{h}\right)dt = \int_a^b e^{-zt}f(t) \lim_{h \to 0} \left( \frac{e^{-ht} - 1}{h} \right) dt$

The crucial step is to rigorously justify why we can interchange the limit and the integral.

First, let’s compute $\lim_{h \to 0} \frac{e^{-ht} - 1}{h}$. For any differentiable function $\varphi$, $\lim _{h\rightarrow 0}\frac{\varphi(h)-\varphi(0)}{h}=\varphi'(0)$. Consider $\varphi(h)=e^{-ht}$. Since $\varphi(0)=e^0=1$, the limit $\lim _{h\rightarrow 0}\frac{e^{-ht}-1}{h}$ is exactly the difference quotient for $\varphi$ at h = 0: $\lim_{h \to 0} \frac{e^{-ht} - 1}{h} = \frac{d}{dh} (e^{-ht}) \Big|_{h=0} = -te^{-ht}\Big|_{h=0} = -t e^{-0} = -t$

The previous interchange is valid if the expression $D_h(t) = \frac{e^{-ht} - 1}{h}$ converges uniformly to its limit -t on the interval $[a, b]$.

Consider the real function $g(u) = e^{-ut}$ (treating $t$ as a constant parameter). By the Mean Value Theorem, for any real $h \neq 0$, there exists a number $\theta \in (0, 1)$ such that: $\frac{g(h) - g(0)}{h} = g'(\theta h)$.

Substituting $g(u) = e^{-ut}$ and $g'(u) = -te^{-ut}$, we get: $\frac{e^{-ht} - 1}{h} = -t e^{-(\theta h)t}$

Now, compare this to the limit -t: $\left| \frac{e^{-ht} - 1}{h} - (-t) \right| = \left| -t e^{-\theta h t} - (-t) \right| = |t| \left| 1 - e^{-\theta h t} \right|$

Let $t \in [a, b]$, set $M = \max(|a|, |b|)$, so $|t| \le M$. From the Mean Value Theorem applied to $e^x$, for any real x, $e^x-1=e^{\xi }x$ for some $\xi$ between 0 and x. Hence $|e^x-1|=|e^{\xi }||x|\leq e^{|x|}|x|$. Now restrict x to a bounded interval, say $|x|\leq K$. Then, $e^{|x|}\leq e^K$, so $|e^x-1|\leq e^K|x|, \forall |x|\leq K.$

Here $x=-\theta ht$ with $\theta \in (0,1)$; thus $|x|=|\theta ht|\leq |h|\, |t|\leq |h|M.$

For $x$ in this bounded range ($|x| \le K$), $|1 - e^x| \le e^K|x|$. Choose h small enough so that $|h|M \le K \implies |x| \le K$, and we get $|1-e^{-\theta ht}|=|e^x-1|\leq e^K|x|\leq e^KM|h|$.

So we obtain a uniform bound (in $t \in [a,b]$): $\sup _{t\in [a,b]}|1-e^{-\theta ht}|\leq e^KM|h|$, and plugging this back into $\left| \frac{e^{-ht}-1}{h}+t\right| =|t|\left| e^{-\theta ht}-1\right|$ gives $\left| \frac{e^{-ht}-1}{h}+t\right| \leq |t|\cdot e^KM|h|\leq e^KM^2|h|$.

The right-hand side $e^KM^2|h|$ depends only on $h$ and the fixed bounds $a, b$, but not on the specific choice of $t$. As $h \to 0$, this bound goes to 0 uniformly for all $t \in [a, b]$. Since $D_h(t) \to -t$ uniformly, and the factor $e^{-zt}f(t)$ is continuous on $[a, b]$, hence bounded, the product converges uniformly. Uniform convergence on a finite interval allows us to pass the limit inside the integral: $\lim _{h\rightarrow 0}\int _a^be^{-zt}f(t)\frac{e^{-ht}-1}{h}\, dt=\int _a^bf(t)e^{-zt}(-t)\, dt.$

Indeed, $e^{-zt}f(t)$ is continuous on the compact interval [a, b], hence bounded. Let’s denote $g(t)=e^{-zt}f(t)$ and $g_h(t)=g(t)D_h(t)$. Since g is continuous on a compact interval, it is bounded: $|g(t)|\leq M'$ for all $t\in [a,b]$. Uniform convergence of $D_h$ means $\sup_{t \in [a, b]}|D_h(t)+t|\rightarrow 0$. Now estimate the difference between the product and its limit: $|g_h(t)-g(t)(-t)| = |g(t)|\, |D_h(t)+t| \leq M'\sup_{t \in [a, b]}|D_h(t)+t| \rightarrow 0$, uniformly in $t$.


If $g_h\rightarrow g$ uniformly on a finite interval [a, b], and each $g_h$ is integrable, then $\lim _{h\rightarrow 0}\int _a^bg_h(t)dt =\int _a^b\lim _{h\rightarrow 0}g_h(t)dt = \int _a^bg(t)dt$.

Uniform convergence means: $\sup _{t\in [a,b]}|g_h(t)-g(t)|\rightarrow 0$.

$$ \begin{aligned} \left| \int_a^b g_h(t)dt -\int_a^b g(t)dt\right| &=\left| \int _a^b(g_h(t)-g(t))\, dt\right| \\[2pt] &\text{Triangle inequality for integrals} \\[2pt] &\leq \int_a^b |g_h(t)-g(t)|dt \\[2pt] &\text{Bound the integrand by its supremum over } [a,b] \\[2pt] &\leq (b-a) \sup_{t\in [a,b]}|g_h(t)-g(t)|. \end{aligned} $$

But the supremum goes to 0, so the whole expression goes to 0. Therefore: $\int _a^bg_h(t)dt \rightarrow \int_a^b g(t)dt$.


Since we can pass the limit inside the integral: $F'(z) = \int_a^b f(t) e^{-zt} \left( \lim_{h \to 0} \frac{e^{-ht} - 1}{h} \right) dt = \int_a^b f(t) e^{-zt} (-t) dt = - \int_a^b f(t) e^{-zt}tdt$

We must verify this integral exists and is finite. Let’s define $G(t, z) = -t f(t) e^{-zt}$

  1. f(t) is continuous on $[a, b]$, -t and $e^{-zt}$ are continuous with respect to t. Therefore, the product $G(t, z)$ is a continuous function of t on [a, b].
  2. A continuous function on a compact interval is bounded.
  3. Therefore, there exists some constant $K_z$ (depending on $z$) such that $|G(t, z)| = |-t f(t) e^{-zt}| \le K_z, \forall t \in [a, b]$.
  4. Since the integrand is continuous and bounded on a finite interval $[a, b]$, it is Riemann integrable. The integral exists and is finite.
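As a sanity check of the whole proposition, we can approximate $F$ with a simple quadrature and compare a difference quotient against the formula $F'(z) = -\int_a^b t f(t) e^{-zt}dt$ (our own numerical sketch; the choice $f(t) = \cos t$ on $[0, 1]$ is arbitrary):

```python
# Difference quotient of F(z) = int_0^1 e^(-z t) f(t) dt vs. the closed form
# F'(z) = -int_0^1 t e^(-z t) f(t) dt, both approximated by the midpoint rule.
import cmath, math

def integrate(g, a=0.0, b=1.0, n=20000):
    dt = (b - a) / n
    return sum(g(a + (k + 0.5) * dt) for k in range(n)) * dt

f = math.cos
F = lambda z: integrate(lambda t: cmath.exp(-z * t) * f(t))
dF = lambda z: integrate(lambda t: -t * cmath.exp(-z * t) * f(t))

z, h = 1 + 2j, 1e-6
print(abs((F(z + h) - F(z)) / h - dF(z)))   # small (~1e-6): the quotient matches F'(z)
```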