
Zeros of Analytic Functions

"If you find yourself in a hole, stop digging." (Will Rogers)

Topology and Limits

Introduction

Definition. Complex sequence. A sequence of complex numbers is a function $a: \mathbb{N} \to \mathbb{C}$. We usually denote it by $(a_n)_{n \in \mathbb{N}}$ or simply $(a_n)$, where $a_n := a(n)$. The value $a_1$ is called the first term of the sequence, $a_2$ the second term, and in general $a_n$ the n-th term of the sequence.

Definition. Convergent complex sequence. A complex sequence $(a_n)_{n \in \mathbb{N}}$ is said to converge to a complex number $L \in \mathbb{C}$ if for every $\varepsilon > 0$ there exists an integer $N \in \mathbb{N}$ such that for all $n \geq N$ one has $|a_n - L| < \varepsilon$. In this case we write $\lim_{n \to \infty} a_n = L$ or $a_n \to L$ as $n \to \infty$, and L is called the limit of the sequence $(a_n)_{n \in \mathbb{N}}$.
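
To make the definition concrete, here is a minimal Python sketch (purely illustrative, not part of the theory) for the assumed example $a_n = i^n/n$, which converges to $L = 0$: since $|a_n - L| = 1/n$, any $N > 1/\varepsilon$ works.

```python
# Minimal illustrative sketch: a_n = i^n / n converges to L = 0, because
# |a_n - L| = 1/n, so any N > 1/eps satisfies the definition.
L = 0
eps = 1e-3
N = int(1 / eps) + 1               # from |a_n - L| = 1/n < eps  <=>  n > 1/eps

for n in (N, 10 * N, 100 * N):
    a_n = 1j ** n / n
    print(n, abs(a_n - L), abs(a_n - L) < eps)   # the distance stays below eps
```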

Definition. Cauchy sequence. A complex sequence $(a_n)_{n \in \mathbb{N}}$ is called a Cauchy sequence if for every $\varepsilon > 0$ there exists an integer $N \in \mathbb{N}$ such that for all $n, m \geq N$ one has $|a_n - a_m| < \varepsilon$.

Definition. Series and partial sums. Let $(a_n)_{n \in \mathbb{N}}$ be a complex sequence. For each $n \in \mathbb{N}$, the finite sum $s_n := a_1 + a_2 + \cdots + a_n = \sum_{k=1}^n a_k$ is called the n-th partial sum of the (infinite) series $\sum_{k=1}^\infty a_k$, which we also denote simply by $\sum a_n$ when the index is clear from the context.

Definition. Convergent series. The series $\sum_{n=1}^{\infty} a_n$ is said to converge to the sum $s \in \mathbb{C}$ if the sequence of partial sums $(s_n)_{n \in \mathbb{N}}$ defined by $s_n = a_1 + a_2 + \cdots + a_n = \sum_{k=1}^n a_k$ converges to s, that is, $\lim_{n \to \infty} s_n = s$. In this case we write $s := \sum_{n=1}^\infty a_n$. If the sequence $(s_n)_{n \in \mathbb{N}}$ does not converge, we say that the series $\sum_{n=1}^{\infty} a_n$ diverges (or does not converge).
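
As a concrete illustration (an assumed example, not one from the text above), the geometric series $\sum_{n=0}^{\infty} z^n$ converges to $\frac{1}{1-z}$ whenever $|z| < 1$; the short Python sketch below compares a partial sum with that limit.

```python
# Illustrative sketch: partial sums of the geometric series sum_{n>=0} z^n
# converge to 1/(1 - z) whenever |z| < 1.
z = 0.3 + 0.4j                 # |z| = 0.5 < 1, so the series converges
target = 1 / (1 - z)

s = 0
term = 1                        # z^0
for n in range(60):
    s += term                   # s_n = 1 + z + ... + z^n
    term *= z
print(abs(s - target))          # partial sums approach the limit (tiny residual)
```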

Definition. A complex power series centered at 0 in the variable z is a series of the form $a_0 + a_1z + a_2z^2 + \cdots = \sum_{n=0}^\infty a_n z^n$ with coefficients $a_n \in \mathbb{C}$.

Definition. A complex power series centered at a complex number $a \in \mathbb{C} $ is an infinite series of the form: $\sum_{n=0}^\infty a_n (z - a)^n,$ where each $a_n \in \mathbb{C}$ is a coefficient, z is a complex variable, and $(z - a)^n$ is the nth power about the center.

Theorem. Given a power series $\sum_{n=0}^\infty a_n z^n$, there exists a unique value R, $0 \le R \le \infty$ (called the radius of convergence) such that:

  1. For any z with |z| < R (inside the circle), the series $\sum_{n=0}^\infty a_n z^n$ converges absolutely (this is a “green light” zone).
  2. For any z with |z| > R, the series diverges (this is a “red light” zone).

    On the circle (|z| = R), this theorem gives no information. This is the yellow light zone: the series could converge or diverge (a numerical sketch of the three zones follows this list).
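
The sketch below is an illustrative numerical reading of the three zones for the assumed example $\sum_{n \ge 1} n z^n$, whose radius of convergence is $R = 1$ (the coefficients $a_n = n$ satisfy $|a_{n+1}/a_n| \to 1$).

```python
# Illustrative sketch of the three zones for sum_{n>=1} n * z^n, whose radius of
# convergence is R = 1 (ratio test: |a_{n+1}/a_n| = (n+1)/n -> 1).
def partial_abs_sum(z, N=2000):
    return sum(n * abs(z) ** n for n in range(1, N + 1))

print(partial_abs_sum(0.9))    # |z| < R: absolute partial sums stabilise (about 90)
print(partial_abs_sum(1.1))    # |z| > R: partial sums blow up (divergence)
# On |z| = R the theorem is silent; this particular series happens to diverge at every
# point of the circle, because the terms n*z^n do not tend to 0.
```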

Differentiability of Power Series. If $f(z) = \sum_{n=0}^{\infty} a_nz^n$ for |z| < R (R > 0), then f is analytic on B(0; R) and $f'(z) = \sum_{n=1}^{\infty} na_nz^{n-1}$ for |z| < R.
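
For instance (an assumed example), applying the theorem to the geometric series $f(z) = \sum_{n \ge 0} z^n = \frac{1}{1-z}$ on $|z| < 1$ gives $f'(z) = \sum_{n \ge 1} n z^{n-1} = \frac{1}{(1-z)^2}$; the sketch below checks this numerically at one interior point.

```python
# Sketch (illustrative only): for f(z) = sum z^n = 1/(1-z) on |z| < 1, term-by-term
# differentiation gives f'(z) = sum_{n>=1} n z^{n-1}, which should equal 1/(1-z)^2.
z = 0.2 - 0.3j                          # a point well inside |z| < 1
termwise = sum(n * z ** (n - 1) for n in range(1, 200))
closed_form = 1 / (1 - z) ** 2
print(abs(termwise - closed_form))      # agreement up to truncation error
```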

Weierstrass M-test. Let $\{u_k(z)\}_{k=0}^\infty$ be a sequence of complex-valued functions defined on a set $\gamma^* \subseteq \mathbb{C}$. If there exists a sequence of non-negative real numbers $\{M_k\}_{k=0}^\infty$ such that:

  1. Bounding Condition: $|u_k(z)| \leq M_k$ for all $z \in \gamma^*$ and all $k \in \mathbb{N}$
  2. Convergence of Bound: The series $\sum_{k=0}^\infty M_k$ converges

Then, the original series $\sum_{k=0}^\infty u_k(z)$ converges uniformly on $\gamma^*$.
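
A standard illustration (chosen here for the sketch, not taken from the text): on the closed unit disk $|z| \le 1$ take $u_k(z) = z^k/k^2$ for $k \ge 1$ and $M_k = 1/k^2$, so $\sum M_k$ converges and the M-test applies. The resulting tail bound is the same at every point of the disk, as the rough numerical check below suggests.

```python
# Sketch with an assumed series: on |z| <= 1 take u_k(z) = z^k / k^2 (k >= 1), so that
# |u_k(z)| <= 1/k^2 =: M_k and sum M_k converges (to pi^2/6). The M-test then gives
# uniform convergence: the truncation error after N terms is bounded by the tail of sum M_k,
# independently of z.
N = 500
tail_bound = sum(1.0 / k ** 2 for k in range(N + 1, 200000))   # numerical proxy for sum_{k>N} M_k

def partial(z, terms):
    return sum(z ** k / k ** 2 for k in range(1, terms + 1))

# The same tail_bound controls the error at every sample point of the closed disk:
for z in (1.0, -1.0, 1j, 0.6 + 0.8j):
    error = abs(partial(z, 20000) - partial(z, N))   # long partial sum as a stand-in for the limit
    print(error <= tail_bound)                       # True for each sample point
```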

Coefficients of power series. Let $f(z) = \sum_{k=0}^\infty c_kz^k$ where this power series has radius of convergence R > 0. Then, the n-th coefficient $c_n$ can be extracted using the integral formula $c_n = \frac{1}{2\pi i} \int_{C_r} \frac{f(z)}{z^{n+1}}dz$, for $0 < r < R$ and $n \ge 0$, where $C_r$ is a circle of radius r centered at 0 and oriented positively.
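
The following Python sketch approximates this contour integral by a Riemann sum over the parametrisation $z = re^{it}$, $0 \le t \le 2\pi$, for the assumed test function $f(z) = e^z$, whose coefficients are known to be $c_n = 1/n!$.

```python
import cmath, math

# Numerical sketch of the coefficient formula with the assumed choice f(z) = exp(z),
# whose coefficients are c_n = 1/n!.  Parametrising C_r by z = r*e^{i*t}, dz = i*z dt
# turns the contour integral into an ordinary integral over [0, 2*pi], approximated
# here by a simple Riemann sum.
def coefficient(f, n, r=0.5, steps=4000):
    total = 0
    for k in range(steps):
        t = 2 * math.pi * k / steps
        z = r * cmath.exp(1j * t)
        total += f(z) / z ** (n + 1) * (1j * z)      # integrand f(z)/z^{n+1} times dz/dt
    return total * (2 * math.pi / steps) / (2j * math.pi)

for n in range(5):
    print(n, coefficient(cmath.exp, n).real, 1 / math.factorial(n))   # both columns agree
```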

Taylor’s Theorem. If f is analytic on an open disk B(a; R) (a disk of radius R centered at a), then f(z) can be represented exactly by a unique power series within that disk: $f(z) = \sum_{n=0}^{\infty}a_n (z - a)^n, \forall z \in B(a; R)$

This theorem bridges the gap between differentiability and power series. It guarantees that if a function behaves well (it is analytic) in a disk, it must also be infinitely differentiable and representable by a power series (an infinite polynomial) within that disk.

Furthermore, there exist unique constants $a_n = \frac{f^{(n)}(a)}{n!} = \frac{1}{2\pi i}\int_{C_r} \frac{f(w)}{(w-a)^{n+1}}dw$ where $C_r$ is a circle of radius r < R centered at a and oriented in the counterclockwise direction (positively oriented).
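
As a quick numerical illustration of the representation (with the assumed choice $f(z) = e^z$ and centre $a = 1$, so $a_n = f^{(n)}(1)/n! = e/n!$), a truncated Taylor series already reproduces f at a sample point of the disk:

```python
import cmath, math

# Sketch with the assumed choice f(z) = exp(z) and centre a = 1, so a_n = e/n!:
# the truncated Taylor series reproduces f(z) at a sample point z of the disk B(1; R).
a = 1.0
z = 1.0 + 0.7j
partial_sum = sum(math.e / math.factorial(n) * (z - a) ** n for n in range(25))
print(abs(partial_sum - cmath.exp(z)))      # tiny residual: the series equals f on the disk
```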

Zeros of Analytic Functions

Definition. A zero of a function f is simply a point where the function outputs the value 0: if f(a) = 0, we say a is a zero of f. The set of all zeros of f on a region G is denoted $Z(f) = \{z \in G : f(z) = 0\}$.

Connection to Taylor Series. Suppose f is analytic on a disk B(a; r), r > 0 and suppose a is a zero of f, i.e., f(a) = 0. By Taylor’s Theorem, we can write $f(z)$ as a series: $f(z) = \sum_{n=0}^{\infty} a_n(z-a)^n = a_0 + a_1(z-a) + a_2(z-a)^2 + \dots$ for |z - a| < r where $a_n = \frac{f^{(n)}(a)}{n!}$.

Because we know f(a) = 0, the first constant term must be zero ($a_0 = 0$). The series actually looks like this: $f(z) = a_1(z-a) + a_2(z-a)^2 + a_3(z-a)^3 + \dots$. There are only two cases to consider:

  1. If all the derivatives of f at a are zero, then f must be identically zero in the neighborhood of a, i.e., f is identically zero on the whole disk B(a; r), a fact unique to analytic functions. If an analytic function is flat (zero derivatives) at one point, it is flat everywhere in that neighborhood.
  2. If not all derivatives are zero, the smallest positive integer m for which the m-th derivative at a is non-zero, $f^{(m)}(a) \ne 0$, defines the order of the zero. Since the terms of index less than m vanish, the series starts at power m: $f(z) = a_m(z-a)^m + a_{m+1}(z-a)^{m+1} + \dots$ and we can factor out the common term $(z-a)^m$: $f(z) = (z-a)^m\sum_{n=m}^{\infty} a_n(z-a)^{n-m}$ for z ∈ B(a; r), where the leading coefficient $a_m = \frac{f^{(m)}(a)}{m!} \ne 0$. f is said to have a zero of order m at a.
  3. Main Idea. Locally, analytic functions behave like polynomials. Near a zero a of order m, the function looks roughly like (or the dominant term is) $f(z) ≈ a_m(z-a)^m, a_m = \frac{f^{(m)}(a)}{m!}$.
  4. If m = 1, the zero is simple; the graph crosses the axis “linearly” like a line. If m = 2, it’s a double zero; the graph touches the axis “quadratically” like a parabola. Zero of order m: the function behaves like $(z-a)^m$ near the point a; the larger m is, the more the function flattens out at the zero.
  5. Example A: Simple zeros of $\sin(z)$. The zeros are at $z = k\pi$, and $\frac{d}{dz}\sin(z) = \cos(z) \ne 0$ at $z = k\pi$. So all the zeros of $\sin(z)$ are simple (m = 1). Example B: Higher order zero, $z\sin(z)$. We know $\sin(z) = z - \frac{z^3}{3!} + \dots$. Multiply by $z$: $f(z) = z \left( z - \frac{z^3}{6} + \dots \right) = z^2 - \frac{z^4}{6} + \dots$. The lowest power of z appearing in the series is 2. Therefore, z = 0 is a zero of order 2 (a double zero); see the numerical sketch after this list.
  6. Suppose f is analytic on a disk B(a; r), r > 0, and suppose a is a zero of f of order m, so that we can factor f locally: $f(z) = (z-a)^m\sum_{n=m}^{\infty} a_n(z-a)^{n-m}$ for z ∈ B(a; r), where the leading coefficient $a_m = \frac{f^{(m)}(a)}{m!} \ne 0$. Define h(z) by the series $\sum_{n=m}^{\infty} a_n(z-a)^{n-m}$ for z ∈ B(a; r). The value at the center is the first non-zero coefficient, $h(a) = a_m \ne 0$. Since h is defined as a convergent power series in B(a; r), h is analytic, and therefore continuous.
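
Here is the numerical sketch promised in Example B above, for the assumed function $f(z) = z\sin z$: the ratio $f(z)/z^2$ tends to the leading coefficient $a_2 = 1$ as $z \to 0$, which is exactly the “behaves like $(z-a)^m$” heuristic.

```python
import cmath

# Numerical sketch for Example B (assumed example f(z) = z*sin z): the zero at a = 0
# has order m = 2 with a_2 = 1, so f(z)/z^2 should tend to 1 as z -> 0 and stay
# non-zero near 0 (it plays the role of the function h(z) in the factorization below).
def f(z):
    return z * cmath.sin(z)

for radius in (1e-1, 1e-2, 1e-3):
    z = radius * cmath.exp(0.7j)      # approach 0 along a fixed direction
    print(radius, f(z) / z ** 2)      # values approach the leading coefficient a_2 = 1
```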

Because h is continuous, if it is non-zero at a specific point, $h(a) \ne 0$, it must remain non-zero in the immediate vicinity of that point: there exists a small radius $\delta > 0$ such that $h(z) \ne 0$ for all $z \in B(a; \delta)$. This is exactly what is needed to conclude that the zeros of f are isolated.

More precisely, by continuity of h at a, for every $\varepsilon > 0$ there exists $\delta > 0$ such that $|h(z)-h(a)| < \varepsilon$ whenever $|z - a| < \delta$. Let $\varepsilon = |h(a)|$; since $h(a) \ne 0$, this is a positive number. Suppose for some $z \in B(a; \delta)$, h(z) = 0; then $|h(z) - h(a)| = |0 - h(a)| = |h(a)| < |h(a)|$, a contradiction.

Now, look at the factorization again for any point z in this small disk $B(a; \delta)$, provided $z \ne a$: $f(z) = \underbrace{(z-a)^m}_{\ne 0} \cdot \underbrace{h(z)}_{\ne 0}$. (1) $(z-a)^m$ is not zero because $z \ne a$; (2) h(z) is not zero because of the continuity argument. Therefore, the product is not zero: a is the only zero of f in $B(a; \delta)$, i.e., the zero at a is isolated.
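
A small sanity check of this conclusion (illustrative only, for the same assumed example $f(z) = z\sin z$, whose nearest other zeros lie at $z = \pm\pi$): sampling a punctured disk around $a = 0$ finds no other zero.

```python
import cmath

# Illustrative check that the zero of f(z) = z*sin z at a = 0 is isolated: on a punctured
# disk 0 < |z| <= 0.5 (well inside the distance pi to the next zero), f never vanishes.
def f(z):
    return z * cmath.sin(z)

samples = [r * cmath.exp(2j * cmath.pi * k / 36)
           for r in (0.05, 0.2, 0.5) for k in range(36)]
print(all(abs(f(z)) > 0 for z in samples))   # True: no other zero in this small disk
```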
