“Mathematics is not about numbers, equations, computations, or algorithms: it is about understanding.” (William Paul Thurston)

Definition. Complex sequence. A sequence of complex numbers is a function $a: \mathbb{N} \to \mathbb{C}$. We usually denote it by $(a_n)_{n \in \mathbb{N}}$ or simply $(a_n)$, where $a_n := a(n)$. The value $a_1$ is called the first term of the sequence, $a_2$ the second term, and in general $a_n$ the $n$-th term of the sequence.
Definition. Convergent complex sequence. A complex sequence $(a_n)_{n \in \mathbb{N}}$ is said to converge to a complex number $L \in \mathbb{C}$ if for every $\varepsilon > 0$ there exists an integer $N \in \mathbb{N}$ such that for all $n \geq N$ one has $|a_n - L| < \varepsilon$. In this case we write $\lim_{n \to \infty} a_n = L$ or $a_n \to L$ as $n \to \infty$, and L is called the limit of the sequence $(a_n)_{n \in \mathbb{N}}$.
Definition. Cauchy sequence. A complex sequence $(a_n)_{n \in \mathbb{N}}$ is called a Cauchy sequence if for every $\varepsilon > 0$ there exists an integer $N \in \mathbb{N}$ such that for all $n, m \geq N$ one has $|a_n - a_m| < \varepsilon$.
Definition. Series and partial sums. Let $(a_n)_{n \in \mathbb{N}}$ be a complex sequence. For each $n \in \mathbb{N}$, the finite sum $s_n := a_1 + a_2 + \cdots + a_n = \sum_{k=1}^n a_k$ is called the $n$-th partial sum of the (infinite) series $\sum_{k=1}^\infty a_k$, which we also denote simply by $\sum a_n$ when the index is clear from the context.
Definition. Convergent series. The series $\sum_{n=1}^{\infty} a_n$ is said to converge to the sum $s \in \mathbb{C}$ if the sequence of partial sums $(s_n)_{n \in \mathbb{N}}$ defined by $s_n = a_1 + a_2 + \cdots + a_n = \sum_{k=1}^n a_k$ converges to $s$, that is, $\lim_{n \to \infty} s_n = s$. In this case we write $s := \sum_{n=1}^\infty a_n$. If the sequence $(s_n)_{n \in \mathbb{N}}$ does not converge, we say that the series $\sum_{n=1}^{\infty} a_n$ diverges (or does not converge).
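As a quick numerical sanity check (an illustrative sketch, not part of the definitions): for the geometric series $\sum_{n=0}^\infty z^n$ with $|z| < 1$, the partial sums $s_n$ should approach the known limit $1/(1-z)$.

```python
# Partial sums of the geometric series sum_{n=0}^inf z^n, |z| < 1.
# The sequence of partial sums s_N should converge to 1/(1-z).
z = 0.5 + 0.3j            # example point with |z| < 1 (assumed for the demo)
target = 1 / (1 - z)

s, term = 0, 1 + 0j
for n in range(200):      # 200 terms are plenty at this |z|
    s += term
    term *= z

assert abs(s - target) < 1e-12
```

The same loop with $|z| \ge 1$ would show the partial sums failing to settle, matching the divergence case of the definition.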
Definition. A complex power series centered at 0 in the variable z is a series of the form $a_0 + a_1z + a_2z^2 + \cdots = \sum_{n=0}^\infty a_n z^n$ with coefficients $a_i \in \mathbb{C}$.
Definition. A complex power series centered at a complex number $a \in \mathbb{C}$ is an infinite series of the form $\sum_{n=0}^\infty a_n (z - a)^n,$ where each $a_n \in \mathbb{C}$ is a coefficient, z is a complex variable, and $(z - a)^n$ is the $n$-th power about the center.
Theorem. Given a power series $\sum_{n=0}^\infty a_n z^n$, there exists a unique value R, $0 \le R \le \infty$ (called the radius of convergence), such that:
- the series converges absolutely for every z with |z| < R;
- the series diverges for every z with |z| > R.
On the circle (|z| = R), this theorem gives no information. This is the yellow light zone: the series could converge or diverge.
Differentiability of Power Series. If $f(z) = \sum_{n=0}^{\infty} a_nz^n$ for |z| < R (R > 0), then f is analytic on B(0; R) and $f'(z) = \sum_{n=1}^{\infty} na_nz^{n-1}$ for |z| < R.
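The theorem can be checked numerically on the geometric series: $f(z) = \sum_{n=0}^\infty z^n = 1/(1-z)$ for |z| < 1, so term-by-term differentiation predicts $f'(z) = \sum_{n=1}^\infty n z^{n-1} = 1/(1-z)^2$ on the same disk. A minimal sketch (the sample point is an assumption for the demo):

```python
# Term-by-term differentiation of the geometric series:
# sum_{n>=1} n z^{n-1} should equal 1/(1-z)^2 for |z| < 1.
z = 0.2 - 0.4j                       # example point, |z| ≈ 0.45 < 1
N = 300                              # enough terms at this |z|
deriv_series = sum(n * z**(n - 1) for n in range(1, N))

assert abs(deriv_series - 1 / (1 - z)**2) < 1e-12
```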
Weierstrass M-test. Let $\{u_k(z)\}_{k=0}^\infty$ be a sequence of complex-valued functions defined on a set $\gamma^* \subseteq \mathbb{C}$. If there exists a sequence of non-negative real numbers $\{M_k\}_{k=0}^\infty$ such that:
- $|u_k(z)| \le M_k$ for all $z \in \gamma^*$ and all $k \ge 0$, and
- $\sum_{k=0}^\infty M_k < \infty$,
then the original series $\sum_{k=0}^\infty u_k(z)$ converges uniformly on $\gamma^*$.
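A small illustration (the functions $u_k(z) = z^k$ on the set $|z| \le 1/2$ are assumed for the example): with $M_k = (1/2)^k$ we have $\sum M_k = 2 < \infty$, and the tail of the series beyond index N is bounded by $\sum_{k > N} M_k = (1/2)^N$ uniformly in z.

```python
# M-test illustration: u_k(z) = z^k on |z| <= 1/2, M_k = (1/2)^k.
# The tail |sum_{k>N} z^k| must be <= sum_{k>N} M_k = (1/2)^N,
# with the SAME bound at every sample point z (uniformity).
import cmath

N = 20
points = [0.5 * cmath.exp(2j * cmath.pi * t / 8) for t in range(8)]  # on |z| = 1/2
for z in points:
    tail = abs(sum(z**k for k in range(N + 1, 2000)))
    assert tail <= 0.5**N + 1e-15    # uniform tail bound from the M-test
```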
Coefficients of power series. Let $f(z) = \sum_{k=0}^\infty c_kz^k$, where this power series has radius of convergence R > 0. Then the $n$-th coefficient $c_n$ can be extracted using the integral formula $c_n = \frac{1}{2\pi i} \int_{C_r} \frac{f(z)}{z^{n+1}}\,dz$, for $0 < r < R$ and $n \ge 0$, where $C_r$ is the circle of radius r centered at 0 and oriented positively.
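This formula can be tested numerically: parametrizing $C_r$ by $z = re^{i\theta}$ turns the contour integral into $c_n = \frac{1}{2\pi}\int_0^{2\pi} f(re^{i\theta})\, r^{-n} e^{-in\theta}\,d\theta$, which the trapezoidal rule approximates extremely well. A sketch with the example function $f(z) = e^z$ (an assumption for the demo), whose coefficients are $c_n = 1/n!$:

```python
# Extract Taylor coefficients by discretizing the contour integral
# c_n = (1/2 pi) ∫ f(r e^{it}) e^{-int} / r^n dt on M sample points.
import cmath, math

def coefficient(f, n, r=1.0, M=256):
    total = 0
    for j in range(M):
        t = 2 * math.pi * j / M
        total += f(r * cmath.exp(1j * t)) * cmath.exp(-1j * n * t)
    return total / (M * r**n)

# For f(z) = exp(z), the exact coefficients are c_n = 1/n!.
for n in range(6):
    assert abs(coefficient(cmath.exp, n) - 1 / math.factorial(n)) < 1e-12
```

For periodic analytic integrands the trapezoidal rule converges geometrically in M, which is why 256 points already reach machine precision here.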
Taylor’s Theorem. If f is analytic on an open disk B(a; R) (a disk of radius R centered at a), then f(z) can be represented exactly by a unique power series within that disk: $f(z) = \sum_{n=0}^{\infty}a_n (z - a)^n, \forall z \in B(a; R)$
This theorem bridges the gap between differentiability and power series. It guarantees that if a function behaves well (it is analytic) in a disk, it must also be infinitely differentiable and representable by a power series (an infinite polynomial) within that disk.
Furthermore, there exist unique constants $a_n = \frac{f^{(n)}(a)}{n!} = \frac{1}{2\pi i}\int_{C_r} \frac{f(w)}{(w-a)^{n+1}}dw$ where $C_r$ is a circle of radius r < R centered at a and oriented in the counterclockwise direction (positively oriented).
Local Identity Theorem. Suppose that f is analytic on a disk B(a; r), r > 0, and a is a zero of f, i.e., f(a) = 0. Then exactly one of the following two scenarios must be true:
- (the “flat” scenario) f is identically zero on B(a; r);
- (the “lonely” scenario) there exist an integer $m \ge 1$ and an analytic function h on B(a; r) with $h(a) \ne 0$ such that $f(z) = (z-a)^m h(z)$; in this case a is an isolated zero of f.
Zeros are isolated. If f is analytic on a disk B(a; R), not identically zero, and f(a) = 0, then there exists a small neighborhood around a where f(z) is never zero (except, obviously, at a itself).
Corollary. Let f be holomorphic in a domain $D \subseteq \mathbb{C}$. If a is a limit point of the zero set $Z(f)=\{ z\in D:f(z)=0\}$, then $f \equiv 0 \text{ in some neighborhood } B(a;r) \subseteq D.$
Proof.
Suppose $a \in D$ is a limit point of Z(f). Then every neighborhood of a contains a zero of f; in particular, for every $n \in \mathbb{N}$, the ball B(a; 1/n) contains some $z_n \in Z(f)$ with $z_n \neq a$.
Therefore the “lonely” (isolated-zero) scenario cannot happen, which forces the “flat” scenario: f ≡ 0 in B(a; r). The function must collapse entirely; it is identically zero in a neighborhood of a.
Theorem. Identity Theorem General Form. Let G be a region (an open, connected set) and suppose that f is analytic on G. Assume that the set of zeros Z(f) = {$z \in G: f(z) = 0$} has a limit point in G. Then, f is identically zero on G (i.e., f(z) = 0 for all $z \in G$).
It states that an analytic function cannot be zero on a cluster of points without being zero everywhere.
Proof.
Let E be the set of limit points of the zero set Z(f) that lie within G. It contains the “cluster points” of zeroes. $E = \{ a \in G : a \text{ is a limit point of } Z(f) \}$
By the theorem’s assumption, we know $E \ne \emptyset$. Let $a \in E$. By definition of a limit point, there exists a sequence of zeros $(a_n)$ converging to a, $\lim_{n \to \infty}a_n = a$. Since f is continuous (because it is analytic): $f(a) = f(\lim_{n \to \infty} a_n) =[\text{f is continuous}] \lim_{n \to \infty} f(a_n) = \lim_{n \to \infty} 0 = 0$. Thus $f(a) = 0$, so $a \in Z(f)$. Every limit point of zeros is itself a zero.
Claim: E is open (it spreads locally).
Let $a \in E$; by the previous paragraph $a \in Z(f)$, so $f(a) = 0$. Since a is a limit point of zeros, a is not an isolated zero. By the Local Identity Theorem, if a zero is not isolated, the function must be identically zero in some small neighborhood $B(a; r) \subseteq G$. If f(z) = 0 for all z in this disk, then every point inside this disk is a limit point of zeros: for any point $z_0 \in B(a; r)$, all the “close neighbors” of $z_0$ inside the disk are also zeros, so we can find a sequence of distinct zeros within the disk approaching $z_0$, making $z_0$ a limit point of zeros. Therefore the entire (open) disk $B(a; r)$ is contained in E, hence E is an open set.
Claim: E is closed (it contains its own boundaries).
To show E is closed, we will show that its complement, $G \setminus E$, is open. Pick any point $a \in G \setminus E$; then there are two possibilities for a:
- $f(a) \ne 0$: by continuity of f, there is a ball $B(a; \delta)$ on which f is never zero. This ball contains no zeros at all, so none of its points can be a limit point of zeros, and $B(a; \delta) \subseteq G \setminus E$.
- $f(a) = 0$ but a is an isolated zero: some punctured ball $B(a; \delta) \setminus \{a\}$ contains no zeros, so again no point of $B(a; \delta)$ can be a limit point of zeros, and $B(a; \delta) \subseteq G \setminus E$.
In both cases, a has a neighborhood outside E. Thus $G \setminus E$ is open, which implies E is closed.
Recall. A connected set G cannot be split into two disjoint, non-empty open sets. Equivalently, the only subsets of G that are both open and closed are the empty set $\emptyset$ and the whole set G.
Finally, since E is a non-empty, open, and closed subset of the connected set G, it must be G itself: E = G. This means every point in G is a limit point of zeros. Since every limit point of zeros is itself a zero (by continuity of the analytic function f), this implies f(z) = 0 for all $z \in G$.
Proposition. The Arithmetic of Zeroes. Let f and g be analytic on a disk B(a; r). Suppose both have a zero at a with orders $n_f$ and $n_g$ respectively. Then, the product function $(f \cdot g)(z)$ has a zero at a of order $n_f + n_g$.
You could think of analytic functions as “infinite polynomials.” If f(z) behaves like $z^2$ (order 2) and g(z) behaves like $z^3$ (order 3), then $f(z) \cdot g(z)$ behaves like $z^2 \cdot z^3 = z^5$ (order 2 + 3 = 5). This simple exponent rule works for all analytic functions.
Proof.
Recall that if a function f has a zero of order n at a, we can factor it perfectly: $f(z) = (z-a)^n h(z)$ where h(z) is analytic and $h(a) \ne 0$.
Since f and g have zeros of order $n_f$ and $n_g$ respectively, there exist analytic functions h(z) and k(z) such that $h(a) \ne 0, k(a) \ne 0$ and $f(z) = (z-a)^{n_f} h(z), g(z) = (z-a)^{n_g} k(z)$.
Now construct the product: $(f \cdot g)(z) = \left[ (z-a)^{n_f} h(z) \right] \cdot \left[ (z-a)^{n_g} k(z) \right] =\text{[group the terms by type]} (z-a)^{n_f} (z-a)^{n_g} \cdot [h(z) k(z)] = (z-a)^{n_f + n_g} \cdot [h(z) k(z)]$.
Let $H(z) = h(z)k(z)$. Then H is analytic on B(a; r) and $H(a) = h(a)k(a) \ne 0$, so $(f \cdot g)(z) = (z-a)^{n_f + n_g} H(z)$ is exactly the factored form of a zero of order $n_f + n_g$ at a.
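One way to see the exponent rule numerically (the example functions are assumptions for the demo): near a zero of order m we have $|F(z)| \approx C|z-a|^m$, so m can be estimated from the values of |F| at two small radii.

```python
# f(z) = sin z has a zero of order 1 at 0, g(z) = 1 - cos z a zero of
# order 2, so the product should vanish to order 1 + 2 = 3.
# Near a zero of order m, |F(r)| ~ C r^m, hence
# m ≈ log(|F(r1)|/|F(r2)|) / log(r1/r2) for small radii r1, r2.
import cmath, math

F = lambda z: cmath.sin(z) * (1 - cmath.cos(z))
r1, r2 = 1e-2, 1e-3
m_est = math.log(abs(F(r1)) / abs(F(r2))) / math.log(r1 / r2)

assert round(m_est) == 3
```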
Uniqueness theorem. Let G be a region (open and connected). Suppose that f and g are analytic functions on G. If f(z) = g(z) for all points in a set S and S has a limit point in G, then f and g are identical everywhere on G ($f \equiv g$).
Proof.
Consider the difference function $d(z) = f(z) - g(z)$, which is analytic on G. For all $z \in S$, d(z) = 0. Therefore the set of zeros $Z(d)$ has a limit point in G. By the Identity Theorem, $d(z)$ must be identically zero on G, i.e., f(z) = g(z) for all $z \in G$.
Counting Zeros (The Logarithmic Derivative).
This shows how we can use an integral to “count” the order (multiplicity) of a zero.
Suppose f is analytic on a disk B(a; r) and has a zero at a of order m. We assume f is not identically zero, so a is an isolated zero.
Using the Structural Definition of a zero, we can write: $f(z) = (z-a)^mh(z), \forall z \in B(a; r)$ where h is analytic (and therefore continuous) and $h(a) \ne 0$. Because $h(a) \neq 0$ and h is continuous, there is a small neighborhood $B(a; \varepsilon)$ where h(z) is never zero.
$f'(z) =[\text{Product Rule on } f(z) = (z-a)^m h(z)]\; m(z-a)^{m-1}h(z) + (z-a)^mh'(z), \forall z \in B(a; r)$
We want to analyze the fraction $\frac{f'(z)}{f(z)}$. This is often called the Logarithmic Derivative.
For $z \ne a$, $z \in B(a; \varepsilon)$, dividing the expression for $f'(z)$ by $f(z) = (z-a)^m h(z)$ gives $\frac{f'(z)}{f(z)} = \frac{m}{z-a}+\frac{h'(z)}{h(z)}$.
Now, we integrate this ratio around a small circle $C_{\varepsilon_0}$ centered at a with radius $\varepsilon_0 < \varepsilon$, oriented counterclockwise.
$\int_{C_{\varepsilon_0}}\frac{f'(z)}{f(z)} dz = \int_{C_{\varepsilon_0}}\frac{m}{z-a}dz + \int_{C_{\varepsilon_0}} \frac{h'(z)}{h(z)} dz$
$\int_{C_{\varepsilon_0}} \frac{h'(z)}{h(z)} dz$ vanishes because $\frac{h'(z)}{h(z)}$ is an analytic function (h is analytic, $h(z) \neq 0$ in this small circle, therefore the quotient is analytic inside and on the circle) on the contour $C_{\varepsilon_0}$. By Cauchy’s Theorem (integral of an analytic function on a closed loop), this integral is zero.
$\int_{C_{\varepsilon_0}}\frac{f'(z)}{f(z)} dz = m \cdot (2\pi i) \leadsto \frac{1}{2\pi i} \int_{C_{\varepsilon_0}}\frac{f'(z)}{f(z)} dz = m$
Recall. Cauchy integral formula: $f(z_{0})=\frac{1}{2\pi i}\oint_{C}\frac{f(z)}{z-z_{0}}dz$; in particular, $\oint_{C} \frac{1}{z-a}\, dz = 2\pi i$ for a positively oriented circle C around a.
$m = \frac{1}{2\pi i} \oint_{C_{\varepsilon_0}} \frac{f'(z)}{f(z)} dz$ tells us that integrating $\frac{f'}{f}$ around a zero effectively “scans” the point and “detects” the multiplicity; so this formula returns an integer equal to the order of that zero.
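The counting formula can be verified numerically (the example function is an assumption for the demo): for $f(z) = z^3 e^z$ we have $f'(z)/f(z) = 3/z + 1$, and discretizing the contour integral over the unit circle should return m = 3.

```python
# Count the order of the zero of f(z) = z^3 * exp(z) at 0 via
# m = (1/2 pi i) ∮ f'/f dz, discretized on M points of |z| = r:
# with z = r e^{it}, dz = i z dt, so the integral becomes an average.
import cmath, math

def zero_order(logderiv, M=512, r=1.0):
    total = 0
    for j in range(M):
        z = r * cmath.exp(2j * math.pi * j / M)
        total += logderiv(z) * z          # g(z) * z averages to the winding count
    return total / M

m = zero_order(lambda z: 3 / z + 1)       # f'/f for f(z) = z^3 * exp(z)
assert abs(m - 3) < 1e-10
```

The integral “scans” the point and returns the multiplicity as an integer (up to floating-point noise), exactly as the formula promises.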