The problem is not the problem. The problem is your attitude about the problem, Captain Jack Sparrow

Definition. Complex sequence. A sequence of complex numbers is a function $a: \mathbb{N} \to \mathbb{C}$. We usually denote it by $(a_n)_{n \in \mathbb{N}}$ or simply $(a_n)$, where $a_n := a(n)$. The value $a_1$ is called the first term of the sequence, $a_2$ the second term, and in general $a_n$ the $n$-th term of the sequence.
Definition. Convergent complex sequence. A complex sequence $(a_n)_{n \in \mathbb{N}}$ is said to converge to a complex number $L \in \mathbb{C}$ if for every $\varepsilon > 0$ there exists an integer $N \in \mathbb{N}$ such that for all $n \geq N$ one has $|a_n - L| < \varepsilon$. In this case we write $\lim_{n \to \infty} a_n = L$ or $a_n \to L$ as $n \to \infty$, and L is called the limit of the sequence $(a_n)_{n \in \mathbb{N}}$.
Definition. Cauchy sequence. A complex sequence $(a_n)_{n \in \mathbb{N}}$ is called a Cauchy sequence if for every $\varepsilon > 0$ there exists an integer $N \in \mathbb{N}$ such that for all $n, m \geq N$ one has $|a_n - a_m| < \varepsilon$.
Definition. Series and partial sums. Let $(a_n)_{n \in \mathbb{N}}$ be a complex sequence. For each $n \in \mathbb{N}$, the finite sum $s_n := a_1 + a_2 + \cdots + a_n = \sum_{k=1}^n a_k$ is called the $n$-th partial sum of the (infinite) series $\sum_{k=1}^\infty a_k$, which we also denote simply by $\sum a_n$ when the index is clear from the context.
Definition. Convergent series. The series $\sum_{n=1}^{\infty} a_n$ is said to converge to the sum $s \in \mathbb{C}$ if the sequence of partial sums $(s_n)_{n \in \mathbb{N}}$ defined by $s_n = a_1 + a_2 + \cdots + a_n = \sum_{k=1}^n a_k$ converges to $s$, that is, $\lim_{n \to \infty} s_n = s$. In this case we write $s := \sum_{n=1}^\infty a_n$. If the sequence $(s_n)_{n \in \mathbb{N}}$ does not converge, we say that the series $\sum_{n=1}^{\infty} a_n$ diverges (or does not converge).
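To make the definition concrete, here is a minimal numerical sketch (an assumed example, not part of the notes): the partial sums of the geometric series $\sum_{n=0}^\infty z^n$ converge to $\frac{1}{1-z}$ whenever $|z| < 1$.

```python
# Assumed example: partial sums s_n of the geometric series sum z^n
# converge to 1/(1-z) when |z| < 1.
z = 0.3 + 0.4j          # |z| = 0.5 < 1, so convergence is expected
target = 1 / (1 - z)

s = 0 + 0j
term = 1 + 0j           # z**0
for n in range(200):    # accumulate the partial sum of the first 200 terms
    s += term
    term *= z

print(abs(s - target))  # residual is essentially zero
```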
Definition. A complex power series centered at 0 in the variable z is a series of the form $a_0 + a_1z + a_2z^2 + \cdots = \sum_{n=0}^\infty a_n z^n$ with coefficients $a_n \in \mathbb{C}$.
Definition. A complex power series centered at a complex number $a \in \mathbb{C}$ is an infinite series of the form $\sum_{n=0}^\infty a_n (z - a)^n,$ where each $a_n \in \mathbb{C}$ is a coefficient, z is a complex variable, and $(z - a)^n$ is the $n$-th power about the center.
Theorem. Given a power series $\sum_{n=0}^\infty a_n z^n$, there exists a unique value $R$, $0 \le R \le \infty$ (called the radius of convergence), such that:
- if $|z| < R$, the series converges absolutely;
- if $|z| > R$, the series diverges.
On the circle $|z| = R$, this theorem gives no information. This is the yellow light zone: the series could converge or diverge.
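One common way to estimate $R$, when the limit exists, is the ratio test $R = \lim_{n \to \infty} |a_n / a_{n+1}|$. A quick sketch (assumed example) for $\sum_{n=1}^\infty \frac{z^n}{n}$, whose radius of convergence is 1:

```python
# Assumed example: the ratio |a_n / a_{n+1}| tends to R for sum z^n / n.
def ratio_estimate(n):
    a_n = 1 / n           # coefficient of z^n
    a_next = 1 / (n + 1)  # coefficient of z^(n+1)
    return a_n / a_next   # tends to R = 1 as n grows

print(ratio_estimate(10))      # close to 1.1
print(ratio_estimate(10_000))  # very close to 1
```

Incidentally, this series also illustrates the yellow light zone: on $|z| = 1$ it diverges at $z = 1$ (the harmonic series) but converges at $z = -1$ (the alternating harmonic series).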
Differentiability of Power Series. If $f(z) = \sum_{n=0}^{\infty} a_nz^n$ for |z| < R (R > 0), then f is analytic on B(0; R) and $f'(z) = \sum_{n=1}^{\infty} na_nz^{n-1}$ for |z| < R.
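As a sanity check of termwise differentiation (assumed example): for the geometric series $f(z) = \sum_{n=0}^\infty z^n = \frac{1}{1-z}$ on $|z| < 1$, the theorem gives $f'(z) = \sum_{n=1}^\infty n z^{n-1} = \frac{1}{(1-z)^2}$.

```python
# Assumed example: the termwise-differentiated geometric series
# agrees with the closed form 1/(1-z)^2 inside the unit disk.
z = 0.2 + 0.1j  # well inside |z| < 1, where R = 1
series = sum(n * z**(n - 1) for n in range(1, 300))
closed_form = 1 / (1 - z)**2
print(abs(series - closed_form))  # essentially zero
```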
Weierstrass M-test. Let $\{u_k(z)\}_{k=0}^\infty$ be a sequence of complex-valued functions defined on a set $\gamma^* \subseteq \mathbb{C}$. If there exists a sequence of non-negative real numbers $\{M_k\}_{k=0}^\infty$ such that:
- $|u_k(z)| \le M_k$ for all $z \in \gamma^*$ and every $k \ge 0$, and
- the series $\sum_{k=0}^\infty M_k$ converges,
then the original series $\sum_{k=0}^\infty u_k(z)$ converges uniformly on $\gamma^*$.
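A sketch of the hypotheses in action (assumed example): for $u_k(z) = \frac{z^k}{k^2}$ on the closed unit disk, $|u_k(z)| \le M_k := \frac{1}{k^2}$ and $\sum M_k = \frac{\pi^2}{6} < \infty$, so the M-test gives uniform convergence on $|z| \le 1$.

```python
import cmath

# Assumed example: spot-check |u_k(z)| <= M_k = 1/k^2 on the unit circle,
# where |z^k| is largest; a tiny tolerance absorbs floating-point noise.
def u(k, z):
    return z**k / k**2

bound_holds = all(
    abs(u(k, cmath.exp(1j * theta))) <= 1 / k**2 + 1e-12
    for k in range(1, 50)
    for theta in (0.0, 1.0, 2.5, 4.0)
)
partial_M = sum(1 / k**2 for k in range(1, 10_000))
print(bound_holds, partial_M)  # True, and partial_M is close to pi^2/6
```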
Coefficients of power series. Let $f(z) = \sum_{k=0}^\infty c_kz^k$ where this power series has radius of convergence $R > 0$. Then the $n$-th coefficient $c_n$ can be extracted using the integral formula $c_n = \frac{1}{2\pi i} \int_{C_r} \frac{f(z)}{z^{n+1}}dz$, for $0 < r < R$ and $n \ge 0$, where $C_r$ is a circle of radius r centered at 0 and oriented positively.
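A hedged numerical sketch (assumed example): for $f(z) = e^z$ the formula should recover $c_n = \frac{1}{n!}$. Parametrizing $C_r$ by $z = re^{it}$ turns the integral into $\frac{1}{2\pi}\int_0^{2\pi} f(re^{it})\,(re^{it})^{-n}\,dt$, which the trapezoidal rule approximates extremely well for periodic integrands.

```python
import cmath, math

# Assumed example: numerical contour integral recovering Taylor coefficients.
def coefficient(f, n, r=1.0, N=256):
    total = 0 + 0j
    for k in range(N):
        z = r * cmath.exp(2j * math.pi * k / N)
        total += f(z) / z**n  # the factor i*z from dz cancels one power of z
    return total / N

c3 = coefficient(cmath.exp, 3)
print(abs(c3 - 1 / math.factorial(3)))  # essentially zero
```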
Taylor’s Theorem. If f is analytic on an open disk B(a; R) (a disk of radius R centered at a), then f(z) can be represented exactly by a unique power series within that disk: $f(z) = \sum_{n=0}^{\infty}a_n (z - a)^n, \forall z \in B(a; R)$
This theorem bridges the gap between differentiability and power series. It guarantees that if a function behaves well (it is analytic) in a disk, it must also be infinitely differentiable and representable by a power series (an infinite polynomial) within that disk.
Furthermore, there exist unique constants $a_n = \frac{f^{(n)}(a)}{n!} = \frac{1}{2\pi i}\int_{C_r} \frac{f(w)}{(w-a)^{n+1}}dw$ where $C_r$ is a circle of radius r < R centered at a and oriented in the counterclockwise direction (positively oriented).
The Local Mapping Theorem. Suppose that f is analytic at $z_0$, that $f(z_0) = w_0$, and that $f(z)-w_0$ has a zero of order n at $z_0$. Then, for all sufficiently small $\varepsilon > 0$, there exists a corresponding $\delta > 0$ such that for every $a \in B(w_0; \delta) \setminus \{w_0\}$ (that is, $0 < |a - w_0| < \delta$), the equation $f(z) = a$ has exactly n roots inside the disk $B(z_0; \varepsilon) = \{z : |z - z_0| < \varepsilon\}$.
Corollary. Open mapping theorem. A non-constant analytic function maps open sets to open sets.
Every point $a$ near the center (except the center itself $w_0$) comes from exactly $n$ points in the domain. Essentially, it shows that an analytic function behaves like a polynomial near a point, mapping a small neighborhood to another small neighborhood in an n-to-1 fashion.
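The simplest instance (assumed example) is $f(z) = z^2$ at $z_0 = 0$, where $f(z) - 0$ has a zero of order 2: every small $a \ne 0$ has exactly two preimages $\pm\sqrt{a}$ in a small disk around 0.

```python
import cmath

# Assumed example: the 2-to-1 behavior of f(z) = z^2 near z_0 = 0.
a = 0.01 * cmath.exp(0.7j)  # a small nonzero target value
root = cmath.sqrt(a)
preimages = [root, -root]   # the two solutions of z^2 = a

print(all(abs(p**2 - a) < 1e-12 for p in preimages))  # True: both are roots
print(all(abs(p) < 0.2 for p in preimages))           # True: both near z_0 = 0
```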
Inverse Function Theorem. Let f be analytic and one-to-one (injective) on an open set G. Then the inverse function $f^{-1}$ is analytic on the image set f(G), and its derivative is given by $(f^{-1})'(b) = \frac{1}{f'(f^{-1}(b))}$ for all $b \in f(G)$, where $b = f(a)$ for some $a \in G$.
Proof.
👣 Does $f^{-1}$ exist? Since f is one-to-one on G, it is indeed a bijection from G to its image f(G). Therefore, the inverse function is well-defined: $f^{-1}: f(G) \to G$.
Recall. Open mapping theorem. A non-constant analytic function maps open sets to open sets.
Recall. Topological Definition of Continuity. A function $h: X \to Y$ is continuous if and only if for every open set V in the target space Y, the pre-image $h^{-1}(V)$ is an open set in the source space X.
Continuous functions pull open sets back to open sets.
👣 Is $f^{-1}$ continuous? We will prove that the pre-image under $f^{-1}$ of an arbitrary open set V in G is open in f(G).
Let V be an arbitrary open set in G.
We know f is a non-constant analytic function. The Open Mapping Theorem tells us that f maps open sets to open sets. Since V is open, f(V) is open. Since f is a one-to-one correspondence from V to f(V), the “inverse of the inverse” is just the original function, so $f(V) = (f^{-1})^{-1}(V)$. Hence $(f^{-1})^{-1}(V)$ is open in f(G). Conclusion: the pre-image of V under $f^{-1}$ is open in f(G), so $f^{-1}$ is continuous.
👣 Is $f^{-1}$ differentiable? (Analyticity)
Let $a \in G$ with $f(a) = b$; then $f^{-1}(b) = a$. By a previous proposition, since f is analytic and one-to-one, f is conformal, so $f'(f^{-1}(b)) \ne 0$.
Since $f^{-1}$ is continuous, $\lim_{z \to b} f^{-1}(z) = f^{-1}(b)$. We want to compute the derivative of the inverse function $f^{-1}(z)$ at b:
$$ \begin{aligned} \lim_{z \to b} \frac{f^{-1}(z) - f^{-1}(b)}{z-b} &=\lim_{z \to b} \frac{f^{-1}(z) - f^{-1}(b)}{f(f^{-1}(z))-f(f^{-1}(b))} \\[2pt] &\text{f is one-to-one and } f^{-1} \text{ is continuous} \\[2pt] &=\lim_{f^{-1}(z) \to f^{-1}(b)} \frac{1}{\frac{f(f^{-1}(z))-f(f^{-1}(b))}{f^{-1}(z) - f^{-1}(b)}} \\[2pt] &\text{Since f is conformal, }f'(f^{-1}(b)) \ne 0 \\[2pt] &=\frac{1}{f'(f^{-1}(b))} \end{aligned} $$The limit exists, so $f^{-1}$ is differentiable (hence analytic) at $b = f(a)$; and since f is onto f(G), every point in f(G) is $b = f(a)$ for some a, so $f^{-1}$ is analytic on f(G) and $(f^{-1})'(b) = \frac{1}{f'(f^{-1}(b))}$.
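A quick numerical sketch of the formula (assumed example): $f(z) = e^z$ is injective on the strip $|\mathrm{Im}\,z| < \pi$, with inverse $f^{-1}(w) = \mathrm{Log}\,w$ (principal logarithm), and the theorem predicts $(f^{-1})'(b) = \frac{1}{e^{\mathrm{Log}\,b}} = \frac{1}{b}$.

```python
import cmath

# Assumed example: finite-difference check of (f^{-1})'(b) = 1/b for
# f(z) = e^z, f^{-1}(w) = Log w (principal branch).
b = 2.0 + 1.0j
h = 1e-6
numeric = (cmath.log(b + h) - cmath.log(b)) / h  # forward difference
predicted = 1 / b                                # 1 / f'(f^{-1}(b))
print(abs(numeric - predicted))                  # small finite-difference error
```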
The maximum principle. Let f be a function that is analytic and non-constant on a region (an open connected set) G. Then, the absolute value |f(z)| cannot have a maximum at any point inside G.
Imagine the graph of the absolute value $|f(z)|$ as a landscape or a tent sheet. The maximum principle states that if the function is “well behaved” and “smooth” (analytic) and not flat (constant), you will never find a mountain peak strictly inside the region. The true maximum height can only be found at the edges (the boundary) of the region.
Proof by Contradiction.
Assume there exists a point $z_0$ (a maximum) strictly inside the region G such that $|f(z_0)|$ is the maximum value, $|f(z_0)| \ge |f(z)|, \forall z \text{ near } z_0$
Let $w_0 = f(z_0)$ be the value at this peak. Since G is a region, it is an open set, so there is some “elbow room” (safety bubble) around $z_0$: we can find a small radius $r > 0$ such that the entire ball (disk) fits inside G. Formally, $\exists r > 0$ such that $\mathbb{B}(z_0; r) \subseteq G$.
Now we look at the image of this small ball under the function f. We invoke the Open Mapping Theorem. Non-constant analytic functions map open sets to open sets. Since $\mathbb{B}(z_0; r)$ is an open set, its image $f(\mathbb{B}(z_0; r))$ must also be an open set.
Because the image is open and contains $w_0$, there must be some “elbow room” or safety bubble (a small disk of radius $\delta$) around $w_0$ that fits entirely inside the image: $\mathbb{B}(w_0; \delta) = \mathbb{B}(f(z_0); \delta) \subseteq f(\mathbb{B}(z_0; r))$.
💡 The function f takes the neighborhood of $z_0$ and spreads it out to cover a neighborhood of $w_0$ in all directions.
In such a bubble, $|f(z_0)|$ cannot be the maximum value of |f(z)|: you can always move slightly outward from $w_0$ (radially away from the origin) to find a point with a larger modulus, e.g., $w_{\text{new}} = w_0 + \frac{\delta}{2} \frac{w_0}{|w_0|}$ (if $w_0 = 0$, any nonzero point of the bubble works). Since $w_{\text{new}}$ lies in $f(\mathbb{B}(z_0; r))$, some z in $\mathbb{B}(z_0; r)$ satisfies $|f(z)| > |f(z_0)|$, contradicting the assumption that $z_0$ is a maximum.
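A numerical illustration of the principle itself (assumed example): for $f(z) = e^z$ on the closed unit disk, $|e^z| = e^{\mathrm{Re}\,z}$, so the maximum $e^1$ is attained on the boundary at $z = 1$, and every interior circle stays strictly below it.

```python
import cmath, math

# Assumed example: |e^z| sampled on the unit circle vs. a smaller circle.
N = 1000
boundary_max = max(
    abs(cmath.exp(cmath.exp(2j * math.pi * k / N))) for k in range(N)
)
interior_max = max(
    abs(cmath.exp(0.99 * cmath.exp(2j * math.pi * k / N))) for k in range(N)
)
print(boundary_max > interior_max)  # True: the peak sits on the boundary
```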
Schwarz’s Lemma. Suppose f is analytic on $\mathbb{B}(0; r)$, $f(0) = 0$, and $|f(z)| \le M, \forall z \in \overline{\mathbb{B}(0; r)}$. Then $|f(z)| \le \frac{M}{r}|z|, \forall |z| \le r.$ If equality holds, $|f(z_0)| = \frac{M}{r}|z_0|$, at some point $z_0$ inside the disk with $0 < |z_0| < r$, then there is a real constant m such that $f(z) = \frac{M}{r}e^{im}z$.
If an analytic function f on a disk of radius r is pinned at the origin (f(0) = 0) and bounded by a ceiling $|f(z)| \le M$, then |f(z)| is squeezed under a line: $|f(z)| \le \frac{M}{r}|z|$. If the line is actually touched at some interior point $z_0 \ne 0$, i.e., $|f(z_0)| = \frac{M}{r}|z_0|$, then f must be a rotation of a linear function.
Proof
We cannot apply the Maximum Modulus Principle to f(z) directly to get the result we want. We need to “peel off” or “remove” the zero at the origin.
Since f is analytic near 0, it has a Taylor series expansion: $f(z) = a_0 + a_1z + a_2z^2 + a_3z^3 + \dots$ with radius of convergence at least r. Because we are given $f(0) = 0$, the constant term $a_0$ must be 0. $f(z) = 0 + a_1z + a_2z^2 + a_3z^3 + \dots =[\text{We can factor out a z from every term:}] z \cdot \underbrace{(a_1 + a_2z + a_3z^2 + \dots)}_{g(z)}$
Motivated by this, let’s define $g(z) = \begin{cases} \frac{f(z)}{z} & z \neq 0 \\ f'(0) & z = 0 \end{cases}$. Notice that $g(z) = a_1 + a_2z + a_3z^2 + \dots$ for $z \ne 0$, and the same formula holds at 0 because $g(0) = f'(0) = a_1$.
A power series converges inside its radius of convergence. Since the original series for f(z) converges for |z| < r, this new series for g(z) also converges for $|z| < r$. Any convergent power series defines an analytic function.
We apply the Maximum Modulus Principle to g(z) on a slightly smaller disk of radius $r_1$ (where $r_1 < r$). It states that the maximum of $|g(z)|$ occurs on the boundary $|z| = r_1$.
Max Modulus Principle, weaker version. If a function f(z) is analytic and not constant in a connected open set G, then the modulus (absolute value) of the function, |f(z)|, has no local or global maximum point within G. Max Modulus Principle, strong version. If a function f(z) is continuous on a closed and bounded region R and is analytic (non-constant) in its interior, then the maximum value of |f(z)| in R must occur on the boundary of R.
For any point z on this boundary circle ($|z| = r_1$): $|g(z)| = \left| \frac{f(z)}{z} \right| = \frac{|f(z)|}{|z|} \le[\text{$|f(z)| \le M$ by hypothesis}] \frac{M}{r_1}$.
Since the maximum on the boundary is $\frac{M}{r_1}$, the Maximum Modulus Principle guarantees that for all z inside this smaller disk: $|g(z)| \le \frac{M}{r_1}$
We have the bound $|g(z)| \le \frac{M}{r_1}, \forall z \in \overline{\mathbb{B}(0; r_1)}$ for any radius $r_1 < r$. We can choose $r_1$ to be as close to r as we want, and as we push $r_1 \to r^-$, the value $\frac{M}{r_1}$ approaches $\frac{M}{r}$. Therefore, for any fixed z inside the disk, $|g(z)| \le \frac{M}{r}$, and so $|g(z)| \le \frac{M}{r}, \forall z \in \overline{\mathbb{B}(0; r)}$. Notice that for $|z| = r$ (points on the boundary circle), the inequality follows directly from the definition of g and the hypothesis on $|f|$: $|f(z)| \le M$.
Recall that $f(z) = z \cdot g(z) \leadsto[\text{Taking the absolute value}] |f(z)| = |z| \cdot |g(z)|$.
Substitute the previous bound for |g(z)|, $|f(z)| \le |z| \cdot \frac{M}{r}, \forall z \in \overline{\mathbb{B}(0; r)}$
What happens if “equality holds”? If $|f(z_0)| = \frac{M}{r}|z_0|, \text{ for some } z_0 \in \mathbb{B}(0; r), z_0 \ne 0$, $|f(z_0)| = \frac{M}{r}|z_0| \implies |z_0| \cdot |g(z_0)| = \frac{M}{r}|z_0|$
Divide by $|z_0|$ (which is non-zero): $|g(z_0)| = \frac{M}{r}$.
We previously established that $|g(z)| \le \frac{M}{r}$ for all $|z| \le r$. So $\frac{M}{r}$ is the maximum possible value (the ceiling) for $|g(z)|$. However, we just found a point $z_0$ strictly inside the disk where g actually hits this ceiling.
The Maximum Modulus Principle states that a non-constant analytic function cannot reach its maximum modulus inside a domain (connected, open set G). Therefore, g is a constant function, g(z) = c, and we already know the magnitude of this constant, $|c| = \frac{M}{r}$
Any complex number c of modulus $\frac{M}{r}$ can be written in polar form as $c = \frac{M}{r} e^{im}$ for some real number m (the angle or “phase”). Finally, substitute g(z) = c back into our function definition: $f(z) = z \cdot g(z) = z \cdot \left( \frac{M}{r} e^{im} \right) = \frac{M e^{im}}{r} z$
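A closing numerical sketch of the inequality (assumed example): $f(z) = \sin z$ is analytic with $f(0) = 0$, and on the closed unit disk $|\sin z| \le M := \sinh(1)$ (the maximum is attained at $z = \pm i$), so the lemma predicts $|\sin z| \le \sinh(1)\,|z|$ for all $|z| \le 1$.

```python
import cmath, math, random

# Assumed example: random spot-check of |sin z| <= sinh(1) * |z| on |z| <= 1.
M = math.sinh(1)  # sup of |sin z| on the closed unit disk (at z = ±i)
random.seed(42)
ok = True
for _ in range(2000):
    z = random.uniform(-1, 1) + 1j * random.uniform(-1, 1)
    if abs(z) <= 1:
        ok = ok and abs(cmath.sin(z)) <= M * abs(z) + 1e-12
print(ok)  # True
```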