"The problem is not the problem. The problem is your attitude about the problem." (Captain Jack Sparrow)

Definition. Complex sequence. A sequence of complex numbers is a function $a: \mathbb{N} \to \mathbb{C}$. We usually denote it by $(a_n)_{n \in \mathbb{N}}$ or simply $(a_n)$, where $a_n := a(n)$. The value $a_1$ is called the first term of the sequence, $a_2$ the second term, and in general $a_n$ the $n$-th term of the sequence.
Definition. Convergent complex sequence. A complex sequence $(a_n)_{n \in \mathbb{N}}$ is said to converge to a complex number $L \in \mathbb{C}$ if for every $\varepsilon > 0$ there exists an integer $N \in \mathbb{N}$ such that for all $n \geq N$ one has $|a_n - L| < \varepsilon$. In this case we write $\lim_{n \to \infty} a_n = L$ or $a_n \to L$ as $n \to \infty$, and $L$ is called the limit of the sequence $(a_n)_{n \in \mathbb{N}}$.
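A small worked example (standard, not tied to anything above): the sequence $a_n = \frac{i^n}{n}$ converges to 0. Indeed, $|a_n - 0| = \frac{1}{n}$, so given $\varepsilon \gt 0$ we may choose any integer $N \gt \frac{1}{\varepsilon}$; then for all $n \ge N$ we have $|a_n - 0| = \frac{1}{n} \le \frac{1}{N} \lt \varepsilon$.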
Definition. Cauchy sequence. A complex sequence $(a_n)_{n \in \mathbb{N}}$ is called a Cauchy sequence if for every $\varepsilon > 0$ there exists an integer $N \in \mathbb{N}$ such that for all $n, m \geq N$ one has $|a_n - a_m| < \varepsilon$.
Definition. Series and partial sums. Let $(a_n)_{n \in \mathbb{N}}$ be a complex sequence. For each $n \in \mathbb{N}$, the finite sum $s_n := a_1 + a_2 + \cdots + a_n = \sum_{k=1}^n a_k$ is called the $n$-th partial sum of the (infinite) series $\sum_{k=1}^\infty a_k$, which we also denote simply by $\sum a_n$ when the index is clear from the context.
Definition. Convergent series. The series $\sum_{n=1}^{\infty} a_n$ is said to converge to the sum $s \in \mathbb{C}$ if the sequence of partial sums $(s_n)_{n \in \mathbb{N}}$ defined by $s_n = a_1 + a_2 + \cdots + a_n = \sum_{k=1}^n a_k$ converges to $s$, that is, $\lim_{n \to \infty} s_n = s$. In this case we write $s := \sum_{n=1}^\infty a_n$. If the sequence $(s_n)_{n \in \mathbb{N}}$ does not converge, we say that the series $\sum_{n=1}^{\infty} a_n$ diverges (or does not converge).
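A standard example to make the definition concrete: take $a_n = q^{n-1}$ with $|q| \lt 1$ (a geometric series). Then $s_n = 1 + q + \cdots + q^{n-1} = \frac{1 - q^n}{1 - q}$, and since $q^n \to 0$ when $|q| \lt 1$, the partial sums converge and $\sum_{n=1}^\infty q^{n-1} = \frac{1}{1-q}$.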
Definition. A complex power series centered at 0 in the variable $z$ is a series of the form $a_0 + a_1z + a_2z^2 + \cdots = \sum_{n=0}^\infty a_n z^n$ with coefficients $a_n \in \mathbb{C}$.
Definition. A complex power series centered at a complex number $a \in \mathbb{C}$ is an infinite series of the form $\sum_{n=0}^\infty a_n (z - a)^n,$ where each $a_n \in \mathbb{C}$ is a coefficient, $z$ is a complex variable, and $(z - a)^n$ is the $n$-th power about the center.
Theorem. Given a power series $\sum_{n=0}^\infty a_n z^n$, there exists a unique value $R$, $0 \le R \le \infty$ (called the radius of convergence), such that the series converges absolutely for every $z$ with $|z| \lt R$ and diverges for every $z$ with $|z| \gt R$.
On the circle $|z| = R$ itself, the theorem gives no information. This is the yellow light zone: the series could converge or diverge there.
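A few standard illustrations (computed with the ratio test, which is not stated above): the geometric series $\sum_{n=0}^\infty z^n$ has $R = 1$; the exponential series $\sum_{n=0}^\infty \frac{z^n}{n!}$ has $R = \infty$ (it converges absolutely for every $z$); and $\sum_{n=0}^\infty n!\, z^n$ has $R = 0$ (it converges only at $z = 0$).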
Differentiability of Power Series. If $f(z) = \sum_{n=0}^{\infty} a_nz^n$ for |z| < R (R > 0), then f is analytic on B(0; R) and $f'(z) = \sum_{n=1}^{\infty} na_nz^{n-1}$ for |z| < R.
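As a quick illustration: since $\frac{1}{1-z} = \sum_{n=0}^\infty z^n$ for $|z| \lt 1$ (radius $R = 1$), the theorem allows term-by-term differentiation and gives $\frac{1}{(1-z)^2} = \sum_{n=1}^\infty n z^{n-1}$ on the same disk $|z| \lt 1$.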
Weierstrass M-test. Let $\{u_k(z)\}_{k=0}^\infty$ be a sequence of complex-valued functions defined on a set $\gamma^* \subseteq \mathbb{C}$. Suppose there exists a sequence of non-negative real numbers $\{M_k\}_{k=0}^\infty$ such that $|u_k(z)| \le M_k$ for all $z \in \gamma^*$ and all $k \ge 0$, and such that the series $\sum_{k=0}^\infty M_k$ converges.
Then, the original series $\sum_{k=0}^\infty u_k(z)$ converges uniformly on $\gamma^*$.
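A typical application (a standard example, with the index starting at $k = 1$): on the closed unit disk $|z| \le 1$, the functions $u_k(z) = \frac{z^k}{k^2}$ satisfy $|u_k(z)| \le \frac{1}{k^2} =: M_k$, and $\sum_{k=1}^\infty \frac{1}{k^2}$ converges, so $\sum_{k=1}^\infty \frac{z^k}{k^2}$ converges uniformly on the closed disk.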
Coefficients of power series. Let $f(z) = \sum_{k=0}^\infty c_k z^k$, where this power series has radius of convergence $R > 0$. Then the $n$-th coefficient $c_n$ can be extracted using the integral formula $c_n = \frac{1}{2\pi i} \int_{C_r} \frac{f(z)}{z^{n+1}}\,dz$, for $0 \lt r \lt R$ and $n \ge 0$, where $C_r$ is the circle of radius $r$ centered at 0 and oriented positively.
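A quick sanity check (assuming we may integrate the series term by term, which uniform convergence on $C_r$ for $r \lt R$ justifies): substituting the series gives $\frac{1}{2\pi i} \int_{C_r} \frac{f(z)}{z^{n+1}}\,dz = \sum_{k=0}^\infty c_k \cdot \frac{1}{2\pi i} \int_{C_r} z^{k-n-1}\,dz$, and since $\frac{1}{2\pi i} \int_{C_r} z^{m}\,dz$ equals 1 when $m = -1$ and 0 for every other integer $m$, only the $k = n$ term survives and the sum collapses to $c_n$.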
Taylor’s Theorem. If f is analytic on an open disk B(a; R) (a disk of radius R centered at a), then f(z) can be represented exactly by a unique power series within that disk: $f(z) = \sum_{n=0}^{\infty}a_n (z - a)^n, \forall z \in B(a; R)$
This theorem bridges the gap between differentiability and power series. It guarantees that if a function behaves well (it is analytic) in a disk, it must also be infinitely differentiable and representable by a power series (an infinite polynomial) within that disk.
Furthermore, there exist unique constants $a_n = \frac{f^{(n)}(a)}{n!} = \frac{1}{2\pi i}\int_{C_r} \frac{f(w)}{(w-a)^{n+1}}dw$ where $C_r$ is a circle of radius r < R centered at a and oriented in the counterclockwise direction (positively oriented).
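A standard example: for $f(z) = e^z$, which is analytic on all of $\mathbb{C}$, every derivative is $f^{(n)}(z) = e^z$, so $f^{(n)}(0) = 1$ and the expansion about $a = 0$ is $e^z = \sum_{n=0}^\infty \frac{z^n}{n!}$, valid on every disk $B(0; R)$ and hence on the whole plane.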
The Local Mapping Theorem. Suppose that f is analytic at $z_0$, that $f(z_0) = w_0$, and that $f(z)-w_0$ has a zero of order n at $z_0$. Then, for all sufficiently small $\varepsilon \gt 0$, there exists a corresponding $\delta \gt 0$ such that for every $a \in B(w_0; \delta) \setminus \{w_0\}$ (that is, $0 \lt |a - w_0| \lt \delta$), the equation f(z) = a has exactly n roots inside the disk $B(z_0; \varepsilon) = \{z : |z - z_0| \lt \varepsilon\}$.
Every point $a$ near the center (except the center itself $w_0$) comes from exactly $n$ points in the domain. Essentially, it shows that an analytic function behaves like a polynomial near a point, mapping a small neighborhood to another small neighborhood in an n-to-1 fashion.
Proof.
Notice that analytic at a point $z_0$ means that the function is differentiable at $z_0$ and also at every point in some neighborhood (disk) surrounding $z_0$.
We are looking at the function $h(z) = f(z) - w_0$. We know $h(z_0) = 0$. Since zeroes of non-zero analytic functions are isolated, there is a small disk around $z_0$ where no other zeroes exist. In the closed disk $\overline{B(z_0; \varepsilon)}$, $f(z) = w_0$ only at the center $z_0$ ($z_0$ is the only zero of $h$ in this closed disk).
Moreover, f is analytic on the whole disk $|z-z_0| \lt \varepsilon$.
Let $\gamma$ be the boundary circle of this disk: $|z - z_0| = \varepsilon$, a circle of center $z_0$ and radius $\varepsilon$ oriented positively. On this boundary $\gamma$, we know $f(z) \neq w_0$ (because the zero is only at the center).
Let $\Gamma$ be the image of this boundary: $\Gamma = f(\gamma)$.
Because $f(z) \neq w_0$ on the boundary $\gamma$, the image curve $\Gamma$ does not pass through $w_0$. Therefore, $w_0$ is in the “open complement” $\mathbb{C} \setminus \Gamma^*$.
Since the curve $\Gamma$ is a closed set and does not touch $w_0$, there must be some “breathing room” around $w_0$ ($\mathbb{C} \setminus \Gamma^*$ is open). There exists a neighborhood around $w_0$ (a small radius $\delta \gt 0$) such that the disk $B(w_0; \delta)$ does not intersect $\Gamma$. This means the entire little disk $B(w_0; \delta)$ lies inside the same connected component as $w_0$.
Now, pick any target value inside this small disk (neighborhood): $a \in B(w_0; \delta)$.
By the argument principle, the number of solutions of f(z) = a inside $\gamma$ (counted with multiplicity) equals the winding number $n(\Gamma; a)$. Since $a$ and $w_0$ lie in the same connected component of $\mathbb{C} \setminus \Gamma^*$, we have $n(\Gamma; a) = n(\Gamma; w_0)$, and $n(\Gamma; w_0)$ counts the zeroes of $f(z) - w_0$ inside $\gamma$, which is exactly $n$ (the single zero $z_0$ of order n). Therefore f(z) = a has exactly $n$ roots (counted with multiplicity) inside $B(z_0; \varepsilon)$.
Note. If the order n > 1, the derivative $f'(z)$ has a zero at $z_0$ (indeed of order $n-1$). We can write $f(z)-w_0=(z-z_0)^n g(z)$, where g is analytic in a neighborhood of $z_0$ and $g(z_0)\neq 0$. Differentiating, $f'(z) = n (z - z_0)^{n-1} g(z) + (z - z_0)^n g'(z) = (z - z_0)^{n-1} \bigl[ n g(z) + (z - z_0) g'(z) \bigr]$. Define $h(z) = n g(z) + (z - z_0) g'(z)$. Since g is analytic at $z_0$, h is also analytic at $z_0$. Moreover, $h(z_0) = n g(z_0) \neq 0$. Thus $f'(z) = (z - z_0)^{n-1} h(z)$, which means that $f'(z)$ has a zero of order $n - 1$ at $z_0$.
Zeroes of analytic functions are isolated. Therefore, in a sufficiently small disk around $z_0$, the derivative $f'(z)$ is never zero except at the center $z_0$. If we pick a target $a \neq w_0$, the roots of $f(z) = a$ lie in this region where $f'(z) \neq 0$. If $f(z_k) = a$ and $f'(z_k) \neq 0$, then $z_k$ is a simple root (multiplicity 1). Since the sum of multiplicities must be n and each root counts for 1, there must be exactly $n$ distinct roots.
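A concrete illustration: take $f(z) = z^2$ at $z_0 = 0$, so $w_0 = 0$ and $f(z) - w_0 = z^2$ has a zero of order $n = 2$. For any small $a \neq 0$, the equation $z^2 = a$ has exactly the two distinct roots $\pm\sqrt{a}$, both of modulus $\sqrt{|a|}$ and hence close to 0, matching the 2-to-1 behaviour the theorem predicts near the center.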
Corollary. Open mapping theorem. A non-constant analytic function maps open sets to open sets.
Proof.
Let f be a non-constant analytic function on an open set G.
Claim: f(G) is open. Let $a \in G$; we want to show that there is a safety bubble (a small disk) around f(a) that stays entirely inside f(G). More formally, there exists $\delta \gt 0$ such that $B(f(a); \delta) \subseteq f(G)$; equivalently, every $w \in B(f(a); \delta)$ has a pre-image (a starting point) $z_0 \in G$ via f, with $f(z_0) = w$.
Recall the Local Mapping Theorem. Suppose that f is analytic at $z_0$, that $f(z_0) = w_0$, and that $f(z)-w_0$ has a zero of order n at $z_0$. Then, for all sufficiently small $\varepsilon \gt 0$, there exists a corresponding $\delta \gt 0$ such that for every $a \in B(w_0; \delta) \setminus \{w_0\}$ (that is, $0 \lt |a - w_0| \lt \delta$), the equation f(z) = a has exactly n roots inside the disk $B(z_0; \varepsilon) = \{z : |z - z_0| \lt \varepsilon\}$.
Since f is non-constant, $f(z) - f(a)$ has a zero of some finite order $n \ge 1$ at a. By the Local Mapping Theorem, we can choose a sufficiently small $\varepsilon \gt 0$ (small enough that $\overline{B(a; \varepsilon)} \subseteq G$) and a corresponding $\delta \gt 0$ such that every value $w \in B(f(a); \delta)$ with $w \neq f(a)$ is hit by f at exactly n points (and in particular at least one) inside the disk of radius $\varepsilon$ around a; the value $w = f(a)$ itself is hit at $z = a$. Therefore every $w \in B(f(a); \delta)$ belongs to the image set f(G). Since every point of the disk is in the image, the entire disk is contained in the image: $B(f(a); \delta) \subseteq f(B(a; \varepsilon)) \subseteq f(G)$. Hence the set $f(G)$ is open.
Remark. If the zero of $f(z)-w_0$ is simple at $z_0$ (i.e., n = 1), then f is a local bijection. There is a one-to-one and onto correspondence between an open neighborhood U of $z_0$ and an open neighborhood V of $w_0$. This means an inverse function $f^{-1}$ exists on that neighborhood.
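A standard example of why this is only local: $f(z) = e^z$ has $f'(z) = e^z \neq 0$ everywhere, so the zero of $f(z) - f(z_0)$ at any $z_0$ is simple (n = 1) and $e^z$ is a local bijection around every point, with a branch of the logarithm as the local inverse; yet it is not globally injective, since $e^{z + 2\pi i} = e^z$.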
Definition. A conformal map is a function $f: U\subseteq \mathbb{C}\rightarrow \mathbb{C}$ that preserves angles between curves at every point in its domain. More precisely, if two smooth curves intersect at a point $z_0$, then their images under f intersect at $f(z_0)$ with the same angle (including orientation).
Proposition. If f is analytic (holomorphic) on a domain U, then at any point $z_0$ where $f'(z_0)\neq 0$, the function is conformal.
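As an illustration: $f(z) = z^2$ has $f'(z) = 2z$, so it is conformal at every $z_0 \neq 0$; at the origin, where the derivative vanishes, angles are doubled rather than preserved: the rays $\arg z = 0$ and $\arg z = \pi/4$ meet at angle $\pi/4$ at 0, but their images $\arg w = 0$ and $\arg w = \pi/2$ meet at angle $\pi/2$.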
Proposition. Let G be an open set and let f be analytic and one-to-one in G. Then, f is conformal in G.
One-to-One (Injective): $f(z_1) = f(z_2) \implies z_1 = z_2$. The function never maps two different points to the same spot. In other words, it never “overlaps” itself.
Proof.
Claim: f is conformal in G, or equivalently, $f'(z) \neq 0$ for all $z \in G$. We will assume the derivative is zero at some point and show that this forces the function to overlap itself (proof by contradiction).
Suppose there exists a point $a \in G$ where the derivative vanishes: $f'(a) = 0$.
Let $w_0 = f(a)$. Consider the Taylor expansion of $f(z) - w_0$ around a: $f(z) - w_0 = c_1(z-a) + c_2(z-a)^2 + \dots$, where $c_1 = f'(a) = 0$ by our assumption. Therefore, the first non-zero term must be at least the second power ($c_2$) or higher. This means the function has a zero of order $n \ge 2$ at a.
We established in the Local Mapping Theorem that if the order is n, then there is an $\varepsilon$-neighborhood around a and a corresponding $\delta$-neighborhood around $f(a) = w_0$ such that for every target value $w \in B(w_0; \delta), w \ne w_0$, the equation f(z) = w has exactly n distinct roots in $B(a; \varepsilon)$ (the value w is “hit” n times).
Since $n \ge 2$, the equation $f(z) = w$ has at least 2 distinct roots, say $z_1$ and $z_2$ with $z_1 \neq z_2$, where $f(z_1) = f(z_2) = w$. This means f maps two different points to the same location, so f is not one-to-one (not injective). This contradicts the theorem's hypothesis that f is one-to-one on G; therefore our initial assumption that $f'(a) = 0$ must be false, and $f'(z) \neq 0$ for all $z \in G$.
Recall: Proposition. If f is analytic (holomorphic) on a domain U, then at any point $z_0$ where $f'(z_0)\neq 0$, the function is conformal.
Since the derivative is never zero on G, the mapping is conformal everywhere in G.