For every problem there is always, at least, one solution which seems quite plausible. It is simple and clean, direct, neat and nice, and yet very wrong. #Anawim, justtothepoint.com

If a function $f(z) = u(x, y) + iv(x, y)$ is complex-differentiable at a point $z_0 = x_0 + iy_0$, then its real and imaginary parts, $u(x,y)$ and $v(x,y)$, must satisfy the Cauchy–Riemann equations at that point: $\frac{\partial u}{\partial x}(x_0, y_0) = \frac{\partial v}{\partial y}(x_0, y_0), \quad \frac{\partial u}{\partial y}(x_0, y_0) = -\frac{\partial v}{\partial x}(x_0, y_0)$.
Proposition. Let $f(z) = u(x, y) + iv(x, y)$ for $z = x + iy \in D$, where $D \subseteq \mathbb{C}$ is an open set. Assume that $u$ and $v$ have continuous first partial derivatives throughout $D$ and they satisfy the Cauchy–Riemann equations at a point $z \in D$. Then $f'(z)$ exists, i.e., $f$ is differentiable at $z$.
Theorem. Let $f = u + iv$ be a complex function $f(z) = u(x, y) + iv(x, y)$, where $z = x + iy$, which is differentiable (analytic) at every point in a domain $D \subseteq \mathbb{C}$. Suppose that $|f(z)| = k$ for all $z \in D$, where $k$ is some constant. Then $f$ must be constant on $D$.
This is a fundamental result that reveals a deep property of analytic functions: they cannot map a domain $D$ (a connected open region) into a circle ($|f(z)| = k$ for all $z \in D$) without collapsing to a single point ($f \equiv c$).
Proof:
Recall that the magnitude of a complex number $f(z) = u + iv$ is: $|f(z)| = \sqrt{u(x, y)^2 + v(x, y)^2}$
So the condition $|f(z)| = k$ for all $z \in D$ implies that $u(x, y)^2 + v(x, y)^2 = k^2$ for all $(x, y)$ corresponding to $z \in D$. This means that as $z$ varies over the domain $D$, the point $(u(x, y), v(x, y))$ always lies on a circle of radius $k$ centered at the origin in the $w$-plane.
🍀 Case 1: k = 0 (The Trivial Case)
$u(x, y)^2 + v(x, y)^2 = 0$. Since both $u(x, y)^2$ and $v(x, y)^2$ are non-negative, this implies $u(x, y) = v(x, y) = 0$ for all $z \in D$. Therefore $f(z) = 0$ for all $z \in D$, and $f$ is the zero function — clearly constant.
🍀 Case 2: k ≠ 0 (The Non-Trivial Case)
Now assume $k \neq 0$. We differentiate the identity $u^2 + v^2 = k^2$ with respect to each variable. The right-hand side is zero because $k^2$ is constant; on the left-hand side, applying the chain rule:
Differentiate with respect to $x$ and $y$ respectively: $$2u\frac{\partial u}{\partial x} + 2v\frac{\partial v}{\partial x} = 0, \qquad 2u\frac{\partial u}{\partial y} + 2v\frac{\partial v}{\partial y} = 0.$$
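As a sanity check on this differentiation step, the identities can be verified numerically with finite differences. A minimal sketch, using an assumed (non-analytic) sample pair $u = k\cos(x+2y)$, $v = k\sin(x+2y)$, which satisfies $u^2 + v^2 = k^2$ identically:

```python
import math

# Assumed sample pair with constant modulus: u^2 + v^2 = k^2 everywhere,
# so u*u_x + v*v_x and u*u_y + v*v_y should vanish (up to finite-difference error).
k = 5.0
u = lambda x, y: k * math.cos(x + 2 * y)
v = lambda x, y: k * math.sin(x + 2 * y)

def partial(f, x, y, axis, h=1e-6):
    """Central-difference estimate of a first partial derivative."""
    if axis == 'x':
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 0.7, -0.3   # arbitrary sample point
lhs_x = u(x0, y0) * partial(u, x0, y0, 'x') + v(x0, y0) * partial(v, x0, y0, 'x')
lhs_y = u(x0, y0) * partial(u, x0, y0, 'y') + v(x0, y0) * partial(v, x0, y0, 'y')
print(abs(lhs_x) < 1e-6, abs(lhs_y) < 1e-6)  # both differentiated identities hold
```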
Since $f$ is analytic on $D$, it satisfies the Cauchy–Riemann equations everywhere in $D$: $\frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}, \qquad \frac{\partial v}{\partial y} = \frac{\partial u}{\partial x}.$
Substituting these into the differentiated equations and dividing by two:
$$\begin{cases} u\dfrac{\partial u}{\partial x} - v\dfrac{\partial u}{\partial y} = 0, \\[8pt] u\dfrac{\partial u}{\partial y} + v\dfrac{\partial u}{\partial x} = 0. \end{cases}$$Let us denote $A = \frac{\partial u}{\partial x}$ and $B = \frac{\partial u}{\partial y}$. The system becomes:
$$\begin{cases} uA - vB = 0, \\ uB + vA = 0. \end{cases}$$This is a homogeneous linear system in the unknowns $A$ and $B$. We solve it by elimination.
Finding A: Multiply the first equation by $u$ and the second by $v$:
$$\begin{cases} u^2 A - uvB = 0, \\ uvB + v^2 A = 0. \end{cases}$$Adding: $(u^2 + v^2)A = 0$. Since $u^2 + v^2 = k^2 \neq 0$, we conclude $A = \frac{\partial u}{\partial x} = 0$.
Finding B: Multiply the first equation by $v$ and the second by $u$:
$$\begin{cases} uvA - v^2 B = 0, \\ u^2 B + vuA = 0. \end{cases}$$Subtracting the first from the second: $(u^2 + v^2)B = 0$. Again $u^2 + v^2 = k^2 \neq 0$, so $B = \frac{\partial u}{\partial y} = 0$.
Therefore, $\frac{\partial u}{\partial x} = 0 \quad \text{and} \quad \frac{\partial u}{\partial y} = 0$ everywhere in $D$. This means that $u$ has zero gradient throughout $D$, and hence zero directional derivatives in all directions.
Recall (Zero Gradient Implies Constancy on Connected Sets).
For a function $u(x, y)$ that is differentiable (which is guaranteed in our case, since $f = u + iv$ is analytic, making $u$ and $v$ continuously differentiable), the directional derivative in the direction of a unit vector $\vec{v} = (a, b)$ is given by: $D_{\vec{v}}u = \nabla u \cdot \vec{v} = \frac{\partial u}{\partial x}a + \frac{\partial u}{\partial y}b$
where $\nabla u = (\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y})$ is the gradient of $u$ and $\vec{v}$ is a unit vector ($a^2 + b^2 = 1$). The dot product captures how $u$ changes in the direction of $\vec{v}$. If $\frac{\partial u}{\partial x} = \frac{\partial u}{\partial y} = 0$, then $D_{\vec{v}}u = \nabla u \cdot \vec{v} = 0\cdot a + 0\cdot b = 0$, and this holds for any direction $\vec{v}$.
Both partial derivatives tell us how steep the surface is when moving in the $x$- and $y$-directions respectively. If both are zero, there is no slope in the $x$-direction and no slope in the $y$-direction, so there cannot be a slope in any direction that is a combination of the two. Visually, the surface is perfectly flat at that point.
The directional derivative of a differentiable function $u$ at a point $x$ in the direction of a unit vector $\vec{v}$ is $D_{\vec{v}}u(x)=\nabla u(x)\cdot \vec{v}$. If $D_{\vec{v}}u(x)=0$ for every direction $\vec{v}$, then in particular taking $\vec{v}$ in the direction of $\nabla u(x)$ gives $|\nabla u(x)| = 0$, hence $\nabla u(x) = (0, 0)$. When this happens at every point $x$, the gradient is identically zero.
On a connected (in fact, path-connected) open region $\Omega$, pick any two points $P_1$ and $P_2$ and choose a smooth curve $C$ from $P_1$ to $P_2$. The change in $u$ along $C$ is given by the line integral of its gradient: $u(P_2)-u(P_1)=\int_{C}\nabla u\cdot d\mathbf{s}=0.$ Since this holds for every pair of points, $u$ takes the same value everywhere on $\Omega$.
Since $D$ is a domain (by definition, an open connected set) and $u$ has zero gradient, hence zero directional derivatives in all directions throughout $D$, $u$ must be constant throughout $D$.
Now consider $v$. From the Cauchy–Riemann equations:
$$\frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y} = 0, \qquad \frac{\partial v}{\partial y} = \frac{\partial u}{\partial x} = 0.$$So $v$ also has zero partial derivatives everywhere in $D$, which means $v$ is constant on $D$ as well.
Since both $u$ and $v$ are constant on $D$, the function $f(z) = u(x, y) + iv(x, y)$ must also be constant on $D$. $\blacksquare$
Note. The proof relies on D being connected. If $D$ were disconnected (e.g., two disjoint, nonempty open regions $D = D_1 \cup D_2$ with $D_1 \cap D_2 = \emptyset$), then $f$ could take one constant value on $D_1$ and another on $D_2$:
$f(z) = \begin{cases} k, &z \in D_1\\\\ -k, &z \in D_2 \end{cases}$
This function would have $|f(z)| = k$ everywhere, but wouldn't be constant. Furthermore, constant modulus alone does not force constancy outside the complex-analytic setting: for instance, $f(x) = e^{ix} = \cos x + i\sin x$ on $\mathbb{R}$ satisfies $|f(x)| = 1$ for all $x$ but is not constant. This doesn't contradict the theorem, because $f$ is defined on the real line, which is not an open subset of $\mathbb{C}$. Its natural extension $e^{iz}$ is entire, but $|e^{iz}| = e^{-y}$ is not constant on any open subset of $\mathbb{C}$.
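A quick numerical illustration of this point, using Python's `cmath` (the sample points are arbitrary):

```python
import cmath
import math

# On the real line, |e^{ix}| = 1 for every real x ...
for x in (0.0, 1.0, 2.5):
    assert abs(abs(cmath.exp(1j * x)) - 1.0) < 1e-12

# ... but the entire extension e^{iz} has |e^{iz}| = e^{-Im z},
# which varies as soon as we move off the real axis.
z1 = complex(0.0, 0.0)   # on the real line
z2 = complex(0.0, 1.0)   # same real part, Im z = 1
print(abs(cmath.exp(1j * z1)), abs(cmath.exp(1j * z2)))  # 1.0 vs e^{-1}
```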
Theorem. Let $f(z) = u(x, y) + iv(x, y)$ be a complex function that is differentiable (analytic) at every point in a domain $D \subseteq \mathbb{C}$. If $f'(z) = 0$ for all $z \in D$, then $f$ must be constant on $D$.
Proof.
Recall that for a complex function f(z) = u(x, y) + iv(x, y) where z = x + iy , the complex derivative is defined as: $f'(z) = \lim_{h \to 0} \frac{f(z + h) - f(z)}{h}$
When this limit exists for all z ∈ D, f is analytic on D. For an analytic function, the complex derivative can be expressed using partial derivatives: $f'(z) = \frac{\partial u}{\partial x}+i\frac{\partial v}{\partial x}$ This comes from considering the limit as h→0 along the real axis (h = t ∈ ℝ): $f'(z) = \lim_{t \to 0} \frac{f(z + t) - f(z)}{t}$
Alternatively, using the Cauchy-Riemann equations, we can also write: $f'(z) = \frac{\partial v}{\partial y}-i\frac{\partial u}{\partial y}$
By assumption, $f'(z) = 0$ for all $z \in D$. For a complex number to be zero, both its real and imaginary parts must vanish. From the first expression: $\frac{\partial u}{\partial x} = 0, \qquad \frac{\partial v}{\partial x} = 0 \quad \text{throughout } D.$
From the second expression: $\frac{\partial v}{\partial y} = 0, \qquad \frac{\partial u}{\partial y} = 0 \quad \text{throughout } D.$
Therefore, all four first-order partial derivatives of $u$ and $v$ are zero throughout $D$.
For a differentiable function $u(x, y)$, the directional derivative in the direction of any unit vector $\vec{\mathbf{v}} = (a, b)$ is: $D_{\vec{\mathbf{v}}} u = \nabla u \cdot \vec{\mathbf{v}} = \frac{\partial u}{\partial x} a + \frac{\partial u}{\partial y} b = 0 \cdot a + 0 \cdot b = 0.$
This holds for every direction $\vec{\mathbf{v}}$, and the same applies to $v(x, y)$. Both $u$ and $v$ have zero directional derivatives in all directions at every point in $D$. As established previously, this implies that $u$ and $v$ are each constant on the connected domain $D$. Therefore, $f$ is constant. $\blacksquare$
Example. For $f(z) = e^z = e^x\cos y + ie^x\sin y$, we have $\frac{\partial u}{\partial x} = e^x\cos y$ and $\frac{\partial v}{\partial x} = e^x\sin y$, so $f'(z) = \frac{\partial u}{\partial x}+i\frac{\partial v}{\partial x} = e^x\cos y + ie^x\sin y = e^z$. Thus, $\frac{d}{dz} e^z = e^z$, recovering the familiar formula.
Example. For $f(z) = z^2 + \bar{z}$, write $u = x^2 - y^2 + x$ and $v = 2xy - y$. Then $\frac{\partial u}{\partial x} = 2x + 1$ while $\frac{\partial v}{\partial y} = 2x - 1$, so the first Cauchy–Riemann equation fails at every point. Since the Cauchy–Riemann equations are not satisfied at any point, we conclude: $f(z) = z^2 + \bar{z}$ is not complex differentiable at any point in $\mathbb{C}$. Thus, $f$ is nowhere analytic.
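This failure can be checked at a few sample points; a minimal sketch with the hand-computed partials of $u = x^2 - y^2 + x$ and $v = 2xy - y$ hard-coded:

```python
# For f(z) = z^2 + conj(z): u = x^2 - y^2 + x, v = 2xy - y.
def u_x(x, y): return 2 * x + 1   # d/dx (x^2 - y^2 + x)
def v_y(x, y): return 2 * x - 1   # d/dy (2xy - y)

# The first Cauchy-Riemann equation would require u_x == v_y,
# but the two differ by exactly 2 at every point.
for x, y in [(0, 0), (1.5, -2.0), (-3.0, 0.25)]:
    print(u_x(x, y) - v_y(x, y))  # always 2, never 0
```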
The continuity of the partial derivatives $\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial v}{\partial x}, \frac{\partial v}{\partial y}$ at a point (x₀, y₀) implies that the function f(x, y) = (u(x, y), v(x, y)), considered as a map from ℝ² → ℝ² is (real) differentiable at (x₀, y₀). This real differentiability means that the change in f can be approximated by a linear transformation (the Jacobian) plus an error term that goes to zero faster than the distance from (x₀, y₀).
The Jacobian matrix at $(x_0, y_0)$ is $J_f(x_0, y_0) = (\begin{smallmatrix}\frac{\partial u}{\partial x}(x_0, y_0) & \frac{\partial u}{\partial y}(x_0, y_0)\\ \frac{\partial v}{\partial x}(x_0, y_0) & \frac{\partial v}{\partial y}(x_0, y_0)\end{smallmatrix})$. Suppose that, in addition to $f$ being real differentiable, the partial derivatives satisfy the Cauchy–Riemann equations at $(x_0, y_0)$: $\frac{\partial u}{\partial x}(x_0, y_0) = \frac{\partial v}{\partial y}(x_0, y_0), \quad \frac{\partial u}{\partial y}(x_0, y_0) = -\frac{\partial v}{\partial x}(x_0, y_0)$.
Substituting these into the Jacobian matrix simplifies it to: $J_f(x_0, y_0) = (\begin{smallmatrix}a & -b\\ b & a\end{smallmatrix})$, where $a = \frac{\partial u}{\partial x}(x_0, y_0)$ and $b = \frac{\partial v}{\partial x}(x_0, y_0)$. The matrix $(\begin{smallmatrix}a & -b\\ b & a\end{smallmatrix})$ represents multiplication by the complex number $a + ib$: $(a + ib)(x + iy) = (ax - by) + i(bx + ay)$. Moreover, $a + ib = \frac{\partial u}{\partial x}(x_0, y_0) + i\frac{\partial v}{\partial x}(x_0, y_0)$ is precisely $f'(z_0)$. Thus, the real linear approximation (via the Jacobian) becomes a complex linear approximation when the Cauchy–Riemann equations hold.
Hence the first-order (linear) approximation is not just real-affine but complex-affine: $f(z)$ is approximated by $f(z) \approx f(z_0) + f'(z_0)(z - z_0)$, where $f'(z_0) = \frac{\partial u}{\partial x}(x_0, y_0) + i\frac{\partial v}{\partial x}(x_0, y_0)$. This complex linearity is the defining feature, the hallmark of complex differentiability.
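A numerical sanity check of this approximation, using the assumed concrete example $f(z) = z^2$ (so $f'(z) = 2z$), where the approximation error should shrink like $|z - z_0|^2$:

```python
# Check f(z) ≈ f(z0) + f'(z0)(z - z0) for f(z) = z^2, f'(z) = 2z.
f = lambda z: z * z
fprime = lambda z: 2 * z

z0 = complex(1.0, 2.0)
for h in (1e-1, 1e-2, 1e-3):
    z = z0 + complex(h, h)
    err = abs(f(z) - (f(z0) + fprime(z0) * (z - z0)))
    # For z^2 the remainder is exactly (z - z0)^2, so this ratio stays ~1
    # while the raw error shrinks quadratically with |z - z0|.
    print(err / abs(z - z0) ** 2)
```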
When a complex function f: D ⊆ ℂ → ℂ, f(z) = f(x + iy) = u(x,y ) + iv(x, y) is viewed as a mapping from ℝ² to ℝ², f: D ⊆ ℝ² → ℝ², f(x, y) = (u(x, y), v(x, y)), its real differentiability is characterized by the existence of the Jacobian matrix. If f is real-differentiable at a point (a,b), its Jacobian matrix $\mathbb{Df} \bigg\rvert_{(a, b)}$ (also denoted $\mathbb{J_f}(a, b)$) is given by:
$\mathbb{Df} \bigg\rvert_{(a, b)} = \mathbb{J_f}(a, b) = (\begin{smallmatrix}\frac{\partial u}{\partial x}(a, b) & \frac{\partial u}{\partial y}(a, b)\\\ \frac{\partial v}{\partial x}(a, b) & \frac{\partial v}{\partial y}(a, b)\end{smallmatrix})$
This matrix represents the best linear approximation to the function f near the point (a, b). The existence of this matrix requires that all four partial derivatives $\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial v}{\partial x}, \frac{\partial v}{\partial y}$ exist at (a, b). If these partial derivatives exist and are continuous in a neighborhood of (a, b), then f is guaranteed to be real-differentiable at (a,b). The Jacobian matrix encapsulates how f transforms small changes in the input (x,y) near (a,b).
If a complex function f(z) = u(x, y) + iv(x, y) is complex-differentiable at a point z₀ = a + ib, then it is automatically real-differentiable at (a, b) (complex differentiability is a stronger condition), and its Jacobian matrix Jf(a, b) must have a very specific structure. This structure is a direct consequence of the Cauchy-Riemann equations. Let f′(z₀) = α + iβ. The complex differentiability implies that the linear transformation represented by the Jacobian matrix must act like multiplication by the complex number f′(z₀).
Complex differentiability at z₀ means the local behavior of f near z₀ is indistinguishable from multiplying by the complex number f′(z₀). This forces the real derivative (Jacobian matrix) to be a matrix representation of that complex multiplication.
Multiplication by α + iβ corresponds to the linear transformation: (α + iβ)⋅(x + iy) = (αx −βy) +i(βx + αy)
$(\begin{smallmatrix}x\\\ y\end{smallmatrix}) \to (\begin{smallmatrix}αx−βy\\\ βx+αy\end{smallmatrix}) = (\begin{smallmatrix}α & −β\\\ β & α\end{smallmatrix})(\begin{smallmatrix}x\\\ y\end{smallmatrix})$
Complex differentiability imposes a severe restriction on the Jacobian. If f is complex-differentiable at z₀, its Jacobian matrix must be of the form: Jf(a, b) = $(\begin{smallmatrix}α & −β\\\ β & α\end{smallmatrix})$. Comparing this with the general form of the Jacobian $(\begin{smallmatrix}\frac{\partial u}{\partial x} & \frac{\partial u}{\partial y}\\\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y}\end{smallmatrix})$ and using the Cauchy-Riemann equations $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$, we identify $\alpha = \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \beta = \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$. Thus, the Jacobian matrix of a complex-differentiable function is not an arbitrary 2×2 real matrix but is constrained to this specific structure, a special “amplitwist” form if you want.
Geometrically, the local transformation is a conformal map: it combines a uniform scaling (amplification) by ∣α + iβ∣= $\sqrt{α²+ β²}$ and a rotation (twist) by arg(α+iβ). This is the origin of the term “amplitwist”.
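This constrained structure can be observed numerically; a minimal sketch estimating the Jacobian of the assumed example $f(z) = e^z$ by central differences and comparing it with $f'(z_0)$:

```python
import cmath

# Estimate the Jacobian of f(z) = e^z at z0 and verify the
# rotation-scaling ("amplitwist") form [[a, -b], [b, a]] with a + ib = f'(z0).
f = lambda z: cmath.exp(z)
z0, h = complex(0.3, -0.5), 1e-6

# Partial derivatives of (u, v) = (Re f, Im f) via central differences.
du_dx = (f(z0 + h).real - f(z0 - h).real) / (2 * h)
du_dy = (f(z0 + 1j * h).real - f(z0 - 1j * h).real) / (2 * h)
dv_dx = (f(z0 + h).imag - f(z0 - h).imag) / (2 * h)
dv_dy = (f(z0 + 1j * h).imag - f(z0 - 1j * h).imag) / (2 * h)

fp = cmath.exp(z0)  # f'(z0) = e^{z0}
print(abs(du_dx - fp.real) < 1e-5, abs(dv_dx - fp.imag) < 1e-5)  # a + ib = f'(z0)
print(abs(du_dx - dv_dy) < 1e-5, abs(du_dy + dv_dx) < 1e-5)      # Cauchy-Riemann structure
```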
Consider a complex function f(z)=u(x, y) + iv(x, y), where z = x + iy. We can view this as a mapping from ℝ² to ℝ², specifically (x, y) ↦ (u(x, y), v(x, y)).
The total derivative of this mapping at a point $(a, b)$ is represented by the Jacobian matrix, $\mathbb{J_f}(a, b)$. This matrix encapsulates all the first-order partial derivative information of the component functions $u$ and $v$ with respect to $x$ and $y$. Specifically, the Jacobian matrix is defined as: $\mathbb{Df} \bigg\rvert_{(a, b)} = \mathbb{J_f}(a, b) = (\begin{smallmatrix}\frac{\partial u}{\partial x}(a, b) & \frac{\partial u}{\partial y}(a, b)\\ \frac{\partial v}{\partial x}(a, b) & \frac{\partial v}{\partial y}(a, b)\end{smallmatrix}) =[\text{Cauchy–Riemann equations}] (\begin{smallmatrix}\frac{\partial u}{\partial x}(a, b) & \frac{\partial u}{\partial y}(a, b)\\ -\frac{\partial u}{\partial y}(a, b) & \frac{\partial u}{\partial x}(a, b)\end{smallmatrix})$

This matrix serves as the best linear approximation of the function $f$ near the point $(a, b)$. In other words, for small changes $(\Delta x, \Delta y)$ away from the point $(a, b)$, the change in the function $f$ can be approximated by the product of the Jacobian matrix and the column vector representing the change: $\mathbb{J_f}(a, b)(\begin{smallmatrix}\Delta x\\ \Delta y\end{smallmatrix})$.
This linear approximation is a fundamental concept in calculus, extending the idea of the derivative of a real-valued function to higher dimensions. The Jacobian matrix, therefore, describes how the function f transforms small neighborhoods around a point in its domain. Each entry in the Jacobian matrix represents the rate of change of one of the output components (u or v) with respect to one of the input variables (x or y).
For instance $\frac{\partial u}{\partial x}(a, b)$ tells us how rapidly the real part u of the complex function f(z) changes as x changes, while keeping y constant at b, and similarly for the other partial derivatives. The geometric interpretation of the Jacobian matrix is that it describes the local stretching, rotating, or transforming effect of the function near the point (a, b).
Let's compute the determinant of this constrained Jacobian (recall: for a function $f$ to be complex differentiable, it must satisfy the Cauchy–Riemann equations): $\det(\mathbb{J_f}(a, b)) = \begin{vmatrix}\frac{\partial u}{\partial x}(a, b) & \frac{\partial u}{\partial y}(a, b)\\ -\frac{\partial u}{\partial y}(a, b) & \frac{\partial u}{\partial x}(a, b)\end{vmatrix} = (\frac{\partial u}{\partial x}(a, b))^2+(\frac{\partial u}{\partial y}(a, b))^2$
The complex derivative is defined as: $f'(a + ib) = \frac{\partial u}{\partial x}(a, b) + i\frac{\partial v}{\partial x}(a, b) =[\text{Cauchy–Riemann equations}] \frac{\partial u}{\partial x}(a, b) - i\frac{\partial u}{\partial y}(a, b)$
The magnitude of the complex derivative is $|f'(a + ib)| = \sqrt{(\frac{\partial u}{\partial x}(a, b))^2 + (\frac{\partial u}{\partial y}(a, b))^2}$. Notice that: $|f'(a + ib)|^2 = (\frac{\partial u}{\partial x}(a, b))^2 + (\frac{\partial u}{\partial y}(a, b))^2 = \det(\mathbb{J_f}(a, b))$
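A numerical check of this identity, using the assumed example $f(z) = \sin z$ (so $f'(z) = \cos z$):

```python
import cmath

# Verify det J_f(z0) = |f'(z0)|^2 for f(z) = sin z at an arbitrary point.
f = lambda z: cmath.sin(z)
z0, h = complex(1.0, 0.5), 1e-6

# Central-difference estimates of the four partials of (u, v) = (Re f, Im f).
du_dx = (f(z0 + h).real - f(z0 - h).real) / (2 * h)
du_dy = (f(z0 + 1j * h).real - f(z0 - 1j * h).real) / (2 * h)
dv_dx = (f(z0 + h).imag - f(z0 - h).imag) / (2 * h)
dv_dy = (f(z0 + 1j * h).imag - f(z0 - 1j * h).imag) / (2 * h)

det_J = du_dx * dv_dy - du_dy * dv_dx
fp = cmath.cos(z0)  # f'(z0) = cos z0
print(abs(det_J - abs(fp) ** 2) < 1e-5)  # det J equals |f'(z0)|^2
```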
Assume $f'(a + ib) \neq 0$, hence $|f'(a + ib)| \neq 0$. Consider a small increment in the complex plane, $\alpha = \alpha_1 + i\alpha_2$, represented as the vector $(\begin{smallmatrix}\alpha_1\\ \alpha_2\end{smallmatrix})$.
Since $\mathbb{Df}\big\rvert_{(a, b)}$ is the linear approximation of $f$ near the point $(a, b) \equiv a + ib$, applying it to the small increment $\alpha = \alpha_1 + i\alpha_2$ tells us how this small vector gets transformed:
$f'(a + ib)·\alpha = \mathbb{Df} \bigg\rvert_{(a, b)}(\begin{smallmatrix}\alpha_1\\\ \alpha_2\end{smallmatrix}) = (\begin{smallmatrix}\frac{\partial u}{\partial x}(a, b) & \frac{\partial u}{\partial y}(a, b)\\\ -\frac{\partial u}{\partial y}(a, b) & \frac{\partial u}{\partial x}(a, b)\end{smallmatrix})(\begin{smallmatrix}\alpha_1\\\ \alpha_2\end{smallmatrix}) = \alpha_1\frac{\partial u}{\partial x}(a, b) + \alpha_2\frac{\partial u}{\partial y}(a, b)+i(-\alpha_1\frac{\partial u}{\partial y}(a, b) + \alpha_2\frac{\partial u}{\partial x}(a, b))$
$$\left|\mathbb{Df} \bigg\rvert_{(a, b)}(\begin{smallmatrix}\alpha_1\\ \alpha_2\end{smallmatrix})\right|^2 = \left(\alpha_1\frac{\partial u}{\partial x} + \alpha_2\frac{\partial u}{\partial y}\right)^2 + \left(-\alpha_1\frac{\partial u}{\partial y} + \alpha_2\frac{\partial u}{\partial x}\right)^2$$ Expanding, the cross terms $\pm 2\alpha_1\alpha_2\frac{\partial u}{\partial x}\frac{\partial u}{\partial y}$ cancel, leaving $$\alpha_1^2\left[\left(\frac{\partial u}{\partial x}\right)^2+\left(\frac{\partial u}{\partial y}\right)^2\right]+\alpha_2^2\left[\left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial u}{\partial x}\right)^2\right],$$ where all partial derivatives are evaluated at $(a, b)$.
$|\mathbb{Df} \bigg\rvert_{(a, b)}(\begin{smallmatrix}\alpha_1\\ \alpha_2\end{smallmatrix})|^2 = (\alpha_1^2+\alpha_2^2)\left(\left(\frac{\partial u}{\partial x}(a, b)\right)^2 + \left(\frac{\partial u}{\partial y}(a, b)\right)^2\right) = |\alpha|^2|f'(a + ib)|^2$. Taking square roots:
$|\mathbb{Df} \bigg\rvert_{(a, b)}(\begin{smallmatrix}\alpha_1\\ \alpha_2\end{smallmatrix})| = |\alpha||f'(a + ib)|$, so the map scales all directions by the same factor $|f'(a + ib)| = |f'(z_0)|$.
$\arg(\mathbb{Df} \bigg\rvert_{(a, b)}(\begin{smallmatrix}\alpha_1\\ \alpha_2\end{smallmatrix})) = \arg(f'(a + ib)) + \arg(\alpha)$, so it rotates by $\arg(f'(a + ib)) = \arg(f'(z_0))$.
This means that in the vicinity of a point z₀ = a + ib where f’(z₀) ≠ 0, the function f(z) acts by rotating points about z₀ and scaling their distances from z₀. The complex derivative f’(z₀) encapsulates both the magnitude of this scaling and the angle of this rotation. Specifically, the modulus |f’(z₀)| determines the scaling factor, and the argument arg(f’(z₀)) determines the angle of rotation.
The term “amplitwist” describes the combined geometric effect of a complex derivative. The “ampli” part refers to the amplification or scaling factor, which is given by the modulus (absolute value) of the complex derivative, |f’(z₀)|. If we consider a small line segment emanating from a point z₀ in the complex plane, the length of this segment after being mapped by the function f(z) will be scaled by a factor of |f’(z₀)|. The square of this scaling factor is precisely the determinant of the Jacobian matrix of the corresponding ℝ² → ℝ² mapping.
This scaling is uniform in all directions from z₀, meaning that an infinitesimal circle centered at z₀ would be mapped to an approximately infinitesimal circle centered at f(z₀), albeit with a different radius. The “twist” refers to the rotation induced by the complex derivative. The argument of the complex derivative, arg(f’(z₀)), specifies the angle through which an infinitesimal vector emanating from z₀ is rotated about z₀ when mapped by f(z).
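The twist can also be seen numerically: two tiny segments leaving $z_0$ in perpendicular directions are both rotated by $\arg f'(z_0)$, so the right angle between them is preserved. A minimal sketch with the assumed example $f(z) = z^3$:

```python
import cmath
import math

# Two tiny steps from z0, one along direction 1 and one along direction i,
# are both rotated by arg f'(z0), so the angle between their images
# stays (approximately) a right angle.
f = lambda z: z ** 3
z0, eps = complex(0.8, 0.6), 1e-5

d1 = f(z0 + eps) - f(z0)        # image of a tiny step in direction 1
d2 = f(z0 + eps * 1j) - f(z0)   # image of a tiny step in direction i
angle = cmath.phase(d2 / d1)    # angle between the two image segments
print(abs(angle - math.pi / 2) < 1e-4)  # right angle preserved up to O(eps)
```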