
Relationship Between Complex Differentiability and the Cauchy–Riemann Equations

For every problem there is always, at least, a solution which seems quite plausible. It is simple and clean, direct, neat and nice, and yet very wrong, #Anawim, justtothepoint.com



Necessary Condition: Complex Differentiability Implies Cauchy-Riemann Equations

If a function f(z) = u(x, y) + iv(x, y) is complex-differentiable at a point z₀ = x₀ + iy₀, then its real and imaginary parts, u(x, y) and v(x, y), must satisfy the Cauchy-Riemann equations at that point: $\frac{\partial u}{\partial x}(x_0, y_0) = \frac{\partial v}{\partial y}(x_0, y_0), \frac{\partial u}{\partial y}(x_0, y_0) = -\frac{\partial v}{\partial x}(x_0, y_0)$.
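As a quick numerical illustration (a sketch, not part of the proof; the function f(z) = z² and the sample point are arbitrary choices), one can estimate the four partial derivatives by central differences and check that the equations hold:

```python
# Numerical sanity check (a sketch): estimate the partial derivatives of
# u and v for f(z) = z^2 by central differences and confirm that the
# Cauchy-Riemann equations hold at the sample point (x0, y0) = (1, 2).
def f(x, y):
    w = complex(x, y) ** 2
    return w.real, w.imag  # (u, v)

def partials(x, y, h=1e-6):
    ux = (f(x + h, y)[0] - f(x - h, y)[0]) / (2 * h)
    uy = (f(x, y + h)[0] - f(x, y - h)[0]) / (2 * h)
    vx = (f(x + h, y)[1] - f(x - h, y)[1]) / (2 * h)
    vy = (f(x, y + h)[1] - f(x, y - h)[1]) / (2 * h)
    return ux, uy, vx, vy

ux, uy, vx, vy = partials(1.0, 2.0)
print(abs(ux - vy) < 1e-5, abs(uy + vx) < 1e-5)  # True True
```

Here ux ≈ 2 = vy and uy ≈ -4 = -vx, exactly as the equations require for z² at z₀ = 1 + 2i.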

Sufficient Condition: Cauchy-Riemann Equations (with Continuous Partial Derivatives) Imply Complex Differentiability

Proposition. Let f(z) = u(x, y) + iv(x, y) for z = x + iy ∈ D where D ⊆ ℂ is an open set. Assume that u and v have continuous first partial derivatives throughout D and they satisfy the Cauchy-Riemann equations at a point z ∈ D. Then, f'(z) exists, i.e., f is differentiable at z.

Consequences of the Cauchy-Riemann Equations

Theorem. Let f(z) = u(x, y) + iv(x, y) be analytic at every point of a domain D ⊆ ℂ. If its modulus is constant, |f(z)| = k for all z ∈ D, then f is constant on D.

Proof:

Recall that the magnitude (modulus) of f is: $|f(z)| = \sqrt{u(x, y)² + v(x, y)²}$

So the condition |f(z)| = k ∀z ∈ D implies that u(x, y)² + v(x, y)² = k² for all (x, y) corresponding to z ∈ D. This means that as z varies over the domain D, the point (u(x, y), v(x, y)) always lies on a circle of radius k centered at the origin in the complex plane.

🍀 Case 1: k = 0 (The Trivial Case)

u(x, y)² + v(x, y)² = 0. Since both u(x, y)² and v(x, y)² are non-negative, this implies u(x, y) = v(x, y) = 0 for all (x, y) corresponding to z ∈ D. Therefore, f(z) = 0 for all z ∈ D, so f is the zero function, which is clearly constant.

🍀 Case 2: k ≠ 0 (The Non-Trivial Case)

Now assume k ≠ 0. We can take the partial derivative of the equation u(x, y)² + v(x, y)² = k² with respect to x (the right-hand side is zero because k² is constant; on the left-hand side, apply the chain rule):

$2u(x, y)\frac{\partial u}{\partial x} + 2v(x, y)\frac{\partial v}{\partial x} = 0$

Likewise, let’s differentiate partially u(x, y)² + v(x, y)² = k² with respect to y:

$2u(x, y)\frac{\partial u}{\partial y} + 2v(x, y)\frac{\partial v}{\partial y} = 0$

Since f is differentiable (analytic) at every point of the domain D, it satisfies the Cauchy-Riemann equations everywhere in D. Applying these equations and dividing by two, we obtain the system:

$\begin{cases} u(x, y)\frac{\partial u}{\partial x} - v(x, y)\frac{\partial u}{\partial y} = 0 \\\ u(x, y)\frac{\partial u}{\partial y} + v(x, y)\frac{\partial u}{\partial x} = 0 \end{cases}$

Let’s denote: $A = \frac{\partial u}{\partial x}, B = \frac{\partial u}{\partial y}$.

Then, our system becomes:

$\begin{cases} uA - vB = 0 \\\ uB + vA = 0 \end{cases}$

This is a linear system in A and B. We can solve it using Cramer’s rule or by elimination. Let’s use elimination. Multiply the first equation by u and the second by v:

$\begin{cases} u²A - uvB = 0 \\\ uvB + v²A = 0 \end{cases}$

Add these two equations: u²A + v²A = 0 ↭ (u² + v²)A = 0. Since u(x, y)² + v(x, y)² = k² ≠ 0 (By assumption, k ≠ 0), $A = \frac{\partial u}{\partial x} = 0$

Now multiply the first equation by v and the second by u:

$\begin{cases} uvA - v²B = 0 \\\ u²B + uvA = 0 \end{cases}$

Subtract the first from the second: v²B + u²B = 0 ⇒ (v² + u²)B = 0. Again, u(x, y)² + v(x, y)² = k² ≠ 0 (By assumption, k ≠ 0), $B = \frac{\partial u}{\partial y} = 0$

$\frac{\partial u}{\partial x} = 0 \text{ and } \frac{\partial u}{\partial y} = 0$
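As a cross-check of the elimination (a sketch; the sample values u = 3, v = 4 are arbitrary choices with u² + v² ≠ 0), Cramer's rule applied to the same homogeneous system also yields A = B = 0:

```python
# Sketch: solve the homogeneous system  u*A - v*B = 0,  u*B + v*A = 0
# by Cramer's rule; its determinant u^2 + v^2 is nonzero when k != 0,
# so the only solution is the zero solution A = B = 0.
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Cramer's rule for a 2x2 linear system."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("singular system")
    x = (b1 * a22 - a12 * b2) / det
    y = (a11 * b2 - b1 * a21) / det
    return x, y

u, v = 3.0, 4.0                     # sample point with u^2 + v^2 = 25 != 0
A, B = solve_2x2(u, -v, v, u, 0.0, 0.0)
print(A, B)  # 0.0 0.0
```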

This means that u has zero directional derivatives in all directions. In multivariable calculus, this implies that u is constant on any connected region.


For a function u(x, y) that is differentiable (which is guaranteed in our case since f = u + iv is analytic, making u and v continuously differentiable), the directional derivative in the direction of a unit vector $\vec{v}$ = (a, b) is given by: $D_{\vec{v}}u = \nabla u · \vec{v} = \frac{\partial u}{\partial x}·a + \frac{\partial u}{\partial y}·b$

where $\nabla u = (\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y})$ is the gradient of u, $\vec{v}$ is a unit vector (a² + b² = 1). The dot product captures how u changes in the direction of $\vec{v}$. If $\frac{\partial u}{\partial x} = \frac{\partial u}{\partial y} = 0 \leadsto D_{\vec{v}}u = \nabla u · \vec{v} = \frac{\partial u}{\partial x}·a + \frac{\partial u}{\partial y}·b = 0·a + 0·b = 0$, and this holds for any direction $\vec{v}$.

Both partial derivatives tell us how steep the surface is when moving in the x- and y-directions respectively. If both are zero, there is no slope in the x-direction and no slope in the y-direction, so there cannot be a slope in any direction that is a combination of x and y. Visually, the surface is perfectly flat at that point.

The directional derivative of a differentiable function u at a point x in the direction of a unit vector $\vec{v}$ is $D_{\vec{v}}u(x)=\nabla u(x)\cdot \vec{v}$. If $D_{\vec{v}}u(x)=0$ for every direction $\vec{v}$, then the inner product with any $\vec{v}$ vanishes. That can only happen when $\nabla u(x) = (0,0,\dots,0) \forall x$ (gradient is identically zero). On a connected (in fact path-connected) region Ω, pick any two points P₁ and P₂. Choose a smooth curve C from P₁ to P₂. Then, the change in u along C is given by the line integral of its gradient: $u(P₂)-u(P₁)=\int_{C}\nabla u\cdot d\mathbf s=0.$ Since this holds for every pair of points, u takes the same value everywhere on Ω.


Since D is a domain (by definition, an open connected set), u must be constant throughout D .

Now let’s look at v. From the Cauchy-Riemann equations: $\frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y} = 0, \frac{\partial v}{\partial y} = \frac{\partial u}{\partial x} = 0$

So v also has zero partial derivatives everywhere in D, which means v is constant on D as well. Since both u and v are constant functions on D, their combination: f(z) = u(x, y) + iv(x, y) must also be constant on D.

Note. The proof relies on D being connected. If D were disconnected (e.g., two separate open regions), f could be one constant on one component and another constant on the other component, e.g., if D = D₁ ∪ D₂ are disjoint open sets, then:

$f(z) = \begin{cases} k, &z \in D_1\\\\ -k, &z \in D_2 \end{cases}$

This function would have ∣f(z)∣ = k everywhere, but wouldn’t be constant. Furthermore, in real analysis, you can have non-constant functions with constant magnitude (e.g., f(x) = cos(x) + i sin(x) = e^{ix}, viewed as a function of the real variable x). But in complex analysis, if a function is analytic (complex differentiable) and has constant magnitude on an open connected set, it must be constant.
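A minimal sketch of this counterexample (choosing, hypothetically, D₁ as the right half-plane, D₂ as the left half-plane, and k = 2):

```python
# Sketch of the disconnected-domain counterexample: |f| = k everywhere on
# D = D1 ∪ D2, even though f takes the value k on D1 and -k on D2.
def f(z, k=2.0):
    # hypothetical disjoint open sets: D1 = {Re z > 0}, D2 = {Re z < 0}
    return k if z.real > 0 else -k

assert abs(f(1 + 1j)) == abs(f(-1 + 1j)) == 2.0  # constant modulus
assert f(1 + 1j) != f(-1 + 1j)                   # but f is not constant
```

No contradiction arises because D₁ ∪ D₂ is not connected, which is exactly the hypothesis the proof needed.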

Theorem. Let f(z) = u(x, y) + iv(x, y) be a complex function that is differentiable (analytic) at every point in a domain D ⊆ ℂ. If f'(z) = 0 for all z ∈ D, then f must be constant on D.

Proof.

Recall that for a complex function f(z) = u(x, y) + iv(x, y) where z = x + iy , the complex derivative is defined as: $f'(z) = \lim_{h \to 0} \frac{f(z + h) - f(z)}{h}$

When this limit exists for all z ∈ D, f is analytic on D. For an analytic function, the complex derivative can be expressed using partial derivatives: $f'(z) = \frac{\partial u}{\partial x}+i\frac{\partial v}{\partial x}$ This comes from considering the limit as h→0 along the real axis (h = t ∈ ℝ): $f'(z) = \lim_{t \to 0} \frac{f(z + t) - f(z)}{t}$

Alternatively, using the Cauchy-Riemann equations, we can also write: $f'(z) = \frac{\partial v}{\partial y}-i\frac{\partial u}{\partial y}$

By assumption, f'(z) = 0 for all z ∈ D, so we have $\frac{\partial u}{\partial x}+i\frac{\partial v}{\partial x} = 0, \text{ and } \frac{\partial v}{\partial y}-i\frac{\partial u}{\partial y} = 0$.

For a complex number to be zero, both its real and imaginary parts must be zero: $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial x} = \frac{\partial v}{\partial y} = \frac{\partial u}{\partial y} = 0$.

Therefore, we conclude that all first-order partial derivatives of both u and v are zero throughout D.

For a differentiable function u(x, y), the directional derivative in the direction of a unit vector $\vec{v} = (a, b)$ is: $D_{\vec{v}}u = \nabla u · \vec{v} = \frac{\partial u}{\partial x}a + \frac{\partial u}{\partial y}b = 0·a + 0·b = 0.$ This holds for any direction $\vec{v} = (a, b)$. The same applies to v(x, y).

This means that both u and v have zero directional derivatives in all directions at every point in D. In multivariable calculus, as it was previously shown, this implies that u and v are constant on any connected region. Therefore, f is constant.

Examples

  1. Consider f(z) = z̄ = x - iy. Identify u and v: u(x, y) = x, v(x, y) = -y.
  2. Compute Partial Derivatives. $\frac{\partial u}{\partial x} = 1, \frac{\partial u}{\partial y} = 0, \frac{\partial v}{\partial x} = 0, \frac{\partial v}{\partial y}= -1$. All partial derivatives exist and are continuous everywhere.
  3. Check Cauchy-Riemann Equations: $\frac{\partial u}{\partial x} = 1 \ne -1 = \frac{\partial v}{\partial y}$ at every point, so $f(z)=\bar{z}$ is not complex differentiable at any z ∈ ℂ.
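A numerical sketch of the failure (the sample point z₀ = 1 + i and the step sizes are arbitrary choices): the difference quotient of z̄ approaches different values along the real and imaginary axes, so the defining limit cannot exist.

```python
# Sketch: the difference quotient of f(z) = conj(z) depends on the direction
# in which h -> 0, so the complex derivative of conj(z) cannot exist.
def quotient(z, h):
    return ((z + h).conjugate() - z.conjugate()) / h

z0 = 1 + 1j
along_real = quotient(z0, 1e-8)    # h real      -> limit  1
along_imag = quotient(z0, 1e-8j)   # h imaginary -> limit -1
print(along_real, along_imag)
```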
  1. Consider f(z) = eᶻ. Identify u and v: u(x, y) = eˣcos(y), v(x, y) = eˣsin(y).
  2. Compute Partial Derivatives. $\frac{\partial u}{\partial x} = e^xcos(y), \frac{\partial u}{\partial y} = -e^xsin(y), \frac{\partial v}{\partial x} = e^xsin(y), \frac{\partial v}{\partial y}= e^xcos(y)$. All partial derivatives exist and are continuous everywhere (since exponentials and trigonometric functions are smooth).
  3. Check Cauchy-Riemann Equations: $\frac{\partial u}{\partial x} = e^xcos(y) = \frac{\partial v}{\partial y}✓, \frac{\partial u}{\partial y} = -e^xsin(y) = -\frac{\partial v}{\partial x}✓$. The Cauchy-Riemann equations are satisfied at all points (x,y) .
  4. Since all partial derivatives are continuous on ℝ² and the Cauchy-Riemann equations hold everywhere, f(z) = eᶻ is complex differentiable everywhere.
  5. As previously noted, this derivation also provides an expression for the complex derivative itself: (Real Axis) f’(z) = $\frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = e^xcos(y) + ie^xsin(y) = e^x(cos(y)+isin(y)) = e^xe^{iy} = e^{x+iy} = e^z$, so $\frac{d}{dz}e^z = e^z$.
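A quick numerical check (a sketch; the sample point is an arbitrary choice): a central-difference quotient of eᶻ matches eᶻ itself to near machine precision.

```python
import cmath

# Sketch: check numerically that d/dz e^z = e^z at a sample point, using
# a central difference along the real axis in the complex plane.
z0 = 0.3 + 0.7j
h = 1e-6
approx = (cmath.exp(z0 + h) - cmath.exp(z0 - h)) / (2 * h)
error = abs(approx - cmath.exp(z0))
print(error)  # essentially zero (central-difference truncation error)
```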
  1. Consider f(z) = x² - y² + i(2xy). Identify Real and Imaginary Parts: u(x, y) = x² - y² (real part), v(x, y) = 2xy (imaginary part).
  2. Compute the First-Order Partial Derivatives. $\frac{\partial u}{\partial x} = 2x, \frac{\partial u}{\partial y} = -2y, \frac{\partial v}{\partial x} = 2y, \frac{\partial v}{\partial y}= 2x$. All partial derivatives exist and are continuous everywhere in ℝ², since they are polynomials.
  3. Check the Cauchy-Riemann Equations. The Cauchy-Riemann equations are: $\frac{\partial u}{\partial x} = 2x = \frac{\partial v}{\partial y}✓, \frac{\partial u}{\partial y} = -2y = -\frac{\partial v}{\partial x}✓$. Both equations are satisfied for all (x, y) ∈ ℝ².
  4. f(z) is complex differentiable at every point in ℂ — that is, it is entire (analytic on the whole complex plane).
  5. f’(z) = $\frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = 2x + i(2y) =2(x+iy)=2z.$
  6. Notice: f(z) = x² - y² + i(2xy), where z = x + iy, hence f(z) = (x + iy)² = z², so this function is simply a polynomial, and hence automatically entire.
  1. Consider f(z) = r²(cos(2θ) + i sin(2θ)) for z = re^{iθ} ≠ 0. Express f(z) in Cartesian Coordinates. x = r cos θ, y = r sin θ, r² = x² + y². cos(2θ) = cos²θ - sin²θ = $\frac{x²}{r²}-\frac{y²}{r²} = \frac{x² - y²}{x² + y²}$. Similarly, sin(2θ) = 2·sin(θ)cos(θ) = $\frac{2xy}{x² + y²}$. Therefore, f(z) = $r²(\frac{x² - y²}{x² + y²} + i\frac{2xy}{x² + y²}) = (x² + y²)· (\frac{x² - y²}{x² + y²} + i\frac{2xy}{x² + y²})$ = (x² - y²) + i(2xy), for z ≠ 0. Since z² = (x + iy)² = (x² - y²) + i(2xy), we have f(z) = z² for z ≠ 0. Since $\lim_{z \to 0} f(z) = 0$, we can extend f continuously by defining f(0) = 0.
  2. Identify Real and Imaginary Parts (for z ≠ 0). u(x, y) = x² - y², v(x, y) = 2xy […] See previous example, it satisfies the Cauchy-Riemann equations everywhere, the partial derivatives are continuous, hence f is complex differentiable everywhere (for z ≠ 0).
  1. Express the function f in terms of x and y. z = x + iy, z̄ = x - iy, f(z) = z² + z̄ = (x + iy)² + (x - iy) = (x² - y² + 2ixy) + (x - iy) =[Group real and imaginary parts] (x² - y² + x) + i(2xy - y)
  2. Identify the real and imaginary parts. u(x, y) = x² - y² + x. v(x, y) = 2xy - y. These are smooth (polynomial) functions, so all partial derivatives exist and are continuous everywhere in ℝ².
  3. Compute the First-Order Partial Derivatives. $\frac{\partial u}{\partial x} = 2x + 1, \frac{\partial u}{\partial y} = -2y, \frac{\partial v}{\partial x} = 2y, \frac{\partial v}{\partial y}= 2x-1$. All partials are continuous on ℝ².
  4. Check the Cauchy-Riemann Equations. The first equation requires $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$, i.e., 2x + 1 = 2x − 1 ⇒ 1 = −1 ❌, which is never true. The second equation, $\frac{\partial u}{\partial y} = -2y = -\frac{\partial v}{\partial x}$ ✓, holds for all (x, y), but the first fails at every point.

Since the Cauchy-Riemann equations are not satisfied at any point, we conclude: f(z) = z² + z̄ is not complex differentiable at any point in ℂ. Thus, f is nowhere analytic.

  1. Consider f(z) = x³ - 3xy² + i(3x²y - y³). Identify u(x, y) and v(x, y): $u(x, y) = x^3 - 3xy^2, v(x, y) = 3x^2y - y^3$.
  2. Compute Partial Derivatives. $\frac{\partial u}{\partial x} = 3x^2 - 3y^2, \frac{\partial u}{\partial y} = -6xy, \frac{\partial v}{\partial x} = 6xy, \frac{\partial v}{\partial y}= 3x^2 - 3y^2$. All partial derivatives exist and are continuous everywhere in ℝ², since they are polynomials.
  3. Check the Cauchy-Riemann Equations. $ \frac{\partial u}{\partial x} = 3x^2 - 3y^2 = \frac{\partial v}{\partial y} = 3x^2 - 3y^2, \frac{\partial u}{\partial y} = -6xy = -\frac{\partial v}{\partial x}$
  4. Since both Cauchy-Riemann equations are satisfied and the partial derivatives of u and v are continuous (as they are polynomials), the function $f(z) = x^3 - 3xy^2 + i(3x^2y - y^3)$ is complex differentiable at every point in ℂ — that is, it is entire.
  5. Moreover, its derivative is: $f'(z) = \frac{\partial u}{\partial x} + i\frac{\partial v}{\partial x} = 3x^2 - 3y^2 + i(6xy) = 3(x^2 - y^2 + 2ixy) = 3(x+iy)^2 = 3z^2.$ Thus, f’(z) = 3z².
  6. Notice: f(z) = z³ = (x + iy)³ = x³ + 3ix²y - 3xy² - iy³ = x³ - 3xy² + i(3x²y - y³); it is analytic everywhere in the complex plane (i.e., entire), and its derivative is f’(z) = 3z².
  7. This function is a special case of De Moivre’s Theorem. For any real angle θ and integer n, $(\cos(\theta) + i\sin(\theta))^n = \cos(n\theta) + i\sin(n\theta).$ Every nonzero complex number z can be expressed as z = $r\bigl(\cos\theta + i\sin\theta\bigr)$. Applying De Moivre’s Theorem with n = 3, we get $z^3 = \bigl[r(\cos(\theta) + i\sin(\theta))\bigr]^3 = r^3\bigl(\cos(3\theta) + i\sin(3\theta)\bigr) =[\text{So in polar coordinates:}] r^3e^{i3\theta}$ which again confirms it’s analytic for all z ∈ ℂ.
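De Moivre's identity for n = 3 can be spot-checked numerically (θ = 0.7 is an arbitrary sample angle):

```python
import math

# Sketch: verify De Moivre's theorem (cosθ + i sinθ)^n = cos(nθ) + i sin(nθ)
# numerically for n = 3 at a sample angle.
theta, n = 0.7, 3
lhs = (math.cos(theta) + 1j * math.sin(theta)) ** n
rhs = math.cos(n * theta) + 1j * math.sin(n * theta)
print(abs(lhs - rhs) < 1e-12)  # True
```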

The continuity of the partial derivatives $\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial v}{\partial x}, \frac{\partial v}{\partial y}$ at a point (x₀, y₀) implies that the function f(x, y) = (u(x, y), v(x, y)), considered as a map from ℝ² → ℝ² is (real) differentiable at (x₀, y₀). This real differentiability means that the change in f can be approximated by a linear transformation (the Jacobian) plus an error term that goes to zero faster than the distance from (x₀, y₀).

The Jacobian matrix at (x₀, y₀) is $J_f(x₀, y₀) = (\begin{smallmatrix}\frac{\partial u}{\partial x}(x₀, y₀) & \frac{\partial u}{\partial y}(x₀, y₀)\\\ \frac{\partial v}{\partial x}(x₀, y₀) & \frac{\partial v}{\partial y}(x₀, y₀)\end{smallmatrix})$. Additionally, if f is complex-differentiable at z₀ = x₀ + iy₀, the partial derivatives satisfy the Cauchy-Riemann equations at (x₀, y₀): $\frac{\partial u}{\partial x}(x_0, y_0) = \frac{\partial v}{\partial y}(x_0, y_0), \frac{\partial u}{\partial y}(x_0, y_0) = -\frac{\partial v}{\partial x}(x_0, y_0)$.

Substituting these into the Jacobian matrix simplifies it to: $J_f(x₀, y₀) = (\begin{smallmatrix}a & -b\\\ b & a\end{smallmatrix}) \text{, where } a = \frac{\partial u}{\partial x}(x₀, y₀), b = \frac{\partial v}{\partial x}(x₀, y₀)$. The simplified matrix $(\begin{smallmatrix}a & -b\\\ b & a\end{smallmatrix})$ is equivalent to or represents multiplication by the complex number a + ib: (a + ib)⋅(x + iy) = (ax −by) +i(bx + ay). a + ib = $\frac{\partial u}{\partial x}(x_0, y_0) + i\frac{\partial v}{\partial x}(x₀, y₀)$ is precisely f’(z₀). Thus, the real linear approximation (via the Jacobian) becomes a complex linear approximation when the Cauchy-Riemann equations hold.

Hence the first‑order (linear) approximation is not just real‑affine but complex‑affine: f(z) is approximated by f(z) ≈ f(z₀) + f’(z₀)(z − z₀), where f’(z₀) = $\frac{\partial u}{\partial x}(x_0, y_0) + i\frac{\partial v}{\partial x}(x₀, y₀)$. This complex linearity is the defining feature, the hallmark, of complex differentiability.
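A quick numerical sketch of this complex-affine approximation (using f(z) = z³, for which f′(z) = 3z², and an arbitrary small step dz): the approximation error is of order |dz|², far smaller than |dz| itself.

```python
import cmath

# Sketch: test f(z) ≈ f(z0) + f'(z0)(z - z0) for f(z) = z**3, f'(z) = 3*z**2,
# with a small step in an arbitrary direction of the complex plane.
z0 = 1 + 2j
dz = 1e-4 * cmath.exp(0.9j)          # |dz| = 1e-4, direction is arbitrary
exact = (z0 + dz) ** 3
approx = z0 ** 3 + 3 * z0 ** 2 * dz  # complex-affine approximation
error = abs(exact - approx)
print(error)  # O(|dz|**2), i.e. orders of magnitude below |dz| = 1e-4
```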

When a complex function f: D ⊆ ℂ → ℂ, f(z) = f(x + iy) = u(x,y ) + iv(x, y) is viewed as a mapping from ℝ² to ℝ², f: D ⊆ ℝ² → ℝ², f(x, y) = (u(x, y), v(x, y)), its real differentiability is characterized by the existence of the Jacobian matrix. If f is real-differentiable at a point (a,b), its Jacobian matrix $\mathbb{Df} \bigg\rvert_{(a, b)}$ (also denoted $\mathbb{J_f}(a, b)$) is given by:

$\mathbb{Df} \bigg\rvert_{(a, b)} = \mathbb{J_f}(a, b) = (\begin{smallmatrix}\frac{\partial u}{\partial x}(a, b) & \frac{\partial u}{\partial y}(a, b)\\\ \frac{\partial v}{\partial x}(a, b) & \frac{\partial v}{\partial y}(a, b)\end{smallmatrix})$

This matrix represents the best linear approximation to the function f near the point (a, b). The existence of this matrix requires that all four partial derivatives $\frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial v}{\partial x}, \frac{\partial v}{\partial y}$ exist at (a, b). If these partial derivatives exist and are continuous in a neighborhood of (a, b), then f is guaranteed to be real-differentiable at (a,b). The Jacobian matrix encapsulates how f transforms small changes in the input (x,y) near (a,b).

If a complex function f(z) = u(x, y) + iv(x, y) is complex-differentiable at a point z₀ = a + ib, then it is automatically real-differentiable at (a, b) (complex differentiability is a stronger condition), and its Jacobian matrix Jf(a, b) must have a very specific structure. This structure is a direct consequence of the Cauchy-Riemann equations. Let f′(z₀) = α + iβ. The complex differentiability implies that the linear transformation represented by the Jacobian matrix must act like multiplication by the complex number f′(z₀).

Complex differentiability at z₀ means the local behavior of f near z₀ is indistinguishable from multiplying by the complex number f′(z₀). This forces the real derivative (Jacobian matrix) to be a matrix representation of that complex multiplication.

Multiplication by α + iβ corresponds to the linear transformation: (α + iβ)⋅(x + iy) = (αx −βy) +i(βx + αy)

$(\begin{smallmatrix}x\\\ y\end{smallmatrix}) \to (\begin{smallmatrix}αx−βy\\\ βx+αy\end{smallmatrix}) = (\begin{smallmatrix}α & −β\\\ β & α\end{smallmatrix})(\begin{smallmatrix}x\\\ y\end{smallmatrix})$

Complex differentiability imposes a severe restriction on the Jacobian. If f is complex-differentiable at z₀, its Jacobian matrix must be of the form: Jf(a, b) = $(\begin{smallmatrix}α & −β\\\ β & α\end{smallmatrix})$. Comparing this with the general form of the Jacobian $(\begin{smallmatrix}\frac{\partial u}{\partial x} & \frac{\partial u}{\partial y}\\\ \frac{\partial v}{\partial x} & \frac{\partial v}{\partial y}\end{smallmatrix})$ and using the Cauchy-Riemann equations $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$, we identify $\alpha = \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \beta = \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$. Thus, the Jacobian matrix of a complex-differentiable function is not an arbitrary 2×2 real matrix but is constrained to this specific structure, the so-called “amplitwist” form.

Geometrically, the local transformation is a conformal map: it combines a uniform scaling (amplification) by ∣α + iβ∣= $\sqrt{α²+ β²}$ and a rotation (twist) by arg(α+iβ). This is the origin of the term “amplitwist”.

Geometric Interpretation of the Complex Derivative

Consider a complex function f(z)=u(x, y) + iv(x, y), where z = x + iy. We can view this as a mapping from ℝ² to ℝ², specifically (x, y) ↦ (u(x, y), v(x, y)).

The total derivative of this mapping at a point (a, b) is represented by the Jacobian matrix, $\mathbb{J_f}(a, b)$. This matrix encapsulates all the first-order partial derivative information of the component functions u and v with respect to x and y. Specifically, the Jacobian matrix is defined as: $\mathbb{Df} \bigg\rvert_{(a, b)} = \mathbb{J_f}(a, b) = (\begin{smallmatrix}\frac{\partial u}{\partial x}(a, b) & \frac{\partial u}{\partial y}(a, b)\\\ \frac{\partial v}{\partial x}(a, b) & \frac{\partial v}{\partial y}(a, b)\end{smallmatrix}) =[\text{Cauchy-Riemann Equations}] (\begin{smallmatrix}\frac{\partial u}{\partial x}(a, b) & \frac{\partial u}{\partial y}(a, b)\\\ -\frac{\partial u}{\partial y}(a, b) & \frac{\partial u}{\partial x}(a, b)\end{smallmatrix})$ This matrix serves as the best linear approximation of the function f near the point (a, b). In other words, for small changes (Δx, Δy) away from the point (a, b), the change in the function f can be approximated by the product of the Jacobian matrix and the column vector representing the change: $\mathbb{J_f}(a, b)(\begin{smallmatrix}Δx\\\ Δy\end{smallmatrix})$.

This linear approximation is a fundamental concept in calculus, extending the idea of the derivative of a real-valued function to higher dimensions. The Jacobian matrix, therefore, describes how the function f transforms small neighborhoods around a point in its domain. Each entry in the Jacobian matrix represents the rate of change of one of the output components (u or v) with respect to one of the input variables (x or y).

For instance $\frac{\partial u}{\partial x}(a, b)$ tells us how rapidly the real part u of the complex function f(z) changes as x changes, while keeping y constant at b, and similarly for the other partial derivatives. The geometric interpretation of the Jacobian matrix is that it describes the local stretching, rotating, or transforming effect of the function near the point (a, b).

Let’s compute the determinant of this constrained Jacobian (Recall: for a function f to be complex differentiable, it must satisfy the Cauchy-Riemann equations): $det(\mathbb{J_f}(a, b)) = |\begin{smallmatrix}\frac{\partial u}{\partial x}(a, b) & \frac{\partial u}{\partial y}(a, b)\\\ -\frac{\partial u}{\partial y}(a, b) & \frac{\partial u}{\partial x}(a, b)\end{smallmatrix}| = (\frac{\partial u}{\partial x}(a, b))^2+(\frac{\partial u}{\partial y}(a, b))^2$

The complex derivative is defined as: f’(a + ib) = $\frac{\partial u}{\partial x}(a, b) + i\frac{\partial v}{\partial x}(a, b) =[\text{Using the Cauchy-Riemann Equations}] \frac{\partial u}{\partial x}(a, b) - i\frac{\partial u}{\partial y}(a, b)$

The magnitude of the complex derivative is: |f’(a + ib)| = $\sqrt{(\frac{\partial u}{\partial x}(a, b))^2 + (\frac{\partial u}{\partial y}(a, b))^2}$. Notice that: $|f'(a + ib)|^2 = (\frac{\partial u}{\partial x}(a, b))^2 + (\frac{\partial u}{\partial y}(a, b))^2 = det(\mathbb{J_f}(a, b))$
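This identity can be checked numerically (a sketch; f(z) = z² and the sample point z₀ = 1 + 2i are arbitrary choices): build the Jacobian of (u, v) by central differences and compare its determinant with |f′(z₀)|², where f′(z) = 2z.

```python
# Sketch: construct the Jacobian of (u, v) for f(z) = z**2 at z0 = 1 + 2j by
# central differences and confirm det J = |f'(z0)|**2, with f'(z) = 2*z.
def uv(x, y):
    w = complex(x, y) ** 2
    return w.real, w.imag

x0, y0, h = 1.0, 2.0, 1e-6
ux = (uv(x0 + h, y0)[0] - uv(x0 - h, y0)[0]) / (2 * h)
uy = (uv(x0, y0 + h)[0] - uv(x0, y0 - h)[0]) / (2 * h)
vx = (uv(x0 + h, y0)[1] - uv(x0 - h, y0)[1]) / (2 * h)
vy = (uv(x0, y0 + h)[1] - uv(x0, y0 - h)[1]) / (2 * h)
det_jacobian = ux * vy - uy * vx
fprime_sq = abs(2 * complex(x0, y0)) ** 2   # |f'(z0)|^2 = |2 z0|^2 = 20
print(abs(det_jacobian - fprime_sq) < 1e-4)  # True
```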

Assume f’(a + ib) ≠ 0, hence |f’(a + ib)| ≠ 0. Consider a small increment in the complex plane, $\alpha = \alpha_1 + i\alpha_2$, represented as a vector: $(\begin{smallmatrix}\alpha_1\\\ \alpha_2\end{smallmatrix})$.

Since $\mathbb{Df}$ is the linear approximation of f near the point (a, b) ≡ a + ib, applying it to the small increment $\alpha_1 + i\alpha_2$ tells us how this small vector gets transformed:

$f'(a + ib)·\alpha = \mathbb{Df} \bigg\rvert_{(a, b)}(\begin{smallmatrix}\alpha_1\\\ \alpha_2\end{smallmatrix}) = (\begin{smallmatrix}\frac{\partial u}{\partial x}(a, b) & \frac{\partial u}{\partial y}(a, b)\\\ -\frac{\partial u}{\partial y}(a, b) & \frac{\partial u}{\partial x}(a, b)\end{smallmatrix})(\begin{smallmatrix}\alpha_1\\\ \alpha_2\end{smallmatrix}) = \alpha_1\frac{\partial u}{\partial x}(a, b) + \alpha_2\frac{\partial u}{\partial y}(a, b)+i(-\alpha_1\frac{\partial u}{\partial y}(a, b) + \alpha_2\frac{\partial u}{\partial x}(a, b))$

$|\mathbb{Df} \bigg\rvert_{(a, b)}(\begin{smallmatrix}\alpha_1\\\ \alpha_2\end{smallmatrix})|^2 = (\alpha_1\frac{\partial u}{\partial x}(a, b) + \alpha_2\frac{\partial u}{\partial y}(a, b))^2 + (-\alpha_1\frac{\partial u}{\partial y}(a, b) + \alpha_2\frac{\partial u}{\partial x}(a, b))^2 = \alpha_1^2(\frac{\partial u}{\partial x}(a, b))^2+2\alpha_1\alpha_2\frac{\partial u}{\partial x}(a, b)\frac{\partial u}{\partial y}(a, b)+ \alpha_2^2(\frac{\partial u}{\partial y}(a, b))^2+\alpha_1^2(\frac{\partial u}{\partial y}(a, b))^2-2\alpha_1\alpha_2\frac{\partial u}{\partial x}(a, b)\frac{\partial u}{\partial y}(a, b)+\alpha_2^2(\frac{\partial u}{\partial x}(a, b))^2 = \alpha_1^2[(\frac{\partial u}{\partial x}(a, b))^2+(\frac{\partial u}{\partial y}(a, b))^2]+\alpha_2^2[(\frac{\partial u}{\partial y}(a, b))^2 + (\frac{\partial u}{\partial x}(a, b))^2]$

$|\mathbb{Df} \bigg\rvert_{(a, b)}(\begin{smallmatrix}\alpha_1\\\ \alpha_2\end{smallmatrix})|^2 = (\alpha_1^2+\alpha_2^2)((\frac{\partial u}{\partial x}(a, b))^2 + (\frac{\partial u}{\partial y}(a, b))^2)= |\alpha|^2|f'(a + ib)|^2$. Taking square roots:

$|\mathbb{Df} \bigg\rvert_{(a, b)}(\begin{smallmatrix}\alpha_1\\\ \alpha_2\end{smallmatrix})| = |\alpha||f'(a + ib)|$, so the map scales all directions by the same factor |f’(a + ib)| = |f’(z₀)|.

$arg(\mathbb{Df} \bigg\rvert_{(a, b)}(\begin{smallmatrix}\alpha_1\\\ \alpha_2\end{smallmatrix})) = arg(f'(a + ib)) + arg(\alpha)$, so it rotates by arg(f’(a + ib)) = arg(f’(z₀)).

This means that in the vicinity of a point z₀ = a + ib where f’(z₀) ≠ 0, the function f(z) acts by rotating points about z₀ and scaling their distances from z₀. The complex derivative f’(z₀) encapsulates both the magnitude of this scaling and the angle of this rotation. Specifically, the modulus |f’(z₀)| determines the scaling factor, and the argument arg(f’(z₀)) determines the angle of rotation.
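A tiny numerical sketch of this rotation-and-scaling behavior (f(z) = z², the point z₀ = 1 + i, and the displacement α are arbitrary sample choices): multiplication by f′(z₀) scales every small displacement by |f′(z₀)| and rotates it by arg(f′(z₀)).

```python
import cmath

# Sketch: multiplication by f'(z0) scales a small displacement alpha by
# |f'(z0)| and rotates it by arg(f'(z0)); here f(z) = z**2, so f'(z0) = 2*z0.
z0 = 1 + 1j
fp = 2 * z0                          # f'(z0) = 2 + 2j
alpha = 0.01 * cmath.exp(0.4j)       # a small displacement from z0
image = fp * alpha                   # how the displacement is transformed
scale = abs(image) / abs(alpha)      # equals |f'(z0)|
twist = cmath.phase(image) - cmath.phase(alpha)  # equals arg(f'(z0)) here
print(scale, twist)
```

Every direction of α gives the same scale and the same twist, which is exactly the "amplitwist" picture.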

The term “amplitwist” describes the combined geometric effect of a complex derivative. The “ampli” part refers to the amplification or scaling factor, which is given by the modulus (absolute value) of the complex derivative, |f’(z₀)|. If we consider a small line segment emanating from a point z₀ in the complex plane, the length of this segment after being mapped by the function f(z) will be scaled by a factor of |f’(z₀)|. The square of this scaling factor is precisely the determinant of the Jacobian matrix of the corresponding ℝ² → ℝ² mapping.

This scaling is uniform in all directions from z₀, meaning that an infinitesimal circle centered at z₀ would be mapped to an approximately infinitesimal circle centered at f(z₀), albeit with a different radius. The “twist” refers to the rotation induced by the complex derivative. The argument of the complex derivative, arg(f’(z₀)), specifies the angle through which an infinitesimal vector emanating from z₀ is rotated about z₀ when mapped by f(z).
