
Differentiability at a Point: A Rigorous Perspective

Behind this mask there is more than just flesh. Beneath this mask there is an idea… and ideas are bulletproof, Alan Moore


Motivation: From Real to Multivariable to Complex

In single-variable calculus, the derivative f’(x) is simply a number —the slope of the tangent line. In higher dimensions, the situation is richer. However, regardless of the dimension, it fundamentally represents the best linear approximation of a function near a point, with an error term that vanishes faster than the displacement.

  1. Single-Variable Calculus, $f: \mathbb{R} \to \mathbb{R}$. The derivative $f'(x) \in \mathbb{R}$ is a scalar, the slope of the tangent line.
    Approximation: f(x + h) = f(x) + f′(x)h + o(∣h∣), where o(∣h∣) denotes an error term that decays faster than ∣h∣.
  2. Multivariable Calculus $f: \mathbb{R}^n \to \mathbb{R}^m$. The Jacobian matrix $DF(x) \in \mathbb{R}^{m \times n}$, containing all first-order partial derivatives.
    Geometric Meaning: A linear transformation that best approximates the function locally.
    Approximation: F(x + h) = F(x) + DF(x)h + o(∥h∥), where $h \in \mathbb{R}^n$.
  3. Complex Functions $f: \mathbb{C} \to \mathbb{C}$. The derivative $f'(z) \in \mathbb{C}$ is a complex number.
    Geometric Meaning: Represents a rotation and scaling in the complex plane.
    Approximation: f(z + h) = f(z) + f′(z)h + o(|h|), where $h \in \mathbb{C}$.

Key Insight

In all cases, differentiability requires that the function can be approximated by a linear map (scalar, matrix, or complex multiplication) such that the error between the function and its linear approximation vanishes faster than the displacement. This ensures the linear map captures the dominant local behavior, unifying the concept of differentiability across dimensions.
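To see this concretely, here is a minimal numerical sketch (Python/NumPy; the choice of f(x) = sin x and the base point x = 1 are arbitrary, illustrative assumptions), showing that the approximation error shrinks faster than |h|:

```python
# Minimal sketch of the key insight: the error |f(x+h) - f(x) - f'(x)h|
# shrinks faster than |h| itself, i.e. the error is o(|h|).
import numpy as np

f, fprime = np.sin, np.cos   # f(x) = sin(x), so f'(x) = cos(x)
x = 1.0                      # arbitrary base point
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    error = abs(f(x + h) - f(x) - fprime(x) * h)
    print(f"h = {h:.0e}   error = {error:.2e}   error/|h| = {error / h:.2e}")
# The last column tends to 0 as h shrinks, which is exactly the o(|h|) condition.
```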

The Definition of Differentiability

A function $F: \mathbb{R}^n \to \mathbb{R}^m$ is differentiable at a point x if it can be well-approximated by a linear map near x, $F(x + h) \approx F(x) + L(h)$ where L is linear, and the error in this approximation vanishes faster than ∥h∥ as h → 0.

Differentiability in higher dimensions is far richer than the single‐variable “limit of difference quotient.” It asks: can a function f : ℝⁿ → ℝᵐ be locally approximated by a linear map? The answer is encoded in the Jacobian matrix Df(x), which —if it exists— provides the unique best affine approximation f(x+h) ≈ f(x) + Df(x)h.

Definition (Fréchet Differentiability). Differentiability at a point. Let $f: ℝ^n \to ℝ^m$ be a function and let x be an interior point of the domain of f, $x \in \operatorname{interior}(\operatorname{dom} f)$. The function f is differentiable at x if there exists a matrix $Df(x) \in ℝ^{m \times n}$ that satisfies $\lim_{\substack{z \in \operatorname{dom} f \\ z \neq x,\ z \to x}} \frac{\|f(z) - f(x) - Df(x)(z-x)\|_2}{\|z-x\|_2} = 0 \quad [\star]$

The matrix Df(x) is called the derivative, differential, or Jacobian matrix of f at x.

Key Points

  1. Why x Must Be an Interior Point. The condition $x \in \text{interior(dom f)}$ ensures that for small displacements or perturbations h = z - x, the point z = x + h remains within the domain of f.
    We need f(x + h) to be defined for h in all directions approaching 0. If x is on the boundary, some directions lead outside dom(f), making the limit undefined.
  2. The Limit Process: $z \neq x, z \to x$. We take the limit as z approaches x, but z is never actually equal to x, because we are looking at the rate of change as we get arbitrarily close to x, not the value at x itself. This emphasizes that the local behavior of f near x, not exactly at x, determines differentiability.
  3. Uniqueness of the Jacobian. If the limit exists (equivalently, if a matrix Df(x) satisfying the limit in $[\star]$ exists), it is unique. In other words, there is only one linear transformation that satisfies the definition and best approximates the function f at the point x to first order.
  4. Df(x)(z -x) represents the best linear approximation to the change in f near x. Df(x) is the Jacobian matrix (which represents the derivative as a linear transformation) and it’s multiplied by the vector (z - x) (which is a displacement vector in ℝn). The Jacobian tells us how f changes in direction and magnitude when moving a small amount from x in any direction.
    First-order Taylor expansion: $f(z) \approx f(x) + Df(x)(z−x)$
  5. Why the Norm Appears. The symbol $\|\cdot\|_2$ denotes the Euclidean norm of a vector: it generalizes the absolute value and measures the length or magnitude of a vector. This is essential because we are dealing with vectors (not real numbers) in ℝⁿ and ℝᵐ and we need a way to measure their size.
    Without norms, we cannot measure “closeness” or “smallness” in higher dimensions.
  6. The Relative Error Vanishes: The entire expression inside the limit $\frac{||f(z) - f(x) - Df(x)(z-x)||_2}{||(z-x)||_2} = \frac{\text{approximation error}}{\text{step size}}$ represents the relative error between the true change (f(z) - f(x)) and the linear approximation. The limit being 0 means that this relative error becomes arbitrarily small as z gets closer and closer to x. In other words, the linear approximation becomes increasingly accurate as we zoom in on the point x.
  7. Alternative Formulation using h. If we write or substitute z = x + h (h = z - x), then as $z \to x, h \to 0$. This gives us an alternative or equivalent form: $\lim_{\substack{x + h \in dom f \\ h \neq 0, h \to 0}} \frac{||f(x+h) - f(x) - Df(x)h||_2}{||h||_2} = 0$. This version is especially useful when writing Taylor expansions or computing limits directly.
  8. Computing the Jacobian Matrix Df(x). The Jacobian matrix is formed by taking partial derivatives of each component function: $Df(x) = \left(\frac{\partial f_i(x)}{\partial x_j}\right)_{1 \le i \le m,\ 1 \le j \le n}$. This means the entry in the i-th row and j-th column of Df(x) is the partial derivative of the i-th component function fᵢ with respect to the j-th variable xⱼ, evaluated at the point x. More explicitly, the Jacobian is:
$$Df(x) = \begin{pmatrix} \dfrac{\partial f_1}{\partial x_1} & \dfrac{\partial f_1}{\partial x_2} & \cdots & \dfrac{\partial f_1}{\partial x_n} \\[12pt] \dfrac{\partial f_2}{\partial x_1} & \dfrac{\partial f_2}{\partial x_2} & \cdots & \dfrac{\partial f_2}{\partial x_n} \\[12pt] \vdots & \vdots & \ddots & \vdots \\[12pt] \dfrac{\partial f_m}{\partial x_1} & \dfrac{\partial f_m}{\partial x_2} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{pmatrix}$$

The Jacobian Df(x) is an m × n real matrix, and this is the practical way to compute it. This matrix represents the linear map that best approximates f near x.
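As a practical illustration, here is a small sketch in Python/NumPy: the helper numerical_jacobian is an illustrative finite-difference approximation (an assumption, not part of the definition), and the test function is the example F(x, y) = (x² + y, xy) that appears later in this article:

```python
# Sketch: approximate the Jacobian column by column with forward differences
# and compare it to the analytic matrix of partial derivatives.
import numpy as np

def numerical_jacobian(F, x, eps=1e-6):
    """Column j is approximately (F(x + eps*e_j) - F(x)) / eps."""
    x = np.asarray(x, dtype=float)
    Fx = np.asarray(F(x), dtype=float)
    J = np.zeros((Fx.size, x.size))
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = eps
        J[:, j] = (np.asarray(F(x + e)) - Fx) / eps
    return J

# Example F(x, y) = (x^2 + y, x*y), whose analytic Jacobian is [[2x, 1], [y, x]].
F = lambda v: np.array([v[0]**2 + v[1], v[0] * v[1]])
x = np.array([1.0, 2.0])
print(numerical_jacobian(F, x))                   # approximately [[2, 1], [2, 1]]
print(np.array([[2 * x[0], 1.0], [x[1], x[0]]]))  # analytic DF(1, 2)
```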

Two Perspectives on the Jacobian

Column View: Each column is the partial derivative with respect to one variable: $DF = \begin{pmatrix} | & | & & | \\[4pt] \frac{\partial F}{\partial x_1} & \frac{\partial F}{\partial x_2} & \cdots & \frac{\partial F}{\partial x_n} \\[4pt] | & | & & | \end{pmatrix}$

Row View: Each row is the gradient of one component function: $DF = \begin{pmatrix} — & \nabla F_1^T & — \\ — & \nabla F_2^T & — \\ & \vdots & \\ — & \nabla F_m^T & — \end{pmatrix}$

The definition of differentiability generalizes the familiar derivative from single-variable calculus. Instead of a number, the derivative becomes a matrix — the Jacobian — which represents the best linear approximation to the function at a point.

It captures the idea that a function can be locally approximated by a linear transformation. The Jacobian matrix is the matrix representation of this linear transformation, and its entries are the partial derivatives of the component functions. The use of norms is crucial for making the definition rigorous in higher dimensions. The use of limits guarantees that this approximation becomes arbitrarily accurate as we zoom in on the point of interest.

Theorem (The Jacobian as a Linear Map). If F is differentiable at x, then: $DF(x) \cdot h = \text{best linear approximation to } F(x+h) - F(x)$

The action of DF(x) on a vector h gives the approximate change in F: $F(x + h) \approx F(x) + DF(x) \cdot h$

Uniqueness. If F is differentiable at x, the derivative DF(x) is unique.

Proof

  1. Suppose both A and B are m × n matrices satisfying the definition. Then, $\lim_{h \to 0} \frac{\|F(x+h) - F(x) - Ah\|}{\|h\|} = 0 \quad \text{and} \quad \lim_{h \to 0} \frac{\|F(x+h) - F(x) - Bh\|}{\|h\|} = 0$
  2. For any h ≠ 0: $\frac{\|Ah - Bh\|}{\|h\|} = \frac{\|[F(x+h) - F(x) - Bh] - [F(x+h) - F(x) - Ah]\|}{\|h\|}$ $\leq \frac{\|F(x+h) - F(x) - Ah\|}{\|h\|} + \frac{\|F(x+h) - F(x) - Bh\|}{\|h\|}$
  3. As $h \to 0$, the right-hand side $\to 0$. Therefore, $\lim_{h \to 0} \frac{\|Ah - Bh\|}{\|h\|} = 0$
  4. For any unit vector u, take h = tu with $t \to 0$: $\frac{\|A(tu) - B(tu)\|}{\|tu\|} = \frac{|t|\,\|Au - Bu\|}{|t|} = \|Au - Bu\|$
  5. This limit must equal 0, so $\|Au - Bu\| = 0$, i.e., Au = Bu for all unit vectors u, hence for all vectors.
  6. Therefore, A = B. ∎

Definition. The partial derivative of F with respect to $x_j$ at x is: $\frac{\partial F}{\partial x_j}(x) = \lim_{t \to 0} \frac{F(x + te_j) - F(x)}{t}$ where $e_j = (0, \ldots, 0, 1, 0, \ldots, 0)$ is the j-th standard basis vector.

The existence of all partial derivatives does NOT imply differentiability! Partial derivatives measure rates of change only along coordinate axes. Differentiability requires the function to behave linearly in ALL directions.

Counterexample: $f(x, y) = \begin{cases} \frac{xy}{x^2 + y^2} & (x, y) \neq (0, 0) \\ 0 & (x, y) = (0, 0) \end{cases}$

Partial derivatives at origin exist: $\frac{\partial f}{\partial x}(0, 0) = \lim_{t \to 0} \frac{f(t, 0) - f(0, 0)}{t} = \lim_{t \to 0} \frac{0}{t} = 0$, $\frac{\partial f}{\partial y}(0, 0) = \lim_{t \to 0} \frac{f(0, t) - f(0, 0)}{t} = \lim_{t \to 0} \frac{0}{t} = 0$

However, f is NOT differentiable at (0,0).

If differentiable, Df(0,0) = (0, 0) (the zero matrix), so: $\lim_{(h,k) \to (0,0)} \frac{|f(h, k) - 0 - 0|}{\sqrt{h^2 + k^2}} = \lim_{(h,k) \to (0,0)} \frac{|hk|}{(h^2 + k^2)^{3/2}}$

Along h = k = t: $\frac{|t^2|}{(2t^2)^{3/2}} = \frac{t^2}{2\sqrt{2}|t|^3} = \frac{1}{2\sqrt{2}|t|} \to \infty$. The limit doesn’t exist, so f is not differentiable at (0,0).
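A quick numerical sketch (Python/NumPy, illustrative only) confirms that this ratio blows up along the diagonal:

```python
# Sketch: evaluate the ratio |f(h, k)| / sqrt(h^2 + k^2) from the counterexample
# along the diagonal h = k = t; it grows without bound instead of tending to 0.
import numpy as np

f = lambda h, k: h * k / (h**2 + k**2) if (h, k) != (0.0, 0.0) else 0.0
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    ratio = abs(f(t, t)) / np.hypot(t, t)          # f(t, t) = 1/2, so ratio = 1/(2*sqrt(2)*t)
    print(f"t = {t:.0e}   ratio = {ratio:.3e}")    # blows up as t -> 0
```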

Theorem (Sufficient Condition). If all partial derivatives $\frac{\partial F_i}{\partial x_j}$ exist in a neighborhood of x and are continuous at x, then F is differentiable at x. This condition is written as “$F \in C^1$” or “F is continuously differentiable.”

Theorem (Differentiability Implies Continuity). If F is differentiable at x, then F is continuous at x.

Proof

  1. Since F is differentiable at x: $F(x + h) = F(x) + DF(x) \cdot h + o(\|h\|)$ where the remainder term satisfies $\frac{o(\| h\| )}{\| h\| }\rightarrow 0\quad \mathrm{as\ }h\rightarrow 0.$
  2. As $h \to 0$: (i) $DF(x) \cdot h \to 0$ (linear maps are continuous), (ii) By definition of the little‑o notation, $o(\| h\| )=\| h\| \varepsilon (h)\quad \mathrm{with}\quad \varepsilon (h)\rightarrow 0.$ Hence, $o(\| h\| )\rightarrow 0$.
  3. Putting everything together, $\lim_{h \to 0} F(x + h) = F(x) + 0 + 0 = F(x)$. So F is continuous at x. ∎

The Converse is False. Continuity does NOT imply differentiability!

Examples: f(x) = |x| is continuous at 0 but not differentiable there. F(x, y) = (|x|, |y|) is continuous everywhere but not differentiable on the coordinate axes.

Examples of Jacobians

Identity map F(x) = x. Each component is $F_i(x) = x_i$, so $\frac{\partial F_i}{\partial x_j}$ is 1 if $i = j$ and 0 otherwise. This is precisely the n × n identity matrix, $DF(x) = I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$

Linear map $f(\mathbf{x}) = A\mathbf{x}$, where $A \in \mathbb{R}^{m \times n}$. To find the Jacobian matrix $Df(\mathbf{x})$, we look at the function component by component. The $i$-th component of the output vector $f(\mathbf{x})$, denoted $f_i(\mathbf{x})$, is the dot product of the $i$-th row of $A$ with the vector $\mathbf{x}$: $f_i(\mathbf{x}) = \sum_{j=1}^n a_{ij} x_j = a_{i1}x_1 + a_{i2}x_2 + \dots + a_{in}x_n$

We differentiate $f_i(\mathbf{x})$ with respect to a specific input variable $x_k$. Since the expression is linear (a simple sum of coefficients times variables), only the term containing $x_k$ survives, $\frac{\partial f_i}{\partial x_k} = \frac{\partial}{\partial x_k} (a_{ik}x_k) = a_{ik}$

The Jacobian matrix $J$ or $Df(\mathbf{x})$ is defined as the matrix where the entry in the $i$-th row and $k$-th column is $\frac{\partial f_i}{\partial x_k}$, $(Df(\mathbf{x}))_{ik} = a_{ik}$

Since the $(i, k)$ entry of the Jacobian is exactly the $(i, k)$ entry of the matrix $A$, the matrices are identical. Conclusion: $\boxed{D(A\mathbf{x}) = A}$

Intuition: The derivative represents the best linear approximation to a function near a point. A linear map is its own best linear approximation, so the Jacobian of $f(\mathbf{x}) = A\mathbf{x}$ is simply $A$ at every point.

Example (m = 2, n = 3). If A = $(\begin{smallmatrix}a_{11} & a_{12} & a_{13}\\\\a_{21} & a_{22} & a_{23}\end{smallmatrix})$ and $f(\vec{x}) = A\vec{x}$, then $Df(\vec{x}) = A = (\begin{smallmatrix}a_{11} & a_{12} & a_{13}\\\\a_{21} & a_{22} & a_{23}\end{smallmatrix})$
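A minimal numerical sketch (Python/NumPy; the particular 2 × 3 matrix A and the base point below are arbitrary choices) confirming that the finite-difference Jacobian of a linear map reproduces A itself:

```python
# Sketch: for a linear map F(x) = A x, the finite-difference Jacobian equals A.
import numpy as np

A = np.array([[1.0, -2.0, 0.5],
              [3.0,  0.0, 4.0]])          # arbitrary 2x3 matrix
F = lambda x: A @ x
x0, eps = np.array([0.3, -1.2, 2.0]), 1e-6
J = np.column_stack([(F(x0 + eps * e) - F(x0)) / eps for e in np.eye(3)])
print(np.allclose(J, A))                  # True: D(Ax) = A
```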

Affine map $f(\mathbf{x}) = A\mathbf{x} + \mathbf{b}$. The constant vector $\mathbf{b}$ has zero derivative, so $Df(\mathbf{x}) = A$. This shows that adding a constant vector does not change the Jacobian: constant translations do not affect the local linear behavior, so translation does not change the linear approximation.

Quadratic form $f(\mathbf{x}) = \mathbf{x}^T A \mathbf{x}$, where $A$ is an $n \times n$ matrix. We want to find the gradient $\nabla f(\mathbf{x}) = (\frac{\partial f}{\partial x_1}, \cdots, \frac{\partial f}{\partial x_n})$, which is a vector of partial derivatives.

  1. Expansion into Double Summation. Let’s verify the expansion. $A\mathbf{x}$ is a vector where the $i$-th component is $\sum_{j} a_{ij}x_j$ and $f(\mathbf{x}) = \sum_{i=1}^n x_i \left( \sum_{j=1}^n a_{ij} x_j \right) = \sum_{i=1}^n \sum_{j=1}^n a_{ij} x_i x_j$
  2. Differentiate with respect to $x_k$. We need to compute $\frac{\partial f}{\partial x_k}$. This is tricky because $x_k$ appears in the sum multiple times:
    When $i = k$: Terms like $a_{kj} x_k x_j$
    When $j = k$: Terms like $a_{ik} x_i x_k$
    Using the Product Rule on the general term $a_{ij} x_i x_j$:
    $\frac{\partial}{\partial x_k} (a_{ij} x_i x_j) = a_{ij} \left( \frac{\partial x_i}{\partial x_k} x_j + x_i \frac{\partial x_j}{\partial x_k} \right)$
    Recall that $\frac{\partial x_i}{\partial x_k}$ is 1 if $i=k$ and 0 otherwise (Kronecker delta $\delta_{ik}$).
    So, the sum splits into two parts:
    Part A (Index $i=k$): The derivative acts on the first $x$. $\sum_{j=1}^n a_{kj} x_j = (A\mathbf{x})_k$ (the $k$-th component of $A\mathbf{x}$)
    Part B (Index $j=k$): The derivative acts on the second $x$. $\sum_{i=1}^n a_{ik} x_i = (A^T\mathbf{x})_k$
    Note that $\sum_{i} a_{ik} x_i$ is the same as the row-vector multiplication of the $k$-th row of $A^T$ by $\mathbf{x}$. This equals the $k$-th component of $A^T\mathbf{x}$.
  3. Combine. $\frac{\partial f}{\partial x_k} = (A\mathbf{x})_k + (A^T\mathbf{x})_k$ Since this holds for all $k$, the full gradient vector is: $\nabla f(\mathbf{x}) = A\mathbf{x} + A^T\mathbf{x} = (A + A^T)\mathbf{x}$
  4. If $A$ is symmetric, then $A = A^T$. Substituting this into our result: $\nabla f(\mathbf{x}) = (A + A)\mathbf{x} = 2A\mathbf{x}$
  5. The Jacobian of a function $f: \mathbb{R}^n \to \mathbb{R}^m$ is an m × n matrix. Since f outputs a scalar (m = 1), the Jacobian is a row vector: $DF(x) = (\frac{\partial f}{\partial x_1}, \cdots, \frac{\partial f}{\partial x_n}) = (\nabla f(\mathbf{x}))^T = ((A + A^T)\mathbf{x})^T = \mathbf{x}^T(A + A^T)$.
    If A is symmetric ($A = A^T$), we get the final form: $\boxed{DF(x) = 2\mathbf{x}^TA}$. A numerical check of the gradient identity follows this list.
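Here is the promised check, a minimal Python/NumPy sketch (the matrix A, deliberately non-symmetric, and the point x₀ are arbitrary choices) verifying that ∇(xᵀAx) = (A + Aᵀ)x:

```python
# Sketch: finite-difference gradient of f(x) = x^T A x versus the analytic (A + A^T) x.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                # arbitrary, non-symmetric
f = lambda x: x @ A @ x
x0, eps = np.array([1.0, -2.0]), 1e-6
grad_fd = np.array([(f(x0 + eps * e) - f(x0)) / eps for e in np.eye(2)])
print(grad_fd)                            # approximately (2, -11)
print((A + A.T) @ x0)                     # analytic gradient: (2, -11)
```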

Squared norm $f(\mathbf{x}) = \|\mathbf{x}\|^2 = \mathbf{x}^T\mathbf{x}$. This is the quadratic case with A = I: Gradient: $\nabla f(x) = 2x$, Jacobian (as 1 × n row vector): $Df(x) = 2x^T = (2x_1, 2x_2, \ldots, 2x_n)$

Example. $F(x, y) = (x^2 + y,\; xy)$. Component functions: F₁(x, y) = x² + y, F₂(x, y) = xy

Partial derivatives: $\frac{\partial F_1}{\partial x} = 2x, \quad \frac{\partial F_1}{\partial y} = 1$, $\frac{\partial F_2}{\partial x} = y, \quad \frac{\partial F_2}{\partial y} = x$

Jacobian: $DF(x, y) = \begin{pmatrix} 2x & 1 \\ y & x \end{pmatrix}$. At (1, 2): $DF(1, 2) = \begin{pmatrix} 2 & 1 \\ 2 & 1 \end{pmatrix}$

For $F(x, y) = (x + y,\; x^2 y,\; \sin(xy))$, the Jacobian is: $DF(x, y) = \begin{pmatrix} 1 & 1 \\ 2xy & x^2 \\ y\cos(xy) & x\cos(xy) \end{pmatrix}$

Polar coordinates, $F(r, \theta) = (r\cos\theta,\; r\sin\theta)$. Jacobian: $DF(r, \theta) = \begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix}$

Determinant: $\det(DF) = r\cos^2\theta + r\sin^2\theta = r$. This is the Jacobian determinant for the change of variables from Cartesian to polar coordinates.
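A short numerical sketch (Python/NumPy; the point r = 2, θ = 0.7 is an arbitrary choice) confirming that the Jacobian determinant of the polar-coordinate map equals r:

```python
# Sketch: det of the finite-difference Jacobian of F(r, theta) = (r cos(theta), r sin(theta)).
import numpy as np

F = lambda v: np.array([v[0] * np.cos(v[1]), v[0] * np.sin(v[1])])
p, eps = np.array([2.0, 0.7]), 1e-6       # r = 2, theta = 0.7 (arbitrary)
J = np.column_stack([(F(p + eps * e) - F(p)) / eps for e in np.eye(2)])
print(np.linalg.det(J))                   # approximately 2.0, i.e. det DF = r
```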

Quick Jacobian Recipes

| Function Type | F(x) | Jacobian DF(x) |
|---|---|---|
| Identity | x | I |
| Linear | Ax | A |
| Affine | Ax + b | A |
| Quadratic | xᵀAx | xᵀ(A + Aᵀ) |
| Squared norm | ∥x∥² | 2xᵀ |
| Component-wise | (f₁(x), …, fₘ(x)) | (∇f₁ᵀ; …; ∇fₘᵀ) |

Differentiable function

Definition. A function f is called differentiable if its domain dom(f) ⊆ ℝⁿ is open and f is differentiable at every point of its domain (∀x ∈ dom(f)).

Definition. F is continuously differentiable on an open set U if:

  1. F is differentiable on U (at every point of U), and
  2. All partial derivatives exist and are continuous on U.

This is the standard definition of $C^1$.

Theorem. A function F is in $C^1(U)$ if and only if all its partial derivatives exist and are continuous on U.

Theorem (Chain Rule). Let $U \subseteq \mathbb{R}^n$ and $V \subseteq \mathbb{R}^m$ be open sets. Consider the functions: $F: U \to \mathbb{R}^m$ (differentiable at $\mathbf{x} \in U$), $G: V \to \mathbb{R}^p$ (differentiable at $\mathbf{y} = F(\mathbf{x}) \in V$).

Then, the composition $H = G \circ F$ is differentiable at $\mathbf{x}$, and its total derivative is the product of the individual derivatives: $\boxed{D(G \circ F)(\mathbf{x}) = DG(F(\mathbf{x})) \cdot DF(\mathbf{x})}$

If we represent the derivatives as matrices, the Jacobian of the composition is the matrix product of the individual Jacobians, $J_{G \circ F}(\mathbf{x}) = \underbrace{J_G(F(\mathbf{x}))}_{p \times m} \cdot \underbrace{J_F(\mathbf{x})}_{m \times n} = \underbrace{J_{G \circ F}(\mathbf{x})}_{p \times n}$

Proof

  1. Let $\mathbf{h}$ be a small displacement vector in the input space $\mathbb{R}^n$ near $\mathbf{x}$. The change in the first function $F$ is $\mathbf{k} = F(\mathbf{x} + \mathbf{h}) - F(\mathbf{x})$.
  2. Since F is differentiable at x, we can express this change as a linear part plus a small error: $F(\mathbf{x} + \mathbf{h}) = F(\mathbf{x}) + DF(\mathbf{x})\mathbf{h} + E_F(\mathbf{h})$, where the error term $E_F(\mathbf{h})$ vanishes quickly enough, i.e., $\lim_{\mathbf{h} \to 0} \frac{\|E_F(\mathbf{h})\|}{\|\mathbf{h}\|} = 0$ (written as $o(\|\mathbf{h}\|)$).
  3. Next, let’s consider the second function $G$. Since $G$ is differentiable at $\mathbf{y} = F(\mathbf{x})$, for any small change $\mathbf{k}$ in its input, we also have: $G(\mathbf{y} + \mathbf{k}) - G(\mathbf{y}) = DG(\mathbf{y})\mathbf{k} + o(\|\mathbf{k}\|)$.
  4. We want to express the total change in the composite function $G(F(\mathbf{x} + \mathbf{h})) - G(F(\mathbf{x}))$.
    Note that $F(\mathbf{x} + \mathbf{h}) = F(\mathbf{x}) + \mathbf{k} = \mathbf{y} + \mathbf{k}$.
    $G(F(\mathbf{x} + \mathbf{h})) - G(F(\mathbf{x}))= DG(\mathbf{y}) \cdot [\underbrace{DF(\mathbf{x})\mathbf{h} + E_F(\mathbf{h})}_{\mathbf{k}}] + E_G(\mathbf{k})$
  5. Distribute the linear operator $DG(\mathbf{y})$: $= \underbrace{DG(\mathbf{y}) \cdot DF(\mathbf{x}) \cdot \mathbf{h}}_{\text{Candidate Linear Term}} + \underbrace{DG(\mathbf{y}) \cdot E_F(\mathbf{h}) + E_G(\mathbf{k})}_{\text{Total Error } E(\mathbf{h})}$
  6. Analyze the Error Terms. To prove differentiability, we must show that the total error $E(\mathbf{h})$ goes to 0 faster than $\mathbf{h}$ (i.e., is $o(\|\mathbf{h}\|)$).
    Term 1: $DG(\mathbf{y}) \cdot E_F(\mathbf{h})$. Since $DG(\mathbf{y})$ is a fixed linear map (bounded) and $E_F(\mathbf{h})$ is $o(\|\mathbf{h}\|)$, their product is also $o(\|\mathbf{h}\|)$.
    Term 2: We want to check the limit: $\lim_{\mathbf{h} \to 0} \frac{\|E_G(\mathbf{k})\|}{\|\mathbf{h}\|} =[\text{Assuming } k \ne 0] \lim_{\mathbf{h} \to 0} \left( \frac{\|E_G(\mathbf{k})\|}{\|\mathbf{k}\|} \cdot \frac{\|\mathbf{k}\|}{\|\mathbf{h}\|} \right)$
    $\frac{\|E_G(\mathbf{k})\|}{\|\mathbf{k}\|}$. As $\mathbf{h} \to 0$, we know $\mathbf{k} \to 0$ (because $F$ is continuous). By the definition of derivative for $G$, this ratio goes to 0.
    $\frac{\|\mathbf{k}\|}{\|\mathbf{h}\|}$. We know $\mathbf{k} = DF(\mathbf{x})\mathbf{h} + E_F(\mathbf{h})$. Since linear maps are bounded and $E_F(\mathbf{h})$ is $o(\|\mathbf{h}\|)$, for small $\mathbf{h}$ there is a constant $C$ such that $\|\mathbf{k}\| \leq C \|\mathbf{h}\|$. Therefore, this ratio is bounded (it doesn’t explode to infinity).
    $(\text{Something going to } 0) \times (\text{Something bounded}) = 0$
  7. Conclusion: The total change is the linear map $DG(\mathbf{y}) \cdot DF(\mathbf{x})$ acting on $\mathbf{h}$, plus a negligible error.

Therefore, the derivative is indeed the product of the matrices: $D(G \circ F)(\mathbf{x}) = DG(F(\mathbf{x})) \cdot DF(\mathbf{x})$
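A minimal numerical sketch (Python/NumPy; the maps F, G and the point x₀ below are arbitrary illustrative choices) checking the chain rule by comparing the finite-difference Jacobian of the composition with the product of the individual Jacobians:

```python
# Sketch: numerical check that D(G∘F)(x) ≈ DG(F(x)) · DF(x).
import numpy as np

F = lambda v: np.array([v[0]**2 + v[1], np.sin(v[0] * v[1])])      # R^2 -> R^2
G = lambda w: np.array([w[0] * w[1], w[0] + w[1], np.exp(w[0])])   # R^2 -> R^3

def jac(f, x, eps=1e-6):
    """Forward-difference Jacobian of f at x."""
    fx = f(x)
    return np.column_stack([(f(x + eps * e) - fx) / eps for e in np.eye(x.size)])

x0 = np.array([0.5, -1.0])
lhs = jac(lambda v: G(F(v)), x0)              # Jacobian of the composition
rhs = jac(G, F(x0)) @ jac(F, x0)              # product of the individual Jacobians
print(np.allclose(lhs, rhs, atol=1e-3))       # True (up to finite-difference error)
```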

Jacobians of Complex Functions

A complex function f: ℂ → ℂ with f(z) = u(x, y) + iv(x, y) can be viewed as F: ℝ² → ℝ²: $F(x, y) = (u(x, y), v(x, y))$

The Jacobian of a Complex Function: $J_F = \begin{pmatrix} u_x & u_y \\ v_x & v_y \end{pmatrix}$


The Jacobian has the form $\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ exactly when $u_x = v_y \quad \text{and} \quad u_y = -v_x$. These are the Cauchy-Riemann equations.

This matrix represents multiplication by the complex number w = a + ib.

Multiplication by w = a + ib acts on z = x + iy: $w \cdot z = (a + ib)(x + iy) = (ax - by) + i(bx + ay)$

As a map ℝ² → ℝ²: $(x, y) \mapsto (ax - by, bx + ay) = \begin{pmatrix} a & -b \\ b & a \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$

Writing w = $re^{i\theta} = r(\cos\theta + i\sin\theta)$ where r = |w| = $\sqrt{a² + b²}$ and θ = arg(w): $\begin{pmatrix} a & -b \\ b & a \end{pmatrix} = r \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$

This is a rotation by θ followed by scaling by r. Furthermore, $\det\begin{pmatrix} a & -b \\ b & a \end{pmatrix} = a^2 + b^2 = |w|^2$. Complex-differentiable functions scale areas by |f’(z)|².
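As a concrete sketch (Python/NumPy; the function f(z) = z² and the point z = 1 + 2i are arbitrary choices), the Jacobian of an analytic function indeed has the rotation-scaling form and its determinant equals |f′(z)|²:

```python
# Sketch: f(z) = z^2, viewed as F(x, y) = (x^2 - y^2, 2xy), has a Jacobian of the
# form [[a, -b], [b, a]] (Cauchy-Riemann), and det DF = |f'(z)|^2.
import numpy as np

x, y = 1.0, 2.0                      # arbitrary point z = 1 + 2i
J = np.array([[2 * x, -2 * y],       # u_x  u_y   with u = x^2 - y^2
              [2 * y,  2 * x]])      # v_x  v_y   with v = 2xy
fprime = 2 * complex(x, y)           # f'(z) = 2z
print(np.isclose(np.linalg.det(J), abs(fprime)**2))   # True: det DF = |f'(z)|^2
print(J[0, 0] == J[1, 1], J[0, 1] == -J[1, 0])        # Cauchy-Riemann equations hold
```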
