"If you’re going through hell, keep going," Winston Churchill.

"Sometimes people don’t want to hear the truth because they don’t want their illusions destroyed," Friedrich Nietzsche.

An algebraic equation is a mathematical statement that declares or asserts the equality of two algebraic expressions. These expressions are constructed using:

- **Dependent and independent variables**. Variables represent unknown quantities. The independent variable is chosen freely, while the dependent variable changes in response to the independent variable.
- **Constants**. Fixed numerical values that do not change.
- **Algebraic operations**. Operations such as addition, subtraction, multiplication, division, exponentiation, and root extraction.

Definition. A differential equation is an equation that involves one or more dependent variables, their derivatives with respect to one or more independent variables, and the independent variables themselves, e.g., $\frac{dy}{dx} = 3x + 5y$, $y' + y = 4xcos(2x)$, $\frac{dy}{dx} = x^2y + y$, etc.

It involves (e.g., $\frac{dy}{dx} = 3x +5y$):

- **Dependent variables**: variables that depend on one or more other variables (y).
- **Independent variables**: variables upon which the dependent variables depend (x).
- **Derivatives**: rates at which the dependent variables change with respect to the independent variables, $\frac{dy}{dx}$.

The Existence and Uniqueness Theorem provides crucial insight into the behavior of solutions to first-order differential equations ODEs. It states that if:

- The function f(x, y) (the right-hand side of the ODE y' = f(x, y)) is continuous in a neighborhood around a point (x_{0}, y_{0}), and
- Its partial derivative with respect to y, $\frac{∂f}{∂y}$, is also continuous near (x_{0}, y_{0}).

Then the differential equation y' = f(x, y) has a unique solution to the initial value problem through the point (x_{0}, y_{0}).
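As a quick illustration (an example of my own, not from the text), consider the IVP y' = y, y(0) = 1: here f(x, y) = y and ∂f/∂y = 1 are continuous everywhere, so the theorem guarantees the unique solution y = eˣ. A forward-Euler sketch converges to it:

```python
import math

# For y' = f(x, y) = y with y(0) = 1, both f and ∂f/∂y = 1 are continuous
# everywhere, so the unique solution through (0, 1) is y = e^x.
def euler(h, steps):
    x, y = 0.0, 1.0
    for _ in range(steps):
        y += h * y          # one Euler step of y' = y
        x += h
    return y

approx = euler(1e-4, 10_000)   # integrate up to x = 1
assert abs(approx - math.e) < 1e-3
```

Shrinking the step size h drives the numerical trajectory toward the single solution the theorem promises.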

A first-order linear ordinary differential equation (ODE) has the general form: a(x)y' + b(x)y = c(x) where y′ is the derivative of y with respect to x, and a(x), b(x), and c(x) are functions of x. If c(x) = 0, the equation is called homogeneous, i.e., a(x)y' + b(x)y = 0.

The equation can also be written in the standard linear form as: y' + p(x)y = q(x) where $p(x)=\frac{b(x)}{a(x)}\text{ and }q(x) = \frac{c(x)}{a(x)}$
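For a concrete instance of the standard form (illustrative values p(x) = 2, q(x) = 4, my own choice, not from the text), the integrating factor $\mu(x) = e^{\int p\,dx} = e^{2x}$ gives $y = 2 + Ce^{-2x}$; the sketch below verifies that this formula satisfies the ODE:

```python
import math

# Hypothetical example: y' + 2y = 4, i.e. standard form with p(x) = 2, q(x) = 4.
# Integrating factor: mu = e^{2x}; (mu*y)' = mu*q  =>  y = 2 + C*e^{-2x}.
def y(x, y0=0.0):
    C = y0 - 2.0            # C fixed by the initial condition y(0) = y0
    return 2.0 + C * math.exp(-2.0 * x)

# Check y' + 2y = 4 at a few points with a central finite difference.
h = 1e-6
for x in [0.0, 0.5, 1.0]:
    dy = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dy + 2 * y(x) - 4.0) < 1e-6
```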

A second-order linear homogeneous differential equation (ODE) with constant coefficients is a differential equation of the form: y'' + Ay' + By = 0 where:

- y is the dependent variable (a function of the independent variable t),
- y′ and y′′ are the first and second derivatives of y with respect to t,
- t is the independent variable,
- A and B are constants.

This equation is homogeneous, meaning that there are no external forcing terms (like a function of t) on the right-hand side.
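For illustration (the values A = 3, B = 2 are my own choice, not from the text), the characteristic equation r² + Ar + B = 0 gives the exponents of the general solution, which can be checked numerically:

```python
import math

# Illustrative constants A = 3, B = 2: r^2 + 3r + 2 = 0 has roots r = -1, -2,
# so the general solution is y = c1*e^{-t} + c2*e^{-2t}.
A, B = 3.0, 2.0
disc = A * A - 4 * B
r1 = (-A + math.sqrt(disc)) / 2   # root -1.0
r2 = (-A - math.sqrt(disc)) / 2   # root -2.0

def y(t, c1=1.0, c2=1.0):
    return c1 * math.exp(r1 * t) + c2 * math.exp(r2 * t)

# Verify y'' + A*y' + B*y = 0 by central finite differences.
h = 1e-5
for t in [0.0, 0.7, 1.3]:
    yp = (y(t + h) - y(t - h)) / (2 * h)
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / (h * h)
    assert abs(ypp + A * yp + B * y(t)) < 1e-4
```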

In the study of differential equations and dynamical systems, non-linear autonomous systems play a crucial role due to their complex and rich behavior. One of the fascinating phenomena exhibited by such systems is the occurrence of limit cycles, which are closed trajectories representing periodic solutions.

Consider a general non-linear autonomous system of the form:

$\begin{cases} x' = f(x, y) \\ y' = g(x, y) \end{cases}$ where:

- x' and y' denote the time derivatives $\frac{dx}{dt}$ and $\frac{dy}{dt}$, respectively.
- f(x, y) and g(x, y) are **non-linear functions that govern the time evolution of the variables x and y**.

This system describes how the variables x and y change with respect to time, based on their current values.

To better understand the behavior of this system, we can construct the velocity field $\vec{F}$ which provides a geometric interpretation of the system. The velocity field is defined as: $\vec{F} = f(x, y)\hat{\mathbf{i}}+g(x, y)\hat{\mathbf{j}}$ where

- $f(x, y)\hat{\mathbf{i}}$ represents the velocity component in the x-direction.
- $g(x, y)\hat{\mathbf{j}}$ represents the velocity component in the y-direction.
- $\hat{\mathbf{i}}$ and $\hat{\mathbf{j}}$ are unit vectors in the x and y directions, respectively.

This vector field describes how the values of x and y change over time at every point in the plane. Each point (x, y) has an associated vector $\vec{F}(x, y)$ indicating the direction and speed at which the system evolves from that point.

Solutions to the system are pairs of functions x(t) and y(t), but geometrically, they are trajectories or paths traced out by the evolving system in the xy-plane. These trajectories follow the direction of the vector field $\vec{F}$, meaning that at any point along the trajectory, the tangent to the path is given by $\vec{F}(x, y)$.
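This tangency can be illustrated numerically. The sketch below uses an illustrative lightly damped oscillator field (my own choice, not a system from the text) and a classic fourth-order Runge-Kutta step to trace a trajectory, then checks that the path's tangent matches $\vec{F}$:

```python
# Trace a trajectory of x' = f(x, y), y' = g(x, y) and check that its tangent
# agrees with the velocity field F at each point. f and g are illustrative.
def f(x, y): return y
def g(x, y): return -x - 0.1 * y   # lightly damped oscillator, for example

def rk4_step(x, y, h):
    k1 = (f(x, y), g(x, y))
    k2 = (f(x + h/2*k1[0], y + h/2*k1[1]), g(x + h/2*k1[0], y + h/2*k1[1]))
    k3 = (f(x + h/2*k2[0], y + h/2*k2[1]), g(x + h/2*k2[0], y + h/2*k2[1]))
    k4 = (f(x + h*k3[0], y + h*k3[1]), g(x + h*k3[0], y + h*k3[1]))
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, y, h = 1.0, 0.0, 0.01
path = [(x, y)]
for _ in range(1000):
    x, y = rk4_step(x, y, h)
    path.append((x, y))

# The numeric tangent along the path should align with F at the midpoint.
(x0, y0), (x1, y1) = path[500], path[501]
tx, ty = (x1 - x0) / h, (y1 - y0) / h
xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
assert abs(tx - f(xm, ym)) < 1e-3 and abs(ty - g(xm, ym)) < 1e-3
```

Plotting `path` with any plotting tool would show the spiral trajectory riding the vector field.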

Refer to Figure i for a visual representation and aid in understanding it.

A critical point of the system occurs where the velocity field is zero, i.e., where the system's rate of change is zero. Mathematically, this happens when both $f(x_0, y_0) = 0$ and $g(x_0, y_0) = 0$, meaning that the velocity at the point (x_{0}, y_{0}) is zero. At such points, the system is at equilibrium, meaning there is no motion, and the solution remains constant over time.

In terms of the field, these points are where: $\vec{F}(x_0, y_0) = f(x_0, y_0)\hat{\mathbf{i}}+g(x_0, y_0)\hat{\mathbf{j}} = \vec{0}$

Critical points are important because they *often represent stable or unstable equilibrium states where the system tends to “settle” or from which it may “escape” in the long term*, depending on the nature of the equilibrium.

The behavior near critical points can be analyzed using linearization and the eigenvalues of the Jacobian matrix:

J = $(\begin{smallmatrix}\frac{∂f}{∂x} & \frac{∂f}{∂y}\\ \frac{∂g}{∂x} & \frac{∂g}{∂y}\end{smallmatrix})\bigg|_{(x_0, y_0)}$

The eigenvalues of J determine the local behavior near the critical point:

- If both eigenvalues have negative real parts, the critical point is **a stable node or focus**. Solutions near the critical point converge to it as t → ∞.
- If both eigenvalues have positive real parts, it is an **unstable node or focus**. Solutions near the critical point diverge from it as t → ∞.
- If the eigenvalues are real and of opposite signs, the critical point is a **saddle point**. Solutions approach the critical point along one eigenvector and move away along the other.
- If the eigenvalues are purely imaginary conjugates, the critical point is a **center**. Solutions orbit around the critical point without converging or diverging.
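The eigenvalue cases above can be sketched as a small classifier. This is an illustrative helper of my own (names and tolerances are arbitrary), not a library routine:

```python
import cmath

# Classify a critical point from its 2x2 Jacobian J = [[a, b], [c, d]],
# using the eigenvalue cases listed above.
def classify(a, b, c, d):
    tr, det = a + d, a * d - b * c
    l1 = (tr + cmath.sqrt(tr * tr - 4 * det)) / 2
    l2 = (tr - cmath.sqrt(tr * tr - 4 * det)) / 2
    re1, re2 = l1.real, l2.real
    if abs(re1) < 1e-12 and abs(re2) < 1e-12 and l1.imag != 0:
        return "center"                 # purely imaginary conjugates
    if re1 < 0 and re2 < 0:
        return "stable node or focus"
    if re1 > 0 and re2 > 0:
        return "unstable node or focus"
    if re1 * re2 < 0:
        return "saddle point"           # real, opposite signs
    return "degenerate/borderline"

print(classify(0, 1, -1, 0))    # harmonic oscillator Jacobian: center
print(classify(-1, 0, 0, -2))   # stable node or focus
print(classify(1, 0, 0, -1))    # saddle point
```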

A closed trajectory is a path in the xy-plane that loops back to its starting point and then repeats itself indefinitely. In other words, if a trajectory is closed, **the system exhibits periodic behavior**, that is, after a certain period T, the system returns to its initial state, and this cycle repeats over and over.

Geometrically, a closed trajectory represents a periodic solution (x(t), y(t)) such that: x(t + T) = x(t), y(t + T) = y(t) for all t.

Importantly, **a closed trajectory does not cross itself** —the system cannot have two different velocity directions at the same point due to the uniqueness of solutions in differential equations (Refer to Figure v for a visual representation and aid in understanding it). This property ensures that the system’s motion is uniquely determined by the initial conditions and evolves in a smooth, continuous manner.

The simple harmonic oscillator is a fundamental example in differential equations and physics, representing systems that exhibit periodic motion, such as springs and pendulums under ideal conditions.

Consider the system of differential equations: $\begin{cases} x' = y \\ y' = -x \end{cases}$ where x = x(t) and y = y(t) are functions of time t (two variables that oscillate over time), and x' and y' denote the derivatives of x and y with respect to time. This system describes how the variables x and y evolve over time, with each variable depending on the other.

This system describes a simple harmonic oscillator.

The matrix representation of the system is: $\vec{x}' = A\vec{x}$ where A = $(\begin{smallmatrix}0 & 1\\ -1 & 0\end{smallmatrix})$ is the coefficient matrix, and $\vec{x} = (\begin{smallmatrix}x\\ y\end{smallmatrix})$ is the state vector. This compact form allows us to use linear algebra techniques to solve the system.

To find the eigenvalues, we solve the characteristic equation: det(A-λI) = $det(\begin{smallmatrix}-λ & 1\\ -1 & -λ\end{smallmatrix}) = 0 ↭ (−λ)(−λ)−(1)(−1) = 0 ↭ λ^2 + 1 = 0 ↭ λ^2 = -1$. The eigenvalues are λ_{1} = i and λ_{2} = -i. These complex eigenvalues indicate that the system exhibits rotational behavior (oscillatory) in the xy-plane (phase plane).

To find the eigenvectors, we substitute each eigenvalue back into $(A−λI)\vec{v} = \vec{0}$

For λ_{1} = i, $(\begin{smallmatrix}-i & 1\\ -1 & -i\end{smallmatrix})(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix}) = (\begin{smallmatrix}0\\ 0\end{smallmatrix})$

From the first equation: -ia_{1} +a_{2} = 0 ⇒ a_{2} = ia_{1}. From the second row: -a_{1} -ia_{2} = 0 ⇒[Substitute a_{2} = ia_{1}] -a_{1} -i(ia_{1}) = -a_{1} + a_{1} = 0. We can choose a_{1} = 1 (since eigenvectors are determined up to a scalar multiple). Therefore, the eigenvector is: $\vec{v_1}=(\begin{smallmatrix}1\\ i\end{smallmatrix})$

For λ_{2} = -i, $(\begin{smallmatrix}i & 1\\ -1 & i\end{smallmatrix})(\begin{smallmatrix}a_1\\ a_2\end{smallmatrix}) = (\begin{smallmatrix}0\\ 0\end{smallmatrix})$

From the first equation: ia_{1} +a_{2} = 0 ⇒ a_{2} = -ia_{1}. Choosing a_{1} = 1, we have: $\vec{v_2}=(\begin{smallmatrix}1\\ -i\end{smallmatrix})$

The general solution to the system in complex form is a linear combination of the eigenvectors multiplied by the exponentials of their eigenvalues: $\vec{x}(t) = c_1e^{it}\vec{v_1} + c_2e^{-it}\vec{v_2}$ where c_{1} and c_{2} are complex constants determined by initial conditions.

Substitute the eigenvectors: $\vec{x}(t) = c_1e^{it}(\begin{smallmatrix}1\\ i\end{smallmatrix})+c_2e^{-it}(\begin{smallmatrix}1\\ -i\end{smallmatrix})$

To find real-valued solutions, we express the complex exponentials using Euler’s formula: $e^{it} = cos(t) + isin(t), e^{-it} = cos(t) - isin(t)$

Substitute back into the general solution: $\vec{x}(t) = c_1(cos(t)+isin(t))(\begin{smallmatrix}1\\ i\end{smallmatrix})+c_2(cos(t)-isin(t))(\begin{smallmatrix}1\\ -i\end{smallmatrix}) = (\begin{smallmatrix}c_1(cos(t)+isin(t))+c_2(cos(t)-isin(t))\\ c_1(icos(t)-sin(t))+c_2(-icos(t)-sin(t))\end{smallmatrix})$

Let’s use the real and imaginary parts of the complex solutions to construct real solutions.

First Real Solution. Take the real part of $e^{it}\vec{v_1}$: $\text{Re}[e^{it}\vec{v_1}] = (\begin{smallmatrix}cos(t)\\ -sin(t)\end{smallmatrix})$

Second Real Solution. Take the imaginary part of $e^{it}\vec{v_1}$: $\text{Im}[e^{it}\vec{v_1}] = (\begin{smallmatrix}sin(t)\\ cos(t)\end{smallmatrix})$

Therefore, the general real solution is: $c_1(\begin{smallmatrix}cos(t)\\ -sin(t)\end{smallmatrix})+c_2(\begin{smallmatrix}sin(t)\\ cos(t)\end{smallmatrix})$ where c_{1} and c_{2} are real constants determined by initial conditions.

The solutions x(t) and y(t) represent sinusoidal functions with the same frequency but possibly different amplitudes and phases. The trajectories in the phase plane are closed curves, specifically circles or ellipses, depending on the constants c_{1} and c_{2}.

For the simple harmonic oscillator, this is a family of concentric circles centered at the origin, representing periodic motion [1]. Each trajectory is a closed curve, and the motion goes around clockwise indefinitely, which is typical of simple harmonic oscillators. (Refer to Figure ii for a visual representation and aid in understanding it).

In linear systems like the simple harmonic oscillator, all trajectories are closed curves (circles) but are not isolated —there is a family of closed trajectories filling the phase plane.

Linear systems do not have limit cycles in the strict sense because limit cycles are a feature of non-linear systems where closed trajectories are isolated.

For the simple harmonic oscillator, the trajectories are circles centered at the origin [1]. To see this, consider the expressions x(t) = c_{1}cos(t) + c_{2}sin(t), y(t) = -c_{1}sin(t) + c_{2}cos(t).

$x^2(t)+y^2(t) =[\text{Expanding both terms}] c_1^2·cos^2(t) + 2c_1c_2sin(t)cos(t) + c_2^2·sin^2(t) + c_1^2·sin^2(t) -2c_1c_2sin(t)cos(t) +c_2^2cos^2(t) =[\text{Simplify}] c_1^2(cos^2(t)+sin^2(t)) + c_2^2(sin^2(t) + cos^2(t)) + (2c_1c_2sin(t)cos(t)-2c_1c_2sin(t)cos(t)) = (c_1^2+c_2^2)(cos^2(t)+sin^2(t)) = c_1^2+c_2^2 ↭ x^2(t)+y^2(t) = c_1^2+c_2^2 = R^2$ where R = $\sqrt{c_1^2+c_2^2}$ is the radius of the circle. Therefore, the trajectories of the system are circles of radius R centered at the origin.
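The identity above is easy to spot-check numerically (the constants c₁ = 3, c₂ = 4 are arbitrary illustrative choices):

```python
import math

# For x = c1*cos(t) + c2*sin(t), y = -c1*sin(t) + c2*cos(t), the quantity
# x^2 + y^2 should stay equal to c1^2 + c2^2 for all t.
c1, c2 = 3.0, 4.0
for k in range(100):
    t = k * 0.1
    x = c1 * math.cos(t) + c2 * math.sin(t)
    y = -c1 * math.sin(t) + c2 * math.cos(t)
    assert abs(x * x + y * y - (c1 * c1 + c2 * c2)) < 1e-9

print(math.sqrt(c1**2 + c2**2))   # radius R = 5.0
```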

A limit cycle is a closed trajectory in the phase plane that is isolated, meaning that nearby trajectories are not closed and either spiral towards or away from the limit cycle.

Limit cycles are significant because they represent **sustained oscillations** in the system, which can be:

- A stable limit cycle is one where all neighboring trajectories approach the cycle as t → ∞. The system eventually returns to the limit cycle even if disturbed. In other words, **a stable limit cycle attracts nearby trajectories, causing the system to settle into a repeating, periodic behavior**. Nearby trajectories spiral inward toward the limit cycle, getting closer and closer, but never quite touching it. (Refer to Figure iii for a visual representation and aid in understanding it).
- An unstable limit cycle repels nearby trajectories. In this case, any small disturbance will cause the system to move away from the limit cycle; neighboring trajectories diverge from it as t → ∞. The system does not return to the periodic motion described by the limit cycle after a disturbance.
- A semi-stable limit cycle is one where trajectories on one side approach the limit cycle, while those on the other side move away. This means the limit cycle is stable from one direction and unstable from the other.

Limit cycles cannot occur in linear systems; they are unique to non-linear systems due to their complex interactions. In linear systems, any closed trajectories (such as circles or ellipses) are not isolated —they form a continuum of closed orbits filling the phase plane, and thus, they do not satisfy the definition of a limit cycle.

One real-world example of a limit cycle is the natural process of breathing. Breathing is a periodic motion that can be modeled as a limit cycle. If the system (your breathing) is disturbed, say, by a temporary obstruction, physical exercise, or a moment of anxiety, it will gradually return to its original, stable pattern of breathing. This resilience to disturbances makes it an example of a stable limit cycle.

Understanding whether a system has a limit cycle is a crucial challenge in the study of non-linear dynamical systems. Unfortunately, **there is no universal method to directly determine the existence of limit cycles in every situation**.

There are various approaches used by scientists to predict and identify limit cycles:

- **Intuition and Physical Insight**: Systems modeled after physical phenomena can be analyzed using intuition about those phenomena. For example, in ecological models like predator-prey dynamics, oscillatory behavior is expected, suggesting the presence of limit cycles.
- **Computer Simulations**: Since there are no universal analytical methods for finding limit cycles, scientists frequently rely on numerical simulations. **Numerical simulations allow visualization of trajectories in the phase plane**. By simulating the system with different initial conditions, potential closed trajectories (limit cycles) can be identified. Software tools such as MATLAB, Mathematica, or Python libraries (like NumPy and SciPy) are commonly used.
- **Analytical Methods**: Although general methods are scarce, some analytical tools, like the Poincaré-Bendixson Theorem and Bendixson's Criterion, provide useful insights into the existence or non-existence of limit cycles in specific systems.

While finding limit cycles can be difficult, in some cases, **it is possible to rule out their existence by applying specific criteria.**

Bendixson’s Criterion is a method to exclude the possibility of closed trajectories (and hence, limit cycles) within a region of the plane.

Bendixson’s criterion. Let D be a simply connected region of the xy-plane. Consider a continuously differentiable vector field: $\vec{F} = f(x, y)\hat{\mathbf{i}} + g(x, y)\hat{\mathbf{j}}$ governing the system:

$\begin{cases} x' = f(x, y) \\ y' = g(x, y) \end{cases}$

The divergence of the vector field is given by: $div \vec{F} = f_x + g_y = \frac{∂f}{∂x} + \frac{∂g}{∂y}$, where f(x, y) and g(x, y) are the components of the vector field governing the time evolution of x and y, respectively. If the divergence $div(\vec{F})$ is continuous throughout the region D and does not change sign, (i.e., it is always positive or always negative) and is not identically zero, then there are no closed trajectories (and therefore no limit cycles) lying entirely within D.

If the divergence of the vector field is always positive or always negative in the region D, it implies that the flow is either consistently diverging (spreading out) or converging (coming together) throughout the region. This consistent behavior prevents the trajectories from closing onto themselves to form limit cycles within D.

Example. Consider the system:

$\begin{cases} x' = x^3 + y^3 \\ y' = 3x + y^3 + 2y \end{cases}$

The vector field for this system is $\vec{F} = (x^3 + y^3)\hat{\mathbf{i}} + (3x + y^3 + 2y)\hat{\mathbf{j}}$

Let’s compute the divergence of the vector field:
$div \vec{F} = f_x + g_y = 3x^2 + 3y^2 + 2$. Notice that $div \vec{F} > 0$ everywhere in ℝ^{2} (it is always positive since x^{2} ≥ 0 and y^{2} ≥ 0 ∀x, y, and there is the additional positive term 2) ⇒[By Bendixson's criterion] Since the divergence is strictly positive throughout the plane, there can be no closed trajectories in the xy-plane ⇒ there are no limit cycles anywhere in the system.
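A quick numerical spot-check of this divergence (pure Python; the grid and step size are arbitrary choices of mine):

```python
# Spot-check Bendixson's criterion for x' = x^3 + y^3, y' = 3x + y^3 + 2y:
# div F = 3x^2 + 3y^2 + 2, sampled over a grid, is positive everywhere.
def div_F(x, y):
    return 3 * x * x + 3 * y * y + 2

samples = [div_F(i * 0.5, j * 0.5) for i in range(-10, 11) for j in range(-10, 11)]
assert min(samples) == 2          # the minimum, 2, occurs at the origin
assert all(v > 0 for v in samples)
```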

It’s important to note that Bendixson’s Criterion can only be used to exclude the possibility of limit cycles within a region. If the divergence changes sign or is zero somewhere in D, the criterion does not provide any information about the existence of limit cycles; other methods must be used.

Proof (by contradiction).

Assume, for the sake of contradiction, that there exists a closed trajectory C within the region D. Let R be the region enclosed by the curve C (Refer to Figure iv for a visual representation and aid in understanding it).

Consider the vector field $\vec{F} = f(x, y)\hat{\mathbf{i}} + g(x, y)\hat{\mathbf{j}}$ governing the system:

$\begin{cases} x' = f(x, y) \\ y' = g(x, y) \end{cases}$

Since C is a trajectory of the system, the vector field (velocity field) $\vec{F}$ is always tangent to the curve at every point (the curve is a trajectory, so it moves in the direction given by the vector field). Therefore, the outward normal vector and the vector field are perpendicular, i.e., their dot product is zero: $\vec{F}·\hat{\mathbf{n}} = 0$.

The flux integral of the vector field $\vec{F}$ across C is given by:

$\oint_{C} \vec{F}·\hat{\mathbf{n}}ds$ where:

- $\hat{\mathbf{n}}$ is the outward-pointing unit normal vector to C.
- ds is the differential arc length along C.

As we have previously stated, since $\vec{F}$ is tangent to C, $\vec{F}·\hat{\mathbf{n}} = 0$. Therefore, the flux across C is zero: $\oint_{C} \vec{F}·\hat{\mathbf{n}}ds = 0$

By Green’s Theorem, the flux integral over a closed curve C can be converted to a double integral over the region R enclosed by C: $\oint_{C} \vec{F}·\hat{\mathbf{n}}ds = \int\int_{R} div(\vec{F})dA$, where:

- $div(\vec{F}) = \frac{∂f}{∂x}+\frac{∂g}{∂y}$ is the divergence of the vector field.
- dA is the differential area element.

Since the flux across C is zero, we have: $\int\int_{R} div(\vec{F})dA = 0$

However, we have already assumed that:

- The divergence $div(\vec{F})$ is **continuous** in R.
- $div(\vec{F})$ is not identically zero and does not change sign throughout R, meaning it is either strictly positive or strictly negative.

Under these conditions, either $div(\vec{F}) > 0$ or $div(\vec{F}) < 0$ holds at every point of R. Since R has positive area and $div(\vec{F})$ is non-zero with constant sign, the integral $\int\int_{R} div(\vec{F})dA$ must be strictly positive or strictly negative, and in particular non-zero ⊥ Therefore, the assumption that a closed trajectory C exists is false. Hence, under the given conditions, there are no closed trajectories (limit cycles) within the region D.

**Critical point criterion** (Poincaré-Bendixson). If a closed trajectory C exists within a region D in the xy-plane, there must be at least one critical point (equilibrium point) inside the region enclosed by C.

A critical point is a point (x_{0}, y_{0}) where f(x_{0}, y_{0}) = 0, g(x_{0}, y_{0}) = 0.

Contrapositive Logic. The contrapositive of a statement “If A, then B” (A⇒B) is the logically equivalent statement “If not B, then not A” (¬B⇒¬A). Applying this to the critical point criterion:

- Original statement: If a closed trajectory exists, then there is a critical point inside it.
- Contrapositive: If there are no critical points in a region, then there are no closed trajectories (no limit cycles) in that region.

This means that if you can identify a region D in the plane that contains no critical points, then you can conclude that no closed trajectories (limit cycles) exist within D.

Consider the following non-linear autonomous system:

$\begin{cases} x' = x^2 + y^2 + 1 \\ y' = x^2 -y^2 \end{cases}$

To investigate whether limit cycles exist, we can first apply Bendixson’s Criterion. It states that if the divergence of the vector field is continuous and does not change sign (i.e., it is either strictly positive or strictly negative) in a simply connected region D, then there are no closed trajectories (and hence no limit cycles) lying entirely within D.

Given the vector field $\vec{F} = f(x, y)\hat{\mathbf{i}}+g(x, y)\hat{\mathbf{j}}$ where $f(x, y) = x^2 + y^2 + 1, g(x, y) = x^2 -y^2$. The divergence of the vector field $\vec{F}$ is given by: $div(\vec{F}) = \frac{∂f}{∂x} + \frac{∂g}{∂y} = 2x-2y$.

**Analyzing the Divergence**. Set the divergence equal to zero: 2x -2y = 0 ⇒ x -y = 0 ⇒ x = y. This tells us that along the line y = x, the divergence is zero.

**Signs of the Divergence**

- To the right of this line y = x (i.e., where x > y), the divergence is positive ($div(\vec{F}) = 2x−2y > 0$).
- To the left of this line (i.e., where x < y), the divergence is negative ($div(\vec{F}) =2x−2y < 0$).

**Applying Bendixson’s Criterion**

In regions where the divergence is strictly positive or strictly negative and continuous, Bendixson’s Criterion tells us that no closed trajectories can exist entirely within those regions.

- To the right of y = x: No closed trajectories exist.
- To the left of y = x: No closed trajectories exist.

However, along the line y = x, the divergence is zero. Therefore, Bendixson’s Criterion does not rule out the possibility of closed trajectories that cross this line or lie along it.
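A minimal sanity check of this sign pattern (sample points chosen arbitrarily):

```python
# div F = 2x - 2y changes sign across the line y = x, so Bendixson's
# criterion only applies on each side of the line separately.
def div_F(x, y):
    return 2 * x - 2 * y

assert div_F(2.0, 1.0) > 0    # right of y = x: positive
assert div_F(1.0, 2.0) < 0    # left of y = x: negative
assert div_F(1.5, 1.5) == 0   # on the line y = x: zero
```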

Next, we use the critical point criterion to check whether the system has any critical points. The Critical Point Criterion states:

- If a closed trajectory exists, there must be at least one critical point (equilibrium point) inside the region enclosed by the trajectory.
- Contrapositive: If there are no critical points inside a closed region, then there are no closed trajectories entirely within that region.

Critical points occur where both derivatives x′ and y′ are zero simultaneously.

However, this is impossible because $x^2+y^2+1 > 0$ for all real x and y (x^{2} + y^{2} = -1 has no real solutions) ⇒ **there are no critical points in the real plane** ⇒ since there are no critical points in the system, by the contrapositive of the critical point criterion, we can conclude that the system does not have any limit cycles.

Since $x' > 0$ for all $(x, y)$, all trajectories move rightward indefinitely. This alone is sufficient to rule out limit cycles.
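A small numeric sweep (grid chosen arbitrarily) illustrates that f(x, y) = x² + y² + 1 never vanishes:

```python
# x' = x^2 + y^2 + 1 >= 1 everywhere, so f never vanishes and the system
# has no critical points; sampling a grid illustrates this.
def f(x, y):
    return x * x + y * y + 1

vals = [f(i * 0.25, j * 0.25) for i in range(-20, 21) for j in range(-20, 21)]
assert min(vals) == 1     # the minimum, at the origin, is still strictly positive
```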

Example. Consider the spring-mass-damper system with mass m = 1, stiffness k = 1, and a non-linear damping coefficient c = c(x) > 0.

To determine whether limit cycles can exist in the given spring-mass-damper system with non-linear damping, we will apply Bendixson’s Criterion. This criterion helps us identify whether closed trajectories, and hence limit cycles, are possible in a particular region of the phase plane by examining the divergence of the vector field.

The governing equation for this system is: $x'' + c(x)x' + x = 0.$

To analyze this system in the phase plane, we convert it to state-space form by introducing a new variable y = x'. Then, we have:

$\begin{cases} x' = y \\ y' = -x -c(x)y \end{cases}$

This gives us the vector field $\vec{F} = f(x, y)\hat{\mathbf{i}}+g(x, y)\hat{\mathbf{j}}$ where f(x, y) = y, g(x, y) = -x -c(x)y.

To investigate whether limit cycles exist, we can apply Bendixson’s Criterion. It states that if the divergence of the vector field $\vec{F}$ is continuous and does not change sign (i.e., it is either strictly positive or strictly negative) in a simply connected region D, then no closed trajectories (and thus no limit cycles) can exist entirely within D.

The divergence of the vector field $\vec{F}$ is given by: $div(\vec{F}) = \frac{∂f}{∂x} + \frac{∂g}{∂y} = 0 - c(x) = -c(x)$

**Applying Bendixson’s Criterion:**

Since c(x) > 0 by definition, we have: $div(\vec{F}) = -c(x) < 0$ ∀x, y.

The divergence is strictly negative for all x and y (assuming c(x) is continuous). Thus:

- The divergence does not change sign in any region of the xy-plane.
- The divergence is also continuous in the phase plane since c(x) is continuous.
- By Bendixson’s Criterion, since the divergence is strictly negative and continuous across the entire plane, no closed trajectories (and therefore no limit cycles) can exist in this system.
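As a sketch, take the illustrative damping c(x) = 1 + x² (my own choice; any continuous positive damping behaves the same way), and the sign condition is easy to verify:

```python
# Illustrative damping c(x) = 1 + x^2 > 0: div F = -c(x) is strictly
# negative everywhere, so Bendixson's criterion rules out limit cycles.
def c(x):
    return 1 + x * x

# div F = -c(x) sampled along the x-axis is negative at every point.
assert all(-c(i * 0.1) < 0 for i in range(-50, 51))
```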

This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and is based on MIT OpenCourseWare [18.01 Single Variable Calculus, Fall 2007].

- NPTEL-NOC IITM, Introduction to Galois Theory.
- Algebra, Second Edition, by Michael Artin.
- LibreTexts, Calculus and Calculus 3e (Apex); Abstract and Geometric Algebra; Abstract Algebra: Theory and Applications (Judson).
- Field and Galois Theory, by Patrick Morandi. Springer.
- Michael Penn and MathMajor (YouTube).
- Contemporary Abstract Algebra, by Joseph A. Gallian.
- Andrew Misseldine (YouTube): Calculus, College Algebra, and Abstract Algebra.
- MIT OpenCourseWare [18.03 Differential Equations, Spring 2006], YouTube by MIT OpenCourseWare.
- Calculus Early Transcendentals: Differential & Multi-Variable Calculus for Social Sciences.