Elementary ODEs

Motion: location from velocity

We start with ordinary differential equations (ODEs) of cochains with respect to their exterior derivative $d$. We choose a few simple examples that have explicit solutions.

This is the most elementary ODE with respect to a $0$-cochain $f$ over ${\mathbb R}$: $$df=G,$$ with $G$ some $1$-cochain.

This equation has a solution, i.e., a $0$-cochain $f$ that satisfies the equation for every $1$-chain $a$: $$df(a)=G(a).$$ There are, in fact, multiple solutions here. We know that they differ by a constant, as antiderivatives should.

If we also choose an initial condition at node $A\in {\mathbb R}$: $$f(A)=r\in R,$$ then the solution is unique. The explicit formula is: $$f(X)=r+G(AX),\ X\in {\mathbb R},$$ where $AX$ is the $1$-chain from $A$ to $X$, i.e., $\partial AX=X-A$.

To verify that this is indeed a solution, let's suppose $b:=BC\in {\mathbb R}$. Then, by the Stokes Theorem, we have $$\begin{array}{llllll} df(b)&=f(\partial b)=f(C-B)= f(C)-f(B)\\ &=[r+G(AC)]-[r+G(AB)]=G(AC-AB)=G(BC)\\ &=G(b), \end{array}$$ because $AB+BC+CA=0$ and, therefore, $AC-AB=BC$.

We can use the above equation to model motion:

  • $f=f(t)$ is the location of an object at time $t$, a $0$-cochain defined on the integers ${\bf Z}$, and
  • $G=G(t)$ is its displacement, a $1$-cochain defined on the intervals $[0,1],[1,2], ...$.
Discrete antidifferention.png

We can give the above formula a more familiar, integral form: $$f(X)=r+\displaystyle\int_A^X G(x)dx,$$ which, in our discrete setting, is simply a summation: $$f(X)=r+\displaystyle\sum_{i=A}^{X-1} G\Big([i,i+1] \Big).$$
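This running total is easy to compute. Below is a minimal sketch in Python (the names `antiderivative`, `G_values`, and `r` are ours, chosen for illustration):

```python
# Discrete antidifferentiation: recover the 0-cochain f from the 1-cochain G.
# G_values lists G([A, A+1]), G([A+1, A+2]), ... and r is the initial value f(A).
def antiderivative(G_values, r):
    f = [r]
    for g in G_values:          # g = G([i, i+1])
        f.append(f[-1] + g)     # f(i+1) = f(i) + G([i, i+1])
    return f

# Example: constant displacement 2 on each interval, starting at f(0) = 5.
print(antiderivative([2, 2, 2, 2], 5))   # [5, 7, 9, 11, 13]
```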

A more conventional way to represent motion is with an ODE with respect to:

  • the first derivative $f'$ instead of the exterior derivative, and
  • the velocity $v$ instead of the displacement. $\\$

Then the ODE becomes: $$f'=v.$$ Recall that both are dual $0$-cochains. With the same initial condition, the formula is similar: $$f(X)=r+\displaystyle\sum_{i=A}^{X-1} v\Big([i,i+1] \Big) \big|[i,i+1] \big|.$$
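The weighted sum can be evaluated the same way; here is a minimal Python sketch (the names `v_values` and `lengths` are ours), with the interval lengths supplied as data:

```python
# Location from velocity over variable time intervals:
# v_values[i] = v([i, i+1]) and lengths[i] = |[i, i+1]|.
def location_from_velocity(v_values, lengths, r):
    f = [r]
    for v, h in zip(v_values, lengths):
        f.append(f[-1] + v * h)   # each step adds velocity times interval length
    return f

# Example: unit velocity over intervals of varying lengths.
print(location_from_velocity([1, 1, 1], [0.5, 2.0, 1.0], 0))  # [0, 0.5, 2.5, 3.5]
```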

Exercise. Prove the formula.

Population growth

In more complex equations, the exterior derivative of a cochain depends on the cochain. The simplest example is: $$df=Gfq,$$ where $G:{\bf R}\to {\bf R}$ is some function and $q:C_1({\mathbb R})\to C_0({\mathbb R})$ is given by $$q\Big([n,n+1]\Big)=n.$$ The latter is used to make the degrees of the cochains in the equation match.

In particular, $f$ may represent the population with $G=k$ a constant growth rate. Then the equation becomes: $$df=kfq.$$ The actual dynamics resembles that of a bank account: once a year the current amount is checked and an amount proportional to it is added, i.e., it is multiplied by the constant number $k+1>1$. The case of $k=2$ is illustrated below:

Discrete population growth.png

For the initial condition $$f(0)=r\in {\bf R},$$ there is an explicit formula: $$f(X)=r(k+1)^X,\ X\in {\bf Z}.$$ The growth is exponential (geometric), as expected. To verify, suppose $b:=[B,B+1]$. Then compute: $$\begin{array}{lllll} df(b)&=f(B+1)-f(B)\\ &=r(k+1)^{B+1}-r(k+1)^B=r(k+1)^B((k+1)-1)\\ &=r(k+1)^Bk=f(B)k=fq(b)k. \end{array}$$
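The iteration and the closed formula are easy to compare numerically; here is a minimal Python sketch (the name `grow` is ours):

```python
# Population growth df = k f q: each year the increment is k times the current amount.
def grow(r, k, years):
    f = [r]
    for _ in range(years):
        f.append(f[-1] + k * f[-1])      # df([B, B+1]) = k * f(B)
    return f

r, k = 1.0, 2.0
print(grow(r, k, 5))                         # [1.0, 3.0, 9.0, 27.0, 81.0, 243.0]
print([r * (k + 1) ** X for X in range(6)])  # the closed formula gives the same values
```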

Exercise. Confirm that the dynamics is as expected for all values of $k>0$ and $k<0$.

Just as in the last subsection, we can represent the dynamics more conventionally with an ODE with respect to the first derivative $f'$ provided ${\mathbb R}$ is supplied with a geometry: $$f'=kfq.$$ This geometry allows us to consider variable time intervals. The solution of the IVP is given by: $$f(X)=r\displaystyle\prod_{i=0}^{X-1}\Big( k \big| [i,i+1] \big| +1 \Big).$$
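Again, the product formula can be checked against the step-by-step iteration; a minimal Python sketch (with our own names `grow_variable` and `lengths`):

```python
# Variable time intervals: f'([i, i+1]) = k f(i) means f(i+1) = f(i) + k f(i) |[i, i+1]|.
def grow_variable(r, k, lengths):
    f = [r]
    for h in lengths:
        f.append(f[-1] + k * f[-1] * h)   # increment = k * f(i) * |[i, i+1]|
    return f

r, k, lengths = 1.0, 0.5, [1.0, 0.5, 2.0]
print(grow_variable(r, k, lengths))        # [1.0, 1.5, 1.875, 3.75]

# The product formula gives the same values.
f, prod = [r], 1.0
for h in lengths:
    prod *= k * h + 1
    f.append(r * prod)
print(f)                                   # [1.0, 1.5, 1.875, 3.75]
```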

Exercise. Prove the formula.

Motion: location from acceleration

If we are to study the motion that is produced by forces exerted on an object, we are compelled to specify the geometry of the space, in contrast to the previous examples.

Recall that

  • the location $r$ is a primal $0$-cochain;
  • the velocity $v=r'$ is a dual $0$-cochain;
  • the acceleration $a=v'$ is a primal $0$-cochain. $\\$

The ODE is: $$r''=a,$$ for a fixed $a$. The motion is understood as if, at the preset moments of time, the acceleration steps in and instantly changes the velocity, which then stays constant over the next time interval.

This is a second order ODE with respect to $r$. Its initial value problem (IVP) includes both an initial location and an initial velocity.

To find the explicit solution, let's suppose that ${\mathbb R}$ has the standard geometry.

Let's recall what we have learned about antidifferentiation. If we know the velocity, this is how we find the location: $$r(t)=r_0+\sum_{i=0}^{t-1} v\Big([i,i+1]\Big),$$ where $r_0$ is the initial location. And if we know the acceleration, we can find the velocity by the same formula: $$v\Big([t,t+1]\Big)=v_0+\sum_{j=0}^{t-1} a(j),$$ where $v_0$ is the initial velocity. Therefore, $$r(t)=r_0+\sum_{i=0}^{t-1} \Big( v_0+\sum_{j=0}^{i-1} a(j) \Big) =r_0+ v_0t+\sum_{i=0}^{t-1} \sum_{j=0}^{i-1} a(j).$$

The case of a constant acceleration is illustrated below:

From acceleration to location.png

The formula is, of course, $$r(t)=r_0+ v_0t+\frac{at(t-1)}{2}.$$ The dependence is quadratic, as expected.
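To see the quadratic dependence numerically, here is a minimal Python sketch (the names are ours) that carries out the iteration and compares it with the closed formula:

```python
# Location from a constant acceleration: the velocity changes by a at each node and
# the location changes by the current velocity over each interval.
def location_from_acceleration(r0, v0, a, steps):
    r, v = [r0], v0
    for _ in range(steps):
        r.append(r[-1] + v)   # r(i+1) = r(i) + v([i, i+1])
        v += a                # the acceleration updates the velocity
    return r

r0, v0, a = 0, 1, 2
print(location_from_acceleration(r0, v0, a, 5))                 # [0, 1, 4, 9, 16, 25]
print([r0 + v0 * t + a * t * (t - 1) // 2 for t in range(6)])   # same values
```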

Exercise. Solve the ODE for ${\mathbb R}$ with variable time intervals.

Oscillating spring

Imagine an object of mass $m\in {\bf R}$ connected by a (massless) spring to the wall.

Spring oscillation.png

We let $f(t)\in {\bf R}$ be the location of the object at time $t$ and assume that the equilibrium of the spring is located at $0\in {\bf R}$. As before, we think of $f$ as a cochain of degree $0$ over ${\mathbb R}$.

The equation of motion is derived from Hooke's law: the force exerted on the object by the spring is proportional to the displacement of the object from the equilibrium: $$H = -k f ,$$ where $k\in {\bf R}$ is the spring constant.

Now, by Newton's Second Law, the total force affecting the object is $$F=m a,$$ where $a$ is the acceleration, $a=f''$.

As there are no other forces, the two forces are equal and we have our second order ODE: $$mf''=-kf.$$

Let's assume that the geometry of ${\mathbb R}$ is standard and let $m=k:=1$. Then one of the solutions is the sequence $0,1,1,0,-1,-1,0, ...$. It is shown below along with its verification:

Spring oscillation solution.png

The dynamics is periodic, as expected.
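One can verify the solution numerically as well. The sketch below (in Python, assuming that with the standard geometry and $m=k=1$ the second derivative is the second difference $f(n+1)-2f(n)+f(n-1)$) checks the sequence term by term:

```python
# Check that 0, 1, 1, 0, -1, -1, ... (period 6) satisfies f'' = -f, where
# f''(n) = f(n+1) - 2 f(n) + f(n-1) under the standard geometry with m = k = 1.
f = [0, 1, 1, 0, -1, -1, 0, 1, 1, 0]
for n in range(1, len(f) - 1):
    second_difference = f[n + 1] - 2 * f[n] + f[n - 1]
    assert second_difference == -f[n]
print("the sequence satisfies f'' = -f")
```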

Exercise. Find an explicit representation of this solution.

For the general uniform case, $\Delta_n=\Delta_n^\star=h$, we know from the last section that $f(n)=\sin tn$ and $f(n)=\cos tn $ satisfy our ODE, $mf''=-kf$, provided $$\frac{k}{m}=\frac{4\sin^2\tfrac{t}{2}}{h^2}.$$ Because of the linearity of differentiation, any linear combination of these two functions, $$f(n)=A\sin tn +B \cos tn,\quad A,B\in {\bf R},$$ also satisfies the ODE.
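The relation between $k/m$, $t$, and $h$ is easy to confirm numerically; a minimal Python sketch (assuming, as in the sketch above, that $f''(n)$ is the second difference, here divided by $h^2$):

```python
# Check that f(n) = sin(t n) satisfies m f'' = -k f when k/m = 4 sin^2(t/2) / h^2,
# where f''(n) = (f(n+1) - 2 f(n) + f(n-1)) / h^2 in the uniform case.
from math import sin, isclose

t, h, m = 0.7, 0.5, 1.0
k = m * 4 * sin(t / 2) ** 2 / h ** 2
f = lambda n: sin(t * n)                  # the same check works for cos(t * n)
for n in range(1, 20):
    lhs = m * (f(n + 1) - 2 * f(n) + f(n - 1)) / h ** 2
    assert isclose(lhs, -k * f(n), abs_tol=1e-12)
print("m f'' = -k f holds numerically")
```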

In the non-uniform case, a closed formula may be impossible but a recursive representation is available as discussed below.

ODEs of cochains

Next we consider a few facts about general ODEs.

Broadly, an ODE is a dependence of directions on locations in the space, provided here by ${\bf R}$, while its solutions exist over time, provided by the complex ${\mathbb R}$ taken here with the standard geometry.

Definition. Given a function $$P:C_0({\mathbb R})\times {\bf R} \to {\bf R},$$ an ordinary differential equation (ODE) of cochains of order $1$ with right-hand side function $P$ is: $$df(AB)=P(A,f(A)),$$ where

  • cochain $f\in C^0({\mathbb R})$,
  • node $A\in {\mathbb R}$, and
  • edge $AB\in {\mathbb R}$.

The abbreviated version of the equation is below. $$\begin{array}{|c|} \hline \\ \quad df=Pf \quad \\ \\ \hline \end{array}$$ Note that the variables of $P$ encode: a time instant and the value of the cochain at this instant.

Example. Let's choose the following right-hand side functions. For a location-independent ODE, we let $$P(t,x):=G\Big([t,t+1]\Big);$$ then the equation describes the location in terms of the velocity: $$df(AB)=G(AB).$$ For a time-independent ODE, we let $$P(t,x):=kx;$$ then the equation describes population growth: $$\hspace{.38in} df(AB)=kf(A). \hspace{.38in}\square$$

The nature of the process often dictates that the way a quantity changes depends only on its current value, and not on time. As a result, the right-hand side function $P$ is often independent of the first argument. This will be our assumption below. We suppose that the right-hand side function can be seen as a function of one variable $P:{\bf R}\to {\bf R}$:

ODE RHS.png

Definition. An initial value problem (IVP) is a combination of an ODE and an initial condition (IC): $$df=Pf,\ f(A_0)=x_0\in {\bf R}.$$ Then a $0$-cochain $f$ on the ray ${\mathbb R} \cap \{A\ge A_0\}$ is called a (forward) solution of the IVP if it satisfies: $$df\Big([A,A+1] \Big)=P(f(A)),\ \forall A\ge A_0.$$

Because the exterior derivative in this setting is simply the difference of values, a solution $f$ is easy to construct iteratively.

Theorem (Existence). The following is a solution to the IVP above: $$f(A_0):=x_0,\ f(A+1):=f(A)+P(f(A)),\ \forall A\ge A_0.$$
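The construction in the theorem is just an iteration, easy to carry out; here is a minimal Python sketch (the name `solve_ivp` is ours, and $P$ is passed as an arbitrary function):

```python
# Forward solution of df = Pf with f(A0) = x0, constructed as in the theorem:
# f(A+1) = f(A) + P(f(A)).
def solve_ivp(P, x0, steps):
    f = [x0]
    for _ in range(steps):
        f.append(f[-1] + P(f[-1]))
    return f

# Example: P(x) = -x/2 pulls every solution toward 0.
print(solve_ivp(lambda x: -x / 2, 8.0, 5))   # [8.0, 4.0, 2.0, 1.0, 0.5, 0.25]
```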

A few solutions for the $P$ shown above are illustrated below:

IVP solution.png

Exercise. Define (a) a backward solution and (b) a two-sided solution of the IVP and (c) devise iterative procedures to construct them. (d) Do the three match?

Because of the existence property, the solutions to all possible IVPs fill the space, i.e., ${\bf R}$.

Theorem (Uniqueness). The solution to the IVP given above is the only one.

When we plot the above solutions together, we see that they do overlap:

IVP solution 1.png

This reminds us that only the right-sided uniqueness is guaranteed: whenever two solutions meet, they stay together.

Exercise. Find conditions necessary for the space to be filled in an orderly manner:

ODE compactness.png

Next, is there anything we can say about continuity?

Every $0$-cochain $f^0$ on ${\mathbb R}$ can be extended to a continuous function $f:{\bf R} \to {\bf R}$. In contrast to this trivial conclusion, continuity appears below in a meaningful way.

Definition. For the given ODE, the forward propagation map of depth $c\in {\bf Z}^+$ at $A_0$ is a map $$Q_c:{\bf R} \to {\bf R}$$ defined by $$Q_c(x_0):=f(A_0+c),$$ where $f$ is the solution of the IVP with the IC: $$f(A_0)=x_0.$$

In other words, the forward propagation map is a self-map of the space of locations.
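Computationally, $Q_c$ is just $c$ steps of the same iteration applied to the initial value; a minimal Python sketch (the name `propagation` is ours):

```python
# The forward propagation map Q_c of the ODE df = Pf: advance the initial value c steps.
def propagation(P, c):
    def Q_c(x0):
        x = x0
        for _ in range(c):
            x = x + P(x)       # one step of the solution
        return x
    return Q_c

Q3 = propagation(lambda x: -x / 2, 3)
print(Q3(8.0))                 # 1.0, the value f(A0 + 3) of the solution with f(A0) = 8.0
```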

Exercise. Prove that $Q_c$ is independent of $A_0$.

Theorem (Continuous dependence on initial conditions). If $P$ is continuous with respect to the second argument, the forward propagation map $Q_c:{\bf R}\to {\bf R}$ is continuous for any $c\in {\bf Z}^+$ and any $A_0\in {\mathbb R}$.

ODE.png

Exercise. Prove the theorem.

Exercise. Define and analyze an IVP for an ODE of order $2$.

ODEs of cell functions and chain functions

So far, we have represented motion by means of a $0$-form, as a function $$f:{\mathbb R}\to R,$$ where $R$ is our ring of coefficients. What if the space is also discrete? We will consider cell functions: $$f:{\mathbb R}\supset I \to {\mathbb R},$$ where $I$ is a cell complex, typically an interval, serving as the time.

The construction largely follows the study of ODEs of cochains discussed previously, in spite of the differences. Compare their derivatives in the simplest case:

  • for a $0$-cochain $f:{\mathbb R}\to R$, we have

$$df\Big([A,A+1] \Big):=f(A+1)-f(A) \in R;$$

  • for a cell map $f:{\mathbb R} \to {\mathbb R}$, we have

$$f'\Big([A,A+1] \Big):= f_1\Big([A,A+1] \Big) \in C_1({\mathbb R};R).$$

Definition. Suppose $P$ is a vector field on ${\mathbb R}$. Then an ODE of cell maps generated by $P$ is: $$f'=Pf,$$ and its solution is a cell map $f:{\mathbb R}\supset I \to {\mathbb R}$ that satisfies: $$f'\Big([A,A+1] \Big)=P(f(A)),\ \forall A\in I\cap {\bf Z}.$$

We can simply take the examples of $1$st order ODEs of cochains given in this section and use them to illustrate ODEs of cell maps. We just need to set $R:={\bf Z}$. The difference is, this time the values of the right-hand side $P$ can only be $1$, $0$, or $-1$.

Definition. An initial value problem (IVP) on the complex ${\mathbb R}$ is a combination of an ODE and an initial condition (IC): $$f'=Pf,\ f(A_0)=x_0\in K,\ A_0\in {\bf Z},$$ where $K$ is the space complex (here $K={\mathbb R}$), and a solution $f:{\mathbb R}\cap\{t\ge A_0\} \to {\mathbb R}$ of the ODE is called a solution of the IVP if it satisfies the initial condition.

Theorem (Uniqueness). The solution to the IVP above, if it exists, is given iteratively by $$f(A_0):=x_0,\ f(A+1):=f(A)+\partial P(f(A)),\ \forall A\ge A_0.$$

Proof. Suppose $f$ is a solution, $f'=Pf$. Then we compute: $$\begin{array}{ll} f_0(A+1)-f_0(A)&=f_0\big((A+1)-A\big)\\ &=f_0\Big(\partial [A,A+1] \Big)\\ &=\partial f_1\Big([A,A+1] \Big)\\ &=\partial P(f(A)). \hspace{.25in}\blacksquare \end{array}$$

Just as before, only the right-sided uniqueness is guaranteed: when two solutions meet, they stay together.

What about existence? We have the formula, but is this a cell map? It can only be guaranteed under special restrictions.

Theorem (Existence). If the values of the vector field $P$ are edges of $K$, a solution to the IVP above always exists.

Proof. For $f$ to be a cell map, $f(A)$ and $f(A+1)$ have to be the endpoints of an edge in $K$. Clearly, they are, because $f(A+1)-f(A) =\partial P(f(A))$. $\blacksquare$
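Here is a minimal Python sketch of this iteration over the complex ${\mathbb R}$ with integer vertices (the encoding of an edge as an ordered pair and all the names below are ours): the vector field assigns to each vertex an edge, or $0$ for the zero vector, and the next location is obtained by adding the boundary of that edge.

```python
# Cell-map IVP on R: f(A0) = x0, f(A+1) = f(A) + boundary(P(f(A))).
# An oriented edge is encoded as a pair (a, b) of adjacent vertices; 0 is the zero vector.
def boundary(edge):
    if edge == 0:
        return 0
    a, b = edge
    return b - a                       # boundary of the edge from a to b

def solve_cell_ivp(P, x0, steps):
    f = [x0]
    for _ in range(steps):
        f.append(f[-1] + boundary(P(f[-1])))
    return f

# Hypothetical vector field: every vertex points toward the vertex 3.
P = lambda x: 0 if x == 3 else ((x, x + 1) if x < 3 else (x, x - 1))
print(solve_cell_ivp(P, 0, 5))         # [0, 1, 2, 3, 3, 3]
```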

Definition. For a given ODE, the forward propagation map of depth $c\in {\bf Z}^+$ at $A_0$ is a map $$Q_c:K \to K$$ defined by $$Q_c(x_0):=f(A_0+c),$$ where $f$ is the solution with the IC: $$f(A_0)=x_0.$$

Exercise. Prove that if the vector field $P$ is time-independent, $Q_c$ is independent of $A_0$.

Suppose we have $g:{\mathbb R}\to {\mathbb R}$, a function defined so far only on the nodes. The two main cases are shown below.

Chain approximation.png

In the former case, we can create a cell map: $$g(AB):=XY,$$ by extending its values from the vertices to the edges. In the latter case, an attempt at a cell extension (without subdivisions) fails, as there is no single edge connecting the two vertices. However, there is a chain of edges: $$g(AB):=XY+YZ.$$

Even though the linearity cannot be assumed, the illustration alone suggests a certain continuity of this new “map”. In fact, chain maps are continuous in the algebraic sense: they preserve boundaries, $$g_0\partial = \partial g_1.$$ The idea is also justified by the meaning of the derivative of a cell map $f$: it is the degree-$1$ chain map $f_1$ of $f$.

Suppose we are given the chain complex $C$ of ${\mathbb R}$ to represent space: $$\partial: C_1 \to C_0.$$ Here, of the two modules, $C_0$ is generated by the locations and $C_1$ by the directions. We suppose we have a function, an analog of a vector field, $$P:C_0 \to C_1$$ representing the dependence of the directions on the locations.

The dynamics produced by the chain field is a sequence of $0$-chains given by this iteration: $$X_{n+1}:=X_n+\partial P(X_n).$$ It comes from an ODE of chain maps generated by $P$: $$g_1=Pg_0,$$ and its solution is a chain map $$g:C(I) \to C,$$ for some subcomplex $I \subset {\mathbb R}$, that satisfies the equation $$g_1[A,A+1]=P(g_0(A)),\ \forall A\in I\cap {\bf Z}.$$

An initial value problem is a combination of an ODE and an initial condition: $$g_1=Pg_0,\ g_0(A_0)=X_0\in C_0,\ A_0\in {\bf Z},$$ and a solution $g:C({\mathbb R}\cap\{t\ge A_0\}) \to C$ of the ODE is called a (forward) solution of the IVP if it satisfies the initial condition.
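A minimal Python sketch of this dynamics (the encoding of chains as dictionaries of coefficients, and all names, are ours):

```python
# Chain dynamics X_{n+1} = X_n + boundary(P(X_n)) over the complex R.
# A 0-chain is a dict {vertex: coefficient}; a 1-chain is a dict {(a, a+1): coefficient}.
def add(c1, c2):
    out = dict(c1)
    for cell, c in c2.items():
        out[cell] = out.get(cell, 0) + c
    return {cell: c for cell, c in out.items() if c != 0}

def boundary(chain1):
    out = {}
    for (a, b), c in chain1.items():   # the edge from a to b has boundary b - a
        out = add(out, {b: c, a: -c})
    return out

# Hypothetical chain field: from each vertex, move one edge to the right.
P = lambda X: {(v, v + 1): c for v, c in X.items()}

X = {0: 1, 5: 2}                       # the 0-chain: vertex 0 plus twice vertex 5
for _ in range(3):
    X = add(X, boundary(P(X)))
print(X)                               # {3: 1, 8: 2}
```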

The examples at the beginning of this section can be understood as ODEs of chain maps.

Exercise. Provide chain fields for the examples of ODEs in this section.