This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

# ODEs

## Motion: location from velocity

Suppose $R$ is our ring of coefficients.

We start with a few simple ordinary differential equations (ODEs) with respect to the exterior derivative $d$ that have explicit solutions.

Given a complex $K$, this is the most elementary ODE with respect to a $0$-form $f$: $$df=G,$$ where $G$ is some $1$-form over $K$.

This equation has a solution, i.e., a $0$-form that satisfies the equation everywhere, if and only if $G$ is exact. In that case, $$f\in d^{-1}(G),$$ where $d$ is understood as a linear map $d:C^0(K)\to C^1(K)$. There are, in fact, multiple solutions here. We know that they differ by a constant, as antiderivatives should.

If we choose an initial condition: $$f(A)=r\in R,\ A \in K^{(0)},$$ such a solution is also unique. The explicit formula is: $$f(X)=r+G(AX),\ X\in K^{(0)},$$ where $AX$ is any $1$-chain from $A$ to $X$ in $K$, i.e., any $1$-chain $AX=a\in K^{(1)}$ with $\partial a=X-A$.

The solution is well-defined for an exact $G$ (which is always the case when $K$ is acyclic). Indeed, if $a'$ is another such chain, we have $$[r+G(a)]-[r+G(a')]=G(a)-G(a')=G(a-a')=0,$$ because $a-a'$ is a cycle and exact forms vanish on cycles.

To verify that this is indeed a solution, suppose $b=BC\in K^{(1)}$. Then, by the Stokes Theorem, we have $$\begin{array}{llllll} df(b)&=f(\partial b)=f(C-B)= f(C)-f(B)\\ &=[r+G(AC)]-[r+G(AB)]=G(AC-AB)=G(BC)\\ &=G(b), \end{array}$$ since $G(AB+BC+CA)=0$.

Exercise. Give an example of $G$ so that the ODE has no solution.

We can use the above equation to model motion. Our domain is then the standard $1$-dimensional cubical complex $K={\mathbb R}$ and we are to study differential forms over ring $R={\bf R}$, the reals. We have:

• $f=f(t)$ is the location of an object at time $t$, a $0$-form defined on the integers ${\bf Z}$, and
• $G=G(t)$ is its displacement, a $1$-form defined on the intervals $[0,1],[1,2],...$.

In this setting, the above formula takes a more familiar, integral form: $$f(X)=r+\int_A^X G(x)dx,$$ which, in our discrete setting, is simply a summation: $$f(X)=r+\sum_{i=A}^{X-1} G([i,i+1]).$$
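This summation is easy to carry out in a few lines of code. Below is a minimal Python sketch of recovering the location $0$-form from the displacement $1$-form; the names (`solve_location`, `G`) are illustrative, not from the text:

```python
# A minimal sketch of the discrete antiderivative: the location 0-form f
# is recovered from the displacement 1-form G by summation over edges.
# G is modeled as a function of i, giving the value of G([i, i+1]).

def solve_location(r, G, A, X):
    """f(X) = r + sum of G over the edges [i, i+1] from A to X."""
    return r + sum(G(i) for i in range(A, X))

# Example: constant displacement G([i, i+1]) = 2 with f(0) = 5
G = lambda i: 2
f = [solve_location(5, G, 0, X) for X in range(5)]
print(f)  # [5, 7, 9, 11, 13]
```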

What about the discrete vs. the continuous? The time is $K={\mathbb R}$, which seems discrete, while the space is $R={\bf R}$, which seems continuous. Let's take an alternative point of view and consider only the structures that we actually use. First, since ring $R$ has no topology that we use,

• the space is algebraic.

Second, since complex $K$ has no algebra that we use,

• the time is topological.

Exercise. Solve the above ODE for the case of (a) circular space $R={\bf Z}_p$, and (b) circular time $K={\mathbb S}^1$. Hint: make sure your solution is well-defined.

A more conventional way to represent motion is with an ODE with respect to:

• the first derivative $f'$ instead of the exterior derivative, and
• the velocity $v$ instead of the displacement.

For the derivative to be defined, the time has to be represented by a geometric complex $K$. Then we have an ODE: $$f'=v.$$ Recall that both are dual $0$-forms.

Exercise. Show that the solution $f$ of this ODE is the same as above for the case of $K={\mathbb R}$ with the standard geometry: $|a|=1$ for all $a\in K$ and all $a\in K^\star$.

## Population growth

In more complex equations, the exterior derivative of a form depends on the form. When $K={\mathbb R}$, the simplest example is $$df=Gfq,$$ where $G:R\to R$ is some function and $q:C_1({\mathbb R})\to C_0({\mathbb R})$ is given by $$q([n,n+1])=n.$$ The latter is used to make the degrees of the forms match.

In particular, $f$ may represent the population with $G=k$ a constant growth rate. Then the equation becomes: $$df=kfq.$$ The actual dynamics resembles that of your bank account: once a year the current amount is checked and then multiplied by a certain constant number $k>1$. The case of $k=2$ is illustrated below:

For the initial condition $$f(0)=r\in R,$$ an explicit formula for the solution exists: $$f(X)=r(k+1)^X,\ X\in K^{(0)}={\bf Z}.$$ The growth is exponential (geometric), as expected.

To verify, suppose $b=[B,B+1]\in K^{(1)}$. Then consider: $$\begin{array}{lllll} df(b)&=f(B+1)-f(B)\\ &=r(k+1)^{B+1}-r(k+1)^B=r(k+1)^B((k+1)-1)\\ &=r(k+1)^Bk=f(B)k=fq(b)k. \end{array}$$
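The iteration behind this verification takes only a few lines of Python; the function name `grow` is an illustrative choice:

```python
# Sketch: iterate df = k f q, i.e. f(A+1) = f(A) + k*f(A), and compare
# with the closed form f(X) = r*(k+1)**X.

def grow(r, k, steps):
    f = [r]
    for _ in range(steps):
        f.append(f[-1] + k * f[-1])  # df([A, A+1]) = k * f(A)
    return f

r, k = 1, 2  # the k = 2 "bank account" case from the text
f = grow(r, k, 5)
print(f)  # [1, 3, 9, 27, 81, 243]
assert all(f[X] == r * (k + 1) ** X for X in range(6))
```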

Exercise. Confirm that the dynamics is as expected for all values of $k$, both $k>0$ and $k<0$.

Just as in the last subsection, we can represent the dynamics more conventionally with an ODE with respect to the first derivative $f'$ provided the time is represented by a geometric complex. This geometry allows us to consider variable time intervals.

Exercise. Set up an ODE for the derivative $f'$ and confirm that its solution is the same as above for the case of the standard geometry of the complex of time $K={\mathbb R}$.

## Motion: location from acceleration

If we are to study the motion that is produced by forces exerted on an object, we will have to take into account the geometry of the space, in contrast to the previous examples.

Suppose $K$ is a geometric complex of dimension $1$.

Recall that

• the location $r$ is a primal $0$-form;
• the velocity $v=r'$ is a dual $0$-form;
• the acceleration $a=v'$ is a primal $0$-form.

The ODE is $$r' ' =a,$$ for a given $a$. Here $r' '=\star d \star d r$, where $\star$ is the Hodge star operator of $K$.

The motion is understood as if, at the preset moments of time, the acceleration steps in and instantly changes the velocity, which stays constant until the next time point.

This is a second order ODE with respect to $r$. Its initial value problem (IVP) includes both an initial location and an initial velocity.

To find the explicit solution, let's suppose that $K={\mathbb R}$ has the standard geometry.

Let's recall what we have learned about antidifferentiation. If we know the velocity, this is how we find the location: $$r(t)=r_0+\sum_{i=0}^{t-1} v([i,i+1]),$$ where $r_0$ is the initial location. And if we know the acceleration, we can find the velocity by the same formula: $$v\left([i,i+1]\right)=v_0+\sum_{j=0}^{i-1} a(j),$$ where $v_0$ is the initial velocity.

Therefore, $$r(t)=r_0+\sum_{i=0}^{t-1} \left( v_0+\sum_{j=0}^{i-1} a(j) \right),$$ or $$r(t)=r_0+ v_0t+\sum_{i=0}^{t-1} \sum_{j=0}^{i-1} a(j).$$

The case of a constant acceleration is illustrated below:

The formula is, of course, $$r(t)=r_0+ v_0t+\frac{at(t-1)}{2}.$$ The dependence is quadratic, as expected.

Note that the formula works even with $R={\bf Z}$ as $t(t-1)$ is even.
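The double sum and the closed formula can be checked against each other with a short sketch; the name `location` is illustrative (Python):

```python
# Sketch: location from constant acceleration via the double sum,
# checked against r(t) = r0 + v0*t + a*t*(t-1)/2 (exact over Z,
# since t*(t-1) is always even).

def location(r0, v0, a, t):
    # r(t) = r0 + sum over i of ( v0 + sum over j < i of a )
    return r0 + sum(v0 + sum(a for _ in range(i)) for i in range(t))

r0, v0, a = 2, 3, 4
for t in range(8):
    assert location(r0, v0, a, t) == r0 + v0 * t + a * t * (t - 1) // 2
print(location(r0, v0, a, 5))  # 2 + 3*5 + 4*5*4/2 = 57
```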

Exercise. Solve the ODE for $K={\mathbb S}^1$ with the standard geometry.

Exercise. Solve the ODE for $K={\mathbb R}$ with variable time intervals.

## Oscillating spring

More elementary physics...

Imagine an object of mass $m\in R$ connected by a (mass-less) spring to the wall.

The motion is, as before, within our ring of coefficients $R$. We let $f(t)\in R$ be the location of the object at time $t$ and assume that the equilibrium of the spring is located at $0\in R$. As before, we think of $f$ as a form of degree $0$ over a geometric complex $K$.

The equation of the motion is derived from Hooke's law: the force exerted on the object by the spring is proportional to the displacement of the object from the equilibrium: $$H = -k f ,$$ where $k\in R$ is the spring constant.

Now, by Newton's Second Law, the total force affecting the object is: $$F=m a,$$ where $a$ is the acceleration: $$a=f' '=\star d \star d f.$$

Since there are no other forces, the two forces are equal and we have our second order ODE: $$mf' '=-kf.$$

Let's assume that the geometry of $K={\mathbb R}$ is standard and let $m=1$ and $k=1$. Then one of the solutions is $f: 0,1,1,0,-1,-1,0,…$. It is shown below along with its verification:

The dynamics is periodic, as expected.
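Under the standard geometry, $mf' '=-kf$ amounts to the iteration $f(t+1)=2f(t)-f(t-1)-\frac{k}{m}f(t)$. Here is a Python sketch; the name `spring` and the integer arithmetic (with $m$ dividing $k$) are illustrative assumptions:

```python
# Sketch: iterate m*f'' = -k*f with the standard geometry, i.e.
# f(t+1) = 2*f(t) - f(t-1) - (k/m)*f(t). With m = k = 1 and the
# initial values f(0) = 0, f(1) = 1 this reproduces the period-6
# solution 0, 1, 1, 0, -1, -1, ... from the text.

def spring(f0, f1, m, k, steps):
    f = [f0, f1]
    for _ in range(steps):
        f.append(2 * f[-1] - f[-2] - (k * f[-1]) // m)
    return f

print(spring(0, 1, 1, 1, 10))  # [0, 1, 1, 0, -1, -1, 0, 1, 1, 0, -1, -1]
```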

Exercise. Find the explicit solutions of this ODE.

## ODEs of forms

We will not be concerned with geometry in the rest of this section. We suppose $K$ is a cell complex.

Broadly, an ODE creates a dependence of directions on locations in $K$. To make this point clearer, we initially consider local $1$-forms on $K$, $\varphi\in T^1(K)$, i.e., $R$-valued functions on the tangent bundle of $K$: $$\varphi :T_1(K) \to R,$$ linear on each tangent space.

Note: An ODE has been used to create a dependence of the value of the exterior derivative of a $0$-form on the value of the form itself. However, the former is a $1$-form. That's why the population dynamics equation $df=kfq$ contains function $q:C_1({\mathbb R})\to C_0({\mathbb R})$ -- to make the right-hand side of the equation match the left-hand side; both are $1$-forms.

Definition. Given a cell complex $K$ and a function $$P:C_0(K)\times C_1(K)\times R\to R,$$ linear in the first and second arguments, an ordinary differential equation (ODE) of forms of order $1$ with right-hand side function $P$ is: $$df(A,AB)=P(A,AB,f(A)),$$ where

• $f\in C^0(K)$,
• $A\in C_0(K)$, and
• $AB\in C_1(K)$.

Note that the linearity requirements are justified by the linearity of the exterior derivative $df$ in the left-hand side.

The long version of the equation is: $$df(A,a)=P(A,a,f(A)),\ \forall A\in C_0(K),\ \forall a\in T_A(K),$$ and the abbreviated version is below. $$\begin{array}{|c|} \hline \\ \quad df=Pf \quad \\ \\ \hline \end{array}$$

Note that the variables of $P$ encode: a location, a direction at this location, and the value of the form at this location.

Exercise. A function $h:{\bf R}^n\to {\bf R}$ is called a potential of a vector field $V$ if $\operatorname{grad}h=V$. Interpret the problem of finding a potential of a given $V$ as an ODE and solve it for $\dim K= 2$.

Below we limit ourselves to “time-variable” ODEs: $$K:={\mathbb R}.$$

Example. Let's choose the following right-hand side functions. For a location-independent ODE, we let $$P(t,v,x):=G(v);$$ then the equation describes the location in terms of the velocity: $$df(A,AB)=G(AB).$$ For a time-independent ODE, we let $$P(t,v,x):=kx;$$ then the equation describes population growth: $$df(A,AB)=kf(A).$$ $\square$

Just as in the last example, the nature of the process often dictates that the way a quantity changes depends only on its current value, and not on time. As a result, the right-hand side function $P$ is often independent of the first argument. This will be our assumption below.

Over ${\mathbb R}$, the ODE $df=Pf$ stands for the following combination of two (one for each direction) independent equations with respect to local forms: $$df(A,[A,A+1])=P([A,A+1],f(A)),\ df(A,[A,A-1])=P([A,A-1],f(A)).$$ However, in $C_1({\mathbb R})=T_1({\mathbb R})/_\sim$, we have $$(A,[A,A+1])=-(A,[A,A-1]).$$ Then the linearity assumptions imply that the two equations are identical, for cochains. We will use only the first of the two equations. Indeed, if time reverses, so does the change.

Then, the right-hand side function can be seen as simply a function of one variable $P:R\to R$:

As we have narrowed down the set of the exterior derivatives of the solutions from local forms to cochains, $df$ is a cubical $1$-cochain: $$df:[A,A+1] \mapsto r\in R.$$

Definition. An initial value problem (IVP) is a combination of an ODE and an initial condition (IC): $$df=Pf,\ f(A_0)=x_0\in R.$$ Then a $0$-form $f$ on the ray ${\mathbb R} \cap \{A\ge A_0\}$ is called a (forward) solution of the IVP if it satisfies: $$df([A,A+1])=P(f(A)),\ \forall A\ge A_0.$$

Since the exterior derivative in this setting is simply the difference of values, a solution $f$ is easy to construct iteratively.

Theorem (Existence). A solution to the IVP above is given by: $$f(A_0):=x_0,\ f(A+1):=f(A)+P(f(A)),\ \forall A\ge A_0.$$
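The construction in the theorem is a one-line iteration. Here is a hedged Python sketch; `solve_ivp` is an illustrative name, not a library function:

```python
# Sketch of the existence theorem: the forward solution of the IVP
# df = Pf, f(A0) = x0, constructed iteratively. P is the right-hand
# side function R -> R.

def solve_ivp(P, x0, steps):
    f = [x0]
    for _ in range(steps):
        f.append(f[-1] + P(f[-1]))  # df([A, A+1]) = P(f(A))
    return f

# Example: P(x) = -x sends every initial value to 0 in one step
print(solve_ivp(lambda x: -x, 7, 3))  # [7, 0, 0, 0]
```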

A few solutions for $P$ shown above are illustrated below:

Exercise. Define (a) a backward solution and (b) a two-sided solution of the IVP and (c) devise iterative procedures to construct them. (d) Do the three match?

Exercise. Provide an analog of this iterative procedure for an IVP with domain a graph $K$ and find out under what circumstances this procedure works or fails.

Because of the existence property, the solutions to all possible IVPs fill the space, i.e., $R$.

Theorem (Uniqueness). The (forward) solution to the IVP given above is the only one.

When we plot the above solutions together, we see that they do overlap:

This reminds us that only the right-sided uniqueness is guaranteed: when two solutions meet, they stay together.

Exercise. Find necessary conditions for the space to be filled, in an orderly manner:

Our ring $R$ has, for now, no topology. On the other hand, if we choose $R={\bf R}$, every $0$-form $f^0$ on ${\mathbb R}$ can be extended to a continuous function $f:{\bf R} \to {\bf R}$. In contrast to this trivial conclusion, below continuity appears in a meaningful way.

Definition. For the given ODE, the forward propagation map of depth $c\in {\bf Z}^+$ at $A_0$ is a map $$Q_c:R \to R$$ defined by $$Q_c(x_0):=f(A_0+c),$$ where $f$ is the solution with the IC: $$f(A_0)=x_0.$$

In other words, the forward propagation map is a transformation of the space.

Exercise. Prove that $Q_c$ is independent from $A_0$.

Theorem (Continuous dependence on initial conditions). Suppose $R={\bf R}$. If $P$ is continuous in the second argument, the forward propagation map $Q_c:{\bf R}\to {\bf R}$ is continuous for any $c\in {\bf Z}^+$ and any $A_0\in {\mathbb R}$.

Exercise. Prove the theorem.

Exercise. Define and analyze an IVP for an ODE of order $2$.

## Vector fields and systems of ODEs

The IVP we have studied is: $$df=Pf,\ f(A_0)=x_0\in R.$$ Here a $0$-form $f$, which represents the variable quantity to be found, is defined on the ray ${\mathbb R} \cap \{A\ge 0\}$ and satisfies the following: $$df(A,[A,A+1])=P(f(A)),\ \forall A \in {\bf Z}^+.$$ A solution is given by: $$f(0):=x_0,\ f(A+1):=f(A)+P(f(A)),\ \forall A\ge 0.$$

What if we have two variable quantities?

Example. The simplest example is as follows:

• $x$ is the horizontal location, and
• $y$ is the vertical location.

Both are $0$-forms. Their respective displacements are $dx$ and $dy$. If each depends only on its own component of the location, the motion is described by this pair of ODEs of forms: $$dx=g(x),\ dy=h(y).$$ The solution consists of two solutions to the two, unrelated, ODEs, each found with the formula above.

$\square$

The solution is easy because the two components are independent from each other.

Example (Predator-prey model). As an example of quantities that do interact, let $x$ be the number of prey and $y$ be the number of predators. Let $R={\bf R}$.

The prey have an unlimited food supply and reproduce geometrically as described above -- when there is no predator. Therefore, the gain of the prey population per unit of time is $\alpha x$ for some $\alpha\in {\bf R}^+$. The rate of predation upon the prey is assumed to be proportional to the rate at which the predators and the prey meet, which, in turn, is proportional to $xy$. Therefore, the loss per unit of time is $\beta x y$ for some $\beta\in {\bf R}^+$. Then the prey's ODE is: $$dx = \alpha x - \beta x y.$$

The predators have only a limited food supply, i.e., the prey. We use, again, the rate of predation upon the prey proportional to $xy$. Then, the gain of predator population per unit of time is $\delta xy$ for some $\delta\in {\bf R}^+$. Without food, the predators die out geometrically as described previously. Therefore, the loss per unit of time is $\gamma y$ for some $\gamma\in {\bf R}^+$. The predator's ODE is: $$dy = \delta xy - \gamma y.$$

This system of two, related, ODEs cannot be solved by reusing the formula for the $1$-dimensional case.

$\square$
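Even though no closed formula is available, iterating the system is straightforward. Below is a Python sketch; the integer parameters $\alpha=\beta=\delta=\gamma=1$ and the starting point are illustrative choices:

```python
# Sketch of the discrete predator-prey system: iterate
# dx = alpha*x - beta*x*y, dy = delta*x*y - gamma*y, updating both
# components from the current values.

def predator_prey(x0, y0, alpha, beta, delta, gamma, steps):
    x, y = x0, y0
    orbit = [(x, y)]
    for _ in range(steps):
        dx = alpha * x - beta * x * y   # prey gain minus predation loss
        dy = delta * x * y - gamma * y  # predator gain minus die-off
        x, y = x + dx, y + dy
        orbit.append((x, y))
    return orbit

orbit = predator_prey(2, 1, 1, 1, 1, 1, 3)
print(orbit)  # [(2, 1), (2, 2), (0, 4), (0, 0)]
```

With these crude integer parameters both populations crash; realistic behavior requires smaller rates.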

Exercise. (a) Set up and solve the IVP for this system. (b) Find its equilibria.

Example. We can also take a fluid flow as a model. The “phase space” ${\bf R}^2$ is the space of all possible locations. Then the position of a given particle is a function $u:{\bf Z}^+\to {\bf R}^2$ of time $t\ge 0$. Meanwhile, the dynamics of the particle is governed by the velocity of the flow, at each moment of time. Let $V:{\bf R}^2\to {\bf R}^2$ be a function. Then the velocity of a particle located at point $u$ is $V(u)$, and the next position may be seen as $u+V(u)$.

$\square$

A vector field is given when there is a vector attached to each point of the plane: $${\rm point} \mapsto {\rm vector}.$$ Then, a vector field is just a function $$V : {\bf R}^2 \to {\bf R}^2 .$$

Further, one can think of a vector field as a system of time-independent ODEs: $$\begin{cases} dx=g(x,y),\\ dy=h(x,y), \end{cases}$$ where $g,h:{\bf R}^2 \to {\bf R}$ are functions and $x,y:{\bf Z} \to {\bf R}$ are $0$-forms. The corresponding IVP adds an initial condition: $$\begin{cases} x(t_0)=x_0,\\ y(t_0)=y_0. \end{cases}$$ As before, a (forward) solution to this IVP is a pair of $0$-forms on the ray ${\mathbb R} \cap \{A\ge 0\}$ that satisfies, $\forall A \ge 0$: $$\begin{array}{llll} dx(A,[A,A+1])=g(x(A),y(A)),\\ dy(A,[A,A+1])=h(x(A),y(A)). \end{array}$$

Next, we combine the two variables to form a vector: $$\begin{array}{ccccc} u:=(x,y)\in {\bf R}^2\\ du:=(dx,dy)\in {\bf R}^2. \end{array}$$

Note that the new vector variables still form a ring!

Then, the setup given at the top of the subsection: $$df=Pf,\ f(A_0)=x_0\in R,$$ is now applicable to vector fields: in all three examples, we set $$R:={\bf R}^2.$$

More generally, a vector field supplies a direction to every location. In a cell complex $X$, it is a map from $X$ to its tangent bundle: $$T_1(X):=\bigsqcup _{A\in X} \left( \{A\} \times T_A(X) \right).$$

Thus, there is one vector picked from a whole vector space at each point:

We next make this idea more precise.

Definition. The bundle projection of complex $X$ is the map $$\pi=\pi_X:T_1(X) \to X$$ given by $$\pi(A,v):=A.$$

Proposition. If $i_X:X\hookrightarrow T_1(X)$ is the inclusion, $i_X(A)=(A,0)$, then $$\pi_Xi_X=\operatorname{Id}_X.$$

Definition. A vector field over complex $X$ is any function $$V:X\to T_1(X)$$ that is a section (i.e., a right inverse) of the bundle projection: $$\pi_X V=\operatorname{Id}_X.$$

Exercise. What algebraic structure do the vector fields over $X$ form?

Then the inclusion serves as the trivial (zero) vector field.

A typical vector field is $$V(A)=AB\in X,$$ where $B=B(A)$ is a vertex adjacent to $A$ that depends on $A$. Then the motion produced by the vector field takes one from vertex to vertex along an edge, or simply: $$A \mapsto B.$$ The motion can be written as an iteration: $$A_{n+1}:=A_n+\partial_1 V(A_n),$$ where $\partial_1:C_1(X)\to C_0(X)$ is the boundary operator of $X$.
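This vertex-to-vertex motion can be sketched in code. Below, the $4$-vertex circle and the field that always moves to the next vertex are illustrative choices (Python):

```python
# Sketch: the motion produced by an edge-valued vector field V on a
# graph, via A_{n+1} = A_n + boundary(V(A_n)). Vertices are labels;
# the boundary of an edge (A, B) is B - A, so A + (B - A) = B.

def flow(V, A0, steps):
    A, path = A0, [A0]
    for _ in range(steps):
        A, B = V(A)   # the edge attached at A
        A = B         # A + (B - A) = B
        path.append(A)
    return path

# A field on the 4-vertex circle: always move to the next vertex
V = lambda A: (A, (A + 1) % 4)
print(flow(V, 0, 6))  # [0, 1, 2, 3, 0, 1, 2]
```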

Exercise. Verify the last statement. What kind of function is this?

## Flow simulation with a spreadsheet

Spreadsheets, such as Excel, can be used to model various dynamics, and some of the processes can also be visualized.

We represent a flow with a vector field starting with dimension $1$, i.e., $R:={\bf Z}$. This vector field is the right-hand side function $P:{\bf Z}\to {\bf Z}$ of our ODE: $$df=Pf.$$ Each number tells us the velocity of a particle if it is located in this cell. The values of the velocities are provided in the spreadsheet. They are chosen in such a way that the particle can't leave this segment of the spreadsheet. They can be changed manually.

Then one can trace the motion of a single particle given by a $0$-form $f:{\bf Z}^+\to {\bf Z}$, shown as a red square:

Here, the velocities are seen in the elongated cells (edges), while the locations of the particles are the small square cells (vertices). Also,

• "x" stands for the current position,
• "o" for the previous position.

The formula used is as follows:

• =IF(R[-1]C=R9C3,"x",IF(R[-1]C=R9C2,"o",""))

The data, as it changes, is shown on the left.

For motion simulation, we carry out one iteration for every two pushes of the button. This is an example of such dynamics with three iterations:

For the file to work, the settings should be set, under “Formulas”:

• Manual,
• 1 Iteration.

To see the motion of the particle, simply push “Calculate now” as many times as necessary.

Exercise. Modify the code so that the spreadsheet models motion on the circle, no matter what velocities are chosen. Hint: choose an appropriate ring $R$.

Exercise. Modify the code so that, no matter what velocities are chosen, the object stays within, say, interval $[0,...,7]$, and bounces off its ends.

Next, we simulate a flow with a vector field in dimension $2$, i.e., $R:={\bf Z}^2$. This vector field is the right-hand side function $P:{\bf Z}^2\to {\bf Z}^2$ of our ODE. Each pair (right and down) of elongated cells starting at every square cell contains a pair of numbers that represents the velocity of a particle if it is located in this cell.

The vector field of dimension $2$ is made of two vector fields of dimension $1$ independently evaluated for the $x$ and $y$ variables. The components of the velocities are chosen to be only $-1$, $0$, or $1$ (illustrated with the little arrows) so that the jump is always to an adjacent location. They are also chosen in such a way that the particle can't leave this rectangle.

We trace the motion of a single particle given by a $0$-form $f:{\bf Z}^+\to {\bf Z}^2$, shown as a red "x".

Several iterations are shown in a single picture below:

Exercise. Modify the code to model motion on (a) the cylinder and (b) the torus.

Exercise. Modify the code so that, no matter what velocities are chosen, the object stays within, say, rectangle $[0,8]\times [0,7]$, and bounces off its walls.

## The derivative of a cell map

Consider the two ways to write the derivative of function $f$ at $x=a$: $$\frac{dy}{dx} = f'(a).$$ What we know from calculus is that the left-hand side is not a fraction but the equation can be re-written as if it is: $$dy = f'(a) dx.$$ The equation represents the relation between the increment of $x$ and that of $y$ -- in the vicinity of $a$. This information is written in terms of a new coordinate system, $(dx,dy)$ and the best affine approximation (given by the tangent line) becomes a linear function:

Things are much simpler in the discrete case.

Suppose $X$ and $Y$ are two cell complexes and $f:X\to Y$ is a cell map. Then “in the vicinity of point $a$” becomes “in the star of vertex $A$”:

In fact, if we ignore the algebra of the $x$- and $y$-axis, we realize that our equation is a relation between the elements of the tangent spaces of $A$ and $f(A)$:

Then, the above equation becomes: $$e_Y=f(AB)e_X,$$ where $e_X,e_Y$ are the basis vectors of $T_A(X), T_{f(A)} (Y)$, respectively.

The idea is: our function maps both locations and directions. The general case is illustrated below:

A cell map takes vertices to vertices and edges to edges and that's what makes the $0$- and $1$-chain maps possible. Then,

• the locations are taken care of by $f_0:C_0(X)\to C_0(Y)$, and
• the directions are taken care of by $f_1:C_1(X)\to C_1(Y)$.

Suppose also that $f(A)=P$, so that the location is fixed for now. Then the tangent spaces at these vertices are: $$T_A(X):=<\{AB \in X \}>\subset C_1(X),$$ $$T_P(Y):=<\{PQ \in Y \}>\subset C_1(Y).$$ We then define the derivative of $f$ at $A$ as a linear map $$f'(A):T_A(X) \to T_P(Y)$$ given by $$f'(A)(AB):=f_1(AB).$$

Example. Let's consider cell maps of the “cubical circle” (i.e., ${\bf S}^1$ represented by a $4$-edge cubical complex) to itself, $f: X \to X$:

For a given vertex we only need to look at what happens to the edges adjacent to it. We assume that the bases are ordered by the letters, such as $\{AB,BC\}$.

The derivatives of these functions are found below.

Identity: $$\begin{array}{lllllll} f_0(A) = A, &f_0(B) = B, &f_0(C) = C, &f_0(D) = D, \\ \Rightarrow & f'(A)(AB)=AB, &f'(A)(AD)=AD. \end{array}$$ It's the identity map.

Constant: $$\begin{array}{llllllll} f_0(A) = A, &f_0(B) = A, &f_0(C) = A, &f_0(D) = A, \\ \Rightarrow & f'(A)(AB)=AA=0, &f'(A)(AD)=AA=0. \end{array}$$ It's the zero map.

Vertical flip: $$\begin{array}{llllllllll} f_0(A) = D, &f_0(B) = C, &f_0(C) = B, &f_0(D) = A, \\ \Rightarrow & f'(A)(AB)=DC, &f'(A)(AD)=DA. \end{array}$$ The matrix of the derivative is $$f'(A)=\left[ \begin{array}{ccccccc} 0&1\\ 1&0 \end{array} \right]$$

$\square$
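The bookkeeping in this example is easy to automate. Here is a Python sketch of computing the matrix of $f'(A)$ for the vertical flip; the encoding of chains as dictionaries is an illustrative choice:

```python
# Sketch: the matrix of f'(A) for the vertical flip of the cubical
# circle, in the ordered bases {AB, AD} at A and {DA, DC} at f(A) = D.
# A 1-chain is a dict: edge label -> coefficient.

# f1 on the basis edges at A (vertical flip: A->D, B->C, C->B, D->A):
f1 = {'AB': {'DC': 1}, 'AD': {'DA': 1}}

source_basis = ['AB', 'AD']
target_basis = ['DA', 'DC']

# Entry (i, j) = coefficient of the i-th target basis chain
# in the image of the j-th source basis chain.
matrix = [[f1[e].get(b, 0) for e in source_basis] for b in target_basis]
print(matrix)  # [[0, 1], [1, 0]], matching the matrix in the text
```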

Exercise. Consider (a) the rotations; (b) the horizontal flip; (c) the diagonal flip; (d) the diagonal fold. Hint: the values of the derivative vary from point to point.

Since this construction is carried out for each vertex in $X$, we have defined a function on the whole tangent bundle.

Definition. The derivative of cell map $$f:X\to Y$$ between two cell complexes is the map between their tangent bundles, $$f':T(X) \to T(Y),$$ given by $$f'(A,AB):=(f_0(A),f_1(AB)).$$

Exercise. In this context, define the directional derivative and prove its main properties.

Theorem (Properties of the derivative). For a given vertex $A$ and an adjacent edge $AB$, the derivative satisfies the following properties:

• The derivative of a constant is zero in the second component:

$$(C)'=(C,0),\ C\in Y.$$

• The derivative of the identity is the identity:

$$(\operatorname{Id})'=\operatorname{Id}.$$

• The derivative of the composition is the composition of the derivatives:

$$(fg)'=f'g'.$$

• The derivative of the inverse is the inverse of the derivative:

$$(f^{-1})'=(f')^{-1}.$$

Exercise. Prove the theorem.

Exercise. Prove that if $|f|$ is a homeomorphism, then $f'=\operatorname{Id}$.

Notation. An alternative notation for the derivative is $Df$. It is also often understood as the tangent map of $f$ denoted by $T(f)$.

Exercise. Show that $T$ is a functor.

We have used the equivalence relation $$(A,AB)\sim (B,-BA)$$ to glue together the tangent spaces of a cell complex: $$T(K)/_{\sim}=C_1(K).$$

Theorem. Suppose $X,Y$ are two cell complexes and $f:X\to Y$ is a cell map. Then, the quotient map of the derivative $$[f']:T(X)/_{\sim} \to T(Y)/_{\sim}$$ is well defined and coincides with the $1$-chain map of $f$, $$f_1:C_1(X)\to C_1(Y).$$

Proof. Suppose $f_0(A)=P$ and $f_0(B)=Q$. Then we compute: $$\begin{array}{llllll} f'(A,AB)&=(f_0(A),f_1(AB))\\ &=(P,PQ)\\ &\sim (Q,-QP)\\ &=(f_0(B),-f_1(BA))\\ &=f'(B,-BA). \end{array}$$ We have proven the following: $$(A,AB)\sim (B,-BA) \Rightarrow f'(A,AB)\sim f'(B,-BA).$$ Thus $[f']$ is well defined. $\blacksquare$

## ODEs of cell maps

So far, we have represented motion by means of a $0$-form, i.e., a function $$f:{\mathbb R}^+\to R,$$ where $R$ is our ring of coefficients. Now, is this motion continuous?

Here, $R$ is our space of locations and it is purely algebraic. Even if we extend $f$ from ${\mathbb R}^+$ to ${\bf R}^+$, without a topology on $R$, the continuity of such an $f$ has no meaning! In fact, for such rings as ${\bf Z}$ and ${\bf Z}_m$ there is no meaningful choice of topology.

In contrast, motion is typically represented by a parametric curve, i.e., a continuous map $$p:{\bf R}^+\to U\subset {\bf R}^n,$$ where ${\bf R}^+$ serves as time and $U$ as the space.

Now, even though we can recognize $R={\bf R}^n$ as a space with the Euclidean topology, its subsets, such as $U={\bf R}^n\setminus \{0\}$, may lack algebraic structure. Indeed, such subsets aren't rings:

Then, we can't use forms anymore.

To match the traditional approach, below we represent motion in cell complex $K$ by a “partial” cell map, $$f:{\mathbb R} \supset I \to K,$$ where $I$ is a cell complex, typically an interval.

This motion is then continuous in the sense that this cell map's realization is a continuous map: $$|f|:{\bf R} \supset |I| \to |K|.$$

We can still see forms if we look at the curve one point at a time:

Next, we will see how these curves are generated by vector fields. The analysis largely follows the study of ODEs of forms given previously, in spite of the differences. Compare their derivatives:

• for a $0$-form $f:K \to R$, we have

$$df(AB)=f(A+1)-f(A) \in R;$$

• for a cell map $f:{\mathbb R} \to K$, we have

$$f'(A,[A,A+1])=(f_0(A),f_1([A,A+1]))\in C_0(K;R)\times C_1(K;R).$$

Definition. Suppose $P(A,\cdot)$ is a vector field on complex $K$ parametrized by $A\in {\bf Z}$. Then, an ODE of cell maps generated by $P$ on complex $K$ is: $$f'=Pf,$$ and its solution is a cell map $f:{\mathbb R}\supset I \to K$ that satisfies the equation $$f'(A,[A,A+1])=P(A,f(A)),\ \forall A\in I\cap {\bf Z}.$$

For $K={\mathbb R}$, we can simply take the examples of $1$st order ODEs of forms given in the previous subsections and use them to illustrate ODEs of maps. We just need to set $R:={\bf Z}$. The difference is, this time the values of the right-hand side $P$ can only be $1$, $0$, or $-1$.

For $K={\mathbb R}^2$, the cubical case illustrated with Excel above doesn't apply, because a cell map has to follow the edges and can't jump diagonally. Then, one of the components of the vector field has to be $0$, as shown below:

Example. We change the values of the vector field in the spreadsheet above so that it models an ODE of cubical maps. Several iterations can be shown in a single picture, below:

$\square$

Exercise. Can the original spreadsheet be understood as a model of an ODE of cell maps?

Thus, if we represent motion with an ODE of cell maps,

• both time and space are topological.

Definition. An initial value problem (IVP) is a combination of an ODE and an initial condition (IC): $$f'=Pf,\ f(A_0)=x_0\in K,\ A_0\in {\bf Z},$$ and a solution $f:{\mathbb R}\cap\{t\ge A_0\} \to K$ of the ODE is called a (forward) solution of the IVP if it satisfies the initial condition.

Theorem (Uniqueness). The (forward) solution to the IVP above, if it exists, is given iteratively by: $$f(A_0):=x_0,\ f(A+1):=f(A)+\partial P(A,f(A)),\ \forall A\ge A_0.$$

Proof. Suppose $f$ is a solution, $f'=Pf$. Then we compute: $$\begin{array}{llllll} f_0(A+1)-f_0(A)&=f_0((A+1)-(A))\\ &=f_0(\partial [A,A+1])\\ &=\partial f_1[A,A+1]\\ &=\partial P(A,f(A)). \end{array}$$ $\blacksquare$

Just as before, only the right-sided uniqueness is guaranteed: when two solutions meet, they stay together.

What about existence? We have the formula, but is this a cell map?

Theorem (Existence). If the values of the vector field $P$ are edges of $K$, a solution to the IVP above always exists.

Proof. For $f$ to be a cell map, $f(A)$ and $f(A+1)$ have to be the end-points of an edge in $K$. They are, because $f(A+1)-f(A) =\partial P(A,f(A))$. $\blacksquare$
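A Python sketch of the theorem: with edge-valued $P$, each iteration step moves to an adjacent vertex, never diagonally. The cubical grid and all names are illustrative:

```python
# Sketch of the existence theorem for ODEs of cell maps: if P(X) is an
# edge of K attached at X, the iteration f(A+1) = f(A) + boundary(P(f(A)))
# always lands on a vertex adjacent to f(A). Here K is a cubical grid
# and an edge is encoded by its ordered pair of endpoints.

def step(X, edge):
    (x0, y0), (x1, y1) = edge                  # edge = (X, Y), boundary = Y - X
    assert (x0, y0) == X                       # the edge is attached at X
    assert abs(x1 - x0) + abs(y1 - y0) == 1    # a single edge: no diagonal jump
    return (x1, y1)                            # X + (Y - X) = Y

# A time-independent field: move right until x = 2, then move down
P = lambda X: (X, (X[0] + 1, X[1])) if X[0] < 2 else (X, (X[0], X[1] - 1))

X, path = (0, 0), [(0, 0)]
for _ in range(4):
    X = step(X, P(X))
    path.append(X)
print(path)  # [(0, 0), (1, 0), (2, 0), (2, -1), (2, -2)]
```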

Definition. For the given ODE, the forward propagation map of depth $c\in {\bf Z}^+$ at $A_0$ is a map $$Q_c:K \to K$$ defined by $$Q_c(x_0):=f(A_0+c),$$ where $f$ is the solution with the IC: $$f(A_0)=x_0.$$

Exercise. Prove that if the vector field $P$ is time-independent, $Q_c$ is independent from $A_0$.

Exercise. Compute the forward propagation map for the vector field illustrated above.

Exercise. Suppose $P$ is edge-valued. Is the forward propagation map $Q_c:K\to K$ a cell map?

## ODEs of chain maps

A cell map can't model jumping diagonally across a square.

That's why the existence theorem above requires that the values of the vector field are edges. Then, in the case of a cubical complex, one of the components has to be $0$.

This is a significant limitation of ODEs of cell maps, even in comparison to ODEs of forms.

The issue is related to the one previously discussed:

cell map extensions vs. chain map extensions

(subsection 5.3.10). Recall that in the former case, extensions may require subdivisions of the cell complex. The situation when the domain is $1$-dimensional is transparent:

In the former case, we can create a cell map: $$g(AB):=XY,$$ by extending its values from vertices to edges. In the latter case, an attempted cell-map extension (without subdivisions) fails, as there is no single edge connecting the two vertices. However, there is a chain of edges: $$g(AB):=XY+YZ.$$

Even though the linearity cannot be assumed, the illustration alone suggests a certain continuity of this new “map”. In fact, chain maps are continuous in the algebraic sense: they preserve boundaries, $$g_0\partial = \partial g_1.$$ The idea is also justified by the meaning of the derivative of a cell map $f$: $$f'(A,[A,A+1])=(f_0(A),f_1([A,A+1])).$$ It is nothing but a combination of the $0$- and the $1$-chain maps of $f$!

Now, ODEs.

Once again, a vector field at vertex $X$ only takes values in the tangent space $T_X(K)$. We need to incorporate the possibility of moving in the direction $XY+YZ$, while $YZ\not\in T_X(K)$. From this point on, the development is purely algebraic.

Suppose we are given a (short) chain complex $C$ to represent space: $$\partial: C_1 \to C_0.$$ Here, of the two modules, $C_0$ is generated by the locations and $C_1$ by the directions. Even though all the information is contained in the complex, we still define an analog of the tangent bundle to demonstrate the parallels with ODEs of forms and cell maps.

Definition. The chain bundle $T(C)$ of chain complex $C$ is the module $$T(C):=C_0\oplus C_1.$$

The bundle projection $$\pi=\pi_C:T(C) \to C_0,\ \pi(X,v)=X,$$ is simply the projection of the product. Moreover, if $i_{C_0}:C_0\hookrightarrow T(C)$ is the inclusion, $i_{C_0}(X)=(X,0)$, then $$\pi_C i_{C_0}=\operatorname{Id}_{C_0}.$$

Definition. A chain field over chain complex $C$ is any linear map $$W:C_0\to T(C)$$ that is a section of the bundle projection: $$\pi_C W=\operatorname{Id}_{C_0}.$$

The meaning is simple: $$W(X)=(X,P(X)),$$ for some $P:C_0 \to C_1$.

This is an illustration of possible values of $W$ at vertex $X$ (even though the chains don't have to start at $X$):

The boundary operator of $C$ is extended to the boundary operator of the chain bundle: $$\partial: T(C)\to C_0,$$ by setting $$\partial(X,a):=\partial a.$$

The dynamics produced by the chain field is a sequence of $0$-chains given by this iteration: $$X_{n+1}:=X_n+\partial W(X_n)=X_n+\partial P(X_n).$$
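The iteration above is easy to carry out by machine. In this sketch the encoding is an assumption, not from the text: a $0$-chain is a dictionary of vertex coefficients, a $1$-chain is a list of weighted directed edges, and the chain field $P$ (moving every vertex one step right on ${\mathbb R}$) is made up for illustration.

```python
def boundary(one_chain):
    """Boundary of a 1-chain: the 0-chain sum of coeff * (head - tail)."""
    out = {}
    for tail, head, c in one_chain:
        out[head] = out.get(head, 0) + c
        out[tail] = out.get(tail, 0) - c
    return {v: c for v, c in out.items() if c != 0}

def step(x, P):
    """One step of the dynamics: X_{n+1} = X_n + boundary(P(X_n))."""
    out = dict(x)
    for v, c in boundary(P(x)).items():
        out[v] = out.get(v, 0) + c
    return {v: c for v, c in out.items() if c != 0}

# A made-up chain field on R: each vertex n is sent to the edge [n, n+1],
# extended linearly over the coefficients of the 0-chain.
def P(x):
    return [(n, n + 1, c) for n, c in x.items()]

X1 = step({0: 1}, P)   # the vertex 0 moves to the vertex 1
```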

The chain bundle of a cell complex $K$ is $T(C(K))$. The following is obvious.

Proposition. A vector field $$V:K^{(0)}\to T(K)$$ over a cell complex generates -- by linear extension -- a chain field over its chain bundle: $$W:C_0(K)\to T(C(K)).$$

With the new definition, the motion produced by such a chain field is as before. For example, if $W(X)=XY$, then $$X_1= X+\partial W(X) = X+(Y-X) =Y.$$

Definition. Suppose $W(A,\cdot)$ is a chain field on a chain complex $C$ parametrized by $A\in {\bf Z}$ (for time). Then, an ODE of chain maps generated by $W$ on complex $C$ is: $$g_1=Wg_0,$$ and its solution is a chain map $$g:C(I) \to C,$$ for some subcomplex $I \subset {\mathbb R}$, that satisfies the equation $$g_1[A,A+1]=W(A,g_0(A)),\ \forall A\in I\cap {\bf Z}.$$

Example. Under this definition, a step of the moving particle on a square grid can be diagonal, without explicitly following the vertical and horizontal edges. For example, to get from $X=(0,0)$ to $Z=(1,1)$ without stopping at $Y=(0,1)$, set $$W(X):=(X,XY)+(Y,YZ) \in T(C).$$ Indeed, we have $$X\mapsto X+\partial W(X) = X+ (Y-X)+(Z-Y)=Z.$$ $\square$
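The diagonal step in this example can be verified numerically. The encoding below (vertices as coordinate pairs, the direction part of $W(X)$ as a list of weighted edges) is an assumption for illustration.

```python
def boundary(one_chain):
    """Boundary of a 1-chain: the 0-chain sum of coeff * (head - tail)."""
    out = {}
    for tail, head, c in one_chain:
        out[head] = out.get(head, 0) + c
        out[tail] = out.get(tail, 0) - c
    return {v: c for v, c in out.items() if c != 0}

X, Y, Z = (0, 0), (0, 1), (1, 1)
# The direction part of W(X) = (X, XY) + (Y, YZ):
W_of_X = [(X, Y, 1), (Y, Z, 1)]

# X + boundary(W(X)) = X + (Y - X) + (Z - Y) = Z:
state = {X: 1}
for v, c in boundary(W_of_X).items():
    state[v] = state.get(v, 0) + c
state = {v: c for v, c in state.items() if c != 0}
# The particle lands at Z in a single, diagonal step.
```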

Since every cell map generates its chain map, we have the following:

Proposition. If a map $f$ is a solution of the ODE of cell maps generated by a vector field $V$, then $g:=f_*$ is a solution of the ODE of chain maps generated by the chain field $W$ generated by $V$.

Definition. An initial value problem is a combination of an ODE and an initial condition: $$g_1=Wg_0,\ g_0(A_0)=X_0\in C_0,\ A_0\in {\bf Z},$$ and a solution $g:C({\mathbb R}\cap\{t\ge A_0\}) \to C$ of the ODE is called a (forward) solution of the IVP if it satisfies the initial condition.

Theorem (Existence and Uniqueness). The only (forward) solution to the IVP above is given iteratively by: $$\begin{array}{lllllll} g_0(A_0):=X_0,\quad \text{ and for all } A\ge A_0,\\ g_0(A+1):=g_0(A)+\partial W(A,g_0(A)),\\ g_1[A,A+1]:=W(A,g_0(A)). \end{array}$$
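The formulas of the theorem translate directly into an iteration. This sketch uses an assumed encoding (chains as coefficient dictionaries); the time-dependent chain field $W$, which alternates direction at each step, is made up for illustration.

```python
def boundary(one_chain):
    """Boundary of a 1-chain: the 0-chain sum of coeff * (head - tail)."""
    out = {}
    for tail, head, c in one_chain:
        out[head] = out.get(head, 0) + c
        out[tail] = out.get(tail, 0) - c
    return {v: c for v, c in out.items() if c != 0}

def add(x, y):
    """Sum of two 0-chains (zero coefficients dropped)."""
    out = dict(x)
    for v, c in y.items():
        out[v] = out.get(v, 0) + c
    return {v: c for v, c in out.items() if c != 0}

def solve_ivp(W, A0, X0, steps):
    """The iteration of the theorem: g_0(A+1) = g_0(A) + boundary(W(A, g_0(A))),
    with g_1 on the edge [A, A+1] equal to W(A, g_0(A))."""
    g0 = {A0: X0}
    g1 = {}
    for A in range(A0, A0 + steps):
        w = W(A, g0[A])
        g1[(A, A + 1)] = w
        g0[A + 1] = add(g0[A], boundary(w))
    return g0, g1

# A made-up field on Z: step right at even times, left at odd times,
# extended linearly over the coefficients of the 0-chain.
def W(A, x):
    d = 1 if A % 2 == 0 else -1
    return [(n, n + d, c) for n, c in x.items()]

g0, g1 = solve_ivp(W, 0, {0: 1}, 4)   # the position oscillates: 0, 1, 0, 1, 0
```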

Proof. The linearity of $W$ in its second argument implies that $g_0,g_1$ so defined form a chain map. The proof that the ODE is satisfied is identical to the proof of the uniqueness theorem in the last subsection. $\blacksquare$

The uniqueness holds only in terms of chains. When applied to cell complexes, the result may be that the initial location is a vertex $X_n$ but the next “location” $X_{n+1}$ is not.

Example. Where do we go from $X=(0,0)$ if $$W(X):=(X,XY)+(X,XZ),$$ with $Y=(0,1),Z=(1,1)$? This is the result: $$X\mapsto X+\partial W(X) = X+ (Y-X)+(Z-X)=-X+Y+Z.$$ $\square$
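This computation can be replayed with the same kind of encoding (chains as coefficient dictionaries; the vertex labels are coordinate pairs, an assumption for illustration):

```python
def boundary(one_chain):
    """Boundary of a 1-chain: the 0-chain sum of coeff * (head - tail)."""
    out = {}
    for tail, head, c in one_chain:
        out[head] = out.get(head, 0) + c
        out[tail] = out.get(tail, 0) - c
    return {v: c for v, c in out.items() if c != 0}

X, Y, Z = (0, 0), (0, 1), (1, 1)
# The direction part of W(X) = (X, XY) + (X, XZ):
W_of_X = [(X, Y, 1), (X, Z, 1)]

state = {X: 1}
for v, c in boundary(W_of_X).items():
    state[v] = state.get(v, 0) + c
state = {v: c for v, c in state.items() if c != 0}
# state is the 0-chain -X + Y + Z: not a single vertex.
```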

This isn't a cell map because the outcome isn't a single vertex! The latter can, however, be guaranteed under certain restrictions.

Since every cell map generates its chain map, the existence theorem from the last subsection is still applicable: if a vector field $V$ is edge-valued, a solution to the IVP for maps always exists. Below is a stronger statement.

Theorem (Cell maps as solutions). If a chain field $W$ over the chain complex $C(K)$ of a cell complex $K$ satisfies the condition: $$\partial W(A,X)=Y-X,\text{ for some } Y\in K^{(0)},$$ for each $A\in {\bf Z}$ and $X\in K^{(0)}$, then the (forward) solution to the IVP for chain maps on $K$ is a cell map.

The difference between a cell map as a solution of

• an ODE of cell maps and
• an ODE of chain maps

is illustrated below:

Thus, we have demonstrated that ODEs of cell maps are included in the new setting of ODEs of chain maps. What about ODEs of forms?

Suppose we are given a $0$-form $f$ on ${\mathbb R}$. Then we would like to interpret the pair $g=\{f,df\}$ as some chain map defined on $C({\mathbb R})$, the chain complex of time. What is the other chain complex $C$, the chain complex of space? Since these two forms take their values in ring $R$, we can choose $C$ to be the trivial combination of two copies of $R$: $$\partial=\operatorname{Id}:R \to R.$$ Below, we consider a more general setting of $k$-forms.

Theorem. Cochains are chain maps, as seen in the following commutative diagram: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{r|ccccccccccc} C(K):& ... & C_{k+2}(K) & \ra{\partial} & C_{k+1}(K) & \ra{\partial} & C_k(K) & \ra{\partial} & C_{k-1}(K) & ... \\ f: & \ & \ \da{0} & & \ \da{df} & & \ \da{f} & & \ \da{0}&\\ C: & ... & 0 & \ra{\partial=0} & R & \ra{\partial=\operatorname{Id}} & R & \ra{\partial=0} &0&... \end{array}$$

Proof. We need to prove the commutativity of each of these squares: we go around each square in two ways and demonstrate that the results are the same. We use the duality $d=\partial^*$, i.e., $df=f\partial$.

For the first square: $$df\, \partial =(f\partial)\partial =f\,\partial\partial=0.$$ For the second square: $$f\partial =df=\operatorname{Id}df.$$ The third square (and the rest) is zero.

$\blacksquare$

Now, how do ODEs of forms generate ODEs of chain maps?

Suppose $df=Pf$ is an ODE of forms with $P:R\to R$ some function. Then, for $K={\mathbb R}$ and $k=0$, the above diagram becomes: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{r|cccccccccc} C({\mathbb R}):& ... & \ra{0} & C_1({\mathbb R}) & \ra{\partial} & C_0({\mathbb R}) & \ra{0} \\ f: & \ & & \ \da{df} & & \ \da{f} & & \ \\ C: & ... & \ra{0} & R & \ra{\operatorname{Id}} & R & \ra{0} \end{array}$$ The corresponding chain field, $$W:C_0=R\to T(C)=R\times R,$$ is chosen to be $$W(A):=(A,P(A)).$$

Proposition. A solution of an ODE of forms is a solution of the corresponding ODE of chain maps.

Exercise. Prove the proposition.

Thus, the examples at the beginning of this section can be understood as ODEs of chain maps.

Exercise. Provide chain fields for the examples of ODEs in this section.

Exercise. (a) Define the forward propagation map for ODEs of chain maps. (b) Explain its continuity.

We have shown that we can model flows with discrete vector fields. For example, the following picture could be a velocity field of a flow:

We have also modeled motion with cell maps and chain maps:

The last example of an ODE of chain maps, however, gives us something new.

Example. Suppose our chain field on ${\mathbb R}^2$ is given by: $$W(X):=\frac{1}{2}(X,XY)+\frac{1}{2}(X,XZ),$$ with $Y=(1,0),Z=(0,1)$. Where do we go from $X=(0,0)$? This is the result: $$X\mapsto X+\partial W(X) = X+ \frac{1}{2}(Y-X)+\frac{1}{2}(Z-X)=\frac{1}{2}Y+\frac{1}{2}Z.$$ We can interpret this as if some material initially located at $X$ has been split between the two adjacent edges and these halves ended up at $Y$ and $Z$.
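This fractional step can be checked with exact arithmetic; the encoding below (weighted edges, coefficient dictionaries) is an assumption for illustration.

```python
from fractions import Fraction

def boundary(one_chain):
    """Boundary of a 1-chain: the 0-chain sum of coeff * (head - tail)."""
    out = {}
    for tail, head, c in one_chain:
        out[head] = out.get(head, 0) + c
        out[tail] = out.get(tail, 0) - c
    return {v: c for v, c in out.items() if c != 0}

half = Fraction(1, 2)
X, Y, Z = (0, 0), (1, 0), (0, 1)
# Half the material is sent along each of the two edges at X:
field_at_X = [(X, Y, half), (X, Z, half)]

state = {X: Fraction(1)}
for v, c in boundary(field_at_X).items():
    state[v] = state.get(v, 0) + c
state = {v: c for v, c in state.items() if c != 0}
# The unit of material at X ends up split evenly between Y and Z.
```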

$\square$

A process that transfers material along a vector field is called advection.

Example. To model advection, we can use the vector field on the spreadsheet given above, but it is now understood differently. If the values at the two cells adjacent to $X$ are $1$ and $1$, then either

• the flow particle follows vector $(1,1)$; or
• the material is split and each half goes along one of the two directions.

Several iterations of this process are shown with the new cells with non-zero values shown in blue:

That's what a forward propagation map of a chain ODE looks like!

The coloring in the illustration does not reflect the fact that the material is spread thinner and thinner with time. That's why the simulation also resembles the spreading of an infection -- along preset directions -- with blue squares indicating newly infected individuals, as below:

$\square$

For a more direct interpretation of the spreadsheet, imagine that a liquid moves along the square grid which is thought of as a system of pipes:

Then, instead of the velocity vector attached to each cell, we think of our vector field as two numbers, each of which represents the amount of liquid that follows the corresponding pipe. (Then, this is just a cubical $1$-form.)

In the picture above, the numbers represent “flows” through these “pipes”, with the direction determined by the direction of the $x,y$-axes. In particular,

• the “$1$” can be understood as: “$1$ cubic foot per second from left to right”.
• the “$2$” can be understood as: “$2$ cubic feet per second upward”.

For the sake of conservation of matter, these numbers have to be normalized. Then $1/3$ of the amount goes right and $2/3$ goes up.

To confirm that this model makes sense, let's see what happens if we let a drop of dye be carried by such a flow. For computation, we follow the rule that

• the amount of dye in each cell is split and passed to the adjacent cells, along the vectors of the vector field, in proportion to the values attached to the edges.

For example, we can choose $1$'s on the horizontal edges and $0$'s on the vertical edges, i.e., $dx$. Then the flow will be purely horizontal. If we choose $dy$, it will be purely vertical.

Example. Now what if we choose the $1$-form with $1$'s on both vertical and horizontal edges, i.e., $dx+dy$? $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{cccccccccccccccccc} & &\ \da{1} & & \ \da{1} & & \ \da{1} \\ & \ra{1} & \bullet & \ra{1} & \bullet & \ra{1} & \bullet & \ra{1}\\ & &\ \da{1} & &\ \da{1} & & \ \da{1} \\ & \ra{1} & \bullet & \ra{1} & \bullet & \ra{1} & \bullet & \ra{1}\\ & &\ \da{1} & & \ \da{1} & & \ \da{1} \end{array}$$

It is simple: the amount of dye in each cell is split in half and passed to the two adjacent cells along the vectors. The liquid is flowing down and right equally.

How well does it model the flow? Even though dispersal is inevitable, the predominantly diagonal direction of the spreading of the dye is evident in this Excel simulation:

$\square$
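The simulation can be replicated with a few lines of code: under the $1$-form $dx+dy$, every cell passes half of its material to the right and half down at each step. The dictionary encoding of the grid below is an assumption for illustration.

```python
def advect(grid, steps):
    """Advection under the 1-form dx + dy on a square grid: at each step,
    every cell passes half of its material right and half down."""
    for _ in range(steps):
        new = {}
        for (i, j), m in grid.items():
            new[(i + 1, j)] = new.get((i + 1, j), 0.0) + m / 2   # right
            new[(i, j + 1)] = new.get((i, j + 1), 0.0) + m / 2   # down
        grid = new
    return grid

# A drop of dye at the origin, carried by the flow for 4 steps:
result = advect({(0, 0): 1.0}, 4)
total = sum(result.values())   # the material is conserved
```

The dye spreads along the anti-diagonal $i+j=n$ after $n$ steps, matching the diagonal pattern in the Excel simulation above.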

Exercise. Find the formula for the diagonal elements.

Exercise. Create such a spreadsheet and confirm the pattern. What happens if the flow takes horizontally twice as much material as vertically?

Exercise. Devise an advection simulation with a circular dispersal pattern.

Exercise. (a) Explain how an arbitrary directed graph gives rise to advection and find its ODE. (b) What if, in addition to carrying material around, the flow also deposits a proportion of it in every location?