This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

# Calculus of chain maps

## Cell functions

Suppose we have two copies of the complex ${\mathbb R}$, ${\mathbb R}_x$ and ${\mathbb R}_y$, possibly representing time and space respectively. We are to study functions, $$f:G\to {\mathbb R}_y,\ \text{ for some } G\subset{\mathbb R}_x,$$ that will possibly represent motion in space. They have to somehow respect the cell structure of ${\mathbb R}$. Let's recall how cell functions are introduced.

Suppose $N,E$ are the sets of nodes and edges of ${\mathbb R}$, respectively, and $N_G,E_G$ those of $G$. In particular, $$N_G:=\{A,B,C,...\},\ N:=\{X,Y,Z,...\};$$ $$E_G:=\{AB,BC,...\},\ E:=\{XY,YZ,...\}.$$

First, $f$ has to be a function that takes nodes to nodes: $$f:N_G \to N.$$ For example, this is a possibility: $$f(A)=X,\ f(B)=Y,\ X\ne Y.$$ Second, we need to understand what will happen to the edge $AB$. The only option is that the edge is taken to this edge, $$f(AB)=XY,$$ provided $XY\in {\mathbb R}_y$. In this case we say that the edge is cloned.

The second possibility is: $$f(A)=f(B)=X.$$ The only option is that the edge is taken to this node, $$f(AB)=X.$$ In this case, we say that the edge is collapsed.

In order to ensure the continuity of the resulting curve, we plot the nodes of the graph first and then attach edges to them. Therefore, we require from the edge function $f$ the following:

• for each edge $e$, $f$ takes its endpoints to the endpoints of $f(e)$. $\\$

Or, in a more compact form, $$f(A)=X, \ f(B)=Y \Longleftrightarrow f(AB)=XY.$$ Second, we require:

• for each edge $e$, if $e$ is taken to node $X$, then so are its endpoints, and vice versa. $\\$

Or, in a more compact form, $$f(A)=f(B)=X \Longleftrightarrow f(AB)=X .$$

We combine these two requirements in the following definition.

Definition. A function $f:G\to {\mathbb R}_y$, where $G\subset {\mathbb R}_x$, is called a cell function (or a cell map) when for any adjacent nodes $A,B$ in $G$ we have: $$f(AB)= \begin{cases} f(A)f(B) & \text{ if } f(A) \ne f(B),\\ f(A) & \text{ if } f(A) = f(B). \end{cases}$$

In the most abbreviated form, this Discrete Continuity Condition is: $$f(AB)=f(A)f(B),$$ with the understanding that $XX=X$.
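The Discrete Continuity Condition is easy to test mechanically. Below is a minimal sketch in Python (the function name and the representation of nodes by integers are our own conventions, not part of the text): it takes the values of a node map on consecutive integer nodes and produces the values on the edges, rejecting node maps for which no cell function exists.

```python
def cell_map_edges(f0, nodes):
    """Given node values f0[n] on consecutive integer nodes, return the
    edge values dictated by the Discrete Continuity Condition, or raise
    if there is no cell function with these node values."""
    edges = {}
    for a, b in zip(nodes, nodes[1:]):
        x, y = f0[a], f0[b]
        if x == y:
            edges[(a, b)] = (x,)       # the edge is collapsed to the node X
        elif abs(x - y) == 1:
            edges[(a, b)] = (x, y)     # the edge is cloned to the edge XY
        else:
            raise ValueError(f"{x}{y} is not an edge of R_y")
    return edges

# f(x) = x is a cell function; f(x) = x**2 is not (its slope exceeds 1)
print(cell_map_edges({0: 0, 1: 1, 2: 2}, [0, 1, 2]))
```

Here a collapsed edge is recorded as a one-node tuple, matching the two cases of the definition.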

Theorem. The composition of two cell functions is a cell function.

Exercise. Prove the theorem.

Exercise. Under what circumstances is there the inverse of a cell function which is also a cell function?

Then one of the familiar theorems from calculus holds.

Theorem (Intermediate Value Theorem). Given a cell map $f$, if $$f(A)<c<f(B)$$ for some nodes $A,B$ and some $c\in{\bf R}$, then there is an edge $A'B'$ located between the nodes $A,B$ such that $$c\in f( A'B').$$

Exercise. Prove the theorem.

There are very few cell functions $f:{\mathbb R}\to {\mathbb R}$. The reason is that as the input $x\in {\bf Z}$ (a node) increments by $1$, the output $f(x)\in {\bf Z}$ (also a node) increments by $-1,0$, or $1$. These three numbers are the only three possible slopes, $m=-1,0,1$, of a cell function given by a linear formula $f(x)=mx+b,\ b\in {\bf Z}$. There are no cell functions given by quadratic formulas, $f(x)=x^2$, etc.:

We will explore further possibilities.

## Chain functions of cell functions

We turn to algebra.

Example. We will consider two maps with the domain and range limited to these short lists of nodes and edges of ${\mathbb R}$: $$\begin{array}{lll} N_G=\{A,B,C\},& E_G=\{AB,BC\},\\ N_J=\{X,Y,Z\},& E_J=\{XY,YZ\}; \end{array}$$ Below, two cell functions and their graphs are shown:

We can be more specific. The first function is: $$\begin{array}{lllll} \dim =0:&f(A)=X, &f(B)=Y,&f(C)=Z,\\ \dim =1:&f(AB)=XY,&f(BC)=YZ.& \end{array}$$ This data is however incomplete! The reason is that in discrete calculus as it has been developed so far, the main building block is a chain. In fact, we think of these lists as bases of the chain groups: $$\begin{array}{lll} C_0(G)=< A,B,C >,& C_0(J)=< X,Y,Z >,\\ C_1(G)=< AB,BC >,& C_1(J)=< XY,YZ >. \end{array}$$

The key idea is to think of the cell maps as linear operators determined by their values on these bases.

There are two linear operators for the two dimensions $k=0,1$. They are written coordinate-wise as follows: $$\begin{array}{lllll} \dim =0:&f_0\Big ([1,0,0]^T\Big)=[1,0,0]^T,&f_0\Big ([0,1,0]^T\Big)=[0,1,0]^T,&f_0\Big ([0,0,1]^T\Big)=[0,0,1]^T,\\ \dim =1:&f_1\Big ([1,0]^T\Big)=[1,0]^T,&f_1\Big ([0,1]^T\Big)=[0,1]^T. \end{array}$$

The second function is given by $$\begin{array}{lllll} \dim =0:&f(A)=X, &f(B)=Y,&f(C)=Y,\\ \dim =1:&f(AB)=XY,&f(BC)=Y. \end{array}$$ The linear operators are written coordinate-wise as follows: $$\begin{array}{lllll} \dim =0:&f_0\Big([1,0,0]^T\Big)=[1,0,0]^T, &f_0\Big([0,1,0]^T\Big)=[0,1,0]^T, &f_0\Big([0,0,1]^T\Big)=[0,0,1]^T,\\ \dim =1:&f_1\Big([1,0]^T\Big)=[1,0]^T, &f_1\Big([0,1]^T\Big)=0. \end{array}$$ The very last item requires special attention: the collapsing of an edge in $G$ does not produce a corresponding edge in $J$. This is why we give it an algebraic meaning by setting the values to be zero. Even though $0$ isn't an edge, it is a chain!

It follows that the first linear operator is the identity and the second is a projection. $\square$

Exercise. Prove the last statement.

Exercise. Find the matrices of $f_0,f_1$ in the last example.

Exercise. Find the matrices of $f_0,f_1$ for $f:G\to J$ given by $f_E(AB)=XY, \ f_E(BC)=XY$.

We follow this idea and introduce a new concept.

Definition. The chain function of a cell function $f:G \to J$, where $G,J\subset {\mathbb R}$, is the pair of linear operators $f_{\Delta}:=\{f_0,f_1\}$: $$\begin{array}{lll} f_0:C_0(G) \to C_0(J),\\ f_1:C_1(G) \to C_1(J), \end{array}$$ generated by the values of $f$ on the $k$-cells, $k=0,1$, with $f_1(e)=0$ for every collapsed edge $e$.

By design, these two operators satisfy the Discrete Continuity Condition: $$f_1(AB)=f_0(A)f_0(B).$$ Let's express this condition in terms of the boundary operator of ${\mathbb R}$.

Since $$\partial (AB) = B-A,$$ the continuity condition takes this form: $$\partial (f_1(AB))=f_0( \partial (AB)).$$ It applies, without change, to the case of a collapsed edge. Indeed, if $AB$ collapses to $X$, both sides are $0$: $$\begin{array}{llllll} &\partial (f_1(AB)) &=\partial (0) &=0;\\ &f_0( \partial (AB))&= f_0( B-A)=f_0(B)-f_0(A)=X-X &=0.&& \end{array}$$

Theorem (Algebraic Continuity Condition). For any cell function $f:G \to J$, its chain function satisfies: $$\partial f_1(e)=f_0( \partial e),$$ for any edge $e$ in $G$.
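The condition can be checked in matrix form. A minimal sketch, in Python, using the bases of the example above ($A,B,C$; $X,Y,Z$; $AB,BC$; $XY,YZ$) and the second cell function, with the edge $BC$ collapsed; the helper `matmul` is our own:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# boundary operators, partial(AB) = B - A; columns indexed by edges
dG = [[-1,  0],   # A
      [ 1, -1],   # B
      [ 0,  1]]   # C
dJ = [[-1,  0],   # X
      [ 1, -1],   # Y
      [ 0,  1]]   # Z

# the second cell function above: f(A)=X, f(B)=f(C)=Y; f(AB)=XY, f(BC)=0
f0 = [[1, 0, 0],
      [0, 1, 1],
      [0, 0, 0]]
f1 = [[1, 0],
      [0, 0]]

# Algebraic Continuity Condition in matrix form: partial f1 = f0 partial
assert matmul(dJ, f1) == matmul(f0, dG)
print("continuity holds")
```

Note that the collapsed edge contributes a zero column to $f_1$, and both sides of the condition vanish on it, just as in the computation above.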

Whenever the boundary operator can be properly defined, this condition will define cell functions in any dimension.

## Chain functions

The picture below suggests that a function of nodes and a function of chains of edges don't have to come from a cell function to satisfy this condition:

Here, $f_1(AB)=XY$ in the first case and $f_1(AB)=XY+YZ$ in the second.

Definition. Suppose we have two graphs $G\subset {\mathbb R}_x$ and $J\subset {\mathbb R}_y$. A chain function is a pair of linear operators $f:=\{f_0,f_1\}$, $$\begin{array}{lll} f_0:C_0(G) \to C_0(J),\\ f_1:C_1(G) \to C_1(J), \end{array}$$ that satisfies the Algebraic Continuity Condition: for any edge $e$ in $G$, we have $$\partial f_1(e)=f_0( \partial e).$$

As we can see, unlike those of cell functions, the slopes of chain functions can be arbitrary integers. Non-linear functions are also numerous: polynomials with integer coefficients, exponential functions with integer bases, etc.

Furthermore, if an arbitrary function of nodes $f_0$ is given, we can always fill the blanks with some $f_1$, so that, together, they form a chain function.

Exercise. Clarify and prove the last statement. Hint: What difference does it make if $G\ne {\mathbb R}$ or $J\ne {\mathbb R}$?

Also, if a cell function $f$ is given, a multiple of its chain function $f_\Delta=\{f_0,f_1\}$ is also a chain function. Moreover, the following two also form a chain function: $$g_k(a)=r_af_k(a),\ k=0,1,$$ for any choice of coefficients $r_a\in {\bf R},\ a\in C_k({\mathbb R})$.

Exercise. Prove the last statement.

Cell functions are now seen as “cell-valued” chain functions.

Exercise. Make the above statement precise by answering the question: what are the possible values of the matrix of the chain function of a cell function?

Theorem. The composition of two chain functions is a chain function.

Exercise. Prove the theorem.

Exercise. Under what circumstances is there the inverse of a chain function which is also a chain function?

To plot the graph of a chain function in ${\mathbb R}$, we plot those of $f_0$ and $f_1$ together as above or separately:

To emphasize the nature of a chain function as a function, we can use arrows:

Here we have two functions of cells:

• $f_0:\ 0\mapsto 2,\ 1\mapsto 4,\ 2\mapsto 3,\ 3\mapsto 3,\ 4\mapsto 2,\ 5\mapsto 1$; and
• $f_1:\ [0,1]\mapsto [2,4],\ [1,2]\mapsto [4,3],\ [2,3]\mapsto 3,\ [3,4]\mapsto [3,2],\ [4,5]\mapsto [2,1]$.

We can also rewrite and use the algebra of $C_k({\mathbb R})$:

• $f_0(0)=2,\ f_0(1)=4,\ f_0(2)=3,\ f_0(3)=3,\ f_0(4)=2,\ f_0(5)=1$; and
• $f_1\Big( [0,1] \Big)= [2,4],\ f_1\Big([1,2] \Big)= -[3,4],\ f_1\Big([2,3] \Big)= 0,\ f_1\Big([3,4]\Big)= -[2,3],\ f_1\Big([4,5]\Big)= -[1,2]$.

We next show that we can think of $1$-chain functions as integrals.

Example. Let's consider again the above function. For $B=0,1,2...$, we can think of $f_0(B)$ as the location of an object that moves with speed given by $f_1$ over a period of time represented by the $1$-chain $a=[0,B]$. To get from the values on the edges (velocities) to the values on the nodes (locations), i.e., from $f_1$ to $f_0$, we use the properties: $$\begin{array}{lllll} \big[ f_0(0),f_0(4) \big]&=f_1\Big( [0,4] \Big) &\text{ by } f_1(AB)=f_0(A)f_0(B)\\ &=f_1\Big( [0,1]+[1,2]+[2,3]+[3,4] \Big) \\ &=f_1\Big( [0,1]\Big)+f_1\Big( [1,2]\Big)+f_1\Big([2,3]\Big)+f_1\Big([3,4]\Big)&\text{ by additivity} \\ &=[2,4]-[3,4]+0-[2,3]\\ &=0. \end{array}$$ Therefore, since $f_0(0)=2$, we have $f_0(4)=2$. In the same manner, we can compute $f_0(1),...,f_0(5)$, as long as $f_0(0)$ is known. $\square$

The notation so justified is: $$f_0= \displaystyle\int f_1.$$ So, $f_0$ is an antiderivative of $f_1$!
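The integration in the example amounts to accumulating increments edge by edge. A sketch, assuming uniform unit edges and representing each value of $f_1$ by the signed increment it contributes (e.g., $[2,4]$ contributes $+2$ and $-[3,4]$ contributes $-1$):

```python
def antiderivative(f1_increments, f0_at_0):
    """Recover the node values f0 from the edge values of f1,
    i.e., f0 = integral of f1, given the initial value f0(0)."""
    f0 = [f0_at_0]
    for inc in f1_increments:
        f0.append(f0[-1] + inc)   # each edge adds its increment
    return f0

# the function above: f1 takes [0,1],[1,2],[2,3],[3,4],[4,5] to
# [2,4], -[3,4], 0, -[2,3], -[1,2]; as signed increments: 2,-1,0,-1,-1
print(antiderivative([2, -1, 0, -1, -1], 2))   # [2, 4, 3, 3, 2, 1]
```

The output reproduces the node values of $f_0$ listed earlier.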

As we shall see, in calculus either of the two functions $f_0$ or $f_1$ may come up by itself. They can also be much more general than the ones discussed above.

Definition. For some $G,J\subset {\mathbb R}$, a chain function of degree $k=0,1$, or simply a $k$-chain function, is a linear function $$s:C_k(G)\to C_k(J);$$ i.e., for all $r,t\in{\bf R}$ and all $a,b\in C_k(G)$, we have: $$s(r\hspace{.5mm} a+t\hspace{.5mm} b)=r\hspace{.5mm}s(a)+t\hspace{.5mm}s(b).$$

Separately, we write for nodes and edges: $$s(r\hspace{.5mm} A+t\hspace{.5mm} B)=r\hspace{.5mm} s(A)+t\hspace{.5mm} s(B) ,\ A,B\in {\mathbb R},$$ and $$s(r\hspace{.5mm} AB+t\hspace{.5mm} CD)=r\hspace{.5mm} s(AB)+t\hspace{.5mm} s(CD) ,\ AB,CD\in {\mathbb R}.$$

The set of $k$-chain functions is denoted by $CM^k({\mathbb R})$.

Chain functions can be represented as tables, see Spreadsheets. They can also be represented by formulas. In either case, only the values of the chain function on the cells are provided.

Warning. The algebraic operations, addition and scalar multiplication, used above are the formal operations that come with $C_k({\mathbb R})$. They are not to be confused with real addition and multiplication of numbers that come with ${\bf R}$.

The two are intermixed when the functions are represented by formulas. For example, these are $0$-chain functions: $$f(n)=3\cdot n^2+1,\ g(n)=\sin (\pi\cdot n),\ h(n)=2^n, \text{ etc. }$$ Here the input node, $n$, is treated as a real number to find the output node, $f(n)$. Now, the formulas only supply the values (real numbers) assigned to each node $n=...,-2,-1,0,1,2,...$. The rest of the values -- the function evaluated at chains as formal linear combinations of nodes -- are found from this data via linearity. However, the coefficients of the formal linear combinations cannot be used for computations. For example, $$f(n)=3n+1 \ \not\Rightarrow\ f(2n+3m) = 3(2n+3m)+1=6n+9m+1.$$ To tell nodes from numbers, let's add braces: $$f(\{n\})=\{3\cdot n+1\}.$$ Now, $f$ takes a formal linear combination of nodes, $$\begin{array}{lll} f(2\{n\}+3\{m\})&=2f(\{n\})+3f(\{m\})\\ &=2\{ 3 \cdot n+1 \}+3 \{ 3\cdot m+1 \}, \end{array}$$ to a formal linear combination of nodes.

Another issue is functions with non-integer values: $$f(n)=\tfrac{1}{3}\cdot \sqrt{n}+1.5,\ g(n)=\sin (n),\ h(n)=e^n, \text{ etc. }$$ Those can still be made sense of as functions $f:{\mathbb R}_x\to {\mathbb R}_y$ whenever there is an appropriate choice of nodes for ${\mathbb R}_y$. For example, given a function $f:{\bf R} \to {\bf R}$, we choose the nodes of ${\mathbb R}_x$ to be ${\bf Z}$ and the nodes of ${\mathbb R}_y$ to be $\{f(n):n\in {\bf Z}\}$. When the latter set has no accumulation points, $f$ becomes a $0$-chain function. Under these circumstances, however, algebraic operations with functions become difficult.

Recall also that the domain ${\mathbb R}^\star$ is a full copy of the domain ${\mathbb R}$ with the chains and the boundary operators. This time we have two pairs: $${\mathbb R}_x \text{ vs. } {\mathbb R}_x^\star \text{ and } {\mathbb R}_y \text{ vs. } {\mathbb R}_y^\star.$$ The new domains also have the full set of $k$-chain functions as $k$-chain-valued functions on their $k$-cells: $$f_k:C_k({\mathbb R}_x^\star) \to C_k({\mathbb R}_y^\star).$$ They are called dual chain functions while the ones on ${\mathbb R}_x$ are called primal chain functions. The exterior derivative is also given: $$d:CM^0({\mathbb R}^\star)\to CM^1({\mathbb R}^\star).$$

There is a direct relation between these two calculi.

Proposition. Given a primal $k$-chain function $f$, $k=0,1$, the following is a dual $(1-k)$-chain function: $$f^\star (a):=f(a^\star)^\star,\ \forall a\in C_{1-k}({\mathbb R}_x^\star).$$

Next, let's consider the derivatives of chain functions.

## The derivative of a $0$-chain map is a $1$-chain map

Even though we are familiar with the derivative of a function as the rate of change, our initial interest will be the change itself.

The functions we are dealing with are discrete. These are $0$-chain maps; they change abruptly and, consequently, our interest is the actual range of change. Specifically, suppose the input of $f$ changes from $A$ to $B$; then $f$ changes from $f(A)$ to $f(B)$. In other words, the edge $AB\in {\mathbb R}$ corresponds to the range of change, i.e., the interval $f(A)f(B)$. We have a function of intervals generated by $f$: $$AB \mapsto f(A)f(B).$$

Two observations. If the input changes in two increments, the result is the same, as expected: $$AC=AB+BC \mapsto f(A)f(C)=f(A)f(B)+f(B)f(C).$$ If the input changes in the opposite way, so does the change of the output, as expected: $$BA=-AB \mapsto f(B)f(A)=-f(A)f(B).$$

Example. Let's look at this construction from the point of view of our study of motion. Suppose the function $f_0$ gives the position and suppose

• at time $n$ hours we are at the $5$ mile mark: $f_0(n)=5$, and then
• at time $n+1$ hours we are at the $7$ mile mark: $f_0(n+1)=7$. $\\$

We don't know what exactly has happened during this hour; we may have passed the end-point and then turned back. We describe the whole event as follows: $$f_1\Big( [n,n+1] \Big)=[5,7].$$ This way, the elements of the domain of the velocity function are the edges. $\square$

The relation between a discrete function and its range is illustrated below:

Definition. The exterior derivative of a discrete function $f$ at edge $AB\in {\mathbb R}$ is defined to be the interval $$(df) \big(AB \big):=f(A)f(B),$$ represented as a $1$-chain.

Thus, we have:

• $f$ is defined on the $0$-cells, and
• $df$ is defined on $1$-cells.

Definition. The exterior derivative $df$ of a $0$-chain map $f$ is a $1$-chain map $df$ given on each interval by the above formula.

The exterior derivatives can be computed for $0$-chain maps given by tables, see Spreadsheets. When they are represented by formulas, the computations are straightforward: $$\begin{array}{llll}f(n)=3n^2+1& \Longrightarrow\ &(df)\big( [n,n+1] \big)=[3n^2+1,3(n+1)^2+1]&=(3n^2+1)+[0,6n+3];\\ g(n)=2^n& \Longrightarrow\ &(dg)\big( [n,n+1] \big)= [2^{n},2^{n+1}]&=2^n\cdot [1,2];\\ h(n)=n!& \Longrightarrow\ &(dh)\big( [n,n+1] \big)= [n!,(n+1)!]&=n!\cdot [1,n+1];\\ k(n)=\frac{1}{n}& \Longrightarrow\ &(dk)\big( [n,n+1] \big)= \Big[ \frac{1}{n},\frac{1}{n+1} \Big]&=\frac{1}{n} \cdot \Big[ 1,\frac{n}{n+1} \Big]. \end{array}$$

Note that here and elsewhere “$\cdot$” stands for the multiplication of real numbers and, as a result, of sets of real numbers, not the (formal) multiples of cells. The same applies to the addition “+”. The last column is meant to present an abbreviated version of the exterior derivative that has been already found.
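Each of these computations evaluates the $0$-chain map at the two endpoints of an edge. A minimal sketch, representing an interval by the ordered pair of its endpoints:

```python
def exterior_derivative(f):
    """df assigns to the edge [n, n+1] the interval [f(n), f(n+1)]."""
    return lambda n: (f(n), f(n + 1))

df = exterior_derivative(lambda n: 3 * n**2 + 1)
print(df(2))   # the edge [2,3] goes to (13, 28)

dh = exterior_derivative(lambda n: 2**n)
print(dh(3))   # the edge [3,4] goes to (8, 16)
```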

The analogs of the familiar theorems from calculus are trivial.

We say that an edge $AB$ is positive, $AB>0$, whenever $B>A$.

Theorem (Monotonicity). A $0$-chain map is increasing (decreasing) on an interval if and only if its exterior derivative is positive (negative) on each edge in this interval; i.e., $\forall AB\in {\mathbb R}$, $$f(B)>f(A) \Longleftrightarrow (df) \big(AB\big) >0.$$

Corollary (Extreme Values). A $0$-chain map has a local maximum (minimum) at a point if and only if its exterior derivative is positive (negative) to the point's left and negative (positive) to its right; i.e., $\forall AB, BC\in {\mathbb R}$, $$f(B)>f(A) \text{ and } f(B)>f(C) \Longleftrightarrow (df) \big(AB\big) >0 \text{ and } (df) \big(BC\big) <0.$$

Theorem (Constant Function). A $0$-chain map is constant on an interval if and only if its exterior derivative on each edge in this interval is zero; i.e., $\forall AB\in {\mathbb R}$, $$f(B)=f(A) \Longleftrightarrow (df) \big(AB\big) =0.$$

To summarize, the exterior derivative of a $0$-chain map is the $1$-chain map equal, on each edge, to the range of its values.

Proposition. The exterior derivative is a linear operator $$d:CM^0({\mathbb R})\to CM^1({\mathbb R}).$$

Exercise. Prove the proposition.

The main fact about the exterior derivative to keep in mind is the following.

Proposition. For any $0$-chain function $f$, the pair $\{f,df\}$ is a chain function.

## Algebraic properties of the exterior derivative

We already know that the exterior derivative is linear: $$d(\alpha f+\beta g)=\alpha df+\beta dg,$$ for all chain maps $f,g$ and all $\alpha,\beta \in {\bf R}$. Therefore, we have the following two familiar facts:

Theorem (Sum Rule). For any two $k$-chain maps $f,g$, we have: $$d(f + g)= df + dg.$$

Theorem (Constant Multiple Rule). For any $k$-chain map $f$ and any $c \in {\bf R}$, we have: $$d(cf) = c\cdot df.$$

The following two are derived from the definition of the exterior derivative.

Theorem (Constant). The derivative of a constant is zero: for any $c \in {\bf R}$, $$dc=0.$$

Theorem (Identity). The derivative of the identity is the identity: $$d(\operatorname{Id})=\operatorname{Id}.$$

The following is derived from the fact that the composition of two chain functions is a chain function.

Theorem (Chain Rule). The derivative of the composition is the composition of the derivatives: $$d(fg)=df\hspace{.5mm} dg.$$

The result below then follows.

Theorem (Inverse). The derivative of the inverse is the inverse of the derivative: $$d(f^{-1})=(df)^{-1}.$$

Note that below, just as above, “$\cdot$” stands for the multiplication of real numbers and of sets of real numbers, not the formal multiplication of cells. The same applies to the addition and division. The only purpose is to present the formulas in a more compact form.

Theorem (Power Formula). For any positive integer $k$ and any edge $AB$, we have: $$d (A^{\underline {k}})\big(AB\big)=A^{\underline {k-1}}\cdot\big[A-k+1,A+1\big],$$ where $$p^{\underline {q}} := p\cdot(p-1)\cdot(p-2)\cdot(p-3)\cdot...\cdot(p-q+1).$$

Proof. First, let $A=n,\ B=n+1$. Then $$\begin{array}{lll} d (A^{\underline {k}})\big(AB\big)&=d\big( n\cdot(n-1)\cdot(n-2)\cdot...\cdot(n-k+1) \big)\\ &=\big[n\cdot(n-1)\cdot(n-2)\cdot...\cdot(n-k+1),(n+1)\cdot n\cdot(n-1)\cdot...\cdot(n-k+2)\big] \\ &= n\cdot (n-1)\cdot ...\cdot (n-k + 2)\cdot\big[n-k+1,n+1\big] \\ &= n^{\underline {k-1}}\cdot \big[n-k+1,n+1\big]. \end{array}$$ $\square$
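The formula can be spot-checked numerically, reading each interval as the ordered pair of its endpoints and a multiple of an interval as the multiple of both endpoints, as in the proof. A sketch:

```python
def falling(p, q):
    """The falling power p^(q) = p(p-1)(p-2)...(p-q+1)."""
    r = 1
    for i in range(q):
        r *= p - i
    return r

for n in range(2, 6):
    for k in range(1, 5):
        # left-hand side: d(A^(k)) on the edge [n, n+1]
        lhs = (falling(n, k), falling(n + 1, k))
        # right-hand side: n^(k-1) . [n-k+1, n+1]
        c = falling(n, k - 1)
        rhs = (c * (n - k + 1), c * (n + 1))
        assert lhs == rhs
print("Power Formula checked")
```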

Theorem (Exponent Formula). For any positive real $b$ and any edge $AB$, we have: $$d(b^A)\big(AB\big)=b^A\cdot\big[1,b\big].$$

Proof. First, let $A=n,\ B=n+1$. Then $$\begin{array}{lll} d (b^{A})\big(AB\big)&=d\big( b^n \big)\\ &=\big[b^n,b^{n+1}\big] \\ &= b^n\cdot\big[1,b\big] . \end{array}$$ $\square$

Theorem (Trig Formulas). For any real $t$ and any edge $AB$, we have: $$\begin{array}{lllll} d(\sin (t\cdot A))\big(AB\big)=\sin (t\cdot A) + \big[0,2\cdot \sin \tfrac{t}{2} \cos \big(t\cdot (A+\tfrac{1}{2})\big)\big],\\ d(\cos (t\cdot A))\big(AB\big)=\cos (t\cdot A) + \big[0,- 2\cdot \sin \tfrac{t}{2} \sin \big(t\cdot (A+\tfrac{1}{2})\big)\big]. \end{array}$$

Proof. We use the formulas: $$\begin{array}{lllll} \sin u - \sin v = 2\cdot \sin\big(\tfrac{1}{2}\cdot(u-v)\big) \cos\big(\tfrac{1}{2}\cdot(u+v)\big);\\ \cos u - \cos v = -2\cdot \sin\big(\tfrac{1}{2}\cdot(u-v)\big) \sin\big(\tfrac{1}{2}\cdot(u+v)\big). \end{array}$$ Let $A=n,\ B=n+1$. Then $$\sin (t\cdot (n+1)) - \sin (t\cdot n) = 2\cdot \sin\big(\tfrac{1}{2}\cdot t\big) \cos\big(\tfrac{1}{2}\cdot t\cdot (2\cdot n+1)\big).$$ Therefore, $$\begin{array}{lllll} d(\sin (t \cdot A))\big(AB\big)&=\big[\sin (t\cdot n),\sin \big(t\cdot (n+1)\big)\big] \\ &=\sin (t \cdot n)+\big[0,\sin \big(t\cdot (n+1)\big)-\sin \big(t\cdot n\big)\big]\\ &=\sin (t \cdot n)+\big[0,2\cdot \sin\big(\tfrac{1}{2}\cdot t\big) \cos\big(\tfrac{1}{2}\cdot t \cdot (2\cdot n+1)\big)\big]. \end{array}$$ $\square$

Exercise. Finish the proof.

Theorem (Product Rule). Given two $0$-chain maps $f,g$, for any edge $AB$, we have: $$d(f \cdot g)\big(AB\big) = df\big(AB\big)\cdot g(A) + f(B)\cdot dg\big(AB \big).$$

Proof. $$\begin{array}{lll} d(f \cdot g)\big(AB\big)&=[(f \cdot g)(A), (f \cdot g)(B)]\\ &=\big[f(A) \cdot g(A), f(B) \cdot g(B)\big]\\ &=\big[f(A) \cdot g(A), f(B) \cdot g(A)\big] + \big[f(B) \cdot g(A), f(B) \cdot g(B)\big]\\ &= \big[f(A), f(B)\big]\cdot g(A) + f(B)\cdot \big[g(A), g(B)\big]\\ &= df\big(AB\big)\cdot g(A) + f(B)\cdot dg\big(AB \big). \end{array}$$ $\square$

Theorem (Quotient Rule). Given two $0$-chain maps $f,g$, any edge $AB$, we have: $$d(f/g)(AB) = \frac{df\big(AB \big)\cdot g(A) - f(A)\cdot dg\big(AB \big)}{g(A)\cdot g(B)},$$ provided $g(A),g(B) \ne 0$.

Proof. We use the Product Rule and the following computation: $$\begin{array}{lll} d(1 / g)\big(AB\big)&=\big[(1 / g)(A), (1 / g)(B)\big]\\ &=\Big[\frac{1}{g(A)}, \frac{1}{ g(B)} \Big]\\ &=\Big[\frac{g(B)}{ g(A)\cdot g(B)}, \frac{g(A)}{ g(A)\cdot g(B)}\Big]\\ &= \frac{1}{ g(A)\cdot g(B)} \cdot \big[g(B),g(A)\big]\\ &= -\frac{1}{ g(A)\cdot g(B)} \cdot dg\big(AB\big). \end{array}$$ $\square$
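The Product Rule can be verified numerically in the same interval notation, with the chain addition of adjacent intervals, $[x,y]+[y,z]=[x,z]$, carried out explicitly. A sketch, with the two sample functions our own:

```python
def scale(c, interval):
    """A real multiple of an interval multiplies both endpoints."""
    a, b = interval
    return (c * a, c * b)

def chain_add(p, q):
    """Add two adjacent intervals as 1-chains: [x,y] + [y,z] = [x,z]."""
    assert p[1] == q[0]
    return (p[0], q[1])

f = lambda n: 3 * n + 1
g = lambda n: n * n + 2

for A in range(1, 6):
    B = A + 1
    d_fg = (f(A) * g(A), f(B) * g(B))        # d(f.g) on the edge AB
    term1 = scale(g(A), (f(A), f(B)))        # df(AB) . g(A)
    term2 = scale(f(B), (g(A), g(B)))        # f(B) . dg(AB)
    assert chain_add(term1, term2) == d_fg   # the Product Rule
print("Product Rule checked")
```

The Quotient Rule can be traced in the same manner, following the computation in the proof above.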

Exercise. Derive the “Power Formula” for $k<0$.

Exercise. Derive the “Log Formula”: $d(\log_b tA) \big(AB \big)=$?

## The Fundamental Theorem of Calculus

Suppose $f \in CM^0({\mathbb R})$. For every edge $AB\in{\mathbb R}$, let's take our definition of the exterior derivative $df \in CM^1({\mathbb R})$, $$df\big(AB \big) :=f(A)f(B),$$ written in the functional notation, and rewrite it in the integral notation: $$\displaystyle\int_{AB} df = f(A)f(B).$$

Now, what about integration over intervals “longer” than just a single edge? Let $A_0,A_1,...,A_k$ be a sequence of consecutive nodes, $k \in {\bf Z}$. Let's denote this chain by $$A_0A_k := A_0A_1+A_1A_2+...+A_{k-1}A_k.$$ It is a $1$-chain and $df$ is a $1$-chain map. Then we have: $$\begin{array}{llll} df(A_0A_k) &= df\big(A_0A_1&+A_1A_2&+...&+A_{k-1}A_k \big) \\ &= df\big(A_0A_1 \big)&+df\big(A_1A_2 \big)&+...&+df\big(A_{k-1}A_k \big) \\ &= f(A_0)f(A_1) &+ f(A_1)f(A_2) &+ ... &+ f(A_{k-1})f(A_{k}) \\ &= f(A_0)f(A_k). \end{array}$$ We simply applied the definition repeatedly.

Written in the conventional notation, the result takes this form: $$\displaystyle\int_{A_0A_k} df = f(A_0)f(A_k).$$
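The telescoping computation can be reproduced directly, reading each increment as a number rather than a chain. A sketch, with the sample function our own:

```python
f = lambda n: n ** 3 - 2 * n          # any 0-chain map given by a formula

def integrate_df(A0, Ak):
    """Sum df over the chain of edges from A0 to Ak."""
    return sum(f(n + 1) - f(n) for n in range(A0, Ak))

# the sum telescopes to the net change between the endpoints
print(integrate_df(1, 5), f(5) - f(1))   # both equal 116
```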

Theorem (Fundamental Theorem of Calculus). Given a $0$-chain map $f$, we have $$\begin{array}{|c|} \hline \\ \quad \displaystyle\int_{AB} df =f(A)f(B), \quad \\ \\ \hline \end{array}$$ for any pair of nodes $A,B$, where $AB$ stands for the chain of edges from $A$ to $B$.

Rewritten back in the functional notation, the interaction of the operators of exterior derivative and boundary takes the familiar form of the Algebraic Continuity Condition: $$df(a) =f(\partial a),\ \forall f\in CM^0({\mathbb R}),\ \forall a\in C_1({\mathbb R}).$$ It holds for all $1$-chains, not only the edges. Traditionally, this conclusion is stated as a theorem.

Theorem (Stokes Theorem). Given a $0$-chain map $f$, we have: $$\begin{array}{|c|} \hline \\ \quad df =f\partial \quad \\ \\ \hline \end{array}$$

## The derivative as the relative change

The presence of the geometry of ${\mathbb R}$ allows us to define the first derivative of a discrete function (i.e., a $0$-chain map) as “the rate of change” instead of just “change”, which is the exterior derivative.

Recall that, for a real-valued function $y=g(x)$, the rate of change is $$\frac{\text{change of } y}{ \text{change of } x}=\frac{g(B)-g(A)}{B-A},$$ over some interval $[A,B]$ in its domain. It is important to realize here that the definition remains equivalent when the denominator is required to be positive, i.e., $A<B$.

Now, this is the setting for a $0$-chain function $g:{\mathbb R}_x \to {\mathbb R}_y$:

In contrast to $dg$ representing the change of $g$, we will consider the relative change of $g$ over the edge $a=AB\in {\mathbb R}_x$: $$a=AB\mapsto\frac{g(A)g(B)}{|AB|}.$$ The numerator is simply $dg(a)$. In order to create the derivative as a function, to what should this new quotient be assigned?

This is a $1$-chain in ${\mathbb R}_y$. If it were assigned to the edge $a=AB$, we would simply have a multiple of the exterior derivative, which is a $1$-chain function. The fact that there is no derivative of a $1$-chain function would preclude the second derivative from having any meaning (other than $0$). Instead, we utilize the dual structure of ${\mathbb R}_x^\star$ and use the dual of this chain, the $0$-chain (node) $P=a^\star$: $$P=AB^\star\mapsto\frac{g(A)g(B)}{|AB|}.$$ The result is a function from the dual $0$-chains to the primal $1$-chains. We take the dual of the latter in order to have a function from the dual $0$-chains to the dual $0$-chains, i.e., a (dual) $0$-chain function!

Definition. The derivative of a primal $0$-chain function $g$ is the dual $0$-chain function given by its values at the nodes of ${\mathbb R}^\star$: $$g'(P):=\left( \frac{dg(P^\star)}{|P^\star|} \right)^\star=\frac{\big( dg(P^\star) \big)^\star} {|P^\star|} \in C_0({\mathbb R}_y^\star),\ \forall P\in {\mathbb R}_x^\star.$$

It can be further differentiated, as discussed in the next subsection.

Proposition. The derivative is the dual of the exterior derivative: $$g'=(dg)^\star.$$

This is how the definition works in the case of edges with equal lengths:

In general, there is also the denominator but the formula remains simple and the Sum Rule, the Constant Multiple Rule, the Product Rule, and the Quotient Rule all hold.
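For edges of unequal lengths, the denominator does its work. A sketch, assuming a non-uniform choice of nodes and reading the value of the derivative at a dual node (the midpoint of an edge) as a number:

```python
def derivative(nodes, f):
    """Return {dual node: (f(B) - f(A)) / |AB|} over consecutive nodes,
    where the dual node is the midpoint of the edge AB."""
    result = {}
    for a, b in zip(nodes, nodes[1:]):
        result[(a + b) / 2] = (f(b) - f(a)) / (b - a)
    return result

# f(x) = x^2 on a non-uniform grid
nodes = [0, 1, 3, 4, 7]
print(derivative(nodes, lambda x: x * x))
```

For $f(x)=x^2$, the quotient $(b^2-a^2)/(b-a)=a+b$ is exactly twice the midpoint, so the derivative at a dual node $m$ is $2m$, regardless of the edge lengths.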

Further, for $0$-chain maps $f$ and $g$, we have $$f'=g' \Longleftrightarrow df=dg.$$ Secondly, the sign of the exterior derivative and that of the derivative coincide. These two facts allow us to restate the simple calculus theorems about the exterior derivative.

Theorem (Monotonicity). A $0$-chain map is increasing (decreasing) on an interval if and only if its derivative is positive (negative) on each edge in this interval; i.e., $$\forall A<B,\quad f(B)>f(A) \Longleftrightarrow f' \big(PQ^\star\big) >0\quad \forall A\le P < Q\le B.$$

Corollary (Extreme Values). A $0$-chain map has a local maximum (minimum) at a point if and only if its derivative is positive (negative) to the point's left and negative (positive) to its right; i.e., $$\forall A<B<C,\quad f(B)>f(A) \text{ and } f(B)>f(C) \Longleftrightarrow f' \big(AB^\star\big) >0 \text{ and } f' \big(BC^\star\big) <0.$$

Theorem (Constant Function). A $0$-chain map is constant on an interval if and only if its derivative on each edge in this interval is zero; i.e., $$f(B)=f(A) \Longleftrightarrow f' \big(PQ^\star\big) =0\quad \forall A\le P < Q\le B.$$

The first derivative is computed with a spreadsheet. Link to file: Spreadsheets.

## The second derivative of a $0$-chain map

For a real-valued function $g$, the change of change at point $B$ is known to be $$\frac{\frac{g(C)-g(B)}{C-B} - \frac{g(B)-g(A)}{B-A}}{(C-A)/2},$$ for some $A,C$ with $B$ between them. This is the setting of discrete calculus for a $0$-chain map $g:{\mathbb R}_x\to {\mathbb R}_y$:

We are to find, again, the relative change of the first derivative $g'$ -- from one edge to the adjacent one. It is important to notice here that the definition remains equivalent when the denominators are required to be positive, i.e., $A<C$. We can re-write then: $$\frac{\frac{g(B)g(C)}{|BC|} - \frac{g(A)g(B)}{|AB|}}{|AC|/2}.$$

The numerators of the two fractions are easily recognized as the exterior derivative of the $0$-chain map $g$ evaluated at the $1$-cells $a=AB$ and $b=BC$ respectively. Meanwhile, the denominators are the lengths of these cells. The denominator of the whole fraction is, in fact, the distance between the centers of the cells $a$ and $b$. Therefore, we can think of it as the length of the dual cell of the vertex $B$.

Proposition. The second derivative, i.e., the derivative of the derivative: $$g' '=(g')',$$ of a primal $0$-chain function $g$ is a primal $0$-chain function given by its values at the nodes of ${\mathbb R}$: $$g' '(B)=\frac{\Big( \frac{dg(b)}{|b|} - \frac{dg(a)}{|a|} \Big)^\star}{|B^\star|},\ \forall B\in {\mathbb R},$$ where $a,b$ are the $1$-cells adjacent to $B$.

This is how the formula works in the case of edges with equal lengths:

In the pursuit of closed solutions, we will use this notation for the primal and dual edges: $$I_n:=[n,n+1] , \quad I_n^\star:= \big[n-\tfrac{1}{2},n+\tfrac{1}{2}\big],$$ and for their lengths: $$\Delta _n:=| I_n |, \quad \Delta _n^\star:=| I_n^\star |.$$ Then, $$f'(n+\tfrac{1}{2})=\frac{f(n)f(n+1)}{\Delta_n},$$ $$f' '(n)=\frac{\Big( \frac{ f(n)f(n+1) }{\Delta_n} - \frac{f(n-1)f(n)}{\Delta_{n-1}}\Big)^\star}{\Delta_n^\star /2}.$$

In the case of the uniform geometry of ${\mathbb R}$, i.e., $\Delta_n=\Delta_n^\star:=h$, the formulas are simplified: $$f'(n+\tfrac{1}{2})=\frac{f(n)f(n+1)}{h},$$ $$f' '(n)=\frac{\Big( f(n)f(n+1)-f(n-1)f(n)\Big)^\star}{h^2 /2}.$$ Or, $$f' '(n)=\frac{\Big( f'(n+\tfrac{1}{2})-f'(n-\tfrac{1}{2})\Big)^\star}{h /2}.$$

## Newtonian physics, the discrete version

The concepts introduced above allow us to state some elementary facts about the discrete Newtonian physics, in dimension $1$.

Here both the time and space are given by ${\mathbb R}$.

The main quantities we are to study are:

• the location $r$ is a primal $0$-chain map;
• the displacement $dr$ is a primal $1$-chain map;
• the velocity $v=r'$ is a dual $0$-chain map;
• the momentum $p=mv$, where $m$ is a constant mass, is also a dual $0$-chain map;
• the impulse $J=dp$ is a dual $1$-chain map;
• the acceleration $a=r' '$ is a primal $0$-chain map.


Newton's First Law: If the net force is zero, then the velocity $v$ of the object is constant: $$F=0 \Longrightarrow v=const.$$

The law can be restated without invoking the geometry of time:

• If the net force is zero, then the displacement $dr$ of the object is constant:

$$F=0 \Longrightarrow dr=const.$$ The law shows that the only possible type of motion in this force-less and distance-less space-time is uniform; i.e., it is a repeated addition: $$r(t+1)=r(t)+c.$$
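The repeated addition described by the law is immediate to simulate. A sketch, assuming unit time steps and numeric positions:

```python
def uniform_motion(r0, c, steps):
    """Force-free motion: constant displacement dr = c per time step."""
    r = [r0]
    for _ in range(steps):
        r.append(r[-1] + c)   # r(t+1) = r(t) + c
    return r

print(uniform_motion(5, 2, 4))   # [5, 7, 9, 11, 13]
```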

Newton's Second Law: The net force on an object is equal to the derivative of its momentum $p$: $$F=p'.$$

As we have seen, the second derivative is meaningless without specifying geometry, the geometry of time.

Newton's Third Law: If one object exerts a force $F_1$ on another object, the latter simultaneously exerts a force $F_2$ on the former, and the two forces are exactly opposite: $$F_1 = -F_2.$$

Law of Conservation of Momentum: In a system of objects that is “closed”; i.e.,

• there is no exchange of matter with its surroundings, and
• there are no external forces; $\\$

the total momentum is constant: $$p=const.$$

In other words, $$J=dp=0.$$ To prove this, consider two particles interacting with each other. By the third law, the forces between them are exactly opposite: $$F_1=-F_2.$$ By the second law, we conclude that $$p_1'=-p_2',$$ or $$(p_1+p_2)'=0.$$
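The two-particle argument can be checked numerically; the forces and initial momenta below are made-up illustrative values.

```python
# Two particles with opposite forces (third law); each momentum is updated
# by its impulse over a unit time step (second law): dp = F.
F1 = [2.0, -1.0, 0.5]            # hypothetical force on particle 1 at each step
p1, p2 = 4.0, -1.0               # hypothetical initial momenta
totals = [p1 + p2]
for f in F1:
    p1 += f                      # dp1 = F1
    p2 += -f                     # third law: F2 = -F1, so dp2 = -F1
    totals.append(p1 + p2)
print(totals)                    # the total momentum never changes
```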

Exercise. State the equation of motion for a variable-mass system (such as a rocket). Hint: apply the second law to the entire, constant-mass system.

Exercise. Create a spreadsheet for all of these quantities.

To study physics in dimension $3$, one simply combines three chain functions for each of the above quantities: $$\begin{array}{ll} f_x:{\mathbb R}_t\to {\mathbb R}_x,\\ f_y:{\mathbb R}_t\to {\mathbb R}_y,\\ f_z:{\mathbb R}_t\to {\mathbb R}_z, \end{array}$$ into one: $$f=(f_x,f_y,f_z):{\mathbb R}_t\to {\mathbb R}_x\times {\mathbb R}_y\times {\mathbb R}_z= {\mathbb R}^3.$$ The space is made of cubes...
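A minimal sketch of the combination, with hypothetical component functions:

```python
# Combining three coordinate chain functions into one map into R^3.
def f_x(t): return t          # hypothetical components
def f_y(t): return 2 * t
def f_z(t): return t - 1

def f(t):                     # f = (f_x, f_y, f_z): R_t -> R^3
    return (f_x(t), f_y(t), f_z(t))

print(f(3))
```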

## Chain map as a change of variables

In general, there are no compositions of cochains and, therefore, no chain rule of this kind. However, we can compose cochains with chain maps.

Suppose we have a cell function $p:{\mathbb R}_x \to {\mathbb R}_y$. What effect does $p$ have on the cochains? More generally, we look at a chain map $p=\{p_0,p_1\}$. When bijective, this function may represent a change of scale, units, or, generally, a change of variables. We will see that $p$ creates a function on cochains.

Let's write the maps of $0$- and $1$-chains along with $0$- and $1$-cochains on ${\mathbb R}_y$ in this diagram: $$\begin{array}{llll} \dim =0:& p_0:C_0({\mathbb R}_x)\to C_0({\mathbb R}_y),& f:C_0({\mathbb R}_y) \to {\bf R},\\ \dim =1:& p_1:C_1({\mathbb R}_x)\to C_1({\mathbb R}_y),& s:C_1({\mathbb R}_y) \to {\bf R}. \end{array}$$ Their compositions are: $$\begin{array}{llll} \dim =0:& fp_0:C_0({\mathbb R}_x)\to {\bf R},\\ \dim =1:& sp_1:C_1({\mathbb R}_x)\to {\bf R}. \end{array}$$ These two are nothing but $0$- and $1$-cochains on ${\mathbb R}_x$.
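These compositions can be sketched with chains as formal sums; the generators, the values of the cochain, and the map $p_0$ below are all hypothetical.

```python
# A 0-cochain on R_y composed with the chain map p0 gives a 0-cochain on R_x.
def evaluate(cochain, chain):
    """Evaluate a cochain (values on generators) on a chain (formal sum)."""
    return sum(coeff * cochain[g] for g, coeff in chain.items())

p0 = {'A': 'X', 'B': 'Y'}          # hypothetical node map, extended linearly
f  = {'X': 5, 'Y': 7}              # hypothetical 0-cochain on R_y

fp0 = {a: f[p0[a]] for a in p0}    # the composition f p0, a 0-cochain on R_x
print(evaluate(fp0, {'A': 2, 'B': -1}))   # 2*5 + (-1)*7
```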

Proposition. The composition of a chain map and a $k$-cochain is a $k$-cochain.

As we have observed before, $p_1$ is, in a way, the derivative of $p_0$.

Theorem (Chain Rule). For a chain map $p=\{p_0,p_1\}$ and any $0$-cochain $g$ on ${\mathbb R}_y$, we have $$d(gp_0)=dgp_1.$$

Proof. First, we use the Stokes Theorem for the $0$-cochain $gp_0$, as follows. For any $1$-chain $a$, we have: $$d(gp_0)(a)=(gp_0)(\partial a)=g(p_0(\partial a)).$$ Now, we use the Algebraic Continuity Property $p_0\partial =\partial p_1$ and then the Stokes Theorem for $g$: $$\hspace{1.5in} =g\partial(p_1(a))=(dg)(p_1(a)). \hspace{1.5in}\blacksquare$$

Example. To compute the exterior derivative of $h(A)=2^{3A}$, we let $h:=gp_0$, where $$g(m):=2^m, \quad p_0(n):=3n.$$ Therefore, $$dg \big( [m,m+1] \big)=2^m$$ and $$p_1\big( [n,n+1] \big)=[3n,3(n+1)]=[3n,3n+1]+[3n+1,3n+2]+[3n+2,3n+3].$$ Therefore, by the Chain Rule we have: $$\begin{array}{lll} d(gp_0) \big( [n,n+1] \big) &= dgp_1\big( [n,n+1] \big) \\ &=dg\big( [3n,3n+1]+[3n+1,3n+2]+[3n+2,3n+3] \big)\\ &=dg\big( [3n,3n+1] \big)+ dg\big( [3n+1,3n+2] \big)+ dg\big( [3n+2,3n+3] \big)\\ &=2^{3n}+2^{3n+1}+2^{3n+2}\\ &=2^{3n}(1+2+2^2)\\ &=7\cdot 2^{3n}.\end{array}$$ The conclusion is verified by invoking the definition: $$dh \big( [n,n+1] \big) =h(n+1)-h(n)=2^{3(n+1)}-2^{3n}=2^{3n}(2^3-1)=7\cdot 2^{3n}.$$ $\square$
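The worked example can be confirmed by a short computation, a sketch in which the edge $[3n,3(n+1)]$ is expanded into unit edges:

```python
# Check d(g p0) = dg p1 for g(m) = 2^m, p0(n) = 3n, on the edges [n, n+1].
def g(m): return 2 ** m
def p0(n): return 3 * n

def dg(m):                  # dg on the unit edge [m, m+1]: g(m+1) - g(m)
    return g(m + 1) - g(m)

def via_chain_rule(n):      # dg evaluated on p1([n, n+1]), edge by edge
    return sum(dg(k) for k in range(p0(n), p0(n + 1)))

def via_definition(n):      # d(g p0)([n, n+1]) = g(p0(n+1)) - g(p0(n))
    return g(p0(n + 1)) - g(p0(n))

for n in range(5):
    assert via_chain_rule(n) == via_definition(n) == 7 * 2 ** (3 * n)
print("chain rule confirmed")
```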

Exercise. Use the Chain Rule to find the exterior derivative of (a) $2^{rA},\ r\in {\bf Z}$; (b) $3^{A^2}$.

In light of this theorem, let's examine, again, the possibility of thinking of the derivative of a discrete function $g:{\bf Z} \to {\bf R}$ as just another discrete function $g':{\bf Z} \to {\bf R}$. It is typically given by: $$g'(x)=g(x+1)-g(x),\ x\in {\bf Z}.$$ Now, let's differentiate $h(x)=g(-x)$. We have: $$h'(0)=h(1)-h(0)=g(-1)-g(0).$$ On the other hand, $$-g'(0)=-(g(1)-g(0))=g(0)-g(1),$$ no match! There is no chain rule in such a “calculus”.
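The mismatch is easy to reproduce; the choice $g(x)=x^2$ below is an arbitrary example.

```python
# "Derivative" of a discrete function as just another function on Z:
# g'(x) = g(x+1) - g(x); then differentiate h(x) = g(-x).
def g(x): return x ** 2            # hypothetical function on Z

def deriv(f):
    return lambda x: f(x + 1) - f(x)

h = lambda x: g(-x)
print(deriv(h)(0), -deriv(g)(0))   # g(-1) - g(0) vs g(0) - g(1): no match
```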

Exercise. Verify that there is such a match for the derivatives of these functions, if we see them as $1$-cochains, and confirm both of the above versions of the chain rule.

Thus, for every choice of function $$p_k: C_k({\mathbb R}_x)\to C_k({\mathbb R}_y),$$ we have a correspondence $$s \mapsto s p_k,\ s\in C^k({\mathbb R}_y).$$ So, this function $p_k$ produces from the (complete) set of $k$-cochains on ${\mathbb R}_y$, i.e., $C^k({\mathbb R}_y)$, a new (possibly incomplete) set of $k$-cochains on ${\mathbb R}_x$. This means that the correspondence points in the direction opposite to that of $p$!

Definition. Given a chain map $$p=\{p_0,p_1\},\quad p_k:C_k({\mathbb R}_x) \to C_k({\mathbb R}_y),\ k=0,1,$$ the function $$p^*=\{p^0,p^1\},\quad p^k:C^k({\mathbb R}_x) \leftarrow C^k({\mathbb R}_y),\ k=0,1,$$ given by $$p^k(s):= s p_k,\ \forall s\in C^k({\mathbb R}_y),$$ is called the cochain map of $p$.

Example (inclusion). Suppose: $$G=\{A,B\},\ H=\{A,B,AB\},$$ and $$p(A)=A,\ p(B)=B.$$ This is the inclusion:

$$\begin{array}{ll|ll} C_0(G)=< A,B >,& C_0(H)= < A,B >&\Longrightarrow C^0(G)=< A^*,B^* >,& C^0(H)= < A^*,B^* >,\\ p_0(A)=A,& p_0(B)=B & \Longrightarrow p^0=\operatorname{Id};\\ C_1(G)=0,& C_1(H)= < AB >&\Longrightarrow C^1(G)=0,& C^1(H)= < AB^* >\\ &&\Longrightarrow p^1=0. \end{array}$$ $\square$

Exercise. Modify the computation for the case when there is no $AB$.

Exercise. Compute the cochains for the following (pictured previously): $$\begin{array}{llll} &K=\{A,B,C,AB,BC\},\ L=\{X,Y,Z,XY,YZ\};\\ \text{(a) }&f(A)=X,\ f(B)=Y,\ f(C)=Z,\ f(AB)=XY,\ f(BC)=YZ;\\ \text{(b) }&f(A)=X,\ f(B)=Y,\ f(C)=Y,\ f(AB)=XY,\ f(BC)=Y. \end{array}$$

Example (identity). If $p$ is a bijection, then so are $p_0$ and $p_1$. Then the $k$-cochains on ${\mathbb R}_x$ and ${\mathbb R}_y$ are also in a bijective relation. The geometries of the two may be different though. $\square$

Example (folding). Folding of ${\mathbb R}_x$ in half and placing it on ${\mathbb R}_y$ results in a special set of cochains on ${\mathbb R}_x$: they have equal values on cells symmetric to each other. $\square$

Example (constant). A constant $p$ produces only trivial $1$-cochains. $\square$

Exercise. What if ${\mathbb R}_x$ is folded onto $[0,1]\subset {\mathbb R}_x$?

Suppose $s$ is a $1$-cochain. Then, $$(sp_1)(AB)=s(p_1(AB))=s\big( [p_0(A),p_0(B)] \big).$$ When presented in the integral notation, this gives us the following.

Theorem (Integration by Substitution). Given a chain map $\{p_0,p_1\}$, we have $$\int_{A}^{B} sp_1=\int_{p_0(A)}^{p_0(B)} s,$$ for any $1$-cochain $s$.
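A numeric check of the formula, with a hypothetical chain map $p_0(n)=2n$ (so that $p_1$ takes $[n,n+1]$ to $[2n,2n+2]$) and hypothetical values of $s$:

```python
# Integration by substitution: the integral of s p1 over [A, B] equals the
# integral of s over [p0(A), p0(B)].
def s(k):                        # value of the 1-cochain s on the edge [k, k+1]
    return k ** 2                # hypothetical values

def p0(n): return 2 * n

def integral(cochain, a, b):     # integral of a 1-cochain over [a, b]
    return sum(cochain(k) for k in range(a, b))

def sp1(n):                      # (s p1)([n, n+1]) = s evaluated on [2n, 2n+2]
    return integral(s, p0(n), p0(n + 1))

A, B = 1, 4
print(integral(sp1, A, B), integral(s, p0(A), p0(B)))
```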