This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

# Calculus on cubical complexes

## Visualizing cubical forms

In calculus, the quantities to be studied are typically *real numbers*. We choose our ring of coefficients to be $R={\bf R}$.

Meanwhile, the locus is typically the Euclidean space ${\bf R}^n$. We choose for now to concentrate on the *cubical grid*, i.e., the infinite cubical complex obtained by dividing the Euclidean space into small, simple pieces (cubes). We denote it by ${\mathbb R}^n$.

In ${\mathbb R}^1$, these pieces are: points and (closed) intervals,

- the $0$-cells: $...,\ -3,\ -2, \ -1, \ 0, \ 1, \ 2, \ 3, \ ...$, and
- the $1$-cells: $...,\ [-2,-1], \ [-1,0], \ [0,1], \ [1,2], \ ...$.

In ${\mathbb R}^2$, these parts are: points, intervals, and squares (“pixels”):

Moreover, in ${\mathbb R}^2$, we have these cells represented as products:

- $0$-cells: $\{(0,0)\}, \{(0,1)\},...;$
- $1$-cells: $[0,1] \times \{0\}$, $\{0\} \times [0,1], ...;$
- $2$-cells: $[0,1] \times [0,1],....$

In this section, we will use the calculus terminology: *differential forms* instead of cochains.

Recall that within each of these pieces, a form is constant, i.e., it is given by a single number.

Then, the following is the simplest way to understand these forms.

**Definition.** A *cubical* $k$-*form* is a real-valued function defined on $k$-cells of ${\mathbb R}^n$.

This is how we plot the graphs of forms in ${\mathbb R}^1$:

And these are $0$-, $1$-, and $2$-forms in ${\mathbb R}^2$:

To emphasize the nature of a form as a function, we can use arrows:

Here we have two forms:

- a $0$-form with $0\mapsto 2,\ 1\mapsto 4,\ 2\mapsto 3,...$; and
- a $1$-form with $[0,1]\mapsto 3,\ [1,2]\mapsto .5,\ [2,3]\mapsto 1,...$.

A more compact way to visualize is this:

Here we have two forms:

- a $0$-form $Q$ with $Q(0)=2,\ Q(1)=4,\ Q(2)=3,...$; and
- a $1$-form $s$ with $s([0,1])=3,\ s([1,2])=.5,\ s([2,3])=1,...$.
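These two forms can be modeled directly in, say, Python; the encoding below (vertices as integers, edges as vertex pairs) is just one possible convention:

```python
# A cubical 0-form: a real-valued function on the 0-cells (vertices).
Q = {0: 2, 1: 4, 2: 3}

# A cubical 1-form: a real-valued function on the 1-cells (edges),
# with each edge [n, n+1] encoded as the tuple (n, n+1).
s = {(0, 1): 3, (1, 2): 0.5, (2, 3): 1}

print(Q[1])       # the value of the 0-form at the vertex 1, i.e., 4
print(s[(1, 2)])  # the value of the 1-form on the edge [1,2], i.e., 0.5
```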

We can also use letters to label the cells, just as before. Each cell is then assigned *two* symbols: one is its name (a letter) and the other is the value of the form at that location (a number):

Here we have:

- $Q(A)=2,\ Q(B)=4,\ Q(C)=3,...$;
- $s(AB)=3,\ s(BC)=.5,\ s(CD)=1,...$.

In general, we simply label the cells with numbers, as follows:

**Exercise.** Another way to visualize forms is with color. Implement this idea with Excel.

## Forms as integrands

It is common for a student to overlook the distinction between chains and cochains/forms and to speak of the latter as linear combinations of cells. The confusion is understandable because they “look” identical. Frequently, one just assigns numbers to cells in a complex as we did above.

The difference is that these numbers aren't the coefficients of the cells in some chain but the *values* of the $1$-cochain on these cells. The idea becomes explicit when we think in calculus terms:

- forms are integrands, and
- chains are domains of integration.

In the simplest setting, we deal with the intervals in the complex of the real line ${\mathbb R}$. Then the form assigns a number to each interval to indicate the values to be integrated and the chain indicates how many times the interval will appear in the integral, typically once:

Here, we have: $$\begin{array}{lllllllll} h(a)&=\int _a h \\ &=\int _{[0,1]} h &+ \int _{[1,2]} h &+\int _{[2,3]} h &+\int _{[3,4]} h&+\int _{[4,5]} h\\ &=3&+.5&+1&+2&+1. \end{array}$$
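This computation amounts to summing the values of the form on the edges of the chain, each weighted by that edge's coefficient. A minimal sketch, using the same values as above:

```python
# Integrate a 1-form over a 1-chain: sum the values of the form on the
# edges, weighted by the (integer) coefficient of each edge in the chain.
def integrate(form, chain):
    return sum(coeff * form[edge] for edge, coeff in chain.items())

# The form h from the text, on the edges of [0,5]:
h = {(0, 1): 3, (1, 2): 0.5, (2, 3): 1, (3, 4): 2, (4, 5): 1}

# The chain a = [0,1] + [1,2] + [2,3] + [3,4] + [4,5]: each edge once.
a = {edge: 1 for edge in h}

print(integrate(h, a))  # 3 + .5 + 1 + 2 + 1 = 7.5
```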

The simplest form of this kind is the form that assigns $1$ to each interval in the complex ${\mathbb R}$. We call this form $dx$. Then any form $h$ can be built from $dx$ by multiplying -- cell by cell -- by a discrete function that takes values $3,.5,1,2,1$ on these cells:

The main property of this new form is: $$\int _{[A,B]}dx=B-A.$$
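This property can be seen in a one-line sketch: integrating $dx$ over the chain of edges from $A$ to $B$ just counts the edges.

```python
# dx assigns 1 to every edge [n, n+1]; integrating it over the chain of
# edges from the integer A to the integer B counts the edges, B - A of them.
def integrate_dx(A, B):
    return sum(1 for n in range(A, B))

print(integrate_dx(-2, 5))  # 7 = 5 - (-2)
```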

**Exercise.** Show that every $1$-form in ${\mathbb R}^1$ is a “multiple” of $dx$:
$$h=Pdx.$$
What about $0$-forms?

Next, ${\mathbb R}^2$:

In the diagram,

- the names of the cells are given in the first row;
- the values of the form on these cells are given in the second row; and
- the algebraic representation of the forms is in the third.

The second row gives one a compact representation of the form without naming the cells.

Discrete differential forms (cochains) are real-valued, linear functions defined on chains:

One should recognize the second line as a line integral: $$\psi (h)=\int _h \psi .$$

What is $dx$ in ${\mathbb R}^2$? Naturally, its values on the edges parallel to the $x$-axis are $1$'s and on the ones parallel to the $y$-axis are $0$'s:

Of course, $dy$ is the exact opposite. Algebraically, their representations are as follows:

- $dx([m,m+1]\times \{n\})=1,\ dx(\{m\} \times [n,n+1] )=0$;
- $dy([m,m+1]\times \{n\})=0,\ dy(\{m\} \times [n,n+1] )=1$.

Now we consider a general $1$-form: $$P dx + Q dy,$$ where $P,Q$ are discrete functions, not just numbers, that may vary from cell to cell. For example, this could be $P$:

**Exercise.** Show that every $1$-form in ${\mathbb R}^2$ is such a “linear combination” of $dx$ and $dy$.

At this point, we can integrate this form. For example, suppose $S$ is the chain that represents the $2\times 2$ square in this picture going clockwise. The edges are oriented, as always, along the axes. Let's consider the line integral computed along this curve one cell at a time starting at the left lower corner: $$\int _S Pdx = 0\cdot 0 + 1\cdot 0 + (-1)\cdot 1 + 1\cdot 1 + 0\cdot 0 + 2\cdot 0 + 3\cdot (-1) + 1\cdot (-1).$$ We can also compute: $$\int _S Pdy = 0\cdot 1 + 1\cdot 1 + (-1)\cdot 0 + 1\cdot 0 + 0\cdot (-1) + 2\cdot (-1) + 3\cdot 0 + 1\cdot 0.$$ If $Q$ is also provided, the integral $$\int _S Pdx+Qdy$$ is a similar sum.
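The bookkeeping in this sum can be sketched in Python. The encoding of oriented edges as vertex pairs and the values of $P$ below are made up for illustration; they are not the ones from the picture:

```python
# Line integral of P dx over a chain of oriented grid edges: only the
# horizontal edges contribute (dx = 1 on them, dx = 0 on vertical edges).
# The chain lists (edge, coefficient) pairs recording each edge's
# orientation along the path.
def integrate_Pdx(P, chain):
    total = 0
    for edge, coeff in chain:
        (x0, y0), (x1, y1) = edge
        if y0 == y1:              # horizontal edge: dx contributes 1
            total += coeff * P[edge]
    return total

# Made-up values of P on the four edges of the unit square:
P = {((0, 0), (1, 0)): 4, ((0, 1), (1, 1)): 2,
     ((0, 0), (0, 1)): 1, ((1, 0), (1, 1)): 3}
# Traverse the square counterclockwise: bottom, right, then the top and
# left edges against their orientations (coefficient -1).
square = [(((0, 0), (1, 0)), 1), (((1, 0), (1, 1)), 1),
          (((0, 1), (1, 1)), -1), (((0, 0), (0, 1)), -1)]
print(integrate_Pdx(P, square))  # 4 - 2 = 2
```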

Next, we illustrate $2$-forms in ${\mathbb R}^2$:

The double integral over this square, $S$, is $$\int _S Adxdy = 1+2+0-1=2.$$ And we can understand $dx \hspace{1pt} dy$ as a $2$-form that takes the value of $1$ on each cell:

**Exercise.** Compute $\int_S dxdy$, where $S$ is an arbitrary collection of $2$-cells.

## The algebra of forms

We already know that the discrete differential forms, as cochains, are organized into vector spaces, one for each degree. Let's review this first.

If $p,q$ are two forms of the same degree $k$, it is easy to define algebraic operations on them.

First, their *addition*. The sum $p + q$ is a form of degree $k$ too and is computed as follows:
$$(p+q)(a) := p(a) + q(a),$$
for every $k$-cell $a$.

As an example, consider two $1$-forms, $p,q$. Suppose these are their values defined on the $1$-cells (in green):

Then $p+q$ is found by: $$1+1=2,\ -1+1=0,\ 0+2=2,\ 3+0=3,$$ as we compute the four values of the new form one cell at a time.

Next, *scalar multiplication* is also carried out cell by cell:
$$(\lambda p)(a) := \lambda p(a), \ \lambda \in {\bf R},$$
for every $k$-cell $a$.

We know that these operations satisfy the required properties: associativity, commutativity, distributivity, etc. Thus, we have a vector space: $$C^k=C^k({\mathbb R}^n),$$ the space of $k$-forms on the cubical grid of ${\bf R}^n$.
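Both operations can be sketched cell by cell, with the values of $p,q$ taken from the example above (the cell names are hypothetical labels):

```python
# Cellwise operations on forms stored as dicts (cell -> value).
def add_forms(p, q):
    return {cell: p[cell] + q[cell] for cell in p}

def scale_form(lam, p):
    return {cell: lam * p[cell] for cell in p}

p = {'a': 1, 'b': -1, 'c': 0, 'd': 3}
q = {'a': 1, 'b': 1, 'c': 2, 'd': 0}
print(add_forms(p, q))   # {'a': 2, 'b': 0, 'c': 2, 'd': 3}
print(scale_form(2, p))  # {'a': 2, 'b': -2, 'c': 0, 'd': 6}
```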

There is, however, an operation on forms that we haven't seen yet.

Can we make $dxdy$ from $dx$ and $dy$? The answer is provided by the *wedge product* of forms:
$$dxdy=dx\wedge dy.$$
Here we have:

- a $1$-form $dx \in C^1({\mathbb R}_x)$ defined on the horizontal edges,
- a $1$-form $dy \in C^1({\mathbb R}_y)$ defined on the vertical edges, and
- a $2$-form $dxdy \in C^2({\mathbb R}^2)$ defined on the squares.

But squares are products of edges:
$$\alpha=a \times b.$$
Then, we simply set:
$$(dx\wedge dy) (a\times b):=dx(a)\cdot dy(b).$$
What about $dydx$? When we interchange the basis elements we make use of the anti-commutativity of cubical chains under products:
$$a \times b = -b\times a.$$
Then,
$$\begin{array}{llllllll}
(dy\wedge dx) (a\times b) &= (dy\wedge dx) (-b\times a)\\
&=-dy(b)\cdot dx(a)\\
&=-dx(a)\cdot dy(b) \\
&=-(dx\wedge dy) (a\times b).
\end{array}$$
Therefore, the wedge product is also *anti-commutative*. The result matches calculus:
$$\int _\alpha dy dx=-\int_\alpha dxdy.$$

Now, suppose we have two *arbitrary* $1$-forms $p,q$ and we want to define their wedge product on the square $\alpha:= a\times b$. We can't use the simplest definition:
$$(p \wedge q)(a \times b) \stackrel{?}{=} p(a) \cdot q(b) ,$$
as it fails to be anti-commutative:
$$(q \wedge p)(a \times b) = q(a) \cdot p(b) = p(b) \cdot q(a).$$
Then, the idea is that, since we need both of these terms:
$$p (a) q(b) \quad p (b) q(a),$$
we should combine them.

**Definition.** The *wedge product* of two $1$-forms is a $2$-form given by
$$(p \wedge q)(a \times b):=p (a) q(b) - p (b) q(a).$$

The minus sign is what gives us the *anti*-commutativity:
$$(q \wedge p)(a \times b)=q (a) p(b) - q (b) p(a)=-\Big(p (a) q(b) - p (b) q(a)\Big)=-(p \wedge q)(a \times b).$$

**Proposition.**
$$dxdx=0,\ dydy=0.$$

**Exercise.** Prove the proposition.
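Here is a minimal numeric check of the definition; the edges $a,b$ and the values of $p,q$ below are made up for illustration:

```python
# Wedge product of two 1-forms on the square a x b, per the definition:
# (p ^ q)(a x b) = p(a) q(b) - p(b) q(a).
def wedge(p, q, a, b):
    return p[a] * q[b] - p[b] * q[a]

# Hypothetical values of two 1-forms on an edge a and an edge b:
p = {'a': 2, 'b': 5}
q = {'a': 3, 'b': 7}

print(wedge(p, q, 'a', 'b'))  # 2*7 - 5*3 = -1
print(wedge(q, p, 'a', 'b'))  # 3*5 - 7*2 = 1: anti-commutativity
print(wedge(p, p, 'a', 'b'))  # 0: matches the proposition, as for dx ^ dx
```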

This is an illustration of the relation between the product of cubical chains and the wedge product of cubical forms:

The general definition is as follows.

Recall that, for our cubical grid ${\mathbb R}^n$, the cells are the cubes given as products: $$Q=\Pi _{k=1}^{n}A _k,$$ where each $A_k$ is either a vertex or an edge in the $k$th component of the space. We can derive the formula for the wedge product in terms of these components. If we omit the vertices, a $(p+q)$-cube can be rewritten as $$Q=\Pi _{i=1}^{p+q}I _i,$$ where $I_i$ is its $i$th edge.

**Definition.** The *wedge product* of a $p$-form and a $q$-form is a $(p+q)$-form given by
$$(\varphi ^p \wedge \psi ^q)(I):=\sum _s (-1)^{\pi (s)}\varphi ^p(\Pi _{i=1}^{p}I _{s(i)}) \cdot \psi ^q(\Pi _{i=p+1}^{p+q}I _{s(i)}),$$
where the summation is taken over all permutations $s$ of $\{1,2,...,p+q\}$, $\pi (s)$ stands for the parity of $s$, and the superscripts are the degrees of the forms.

**Exercise.** Verify that $Pdx=P\wedge dx$. Hint: what is the dimension of the space?

**Proposition.** The wedge product satisfies the *skew-commutativity*:
$$\varphi ^m \wedge \psi ^n= (-1)^{mn} \psi ^n \wedge \varphi ^m.$$

Under this formula, we have the anti-commutativity when $m=n=1$, as above.

**Exercise.** Prove the proposition.

Unfortunately, the wedge product isn't associative!

**Exercise.** Give an example of this:
$$\phi ^1 \wedge (\psi ^1 \wedge \theta ^1) \ne (\phi ^1 \wedge \psi ^1) \wedge \theta ^1 .$$

The crucial difference between the linear operations and the wedge product is that the former two act *within* the space of $k$-forms:
$$+,\cdot : C^k \times C^k \to C^k;$$
while the latter acts *outside*:
$$\wedge : C^k \times C^m \to C^{k+m}.$$
We can make both operate within the same space if we define them on the *graded space* of all forms:
$$C^*:=C^0 \oplus C^1 \oplus...$$

## Derivatives of functions vs. derivatives of forms

The difference is:

- the derivative of a function is the *rate of change*, while
- the exterior derivative of a $0$-form is the *change*.

The functions we are dealing with are discrete. At their simplest, they are defined on the integers:
$$n=...,-1,0,1,2,3,....$$
They change abruptly and, of course, the change is simply the *difference of values*:
$$f(n+1)-f(n).$$
The only question is *where to assign* this number as the value of some new function. What is the nature of the input of this function?

The illustration below suggests the answer:

The output should be assigned to the (oriented) edge that connects $n$ to $n+1$: $$[n,n+1] \mapsto f(n+1)-f(n).$$ Assigning this number to either of the two end-points would violate the symmetry of the situation. If the input changes in the opposite way, so does the change of the output, as expected: $$[n+1,n]=-[n,n+1] \mapsto f(n)-f(n+1).$$

Let's look at this construction from the point of view of our study of motion. Suppose function $p$ gives the position and suppose

- at time $n$ hours we are at the $5$ mile mark: $p(n)=5$, and then
- at time $n+1$ hours we are at the $7$ mile mark: $p(n+1)=7$.

We don't know what exactly has happened during this hour. However, the simplest assumption would be that we have been walking at a constant speed of $2$ miles per hour. Now, instead of our velocity function $v$ assigning this value to each instant of time during this period, it is assigned to the *whole* interval:
$$v\Bigg( [n,n+1] \Bigg)=2.$$
This way, the elements of the domain of the velocity function are the *edges*.

The relation between a discrete function and its change is illustrated below:

**Definition.** The *exterior derivative* of a discrete function $f$ at $[n,n+1]$ is defined to be the number
$$(df)[n,n+1]:=f(n+1)-f(n).$$

On the whole domain of $f$, we have:

- $f$ is defined on the $0$-cells, and
- $df$ is defined on $1$-cells.

**Definition.** The *exterior derivative* of a $0$-form $f$ is a $1$-form $df$ given by the above formula.
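A sketch of this definition, applied to the position function from the motion example (with the function given, hypothetically, as a list of its values at the vertices $0,1,2,...$):

```python
# Exterior derivative of a discrete function f defined on the vertices
# 0..N: df assigns the difference f(n+1) - f(n) to the edge [n, n+1].
def exterior_derivative(f):
    return {(n, n + 1): f[n + 1] - f[n] for n in range(len(f) - 1)}

p = [5, 7, 8, 6]     # e.g., the mile marks at hours 0, 1, 2, 3
v = exterior_derivative(p)
print(v[(0, 1)])     # 2: the constant-speed "velocity" on [0,1]
print(v)             # {(0, 1): 2, (1, 2): 1, (2, 3): -2}
```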

Let's contrast:

- the derivative of a function $f$ at $x=a$ is a number $f'(a)$ assigned to point $a$, so the derivative function $f'$ of a function $f$ on ${\bf R}$ is another function on ${\bf R}$,
- the exterior derivative of a function $f$ at $[n,n+1]$ is a number $df([n,n+1])$ assigned to interval $[n,n+1]$, so the exterior derivative $df$ of a function $f$ on $C^0({\mathbb R})$ is a function on $C^1({\mathbb R})$.

Furthermore, if the interval was of length $h$, we would see the obvious difference between the derivative and the exterior derivative:
$$\frac{f(a+h)-f(a)}{h} \text{ vs. } f(a+h)-f(a).$$
Unlike the former, the latter can be defined over *any ring* $R$.

**Proposition.** The exterior derivative is a linear operator
$$d:C^0({\mathbb R})\to C^1({\mathbb R}). $$

**Exercise.** Prove the proposition.

**Exercise.** State and prove the analogs of the familiar theorems from calculus about the relation between the exterior derivative and: (a) monotonicity, and (b) extreme points.

Let's next approach this operator from the point of view of the *Fundamental Theorem of Calculus*.

Given $f \in C^0({\mathbb R})$, we take our definition of $df \in C^1({\mathbb R})$ as $$df([a,a+1]) := (f(a+1) - f(a))$$ and simply rewrite it in the integral notation: $$\displaystyle\int_{[a,a+1]} df = f(a+1) - f(a).$$ Observe that here $[a,a+1]$ is a $1$-cell and the points $\{a+1\},\{a\}$ make up the boundary of this cell. In fact, $$\partial [a,a+1]=\{a+1\}-\{a\},$$ taking into account the orientation of this edge.

What about integration over “longer” intervals? For $a,k \in {\bf Z}$, we have: $$\begin{array}{llll} \displaystyle\int_{[a,a+k]} df &= \displaystyle\int_{[a,a+1]} df + \displaystyle\int_{[a+1,a+2]}df + ... + \displaystyle\int_{[a+k-1,a+k]} df \\ &= f(a+1)-f(a) + f(a+2) - f(a+1) + ... + f(a+k) - f(a+(k-1)) \\ &= f(a+k) -f(a). \end{array}$$ We have simply applied the definition repeatedly.

Thus, the Fundamental Theorem of Calculus still holds, in its “net change” form.
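The telescoping computation above is easy to check numerically; a sketch, with an arbitrary sample $0$-form:

```python
# Fundamental Theorem of Calculus, net-change form: integrating df over
# [a, a+k] telescopes to f(a+k) - f(a).
def integrate_df(f, a, k):
    return sum(f[n + 1] - f[n] for n in range(a, a + k))

f = [1, 4, 9, 16, 25, 36]        # a sample 0-form on the vertices 0..5
print(integrate_df(f, 1, 4))     # 36 - 4 = 32 = f(5) - f(1)
```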

In fact, we can now rewrite it in a fully algebraic way. Indeed, $$\sigma = [a,a+1]+[a+1,a+2]+...+[a+k-1,a+k]$$ is a $1$-chain and $df$ is a $1$-cochain. Then the above computation takes this form: $$\begin{array}{llll} df(\sigma) &= df([a,a+1]+[a+1,a+2]+...+[a+k-1,a+k]) \\ &= df([a,a+1])+df([a+1,a+2])+...+df([a+k-1,a+k]) \\ &= f(a+k) -f(a) \\ &= f(\partial[a,a+k]). \end{array}$$

The resulting interaction of the operators of (exterior) derivative and boundary,
$$\begin{array}{|c|}
\hline
\\
\quad df(\sigma) =f(\partial\sigma), \quad \\
\\
\hline
\end{array}$$
is an instance of the *General Stokes Theorem*.

**Exercise.** Explain the relation between this formula and the formula used for integration by substitution, $df = f' dx$.

To summarize,

*the derivative of a $0$-form is a $1$-form equal to the difference of its values.*

**Exercise.** Show how the definition, and the theorem, is applicable to any cubical complex $R \subset {\mathbb R}$. Hint:

Next we consider the case of the space of dimension $2$ and forms of degree $1$. Given a $0$-form $f$ (in red), we compute its derivative $df$ (in green):

Once again, it is computed by taking differences.

Let's make this more specific. We consider the differences horizontally (orange) and vertically (green):

According to our definition, we have:

- (orange) $df([a,a+1] \times \{b\}) := f\Bigg(\{(a+1,b)\}\Bigg) - f\Bigg(\{(a,b)\}\Bigg)$
- (green) $df(\{a \} \times [b,b+1]) := f\Bigg(\{(a, b+1)\}\Bigg) - f\Bigg(\{(a,b)\}\Bigg)$.

Therefore, we have: $$df = \langle \nabla f , dA \rangle,$$ where $$dA := (dx,dy), \quad \nabla f := (d_xf,d_yf).$$ The notation is justified if we interpret the above as “partial exterior derivatives”:

- $d_xf([a,a+1] \times \{b\}) := f(a+1,b) - f(a,b)$
- $d_yf(\{a \} \times [b,b+1]) := f(a, b+1) - f(a,b)$.
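These two partial exterior derivatives can be sketched as follows, with the $0$-form stored as a dict keyed by vertices $(a,b)$ and its values made up for illustration:

```python
# Partial exterior derivatives of a 0-form f on the grid.
def d_x(f, a, b):   # the value of d_x f on the horizontal edge [a,a+1] x {b}
    return f[(a + 1, b)] - f[(a, b)]

def d_y(f, a, b):   # the value of d_y f on the vertical edge {a} x [b,b+1]
    return f[(a, b + 1)] - f[(a, b)]

f = {(0, 0): 1, (1, 0): 3, (0, 1): 0, (1, 1): 5}
print(d_x(f, 0, 0))  # 3 - 1 = 2
print(d_y(f, 0, 0))  # 0 - 1 = -1
```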

## The exterior derivative of a $1$-form

What about the higher degree forms?

Let's start with $1$-forms in ${\mathbb R}^2$. The exterior derivative is meant to represent the *change* of the values of the form as we move around the space. This time, we have possible change in both horizontal and vertical directions. Then, we will be able to capture these quantities with a single number as a *combination of the changes*:

If we concentrate on a single square, these differences are computed on the opposite edges of the square.

Just as in the last subsection, the question arises: where to assign this value? Conveniently, the resulting value can be given to the square itself.

We will justify the negative sign in the formula below.

With each $2$-cell given a number in this fashion, the exterior derivative of a $1$-form is a $2$-form.

**Exercise.** Define the exterior derivative of $1$-forms in ${\mathbb R}$.

Let's consider the exterior derivative for a $1$-form defined on the edges of this square oriented along the $x$- and $y$-axes:

**Definition.** The *exterior derivative* of a $1$-form $\varphi$ at $2$-cell $\tau$ is the difference of the changes of $\varphi$ with respect to $x$ and $y$ along the edges of $\tau$:
$$d \varphi(\tau) = \Bigg(\varphi(c) - \varphi(a) \Bigg) - \Bigg( \varphi(b) - \varphi(d) \Bigg).$$

Why minus? Let's rearrange the terms:
$$d \varphi(\tau) = \varphi(d) + \varphi(c) - \varphi(b) - \varphi(a).$$
What we see is that we go *full circle* around $\tau$, counterclockwise with the correct orientations! Of course, we recognize this as a line integral. We can read this formula as follows:
$$\int_{\tau}d \varphi=\int _{\partial \tau} \varphi.$$
Algebraically, it is simple:
$$d \varphi(\tau) = \varphi(d) + \varphi(c) + \varphi(-b) + \varphi(-a)= \varphi(d+c-b -a) = \varphi(\partial\tau). $$
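Both sides of this identity can be computed in a few lines; the values of $\varphi$ on the edges $a,b,c,d$ below are made up for illustration:

```python
# Exterior derivative of a 1-form on a square tau with edges a, b, c, d:
# d phi (tau) = (phi(c) - phi(a)) - (phi(b) - phi(d)).
def d_phi(phi):
    return (phi['c'] - phi['a']) - (phi['b'] - phi['d'])

# Stokes check: the same number as phi evaluated on the boundary chain
# d + c - b - a.
def phi_on_boundary(phi):
    return phi['d'] + phi['c'] - phi['b'] - phi['a']

phi = {'a': 1, 'b': 4, 'c': 2, 'd': 3}
print(d_phi(phi))            # (2-1) - (4-3) = 0
print(phi_on_boundary(phi))  # 3 + 2 - 4 - 1 = 0: the same value
```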

The resulting interaction of the operators of exterior derivative and boundary takes the same form as for dimension $1$ discussed above: $$d\varphi =\varphi\partial.$$ Again, it is an instance of the General Stokes Theorem, which can be used as the definition of $d$.

Let's represent our $1$-form as
$$\varphi = A dx + B dy,$$
where $A,B$ are the *coefficient functions* of $\varphi$:

- $A$ gives the numbers assigned to the horizontal edges: $\varphi (b),\varphi (d)$, and
- $B$ gives the numbers assigned to the vertical edges: $\varphi (a),\varphi (c)$.

Now, if we think one axis at a time, we use the last subsection and conclude that

- $A$ is a $0$-form with respect to $y$ and $dA=(\varphi(b) - \varphi(d))dy$, and
- $B$ is a $0$-form with respect to $x$ and $dB=(\varphi(c) - \varphi(a))dx$.

Now, from the definition we have: $$\begin{array}{llllllll} d \varphi &= \Bigg( (\varphi(c) - \varphi(a) ) - ( \varphi(b) - \varphi(d) )\Bigg)dxdy\\ &= (\varphi(c) - \varphi(a) )dxdy - ( \varphi(b) - \varphi(d) )dxdy\\ &= (\varphi(c) - \varphi(a) )dxdy + ( \varphi(b) - \varphi(d) )dydx\\ &= \Bigg( (\varphi(c) - \varphi(a) )dx\Bigg)dy + \Bigg(( \varphi(b) - \varphi(d) )dy\Bigg)dx\\ &= dBdy+dA dx. \end{array}$$

We have proven the following.

**Theorem.**
$$d (A dx + B dy) = dA \wedge dx + dB \wedge dy.$$

**Exercise.** Show how the result matches Green's Theorem.

In these two subsections, we see the same pattern: if $\varphi \in C^k$ then $d \varphi \in C^{k+1}$ and

- $d \varphi$ is obtained from $\varphi$ by applying $d$ to each of the coefficient functions involved in $\varphi$.

## Cubical forms with a spreadsheet

This is how $0$-, $1$-, and $2$-forms are presented in Excel:

The difference from $0$-, $1$-, and $2$-chains is only that there are no gaps!

The exterior derivative in dimensions $1$ and $2$ can be easily computed according to the formulas provided above. The only difference from the algebra we have seen is that here we have to present the results in terms of the coordinates with respect to cells. They are listed at the top and on the left.

The case of ${\mathbb R}$ is explained below. The computation is shown on the right and explained on the left:

The Excel formulas are hidden but only these two need to be explained:

- first, “$B=\partial a$, $0$-chain numbers $B\underline{\hspace{.2cm}}i$ assigned to $0$-cells, differences of adjacent values of a” is computed by

$$=R[-4]C-R[-4]C[-1];$$

- second, “$df\underline{\hspace{.2cm}}i=df(a\underline{\hspace{.2cm}}i)$, $1$-cochain, differences of adjacent values of $f$ -- the output” is computed by

$$=R[-16]C-R[-16]C[1].$$

Thus, the exterior derivative is computed with the Stokes Theorem, as well as following the formula provided in the definition. We can see how the results match.

Link to file: Spreadsheets.

**Exercise.** Create a spreadsheet for “antidifferentiation”.

**Exercise.** (a) Create a spreadsheet that computes the exterior derivative of $1$-forms in ${\mathbb R}^2$ directly. (b) Combine it with the spreadsheet for the boundary operator to confirm the Stokes Theorem.

## Bird's-eye view of calculus

We now have access to a bird's-eye view of the topological part of calculus, as follows.

Suppose we are given the cubical grid ${\mathbb R}^n$ of ${\bf R}^n$. On this complex, we have the vector spaces of $k$-chains $C_k$. Combined with the boundary operator $\partial$, they form the chain complex $\{C_*,\partial\}$ of $K$: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{rrrrrrrrrrrrr} 0& \ra{\partial_{n+1}=0} & C_n & \ra{\partial_{n}}& ... &\ra{\partial_{1}} & C_0 &\ra{\partial_{0}=0} & 0 . \end{array} $$ The next layer is the cochain complex $\{C^*,d\}$, formed by the vector spaces of forms $C^k=(C_k)^*,\ k=0,1,...$: $$\newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{rrrrrrrrrrrrr} 0& \la{d=0} & C^n & \la{d} & ... & \la{d} & C^0 &\la{d=0} &0 . \end{array} $$ Here $d$ is the exterior derivative. The latter diagram is the “dualization” of the former as explained above: $$d\varphi (x)=\varphi\partial (x).$$ The shortest version of this formula is as follows.

**Theorem (General Stokes Formula).**
$$\partial ^*=d.$$

Rather than using it as a theorem, we have used it as a formula that defines the exterior derivative.

The main properties of the exterior derivative follow.

**Theorem.** The operator $d: C^k \to C^{k+1}$ is linear.

**Theorem (Product Rule - Leibniz Rule)**
$$d(\varphi \wedge \psi) = d \varphi \wedge \psi + (-1)^k \varphi \wedge d \psi .$$

**Exercise.** Prove the theorem for dimension $2$.

**Theorem (Double Derivative Property).** $dd : C^k \to C^{k+2}$ is zero.

**Proof.** We prove only $dd=0 : C^0({\mathbb R}^2) \to C^2({\mathbb R}^2)$.

Suppose $A,B,C,D$ are the values of a $0$-form $h$ at these vertices:

We compute the values of $dh$ on these edges, as differences. We have: $$-(B-A) + (C-D) + (B-C) - (A-D) = 0,$$ where the first two are vertical and the second two are horizontal. $\blacksquare$

In general, the property follows from the *double boundary property*.

The proof indicates that the two mixed partial derivatives are equal: $$\Phi_{xy} = \Phi_{yx},$$ just as in Clairaut's theorem.
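The telescoping in the proof can be checked for arbitrary corner values; the counterclockwise labeling of the corners below is a hypothetical convention, not necessarily the one in the picture:

```python
# dd = 0 on one square: take a 0-form h on the four corners (labeled
# counterclockwise), compute dh on the four edges as differences, then
# evaluate d(dh) on the square as the circulation of dh around the
# boundary; the terms always cancel.
def ddh(corners):
    A, B, C, D = corners
    bottom = B - A    # dh on the bottom edge, oriented along the x-axis
    right  = C - B    # dh on the right edge, oriented along the y-axis
    top    = C - D    # dh on the top edge, oriented along the x-axis
    left   = D - A    # dh on the left edge, oriented along the y-axis
    # counterclockwise circulation: bottom + right - top - left
    return bottom + right - top - left

print(ddh((1, 7, -2, 5)))  # 0, whatever the corner values
```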

**Exercise.** Prove $dd : C^1({\mathbb R}^3) \to C^3({\mathbb R}^3)$ is zero.

**Exercise.** Compute $dd : C^1 \to C^3$ for the following form:

How to introduce a non-trivial second derivative is discussed later.

## Algebraic properties of the exterior derivative

We start with forms on an arbitrary complex.

The exterior derivative is linear: $$d(\alpha f+\beta g)=\alpha df+\beta dg,$$ for all forms $f,g$ and all $\alpha,\beta \in {\bf R}$. Therefore, we have the following two familiar facts:

**Theorem (Sum Rule).** For any two $k$-forms $f,g$, we have
$$d(f + g)= df + dg.$$

**Theorem (Constant Multiple Rule).** For any $k$-form $f$ and $c \in {\bf R}$, we have:
$$d(cf) = c\cdot df.$$

For the next two, we limit the functions to the ones defined on the real line.

**Theorem (Power Formula).** On complex ${\mathbb R}$, we have for any positive integer $k$:
$$d (x^{\underline {k}})([n,n+1])=kn^{\underline {k-1}},$$
where
$$n^{\underline {k}} = n(n-1)(n-2)(n-3)...(n-k+1). $$

**Theorem (Exponent Formula).** On complex ${\mathbb R}$, we have for any positive real $b$:
$$d(b^n)([n,n+1])=(b-1)b^n.$$

**Theorem (Trig Formulas).** On complex ${\mathbb R}$, we have for any real $a$:
$$\begin{array}{lllll}
d(\sin an)([n,n+1])=2\sin (a/2) \cos \big(a(n+1/2)\big),\\
d(\cos an)([n,n+1])=-2\sin (a/2) \sin \big(a(n+1/2)\big).
\end{array}$$

**Theorem (Product Rule).** For any two $0$-forms $f,g$ on ${\mathbb R}$, we have
$$d(f \cdot g)([n,n+1]) = f(n + 1)dg([n,n+1]) + df([n,n+1])g(n).$$

**Theorem (Quotient Rule).** For any two $0$-forms $f,g$ on ${\mathbb R}$, we have
$$d(f/g)([n,n+1]) = \frac{df([n,n+1])g(n) - f(n)dg([n,n+1])}{g(n)g(n + 1)},$$
provided $g(n),g(n+1) \ne 0$.
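These formulas can be spot-checked numerically. The sketch below, with arbitrarily chosen $n$, $k$, $b$, $a$, verifies the Power, Exponent, and Trig formulas; the helper `falling` computes the falling power $n^{\underline{k}}$:

```python
# Numerical check of the discrete rules on Z, with df([n,n+1]) = f(n+1) - f(n).
import math

def d(f, n):
    return f(n + 1) - f(n)

def falling(n, k):   # the falling power n(n-1)...(n-k+1)
    out = 1
    for i in range(k):
        out *= n - i
    return out

n, k, b, a = 5, 3, 2.0, 0.7
# Power Formula: d(x^(k))([n,n+1]) = k n^(k-1)
assert d(lambda m: falling(m, k), n) == k * falling(n, k - 1)
# Exponent Formula: d(b^n)([n,n+1]) = (b-1) b^n
assert abs(d(lambda m: b**m, n) - (b - 1) * b**n) < 1e-9
# Trig Formula: d(sin an)([n,n+1]) = 2 sin(a/2) cos(a(n+1/2))
assert abs(d(lambda m: math.sin(a * m), n)
           - 2 * math.sin(a / 2) * math.cos(a * (n + 0.5))) < 1e-9
print("all discrete rules check out")
```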

**Exercise.** Prove these theorems.

**Exercise.** Derive the “Power Formula” for $k<0$.

**Exercise.** Derive the “Log Formula”: $d(\log_b an)([n,n+1])=?$.

**Theorem (Chain Rule 1).** For an integer-valued $0$-form $g$ on complex $K$ and any $0$-form $f$ on ${\mathbb R}$, we have
$$d(fg)=fdg.$$

**Proof.** Using the Stokes Theorem twice, with any $1$-chain $a$, we have:
$$d(fg)(a)=(fg)(\partial a)=f(g(\partial a))=f(dg)(a).$$
$\blacksquare$

Further, a cubical map $g:K \to {\mathbb R}$ generates a chain map $$g_k:C_k(K)\to C_k({\mathbb R}),\ k=0,1.$$ Then $g_0$ can be thought of as an integer-valued $0$-form on $K$, and $g_1$ is, in a way, the derivative of $g_0$.

**Theorem (Chain Rule 2).** For a cubical map $g:K \to {\mathbb R}$ and any $0$-form $f$ on ${\mathbb R}$, we have
$$d(fg_0)=dfg_1.$$

**Proof.** Using the Stokes Theorem once, with any $1$-chain $a$, we have again:
$$d(fg_0)(a)=(fg_0)(\partial a)=f(g_0(\partial a)).$$
Now, using the algebraic continuity property $\partial g=g\partial$ and the Stokes Formula, we conclude:
$$f(g_0(\partial a))=f(\partial g_1(a))=(df)(g_1(a)).$$
$\blacksquare$

In the last section, we rejected the idea of the derivative of a discrete function $f:{\bf Z} \to {\bf R}$ as a function of the same nature $f':{\bf Z} \to {\bf R}$ on the grounds that this approach doesn't match our idea of the integral as the area under the graph of $f'$. Certainly, there are other issues. Suppose $g'(x)=g(x+1)-g(x),\ x\in {\bf Z}$. Now, what if we consider $h(x)=g(-x)$? Then $h'(0)=h(1)-h(0)=g(-1)-g(0)$. On the other hand, $-g'(0)=-(g(1)-g(0))=g(0)-g(1)$, no match! There is no chain rule in such a “calculus”.

**Exercise.** Verify that there is such a match for the derivatives of these functions, if we see them as $1$-cochains, and confirm both of the versions of the chain rule.

## Tangent spaces

Suppose we want to compute the *work* of a constant force along a straight path. As we know,
$$\text{ work } = \text{ force }\cdot \text{ distance}.$$
This simple formula only works if we carefully take into account the *direction* of motion relative to the direction of the force $F$. For example, if you move forward and then back, the work breaks into two parts and they may cancel each other. The idea is that the work $W$ may be positive or negative and we should speak of the *displacement* $D$ rather than the distance. We then amend the formula as:
$$W = F \cdot D.$$

Now, in the context of discrete calculus, the displacement $D$ may be given by a single oriented edge in ${\mathbb R}$, or a combination of edges. It is a $1$-*chain*. Furthermore, the force $F$ defines $W$ as a linear function of $D$. It is a $1$-*form*!

The need for considering directions becomes clearer when the dimension of the space is $2$ or higher. We use *vectors*. First, as we just saw, the work of the force is $W = \pm F \cdot D$ if $F \parallel D$, with the plus sign when the two point the same way. Second, $W = 0$ if $F \perp D$. Therefore, only the *projection* of $F$ on $D$ matters when calculating the work, and the work equals that projection when the length of $D$ is $1$.

Then, the work $W$ of force $F$ along vector $D$ is defined to be: $$W := \langle F , D \rangle .$$ It is simply a (real-valued) linear function of the displacement.

Our conclusion doesn't change: $D$ is a $1$-chain and $F$ is a $1$-form. Even though this idea allows us to continue our study, the example above shows that it is impossible to limit ourselves to cubical complexes. Below, we make a step toward discrete calculus over general cell complexes.

On a plane, the force $F$ may vary from location to location. Then the need to handle the displacement vectors, i.e., directions, arises, separately, at every point. The set of all possible directions at point $A\in V={\bf R}^2$ form a vector space of the same dimension. It is $V_A$, a copy of $V$, attached to each point $A$:

Next, we apply this idea to cell complexes.

First, what is the set of all possible directions on a *graph*? We've come to understand the edges starting from a given vertex as independent directions. That's why we will need as many basis vectors as there are edges, at each point:

Of course, once we start talking about *oriented* cells, we know it's about *chains*, over $R$.

**Definition.** For each vertex $A$ in a cell complex $K$, the *tangent space* at $A$ of $K$ is the set of $1$-chains over $R$ generated by the $1$-dimensional star of the vertex $A$:
$$T_A=T_A(K):=< \{AB \in K\} > \subset C_1(K).$$

**Proposition.** The tangent space $T_A(K)$ is a submodule of $C_1(K)$.

**Definition.** A *local* $1$-*form* on $K$ is a collection of linear functions for each of the tangent spaces,
$$\varphi_A: T_A\to R,\ A\in K^{(0)}.$$

The work of a force along edges of a cell complex is an example of such a function. Now, if we have a force in complex $K$ that *varies* from point to point, the work along an edge -- seen as the displacement -- will depend on the location.

**Proposition.** Every $1$-form (cochain) is a local $1$-form.

We denote the set of all local $1$-forms on $K$ by $T^1(K)$, so that $$C^1(K)\subset T^1(K).$$

Let's review the setup. First, we have the *space of locations* $X=K^{(0)}$, the set of all vertices of the cell complex $K$. Second, to each location $A\in X$, we associate the *space of directions* determined by the structure of $K$, specifically, by its edges. Now, while the directions at vertex $A\in K$ are given by the edges adjacent to $A$, we can also think of all $1$-*chains* in the star of $A$ as directions at $A$. They are subject to algebraic operations on chains and, therefore, form a module, $T_A$.

We now combine all the tangent spaces into one total tangent space. It contains all possible directions in each location: each tangent space $T_A$ to every point $A$ in $K$.

**Definition.** The *tangent bundle* of $K$ is the disjoint union of all tangent spaces:
$$T(K)=T_1(K):=\bigsqcup _{A\in K} \left( \{A\} \times T_A \right).$$

Here “$1$” refers to the $1$-chains that it is made of.

Then a local $1$-form is seen as a function on the tangent bundle, $$\varphi =\{\varphi_A: \ A\in K\}:T_1(K) \to R,$$ linear on each tangent space, defined by $$\varphi (A,AB):=\varphi_A(AB).$$

In particular, the work associates to every location and every direction at that location, a quantity:
$$\varphi(A,AB)\in R.$$
The total work over a path in the complex is the *line integral* of $\varphi$ over a $1$-chain $a$ in $K$. It is simply the sum of the values of the form at the edges in $a$:
$$\begin{array}{llllll}
a=&A_0A_1+A_1A_2+...+A_{n-1}A_n \Longrightarrow \\
&\int_a \varphi:= \varphi (A_0,A_0A_1)+\varphi (A_1,A_1A_2)+... +\varphi (A_{n-1},A_{n-1}A_n).
\end{array}$$

If this number $\varphi(A,AB)$ is the work performed by a given force while moving from $A$ to $B$ along $AB$, let's consider the physics point of view. We know that the work carried out by the force from $A$ to $B$ should be the negative of the one carried out from $B$ to $A$: $$\varphi(A,AB)= -\varphi(B,BA).$$ It follows that the work defined separately on each of the stars will have matched values on the overlaps. Therefore, it is well-defined as a linear map on $C_1(K)$.

**Theorem.** A local $1$-form that satisfies the condition above is a $1$-*cochain* on $K$.

Indeed, $$\varphi(A,-AB)= -\varphi(B,AB).$$

**Exercise.** Provide details of the proof.

This conclusion reveals a certain redundancy in the way we defined the space of all directions as the tangent bundle $T_1(K)$. We can then postulate that the direction from $A$ to $B$ is the opposite of the direction from $B$ to $A$: $$(A,AB)\sim -(B,BA).$$ This equivalence relation reduces the size of the tangent bundle via the quotient construction. Below, we can see the similarity between this new space and the space of tangents of a curve:

It looks as if the disconnected parts of $T_1(K)$, the tangent spaces, are glued together.

The fact that this equivalence relation preserves the operations on each tangent space implies the following.

**Theorem.**
$$T_1(K)/_{\sim}=C_1(K).$$

**Exercise.** Prove the theorem.

Then, $1$-forms are local $1$-forms that are well-defined on $C_1(K)$. More precisely, the following diagram commutes: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccc} T_{1}(K)& \ra{f} & R &\\ \da{p} & & || & \\ C_{1}(K)& \ra{f} & R, \end{array}$$ where $p$ is the identification map. We treat these two maps as the same and omit “local” when there can be no confusion.

**Exercise.** Define the analog of the exterior derivative $d:C^0(K)\to T^1(K)$.

Higher order differential forms are *multi*-linear functions but this topic lies outside the scope of this book.