# Discrete functions

## How do we approximate continuous processes?

Suppose we have a function, $f$, representing the position, $y$, of your car as a function of time, $x$:

Let's suppose, however, that we know only a “sampled” version of this function; i.e., we only know its values at, say, $0, 1, 2, 3, 4, 5, ...$. These numbers may come from looking occasionally at your odometer. Suppose also that the derivative, $f'$, of this function -- representing the velocity of your car -- is also sampled:

You get those numbers from looking at your speedometer once in a while. Thus, while $f$ and $f'$ are unknown, we have instead two new functions: $g$ and $g'$, defined only at predetermined points of time:

We may call them *discrete functions*.

What do these two have to do with each other? It would make sense if $g'$ were the derivative of $g$ in some sense. But since both functions are discrete, there is no differentiation, and no derivative, in the conventional sense. Can we ever establish such a relation between two discrete functions?

The integral may provide the answer. The *Fundamental Theorem of Calculus* states:
$$f(8)-f(0) = \displaystyle\int_0^8 f'(x) dx.$$

Now, we would like to see such a formula link the two new functions together: $$g(8)-g(0) \stackrel{?}{=} \displaystyle\int_0^8 g'(x) dx.$$

We rely on the conventional understanding of the Riemann integral as the area under the graph and that's why we turn $g'$ into a *step-function*:

The result is as follows. The left-hand side is:
$$g(8)-g(0) =10,$$
but the right-hand side is:
$$\displaystyle\int_0^8 g'(x)dx \approx 0.2+0.4+0.5+0.7+0.8+1+1.7+2.9+4 =12.2.$$
Then the Fundamental Theorem of Calculus fails and, with it, one of *the laws of physics*!

Of course, we can improve the sampling by taking more sample values (i.e., we look at the dials more often). Then the approximation improves too and, mathematically, the Riemann sums above converge to the Riemann integral, the actual displacement. In other words, we use step-functions to approximate the definite integral of a function.

For example, for $f(x)=x,\ x\in [0,1]$, the (left-end) Riemann sum is
$$L_n:= \displaystyle\sum_{i=0}^{n-1} \tfrac{i}{n}\tfrac{1}{n}=\tfrac{1}{n^2}\displaystyle\sum_{i=0}^{n-1} i=\tfrac{1}{n^2}\tfrac{n(n-1)}{2} \to \tfrac{1}{2},$$
as expected. In other words, the areas under the graphs of the approximating step-functions converge to the *area* under the graph of $f$.
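This convergence is easy to check numerically. Below is a minimal sketch; the function name is illustrative:

```python
# Left-end Riemann sum L_n for f(x) = x on [0, 1]:
# L_n = sum of f(i/n) * (1/n) for i = 0, ..., n-1.
def left_riemann_sum(n):
    return sum((i / n) * (1 / n) for i in range(n))

# The closed form n(n-1)/(2n^2) tends to 1/2 as n grows.
print(left_riemann_sum(10))      # approximately 0.45
print(left_riemann_sum(1000))    # approximately 0.4995
```

As the sampling gets finer, the computed areas approach $\tfrac{1}{2}$, the area under the graph.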

To summarize, no specific approximation satisfies the laws of calculus (or the laws of physics).

## Discrete functions or discrete cochains?

We will introduce discrete structures that don't have to approximate but have to *mimic* the behavior of the continuous structures of calculus.

We already have a substitute for a function $f$ -- a discrete function $g$ defined at predetermined points of the real line. What is its derivative?

We consider, instead of the tangent lines of $f$, the secant lines of $f$ and, therefore, of $g$:

Then the values of $g'$ are the slopes of the secant lines over these intervals; e.g., $g'$ takes the values $g(1)-g(0),\ g(2)-g(1)$, etc. This step may be understood as “differentiation” of $g$. To confirm that this makes sense, we would like to “integrate” $g'$ and recover $g$.

The key question is: *where* should $g'$ be defined? For interval $[0,1]$, which one is correct:
$$g'(0)=g(1)-g(0) \text{ or } g'(1)= g(1)-g(0)?$$
Neither!

We would like to think of the integral in the usual sense -- as the area under the graph. That's why, instead, we prefer to assign this value to the points of the interval itself: $$g'(x)=g(m+1)-g(m) \text{ for } x\in (m,m+1).$$ This makes $g'$ a step-function:

*That* we can integrate over any of the intervals:
$$g(m+1)-g(m) = \displaystyle\int_m^{m+1} g'(x) dx ,$$
and the results match on each! Therefore, they match on the whole interval:
$$g(n)-g(0) = \displaystyle\int_0^n g'(x) dx .$$
In other words, the Fundamental Theorem of Calculus holds.

**Exercise.** Prove the last statement.
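The mechanism can also be verified numerically. A minimal sketch, with illustrative sample values for $g$:

```python
# A discrete function g known only at the integer points 0, 1, ..., 5
# (the values are illustrative).
g = [0.0, 0.3, 0.9, 1.2, 2.0, 3.5]

# "Differentiation": g' on the interval (m, m+1) is the slope g(m+1) - g(m).
g_prime = [g[m + 1] - g[m] for m in range(len(g) - 1)]

# "Integration" of the step function g' over [0, 5]: each piece contributes
# its value times the interval's width 1, and the sum telescopes to g(5) - g(0).
integral = sum(g_prime)
print(abs(integral - (g[-1] - g[0])) < 1e-12)   # True: the FTC holds exactly
```

The sum telescopes, which is exactly why the Fundamental Theorem of Calculus holds here with no error at all.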

We should simplify things even further. The domain of the “derivative” $g'$ is understood as the set
$$(0,1) \cup (1,2) \cup (2,3) \cup \ ....$$
We are able to skip the values $g'(0)=?,\ g'(1)=?,\ ...,$ because they don't affect the integrals. Now, it is more convenient to think of the domain of this function as the *collection* of edges rather than their union. For example,
$$g'\Big|_{(0,1)}= .3$$
is replaced with
$$g'\Big([0,1] \Big)= .3.$$
So, the inputs of $g'$ are the edges. Furthermore, the integral becomes the sum:
$$\begin{array}{lll}
\displaystyle\int_0^n g'(x) dx &= g'\Big([0,1] \Big) + ... + g'\Big([n-1,n] \Big) \\
&= g'\Big([0,1] + ... + [n-1,n] \Big) \\
&= g'\Big([0,n] \Big).
\end{array}$$

Thus, $g'$ is a function of “chains” of edges. Meanwhile, $g$ is a function of “chains” of nodes.

The bonus insight is then that $g$ and $g'$ are revealed to be of the same nature: they both are functions of *cells* (and chains of cells), just of different dimensions:
$$\begin{array}{r|c|c}
& g & g'\\
\hline
\text{domain:} & \{...,0, 1, 2, 3, ...\}& \{...,[0,1],[1,2],[2,3], ...\} \\
\text{inputs:} & \text{points/vertices} & \text{intervals/edges} \\
\text{dimension:} & 0 & 1
\end{array}$$
Accordingly, $g$ and $g'$ are called “cochains” of degree $0$ and $1$.
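In code, this change of perspective amounts to indexing the values of $g'$ by edges and evaluating it on chains by summation. A sketch, with illustrative values:

```python
# A 1-cochain: its inputs are edges [m, m+1], encoded here as tuples
# (the values are illustrative).
g_prime = {(0, 1): 0.3, (1, 2): 0.6, (2, 3): 0.3}

# Evaluating the cochain on a chain of edges: the integral becomes a sum.
def evaluate(cochain, chain):
    return sum(cochain[edge] for edge in chain)

# g'([0,1] + [1,2] + [2,3]) plays the role of g'([0, 3]) = g(3) - g(0).
total = evaluate(g_prime, [(0, 1), (1, 2), (2, 3)])
```

Here `total` is approximately $1.2$, the total change of $g$ over $[0,3]$.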

Above, we acquire $g$ by sampling $f$ and then acquire $g'$ by taking the differences of the values of $g$ (right then down):

Alternatively, we find $g'$ directly by integrating $f'$ on these intervals (down then right): $$g'\Big([a,b] \Big):= \displaystyle\int_{[a,b]}f'dx.$$

## Calculus of data

Now, let's approach the issue from the opposite direction. What if we *start* with discrete data?

In real life, data is simply a series of numbers. For example, this is what the graph of such a function looks like in a common spreadsheet:

Can we do *calculus* with such functions?

The two primary components of calculus as we know it are the following two questions:

1) *What is the function's rate of change*?

The derivative tells us the rate of change of data that continuously varies. Its discrete analog is the *difference*, the difference of values at two consecutive points. More precisely, if the function is represented by a collection of points on the plane, then its “derivative” -- evaluated at one of the intervals rather than points -- is the slope of the line that connects the endpoints of the segment.

2) *What is the area under the graph of the function*?

The definite integral gives us the area under a continuous curve. Its discrete analog, as we just saw, is the *sum*, the sum of the values of the function for all points within the interval. Moreover, if this is a step-function, the “integral” is the sum of the areas of the rectangles determined by these steps, just as before.

Even though the graph above is made of *dots*, with a couple of clicks, we can plot the same function with *bars*:

So, the same discrete function has been interpreted in two different ways:

- for differentiation -- with *dots* plotted above the integer values, and
- for integration -- with *rectangles* plotted over the segments.

We again arrive at the idea of $0$-cochains and $1$-cochains.

At this point, we can construct continuous counterparts of these cochains. For the $0$-cochains, it's interpolation: $$f(x):=(1-x)g(0)+xg(1),\quad \forall x\in [0,1].$$ For the $1$-cochains, it's simply extension: $$f'(x):=g'\Big([0,1]\Big),\quad \forall x\in [0,1].$$ These constructions are woven into one diagram below:

**Exercise.** Find the formulas for the rest of the values of these functions.
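The two constructions above can be sketched in code for the single interval $[0,1]$; the values of the cochains are illustrative:

```python
# 0-cochain: values at the nodes 0 and 1; 1-cochain: value on the edge [0, 1].
g = {0: 1.0, 1: 3.0}
g_prime = {(0, 1): 2.0}

# Interpolation of the 0-cochain: linear on [0, 1], matching g at the nodes.
def f(x):
    return (1 - x) * g[0] + x * g[1]

# Extension of the 1-cochain: constant on [0, 1].
def f_prime(x):
    return g_prime[(0, 1)]

print(f(0), f(1))     # 1.0 3.0 -- f interpolates g at the nodes
print(f_prime(0.5))   # 2.0 -- the constant value on the edge
```

Note that here the slope of $f$ on $[0,1]$ is $g(1)-g(0)=2$, which agrees with the value of the $1$-cochain on that edge, so the pair is consistent.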

To summarize, an incremental motion is represented by a $0$-cochain. Its values provide the locations at every instant of the chosen sequence of moments of time while we think of the intermediate locations as unknown or unimportant. Meanwhile, a $1$-cochain gives the incremental changes of locations.

**Exercise.** Show that, given an initial location, all locations can be found from a given $1$-cochain.

**Exercise.** Given a $0$- and a $1$-cochain, what kind of continuous motion produces these cochains *exactly*?

## Chains as domains of integration

There are two versions of the same edge: positive and negative. It is positive when the direction of the edge coincides with the direction of the axis. So, for the same cell $[a,b]\subset {\bf R}$,

- if $a < b$, then the direction of $[a,b]$ is positive, and
- if $a > b$, then the direction of $[a,b]$ is negative.

In other words, $$[b,a]=-[a,b].$$

In light of this idea, we can rewrite the *orientation property of the integral* from elementary calculus:
$$ \displaystyle\int_a^b f(x)dx = - \displaystyle\int_b^a f(x) dx,$$
in terms of chains:
$$ \displaystyle\int\limits_{-[a,b]} f(x)dx = - \displaystyle\int\limits_{[a,b]} f(x) dx.$$

We can also rewrite the *additivity property of integrals*:
$$ \displaystyle\int_a^b f(x) dx + \int_b^c f(x) dx = \displaystyle\int_a^c f(x) dx.$$
But first we observe that, if the intervals overlap, the property still makes sense, as the integral over the overlap is counted twice:

Then the property takes this form: $$ \displaystyle\int\limits_{[a,b]} f(x)dx + \displaystyle\int\limits_{[c,d]} f(x)dx = \displaystyle\int\limits_{[a,b]+[c,d]} f(x)dx.$$

The left-hand sides of these three identities can now be understood as a certain *function evaluated on these chains of edges*. For any given integrable function $f$, we can define a new function:
$$F( q ):= \displaystyle\int\limits_{q} f(x)dx,$$
where $q$ is a chain of edges.

According to the above identities, this function is linear! Such a function is called a *cochain* of degree $1$.
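This linearity can be illustrated in code. Below, a chain is encoded as a dictionary from edges to integer coefficients, and the cochain $F$ is induced by $f(x)=x$, whose integral over $[a,b]$ is $(b^2-a^2)/2$; the encoding is an illustrative sketch:

```python
# The integral of f(x) = x over [a, b], in closed form.
def integrate(a, b):
    return (b * b - a * a) / 2

# The 1-cochain F induced by f, evaluated on a chain by linearity.
def F(chain):
    return sum(c * integrate(a, b) for (a, b), c in chain.items())

# Orientation property: F(-[0, 1]) = -F([0, 1]).
print(F({(0, 1): -1}) == -F({(0, 1): 1}))           # True
# Additivity property: F([0, 1] + [1, 2]) = F([0, 2]).
print(F({(0, 1): 1, (1, 2): 1}) == F({(0, 2): 1}))  # True
```

Both properties of the integral become instances of one fact: $F$ is a linear function of chains.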

## Cell functions: nodes and edges

The discrete functions we have considered so far are simply $$f:{\bf Z} \to {\bf R}.$$ One can think of the former as the discretized time and the latter as the continuous space. It is the idea of infinite divisibility of space that makes the choice of ${\bf R}$ so plausible.

There are many quantities besides space that are infinitely divisible: time, heat, mass, money. However, they can be, and are, divided into discrete parts:

- time: seconds, minutes, hours, etc.;
- space (discrete locations): milestones, staircase, gears, a clock with a jumping (rather than floating) hand or a digital clock;
- mass: molecules;
- heat, energy, light: particles (cf. “corpuscular theory”);
- money: coins, banknotes, etc.

These parts are thought of more as *states* than locations. One can also start with the discrete and continue with the discrete.

Thus a *discrete function* can also be any function
$$f:{\bf Z} \to {\bf Z}.$$

We already know, however, that the rate of change of such a function will be captured by its values on the *edges*. What happens to them?

Case 1: we consider discrete functions with a *single jump*.

These are the nodes and the edges involved: $$N_x=\{A,B\},\ E_x=\{AB\},\ N_y=\{X,Y\},\ E_y=\{XY\}.$$ What functions can we define between them?

Let's first assume that either

- 1. $f(A):=X, \ f(B):=Y$, or
- 2. $f(A):=Y, \ f(B):=X$.

What happens to the edges? We define new values for function $f$ -- of edges -- and, in either case, we choose:
$$f(AB):=XY.$$
This is seen as the *continuity* of this curve as well as of the combined function of nodes and edges:

When this happens, we say that the edge is *cloned*.

Case 2: we consider discrete functions with *no jump*.

Let's start over with this assumption that either

- 1. $f(A):=X, f(B):=X$, or
- 2. $f(A):=Y, f(B):=Y$.

The only way to make sense of these is to define the value of the edge to be a *node*, the node determined by the node values of the function:

- 1. $f(AB):=X$.
- 2. $f(AB):=Y$.

Then we have two constant functions.

When this happens, we say that the edge is *collapsed*.

Conversely, we can start with an edge and choose $$f(AB):=XY.$$ Then we can have several possible values for $A,B$:

- 1. $f(A):=X, \ f(B):=Y$,
- 2. $f(A):=Y, \ f(B):=X$,
- 3. $f(A):=X, \ f(B):=X$,
- 4. $f(A):=Y, \ f(B):=Y$.

The first two options make sense because they preserve the relation between the nodes. This is not the case for options 3 and 4! By looking at the graphs of these functions, we realize that what we accept here is *continuity* of these curves and what we reject is their *discontinuity*:

Let's consider three vertices $A,B,C$ this time: $$N_x=\{A,B,C\},\ E_x=\{AB,BC\},\ N_y=\{X,Y,Z\},\ E_y=\{XY,YZ\}.$$ What functions can we define between them? Consider just these three possibilities:

They are given by these functions:

- (1) $f(A)=X, \ f(B)=Y, \ f(C)=Z, \ f(AB)=XY, \ f(BC)=YZ$,
- (2) $f(A)=X, \ f(B)=Y, \ f(C)=Y, \ f(AB)=XY, \ f(BC)=Y$,
- (3) $f(A)=X, \ f(B)=Y, \ f(C)=Z, \ f(AB)=XY, \ f(BC)=Z$.

Even a casual look at the graphs reveals the difference: continuity vs. discontinuity.

Thus, *we plot the nodes of the graph first and then attach the edges to them*. Therefore, we require that the edge function $f$ satisfy, for each edge $e$, one of the following:

- 1. $f$ takes the endpoints of $e$ to the endpoints of $f(e)$; or
- 2. if $e$ is taken by $f$ to a node $X$, then so are its endpoints, and vice versa.

Such a combination of a function of nodes and a function of edges will be called a *cell function*.
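The two conditions can be checked mechanically. A sketch, where the image of an edge is encoded as a tuple of nodes (cloned) or a single node (collapsed); the encoding is illustrative:

```python
# Check the cell-function condition for a single edge.
# f_node maps nodes to nodes; f_edge maps an edge to an edge or to a node.
def is_cell_function(f_node, f_edge, edge):
    A, B = edge
    image = f_edge[edge]
    if isinstance(image, tuple):
        # Cloned: the endpoints of the edge must go to the endpoints of its image.
        return {f_node[A], f_node[B]} == set(image)
    # Collapsed: both endpoints must go to the same node as the edge itself.
    return f_node[A] == image and f_node[B] == image

# Case 1 (cloned): f(A)=X, f(B)=Y, f(AB)=XY -- continuous.
assert is_cell_function({'A': 'X', 'B': 'Y'}, {('A', 'B'): ('X', 'Y')}, ('A', 'B'))
# Rejected case: f(A)=X, f(B)=X but f(AB)=XY -- discontinuous.
assert not is_cell_function({'A': 'X', 'B': 'X'}, {('A', 'B'): ('X', 'Y')}, ('A', 'B'))
```

The rejected case is exactly option 3 from the list above: the edge is sent to $XY$ while both of its endpoints land on $X$.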

More generally, the edges in the $x$-axis may be taken to *chains* of edges in the $y$-axis:

In other words, we can have:

- $f(AB)=XY$, or
- $f(AB)=XY+YZ$.

This function gives us a chain of edges in the output when the input is a single edge. In particular, the number of these edges is the rate of change of the original function. Thus, the derivative of the function of nodes can be thought of as the corresponding function of edges.
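As a final sketch, the value of such a function on one edge is a chain of edges, and the length of that chain gives the rate of change; the particular values below are illustrative:

```python
# The edge function with chains of edges as values (illustrative):
# f(AB) = XY + YZ is a jump of 2; f(BC) is collapsed, a jump of 0.
f_edge = {('A', 'B'): [('X', 'Y'), ('Y', 'Z')],
          ('B', 'C'): []}

# The "derivative" on an edge: the number of edges in the image chain.
def rate_of_change(edge):
    return len(f_edge[edge])

print(rate_of_change(('A', 'B')))   # 2
print(rate_of_change(('B', 'C')))   # 0
```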