
Introduction to discrete calculus, continued


Cell functions

Suppose we have two copies of ${\mathbb R}$, ${\mathbb R}_x$ and ${\mathbb R}_y$, possibly representing time and space respectively. We are to study functions, $$f:G\to {\mathbb R}_y,\ \text{ for some } G\subset{\mathbb R}_x,$$ that will possibly represent motion in space. They have to somehow respect the cell structure of ${\mathbb R}$. Let's recall how cell functions are introduced.

Suppose $N,E$ are the sets of nodes and edges of ${\mathbb R}_y$ and $N_G,E_G$ the sets of nodes and edges of $G$. In particular, $$N_G:=\{A,B,C,...\},\ N:=\{X,Y,Z,...\};$$ $$E_G:=\{AB,BC,...\},\ E:=\{XY,YZ,...\}.$$

First, $f$ has to be a function that takes nodes to nodes: $$f:N_G \to N.$$ For example, this is a possibility: $$f(A)=X,\ f(B)=Y,\ X\ne Y.$$ Second, we need to understand what will happen to the edge $AB$. The only option is that the edge is taken to the edge $XY$, $$f(AB)=XY,$$ provided $XY$ is an edge of ${\mathbb R}_y$. In this case, we say that the edge is cloned.

[Image: Graphs of graph functions]

The second possibility is: $$f(A)=f(B)=X.$$ The only option is that the edge is taken to this node, $$f(AB)=X.$$ In this case, we say that the edge is collapsed.

[Image: Graphs of graph functions with discontinuity fixed]

In order to ensure continuity of the resulting curve, we plot the nodes of the graph first and then attach edges to them. If we discover that this is impossible, no realization of the function can be continuous and it should be discarded.

Therefore, we require from the edge function $f$ the following:

  • for each edge $e$, $f$ takes its endpoints to the endpoints of $f(e)$.

Or, in a more compact form (with $X\ne Y$), $$f(A)=X, \ f(B)=Y \Longleftrightarrow f(AB)=XY.$$ Second, we require:

  • for each edge $e$, if $e$ is taken to node $X$, then so are its endpoints, and vice versa.

Or, in a more compact form, $$f(A)=f(B)=X \Longleftrightarrow f(AB)=X .$$

We combine these two requirements in the following definition.

Definition. A function $f:G\to {\mathbb R}_y$, where $G\subset {\mathbb R}_x$, is called a cell function (or a cell map) when for any adjacent nodes $A,B$ in $G$ we have: $$f(AB)= \begin{cases} f(A)f(B) & \text{ if } f(A) \ne f(B),\\ f(A) & \text{ if } f(A) = f(B). \end{cases}$$

In the most abbreviated form, this Discrete Continuity Condition is: $$f(AB)=f(A)f(B),$$ with the understanding that $XX=X$.

Exercise. Prove that the composition of two cell functions is a cell function.

Exercise. Under what circumstances does a cell function have an inverse that is also a cell function?

There are very few cell functions $f:{\mathbb R}\to {\mathbb R}$. The reason is that as the input $A$ (a node) increments by $1$, the output $B=f(A)$ (also a node) can change only by $-1$, $0$, or $1$. These are the only three possible slopes, $m=-1,0,1$, of a linear cell function $f(A)=mA+b,\ b\in {\bf Z}$. There are no non-linear cell functions! We will have to explore other possibilities.
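
This observation is easy to test by computation. Below is a minimal sketch in Python (an illustration only; the function name `is_cell_map` and the integer encoding of nodes are ours): it checks the Discrete Continuity Condition for a node map of ${\mathbb R}$, since an edge $[A,A+1]$ can be cloned or collapsed only when $|f(A+1)-f(A)|\le 1$.

```python
def is_cell_map(f, nodes):
    # The Discrete Continuity Condition for f: Z -> Z: every edge
    # [A, A+1] must be cloned (|f(A+1) - f(A)| = 1) or collapsed
    # (f(A+1) = f(A)); any larger jump breaks continuity.
    return all(abs(f(A + 1) - f(A)) <= 1 for A in nodes)

nodes = range(-10, 10)
for m in (-2, -1, 0, 1, 2):
    f = lambda A, m=m: m * A + 7          # linear map with slope m, b = 7
    print(m, is_cell_map(f, nodes))       # True exactly for m = -1, 0, 1
```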

Chain functions

We can now move on to algebra.

Example. We will consider the two maps above, but with the domain and range limited to these short lists of nodes and edges of ${\mathbb R}$: $$\begin{array}{lll} N_G=\{A,B,C\},& E_G=\{AB,BC\},\\ N_J=\{X,Y,Z\},& E_J=\{XY,YZ\}. \end{array}$$ Below, the two cell functions and their graphs are shown:

[Image: Graph maps for 2 edges]

We can be more specific. The first function is: $$\begin{array}{lllll} \dim =0:&f(A)=X, &f(B)=Y,&f(C)=Z,\\ \dim =1:&f(AB)=XY,&f(BC)=YZ.& \end{array}$$ This representation is, however, inadequate! The reason is that in discrete calculus as it has been developed so far, the main building block is a chain (and a cochain). In fact, we think of these lists as bases of the chain groups: $$\begin{array}{lll} C_0(G)=< A,B,C >,& C_0(J)=< X,Y,Z >,\\ C_1(G)=< AB,BC >,& C_1(J)=< XY,YZ >. \end{array}$$

The key idea is to think of the cell maps as linear operators determined by their values on these bases.

There are two linear operators for the two dimensions $k=0,1$. They are written coordinate-wise as follows: $$\begin{array}{lllll} \dim =0:&f_0\Big ([1,0,0]^T\Big)=[1,0,0]^T,&f_0\Big ([0,1,0]^T\Big)=[0,1,0]^T,&f_0\Big ([0,0,1]^T\Big)=[0,0,1]^T,\\ \dim =1:&f_1\Big ([1,0]^T\Big)=[1,0]^T,&f_1\Big ([0,1]^T\Big)=[0,1]^T. \end{array}$$

The second function is given by $$\begin{array}{lllll} \dim =0:&f(A)=X, &f(B)=Y,&f(C)=Y,\\ \dim =1:&f(AB)=XY,&f(BC)=Y. \end{array}$$ The linear operators are written coordinate-wise as follows: $$\begin{array}{lllll} \dim =0:&f_0\Big([1,0,0]^T\Big)=[1,0,0]^T, &f_0\Big([0,1,0]^T\Big)=[0,1,0]^T, &f_0\Big([0,0,1]^T\Big)=[0,1,0]^T,\\ \dim =1:&f_1\Big([1,0]^T\Big)=[1,0]^T, &f_1\Big([0,1]^T\Big)=0. \end{array}$$ The very last item requires special attention: the collapsing of an edge in $G$ does not produce a corresponding edge in $J$. This is why we give it an algebraic meaning by assigning it the zero value.

It follows that the first linear operator is the identity and the second can be thought of as a projection. $\square$

Exercise. Prove the last statement.

Exercise. Find the matrices of $f_0,f_1$ in the last example.

Exercise. Find the matrices of $f_0,f_1$ for $f:G\to J$ given by $f_E(AB)=XY, \ f_E(BC)=XY$.

We follow this idea and introduce a new concept.

Definition. The chain function of a cell function $f:G \to J$, where $G,J\subset {\mathbb R}$, is the pair of linear operators $f_{\Delta}:=\{f_0,f_1\}$: $$\begin{array}{lll} f_0:C_0(G) \to C_0(J),\\ f_1:C_1(G) \to C_1(J), \end{array}$$ generated by the values of $f$ on the $k$-cells, $k=0,1$, with $f_1(e)=0$ for every collapsed edge $e$.
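
In coordinates, such a pair of operators is easy to write down. Here is a minimal sketch (Python with NumPy, using a made-up cell map, not one of the examples above): the columns of the matrix of $f_k$ are the images of the basis cells, with a zero column for every collapsed edge.

```python
import numpy as np

# Nodes of G: A, B, C -> columns 0, 1, 2; nodes of J: X, Y -> rows 0, 1.
# A hypothetical cell map: f(A) = X, f(B) = X, f(C) = Y,
# so AB is collapsed (f_1(AB) = 0) and BC is cloned to XY.
F0 = np.array([[1, 1, 0],    # X receives A and B
               [0, 0, 1]])   # Y receives C

# Edges of G: AB, BC -> columns; edge of J: XY -> the single row.
F1 = np.array([[0, 1]])      # AB -> 0 (collapsed), BC -> XY

# f_0 applied to the 0-chain A + 2C:
print(F0 @ np.array([1, 0, 2]))   # [1 2], i.e., X + 2Y
```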

By design, these two operators satisfy the Discrete Continuity Condition: $$f_1(AB)=f_0(A)f_0(B).$$ Let's express this condition in terms of the boundary operator.

We just use the fact that $$\partial (AB) = B-A,$$ and the continuity condition takes this form: $$\partial (f_1(AB))=f_0( \partial (AB)).$$ It applies, without change, to the case of a collapsed edge. Indeed, if $AB$ collapses to $X$, both sides are $0$: $$\begin{array}{llllll} &\partial (f_1(AB)) &=\partial (0) &=0;\\ &f_0( \partial (AB))&= f_0( B-A)=f_0(B)-f_0(A)=X-X &=0.&& \end{array}$$

Theorem (Algebraic Continuity Condition). For any cell function $f:G \to J$, its chain function satisfies: $$\partial f_1(e)=f_0( \partial e),$$ for any edge $e$ in $G$.
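
The condition becomes the matrix identity $\partial_J f_1 = f_0 \partial_G$ and can be checked numerically. A minimal sketch, reusing the made-up collapsed-edge map from the previous sketch:

```python
import numpy as np

# G has nodes A, B, C and edges AB, BC; J has nodes X, Y and edge XY;
# the hypothetical map collapses AB and clones BC to XY.
F0 = np.array([[1, 1, 0], [0, 0, 1]])
F1 = np.array([[0, 1]])

dG = np.array([[-1,  0],   # boundary in G: AB -> B - A, BC -> C - B
               [ 1, -1],
               [ 0,  1]])
dJ = np.array([[-1],       # boundary in J: XY -> Y - X
               [ 1]])

# The Algebraic Continuity Condition as a matrix identity:
print(np.array_equal(dJ @ F1, F0 @ dG))   # True
```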

Whenever the boundary operator can be properly defined, we will use this condition in any dimension.

We now realize that a function doesn't have to be a cell map to satisfy this condition:

[Image: Chain approximation]

Suppose again we have two graphs $G\subset {\mathbb R}_x$ and $J\subset {\mathbb R}_y$.

Definition. A chain map is the pair of linear operators $f:=\{f_0,f_1\}$: $$\begin{array}{lll} f_0:C_0(G) \to C_0(J),\\ f_1:C_1(G) \to C_1(J), \end{array}$$ that satisfies the Algebraic Continuity Condition: for any edge $e$ in $G$, $$\partial f_1(e)=f_0( \partial e).$$

As we can see, not all of them come from cell functions. And, unlike those of cell functions, the slopes of chain functions can be arbitrary integers. Non-linear functions are also numerous, such as polynomials with integer coefficients and others: $$f_0(A)=3A^2+2A-1, \quad g_0(A)=2^A, \quad \text{ etc.}$$

For example, if an arbitrary function of nodes $f_0$ is given, we can always fill in the blanks with some $f_1$, within ${\mathbb R}$; see the sketch after the exercises below.

Exercise. Find $f_1$ and $g_1$.

Exercise. Clarify and prove the last statement. Hint: What difference does it make if $G\ne {\mathbb R}$ or $J\ne {\mathbb R}$?
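
As a hint at this construction, here is a sketch (with a fresh sample function, not the ones from the exercise above; the name `fill_f1` is ours): for the edge $[A,A+1]$, one possible choice of $f_1$ is the $1$-chain that runs along ${\mathbb R}_y$ from $f_0(A)$ to $f_0(A+1)$.

```python
def fill_f1(f0, A):
    # One possible f_1 on the edge [A, A+1]: the 1-chain running from
    # f0(A) to f0(A+1) along the line, stored as {edge: coefficient};
    # a collapsed edge yields the empty (zero) chain.
    a, b = f0(A), f0(A + 1)
    sign = 1 if b >= a else -1
    return {(n, n + 1): sign for n in range(min(a, b), max(a, b))}

f0 = lambda A: A ** 3                      # an arbitrary non-linear node map
chain = fill_f1(f0, 1)                     # edge [1, 2] goes to a 7-edge chain
# Its boundary telescopes to f0(2) - f0(1), as the continuity condition demands:
print(sum(chain.values()) == f0(2) - f0(1))   # True
```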

Also, if a cell map $f$ is given, any multiple of its chain function $f_\Delta=\{f_0,f_1\}$ is also a chain function. Moreover, the following pair also forms a chain function: $$g_k(a)=r_af_k(a),\ k=0,1,$$ for any choice of coefficients $r_a\in {\bf R},\ a\in C_k({\mathbb R})$.

Exercise. Prove the last statement.

Cell functions are now seen as “cell-valued” chain functions.

Exercise. Make the above statement precise by answering the question: what are the possible values of the matrix of the chain function of a cell function?

Exercise. Prove that the composition of two chain functions is a chain function.

Exercise. Under what circumstances does a chain function have an inverse that is also a chain function?

Commutative diagrams

This very fruitful approach will be used throughout.

Compositions of functions can be visualized as flowcharts:

[Image: Composition as flowchart]

In general, we represent a function $f : X \to Y$ diagrammatically as a black box that takes an input and releases an output (same $y$ for same $x$): $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccc} \text{input} & & \text{function} & & \text{output} \\ x & \ra{} & \begin{array}{|c|}\hline\quad f \quad \\ \hline\end{array} & \ra{} & y \end{array} $$ or, simply, $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} X & \ra{f} & Y. \end{array} $$ Suppose now that we have another function $g : Y \to Z$; how do we represent their composition $gf=g \circ f$?

To compute it, we “wire” their diagrams together consecutively: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccc} x & \ra{} & \begin{array}{|c|}\hline\quad f \quad \\ \hline\end{array} & \ra{} & y & \ra{} & \begin{array}{|c|}\hline\quad g \quad \\ \hline\end{array} & \ra{} & z \end{array}$$ The standard notation is the following: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} X & \ra{f} & Y & \ra{g} & Z. \end{array} $$ Or, alternatively, we may want to emphasize the resulting composition: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{rclcc} X & \ra{f} & Y \\ &_{gf} \searrow\quad & \da{g} \\ & & Z \end{array} $$ We say that the new function “completes the diagram”.

The point illustrated by the diagram is that, starting with $x\in X$, you can

  • go right then down, or
  • go diagonally;

and either way, you get the same result: $$g(f(x))=(gf)(x).$$

In the diagram, this is how the values of the functions are related: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{rrrlll} &x\in& X &\quad\quad \ra{f}\ & Y &\ni f(x) \\ &&& _{gf} \searrow \quad& \da{g} \\ &&& (gf)(x)\in & Z &\ni g(f(x)) \end{array} $$

Example. As an example, we can use this idea to represent the inverse function $f^{-1}$ of $f$. It is the function that completes the diagram with the identity function on $X$: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} & X & \quad\ra{f} & Y \\ & & _{{\rm Id}_X} \searrow\quad & \da{f^{-1}} \\ \qquad\qquad\qquad\qquad& & & X&\qquad\qquad\qquad\qquad\square \end{array}$$

Exercise. Draw the other diagram for this example.

Example. The restriction of $f:X\to Y$ to a subset $A\subset X$ completes this diagram with the inclusion of $A$ into $X$: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{lccllllllll} & A &\quad \hookrightarrow & X \\ & & _{f|_A} \searrow\quad & \da{f} \\ \qquad\qquad\qquad\qquad& & & Y&\qquad\qquad\qquad\qquad\square \end{array}$$

Diagrams like these are used to represent compositions of all kinds of functions: continuous functions, graph functions, homomorphisms, linear operators, and many more.

Exercise. Complete the diagram: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{cclccc} \ker f & \hookrightarrow & X \\ & _{?} \searrow & \da{f}\\ & & Y& \end{array} $$

A commutative diagram may be of any shape. For example, consider this square: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{lclccccccc} X & \ra{f} & Y \\ \da{g'} & \searrow & \da{g} \\ X' & \ra{f'} & Y' \end{array} $$ As before, go right then down, or go down then right, with the same result: $$gf = f'g'.$$ Both give you the function of the diagonal arrow!

This identity is the reason why it makes sense to call such a diagram “commutative”. To put it differently,

vertical then horizontal is the same as horizontal then vertical.
[Image: Commutative diagram]

The illustration above shows how the blue and green threads are tied together at the beginning, as we start with the same $x$ in the upper left corner, and at the end, where the outputs of the two compositions in the lower right corner turn out to be the same. It is as if commutativity turns this combination of loose threads into a piece of fabric!

The algebraic continuity condition in the last subsection $$\partial _Jf_1(e)=f_0( \partial _G e)$$ is also represented as a commutative diagram: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{cccccccccc} C_1(G) & \ra{f_1} & C_1(J) \\ \da{\partial _G} & \searrow & \da{\partial _J} \\ C_0(G) & \ra{f_0} & C_0(J). \end{array} $$

Exercise. Draw a commutative diagram for the composition of two chain maps.

The derivatives

A cell map takes nodes to nodes and edges to edges, and that's what makes the $0$- and $1$-chain maps possible. Then,

  • the locations are taken care of by $f_0:C_0(X)\to C_0(Y)$, and
  • the directions are taken care of by $f_1:C_1(X)\to C_1(Y)$.
[Image: Graph maps for 2 edges]

Then, $f_1$ can be understood as the derivative of $f_0$!

[Image: Chain approximation]

The only connection is via the Algebraic Continuity Condition: $$\partial f_1(e)=f_0( \partial e),$$ for any edge $e$ in $G$. Therefore, there may be many $f_1$'s for a given $f_0$. Nonetheless, we write for convenience: $$df_0=f_1.$$

The construction largely follows the study of the exterior derivative of cochains discussed previously, in spite of the differences. Compare their derivatives in the simplest case:

  • for a $0$-cochain $f:{\mathbb R}\to R$, we have

$$df\Big([A,A+1] \Big):=f(A+1)-f(A) \in R;$$

  • for a cell map $f:{\mathbb R} \to {\mathbb R}$, we have

$$df_0\Big([A,A+1] \Big):= f_1\Big([A,A+1] \Big) \in C_1({\mathbb R};R).$$

Theorem (Properties of the derivative). The derivative satisfies the following properties:

  • The derivative of a constant is zero:

$$dC=0.$$

  • The derivative of the identity is the identity:

$$d(\operatorname{Id})=\operatorname{Id}.$$

  • The derivative of the composition is the composition of the derivatives:

$$d(fg)=df\hspace{.5mm} dg.$$

  • The derivative of the inverse is the inverse of the derivative:

$$d(f^{-1})=(df)^{-1}.$$
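
For instance, the composition property can be tested directly on cell maps ${\mathbb R}\to{\mathbb R}$. A sketch, under the assumption that an edge is encoded as an ordered pair of integer nodes and a collapsed image as $0$ (the names `d` and `apply` are ours):

```python
def d(f):
    # The derivative of a cell map f: Z -> Z sends the edge (A, B) to
    # the edge (f(A), f(B)), or to 0 if the edge is collapsed.
    def f1(edge):
        fA, fB = f(edge[0]), f(edge[1])
        return 0 if fA == fB else (fA, fB)
    return f1

def apply(f1, e):
    return 0 if e == 0 else f1(e)      # by linearity, f_1(0) = 0

f = lambda A: A + 3                    # slope 1
g = lambda A: -A                       # slope -1
fg = lambda A: f(g(A))                 # the composition fg, slope -1

e = (2, 3)
print(d(fg)(e) == apply(d(f), d(g)(e)))   # True: d(fg) = df dg
```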

Chain map as a change of variables

In general, there are no compositions of cochains and, therefore, no chain rule of this kind. However, we can compose cochains with chain maps.

Suppose we have a cell function $p:{\mathbb R}_x \to {\mathbb R}_y$. What effect does $p$ have on the cochains? More generally, we look at a chain map $p=\{p_0,p_1\}$. When bijective, this function may represent a change of scale, units, or, generally, a change of variables. We will see that $p$ creates a function on cochains.

Let's write the maps of $0$- and $1$-chains along with $0$- and $1$-cochains on ${\mathbb R}_y$ in this diagram: $$\begin{array}{llll} \dim =0:& p_0:C_0({\mathbb R}_x)\to C_0({\mathbb R}_y),& f:C_0({\mathbb R}_y) \to {\bf R},\\ \dim =1:& p_1:C_1({\mathbb R}_x)\to C_1({\mathbb R}_y),& s:C_1({\mathbb R}_y) \to {\bf R}. \end{array}$$ Their compositions are: $$\begin{array}{llll} \dim =0:& fp_0:C_0({\mathbb R}_x)\to {\bf R},\\ \dim =1:& sp_1:C_1({\mathbb R}_x)\to {\bf R}. \end{array}$$ These two are nothing but $0$- and $1$-cochains on ${\mathbb R}_x$.

Proposition. The composition of a chain map and a $k$-cochain is a $k$-cochain.

As we have observed before, $p_1$ is, in a way, the derivative of $p_0$.

Theorem (Chain Rule). For a chain map $p=\{p_0,p_1\}$ and any $0$-cochain $g$ on ${\mathbb R}$, we have $$d(gp_0)=dgp_1.$$

Proof. First, we use the Stokes Theorem for the $0$-cochain $gp_0$, as follows. For any $1$-chain $a$, we have: $$d(gp_0)(a)=(gp_0)(\partial a)=g(p_0(\partial a)).$$ Now, we use the Algebraic Continuity Property $p_0\partial =\partial p_1$ and then the Stokes Theorem for $g$: $$\hspace{1.5in} =g\partial(p_1(a))=(dg)(p_1(a)). \hspace{1.5in}\blacksquare$$

Example. For example, to compute the exterior derivative of $h(A)=2^{3A}$, we let $h:=gp_0$, where $$g(m):=2^m, \quad p_0(n):=3n.$$ Therefore, $$dg \big( [m,m+1] \big)=2^m$$ and $$p_1\big( [n,n+1] \big)=[3n,3(n+1)]=[3n,3n+1]+[3n+1,3n+2]+[3n+2,3n+3].$$ Therefore, by the Chain Rule we have: $$\begin{array}{lll} d(gp_0) \big( [n,n+1] \big) &= dgp_1\big( [n,n+1] \big) \\ &=dg\big( [3n,3n+1]+[3n+1,3n+2]+[3n+2,3n+3] \big)\\ &=dg\big( [3n,3n+1] \big)+ dg\big( [3n+1,3n+2] \big)+ dg\big( [3n+2,3n+3] \big)\\ &=2^{3n}+2^{3n+1}+2^{3n+2}\\ &=2^{3n}(1+2+2^2)\\ &=7\cdot 2^{3n}.\end{array}$$ The conclusion is verified by invoking the definition: $$dh \big( [n,n+1] \big) =h(n+1)-h(n)=2^{3(n+1)}-2^{3n}=2^{3n}(2^3-1).$$ $\square$
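
The example can be double-checked numerically; a sketch in Python:

```python
h  = lambda A: 2 ** (3 * A)               # h = g p_0
g  = lambda m: 2 ** m
dg = lambda m: g(m + 1) - g(m)            # dg([m, m+1]) = 2^m

# p_1 spreads [n, n+1] over the three edges [3n, 3n+1], ..., [3n+2, 3n+3],
# so (dg p_1)([n, n+1]) sums dg over them:
chain_rule = lambda n: sum(dg(3 * n + i) for i in range(3))
direct     = lambda n: h(n + 1) - h(n)    # dh by definition

print(all(chain_rule(n) == direct(n) == 7 * 2 ** (3 * n) for n in range(5)))  # True
```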

Exercise. Use the Chain Rule to find the exterior derivative of (a) $2^{rA},\ r\in {\bf Z}$; (b) $3^{A^2}$.

In light of this theorem, let's examine, again, the possibility of thinking of the derivative of a discrete function $g:{\bf Z} \to {\bf R}$ as just another discrete function $g':{\bf Z} \to {\bf R}$. It is typically given by: $$g'(x)=g(x+1)-g(x),\ x\in {\bf Z}.$$ Now, let's differentiate $h(x)=g(-x)$. We have: $$h'(0)=h(1)-h(0)=g(-1)-g(0).$$ On the other hand, $$-g'(0)=-(g(1)-g(0))=g(0)-g(1),$$ no match! There is no chain rule in such a “calculus”.
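
The mismatch is easy to reproduce; a sketch with the arbitrary choice $g(x)=x^2$:

```python
g = lambda x: x ** 2                  # any sample function
h = lambda x: g(-x)

diff = lambda f, x: f(x + 1) - f(x)   # the "naive" derivative

print(diff(h, 0))    # h(1) - h(0) = g(-1) - g(0) = 1
print(-diff(g, 0))   # g(0) - g(1) = -1: no match, hence no chain rule
```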

Exercise. Verify that there is such a match for the derivatives of these functions, if we see them as $1$-cochains, and confirm both of the above versions of the chain rule.

Thus, for every choice of function $$p_k: C_k({\mathbb R}_x)\to C_k({\mathbb R}_y),$$ we have a correspondence $$s \mapsto s p_k,\ s\in C^k({\mathbb R}_y).$$ So, this function $p_k$ produces, from the (complete) set of $k$-cochains on ${\mathbb R}_y$, i.e., $C^k({\mathbb R}_y)$, a new (possibly incomplete) set of $k$-cochains on ${\mathbb R}_x$. This means that the correspondence points in the direction opposite to that of $p$!

Definition. Given a chain map $$p=\{p_0,p_1\},\quad p_k:C_k({\mathbb R}_x) \to C_k({\mathbb R}_y),\ k=0,1,$$ the function $$p^*=\{p^0,p^1\},\quad p^k:C^k({\mathbb R}_x) \leftarrow C^k({\mathbb R}_y),\ k=0,1,$$ given by $$p^k(s):= s p_k,\ \forall s\in C^k({\mathbb R}_y),$$ is called the cochain map of $p$.
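
In coordinates, the cochain map costs nothing new: if chains are column vectors and a cochain $s$ is the row vector of its values, then $p^k(s)=sp_k$ is row-vector-times-matrix; equivalently, the matrix of $p^k$ is the transpose of that of $p_k$. A sketch, reusing the matrix of the made-up map from the earlier sketches:

```python
import numpy as np

# Chains as column vectors, cochains as row vectors of values.
P0 = np.array([[1, 1, 0],    # the 0-chain map: A -> X, B -> X, C -> Y
               [0, 0, 1]])

s = np.array([[5, -2]])      # the 0-cochain on J with s(X) = 5, s(Y) = -2
print(s @ P0)                # [[ 5  5 -2]]: the cochain p^0(s) = s p_0 on G
```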

Example (inclusion). Suppose: $$G=\{A,B\},\ H=\{A,B,AB\},$$ and $$p(A)=A,\ p(B)=B.$$ This is the inclusion:

[Image: Maps of graphs 2]

$$\begin{array}{ll|ll} C_0(G)=< A,B >,& C_0(H)= < A,B >&\Longrightarrow C^0(G)=< A^*,B^* >,& C^0(H)= < A^*,B^* >,\\ p_0(A)=A,& p_0(B)=B & \Longrightarrow p^0=\operatorname{Id};\\ C_1(G)=0,& C_1(H)= < AB >&\Longrightarrow C^1(G)=0,& C^1(H)= < (AB)^* >\\ &&\Longrightarrow p^1=0. \end{array} $$ $\square$

Exercise. Modify the computation for the case when there is no $AB$.

Exercise. Compute the cochain maps for the following cell maps (pictured previously): $$\begin{array}{llll} &K=\{A,B,C,AB,BC\},\ L=\{X,Y,Z,XY,YZ\};\\ \text{(a) }&f(A)=X,\ f(B)=Y,\ f(C)=Z,\ f(AB)=XY,\ f(BC)=YZ;\\ \text{(b) }&f(A)=X,\ f(B)=Y,\ f(C)=Y,\ f(AB)=XY,\ f(BC)=Y. \end{array}$$

Example (bijection). If $p$ is a bijection, then so are $p_0$ and $p_1$. Then the $k$-cochains on ${\mathbb R}_x$ and ${\mathbb R}_y$ are also in a bijective correspondence. The geometries of the two may be different though. $\square$

Example (folding). Folding ${\mathbb R}_x$ in half and placing it on ${\mathbb R}_y$ results in a special set of cochains on ${\mathbb R}_x$: they have equal values on cells symmetric to each other. $\square$

Example (constant). A constant $p$ produces only trivial $1$-cochains. $\square$

Exercise. What if ${\mathbb R}_x$ is folded onto $[0,1]\subset {\mathbb R}_x$?

Suppose $s$ is a $1$-cochain. Then, $$(sp_1)(AB)=s(p_1(AB))=s\big( [p_0(A),p_0(B)] \big).$$ When presented in the integral notation, this gives us the following.

Theorem (Integration by Substitution). Given a chain map $\{p_0,p_1\}$, we have $$\int_{A}^{B} sp_1=\int_{p_0(A)}^{p_0(B)} s,$$ for any $1$-cochain $s$.
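
A quick numerical check of the theorem, with $p_0(n)=3n$ from the earlier example and an arbitrary sample cochain (the helper `integral` is ours, and it assumes $A\le B$):

```python
s  = lambda n: n ** 2                 # the 1-cochain s([n, n+1]) := n^2, an arbitrary choice
p0 = lambda n: 3 * n                  # p_1 then takes [n, n+1] to the chain from 3n to 3n+3

def integral(cochain, A, B):
    # The integral of a 1-cochain over [A, B]: the sum over the unit edges.
    return sum(cochain(n) for n in range(A, B))

sp1 = lambda n: integral(s, p0(n), p0(n + 1))    # (s p_1)([n, n+1])

A, B = 1, 4
print(integral(sp1, A, B) == integral(s, p0(A), p0(B)))   # True
```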