Combinatorial cell maps

From Mathematics Is A Science

The definition

Below, we will see how the theory of simplicial maps is extended to general cell complexes.

Example (inclusion). As a quick example, consider the inclusion $f$ of the circle into the disk as its boundary:

Inclusion circle in disk.png

Here we have: $$f: K = {\bf S}^1 \to L = {\bf B}^2 .$$ After representing the spaces as cell complexes, we examine to what cell in $L$ each cell in $K$ is taken by $f$:

  • $f(A) = A,$
  • $f(a) = a.$ $\\$

Further, we compute the chain maps $$f_i: C_i(K) \to C_i(L),\ i=0,1,$$ as linear operators by defining them on the generators of these vector spaces:

  • $f_0(A) = A,$
  • $f_1(a) = a.$ $\\$

$\square$

We take the lead from simplicial maps: every $n$-cell $s$ is either cloned, $f(s) \approx s$, or collapsed, $f(s)$ is a $k$-cell with $k < n$.

Simplicial map.png

The result is used as a template to define cell maps. The difference is that, first, if $f(s)$ is attached to itself along its boundary but $s$ isn't, cloning doesn't produce a homeomorphism -- unless restricted to the interior $\dot{s}$ of $s$. The construction is illustrated for dimension $1$ below:

Cell map dim 1.png

The illustration of this construction for dimension $2$ also shows that, in contrast to the simplicial case, a collapsed cell may be stretched over several lower dimensional cells:

Cell map dim 2.png

Definition. Given two cell complexes $K$ and $L$, a continuous map $$f: |K| \to |L|$$ of their realizations is called a cell map (or cellular map) if for every $k$-cell $s$ in $K$ either

  • 1. $f(s)$ is a $k$-cell in $L$ and $f(\dot{s})\approx \dot{s}$ under $f$; or
  • 2. $f(s) \subset L^{(k-1)}$, where $L^{(k-1)}$ is the $(k-1)$-skeleton of $L$.

Exercise. List all possible ways the complex $K=\{A,a,\alpha\}$ can be mapped to another cell complex.

Exercise. Represent the rotations of a circle through $\pi, \pi /2, \pi/3, \pi/4$ as cell maps.

Example. This projection of a triangle on a line segment is not a cell map:

Projection triangle on segment.png

$\square$

Exercise. Choose a different cell complex to represent $Y$ above in such a way that the projection is then a cell map. How about the rotation of a circle through $\sqrt{2}\pi$?

Example (projection). Let's consider the projection of the cylinder on the circle:

Cylinder projection with cells.png

We have:

  • $f(A) = f(B) = A'$, cloned;
  • $f(a) = f(b) = a'$, cloned;
  • $f(c) = A'$, collapsed;
  • $f(\tau) = a'$, collapsed. $\square$

Examples of cubical maps

The next step after defining a cell map is to construct a function that records what happens to the cells.

Let's consider maps of the “cubical circle” to itself $$f: X = {\bf S}^1 \to Y = {\bf S}^1 .$$ We represent $X$ and $Y$ as two identical cubical complexes $K$ and $L$ and then find an appropriate representation $g:K\to L$ for each $f$ in terms of their cells. More precisely, we are after $$g=f_{\Delta}:C(K)\to C(L).$$

Map S to S cubic.png

We will try to find several possible functions $g$ under the following condition:

  • (A) $g$ maps each cell in $K$ to a cell in $L$ of the same dimension, otherwise it's $0$. $\\$

The condition corresponds to the clone/collapse condition for a cell map $f$.

We make up a few examples.

Example (identity). $$\begin{array}{lllllll} g(A) = A, &g(B) = B, &g(C) = C, &g(D) = D, \\ g(a) = a, &g(b) = b, &g(c) = c, &g(d) = d. \end{array}$$ $\square$

Example (constant). $$\begin{array}{lllllll} g(A) = A, &g(B) = A, &g(C) = A, &g(D) = A, \\ g(a) = 0, &g(b) = 0, &g(c) = 0, &g(d) = 0. \end{array}$$ All $1$-cells collapse. Unlike the case of a general constant map, there are only $4$ such maps for these cubical complexes. $\square$

Example (flip). $$\begin{array}{lllllll} g(A) = D, &g(B) = C, &g(C) = B, &g(D) = A, \\ g(a) = c, &g(b) = b, &g(c) = a, &g(d) = d. \end{array}$$ This is a vertical flip; there are also the horizontal and diagonal flips, a total of $4$. Only these four axes allow condition (A) to be satisfied. $\square$

Example (rotation). $$g(A) = B, \ g(B) = C,\ g(C) = D,\ g(D) = A,\ ...$$ This is a quarter-turn of the square. $\square$

Exercise. Complete the example.

Next, let's try these values for our vertices: $$g(A) = A, \ g(B) = C,\ ...$$ This is trouble. Even if we try to find a cell for $g(a)$, it can't be $AC$ because there is no such cell in $L$. Therefore, $g(a)$ won't be aligned with its endpoints and, as a result, $g$ breaks apart. To prevent this from happening, we require that the endpoints of the image in $L$ of any edge in $K$ be the images of the endpoints of the edge.

Furthermore, we want to ensure the cells of all dimensions remain attached after $g$ is applied and we require:

  • (B) $g$ takes boundary to boundary. $\\$

Algebraically, we arrive at the familiar algebraic continuity condition: $$\partial g = g \partial.$$

Exercise. Verify this condition for the examples above.

Observe now that $g$ is defined on the complex $K$, but its values aren't technically all in $L$: there are also $0$s, which aren't cells but chains. Recall that, even though $g$ is defined only on the cells of $K$, it can be extended to all chains by linearity: $$g(A + B) = g(A) + g(B), ...$$

Thus, condition (A) simply means that $g$ maps $k$-chains to $k$-chains. More precisely, $g$ is a collection of functions (a chain map): $$g_k : C_k(K) \to C_k(L),\ k = 0, 1, 2, ....$$ For brevity we use the following notation: $$g : C(K) \to C(L).$$

Example (projection). $$\begin{array}{llllll} g(A) = A, &g(B) = A, &g(C) = D, &g(D) = D, \\ g(a) = 0, &g(b) = d, &g(c) = 0, &g(d) = d. \end{array}$$ Let's verify condition (B): $$\begin{array}{lllllll} \partial g(A) = \partial (A) = 0, \\ g \partial (A) = g(0) = 0. \end{array}$$ Same for the rest of the $0$-cells. $$\begin{array}{llllllll} \partial g(a) = \partial (0) = 0,\\ g \partial (a) = g(A + B) = g(A) + g(B) = A + A = 0. \end{array}$$ Same for $c$. $$\begin{array}{lllllll} \partial g(b) = \partial (d) = A + D, \\ g \partial (b) = g(B + C) = g(B) + g(C) = A + D. \end{array}$$ Same for $d$. $\square$
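This hand verification can be automated. Here is a minimal sketch (the data structures are assumptions, not from the text): over ${\bf Z}_2$, a chain is a set of cells and addition is the symmetric difference; we take the projection onto the edge $AD$, so the vertex $D$ stays fixed.

```python
# A sketch verifying condition (B) for the projection of the cubical
# circle onto the edge AD, over Z_2.  Chains are Python sets of cells;
# mod-2 addition is the symmetric difference.

# Boundary operator of the cubical circle: edges -> pairs of vertices.
boundary = {"a": {"A", "B"}, "b": {"B", "C"}, "c": {"C", "D"}, "d": {"D", "A"}}

# The projection g on generators: 0-cells to 0-chains, 1-cells to 1-chains.
g0 = {"A": {"A"}, "B": {"A"}, "C": {"D"}, "D": {"D"}}
g1 = {"a": set(), "b": {"d"}, "c": set(), "d": {"d"}}

def extend(g, chain):
    """Extend g from cells to chains by linearity over Z_2."""
    out = set()
    for cell in chain:
        out ^= g[cell]          # symmetric difference = mod-2 sum
    return out

def bd(chain):
    """Boundary of a 1-chain."""
    return extend(boundary, chain)

# Check the algebraic continuity condition on every edge.
for e in "abcd":
    assert bd(g1[e]) == extend(g0, boundary[e]), e
print("condition (B) holds: boundary g = g boundary")
```

The symmetric difference makes cancellation ($A + A = 0$) automatic, which is exactly the arithmetic used in the verification above.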

Exercise. Try the “diagonal fold”: $A$ goes to $C$, while $C, B$ and $D$ stay.

In each of these examples, an idea of a map $f$ of the circle/square was present first, then $f$ was realized as a chain map $g$.

Notation: The chain map of $f$ is denoted by $$g = f_{\Delta} : C(K) \to C(L).$$

Let's make sure that this idea makes sense by reversing the construction. This time, suppose instead that we already have a chain map $$g : C(K) \to C(L).$$ What is a possible “realization” of $g$: $$f=|g| : |K| \to |L|?$$ The idea is simple: if we know where each vertex goes under $f$, we can construct the rest of $f$ using linearity, i.e., interpolation.

Example (interpolation). A simple example first. Suppose $$K = \{A, B, a :\ \partial a = B-A \},\ L = \{C, D, b:\ \partial b = D-C \}$$ are two complexes representing closed intervals. Define a chain map: $$g(A) = C,\ g(B) = D,\ g(a) = b.$$ If the first two identities are all we know, we can still construct a continuous function $f : |K| \to |L|$ such that $f_{\Delta} = g$. The third identity will be taken care of by condition (B).

Map interval to interval.png

If we include $|K|$ and $|L|$ as subsets of the $x$-axis and the $y$-axis respectively, the solution becomes obvious: $$f(x) := C + \tfrac{D-C}{B-A} \cdot (x-A).$$ This approach allows us to give a single formula for realizations of all chain operators: $$f(x) := g(A) + \tfrac{g(B)-g(A)}{B-A} \cdot (x-A).$$ For example, suppose we have a constant map: $$g(A) = C,\ g(B) = C,\ g(a) = 0.$$ Then $$\hspace{.33in}f(x) = C + \tfrac{C-C}{B-A} \cdot (x-A) = C.\hspace{.33in}\square$$
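The interpolation formula translates directly into code. A minimal sketch (the function name and the sample endpoints are assumptions, not from the text):

```python
# A sketch of the realization formula
#   f(x) = g(A) + (g(B) - g(A)) / (B - A) * (x - A)
# for a chain map on the interval complex, with |K| on the x-axis
# and |L| on the y-axis.

def realize(A, B, gA, gB):
    """Return the affine map f : [A, B] -> |L| with f(A) = gA, f(B) = gB."""
    def f(x):
        return gA + (gB - gA) / (B - A) * (x - A)
    return f

# g(A) = C = 2, g(B) = D = 5: the interpolation example.
f = realize(0.0, 1.0, 2.0, 5.0)
assert f(0.0) == 2.0 and f(1.0) == 5.0

# A constant chain map g(A) = g(B) = C realizes as a constant function.
const = realize(0.0, 1.0, 2.0, 2.0)
assert const(0.5) == 2.0
print("realizations agree with g on the vertices")
```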

Of course, this repeats exactly the construction of the geometric realization of a simplicial map. There is no difference as long as we stay within dimension $1$, as in all the examples above. For dimensions above $1$, we can think by analogy.

Example. Let's consider a chain map $g$ of the complex $K$ representing the solid square.

Closed 2cell.png

Knowing the values of $g$ on the $0$-cells of $K$ gives us the values of $f=|g|$ at those points. How do we extend it to the rest of $|K|$?

An arbitrary point $u$ in $|K|$ is represented as a convex combination of $A, B, C, D$: $$u = sA + tB + pC + qD, \text{ with } s + t + p + q = 1.$$ Then we define $f(u)$ to be $$f(u) := sf(A) + tf(B) + pf(C) + qf(D).$$ Accordingly, $f(u)$ is a convex combination of $f(A), f(B), f(C), f(D)$. But all of these are vertices of $|L|$, hence $f(u) \in |L|$. $\square$

Example (projection). Let's consider the projection: $$\begin{array}{llllll} g(A) = A, &g(B) = A, &g(C) = D, &g(D) = D, \\ g(a) = 0, &g(b) = d, &g(c) = 0, &g(d) = d, \\ g(\tau) = 0. \end{array}$$ Then, $$\begin{array}{lllllllllll} f(u) &= sf(A) + tf(B) + pf(C) + qf(D) \\ &= sA + tA + pD + qD \\ &= (s+t)A + (p+q)D. \end{array}$$ Due to $(s+t) + (p+q) = 1$, we conclude that $f(u)$ belongs to the interval $AD$. $\square$
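The convex-combination extension can be sketched as follows (the coordinates of the square and the function name are assumptions; the projection sends $B$ to $A$ and $C$ to $D$, fixing $A$ and $D$):

```python
# A sketch of the convex-combination extension for the square:
#   f(u) = s f(A) + t f(B) + p f(C) + q f(D),  with  s + t + p + q = 1.
# Vertices are points in the plane.

A, B, C, D = (0, 0), (1, 0), (1, 1), (0, 1)
fA, fB, fC, fD = A, A, D, D        # the projection onto the edge AD

def extend_f(s, t, p, q):
    """Value of f at u = sA + tB + pC + qD (a convex combination)."""
    assert abs(s + t + p + q - 1) < 1e-12
    x = s*fA[0] + t*fB[0] + p*fC[0] + q*fD[0]
    y = s*fA[1] + t*fB[1] + p*fC[1] + q*fD[1]
    return (x, y)

# The center of the square lands on the midpoint of the edge AD.
assert extend_f(0.25, 0.25, 0.25, 0.25) == (0.0, 0.5)
print("every value is a convex combination of vertices of |L|")
```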

Exercise. Are these maps well-defined? Hint: the construction for simplicial maps is based on barycentric coordinates.

Modules

Before we proceed to build the homology theory of maps, we review what it takes to have an arbitrary ring of coefficients $R$.

Recall that

  • with $R={\bf Z}$, the chain groups and the homology groups are abelian groups,
  • with $R={\bf R}$ (or other fields), the chain groups and the homology groups are vector spaces, and now
  • with an arbitrary $R$, the chain groups and the homology groups are modules.

Informally,

modules are vector spaces over rings.

The following definitions and results can be found in the standard literature such as Hungerford, Algebra (Chapter IV).

Definition. Given a commutative ring $R$ with the multiplicative identity $1_R$, a (commutative) $R$-module $M$ consists of an abelian group $(M, +)$ and a scalar product operation $R \times M \to M$ such that for all $r,s \in R$ and $x, y \in M$, we have:

  • $r(x+y) = rx + ry$,
  • $(r+s)x = rx + sx$,
  • $(rs)x = r(sx)$,
  • $1_Rx = x$. $\\$

The scalar multiplication can be written on the left or right.

If $R$ is a field, an $R$-module is a vector space.
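These axioms are easy to check by brute force for a small example. A sketch (not from the text) verifying them for $R=M={\bf Z}_6$, i.e., the ring acting on itself, a module that is not a vector space:

```python
# A sketch checking the four module axioms exhaustively for M = R = Z_6,
# with the scalar product given by multiplication mod 6.

def add(x, y):
    """Addition in the abelian group (Z_6, +)."""
    return (x + y) % 6

def smul(r, x):
    """Scalar product R x M -> M."""
    return (r * x) % 6

for r in range(6):
    for s in range(6):
        for x in range(6):
            for y in range(6):
                assert smul(r, add(x, y)) == add(smul(r, x), smul(r, y))
                assert smul((r + s) % 6, x) == add(smul(r, x), smul(s, x))
                assert smul((r * s) % 6, x) == smul(r, smul(s, x))
for x in range(6):
    assert smul(1, x) == x          # 1_R x = x

print("Z_6 is a module over itself")
```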

The rest of the definitions are virtually identical to the ones for vector spaces.

A subgroup $N$ of $M$ is a submodule if it is closed under scalar multiplication: for any $n \in N$ and any $r\in R$, we have $rn \in N$.

A group homomorphism $f: M \to N$ is a (module) homomorphism (or a linear operator) if it preserves the scalar multiplication: for any $m,n \in M$ and any $r, s \in R$, we have $f(rm + sn) = rf(m) + sf(n)$.

A bijective module homomorphism is an (module) isomorphism, and the two modules are called isomorphic.

Exercise. Prove that this is a category.

The kernel of a module homomorphism $f : M \to N$ is the submodule of $M$ consisting of all elements that are taken to zero by $f$. The isomorphism theorems of group theory are still valid.

A module $M$ is called finitely generated if there exist finitely many elements $v_1,v_2, ...,v_n \in M$ such that every element of $M$ is a linear combination of these elements (with coefficients in $R$).

A module $M$ is called free if it has a basis. This condition is equivalent to: $M$ is isomorphic to a direct sum of copies of the ring $R$. Over ${\bf Z}$, every submodule $L$ of a free module is itself free; however, unlike the case of vector spaces, $L$ doesn't have to be a direct summand, i.e., we may not be able to write $$M=L\oplus N,$$ for some other submodule $N$ of $M$: take $L=2{\bf Z}$ inside $M={\bf Z}$.

Of course, ${\bf Z}^n$ is free and finitely generated. This module is our primary interest because that is what every chain group over the integers has been. It behaves very similarly to ${\bf R}^n$; the main differences lie in the following two related areas.

First, the quotients may have torsion, such as in ${\bf Z}/ 2{\bf Z} \cong {\bf Z}_2$. We have seen this happen in our computations of the homology groups.

Second, some operators invertible over ${\bf R}$ may be singular over ${\bf Z}$. Take $f(x)=2x$ as an example.
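As a quick illustration of the second point, here is a hypothetical helper (not from the text) that tries to invert $f(x)=2x$ over the integers:

```python
# f(x) = 2x is invertible over R but not over Z: the equation 2x = y
# has an integer solution only for even y.

def preimage_over_Z(y):
    """Solve 2x = y over the integers; return None when there is no solution."""
    return y // 2 if y % 2 == 0 else None

assert preimage_over_Z(6) == 3       # 2x = 6 has the solution x = 3
assert preimage_over_Z(1) is None    # 2x = 1 has none: f is not onto Z
print("f(x) = 2x is injective but not invertible over Z")
```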

We will refer to finitely generated free modules as simply modules.

Maps and chain maps

The topological setup above is now translated into algebra. From a cell map, we construct maps on the chain groups of the two cell complexes.

Definition. Given a cell map $f: |K| \to |L|$, the $k$th chain map generated by map $f$, $$f_k: C_k(K) \to C_k(L),$$ is defined on the generators as follows. For each $k$-cell $s$ in $K$,

  • 1. if $s$ is cloned by $f$, define $f_k(s) := \pm f(s)$, with the sign determined by the orientation of the cell $f(s)$ in $L$ induced by $f$;
  • 2. if $s$ is collapsed by $f$, define $f_k(s) := 0$. $\\$

Also, $$f_{\Delta} = \{f_i:i=0,1, ... \}:C(K)\to C(L)$$ is the (total) chain map generated by $f$.

In items 1 and 2 of the definition, the left-hand sides are the values of $s$ under $f_k$, while the right-hand sides are the images of $s$ under $f$.

Note: The notion of orientation was fully developed for both simplicial and cubical complexes. For the general case of cell complexes, we just point out that for dimensions $0$ and $1$ the notion applies without change, while in dimension $2$ the orientation of a cell is simply the direction of a trip around its boundary. Also, we can avoid dealing with the issue of orientation by resorting, with a certain penalty, to algebra over $R={\bf Z}_2$.

Example (projection). Let's consider the projection of the cylinder on the circle again:

Cylinder projection cell map.png

From the images of the cells under $f$ listed above, we conclude:

  • $f_0(A) = f_0(B) = A';$
  • $f_1(a) = f_1(b) = a';\ f_1(c) = 0;$
  • $f_2(\tau) = 0.$

Now we need to work out the operators:

  • $f_0:\ C_0(K) = < A, B > \to C_0(L) = < A' >;$
  • $f_1:\ C_1(K) = < a, b, c >\to C_1(L) = < a' >;$
  • $f_2:\ C_2(K) = < \tau > \to C_2(L) = 0.$

From the values of the operators on the basis elements, we conclude:

  • $f_0 = [1, 1];$
  • $f_1 = [1, 1, 0];$
  • $f_2 = 0.$$\square$

Recall also that a chain map, being a combination of maps between chain complexes, has, in addition, to take boundary to boundary or, more precisely, to commute with the boundary operator.

Theorem. If $f$ is a cell map, then $f_{\Delta}$ is a chain map: $$\partial_kf_k = f_{k-1}\partial_k,\ \forall k.$$

Proof. Done for simplicial maps. $\blacksquare$

Example. To continue the above example, the diagram has to commute: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{c|cccccccccccccc} & 0 & \ra{\partial_3} & C_2 & \ra{\partial_2} & C_1 & \ra{\partial_1} & C_0 & \ra{\partial_0} &0 \\ \hline C(K): & 0 & \ra{0} & {\bf R} & \ra{\partial_2} & {\bf R}^3 & \ra{\partial_1} & {\bf R}^2 & \ra{0} &0 \\ f_{\Delta}: &\ \ \da{0} & &\ \ \ \da{f_2} & & \ \da{f_1} & & \ \da{f_0} & & \ \da{0}\\ C(L): & 0 & \ra{0} & 0 & \ra{\partial_2} & {\bf R} & \ra{\partial_1} & {\bf R} & \ra{0} &0 \\ \end{array} $$ Let's verify that the identity is satisfied in each square above. First we list the boundary operators for the two chain complexes: $$\begin{array}{lll} \partial^K_2(\tau)=a-b, &\partial^K_1(a)=\partial^K_1(b)=0,\ \partial^K_1(c)=B-A, &\partial^K_0(A)=\partial^K_0(B)=0;\\ \partial^L_2=0, &\partial^L_1(a')=0, &\partial^L_0(A')=0. \end{array}$$ Now we go through the diagram from left to right. $$\begin{array}{l|l|l|l} \partial_2f_2(\tau) = f_1\partial_2(\tau)&{\partial}_1f_1(a) = f_0{\partial}_1(a)&\partial_1f_1(b) = f_0\partial_1(b)&{\partial}_1f_1(c) = f_0{\partial}_1(c)\\ \partial_2(0) = f_1(a - b) &{\partial}_1(a') = f_0(0)&\partial_1(a') = f_0(0)&{\partial}_1(0) = f_0(B - A) = A' - A'\\ 0 = a' - a' = 0, {\rm \hspace{3pt} OK}&0 = 0, {\rm \hspace{3pt} OK}&0 = 0, {\rm \hspace{3pt} OK}&0 = 0, {\rm \hspace{3pt} OK} \end{array}$$ So, $f_{\Delta}$ is indeed a chain map. $\square$
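The same verification can be delegated to matrix multiplication. A sketch in pure Python (the helper `matmul` is an assumption, not from the text), with the bases ordered as listed in the example:

```python
# A sketch checking  partial f = f partial  on the matrices of the
# cylinder-to-circle projection.
# Bases: C_2(K) = <tau>, C_1(K) = <a, b, c>, C_0(K) = <A, B>;
#        C_1(L) = <a'>,  C_0(L) = <A'>.

def matmul(P, Q):
    """Multiply two matrices given as lists of rows."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

dK2 = [[1], [-1], [0]]              # d(tau) = a - b        (C_2 -> C_1)
dK1 = [[0, 0, -1], [0, 0, 1]]       # d(c) = B - A          (C_1 -> C_0)
dL1 = [[0]]                         # d(a') = 0             (C_1 -> C_0)
f1  = [[1, 1, 0]]                   # a, b -> a';  c -> 0
f0  = [[1, 1]]                      # A, B -> A'

# Square at k = 1:  dL1 * f1 == f0 * dK1.
assert matmul(dL1, f1) == matmul(f0, dK1) == [[0, 0, 0]]

# Square at k = 2:  f2 = 0, so f1 * dK2 must vanish too.
assert matmul(f1, dK2) == [[0]]
print("the diagram commutes")
```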

Exercise. Represent the cylinder as a complex with two $2$-cells, find an appropriate cell complex for the circle and an appropriate cell map for the projection, compute the chain map, and confirm that the diagram commutes.

For a cell map, we have defined its chain maps as the homomorphisms between the chain groups of the complexes. At this point, we can ignore the origin of these new maps and proceed to chain maps in a purely algebraic manner. Fortunately, this part was fully developed for simplicial complexes and maps and, being algebraic, the development is identical for cell complexes. A quick review follows.

First, we suppose that we have two chain complexes, i.e., combinations of modules and homomorphisms between these modules, called the boundary operators: $$\begin{array}{llll} M:=\{M_i,\partial^M_i:&M_i\to M_{i-1}:&i=0,1, ...\},\\ N:=\{N_i,\partial^N_i:&N_i\to N_{i-1}:&i=0,1, ...\}, \end{array}$$ with $$M_{-1}=N_{-1}=0.$$ As chain complexes, they are to satisfy the “double boundary identity”: $$\begin{array}{llll} \partial^M_i \partial^M_{i+1} =0,& i=0,1,2, ...;\\ \partial^N_i \partial^N_{i+1} =0,& i=0,1,2, .... \end{array}$$ The compact form of this condition is, for both: $$\partial\partial =0.$$

Second, we suppose that we have a chain map as a combination of homomorphisms between the corresponding items of the two chain complexes: $$f_{\Delta}=\{f_i:\ M_i\to N_i:\ i=0,1, ...\}:M\to N.$$ As a chain map, it is to satisfy the “algebraic continuity condition”: $$\partial^N_i f_i = f_{i-1} \partial^M_i,\ i=0,1, ....$$ The compact form of this condition is: $$f\partial =\partial f.$$ In other words, the diagram commutes: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccccccc} M_{i+1}& \ra{\partial^M_{i+1}} & M_i\\ \da{f_{i+1}}& \searrow & \da{f_i}\\ N_{i+1}& \ra{\partial^N_{i+1}} & N_i \end{array} $$

This combination of modules and homomorphisms forms a diagram with the two chain complexes occupying the two rows and the chain map connecting them by the vertical arrows, item by item: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{cccccccccccccccccccc} ...& \to & M_{k+1}& \ra{\partial^M_{k+1}} & M_{k}& \ra{\partial^M_{k}} & M_{k-1}& \to &...& \to & 0\\ ...& & \da{f_{k+1}}& & \da{f_{k}}& & \da{f_{k-1}}& &...& & \\ ...& \to & N_{k+1}& \ra{\partial^N_{k+1}} & N_{k}& \ra{\partial^N_{k}} & N_{k-1}& \to &...& \to & 0 \end{array} $$ Each square commutes.

Example. From the last subsection, we have the following chain maps: $$\begin{array}{lllllllllll} f_0: & M_0 = < A, B > & \to & N_0 = < A' >, & f_0(A) = f_0(B) = A';\\ f_1: & M_1 = < a, b, c > & \to & N_1 = < a' >, & f_1(a) = f_1(b) = a', f_1(c) = 0;\\ f_2: & M_2 = < \tau > & \to & N_2 = 0, & f_2(\tau) = 0.\ \end{array}$$ From the algebraic point of view, the nature of these generators is irrelevant... Now, the diagram is: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{c|cccccccccc} & 0 & \ra{\partial_3} & C_2 & \ra{\partial_2} & C_1 & \ra{\partial_1} & C_0 & \ra{\partial_0} &0 \\ \hline M: & 0 & \ra{0} & < \tau > & \ra{[1,-1,0]^T} & < a, b, c > & \ra{{\tiny \left[\begin{array}{ccc}0&0&-1\\0&0&1\end{array}\right]}} & < A,B > & \ra{0} &0 \\ f_{\Delta}: &\ \da{0} & &\ \ \da{0} & &\quad\quad \da{[1,1,0]} & &\ \quad\da{[1,1]} & &\ \ \da{0}\\ N: & 0 & \ra{0} & 0 & \ra{0} & < a' > & \ra{0} & < A' > & \ra{0} & 0 \end{array} $$ $\square$

Examples of chain maps

Let's consider a few examples with some typical maps that exhibit common behavior.

We start with maps of the circle to itself. Such a map can be thought of as a circular rope, $X$, being fitted into a circular groove, $Y$. You can carefully transport the rope to the groove without disturbing its shape to get the identity map, or you can compress it into a tight knot to get the constant map, etc.:

Map S to S as rope.png

To build cell map representations for these maps, we use the following cell complexes of the circles:

Map from S to S.png

Example (constant). If $f: X \to Y$ is a constant map, all $k$-chains of $X$ with $k>0$ collapse or, algebraically speaking, are mapped to $0$ by $f_{\Delta}$. Meanwhile, all vertices of $X$ are mapped to the same vertex of $Y$. $\square$

Example (identity). If $f$ is the identity map, we have $$\begin{array}{llllll} &f_{\Delta} (A) = A', \\ \hspace{.39in}&f_{\Delta} (a) = a'.&\hspace{.39in}\square \end{array}$$

Example (flip). Suppose $f$ is a flip (a reflection about the $y$-axis) of the circle. Then $$\hspace{.42in} f_1(a) = -a'. \hspace{.42in}\square$$

Example (turn). You can also turn (or rotate) the rope before placing it into the groove. The resulting map is very similar to the identity regardless of the degree of the turn. Indeed, we have: $$f_*(a) = a'.$$ Even though the map is simple, this isn't a cell map! $\square$

Example (wrap). If you wind the rope twice before placing it into the groove, you get: $$f_*(a) = 2a'.$$ Once again, this isn't a cell map! $\square$

We will need to subdivide the complex(es) to deal with this issue...

Example (inclusion). In all the examples above, we have $$f_*(A) = A'.$$ Let's consider an example that illustrates what else can happen to $0$-classes. Consider the inclusion of the two endpoints of a segment into the segment. Then $f : X = \{A, B\} \to Y = [A,B]$ is given by $f(A) = A,\ f(B) = B$. Now, even though $A$ and $B$ aren't homologous in $X$, their images under $f$ are, in $Y$. So, $f(A) \sim f(B)$. In other words, $$f_*([A]) = f_*([B]) = [A] = [B].$$ Algebraically, $[A] - [B]$ is mapped to $0$. $\square$

Example (collapse). A more advanced example of collapse is that of the torus to a circle, $f: X = {\bf T}^2 \to Y = {\bf S}^1$.

Torus collapse.png

We choose one longitude $L$ (in red) and then move every point of the torus to its nearest point on $L$. $\square$

Example (inversion). And here's turning the torus inside out:

Torus from cylinder 5.png

$\square$

Exercise. Consider the following self-maps of the torus ${\bf T}^2$:

  • (a) collapsing it to a meridian,
  • (b) collapsing it to the equator (above),
  • (c) collapsing a meridian to a point,
  • (d) gluing the outer equator to the inner equator,
  • (e) turning it inside out. $\\$

For each of those,

  • describe the cell structure of the complexes,
  • represent the map as a cell map,
  • compute the chain maps of this map.

Example (embedding). We consider maps from the circle to the Möbius band: $$f:{\bf S}^1 \to {\bf M}^2.$$ First, we map the circle to the median circle of the band:

Map S1 to M2 median.png

As you can see, we are forced to subdivide the band's square into two to turn this map into a cell map. Then we have: $$f(X)=f(Y)=M \text{ and } f(x)=m.$$ Second, we map the circle to the edge of the band:

Map S1 to M2 edge.png

As you can see, we have to subdivide the circle's edge into two edges to turn this map into a cell map. Then we have: $$f(X)=B,\ f(Y)=A \text{ and } f(x)=b,\ f(y)=d.$$ $\square$

Exercise. Provide details of the computations in the last example.

Exercise. Consider a few possible maps for each of these and compute their chain maps:

  • embeddings of the circle into the torus;
  • self-maps of the figure eight;
  • embeddings of the circle to the sphere.

Let's review how the theory of maps is built one more time:

A cell map produces its chain map.

That's the whole theory.

Now, algebraically: $$\begin{array}{cccccccccc} f: &|K| &\to & |L|& \leadsto \\ \\ f_{\Delta}: &C(K) & \to & C(L) & \leadsto \end{array}$$

Theorem. The identity cell map induces the identity chain map: $$\Big( \operatorname{Id}_{|K|} \Big) _i= \operatorname{Id}_{C_i(K)}.$$

Theorem. The chain map of the composition of two cell maps is the composition of their chain maps: $$(gf)_i = g_if_i.$$
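In coordinates, these two theorems amount to matrix algebra: the chain map of a composition is the product of the matrices. A sketch, where $f_1$ is taken from the cylinder projection above and $g_1$ is a hypothetical chain map doubling the circle's edge (an assumption, not from the text):

```python
# A sketch of functoriality in coordinates: composing chain maps is
# multiplying their matrices, and the identity matrix is the identity map.

def matmul(P, Q):
    """Multiply two matrices given as lists of rows."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

f1 = [[1, 1, 0]]   # the cylinder projection on 1-chains: a, b -> a', c -> 0
g1 = [[2]]         # hypothetical: the circle's edge is doubled, a' -> 2a'

# (gf)_1 = g_1 f_1 as matrices.
assert matmul(g1, f1) == [[2, 2, 0]]

# The identity chain map leaves f_1 unchanged.
assert matmul([[1]], f1) == f1
print("(gf)_1 = g_1 f_1")
```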

The derivative of a cell map

Consider the two standard ways to write the derivative of a function $f$ at $x=a$: $$\tfrac{dy}{dx} = f'(a).$$ What we know from calculus is that the left-hand side is not a fraction but the equation can be rewritten as if it is: $$dy = f'(a) dx.$$ The equation represents the relation between the increment of $x$ and that of $y$ -- in the vicinity of $a$. This information is written in terms of a new coordinate system, $(dx,dy)$, and the best affine approximation (given by the tangent line) becomes a linear function in this system:

Tangent and differentials.png

Things are much simpler in the discrete case.

Suppose $X$ and $Y$ are two cell complexes and $f:X\to Y$ is a cell map. Then “in the vicinity of point $a$” becomes “in the star of vertex $A$”:

Tangent and differentials discrete.png

If we zoom out on the last picture, this is what we see:

Derivative and tangent spaces discrete.png

A cell map takes vertices to vertices and edges to edges and that's what makes the $0$- and $1$-chain maps possible. Then,

  • the locations are taken care of by $f_0:C_0(X)\to C_0(Y)$, and
  • the directions are taken care of by $f_1:C_1(X)\to C_1(Y)$.

Example. Let's consider cell maps of the “cubical circle” (i.e., ${\bf S}^1$ represented by a $4$-edge cubical complex) to itself, $f: X \to X$:

Map S to S cubic.png

Given a vertex, we only need to look at what happens to the edges adjacent to it. We assume that the bases are ordered according to their letters, such as $\{AB,BC\}$.

The derivatives of these functions are found below.

Identity: $$\begin{array}{lllllll} f_0(A) = A, &f_0(B) = B, &f_0(C) = C, &f_0(D) = D, \\ \Longrightarrow & f'(A)(AB)=AB, &f'(A)(AD)=AD. \end{array}$$ It's the identity map.

Constant: $$\begin{array}{llllllll} f_0(A) = A, &f_0(B) = A, &f_0(C) = A, &f_0(D) = A, \\ \Longrightarrow & f'(A)(AB)=AA=0, &f'(A)(AD)=AA=0. \end{array}$$ It's the zero map.

Vertical flip: $$\begin{array}{llllllllll} f_0(A) = D, &f_0(B) = C, &f_0(C) = B, &f_0(D) = A, \\ \Longrightarrow & f'(A)(AB)=DC, &f'(A)(AD)=DA. \end{array}$$ The matrix of the derivative is $$\hspace{.37in}f'(A)=\left[ \begin{array}{ccccccc} 0&1\\ 1&0 \end{array} \right]. \begin{array}{cccc} \\ \hspace{.37in}\square \end{array}$$
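The computations above can be sketched as a lookup on the star of a vertex; edges are represented here as sets of endpoints (an assumption, not the text's notation), so a collapsed edge shows up as a one-element set:

```python
# A sketch of the discrete derivative at a vertex: the value of f on an
# adjacent edge is determined by the images of its endpoints.

f0 = {"A": "D", "B": "C", "C": "B", "D": "A"}   # the vertical flip on vertices

def derivative(vertex, edge):
    """Image of an edge adjacent to `vertex`; a singleton means a collapse."""
    assert vertex in edge
    return {f0[v] for v in edge}

# f'(A)(AB) = DC and f'(A)(AD) = DA, matching the example.
assert derivative("A", {"A", "B"}) == {"D", "C"}
assert derivative("A", {"A", "D"}) == {"D", "A"}
print("the flip clones both edges at A")
```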

Exercise. Repeat these computations for (a) the rotations; (b) the horizontal flip; (c) the diagonal flip; (d) the diagonal fold. Hint: the value of the derivative varies from point to point.

Theorem (Properties of the derivative). For a given vertex and an adjacent edge, the derivative satisfies the following properties: $\\$

$\hspace{5mm}\bullet$ The derivative of a constant is zero in the second component: $$(C)'=(C,0),\ C\in Y.$$ $\hspace{5mm}\bullet$ The derivative of the identity is the identity: $$(\operatorname{Id})'=\operatorname{Id}.$$ $\hspace{5mm}\bullet$ The derivative of the composition is the composition of the derivatives: $$(fg)'=f'g'.$$ $\hspace{5mm}\bullet$ The derivative of the inverse is the inverse of the derivative: $$(f^{-1})'=(f')^{-1}.$$

Exercise. Prove the theorem.

Exercise. Prove that if $|f|$ is a homeomorphism, then $f'$ is invertible at every vertex.

Chain maps

A cell map can't model jumping diagonally across a square.

Vector field and a jump.png

The issue is related to one previously discussed: cell map extensions vs. chain map extensions (subsection V.3.10). Recall that in the former case, extensions may require subdivisions of the cell complex. The situation when the domain is $1$-dimensional is transparent:

Chain approximation.png

In the former case, we can create a cell map: $$g(AB):=XY,$$ by extending its values from vertices to edges. In the latter case, an attempt of cell extension (without subdivisions) fails as there is no single edge connecting the two vertices. However, there is a chain of edges: $$g(AB):=XY+YZ.$$
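The boundary-preservation claim for this chain of edges can be checked directly. A sketch (the dictionary representation of integer chains is an assumption, not from the text):

```python
# A sketch checking that the assignment g(AB) := XY + YZ commutes with
# the boundary operator over the integers.  A chain is a dict
# cell -> integer coefficient.

def add_chains(*chains):
    """Sum integer chains, dropping cells with zero coefficient."""
    out = {}
    for c in chains:
        for cell, coeff in c.items():
            out[cell] = out.get(cell, 0) + coeff
    return {cell: k for cell, k in out.items() if k != 0}

def negate(c):
    return {cell: -k for cell, k in c.items()}

g0 = {"A": {"X": 1}, "B": {"Z": 1}}          # vertices: A -> X, B -> Z

# Boundaries in L: d(XY) = Y - X, d(YZ) = Z - Y.
lhs = add_chains({"Y": 1, "X": -1}, {"Z": 1, "Y": -1})   # d(g(AB))
rhs = add_chains(g0["B"], negate(g0["A"]))               # g(d(AB)) = g(B - A)

assert lhs == rhs == {"Z": 1, "X": -1}
print("boundary is preserved: d g = g d")
```

The middle vertex $Y$ cancels, which is exactly why the chain of edges is still “algebraically continuous.”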

Even though the linearity cannot be assumed, the illustration alone suggests a certain continuity of this new “map”. In fact, we know that chain maps are continuous in the algebraic sense: they preserve boundaries, $$g_0\partial = \partial g_1.$$ The idea is also justified by the meaning of the derivative of a cell map $f$: $$f'\Big(A,[A,A+1] \Big)= \Big(f_0(A),f_1([A,A+1]) \Big).$$ It is nothing but a combination of the $0$- and the $1$-chain maps of $f$...

Suppose we are given $f$, a $0$-form on ${\mathbb R}$. Then we would like to interpret the pair $g=\{f,df\}$ as some chain map defined on $C({\mathbb R})$, the chain complex of time. What is the other chain complex $C$, the chain complex of space? Since these two forms take their values in the ring $R$, we can choose $C$ to be the trivial combination of two copies of $R$: $$\partial=\operatorname{Id}:R \to R.$$ Below, we consider the more general setting of $k$-forms.

Theorem. Cochains are chain maps, in the following sense: for every $k$-cochain $f$ on $K$, there is a chain map from $C(K)$ to the chain complex $C$ with only one non-zero part, $\operatorname{Id}:C_{k+1}=R \to C_k=R$, as shown in the following commutative diagram: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{r|ccccccccccc} C(K):& ... & C_{k+2}(K) & \ra{\partial} & C_{k+1}(K) & \ra{\partial} & C_k(K) & \ra{\partial} & C_{k-1}(K) & ... \\ f: & \ & \ \da{0} & & \ \da{df} & & \ \da{f} & & \ \da{0}&\\ C: & ... & 0 & \ra{\partial=0} & R & \ra{\partial=\operatorname{Id}} & R & \ra{\partial=0} &0&... \end{array} $$

Proof. We need to prove the commutativity of each of these squares. We go around each square in two ways and demonstrate that the results are the same. We use the duality $d=\partial^*$, i.e., $df=f\partial$.

For the first square: $$df\, \partial =(f\partial)\partial =f(\partial\partial)=0.$$ For the second square: $$\operatorname{Id} df=df=f\partial.$$ The third square (and the rest) is zero. $\blacksquare$