This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

# Cohomology

## Vectors and covectors

What is the relation between $2$, the number, and “doubling”, the function $f(x)=2\cdot x$?

Linear algebra helps one appreciate this seemingly trivial relation. Indeed, the answer is given by a linear operator, $${\bf R} \to \mathcal{L}({\bf R},{\bf R}),$$ from the reals to the vector space of all linear functions on the reals. In fact, it's an isomorphism!
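This correspondence is easy to illustrate in code; here is a minimal Python sketch (our own illustration, with hypothetical names `to_operator` and `to_number`):

```python
# Sketch of the isomorphism R -> L(R, R):
# the number r corresponds to the linear function f_r(x) = r * x.

def to_operator(r):
    """Send the number r to the linear function f_r."""
    return lambda x: r * x

def to_number(f):
    """Recover the number from the function: r = f(1)."""
    return f(1)

f = to_operator(2.0)          # "doubling"
assert f(3.0) == 6.0          # f is multiplication by 2
assert to_number(f) == 2.0    # the correspondence is invertible
# Linearity: f(ax + by) = a f(x) + b f(y)
assert f(5 * 3.0 + 7 * 4.0) == 5 * f(3.0) + 7 * f(4.0)
```

The inverse map evaluates a function at $1$, which recovers the number; this is exactly why the operator is invertible.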

More generally, suppose

• $R$ is a commutative ring, and
• $V$ is a finitely generated free module over $R$.

Definition. Let the dual of $V$ be defined by $$V^* := \{ \alpha :V \to R, \alpha \text{ linear }\}.$$

If the elements of $V$ are called vectors, those of $V^*$ are called covectors.

Example. An illustration of a vector $v\in V={\bf R}^2$ and a covector $u\in V^*$ is given below:

Here, a vector is just a pair of numbers, while a covector assigns a number to each unit vector. The linearity is visible.

$\square$

Exercise. Explain the alternative way a covector can be visualized as shown below. Hint: It resembles an oil spill.

Example. In the above example, it is easy to see a natural way of building a vector $w$ from this covector $u$. Indeed, let's pick $w$ such that

• the direction of $w$ is the one in which the covector $u$ attains its largest value (i.e., $2$), and
• the magnitude of $w$ is that value of $u$.

So the result is $w=(2,2)$. Moreover, covector $u$ can be reconstructed from this vector $w$.

$\square$

Exercise. What does this construction have to do with the norm of a linear operator?

Following the idea of this terminology, we add “co” to a word to indicate its dual. Such is the relation between chains and cochains (forms). In that sense, $2$ is a number and $2x$ is a “conumber”, $2^*$.

Exercise. What is a “comatrix”?

## The dual basis

Proposition. The dual $V^*$ of module $V$ is also a module, with the operations for $\alpha, \beta \in V^*,\ r \in R$ given by: $$\begin{array}{llll} (\alpha + \beta)(v) := \alpha(v) + \beta(v),\ v \in V;\\ (r \alpha)(w) := r\alpha(w),\ w \in V. \end{array}$$

Exercise. Prove the proposition for $R={\bf R}$. Hint: Start with indicating what $0, -\alpha \in V^*$ are, and then refer to theorems of linear algebra.

Below we assume that $V$ is finite-dimensional. Suppose also that we are given a basis $\{u_1,...,u_n\}$ of $V$.

Definition. The dual $u_p^*\in V^*$ of $u_p\in V$ is defined by: $$u_p^*(u_i):=\delta _{ip} ,\ i = 1,...,n;$$ or $$u_p^*(r_1u_1+...+r_nu_n) = r_p.$$

Exercise. Prove that $u_p^* \in V^*$.

Definition. The dual of the basis of $V$ is $\{u_1^*,...,u_n^*\}$.

Example. The dual of the standard basis of $V={\bf R}^2$ is shown below:

$\square$
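In general, if the basis vectors are written as the columns of a matrix $B$, the dual covectors are the rows of $B^{-1}$: the identity $B^{-1}B=I$ is precisely the condition $u_p^*(u_i)=\delta_{ip}$. A numerical sketch (the basis here is our own example):

```python
import numpy as np

# Basis u_1 = (1, 1), u_2 = (0, 1) of R^2, as the columns of B:
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])

# The dual covectors u_1^*, u_2^* are the rows of B^{-1}:
# row p of B^{-1} times column i of B equals delta_{ip}.
B_inv = np.linalg.inv(B)

for p in range(2):
    for i in range(2):
        value = B_inv[p] @ B[:, i]     # u_p^*(u_i)
        assert np.isclose(value, 1.0 if p == i else 0.0)

# u_1^* reads off the coefficient r_1 in v = r_1 u_1 + r_2 u_2:
v = 3 * B[:, 0] + 5 * B[:, 1]
assert np.isclose(B_inv[0] @ v, 3.0)
```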

Let's prove that the dual of a basis is a basis. It takes two steps.

Proposition. The set $\{u_1^*,...,u_n^*\}$ is linearly independent in $V^*$.

Proof. Suppose $$s_1u_1^* + ... + s_nu_n^* = 0$$ for some $s_1,...,s_n \in R$. This means that $$s_1u_1^*(u)+...+s_nu_n^*(u)=0 ,$$ for all $u \in V$. For each $i=1,...,n$, we do the following. We choose $u:=u_i$ and substitute it into the above equation: $$s_1u_1^*(u_i)+...+s_nu_n^*(u_i)=0.$$ Then we use $u_j^*(u_i)=\delta_{ij}$ to reduce the equation to: $$s_1\cdot 0+...+s_i\cdot 1+...+s_n\cdot 0=0.$$ We conclude that $s_i=0$. The statement of the proposition follows. $\blacksquare$

Proposition. The set $\{u_1^*,...,u_n^*\}$ spans $V^*$.

Proof. Given $u^* \in V^*$, let $r_i := u^*(u_i) \in R,\ i=1,...,n$. Now define $$v^* := r_1u_1^* + ... + r_nu_n^*.$$ Consider $$v^*(u_i) = r_1u_1^*(u_i) + ... + r_nu_n^*(u_i) = r_i.$$ So the values of $u^*$ and $v^*$ match for all elements of the basis of $V$. Thus $u^*=v^*$. $\blacksquare$

Exercise. Find the dual of ${\bf R}^2$ for two different choices of bases.

Corollary. $$\dim V^* = \dim V = n.$$

Therefore, by the Classification Theorem of Vector Spaces, we have the following:

Corollary. $$V^* \cong V.$$

Even though a module is isomorphic to its dual, the behaviors of the linear operators on these two spaces aren't “aligned”, as we will show. Moreover, the isomorphism is dependent on the choice of basis.

The relation between a module and its dual is revealed if we look at vectors as column-vectors (as always) and covectors as row-vectors: $$V = \left\{ x=\left[ \begin{array}{c} x_1 \\ \vdots \\ x_n \end{array} \right] \right\}, \quad V^* = \{y=[y_1,...,y_n]\}.$$ Then, we can multiply the two as matrices: $$y\cdot x=[y_1,...,y_n] \cdot \left[ \begin{array}{c} x_1 \\ \vdots \\ x_n \end{array} \right] =x_1y_1+...+x_ny_n.$$

Exercise. Prove the formula.

As before, we utilize the similarity to the dot product and, for $x\in V,y\in V^*$, represent $y$ evaluated at $x$ as: $$\langle x,y \rangle:=y(x).$$ This isn't the dot product or an inner product, which is symmetric. It is called a pairing: $$\langle \ ,\ \rangle: V^*\times V \to R,$$ which is linear with respect to either of the components.
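A small numerical sketch of the pairing as row-times-column multiplication, with its linearity in each slot (the vectors are arbitrary choices of ours):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])      # a vector (column)
y = np.array([4.0, 0.0, -1.0])     # a covector (row)

def pair(y, x):
    """The pairing <x, y> := y(x), computed as row-times-column."""
    return y @ x

assert pair(y, x) == 1.0   # 1*4 + 2*0 + 3*(-1)

# Linearity in each slot:
x2 = np.array([0.0, 1.0, 1.0])
y2 = np.array([2.0, 2.0, 2.0])
assert np.isclose(pair(y, 3 * x + 5 * x2), 3 * pair(y, x) + 5 * pair(y, x2))
assert np.isclose(pair(3 * y + 5 * y2, x), 3 * pair(y, x) + 5 * pair(y2, x))
```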

Exercise. Show that the pairing is independent from a choice of basis.

Exercise. Prove that, if the spaces are finite-dimensional, we have $$\dim \mathcal{L}(V,U) = \dim V \cdot \dim U.$$

Exercise. When $V$ is infinite-dimensional with a fixed basis $\gamma$, its dual is defined as the set of all linear functions $\alpha:V\to R$ that are equal to zero for all but a finite number of elements of $\gamma$. (a) Prove the infinite-dimensional analogs of the results above. (b) Show how they fail to hold if we use the original definition.

## The dual operators

Next, we need to understand what happens to a linear operator $$A:V \to W$$ under duality. The answer is uncomplicated but also unexpected, as the corresponding dual operator goes in the opposite direction: $$A^*:V^* \leftarrow W^*.$$ This isn't just because of the way we chose to define it: $$A^*(f):=f A;$$ a dual counterpart of $A$ can't be defined in any other way! Consider this diagram: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} & V & \ra{A} & W \\ & _{g \in V^*} & \searrow & \da{f \in W^*} \\ & & & R, \end{array}$$ where $R$ is our ring. If this is understood as a commutative diagram, the relation between $f$ and $g$ is given by the equation above. Therefore, we obtain (and must obtain) $g$ from $f$ by $g=fA$, and not vice versa.

Furthermore, the diagram suggests that the reversal of the arrows has nothing to do with linearity. The issue is “functorial”.

We restate the definition in our preferred notation.

Definition. Given a linear operator $A:V \to W$, its dual operator $A^*:W^* \to V^*$, is given by: $$\langle A^*g,v \rangle=\langle g,Av \rangle ,$$ for every $g\in W^*,v\in V$.

The matching behavior produced by the duality is non-trivial.

Theorem.

• (a) If $A$ is one-to-one, then $A^*$ is onto.
• (b) If $A$ is onto, then $A^*$ is one-to-one.

Proof. To prove part (a), observe that $\operatorname{Im}A$, just as any submodule in the finite-dimensional case, is a summand: $$W=\operatorname{Im}A\oplus N,$$ for some submodule $N$ of $W$. Consider some $f\in V^*$. Now, there is a unique representation of every element $w\in W$ as $w=w'+n$ for some $w'\in \operatorname{Im}A$ and $n\in N$. Therefore, there is a unique representation of every element $w\in W$ as $w=A(v)+n$ for some $v\in V$ and $n\in N$, since $A$ is one-to-one. Then, we can define $g \in W^*$ by $\langle g,w \rangle:= \langle f,v \rangle$. Finally, we have: $$\langle A^*g,v \rangle =\langle g,Av \rangle=\langle g,w-n \rangle=\langle g,w \rangle-\langle g,n \rangle=\langle f,v \rangle+0=\langle f,v \rangle.$$ Hence, $A^*g=f$. $\blacksquare$

Exercise. Prove part (b).

Theorem. For finite-dimensional $V,W$, the matrix of $A^*$ is the transpose of that of $A$: $$A^*=A^T.$$

Exercise. Prove the theorem.
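While the proof is left as an exercise, the identity behind the theorem can at least be spot-checked numerically: realizing $A^*$ as the transpose satisfies $\langle A^*g,v \rangle=\langle g,Av \rangle$. A sketch (random sizes and entries of our choosing):

```python
import numpy as np

rng = np.random.default_rng(0)

# A : V -> W as a matrix; its dual A* : W* -> V* acts on covectors.
A = rng.integers(-3, 4, size=(3, 2)).astype(float)  # V = R^2, W = R^3
v = rng.integers(-3, 4, size=2).astype(float)       # a vector in V
g = rng.integers(-3, 4, size=3).astype(float)       # a covector in W*

# <A* g, v> = <g, A v>, with A* realized as the transpose:
lhs = (A.T @ g) @ v
rhs = g @ (A @ v)
assert np.isclose(lhs, rhs)
```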

The compositions are preserved under the duality but in reverse (just as with the inverses):

Theorem. $$(BA)^*=A^*B^*.$$

Proof. Consider: $$\newcommand{\ra}{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} & V & \ra{A} & W & \ra{B} & U\\ & _{g \in V^*} & \searrow & \da{f \in W^*} & \swarrow & _{h \in U^*} \\ & & & R. \end{array}$$ $\blacksquare$

Exercise. Finish the proof.
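With the dual operators realized as transposes, the theorem reduces to the matrix identity $(BA)^T=A^TB^T$, which we can spot-check (random matrices of our choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))   # A : V -> W
B = rng.standard_normal((4, 3))   # B : W -> U

# (BA)* = A* B*: with duals as transposes, (BA)^T = A^T B^T.
assert np.allclose((B @ A).T, A.T @ B.T)
```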

As you see, the dual $A^*$ behaves very much like, but is not to be confused with, the inverse $A^{-1}$.

Exercise. When do we have $A^{-1}=A^T$?

The isomorphism between $V$ and $V^*$ is very straightforward.

Definition. The duality isomorphism of the module $V$, $$D_V: V \to V^*,$$ is given by $$D_V(u_i):=u_i^*,$$ where $\{u_i\}$ is a basis of $V$ and $\{u^*_i\}$ is its dual.

In addition to the compositions, as we saw above, this isomorphism preserves the identity.

Theorem. $$(\operatorname{Id}_V)^*=\operatorname{Id}_{V^*}.$$

Exercise. Prove the theorem.

However, because of the reversed arrows, we can't say that this isomorphism “preserves linear operators”. Therefore, the duality does not produce a functor but rather a different kind of functor discussed later in this section.

Now, for $A:V \to U$ a linear operator, the diagram below isn't commutative: $$\newcommand{\ra}{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{lllllllll} & V & \ra{A} & U \\ & \da{D_V} & \ne & \da{D_U} \\ & V^* & \la{A^*} & U^*\\ \end{array}$$

Exercise. Why not? Give an example.

Exercise. Show how a change of basis of $V$ affects differently the coordinate representation of vectors in $V$ and covectors in $V^*$.

However, the isomorphism with the second dual $$V^{**}:= (V^*)^*$$ given by: $$D_{V^*}D_V:V\cong (V^*)^*$$ does preserve linear operators, in the following sense.

Theorem. The following diagram is commutative: $$\newcommand{\ra}{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllll} & V & \ra{A} & U \\ & \da{D_V} & & \da{D_U} \\ & V^* & & U^*\\ & \da{D_{V^*}} & & \da{D_{U^*}} \\ & V^{**} & \ra{A^{**}} & U^{**}\\ \end{array}$$

Exercise. (a) Prove the commutativity. (b) Demonstrate that the isomorphism is independent from the choice of basis of $V$.

Our conclusion is that we can think of the second dual (but not the first dual) of a module as the module itself: $$V^{**}=V.$$ Same applies to the second duals of linear operators: $$A^{**}=A.$$

Note: When the dot product above is replaced with a particular choice of inner product (to be considered later), we have an identical duality theory.

## The cochain complex

Recall that cohomology theory is the dual of homology theory. It starts with the concept of a $k$-cochain on a cell complex $K$, which is any linear function from the module of $k$-chains to $R$: $$s:C_k(K)\to R.$$ Then the chains are the vectors and the cochains are the corresponding covectors.

We use the duality theory we have developed to define the module of $k$-cochains as the dual of the module of the $k$-chains: $$C^k(K):=\big( C_k(K) \big)^*.$$

Further, the $k$th coboundary operator of $K$ is the dual of the $(k+1)$st boundary operator: $$\partial^k:=\left( \partial _{k+1} \right)^*:C^k\to C^{k+1}.$$

It is given by the Stokes formula: $$\langle \partial ^k Q,a \rangle := \langle Q, \partial_{k+1} a \rangle,$$ for any $(k+1)$-chain $a$ and any $k$-cochain $Q$ in $K$.

Theorem. The matrix of the coboundary operator is the transpose of that of the boundary operator: $$\partial^k=\left( \partial_{k+1} \right)^T.$$

Definition. The elements of $Z^k:=\ker \partial ^k$ are called cocycles and the elements of $B^k:=\operatorname{Im} \partial^{k-1}$ are called coboundaries.

The following is a crucial result.

Theorem (Double Coboundary Identity). Every coboundary is a cocycle, i.e., for $k=0,1,...$, we have $$\partial^{k+1}\partial^k=0.$$

Proof. It follows from the fact that the coboundary operator is the dual of the boundary operator. Indeed, $$\partial ^* \partial ^* = (\partial \partial)^* = 0^*=0,$$ by the double boundary identity. $\blacksquare$

The cochain modules $C^k=C^k(K),\ k=0,1,...$, form the cochain complex $\{C^*,\partial ^*\}$: $$\newcommand{\la}{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{rrrrrrrrrrr} 0& \la{0} & C^N & \la{\partial^{N-1}} & ... & \la{\partial^{0}} & C^0 &\la{0} &0 . \end{array}$$ According to the theorem, a cochain complex is a chain complex, just indexed in the opposite order.
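As a sanity check, one can encode a small chain complex and verify both the double boundary and double coboundary identities. A sketch for the filled triangle $ABC$ (our own example, with edges $AB,BC,CA$ and a single $2$-cell $\tau$):

```python
import numpy as np

# Filled triangle: vertices A, B, C; edges AB, BC, CA;
# one 2-cell tau with boundary AB + BC + CA.
# Matrix of the boundary operator d_1 : C_1 -> C_0
# (columns = boundaries of the edges in the basis A, B, C):
d1 = np.array([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]])
# Matrix of d_2 : C_2 -> C_1:
d2 = np.array([[1],
               [1],
               [1]])

assert not (d1 @ d2).any()       # double boundary identity

# The coboundary operators are the transposes:
cob0 = d1.T                      # C^0 -> C^1
cob1 = d2.T                      # C^1 -> C^2
assert not (cob1 @ cob0).any()   # double coboundary identity
```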

Our illustration of a cochain complex is identical to that of the chain complex, but with the arrows reversed:

Recall that a cell complex $K$ is called acyclic if its chain complex is an exact sequence: $$\operatorname{Im}\partial_k=\ker \partial_{k-1}.$$ Of course, if a cochain complex is exact as a chain complex, it is also called exact:

Exercise. (a) State the definition of an exact cochain complex in terms of cocycles and coboundaries. (b) Prove that $\{C^k(K)\}$ is exact if and only if $\{C_k(K)\}$ is exact.

To see the big picture, we align the chain complex and the cochain complex in one, non-commutative, diagram: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \begin{array}{ccccccccc} ...& \la{\partial^{k}} & C^k & \la{\partial^{k-1}} & C^{k-1} & \la{\partial^{k-2}} &... \\ ...& & \da{\cong} & \ne & \da{\cong} & &... \\ ...& \ra{\partial_{k+1}} & C_k & \ra{\partial_{k}} & C_{k-1} & \ra{\partial_{k-1}} & ... \end{array}$$

Exercise. What can you say about the complexes if this diagram is commutative?

## Social choice: higher order voting

Recall that we previously considered $n$ alternatives/candidates, $\{A,B,C,...\},$ placed at the vertices of a simplicial complex $K$.

Let's consider some examples.

On the most basic level, voters evaluate the alternatives. For example,

• voter $a$ votes: candidate $A$ is worth $1$; translated:
• $a(A)=1.$

Here, a $0$-chain $A$ is evaluated by a $0$-cochain $a$.

On the next level, voters compare the alternatives (i.e., the vertices of $K$) to each other. For example,

• voter $a$ votes: candidate $A$ is worse, by $1$, than candidate $B$; translated:
• $a(AB)=1.$

What do we do with comparison votes? A comparison vote, such as the one above, may have come from a rating; however, there are many possibilities: $$a(A)=0,a(B)=1\ \text{ or } a(A)=1,a(B)=2\ \text{ etc.}$$ In addition, this comparison vote might conflict with others; for example, the total vote may be circular: $$a(B)-a(A)=1,\ a(C)-a(B)=1,\ a(A)-a(C)=1.$$ We don't try to sort this out; instead, we create a single pairwise comparison:

• $b(AB):=1.$

Here, a $1$-chain $AB$ is evaluated by a $1$-cochain $b$.

In the case of three candidates, there are three votes of degree $0$ and three votes of degree $1$:

Exercise. Analyze in this fashion the game of rock-paper-scissors.

Thus,

• the $0$-cochains are the evaluations of single candidates, while
• the $1$-cochains are the pairwise comparisons of the candidates.

The voter may vote for chains as linear combinations of the alternatives; for example, “the average of candidates $A$ and $B$”: $$a\left(\frac{A+B}{2}\right)=1.$$

Note: We can understand this chain as the 50-50 lottery between $A$ and $B$.

Exercise. Represent the following vote as a cochain:

• the average of candidates $A$ and $B$ is worse, by $B$, than candidate $C$.

On the next level, the voter may express himself in terms of comparisons of comparisons. For example, the voter may judge:

• the combination of the advantages of candidate $B$ over $A$, $C$ over $B$, and $C$ over $A$ is $1$; translated:
• $b(AB)+b(BC)+b(CA)=1$.

We simply create a single triple-wise comparison:

• $c(ABC):=1.$

Here, a $2$-chain $ABC$ is evaluated by a $2$-cochain $c$.

Exercise. Show that the triple vote above (a) may come from a several pairwise comparisons and (b) may be in conflict with other votes.

Let's sum up. These are possible votes:

• Candidate $A_0$ is evaluated by a vote of degree $0$: $a^0\in C^0$ and $a^0(A_0)\in R$.
• Candidates $A_0$ and $A_1$ are evaluated by a vote of degree $1$: $a^1\in C^1$ and $a^1(A_0A_1)\in R$.
• Candidates $A_0$, $A_1$, and $A_2$ are evaluated by a vote of degree $2$: $a^2\in C^2$ and $a^2(A_0A_1A_2)\in R$.
• ...
• Candidates $A_0$, $A_1$, ... $A_k$ are evaluated by a vote of degree $k$: $a^k\in C^k$ and $a^k(A_0A_1...A_k)\in R$.

Definition. A vote is any cochain in complex $K$: $$a=a^0+a^1+a^2+...,\ a^i\in C^i(K).$$

Now, how do we make sense of the outcome of such a vote? Who won?

In the simplest case, we ignore the higher order votes, $a^1,a^2,...$, and choose the winner to be the one with the highest rating: $$\text{winner}:=\arg\max _{i\in K^{(0)}} a^0(i).$$

But do we really have to discard the information about pairwise comparisons? Not necessarily: if $a^1$ is a rating comparison vote (i.e., a coboundary), we can use it to create a new set of ratings: $$b^0:=(\partial^0)^{-1}(a^1).$$ We then find the candidate(s) with the highest value of $$c^0:=a^0+b^0$$ to become the winner.

Example. Suppose $a^0=1$ and $$a^1(AB)=1,\ a^1(BC)=-1,\ a^1(CA)=0.$$ We choose: $$b^0(A)=0,\ b^0(B)=1,\ b^0(C)=0,$$ and observe that $$\partial^0 b^0=a^1.$$ Therefore, $B$ is the winner!

$\square$

Thus, the comparison vote helps to break the tie.
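This tie-breaking recipe can be carried out numerically. Below is a sketch of the example above, with our own encoding of the orientations of the edges $AB,BC,CA$; we take $a^0=0$, which gives the same winner as any constant $a^0$:

```python
import numpy as np

# Triangle of candidates A, B, C; edges AB, BC, CA.
# Matrix of d^0 : C^0 -> C^1 for these orientations:
d0 = np.array([[-1,  1,  0],    # (d0 b)(AB) = b(B) - b(A)
               [ 0, -1,  1],    # (d0 b)(BC) = b(C) - b(B)
               [ 1,  0, -1]])   # (d0 b)(CA) = b(A) - b(C)

a0 = np.zeros(3)                 # degree-0 vote: a three-way tie
a1 = np.array([1.0, -1.0, 0.0])  # the comparison vote from the example

# Solve d0 b0 = a1; least squares picks one solution, and any two
# solutions differ by a constant, so the winner is the same:
b0, *_ = np.linalg.lstsq(d0, a1, rcond=None)
assert np.allclose(d0 @ b0, a1)  # a1 is indeed a coboundary

c0 = a0 + b0
winner = "ABC"[int(np.argmax(c0))]
assert winner == "B"
```

If $a^1$ were not a coboundary (e.g., a circular vote), the residual of the least-squares problem would be nonzero, signaling that no ratings $b^0$ reproduce it exactly.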

Exercise. Show that such a winner (or winners) is well-defined.

Exercise. Show that the definition is guaranteed to apply only when $K^{(1)}$ is a tree.

Exercise. Give other examples of how $a^1$ helps determine the winner when $a^0=0$.

The inability of the voter to assign a number to each single candidate to convey his perceived value (or quality or utility or rating) is what makes comparison of pairs necessary. By a similar logic, the inability of the voter to assign a number to each pair of candidates to convey their relative value is what could make comparison of triples necessary. However, we need to find a single winning candidate! Simply put, can $a^2\ne 0$ help determine the winner when $a^0=0, a^1=0$? Unfortunately, no: by the Double Coboundary Identity, $\partial^1\partial^0=0$, so there is no $b^0\in C^0$ such that $\partial^1\partial^0b^0=a^2$.

## Cohomology

Definition. The $k$th cohomology group of $K$ is the quotient of the cocycles over the coboundaries: $$H^k=H^k(K):=Z^k / B^k = \frac {\ker \partial^{k}} {\operatorname{Im}\partial^{k-1}}.$$

It is then the homology of the “reversed” cochain complex.

Most of the theorems about homology have corresponding theorems about cohomology. Often, the latter can be derived from the former via duality. Sometimes it is unnecessary.

We next discuss cohomology in the context of connectedness and simple connectedness.

Proposition. The constant functions are $0$-cocycles.

Proof. If $\varphi \in C^0({\mathbb R})$, then $\partial^* \varphi([a,a+1]) = \varphi (a+1)-\varphi(a) = 0$. A similar argument applies to $C^0({\mathbb R}^n)$. $\blacksquare$

Proposition. The $0$-cocycles are constant on a path-connected complex.

Proof. For ${\mathbb R}^1$, we have $\partial^* \varphi([a,a+1]) = 0$, or $\varphi(a+1) - \varphi(a) = 0$. Therefore, $\varphi(a+1) = \varphi(a)$, i.e., $\varphi$ doesn't change from a vertex to the next. $\blacksquare$

The general case is illustrated below: $$\newcommand{\ra}{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} \bullet & \ra{} & \bullet & & \\ \ua{} & & \da{} & & \\ \bullet & & \bullet & \ra{} & \bullet & \ra{} & \bullet \end{array}$$

Exercise. Prove that a complex is path-connected if and only if any two vertices can be connected by a sequence of adjacent edges. Use this to finish the proof of the proposition.

Corollary. $\dim \ker \partial^0 =$ number of path components of $|K|$.

We summarize the analysis as follows.

Theorem. $H^0(K) \cong R^m$, where $m$ is the number of path components of $|K|$.
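For instance, a graph with two path components (our own example: edges $AB$ and $CD$ on vertices $A,B,C,D$) has a two-dimensional kernel:

```python
import numpy as np

# Graph with two path components: vertices A, B, C, D; edges AB, CD.
# Matrix of d^0 : C^0 -> C^1 (rows = edges, columns = vertices):
d0 = np.array([[-1, 1,  0, 0],    # (d0 f)(AB) = f(B) - f(A)
               [ 0, 0, -1, 1]])   # (d0 f)(CD) = f(D) - f(C)

n_vertices = d0.shape[1]
dim_H0 = n_vertices - np.linalg.matrix_rank(d0)   # = dim ker d^0
assert dim_H0 == 2   # one dimension per path component
```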

Now, simple-connectedness.

Example (circle). Consider a cubical representation of the circle: $$\newcommand{\ra}{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} \bullet& \ra{ } & \bullet \\ \ua{ } & & \ua{ } \\ \bullet& \ra{ } & \bullet \end{array}$$ Here, the arrows indicate the orientations of the edges -- along the coordinate axes. A $1$-cochain is just a combination of four numbers: $$\newcommand{\ra}{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} \bullet& \ra{q} & \bullet \\ \ua{p} & & \ua{r} \\ \bullet& \ra{s} & \bullet \end{array}$$

First, which of these cochains are cocycles? All of them: since the complex has no $2$-cells, we have $\partial^1=0$, and every $1$-cochain is a cocycle.

Second, which $1$-cochains are coboundaries? The coboundary of a $0$-cochain has “horizontal difference minus vertical difference” equal to $0$: $$(r-p)-(q-s)=0.$$ For example, here is a $0$-cochain and its coboundary: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} \bullet^1 & \ra{1} & \bullet^2 \\ \ua{1} & & \ua{1} \\ \bullet^0 & \ra{1} & \bullet^1 \end{array}$$ So, this $1$-cochain is a coboundary. But this one isn't: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{llllllllllll} \bullet & \ra{1} & \bullet \\ \ua{1} & & \ua{-1} \\ \bullet & \ra{-1} & \bullet \end{array}$$ This fact is easy to prove by solving a little system of linear equations, or we can simply notice that the complete trip around the square adds up to $4$, not $0$.

Therefore, $H^1\ne 0$.

$\square$

We accept the following without proof.

Theorem. If $|K|$ is simply-connected, $H^1(K)=0$.

These topological results for homology and cohomology match! Does it mean that homology and cohomology match in other dimensions? Consider first the fact that duality gives us the isomorphism: $$C_k\cong C^k.$$ Second, the quotient construction of cohomology is identical to the one that defined homology. However, this doesn't mean that the resulting quotients are also isomorphic; in general, we have: $$H_k\not\cong H^k.$$ For example, the quotient construction, over ${\bf Z}$, creates torsion components for homology and cohomology. Those components often don't match, as in the case of the Klein bottle considered in the next subsection.

A more subtle difference is that the cohomology isn't just a module; it also has a graded ring structure provided by the wedge product.

Example (sphere with bows). To see the difference this makes, consider these two spaces: the sphere with two bows attached and the torus:

Their homology groups coincide in all dimensions. The cohomology groups also coincide, but only as vector spaces! The basis elements in dimension $1$ behave differently under the wedge product. For the sphere with bows, we have: $$[a ^*]\wedge [b ^*]=0,$$ because there is nowhere for this $2$-cochain to “reside”. Meanwhile for the torus, we have: $$[a ^*]\wedge [b ^*]\ne 0.$$ $\square$

## Computing cohomology

In the last subsection, we used cochains to detect the hole in the circle. Now, we compute the cohomology without shortcuts -- the above theorems -- just as a computer would do.

The starting point is:

• $\dim C^k(K) = \#$ of $k$-cells in $K$.

Example (circle). We go back to our cubical complex $K$ and compute $H^1(K)$:

Consider the cochain complex of $K$: $$C^0 \stackrel{\partial^0}{\to} C^1 \stackrel{\partial^1}{\to} C^2 = 0.$$ Observe that, since $\partial ^1=0$, we have $\ker \partial^1 = C^1$.

Let's list the bases of these vector spaces. Of course, we start with the bases of the chains $C_k$, i.e., the cells: $$\{A,B,C,D\},\ \{a,b,c,d\};$$ and consider the dual bases of the cochains $C^k$: $$\{A^*,B^*,C^*,D^*\},\ \{a^*,b^*,c^*,d^*\}.$$ They are given by $$A^*(A)=\langle A^*,A \rangle =1,...$$

Next, we find the matrix of the linear operator $\partial^0$, which is $4 \times 4$. For that, we just look at what happens to the basis elements:

$A^* = [1, 0, 0, 0]^T= \begin{array}{ccc} 1 & - & 0 \\ | & & | \\ 0 & - & 0 \end{array} \Longrightarrow \partial^0 A^* = \begin{array}{ccc} \bullet & -1 & \bullet \\ 1 & & 0 \\ \bullet & 0 & \bullet \end{array} = a^*-b^* = [1,-1,0,0]^T;$

$B^* = [0, 1, 0, 0]^T= \begin{array}{ccc} 0 & - & 1 \\ | & & | \\ 0 & - & 0 \end{array} \Longrightarrow \partial^0 B^* = \begin{array}{ccc} \bullet & 1 & \bullet \\ 0 & & 1 \\ \bullet & 0 & \bullet \end{array} = b^*+c^* = [0,1,1,0]^T;$

$C^* = [0, 0, 1, 0]^T= \begin{array}{ccc} 0 & - & 0 \\ | & & | \\ 0 & - & 1 \end{array} \Longrightarrow \partial^0 C^* = \begin{array}{ccc} \bullet & 0 & \bullet \\ 0 & & -1 \\ \bullet & 1 & \bullet \end{array} = -c^*+d^* = [0,0,-1,1]^T;$

$D^* = [0, 0, 0, 1]^T= \begin{array}{ccc} 0 & - & 0 \\ | & & | \\ 1 & - & 0 \end{array} \Longrightarrow \partial^0 D^* = \begin{array}{ccc} \bullet & 0 & \bullet \\ -1 & & 0 \\ \bullet & -1 & \bullet \end{array} = -a^*-d^* = [-1,0,0,-1]^T.$

Now, the matrix of $\partial^0$ is formed by the column vectors listed above: $$\partial^0 = \left[ \begin{array}{cccc} 1 & 0 & 0 & -1 \\ -1 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \end{array} \right].$$

Now, from this data, we find the kernel and the image using the standard linear algebra.

The kernel is the solution set of the equation $\partial^0v=0$. It may be found by solving the corresponding system of linear equations with the coefficient matrix $\partial^0$. The brute force approach is the Gaussian elimination, or we can simply notice that the rank of the matrix is $3$, so the dimension of the kernel is $1$. Therefore, $$\dim H^0=1.$$ We have a single component!

The image is the set of $u$'s with $\partial^0v=u$. Once again Gaussian elimination is a simple but effective approach, or we simply notice that the dimension of the image is the rank of the matrix, $3$. In fact, $$\operatorname{span} \{ \partial^0(A^*), \partial^0(B^*), \partial^0(C^*), \partial^0(D^*) \} = \operatorname{Im} \partial^0 .$$ Therefore, $$\dim H^1=\dim \left( C^1 / \operatorname{Im} \partial^0 \right) = 4-3=1.$$ We have a single hole!

Another way to see that the columns aren't linearly independent is to compute $\det\partial^0 =0$.
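These rank computations can be confirmed with a few lines of code applied to the matrix above:

```python
import numpy as np

# The matrix of d^0 for the cubical circle, as computed above:
d0 = np.array([[ 1, 0,  0, -1],
               [-1, 1,  0,  0],
               [ 0, 1, -1,  0],
               [ 0, 0,  1, -1]])

rank = np.linalg.matrix_rank(d0)
assert rank == 3
assert np.isclose(np.linalg.det(d0), 0.0)   # the columns are dependent

dim_H0 = 4 - rank     # dim ker d^0: one component
dim_H1 = 4 - rank     # dim C^1 - dim Im d^0: one hole
assert (dim_H0, dim_H1) == (1, 1)
```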

$\square$

Exercise. Compute the cohomology of the T-shaped graph.

Exercise. Compute the cohomology of the following complexes: (a) the square, (b) the mouse, and (c) the figure 8, shown below.

Example (cylinder). We create the cylinder $C$ by an equivalence relation of cells of the cell complex: $$a \sim c;\ A \sim D,\ B \sim C.$$

The chain complex is known and the cochain complex is computed accordingly: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{lccccccccccc} & & C_{2} & \ra{\partial} & C_{1} & \ra{\partial} & C_0 \\ & & < \tau > & \ra{?} & < a,b,d > & \ra{?} & < A,B > \\ & & \tau & \mapsto & b - d \\ & & & & a & \mapsto & B-A \\ & & & & b & \mapsto & 0 \\ & & & & d & \mapsto & 0 \\ & & C^{2} & \la{\partial^*} & C^{1} & \la{\partial^*} & C^0 \\ & & <\tau^*> & \la{?} & < a^*,b^*,d^* > & \la{?} & < A^*,B^* > \\ & & 0 & \leftarrowtail& a^* & & \\ & & \tau^* & \leftarrowtail& b^* & & \\ & & -\tau^* & \leftarrowtail& d^* & & \\ & & & & -a^* & \leftarrowtail & A^* \\ & & & & a^* & \leftarrowtail & B^* \\ {\rm kernels:} & & Z^2=<\tau^*> && Z^{1}=< a^*,b^*+d^* > & & Z^{0}=< A^*+B^* > \\ {\rm images:} & & B^2=<\tau^*> && B^{1}=< a^* > & & B^{0} = < 0 > \\ {\rm quotients:}& & H^2=0 && H^{1}=< [b^*+d^*] >\cong {\bf Z}& & H^{0} = < [A^*+B^*] > \cong {\bf Z} \end{array}$$

So, the cohomology is identical to that of the circle.
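The kernel $Z^1$ can be double-checked with a short symbolic computation (our own encoding of $\partial^1$ in the basis $a^*,b^*,d^*$ of $C^1$ and $\tau^*$ of $C^2$):

```python
from sympy import Matrix

# Coboundary d^1 : C^1 -> C^2 for the cylinder:
# a* -> 0, b* -> tau*, d* -> -tau*.
d1 = Matrix([[0, 1, -1]])

# a* and b* + d* are cocycles...
a_star = Matrix([1, 0, 0])
bd_star = Matrix([0, 1, 1])
assert d1 * a_star == Matrix([0])
assert d1 * bd_star == Matrix([0])

# ...and they span the kernel, which is 2-dimensional:
assert len(d1.nullspace()) == 2
```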

$\square$

In these examples, the cohomology is isomorphic to the homology. This isn't always the case.

Example (Klein bottle). The equivalence relation that gives ${\bf K}^2$ is: $$c \sim a,\ d \sim -b.$$

The chain complex and the cochain complex are: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \begin{array}{lccccccccccc} & & C_{2} & \ra{\partial} & C_{1} & \ra{\partial} & C_0 \\ & & < \tau > & \ra{?} & < a,b > & \ra{?} & < A > \\ & & \tau & \mapsto & 2a \\ & & & & a & \mapsto & 0 \\ & & & & b & \mapsto & 0 \\ & & C^{2} & \la{\partial^*} & C^{1} & \la{\partial^*} & C^0 \\ & & <\tau^*> & \la{?} & < a^*,b^* > & \la{?} & < A^* > \\ & & 2\tau^* & \leftarrowtail& a^* & & \\ & & 0 & \leftarrowtail& b^* & & \\ & & & & 0 & \leftarrowtail & A^* \\ {\rm kernels:} & & Z^2 = <\tau^*> && Z^1 = <b^*> & & Z^0 = < A^* > \\ {\rm images:} & & B^2 = <2\tau^*> && B^1 = 0 & & B^0 = 0 \\ {\rm quotients:}& & H^2 = <\tau^*> / <2\tau^*> \cong {\bf Z}_2 && H^1 = < [b^*] >\cong {\bf Z} & & H^0 = < [A^*] > \cong {\bf Z} \end{array}$$

$\square$

Exercise. Show that $H^*({\bf K}^2;{\bf R})\cong H({\bf K}^2;{\bf R})$.

Exercise. Compute $H^*({\bf K}^2;{\bf Z}_2)$.

Exercise. Compute the cohomology of the rest of these surfaces:

## Homology vs. cohomology maps

Our illustration of cochain maps is identical to that of chain maps, but with the arrows reversed:

The constructions are very similar and so are the results.

Homology and cohomology respect maps that respect boundaries.

One can think of a function between two cell complexes $$f:K\to L$$ as one that preserves cells, i.e., the image of a cell is also a cell, possibly of a lower dimension: $$a\in K \Longrightarrow f(a) \in L.$$ If we expand this map by linearity, we have a map between (co)chain complexes: $$f_k:C_k(K) \to C_k(L)\ \text{ and } \ f^k:C^k(K) \leftarrow C^k(L).$$ What makes $f_k$ and $f^k$ continuous, in the algebraic sense, is that, in addition, they preserve (co)boundaries: $$f_k(\partial a)=\partial f_k(a)\ \text{ and } \ f^k(\partial^* s)=\partial^* f^k(s).$$ In other words, the (co)chain maps make these diagrams -- the linked (co)chain complexes -- commute: $$\newcommand{\ra}{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{cccccccccccccccc} \la{\partial} & C_k(K) & \la{\partial} & C_{k+1}(K) &\la{\partial} &\quad&\ra{\partial^*} & C ^k(K) & \ra{\partial^*} & C ^{k+1}(K) &\ra{\partial^*} \\ & \da{f_*} & & \da{f_*} &&\text{ and }&& \ua{f^*} & & \ua{f^*}\\ \la{\partial} & C_k(L) & \la{\partial} & C_{k+1}(L) & \la{\partial} &\quad& \ra{\partial^*} & C ^k(L) & \ra{\partial^*} & C ^{k+1}(L) & \ra{\partial^*} \end{array}.$$

They generate the (co)homology maps: $$f_* : H_k(K) \to H_k(L)\ \text{ and } \ f^* : H^k(K) \leftarrow H^k(L),$$ as the linear operators given by $$f_*([x]) := [f_k(x)]\ \text{ and } \ f^*([x]) := [f^k(x)],$$ where $[\cdot ]$ stands for the (co)homology class.

Exercise. Prove that the latter is well-defined.

The following is obvious.

Theorem. The identity map induces the identity (co)homology map: $$(\operatorname{Id}_{|K|})_* = \operatorname{Id}_{H(K)}\ \text{ and } \ (\operatorname{Id}_{|K|})^* = \operatorname{Id}_{H^*(K)}.$$

The next theorem follows from what we know about compositions of cell maps.

Theorem. The (co)homology map of the composition is the composition of the (co)homology maps $$(gf)_* = g_*f_*\ \text{ and } \ (gf)^* = f^*g^*.$$

Notice the change of order in the latter case!

This is the realm of category theory, explained later: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{cccccccc} K & \ra{f} & L \\ & \searrow ^{gf} & \da{g} \\ & & M \end{array}\quad \leadsto \quad \begin{array}{cccccccc} H(K) & \ra{f_*} & H(L) & &\\ & \searrow ^{g_*f_*} & \da{g_*}\\ & & H(M) \end{array} \text{ and }\ \begin{array}{cccccccc} H^*(K) & \la{f^*} & H^*(L) & &\\ & \nwarrow ^{f^*g^*} & \ua{g^*}\\ & & H^*(M) \end{array}.$$

Theorem. Suppose $K$ and $L$ are cell complexes. If a map $$f : |K| \to |L|$$ is a cell map and a homeomorphism, and $$f^{-1}: |L| \to |K|$$ is a cell map too, then the (co)homology maps $$f_*: H_k(K) \to H_k(L)\ \text{ and } \ f^*: H^k(K) \leftarrow H^k(L),$$ are isomorphisms for all $k$.

Corollary. Under the conditions of the theorem, we have: $$(f^{-1})_* = (f_*)^{-1}\ \text{ and } \ (f^{-1})^* = (f^*)^{-1}.$$

Theorem. If two maps are homotopic, they induce the same (co)homology maps: $$f \simeq g\ \Rightarrow \ f_* = g_* \ \text{ and } \ f^* = g^*.$$

In the face of the isomorphisms of the groups and the matching behavior of the maps, let's not forget who came first: $$\newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccc} K & \to & C(K) & \stackrel{D}{\longrightarrow} & C^*(K)\\ & &\downarrow ^{\sim} & & \downarrow ^{\sim}\\ & &H(K) & \stackrel{?}{\longleftrightarrow} & H^*(K). \end{array}$$

Also, in spite of the fact that the cochain module is the dual of the chain module, we can't assume that the cohomology module is the dual of the homology module. In fact, we have seen an example where they aren't isomorphic. However, that example used integer coefficients. What if all these modules are vector spaces? We state the following without proof (see Bredon, p. 282).

Theorem. If $R$ is a field, the cohomology vector space is the dual of the homology vector space: $$H(K;R)^*=H^*(K;R);$$ therefore, they are isomorphic: $$H(K;R) \cong H^*(K;R).$$

Exercise. Define “cocohomology” and prove that it is isomorphic to homology.

## Computing cohomology maps

Example (inclusion). Suppose: $$G=\{A,B\},\ H=\{A,B,AB\},$$ and $$f(A)=A,\ f(B)=B.$$ This is the inclusion:

Then, on the (co)chain level we have: $$\begin{array}{ll|ll} C_0(G)=< A,B >,& C_0(H)= < A,B >&\Rightarrow C^0(G)=< A^*,B^* >,& C^0(H)= < A^*,B^* >,\\ f_0(A)=A,& f_0(B)=B & \Rightarrow f^0=f_0^T=\operatorname{Id};\\ C_1(G)=0,& C_1(H)= < AB >&\Rightarrow C^1(G)=0,& C^1(H)= < AB^* >\\ &&\Rightarrow f^1=f_1^T=0. \end{array}$$ Meanwhile, $\partial ^0=0$ for $G$ and for $H$ we have: $$\partial ^0=\partial _1^T=[-1, 1].$$ Therefore, on the cohomology level we have: $$\begin{array}{llllll} H^0(G):= \frac{ Z^0(G) }{ B^0(G) }&=\frac{ C^0(G) }{ 0 } &=< [A^*],[B^*] > ,\\ H^0(H):= \frac{ Z^0(H) }{ B^0(H) }&=\frac{ \ker \partial ^0 }{ 0 } &=< [A^*+B^*] >,\\ H^1(G):= \frac{ Z^1(G) }{ B^1(G) }&=\frac{0 }{ 0 } &=0,\\ H^1(H):= \frac{ Z^1(H) }{ B^1(H) }&=\frac{0 }{ 0 } &=0. \end{array}$$ Then, for $f^k:H^k(H)\to H^k(G),\ k=0,1$, we compute from the definition, $$\begin{array}{lllllll} [f^0]([A^*]):=[f^0(A^*)]=[A^*], &[f^0]([B^*]):=[f^0(B^*)]=[B^*] &\Rightarrow [f^0]=[1,1]^T;\\ f^1=0&&\Rightarrow [f^1]=0. \end{array}$$ The former identity indicates that $A$ and $B$ are separated if we reverse the effect of $f$.

$\square$
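The matrix arithmetic in this example is small enough to check by machine. A minimal sketch, with the bases ordered $(A,B)$ for the vertices (my own choice of ordering):

```python
import numpy as np

# The inclusion example above: G = {A, B}, H = {A, B, AB}.
f0 = np.eye(2, dtype=int)      # f_0 : C_0(G) -> C_0(H), A |-> A, B |-> B
F0 = f0.T                      # f^0 : C^0(H) -> C^0(G) is the transpose

bdH = np.array([[-1, 1]])      # partial^0 for H: A* |-> -AB*, B* |-> AB*

# H^0(H) is spanned by [A* + B*], the kernel of partial^0;
# its image under f^0 is A* + B*, i.e., [A*] + [B*] in H^0(G).
u = np.array([1, 1])           # the cocycle A* + B*
assert np.all(bdH @ u == 0)    # it is indeed a cocycle in H
print(F0 @ u)                  # [1 1], i.e., the matrix [f^0] = [1, 1]^T
```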

Exercise. Modify the computation for the case when there is no $AB$.

Exercise. Compute the cohomology maps for the following two two-edge simplicial complexes and these simplicial maps: $$K=\{A,B,C,AB,BC\},\ L=\{X,Y,Z,XY,YZ\};$$ $$f(A)=X,\ f(B)=Y,\ f(C)=Z,\ f(AB)=XY,\ f(BC)=YZ;$$ $$f(A)=X,\ f(B)=Y,\ f(C)=Y,\ f(AB)=XY,\ f(BC)=Y.$$

Example (rotation and collapse). Suppose we are given the following complexes and maps: $$G=H:=\{A,B,C,AB,BC,CA\},$$ $$f(A)=B,\ f(B)=C,\ f(C)=A.$$ This is a rotated triangle (left):

The cohomology maps are computed as follows: $$\begin{array}{lllll} f^1(AB^*+BC^*+CA^*)&=f^1(AB^*)+f^1(BC^*)+f^1(CA^*)\\ &=CA^*+AB^*+BC^*\\ &=AB^*+BC^*+CA^*. \end{array}$$ Therefore, the cohomology map $[f^1]:H^1(H)\to H^1(G)$ is the identity. Conclusion: the hole is preserved.

Also (right) we collapse the triangle onto one of its edges: $$f(A)=A,\ f(B)=B,\ f(C)=A.$$ Then, $$\begin{array}{llllll} f^1(AB^*+BC^*+CA^*)&=f^1(AB^*)+f^1(BC^*)+f^1(CA^*)\\ &=(AB^*+CB^*)+0+0\\ &=\partial^0(B^*). \end{array}$$ Therefore, the cohomology map is zero. Conclusion: collapsing an edge causes the hole to collapse too.

$\square$
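Both computations amount to applying the transposed chain maps to the cocycle $AB^*+BC^*+CA^*$. A sketch, with the bases ordered $(A,B,C)$ and $(AB,BC,CA)$ (my own ordering):

```python
import numpy as np

bd1 = np.array([[-1,  0,  1],
                [ 1, -1,  0],
                [ 0,  1, -1]])   # partial_1: AB |-> B-A, BC |-> C-B, CA |-> A-C
cob0 = bd1.T                     # partial^0 is the transpose

s = np.array([1, 1, 1])          # the cocycle AB* + BC* + CA*

rot = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])      # f_1 for the rotation: AB |-> BC, BC |-> CA, CA |-> AB
print(rot.T @ s)                 # [1 1 1]: f^1 fixes the cocycle

col = np.array([[1, -1, 0],
                [0,  0, 0],
                [0,  0, 0]])     # f_1 for the collapse: AB |-> AB, BC |-> -AB, CA |-> 0
print(col.T @ s)                 # [1 -1 0], i.e., AB* - BC*
Bstar = np.array([0, 1, 0])
assert np.all(col.T @ s == cob0 @ Bstar)   # ... which is the coboundary of B*
```

The first cocycle survives, the second is a coboundary, hence zero in $H^1$, matching the two conclusions above.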

Exercise. Provide the details of the computations.

Exercise. Modify the computation for the cases of (a) a reflection and (b) a collapse to a vertex.

## Functors

The duality reverses arrows but preserves everything else. This idea deserves a functorial interpretation.

We generalize a familiar concept below.

Definition. A functor ${\mathscr F}$ from category ${\mathscr C}$ to category ${\mathscr D}$ consists of two functions: $${\mathscr F}:\text{Obj}({\mathscr C}) \rightarrow \text{Obj} ({\mathscr D}),$$ and, if ${\mathscr F}(X)=U,{\mathscr F}(Y)=V$, we have, for a covariant functor: $${\mathscr F}={\mathscr F}_{X,Y}:\text{Hom}_{\mathscr C}(X,Y) \rightarrow \text{Hom}_{\mathscr D}(U,V),$$ and for a contravariant functor: $${\mathscr F}={\mathscr F}_{X,Y}:\text{Hom}_{\mathscr C}(X,Y) \rightarrow \text{Hom}_{\mathscr D}(V,U).$$ We assume that the functor preserves,

• the identity: ${\mathscr F}(\operatorname{Id}_{X}) = \operatorname{Id}_{{\mathscr F}(X)}$; and
• the compositions:
• covariant, ${\mathscr F}(g f) = {\mathscr F}(g) {\mathscr F}(f)$, or
• contravariant, ${\mathscr F}(g f) = {\mathscr F}(f) {\mathscr F}(g)$.
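For linear maps of finitely generated free modules, duality is computed by transposition, which preserves identities and reverses compositions. A numerical sketch of this contravariant behavior, with arbitrarily chosen integer matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.integers(-3, 4, size=(4, 3))   # f : R^3 -> R^4
g = rng.integers(-3, 4, size=(2, 4))   # g : R^4 -> R^2

# Contravariance: (g f)^* = f^* g^*, i.e., (g f)^T = f^T g^T.
assert np.array_equal((g @ f).T, f.T @ g.T)
# The identity is preserved: Id^* = Id.
assert np.array_equal(np.eye(3).T, np.eye(3))
print("duality reverses compositions")
```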

Exercise. Prove that duality is a contravariant functor. Hint: consider $\mathcal{L}(U,\cdot)$ and $\mathcal{L}(\cdot,V)$.

Exercise. What happens to compositions of functors?

We will continue to refer to covariant functors as simply “functors” when there is no ambiguity.

The latter condition can be illustrated with these commutative diagrams: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccc} &X & \ra{f} & Y \\ & & _{gf}\searrow & \ \da{g}\\ & & & Z \end{array} \leadsto \begin{array}{ccccccccccc} & {\mathscr F}(X) & \ra{{\mathscr F}(f)} & {\mathscr F}(Y)\\ & & _{{\mathscr F}(gf)}\searrow & \ \ \da{{\mathscr F}(g)}\\ & & & {\mathscr F}(Z) \end{array} \ \text{ and} \begin{array}{cccccccccc} {\mathscr F}(X) & \la{{\mathscr F}(f)} & {\mathscr F}(Y)\\ & _{{\mathscr F}(gf)}\nwarrow & \ \ \ua{{\mathscr F}(g)}\\ & & {\mathscr F}(Z) \end{array}$$

Previously we proved the following:

Theorem.

• Homology is a covariant functor from cell complexes and maps to modules and homomorphisms.
• Cohomology is a contravariant functor from cell complexes and maps to modules and homomorphisms.

Exercise. Outline the cohomology theory for relative cell complexes and cell maps of pairs. Hint: $C^*(K,K')=C^*(K) / C^*(K')$.

Cohomology is just as good a functorial tool as homology. Below, we apply it to a problem previously solved with homology.

Example. We consider the question, “Can a soap bubble contract to the ring without tearing?” and recast it as an example of the Extension Problem, “Can we extend the map of the circle onto itself to the whole disk?”

Can we find a continuous $F$ to complete the first diagram below? With the cohomology functor, we answer: only if we can complete the last one. $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccccccccccc} {\bf S}^{n-1} & \hookrightarrow & {\bf B}^{n} && & H^{n-1}({\bf S}^{n-1}) & \leftarrow & H^{n-1}({\bf B}^{n}) & & {\bf Z} & \leftarrow & 0 \\ & \searrow ^{\operatorname{Id}}& \ \ \ua{F=?} && \leadsto & _{} & \nwarrow ^{\operatorname{Id}} & \ \ua{?} & = & & \nwarrow ^{\operatorname{Id}}& \ua{?}\\ & & {\bf S}^{n-1} && & & & H^{n-1}({\bf S}^{n-1}) & & & & {\bf Z} \end{array}$$ Just as with homology, we see that the last diagram is impossible to complete.

$\square$

Exercise. Provide a similar analysis for the extension of the identity map of the circle to that of the torus.

Exercise. List possible maps for each of these based on the possible cohomology maps:

• inclusions of the circle into the torus;
• self-maps of the figure eight;
• inclusions of the circle into the sphere.