
Tangent bundles


Tangent spaces

Suppose we want to find the work of a constant force along a straight path. As we know, $$\text{ work } = \text{ force }\cdot \text{ distance}.$$ This simple formula only works if we carefully take into account the direction of motion relative to the direction of the force $F$. For example, if you move forward and then back, the work breaks into two parts and they may cancel each other. The idea is that the work $W$ may be positive or negative and we should speak of the displacement $D$ rather than the distance. We then amend the formula: $$W = F \cdot D.$$

Now, in the context of discrete calculus, the displacement $D$ may be given by a single oriented edge in ${\mathbb R}$, or a combination of edges. It is a $1$-chain. Furthermore, the force $F$ defines $W$ as a linear function of $D$. It is a $1$-form!

The need for considering directions becomes clearer when the dimension of the space is $2$ or higher. We use vectors. First, as we just saw, the work of the force is $W = \pm |F| \cdot |D|$ if $F \parallel D$, with the plus sign when the two point in the same direction. Second, $W = 0$ if $F \perp D$. Therefore, only the projection of $F$ onto $D$ matters when calculating the work, and the work equals this projection when the length of $D$ is $1$.

[Image: Work along a straight path]

Then the work $W$ of force $F$ along vector $D$ is defined to be $$W := \langle F , D \rangle .$$ It is simply a (real-valued) linear function of the displacement.
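To make this concrete, here is a minimal sketch in Python, with the force and the displacement represented as plain coordinate tuples and the ring $R$ taken to be the reals (an illustrative choice; the names are ad hoc):

  # Work of a constant force F along a displacement D: W = <F, D>.
  def work(F, D):
      """Inner product of the force and the displacement."""
      return sum(f * d for f, d in zip(F, D))

  # Moving forward and then back: the two contributions cancel.
  F = (2.0, 0.0)
  print(work(F, (3.0, 0.0)) + work(F, (-3.0, 0.0)))  # 0.0
  # A force perpendicular to the displacement does no work.
  print(work((2.0, 0.0), (0.0, 5.0)))                # 0.0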

Our conclusion doesn't change: $D$ is a $1$-chain and $F$ is a $1$-form. Even though this idea allows us to continue our study, the example shows that we cannot limit ourselves to cubical complexes. Below, we take a step toward discrete calculus over general cell complexes.

On a plane, the force $F$ may vary from location to location. Then the need to handle displacement vectors, i.e., directions, arises separately at every point. The set of all possible directions at a point $A\in V={\bf R}^2$ forms a vector space of the same dimension. It is $V_A$, a copy of $V$, attached to each point $A$:

[Image: Bivectors on the plane]

Next, we apply this idea to cell complexes.

First, what is the set of all possible directions on a graph? We've come to understand the edges starting from a given vertex as independent directions. That's why, at each vertex, we will need as many basis vectors as there are edges:

[Image: Simplicial tangent spaces on graph]

Of course, once we start talking about oriented cells, we know that we are dealing with chains over the ring $R$.

Definition. For each vertex $A$ in a cell complex $K$, the tangent space at $A$ of $K$ is the set of $1$-chains over ring $R$ generated by the $1$-dimensional star of the vertex $A$: $$T_A=T_A(K):=< \{AB \in K\} > \subset C_1(K).$$

Proposition. The tangent space $T_A(K)$ is a submodule of $C_1(K)$.

Definition. A local $1$-form on $K$ is a collection of linear functions, one for each tangent space: $$\varphi_A: T_A\to R,\ A\in K^{(0)}.$$

The work of a force along edges of a cell complex is an example of such a function. Now, if we have a force in complex $K$ that varies from point to point, the work along an edge -- seen as the displacement -- will depend on the location.

[Image: Work along a path]

Proposition. Every $1$-form (cochain) is a local $1$-form.

We denote the set of all local $1$-forms on $K$ by $T^1(K)$, so that $$C^1(K)\subset T^1(K).$$
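Here is a minimal sketch in Python of these two definitions: the complex is given only by its list of edges (pairs of vertex labels), a $1$-chain is a dictionary from edges to coefficients, and $R={\bf Z}$; the particular values are made up:

  # The 1-skeleton of a cell complex: a list of edges, each an ordered pair of vertices.
  K_edges = [('A', 'B'), ('B', 'C'), ('A', 'C'), ('C', 'D')]

  def tangent_basis(A, edges):
      """The edges adjacent to A, re-oriented to start at A: a basis of T_A."""
      star = []
      for P, Q in edges:
          if P == A:
              star.append((A, Q))
          elif Q == A:
              star.append((A, P))
      return star

  # A local 1-form: one linear function per vertex, recorded by its values
  # on the basis edges of that vertex's tangent space.
  phi = {
      'A': {('A', 'B'): 3, ('A', 'C'): -1},
      'B': {('B', 'A'): 2, ('B', 'C'): 0},
      # ...one dictionary for each remaining vertex
  }
  # Note: this phi is not a cochain, since phi_A(AB) = 3 but -phi_B(BA) = -2.

  def evaluate(phi, A, chain):
      """The value of phi_A on a 1-chain in T_A, chain = {edge: coefficient}."""
      return sum(c * phi[A][edge] for edge, c in chain.items())

  print(tangent_basis('A', K_edges))                         # [('A', 'B'), ('A', 'C')]
  print(evaluate(phi, 'A', {('A', 'B'): 2, ('A', 'C'): 1}))  # 2*3 + 1*(-1) = 5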

Let's review the setup. First, we have the space of locations $X=K^{(0)}$, the set of all vertices of the cell complex $K$. Second, to each location $A\in X$, we associate the space of directions determined by the structure of $K$, specifically, by its edges. Now, while the directions at vertex $A\in K$ are given by the edges adjacent to $A$, we can also think of all $1$-chains in the star of $A$ as directions at $A$. They are subject to algebraic operations on chains and, therefore, form a module, $T_A$.

We now combine all the tangent spaces into one total tangent space. It contains all possible directions at each location: the tangent space $T_A$ at every vertex $A$ of $K$.

Definition. The tangent bundle of $K$ is the disjoint union of all tangent spaces: $$T(K):=\bigsqcup _{A\in K} \Big( \{A\} \times T_A \Big).$$

Then a local $1$-form is seen as a function on the tangent bundle, $$\varphi =\{\varphi_A: \ A\in K\}:T(K) \to R,$$ linear on each tangent space, defined by $$\varphi (A,AB):=\varphi_A(AB).$$

In particular, the work associates to every location and every direction at that location, a quantity: $$\varphi(A,AB)\in R.$$ The total work over a path in the complex is the line integral of $\varphi$ over a $1$-chain $a$ in $K$. It is simply the sum of the values of the form at the edges in $a$: $$\begin{array}{llllll} a=&A_0A_1+A_1A_2+...+A_{n-1}A_n \Longrightarrow \\ &\displaystyle\int_a \varphi:= \varphi (A_0,A_0A_1)+\varphi (A_1,A_1A_2)+... +\varphi (A_{n-1},A_{n-1}A_n). \end{array}$$
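A minimal sketch of this line integral: the path is recorded as its list of consecutive vertices, and the local form is stored, as before, by its values on the outgoing edges at each vertex (the values are made up):

  # Total work over a path: the sum of the values of the form at the edges traversed.
  def line_integral(phi, path):
      """path = [A0, A1, ..., An]; phi[A][(A, B)] is the value of phi_A on the edge AB."""
      return sum(phi[path[i]][(path[i], path[i + 1])] for i in range(len(path) - 1))

  phi = {
      'A': {('A', 'B'): 5},
      'B': {('B', 'A'): -5, ('B', 'C'): 2},
      'C': {('C', 'B'): -2},
  }
  print(line_integral(phi, ['A', 'B', 'C']))  # 5 + 2 = 7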

[Image: Flow through complex varies]

Let's suppose, once again, that this number $\varphi(A,AB)$ is the work performed by a given force while moving from $A$ to $B$ along $AB$. We know that this work should be the negative of the work carried out while going in the opposite direction; i.e., $$\varphi(A,AB)= -\varphi(B,BA).$$ From the linearity of $\varphi$ and the fact that $BA=-AB$ as chains, it follows that $\varphi(A,AB)= \varphi(B,AB)$. Then the local form $\varphi$, defined separately on each of the stars, has matching values on their overlaps. Therefore, it is well-defined as a linear map on $C_1(K)$.

Theorem. A local $1$-form that satisfies the matching condition above is a $1$-cochain of $K$.
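To see the theorem in computational terms, here is a small sketch that checks the matching condition edge by edge and, when it holds, records the local form as a single value per oriented edge, i.e., as a $1$-cochain (the data is made up):

  # If phi(A, AB) = -phi(B, BA) on every edge, the local form is a 1-cochain:
  # one well-defined value per oriented edge.
  def as_cochain(phi, edges):
      cochain = {}
      for A, B in edges:
          if phi[A][(A, B)] != -phi[B][(B, A)]:
              raise ValueError(f"not a cochain: mismatch on the edge {A}{B}")
          cochain[(A, B)] = phi[A][(A, B)]
      return cochain

  phi = {
      'A': {('A', 'B'): 5},
      'B': {('B', 'A'): -5, ('B', 'C'): 2},
      'C': {('C', 'B'): -2},
  }
  print(as_cochain(phi, [('A', 'B'), ('B', 'C')]))  # {('A', 'B'): 5, ('B', 'C'): 2}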

The theorem reveals a potential redundancy in the way we defined the space of all directions as the tangent bundle $T(K)$. We can then postulate that the direction from $A$ to $B$ is the opposite of the direction from $B$ to $A$: $$(A,AB)\sim -(B,BA).$$ This equivalence relation reduces the size of the tangent bundle via the quotient construction. Below, we can see the similarity between this new space and the space of tangents of a curve:

[Image: Tangent bundle dim 1]

It looks as if the disconnected parts of $T(K)$, the tangent spaces, are glued together.

The fact that this equivalence relation preserves the operations on each tangent space implies the following.

Theorem. $$T(K)/_{\sim}=C_1(K).$$

Exercise. Prove the theorem.

Then $1$-forms are local $1$-forms that are well-defined on $C_1(K)$. More precisely, the following diagram commutes for any $f\in C^1(K)$: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} \newcommand{\la}[1]{\!\!\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{ccccccccccccc} T(K)\ & \ra{f} & R &\\ \da{p} & & || & \\ C_1(K)\ & \ra{f} & R, \end{array}$$ where $p$ is the identification map. This justifies our focus on $1$-forms as $1$-cochains.

Exercise. Define the analog of the exterior derivative $d:C^0(K)\to T^1(K)$.

Higher order differential forms are multilinear functions, but this topic lies outside the scope of this chapter.

The derivative of a cell map

Consider the two standard ways to write the derivative of a function $f$ at $x=a$: $$\tfrac{dy}{dx} = f'(a).$$ What we know from calculus is that the left-hand side is not a fraction, but the equation can be rewritten as if it were: $$dy = f'(a)\, dx.$$ The equation represents the relation between the increment of $x$ and that of $y$ in the vicinity of $a$. This information is written in terms of a new coordinate system, $(dx,dy)$, and the best affine approximation (given by the tangent line) becomes a linear function in this system:

[Image: Tangent and differentials]

Things are much simpler in the discrete case.

Suppose $X$ and $Y$ are two cell complexes and $f:X\to Y$ is a cell map. Then “in the vicinity of point $a$” becomes “in the star of vertex $A$”:

[Image: Tangent and differentials discrete]

In fact, we can ignore the algebra of the $x$- and $y$-axes if we think of our equation as a relation between the elements of the tangent spaces at $A$ and $f(A)$. If we zoom out on the last picture, this is what we see:

[Image: Derivative and tangent spaces discrete]

Then the above equation becomes: $$e_Y=f(AB)e_X,$$ where $e_X,e_Y$ are the basis vectors of $T_A(X), T_{f(A)} (Y)$, respectively.

The idea is: our function maps both locations and directions. The general case is illustrated below:

[Image: Derivative as tangent map]

A cell map takes vertices to vertices and edges to either edges or vertices, and that's what makes the $0$- and $1$-chain maps possible. Then,

  • the locations are taken care of by $f_0:C_0(X)\to C_0(Y)$, and
  • the directions are taken care of by $f_1:C_1(X)\to C_1(Y)$.

Suppose also that $f(A)=P$, so that the location is fixed for now. Then the tangent spaces at these vertices are: $$T_A(X):=<\{AB \in X \}>\subset C_1(X), \quad T_P(Y):=<\{PQ \in Y \}>\subset C_1(Y).$$

Definition. The derivative of a cell map $f$ at vertex $A$ is a linear map $$f'(A):T_A(X) \to T_P(Y)$$ given by $$f'(A)(AB):=f_1(AB).$$
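Here is a minimal sketch of this definition, assuming the cell map is given by its vertex map $f_0$ (a dictionary) and that, as with any cell map, an edge goes to the edge spanned by the images of its endpoints, or to the zero chain if those coincide:

  # f'(A)(AB) := f_1(AB): the edge from f0(A) to f0(B), or the zero chain.
  def derivative_at(f0, A, star_A):
      """star_A: the list of edges (A, B) at A. Returns {edge: image edge or None}."""
      image = {}
      for (_, B) in star_A:
          P, Q = f0[A], f0[B]
          image[(A, B)] = None if P == Q else (P, Q)   # None stands for the zero chain
      return image

  # The constant map from the example below, restricted to the star of A:
  f0 = {'A': 'A', 'B': 'A', 'D': 'A'}
  print(derivative_at(f0, 'A', [('A', 'B'), ('A', 'D')]))
  # {('A', 'B'): None, ('A', 'D'): None} -- the zero map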

Example. Let's consider cell maps of the “cubical circle” (i.e., ${\bf S}^1$ represented by a $4$-edge cubical complex) to itself, $f: X \to X$:

[Image: Map S to S cubic]

Given a vertex, we only need to look at what happens to the edges adjacent to it. We assume that the bases are ordered according to their letters, such as $\{AB,BC\}$.

The derivatives of these functions are found below.

Identity: $$\begin{array}{lllllll} f_0(A) = A, &f_0(B) = B, &f_0(C) = C, &f_0(D) = D, \\ \Longrightarrow & f'(A)(AB)=AB, &f'(A)(AD)=AD. \end{array}$$ It's the identity map.

Constant: $$\begin{array}{llllllll} f_0(A) = A, &f_0(B) = A, &f_0(C) = A, &f_0(D) = A, \\ \Longrightarrow & f'(A)(AB)=AA=0, &f'(A)(AD)=AA=0. \end{array}$$ It's the zero map.

Vertical flip: $$\begin{array}{llllllllll} f_0(A) = D, &f_0(B) = C, &f_0(C) = B, &f_0(D) = A, \\ \Longrightarrow & f'(A)(AB)=DC, &f'(A)(AD)=DA. \end{array}$$ The matrix of the derivative is $$f'(A)=\left[ \begin{array}{cc} 0&1\\ 1&0 \end{array} \right]. \hspace{.37in}\square$$
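The last computation can be double-checked with a short sketch that assembles the matrix of $f'(A)$ in the bases of outgoing edges ordered alphabetically, as in the convention above (the orientation bookkeeping is simplified to match that convention, and we trust $f_0$ to come from a cell map):

  # Matrix of f'(A) for a cell map of the cubical circle A-B-C-D-A,
  # in the bases {AB, AD} at A and {DA, DC} at D = f0(A).
  def derivative_matrix(f0, A, neighbors):
      """neighbors[V]: the sorted list of vertices adjacent to V."""
      P = f0[A]
      codomain = [(P, Q) for Q in neighbors[P]]              # basis of the target tangent space
      columns = []
      for B in neighbors[A]:                                 # basis edge (A, B) of T_A
          image = (f0[A], f0[B])
          col = [1 if image == e else 0 for e in codomain]   # collapsed edges give a zero column
          columns.append(col)
      return [[columns[j][i] for j in range(len(columns))] for i in range(len(codomain))]

  neighbors = {'A': ['B', 'D'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['A', 'C']}
  flip = {'A': 'D', 'B': 'C', 'C': 'B', 'D': 'A'}            # the vertical flip
  print(derivative_matrix(flip, 'A', neighbors))             # [[0, 1], [1, 0]]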

Exercise. Repeat these computations for (a) the rotations; (b) the horizontal flip; (c) the diagonal flip; (d) the diagonal fold. Hint: the value of the derivative varies from point to point.

As this construction is carried out for each vertex in $X$, we have defined a function on the whole tangent bundle.

Definition. The derivative of cell map $$f:X\to Y$$ between two cell complexes is the map between their tangent bundles, $$f':T(X) \to T(Y),$$ given by $$f'(A,AB):=(f_0(A),f_1(AB)).$$

Exercise. In this context, define the directional derivative and prove its main properties.

Theorem (Properties of the derivative). For a given vertex and an adjacent edge, the derivative satisfies the following properties:

  • The derivative of a constant is zero in the second component: $$(C)'=(C,0),\ C\in Y.$$
  • The derivative of the identity is the identity: $$(\operatorname{Id})'=\operatorname{Id}.$$
  • The derivative of the composition is the composition of the derivatives: $$(fg)'=f'g'.$$
  • The derivative of the inverse is the inverse of the derivative: $$(f^{-1})'=(f')^{-1}.$$
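A quick spot check of the composition property on the cubical circle: the derivative acts on pairs (vertex, edge of its star), and composing two cell maps first and then differentiating gives the same result as composing the derivatives (collapsed edges are not handled in this sketch):

  # (f o g)'(A, AB) = f'(g'(A, AB)) on the cubical circle A-B-C-D-A.
  def D(f0, point):
      """The derivative on the tangent bundle: (A, (A, B)) -> (f0(A), (f0(A), f0(B)))."""
      A, (_, B) = point
      return (f0[A], (f0[A], f0[B]))

  flip = {'A': 'D', 'B': 'C', 'C': 'B', 'D': 'A'}              # the vertical flip
  rotation = {'A': 'B', 'B': 'C', 'C': 'D', 'D': 'A'}          # a rotation
  composition = {V: flip[rotation[V]] for V in 'ABCD'}         # flip after rotation

  point = ('A', ('A', 'B'))
  print(D(composition, point) == D(flip, D(rotation, point)))  # True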

Exercise. Prove the theorem.

Exercise. Prove that if $|f|$ is a homeomorphism, then $f'(A)$ is an isomorphism for every vertex $A$.

Notation: An alternative notation for the derivative is $Df$. It is also often called the tangent map of $f$ and denoted by $T(f)$.

Exercise. Show that $T$ is a functor.

We have used the equivalence relation $$(A,AB)\sim (B,-BA)$$ to glue together the tangent spaces of a cell complex: $$T(K)/_{\sim}=C_1(K).$$

Theorem. Suppose $X,Y$ are two cell complexes and $f:X\to Y$ is a cell map. Then the quotient map of the derivative $$[f']:T(X)/_{\sim} \to T(Y)/_{\sim}$$ is well-defined and coincides with the $1$-chain map of $f$, $$f_1:C_1(X)\to C_1(Y).$$

Proof. Suppose $f_0(A)=P$ and $f_0(B)=Q$. Then we compute: $$\begin{array}{llllll} f'(A,AB)&=(f_0(A),f_1(AB))\\ &=(P,PQ)\\ &\sim (Q,-QP)\\ &=(f_0(B),-f_1(BA))\\ &=f'(B,-BA). \end{array}$$ We have proven the following: $$(A,AB)\sim (B,-BA) \Longrightarrow f'(A,AB)\sim f'(B,-BA).$$ Therefore, $[f']$ is well-defined. $\blacksquare$

Chain maps

A cell map can't model jumping diagonally across a square.

[Image: Vector field and a jump]

The issue is related to one previously discussed: cell map extensions vs. chain map extensions (subsection V.3.10). Recall that in the former case, extensions may require subdivisions of the cell complex. The situation when the domain is $1$-dimensional is transparent:

[Image: Chain approximation]

In the former case, we can create a cell map, $$g(AB):=XY,$$ by extending its values from vertices to edges. In the latter case, an attempt at a cell-map extension (without subdivisions) fails, as there is no single edge connecting the two image vertices. However, there is a chain of edges: $$g(AB):=XY+YZ.$$

Even though the linearity cannot be assumed, the illustration alone suggests a certain continuity of this new “map”. In fact, we know that chain maps are continuous in the algebraic sense: they preserve boundaries, $$g_0\partial = \partial g_1.$$ The idea is also justified by the meaning of the derivative of a cell map $f$: $$f'\Big(A,[A,A+1] \Big)= \Big(f_0(A),f_1([A,A+1]) \Big).$$ It is nothing but a combination of the $0$- and the $1$-chain maps of $f$...
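Here is a minimal sketch of this algebraic continuity for the example above, with chains as dictionaries and the boundary of an edge computed as its end point minus its starting point (the vertex names follow the picture):

  # g0 sends the vertices A, B to X, Z; g1 sends the edge AB to the chain XY + YZ.
  # Check that g preserves boundaries: boundary(g1(AB)) = g0(boundary(AB)) = Z - X.
  def boundary(chain_1):
      """The boundary of a 1-chain {edge: coeff} as a 0-chain {vertex: coeff}."""
      out = {}
      for (P, Q), c in chain_1.items():
          out[Q] = out.get(Q, 0) + c
          out[P] = out.get(P, 0) - c
      return {V: c for V, c in out.items() if c != 0}

  g0 = {'A': 'X', 'B': 'Z'}
  g1 = {('A', 'B'): {('X', 'Y'): 1, ('Y', 'Z'): 1}}

  lhs = boundary(g1[('A', 'B')])                                  # g1 first, then the boundary
  rhs = {g0[V]: c for V, c in boundary({('A', 'B'): 1}).items()}  # boundary first, then g0
  print(lhs == rhs)                                               # True: both equal Z - X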

Suppose we are given $f$, a $0$-form on ${\mathbb R}$. Then we would like to interpret the pair $g=\{f,df\}$ as some chain map defined on $C({\mathbb R})$, the chain complex of time. What is the other chain complex $C$, the chain complex of space? Since these two forms take their values in the ring $R$, we can choose $C$ to be the chain complex made of two copies of $R$ connected by the identity: $$\partial=\operatorname{Id}:R \to R.$$ Below, we consider the more general setting of $k$-forms.

Theorem. Cochains are chain maps, in the following sense: for every $k$-cochain $f$ on $K$, there is a chain map from $C(K)$ to the chain complex $C$ with only one non-zero part, $\operatorname{Id}:C_{k+1}=R \to C_k=R$, as shown in the following commutative diagram: $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{r|ccccccccccc} C(K):& ... & C_{k+2}(K) & \ra{\partial} & C_{k+1}(K) & \ra{\partial} & C_k(K) & \ra{\partial} & C_{k-1}(K) & ... \\ f: & \ & \ \da{0} & & \ \da{df} & & \ \da{f} & & \ \da{0}&\\ C: & ... & 0 & \ra{\partial=0} & R & \ra{\partial=\operatorname{Id}} & R & \ra{\partial=0} &0&... \end{array} $$

Proof. We need to prove the commutativity of each of these squares. We go diagonally in two ways and demonstrate that the result is the same. We use the duality $d=\partial^*$.

For the first square, since $df=f\partial$: $$df\, \partial =(f\partial)\partial =f(\partial\partial)=0.$$ For the second square: $$f\partial =df=\operatorname{Id}df.$$ The third square (and all the remaining ones) consists of zero maps. $\blacksquare$
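For $k=0$, the commutativity of the middle square can also be checked numerically: take a $0$-cochain $f$ on three vertices, let $df$ be the $1$-cochain $df(PQ)=f(Q)-f(P)$, and compare it with $f$ evaluated on the boundary of a $1$-chain (a small self-contained sketch):

  f = {'A': 1, 'B': 4, 'C': 6}                          # a 0-cochain

  def boundary(chain_1):
      """The boundary of a 1-chain {edge: coeff} as a 0-chain {vertex: coeff}."""
      out = {}
      for (P, Q), c in chain_1.items():
          out[Q] = out.get(Q, 0) + c
          out[P] = out.get(P, 0) - c
      return out

  def eval_f(chain_0):                                  # f applied to a 0-chain
      return sum(c * f[V] for V, c in chain_0.items())

  def eval_df(chain_1):                                 # df applied to a 1-chain
      return sum(c * (f[Q] - f[P]) for (P, Q), c in chain_1.items())

  path = {('A', 'B'): 1, ('B', 'C'): 1}
  print(eval_f(boundary(path)), eval_df(path))          # 5 5 -- the middle square commutes
  loop = {('A', 'B'): 1, ('B', 'C'): 1, ('C', 'A'): 1}
  print(eval_df(loop))                                  # 0 -- df telescopes around a closed path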