This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.
The algebra of oriented cells
Contents
 1 Are chains just “combinations” of cells?
 2 The algebra of chains with coefficients
 3 The role of oriented chains in calculus
 4 Orientations and boundaries
 5 Computing boundaries with spreadsheets
 6 A homology algorithm for dimension 2
 7 The boundary of a cube in the $N$-dimensional space
 8 The boundary operator
Are chains just “combinations” of cells?
Consider the following topological problem:
The nature of the problem is topological because the rubber band is allowed to stretch and shrink and it won't change the answer.
But do we even understand the meaning of “go around”? After all, the band might go back and forth, cancelling some of the wraps. Intuitively, we can let the band fully contract and then count... We will sort this out later but, for now, we'll just try to see if we can make any progress with the tools we've got.
Mathematically, what we are dealing with is parametric curves:
We are considering maps from a closed interval to the ring (annulus), $f:[0,1]\to A$, with the ends taken to the same point: $f(0)=f(1)$. Then, we want to somehow count the number of turns.
At this point, the only available topological tool that seems to match the idea of a parametric curve is the $1$-cycles. Let's try to classify the $1$-cycles in the circle, or a ring, in terms of their homology and see if it solves our problem.
In the “hollow square” on the left, it seems clear that
 ”the chain $q=a+b+c+d$ goes once around the hole”.
If we think of chains as “combinations” of cells, we are forced to rely on the cancellation $2x=0$, which leads to binary arithmetic. This produces unexpected results:
 $p:=q+q=(a+b+c+d)+(a+b+c+d)=0$, even though it seems that “the chain $p$ goes twice around the hole”.
Exercise. Verify the following:
 if “the chain $r$ goes thrice around the hole”, then $r=q$;
 if “the chain $s$ goes once in the opposite direction”, then $s=q$;...
Continue.
Thus, we have only one $1$-cycle that's not homologous to $0$.
Our conclusion is now clear: without a way to record the directions of the edges of a cycle, we can't speak of how many turns it makes!
Example. Observe that the problem isn't with choosing the cycles that have edges cancelled. In the “thick hollow square”, we can choose these two $1$-cycles with no overlaps:
With the algebra we have learned, we can easily conclude the following about the homology of these cycles:
 $a\not\sim 0$ as $a$ isn't a boundary, and
 $b\not\sim 0$ as $b$ isn't a boundary, but
 $c=a+b\sim 0$ as $c=a+b$ is the boundary of the $2$-chain $s$.
Again, homology theory, as we know it, is unable to classify the cycles according to the number of times they go around the hole.
To see this example from another angle, we can think of the curve $c=a+b$ as the curve $a$ followed by the curve $b$. In that sense, the curve $c$ circles the hole twice. But with directionless edges, we can also think of $c=a+b$ as a sequence of edges that form a curve that doesn't complete the full circle:
$\square$
This is why we need to learn to count directed edges, even at the expense of added complexity. We will rebuild the algebra of chains and boundaries for directed edges and, furthermore, “oriented” cells.
Note: Later we will solve the original problem by looking at the loops as maps and examining their homology maps. The number of turns is called the degree of a map.
The algebra of chains with coefficients
It appears that all we need to do in order to be able to count turns is to assign a direction to each edge of our cubical grid. Fortunately, they are already chosen for us; they are the directions of the axes:
However, we should be able to choose any other combination of directions and our evaluation of the topology of a given cubical complex should remain the same.
No matter how it is done, this step, by design, has dramatic implications. We notice that $$AB \ne BA!$$ In fact, $$AB=-BA.$$ It follows that $AB+AB \ne 0$, which makes us discard the cancellation rule we have been using: $x+x=0$. By doing so we are rejecting the binary arithmetic itself.
This time, chains are still “combinations of cells” but now these cells can, without cancelling, accumulate: $x+x=2x$, etc. In fact, one can think informally of chain $2x$ as if cell $x$ is visited twice. These are the interpretations of the new, directed chains:
 chain $x$ visits cell $x$ once going in the positive direction (that of the axis);
 chain $2x$ visits cell $x$ twice;
 chain $3x$ visits cell $x$ thrice;
 chain $-x$ visits cell $x$ once going in the negative direction;
 chain $0=0x$ doesn't visit cell $x$ (or any other);
 chain $x+2y+5z$ visits cell $x$ once, cell $y$ twice, and cell $z$ five times, in no particular order.
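These rules are easy to model in code. Here is a minimal sketch (the representation and names are ours, not the text's): a chain is a dictionary from cell names to integer coefficients, so that $x+x=2x$ and, unlike in binary arithmetic, nothing cancels unless the signs differ:

```python
def add(u, v):
    """Sum of two chains; cells whose coefficients cancel are dropped."""
    w = dict(u)
    for cell, r in v.items():
        w[cell] = w.get(cell, 0) + r
        if w[cell] == 0:
            del w[cell]
    return w

def scale(r, u):
    """Scalar multiple r*u of a chain."""
    return {cell: r * s for cell, s in u.items()} if r else {}

x = {'x': 1}
print(add(x, x))             # x + x = 2x: {'x': 2}
print(add(x, scale(-1, x)))  # x + (-x) = 0: {}
```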
Definition. A $k$-chain in a cubical complex $K$ is a “formal linear combination of $k$-cells” in the complex: $$S=r_1a_1+r_2a_2+...+r_na_n,$$ where $a_1,a_2,...,a_n$ are $k$-cells in $K$ and the coefficients $r_1,r_2,...,r_n$ are some scalars.
These “scalars” are chosen from some collection $R$. What kind of set is $R$? First, we need to be able to add chains, $$\begin{array}{lll} (r_1a_1+r_2a_2+...+r_na_n)+(s_1a_1+s_2a_2+...+s_na_n)\\ \qquad \qquad\qquad\qquad\qquad =(r_1+s_1)a_1+(r_2+s_2)a_2+...+(r_n+s_n)a_n, \end{array}$$ and to multiply chains by scalars, $$\begin{array}{lll} p(r_1a_1+r_2a_2+...+r_na_n)\\ \qquad \qquad\qquad\qquad\qquad =(pr_1)a_1+(pr_2)a_2+...+(pr_n)a_n. \end{array}$$ This makes us choose $R$ to be a ring. Of course, if we choose $R={\bf Z}_2$, we are back to the binary theory, which remains a valid choice.
As a preview, the consequences of different choices of coefficients are outlined below:
Suppose $K$ is a finite cubical complex.
Main choices for coefficients, $R=$  ${\bf Z}_2$ (binary)  ${\bf Z}$ (integral)  ${\bf R}$ (real) 
Observations:  ${\bf Z}_2$ is a mere projection of ${\bf Z}$.  ${\bf Z}$ is the best for counting.  ${\bf R}$ contains ${\bf Z}$ but patches over finer details. 
Algebraically, $R$ is:  a finite field  an infinite cyclic group  an infinite field 
Notation: add “$;R$”  $C_k(K;{\bf Z}_2),H_k(K;{\bf Z}_2)$  $C_k(K;{\bf Z}),H_k(K;{\bf Z})$  $C_k(K;{\bf R}),H_k(K;{\bf R})$ 
Algebraically, $C_k(K;R)$ is:  an abelian group with all elements of order $2$, or a vector space over ${\bf Z}_2$ (also a list)  a free finitely-generated abelian group  a finite-dimensional vector space over ${\bf R}$ 
Decomposition of $C_k(K;R)=$  ${\bf Z}_2\oplus ...\oplus {\bf Z}_2$  ${\bf Z}\oplus ...\oplus {\bf Z}$  ${\bf R}\oplus ...\oplus {\bf R}$ 
Studied in:  Modern algebra  Linear algebra  
Similar choices for $R$:  other finite rings and fields: ${\bf Z}_p,p=3,4,...$  The Universal Coefficient Theorem computes all others from $H(K;{\bf Z})$!  other infinite fields: ${\bf C},{\bf Q}$ 
Accessibility:  1. Chains are lists of cells.  2. The meaning of chains $2x,3x,-x$ is clear.  3. Vector spaces are more familiar than groups. 
Boundary operator (and homology maps) is:  same, $\mod 2$, matrix as the other two  same integer-valued matrix  
a homomorphism  a linear operator  
Bases of chain groups are:  the sets of all the $k$-cells of $K$  
Bases of cycle and boundary groups are:  same, $\mod 2$, chains as the other two  same integer-valued chains  
Algebraically, $H_k(K;R)$ is:  a finite abelian group with all elements of order $2$ (a list)  a finitely-generated abelian group with a possible torsion part (${\bf Z}_2$ for the Klein bottle)  a finite-dimensional vector space 
Decomposition of $H_k(K;R)=$  ${\bf Z}_2\oplus ...\oplus {\bf Z}_2$  ${\bf Z}\oplus ...\oplus {\bf Z}\oplus {\bf Z}_{p_1}\oplus ...\oplus {\bf Z}_{p_s}$  ${\bf R}\oplus ...\oplus {\bf R}$ 
Outcomes:  Same generators, same dimension/rank, same count of topological features: components, tunnels, voids, etc. (except for the torsion part)! 
Exercise. What are possible drawbacks of using $R={\bf Z}_3$? ${\bf Z}_4$?
For simplicity and convenience, we choose chains and, therefore, homology with $$\begin{array}{c} \hline \text{real coefficients}\\ \hline \end{array}$$ for the rest of this chapter. We will omit “$;{\bf R}$” in our notation when there is no confusion: $H_k(K)=H_k(K;{\bf R})$.
The role of oriented chains in calculus
As we now understand, in every cubical complex there are two versions of the same edge present: positive and negative. It is positive when the direction of the edge coincides with the direction of the axis. So, for the same cell $[a,b]\subset {\bf R}$,
 if $a < b$, then the direction of $[a,b]$ is positive, and
 if $a > b$, then the direction of $[a,b]$ is negative.
In other words, $$[b,a]=-[a,b].$$
In light of this approach, we can rewrite the orientation property of the integral from elementary calculus: $$\int_a^b f(x)dx = -\int_b^a f(x) dx,$$ in terms of chains: $$\int\limits_{[a,b]} f(x)dx = -\int\limits_{[b,a]} f(x) dx.$$
We can also rewrite the additivity property of integrals: $$\int_a^b f(x) dx + \int_b^c f(x) dx = \int_a^c f(x) dx.$$ But first we observe that, if the intervals overlap, the property still makes sense, as the integral over the overlap is counted twice:
Then the property takes this form: $$\int\limits_{[a,b]} f(x)dx +\int\limits_{[c,d]} f(x)dx = \int\limits_{[a,b]+[c,d]} f(x)dx.$$
Even the scalar multiplication property is given an additional meaning: $$\int\limits_{r[a,b]} f(x)dx = r\int\limits_{[a,b]} f(x) dx.$$
The left-hand sides of these three identities can now be understood as a certain function evaluated on these chains, as follows. Suppose complex $K$ represents interval $[A,B] \subset {\bf R}$. Then for any given integrable function $f:[A,B]\to {\bf R}$, we can define a new function: $$F:C_1(K)\to {\bf R}$$ by $$F( s ):=\int\limits_{s} f(x)dx, \quad \forall s\in C_1(K).$$ Then, according to the above identities, this function is linear! Indeed, we've shown: $$F(ru+tv)=rF(u)+tF(v),\quad r,t \in {\bf R}, u,v \in C_1(K).$$
Exercise. (a) Define a similar function for $0$-chains. (b) Interpret the Fundamental Theorem of Calculus.
A linear function $F:C_1(K)\to {\bf R}$ is called a $1$-cochain on $K$. In the context of calculus, it is also called a (discrete) differential form of degree $1$. This idea is the basis of the modern approach to calculus.
Conclusion: oriented chains serve as domains of integration.
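In code, such a cochain is just the linear extension of per-edge values. The sketch below assumes the integrals of $f$ over the positively oriented $1$-cells have already been computed; the edge names and numbers are made up for illustration:

```python
# Hypothetical precomputed integrals of f over the positively oriented
# 1-cells e1, e2, e3 (made-up values).
edge_integral = {'e1': 0.5, 'e2': 1.25, 'e3': -2.0}

def F(chain):
    """The 1-cochain: linear extension of the edge integrals to 1-chains."""
    return sum(r * edge_integral[e] for e, r in chain.items())

u = {'e1': 1, 'e2': 1}             # the chain e1 + e2
v = {'e2': -1, 'e3': 2}            # the chain -e2 + 2e3
w = {'e1': 3, 'e2': 1, 'e3': 4}    # 3u + 2v, combined by hand
assert abs(F(w) - (3 * F(u) + 2 * F(v))) < 1e-12   # F is linear
```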
Orientations and boundaries
A crucial part of homology theory has been the cancellation rule of $R={\bf Z}_2$. With this rule, all cells that appear twice in a given chain, such as the boundary of a cell, cancel. Without this rule, this will no longer automatically happen!
The idea is then to assume that, before any computation starts, an “orientation” is assigned to each cell in $K$. Then, each cell $s$ may appear in computations with either positive orientation, as $s$, or negative orientation, as $-s$. This should be done in such a way that, when computing the boundary of a chain $a+b$ with two adjacent cells $a,b$ present, each “internal face” $s$ (shared by $a$ and $b$) will appear twice, but with the opposite orientations and, therefore, cancel: $s+(-s)=0$.
But what is an orientation?
We already know the answer for $1$-cells. The orientation means a choice of the direction of the edge, and this is how they are supposed to cancel:
To have them cancel, what if we choose the orientation for either of the $2$-cells involved to mean the choice of the counterclockwise direction around it? Then the clockwise direction will be the opposite orientation. Next, the orientations of the boundary $1$-cells are “read” by going around the $2$-cell. Then, indeed, there will be $s$ coming from the boundary of the first cell and $-s$ from the boundary of the second.
What about $0$-cells? The idea of an orientation of a point might seem strange. Instead, let's ask again: how do these cells cancel?
The boundary of a $1$-cell $AB$ is, as before, a linear combination of its two endpoints $A$ and $B$, but what are the coefficients? What helps here is that we also expect the boundary of $AB+BC$ to be a linear combination of $A$ and $C$, so $B$ has to cancel: $$\begin{array}{lll} ?A+?C&=\partial (AB+BC)&=\partial (AB)+\partial (BC)\\ &=(?A+?B)+(?B+?C)&=?A+(?B+?B)+?C. \end{array}$$ Therefore, $B$ has to appear with the opposite signs! It might appear with “$+$” in $\partial (AB)$ and with “$-$” in $\partial (BC)$. But the only difference between them is the position of $B$ with respect to the direction of these edges, the beginning or the end. We choose then the boundary of an edge to be equal to its final point minus its initial point: $$\partial (a) = \partial (AB) := B-A.$$ Then, the orientation of a $0$-cell works as expected even though it doesn't have a clear geometric meaning. It seems that the orientation means what the algebra tells us it means!
Of course, the boundaries of $0$-chains are all, as before, equal to $0$: $$\partial (A) := 0.$$
Example. Let's confirm that the definition works. We assign random orientations to the $1$- and $2$-cells:
Then, $$\partial (a-b) = \partial a - \partial b =(A-B)-(C-B)=A-C,$$ and the interior vertex $B$ is cancelled. And so it is for the chain $b-a$, but not for $a+b$.
Next, $$\partial (\alpha-\beta) = \partial \alpha -\partial \beta =(a+f+e+g)-(b-c+d+f)=a-b+c-d+e+g,$$ and the interior edge $f$ is cancelled. And so it is for the chain $\beta - \alpha$, but not for $\alpha+\beta$.
$\square$
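The particular orientations in the figure can't be reproduced here, but the cancellation mechanism itself can be sketched in code, with a directed edge written as an ordered pair (initial vertex, final vertex) and $\partial(AB)=B-A$:

```python
# A sketch: a 1-chain is a dict from directed edges (A, B) to
# coefficients; the boundary of an edge is its final point minus
# its initial point.
def boundary(chain):
    result = {}
    for (A, B), r in chain.items():
        for vertex, s in ((B, r), (A, -r)):    # final minus initial
            result[vertex] = result.get(vertex, 0) + s
            if result[vertex] == 0:
                del result[vertex]
    return result

a, b = ('A', 'B'), ('B', 'C')   # a = AB, b = BC
print(boundary({a: 1, b: 1}))   # interior vertex B cancels: {'A': -1, 'C': 1}
```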
Exercise. Find three linearly independent $1$-cycles in the above example and compute their boundaries.
We are now to reconstruct the homology theory for the new setting.
Proposition. For any cubical complex $K$ (and any choice of orientations of its cells), the composition of boundary operators $$\partial _1\partial _2:C_2(K)\to C_0(K)$$ is zero.
Exercise. (a) Prove the proposition. (b) Show that the homology groups are welldefined.
Definition. The homology groups of a cubical complex $K$ are the vector spaces defined by $$H_k(K):=\ker \partial _k / \operatorname{Im} \partial _{k+1}, \ k=0,1.$$ The Betti numbers are the dimensions of the homology groups (over ${\bf R}$): $$\beta_k(K):=\dim H_k(K), \ k=0,1.$$
Exercise. Represent the sets below as realizations of cubical complexes. For each of them:
 (a) find the chain groups and find the boundary operator as a matrix;
 (b) using only part (a) and linear algebra, find $Z_k,B_k,H_k$ for all $k$, including the generators;
 (c) repeat the computations for two more choices of coefficients $R={\bf Z},{\bf Z}_3$.
Exercise. Starting with the cubical complex $K$ from the last exercise, add/remove cells so that the new complexes $K'$ satisfy:
 $\beta_0(K')=0,1,2$, or
 $\beta_1(K')=1,2,3$.
Present the chain complexes and use linear algebra, such as $$\dim \left( {\bf R}^n / {\bf R}^m \right) =n-m,$$ to prove the identities.
The $3$-dimensional case is more complex. Even though we understand the meaning of the orientation of these $2$-cells, what about these two adjacent $3$-cells $\alpha$ and $\beta$?
We need to define orientation for them in such a way that the $2$-cell $\lambda$ would appear with two opposite orientations, as a boundary face of $\alpha$ and of $\beta$. How? Notice that understanding orientation as an order of its vertices works for $1$-cells, say, $a=AB$, and for $2$-cells, say, $\tau =ABCD$. So, maybe it's a good idea to try the same for the $3$-cells, say, $\alpha = ABCDEFG$?
Exercise. Show that such a definition won't produce what we expect. Hint: does it really work for dimension $2$?
Defining orientation for the case of higher dimensions will require using the product structure of the space.
Computing boundaries with spreadsheets
This is how these simple formulas for dimension $1$ are implemented with a spreadsheet program such as Excel:
Link to the file: Spreadsheets.
Example. As an illustration, this is a $2$-chain and its boundary, a $1$-chain, computed and presented in Excel:
The direction of an edge is given by $1$ if it goes along the axis and $-1$ if it goes against it. Observe how the directions change as the boundary curve goes counterclockwise.
$\square$
Exercise. What happens to the values of the boundary chain if we replace all $1$'s in the $2$-chain on the left with $2$'s? What happens to the shape of the boundary if we make them random?
Just as in the binary case, the approach to the algorithm for computing the boundary operator is the opposite of that of the algebra we have seen. Instead of computing the boundary of each of the chain's $k$-cells as the sum of this cell's boundary cells and then simplifying, we just add the values (with appropriate signs) of the $k$-cells of the chain adjacent to each $(k-1)$-cell. As before, the code is very simple.
Dimension $0$: For each vertex, we compute the sum of the values at the four adjacent edges with appropriate signs:
 $=-s!RC[-1]+s!RC[1]-s!R[-1]C+s!R[1]C$,
where $s$ refers to the worksheet that contains the chain. Then,
 if the curve isn't there, we have $0+0+0+0=0$;
 if the curve is passing by once, we have $1+0-1+0=0$ (in some order);
 if the curve is passing by twice, we have $1-1+1-1=0$;
 if the curve begins or ends here, we have $1+0+0+0=1$ or $-1$.
Dimension $1$: For each edge, we, similarly, compute the sum of the values at the two adjacent faces with appropriate signs,
 horizontal: $=s!R[-1]C-s!R[1]C$,
 vertical: $=-s!RC[-1]+s!RC[1]$.
Then,
 if the edge is away from the region, we have $0+0=0$;
 if the edge is inside the region, we have $1-1=0$;
 if the edge is on the boundary of the region, we have $1+0=1$.
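The two spreadsheet formulas can be mirrored in ordinary code. This is a sketch with our own indexing (face $(i,j)$, its top horizontal edge $(h,i,j)$, its left vertical edge $(v,i,j)$, its top-left vertex $(i,j)$); the sign conventions are chosen so that the two steps compose to zero:

```python
from collections import defaultdict

def boundary_2(faces):
    """Boundary of a 2-chain {face: value} as a 1-chain {edge: value}."""
    edges = defaultdict(int)
    for (i, j), r in faces.items():
        edges[('h', i, j)] -= r        # top edge
        edges[('h', i + 1, j)] += r    # bottom edge
        edges[('v', i, j)] += r        # left edge
        edges[('v', i, j + 1)] -= r    # right edge
    return {e: v for e, v in edges.items() if v != 0}

def boundary_1(edges):
    """Boundary of a 1-chain {edge: value} as a 0-chain {vertex: value}."""
    vertices = defaultdict(int)
    for (kind, i, j), r in edges.items():
        if kind == 'h':
            vertices[(i, j)] += r      # left endpoint
            vertices[(i, j + 1)] -= r  # right endpoint
        else:
            vertices[(i, j)] += r      # top endpoint
            vertices[(i + 1, j)] -= r  # bottom endpoint
    return {v: r for v, r in vertices.items() if r != 0}

square = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 1}
print(len(boundary_2(square)))         # 8 boundary edges; interior edges cancel
print(boundary_1(boundary_2(square)))  # the boundary of a boundary: {}
```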
Exercise. Modify the code to compute boundaries over ${\bf Z}_p$.
Example. We can compute boundaries of more complex chains:
We demonstrated here that the result of two boundary operators applied consecutively is, again, $0$.
$\square$
Exercise. Define the group $C$ of all chains (as opposed to the collection of all groups $C_k$ of $k$-chains, $k=0,1,...$). Define the boundary operator for $C$. Hint: consider the picture above.
A homology algorithm for dimension 2
The image below has two components, the first one with two holes:
We will assume that the binary images have black objects on white background.
Exercise. How is the count affected if we take the opposite approach?
We will use $1$-cycles, understood as circular sequences of edges, to partition the image. For example, below, $A$ and $B$ are $0$-cycles, $C$ and $C'$ are $1$-cycles:
The result is an unambiguous representation of the regions by the curves that enclose them. A $0$-cycle is traversed clockwise and a $1$-cycle is traversed counterclockwise.
Observe that, in either case, black is on the left.
The algorithm is incremental. The cycles are constructed as pixels, one by one, are added to the image. The result is an increasing sequence of complexes called a filtration: $$K_1\hookrightarrow K_2 \hookrightarrow ... \hookrightarrow K_m.$$
According to the cell decomposition of the image, every pixel also contains edges and vertices; therefore, the process of adding a pixel starts with adding its vertices and then its edges, unless those are already present as parts of other pixels.
Exercise. Suggest another order of cells so that we still have a cubical complex at every stage.
After all the edges have been added, there is always a $1$-cycle inside the pixel. It is “removed” as the face closes the hole.
As the new pixels are being added, components merge, holes split, etc. Thus, the topology of the image changes. This process is captured by cycles as they appear and disappear, merge and split.
Outline:
 All pixels in the image are ordered in such a way that all black pixels come before white ones.
 Following this order, each pixel is processed:
 $\bullet$ add its vertices, unless those are already present as parts of other pixels;
 $\bullet$ add its edges, unless those are already present as parts of other pixels;
 $\bullet$ add the face of the pixel.
 At every step, the graph is given a new node and arrows that connect the nodes in order to represent the merging and the splitting of the cycles:
 $\bullet$ adding a new vertex creates a new component;
 $\bullet$ adding a new edge may connect two components, or create, or split a hole;
 $\bullet$ adding the face to the hole eliminates the hole.
Only adding an edge presents any challenge. Thanks to the cell decomposition of the image, there are only $4$ cases to consider.
Case (a): the new edge connects two different $0$-cycles.
These two $0$-cycles merge.
Case (b): the new edge connects a $0$-cycle to itself.
This $0$-cycle gives birth to a new $1$-cycle.
Case (c): the new edge connects a $1$-cycle to itself.
This $1$-cycle splits into two.
Case (d): the new edge connects a $1$-cycle to a $0$-cycle (inside).
This $0$-cycle is absorbed into the $1$-cycle.
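The outline can be stripped down to just the count of components and holes. In this sketch (ours, with union-find standing in for the full cycle bookkeeping), a new edge whose endpoints are already connected gives birth to a hole, as in cases (b) and (c); otherwise two components merge, as in cases (a) and (d); and each added face closes one hole:

```python
parent = {}

def find(v):
    """Union-find root with path halving."""
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

b0 = b1 = 0   # Betti numbers of the growing complex

def add_vertex(v):
    global b0
    if v not in parent:
        parent[v] = v
        b0 += 1               # a new component is born

def add_edge(u, v):
    global b0, b1
    ru, rv = find(u), find(v)
    if ru == rv:
        b1 += 1               # endpoints already connected: a hole is born
    else:
        parent[ru] = rv
        b0 -= 1               # two components merge

def add_face():
    global b1
    b1 -= 1                   # the face closes a hole

# Process one pixel: its 4 vertices, then its 4 edges, then its face.
for v in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    add_vertex(v)
for u, v in [((0, 0), (0, 1)), ((0, 1), (1, 1)),
             ((1, 1), (1, 0)), ((1, 0), (0, 0))]:
    add_edge(u, v)            # the last edge closes a 1-cycle
add_face()                    # ...and the face fills it
print(b0, b1)                 # 1 0: one component, no holes
```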
Exercise. Describe such an algorithm for the (a) triangular and (b) hexagonal grid.
The boundary of a cube in the $N$-dimensional space
In the presence of the product structure, the decision about orientations can be made for us and in a uniform fashion: all edges are directed along the coordinate axes. For $1$-cells in the plane, the formulas are: $$\partial \left(\{0 \} \times [0,1]\right) = -\{(0,0) \} + \{(0,1) \},$$ $$\partial \left([0,1] \times \{0 \}\right) = -\{(0,0) \} + \{(1,0) \}.$$
For a $2$-cell, its boundary should, as we know, be the four oriented edges appearing as we go counterclockwise around the square:
Then, $$\begin{array}{lllll} &\partial \left([0,1] \times [0,1]\right) \\ &= [0,1] \times \{0 \} + \{1 \} \times [0,1] - [0,1] \times \{1 \} - \{0 \} \times [0,1]. \end{array}$$ Here the edges are always oriented along the axes and going counterclockwise around the square produces these signs. To confirm that this makes sense, let's factor and rearrange the terms: $$\begin{array}{llll} & = & [0,1] \times \left( \{0 \} - \{1 \} \right) & + \left( \{1 \} - \{0 \} \right) \times [0,1] \\ & = & - [0,1] \times \partial [0,1] & + \partial [0,1] \times [0,1]. \end{array}$$ That's how products behave under the boundary operator.
Warning: Don't make a calc-1-type mistake by assuming that the boundary/derivative of the product is the product of the boundaries/derivatives! For example, the boundary of the square $a\times b$ isn't the product of the boundaries of the two edges $a,b$ that form it: $$\partial (a\times b) \ne \partial a \times \partial b.$$ Indeed, the left-hand side is the four edges of the square and the right-hand side is its four vertices. Even the dimensions/units don't match!
The correct formula of the product rule for boundaries is: $$\partial (a\times b)=\partial a \times b +(-1)^{\dim a} a \times \partial b,$$ for cells and chains of all dimensions.
Exercise. Use the formula to compute the boundary of a cube $\partial\left( [0,1]\times[0,1]\times[0,1] \right)$ and confirm that the opposite faces appear with opposite orientations.
We will rely on the product structure to compute the boundary of the $k$dimensional cube in the $N$dimensional space.
Recall that a $k$-cube is the product of $k$ edges and some vertices. In ${\mathbb R}^N$, a cubical $k$-cell can be represented as the product if the $N$-dimensional space is seen as the product of $N$ copies of ${\bf R}$: $$\begin{array}{cccccccccccccccc} Q^k&=\Pi_{i=1}^N A_i&= &A_1&\times &... &\times &A_{s-1}& \times &A_s &\times &A_{s+1}&\times &...& \times &A_N\\ \cap & \cap &&\cap & &&&\cap &&\cap &&\cap &&&&\cap \\ {\bf R}^N&=\Pi_{i=1}^N {\bf R}^1&=& {\bf R}&\times& ... &\times &{\bf R}& \times &{\bf R}& \times &{\bf R}&\times &... &\times &{\bf R}. \end{array}$$ Here, each $A_i$ is a subset of the $i$th axis, ${\bf R}$, which is either
 an edge, such as $[m,m+1]$, or
 a vertex, such as $\{m\}$.
There are exactly $k$ edges on the list. Note that this representation is unique.
Example. Here's an example of a $2$-cube, a square, in the $3$-dimensional space: $$Q^2=[a,b]\times [c,d] \times \{e\},$$ illustrated below:
$\square$
Even though the meaning of the boundary as shown above is clear, we proceed to give a general definition.
We already understand products of cubes. If
 $A^p$ is a $p$-cube in ${\mathbb R}^n,$
 $B^q$ is a $q$-cube in ${\mathbb R}^m,$ then
 $A^p\times B^q$ is a $(p+q)$-cube in ${\mathbb R}^{n+m}$.
We also need to understand products of chains. If
 $a \in C_p({\mathbb R}^n)$,
 $b \in C_q({\mathbb R}^m)$, then
 $a\times b \in C_{p+q}({\mathbb R}^{n+m})$.
Predictably, this product is defined via summation of the pairwise products of the cubes. If
 $a = \sum_i a_i P_i$, and
 $b = \sum_i b_i Q_i$,
where $a_i,b_i$ are the scalars and $P_i,Q_i$ are the cells (finitely many!), then
 $a\times b := \sum_i a_ib_i (P_i\times Q_i)$.
Note: this product is “bilinear”.
The definition of the boundary is inductive.
As the base of induction, we define the boundary for $1$-cells, the edges. Suppose $$Q^1:=\Pi_{i=1}^N A_i,$$ with
 $A_s:=[m_s,m_s+1]$ and
 $A_i:=\{m_i\}$ for $i \ne s$,
for some particular choice of $s$. Its boundary is also a product: $$\partial Q^1=\Pi_{i=1}^N B_i.$$ What are the values of these components? We define, as before, the boundary with a change in a single component:
 $B_s=\partial A_s=\{m_s+1\}-\{m_s\}$ and
 $B_i=A_i=\{m_i\}$ for $i \ne s$.
We rephrase the formula: $$\partial \left( \{m_1\}\times... \times \{m_{s-1}\}\times [m_s,m_s+1]\times \{m_{s+1}\}\times ... \times \{m_N\} \right)$$ $$= \{m_1\}\times... \times \{m_{s-1}\}\times (\{m_s+1\}-\{m_s\}) \times \{m_{s+1}\}\times ... \times \{m_N\}.$$ This is, as expected, the endpoint
 $\{m_1\}\times... \times \{m_{s-1}\}\times \{m_s+1\} \times \{m_{s+1}\}\times ... \times \{m_N\}$
minus the beginning point of the edge:
 $\{m_1\}\times... \times \{m_{s-1}\}\times \{m_s\} \times \{m_{s+1}\}\times ... \times \{m_N\}$.
Thus, we have defined the boundary for all edges in the $N$-dimensional space for all $N$. We also know the boundary of vertices: $\partial V:=0$.
Next we assume that the boundary is defined for all $k$-cells, in the $N$-dimensional space for all $N$, and then extend the definition to $(k+1)$-cells, as follows. We simply use the fact that any $(k+1)$-cell $Q^{k+1}$ is the product of a $k$-cell $Q^k$ with an edge $E$: $$Q^{k+1}=Q^k\times E.$$
More precisely, we deal with these cells in these Euclidean spaces:
 $Q^k\subset {\bf R}^n$ is a $k$-cube, and
 $A=E\subset {\bf R}^1$ is an edge, or
 $A=V\subset {\bf R}^1$ is a vertex, then
 $Q^k\times A \subset {\bf R}^n \times {\bf R}^1$ is a $k$- or a $(k+1)$-cube.
Finally, the boundary of a cubical cell $Q^{k+1}=Q^k\times A$ constructed this way is defined in terms of its components.
 Case 1: if $A$ is an edge $E$, then
$$\partial (Q^k\times E):=\partial Q^k \times E +(-1)^k Q^k \times \partial E.$$
 Case 2: if $A$ is a vertex $V$, then
$$\partial (Q^k\times V):=\partial Q^k \times V. $$ $\square $
Of course, both cases are just manifestations of the more general product rule, as $\partial V=0$.
Using this definition, we can compute the boundary of any cubical cell: $$A_1\times ... \times A_{s-1} \times A_s \times A_{s+1}\times ... \times A_n$$ by adding one component $A_s$ at a time and, depending on whether it is an edge or a vertex, using the appropriate formula from above:
 $A_1$,
 $A_1\times A_2$,
 $A_1\times A_2\times A_3$,
 ...
 $A_1\times ... \times A_{s-1} \times A_s$,
 ...
 $A_1\times ... \times A_{s-1} \times A_s \times A_{s+1}\times ... \times A_n$.
Example. Let's compute the boundary of $$Q^2:=[0,1]\times \{5\} \times [3,4].$$ Here
 $A_1:=[0,1]$,
 $A_2:=\{5\}$,
 $A_3:=[3,4]$.
Starting from the left. For $\partial A_1$: $$\partial [0,1]=\{1\}-\{0\}.$$ Next for $\partial (A_1\times A_2)$. From the definition, since the second component is a vertex, it's case 2. We substitute the last equation: $$\partial ([0,1]\times\{5\})=\partial [0,1]\times\{5\} =(\{1\}-\{0\})\times \{5\}.$$ Now for $\partial (A_1\times A_2\times A_3)$. The last component is an edge, so it's case 1. We substitute the last equation: $$\begin{array}{lll} \partial \left( ([0,1]\times\{5\}) \times [3,4] \right)\\ =\partial ([0,1]\times\{5\}) \times [3,4] +(-1)^1 ([0,1]\times\{5\}) \times \partial [3,4]\\ =(\{1\}-\{0\})\times \{5\}\times [3,4]-([0,1]\times\{5\})\times (\{4\}-\{3\}). \end{array}$$ To check that the answer makes sense, verify these facts: it's in $3$-space; its dimension is $1$; there are $4$ edges; all lie in the plane $y=5$, etc.
$\square$
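The inductive definition can be turned into code directly. A sketch with our own encoding: a cell is a tuple of components, ('v', a) for the vertex $\{a\}$ and ('e', a, b) for the edge $[a,b]$; a chain is a dict from cells to coefficients:

```python
def dim(cell):
    """Dimension of a product cell = the number of edge components."""
    return sum(1 for c in cell if c[0] == 'e')

def boundary_cell(cell):
    """Boundary chain of a product cell, by induction on its components."""
    if not cell:
        return {}
    Q, A = cell[:-1], cell[-1]
    result = {}
    for c, r in boundary_cell(Q).items():   # the term (boundary of Q) x A
        result[c + (A,)] = result.get(c + (A,), 0) + r
    if A[0] == 'e':                         # case 1 also adds (-1)^k Q x dA
        sign = (-1) ** dim(Q)
        for vertex, s in ((('v', A[2]), 1), (('v', A[1]), -1)):
            key = Q + (vertex,)
            result[key] = result.get(key, 0) + sign * s
    return {c: r for c, r in result.items() if r != 0}

def boundary(chain):
    """Linear extension of boundary_cell to chains."""
    result = {}
    for cell, r in chain.items():
        for c, s in boundary_cell(cell).items():
            result[c] = result.get(c, 0) + r * s
    return {c: v for c, v in result.items() if v != 0}

# The example above: Q^2 = [0,1] x {5} x [3,4].
Q2 = (('e', 0, 1), ('v', 5), ('e', 3, 4))
dQ2 = boundary_cell(Q2)
print(len(dQ2))        # 4 edges, matching the computation above
print(boundary(dQ2))   # the double boundary: {}
```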
Exercise. Illustrate this computation with a picture similar to the one in the beginning of the subsection. From that picture, compute the boundary of square $Q$ and of the whole cube.
The boundary operator
As usual, we extend the boundary operator from the one defined for the cells only to one that is defined for all chains. We say that “we use linearity” and this is what we mean. As every $k$-chain is a formal linear combination of $k$-cells, the $k$-cells form the basis of the space of $k$-chains $C_k$. Now this is what we know from linear algebra:
 a linear operator is fully defined by its values on the basis elements; and
 any choice of the values on the basis determines a linear operator.
The latter is exactly what we have. Indeed, $\partial _k$ is defined on the $k$-cells and now these values are combined to produce the values for all $k$-chains: $$\partial _k \left( \sum_i r_i p_i \right) = \sum_i r_i \partial _k p_i ,$$ where $p_i$ are the $k$-cells (finitely many) and $r_i$ are the real scalars.
With that, we have a sequence of vector spaces and linear operators: $$ \newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{ccccccccccccccc} ...& \ra{\partial_{k+2}} & C_{k+1}(K)& \ra{\partial_{k+1}} & C_{k}(K)& \ra{\partial_{k}} & C_{k-1}(K)& \ra{\partial_{k-1}}&...&\ra{\partial_{2}} & C_{1}(K)& \ra{\partial_{1}}& C_{0}(K)& \ra{\partial_{0}}& 0, \end{array} $$ where $K$ is a cubical complex. Then, just as before, we can define and compute the cycle groups and the boundary groups of $K$.
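In matrix form, each $\partial_k$ is the matrix whose columns are the boundaries of the $k$-cells, and the Betti numbers follow from ranks alone: $\beta_k=\dim C_k-\operatorname{rank}\partial_k-\operatorname{rank}\partial_{k+1}$. A sketch (ours) for the hollow square, with four vertices $A,B,C,D$, four directed edges, and no faces; the matrix entries follow $\partial(XY)=Y-X$:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Hollow square, edges AB, BC, CD, DA; the column of edge XY
# has +1 at row Y and -1 at row X.
d1 = [[-1,  0,  0,  1],   # row A
      [ 1, -1,  0,  0],   # row B
      [ 0,  1, -1,  0],   # row C
      [ 0,  0,  1, -1]]   # row D

beta0 = 4 - 0 - rank(d1)   # dim C_0 - rank d_0 - rank d_1
beta1 = 4 - rank(d1) - 0   # dim C_1 - rank d_1 - rank d_2
print(beta0, beta1)        # 1 1: one component, one hole
```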
But in order to continue to homology, we need to reestablish the fact that all boundaries are cycles.
From the definition of the boundary, we have the following:
Proposition. Suppose:
 $Q^k\subset {\bf R}^n$ is a $k$-chain, and
 $A\subset {\bf R}^1$ is a $0$- or $1$-chain, and
 $Q^k\times A \subset {\bf R}^n \times {\bf R}^1$ is a $k$- or a $(k+1)$-chain.
Then $$\partial (Q^k\times A)= \partial Q^k \times A +(-1)^k Q^k \times \partial A.$$
Exercise. Prove the proposition.
The formula is illustrated below for $k=2$. The cube as the product of a square and an edge is on the left and the two terms of the formula are shown on the right:
An even more general result is below:
Theorem (Product Formula for Boundaries). Suppose
 $a \in C_i({\mathbb R}^n)$,
 $b \in C_j({\mathbb R}^m)$, and
 $a\times b \in C_{i+j}({\mathbb R}^{n+m})$.
Then $$\partial (a\times b)=\partial a \times b +(-1)^{i} a \times \partial b.$$
Theorem (Double Boundary Identity). For oriented chains of cubical complexes, $$\partial\partial =0.$$
Proof. The proof is based on the same induction as the definition.
The inductive assumption is that $$\partial\partial Q^i=0$$ for all cubical $i$-cells, for all $i=0,1,...,k$, in any $N$-dimensional space, for all $N$.
Now we represent a $(k+1)$-cube as the product of a $k$-cube and an edge as the last component: $$Q^{k+1}=Q^k\times E.$$ Now we just compute what we want. From the definition: $$\partial Q^{k+1}=\partial ( Q^{k}\times E)=\partial Q^k \times E +(-1)^k Q^k \times \partial E.$$ Next, $$\begin{array}{llllll} \partial \partial Q^{k+1} & = \partial\left( \partial Q^k \times E + (-1)^k Q^k \times \partial E\right) & \text{ and by linearity of } \partial ...\\ & =\partial\left( \partial Q^k \times E\right) + (-1)^k \partial\left(Q^k \times \partial E\right) & \text{ now use the above proposition, twice ...}\\ & =\partial\partial Q^k \times E +(-1)^{k-1}\partial Q^k \times \partial E \\ & +(-1)^k \left( \partial Q^k \times \partial E +(-1)^k Q^k\times \partial\partial E\right) & \text{ now use the assumption ...} \\ & =0 \times E +(-1)^{k-1}\partial Q^k \times \partial E\\ & +(-1)^k \left( \partial Q^k \times \partial E +(-1)^k Q^k\times 0\right)\\ & =(-1)^{k-1}\partial Q^k \times \partial E+(-1)^k \partial Q^k \times \partial E \\ & =0. \end{array}$$ $\blacksquare$
Is it a coincidence that this computation feels like differentiation in elementary calculus?
Exercise. Supply the missing part of the above proof. Hint: the last component of the cube may be a vertex.
Exercise. Derive from the theorem the same formula for the binary arithmetic.
The meaning of the theorem is that the sequence above is a chain complex. Therefore, just as before, we can define and compute the homology groups of a cubical complex.
Definition. The homology groups of a cubical complex $K$ are the vector spaces defined by $$H_k(K):=\ker \partial _k / \operatorname{Im} \partial _{k+1}, \quad k=0,1,....$$ The Betti numbers are the dimensions of the homology groups (over ${\bf R}$): $$\beta_k(K):=\dim H_k(K), \ k=0,1,....$$
Exercise. Compute the homology of the $3\times 3 \times 3$ cube with $7$ cubes removed:
Just as before, the reduced homology groups $\tilde{H}_k(K)$ of complex $K$ are defined via the same formulas except for dimension $0$: $$\tilde{H}_k(K):=\begin{cases} \ker \partial _k / \operatorname{Im} \partial _{k+1}, \quad &\text{ for } k=1,...,\\ \ker \epsilon / \operatorname{Im} \partial _{1} \quad &\text{ for } k=0, \end{cases}$$ where $\epsilon$ is given by: $$\epsilon (r_1c_1+...+r_nc_n):=r_1+...+r_n, \ r_i\in {\bf R}.$$
The reduced homology groups of the point are all trivial and we can even write them as one: $$\tilde{H}(\{p\})=0.$$
Exercise. Write the chain complex for the reduced homology and express $\tilde{H}_k(K)$ in terms of $H_k(K)$.