This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

Discrete calculus article
Revision as of 03:23, 31 August 2019


Discrete calculus, or "the calculus of discrete functions", is the mathematical study of incremental change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. The word calculus is a Latin word, meaning originally "small pebble"; as such pebbles were used for calculation, the meaning of the word has evolved and today usually means a method of computation. Meanwhile, calculus, originally called infinitesimal calculus or "the calculus of infinitesimals", is the study of continuous change.

It has two major branches, differential calculus and integral calculus. Differential calculus concerns incremental rates of change and the slopes of discrete curves. Integral calculus concerns accumulation of quantities and the areas under and between such curves. These two branches are related to each other by the fundamental theorem of discrete calculus.

The study of these concepts of change starts with their discrete form. The development is dependent on a parameter, the increment $\Delta x$ of the independent variable. If we so choose, we can make the increment smaller and smaller and find the continuous counterparts of these concepts as limits. Informally: $$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!} \begin{array}{ccccc} \begin{array}{|cc|}\hline\text{ discrete }\\ \text{ calculus }\\ \hline\end{array}& \ra{\quad\Delta x\to 0\quad} &\begin{array}{|cc|}\hline\text{ infinitesimal }\\ \text{ calculus }\\ \hline\end{array} \end{array}$$

History

The early history of discrete calculus is the history of calculus. Such basic ideas as the difference quotients and the Riemann sums appear implicitly or explicitly. After the limit is taken, however, they are never to be seen again.

Discrete calculus remains interlinked with infinitesimal calculus, especially exterior calculus, i.e., the calculus of differential forms. Discrete calculus relies on "discrete differential forms", i.e., cochains. It cannot then be separated from the rest of exterior calculus or from algebraic topology. Therefore, the credit for the creation of discrete calculus should first go to the founders of these fields (roughly 1850 - 1950).

The current development of discrete calculus is driven by the needs of applied modeling.


Principles

Differential calculus is the study of the definition, properties, and applications of the difference quotient of a function. The process of finding the difference quotient is called differentiation. Given a function and a point in the domain, the difference quotient at that point is a way of encoding the small-scale (i.e., from the point to the next) behavior of the function. By finding the difference quotient of a function at every point in its domain, it is possible to produce a new function, called the difference quotient function or just the difference quotient of the original function. In formal terms, the difference quotient is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The difference quotient, however, can take the squaring function as an input. This means that the difference quotient takes all the information of the squaring function (such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on) and uses this information to produce another function. The function produced by differencing the squaring function turns out to be something close to the doubling function.

In more explicit terms the "doubling function" may be denoted by $g(x)=2x$ and the "squaring function" by $f(x)=x^2$. The "difference quotient" now takes the function $f$, defined by the expression "$x^2$", as an input, that is, all the information (such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on), and uses this information to output another function, which turns out to be $g(x)=2x$.

The most common symbol for a difference quotient is: $$\frac{\Delta f}{\Delta x}.$$ This notation is known as Leibniz's notation.

If the input of the function represents time, then the difference quotient represents change with respect to time. For example, if $f$ is a function that takes a time as input and gives the position of a ball at that time as output, then the difference quotient of $f$ is how the position is changing in time, that is, it is the velocity of the ball.
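As an illustration, the difference quotient can be computed directly from its definition; a minimal Python sketch (the position function here is a made-up example, not taken from the text):

```python
def difference_quotient(f, x, h):
    """Forward difference quotient of f over the increment from x to x + h."""
    return (f(x + h) - f(x)) / h

# Hypothetical position of a ball thrown upward (arbitrary units):
position = lambda t: -4.9 * t**2 + 20.0 * t

# Average velocity over the interval [2, 2.5]:
velocity = difference_quotient(position, 2.0, 0.5)  # -4.9*(2*2 + 0.5) + 20 = -2.05
```

The same function works for any input function, which is exactly the "operator" point of view described above.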

If a function is linear (that is, if the graph of the function is a straight line), then the function can be written as $y=mx + b$, where $x$ is the independent variable, $y$ is the dependent variable, $b$ is the $y$-intercept, and:

[math]m= \frac{\text{rise}}{\text{run}}= \frac{\text{change in } y}{\text{change in } x} = \frac{\Delta y}{\Delta x}.[/math]

This gives an exact value for the slope of a straight line. If the graph of the function is not a straight line, however, then the change in $y$ divided by the change in $x$ varies. The difference quotient gives an exact meaning to the notion of change in output with respect to change in input. To be concrete, let $f$ be a function, and fix a point $a$ in the domain of $f$. Then $(a, f(a))$ is a point on the graph of the function. If $h$ is the increment of $x$, then $a+h$ is the value of $x$ after (or before) $a$. Therefore, $f(a+h)-f(a)$ is the increment of $f$. The slope between these two points is

[math]m = \frac{f(a+h) - f(a)}{(a+h) - a} = \frac{f(a+h) - f(a)}{h}.[/math]

So $m$ is the slope of the line between the points $(a, f(a))$ and $(a+h, f(a+h))$.

Here is a particular example, the difference quotient of the squaring function at the input 3, with increment $h=\Delta x$. Let $f(x)=x^2$ be the squaring function. Then:

[math]\begin{align}\frac{\Delta f}{\Delta x}(3) &={(3+h)^2 - 3^2\over{h}} \\ &={9 + 6h + h^2 - 9\over{h}} \\ &={6h + h^2\over{h}} \\ &= 6 + h \end{align} [/math]

The process just described can be performed for any point in the domain of the squaring function. This defines the difference quotient function of the squaring function, or just the difference quotient of the squaring function for short. A computation similar to the one above shows that the difference quotient of the squaring function is the doubling function plus $h$.
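This calculation is easy to confirm numerically; a short sketch checking that the difference quotient of the squaring function at $3$ equals $6+h$ for several increments:

```python
def dq(f, x, h):
    """Forward difference quotient of f at x with increment h."""
    return (f(x + h) - f(x)) / h

square = lambda x: x**2
for h in (1.0, 0.5, 0.25, 0.125):
    # (3 + h)^2 - 3^2 = 6h + h^2, so the quotient is 6 + h
    assert abs(dq(square, 3.0, h) - (6.0 + h)) < 1e-12
```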

Integral calculus is the study of the definitions, properties, and applications of Riemann sums. The process of finding the value of a sum is called integration. In technical language, integral calculus studies a certain linear operator.

The Riemann sum inputs a function and outputs a number, which gives the algebraic sum of areas between the graph of the input and the x-axis.

A motivating example is the distance traveled in a given time.

[math]\mathrm{Distance} = \mathrm{Speed} \cdot \mathrm{Time}[/math]

If the speed is constant, only multiplication is needed, but if the speed changes, we evaluate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the distance traveled in each interval.

File:Constant velocity.png
Constant velocity
File:Riemann sum as region under curve.svg
Integration can be thought of as measuring the area under a curve, defined by $f(x)$, between two points (here $a$ and $b$).

When velocity is constant, the total distance traveled over the given time interval can be computed by multiplying velocity and time. For example, travelling a steady 50 mph for 3 hours results in a total distance of 150 miles. In the diagram on the left, when constant velocity and time are graphed, these two values form a rectangle with height equal to the velocity and width equal to the time elapsed. Therefore, the product of velocity and time also calculates the rectangular area under the (constant) velocity curve. This connection between the area under a curve and distance traveled can be extended to any irregularly shaped region exhibiting a fluctuating velocity over a given time period. If $f(x)$ in the diagram on the right represents speed as it varies over time, the distance traveled (between the times represented by $a$ and $b$) is the area of the shaded region.

To evaluate that area, a method would be to divide up the interval between $a$ and $b$ into a number of equal segments, the length of each segment represented by the symbol $\Delta x$. For each small segment, we can choose one value of the function $f(x)$; call that value $v$. Then the area of the rectangle with base $\Delta x$ and height $v$ gives the distance (time $\Delta x$ multiplied by speed $v$) traveled in that segment. Associated with each segment is the value of the function above it, $f(x)=v$. The sum of all such rectangles gives the area between the axis and the curve, which is the total distance traveled.
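The rectangle construction above translates directly into a short sketch (left-endpoint values are used here; the particular velocity functions are arbitrary examples):

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum of f over [a, b] with n equal segments of length dx."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

# Constant 50 mph for 3 hours: every rectangle has height 50, total 150 miles.
distance = riemann_sum(lambda t: 50.0, 0.0, 3.0, 6)
```

For a constant speed the answer does not depend on the number of segments; for a fluctuating speed, more segments give a finer account of the motion.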

The notation for the Riemann sum is:

[math]\sum_a^b f(x)\, \Delta x.[/math]

The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the difference quotients to the Riemann sums. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.

The fundamental theorem of calculus states: If a function $f$ is defined on a partition of the interval $[a, b]$ and if $F$ is a function whose difference quotient is $f$, then

[math]\sum_{a}^{b} f(x)\,\Delta x = F(b) - F(a).[/math]

Furthermore, for every $x$ in the interval $(a, b)$,

[math]\frac{\Delta}{\Delta x}\sum_a^x f(t)\, \Delta t = f(x).[/math]
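The first identity can be verified numerically. A sketch taking $F(x)=x^3$ on a partition of $[1,3]$ with increment $\Delta x = 0.5$ (the particular function and interval are arbitrary choices for illustration):

```python
dx = 0.5
F = lambda x: x**3                       # candidate "antidifference"
f = lambda x: (F(x + dx) - F(x)) / dx    # its difference quotient

a, b = 1.0, 3.0
n = round((b - a) / dx)                  # number of partition segments

# The sum of f(x) dx over the partition telescopes to F(b) - F(a):
total = sum(f(a + i * dx) * dx for i in range(n))
assert abs(total - (F(b) - F(a))) < 1e-9
```

Each term $f(x)\,\Delta x$ is exactly $F(x+\Delta x)-F(x)$, so the sum collapses term by term, which is the whole content of the theorem.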

This is also a prototype solution of a difference equation. Difference equations relate an unknown function to its difference or difference quotient, and are ubiquitous in the sciences.

Applications

Discrete calculus is used indirectly in every branch of the physical sciences, actuarial science, computer science, statistics, engineering, economics, business, medicine, demography, and in other fields wherever a problem can be mathematically modeled and an optimal solution is desired. It allows one to go from (non-constant) rates of change to the total change or vice versa, and many times in studying a problem we know one and are trying to find the other.

Physics makes particular use of calculus; all discrete concepts in classical mechanics and electromagnetism are related through discrete calculus. The mass of an object of known density that varies incrementally, the moment of inertia of such objects, as well as the total energy of an object within a discrete conservative field can be found by the use of discrete calculus. An example of the use of discrete calculus in mechanics is Newton's second law of motion: historically stated, it expressly uses the term "change of motion", which implies the difference quotient, saying: The change of momentum of a body is equal to the resultant force acting on the body and is in the same direction. Commonly expressed today as Force = Mass × acceleration, it invokes discrete calculus when the change is incremental, because acceleration is the difference quotient of velocity with respect to time, or the second difference quotient of the spatial position. Starting from knowing how an object is accelerating, we use Riemann sums to derive its path.
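The last step can be sketched as two running Riemann sums (the time step, initial values, and constant acceleration below are assumptions chosen for illustration):

```python
dt = 0.1                 # time increment (s)
g = -9.8                 # constant acceleration (m/s^2)
v, x = 0.0, 100.0        # assumed initial velocity (m/s) and height (m)

for _ in range(10):      # advance one second in increments of dt
    v += g * dt          # velocity: running Riemann sum of acceleration
    x += v * dt          # position: running Riemann sum of velocity
```

After one second the accumulated sums give the velocity and position at that time, which is the discrete counterpart of integrating acceleration twice.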

Maxwell's theory of electromagnetism and Einstein's theory of general relativity are also expressed in the language of differential calculus. Chemistry also uses calculus in determining reaction rates and radioactive decay. In biology, population dynamics starts with reproduction and death rates to model population changes.

Discrete calculus can be used in conjunction with other mathematical disciplines. For example, it can be used in probability theory to determine the probability of a discrete random variable from an assumed density function.

Discrete Green's Theorem is applied in an instrument known as a planimeter, which is used to calculate the area of a flat surface on a drawing. For example, it can be used to calculate the amount of area taken up by an irregularly shaped flower bed or swimming pool when designing the layout of a piece of property. It can be used to efficiently calculate sums of rectangular domains in images, in order to rapidly extract features and detect objects; another algorithm that could be used is the summed area table.

In the realm of medicine, calculus can be used to find the optimal branching angle of a blood vessel so as to maximize flow. From the decay laws for a particular drug's elimination from the body, it is used to derive dosing laws. In nuclear medicine, it is used to build models of radiation transport in targeted tumor therapies.

In economics, calculus allows for the determination of maximal profit by providing a way to easily calculate both marginal cost and marginal revenue.

Discrete calculus is the standard way to solve differential equations numerically. For instance, spacecraft use difference equations to approximate curved courses within zero-gravity environments.

Calculus of differences

See [1][2][3].

A (forward) difference is an expression of the form

[math] \Delta_h[f](x) = f(x + h) - f(x). [/math]

When the increment is fixed, say $h=1$, one writes simply $\Delta$. The difference operator satisfies the following rules: the difference of a constant is zero,

[math]\Delta c = 0;[/math]

linearity,

[math]\Delta (a f + b g) = a \,\Delta f + b \,\Delta g;[/math]

the product rule, with its extra cross term,

[math]\Delta (f g) = f \,\Delta g + g \,\Delta f + \Delta f \,\Delta g;[/math]

and the telescoping-sum formula, a discrete fundamental theorem of calculus,

[math]\sum_{n=a}^{b} \Delta f(n) = f(b+1)-f(a).[/math]
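These rules are easy to check numerically; a sketch with $h=1$ and two arbitrary test functions:

```python
diff = lambda F: (lambda x: F(x + 1) - F(x))   # forward difference, h = 1

f = lambda x: x**2
g = lambda x: 3 * x + 1
x = 4

# Product rule: the extra cross term distinguishes it from the continuous one.
lhs = diff(lambda t: f(t) * g(t))(x)
rhs = f(x) * diff(g)(x) + g(x) * diff(f)(x) + diff(f)(x) * diff(g)(x)
assert lhs == rhs

# Telescoping sum: the discrete fundamental theorem.
a, b = 0, 5
assert sum(diff(f)(n) for n in range(a, b + 1)) == f(b + 1) - f(a)
```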

See references.[4][5][6][7]

Chains and cochains

File:Simplicial complex example.svg
A simplicial 3-complex.

A simplicial complex [math]\mathcal{K}[/math] is a set of simplices that satisfies the following conditions:

1. Every face of a simplex from [math]\mathcal{K}[/math] is also in [math]\mathcal{K}[/math].
2. The non-empty intersection of any two simplices [math]\sigma_1, \sigma_2 \in \mathcal{K}[/math] is a face of both [math]\sigma_1[/math] and [math]\sigma_2[/math].
File:Simplicial homology - exactness of boundary maps.svg
The boundary of a boundary of a 2-simplex (left) and the boundary of a 1-chain (right) are taken. Both are 0, being sums in which both the positive and negative of a 0-simplex occur once. The boundary of a boundary is always 0. A nontrivial cycle is something that closes up like the boundary of a simplex, in that its boundary sums to 0, but which isn't actually the boundary of a simplex or chain.

By definition, an orientation of a $k$-simplex is given by an ordering of the vertices, written as $(v_0,\dots,v_k)$, with the rule that two orderings define the same orientation if and only if they differ by an even permutation. Thus every simplex has exactly two orientations, and switching the order of two vertices changes an orientation to the opposite orientation. For example, choosing an orientation of a 1-simplex amounts to choosing one of the two possible directions, and choosing an orientation of a 2-simplex amounts to choosing what "counterclockwise" should mean.

Let $S$ be a simplicial complex. A simplicial $k$-chain is a finite formal sum

[math]\sum_{i=1}^N c_i \sigma_i, \,[/math]

where each $c_i$ is an integer and $\sigma_i$ is an oriented $k$-simplex. In this definition, we declare that each oriented simplex is equal to the negative of the simplex with the opposite orientation. For example,

[math] (v_0,v_1) = -(v_1,v_0).[/math]

The group of $k$-chains on $S$ is written $C_k$. This is a free abelian group which has a basis in one-to-one correspondence with the set of $k$-simplices in $S$. To define a basis explicitly, one has to choose an orientation of each simplex. One standard way to do this is to choose an ordering of all the vertices and give each simplex the orientation corresponding to the induced ordering of its vertices.
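A sketch of one way to represent chains in code (the dictionary representation and the sign-normalization helper are implementation choices for illustration, not part of the definition):

```python
def normalize(simplex):
    """Sort the vertex tuple, returning it with the sign of the permutation."""
    verts, sign = list(simplex), 1
    for i in range(len(verts)):
        for j in range(len(verts) - 1 - i):      # bubble sort, counting swaps
            if verts[j] > verts[j + 1]:
                verts[j], verts[j + 1] = verts[j + 1], verts[j]
                sign = -sign
    return tuple(verts), sign

def add_simplex(chain, simplex, coeff=1):
    """Add coeff times an oriented simplex to a chain {simplex: coefficient}."""
    s, sign = normalize(simplex)
    chain[s] = chain.get(s, 0) + sign * coeff
    return chain

# (v0, v1) and (v1, v0) carry opposite orientations, so they cancel:
chain = add_simplex(add_simplex({}, (0, 1)), (1, 0))
assert chain[(0, 1)] == 0
```

Each swap of adjacent vertices flips the sign, so the normalized coefficient respects the even/odd permutation rule stated above.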

Let $\sigma = (v_0,\dots,v_k)$ be an oriented $k$-simplex, viewed as a basis element of $C_k$. The boundary operator

[math]\partial_k: C_k \rightarrow C_{k-1}[/math]

is the homomorphism defined by:

[math]\partial_k(\sigma)=\sum_{i=0}^k (-1)^i (v_0 , \dots , \widehat{v_i} , \dots ,v_k),[/math]

where the oriented simplex

[math](v_0 , \dots , \widehat{v_i} , \dots ,v_k)[/math]

is the $i$-th face of $\sigma$, obtained by deleting its $i$-th vertex.
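The formula translates directly into a short sketch, with an oriented simplex as a vertex tuple and a chain as a dictionary of integer coefficients:

```python
def boundary(simplex):
    """Boundary of an oriented simplex: the alternating sum of its faces,
    returned as a chain {face: integer coefficient}."""
    chain = {}
    for i in range(len(simplex)):
        face = simplex[:i] + simplex[i + 1:]     # delete the i-th vertex
        chain[face] = chain.get(face, 0) + (-1) ** i
    return chain

# Boundary of the oriented triangle (v0, v1, v2):
assert boundary((0, 1, 2)) == {(1, 2): 1, (0, 2): -1, (0, 1): 1}
```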

In $C_k$, elements of the subgroup

[math]Z_k = \ker \partial_k[/math]

are referred to as cycles, and the subgroup

[math]B_k = \operatorname{im} \partial_{k+1}[/math]

is said to consist of boundaries.

A direct computation shows that $\partial^2 = 0$. In geometric terms, this says that the boundary of anything has no boundary. Equivalently, the abelian groups

[math](C_k, \partial_k)[/math]

form a chain complex. Another equivalent statement is that $B_k$ is contained in $Z_k$.
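The identity $\partial\partial = 0$ can be checked mechanically by extending the boundary operator linearly to chains; a self-contained sketch:

```python
def boundary(simplex):
    """Alternating sum of the faces of an oriented simplex, as a chain."""
    chain = {}
    for i in range(len(simplex)):
        face = simplex[:i] + simplex[i + 1:]
        chain[face] = chain.get(face, 0) + (-1) ** i
    return chain

def boundary_of_chain(chain):
    """Extend the boundary operator linearly to chains, dropping zero terms."""
    out = {}
    for simplex, c in chain.items():
        for face, d in boundary(simplex).items():
            out[face] = out.get(face, 0) + c * d
    return {s: c for s, c in out.items() if c != 0}

# The boundary of the boundary of a tetrahedron is the empty (zero) chain:
assert boundary_of_chain(boundary((0, 1, 2, 3))) == {}
```

Every $(k-2)$-face arises from exactly two $(k-1)$-faces with opposite signs, which is why all coefficients cancel.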

File:Triangles for simplical homology.jpg
A simplicial complex with 2 1-holes

A chain complex $(A_*, d_*)$ is a sequence of vector spaces $\dots, A_0, A_1, A_2, A_3, A_4, \dots$ connected by linear operators (called boundary operators or differentials) $d_n : A_n \to A_{n-1}$, such that the composition of any two consecutive maps is the zero map. Explicitly, the differentials satisfy $d_n \circ d_{n+1} = 0$, or with indices suppressed, $d^2 = 0$. The complex may be written out as follows.

[math] \cdots \xleftarrow{d_0} A_0 \xleftarrow{d_1} A_1 \xleftarrow{d_2} A_2 \xleftarrow{d_3} A_3 \xleftarrow{d_4} A_4 \xleftarrow{d_5} \cdots [/math]

The cochain complex $(A^*, d^*)$ is the dual notion to a chain complex. It consists of a sequence of vector spaces $\dots, A^0, A^1, A^2, A^3, A^4, \dots$ connected by linear operators $d^n : A^n \to A^{n+1}$ satisfying $d^{n+1} \circ d^n = 0$. The cochain complex may be written out in a similar fashion to the chain complex.

[math] \cdots \xrightarrow{d^{-1}} A^0 \xrightarrow{d^0} A^1 \xrightarrow{d^1} A^2 \xrightarrow{d^2} A^3 \xrightarrow{d^3} A^4 \xrightarrow{d^4} \cdots [/math]

The index $n$ in either $A_n$ or $A^n$ is referred to as the degree (or dimension). The difference between chain and cochain complexes is that, in chain complexes, the differentials decrease dimension, whereas in cochain complexes they increase dimension.

The elements of the individual groups of a (co)chain complex are called (co)chains. The elements in the kernel of $d$ are called (co)cycles (or closed elements), and the elements in the image of $d$ are called (co)boundaries (or exact elements). Right from the definition of the differential, all boundaries are cycles.
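As a concrete check of $d^2 = 0$ in both directions, here is a sketch using the chain complex of a single triangle, with boundary matrices written in the bases of edges $(1,2), (0,2), (0,1)$ and vertices $0, 1, 2$ (pure-Python matrices, no libraries assumed):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

transpose = lambda M: [list(row) for row in zip(*M)]

d2 = [[1], [-1], [1]]          # boundary of (0,1,2) in the edge basis
d1 = [[0, -1, -1],             # edge boundaries in the vertex basis:
      [-1, 0, 1],              # (1,2) -> v2 - v1, (0,2) -> v2 - v0,
      [1, 1, 0]]               # (0,1) -> v1 - v0

# Chain complex: d1 . d2 = 0 (the boundary of a boundary vanishes).
assert matmul(d1, d2) == [[0], [0], [0]]

# Dual cochain complex: the differentials are the transposes, and d^2 = 0 again.
assert matmul(transpose(d2), transpose(d1)) == [[0, 0, 0]]
```

Dualizing reverses the arrows, so the transposed matrices compose in the opposite order, and the identity $d^2 = 0$ survives dualization.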

Related

Further reading

References

  1. Template:Cite book
  2. Template:Cite book
  3. Template:Cite book
  4. Template:Cite book
  5. Ames, W. F. (1977). Numerical Methods for Partial Differential Equations, Section 1.6. Academic Press, New York.
  6. Hildebrand, F. B. (1968). Finite-Difference Equations and Simulations, Section 2.2. Prentice-Hall, Englewood Cliffs, New Jersey.
  7. Template:Cite journal.