This site is devoted to mathematics and its applications. Created and run by Peter Saveliev.

# Series

## Contents

- 1 From linear to quadratic approximations
- 2 The Taylor polynomials
- 3 Sequences of functions
- 4 Infinite series
- 5 Examples of series
- 6 From finite sums via limits to series
- 7 Divergence
- 8 Series with non-negative terms
- 9 Comparison of series
- 10 Absolute convergence
- 11 The Ratio Test and the Root Test
- 12 Power series
- 13 Calculus of power series

## From linear to quadratic approximations

Approximating functions is like approximating numbers (such as $\pi$ or the Riemann integral) but harder.

Recall from Chapter 10 that *linearization* means replacing a given function $y=f(x)$ with a linear function $y=L(x)$ that best approximates it at a given point. This is called the best linear approximation, and it happens to be the linear function whose graph is the tangent line at the point. The replacement is justified by the fact that when you zoom in on the point, the tangent line will merge with the graph:

However, there is a more basic approximation: a constant function, $y=C(x)$.

**Example (root).** Let's review this example from Chapter 10: how do we compute $\sqrt{4.1}$ without actually evaluating $f(x)=\sqrt{x}$? We approximate. And to approximate the number $\sqrt{4.1}$, we approximate the function $f(x)=\sqrt{x}$ “around” $a=4$.

We first approximate the function with a *constant* function:
$$C(x)=2.$$
This value is chosen because $f(a)=\sqrt{4}=2$. Then we have:
$$\sqrt{4.1}=f(4.1)\approx C(4.1)=2.$$
It is a crude approximation:

The other, linear, approximation is visibly better. We approximate the function with a *linear* function:
$$L(x) = 2 + \frac{1}{4} (x - 4).$$
This value is chosen because $f(a)=\sqrt{4}=2$ and $f'(a)=\frac{1}{4}$. Then we have:
$$\sqrt{4.1}=f(4.1)\approx L(4.1)=2 + \frac{1}{4} (4.1 - 4)=2.025.$$
$\square$
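The two approximations are easy to compare numerically; here is a minimal Python sketch (not part of the original text) of the constant and linear estimates of $\sqrt{4.1}$:

```python
import math

def C(x):
    # constant approximation: same value as sqrt at a = 4
    return 2.0

def L(x):
    # linear approximation: same value and same slope (1/4) at a = 4
    return 2.0 + (x - 4.0) / 4.0

exact = math.sqrt(4.1)
print(C(4.1), L(4.1), exact)  # the linear estimate 2.025 is much closer
```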

We have for a function $y=f(x)$ and $x=a$:

- a constant approximation: $C(x)=f(a)$, and
- a linear approximation: $L(x)= f(a) + f'(a) (x - a)$.

We should notice early on that the latter just adds a new (linear) term to the former! Also, the latter is better than the former -- but only when we need more accuracy; otherwise, the latter is worse because it requires more computation.

Can we do better than the *best* linear approximation? No. Can we do better than the best *linear* approximation? Yes.

Below we illustrate how we attempt to approximate a function around the point $(1,1)$ with *constant* functions first; from those we choose the horizontal line through the point. This line then becomes one of many *linear* approximations of the curve that pass through the point; from those we choose the tangent line.

Now, we shall see that these are just the two first steps in a *sequence* of approximations! The tangent line becomes one of many *quadratic* curves that pass through the point... and are tangent to the curve. Which one of *those* do we choose?

In order to answer that, we need to review and understand how the best constant and the best linear approximations were chosen. In what way are they the best?

Suppose a function $y=f(x)$ is given and we would like to approximate its behavior in the vicinity of a point, $x=a$, with another function $y=T(x)$. The latter is to be taken from some class of functions that we find suitable.

What we need to consider is the *error*, i.e., the difference between the function $f$ and its approximation $T$:
$$E(x) = | f(x) -T(x) | . $$

We are supposed to minimize the error function in some way.

Of course, the error function $y=E(x)$ is likely to grow without bound as we move away from our point of interest, $x=a$... but we don't care. We want to minimize the difference in the *vicinity* of $a$, which means making sure that the error goes to $0$ as $x\to a$!

**Theorem (Best constant approximation).** Suppose $f$ is continuous at $x=a$ and
$$C(x)=k$$
is any of its constant approximations (i.e., arbitrary constant functions). Then, the error $E$ of the approximation approaches $0$ at $x=a$ if and only if the constant is equal to the value of the function $f$ at $x=a$; i.e.,
$$\lim_{x\to a} (f(x) -C(x))=0\ \Longleftrightarrow\ k=f(a).$$

**Proof.** Use the *Sum Rule* and then the continuity of $f$:
$$0=\lim_{x\to a} (f(x) -C(x))=\lim_{x\to a} f(x) -\lim_{x\to a}C(x)=f(a)-k.$$
$\blacksquare$

That's the analog of the following theorem from Chapter 10.

**Theorem (Best linear approximation).** Suppose $f$ is differentiable at $x=a$ and
$$L(x)=f(a)+m(x-a)$$
is any of its linear approximations. Then, the error $E$ of the approximation approaches $0$ at $x=a$ faster than $x-a$ if and only if the coefficient of the linear term is equal to the value of the derivative of the function $f$ at $x=a$; i.e.,
$$\lim_{x\to a} \frac{ f(x) -L(x) }{x-a}=0\ \Longleftrightarrow\ m=f'(a).$$

**Proof.** Use the *Sum Rule* for limits and then the differentiability of $f$:
$$0=\lim_{x\to a} \frac{ f(x) -L(x) }{x-a}=\lim_{x\to a} \frac{ f(x) -f(a) }{x-a} -\lim_{x\to a}m=f'(a)-m.$$
$\blacksquare$

Comparing the conditions
$$f(x) -C(x)\to 0 \text{ and } \frac{ f(x) -L(x) }{x-a}\to 0$$
reveals the similarity and the difference in how we minimize the error! The difference is in the degree: *how fast the error function goes to zero*. Indeed, we learned in Chapter 10 that the latter condition means that $f(x) -L(x)$ converges to $0$ faster than $x-a$, i.e.,
$$f(x) -L(x)=o(x-a);$$
there is no such restriction for the former.

So far, this is what we have discovered: linear approximations are built from the best constant approximation by adding a linear term. The best one of those has the slope (its own derivative) equal to the derivative of $f$ at $a$. How the sequence of approximations will progress is now clearer: quadratic approximations are built from the best linear approximation by adding a quadratic term.

But which one of those is the best?

**Theorem (Best quadratic approximation).** Suppose $f$ is twice continuously differentiable at $x=a$ and
$$Q(x)=f(a)+f'(a)(x-a)+p(x-a)^2$$
is any of its quadratic approximations. Then, the error $E$ of the approximation approaches $0$ at $x=a$ faster than $(x-a)^2$ if and only if the coefficient of the quadratic term is equal to half of the value of the second derivative of the function $f$ at $x=a$; i.e.,
$$\lim_{x\to a} \frac{ f(x) -Q(x) }{(x-a)^2}=0\ \Longleftrightarrow\ p=\frac{1}{2}f' '(a).$$

**Proof.** We apply *L'Hopital's rule* twice:
$$\begin{array}{lll}
0&=\lim_{x\to a} \frac{ f(x) -Q(x) }{(x-a)^2}&\text{ ...first...}\\
&=\lim_{x\to a} \frac{ f'(x) -f'(a)-2p(x-a) }{2(x-a)}\\
&=\lim_{x\to a} \frac{ f' '(x) -2p }{2}&\text{ ...and second...}\\
&=\lim_{x\to a} \frac{ f' '(x)}{2} -p\\
&=\frac{1}{2}f' '(a)-p.
\end{array}$$
$\blacksquare$

Once again, the condition of the theorem means that $f(x) -Q(x)$ converges to $0$ faster than $(x-a)^2$, or $$f(x)-Q(x)=o((x-a)^2).$$

We start to see a pattern:

- the degrees of the approximating polynomials are growing, and
- the degrees of the derivatives being taken into account are growing too.

**Example (root).** For the original example of $f(x)=\sqrt{x}$ at $a=4$, we have:
$$\begin{array}{lllll}
f(x)&=\sqrt{x}&\Longrightarrow & f(4)&=2;\\
f'(x)&=(x^{1/2})'=\frac{1}{2}x^{-1/2}&\Longrightarrow & f'(4)&=\frac{1}{4};\\
f' '(x)&=\left( \frac{1}{2}x^{-1/2} \right)'=-\frac{1}{4}x^{-3/2}&\Longrightarrow & f' '(4)&=-\frac{1}{32}.
\end{array}$$
Therefore, we have:
$$\begin{array}{lll}
T_0(x)&=2,&&&\text{ same value as }f...\\
T_1(x)&=2&+\frac{1}{4}(x-4),&&\text{ and same slope as }f...\\
T_2(x)&=2&+\frac{1}{4}(x-4)&-\frac{1}{2\cdot 32}(x-4)^2&\text{ and same concavity as }f...
\end{array}$$
$\square$
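As a sanity check, the quadratic approximation can be evaluated numerically; a brief Python sketch (not part of the original text):

```python
import math

def T2(x):
    # same value, slope, and concavity as sqrt at a = 4
    return 2.0 + (x - 4.0) / 4.0 - (x - 4.0) ** 2 / 64.0

print(T2(4.1), math.sqrt(4.1))  # about 2.0248437 vs about 2.0248457
```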

**Example (sin).** Let's approximate $f(x)=\sin x$ at $x=0$. First, the values of the function and the derivatives:
$$\begin{array}{lllll}
f(x)&=\sin x&\Longrightarrow & f(0)&=0;\\
f'(x)&=\cos x&\Longrightarrow & f'(0)&=1;\\
f' '(x)&=-\sin x&\Longrightarrow & f' '(0)&=0.
\end{array}$$

Therefore, the best quadratic approximation is: $$T_2(x)=0+1(x-0)-\frac{0}{2}(x-0)^2 =x.$$ Same as the linear! Why? Because $\sin$ is odd. $\square$

**Example (cos).** Let's approximate $f(x)=\cos x$ at $x=0$. First, the values of the function and the derivatives:
$$\begin{array}{lllll}
f(x)&=\cos x&\Longrightarrow & f(0)&=1;\\
f'(x)&=-\sin x&\Longrightarrow & f'(0)&=0;\\
f' '(x)&=-\cos x&\Longrightarrow & f' '(0)&=-1.
\end{array}$$
Therefore, the best quadratic approximation is:
$$T_2(x)=1+0(x-0)-\frac{1}{2}(x-0)^2 =1-\frac{1}{2}x^2.$$

No linear term! Why? Because $\cos$ is even. $\square$

**Example (edge behavior).** The applicability of the theorems depends on the function. Consider these three familiar functions:

These are the results:

- 1. The function $f(x)=\sin \frac{1}{x}$ (with $f(0)=0$) is not continuous at $0$ and none of the theorems apply. There is no good approximation at $0$ of any kind.
- 2. The function $g(x)=x\sin \frac{1}{x}$ (with $g(0)=0$) is continuous at $0$ and the first theorem applies, but it is not differentiable and the other two do not. The best constant approximation at $0$ is $T_0(x)=0$, but there is no good linear approximation.
- 3. The function $h(x)=x^2\sin \frac{1}{x}$ (with $h(0)=0$) is differentiable at $0$ and the first two theorems apply, but it is not twice differentiable and the last one does not. The best linear (and constant) approximation at $0$ is $T_1(x)=0$, but there is no good quadratic approximation.

$\square$

**Example (root).** Back to the original example of $f(x)=\sqrt{x}$ at $a=4$. One can guess where this is going:
$$\begin{array}{rcccccccc|ll}
\text{constant:}&&&&&&&2&=T_0(x)&f -T_0 =o(1)\\
\text{linear:}&&&&&\frac{1}{4}(x-4)&+&2&=T_1(x)&f -T_1 =o(x-a)\\
\text{quadratic:}&&&-\frac{1}{2\cdot 32}(x-4)^2&+&\frac{1}{4}(x-4)& +&2&=T_2(x)&f -T_2 =o((x-a)^2)\\
\text{cubic:}&(?)(x-4)^3&-&\frac{1}{2\cdot 32}(x-4)^2&+&\frac{1}{4}(x-4)& +&2&=T_3(x)&f -T_3 =o((x-a)^3)\\
\text{}&\vdots&&\vdots&&\vdots&&\vdots&\vdots&\vdots\\
\end{array}$$
We add a term every time; it's a recursion! Such a sequence is called a “series”. $\square$

## The Taylor polynomials

Now the general theory.

In order to be able to concentrate on a particular value of $x$, we can find a special analog of the standard form of the polynomial for each $x=a$. The polynomials are still sums of powers just not of $x$ but of $(x-a)$. A familiar example is for linear polynomials that can be put in the *slope-intercept form* or the *point-slope form*:
$$f(x)=mx+b=m(x-a)+d.$$

**Proposition.** For each real number $a$, every degree $n$ polynomial $P$ can be represented in the form *centered at $x=a$*, i.e.,
$$P(x)=c_0+c_1(x-a)+c_2(x-a)^2+...+c_n(x-a)^n,$$
for some real numbers $c_0,...,c_n$.

**Proof.** We use this *change of variables*: $x\mapsto x+a$. $\blacksquare$

It is among these polynomials that we will choose the best approximation at $x=a$.

Below is a table that shows the progress of better and better *polynomial approximations* following the ideas developed in the last section. The degrees of these polynomials are listed in the first column, while the first row shows the degree of the terms of these polynomials:
$$\begin{array}{r|ccc}
\text{degrees}&n&&...&&3&&2&&1&&0&&\\
\hline
0&&&&&&&&&&&c_0&=T_0&\\
1&&&&&&&&&c_1(x-a)&+&c_0&=T_1&\\
2&&&&&&&c_2(x-a)^2&+&c_1(x-a)& +&c_0&=T_2&\\
3&&&&&c_3(x-a)^3&+&c_2(x-a)^2&+&c_1(x-a)& +&c_0&=T_3&\\
\vdots&&&&&\vdots&&\vdots&&\vdots&&\vdots&\vdots\\
n&c_n(x-a)^n&+&...&+&c_3(x-a)^3&+&c_2(x-a)^2&+&c_1(x-a)& +&c_0&=T_n&\\
\end{array}$$

Applying the *Power Formula* repeatedly accumulates larger and larger coefficients while the power of $x$ decreases until it reaches $1$:
$$\begin{array}{lll}
\left(x^n\right)'&&&&=&n\cdot &x^{n-1}&\Longrightarrow\\
\left(x^n\right)' '&=&n\cdot &\left( x^{n-1}\right)'&=&n\cdot (n-1)\cdot &x^{n-2}&\Longrightarrow\\
\left(x^n\right)' ' '&=&n\cdot (n-1)\cdot &\left(x^{n-2}\right)'&=&n\cdot (n-1)\cdot (n-2)\cdot &x^{n-3}&\Longrightarrow\\
\vdots&&\vdots&\vdots&&\vdots&\vdots\\
\left(x^n\right)^{(n-1)}&=&n\cdot (n-1)\cdot ...\cdot 3\cdot &\left(x^{2}\right)'&=&n\cdot (n-1)\cdot (n-2)\cdot ...\cdot 3\cdot 2\cdot &x^1&\Longrightarrow\\
\left(x^n\right)^{(n)}&=&n\cdot (n-1)\cdot ...\cdot 3\cdot 2\cdot &x'&=&n\cdot (n-1)\cdot (n-2)\cdot ...\cdot 3\cdot 2\cdot &1&=n!
\end{array}$$
So, that's why the *factorial* appears in the forthcoming formulas... Same for powers of $(x-a)$.

**Definition.** Suppose $f$ is $n$ times continuously differentiable at $x=a$. Then the *$n$th Taylor polynomial*, $n=0,1,2,...$, is defined recursively by:
$$T_0=f(a),\quad T_{n}(x)=T_{n-1}(x)+c_{n}(x-a)^{n},$$
with the coefficients, called the *Taylor coefficients*, given by:
$$c_0=f(a),\ c_1=f'(a),\ c_2=\frac{1}{2}f^{(2)}(a), \ ...,\ c_n=\frac{1}{n!}f^{(n)}(a).$$

In other words, we have: $$T_0=f(a),\quad T_{n+1}(x)=T_n+\frac{1}{(n+1)!}f^{(n+1)}(a)(x-a)^{n+1},$$ and, in sigma notation: $$T_n(x)=\sum _{i=0}^n \frac{1}{i!}f^{(i)}(a)(x-a)^i.$$ This is indeed a polynomial of degree $n$. It is centered at $a$.
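The sigma formula translates directly into code; in this Python sketch, the helper `taylor` (our own, not the text's) builds $T_n$ from a list of derivative values at $a$:

```python
from math import factorial

def taylor(derivs, a):
    """Build T_n from derivs = [f(a), f'(a), ..., f^(n)(a)] centered at a."""
    coeffs = [d / factorial(i) for i, d in enumerate(derivs)]
    return lambda x: sum(c * (x - a) ** i for i, c in enumerate(coeffs))

# sqrt at a = 4: f(4) = 2, f'(4) = 1/4, f''(4) = -1/32
T2 = taylor([2.0, 0.25, -1.0 / 32.0], 4.0)
print(T2(4.1))
```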

**Theorem.** Suppose $f$ is $n$ times continuously differentiable at $x=a$. The first $n$ derivatives of the $n$th Taylor polynomial of a function $f$ agree at $x=a$ with those of the function $f$; i.e.,
$$T_n^{(i)}(a)=f^{(i)}(a),\quad i=0,1,...,n.$$
Conversely, this is the only polynomial of degree at most $n$ that satisfies this property.

**Proof.** We differentiate term by term and simplify:
$$\begin{array}{lll}
T_n'(x)&=\left( \sum _{i=0}^n \frac{1}{i!}f^{(i)}(a)(x-a)^i \right)'&\text{...we use SR and CMR }\\
&=\sum _{i=0}^n \frac{1}{i!}f^{(i)}(a)\cdot \left( (x-a)^i \right)'&\text{...we use PF now}\\
&=\sum _{i=1}^n \frac{1}{i!}f^{(i)}(a) \cdot i(x-a)^{i-1}&\text{...the }0\text{th term is gone as a constant...}\\
&=\sum _{i=1}^n \frac{1}{(i-1)!}f^{(i)}(a) (x-a)^{i-1}&\text{...we substitute }k=i-1\\
&=\sum _{k=0}^{n-1} \frac{1}{k!}f^{(k+1)}(a) (x-a)^{k}.
\end{array}$$
This is a degree $n-1$ polynomial to be further differentiated. We substitute $x=a$ and only the $0$th term is left:
$$T_n'(a)=\sum _{k=0}^{n-1} \frac{1}{k!}f^{(k+1)}(a) (a-a)^{k}=f'(a).$$
We continue in the same manner with the rest of the derivatives:
$$\begin{array}{lll}
T_n' '(x)&=\left( \sum _{i=0}^{n-1} \frac{1}{i!}f^{(i+1)}(a) (x-a)^{i} \right)'=...&
\end{array}$$
And so on. $\blacksquare$

**Exercise.** Finish the proof.

**Exercise.** Prove the converse.

**Exercise.** The $n$th degree Taylor polynomial of an $n$th degree polynomial is that polynomial.

**Exercise.** Prove that the Taylor polynomials of an even (odd) function have only even (odd) terms.

**Example (exponent).** Some functions are so easy to differentiate that we can quickly find *all* of their Taylor polynomials. For example, consider
$$f(x)=e^x$$
at $x=0$. Then
$$f^{(i)}(0)=e^x\bigg|_{x=0}=1.$$
Therefore,
$$T_n(x)=\sum _{i=0}^n \frac{1}{i!}x^i.$$

Therefore, we have an approximation formula: $$e\approx 1+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+...+\frac{1}{n!},$$ which gives a better accuracy with each new term.

The recursive formula for the Taylor polynomials is especially convenient for plotting: $$T_{n+1}(x)=T_n(x)+\frac{1}{(n+1)!}(x-a)^{n+1}.$$ We use it in a spreadsheet as follows: $$\texttt{ =RC[-1]+R8C/FACT(R7C)*(RC1-R4C2)^R7C }$$ We can create as many as we like in each column:

The first three approximations are: $$T_0(x)=1,\ T_1(x)=x+1,\ T_2(x)=\frac{1}{2}x^2+x+1.$$ We can see how the curves start to resemble the original graph -- but only in the vicinity of $x=0$. We know that polynomials have the property that $$T_n(x)\to \infty \text{ as } x\to \infty.$$ Therefore, they can never get close to the horizontal asymptote for $x\to -\infty$. For $x \to +\infty$, we know from L'Hopital's Rule that polynomials are too slow to compete with the exponential function. $\square$
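The claim that each new term of $e\approx\sum 1/i!$ improves the accuracy is easy to watch numerically; a minimal Python sketch:

```python
from math import e, factorial

def T(n):
    # nth Taylor polynomial of e^x at 0, evaluated at x = 1
    return sum(1.0 / factorial(i) for i in range(n + 1))

for n in (1, 2, 5, 10):
    print(n, T(n), abs(T(n) - e))  # the error shrinks rapidly with n
```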

How well do our approximations work?

Recall how a linear approximation provides a “funnel”, made of two parabolas, where the unknown part of the function must reside:

Just as the error bounds for Riemann integrals discussed in Chapter 13, the error bounds for Taylor polynomials depend on *a priori* bounds for the derivatives. The illustration still applies but the blue line, the approximation, might now be a parabola or a higher-degree polynomial. We accept the result below without proof.

**Theorem (Error bound).** Suppose a function $f$ is $(n+1)$ times differentiable at $x=a$. Suppose also that for each $i=0,1,2,...,n+1$, we have
$$|f^{(i)}(t)|<K_i \text{ for all } t \text{ between } a \text{ and } x,$$
and some real number $K_i$. Then
$$E_n(x)=|f(x)-T_n(x)|\le K_{n+1}\frac{|x-a|^{n+1}}{(n+1)!}.$$

Just like the error bounds for Riemann integrals, this error bound resembles the next term of the approximation -- here, the next Taylor term with the derivative replaced by its bound $K_{n+1}$.

**Example (root).** Let's see how close we are to the truth with our quadratic approximation of $f(x)=\sqrt{x}$ around $a=4$ from the last section. Recall that we have:
$$T_2(x)=2+\frac{1}{4}(x-4)-\frac{1}{2\cdot 32}(x-4)^2.$$
The result comes from our computations of the derivatives up to the second to which we add the third:
$$\begin{array}{lllll}
f(x)&=\sqrt{x}&&\Longrightarrow & f(4)&=2;\\
f'(x)&=(x^{1/2})'&=\frac{1}{2}x^{-1/2}&\Longrightarrow & f'(4)&=\frac{1}{4};\\
f' '(x)&=\left( \frac{1}{2}x^{-1/2} \right)'&=-\frac{1}{4}x^{-3/2}&\Longrightarrow & f' '(4)&=-\frac{1}{32};\\
f^{(3)}(x)&=\left( -\frac{1}{4}x^{-3/2} \right)'&=-\frac{1}{4}\frac{-3}{2}x^{-5/2}&=\frac{3}{8}x^{-5/2}.
\end{array}$$
Next, we notice that $f^{(3)}$ is decreasing. Therefore, over the interval $[4,+\infty)$ our best (smallest) upper bound for it is its initial value:
$$|f^{(3)}(x)|\le |f^{(3)}(4)|=\frac{3}{8}4^{-5/2}=\frac{3}{8\cdot 32}=\frac{3}{256}.$$
So, our best (smallest) choice of the upper bound is:
$$K_3=\frac{3}{256}.$$
Then,
$$E_2(x)=|f(x)-T_2(x)|\le K_{3}\frac{|x-4|^{3}}{3!}=\frac{3}{256\cdot 3!}|x-4|^{3}=\frac{1}{512}|x-4|^{3}.$$
This is where the graph of $y=\sqrt{x}$ lies:

In particular, we can find an interval for $\sqrt{4.1}$: $$E_2(4.1)\le \frac{1}{512}|4.1-4|^{3}=\frac{.001}{512}\approx .000002.$$ Therefore, $$T_2(4.1)-.000002\le \sqrt{4.1} \le T_2(4.1)+.000002.$$ $\square$
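One can verify numerically that the actual error indeed stays below this bound; a quick Python check (not part of the original text):

```python
import math

def T2(x):
    # the quadratic Taylor polynomial of sqrt at a = 4
    return 2.0 + (x - 4.0) / 4.0 - (x - 4.0) ** 2 / 64.0

def bound(x):
    # K_3 |x - 4|^3 / 3!  with K_3 = 3/256, i.e. |x - 4|^3 / 512
    return abs(x - 4.0) ** 3 / 512.0

actual = abs(math.sqrt(4.1) - T2(4.1))
print(actual, bound(4.1))  # the actual error sits just below the bound
```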

**Example (exponent).** Let's estimate $e^{-.1}$ within $6$ decimals. In other words, we need to find such an $n$ that we are guaranteed to have:
$$\left| e^{-.1}-T_n(-.1)\right|<10^{-6},$$
where $T_n$ is the $n$th Taylor polynomial of $e^x$ around $x=0$. We estimate the derivatives of this function on the interval $[-.1,0]$:
$$\left|\left(e^x\right)^{(i)}\right|=e^x\le 1=K_i.$$
Then, how do we make the error bound satisfy this:
$$\left| e^{-.1}-T_n(-.1)\right| \le K_{n+1}\frac{|x-a|^{n+1}}{(n+1)!}<10^{-6}?$$
We re-write and solve this inequality for $n$:
$$\frac{.1^{n+1}}{(n+1)!}<10^{-6}.$$
A large enough $n$ will work. We simply go through a few values of $n=1,2,3...$ until the inequality is satisfied:
$$\begin{array}{c|ll}
n&3&4&5&\\
\hline
\frac{.1^{n+1}}{(n+1)!}&0.000004167&0.000000083&0.000000001\\
\end{array}$$
It's $n=4$ and, therefore, the answer is:
$$T_4(-.1)=\sum _{i=0}^4 \frac{1}{i!}(-.1)^i.$$
$\square$
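The trial-and-error search over $n$ can be automated; a small Python sketch of the inequality $\frac{.1^{n+1}}{(n+1)!}<10^{-6}$:

```python
from math import factorial

def smallest_n(h=0.1, tol=1e-6):
    # find the smallest n with h^(n+1)/(n+1)! < tol
    n = 0
    while h ** (n + 1) / factorial(n + 1) >= tol:
        n += 1
    return n

print(smallest_n())  # 4, matching the table
```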

We now come back to the requirement from the last section that the approximations provide faster and faster convergence of the error to zero.

**Corollary (Error convergence).** Suppose a function $f$ is $(n+1)$ times differentiable at $x=a$. Suppose also that for each $i=0,1,2,...,n+1$, we have
$$|f^{(i)}(t)|<K_i \text{ for all } t \text{ between } a \text{ and } x,$$
and some real number $K_i$. Then
$$\frac{E_n(x)}{|x-a|^{n}}=\frac{|f(x)-T_n(x)|}{|x-a|^{n}}\to 0 \text{ as } x\to a.$$

## Sequences of functions

Let's take a closer look at the last theorem. First, the Taylor polynomials of $f$ form

- a *sequence of functions* $T_n:\ n=0,1,2,3...$.

Second, suppose we fix a value $x$ within the interval; then we have

- a *sequence of numbers* $T_n(x):\ n=0,1,2,3...$.

This sequence has the following property according to the theorem: $$T_n(x)\to f(x) \text{ as } n\to \infty.$$

Since this convergence occurs for *each* $x$, we can speak of convergence of the whole sequence of Taylor polynomials. The general idea is the following.

**Definition.** Suppose we have a sequence of functions $f_n$ defined on interval $I$. We say that the sequence *converges point-wise on* $I$ to a function $f$ if for every $x$ the values of these functions at $x$ converge, as a sequence of numbers, to the value of $f$ at $x$, i.e.,
$$f_n(x)\to f(x).$$
Otherwise, we say that the sequence *diverges point-wise*.

Therefore, it only takes divergence for a single value of $x$ to make the whole sequence of functions diverge.

**Example (shrink).** This is how, typically, a sequence converges, to the zero function in this case:

The functions chosen here are:
$$g(x)=3-\cos x,\quad f_n(x)=\left(\frac{2}{3}\right)^n g(x),$$
or any other function $g$. The coefficients are getting smaller and shrink the graph of the original function toward the $x$-axis. The proof is routine:
$$|f_n(x)|=\left|\left( \frac{2}{3}\right)^ng(x)\right|=|g(x)|\left| \left( \frac{2}{3}\right)^n\right|\to 0 ,$$
by the *Constant Multiple Rule*. Thus, each numerical sequence produced from this sequence of functions converges to $0$:

$\square$
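The shrinking can be watched at any fixed $x$; a short Python sketch with the same $g$:

```python
import math

def g(x):
    return 3 - math.cos(x)

def f(n, x):
    # the coefficient (2/3)^n drives the values to 0 as n grows
    return (2 / 3) ** n * g(x)

for n in (0, 5, 20, 50):
    print(n, f(n, 1.0))
```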

**Example.** Another simple choice of a sequence is:
$$f_n(x)=f(x)+\frac{1}{n} \to f(x)+0=f(x),$$
for each $x$, no matter what $f$ is. $\square$

We also saw these examples of sequences of functions in Chapters 7 and 11 respectively.

**Example (secants).** Let's interpret the convergence of the secant lines to the tangent line as convergence of functions:

Let's make this specific. Suppose a function $f$ is defined on an open interval that contains $x=a$. For each $n=1,2,3,...$, define a function $f_n$ as the linear function that passes through these two points: $$(a,f(a)) \text{ and } \left( a+\frac{1}{n},f\left( a+\frac{1}{n} \right)\right).$$ In a sense, the latter converges to the former. Then, $$f_n(x)=f(a)+\frac{f(a+1/n)-f(a)}{1/n}(x-a).$$ If $f$ is differentiable at $x=a$, the fraction, its slope, converges to $f'(a)$ as $n\to \infty$ according to the definition of the derivative: $$\frac{f(a+1/n)-f(a)}{1/n} \to f'(a).$$ Therefore, $$\begin{array}{ccc} f_n(x)&=&f(a)&+&\frac{f(a+1/n)-f(a)}{1/n}&(x-a)\\ \downarrow&&||&&\downarrow&||\\ f(x)&=&f(a)&+&f'(a)&(x-a). \end{array}$$ This new function is the best linear approximation of $f$ at $a$ and its graph is the tangent line of $f$ at $a$. When the function $f$ is not differentiable at this point, our sequence of functions diverges. $\square$

**Example (Riemann sums).** Consider also the sequence of step functions that represent the left-end Riemann sums. Let's make this specific. Suppose $f$ is defined on interval $[a,b]$. Suppose also that the interval is equipped with a specific augmented partition, the left-end, for each $n=1,2,3,...$, with $\Delta x=(b-a)/n$ and $x_i=a+\Delta x \cdot i$.

Then, for each $n=1,2,3,...$, define a step-function piece-wise as follows:
$$f_n(x)=f(x_i)\text{ when } x_i\le x < x_{i+1}.$$
When $f$ is continuous, we know that this sequence will converge to $f$ point-wise. When the function is not continuous on this interval, our sequence of functions might diverge. It doesn't *have to* diverge, however, as the example of $f$ that is itself a step-function shows. $\square$

**Exercise.** Consider also the sequences of step functions that represent the right and middle point Riemann sums and the trapezoid approximations.

In all these examples, the convergence of functions means convergence of (many) sequences of *numbers*. However, the whole graphs of these functions $f_n$ seem to completely accumulate toward the graph of $f$. This is a “stronger” kind of convergence.

**Definition.** Suppose we have a sequence of functions $f_n$ defined on interval $I$. We say that the sequence *converges uniformly on* $I$ to function $f$ if the maximum value of the difference converges, as a sequence of numbers, to zero:
$$\sup_{I}|f_n(x)- f(x)|\to 0.$$
Otherwise, we say that the sequence *diverges uniformly*.

In other words, the graphs of the functions of the sequence will *eventually* fit entirely within a strip around $f$, no matter how narrow:

It is clear that every uniformly convergent sequence converges point-wise (to the same function): $$\sup_{I}|f_n(x)- f(x)|\to 0\ \Longrightarrow\ |f_n(x)- f(x)|\to 0 \text{ for each }x\text{ in }I.$$ The converse isn't true.

**Example.** We have a sequence of continuous functions defined on $I=[0,1]$ such that
$f_n(0)=0$ and $f_n(x)=0$ for $x>\tfrac{1}{n+1}$ and the rest of the values produce a “tooth” of height $1$:

Then, we have:

- $f_n$ converges to $0$ point-wise; but
- $f_n$ does not converge to $0$ uniformly.

$\square$
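The exact shape of the “tooth” was left to a picture; here is one possible piecewise-linear version (our own choice, assuming a symmetric tooth on $[0,\frac{1}{n+1}]$), which exhibits the claimed behavior:

```python
def f(n, x):
    # a symmetric "tooth" of height 1 on [0, 1/(n+1)], zero elsewhere
    w = 1.0 / (n + 1)
    if x <= 0 or x >= w:
        return 0.0
    peak = w / 2
    return x / peak if x <= peak else (w - x) / peak

# point-wise: at any fixed x > 0 the tooth eventually slides past x
print([f(n, 0.3) for n in (1, 10, 100)])
# not uniform: the maximum of |f_n - 0| is still 1, at the tip of the tooth
print(f(100, 1.0 / 202))  # the tip has height 1
```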

**Example.** From the last theorem, we conclude that, whenever all the derivatives of $f$ are bounded by a single constant on a bounded interval $I=[a,b]$, the sequence of Taylor polynomials converges to $f$ uniformly on $I$. On an unbounded interval, the uniform convergence isn't guaranteed; for example, consider $f(x)=\cos x$ on $(-\infty,+\infty)$:

A polynomial of any degree will eventually go to infinity... Point-wise convergence remains. $\square$

Let's compare the two again by referring to the original definition of convergence:

- $f_n$ converges to $f$ point-wise if, *for each $x$*, for any $\varepsilon >0$ there is $N>0$ such that
$$n>N \ \Longrightarrow\ |f_n(x)-f(x)|<\varepsilon;$$
- $f_n$ converges to $f$ uniformly if, for any $\varepsilon >0$, there is $N>0$ such that, *for each $x$*,
$$n>N \ \Longrightarrow\ |f_n(x)-f(x)|<\varepsilon.$$

As you see, we just moved “for each $x$” within the sentence.

**Exercise.** Investigate convergence of the sequence $f_n(x)=\frac{1}{nx}$.

## Infinite series

Series are sequences.

Our goal is to be able to find out the extent to which our approximations work. Specifically, we need to know *for what values of $x$* the sequence of the values of the Taylor polynomials $T_n(x)$ of a function $f$ converges to the value of the function $f(x)$.

Following the development in the last section, we can consider sequences of *any* functions or polynomials. What makes Taylor polynomials different? It's in the *recursive* formula for the Taylor polynomials, for a fixed $x$:
$$T_{n+1}(x)=T_n(x)+c_{n+1}(x-a)^{n+1}.$$
The formula continues to compute the Taylor polynomials of higher and higher degrees by simply adding new terms to the previous result. Taylor polynomials are just a special case of the following.

**Definition.** A sequence $q_n$ of polynomials given by a recursive formula:
$$q_{n+1}(x)=q_n(x)+c_{n+1}(x-a)^{n+1},\ n=0,1,2,...,$$
for some fixed number $a$ and a sequence of coefficients $c_n$, is called a *power series centered at* $a$.

Power series may also come to us fully formed.

How do we add together the infinitely many terms of such a sequence? *Via limits* is the only answer. The most important definition of this chapter is next.

**Definition.** Suppose $a_n:\ n=s,s+1,s+2,...$ is a sequence. Its *sequence of partial sums* $p_n:\ n=s,s+1,s+2,...$ is a sequence defined by the following recursive formula:
$$p_s=a_s,\quad p_{n+1}=p_n+a_{n+1}.$$

This process is a familiar way of creating new sequences from old. Imagine that we stack the terms of the original sequence $a_n$ (left) on top of each other to produce $p_n$, a new sequence (right):
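The stacking recursion is only a few lines of code; a minimal Python sketch:

```python
def partial_sums(a):
    # p[0] = a[0], then each new term is stacked on top of the running total
    p = []
    total = 0
    for term in a:
        total += term
        p.append(total)
    return p

print(partial_sums([1, 2, 3, 4]))  # [1, 3, 6, 10]
```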

In the next several sections, we will concentrate on such sequences of *numbers* (rather than functions) and occasionally apply the results to power series.

The original sequence is left behind, and it is *the limit of this new sequence* that we are after.

**Example.** In either of the two tables below, we have a sequence given in the first two columns. Its $n$th term formula is known and, because of that, its limit is also easy to find. The third column shows the sequence of partial sums of the first. Its $n$th term formula is unknown and, because of that, its limit is not easy to find.
$$\begin{array}{c|c|lll}
n&a_n&p_n\\
\hline
1&\frac{1}{1}&\frac{1}{1}\\
2&\frac{1}{2}&\frac{1}{1}+\frac{1}{2}\\
3&\frac{1}{3}&\frac{1}{1}+\frac{1}{2}+\frac{1}{3}\\
\vdots&\vdots&\vdots\\
n&\frac{1}{n}&\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+...+\frac{1}{n}\\
\vdots&\vdots&\vdots\\
\downarrow&\downarrow&\downarrow\\
\infty&0&?
\end{array}\quad\quad
\begin{array}{c|c|lll}
n&a_n&p_n\\
\hline
1&\frac{1}{1}&\frac{1}{1}\\
2&\frac{1}{2}&\frac{1}{1}+\frac{1}{2}\\
3&\frac{1}{4}&\frac{1}{1}+\frac{1}{2}+\frac{1}{4}\\
\vdots&\vdots&\vdots\\
n&\frac{1}{2^n}&\frac{1}{1}+\frac{1}{2}+\frac{1}{4}+...+\frac{1}{2^{n-1}}\\
\vdots&\vdots&\vdots\\
\downarrow&\downarrow&\downarrow\\
\infty&0&?
\end{array}$$
$\square$
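Even without an $n$th-term formula for $p_n$, we can compute as many partial sums as we like and watch their behavior; a Python sketch for the two tables:

```python
def p(term, n):
    # nth partial sum of the sequence term(1), term(2), ...
    return sum(term(i) for i in range(1, n + 1))

harmonic = lambda i: 1.0 / i               # left table
geometric = lambda i: 1.0 / 2 ** (i - 1)   # right table

for n in (10, 100, 1000):
    print(n, p(harmonic, n), p(geometric, n))
# the geometric sums settle near 2, while the harmonic sums keep growing
```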

**Example.** An example from Chapter 5 shows what can happen if we ignore the issue of convergence:
$$\begin{array}{cccccc}
0& \overset{\text{?}}{=\! =\! =} &0&&+0&&+0&&+0&&+...\\
& \overset{\text{?}}{=\! =\! =} &(1&-1)&+(1&-1)&+(1&-1)&+(1&-1)&+...\\
& \overset{\text{?}}{=\! =\! =} &1&-1&+1&-1&+1&-1&+1&-1&+...\\
& \overset{\text{?}}{=\! =\! =} &1&+(-1&+1)&+(-1&+1)&+(-1&+1)&+(-1&+1)&...\\
& \overset{\text{?}}{=\! =\! =} &1&+0&&+0&&+0&&+0&&+...\\
& \overset{\text{?}}{=\! =\! =} &1.
\end{array}$$

We go through -- implicitly -- *three* series. The problem occurs because the second series diverges. To detect the switch, we observe that the sequences underlying the series are different (we list the first two):
$$\begin{array}{c|c|lll}
n&a_n&p_n&=&p_n\\
\hline
1&0=1-1&0&=&0\\
2&0=1-1&0+0&=&0\\
3&0=1-1&0+0+0&=&0\\
\vdots&\vdots&\vdots&&\vdots\\
n&0=1-1&0+0+0+...+0&=&0\\
\vdots&\vdots&\vdots&&\vdots\\
\downarrow&\downarrow&&&\downarrow\\
\infty&0&&&0
\end{array}\quad\quad
\begin{array}{c|c|lll}
n&a_n&p_n&=&p_n\\
\hline
1&1&1&=&1\\
2&-1&1-1&=&0\\
3&1&1-1+1&=&1\\
\vdots&\vdots&\vdots&&\vdots\\
n&(-1)^n&1-1+1-...+(-1)^n&=&1\text{ or }0\\
\vdots&\vdots&\vdots&&\vdots\\
\downarrow&\downarrow&&&\downarrow\\
\infty&\text{DNE}&&&\text{DNE}
\end{array}$$
As a divergent sequence, the second one cannot be equal to anything. $\square$

**Exercise.** Which of the “$\overset{\text{?}}{=\! =\! =}$” signs above is incorrect?

Warning: every series is the sequence of partial sums of some sequence.

**Example.** Is it possible to find the limit of the sequence of partial sums $p_n$ without first finding a formula for its $n$th term? Formulas that have “...” don't count! Neither do formulas that rely on the *sigma notation*.
$$\begin{array}{l|l|l}
&\text{list of terms}&\text{formula for }n\text{th term}\\
\hline
\text{original sequence:}&1,\ \frac{1}{2},\ \frac{1}{3},\ \frac{1}{4},...& \frac{1}{n}\\
\hline
\text{partial sums sequence:}&1,& \\
&1+\frac{1}{2},\\
&1+\frac{1}{2}+\frac{1}{3},\\
&...&\\
&1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+...+\frac{1}{n}&\sum_{k=1}^n\frac{1}{k}\\
\hline
\end{array}$$
$\square$

The challenge posed by series is that they never come with an $n$th term formula! As an example, an $n$th partial sum of a power series is a polynomial in $x$, and it can only be represented as a sum of $n$ terms.

**Definition.** For a sequence $a_n$, the limit $S$ of its sequence of partial sums $p_n$ is called the *sum of the sequence* or, more commonly, the *sum of the series*:
$$S=\lim_{n \to \infty} \sum_{i=s}^n a_i.$$
When the limit of partial sums exists, we say that the *series converges*. When the limit does not exist, we say that the *series diverges*. A special case of divergence is the divergence to (plus or minus) infinity and we say that the sum is *infinite*:
$$\lim_{n \to \infty} \sum_{i=s}^n a_i=\infty.$$

Warning: the starting point, $s$, doesn't affect convergence but does affect the sum when the series converges.

**Example.** Note that a power series may converge for some values of $x$ and diverge for others. For example,
$$1+x+x^2+...$$
converges for $x=0$ but diverges for $x=1$. We also demonstrate below that it converges for $x=1/2$. $\square$

This is an abbreviated **notation** to write the limit of partial sums:
$$\sum_{i=s}^\infty a_i =\lim_{n \to \infty} \sum_{i=s}^n a_i.$$
Recall that here $\Sigma$ stands for “S” meaning “sum”. This is how the notation is deconstructed:
$$\begin{array}{rlrll}
\text{beginning}&\text{and end values for }k\\
\downarrow&\\
\begin{array}{r}\infty\\ \\k=0\end{array}&\sum \bigg(\quad \frac{1}{2^k} + \frac{1}{3^k} \quad\bigg) &=&\frac{7}{2}.\\
&\qquad\qquad\uparrow&&\uparrow\\
&\qquad\text{a specific sequence}&&\text{a specific number, infinity, or “DNE”}
\end{array}$$

Warning: Riemann sums aren't series.

The word *series* can refer to any one, or a combination, of these three:

- the original sequence $a_n$,
- its sequence of partial sums $p_n$, and
- the limit of the latter.

Warning: the limit of $a_n$ and the limit of $p_n$ are not the same.

**Example.** When there are only finitely many non-zero terms in the sequence, its sum is simple:
$$a_i=0 \text{ for each } i>N \ \Longrightarrow\ \sum_{i=1}^\infty a_i=\sum_{i=1}^N a_i.$$
The series converges to this number. $\square$

**Example.** This is a simple example of a divergent series:
$$a_i=1 \text{ for each } i \ \Longrightarrow\ \sum_{i=1}^\infty a_i=\lim_{n \to \infty} \sum_{i=1}^n 1=\lim_{n \to \infty}n=\infty.$$
$\square$

We saw that, when facing infinity, algebra may fail. But it doesn't when the series converges! In that case, the series can be subjected to algebraic operations. Furthermore, the power series too can be subjected to calculus operations...

**Example.** For a given real number, we can construct a series that tends to that number -- via truncations of its decimal approximations. The sequence
$$a_n=0.9 ,\ 0.09 ,\ 0.009 ,\ 0.0009 ,\ . . . \text{ tends to } 0 .$$
But its sequence of partial sums
$$p_n=0.9 ,\ 0.99 ,\ 0.999 ,\ 0.9999 , . . . \text{ tends to } 1 .$$
The sequence
$$a_n=0.3 ,\ 0.03 ,\ 0.003 ,\ 0.0003 ,\ . . . \text{ tends to } 0 .$$
But its sequence of partial sums
$$p_n=0.3 ,\ 0.33 ,\ 0.333 ,\ 0.3333 ,\ . . . \text{ tends to } 1 / 3 .$$
The idea of series then helps us understand infinite decimals.

- What is the meaning of $.9999...$? It is the sum of the series:

$$\sum_{i=1}^\infty 9\cdot 10^{-i}.$$

- What is the meaning of $.3333...$? It is the sum of the series:

$$\sum_{i=1}^\infty 3\cdot 10^{-i}.$$ $\square$

**Exercise.** Find such a series for $1/6$.

We know that a convergent sequence can have only one limit.

**Theorem (Uniqueness).** A series can have only one sum (finite or infinite).

Thus, there can be no two limits and we are justified to speak of *the* sum.

Note that this conclusion implies that any power series defines a function with its domain consisting of those values of $x$ for which the series converges.

**Example.** The way we use the limits to transition from sequences to series is familiar. It is identical to the transition from a function $f$ to its integral over $[1,\infty)$:

Indeed, compare: $$\begin{array}{ll} \int_1^\infty f(x)\, dx&=\lim_{b \to \infty}\int_1^b f(x)\, dx,\\ \sum_{i=1}^\infty a_i &=\lim_{n \to \infty} \sum_{i=1}^n a_i. \end{array}$$ Furthermore, the latter will fall under the scope of the former if we choose $f$ to be the step-function produced by the sequence $a_n$: $$f(x)=a_{[x]}.$$ $\square$

## Examples of series

The key to evaluating sums of series (or discovering that they diverge) is to find an *explicit formula for the $n$th partial sum* $p_n$ of $a_n$.

We have to be careful not to confuse the former with the latter: $$\lim_{n \to \infty}a_n\ \text{ vs. }\ \lim_{n \to \infty}p_n.$$

**Example (constant).** Recall that for the *constant sequence* we have, for any real $c$:
$$\lim_{n \to \infty}c = c.$$
The result tells us nothing about the series produced by the sequence! Consider instead:
$$c+c+c+...=\lim_{n \to \infty}\sum_{k=1}^n c = \lim_{n \to \infty} nc.$$
Therefore, such a constant series diverges unless $c=0$. The result is written as follows:
$$\sum_{k=1}^\infty c=\infty.$$
$\square$

**Example (arithmetic).** More generally, consider what we know about *arithmetic progressions*; for any real numbers $m$ and $b$, we have:
$$\lim_{n \to \infty}(b+nm) =
\begin{cases}
-\infty &\text{ if } m<0,\\
b &\text{ if } m=0,\\
+\infty &\text{ if } m>0.
\end{cases}$$
The result tells us nothing about the series! Instead, let's examine the partial sums; because each consists of only finitely many terms, we can manipulate them algebraically before finding the limit, as follows:
$$\begin{array}{llll}
b+(b+m)+(b+2m)+(b+3m)+...&=\lim_{n \to \infty}\sum_{k=0}^n (b+km) \\
\end{array}$$
$\square$

**Exercise.** Show that such a series diverges unless $b=m=0$:
$$\sum_{k=0}^\infty (b+km)=\infty.$$

There are more interesting series.

**Definition.** The series produced by the sequence of reciprocals $a_n=1/n$,
$$\sum_{k=1}^\infty\frac{1}{k}=\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+...+\frac{1}{k}+...,$$
is called the *harmonic series*.

Below we show the underlying sequence, $a_n=1/n$, that is known to converge to zero (left). Meanwhile, plotting the first $3000$ terms of the sequence of the partial sums of the series seems to suggest that it also converges (right):

Examining the data, we find that the sum isn't large, so far:
$$\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+...+\frac{1}{3000}\approx 8.58.$$
We know better than to think that this tells us anything; the series *diverges* as we will show in this chapter.
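The slow growth of these partial sums is easy to check directly (a quick Python sketch; the names are ours):

```python
import math

# Partial sums of the harmonic series 1 + 1/2 + ... + 1/n.
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

h_3000 = harmonic(3000)          # roughly 8.58 -- still small after 3000 terms
h_3000000 = harmonic(3_000_000)  # a thousand times more terms...
growth = h_3000000 - h_3000      # ...gains only about 6.9
```

The gain from multiplying the number of terms by $1000$ is only about $\ln 1000\approx 6.9$, which is why the plot alone cannot settle convergence.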

**Example.** If we replace the power of $k$ in the harmonic series, which is $-1$, with $-1.1$, we get the series
$$\sum_{k=1}^\infty\frac{1}{k^{1.1}}.$$
The graph of its partial sums looks almost exactly the same:

However, we show below that it is convergent! $\square$

**Example (factorials).** In contrast, this is how fast the series of the *reciprocals of the factorials*,
$$\sum_{k=1}^\infty\frac{1}{k!},$$
converges:

As shown earlier in this chapter, it converges to $e-1$. $\square$

One of the most important series is the following.

**Definition.** The series produced by the geometric progression $a_n=a\cdot r^n$ with ratio $r$,
$$\sum_{k=0}^{\infty} ar^k=a+ar^1+ar^2+ar^3+ar^4+...+ar^k+...,$$
is called the *geometric series* with ratio $r$.

Recall the fact about geometric *progressions* from Chapter 1.

**Theorem.** The geometric progression with ratio $r$ converges and diverges according to the value of $r$:
$$\lim_{n \to \infty}r^n =
\begin{cases}
\text{diverges } &\text{ if } r \le -1,\\
0 &\text{ if } |r|<1,\\
1 &\text{ if } r=1,\\
+\infty &\text{ if } r>1.
\end{cases}$$

Warning: the result tells us nothing about the convergence of the series.

**Example (geometric).** The construction shown below is recursive; it cuts a square into four, takes the first three, and then repeats the procedure with the last one:

Each step creates two terms in the following geometric series that seems to add up to $1$: $$\sum_{k=1}^\infty\frac{1}{2^k}=\left(\frac{1}{2}+\frac{1}{4}\right)+\left(\frac{1}{8}+\frac{1}{16}\right)+...=1.$$ The construction suggests that we have a convergent series for $r=1/2$. $\square$

Let's investigate the partial sums of the general geometric series under the following restrictions: $$r\ne 0,\ a\ne 0.$$ Below, we just repeat the analysis from Chapter 1. We write the $n$th partial sum $p_n$ in the first row, its multiple $rp_n$ (all terms are multiplied by $r$) in the second, subtract them, and then cancel the terms that appear twice: $$\begin{array}{lllll} p_n &=ar^0 &+ ar^1 &+ ar^2 &+ ... &+ ar^{n-1} &+ ar^n&\text{...subtract}\\ \quad rp_n&=\quad ar^1 &+\quad ar^2 &+\quad ar^3 &+ ... &+\quad ar^{n} &+\quad ar^{n+1}\\ \hline p_n-rp_n&=ar^0-ar^1 &+ ar^1-ar^2 &+ ar^2-ar^3 &+ ... &+ ar^{n-1}-ar^{n}&+ar^n-ar^{n+1}\\ &=ar^0 & & & &&\quad\qquad -ar^{n+1}. \end{array}$$ Therefore, $$p_n(1-r)=a-ar^{n+1}.$$ Thus, we have an explicit formula for the $n$th partial sum: $$p_n=a\frac{1-r^{n+1}}{1-r}.$$ The conclusion is the following.

**Theorem (Geometric Series).** The geometric series with ratio $r$ converges if and only if
$$|r|<1;$$
in that case, the sum is:
$$\begin{array}{|c|}\hline \quad \sum_{k=0}^\infty ar^k=\frac{a}{1-r}. \quad \\ \hline\end{array}$$

**Proof.** We use the above formula and the familiar properties of limits:
$$\begin{array}{llll}
\sum_{k=0}^\infty ar^k&=\lim_{n\to \infty}p_n\\
&=\lim_{n\to \infty}a\frac{1-r^{n+1}}{1-r}\\
&=\frac{a}{1-r}\lim_{n\to \infty}(1-r^{n+1})\\
&=\frac{a}{1-r}\left(1-\lim_{n\to \infty}r^{n+1}\right).
\end{array}$$
To finish, we invoke the theorem above about the convergence of geometric *progressions*. $\square$

The theorem is confirmed numerically below:
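Such a check can be sketched as follows (our own sketch; the names are ours), comparing the direct partial sums with the closed formula $p_n=a\frac{1-r^{n+1}}{1-r}$ and with the predicted sum $\frac{a}{1-r}$:

```python
a, r = 1.0, 0.5

# Direct n-th partial sum: a + a r + ... + a r^n.
def p_direct(n):
    return sum(a * r**k for k in range(n + 1))

# Closed formula derived above: p_n = a (1 - r^{n+1}) / (1 - r).
def p_formula(n):
    return a * (1 - r**(n + 1)) / (1 - r)

predicted_sum = a / (1 - r)  # = 2, by the theorem
errors = [abs(p_direct(n) - predicted_sum) for n in (5, 10, 20)]
```

The errors shrink geometrically, exactly as the formula $|p_n-\frac{a}{1-r}|=\frac{|a|\,|r|^{n+1}}{|1-r|}$ predicts.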

Such a powerful theorem will allow us to study other series by comparing them to a geometric series.

**Example (Zeno's paradox).** Recall a simple scenario: as you walk toward a wall, you can never reach it because once you've covered half the distance, there is still distance left, etc.

We now know that the sum of the distances is $1$ as a geometric series:
$$\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\frac{1}{16}+...=1.$$
What resolves the paradox, however, is the fact that the *time* periods also form a geometric series:
$$\frac{1}{2}\frac{1}{v}+\frac{1}{4}\frac{1}{v}+\frac{1}{8}\frac{1}{v}+\frac{1}{16}\frac{1}{v}+...=\frac{1}{v},$$
where $v$ is the speed of the person. The time isn't infinite! $\square$

**Example.** Let's represent the number $.44444...$ as a series. If we can demonstrate convergence, this is what we have:
$$\begin{array}{lll}
.4444...&=.4 &+.04 &+.004 &+.0004&+...\\
&=4\cdot .1 &+4\cdot .01 &+4\cdot .001 &+4\cdot .0001&+...\\
&=4\cdot 10^{-1} &+4\cdot 10^{-2} &+4\cdot 10^{-3} &+4\cdot 10^{-4}&+...
\end{array}$$
This is a geometric series with the first term $a=.4$ and the ratio $r=.1<1$. Therefore, it converges by the theorem to the following:
$$\sum_{n=0}^\infty.4\cdot .1^n=\sum_{n=0}^\infty ar^n=\frac{a}{1-r}=\frac{.4}{1-.1}=\frac{4}{9}.$$
$\square$

**Exercise.** Use the last example to show that for any digit $d$ we have the following representation:
$$.\overline{dddd....}=\frac{d}{9}.$$

**Example (geometric).** Here is another geometric interpretation of a geometric series:

We start with a simple observation:
$$1=\frac{3}{4}+\frac{1}{4},$$
and then keep using it in the following:
$$\begin{array}{lll}
1&=\frac{3}{4}&+\frac{1}{4}\\
&=\frac{3}{4}&+\frac{1}{4}\cdot 1\\
&=\frac{3}{4}&+\frac{1}{4}\bigg( \frac{3}{4}&+\frac{1}{4}\bigg)\\
&=\frac{3}{4}&+\frac{3}{4^2}&+\frac{1}{4^2}\cdot 1\\
&=\frac{3}{4}&+\frac{3}{4^2}&+\frac{1}{4^2}\bigg( \frac{3}{4}&+\frac{1}{4}\bigg)\\
&=...\\
&=\sum_{n=1}^\infty \frac{3}{4^n}.
\end{array}$$
This *infinite* computation makes sense thanks to the last theorem. $\square$

**Example (power series).** Let's not forget why we are doing this. Consider the familiar power series
$$1+x+x^2+x^3+...$$
It converges for $x=0$ and diverges for $x=2$. We know more now. For each $x$, this *is a geometric series* with ratio $r=x$. Therefore, it converges for every $x$ that satisfies $|x|<1$. The sum is a number and this number is the value of a *function* at $x$. The interval $(-1,1)$ is the *domain* of the function defined this way. The theorem even provides a formula for this function:
$$1+x+x^2+x^3+...=\frac{1}{1-x}.$$
The difference between the two sides is their domains: the series is defined only on $(-1,1)$, while $\frac{1}{1-x}$ is defined for all $x\ne 1$. The partial sums of the series are polynomials that approximate the function:

Note how the approximation fails outside the interval $(-1,1)$. $\square$
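This failure is easy to see numerically (a sketch; the helper names are ours):

```python
# Compare the degree-n partial sum of 1 + x + x^2 + ... with 1/(1-x).
def poly_approx(x, n):
    return sum(x**k for k in range(n + 1))

def target(x):
    return 1.0 / (1.0 - x)

# Inside the interval (-1, 1) the error shrinks as n grows...
err_inside = abs(poly_approx(0.5, 30) - target(0.5))

# ...but outside it the partial sums run away from 1/(1-x) = -2.
err_outside = abs(poly_approx(1.5, 30) - target(1.5))
```

At $x=1.5$ the function $\frac{1}{1-x}$ is perfectly well defined, yet the partial sums are nowhere near it: the series simply does not represent the function there.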

**Exercise.** Confirm that these are Taylor polynomials of the function.

**Exercise.** Find the sums of the series generated by each of these sequences or show that it doesn't exist:

- (a) $1/1,\ 1/3,\ 1/5,\ 1/7,\ 1/9,\ 1/11,\ 1/13, ...$;
- (b) $1/.9,\ 1/.99,\ 1/.999,\ 1/.9999,\ ...$;
- (c) $1,\ -1,\ 1,\ -1,\ ...$;
- (d) $1,\ -1/2,\ 1/4,\ -1/8,\ ...$;
- (e) $1,\ 1/4,\ 1/16\ ,1/64,\ ...$.

**Example (telescoping series).** When a series isn't geometric, it might still be possible to simplify the partial sums via algebraic tricks and find its sum:
$$\begin{align}
\sum_{k=1}^\infty \frac{1}{k(k+1)} & = \sum_{k=1}^\infty \left( \frac{1}{k} - \frac{1}{k+1} \right) \\
& = \lim_{n\to\infty} \sum_{k=1}^n \left( \frac{1}{k} - \frac{1}{k+1} \right) \\
& = \lim_{n\to\infty} \left\lbrack {\left(1 - \frac{1}{2}\right) + \left(\frac{1}{2} - \frac{1}{3}\right) + ... + \left(\frac{1}{n} - \frac{1}{n+1}\right) } \right\rbrack \\
& = \lim_{n\to\infty} \left\lbrack { 1 + \left( - \frac{1}{2} + \frac{1}{2}\right) + \left( - \frac{1}{3} + \frac{1}{3}\right) + ... + \left( - \frac{1}{n} + \frac{1}{n}\right) - \frac{1}{n+1} } \right\rbrack \\
& = \lim_{n\to\infty} \left\lbrack { 1 - \frac{1}{n+1} } \right\rbrack \\
&= 1.
\end{align}$$
$\square$

**Exercise.** Explain how the above computation is different from the following and why it matters:
$$\begin{align}
\sum_{k=1}^\infty \frac{1}{k(k+1)} & = \sum_{k=1}^\infty \left( \frac{1}{k} - \frac{1}{k+1} \right) \\
& = \left(1 - \frac{1}{2}\right) + \left(\frac{1}{2} - \frac{1}{3}\right) + ... + \left(\frac{1}{n} - \frac{1}{n+1}\right) +... \\
& = 1 + \left( - \frac{1}{2} + \frac{1}{2}\right) + \left( - \frac{1}{3} + \frac{1}{3}\right) + ... + \left( - \frac{1}{n} + \frac{1}{n}\right) + ... \\
&= 1.
\end{align}$$

**Exercise.** Use the algebraic trick from the last example to evaluate this integral:
$$\int \frac{1}{x(x+1)} \, dx.$$

The *Geometric Series Theorem*, as well as other theorems about convergence of series presented below, allows us to be bolder with computations that involve infinitely many steps.

**Exercise.** Suppose a chocolate bar contains a coupon and $10$ of such coupons can be exchanged for another bar. But this bar also contains a coupon and this coupon is worth $1/10$ of a bar! If the price of a bar is $\$1$, how much chocolate are you actually getting?

## From finite sums via limits to series

Initially, *there are no series*. A series is shorthand for what we do with sequences. At the next stage, **notation** takes over and series begin to be treated as entities in their own right:
$$\sum_{k=m}^{\infty} s_k\ \text{ is the limit of partial sums of }\ s_k.$$

Warning: every sequence is the sequence of partial sums of some sequence; for any given sequence $p_n$, we simply define a new one via the difference construction: $$s_{n+1}=\Delta p_n=p_{n+1}-p_n.$$

As a matter of **notation** we will often omit the bounds in the sigma notation for series:
$$\sum s_k\ \text{ instead of }\ \sum_{k=m}^{\infty} s_k.$$
The reason is that the difference between $\sum_{k=a}^{n} s_k$ and $\sum_{k=b}^{n} s_k$ is just finitely many terms and, therefore, they either both converge or both diverge, according to the *Truncation Principle* from Chapter 5.

In Chapter 1 we introduced *finite sums*, i.e., sums of the terms of a given sequence $s_n$ over an interval $[p,q]$ of values of $k$:
$$\sum_{k=p}^q s_k=s_p+s_{p+1}+...+s_q,$$
and their properties. We used those properties to study the Riemann sums in Chapter 11; these sums are the areas under the graphs of the step-functions produced by the sequences. Now we just need to transition to *infinite sums* via limits.

Next, we prove some theorems about convergence of series: if an algebraic relation exists for finite sums, then this relation remains valid for series -- provided they converge!

First, the *comparison properties*.

If two sequences are comparable then so are their sums:

In fact, this simple algebra tells the whole story:
$$\begin{array}{rcl}
u&\le&U,\\
v&\le&V,\\
\hline
u+v&\le&U+V.\\
\end{array}$$
The only difference is that we have more than just two terms:
$$\begin{array}{rcl}
u_p&\le&U_p,\\
u_{p+1}&\le&U_{p+1},\\
...&...&...\\
u_q&\le&U_q,\\
\hline
u_p+...+u_q&\le&U_p+...+U_q,\\
\sum_{n=p}^{q} u_n &\le&\sum_{n=p}^{q} U_n.
\end{array}$$
This is the *Comparison Rule for Sums* we saw in Chapter 1. If we zoom out, we see that the larger sequence always contains a larger area under its graph:

Taking the limit $q\to\infty$ allows us to make the following conclusion based on the *Comparison Rule for Limits*.

**Theorem (Comparison Rule for Series).** Suppose $u_n$ and $U_n$ are sequences. Then:
$$u_n\le U_n \ \Longrightarrow\ \sum u_n \le \sum U_n,$$
provided either of the two series converges.

**Example.** Consider this obvious fact:
$$2^n-1\le 2^n\ \Longrightarrow\ \frac{1}{2^n-1}\ge\frac{1}{2^n}.$$
It follows that
$$\sum_{n=1}^\infty\frac{1}{2^n-1} \ge 1,$$
provided the series converges. $\square$

**Exercise.** What conclusions, if any, can we draw if we replace “$=\pm\infty$” in the last part of the theorem with “$=\mp\infty$”? What conclusions, if any, can we draw if we replace “$=\pm\infty$” in the last part of the theorem with “diverges”?

A related result is the following.

**Theorem (Strict Comparison Rule for Series).** Suppose $u_n$ and $U_n$ are sequences. Then:
$$u_n < U_n \ \Longrightarrow\ \sum u_n < \sum U_n,$$
provided either of the two series converges.

**Exercise.** Prove the theorem.

Now the *algebraic properties* of the series.

The picture below illustrates the idea of adding sums:

In fact, this simple algebra, the *Associative Property*, tells the whole story:
$$\begin{array}{lll}
&u&+&U,\\
+\\
&v&+&V,\\
\hline
=&(u+v)&+&(U+V).\\
\end{array}$$
The only difference is that we have more than just two terms:
$$\begin{array}{rcl}
u_p&+&U_p,\\
u_{p+1}&+&U_{p+1},\\
...&...&...\\
u_q&+&U_q,\\
\hline
u_p+...+u_q&+&U_p+...+U_q,\\
=(u_p+U_p)+&...&+(u_q+U_q),\\
=\sum_{n=p}^{q} (u_n &+&U_n).
\end{array}$$
This is the *Sum Rule for Sums* we saw in Chapter 1. Taking the limit $q\to\infty$ allows us to make the following conclusion based on the *Sum Rule for Limits*.

**Theorem (Sum Rule for Series).** Suppose $u_n$ and $U_n$ are sequences. If the two series converge then so does their term-by-term sum and we have:
$$\sum \left( u_n + U_n \right) = \sum u_n + \sum U_n.$$

In other words, we can *add two convergent series term by term*.

**Example (adding series).** Compute the sum:
$$\sum_{n=1}^\infty\left( \frac{1}{2^n}+\frac{e^{-n}}{3}\right).$$
This sum is a limit and we can't assume that the answer will be a number. Let's *try* to apply the Sum Rule:
$$\begin{array}{lllll}
\sum_{n=1}^\infty\left( \frac{1}{2^n}+\frac{e^{-n}}{3}\right)&=\sum_{n=1}^\infty \frac{1}{2^n}&+\sum_{n=1}^\infty\frac{e^{-n}}{3}&\text{...yes, but only if the two sums converge!}\\
&=\sum_{n=1}^\infty \left(\frac{1}{2}\right)^n&+\sum_{n=1}^\infty\frac{1}{3}\left( e^{-1} \right)^n&\text{...do they?}
\end{array}$$
These are two geometric series with the ratios equal to, respectively, $1/2$ and $1/e$. Both are smaller than $1$ and therefore the series converge! This conclusion justifies our use of the Sum Rule and the computation that follows:
$$\begin{array}{lllll}
\sum_{n=1}^\infty\left( \frac{1}{2^n}+\frac{e^{-n}}{3}\right)&=\sum_{n=1}^\infty \left(\frac{1}{2}\right)^n&+\sum_{n=1}^\infty\frac{1}{3}\left( e^{-1} \right)^n&\\
&=\frac{1/2}{1-1/2}&+\frac{1/(3e)}{1-1/e}.&\text{This is the answer!}
\end{array}$$
We used the formula, $\frac{a}{1-r}$, from the last section. $\square$
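Because each piece is geometric, the answer can be cross-checked numerically (a sketch; `total` and the other names are ours). Note that both sums start at $n=1$, so their first terms are $\frac{1}{2}$ and $\frac{1}{3e}$:

```python
import math

# Partial sum of sum_{n=1}^N (1/2^n + e^{-n}/3); the tails beyond N = 60
# are negligibly small.
N = 60
total = sum(0.5**n + math.exp(-n) / 3.0 for n in range(1, N + 1))

# Each piece is geometric with |r| < 1, so a/(1-r) applies to each:
piece1 = 0.5 / (1 - 0.5)                            # first term 1/2, ratio 1/2
piece2 = (math.exp(-1) / 3.0) / (1 - math.exp(-1))  # first term 1/(3e), ratio 1/e
```

The direct partial sum agrees with the sum of the two geometric formulas, confirming the use of the Sum Rule here.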

The picture below illustrates the idea of multiplication of the terms of the sum:

In fact, this simple algebra, the *Distributive Property*, tells the whole story:
$$\begin{array}{lll}
c\cdot(&u&+&U)\\
=&cu&+&cU.\\
\end{array}$$
The only difference is that we have more than just two terms:
$$\begin{array}{rcl}
c&\cdot&u_p,\\
c&\cdot&u_{p+1},\\
...&...&...\\
c&\cdot&u_q,\\
\hline
c&\cdot&(u_p+...+u_q),\\
=c&\cdot&\sum_{n=p}^{q} u_n.
\end{array}$$
This is the *Constant Multiple Rule for Sums* we saw in Chapter 1. Taking the limit $q\to\infty$ allows us to make the following conclusion based on the *Constant Multiple Rule for Limits*.

**Theorem (Constant Multiple Rule for Series).** Suppose $s_n$ is a sequence. If the series converges then so does its term-by-term multiple and we have:
$$ \sum (c\cdot s_n) = c \cdot \sum s_n.$$

In other words, we can *multiply a convergent series by a number term by term*.

**Example (power series).** Note that we can also add two power series term by term:
$$\begin{array}{lll}
&\sum_{n=m}^{\infty} a_n(x-a)^n &+& \sum_{n=m}^{\infty}b_n(x-a)^n \\
=&\sum_{n=m}^{\infty} \bigg( a_n(x-a)^n &+& b_n(x-a)^n \bigg) \\
=&\sum_{n=m}^{\infty}(a_n+b_n)(x-a)^n ,
\end{array}$$
for all $x$ for which the series converge, creating a new power series. Note also that we can multiply a power series term by term:
$$\begin{array}{lll}
&c\cdot \sum_{n=m}^{\infty} a_n(x-a)^n\\
=&\sum_{n=m}^{\infty} c\cdot a_n(x-a)^n \\
=&\sum_{n=m}^{\infty}(ca_n)(x-a)^n ,
\end{array}$$
for all $x$ for which the series converges, creating a new power series. $\square$

Just as with integrals, there are no direct counterparts for the Product Rule and the Quotient Rule.

Just as with integrals, these rules are shortcuts: they allow us to avoid having to use the definition every time we need to compute a sum.

Let's make sure that our *decimal system* is on a solid ground. What is the meaning of
$$\pi=3.14159...?$$
The infinite decimal is understood as an infinite sum:
$$\begin{array}{ll}
3.14159...&=3&+.1&+.04&+ .001&+.0005&+.00009&+...\\
&=3&+1\cdot .1&+4\cdot .01&+ 1\cdot .001&+ 5\cdot .0001&+ 9\cdot .00001&+...\\
&=3&+1\cdot .1^1&+4\cdot .1^2&+ 1\cdot .1^3&+ 5\cdot .1^4&+ 9\cdot .1^5&+...
\end{array}$$
The power indicates the placement of the digit within the representation. In general, we have a sequence of integers between $0$ and $9$, i.e., *digits*, $d_1,d_2,...,d_n,...$. It is meant to define a real number via a series:
$$\overline{.d_1d_2...d_n...}\ \overset{\text{?}}{=\! =\! =}\ \sum_{k=1}^\infty d_k \cdot (.1)^k.$$
The following theorem shows that such a definition makes sense.

**Theorem.** Suppose $d_k$ is a sequence of integers between $0$ and $9$. Then the series
$$\sum_{k=1}^\infty d_k\cdot .1^k= d_1\cdot .1+d_2\cdot .01+ d_3\cdot .001+...$$
converges.

**Proof.** First, the sequence of partial sums of this series is *increasing* (non-strictly):
$$p_{n+1}=p_{n}+d_{n+1}\cdot .1^{n+1}\ge p_n.$$
It is also *bounded*:
$$\begin{array}{lll}
p_n&=d_1\cdot .1+d_2\cdot .01+ d_3\cdot .001+...+ d_n\cdot .1^n\\
&<10\cdot .1+10\cdot .01+ 10\cdot .001+...+ 10\cdot .1^n\\
&<10\cdot .1+10\cdot .01+ 10\cdot .001+...+ 10\cdot .1^n+...\\
&=\frac{1}{1-.1}&\text{...because it's a geometric series with first term }1\text{ and }r=.1\\
&=\frac{10}{9}.
\end{array}$$
Therefore, the sequence is convergent by the *Monotone Convergence Theorem*. $\blacksquare$
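A sketch evaluating such digit series numerically (the function name is ours):

```python
# Value of the series d_1*(0.1) + d_2*(0.01) + ... for a finite digit list.
def from_digits(digits):
    return sum(d * 0.1**k for k, d in enumerate(digits, start=1))

third = from_digits([3] * 20)  # 0.333...3, close to 1/3
# The bound used in the proof: every partial sum stays below 10/9.
worst = from_digits([9] * 10)  # 0.9999999999 < 1 < 10/9
```

Even the "worst" digit sequence, all nines, keeps its partial sums under the bound $\frac{10}{9}$, which is what the Monotone Convergence Theorem needs.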

The result explains why the Monotone Convergence Theorem is also known as the *Completeness Property of Real Numbers*.

**Exercise.** Prove the theorem by using the Comparison Rule for Series instead.

**Exercise.** State and prove an analogous theorem for *binary* arithmetic.

Moreover, infinite decimals are also subject to algebraic operations, according to the algebraic theorems above.

**Example.** We can add infinite decimals:
$$\begin{array}{llll}
u&=\overline{.u_1u_2...u_n...}&=\sum_{k=1}^\infty u_k \cdot (.1)^k,\\
v&=\overline{.v_1v_2...v_n...}&=\sum_{k=1}^\infty v_k \cdot (.1)^k,\\
\hline
u+v&&=\sum_{k=1}^\infty (u_k+v_k) \cdot (.1)^k.
\end{array}$$
We can also multiply an infinite decimal by another real number:
$$\begin{array}{llll}
u&=\overline{.u_1u_2...u_n...}&=\sum_{k=1}^\infty u_k \cdot (.1)^k,\\
c\cdot u&&=\sum_{k=1}^\infty c\cdot u_k \cdot (.1)^k.
\end{array}$$
The formulas don't tell us the decimal representations of $u+v$ (unless $u_n+v_n<10$) or $c\cdot u$. $\square$

## Divergence

What if we face algebraic operations with series that *diverge*? The laws above tell us nothing! For example, this is the *Sum Rule*:
$$\sum s_n,\ \sum t_n \text{ converge } \Longrightarrow \sum \left( s_n + t_n \right) \text{ converges; } $$
but this isn't:
$$\sum s_n,\ \sum t_n \text{ diverge } \not\Longrightarrow \sum \left( s_n + t_n \right) \text{ diverges. } $$

**Exercise.** Show that this statement would indeed be untrue.

**Example.** Compute the sum:
$$\sum_{n=1}^\infty\left( \frac{1}{2^n}+\frac{e^n}{3}\right).$$
This sum is a limit and we can't assume that the answer will be a number. Let's *try* to apply the Sum Rule:
$$\begin{array}{lllll}
\sum_{n=1}^\infty\left( \frac{1}{2^n}+\frac{e^{n}}{3}\right)&\ \overset{\text{SR?}}{=\! =\! =}\ \sum_{n=1}^\infty \frac{1}{2^n}&+\sum_{n=1}^\infty\frac{e^{n}}{3}&\text{...yes, but only if the two series converge!}\\
&=\sum_{n=1}^\infty \left(\frac{1}{2}\right)^n&+\sum_{n=1}^\infty\frac{1}{3}e^n&\text{...do they?}
\end{array}$$
These are two geometric series with the ratios equal to, respectively, $1/2$ and $e$. The first one is smaller than $1$ and therefore the series converges but the second one is larger than $1$ and therefore the series diverges! The conclusion is that it is *unjustified to use the Sum Rule*. We pause at this point... and then try to solve the problem by other means. $\square$

Recall that, in the case of infinite limits, we follow the *algebra of infinities* ($c\ne 0$):
$$\begin{array}{|lll|}
\hline
\text{number } &+& (\pm\infty)&=\pm\infty\\
\pm\infty &+& (\pm\infty)&=\pm\infty\\
c &\cdot & (\pm\infty)&=\pm\operatorname{sign}(c)\infty\\
\hline
\end{array}$$
It follows that the series in the last example diverges to infinity.

These formulas suggest the following divergence result for series.

**Theorem (Squeeze Theorem for Series -- Divergence).** Suppose $s_n$ and $t_n$ are sequences. Suppose that, for some integer $p$, we have:
$$s_n\geq t_n \text{ for } n\ge p. $$
Then,
$$\begin{array}{lll}
\sum s_n =-\infty &\Longrightarrow & \sum t_n=-\infty;\\
\sum s_n =+\infty &\Longleftarrow & \sum t_n=+\infty.
\end{array}$$

**Proof.** It follows from the *Squeeze Theorem for Sequences -- Divergence*. $\blacksquare$

**Example.** Consider this obvious fact:
$$\frac{1}{2-1/n}\ge\frac{1}{2}.$$
It follows that
$$\sum_{n=1}^\infty\frac{1}{2-1/n} =\infty.$$
$\square$

**Exercise.** Give examples of series that show that the converses are untrue.

Not all divergent series diverge to infinity the way the ones in this theorem do.

**Theorem (Divergence of Constant Multiple).** Suppose $s_n$ is a sequence. Then,
$$\sum s_n \text{ diverges }\ \Longrightarrow\ \sum cs_n \text{ diverges},$$
provided $c\ne 0$.

**Theorem (Divergence of Sum).** Suppose $s_n$ and $t_n$ are sequences. Then,
$$\begin{array}{lll}
\sum s_n \text{ diverges, }&\sum t_n \text{ converges }& \Longrightarrow& \sum (s_n+t_n) \text{ diverges;}\\
\sum s_n \text{ diverges, }&\sum t_n \text{ diverges }& \Longrightarrow& \sum (s_n+t_n) \text{ diverges;}
\end{array}$$
provided the latter series have non-negative terms.

**Exercise.** Prove these theorems.

Note that according to these theorems, the algebra of power series is only legitimate within their domains, i.e., for those values of $x$ for which they converge.

The pattern we may have noticed is that repeatedly adding positive numbers that grow will give us infinity in the limit. The same conclusion is, of course, true for a constant sequence. What if the sequence decreases, $a_n\searrow$? Then it depends. For example, $a_n=1+1/n$ decreases but the series still diverges. It appears that the sequence should at least decrease to zero...

The actual result is crucial.

**Theorem (Divergence Test).** If the sequence $a_n$ doesn't converge to $0$, its sum diverges.

**Proof.** We have to invoke the definition! But let's turn to the *contrapositive* of the theorem:

- if the sum of a series converges then the underlying sequence converges to $0$.

In other words, we need to prove this: $$\lim_{n\to\infty}\sum_{k=1}^n a_k =P \ \Longrightarrow\ \lim_{n\to\infty}a_n=0,$$ where $P$ is some number; or, in terms of the sequence of partial sums $p_n$: $$\lim_{n\to\infty}p_n=P \ \Longrightarrow\ \lim_{n\to\infty}a_n=0.$$ We prove this directly. Recall the recursive formula for the partial sums: $$p_{n+1}=p_n+a_{n+1},$$ and, accordingly, $$a_n=p_n-p_{n-1}.$$ Therefore, $$\begin{array}{lll} \lim_{n\to\infty}a_n&=\lim_{n\to\infty}(p_n-p_{n-1})&\text{...we apply the Sum Rule for sequences...}\\ &=\lim_{n\to\infty}p_n-\lim_{n\to\infty}p_{n-1}&\\ &=P-P\\ &=0. \end{array}$$ $\blacksquare$

The converse of the theorem is untrue, as seen from the example of the harmonic series.

**Example.**
$$\begin{array}{lll}
\lim \left( 1+\frac{1}{n} \right)\ne 0&\Longrightarrow& \sum \left( 1+\frac{1}{n} \right) \text{ diverges;}\\
\lim \sin n\ne 0&\Longrightarrow& \sum\sin n \text{ diverges;}\\
\lim \frac{1}{n^2}= 0&\Longrightarrow& \text{test fails.}
\end{array}$$
$\square$

## Series with non-negative terms

Their convergence is easier to determine. All we need is the *Monotone Convergence Theorem* from Chapter 5. Indeed, the sequence of partial sums of a series $\sum a_n$ with non-negative terms, $a_n\ge 0$, is *increasing*:
$$p_{n+1}=p_{n}+a_{n+1}\ge p_n.$$
This is illustrated by the harmonic series:

If such a sequence is also *bounded*, it is convergent. Therefore, such a series can't diverge by oscillation, the way $\sum \sin n$ does; it can only *diverge to infinity*.

**Theorem (Non-negative Series).** A series $\sum a_n$ with non-negative terms, $a_n\ge 0$, can only have two outcomes:

- it converges, or
- it diverges to infinity.

Since the latter option is written as
$$\sum a_n=\infty,$$
the former is often written -- in the new **notation** -- as
$$\sum a_n<\infty,$$
whenever the value of the sum is not being considered.

Many theorems in the rest of the chapter will only tell the former from the latter...

**Example.** Since
$$\lim \left(1+\frac{(-1)^n}{n}\right)=1\ne 0,$$
the series
$$\sum \left(1+\frac{(-1)^n}{n}\right)$$
diverges by the *Divergence Test*. Moreover, since its terms are non-negative,
$$\sum \left(1+\frac{(-1)^n}{n}\right) =\infty.$$
$\square$

**Exercise.** Prove that the harmonic series diverges by following on this observation: for each $k$ consecutive terms, $\frac{1}{k+1},\ \frac{1}{k+2},\ ..., \frac{1}{2k}$, they are all $\ge \frac{1}{2k}$, so their sum is $\ge \frac{k}{2k}=\frac{1}{2}$.

We now address the issue of series vs. improper integrals over infinite domains. Both are limits (and the notation matches too): $$\begin{array}{ll} \int_1^\infty f(x)\, dx&=\lim_{b \to \infty}\int_1^b f(x)\, dx,\\ \sum_{i=1}^\infty a_i &=\lim_{n \to \infty} \sum_{i=1}^n a_i. \end{array}$$ And both represent areas under graphs of certain functions:

We can conjecture now that if $f$ and $a_n$ are related, then these limits, though not equal, may be related. The sequence may come from *sampling* the function:
$$a_n=f(n),\ n=1,2,3,...,$$

**Example.** Consider this pair:
$$f(x)=e^{-x} \text{ and }a_n=e^{-n}.$$
What is the relation between the integral of the former and the sum of the latter? Both are the areas of certain regions and we can place one below or above the other:

The improper integral is easy to compute: $$\int_1^\infty e^{-x}\, dx=\lim_{b \to \infty}\int_1^b e^{-x}\, dx=\lim_{b \to \infty}\left(e^{-1}-e^{-b}\right)=e^{-1}.$$ Meanwhile, the series is geometric with $r=1/e$: $$\sum_{n=1}^\infty e^{-n}=\frac{1/e}{1-1/e}.$$ Both converge! $\square$
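Both convergence claims can be sketched numerically (the names are ours; the improper integral is approximated over a long finite interval):

```python
import math

# Improper integral of e^{-x} over [1, infinity), approximated on [1, 50]
# by a midpoint Riemann sum (the tail beyond 50 is negligible).
steps, b = 100_000, 50.0
dx = (b - 1.0) / steps
integral = sum(math.exp(-(1.0 + (i + 0.5) * dx)) * dx for i in range(steps))

# The series sum, from the geometric formula: first term 1/e, ratio 1/e.
series = (1 / math.e) / (1 - 1 / math.e)
```

The two values are finite but different: the integral is about $e^{-1}\approx 0.368$ while the series sums to $\frac{1}{e-1}\approx 0.582$; convergence, not the value, is what they share.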

A more general result is the following.

**Theorem.** The following two either both converge or both diverge (to infinity) for any $r>0$:

- the improper integral of the exponential function with base $1/r$ over $[1,\infty)$ and
- the geometric series with ratio $1/r$;

i.e., $$\int_1^\infty r^{-x}\, dx < \infty \ \Longleftrightarrow\ \sum_{n=1}^\infty r^{-n} < \infty.$$

**Exercise.** Prove the theorem.

We will now apply this idea to a general series with non-negative terms, provided we can match it with an improper integral.

First, we might discover that our series is “dominated” by a *convergent* improper integral:
$$\sum_{n=1}^\infty a_n \le \int_1^\infty f(x)\, dx< \infty.$$
The condition is satisfied when:
$$a_n \le f(x) \text{ for all }x\text{ in }[n,n+1].$$

Or, we might discover that our series “dominates” a *divergent* improper integral:
$$\sum_{n=1}^\infty a_n \ge \int_1^\infty f(x)\, dx= \infty.$$
The condition is satisfied when:
$$a_n \ge f(x) \text{ for all }x\text{ in }[n,n+1].$$

There is a way to combine these two conditions into one. The idea is to “squeeze” the sequence between *two* functions. But where does the other function come from? We shift the graph of $f$ to the right by $1$ unit in order to put it *above* the sequence:
$$f(x)\le a_n \le f(x-1).$$

The conditions that ensure this picture is justified are listed below.

**Theorem (Integral Comparison Test).** Suppose on $[1,\infty)$,

- $f$ is continuous;
- $f$ is decreasing;
- $f$ is non-negative.

Suppose also that we have a sequence: $$a_n=f(n),\ n=1,2,...$$ Then the improper integral and the series below, $$\int_1^\infty f(x)\, dx \text{ and }\sum_{n=1}^\infty a_n,$$ either both converge or both diverge (to infinity); i.e., $$\int_1^\infty f(x)\, dx<\infty\ \Longleftrightarrow\ \sum_{n=1}^\infty a_n<\infty.$$

**Proof.** From the second condition, we conclude that for every $n=2,3...$ and all $n\le x\le n+1$, we have
$$f(x)\le a_n \le f(x-1).$$
Therefore, we have:
$$\int_n^{n+1} f(x)\, dx\le a_n \le \int_n^{n+1} f(x-1)\, dx,$$
or, after a substitution on the right,
$$\int_n^{n+1} f(x)\, dx\le a_n \le \int_{n-1}^{n} f(x)\, dx.$$
Adding all these for $n=2,3,4,...$, we obtain:
$$\int_2^\infty f(x)\, dx\le \sum_{n=2}^\infty a_n \le \int_1^\infty f(x)\, dx.$$
Now, either of the two conclusions of this theorem follows from the corresponding part of the *Non-negative Series* Theorem above. $\blacksquare$

Warning: the sum of the series can be estimated by the integrals but remains unknown.
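The double estimate at the heart of the proof can be observed numerically; a sketch for $f(x)=1/x^2$ (the cutoff of $200{,}000$ terms is an arbitrary choice):

```python
# The squeeze from the proof, for f(x) = 1/x^2:
#   integral over [2, inf)  <=  sum over n >= 2  <=  integral over [1, inf).
lower = 1 / 2   # the integral of x^(-2) over [2, inf) is 1/2
upper = 1.0     # the integral of x^(-2) over [1, inf) is 1
tail = sum(1 / n**2 for n in range(2, 200000))  # approximates the sum over n >= 2

print(lower <= tail <= upper)  # True
print(tail)                    # about 0.645: strictly between the two integrals
```

So the integrals bracket the sum but do not pin it down, exactly as the warning says.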

This is the main step of the proof:

Warning: as far as power series, such as $1+x+x^2+...$, are concerned, this isn't the same $x$!

Unlike a geometric series, the harmonic series is impossible to evaluate explicitly; nevertheless, the Integral Comparison Test shows that *the harmonic series diverges* because
$$\int_1^\infty \frac{1}{x}\, dx=\lim_{b\to+\infty}\ln b=+\infty.$$

More general is the following.

**Corollary ($p$-series).** A *$p$-series*, i.e.,
$$\sum \frac{1}{n^p},$$

- converges when $p>1$ and
- diverges when $0<p\le 1$.

**Proof.** Once the function $f$ is chosen:
$$f(x)=\frac{1}{x^p},$$
the rest of the proof is purely computational. Indeed,
$$\begin{array}{lll}
\int_1^\infty f(x)\, dx&=\int_1^\infty x^{-p}\, dx\\
&=\begin{cases}
\lim_{b\to\infty}\frac{1}{-p+1}x^{-p+1}\bigg|_1^b&\text{ if } p\ne 1;\\
\lim_{b\to\infty}\ln x\bigg|_1^b&\text{ if } p= 1;
\end{cases}\\
&=\begin{cases}
\lim_{b\to\infty}\frac{1}{-p+1}(b^{-p+1}-1^{-p+1})&\text{ if } p\ne 1;\\
\lim_{b\to\infty}(\ln b-\ln 1)&\text{ if } p= 1;
\end{cases}\\
&=\begin{cases}
\frac{1}{p-1}&\text{ if } p> 1;\\
\infty&\text{ if } p< 1;\\
\infty&\text{ if } p= 1.
\end{cases}\\
\end{array}$$
$\blacksquare$
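The contrast between the two cases of the corollary shows up in the partial sums; a numerical sketch (the cutoffs are arbitrary):

```python
# Partial sums of p-series for p = 2 (convergent) and p = 1 (the divergent
# harmonic series).
def partial_sum(p, N):
    return sum(1 / n**p for n in range(1, N + 1))

for N in (10, 100, 1000, 10000):
    print(N, partial_sum(2, N), partial_sum(1, N))
# The p = 2 column settles down (near 1.6449) while the p = 1 column keeps
# growing, roughly like ln N.
```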

Thus, *the harmonic series* diverges but also separates the divergent $p$-series from the convergent ones:

**Exercise.** Show that the theorem fails if we drop the assumption that the function is decreasing. Can this assumption be weakened?

**Exercise.** Show that the theorem fails if we drop the assumption that the function is non-negative. Can this assumption be weakened?

## Comparison of series

In the last section, we matched series with improper integrals in order to derive the convergence or divergence of the former from that of the latter. Now we follow this idea but, instead, compare series to other series.

Warning: the Comparison Rule for Series doesn't help here because it assumes the convergence of both series.

The plan is: compare a new series (with non-negative terms) to an old one whose convergence/divergence is known. The starting point is the *Comparison Rule* for sums (Chapter 1): the sum of a sequence with smaller terms is smaller; i.e., if $a_n\le b_n$, then we have for any $p,q$ with $p\le q$:
$$\sum_{n=p}^{q} a_n \le\sum_{n=p}^{q} b_n.$$
Therefore, the sequence of partial sums of the first series is “dominated” by that of the second. If the second converges, its sum is an upper bound for the partial sums of the first. Then, the first series converges too by the theorem in the last section.

This is the main result.

**Theorem (Direct Comparison Test).** Suppose we have two series with non-negative terms that satisfy the following:
$$0\le a_n\le b_n,$$
for all $n$. Then, the convergence/divergence of the larger/smaller implies the convergence/divergence of the smaller/larger; i.e.,
$$\begin{array}{lll}
\sum a_n <\infty&\Longleftarrow&\sum b_n<\infty;\\
\sum a_n =\infty&\Longrightarrow&\sum b_n=\infty.
\end{array}$$

**Exercise.** Prove the second part.

In order to apply the theorem to a particular -- new -- series, we should try to modify its formula while paying attention to whether it is getting smaller or larger.

**Example.** Let's go backwards at first. We consider the $p$-*series* and see what we can derive from the known convergence facts about them.

Some series can be *modified into* such a series:
$$\frac{1}{n^2+1}\le\frac{1}{n^2}.$$
We remove “$+1$” and the new series is “larger”. Then, by the theorem, the convergence of the original series (on the left) follows from the convergence of this $p$-series with $p=2>1$:
$$\sum\frac{1}{n^2+1}\le\sum\frac{1}{n^2}<\infty.$$

Similarly, we can modify this series: $$\frac{1}{n^{1/2}-1}\ge\frac{1}{n^{1/2}}.$$ We remove “$-1$” and the new series is “smaller”. Then, the divergence of the original series (on the left) follows from the divergence of this $p$-series with $p=1/2<1$: $$\sum\frac{1}{n^{1/2}-1}\ge\sum\frac{1}{n^{1/2}}=\infty.$$

Now let's try to modify this series: $$\frac{1}{n^2-1}\ge\frac{1}{n^2}.$$ We remove “$-1$” and the new series is “smaller”. Then, the divergence of the original series (on the left) follows from the divergence... wait, this $p$-series with $p=2>1$ converges! So, we have: $$\sum\frac{1}{n^2-1}\ge\sum\frac{1}{n^2}<\infty.$$ There is nothing we can conclude from this observation.

Similarly, let's try to modify this series: $$\frac{1}{n^{1/2}+1}\le\frac{1}{n^{1/2}}.$$ We remove “$+1$” and the new series is “larger”. Then, the convergence of the original series (on the left) follows from the convergence... but this $p$-series with $p=1/2<1$ diverges! So, we have: $$\sum\frac{1}{n^{1/2}+1}\le\sum\frac{1}{n^{1/2}}=\infty.$$ There is nothing we can conclude from this observation. $\square$

**Example.** Removing “$-1$” and “$+1$” failed to produce *useful* series for these two:
$$\frac{1}{n^2-1} \text{ and } \frac{1}{n^{1/2}+1}.$$
We will have to be subtler in finding comparisons.

Let's try multiplication. We put a “$2$” in the numerator and discover that the following is true for all $n=2,3,...$: $$\frac{1}{n^2-1}\le\frac{2}{n^2}.$$ The new series is “larger”. Then, the convergence of the original series (on the left) follows from the convergence of this (multiple of a) $p$-series with $p=2>1$: $$\sum\frac{1}{n^2-1}\le\sum\frac{2}{n^2}<\infty.$$

Next, we follow this idea for the other series. We put a “$2$” in the denominator and discover that the following is true for all $n=1,2,...$: $$\frac{1}{n^{1/2}+1}\ge\frac{1}{2n^{1/2}}.$$ The new series is “smaller”. Then, the divergence of the original series (on the left) follows from the divergence of this (multiple of a) $p$-series with $p=1/2<1$: $$\sum\frac{1}{n^{1/2}+1}\ge\sum\frac{1}{2n^{1/2}}=\infty.$$ $\square$
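The two term-by-term inequalities used above are easy to verify numerically over a range of $n$; a sketch:

```python
# Check 1/(n^2 - 1) <= 2/n^2 for n = 2, 3, ... and
# 1/(sqrt(n) + 1) >= 1/(2 sqrt(n)) for n = 1, 2, ...
ok1 = all(1 / (n**2 - 1) <= 2 / n**2 for n in range(2, 1000))
ok2 = all(1 / (n**0.5 + 1) >= 1 / (2 * n**0.5) for n in range(1, 1000))
print(ok1, ok2)  # True True
```

Of course, a finite check is not a proof, but it catches a wrong guess quickly.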

**Example.** But how do we know what series to compare to?

Let's take all these series and put their corresponding *functions* side by side. The first three to be compared:
$$\frac{1}{x^2-1} \text{ vs. } \frac{1}{x^2} \text{ vs. } \frac{1}{x^2+1}.$$

The second three: $$\frac{1}{x^{1/2}-1} \text{ vs. } \frac{1}{x^{1/2}} \text{ vs. } \frac{1}{x^{1/2}+1}.$$

We know from Chapter 10 that these triples -- as well as their multiples -- have similar behavior at infinity, in the following sense:
$$\frac{1}{x^2-1} \div \frac{1}{x^2}\to 1 \text{ and } \frac{1}{x^{1/2}+1} \div\frac{1}{x^{1/2}}\to 1.$$
Furthermore, their improper integrals converge in the former case and diverge in the latter. The *Integral Comparison Test* gives us the desired conclusions about the convergence/divergence of the series. $\square$

What is the lesson? The *direct* comparisons often fail to prove convergence and a better way to compare two series is to consider their relative *order of magnitude* as presented in Chapter 10. It is defined as the *limit* of their ratio! We conclude, just as above, that the convergence/divergence of the larger/smaller implies the convergence/divergence of the smaller/larger.

**Theorem (Limit Comparison Test).** Suppose we have two sequences with non-negative terms:
$$a_n\ge 0,\ b_n> 0,$$
for all $n$. Suppose also that the limit of their ratio below exists (as a number or infinity):
$$\lim_{n\to\infty} \frac{a_n}{b_n}=L.$$
Then, there are three cases:
$$\begin{array}{lllll}
\text{Case 1, }&L>0 : &\sum b_n <\infty&\Longleftrightarrow&\sum a_n<\infty;\\
\text{Case 2, }&L=0 : &\sum b_n <\infty&\Longrightarrow&\sum a_n<\infty;\\
\text{Case 3, }&L=\infty :&\sum b_n =\infty&\Longrightarrow&\sum a_n=\infty.
\end{array}$$

**Proof.** In Cases 1 and 2, $L$ is a number. Then the *definition of the limit of a sequence* states:

- for each $\varepsilon >0$ there is an $N$ such that for all $n>N$ we have

$$\left| \frac{a_n}{b_n}-L \right|<\varepsilon.$$
Let's choose $\varepsilon=1$. Then, for the found $N$, we have
$$ \frac{a_n}{b_n}<L+\varepsilon=L+1,$$
for all $n>N$. In other words, we have a comparison of (the tails of) two sequences:
$$ a_n<(L+1)b_n.$$
Now, if $\sum b_n<\infty$, then $\sum (L+1)b_n<\infty$ by the *Constant Multiple Rule for Series*, then $\sum a_n<\infty$ by the *Direct Comparison Test*. $\blacksquare$

**Exercise.** Prove Case 3.

Thus, Case 1 is the case of a “perfect” match between $\sum a_n$ and $\sum b_n$: both converge or both diverge. In Case 2, the denominator “dominates” the numerator. In Case 3, the numerator “dominates” the denominator. This is similar to our study of horizontal asymptotes in Chapters 5, 6, and 10.

**Example.** Consider the series:
$$\sum\frac{1}{\sqrt{n^2+n+1}}.$$
We need to determine to what simpler series this series is “similar”. The leading term of the expression inside the radical is $n^2$. Therefore, we should compare our series to the following:
$$\sum\frac{1}{\sqrt{n^2}}=\sum\frac{1}{n},$$
the divergent harmonic series. We evaluate the limit of the ratio now:
$$\begin{array}{lll}
\frac{1}{\sqrt{n^2+n+1}}\div\frac{1}{n}&=\frac{n}{\sqrt{n^2+n+1}}\\
&=\frac{1}{\sqrt{n^2+n+1}/n}\\
&=\frac{1}{\sqrt{(n^2+n+1)/n^2}}\\
&=\frac{1}{\sqrt{1+1/n+1/n^2}}\\
&\to \frac{1}{\sqrt{1+0+0}}&\text{ as } n\to \infty \\
&=1.
\end{array}$$
So, Case 1 of the *Limit Comparison Theorem* applies and we conclude that our series diverges. $\square$
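The limit of the ratio computed above can be confirmed numerically; a sketch:

```python
import math

# Ratio a_n / b_n for a_n = 1/sqrt(n^2 + n + 1) and b_n = 1/n; it tends to 1,
# which is Case 1 of the Limit Comparison Test.
def ratio(n):
    return (1 / math.sqrt(n**2 + n + 1)) / (1 / n)

for n in (10, 100, 10000):
    print(n, ratio(n))
# The printed values approach 1.
```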

**Exercise.** Justify the intermediate steps in the above computation.

**Exercise.** How does the theorem apply if we remove $+n$ from the above series?

**Exercise.** How does the theorem apply if we replace $n^2$ with $n^3$ in the above series?

The idea of the theorem applies well to *power series*. If we have two, $\sum c_n(x-a)^n$ and $\sum d_n(x-a)^n$, the condition of the theorem becomes:
$$\frac{c_n(x-a)^n}{d_n(x-a)^n}=\frac{c_n}{d_n}\to L,$$
when $c_n,d_n>0$ and $x>a$. However, are these non-negative term series? No; just passing through $x=a$ will change the sign of the term for each odd $n$! The applicability of these theorems is very limited.

## Absolute convergence

From this point, we abandon the severe restriction that the terms of the series have to be positive. Then, none of the results in the last two sections applies!

The plan is to make from our series a non-negative series and see if it converges or diverges in hope that this will tell us something about the original series.

We will go in the opposite direction. We can take any series with non-negative terms and make it “alternate”:
$$\sum a_n,\ a_n\ge 0\ \leadsto\ \sum (-1)^na_n.$$
Now, if the original series converges, is the new series *more* likely or *less* likely to converge?

**Example.** Some of those are easy to analyze.

- First, $\sum (-1)^n$ diverges -- according to the *Divergence Test*. And so does $\sum 1$.
- Second, $\sum (-1)^n\frac{1}{2^n}$ converges -- according to the *Geometric Series Theorem*. And so does $\sum \frac{1}{2^n}$.

$\square$

In general, this is what such a pair of series looks like (the sequences are above and their sequences of partial sums are below):

As we know and can see here, the non-negative one produces the sequence of partial sums that is increasing. It may or may not converge depending on how much we add at every step. But for the latter, half of these up-steps are *cancelled* by the down-steps! This suggests that if the former is slowing down then so is the latter. The convergence of the former then implies the convergence of the latter...

**Example.** Consider the familiar $p$-series with $p=2$:
$$\sum \frac{1}{n^2}.$$
It is convergent. Its “alternating” version is
$$\sum (-1)^n \frac{1}{n^2}.$$
They are shown above. How do these two *compare*? The latter appears “smaller” than the former:
$$(-1)^n \frac{1}{n^2}\le \frac{1}{n^2}.$$
Does it mean that it must converge? No, it might still diverge because the *Non-negative Series Theorem* doesn't apply. However, *there is* a non-negative series here: the sum of the two; i.e.,
$$(-1)^n \frac{1}{n^2}+\frac{1}{n^2}\ge 0.$$
What do we know about it? A clever observation is the following inequality:
$$(-1)^n \frac{1}{n^2}+\frac{1}{n^2}\le \frac{2}{n^2}.$$
We now have two series with non-negative terms, and the bigger one is convergent!

Therefore, the smaller series is convergent too,
$$\sum \left((-1)^n \frac{1}{n^2}+\frac{1}{n^2}\right)<\infty,$$
and so is the original series,
$$\sum (-1)^n \frac{1}{n^2}<\infty,$$
by the *Sum Rules for Series*. $\square$

Let's generalize this example.

**Theorem (Squeeze Theorem for Series).** Suppose sequences $a_n$, $b_n$ with $b_n\ge 0$ satisfy:
$$-b_n\le a_n\le b_n$$
for all $n$. Then, if $\sum b_n$ converges then so does $\sum a_n$.

**Proof.** First note that the squeeze that we have only proves anything about $a_n$ if, according to the *Squeeze Theorem*, $b_n\to 0$. Even then all we derive is that $a_n\to 0$ too. What about the *series*? We take a different approach. First, we add $b_n$ to the three parts of the above inequality:
$$0\le a_n+b_n\le b_n+b_n=2b_n.$$
Let's define a new sequence:
$$c_n=a_n+b_n$$
for all $n$. Then, we have a *new squeeze*:
$$0\le c_n\le 2b_n.$$
The last sequence produces a series, $\sum 2b_n$, that converges by the *Constant Multiple Rule for Series*. Then $\sum c_n$ also converges by the *Non-negative Series Theorem* and the *Direct Comparison Theorem*. Finally, the original series,
$$\sum a_n=\sum (c_n-b_n),$$
converges as the difference of two convergent series by the *Sum Rules for Series*. $\blacksquare$

So, we can use the fact that $\sum b_n$ converges to prove that $\sum a_n$ converges too. But how do we find this convenient $b_n$?

There is one natural choice: $$b_n=|a_n|.$$ This time the squeeze is “perfect”: not only is the sequence bounded by those two, it is, in fact, always equal to one or the other:

It's as if a ball is continuously bouncing off the ceiling and the floor of a corridor...

**Definition.** We say that a series $\sum a_n$ *converges absolutely* if its series of absolute values, $\sum |a_n|$, converges.

The last theorem implies the following important result.

**Theorem (Absolute Convergence Theorem).** If a series converges absolutely then it converges.

**Example.** For power series, the theorem becomes:
$$\sum |c_n||x-a|^n<\infty \ \Longrightarrow\ \sum c_n(x-a)^n \text{ converges},$$
for each $x$. The series of absolute values of the Taylor series of the exponential function is illustrated below:

Look at the cusps; these aren't powers and the new series is *not* a power series. $\square$

**Definition.** We say that a series $\sum a_n$ *converges conditionally* if it converges but its series of absolute values, $\sum |a_n|$, does not.

Then, every (numerical) series converges either absolutely or conditionally.

**Example.** Of course, all convergent non-negative term series converge absolutely. For example, all $p$-series with $p>1$,
$$\sum \frac{1}{n^p},$$
converge and, therefore, converge absolutely. $\square$

**Example.** Conversely, every convergent series with non-negative terms gives us another, absolutely convergent, series. For example, a $p$-series with $p>1$,
$$\sum \frac{1}{n^p},$$
converges; therefore, its alternating version,
$$\sum \frac{(-1)^n}{n^p},$$
also converges, absolutely. $\square$

**Example.** On the other hand, a $p$-series with $p\le 1$,
$$\sum \frac{1}{n^p},$$
diverges. Does this mean that its alternating version,
$$\sum \frac{(-1)^n}{n^p},$$
also diverges? No. $\square$

The following resolves the issue.

**Theorem (Leibniz Alternating Series Test).** Suppose a sequence $b_n$ satisfies:

- 1. $b_n>0$ for all $n$;
- 2. $b_n>b_{n+1}$ for all $n$; and
- 3. $b_n\to 0$ as $n\to \infty$.

Then the alternating version of the series $\sum b_n$, the series $\sum (-1)^n b_n$, converges.

**Proof.** The *idea* of the proof is as follows. First, the sequence alternates between positive and negative. As a result, the sequence of its partial sums goes up and down at every step. Furthermore, each step is smaller than the last and the swing is diminishing. Moreover, it is diminishing to zero. That's convergence!

Let's consider the sequence of partial sums of our series: $$p_n=\sum_{k=1}^n (-1)^k b_k.$$

We examine the behavior in the subsequences of odd- and even-numbered terms. For the odd: $$\begin{array}{lll} p_{2k+1}-p_{2k-1}&=(-1)^{2k} b_{2k}+(-1)^{2k+1} b_{2k+1}\\ &=b_{2k}- b_{2k+1}\\ &>0&\text{by condition 2.} \end{array}$$ Therefore, $$p_{2k+1}\nearrow.$$ Similarly, for the even $$\begin{array}{lll} p_{2k+2}-p_{2k}&=(-1)^{2k+1} b_{2k+1}+(-1)^{2k+2} b_{2k+2}\\ &=-b_{2k+1}+ b_{2k+2}\\ &<0&\text{by condition 2.} \end{array}$$ Therefore, $$p_{2k}\searrow.$$

We have two monotone sequences that are also bounded:
$$p_1\le p_n \le p_2.$$
Therefore, both converge by the *Monotone Convergence Theorem*.

Next, consider these two limits. By the *Squeeze Theorem* we have:
$$\lim_{n\to\infty}(-1)^{n}b_{n}=0,$$
from condition 3. Then, by the *Sum Rule* we have:
$$\lim_{k\to\infty}p_{2k+1}-\lim_{k\to\infty} p_{2k}=\lim_{k\to\infty}(p_{2k+1}-p_{2k})=\lim_{k\to\infty}(-1)^{2k+1}b_{2k+1}=0.$$
Then, the limits of the odd and the even partial sums are equal.

Therefore, the whole sequence of partial sums $p_n$ converges to the same limit. $\blacksquare$

**Exercise.** Provide a proof for the last step.
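The monotone behavior of the odd- and even-numbered partial sums in the proof can be observed directly; a sketch with $b_n=1/n$:

```python
# Partial sums p_N of sum (-1)^n b_n with b_n = 1/n: the odd-indexed ones
# increase, the even-indexed ones decrease, and each odd one stays below
# the corresponding even one, as in the proof.
def p(N):
    return sum((-1)**k / k for k in range(1, N + 1))

odd = [p(n) for n in (1, 3, 5, 7, 9)]
even = [p(n) for n in (2, 4, 6, 8, 10)]
print(odd)   # increasing
print(even)  # decreasing
print(all(o <= e for o, e in zip(odd, even)))  # True
```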

**Corollary.** All alternating $p$-series,
$$\sum \frac{(-1)^n}{n^p},\ p>0,$$
converge.

So, the alternating $p$-series,
$$\sum \frac{(-1)^n}{n^p},\ \text{ with }\ 0<p\le 1,$$
converge *conditionally*. The converse of the *Absolute Convergence Theorem* above fails:
$$\begin{array}{lll}
\text{The series converges}&\Longrightarrow\\
\text{absolutely. }&\not\Longleftarrow
\end{array} \quad \text{The series converges.}$$
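Conditional convergence is visible numerically: the partial sums of the alternating harmonic series settle down while those of its absolute values grow without bound. A sketch (the cutoffs are arbitrary):

```python
# Alternating harmonic series vs. the harmonic series of its absolute values.
def alt_partial(N):
    return sum((-1)**n / n for n in range(1, N + 1))

def abs_partial(N):
    return sum(1 / n for n in range(1, N + 1))

print(alt_partial(1000), alt_partial(100000))  # both near -0.693: settling
print(abs_partial(1000), abs_partial(100000))  # about 7.5 and 12.1: growing
```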

## The Ratio Test and the Root Test

The challenge of the comparison tests that we have considered is often how to choose a series suitable for comparison.

In this section, we choose a single type of series and derive all the conclusions we can. This choice is, of course, *geometric series*.

The most well-understood series is the standard geometric series $\sum r^n$. Its convergence is fully determined by its *ratio* $r\ge 0$:

- 1. if $r<1$, then $\sum r^n$ converges absolutely;
- 2. if $r>1$, then $\sum r^n$ diverges.

In other words, the sequence has to go to $0$ fast enough for the series to converge.

This idea and these two conditions reappear in the case of a generic series. Such a series $\sum a_n$ also has a *ratio*:
$$r_n=\frac{a_{n+1}}{a_n}.$$
This is a sequence and, of course, it depends on $n$. But its *limit* does not! As it turns out, the series exhibits the same convergence pattern as the geometric series with this ratio.

**Theorem (Ratio Test).** Suppose $a_n$ is a sequence with non-zero terms. Suppose the following limit exists (as a number or as infinity):
$$r=\lim_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right|.$$
Then,

- 1. if $r<1$, then $\sum a_n$ converges absolutely;
- 2. if $r>1$, then $\sum a_n$ diverges.

If $r=1$, we say that the test fails.

**Proof.** Suppose, for a sequence with positive terms we have:
$$r=\lim_{n\to\infty} \frac{a_{n+1}}{a_n}.$$
Then, from the definition of limit, we conclude:
$$\frac{a_{n+1}}{a_n}<s,\text{ for all } n\ge N,$$
for some $N$ and *any* $s>r$. Therefore,
$$a_{n+1}<sa_n,\text{ for all } n\ge N,$$
the inequality that we can apply multiple times starting at any term $m>N$:
$$a_{m}<sa_{m-1}<s(sa_{m-2})=s^2a_{m-2}<s^2(sa_{m-3})=s^3a_{m-3}<...<s^{m-N}a_N.$$
We have a comparison of our series and a geometric series with ratio $s$:
$$a_m<\frac{a_N}{s^N}\cdot s^m.$$
The latter converges when $s<1$ and then, by the *Direct Comparison Test*, so does $\sum a_n$. Since $s$ is *any* number above $r$, this condition is equivalent to $r<1$. $\blacksquare$

**Exercise.** Complete the proof of the theorem.

**Example.** Let's analyze this series:
$$\sum\frac{2^n}{n!}.$$
Its sequence is
$$a_n=\frac{2^n}{n!},$$
therefore, our limit is:
$$\begin{array}{ll}
\frac{a_{n+1}}{a_n}&=\frac{2^{n+1}/(n+1)!}{2^n/n!}\\
&=\frac{2^{n+1}}{2^n}\frac{n!}{(n+1)!}\\
&=\frac{2}{1}\frac{1}{n+1}\\
&\to 0&\text{ as }n\to\infty.
\end{array}$$
Therefore, $r=0<1$ and the series converges by the theorem. $\square$
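This limit can also be observed numerically; a sketch:

```python
import math

# Ratios a_{n+1}/a_n for a_n = 2^n / n!; each equals 2/(n+1), shrinking to 0.
def a(n):
    return 2**n / math.factorial(n)

for n in (1, 5, 10, 50):
    print(n, a(n + 1) / a(n))
```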

What we truly care about however are the *power series*! Let's consider one centered at some $a$:
$$\sum c_n(x-a)^n.$$
Then the theorem applies to this numerical sequence:
$$a_n=c_n(x-a)^n,$$
as follows. We find the limit:
$$\begin{array}{ll}
r(x)&=\lim_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right|\\
&=\lim_{n\to\infty} \left| \frac{c_{n+1}(x-a)^{n+1}}{c_n(x-a)^n} \right|\\
&=\lim_{n\to\infty} \left| \frac{c_{n+1}(x-a)}{c_n} \right|&\text{ by CMR}\\
&=|x-a|\lim_{n\to\infty} \left| \frac{c_{n+1}}{c_n} \right|&<1?
\end{array}$$
Then, by the theorem, the series converges for those values of $x$ for which $r(x)<1$ and diverges for those for which $r(x)>1$. If we set
$$R=\frac{1}{\lim_{n\to\infty} \left| \frac{c_{n+1}}{c_n} \right|},$$
and this is a number, the condition becomes:
$$|x-a|<R.$$
But this is an *interval* centered at $a$:
$$\{x:\ |x-a|<R\}=\{x:\ a-R<x<a+R\}.$$

**Corollary (Ratio Test for power series).** Suppose $c_n$ is a sequence with non-zero terms. Suppose the following limit exists (as a number or as infinity):
$$c=\lim_{n\to\infty} \left| \frac{c_{n+1}}{c_n} \right|.$$
Then, there are three cases:
$$\begin{array}{ll|ll}
&&\text{the series }\sum c_n(x-a)^n\ ...\\
\hline
\text{Case 1: }&c=0& \text{converges for all }x;\\
\hline
\text{Case 2: }&0<c<\infty& \text{converges for each }x\text{ in the interval } (a-1/c,a+1/c)\\
&&\ \text{ and diverges for each }x\text{ in the rays }(-\infty,a-1/c),\ (a+1/c,+\infty);\\
\hline
\text{Case 3: }&c=\infty& \text{diverges for all }x\ne a.
\end{array}$$

With $R=\frac{1}{c}$, we have convergence inside the interval $(a-R,a+R)$ and divergence outside. The *end-points* of this interval, $a-R \text{ and } a+R,$ will have to be treated separately.

**Example.** The convergence of $1+x+x^2+...$ is illustrated below. The two partial sums,
$$1+x+x^2+...+x^9 \text{ and }1+x+x^2+...+x^{10},$$
are shown:

One can infer the divergence for $x$ outside the domain $(-1,1)$. Since $c_n=1$, we have $R=1$, and the theorem is confirmed. $\square$

**Example.** Let's confirm the convergence of the Taylor series for $f(x)=e^x$ centered at $a=0$. We know that this is a power series with
$$c_n=\frac{1}{n!}.$$
Then,
$$c=\lim_{n\to\infty} \left| \frac{c_{n+1}}{c_n} \right|=\lim_{n\to\infty} \frac{1/(n+1)!}{1/n!} =\lim_{n\to\infty} \frac{n!}{(n+1)!}=\lim_{n\to\infty} \frac{1}{n+1}=0.$$
The series converges for all $x$. $\square$
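The conclusion can be checked by comparing partial sums of the series against the exponential function itself; a sketch (50 terms is an arbitrary cutoff):

```python
import math

# Partial sums of sum x^n / n! at a few sample x's; they match e^x closely.
def taylor_exp(x, N):
    return sum(x**n / math.factorial(n) for n in range(N + 1))

for x in (-2.0, 1.0, 5.0):
    print(x, taylor_exp(x, 50), math.exp(x))
```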

Next, there is another way to extract the ratio $r$ from a geometric series $a_n=r^n$:
$$r=\sqrt[n]{a_n}.$$
The rest is analogous to the *Ratio Test*.

**Theorem (Root Test).** Suppose $a_n$ is a sequence. Suppose the following limit exists (as a number or infinity):
$$r=\lim_{n\to\infty} \sqrt[n]{|a_{n}|}.$$
Then,

- 1. if $r<1$, then $\sum a_n$ converges absolutely;
- 2. if $r>1$, then $\sum a_n$ diverges.

If $r=1$, the test fails.

**Proof.** Suppose, for a sequence with positive terms we have:
$$r=\lim_{n\to\infty} \sqrt[n]{\frac{a_{n}}{a_0}}.$$
Then, from the definition of limit, we conclude:
$$\sqrt[n]{\frac{a_{n}}{a_0}}<R,\text{ for all } n\ge N,$$
for some $N$ and *any* $R>r$. Therefore,
$$a_{n}<R^n a_0,\text{ for all } n\ge N.$$
We thus have a comparison of our series and a geometric series with ratio $R$. The latter converges when $R<1$ and then, by the *Comparison Test*, so does $\sum a_n$. Since $R$ is *any* number above $r$, this condition is equivalent to $r<1$. $\blacksquare$

**Exercise.** Complete the proof of the theorem.

**Exercise.** Examine the convergence of the series $\sum \frac{n}{2^n}$.

**Example.** The *Root Test* requires the $n$th term sequence to be known! In contrast, the *Ratio Test* can be applied to sequences defined recursively. For example, let
$$a_{n+1}=a_n\cdot \frac{2n+1}{2^n}.$$
There is no direct formula for $a_n$, but the ratio is given to us directly:
$$\frac{a_{n+1}}{a_n}=\frac{2n+1}{2^n}\to 0<1.$$
The series converges. $\square$
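The ratios themselves are available without any formula for $a_n$; a sketch (the starting value, say $a_1=1$, does not affect the test):

```python
# The Ratio Test needs only a_{n+1}/a_n = (2n + 1)/2^n, read off the recursion.
ratios = [(2 * n + 1) / 2**n for n in (1, 5, 10, 20)]
print(ratios)
# The ratios shrink toward 0 < 1, so the series converges by the Ratio Test.
```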

Let's consider a *power series* again:
$$\sum c_n(x-a)^n.$$
Then the theorem applies to this numerical sequence:
$$a_n=c_n(x-a)^n,$$
as follows:
$$r(x)=\lim_{n\to\infty} \sqrt[n]{|c_n(x-a)^n|}=|x-a|\lim_{n\to\infty} \sqrt[n]{|c_n|}.$$
Then it converges for those values of $x$ for which we have:
$$|x-a|<\frac{1}{\lim_{n\to\infty} \sqrt[n]{|c_n|}}.$$
The parameter $r$ is defined via a limit that is quite different from the one in the *Ratio Test* but the conclusions about it are the same.

**Corollary (Root Test for power series).** Suppose $c_n$ is a sequence. Suppose the following limit exists (as a number or as infinity):
$$c=\lim_{n\to\infty} \sqrt[n]{|c_n|}.$$
Then, there are three cases:
$$\begin{array}{ll|ll}
&&\text{the series }\sum c_n(x-a)^n\ ...\\
\hline
\text{Case 1: }&c=0& \text{converges for all }x;\\
\hline
\text{Case 2: }&0<c<\infty& \text{converges for each }x\text{ in the interval } (a-1/c,a+1/c)\\
&&\ \text{ and diverges for each }x\text{ in the rays }(-\infty,a-1/c),\ (a+1/c,+\infty);\\
\hline
\text{Case 3: }&c=\infty& \text{diverges for all }x\ne a.
\end{array}$$

The end-points, $a-1/c$ and $a+1/c$, have to be treated separately.

**Exercise.** What is the relation between the two limits in the two corollaries?

## Power series

In the beginning of the chapter, we showed how functions produce power series: via their Taylor polynomials. Since then, we have also demonstrated how power series produce functions: via convergence. We continue below with the latter.

**Definition.** A *power series centered at* point $x=a$ is a series that depends on the independent variable $x$, in the following way:
$$\sum_{n=0}^\infty c_n(x-a)^n,$$
with the sequence of numbers $c_n$ referred to as the *coefficients of the series*.

For a fixed input $x$, the series becomes numerical; if it converges, we have a sum. This sum can be seen, therefore, as the output of a function. Thus, *a power series is a function:*
$$f(x)=\sum_{n=0}^\infty c_n(x-a)^n.$$

**Definition.** The *domain* of a power series is the set of those $x$'s for which the series converges. It is also called the *region of convergence* of the series.

Warning: $x=a$ is always in the domain.

**Example (geometric).** We already know that this familiar power series converges on this interval and diverges outside:
$$1+x+x^2+x^3+...=\frac{1}{1-x}, \text{ for all } -1<x<1. $$
To find the domain, we consider the end-points:
$$1+x+x^2+x^3+...\bigg|_{x=1}=1+1+1^2+1^3+...=1+1+1+1+..., \text{ divergent!} $$
and
$$1+x+x^2+x^3+...\bigg|_{x=-1}=1+(-1)+(-1)^2+(-1)^3+...=1-1+1-1+..., \text{ divergent!} $$
Therefore, the interval $(-1,1)$ is the whole domain of this function.

$\square$

The following is a reminder.

**Theorem.** If a series converges absolutely then it converges:
$$\sum |c_n|\cdot |x-a|^n<\infty \ \Longrightarrow\ \sum c_n(x-a)^n \text{ converges},$$
for each $x$.

**Theorem (Interval of Convergence).**

- 1. The domain of a power series

$$\sum_{n=0}^\infty c_n(x-a)^n,$$
is always an interval containing $a$. It is called the *interval of convergence*.

- 2. The convergence is absolute inside the interval of convergence.
- 3. The convergence is uniform on any closed interval inside the interval of convergence.

**Proof.** For simplicity we assume that $a=0$. If the only convergent value is $x=0$, we are done; that's the interval. Suppose now that there is another value, $x=b\ne 0$, i.e.,
$$\sum_{n=0}^\infty c_nb^n \text{ converges.}$$
Then, by the *Divergence Test*, we have
$$c_nb^n\to 0,$$
and, in particular,
$$|c_nb^n|\le M \text{ for some }M.$$
Therefore, we can estimate our series as follows:
$$|c_nx^n| \le |c_nb^n| \left| \frac{x}{b}\right|^n\le M \left| \frac{x}{b}\right|^n.$$
The last sequence is a geometric progression and its series converges whenever $|x/b|<1$, or $|x|<|b|$. Therefore, by the *Direct Comparison Test*, our series converges absolutely for every $x$ in the interval $(-|b|,|b|)$. We conclude that any two points in the domain produce an interval that lies entirely inside the domain. We proved in Chapter 5 that this property is equivalent to that of being an interval.

In the proof, we can see that the convergence is also absolute.

For the last part, we modify the above proof slightly. Suppose we have a number $p$ with $0<p<1$. Then, for any $x$ in the interval $[-p|b|,p|b|]$, we have a comparison of our series with a geometric progression *independent* of $x$:
$$|c_nx^n| \le M \left| \frac{x}{b}\right|^n = M \left(\frac{|x|}{|b|}\right)^n \le M \left(\frac{p|b|}{|b|}\right)^n = Mp^n.$$
$\blacksquare$

**Definition.** The distance $R$ (that could be infinite) from $a$ to the nearest point for which the series diverges is called the *radius of convergence* of the power series centered at $a$.

This definition is legitimate according to the *Existence of $\sup$ Theorem* from Chapter 5.

If $R$ is the radius of convergence, then the length of the domain interval is $2R$ (we will see in Chapter 23 that there is in fact a *circle* of radius $R$).

**Theorem.** Suppose $R$ is the radius of convergence of a power series
$$\sum_{n=0}^\infty c_n(x-a)^n.$$
Then,

- 1. when $R<\infty$, the domain of the series is an interval with the end-points $a-R$ and $a+R$ (possibly included or excluded), and
- 2. when $R=\infty$, the domain of the series is $(-\infty,+\infty )$.

We have established the first two of these facts previously: $$\begin{array}{ll|l} \text{series}&\text{its sum}&\text{its domain}\\ \hline \sum_{n=0}^\infty x^n&=\frac{1}{1-x}&(-1,1);\\ \sum_{n=0}^\infty \frac{1}{n!}x^n&=e^x&(-\infty,+\infty);\\ \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!}x^{2k+1}&=\sin x&(-\infty,+\infty);\\ \sum_{k=0}^\infty \frac{(-1)^k}{(2k)!}x^{2k}&=\cos x&(-\infty,+\infty). \end{array}$$ Note that the power series for an odd function, $\sin x$, has only the odd terms, $n=2k+1$, present, while the power series for an even function, $\cos x$, has only the even terms, $n=2k$, present.

**Exercise.** Prove the last two. Hint: start with the Taylor polynomials.

**Example (substitution).** Sometimes we can find the power series representation of a function by ingeniously applying the formula for the geometric series, backward:
$$\frac{1}{1-r}=\sum_{n=0}^\infty r^n,\text{ for } |r|<1.$$
We have used this idea to find the following representation:
$$\frac{1}{1-x}=\sum_{n=0}^\infty x^n.$$
Now, the function
$$f(x)=\frac{1}{1-x^2}$$
is recognized as the left-hand side of the above formulas with $r=x^2$. Therefore, we can simply write it as a power series (with only even terms):
$$\frac{1}{1-x^2}=\sum_{n=0}^\infty (x^2)^n=\sum_{n=0}^\infty x^{2n}.$$
Similarly, we choose $r=x^3$, after factoring, below:
$$\frac{x}{1-x^3}=x\frac{1}{1-x^3}=x\sum_{n=0}^\infty (x^3)^n=\sum_{n=0}^\infty x^{3n+1}.$$
One more:
$$\frac{1}{1-2x}=\sum_{n=0}^\infty (2x)^n=\sum_{n=0}^\infty 2^n\cdot x^n.$$
The center of the resulting series may be elsewhere:
$$\frac{1}{x}=\frac{1}{1-(1-x)}=\sum_{n=0}^\infty (1-x)^n=\sum_{n=0}^\infty (-1)(x-1)^n.$$
These, and many similar, results are obtained via a *change of variables*. $\square$
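These substitutions are easy to sanity-check numerically; a sketch for the first one (200 terms is an arbitrary cutoff):

```python
# Partial sums of sum x^(2n) against 1/(1 - x^2), for a few |x| < 1.
def partial(x, N):
    return sum(x**(2 * n) for n in range(N + 1))

for x in (0.3, -0.5, 0.9):
    print(x, partial(x, 200), 1 / (1 - x**2))
# The two columns agree to many digits.
```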

The above terminology simplifies the two results established previously.

**Theorem (Ratio and Root Test).** Suppose $c_n$ is a sequence with non-zero terms. Suppose one of the following two limits exists (as a number or as infinity):
$$R=\lim_{n\to\infty} \left| \frac{c_{n}}{c_{n+1}} \right|\ \text{ or }\ R=\frac{1}{\lim_{n\to\infty} \sqrt[n]{|c_n|}}.$$
Then, $R$ is the radius of convergence of the power series
$$\sum_{n=0}^{\infty} c_n(x-a)^n.$$

**Example.** Let's consider
$$\sum \frac{x^n}{n}.$$
Following the *Ratio Test*, we need to compute the radius of convergence as the following limit:
$$R=\lim_{n\to\infty} \left| \frac{1/n}{1/(n+1)} \right|=\lim_{n\to\infty} \frac{n+1}{n}=1.$$
Therefore, the interior of the domain is $(-1,1)$. Now the end-points:
$$\begin{array}{lllll}
x=1&\Longrightarrow& \sum \frac{x^n}{n}&=\sum \frac{1}{n}&\text{is the divergent harmonic series};\\
x=-1&\Longrightarrow& \sum \frac{x^n}{n}&=\sum \frac{(-1)^n}{n}&\text{is the convergent alternating harmonic series}.
\end{array}$$
Therefore, the domain is $[-1,1)$. $\square$
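The two end-point verdicts can also be seen numerically; a sketch (the helper name is ours):

```python
def partial_sum(x, terms):
    """Partial sum of x^n/n for n = 1, ..., terms."""
    return sum(x ** n / n for n in range(1, terms + 1))

# x = 1: the harmonic series; the partial sums keep growing (like the log).
print(partial_sum(1.0, 10_000))
# x = -1: the alternating harmonic series; the partial sums settle near a limit.
print(partial_sum(-1.0, 10_000))
```

Doubling the number of terms barely moves the second value but keeps increasing the first.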

There is one and only one representation of a function as a power series centered at a given point.

**Theorem (Uniqueness of Power Series).** If two power series are equal, as functions, on an open interval $(a-r,a+r),\ r>0$, then their corresponding coefficients are equal too, i.e.,
$$\begin{array}{rlllll}
&\sum_{n=0}^{\infty} c_n(x-a)^n&=&\sum_{n=0}^{\infty} d_n(x-a)^n &\text{ for all } a-r<x<a+r,\\ \Longrightarrow &\qquad c_n&=&\qquad d_n &\text{ for all } n=0,1,2,3...
\end{array}$$

**Proof.** We prove that the coefficients are equal, $c_n=d_n$, one by one, following $n=0,1,2,...$. The trick consists of a repeated substitution of $x=a$ into this and the derived formulas: all terms with a factor of $(x-a)$ disappear every time, and we are left with just one pair of terms, which is then cancelled from the equation. The divisions by $(x-a)$ below are carried out for $x\ne a$.

On the first try, we are left with
$$c_0=d_0.$$
Indeed, if two functions are equal on the interval, then they are equal at $x=a$. Now these two terms are cancelled from our equation producing:
$$\sum_{n=1}^{\infty} c_n(x-a)^n=\sum_{n=1}^{\infty} d_n(x-a)^n.$$
As you can see, the summation starts with $n=1$ now. The power of $(x-a)$ in every term is then at least $1$. We can now divide the whole equation by $(x-a)$, for $x\ne a$, justified by the *Constant Multiple Rule*, producing:
$$\sum_{n=1}^{\infty} c_n(x-a)^{n-1}=\sum_{n=1}^{\infty} d_n(x-a)^{n-1}.$$
We substitute $x=a$ again and all terms except the very first one disappear:
$$c_1=d_1.$$

After $m$ such steps, we have: $$\sum_{n=m}^{\infty} c_n(x-a)^{n-m}=\sum_{n=m}^{\infty} d_n(x-a)^{n-m}.$$ The next step is again: substitute $x=a$, conclude that $c_m=d_m$, then divide by $(x-a)$. And so on. $\blacksquare$

**Exercise.** Point out in the proof where (and why) we need to use the condition $r>0$.

## Calculus of power series

Can we replace functions with power series?

We start with *algebra*.

Just as with functions in general, we can carry out (some) algebraic operations on power series producing new power series. However, what is truly important is that we can do these operations *term by term*. The idea comes from our experience with *polynomials*: after all the operations, we put the result in the standard form, i.e., with all terms arranged according to the powers.

First, we can multiply a series by a number one term at a time:
$$k\cdot \sum_{n=0}^{\infty} c_n(x-a)^n =\sum_{n=0}^{\infty}(kc_n)(x-a)^n ,$$
for all $x$ for which the series converges. The conclusion is justified by the *Constant Multiple Rule* for series.

Second, we can add two series one term at a time:
$$\sum_{n=0}^{\infty} c_n(x-a)^n + \sum_{n=0}^{\infty}d_n(x-a)^n =\sum_{n=0}^{\infty}(c_n+d_n)(x-a)^n ,$$
for all $x$ for which the series converge. Of course, the algebra of power series is only legitimate within their domains, i.e., for those values of $x$ for which they converge. The conclusion is justified by the *Sum Rule* for series.

In other words, these “infinite” polynomials behave just like ordinary polynomials, wherever they converge.
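These two rules are easy to mimic on a computer if we store a (truncated) power series with a fixed center as its list of coefficients; a sketch with our own names:

```python
from itertools import zip_longest

def scale(k, coeffs):
    """Multiply a series by a number, term by term."""
    return [k * c for c in coeffs]

def add(coeffs1, coeffs2):
    """Add two series with the same center, term by term."""
    return [a + b for a, b in zip_longest(coeffs1, coeffs2, fillvalue=0)]

def evaluate(coeffs, x, a=0):
    """Partial sum: c0 + c1(x-a) + c2(x-a)^2 + ..."""
    return sum(c * (x - a) ** n for n, c in enumerate(coeffs))

geometric = [1] * 40                             # 1 + x + x^2 + ... (truncated)
print(evaluate(scale(2, geometric), 0.5))        # close to 2/(1 - 0.5) = 4
print(evaluate(add(geometric, geometric), 0.5))  # the same series; again close to 4
```

Of course, as in the text, the results are only meaningful for $x$ inside the domains of both series.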

Next is differentiation and integration, i.e., *calculus*, of power series.

We will see that, just as with functions in general, we can carry out the calculus operations on power series producing new power series. However, what is truly important is that we can do these operations *term by term*.

**Example (differentiation).** Let's differentiate the terms of the power series representation of the exponential function:
$$\begin{array}{lllll}
e^x&=&1&+&x&+&\frac{1}{2!}x^2&+&\frac{1}{3!}x^3&+&...&+&\frac{1}{n!}x^n&+&\frac{1}{(n+1)!}x^{n+1}&+...\\
\downarrow^\frac{d}{dx}&&\downarrow^\frac{d}{dx}&&\downarrow^\frac{d}{dx}&&\downarrow^\frac{d}{dx}&&\ \downarrow^\frac{d}{dx}&&& &\quad\downarrow^\frac{d}{dx}&&\qquad\downarrow^\frac{d}{dx}\\
(e^x)'&\overset{\text{?}}{=\! =\! =}&0&+&1&+&\frac{1}{2!}2x&+&\frac{1}{3!}3x^2&+&...&+&\frac{1}{n!}nx^{n-1}&+&\frac{1}{(n+1)!}(n+1)x^{n}&+...\\
&=&&&\ 1&+&x&+&\frac{1}{2!}x^2&+&...&+&\frac{1}{(n-1)!}x^{n-1}&+&\frac{1}{n!}x^n&+...\\
&=&e^x.
\end{array}$$
It works! $\square$

**Example (integration).** Let's integrate the terms of the power series representation of the exponential function:
$$\begin{array}{lllll}
e^x&=&&1&+&x&+&\frac{1}{2!}x^2&+&...&+&\frac{1}{n!}x^n&+&\frac{1}{(n+1)!}x^{n+1}&+...\\
\downarrow^\int&&&\downarrow^\int&&\downarrow^\int&&\ \downarrow^\int&&&&\quad\downarrow^\int& &\quad\downarrow^\int&&\quad\downarrow^\int\\
\int e^x\, dx&\overset{\text{?}}{=\! =\! =}&C+&x&+&\frac{1}{2}x^2&+&\frac{1}{2!}\frac{1}{3}x^3&+&...&+&\frac{1}{n!}\frac{1}{n+1}x^{n+1}&+&\frac{1}{(n+1)!}\frac{1}{n+2}x^{n+2}&+...\\
&=&C+&x&+&\frac{1}{2!}x^2&+&\frac{1}{3!}x^3&+&...&+&\frac{1}{(n+1)!}x^{n+1}&+&\frac{1}{(n+2)!}x^{n+2}&+...\\
&=&C+&e^x.
\end{array}$$
It works! $\square$

The following theorem is central.

**Theorem (Term-by-Term Calculus).** Suppose the radius of convergence of a power series,
$$f(x)=\sum_{n=0}^{\infty} c_n(x-a)^n,$$
is positive or infinite. Then the function $f$ represented by this power series is differentiable (and, therefore, integrable) on the interval of convergence and the power series representations of its derivative and its antiderivative converge on this interval and are found by term-by-term differentiation and integration of the power series of $f$ respectively, i.e.,
$$f'(x)=\left(\sum_{n=0}^{\infty} c_n(x-a)^n\right)'=\sum_{n=0}^{\infty} \left(c_n(x-a)^n\right)',$$
and
$$\int f(x)\, dx=\int \left(\sum_{n=0}^{\infty} c_n(x-a)^n\right)\, dx=\sum_{n=0}^{\infty} \int c_n(x-a)^n\, dx.$$

With this theorem, there is no need for the rules of differentiation or integration except for the *Power Formula*! Let's find the explicit formulas, in the standard form.

**Corollary (Term-by-Term Differentiation).** On an interval of convergence of the power series we have:
$$f'(x)=\sum_{n=0}^{\infty} \left(c_n(x-a)^n\right)'=\sum_{n=1}^{\infty} n c_n(x-a)^{n-1}=\sum_{k=0}^{\infty} (k+1) c_{k+1}(x-a)^{k}.$$

Note the initial index of $1$ instead of $0$ in the formula for the derivative.

**Corollary (Term-by-Term Integration).** On an interval of convergence of the power series we have:
$$\int f(x)\, dx=\sum_{n=0}^{\infty} \int c_n(x-a)^n\, dx= C+\sum_{n=0}^{\infty} \frac{c_n}{n+1}(x-a)^{n+1}=C+ \sum_{k=1}^{\infty} \frac{c_{k-1}}{k}(x-a)^{k}.$$
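Both corollaries translate directly into operations on coefficient lists; a minimal sketch (the representation and names are ours), checked against the exponential series:

```python
from math import factorial

def derivative(coeffs):
    """Term-by-term derivative: [c0, c1, c2, ...] -> [1*c1, 2*c2, 3*c3, ...]."""
    return [n * c for n, c in enumerate(coeffs)][1:]

def antiderivative(coeffs, C=0.0):
    """Term-by-term antiderivative: [c0, c1, ...] -> [C, c0/1, c1/2, ...]."""
    return [C] + [c / (n + 1) for n, c in enumerate(coeffs)]

# The coefficients of e^x are 1/n!; its derivative should have the same ones.
exp_coeffs = [1 / factorial(n) for n in range(10)]
print(derivative(exp_coeffs)[:4])      # again 1/0!, 1/1!, 1/2!, 1/3!
print(antiderivative([1, 1, 1], C=0))  # [0, 1.0, 0.5, 0.333...]
```

Note how the list shifts left under differentiation and right under integration, matching the index shifts in the two formulas above.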

**Example (differential equations).** In contrast to the last example, let's use the theorem to “discover” a solution rather than confirm it. Suppose we need to solve this differential equation:
$$f'=f.$$
We assume that the unknown function $y=f(x)$ is differentiable and, therefore, is represented by a term-by-term differentiable power series. We differentiate the series and then match the terms according to the equation:
$$\begin{array}{ccc}
f&=&c_0&+&c_1x&+&c_2x^2&+&c_3x^3&+&...&+&c_n x^n&+&...\\
f'&=&\ &&c_1&+&2c_2x&+&3c_3x^2&+&...&+&nc_n x^{n-1}&+&...\\
\Longrightarrow&&&\swarrow&&\swarrow&&\swarrow&&...&&\swarrow&&\\
f'&=&c_1&+&2c_2x&+&3c_3x^2&+&...&+&nc_n x^{n-1}&+&(n+1)c_{n+1}x^{n}&+...\\
||&&||&&||&&||&&&&||&&||\\
f&=&c_0&+&c_1x&+&c_2x^2&+&...&+&c_{n-1}x^{n-1}&+&c_{n}x^n&+...\\
\end{array}$$
According to the *Uniqueness of Power Series*, the coefficients have to be equal! We have created a sequence of equations:
$$\begin{array}{ccc}
&c_1&2c_2&3c_3&...&(n+1)c_{n+1}&...\\
&||&||&||&&||\\
&c_0&c_1&c_2&...&c_n&...\\
\end{array}$$
We can start solving these equations from left to right:
$$\begin{array}{lll}
&c_1&\Longrightarrow\ c_1=c_0 & &2c_2&\Longrightarrow\ c_2=c_0/2& &3c_3&\Longrightarrow\ c_3=c_0/(2\cdot 3)& &4c_4&...\\
&||&&&||&&&||&&&||\\
&c_0& &\Longrightarrow&c_1=c_0& &\Longrightarrow&c_2=c_0/2& &\Longrightarrow&c_3=c_0/(2\cdot 3)&...\\
\end{array}$$
Therefore,
$$c_n=\frac{c_0}{n!}.$$
The problem is solved! Indeed, we have a formula for the function we are looking for:
$$f(x)=\sum_{n=0}^\infty \frac{c_0}{n!}x^n=c_0\sum_{n=0}^\infty \frac{1}{n!}x^n,$$
and it will give us the values of $f$ with any accuracy we want. The only missing part in this program is the proof of *convergence*; it is done with the *Ratio Test* (seen previously). As a *bonus* (just a bonus!), we recognize the resulting series: $f(x)=c_0e^{x}$. $\square$

This method of solving differential equations is further discussed in Chapter 23.
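The recurrence found in the example, $(n+1)c_{n+1}=c_n$, can be run directly; a sketch (the names are ours), with the known answer $c_0e^x$ used only as a check:

```python
import math

def solve_coeffs(c0, terms):
    """Coefficients of a power series solution of f' = f with f(0) = c0."""
    coeffs = [c0]
    for n in range(terms - 1):
        coeffs.append(coeffs[-1] / (n + 1))  # from (n+1) c_{n+1} = c_n
    return coeffs

coeffs = solve_coeffs(1.0, 25)
x = 1.0
value = sum(c * x ** n for n, c in enumerate(coeffs))
print(value, math.exp(x))  # the two agree to high accuracy
```

The same scheme, with a different recurrence for the coefficients, works for many other differential equations.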

**Definition.** A function defined on an open interval that can be represented by a power series is called *analytic* on this interval.

**Corollary.** Every analytic function is infinitely many times differentiable.

What about the converse? What kind of functions can be represented by power series?

The picture illustrates the limitations of representing functions by power series (smaller domain):

The term-by-term differentiation allows us to rediscover the *Taylor polynomials*. Suppose we already have a power series representation of a function with a radius of convergence $R>0$:
$$f(x)=c_0+c_1(x-a)+c_2(x-a)^2+c_3(x-a)^3+...+c_n(x-a)^n+...$$
Let's express the coefficients in terms of the function itself. The trick is partly the same as in the proof of the last theorem: we substitute $x=a$ into this and the derived formulas, except this time we don't divide but rather differentiate. First substitution gives us what we already know:
$$f(a)=c_0.$$
We differentiate,
$$f'(x)=c_1+2c_2(x-a)+3c_3(x-a)^2+...+n c_n(x-a)^{n-1}+...,$$
and substitute $x=a$ giving us:
$$f'(a)=c_1.$$
We differentiate one more time,
$$f' '(x)=2c_2+3c_3(x-a)+...+n(n-1)c_n(x-a)^{n-2}+...,$$
and substitute $x=a$ giving us:
$$f' '(a)=2c_2.$$
After $m$ steps we have:
$$f^{(m)}(x)=m (m-1)\cdots 3\cdot 2\cdot 1\cdot c_m+(m+1)m\cdots 3\cdot 2\cdot c_{m+1}(x-a)+\ ...\ +n (n-1)\cdots (n-m+1)\cdot c_n(x-a)^{n-m}+\ ...,$$
and substituting $x=a$ gives us:
$$f^{(m)}(a)=m!c_m.$$
The result is presented below.

**Theorem (Taylor Coefficients).** If a function is represented by a power series with a positive radius of convergence,
$$f(x)=c_0+c_1(x-a)+c_2(x-a)^2+c_3(x-a)^3+...+c_n(x-a)^n+...,$$
then its coefficients are the *Taylor coefficients*:
$$c_n=\frac{f^{(n)}(a)}{n!}.$$

Thus, the $n$th partial sum of this series is the $n$th Taylor polynomial of $f$. The series is called the *Taylor series* of the analytic function $f$ at $x=a$.

The surprising conclusion is that the whole analytic function is determined by the values of its derivatives *at a single point*.

**Corollary.** Suppose $f,g$ are analytic on interval $(a-R,a+R),\ R>0$. If they have matching derivatives of all orders at $a$, they are equal on this interval; i.e.,
$$f^{(n)}(a)=g^{(n)}(a)\ \text{ for all } n=0,1,2,3... \ \Longrightarrow\ f(x)=g(x)\ \text{ for all }x\text{ in } (a-R,a+R).$$

**Exercise.** Prove the corollary. Hint: consider $f-g$.

Since these derivatives are, in turn, fully determined by the behavior of the function on a small (no matter how small) interval around this point, we conclude that there is only *one way* to extend the function beyond this interval!

In other words, analytic functions are “predictable”: once we have drawn a tiny piece of the graph, there is only one way to continue to draw it. We can informally interpret this idea as follows: drawing a curve with a single stroke of the pen produces an analytic function but stopping in the middle to decide how to proceed is likely to prevent this from happening.

What kind of functions can be represented by power series?

**Theorem (Representation by Power Series).** Suppose a function $f$ is infinitely many times differentiable on interval $(a-R,a+R)$ and these derivatives are bounded by the same constant $M$:
$$|f^{(n)}(x)|\le M \quad \text{ for all } x \text{ in } (a-R,a+R).$$
Then $f$ is analytic.

**Proof.** By the remainder estimate for the Taylor polynomials $T_n$, for every $x$ in the interval we have:
$$|f(x)-T_n(x)|\le \frac{M|x-a|^{n+1}}{(n+1)!}\le \frac{MR^{n+1}}{(n+1)!}.$$
The right-hand sides are the terms of the convergent series $\sum \frac{MR^n}{n!}$, so they tend to $0$. Therefore, the Taylor series converges to $f$. $\blacksquare$

**Exercise.** Weaken the boundedness condition.

**Example (Taylor series via substitutions).** Find the power series presentation of
$$f(x)=x^{-3}$$
around $a=1$. We differentiate and substitute:
$$\begin{array}{llllll}
f(x)&=x^{-3} & \Rightarrow & f(1)=1& \Rightarrow & c_0=1;\\
f'(x)&=(-3)x^{-4} & \Rightarrow & f'(1)=-3& \Rightarrow & c_1=-3;\\
f' '(x)&=(-3)(-4)x^{-5} & \Rightarrow & f' '(1)=12& \Rightarrow & c_2=12/2=6;\\
f^{(3)}(x)&=(-3)(-4)(-5)x^{-6} & \Rightarrow & f^{(3)}(1)=-60& \Rightarrow & c_3=-60/6=-10;\\
...\\
f^{(n)}(x)&=(-3)(-4)\cdots (-2-n)x^{-3-n} & \Rightarrow & f^{(n)}(1)=\frac{(-1)^n(2+n)!}{2} & \Rightarrow & c_n=(-1)^n\frac{(n+1)(n+2)}{2}.
\end{array}$$
That's the $n$th Taylor coefficient. Hence,
$$f(x)=\sum_{n=0}^\infty (-1)^n\frac{(n+1)(n+2)}{2}(x-1)^n,$$
but what is the interval of convergence? The radius of convergence is the distance from the center $a=1$ to the nearest point for which there is no convergence, which appears to be $x=0$, where the function is undefined; so $R=1$. We confirm this with the *Ratio Test*:
$$\lim_{n\to\infty}\left| (-1)^n\frac{(n+1)(n+2)}{2} \div (-1)^{n+1}\frac{(n+2)(n+3)}{2} \right|=\lim_{n\to\infty}\frac{(n+1)(n+2)}{(n+2)(n+3)}=1.$$
Then the end-points of the interval are $0$ and $2$. The series converges inside the interval and what's left is the convergence at the end-points:
$$\begin{array}{lll}
x=0,&\sum \frac{(n+1)(n+2)}{2} &\text{ diverges;}\\
x=2,&\sum (-1)^n\frac{(n+1)(n+2)}{2} &\text{ diverges.}\\
\end{array}$$
Thus, $(0,2)$ is the interval of convergence of the series. It is also the domain of the series even though it's smaller than that of the original function. $\square$
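We can test the coefficients $c_n=(-1)^n\frac{(n+1)(n+2)}{2}$ numerically: inside $(0,2)$ the partial sums should approach $x^{-3}$. A sketch (the names are ours):

```python
def coefficient(n):
    return (-1) ** n * (n + 1) * (n + 2) / 2

def taylor_partial_sum(x, terms, a=1.0):
    return sum(coefficient(n) * (x - a) ** n for n in range(terms))

for x in (0.5, 1.0, 1.5):
    print(x, taylor_partial_sum(x, 300), x ** -3)  # agree inside (0, 2)
```

Trying $x$ near the end-points shows the convergence deteriorating, as the divergent end-point series predict.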

The mismatch between the domains of functions and the intervals of convergence of their power series representation is the (main) reason why the match between calculus of functions and calculus of power series is imperfect. At least, the differentiation and integration, according to the *Term-by-Term Differentiation and Integration*, are perfectly reflected in this mirror:
$$\newcommand{\ra}[1]{\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!}
\newcommand{\la}[1]{\!\!\!\!\!\xleftarrow{\quad#1\quad}\!\!\!\!\!}
\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
\newcommand{\ua}[1]{\left\uparrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
\begin{array}{ccc}
f&\ra{Taylor}&\sum c_n(x-a)^n\\
\ \da{\frac{d}{dx}}& &\ \da{\frac{d}{dx}}\\
f' & \ra{Taylor}&\sum \left(c_n(x-a)^n\right)'
\end{array}\qquad
\begin{array}{ccc}
f&\ra{Taylor}&\sum c_n(x-a)^n\\
\ \da{\int}& &\ \da{\int}\\
\int f\, dx & \ra{Taylor}&\sum \int\left(c_n(x-a)^n\right)\, dx
\end{array}
$$
In the first diagram, we start with a function at the top left and then we proceed in two ways:

- right: find its Taylor series, then down: differentiate the result term by term; or
- down: differentiate it, then right: find its Taylor series.

The result is the same!