1.3. Series#

In real analysis, we are almost always concerned with what happens when we take sums (or averages) of long sequences. These sums are known as series, and we need a few basic definitions to get us started.

1.3.1. Basic definitions#

To start with, we have a specific term, called a partial sum, for the sum of a portion of a sequence:

Definition 1.13 (Partial sum)

Let \(\mathcal A \subset \mathbb R\) be a set, and let \((a_n)\) be a sequence on \(\mathcal A\). An \(m\)-element partial sum of the sequence is:

\[ s_m = \sum_{n = 1}^m a_n\]

This allows us to define a series:

Definition 1.14 (Series)

Let \(\mathcal A \subset \mathbb R\) be a set, and let \((a_n)\) be a sequence on \(\mathcal A\). A series on \(\mathcal A\) is the infinite sum:

\[ \sum_{n = 1}^\infty a_n\]

With \(s_m = \sum_{n = 1}^m a_n\) the \(m^{th}\) partial sum, we define the sum of the series to be:

\[\sum_{n = 1}^\infty a_n = \lim_{m \rightarrow \infty} s_m\]

If the limit exists and is finite, we say that the series converges; otherwise, we say that the series diverges.

1.3.2. Taylor’s theorem#

One of the most useful results from real analysis is known as Taylor’s Theorem, which effectively asserts that we can restate functions satisfying specific conditions using Taylor series. We can then, in effect, “chop off” terms from this Taylor series to obtain arbitrarily precise estimates of the function itself using polynomials. We’ve already seen the class of continuously differentiable functions in Definition 1.12, so we are ready to understand the concept behind the Taylor series:

Lemma 1.4 (Taylor series)

Suppose that:

  1. \(x_0 \in \mathbb R\),

  2. \(f : \mathbb R \rightarrow \mathbb R\) is a function,

  3. \(\mathcal X \subseteq \mathbb R\) is a neighborhood about \(x_0\), on which \(f \in C^\infty\) and \(f\) is analytic (that is, the series below converges to \(f\)).

Then for \(x \in \mathcal X\):

\[ f(x) = \sum_{k = 0}^\infty \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k\]

This allows us to express functions which are infinitely differentiable (and analytic) on a neighborhood of a point \(x_0\) as power series; that is, as limits of polynomials. Now we can state Taylor’s Theorem:

Theorem 1.2 (Taylor’s Theorem)

Suppose that:

  1. \(n \geq 1\),

  2. \(x_0 \in \mathbb R\),

  3. \(f : \mathbb R \rightarrow \mathbb R\) is a function,

  4. \(\mathcal X\) is a neighborhood about \(x_0\), on which \(f \in C^n\).

Then for \(x \in \mathcal X\):

\[\begin{split} f(x) &= T_n(f, x, x_0) + o \left(|x - x_0|^n\right),\\ T_n(f, x, x_0) &= \sum_{k = 0}^n \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k\end{split}\]

The core interpretation of Taylor’s Theorem that we will need for our study is that \(T_n(f, x, x_0)\) is a polynomial which “agrees” with \(f\) in terms of derivatives about \(x_0\) up to order \(n\), and further, will be arbitrarily precise within a neighborhood of \(x_0\). The idea of Taylor’s Theorem is that we are, in effect, “chopping off” the Taylor series at an arbitrary point, and are left with a remainder \(o(|x - x_0|^n)\) that vanishes faster than \(|x - x_0|^n\) as \(x \rightarrow x_0\).

1.3.2.1. Examples of Taylor series#

Let’s take a look at some common Taylor series that will arise. As an exercise, try to work through some of these yourself:

Example 1.1 (Taylor series of \(\frac{1}{1 - x}\))

Suppose that \(x \in (-1, 1)\). Then the Taylor series of \(\frac{1}{1 - x}\) about \(x_0 = 0\) is:

\[\frac{1}{1 - x} = 1 + x + x^2 + x^3 + ... = \sum_{k = 0}^\infty x^k\]

Example 1.2 (Taylor series of exponential)

Suppose that \(x \in \mathbb R\). Then the Taylor series of \(\exp(x)\) about \(x_0 = 0\) is:

\[\begin{split}\exp(x) &= 1 + x + \frac{x^2}{2!} + \frac{x^ 3}{3!} + ... \\ &= \sum_{k = 0}^\infty \frac{x^k}{k!}\end{split}\]

Example 1.3 (Taylor series of \(\log(1 + x)\))

Suppose that \(x \in (-1, 1]\). Then the Taylor series of \(\log(1 + x)\) about \(x_0 = 0\) is:

\[\begin{split}\log(1 + x) &= x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + ... \\ &= \sum_{k = 1}^\infty (-1)^{k - 1}\frac{x^k}{k}\end{split}\]

Example 1.4 (Taylor series of cosine)

Suppose that \(x \in \mathbb R\). Then the Taylor series of \(\cos(x)\) about \(x_0 = 0\) is:

\[\begin{split}\cos(x) &= 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + ... \\ &= \sum_{k = 0}^\infty (-1)^k \frac{x^{2k}}{(2k)!}\end{split}\]

Due to the evenness of \(\cos(x)\) (that is, \(\cos(x) = \cos(-x)\)), we only end up with even powers.

Example 1.5 (Taylor series of sine)

Suppose that \(x \in \mathbb R\). Then the Taylor series of \(\sin(x)\) about \(x_0 = 0\) is:

\[\begin{split}\sin(x) &= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + ... \\ &= \sum_{k = 0}^\infty (-1)^k \frac{x^{2k + 1}}{(2k + 1)!}\end{split}\]

Due to the oddness of \(\sin(x)\) (that is, \(\sin(-x) = -\sin(x)\)), we only end up with odd powers.

1.3.2.2. Consequences of Taylor’s theorem#

Next is an important consequence of Taylor’s theorem that will arise in several places in the book. Let’s start with an important sequence:

Lemma 1.5 (Limit is exponential)

Let \(\mathcal C \subseteq \mathbb R\) be a set, and let \((c_n)\) be a sequence on \(\mathcal C\). Suppose that \(c \in \mathbb R\). Then:

\[ c_n \xrightarrow[n \rightarrow \infty]{} c \Rightarrow \left(1 + \frac{c_n}{n}\right)^n \xrightarrow[n \rightarrow \infty]{} \exp(c)\]

Proof. With some manipulations, notice that:

\[\begin{split} \left(1 + \frac{c_n}{n}\right)^n &= \exp\left\{n\log\left(1 + \frac{c_n}{n}\right)\right\} \\ &= \exp\left\{n\left(\frac{c_n}{n} + O\left(\left|\frac{c_n}{n}\right|^2\right)\right)\right\}\end{split}\]

which uses the order \(1\) Taylor expansion of \(\log(1 + x)\) from Example 1.3, whose remainder is \(O(x^2)\). Since \(c_n \xrightarrow[n \rightarrow \infty]{} c\), the sequence \((c_n)\) is bounded, so \(O\left(\left|\frac{c_n}{n}\right|^2\right) = O\left(n^{-2}\right)\). Then for large \(n\):

\[\begin{split} \left(1 + \frac{c_n}{n}\right)^n &= \exp\left\{c_n + O(n^{-1})\right\} \\ &\xrightarrow[n \rightarrow \infty]{} \exp(c), \,\,\,\, c_n + O(n^{-1}) \rightarrow c\end{split}\]

The last step follows because \(\exp(x)\) is continuous, so the limit of the exponential is the exponential of the limit.

1.3.3. Convergence concepts#

We’re going to be extremely interested in the behavior of these series. This means that we need some tools that easily let us determine, for a particular sequence \((a_n)\), whether the resulting series converges or diverges. These tend to be called convergence tests.

1.3.3.1. Comparison test#

First up is the simplest test, the comparison test:

Lemma 1.6 (Comparison Test)

Let \(\mathcal A \subset \mathbb R\), and let \((a_n)\) and \((b_n)\) be sequences on \(\mathcal A\) where all \(a_n, b_n \geq 0\). Then:

  1. If \(\sum_{n = 1}^\infty b_n\) is a convergent series and \(a_n \leq b_n\) for all \(n\), then \(\sum_{n = 1}^\infty a_n\) converges.

  2. If \(\sum_{n = 1}^\infty b_n\) is a divergent series and \(a_n \geq b_n\) for all \(n\), then \(\sum_{n = 1}^\infty a_n\) diverges.

When we have reference series whose behavior we already know, this can make proving convergence somewhat trivial.

1.3.3.2. Integral test#

Next up is the integral test:

Lemma 1.7 (Integral Test)

Let \(\mathcal A \subset \mathbb R\), and let \((a_n)\) be a sequence on \(\mathcal A\). Let \(f : \mathbb R \rightarrow \mathbb R\) be a function with \(f(n) = a_n\) for each \(n\). If \(f\) is positive, continuous, and decreasing on \([1, \infty)\), then:

\[ \sum_{n = 1}^\infty a_n \text{ and }\int_1^\infty f(x)\,\text{d} x\]

either both converge or both diverge.

Sometimes, when faced with a seemingly daunting example, it might make sense to just string together multiple tests to prove your desired result. Let’s look at a quick example of the integral test and the comparison test being used together:

Example 1.6

Let’s consider the series:

\[ \sum_{n = 2}^\infty \frac{1}{n(\log n)^s}\]

For a given value of \(s \in (-\infty, \infty)\), does the series converge or diverge?

Hint: first, consider what happens for \(s = 1\). Next, use this result to deduce something about the case of \(s < 1\). Finally, consider the case of \(s > 1\).

1.3.3.3. \(p\)-series test#

Next up is probably the most important convergence result that you will want to know inside and out, the \(p\)-series convergence test:

Lemma 1.8 (\(p\)-series Convergence Test)

For \(p > 0\), the \(p\)-series is given by:

\[ \sum_{n = 1}^\infty \frac{1}{n^p}\]

If \(p > 1\), then the series converges. If \(p \in (0, 1]\), then the series diverges.

1.3.3.4. Ratio Test#

The next test is one of the more easily used convergence tests, as it amounts to taking the limit of ratios of consecutive terms, \(r_n \triangleq \frac{a_{n + 1}}{a_n}\). It is called the ratio test. The statement is rather big and involved:

Lemma 1.9 (Ratio Test)

Let \(\mathcal A \subset \mathbb R\), and let \((a_n)\) be a sequence on \(\mathcal A\). Consider the series \(\sum_{n = 1}^\infty a_n\), where \(a_n \neq 0\) for large \(n\). Let \(L = \lim_{n \rightarrow \infty}|r_n|\), where \(r_n = \frac{a_{n + 1}}{a_n}\). If \(L\) exists, then:

  1. If \(L < 1\), the series converges absolutely.

  2. If \(L = 1\), the test is inconclusive.

  3. If \(L > 1\), the series diverges.

Further, if \(L\) does not exist, let \(S = \limsup_{n \rightarrow \infty} |r_n|\) and \(I = \liminf_{n \rightarrow \infty}|r_n|\). Then:

  1. If \(S < 1\), the series converges absolutely.

  2. If \(I > 1\), the series diverges.

  3. If \(|r_n| \geq 1\) for all large \(n\), regardless of \(S\) and \(I\), the series diverges.

  4. Otherwise, the test is inconclusive.

So, the nice aspect of this test is that we only need to think about the behavior of \(r_n\) in the tail. This means we can couple the ratio test with other nice results that operate on ratios of two numbers, like L’Hôpital’s rule, to determine the behavior of the series. The downside is that the inconclusive cases can sometimes be rather annoying.

1.3.3.5. Root Test#

The next test can also be useful if we have terms whose \(n^{th}\) roots are easy to evaluate or bound, such as polynomial expressions. This is called the root test:

Lemma 1.10 (Root Test)

Let \(\mathcal A \subset \mathbb R\), and let \((a_n)\) be a sequence on \(\mathcal A\). Consider the series \(\sum_{n = 1}^\infty a_n\), and let \(C = \limsup_{n \rightarrow \infty} \sqrt[n]{|a_n|}\). Then:

  1. If \(C < 1\), the series converges absolutely.

  2. If \(C > 1\), the series diverges.

  3. If \(C = 1\), and further, \(\sqrt[n]{|a_n|} \downarrow 1\) as \(n \rightarrow \infty\), the series diverges.

  4. Otherwise, the test is inconclusive.

1.3.3.6. Limit Comparison Test#

The next test looks only at the limits of a pair of sequences to determine behavior, and is pretty similar to the comparison test.

Lemma 1.11 (Limit Comparison Test)

Let \(\mathcal A \subset \mathbb R\), and let \((a_n)\) and \((b_n)\) be two sequences on \(\mathcal A\). Further, assume that \(a_n \geq 0\), and \(b_n > 0\), for all \(n \in \mathbb N\). Then:

  1. If \(\lim_{n \rightarrow \infty}\frac{a_n}{b_n} > 0\) is finite, then \(\sum_{n \in \mathbb N} a_n\) converges \(\iff \sum_{n \in \mathbb N} b_n\) converges.

  2. If \(\lim_{n \rightarrow \infty}\frac{a_n}{b_n} = 0\) and \(\sum_{n \in \mathbb N} b_n\) converges, then \(\sum_{n \in \mathbb N} a_n\) converges.

  3. If \(\lim_{n \rightarrow \infty}\frac{a_n}{b_n} = \infty\) and \(\sum_{n \in \mathbb N} b_n\) diverges, then \(\sum_{n \in \mathbb N} a_n\) diverges.

Let’s take a look at how to use this one with an example, where we chain together the comparison test with the limit comparison test and the \(p\)-series convergence test:

Example 1.7

Consider the series \(\sum_{n \in \mathbb N}\left(1 - \frac{1}{n^{1 - \epsilon}}\right)^n\), where \(\epsilon \in (0, 1)\). Does the series converge or diverge?

Hint: compare to the series \(\sum_{n \in \mathbb N}\exp(-n^\epsilon)\), and then use the limit comparison test to a \(p\)-series.

1.3.3.7. Alternating series test#

The next test considers what happens when we have an alternating series (one whose terms alternate between positive and negative values) whose terms converge to zero in absolute value. This is called the alternating series test:

Lemma 1.12 (Alternating Series Test)

Let \(\mathcal A \subset \mathbb R\), and let \((a_n)\) be a sequence on \(\mathcal A\), where \(a_n = (-1)^n b_n\), or \(a_n = (-1)^{n + 1}b_n\), for \(b_n \geq 0\). If:

  1. \(b_n \xrightarrow[n \rightarrow \infty]{} 0\),

  2. \(b_n\) is a decreasing sequence (\(b_{n + 1} \leq b_n\) for all \(n\)),

Then \(\sum_{k = 1}^n a_k \xrightarrow[n \rightarrow \infty]{} L\) for some finite \(L\); that is, the series converges.

1.3.4. Convergence of series and their tails#

An important result we will use repeatedly throughout the book follows from the definition of a convergent series: the partial sums possess a finite limit. A convenient consequence is that the tail sums of a convergent series converge to zero. We’ll formalize this intuitive result with a lemma:

Lemma 1.13 (Convergent Series and the Tail Sum)

Let \(\mathcal A \subset \mathbb R\) be a set, and let \((a_n)\) be a sequence on \(\mathcal A\). Let \(s_n = \sum_{k \in [n]}a_k\) be the \(n^{th}\) partial sum. Then:

\[\sum_{k = 1}^na_k \xrightarrow[n \rightarrow \infty]{} L < \infty \iff \sum_{k = n+1}^\infty a_k \xrightarrow[n \rightarrow \infty]{} 0.\]

Since this statement is an “if and only if” (as denoted by \(\iff\)), this means that we have to separately prove that each side implies the other. We’ll typically do that in this book by numbering the proof with two parts. For this proof, notice that we’re going to make use of the \(\vee\) and \(\wedge\) operators. \(f \vee g\) just means the maximum of \(f\) and \(g\), and \(f \wedge g\) just means the minimum of \(f\) and \(g\).

Proof. \(\Rightarrow\) Suppose that \(\sum_{k = 1}^n a_k \xrightarrow[n \rightarrow \infty]{} L < \infty\).

By definition of a limit, for any \(\epsilon > 0\), there exists \(N\) s.t. for all \(n > N\):

\[\begin{split}\left|\sum_{k = 1}^n a_k - L\right| &< \epsilon \\ \left|\sum_{k = 1}^n a_k - \sum_{k = 1}^\infty a_k\right| &< \epsilon,\,\,\,\,\sum_{k = 1}^\infty a_k = L < \infty\\ \left|\sum_{k = n + 1}^\infty a_k\right| &< \epsilon\end{split}\]

Which is the definition of \(\sum_{k = n + 1}^\infty a_k \xrightarrow[n \rightarrow \infty]{} 0\).

\(\Leftarrow\) Suppose that \(\sum_{k = n + 1}^\infty a_k \xrightarrow[n \rightarrow \infty]{} 0\).

Let \(s_n = \sum_{k \in [n]}a_k\). Then:

\[\begin{split}\left|s_n - s_m\right| &= \left|\sum_{k = (m \wedge n) + 1}^{m \vee n}a_k\right| = \left|\sum_{k \geq m + 1}a_k - \sum_{k \geq n + 1}a_k\right| \\ &\leq \left|\sum_{k \geq m + 1}a_k\right| + \left| \sum_{k \geq n + 1}a_k\right|,\,\,\,\,\text{Triangle Inequality} \\ &\xrightarrow[n, m \rightarrow \infty]{} 0.\,\,\,\,\text{by supposition}\end{split}\]

Which shows that \(\{s_n\}\) is Cauchy.

Then, since \(\mathbb R\) is complete, the Cauchy sequence \((s_n)\) has a finite limit: \(s_n \xrightarrow[n \rightarrow \infty]{} L < \infty\).

Next, we have another lemma which will be extremely useful when we try to make statements about averages of sequences. Averages of sequences will be a major topic when we talk about laws of large numbers, since laws of large numbers typically attempt to make statements about means of random quantities:

Lemma 1.14 (Tail terms going to zero implies average goes to zero)

Let \(\mathcal A \subset \mathbb R\) be a set, and let \((a_n)\) be a sequence on \(\mathcal A\). Suppose that \(a_n \xrightarrow[n \rightarrow \infty]{} 0\). Then \(\frac{1}{n}\sum_{k = 1}^n a_k \xrightarrow[n \rightarrow \infty]{} 0\).

Proof. Since \(a_n \rightarrow 0\), for every \(\epsilon > 0\), there exists \(N\) s.t. for all \(n > N\), \(|a_n| < \epsilon\).

Then:

\[\begin{split}0 \leq \left|\frac{1}{n} \sum_{k = 1}^n a_k\right| &= \frac{1}{n}\left| \sum_{k = 1}^N a_k + \sum_{k = N + 1}^n a_k\right| \\ &\leq \frac{1}{n}\left| \sum_{k = 1}^N a_k\right| + \frac{1}{n}\left|\sum_{k = N + 1}^n a_k\right|,\,\,\,\,\text{triangle ineq.} \\ &\leq \frac{C}{n} + \frac 1 n \sum_{k = N + 1}^n |a_k|,\,\,\,\, C \triangleq \left|\sum_{k = 1}^N a_k\right|,\text{ triangle ineq.} \\ &\leq \frac{C}{n} + \epsilon,\,\,\,\, n - N \leq n,\, |a_k| < \epsilon\end{split}\]

Since \(\frac{C}{n} \xrightarrow[n \rightarrow \infty]{} 0\), this gives \(\limsup_{n \rightarrow \infty}\left|\frac 1 n \sum_{k = 1}^n a_k\right| \leq \epsilon\). Taking \(\epsilon \downarrow 0\) and noting that the quantity is lower-bounded by zero, the sandwich theorem gives that \(\frac 1 n \sum_{k = 1}^n a_k \xrightarrow[n \rightarrow \infty]{} 0\).