You start a sponsored read-a-thon. On day 1 you read 10 pages. Each day you read 3 more pages than the day before. After 30 days, how many pages have you read in total? Adding them up one by one would take a while — but there’s a pattern.
To walk from here to a wall, you must first walk halfway. Then half of the remaining distance. Then half again. You take infinitely many steps — so how do you ever arrive? The answer is that the distances form a series whose infinite sum is exactly the distance to the wall.
Both problems need the same tool. This chapter builds it.
25.1 What the notation is saying
A sequence is a list of numbers following a rule. A series is the sum of those numbers, in order.
Write the sequence \(3, 7, 11, 15, 19\). The corresponding series is:
\[3 + 7 + 11 + 15 + 19 = 45\]
That works when you have a short list. For 300 terms, writing out every addition is unrealistic. Mathematicians use a shorthand — the sigma notation — named after the Greek letter \(\Sigma\) (capital sigma).
\[\sum_{i=1}^{n} a_i\]
Read it as: the sum of \(a_i\), as \(i\) runs from 1 to \(n\).
The \(i\) below the sigma is the index — it’s a counter that starts at 1 and steps up by 1 until it reaches \(n\). At each step, you substitute that value of \(i\) into the expression \(a_i\) and add the result to your running total.
A concrete example before using sigma in formulas. Write the sum \(1 + 4 + 9 + 16 + 25\) — the first five perfect squares — using sigma:
\[\sum_{i=1}^{5} i^2 = 1 + 4 + 9 + 16 + 25 = 55\]
Here \(a_i = i^2\): the index runs from 1 to 5, and each value of \(i\) is squared and added to the total.
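Sigma notation is exactly a loop with an accumulator. A minimal sketch in JavaScript (the function name `sigma` is just for illustration):

```javascript
// Sum a_i for i = lo..hi, where `term` computes a_i from the index i.
function sigma(lo, hi, term) {
  let total = 0;
  for (let i = lo; i <= hi; i++) {
    total += term(i); // substitute the current index, add to the running total
  }
  return total;
}

// The first five perfect squares: 1 + 4 + 9 + 16 + 25
console.log(sigma(1, 5, i => i * i)); // 55
```

The three pieces of the notation map directly onto the code: the lower bound, the upper bound, and the expression \(a_i\).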
25.2.1 Arithmetic series
An arithmetic sequence has a first term \(a_1\) and a common difference \(d\): each term is the previous term plus \(d\), so \(a_n = a_1 + (n-1)d\). Adding all \(n\) of them gives the arithmetic series \(S_n\), read as the sum of the first \(n\) terms.
To see why the formula works, try a small example first. Take the arithmetic sequence \(3, 6, 9, 12\) (four terms, \(d = 3\)). Write the sum forwards and backwards:
\[S_4 = 3 + 6 + 9 + 12\]
\[S_4 = 12 + 9 + 6 + 3\]
Each column sums to \(15\), so \(2S_4 = 4 \times 15 = 60\) and \(S_4 = 30\). The same trick works in general:
\[S_n = a_1 + a_2 + \cdots + a_{n-1} + a_n\]
\[S_n = a_n + a_{n-1} + \cdots + a_2 + a_1\]
Add the two lines together. Each column pairs a term from the top with a term from the bottom. Every pair sums to \(a_1 + a_n\), and there are \(n\) such pairs:
\[2S_n = n(a_1 + a_n)\]
\[\boxed{S_n = \frac{n}{2}(a_1 + a_n)}\]
Since \(a_n = a_1 + (n-1)d\), you can write this entirely in terms of the first term and the common difference:
\[S_n = \frac{n}{2}\bigl(2a_1 + (n-1)d\bigr)\]
Use whichever form is more convenient. If you know the first and last term, use the first. If you only know \(a_1\) and \(d\), use the second.
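Both forms are easy to check against the read-a-thon problem from the opening of the chapter (\(a_1 = 10\), \(d = 3\), \(n = 30\)). A sketch with illustrative function names:

```javascript
// Sum of the first n terms, given the first and last term.
function arithmeticSumFromEnds(a1, an, n) {
  return (n / 2) * (a1 + an);
}

// Sum of the first n terms, given the first term and common difference.
function arithmeticSumFromDiff(a1, d, n) {
  return (n / 2) * (2 * a1 + (n - 1) * d);
}

// Read-a-thon: 10 pages on day 1, 3 more each day, 30 days.
const last = 10 + 29 * 3;                          // a_30 = 97
console.log(arithmeticSumFromEnds(10, last, 30));  // 1605
console.log(arithmeticSumFromDiff(10, 3, 30));     // 1605
```

Both forms give 1605 pages, as they must: one is just the other with \(a_n\) expanded.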
The Gauss pairing trick
Carl Friedrich Gauss was reportedly seven years old when his teacher set the class the task of summing all integers from 1 to 100, hoping for an hour of quiet. Gauss had the answer in seconds: 5050.
His insight: pair the first term with the last, the second with the second-to-last, and so on. Each pair sums to 101. There are 50 pairs. So \(50 \times 101 = 5050\).
This is not a shortcut — it is the formula. \(S_{100} = \frac{100}{2}(1 + 100) = 50 \times 101 = 5050\).
The formula encodes the same pairing, for any arithmetic series. That is what elegant mathematical thinking looks like: notice a structure, use it, and get there in seconds instead of hours.
25.2.2 Geometric series
A geometric sequence has a first term \(a_1\) and a common ratio \(r\): each term is the previous term times \(r\). The terms are:
\[a_1,\quad a_1 r,\quad a_1 r^2,\quad \ldots,\quad a_1 r^{n-1}\]
The sum of the first \(n\) terms is
\[S_n = a_1 + a_1 r + a_1 r^2 + \cdots + a_1 r^{n-1}\]
The trick here is to multiply both sides by \(r\) — each term shifts up by one power, almost matching the original sum:
\[r S_n = a_1 r + a_1 r^2 + \cdots + a_1 r^{n-1} + a_1 r^n\]
Subtracting the second line from the first cancels all but the first and last terms:
\[S_n - r S_n = a_1 - a_1 r^n\]
\[\boxed{S_n = \frac{a_1(1 - r^n)}{1 - r}} \qquad (r \neq 1)\]
The form with \((1 - r^n)\) in the numerator is the convention; the equivalent form \(S_n = \frac{a_1(r^n - 1)}{r - 1}\) is algebraically identical, but the first is easier to read when \(|r| < 1\).
When \(r = 1\) all terms are equal to \(a_1\), so \(S_n = n \cdot a_1\). That case doesn’t need the formula.
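As a quick sketch in code, with the \(r = 1\) case handled separately (the function name is illustrative):

```javascript
// Sum of the first n terms of a geometric series with first term a1, ratio r.
function geometricSum(a1, r, n) {
  if (r === 1) return n * a1;               // all n terms equal a_1
  return (a1 * (1 - Math.pow(r, n))) / (1 - r);
}

console.log(geometricSum(1, 2, 10)); // 1 + 2 + 4 + ... + 512 = 1023
console.log(geometricSum(5, 1, 4));  // 5 + 5 + 5 + 5 = 20
```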
25.2.3 Infinite geometric series
Here is something that sounds impossible: add infinitely many numbers and get a finite result.
Look at the partial sums of the series \(1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots\):
\[S_1 = 1,\quad S_2 = 1.5,\quad S_3 = 1.75,\quad S_4 = 1.875,\quad S_5 = 1.9375,\ \ldots\]
The sums are getting closer to 2 — but never exceeding it, and never quite reaching it. The terms are shrinking. Each new term adds less than the last. The sum is converging toward a fixed value.
Look at the formula \(S_n = \frac{a_1(1 - r^n)}{1 - r}\) when \(|r| < 1\). As \(n\) grows, \(r^n\) gets closer and closer to 0 — eventually it is so small we treat it as 0:
\[S_\infty = \frac{a_1}{1 - r} \qquad (|r| < 1)\]
Read \(S_\infty\) as the sum of infinitely many terms.
For the series above: \(a_1 = 1\), \(r = \frac{1}{2}\). So \(S_\infty = \frac{1}{1 - \frac{1}{2}} = \frac{1}{\frac{1}{2}} = 2\). Exactly 2.
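The convergence is easy to watch numerically; each extra term halves the remaining gap to 2. A short sketch:

```javascript
// Partial sums of 1 + 1/2 + 1/4 + ... creep up toward 2 without reaching it.
let sum = 0;
for (let n = 0; n < 20; n++) {
  sum += Math.pow(0.5, n); // add the next term (1/2)^n
}
console.log(sum);     // 1.9999980926513672
console.log(2 - sum); // the remaining gap: exactly (1/2)^19
```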
What if \(|r| \geq 1\)? Then \(r^n\) does not approach 0 — it stays at 1 (when \(r = 1\)), grows without bound (when \(|r| > 1\)), or oscillates between \(1\) and \(-1\) (when \(r = -1\)). The terms never shrink to nothing, so the sum never settles. It diverges — it has no finite value. You can add as many terms as you like and the sum will keep growing (or oscillating further and further from zero).
This is worth sitting with for a moment. It’s genuinely strange that you can add infinitely many things and get a finite answer. The reason it works is not that infinity is somehow small — it’s that the terms shrink fast enough. Halving at every step is fast enough. Shrinking by one percent at every step is fast enough. Staying the same size is not.
For example, \(\displaystyle\sum_{i=1}^{4}(3i - 1) = 2 + 5 + 8 + 11 = 26\). You can check this with the arithmetic series formula: \(a_1 = 2\), \(d = 3\), \(n = 4\). \(S_4 = \frac{4}{2}(2 + 11) = 2 \times 13 = 26\). Same answer.
Changing the index. The index variable is a dummy — it doesn’t matter what letter you use. \(\displaystyle\sum_{i=1}^{n} a_i\) and \(\displaystyle\sum_{k=1}^{n} a_k\) are the same sum.
Splitting a sum. You can split a sum that has two parts:
\[\sum_{i=1}^{n} (a_i + b_i) = \sum_{i=1}^{n} a_i + \sum_{i=1}^{n} b_i\]
Factoring out a constant. A constant multiplier can move outside the sigma:
\[\sum_{i=1}^{n} c \cdot a_i = c \sum_{i=1}^{n} a_i\]
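These rules are easy to sanity-check numerically. A sketch (the `sigma` helper here is inline, not a library function):

```javascript
// Evaluate sum_{i=lo}^{hi} term(i).
const sigma = (lo, hi, term) => {
  let s = 0;
  for (let i = lo; i <= hi; i++) s += term(i);
  return s;
};

const n = 10;
// Splitting: sum of (a_i + b_i) equals sum of a_i plus sum of b_i.
const split =
  sigma(1, n, i => i + i * i) === sigma(1, n, i => i) + sigma(1, n, i => i * i);
// Factoring: sum of c·a_i equals c times sum of a_i.
const factor = sigma(1, n, i => 7 * i) === 7 * sigma(1, n, i => i);
console.log(split, factor); // true true
```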
25.3 Worked examples
Example 1 (Finance). A car loan of $12 000 charges 0.5% interest per month. You plan to repay in a lump sum after exactly 24 months — no intermediate payments, interest compounds monthly. What is the total you owe at the end?
Each month the balance is multiplied by \(1.005\). After 24 months:
\[A = 12000 \times 1.005^{24} \approx \$13\,525.92\]
Now ask: what is the total interest paid? That is \(A - 12000 \approx \$1\,525.92\).
The balance itself is a geometric sequence with first term \(12000\) and ratio \(1.005\). The total interest is what the geometric growth added over 24 steps.
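A month-by-month loop confirms the closed-form figure:

```javascript
// Grow a $12,000 balance by 0.5% per month for 24 months.
let balance = 12000;
for (let month = 0; month < 24; month++) {
  balance *= 1.005; // one month of compound interest
}
console.log(balance.toFixed(2));             // balance owed after 24 months
console.log((balance - 12000).toFixed(2));   // total interest paid
```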
Example 2 (Computing). Merge sort splits an array of \(n\) elements in half repeatedly, sorts each half, and merges. The merging step at each level does work proportional to \(n\). The number of levels is \(\log_2 n\). The total work can be written as:
\[T = n + n + n + \cdots \quad (\log_2 n \text{ terms})\]
This is an arithmetic series with all terms equal to \(n\):
\[T = n \cdot \log_2 n\]
This is the source of the famous \(O(n \log n)\) complexity of merge sort. Each level of the recursion tree contributes \(n\) operations; summing \(\log_2 n\) such levels gives the total.
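The level-by-level count can also be written as a recurrence: the work at size \(n\) is \(n\) for the merge plus the work of two half-size problems. A sketch, assuming \(n\) is a power of two (`work` is an illustrative name):

```javascript
// T(n) = n + 2·T(n/2), T(1) = 0: total merge work for an array of size n.
function work(n) {
  if (n <= 1) return 0;       // a single element needs no merging
  return n + 2 * work(n / 2); // this level's merge plus the two halves
}

console.log(work(8));               // 24 = 8 · log2(8)
console.log(work(1024), 1024 * 10); // both 10240: T(n) = n · log2(n)
```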
Example 3 (Science). Zeno’s paradox resolved. A runner is 8 metres from a wall. She runs to the halfway point (4 m), then halfway again (2 m), then halfway again (1 m), and so on. The distances form a geometric sequence:
\[4,\quad 2,\quad 1,\quad \tfrac{1}{2},\quad \ldots \qquad \left(a_1 = 4,\; r = \tfrac{1}{2}\right)\]
The infinite sum of all the halfway-steps is exactly 8 metres — the distance to the wall. The paradox dissolves: infinitely many steps do not require infinite time or distance. They require exactly the right amount of both.
Example 4 (Engineering). A simple periodic signal can be approximated by a partial sum of sine waves. Consider the first four odd harmonics of a square wave at frequency \(f\):
\[s(t) \approx \sum_{k=0}^{3} \frac{1}{2k+1}\sin\bigl((2k+1) \cdot 2\pi f t\bigr)\]
Expanding the first four terms (\(k = 0, 1, 2, 3\)):
\[s(t) \approx \sin(2\pi f t) + \frac{1}{3}\sin(6\pi f t) + \frac{1}{5}\sin(10\pi f t) + \frac{1}{7}\sin(14\pi f t)\]
The coefficients \(1, \frac{1}{3}, \frac{1}{5}, \frac{1}{7}\) shrink, and adding more terms gives a closer approximation to the square wave. This is Fourier series: decomposing a complex signal as an infinite sum of simple ones. The tool you need to make sense of it is exactly the infinite series convergence you just learned.
You might notice the coefficients \(1, \frac{1}{3}, \frac{1}{5}, \ldots\) shrink slowly — the series \(1 + \frac{1}{3} + \frac{1}{5} + \cdots\) does diverge on its own (like the harmonic series). But in the Fourier series each coefficient multiplies an oscillating sine function. Those sines partially cancel each other out, which is what allows the series to converge. The full explanation requires calculus.
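You can see the cancellation concretely at the quarter-period \(t = \frac{1}{4f}\): there every sine is \(\pm 1\), and the partial sum becomes the alternating series \(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots\), which converges to \(\frac{\pi}{4}\). A numerical sketch:

```javascript
// Partial Fourier sum of the square wave, evaluated at time t.
function squareWavePartial(t, f, terms) {
  let s = 0;
  for (let k = 0; k < terms; k++) {
    s += Math.sin((2 * k + 1) * 2 * Math.PI * f * t) / (2 * k + 1);
  }
  return s;
}

const f = 1;
const t = 0.25; // quarter period: every sine evaluates to +1 or -1
console.log(squareWavePartial(t, f, 4));    // ≈ 0.7238 (1 - 1/3 + 1/5 - 1/7)
console.log(squareWavePartial(t, f, 1000)); // approaches π/4 ≈ 0.7854
```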
25.4 Where this goes
The phrase “approaches a finite value as \(n \to \infty\)” was used informally here. Limits and continuity (Vol 5, Ch 1) makes that precise: what does it mean, formally, for a sequence to converge to a number? The epsilon-delta definition is the rigorous version of the intuition you’ve been building. Every infinite series you have seen in this chapter is, underneath, a limit.
Integral calculus (Vol 5, Ch 3) connects series to continuous accumulation. A Riemann sum — the standard way to define the integral — is a series: you divide an area into strips, compute the area of each strip, and sum. As the number of strips goes to infinity, the series converges to the exact area. The passage from “finite sum” to “integral” is the same passage from “series” to “infinite series” you made in this chapter. Fourier series (Vol 7) is the direct destination of example 4: a systematic way to represent any periodic function as an infinite sum of sines and cosines. Every result in Fourier analysis stands on the geometric series convergence you proved here.
Where this shows up
A mortgage calculator computes the total repayment using the geometric series formula for present value of an annuity — every bank does this calculation thousands of times a day.
Merge sort, binary search, and most divide-and-conquer algorithms have costs that are sums of a geometric series — this is why their Big-O complexity involves logarithms.
Numerical methods for differential equations (Runge-Kutta, finite differences) rely on Taylor series to approximate function values — an infinite polynomial series truncated at a practical number of terms.
Audio compression (MP3, AAC) works by computing a Fourier series decomposition, discarding components the ear cannot hear, and reconstructing the signal from the remainder.
25.5 Exercises
These are puzzles. Each has a clean numerical answer, but the interesting part is identifying the series type and setting up the formula before you calculate.
Exercise 1. The first term of an arithmetic series is 5, the common difference is 4, and there are 20 terms. Find the sum.
Code
{
  const el = makeStepperHTML(1, [
    { op: "Identify the values", eq: "a_1 = 5,\\quad d = 4,\\quad n = 20", note: null },
    { op: "Write the sum formula", eq: "S_n = \\frac{n}{2}\\bigl(2a_1 + (n-1)d\\bigr)", note: "Use this form when you know a₁ and d but not the last term." },
    { op: "Substitute", eq: "S_{20} = \\frac{20}{2}\\bigl(2 \\times 5 + 19 \\times 4\\bigr) = 10\\bigl(10 + 76\\bigr)", note: null },
    { op: "Simplify", eq: "S_{20} = 10 \\times 86 = 860", note: null },
    { op: "Check", eq: "a_{20} = 5 + 19 \\times 4 = 81;\\quad S_{20} = \\frac{20}{2}(5 + 81) = 10 \\times 86 = 860 \\checkmark", note: null },
  ]);
  return el;
}
Exercise 2. A geometric series has first term 3 and common ratio 2. Find the sum of the first 8 terms.
Code
{
  const el = makeStepperHTML(2, [
    { op: "Identify the values", eq: "a_1 = 3,\\quad r = 2,\\quad n = 8", note: null },
    { op: "Write the sum formula", eq: "S_n = \\frac{a_1(1 - r^n)}{1 - r}", note: null },
    { op: "Substitute", eq: "S_8 = \\frac{3(1 - 2^8)}{1 - 2} = \\frac{3(1 - 256)}{-1}", note: null },
    { op: "Simplify numerator", eq: "S_8 = \\frac{3 \\times (-255)}{-1} = \\frac{-765}{-1}", note: "Dividing a negative by a negative gives a positive." },
    { op: "Compute", eq: "S_8 = 765", note: null },
    { op: "Check", eq: "3 + 6 + 12 + 24 + 48 + 96 + 192 + 384 = 765 \\checkmark", note: null },
  ]);
  return el;
}
Exercise 3. A bouncing ball is dropped from 10 m. Each bounce reaches 60% of the previous height. Find the total distance travelled by the ball before it comes to rest.
(Hint: the ball travels down, up, down, up, … Be careful to count both the downward and upward legs of each bounce.)
Code
{
  const el = makeStepperHTML(3, [
    { op: "Set up the distances", eq: "\\text{Down: } 10 + 6 + 3.6 + \\cdots;\\quad \\text{Up: } 6 + 3.6 + 2.16 + \\cdots", note: "The first drop (10 m) has no matching upward leg, so split the sum." },
    { op: "Sum the downward legs", eq: "S_{\\text{down}} = \\frac{10}{1 - 0.6} = \\frac{10}{0.4} = 25 \\text{ m}", note: "Infinite geometric series: a₁ = 10, r = 0.6." },
    { op: "Sum the upward legs", eq: "S_{\\text{up}} = \\frac{6}{1 - 0.6} = \\frac{6}{0.4} = 15 \\text{ m}", note: "The ball first rises to 6 m after the first bounce." },
    { op: "Total distance", eq: "S_{\\text{total}} = 25 + 15 = 40 \\text{ m}", note: null },
    { op: "Check (alternative formula)", eq: "S_{\\text{total}} = 10 + 2 \\times \\frac{6}{1-0.6} = 10 + 30 = 40 \\text{ m} \\checkmark", note: "Adding the initial drop once and all bounce arcs (up + down) twice." },
  ]);
  return el;
}
Exercise 4. Evaluate the sum \(\displaystyle\sum_{i=1}^{6}(2i + 3)\).
(First expand and add term by term to check your answer, then verify using the arithmetic series formula.)
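If you want to check the expansion by machine, a brute-force loop will do (run it only after trying by hand):

```javascript
// Expand sum_{i=1}^{6} (2i + 3) term by term.
let total = 0;
const terms = [];
for (let i = 1; i <= 6; i++) {
  terms.push(2 * i + 3); // record the i-th term
  total += 2 * i + 3;    // add it to the running sum
}
console.log(terms.join(" + "), "=", total);
```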
Exercise 5. An apprentice electrician earns $18 per hour in their first year and gets a $1.50 per hour raise every year. They plan to work in this trade for 12 years. Their annual hours are 1820 each year. What are their total earnings over the 12 years?
(Note: compute total annual earnings for each year first, then sum the 12 annual totals.)
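As with the previous exercise, a loop following the hint can verify your hand calculation:

```javascript
// Year y hourly wage: $18 in year 1, plus $1.50 for each later year.
// Each year the apprentice works 1820 hours.
let earnings = 0;
for (let year = 1; year <= 12; year++) {
  const hourly = 18 + 1.5 * (year - 1);
  earnings += hourly * 1820; // total pay for this year
}
console.log(earnings);
```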
Exercise 6. Show that the infinite series \(\displaystyle\sum_{k=1}^{\infty} \frac{1}{4^k}\) converges, and find its sum.
Then explain in one sentence why the harmonic series \(1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots\) does not have a finite sum. (Hint: think about how quickly the terms are shrinking compared to a geometric series.)
Code
{
  const el = makeStepperHTML(6, [
    { op: "Write the first few terms", eq: "\\frac{1}{4} + \\frac{1}{16} + \\frac{1}{64} + \\cdots", note: null },
    { op: "Identify a₁ and r", eq: "a_1 = \\frac{1}{4},\\quad r = \\frac{1}{4}", note: "Each term is one-quarter of the previous term." },
    { op: "Check convergence condition", eq: "|r| = \\frac{1}{4} < 1 \\implies \\text{converges}", note: null },
    { op: "Apply the infinite sum formula", eq: "S_{\\infty} = \\frac{a_1}{1 - r} = \\frac{\\frac{1}{4}}{1 - \\frac{1}{4}} = \\frac{\\frac{1}{4}}{\\frac{3}{4}} = \\frac{1}{3}", note: null },
    { op: "Reasoning about the harmonic series", eq: "\\frac{a_{k+1}}{a_k} = \\frac{1/(k+1)}{1/k} = \\frac{k}{k+1} \\to 1 \\text{ as } k \\to \\infty", note: "The harmonic series is not geometric — there is no fixed ratio r. The ratio between consecutive terms is k/(k+1), which approaches 1 as k grows. The terms shrink, but more and more slowly — not fast enough to produce a finite sum, unlike a geometric series where |r| is fixed below 1." },
    { op: "Check", eq: "\\frac{1}{4} + \\frac{1}{16} + \\frac{1}{64} + \\frac{1}{256} \\approx 0.332 \\approx \\frac{1}{3} \\checkmark", note: null },
  ]);
  return el;
}