Series

Let \(a_n\) be a sequence. We define another sequence, called the partial sums, \(S_n\), by induction: \(S_0=a_0\) and \(S_{n+1}=S_n+a_{n+1}\). Intuitively, we think of these sums as approaching the sum of all the elements of \(a_n\). If the sequence of partial sums converges, we call its limit the sum of \(a_n\), and denote it by \(\sum a_n\).

As a simple example, if \(a_n=1/2^n\), then \(S_n=2-1/2^n\) (by induction). Then \(S_n\) converges to \(2\) and \(\sum 1/2^n=2\).
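To see the definitions in action, here is a small Python sketch (the helper name `partial_sums` is mine, not from the text) checking both the closed form \(S_n=2-1/2^n\) and the convergence to \(2\):

```python
# Partial sums: S_0 = a_0, S_{n+1} = S_n + a_{n+1}.
def partial_sums(a, count):
    """Return [S_0, S_1, ..., S_{count-1}] for the sequence a(n)."""
    sums, total = [], 0.0
    for n in range(count):
        total += a(n)
        sums.append(total)
    return sums

S = partial_sums(lambda n: 1 / 2**n, 50)
assert abs(S[10] - (2 - 1 / 2**10)) < 1e-12   # the closed form 2 - 1/2^n
assert abs(S[-1] - 2) < 1e-12                 # approaching the sum, 2
```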

A “series” is the sequence of partial sums of a sequence, and so we say that the series \(a_n\) converges if \(S_n\) converges.

We note that although the sum is not a tail property, the convergence of the series is: if \(S_n\) converges, after all, so does \(S_{n+N}\), since sequence convergence is a tail property, and then so does \(S_{n+N}-S_N\) – but this is the sequence of partial sums of the \(N\)-tail.

We also note that since for a sequence to converge it must be Cauchy, if \(S_n\) converges then for every \(\epsilon\) there is an \(N\) such that if \(n,m>N\) then \(|S_n-S_m|<\epsilon\). In particular, taking \(m=n-1\), we get \(|a_n|<\epsilon\) for all \(n>N+1\), and therefore \(a_n\) converges to \(0\). This is our first negative criterion of series convergence: if a sequence does not converge to \(0\), its series does not converge.
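As a quick illustration of the negative criterion (a Python sketch, not part of the text): the terms \((-1)^n\) do not converge to \(0\), and indeed the partial sums never settle.

```python
# a_n = (-1)^n does not converge to 0, so the series cannot converge:
# the partial sums oscillate between 1 and 0 forever.
terms = [(-1) ** n for n in range(100)]
total, sums = 0, []
for a in terms:
    total += a
    sums.append(total)
assert set(sums) == {0, 1}  # never settles on a limit
```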

Assume \(a_n\) is a sequence such that the series \(|a_n|\) converges, and call the partial sum sequence for \(|a_n|\) by the name of \(T_n\). Since \(T_n\) converges, it is Cauchy. We can show by induction on \(k\) that \(|S_{n+k}-S_n|\leq |T_{n+k}-T_n|\) – the induction step is the triangle inequality. Therefore \(S_n\) is Cauchy as well, and so converges. When the series \(|a_n|\) converges, we say that \(a_n\) converges absolutely; we have just shown that absolute convergence implies convergence.
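A numerical check of this argument (a Python sketch; the choice \(a_n=(-1)^n/2^n\) is mine): the series of absolute values is the geometric series above, and the key inequality \(|S_{n+k}-S_n|\leq T_{n+k}-T_n\) holds term by term.

```python
# a_n = (-1)^n / 2^n converges absolutely: |a_n| = 1/2^n sums to 2.
# Check |S_{n+k} - S_n| <= T_{n+k} - T_n, and that S_n converges
# (the limit of this geometric series is 1/(1 + 1/2) = 2/3).
a = [(-1) ** n / 2**n for n in range(60)]
S = [sum(a[: n + 1]) for n in range(60)]                   # partial sums of a_n
T = [sum(abs(t) for t in a[: n + 1]) for n in range(60)]   # partial sums of |a_n|
assert all(
    abs(S[n + k] - S[n]) <= T[n + k] - T[n] + 1e-15   # tiny float slack
    for n in range(50)
    for k in range(1, 10)
)
assert abs(S[-1] - 2 / 3) < 1e-12
```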

Assume \(0\leq a_n\leq b_n\) and the series \(b_n\) converges. Then the sequence of \(b\) partial sums is bounded. Since the sequence of \(b\) partial sums is, at each point, greater than or equal to the sequence of \(a\) partial sums, we get that the partial sum sequence for \(a\) is bounded. We also see that the sequence of partial sums for \(a\) is monotonically increasing, since \(0\leq a_n\). Therefore, the series \(a_n\) converges. In particular, if \(|a_n|\leq |b_n|\) and \(b_n\) converges absolutely, so does \(a_n\).

This is our first criterion of positive convergence. As an application, note that if \(0\leq g<1\), the series \(ng^n\) converges. Take \(h=(1+\sqrt{g})/2\); then \(h^2=(g+2\sqrt{g}+1)/4>g\), but since \(h<1\), we can write \(h=1/(1+a)\) with \(a>0\). We note that \((1+a)^n\geq 1+na\) (by induction, for example) and so \(h^n<1/(na)\), and so \(nh^n<1/a\). In particular, \(ng^n<nh^{2n}=nh^nh^n<h^n/a\). Since the series \(h^n\) converges, so does the series \(h^n/a\), and by the criterion above, the series \(ng^n\) also converges.

This is an example of a so-called power series: let \(a_n\) be a sequence. For any \(x\), the series \(a_nx^n\) is called the power series of \(a_n\) at \(x\). We have shown, above, that the power series \(\sum nx^n\) converges for every \(|x|<1\). As another example, take the inverse factorial, \(a_n=1/n!\). Given any \(x\), choose a natural \(N>|x|\). Then for any natural \(n\), \(|a_{n+N}x^{n+N}|<|a_Nx^N|(|x|/N)^n\), and since \(|x|/N<1\), the \(N\)-tail is dominated by a convergent geometric series. Since series convergence is a tail property, we see that the series \(a_nx^n\) converges, so that the power series of the inverse factorial converges at every point!
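A numerical check of the tail bound (a Python sketch; the choices \(x=5\), \(N=6\) are mine):

```python
import math

# The tail bound for the inverse factorial at x = 5: with N = 6 > |x|,
# |a_{N+n} x^{N+n}| < |a_N x^N| * (|x|/N)^n, a convergent geometric bound.
x, N = 5.0, 6

def term(n):
    return x**n / math.factorial(n)

assert all(term(N + n) < term(N) * (x / N) ** n for n in range(1, 40))
# The partial sums therefore converge; the limit happens to be the
# familiar e^x, used here only as a reference value.
S = sum(term(n) for n in range(80))
assert abs(S - math.exp(x)) < 1e-9
```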

For further analysis of power series, it is useful to have the following result: if \(\Sigma a_n\) converges absolutely, and \(k_n\) is a sequence of natural numbers in which every natural number appears exactly once, then \(\Sigma a_{k_n}\) also converges, and to the same sum. Since \(k_n\) includes every natural number, for every natural number we can ask at which place it appears; we call that sequence \(l_n\). Since \(a_n\) converges absolutely, given an \(\epsilon\), there is an \(N\) such that \(|S_{n+j}-S_n|<\epsilon\) for all \(n>N\) and \(j>0\), where \(S\) is the sequence of partial sums of \(|a_n|\). Treating that as a sequence in \(j\), and looking at the limit, we can see that \(\Sigma_{j>0} |a_{n+j}|\leq \epsilon\). If we take an \(M\) large enough that \(n>M\) implies \(k_n>N\), and also \(M\geq l_n\) for every \(n\leq N\), then the partial sums \(S[a_{k_n}]\) and \(S[a_n]\) past \(M\) differ only in terms of index greater than \(N\), so \(|S[a_{k_n}]-S[a_n]|<2\epsilon\), and therefore \(|\Sigma a_{k_n}-\Sigma a_n|\leq 2\epsilon\). Since this is true for every \(\epsilon\), \(|\Sigma a_{k_n}-\Sigma a_n|=0\).
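A numerical illustration (a Python sketch; this shuffles only a finite truncation, so it merely illustrates the statement rather than testing it fully):

```python
import random

# Rearrangement: shuffle the first 60 terms of the absolutely
# convergent a_n = (-1)^n / 2^n; the sum is unchanged.
random.seed(0)
a = [(-1) ** n / 2**n for n in range(60)]
k = list(range(60))
random.shuffle(k)                        # every index appears exactly once
original = sum(a)
rearranged = sum(a[i] for i in k)
assert abs(original - rearranged) < 1e-12
assert abs(rearranged - 2 / 3) < 1e-12   # both match the sum, 2/3
```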

This is useful for power series in the following way: if we have power series \(\Sigma a_n x^n\) and \(\Sigma b_n x^n\), what happens when we multiply them? Inside the common region of convergence, the product of the two sums is well defined, and \(S[a_n x^n]S[b_n x^n]\) converges to it. So if we enumerate the products \(a_ib_jx^{i+j}\) with the right \(k_n\), and move to a subsequence of the partial sums, we get that \(\Sigma [a_0b_n+a_1b_{n-1}+...+a_{n-1}b_1+a_nb_0]x^n\) is the product.
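A numerical check of the grouped product (a Python sketch; the choices \(a_n=b_n=1\) and \(x=1/2\) are mine, so both series are the geometric series with sum \(1/(1-x)=2\)):

```python
# Cauchy product with a_n = b_n = 1 at x = 1/2: the grouped
# coefficients are c_n = a_0*b_n + ... + a_n*b_0 = n + 1, and
# summing c_n * x^n should match the product of the two sums.
x, count = 0.5, 200
a = [1.0] * count
c = [sum(a[i] * a[n - i] for i in range(n + 1)) for n in range(count)]
assert c[:5] == [1.0, 2.0, 3.0, 4.0, 5.0]          # c_n = n + 1
lhs = sum(c[n] * x**n for n in range(count))       # the grouped series
rhs = sum(x**n for n in range(count)) ** 2         # product of the sums
assert abs(lhs - rhs) < 1e-9
```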