Convergence acceleration of infinite sequences

Infinite sequences often arise in mathematics, especially as the finite partial sums of an infinite series. In simple cases it is possible to compute the limit (if it exists) of a sequence analytically. However, sequences whose limits are very difficult, or not known how, to compute analytically are not uncommon in mathematics, nor in applications in science and engineering. In particular, when the sequence elements are partial sums that include ever more terms of an infinite series, each sequence element becomes more time-consuming to compute than the previous one. I recently came across interesting review articles (by Weniger) on techniques for accelerating the convergence of infinite sequences.

Given a convergent infinite sequence a_0,a_1,a_2,\ldots of real numbers, the convergence rate may be characterized by

\displaystyle \rho = \lim_{n\to\infty} \frac{a_{n+1} - a}{a_n - a}

where a is the limit. If |\rho| < 1, the sequence is said to be linearly convergent. If \rho = 1, the sequence is said to be logarithmically convergent, and the case |\rho| > 1 does not occur for a convergent sequence. (In fact, the techniques mentioned here are somewhat more general, as they may also be used to assign limits to certain divergent sequences. My interest here is restricted to convergent sequences, though.) The idea now is to transform the sequence into a new sequence with better convergence properties. A linear Ansatz of the form

\displaystyle b_n = \sum_{k=0}^n \mu_{nk} a_k

is the simplest approach. Weniger mentions that such linear sequence transformations are well characterized mathematically, with known necessary and sufficient conditions under which the new sequence converges to the same limit as the original one. Nonlinear sequence transformations, while not as well characterized mathematically, can be more powerful and succeed where linear transformations fail. There are obviously enormously many possible forms a nonlinear Ansatz may take, but since no single method can accelerate the convergence of all sequences, a natural approach is to use available information about the remainder terms as the starting point. Writing each sequence element as the sum of the limit and a remainder term,

a_n = a + r_n,

the strategy is to seek a transformation that minimizes the remainder terms. A simple remainder-term model is

(R1) a_n = a + c\lambda^n.

The convergence rate then becomes \rho = \lambda. Since there are three unknowns on the right-hand side of Eq. (R1), the three sequence elements a_n, a_{n+1}, a_{n+2} suffice to estimate the model parameters. With the notation c' = c \lambda^n one may write

a_{n+2} = a + c' \lambda^2,
a_{n+1} = a + c' \lambda,
a_n = a + c',

from which it follows that

\displaystyle \lambda = \frac{c' \lambda(\lambda-1)}{c' (\lambda-1)} = \frac{a_{n+2} - a_{n+1}}{a_{n+1} - a_n} = \frac{\Delta a_{n+1}}{\Delta a_n},
\displaystyle c' = \frac{\Delta a_n}{\lambda-1},
\displaystyle a = a_n - c' = a_n - \frac{\Delta a_n}{\Delta a_{n+1}/\Delta a_n - 1} = a_n - \frac{(\Delta a_n)^2}{\Delta^2 a_n}.

Thus, the transformed sequence becomes b_n = a_n - (\Delta a_n)^2 / \Delta^2 a_n. This is Aitken’s \Delta^2 formula. It obviously works well for linearly convergent sequences that are well approximated by the model in Eq. (R1). The above derivation fails for |\rho| = |\lambda| = 1, and Aitken’s formula may be expected not to work well for logarithmically convergent sequences. An interesting feature is that Aitken’s formula may easily be iterated by transforming already transformed sequences. Generalizing the remainder-term model to a sum of several exponential terms, and requiring that not only the sequence transformation but also its iterations minimize the remainder terms, leads to Wynn’s epsilon algorithm.
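To make this concrete, here is a minimal Python sketch of Aitken’s \Delta^2 formula and its iteration (the test series, the slowly convergent alternating harmonic series, and the iteration depth are my own choices for illustration):

```python
import math

def aitken(s):
    """One pass of Aitken's Delta^2: b_n = a_n - (da_n)^2 / d^2 a_n."""
    return [s[n] - (s[n + 1] - s[n]) ** 2 / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

# Partial sums of 1 - 1/2 + 1/3 - ..., which converge (slowly) to log 2.
s = [sum((-1) ** k / (k + 1) for k in range(n + 1)) for n in range(10)]
t = aitken(s)   # transformed sequence
u = aitken(t)   # iterate: transform the already transformed sequence
```

Each pass costs two sequence elements but typically gains digits: with only ten partial sums, the twice-iterated transform here lands far closer to log 2 ≈ 0.6931 than the raw sums do.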

Another technique is illustrated by Wynn’s rho algorithm. This transformation is obtained by introducing a function S(x) which must reproduce the sequence at given sampling points, S(x_n) = a_n. For the purposes of the rho algorithm, the sequence of sampling points should tend to infinity as n \to \infty. Though not a necessary assumption, the sampling points may for concreteness be taken to be x_n = n + \beta, as exemplified by Weniger. The general strategy is now to define a sequence of approximating functions S_k(x), each of which reproduces the sequence on as many initial sampling points as the function has free parameters. In addition, it should be easy to determine the limit of these approximating functions, S_k(x_n), as n \to \infty. In Wynn’s rho algorithm, rational functions of the form

\displaystyle S_k(x) = \frac{c_k x^k + \ldots + c_1 x + c_0}{d_k x^k + \ldots + d_1 x + d_0},

with the normalization d_k = 1, are chosen. It is simple to compute the limit of these functions, since S_k(x_n) = S_k(n+\beta) \to c_k/d_k as n \to \infty. There are 2k+1 independent parameters that need to be determined in the function S_k(x). Given as many elements of the sequence, it is then possible to determine all parameters and estimate the limit of the sequence by c_k/d_k. More specifically, the parameters are obtained from the linear equation system

c_k x_{n+j}^k + \ldots + c_1 x_{n+j} + c_0 = (d_k x_{n+j}^k + \ldots + d_1 x_{n+j} + d_0) a_{n+j},

with j = 0,1,\ldots,2k. It turns out that the limit estimates resulting from this model can be expressed compactly as follows

\rho^{(n)}_{-1} = 0, \quad \rho^{(n)}_0 = a_n,

\displaystyle \rho^{(n)}_{k+1} = \rho^{(n+1)}_{k-1} + \frac{x_{n+k+1} - x_n}{\rho^{(n+1)}_k - \rho^{(n)}_k}.

The zeroth-order estimates \rho^{(n)}_0 are obviously identical to the sequence elements. The even-order quantities \rho^{(n)}_{2k} hopefully provide increasingly better estimates of the limit, while the odd-order quantities \rho^{(n)}_{2k+1} are merely auxiliary quantities rather than limit estimates.
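A minimal Python sketch of this recursion (the choice \beta = 1 and the logarithmically convergent test series \sum 1/k^2 \to \pi^2/6 are my own, for illustration):

```python
import math

def wynn_rho(s):
    """Wynn's rho algorithm for sampling points x_n = n + beta, so that
    x_{n+k+1} - x_n = k + 1. Returns the highest-order even estimate."""
    prev = [0.0] * (len(s) + 1)   # column rho_{-1}: all zeros
    curr = list(s)                # column rho_0: the sequence itself
    best = s[-1]
    k = 0
    while len(curr) > 1:
        # rho_{k+1}^{(n)} = rho_{k-1}^{(n+1)} + (k + 1) / (rho_k^{(n+1)} - rho_k^{(n)})
        nxt = [prev[n + 1] + (k + 1) / (curr[n + 1] - curr[n])
               for n in range(len(curr) - 1)]
        prev, curr = curr, nxt
        k += 1
        if k % 2 == 0:            # only even orders estimate the limit
            best = curr[-1]
    return best

# Logarithmically convergent example: partial sums of sum 1/k^2 -> pi^2/6.
s = [sum(1.0 / j ** 2 for j in range(1, n + 2)) for n in range(10)]
```

With ten partial sums, whose last raw element is still off by roughly 0.1, the rho estimate should agree with \pi^2/6 to several digits.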

Wynn’s rho algorithm is successful for many logarithmically convergent sequences, but fails to accelerate the convergence of linearly convergent sequences. An attempt to remedy this, by combining Wynn’s rho algorithm with Wynn’s epsilon algorithm for linearly convergent sequences, leads to Brezinski’s theta algorithm. The latter has apparently proven quite successful in practice, being able to accelerate both linearly and logarithmically convergent sequences.
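For comparison, Wynn’s epsilon algorithm, one of the two ingredients combined in the theta algorithm, has the same triangular recursive structure as the rho algorithm but with the numerator x_{n+k+1} - x_n replaced by 1. A minimal sketch (the slowly convergent alternating test series is my own choice):

```python
import math

def wynn_epsilon(s):
    """Wynn's epsilon algorithm:
    eps_{k+1}^{(n)} = eps_{k-1}^{(n+1)} + 1 / (eps_k^{(n+1)} - eps_k^{(n)})."""
    prev = [0.0] * (len(s) + 1)   # column eps_{-1}: all zeros
    curr = list(s)                # column eps_0: the sequence itself
    best = s[-1]
    k = 0
    while len(curr) > 1:
        nxt = [prev[n + 1] + 1.0 / (curr[n + 1] - curr[n])
               for n in range(len(curr) - 1)]
        prev, curr = curr, nxt
        k += 1
        if k % 2 == 0:            # even orders are the limit estimates
            best = curr[-1]
    return best

# Alternating example: partial sums of 1 - 1/2 + 1/3 - ... -> log 2.
s = [sum((-1) ** k / (k + 1) for k in range(n + 1)) for n in range(10)]
```

On this alternating series the epsilon algorithm is dramatically effective: ten partial sums already yield the limit to better than six digits, whereas the raw sums are only good to about one.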

Nevertheless, while sequence transformations are very useful, no single transformation can be expected to accelerate convergence in all cases. Sidi gives a few interesting examples of infinite series which, due to the sign pattern of the summed terms, are not accelerated by, e.g., the theta algorithm. One example is the sequence of partial sums

\displaystyle a_n = \sum_{k=0}^n \frac{(-1)^{\lfloor k/2 \rfloor}}{k+1} \to \frac{\pi}{4} + \frac{\log(2)}{2},

with the sign pattern ++--++--\ldots.
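A quick numerical check of this example (a sketch; the truncation points are my own choices):

```python
import math

def sidi_partial_sum(n):
    """Partial sum a_n of sum_k (-1)^floor(k/2) / (k+1),
    whose terms follow the sign pattern + + - - + + - - ..."""
    return sum((-1) ** (k // 2) / (k + 1) for k in range(n + 1))

limit = math.pi / 4 + math.log(2) / 2   # roughly 1.1320
```

The raw partial sums close in on the limit only at a rate of roughly 1/n, and, per Sidi, the ++-- sign pattern also defeats standard accelerators such as the theta algorithm.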

Much more can be said on this topic and on specific nonlinear sequence transformations. For example: (i) Weniger has interesting examples of diverse applications of these methods in science. (ii) Some of the methods can be generalized to sequences of vectors, which should be of interest for many iterative linear algebra methods, e.g. iterative solution of linear equation systems or eigenvalue equations. (iii) Delahaye and Germain-Bonne proved that there exists no universal accelerator for logarithmically convergent sequences.


P. R. Graves-Morris, D. E. Roberts, and A. Salam. The epsilon algorithm and related topics. J. Comp. Appl. Math. 122:51 (2000).

A. Sidi. A Challenging Test for Convergence Accelerators: Summation of a Series with a Special Sign Pattern. Appl. Math. E-Notes 6:225 (2006).

E. J. Weniger. Nonlinear sequence transformations for the acceleration of convergence and the summation of divergent series. Computer Phys. Rep. 10:191 (1989). eprint math/0306302 (very long review)

E. J. Weniger. Nonlinear Sequence Transformations: Computational Tools for the Acceleration of Convergence and the Summation of Divergent Series. eprint math/0107080 (shorter review)

