
Sequence transformation


In mathematics, a sequence transformation is an operator acting on a given space of sequences. Sequence transformations include linear mappings such as convolution with another sequence and resummation of a sequence; more generally, they are commonly used for series acceleration, that is, for improving the rate of convergence of a slowly convergent sequence or series. Sequence transformations are also commonly used to compute the antilimit of a divergent series numerically, and are used in conjunction with extrapolation methods.

Overview

Classical examples for sequence transformations include the binomial transform, Möbius transform, Stirling transform and others.

An important part of this article is about techniques for series acceleration, which are often applied in numerical analysis, where they are used to improve the speed of numerical integration. Series acceleration techniques may also be used, for example, to obtain a variety of identities on special functions. Thus, the Euler transform applied to the hypergeometric series gives some of the classic, well-known hypergeometric series identities.

Two classical techniques for series acceleration are Euler's transformation of series[1] and Kummer's transformation of series[2]. A variety of much more rapidly convergent and special-case tools have been developed in the 20th century, including Richardson extrapolation, introduced by Lewis Fry Richardson in the early 20th century but also known and used by Katahiro Takebe in 1722; the Aitken delta-squared process, introduced by Alexander Aitken in 1926 but also known and used by Takakazu Seki in the 18th century; the epsilon-algorithm given by Peter Wynn in 1956; the Levin u-transform; and the Wilf–Zeilberger–Ekhad method or WZ method.

For alternating series, several powerful techniques, offering convergence rates from $5.828^{-n}$ all the way to $17.93^{-n}$ for a summation of $n$ terms, are described by Cohen et al.[3]
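
For illustration (a sketch added here, not part of the original article), "Algorithm 1" of the Cohen–Villegas–Zagier paper cited above fits in a few lines of MATLAB; the choice of series (the Leibniz series for $\pi/4$) and the value of $n$ are assumptions made for this example:

% Cohen-Villegas-Zagier Algorithm 1 for an alternating series
% sum_{k>=0} (-1)^k a_k; here a_k = 1/(2k+1), so the sum is pi/4.
n = 10;                                 % number of terms used
a = 1 ./ (2*(0:n-1) + 1);               % a(k+1) holds the term a_k
d = (3 + sqrt(8))^n;  d = (d + 1/d)/2;
b = -1;  c = -d;  s = 0;
for k = 0:n-1
    c = b - c;
    s = s + c*a(k+1);
    b = (k + n)*(k - n)*b/((k + 1/2)*(k + 1));
end
s = s/d;                                % error decreases roughly like 5.828^(-n)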

Definitions

For a given sequence

$(s_n)_{n \in \mathbb{N}} = (s_0, s_1, s_2, \ldots),$

the transformed sequence is

$(s'_n)_{n \in \mathbb{N}} = \mathbf{T}\big((s_n)_{n \in \mathbb{N}}\big),$

where the members of the transformed sequence are usually computed from some finite number of members of the original sequence, i.e.

$s'_n = T(s_n, s_{n+1}, \ldots, s_{n+k})$

for some $k$ which often depends on $n$ (cf. e.g. Binomial transform). In the simplest case, the $s_n$ and the $s'_n$ are real or complex numbers. More generally, they may be elements of some vector space or algebra.
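
For illustration (a worked example added here, not part of the original article), the binomial transform mentioned parenthetically above computes each transformed member from the first $n + 1$ members of the original sequence, so the number of inputs grows with $n$:

$s'_n = \sum_{m=0}^{n} (-1)^m \binom{n}{m} s_m.$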

In the context of acceleration of convergence, the transformed sequence is said to converge faster than the original sequence if

$\lim_{n \to \infty} \frac{s'_n - \ell}{s_n - \ell} = 0,$

where $\ell$ is the limit of $(s_n)$, assumed to be convergent. In this case, convergence acceleration is obtained. If the original sequence is divergent, the sequence transformation acts as an extrapolation method to the antilimit $\ell$.
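
As a concrete illustration (an example added here, not part of the original article): if $s_n = \ell + 2^{-n}$ and $s'_n = \ell + 4^{-n}$, then

$\frac{s'_n - \ell}{s_n - \ell} = 2^{-n} \to 0,$

so the transformed sequence converges faster than the original one in the above sense.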

If the mapping $T$ is linear in each of its arguments, i.e., if

$s'_n = \sum_{m=0}^{k} c_m s_{n+m}$

for some constants $c_0, \ldots, c_k$, the sequence transformation is called a linear sequence transformation. Sequence transformations that are not linear are called nonlinear sequence transformations.
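
For example (an illustrative case, not from the original article), the averaging transformation

$s'_n = \tfrac{1}{2}(s_n + s_{n+1})$

is a linear sequence transformation with $k = 1$ and $c_0 = c_1 = \tfrac{1}{2}$; applied to a sequence of the form $s_n = \ell + (-1)^n a$, it yields the limit $\ell$ exactly.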

Application

Examples of such nonlinear sequence transformations are Padé approximants and Levin-type sequence transformations.

Nonlinear sequence transformations in particular often provide powerful numerical methods for the summation of divergent series or asymptotic series that arise, for instance, in perturbation theory, and may be used as highly effective extrapolation methods.

Aitken method

Main article: Aitken's delta-squared process

A simple nonlinear sequence transformation is the Aitken extrapolation or delta-squared method,

$\mathbf{T} : (s_n) \mapsto (s'_n),$

defined by

$s'_n = s_{n+2} - \frac{(s_{n+2} - s_{n+1})^2}{s_{n+2} - 2 s_{n+1} + s_n}.$

This transformation is commonly used to improve the rate of convergence of a slowly converging sequence; heuristically, it eliminates the largest part of the absolute error.
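
As an illustration (a sketch added here, not part of the original article), the formula above can be applied in MATLAB to the partial sums of the slowly convergent Leibniz series $1 - 1/3 + 1/5 - \cdots = \pi/4$; the variable names and the number of terms are choices made for this example only:

% Partial sums of the Leibniz series for pi/4 (slowly convergent).
N = 12;
s = cumsum((-1).^(0:N) ./ (2*(0:N) + 1));

% Aitken delta-squared along the sequence:
% s'_n = s_{n+2} - (s_{n+2} - s_{n+1})^2 / (s_{n+2} - 2 s_{n+1} + s_n).
sp = s(3:end) - (s(3:end) - s(2:end-1)).^2 ./ ...
     (s(3:end) - 2*s(2:end-1) + s(1:end-2));

disp([s(end) - pi/4, sp(end) - pi/4])   % the accelerated error is far smaller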

Minimum polynomial extrapolation

While Aitken's method is the most famous, it often fails for vector sequences. An effective method for vector sequences is minimum polynomial extrapolation (MPE). It is usually phrased in terms of the fixed-point iteration

$x_{k+1} = f(x_k).$

Given iterates $x_1, x_2, \ldots, x_{k+1}$ in $\mathbb{R}^n$, one constructs the matrix

$U = (x_2 - x_1, x_3 - x_2, \ldots, x_k - x_{k-1})$

whose columns are the $k - 1$ differences. Then, one computes the vector $c = -U^+ (x_{k+1} - x_k)$, where $U^+$ denotes the Moore–Penrose pseudoinverse of $U$. The number 1 is then appended to the end of $c$, and the extrapolated limit is

$s = \frac{X c}{\sum_{i=1}^{k} c_i},$

where $X = (x_2, x_3, \ldots, x_{k+1})$ is the matrix whose columns are the $k$ iterates starting at $x_2$.

The following four-line MATLAB code segment implements the MPE algorithm:

U = x(:,2:end-1) - x(:,1:end-2);        % columns are the differences x_{j+1} - x_j
c = -pinv(U)*(x(:,end) - x(:,end-1));   % least-squares coefficients via the pseudoinverse
c(end+1,1) = 1;                         % append 1 to the coefficient vector
s = (x(:,2:end)*c)/sum(c);              % extrapolated limit as a weighted combination
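
As a usage sketch (the iteration map, its coefficients, and the number of iterates are illustrative assumptions, not from the article), the segment above extrapolates the limit of a contractive linear fixed-point iteration:

% Hypothetical example: fixed-point iteration x <- A*x + b, fixed point (I - A)\b.
A = [0.5 0.2; 0.1 0.4];  b = [1; 2];
x = zeros(2, 8);                        % columns hold the iterates x_1, ..., x_8
for k = 1:7
    x(:,k+1) = A*x(:,k) + b;
end

U = x(:,2:end-1) - x(:,1:end-2);        % the four lines above, applied verbatim
c = -pinv(U)*(x(:,end) - x(:,end-1));
c(end+1,1) = 1;
s = (x(:,2:end)*c)/sum(c);              % s approximates the fixed point (I - A)\b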


References

  1. ^ Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 3, eqn 3.6.27". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 16. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
  2. ^ Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 3, eqn 3.6.26". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 16. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
  3. ^ Henri Cohen, Fernando Rodriguez Villegas, and Don Zagier, "Convergence Acceleration of Alternating Series", Experimental Mathematics, 9:1 (2000) page 3.
  • C. Brezinski and M. Redivo Zaglia, Extrapolation Methods. Theory and Practice, North-Holland, 1991.
  • G. A. Baker, Jr. and P. Graves-Morris, Padé Approximants, Cambridge U.P., 1996.