
Adaptive Simpson's method

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Artoria2e5 (talk | contribs) at 05:57, 28 September 2018 (Python). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Adaptive Simpson's method, also called adaptive Simpson's rule, is a method of numerical integration proposed by G.F. Kuncir in 1962.[1] It is probably the first recursive adaptive algorithm for numerical integration to appear in print,[2] although more modern adaptive methods based on Gauss–Kronrod quadrature and Clenshaw–Curtis quadrature are now generally preferred. Adaptive Simpson's method uses an estimate of the error we get from calculating a definite integral using Simpson's rule. If the error exceeds a user-specified tolerance, the algorithm calls for subdividing the interval of integration in two and applying adaptive Simpson's method to each subinterval in a recursive manner. The technique is usually much more efficient than composite Simpson's rule since it uses fewer function evaluations in places where the function is well-approximated by a cubic function.

A criterion for determining when to stop subdividing an interval, suggested by J.N. Lyness,[3] is

    |S(a, m) + S(m, b) − S(a, b)| < 15 ε

where [a, b] is an interval with midpoint m; S(a, b), S(a, m), and S(m, b) are the estimates given by Simpson's rule on the corresponding intervals; and ε is the desired tolerance for the interval.

Simpson's rule is an interpolatory quadrature rule which is exact when the integrand is a polynomial of degree three or lower. Using Richardson extrapolation, the more accurate Simpson estimate S(a, m) + S(m, b) for six function values is combined with the less accurate estimate S(a, b) for three function values by applying the correction (S(a, m) + S(m, b) − S(a, b))/15. The estimate thus obtained is exact for polynomials of degree five or less.
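This fifth-order exactness can be checked numerically. The sketch below (plain Python, independent of the sample implementations; all names are illustrative) applies one step of the extrapolation to f(x) = x⁵ on [0, 1], whose exact integral is 1/6:

```python
def simpson(f, a, b):
    """Plain Simpson's rule on [a, b] (three function values)."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

f = lambda x: x**5      # degree-5 polynomial; exact integral over [0, 1] is 1/6
a, b, m = 0.0, 1.0, 0.5

S1 = simpson(f, a, b)                      # coarse estimate S(a, b)
S2 = simpson(f, a, m) + simpson(f, m, b)   # refined estimate S(a, m) + S(m, b)
delta = S2 - S1
extrapolated = S2 + delta / 15             # Richardson-extrapolated estimate

print(S1, S2, extrapolated)  # the extrapolated value matches 1/6 to machine precision
```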

Sample implementations

A common implementation technique shown below is passing down f(a), f(b), f(m) along with the interval [a, b]. These values, used for evaluating S(a, b) at the parent level, are reused for the subintervals. Doing so cuts the cost of each recursive call from 6 to 2 evaluations of the input function. The stack space used grows linearly with the recursion depth.

Python

Here is an implementation of adaptive Simpson's method in Python.

from __future__ import division # python 2 compat
# "structured" adaptive version, translated from Racket
def simpsons_1call(f, a, fa, b, fb):
    """Evaluates Simpson's rule, also returning m and f(m) for reuse"""
    m = (a+b) / 2
    fm = f(m)
    return (m, fm, abs(b-a) / 6 * (fa + 4 * fm + fb))

def recursive_asr(f, a, fa, b, fb, eps, whole, m, fm):
    """
    Efficient recursive implementation of adaptive Simpson's rule.
    Function values at the start, middle, end of the intervals are retained.
    """
    lm, flm, left  = simpsons_1call(f, a, fa, m, fm)
    rm, frm, right = simpsons_1call(f, m, fm, b, fb)
    delta = left + right - whole
    if abs(delta) <= 15 * eps:
        return left + right + delta / 15
    return recursive_asr(f, a, fa, m, fm, eps/2, left , lm, flm) +\
           recursive_asr(f, m, fm, b, fb, eps/2, right, rm, frm)

def adaptive_simpsons_rule(f, a, b, eps):
    """Calculate integral of f from a to b with max error of eps."""
    fa, fb = f(a), f(b)
    m, fm, whole = simpsons_1call(f, a, fa, b, fb)
    return recursive_asr(f, a, fa, b, fb, eps, whole, m, fm)

from math import sin
print(adaptive_simpsons_rule(sin, 0, 1, 1e-09))

C

Here is an implementation of the adaptive Simpson's method in C99 that avoids redundant evaluations of f and quadrature computations.

#include <math.h>  // include file for fabs and sin
#include <stdio.h> // include file for printf and perror
#include <errno.h>

// forward declaration: the wrapper calls the recursive core defined below it
double adaptiveSimpsonsAux(double (*f)(double), double a, double b, double epsilon,
                           double whole, double fa, double fb, double fm, int rec);

/** Adaptive Simpson's Rule Wrapper
 *  (fills in cached function evaluations) */
double adaptiveSimpsons(double (*f)(double),   // function ptr to integrate
                        double a, double b,    // interval [a,b]
                        double epsilon,        // error tolerance
                        int maxRecDepth) {     // recursion cap
    errno = 0;
    double h = b - a;
    double fa = f(a), fb = f(b), fm = f((a + b)/2);
    double S = (h/6)*(fa + 4*fm + fb);
    return adaptiveSimpsonsAux(f, a, b, epsilon, S, fa, fb, fm, maxRecDepth);
}

/** Adaptive Simpson's Rule, Recursive Core */
double adaptiveSimpsonsAux(double (*f)(double), double a, double b, double epsilon,
                           double whole, double fa, double fb, double fm, int rec) {
    double m   = (a + b)/2,  h   = (b - a)/2;
    double lm  = (a + m)/2,  rm  = (m + b)/2;
    double flm = f(lm),      frm = f(rm);
    double left  = (h/6) * (fa + 4*flm + fm);
    double right = (h/6) * (fm + 4*frm + fb);
    double delta = left + right - whole;

    if (rec <= 0) errno = ERANGE; // Leave a note if we bailed out early
    // Lyness 1969 + Richardson extrapolation; see article
    if (rec <= 0 || fabs(delta) <= 15*epsilon)
        return left + right + delta/15;
    return adaptiveSimpsonsAux(f, a, m, epsilon/2, left,  fa, fm, flm, rec-1) +
           adaptiveSimpsonsAux(f, m, b, epsilon/2, right, fm, fb, frm, rec-1);
}

/** usage example */
int main(void) {
    // Let I be the integral of sin(x) from 0 to 2
    double I = adaptiveSimpsons(sin, 0, 2, 0.001, 100);
    printf("integrate(sin, 0, 2) = %lf\n", I);   // print the result
    perror("adaptiveSimpsons");                  // Was it successful?
    return 0;
}

This implementation has been incorporated into a C++ ray tracer intended for X-ray laser simulation at Oak Ridge National Laboratory,[4] among other projects. The ORNL version has been enhanced with a call counter, templates for different datatypes, and wrappers for integrating over multiple dimensions.[4]
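The ORNL wrappers themselves are not reproduced here, but the general idea of nesting a one-dimensional adaptive integrator to evaluate a double integral can be sketched as follows (a hypothetical Python illustration, not the ORNL code): the outer integrator treats the inner integral as its integrand.

```python
def integrate(f, a, b, eps=1e-9):
    """One-dimensional adaptive Simpson's rule (same scheme as above)."""
    def simpson(fa, fm, fb, h):
        return h / 6 * (fa + 4 * fm + fb)

    def recurse(a, fa, b, fb, m, fm, whole, eps):
        lm, rm = (a + m) / 2, (m + b) / 2
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        delta = left + right - whole
        if abs(delta) <= 15 * eps:
            return left + right + delta / 15
        return (recurse(a, fa, m, fm, lm, flm, left, eps / 2)
                + recurse(m, fm, b, fb, rm, frm, right, eps / 2))

    m = (a + b) / 2
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, fa, b, fb, m, fm, simpson(fa, fm, fb, b - a), eps)

def integrate2d(f, ax, bx, ay, by, eps=1e-6):
    """Double integral by nesting: the outer integrand is itself an integral."""
    return integrate(lambda x: integrate(lambda y: f(x, y), ay, by, eps),
                     ax, bx, eps)

# the true value of the integral of x*y over the unit square is 1/4
print(integrate2d(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0))
```

Note that the inner tolerance should in practice be tightened relative to the outer one, since errors in the inner integral feed into the outer integrand; the sketch keeps them equal for simplicity.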

Racket

Here is an implementation of the adaptive Simpson method in Racket with a behavioral software contract. The exported function computes the definite integral of a given function f over the interval [L, R].[5]

;; -----------------------------------------------------------------------------
;; interface, with contract 
(provide/contract
 [adaptive-simpson 
  (->i ((f (-> real? real?)) (L real?) (R  (L) (and/c real? (>/c L))))
       (#:epsilon (ε real?))
       (r real?))])


;; -----------------------------------------------------------------------------
;; implementation 
(define (adaptive-simpson f L R #:epsilon [ε .000000001])
  (define f@L (f L))
  (define f@R (f R))
  (define-values (M f@M whole) (simpson-1call-to-f f L f@L R f@R))
  (asr f L f@L R f@R ε whole M f@M))

;; the "efficient" implementation
(define (asr f L f@L R f@R ε whole M f@M)
  (define-values (leftM  f@leftM  left*)  (simpson-1call-to-f f L f@L M f@M))
  (define-values (rightM f@rightM right*) (simpson-1call-to-f f M f@M R f@R))
  (define delta* (- (+ left* right*) whole))
  (cond
    [(<= (abs delta*) (* 15 ε)) (+ left* right* (/ delta* 15))]
    [else (define epsilon1 (/ ε 2))
          (+ (asr f L f@L M f@M epsilon1 left*  leftM  f@leftM) 
             (asr f M f@M R f@R epsilon1 right* rightM f@rightM))]))

;; evaluate half an interval (1 func eval)
(define (simpson-1call-to-f f L f@L R f@R)
  (define M (mid L R))
  (define f@M (f M))
  (values M f@M (* (/ (abs (- R L)) 6) (+ f@L (* 4 f@M) f@R))))

(define (mid L R) (/ (+ L R) 2.))

The code is an excerpt of a "#lang racket" module that also includes a (require rackunit) line.

History

  • Henriksson (1961) is a non-recursive variant of Simpson's rule. It "adapts" by integrating from left to right and adjusting the interval width as needed.[2]
  • Kuncir's Algorithm 103 (1962) is the original recursive, bisecting, adaptive integrator. Algorithm 103 consists of a larger routine with a nested subroutine (loop AA), made recursive by the use of the goto statement. It guards against the underflowing of interval widths (loop BB), and aborts as soon as the user-specified eps is exceeded. The termination criterion is |S(1) − S(2)| ≤ eps/2^n, where n is the current level of recursion and S(2) is the more accurate estimate.[1]
  • McKeeman's Algorithm 145 (1962) is a similarly recursive integrator that splits the interval into three instead of two parts. The recursion is written in a more familiar manner.[6] The 1962 algorithm, found to be over-cautious, uses eps/3^n for termination, so a 1963 improvement uses eps/(1.7^n) instead.[3]: 485, 487 
  • Lyness (1969) is almost the current integrator. Created as a set of four modifications of McKeeman 1962, it replaces trisection with bisection to lower computational costs (Modifications 1+2, coinciding with the Kuncir integrator) and improves McKeeman's 1962/63 error estimates to the fifth order (Modification 3). Modification 4, not described here, contains provisions for roundoff error that override the user-specified precision to match the best achievable.[3] The new return value with extrapolation is related to Boole's rule and Romberg's method.[3]: 489 

Bibliography

  1. ^ a b G.F. Kuncir (1962), "Algorithm 103: Simpson's rule integrator", Communications of the ACM, 5 (6): 347, doi:10.1145/367766.368179
  2. ^ a b For an earlier, non-recursive adaptive integrator more reminiscent of ODE solvers, see S. Henriksson (1961), "Contribution no. 2: Simpson numerical integration with variable length of step", BIT Numerical Mathematics, 1: 290
  3. ^ a b c d J.N. Lyness (1969), "Notes on the adaptive Simpson quadrature routine", Journal of the ACM, 16 (3): 483–495, doi:10.1145/321526.321537
  4. ^ a b Berrill, Mark A. "RayTrace-miniapp: src/AtomicModel/interp.hpp · de5e8229bccf60ae5c1c5bab14f861dc0326d5f9". ORNL GitLab.
  5. ^ Felleisen, Matthias. "[racket] adaptive simpson integration". Racket Mailing-list "users". Retrieved 26 September 2018.
  6. ^ McKeeman, William Marshall (1 December 1962). "Algorithm 145: Adaptive numerical integration by Simpson's rule". Communications of the ACM. 5 (12): 604. doi:10.1145/355580.369102.