Automatic differentiation
In mathematics and computer algebra, automatic differentiation (AD), also called algorithmic differentiation or computational differentiation,[1][2] is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, and accurate to working precision.
Automatic differentiation is not:

- Symbolic differentiation, or
- Numerical differentiation (the method of finite differences).
These classical methods run into problems: symbolic differentiation leads to inefficient code (unless carefully done) and faces the difficulty of converting a computer program into a single expression, while numerical differentiation can introduce round-off errors in the discretization process and cancellation. Both classical methods have problems with calculating higher derivatives, where the complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems.
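As a quick numeric illustration of the round-off problem (a minimal sketch in plain Python, no AD library assumed), a central finite difference with an overly small step loses most of its accuracy to cancellation, whereas AD would return the derivative to working precision:

```python
import math

# Exact derivative of sin is cos.
x = 1.0
exact = math.cos(x)  # 0.5403023058681398

# Central finite difference with an aggressively small step:
# subtracting two nearly equal function values cancels most
# significant digits, so only a few digits of the result survive.
h = 1e-12
approx = (math.sin(x + h) - math.sin(x - h)) / (2 * h)

print(exact)
print(approx)                # wrong from roughly the fourth decimal on
print(abs(exact - approx))   # error far above machine epsilon
```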
The chain rule, forward and reverse accumulation
Fundamental to AD is the decomposition of differentials provided by the chain rule. For the simple composition

$$y = f(g(h(x))) = f(g(h(w_0))) = f(g(w_1)) = f(w_2) = w_3$$

with $w_0 = x$, $w_1 = h(w_0)$, $w_2 = g(w_1)$ and $w_3 = f(w_2) = y$, the chain rule gives

$$\frac{dy}{dx} = \frac{dy}{dw_2} \frac{dw_2}{dw_1} \frac{dw_1}{dx}$$
Usually, two distinct modes of AD are presented, forward accumulation (or forward mode) and reverse accumulation (or reverse mode). Forward accumulation specifies that one traverses the chain rule from right to left (that is, first one computes $dw_1/dx$ and then $dw_2/dx$), while reverse accumulation has the traversal from left to right.

Forward accumulation
Forward accumulation automatic differentiation is the easiest to understand and to implement. The function $f(x_1, x_2) = x_1 x_2 + \sin(x_1)$ is interpreted (by a computer or human programmer) as the sequence of elementary operations on the work variables $w_i$, and an AD tool for forward accumulation adds the corresponding operations on the second component of the augmented arithmetic.
Original code statements | Added statements for derivatives |
---|---|
$w_1 = x_1$ | $w_1' = 1$ (seed) |
$w_2 = x_2$ | $w_2' = 0$ (seed) |
$w_3 = w_1 w_2$ | $w_3' = w_1' w_2 + w_1 w_2'$ |
$w_4 = \sin(w_1)$ | $w_4' = \cos(w_1)\, w_1'$ |
$w_5 = w_3 + w_4$ | $w_5' = w_3' + w_4'$ |
The derivative computation for $f(x_1, x_2) = x_1 x_2 + \sin(x_1)$ needs to be seeded in order to distinguish between the derivative with respect to $x_1$ or $x_2$. The table above seeds the computation with $w_1' = 1$ and $w_2' = 0$, and we see that this results in $w_5' = x_2 + \cos(x_1)$, which is the derivative with respect to $x_1$. Note that although the table displays the symbolic derivative, in the computer it is always the evaluated (numeric) value that is stored. Figure 2 represents the above statements in a computational graph.
In order to compute the gradient of this example function, that is $\partial f/\partial x_1$ and $\partial f/\partial x_2$, two sweeps over the computational graph are needed, first with the seeds $w_1' = 1, w_2' = 0$, then with $w_1' = 0, w_2' = 1$.
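The two sweeps can be written out directly as code. The following minimal sketch mirrors the table above; the function name `forward_sweep` is illustrative, not from any particular AD tool:

```python
import math

def forward_sweep(x1, x2, dx1, dx2):
    """One forward-mode sweep over f(x1, x2) = x1*x2 + sin(x1).

    Each work variable w carries a tangent dw propagated by the
    chain rule; (dx1, dx2) is the seed.
    """
    w1, dw1 = x1, dx1
    w2, dw2 = x2, dx2
    w3, dw3 = w1 * w2, dw1 * w2 + w1 * dw2      # product rule
    w4, dw4 = math.sin(w1), math.cos(w1) * dw1  # chain rule for sin
    w5, dw5 = w3 + w4, dw3 + dw4                # sum rule
    return w5, dw5

# Two sweeps give the two components of the gradient at (2, 3):
_, df_dx1 = forward_sweep(2.0, 3.0, 1.0, 0.0)  # seed (1, 0)
_, df_dx2 = forward_sweep(2.0, 3.0, 0.0, 1.0)  # seed (0, 1)
print(df_dx1)  # x2 + cos(x1) = 3 + cos(2)
print(df_dx2)  # x1 = 2
```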
The computational complexity of one sweep of forward accumulation is proportional to the complexity of the original code.
Forward accumulation is superior to reverse accumulation for functions $f : \mathbb{R} \to \mathbb{R}^m$ with $m \gg 1$, as only one sweep is necessary, compared to $m$ sweeps for reverse accumulation.

Reverse accumulation
Reverse accumulation traverses the chain rule from left to right, or in the case of the computational graph in Figure 3, from top to bottom. The example function is real-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed in order to calculate the (two-component) gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of some of the work variables $w_i$, which may represent a significant memory issue.
The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function $y = f(x)$ in the primal causes $\bar{x} = \bar{y} f'(x)$ in the adjoint; etc.
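A minimal reverse-mode sketch of the same example function makes this concrete: the primal statements are recorded, then adjoint statements run in reverse order (illustrative code, not any particular tool's API):

```python
import math

def reverse_sweep(x1, x2):
    """Reverse-mode sweep over f(x1, x2) = x1*x2 + sin(x1).

    The forward pass stores the work variables; the backward pass
    propagates adjoints wbar_i = df/dw_i from the output back to
    the inputs, yielding the full gradient in a single sweep.
    """
    # Primal (forward) pass: record the work variables.
    w1, w2 = x1, x2
    w3 = w1 * w2
    w4 = math.sin(w1)
    w5 = w3 + w4

    # Adjoint (backward) pass, seeded with df/dw5 = 1.
    w5bar = 1.0
    w3bar = w5bar                   # w5 = w3 + w4
    w4bar = w5bar
    w1bar = w4bar * math.cos(w1)    # w4 = sin(w1)
    w1bar += w3bar * w2             # w3 = w1*w2 (primal fanout of w1 -> adjoint addition)
    w2bar = w3bar * w1
    return w5, (w1bar, w2bar)

value, grad = reverse_sweep(2.0, 3.0)
print(grad)  # (3 + cos(2), 2): both gradient components from one sweep
```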
Reverse accumulation is superior to forward accumulation for functions $f : \mathbb{R}^n \to \mathbb{R}$ with $n \gg 1$, where forward accumulation requires roughly $n$ times as much work.
Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse mode AD.
Jacobian computation
The Jacobian of $f : \mathbb{R}^n \to \mathbb{R}^m$ is an $m \times n$ matrix. The Jacobian can be computed using $n$ sweeps of forward accumulation, of which each sweep can yield a column vector of the Jacobian, or with $m$ sweeps of reverse accumulation, of which each sweep can yield a row vector of the Jacobian.
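A brief sketch of the column-by-column construction with forward sweeps, for a small map $f : \mathbb{R}^2 \to \mathbb{R}^2$ (illustrative code; a real tool would propagate the tangents automatically):

```python
import math

def f_with_tangent(x, dx):
    """Forward sweep for f(x1, x2) = (x1*x2, sin(x1) + x2),
    propagating a tangent vector dx alongside the values."""
    x1, x2 = x
    dx1, dx2 = dx
    y1, dy1 = x1 * x2, dx1 * x2 + x1 * dx2
    y2, dy2 = math.sin(x1) + x2, math.cos(x1) * dx1 + dx2
    return (y1, y2), (dy1, dy2)

x = (2.0, 3.0)
# n = 2 forward sweeps, one per unit seed, each yielding one
# column of the 2x2 Jacobian.
cols = [f_with_tangent(x, seed)[1] for seed in [(1.0, 0.0), (0.0, 1.0)]]
jacobian = [[cols[j][i] for j in range(2)] for i in range(2)]
print(jacobian)  # [[x2, x1], [cos(x1), 1]] evaluated at (2, 3)
```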
Beyond forward and reverse accumulation
Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of $f : \mathbb{R}^n \to \mathbb{R}^m$ with a minimum number of arithmetic operations is known as the "optimal Jacobian accumulation" (OJA) problem. OJA is NP-complete.[3] Central to this proof is the idea that there may exist algebraic dependences between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.
Automatic differentiation using dual numbers
Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number which will represent the derivative of a function at the number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers. Computer programs often implement this using the complex number representation.
Replace every number $x$ with the number $x + x'\varepsilon$, where $x'$ is a real number, but $\varepsilon$ is nothing but a symbol with the property $\varepsilon^2 = 0$. Using only this, we get for the regular arithmetic

$$(x + x'\varepsilon) + (y + y'\varepsilon) = x + y + (x' + y')\varepsilon$$
$$(x + x'\varepsilon) \cdot (y + y'\varepsilon) = xy + xy'\varepsilon + yx'\varepsilon + x'y'\varepsilon^2 = xy + (xy' + yx')\varepsilon$$
and likewise for subtraction and division.
Now, we may calculate polynomials in this augmented arithmetic. If $P(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_n x^n$, then

$$P(x + x'\varepsilon) = p_0 + p_1(x + x'\varepsilon) + \cdots + p_n(x + x'\varepsilon)^n = P(x) + P^{(1)}(x)\, x'\varepsilon$$
where $P^{(1)}$ denotes the derivative of $P$ with respect to its first argument, and $x'$, called a seed, can be chosen arbitrarily.
The new arithmetic consists of ordered pairs, elements written $\langle x, x' \rangle$, with ordinary arithmetic on the first component, and first-order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to analytic functions, we obtain a list of the basic arithmetic and some standard functions for the new arithmetic:

$$\langle u, u' \rangle + \langle v, v' \rangle = \langle u + v,\; u' + v' \rangle$$
$$\langle u, u' \rangle - \langle v, v' \rangle = \langle u - v,\; u' - v' \rangle$$
$$\langle u, u' \rangle \cdot \langle v, v' \rangle = \langle uv,\; u'v + uv' \rangle$$
$$\langle u, u' \rangle / \langle v, v' \rangle = \left\langle \frac{u}{v},\; \frac{u'v - uv'}{v^2} \right\rangle \quad (v \neq 0)$$
$$\sin\langle u, u' \rangle = \langle \sin(u),\; u' \cos(u) \rangle$$
$$\cos\langle u, u' \rangle = \langle \cos(u),\; -u' \sin(u) \rangle$$
$$\exp\langle u, u' \rangle = \langle \exp(u),\; u' \exp(u) \rangle$$
$$\log\langle u, u' \rangle = \langle \log(u),\; u'/u \rangle \quad (u > 0)$$
$$\langle u, u' \rangle^k = \langle u^k,\; k u^{k-1} u' \rangle \quad (u \neq 0)$$
$$\left|\langle u, u' \rangle\right| = \langle |u|,\; u' \operatorname{sign}(u) \rangle \quad (u \neq 0)$$
and in general for the primitive function $g$,

$$g(\langle u, u' \rangle, \langle v, v' \rangle) = \langle g(u, v),\; g_u(u, v)\, u' + g_v(u, v)\, v' \rangle$$
where $g_u$ and $g_v$ are the derivatives of $g$ with respect to its first and second arguments, respectively.
When a binary basic arithmetic operation is applied to mixed arguments (the pair $\langle u, u' \rangle$ and the real number $c$), the real number is first lifted to $\langle c, 0 \rangle$. The derivative of a function $f : \mathbb{R} \to \mathbb{R}$ at the point $x_0$ is now found by calculating $f(\langle x_0, 1 \rangle)$ using the above arithmetic, which gives $\langle f(x_0), f'(x_0) \rangle$ as the result.
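A minimal dual-number implementation via operator overloading might look as follows; this is a sketch, and the class name `Dual` and its interface are illustrative rather than any library's API:

```python
import math

class Dual:
    """Dual number <x, x'>: a value plus its first derivative."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    @staticmethod
    def lift(c):
        # Real constants are lifted to <c, 0>.
        return c if isinstance(c, Dual) else Dual(c)

    def __add__(self, other):
        other = Dual.lift(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    def __mul__(self, other):
        other = Dual.lift(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    # Addition and multiplication are commutative.
    __radd__, __rmul__ = __add__, __mul__

def sin(u):
    u = Dual.lift(u)
    return Dual(math.sin(u.value), u.deriv * math.cos(u.value))

# f'(x0) for f(x) = x*x + sin(x): evaluate f(<x0, 1>).
x = Dual(2.0, 1.0)       # seed x' = 1
y = x * x + sin(x)
print(y.value, y.deriv)  # f(2) and f'(2) = 2*2 + cos(2)
```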
Vector arguments and functions
Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator, which finds the directional derivative $y' \in \mathbb{R}^m$ of $f : \mathbb{R}^n \to \mathbb{R}^m$ at $x \in \mathbb{R}^n$ in the direction $x' \in \mathbb{R}^n$ by calculating $(\langle y_1, y_1' \rangle, \ldots, \langle y_m, y_m' \rangle) = f(\langle x_1, x_1' \rangle, \ldots, \langle x_n, x_n' \rangle)$ using the same arithmetic as above.
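In code, this amounts to seeding each input's second component with the corresponding component of the direction vector. A brief sketch for the running example function (illustrative, not a library API):

```python
import math

def directional_derivative(x, direction):
    """Single forward sweep computing the directional derivative of
    f(x1, x2) = x1*x2 + sin(x1) at x along `direction`: each input's
    tangent is seeded with that input's direction component."""
    (x1, x2), (d1, d2) = x, direction
    w3, dw3 = x1 * x2, d1 * x2 + x1 * d2
    w4, dw4 = math.sin(x1), math.cos(x1) * d1
    return w3 + w4, dw3 + dw4

# Derivative of f at (2, 3) along (1, 1):
# equals grad f . (1, 1) = (3 + cos(2)) + 2.
value, ddir = directional_derivative((2.0, 3.0), (1.0, 1.0))
print(ddir)
```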
Higher order differentials
The above arithmetic can be generalized, in the natural way, to calculate parts of the second-order and higher derivatives. However, the arithmetic rules quickly grow very complicated: complexity is quadratic in the highest derivative degree. Instead, truncated Taylor series arithmetic is used. This is possible because the Taylor summands in a Taylor series of a function are products of known coefficients and derivatives of the function. There exist efficient Hessian automatic differentiation methods that calculate the entire Hessian matrix with a single forward and reverse accumulation. There also exist a number of specialized methods for calculating large sparse Hessian matrices.
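A sketch of truncated Taylor arithmetic in one variable: a function is carried as its list of Taylor coefficients $c_k = f^{(k)}(x_0)/k!$, and the product of two such series is the Cauchy convolution of their coefficient lists, truncated to the working order (function names are illustrative):

```python
def taylor_mul(a, b):
    """Product of two truncated Taylor series given as coefficient
    lists of equal length: Cauchy convolution, truncated to the
    same order."""
    n = len(a)
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n)]

# Seed for the independent variable x at x0 = 2, truncated at order 3:
# the coefficients of x = x0 + 1*(x - x0).
x0, order = 2.0, 3
x = [x0, 1.0] + [0.0] * (order - 1)

# f(x) = x*x*x; coefficient k of the result is f^(k)(x0)/k!.
f = taylor_mul(taylor_mul(x, x), x)
print(f)  # [8.0, 12.0, 6.0, 1.0] = [x0^3, 3*x0^2, 3*x0, 1]
```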
Implementation
Forward-mode AD is implemented by a nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: source code transformation or operator overloading.
Source code transformation (SCT)

The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions.
Source code transformation can be implemented for all programming languages, and it also makes compile-time optimizations easier for the compiler. However, the implementation of the AD tool itself is more difficult.
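As a schematic of what a source-transformation tool emits, consider an original statement sequence and a hand-written version of its transformed counterpart. This is a hypothetical sketch in Python; real SCT tools typically target languages such as Fortran or C and generate the derivative statements mechanically:

```python
import math

# Original function.
def f(x1, x2):
    v = x1 * x2
    w = math.sin(x1)
    return v + w

# What an SCT tool might generate: the same statements, each
# interleaved with the statement propagating its derivative.
def f_and_df(x1, x2, dx1, dx2):
    v = x1 * x2
    dv = dx1 * x2 + x1 * dx2
    w = math.sin(x1)
    dw = math.cos(x1) * dx1
    return v + w, dv + dw

print(f_and_df(2.0, 3.0, 1.0, 0.0))  # (f(2, 3), df/dx1 at (2, 3))
```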
Operator overloading (OO)

Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations.
Operator overloading for forward accumulation is easy to implement, and also possible for reverse accumulation. However, for reverse accumulation current compilers lag behind in optimizing the code when compared to forward accumulation.
Software
- C/C++
Package | License | Approach | Brief Info |
---|---|---|---|
ADC Version 4.0 | nonfree | OO | |
ADIC | free for noncommercial | SCT | forward mode |
ADMB | BSD | SCT+OO | |
ADNumber | dual license | OO | arbitrary order forward/reverse |
ADOL-C | CPL 1.0 or GPL 2.0 | OO | arbitrary order forward/reverse, part of COIN-OR |
AMPL | free for students | SCT | |
FADBAD++ | free for noncommercial | OO | uses operator new |
CasADi | LGPL | OO/SCT | Forward/reverse modes, matrix-valued atomic operations. |
ceres-solver | BSD | OO | A portable C++ library that allows for modeling and solving large complicated nonlinear least squares problems |
CppAD | EPL 1.0 or GPL 3.0 | OO | arbitrary order forward/reverse, AD<Base> for arbitrary Base including AD<Other_Base>, part of COIN-OR; can also be used to produce C source code using the CppADCodeGen library. |
OpenAD | depends on components | SCT | |
Sacado | GNU GPL | OO | A part of the Trilinos collection, forward/reverse modes. |
Stan | BSD | OO | Estimates Bayesian statistical models using Hamiltonian Monte Carlo. |
TAPENADE | Free for noncommercial | SCT | |
CTaylor | free | OO | truncated Taylor series, multivariable, high performance, calculating and storing only potentially nonzero derivatives, calculates higher order derivatives; order of derivatives increases when using matching operations until maximum order (parameter) is reached; example source code and executable available for testing performance |
- Fortran
Package | License | Approach | Brief Info |
---|---|---|---|
ADF Version 4.0 | nonfree | OO | |
ADIFOR | free for non-commercial | SCT | |
AUTO_DERIV | free for non-commercial | OO | |
OpenAD | depends on components | SCT | |
TAPENADE | Free for noncommercial | SCT | |
- Matlab
Package | License | Approach | Brief Info |
---|---|---|---|
AD for MATLAB | GNU GPL | OO | Forward (1st & 2nd derivative, uses MEX files & Windows DLLs) |
Adiff | BSD | OO | Forward (1st derivative) |
MAD | Proprietary | OO | |
ADiMat | ? | SCT | Forward (1st & 2nd derivative) & Reverse (1st) |
- Python
Package | License | Approach | Brief Info |
---|---|---|---|
ad | BSD | OO | first and second-order, reverse accumulation, transparent on-the-fly calculations, basic NumPy support, written in pure Python |
FuncDesigner | BSD | OO | uses NumPy arrays and SciPy sparse matrices; also allows solving linear/non-linear/ODE systems and performing numerical optimizations by OpenOpt |
ScientificPython | CeCILL | OO | see modules Scientific.Functions.FirstDerivatives and Scientific.Functions.Derivatives |
pycppad | BSD | OO | arbitrary order forward/reverse, implemented as a wrapper for CppAD including AD<double> and AD< AD<double> > |
pyadolc | BSD | OO | wrapper for ADOL-C, hence arbitrary order derivatives in the (combined) forward/reverse mode of AD; supports sparsity pattern propagation and sparse derivative computations |
uncertainties | BSD | OO | first-order derivatives, reverse mode, transparent calculations |
algopy | BSD | OO | same approach as pyadolc and thus compatible; supports differentiating through numerical linear algebra functions like the matrix-matrix product, solution of linear systems, QR and Cholesky decomposition, etc. |
pyderiv | GNU GPL | OO | automatic differentiation and (co)variance calculation |
CasADi | LGPL | OO/SCT | Python front-end to CasADi. Forward/reverse modes, matrix-valued atomic operations. |
- .NET
Package | License | Approach | Brief Info |
---|---|---|---|
AutoDiff | GNU GPL | OO | Automatic differentiation with C# operators overloading. |
FuncLib | MIT | OO | Automatic differentiation and numerical optimization, operator overloading, unlimited order of differentiation, compilation to IL code for very fast evaluation. |
- Haskell
Package | License | Approach | Brief Info |
---|---|---|---|
ad | BSD | OO | Forward Mode (1st derivative or arbitrary order derivatives via lazy lists and sparse tries) Reverse Mode Combined forward-on-reverse Hessians. Uses Quantification to allow the implementation automatically choose appropriate modes. Quantification prevents perturbation/sensitivity confusion at compile time. |
fad | BSD | OO | Forward Mode (lazy list). Quantification prevents perturbation confusion at compile time. |
rad | BSD | OO | Reverse Mode. (Subsumed by 'ad'). Quantification prevents sensitivity confusion at compile time. |
- Octave
Package | License | Approach | Brief Info |
---|---|---|---|
CasADi | LGPL | OO/SCT | Octave front-end to CasADi. Forward/reverse modes, matrix-valued atomic operations. |
- Java
Package | License | Approach | Brief Info |
---|---|---|---|
JAutoDiff | - | OO | Provides a framework to compute derivatives of functions on arbitrary types of field using generics. Coded in 100% pure Java. |
Apache Commons Math | Apache License v2 | OO | This class is an implementation of the extension to Rall's numbers described in Dan Kalman's paper[4] |
References
- ^ Neidinger, Richard D. (2010). "Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming" (PDF). SIAM Review. 52 (3): 545–563.
- ^ http://www.ec-securehost.com/SIAM/SE24.html
- ^ Naumann, Uwe (2008). "Optimal Jacobian accumulation is NP-complete". Mathematical Programming. 112 (2): 427–441. doi:10.1007/s10107-006-0042-z.
- ^ Kalman, Dan (2002). "Doubly Recursive Multivariate Automatic Differentiation" (PDF). Mathematics Magazine. 75 (3): 187–202.
Literature
- Rall, Louis B. (1981). Automatic Differentiation: Techniques and Applications. Lecture Notes in Computer Science. Vol. 120. Springer. ISBN 3-540-10861-0.
- Griewank, Andreas; Walther, Andrea (2008). Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. Other Titles in Applied Mathematics. Vol. 105 (2nd ed.). SIAM. ISBN 978-0-89871-659-7.
- Neidinger, Richard (2010). "Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming" (PDF). SIAM Review. 52 (3): 545–563. doi:10.1137/080743627. Retrieved 2013-03-15.
External links
- www.autodiff.org, An "entry site to everything you want to know about automatic differentiation"
- Automatic Differentiation of Parallel OpenMP Programs
- Automatic Differentiation, C++ Templates and Photogrammetry
- Automatic Differentiation, Operator Overloading Approach
- Automatic Differentiation of Fortran programs: compute analytic derivatives of any Fortran77, Fortran95, or C program through a web-based interface
- Description and example code for forward Automatic Differentiation in Scala
- Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem