
Draft:ODE filter

Visualization of the filtering procedure for the logistic ordinary differential equation. The first column shows the prior, which is a two-times-integrated Wiener process. The last row shows the residual, which is the measurement operator applied to the estimate (i.e., ẋ(t) − f(x(t))). The second column shows the prior after being initialized by the initial condition. The third column shows the posterior distribution obtained by running an extended Kalman filter of order one for the ODE. All subplots display the mean (solid, thick line) and the 95% confidence interval (shaded area), as well as ten samples from the corresponding distribution (solid lines). The dashed black line shows the ground-truth solution of the logistic ODE. The blue dots represent the observed measurements.

ODE filters are a class of probabilistic numerical methods for solving ordinary differential equations (ODEs) that frame the problem of finding a solution to an initial value problem as a Bayesian inference task. Solutions are modeled as probability distributions that also account for the discretization error introduced by the numerical approximation. This probabilistic treatment provides a computation-aware alternative to classical numerical ODE solvers, which typically provide only point-wise error bounds. It further enables sampling joint trajectories from the posterior, as well as quantifying uncertainty about the underlying ODE itself. ODE filters offer a flexible framework for incorporating additional information, such as measurements or conservation laws.

The general ODE filtering procedure consists of two steps. The first is the definition of a prior distribution for the ODE solution in the form of a stochastic process, more specifically a Gauss–Markov process; discretization then results in a nonlinear Gaussian state space model. The second is the use of a general Bayesian filtering and smoothing algorithm, from the field of recursive Bayesian estimation, to find numerical solutions to the ODE. The naming convention of ODE filters typically reflects the underlying filtering algorithm: for instance, combining the particle filter with this framework yields the particle ODE filter. Since smoothers are extensions of filters, the literature often refers to both under the umbrella term "ODE filters".

Theory


Problem setup


Consider a first-order ordinary differential equation (ODE) initial value problem (IVP) of the form

ẋ(t) = f(x(t)),  x(0) = x₀,  t ∈ [0, T].
The vector field f is assumed to be Lipschitz continuous, so that a unique global solution to this ODE-IVP exists by the Picard–Lindelöf theorem[1]. Classical numerical ODE solvers are algorithms that compute approximate solutions on a discrete mesh t₀ < t₁ < … < t_N[2]. Probabilistic ODE solvers additionally estimate the uncertainty introduced through this discretization. ODE filters are a type of probabilistic ODE solver that adopts a Bayesian inference framework to compute a posterior[3]:

p(x(t) | x(0) = x₀, {ẋ(tₙ) = f(x(tₙ))}ₙ₌₁,…,N).
This is done in two steps. First, a prior, data, and likelihood are defined by modeling solutions to the ODE as a Gauss–Markov process. Second, the posterior is computed with Bayesian filtering and smoothing algorithms[3].

Step 1: Gauss-Markov process


Prior


The prior is specified by a linear time-invariant (LTI) stochastic differential equation (SDE) of the form

dX(t) = F X(t) dt + L dW(t).
It describes a stochastic process ("the system")

X(t) = [X⁽⁰⁾(t), X⁽¹⁾(t), …, X⁽q⁾(t)].

The states of the system are realizations of X(t), where the sub-vectors X⁽⁰⁾(t) and X⁽¹⁾(t) model x(t) and ẋ(t), respectively. The remaining sub-vectors may be used to model higher-order derivatives. F is a state transition (drift) matrix, L a diffusion (dispersion) matrix, and W(t) is a vector-valued standard Wiener process. Note that F and L need to be chosen such that X⁽¹⁾(t) = (d/dt) X⁽⁰⁾(t) holds[4]. Solutions to LTI SDEs are Gauss–Markov processes (GMPs)[5] of the form

X(t + h) | X(t) ~ N(Φ(h) X(t), Q(h)),

where Φ(h) = exp(F h) and Q(h) = ∫₀ʰ exp(F τ) L Lᵀ exp(F τ)ᵀ dτ. Gauss–Markov processes are computationally favorable over generic Gaussian process priors, as they allow for inference with linear time complexity O(N) in the number of grid points, compared with the general GP inference complexity of O(N³)[6][4]. Discretization of the GMP results in a linear Gaussian state space model (LGSSM):

X(tₙ₊₁) = Φ(hₙ) X(tₙ) + qₙ,  qₙ ~ N(0, Q(hₙ)),  hₙ = tₙ₊₁ − tₙ.
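The discretization can be carried out numerically for any LTI SDE. The following sketch (illustrative function name and interface, not from the cited references) computes Φ(h) and Q(h) jointly via the matrix-fraction (Van Loan) decomposition:

```python
import numpy as np
from scipy.linalg import expm

def discretize(F, L, h):
    """Discretize the LTI SDE dX = F X dt + L dW over a step h.

    Returns the transition matrix Phi(h) = expm(F h) and the process
    noise Q(h) = int_0^h expm(F s) L L^T expm(F s)^T ds, computed
    jointly via the matrix-fraction (Van Loan) decomposition.
    """
    d = F.shape[0]
    # Block matrix [[F, L L^T], [0, -F^T]]; its exponential contains
    # both Phi and (after one multiplication) Q.
    M = np.block([[F, L @ L.T], [np.zeros((d, d)), -F.T]])
    E = expm(M * h)
    Phi = E[:d, :d]
    Q = E[:d, d:] @ Phi.T
    return Phi, Q
```

For the once-integrated Wiener process (F = [[0, 1], [0, 0]], L = (0, 1)ᵀ) with h = 1 this reproduces the known closed form Φ = [[1, 1], [0, 1]] and Q = [[1/3, 1/2], [1/2, 1]].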

Data


Information about the ODE is introduced through a measurement operator (i.e., a state misalignment), known in this context as an information operator[4],

ℐ[X](t) = X⁽¹⁾(t) − f(X⁽⁰⁾(t)).

If it were possible to condition on the event ℐ[X](t) = 0 for all t ∈ [0, T], this would result in a point mass on the true solution[7]. This is computationally intractable (just like all other forms of numerical simulation), which motivates discretizing the problem and conditioning the state on the discrete observations zₙ := ℐ[X](tₙ) = 0. The information operator can be modified to solve higher-order ODEs. Multiple information operators can be added to incorporate additional information, such as conservation laws or measurement data[8][9].
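Concretely, the information operator for a first-order ODE can be written as a short function. This sketch (with an illustrative name and state layout) assumes a state that stacks the modeled derivatives:

```python
import numpy as np

def information_operator(f, X, q, d=1):
    """ODE residual z = X1 - f(X0) for a first-order ODE x' = f(x).

    X stacks the modeled derivatives [x, x', ..., x^(q)], each of
    dimension d; E0 and E1 are projection matrices selecting the
    zeroth- and first-derivative blocks.
    """
    E0 = np.eye(d, d * (q + 1), 0)   # selects X^(0), i.e. x(t)
    E1 = np.eye(d, d * (q + 1), d)   # selects X^(1), i.e. x'(t)
    return E1 @ X - f(E0 @ X)
```

On the true solution the residual vanishes: for the logistic vector field f(x) = x (1 − x), the state [0.5, 0.25, 0.0] (with x = 0.5 and x' = f(0.5) = 0.25) gives a zero residual.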

Likelihood


The likelihood of the observations given the GMP prior is

p(zₙ | X(tₙ)) = δ(zₙ − ℐ[X](tₙ)),

a Dirac likelihood that enforces the ODE exactly at the grid points. To improve the model's generality and account for numerical errors introduced by discretization, a measurement variance R can be added[4][10]. The observations can be incorporated into the LGSSM as a nonlinear measurement model, yielding the final nonlinear Gaussian state space model (NLGSSM) that is the basis of all ODE filters:

X(tₙ₊₁) = Φ(hₙ) X(tₙ) + qₙ,  qₙ ~ N(0, Q(hₙ)),
zₙ = X⁽¹⁾(tₙ) − f(X⁽⁰⁾(tₙ)) + rₙ,  rₙ ~ N(0, R),

conditioned on the data zₙ = 0 for all n. Here Φ and Q are the transition matrices of the discretized prior, and R is the (possibly zero) measurement covariance. The desired posterior can now be computed iteratively by any of the filtering and smoothing algorithms known from signal processing or recursive Bayesian estimation.

Step 2: Bayesian filter and smoother


The most general ODE filter and smoother algorithms to solve the NLGSSM are equivalent to the general recursive Bayesian estimation algorithms:

Algorithm: ODE Filter
 1: Initialize p(X(t₀)) from the initial value x₀
 2: for n = 1, …, N do:
 3:     optional: adapt the dynamic model (e.g., calibrate the diffusion)
 4:     optional: choose the step size hₙ
 5:     predict p(X(tₙ) | z₁:ₙ₋₁ = 0)
 6:     observe the ODE: zₙ = X⁽¹⁾(tₙ) − f(X⁽⁰⁾(tₙ))
 7:     update p(X(tₙ) | z₁:ₙ = 0)
 8:     optional: store the backward transition for smoothing
 9: end for
10: return the filtering distributions p(X(tₙ) | z₁:ₙ = 0)
Algorithm: ODE Smoother
 1: initialize with the final filtering distribution p(X(t_N) | z₁:N = 0)
 2: for n = N − 1, …, 0 do:
 3:     compute p(X(tₙ) | z₁:N = 0) from the backward transition
 4: end for
 5: return the smoothing distributions p(X(tₙ) | z₁:N = 0)

For nonlinear observation models, the probability distributions become non-Gaussian, rendering the above equations intractable. Therefore, to define a specific ODE filter and smoother, the predict, update, and compute steps must be specified in a tractable form. The full posterior can be computed from the backward factorization

p(X(t₀), …, X(t_N) | z₁:N = 0) = p(X(t_N) | z₁:N = 0) ∏ₙ₌₀^{N−1} p(X(tₙ) | X(tₙ₊₁), z₁:ₙ = 0).

Access to the full posterior enables sampling of joint trajectories, which classical numerical methods do not provide.

Implementation Details


Choice of prior


The choice of a prior distribution is replaced by choosing the free parameters of the GMP, that is, choosing F, L, and the initial distribution, under some constraints. One of the most popular and most widely used GMPs is the q-times integrated Wiener process (i.e., integrated Brownian motion)[11]. For simplicity, we can w.l.o.g. restrict to dimension d = 1[1][12]. The q-times integrated Wiener process (IWP) is defined such that the sub-coordinates X⁽⁰⁾, …, X⁽q⁾ model x and its first q derivatives, i.e., X⁽ⁱ⁾(t) = x⁽ⁱ⁾(t). Further, X⁽q⁾ is a standard Wiener process. This is equivalent to the drift and dispersion matrices

Fᵢⱼ = 1 if j = i + 1 and 0 otherwise (i, j = 0, …, q),  L = σ (0, …, 0, 1)ᵀ.
Note that for d > 1 the full drift and dispersion matrices follow from the one-dimensional versions via a Kronecker product with the identity, F ⊗ I_d and L ⊗ I_d[11]. This yields the following known closed-form solutions for the transition matrices[12]:

Φ(h)ᵢⱼ = h^(j−i) / (j−i)!  for j ≥ i (and 0 otherwise),
Q(h)ᵢⱼ = σ² h^(2q+1−i−j) / ((2q+1−i−j) (q−i)! (q−j)!),  i, j = 0, …, q.
The ideal initialization of the IWP, which considers all available information from the ODE with initial value x₀, is to evaluate all derivatives at t = 0:

m₀ = (x₀, x⁽¹⁾(0), …, x⁽q⁾(0)),  P₀ = 0.

Here each derivative x⁽ⁱ⁾(0) is recursively defined from f and the lower-order derivatives via the chain rule. These derivatives can be computed efficiently via Taylor-mode automatic differentiation[1][3].
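For a scalar ODE the recursion behind this initialization can be written out by hand. The sketch below (a hypothetical helper, using the logistic vector field as an example) computes the first two derivatives via the chain rule, which is exactly the step that Taylor-mode automatic differentiation automates for general f and higher q:

```python
import numpy as np

def logistic_taylor_init(x0):
    """Exact initial mean [x(0), x'(0), x''(0)] for x' = x (1 - x), q = 2.

    The chain rule gives x'' = f'(x) x' with f'(x) = 1 - 2x; Taylor-mode
    automatic differentiation generalizes this recursion to arbitrary
    vector fields and orders.
    """
    dx = x0 * (1 - x0)          # x'(0) = f(x(0))
    ddx = (1 - 2 * x0) * dx     # x''(0) = f'(x(0)) x'(0)
    return np.array([x0, dx, ddx])
```

With x₀ = 0.5 this yields (0.5, 0.25, 0), since f'(0.5) = 0.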

Choice of filter and smoother


One possible approach to building efficient inference algorithms for nonlinear measurement operators is linearization. This results in a Gaussian inference framework, which is computationally desirable because it allows efficient closed-form inference. Taylor approximation provides one viable linearization technique, where the order of the expansion and whether it is applied locally or globally yield different subcategories of extended Kalman filters. Choosing a local first-order Taylor expansion corresponds to the extended Kalman filter of order 1 (EKF1). Other options are a Taylor approximation of order 0, leading to the EKF0 algorithm, or a global linearization, known as the iterated extended Kalman smoother (IEKS)[1].

Example: EKF1


The first-order Taylor expansion of f around a point ξ is

f(x) ≈ f(ξ) + J_f(ξ) (x − ξ),

where J_f is the Jacobian of f. This leads to the affine observation model

zₙ ≈ (E₁ − J_f(ξ) E₀) X(tₙ) − f(ξ) + J_f(ξ) ξ + rₙ,

where E₀ and E₁ are the projections onto the sub-vectors X⁽⁰⁾ and X⁽¹⁾. The linearization point ξ is chosen as the zeroth sub-vector of the predicted mean. With this, the predict, update, and compute equations all take Gaussian form, for which closed-form inference is possible. See extended Kalman filter for more details.
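A single EKF1 measurement update can then be sketched as follows (illustrative interface: E0 and E1 are the projection matrices onto the zeroth and first derivative, and df returns the Jacobian of the vector field):

```python
import numpy as np

def ekf1_update(m_pred, P_pred, f, df, E0, E1, R=0.0):
    """EKF1 update: condition the predicted Gaussian on the residual z = 0.

    The vector field is linearized around the predicted mean, giving the
    observation matrix H = E1 - J_f(xi) E0.
    """
    xi = E0 @ m_pred                      # linearization point
    z = E1 @ m_pred - f(xi)               # predicted ODE residual
    H = E1 - df(xi) @ E0                  # linearized observation model
    S = H @ P_pred @ H.T + R * np.eye(H.shape[0])
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    m = m_pred - K @ z                    # condition on observing z = 0
    P = P_pred - K @ S @ K.T
    return m, P
```

With R = 0 the updated mean satisfies the linearized ODE residual exactly at the current grid point.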

Examples

Visualization of the ODE filtering and smoothing procedure for the logistic ordinary differential equation, using a two-times integrated Wiener process prior and an extended Kalman filter of order one. First, iterative forward filtering (brown) with forward prediction (black); then iterative smoothing (rose); and finally sampling from the full posterior distribution. The thick line shows the mean, with the shaded area showing the 95% confidence interval. The blue dots represent the observed data and the dashed line represents the ground truth.

Logistic ODE


As a one-dimensional example, consider the following logistic differential equation IVP:

ẋ(t) = x(t) (1 − x(t)),  x(0) = x₀,

for which an analytical solution exists:

x(t) = x₀ eᵗ / (1 + x₀ (eᵗ − 1)).
To solve this ODE-IVP with an ODE filter, the prior, the discretization, and the predict, update, and compute steps need to be specified.

Prior


A two-times integrated Wiener process (q = 2):

X(t) = [X⁽⁰⁾(t), X⁽¹⁾(t), X⁽²⁾(t)],

with Taylor-mode initialization

m₀ = (x₀, x₀(1 − x₀), (1 − 2x₀) x₀ (1 − x₀)),  P₀ = 0.
Data and likelihood


We discretize the time domain uniformly with step size h and observe the data zₙ = 0 without any noise, hence R = 0.

Filter and smoother


As the filtering algorithm we use EKF1, for which the Jacobian of the logistic vector field is J_f(x) = 1 − 2x.
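Putting the pieces together, a minimal end-to-end sketch of this example might look as follows (the step size, time horizon, and initial value are illustrative choices; the predict and EKF1 update steps are written out inline):

```python
import numpy as np

# Minimal EKF1 ODE-filter sketch with a 2-times IWP prior for the
# logistic ODE x' = x (1 - x), x(0) = 0.01 (illustrative parameters).

def iwp2(h):
    """Closed-form Phi(h), Q(h) of the 2-times integrated Wiener process."""
    Phi = np.array([[1, h, h**2 / 2], [0, 1, h], [0, 0, 1]])
    Q = np.array([
        [h**5 / 20, h**4 / 8, h**3 / 6],
        [h**4 / 8,  h**3 / 3, h**2 / 2],
        [h**3 / 6,  h**2 / 2, h],
    ])
    return Phi, Q

f = lambda x: x * (1 - x)
E0, E1 = np.eye(1, 3, 0), np.eye(1, 3, 1)

h, T = 0.1, 10.0
Phi, Q = iwp2(h)
x0 = 0.01
m = np.array([x0, f(x0), (1 - 2 * x0) * f(x0)])  # Taylor initialization
P = np.zeros((3, 3))
means = [m[0]]
for _ in range(int(T / h)):
    m, P = Phi @ m, Phi @ P @ Phi.T + Q           # predict
    xi = (E0 @ m)[0]                              # linearization point
    z = (E1 @ m)[0] - f(xi)                       # ODE residual
    H = E1 - np.array([[1 - 2 * xi]]) @ E0        # EKF1 linearization
    S = (H @ P @ H.T)[0, 0]
    K = (P @ H.T / S).ravel()                     # Kalman gain
    m, P = m - K * z, P - np.outer(K, K) * S      # condition on z = 0
    means.append(m[0])

# The filtered mean tracks the sigmoid solution and approaches the
# logistic equilibrium x = 1.
```

The filtered means can be compared against the analytical solution x(t) = x₀ eᵗ / (1 + x₀ (eᵗ − 1)) to check convergence.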

Software


ODE filters are implemented in open-source software packages such as ProbNumDiffEq.jl for the Julia programming language[13] and ProbNum for Python[14].
See also


References

  1. ^ a b c d Hennig, Philipp; Osborne, Michael A.; Kersting, Hans P. (2022). Probabilistic Numerics: Computation as Machine Learning. Cambridge: Cambridge University Press. doi:10.1017/9781316681411. ISBN 978-1-316-68141-1.
  2. ^ Tronarp, Filip; Särkkä, Simo; Hennig, Philipp (2021). "Bayesian ODE solvers: the maximum a posteriori estimate". Statistics and Computing. 31 (3): 23. doi:10.1007/s11222-021-09993-7.
  3. ^ a b c d Krämer, Peter Nicholas (2024-04-11), Implementing Probabilistic Numerical Solvers for Differential Equations, Universitaet Tuebingen, Hennig, Philipp (Prof Dr.), Universität Tübingen, doi:10.15496/PUBLIKATION-94093, retrieved 2025-09-26
  4. ^ a b c d Tronarp, Filip; Kersting, Hans; Särkkä, Simo; Hennig, Philipp (2019). "Probabilistic solutions to ordinary differential equations as nonlinear Bayesian filtering: a new perspective". Statistics and Computing. 29 (6): 1297–1315. doi:10.1007/s11222-019-09900-1.
  5. ^ Särkkä, Simo; Solin, Arno (2019). Applied Stochastic Differential Equations. Institute of Mathematical Statistics Textbooks. Cambridge: Cambridge University Press. doi:10.1017/9781108186735. ISBN 978-1-108-18673-5.
  6. ^ Øksendal, Bernt (2003). "Stochastic Differential Equations". Universitext. doi:10.1007/978-3-642-14394-6. ISBN 978-3-540-04758-2. ISSN 0172-5939.
  7. ^ Cockayne, Jon; Oates, Chris J.; Sullivan, T. J.; Girolami, Mark (2019). "Bayesian Probabilistic Numerical Methods". SIAM Review. 61 (3): 756–789. doi:10.1137/17M1139357. ISSN 0036-1445.
  8. ^ Schmidt, Jonathan; Krämer, Nicholas; Hennig, Philipp (2022-07-05), A Probabilistic State Space Model for Joint Inference from Differential Equations and Data, arXiv:2103.10153
  9. ^ Bosch, Nathanael; Tronarp, Filip; Hennig, Philipp (2021-10-20), Pick-and-Mix Information Operators for Probabilistic ODE Solvers, arXiv:2110.10770, retrieved 2025-10-01
  10. ^ Kersting, Hans (2021-03-11), Uncertainty-Aware Numerical Solutions of ODEs by Bayesian Filtering, Universitaet Tuebingen, Hennig, Philipp (Prof Dr.), Universität Tübingen, doi:10.15496/PUBLIKATION-54639, retrieved 2025-09-25
  11. ^ a b Bosch, Nathanael (2025-05-15), A Flexible and Efficient Framework for Probabilistic Numerical Simulation and Inference, Universitaet Tuebingen, Hennig, Philipp (Prof Dr.), Universität Tübingen, doi:10.15496/PUBLIKATION-106849, retrieved 2025-09-25
  12. ^ a b Kersting, Hans; Sullivan, T. J.; Hennig, Philipp (2020). "Convergence rates of Gaussian ODE filters". Statistics and Computing. 30 (6): 1791–1816. doi:10.1007/s11222-020-09972-4. ISSN 1573-1375. PMC 7527376. PMID 33088027.
  13. ^ Bosch, Nathanael (2024-09-30). "ProbNumDiffEq.jl: Probabilistic Numerical Solvers for Ordinary Differential Equations in Julia". Journal of Open Source Software. 9 (101): 7048. Bibcode:2024JOSS....9.7048B. doi:10.21105/joss.07048. ISSN 2475-9066.
  14. ^ Wenger, Jonathan; Krämer, Nicholas; Pförtner, Marvin; Schmidt, Jonathan; Bosch, Nathanael; Effenberger, Nina; Zenn, Johannes; Gessner, Alexandra; Karvonen, Toni (2021-12-03), ProbNum: Probabilistic Numerics in Python, arXiv:2112.02100