Sequential estimation
In statistics, sequential estimation refers to estimation methods in sequential analysis where the sample size is not fixed in advance. Instead, data are evaluated as they are collected, and further sampling is stopped in accordance with a pre-defined stopping rule as soon as significant results are observed. The generic formulation is the optimal Bayesian estimator, which is the theoretical underpinning of every sequential estimator, although it cannot be instantiated directly in general. It models the state sequence as a Markov process and attaches a measurement process to each state, which yields the typical conditional-independence relations. The Markov process describes the propagation of a probability distribution over discrete time instants, and the measurement is the information available about each time instant, which is usually less informative than the state itself. Only by combining the observed measurement sequence with these models can the information from all measurements and the underlying Markov process be accumulated to yield better estimates.
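Under these assumptions, the optimal Bayesian estimator takes the form of a two-step recursion over the filtering distribution. The following is the standard statement (the notation x_k for the hidden state at time k and z_{1:k} for the measurements up to time k is chosen here for illustration):

  p(x_k \mid z_{1:k-1}) = \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid z_{1:k-1}) \, dx_{k-1}   (prediction)

  p(x_k \mid z_{1:k}) = \frac{p(z_k \mid x_k) \, p(x_k \mid z_{1:k-1})}{p(z_k \mid z_{1:k-1})}   (update)

The prediction step propagates the posterior through the Markov transition model, and the update step incorporates the current measurement through Bayes' rule; the integrals are generally intractable, which is why the recursion itself cannot be instantiated directly.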
From this formulation, the Kalman filter (and its variants), the particle filter, the histogram filter and others can be derived. Which one to use depends on the models, and choosing the right one requires experience. In most cases, the goal is to estimate the state sequence from the measurements. In other cases, the framework can be used to estimate, for example, the parameters of a noise process. One can also accumulate the unmodelled statistical behaviour of the states projected into the measurement space, called the innovation sequence; its derivation naturally incorporates the orthogonality principle, which yields an independence relation and allows an intuitive Hilbert-space interpretation. Accumulating the innovations over time and comparing the result with a threshold then corresponds to the stopping rule mentioned above.
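As an illustration of how the innovation sequence can drive a stopping rule, the following is a minimal sketch of a scalar Kalman filter (random-walk state model) that accumulates the normalized squared innovations and stops once the running statistic exceeds a threshold. The noise parameters q and r and the threshold are hypothetical values chosen for the example, not part of any standard formulation.

```python
import numpy as np

def kalman_with_stopping(measurements, q=1e-3, r=0.09, threshold=20.0):
    """Scalar Kalman filter that accumulates the normalized squared
    innovations and stops once they exceed a fixed threshold."""
    x, p = 0.0, 1.0          # initial state estimate and its variance
    nis_sum = 0.0            # accumulated normalized innovation squared
    estimates = []
    for k, z in enumerate(measurements):
        # Prediction step for a random-walk model: x_k = x_{k-1} + w_k
        p = p + q
        # Innovation (measurement residual) and its variance
        nu = z - x
        s = p + r
        nis_sum += nu * nu / s
        # Update step
        gain = p / s
        x = x + gain * nu
        p = (1.0 - gain) * p
        estimates.append(x)
        # Stopping rule: accumulated innovation statistic exceeds the threshold
        if nis_sum > threshold:
            return estimates, k, nis_sum
    return estimates, len(measurements) - 1, nis_sum

# Example usage with synthetic measurements of a constant state
rng = np.random.default_rng(0)
z = 1.0 + rng.normal(0.0, 0.3, size=100)
est, stop_index, stat = kalman_with_stopping(z)
print(stop_index, stat, est[-1])
```

Because the innovations of a correctly specified linear-Gaussian model are zero-mean, white and uncorrelated with past data (the orthogonality principle), the accumulated statistic grows at a predictable rate when the model fits and faster when it does not, which is what makes it usable as a stopping criterion.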
The statistical behaviour of the heuristic or sampling-based methods (e.g. the particle filter or histogram filter) depends on many parameters and implementation details. Unless there is a very good reason, they should not be used in safety-critical applications, since it is very hard to obtain theoretical guarantees or to perform proper testing.
If each state depends on an overall entity (e.g. a map, or simply a global state variable), one typically uses SLAM (simultaneous localization and mapping) techniques, which include the sequential estimator as a special case (when the overall state variable has just one state). These techniques estimate both the state sequence and the overall entity.
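A way to make this concrete (a sketch in the same Bayesian notation as above, with m denoting the static overall entity such as a map) is to augment the state and propagate the joint posterior:

  p(x_k, m \mid z_{1:k}) = \frac{p(z_k \mid x_k, m)}{p(z_k \mid z_{1:k-1})} \int p(x_k \mid x_{k-1}) \, p(x_{k-1}, m \mid z_{1:k-1}) \, dx_{k-1}

If m can take only a single value, the factor involving m is constant and the recursion reduces to the ordinary sequential estimator above.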
There are also non-causal variants that process all measurements at once, work on batches of measurements, or run the state evolution backwards in time. These are no longer real-time capable and are only suitable for post-processing. Other variants perform several passes, producing a rough estimate first and refining it in subsequent passes.
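In the Bayesian notation used above, the fixed-interval (non-causal) case with all N measurements available amounts to computing the smoothed posterior p(x_k \mid z_{1:N}) for k ≤ N. One standard way to obtain it is to run the causal filter forward and then add a backward pass:

  p(x_k \mid z_{1:N}) = p(x_k \mid z_{1:k}) \int \frac{p(x_{k+1} \mid x_k) \, p(x_{k+1} \mid z_{1:N})}{p(x_{k+1} \mid z_{1:k})} \, dx_{k+1}

In the linear-Gaussian case this forward-backward scheme specializes to the Rauch-Tung-Striebel smoother, a common concrete instance of such a post-processing variant.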
References
- Ferguson, Thomas S. (1967). Mathematical Statistics: A Decision Theoretic Approach. Academic Press. ISBN 0-12-253750-5.
- Wald, Abraham (1947). Sequential Analysis. New York: John Wiley and Sons. ISBN 0-471-91806-7. Dover reprint: ISBN 0-486-43912-7.