
Hidden Markov model

Figure: probabilistic parameters of a hidden Markov model (example). x — states; y — possible observations; a — state transition probabilities; b — output probabilities.

A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with unknown parameters; the challenge is to determine the hidden parameters from the observable data. The extracted model parameters can then be used to perform further analysis, for example in pattern recognition applications. An HMM can be considered the simplest dynamic Bayesian network.

In a regular Markov model, the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but variables influenced by the state are visible. Each state has a probability distribution over the possible output tokens. Therefore the sequence of tokens generated by an HMM gives some information about the sequence of states.

Hidden Markov models are especially known for their applications in temporal pattern recognition, such as speech, handwriting, and gesture recognition, part-of-speech tagging, musical score following, partial discharge pattern classification, and bioinformatics.

Architecture of a hidden Markov model

The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variable x(t) is the hidden state at time t (with the model from the above diagram, x(t) ∈ { x1, x2, x3 }). The random variable y(t) is the observation at time t (y(t) ∈ { y1, y2, y3, y4 }). The arrows in the diagram (often called a trellis diagram) denote conditional dependencies.

From the diagram, it is clear that the conditional probability distribution of the hidden variable x(t) at time t, given the value of the hidden variable x(t − 1), depends only on the value of the hidden variable x(t − 1): the values at time t − 2 and before have no influence. This is called the Markov property. Similarly, the value of the observed variable y(t) only depends on the value of the hidden variable x(t) (both at time t).

Figure: temporal evolution of a hidden Markov model.
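
The dependency structure just described yields a simple generative procedure: choose an initial state, then repeatedly emit an observation from the current state's output distribution and move to a successor state according to the transition probabilities. The following Python sketch illustrates this; the state names, observation symbols, and probability values are illustrative assumptions, not parameters read off the figures.

    import random

    # Hypothetical parameters for illustration (not taken from the figures).
    # a[i][j]: probability of moving from state i to state j; by the Markov
    # property the next state depends only on the current state.
    a = {"x1": {"x1": 0.6, "x2": 0.3, "x3": 0.1},
         "x2": {"x1": 0.2, "x2": 0.5, "x3": 0.3},
         "x3": {"x1": 0.3, "x2": 0.3, "x3": 0.4}}

    # b[i][k]: probability that state i emits observation k; the observation
    # at time t depends only on the hidden state at time t.
    b = {"x1": {"y1": 0.5, "y2": 0.3, "y3": 0.1, "y4": 0.1},
         "x2": {"y1": 0.1, "y2": 0.4, "y3": 0.4, "y4": 0.1},
         "x3": {"y1": 0.2, "y2": 0.2, "y3": 0.2, "y4": 0.4}}

    def sample(dist):
        """Draw one key from a {key: probability} distribution."""
        keys = list(dist)
        return random.choices(keys, weights=[dist[k] for k in keys])[0]

    def generate(length, start_state="x1"):
        """Sample a (hidden states, observations) pair of the given length."""
        x = [start_state]
        y = [sample(b[start_state])]
        for _ in range(1, length):
            x.append(sample(a[x[-1]]))
            y.append(sample(b[x[-1]]))
        return x, y

    hidden, observed = generate(5)
    print("hidden:  ", hidden)    # not visible to an observer
    print("observed:", observed)  # the only data an observer sees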

Probability of an observed sequence

Figure: the observation sequence in the accompanying figure can be produced by several different state sequences, for example 5 3 2 5 3 2, 5 3 1 2 1 2, 4 3 2 5 3 2, 4 3 1 2 1 2, or 3 1 2 5 3 2; transition and observation probabilities are indicated by the line opacity.

The probability of observing a sequence

Y = y(0), y(1), ..., y(L − 1)

of length L is given by

P(Y) = Σ_X P(Y | X) P(X),

where the sum runs over all possible hidden-node sequences

X = x(0), x(1), ..., x(L − 1).

Brute-force calculation of P(Y) is intractable for most real-life problems, as the number of possible hidden-node sequences is typically extremely high, scaling exponentially with the length of the sequence. The calculation can, however, be sped up enormously using the forward algorithm [1] or the equivalent backward algorithm.
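
For concreteness, here is a minimal Python sketch of the forward algorithm; the parameter names (pi for the initial state distribution, a for the transition matrix, b for the emission matrix) and the integer encoding of states and observations are assumptions made for this illustration, not notation fixed by the article.

    # Minimal sketch of the forward algorithm (assumed pi/a/b conventions).
    def forward(obs, pi, a, b):
        """Return P(Y) for observation sequence obs in O(L * N^2) time,
        avoiding the O(N^L) sum over all hidden state sequences."""
        n = len(pi)
        # alpha[i] = P(y(0), ..., y(t), x(t) = i), updated for each t
        alpha = [pi[i] * b[i][obs[0]] for i in range(n)]
        for t in range(1, len(obs)):
            alpha = [
                sum(alpha[i] * a[i][j] for i in range(n)) * b[j][obs[t]]
                for j in range(n)
            ]
        return sum(alpha)

    # Toy example with two hidden states and two observation symbols:
    pi = [0.6, 0.4]                      # initial state distribution
    a = [[0.7, 0.3], [0.4, 0.6]]         # a[i][j] = P(x(t+1)=j | x(t)=i)
    b = [[0.9, 0.1], [0.2, 0.8]]         # b[i][k] = P(y(t)=k | x(t)=i)
    print(forward([0, 1, 0], pi, a, b))  # probability of observing 0, 1, 0

The backward algorithm runs the analogous recursion in reverse from the end of the sequence; combining the two yields the per-state posterior probabilities used by the forward-backward algorithm below.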

Using hidden Markov models

There are three canonical problems associated with HMMs:

  • Given the parameters of the model, compute the probability of a particular output sequence, and the probabilities of the hidden state values given that output sequence. This problem is solved by the forward-backward algorithm.
  • Given the parameters of the model, find the most likely sequence of hidden states that could have generated a given output sequence. This problem is solved by the Viterbi algorithm (a sketch follows this list).
  • Given an output sequence or a set of such sequences, find the most likely set of state transition and output probabilities. In other words, discover the parameters of the HMM given a dataset of sequences. This problem is solved by the Baum-Welch algorithm or the Baldi-Chauvin algorithm.
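
As a sketch of the second problem, the following function implements the Viterbi algorithm under the same assumed pi/a/b conventions as the forward-algorithm sketch above: a max replaces the sum of the forward recursion, and back-pointers recover the most likely path.

    # Sketch of the Viterbi algorithm (assumed pi/a/b conventions as above).
    def viterbi(obs, pi, a, b):
        """Return (probability, path) of the most likely hidden state sequence."""
        n = len(pi)
        # delta[i] = probability of the best path ending in state i at time t
        delta = [pi[i] * b[i][obs[0]] for i in range(n)]
        backptr = []  # backptr[t-1][j] = best predecessor of state j at time t
        for t in range(1, len(obs)):
            new_delta, ptr = [], []
            for j in range(n):
                best_i = max(range(n), key=lambda i: delta[i] * a[i][j])
                new_delta.append(delta[best_i] * a[best_i][j] * b[j][obs[t]])
                ptr.append(best_i)
            delta = new_delta
            backptr.append(ptr)
        # Trace the best final state back through the stored pointers.
        last = max(range(n), key=lambda i: delta[i])
        path = [last]
        for ptr in reversed(backptr):
            path.append(ptr[path[-1]])
        return delta[last], path[::-1]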

A concrete example

Consider a friend who lives far away and reports each day which of three activities (walking, shopping, or cleaning) they did. The choice of activity depends on the weather, which is either Rainy or Sunny and behaves as a Markov chain that cannot be observed directly. The reported activities form the observation sequence, the daily weather forms the hidden state sequence, and the task is to infer the weather from the activities.

This example is further elaborated in the Viterbi algorithm page.
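
The following sketch encodes that example in Python, assuming the standard parameter values used on the Viterbi algorithm page (the exact numbers are an assumed reconstruction of the example, not text from this revision). It computes P(Y) and the most likely weather sequence by brute-force enumeration, which is feasible only because the sequence is short.

    from itertools import product

    # Assumed reconstruction of the Rainy/Sunny example parameters.
    states = ("Rainy", "Sunny")
    start_p = {"Rainy": 0.6, "Sunny": 0.4}
    trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
               "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
    emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
              "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

    obs = ("walk", "shop", "clean")  # the friend's reported activities

    # Enumerate every hidden weather sequence X (2^3 = 8 paths) to get both
    # P(Y) = sum over X of P(Y | X) P(X) and the single most likely path.
    total, best_p, best_x = 0.0, 0.0, None
    for x in product(states, repeat=len(obs)):
        p = start_p[x[0]] * emit_p[x[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= trans_p[x[t - 1]][x[t]] * emit_p[x[t]][obs[t]]
        total += p
        if p > best_p:
            best_p, best_x = p, x

    print("P(Y) =", total)
    print("most likely weather:", best_x, "with probability", best_p)

For longer sequences, the same two quantities are computed efficiently by the forward and Viterbi sketches given earlier.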

Applications of hidden Markov models

History

Hidden Markov models were first described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s.[2]

In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.[3]

See also

Notes

  1. ^ Rabiner, p. 262
  2. ^ Rabiner, p. 258
  3. ^ Durbin et al.

References

  • Lawrence R. Rabiner (1989). "A tutorial on Hidden Markov Models and selected applications in speech recognition". Proceedings of the IEEE. 77 (2): 257–286.
  • Richard Durbin, Sean R. Eddy, Anders Krogh, Graeme Mitchison (1999). Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press. ISBN 0-521-62971-3.
  • Lior Pachter, Bernd Sturmfels (2005). Algebraic Statistics for Computational Biology. Cambridge University Press. ISBN 0-521-85700-7.
  • Olivier Cappé, Eric Moulines, Tobias Rydén (2005). Inference in Hidden Markov Models. Springer. ISBN 0-387-40264-0.
  • Kristie Seymore, Andrew McCallum, Roni Rosenfeld (1999). "Learning Hidden Markov Model Structure for Information Extraction". AAAI 99 Workshop on Machine Learning for Information Extraction (also at CiteSeer).
  • Li J, Najmi A, Gray RM (2000). "Image classification by a two dimensional hidden Markov model". IEEE Transactions on Signal Processing. 48 (2): 517–533.
  • Ephraim Y, Merhav N (2002). "Hidden Markov processes". IEEE Transactions on Information Theory. 48: 1518–1569.
  • B. Pardo, W. Birmingham (July 2005). "Modeling Form for On-line Following of Musical Performances". Proceedings of AAAI-05.
  • Thad Starner, Alex Pentland (February 1995). Visual Recognition of American Sign Language Using Hidden Markov Models. Master's thesis, MIT Program in Media Arts.
  • Satish L, Gururaj BI (April 2003). "Use of hidden Markov models for partial discharge pattern classification". IEEE Transactions on Dielectrics and Electrical Insulation.

External links

  • The path-counting algorithm, an alternative to the Baum-Welch algorithm