Hidden Markov model

[Diagram: probabilistic parameters of a hidden Markov model. x — states; y — possible observations; a — state transition probabilities; b — output probabilities.]
A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be considered the simplest dynamic Bayesian network.
In a regular Markov model, the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but output, dependent on the state, is visible. Each state has a probability distribution over the possible output tokens. Therefore the sequence of tokens generated by an HMM gives some information about the sequence of states. Note that the adjective 'hidden' refers to the state sequence through which the model passes, not to the parameters of the model; even if the model parameters are known exactly, the model is still 'hidden'.
Hidden Markov models are especially known for their applications in temporal pattern recognition, such as speech, handwriting, and gesture recognition, part-of-speech tagging, musical score following, partial discharge analysis, and bioinformatics.
Simplified Mental Model
Like many statistical concepts, a hidden Markov process can be visualized using the familiar urn problem, like this one from Rabiner 1989: A genie is in a room that is not visible to the researcher. It draws balls labelled y1, y2, y3, y4 from the urns in that room and puts them on a conveyor belt, where we can observe their sequence. We know that there are urns X1, X2, X3 in the room. We also know that the genie has some procedure for choosing urns, which is not completely random: it goes first to urn X1 and draws from it, then moves to X2 with probability a12 (100%). Once it has drawn from X2, it can either return to X1 (a21, e.g. 50%) or continue to X3 (a23, e.g. 50%). Once it has drawn from urn X3, it stops. This way of choosing urns is a Markov process; it can be described by the upper part of the diagram at the top of this article.
Since, as we said, the Markov process itself cannot be observed and only the sequence of labelled balls can, this arrangement is called a "hidden Markov process". It is illustrated by the lower part of the diagram above, where one can see that balls y1, y2, y3, y4 can be drawn at each state. Even if we know the composition of the urns and have just observed a sequence of three balls y1, y1 and y1 on the conveyor belt, we still cannot be sure at which urn (i.e., at which state) the genie currently is. We can, however, work out the urn at which the genie is with maximum probability.
This urn model may sound artificial, but it corresponds to many real-world situations, e.g. message transmission. Someone can pronounce Hallo, allo, Hall' or Ha-lo and still mean the same thing (the word "HALLO"). There are many different sounds (balls) for a given message (state, urn), and often the speaker will more or less choose them randomly. Theoretical reasoning about processes of this type, as shown on this page, has proved to be a useful tool for decoding or correcting messages in various situations (e.g. speech recognition, radio noise cancellation, DNA sequence "decoding", etc.).
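To make the urn story concrete, here is a minimal generative sketch in Python. The transition structure follows the text above; the urn compositions (emission probabilities) are illustrative assumptions, since the article's diagram is not reproduced here.

    # A minimal generative sketch of the urn story. The transition
    # structure follows the text above; the urn compositions (emission
    # probabilities) are illustrative assumptions.
    import random

    random.seed(0)  # reproducible illustration

    states = ["X1", "X2", "X3"]

    # transition[s]: distribution over the genie's next urn; "stop" ends
    # the walk once it has drawn from X3.
    transition = {
        "X1": {"X2": 1.0},             # a12 = 100%
        "X2": {"X1": 0.5, "X3": 0.5},  # a21 = 50%, a23 = 50%
        "X3": {"stop": 1.0},           # drawing from X3 ends the process
    }

    # emission[s]: the assumed composition of urn s (balls y1..y4).
    emission = {
        "X1": {"y1": 0.4, "y2": 0.3, "y3": 0.2, "y4": 0.1},
        "X2": {"y1": 0.1, "y2": 0.5, "y3": 0.3, "y4": 0.1},
        "X3": {"y1": 0.2, "y2": 0.1, "y3": 0.3, "y4": 0.4},
    }

    def draw(dist):
        """Sample one key from a {outcome: probability} dict."""
        return random.choices(list(dist), weights=list(dist.values()))[0]

    state, balls = "X1", []
    while state != "stop":
        balls.append(draw(emission[state]))  # the ball we see on the belt
        state = draw(transition[state])      # the genie's hidden move

    print(balls)  # only this sequence is observable; the urn path is hidden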
Architecture of a hidden Markov model
The diagram below shows the general architecture of an instantiated HMM. Each oval shape represents a random variable that can adopt any of a number of values. The random variable x(t) is the hidden state at time t (with the model from the above diagram, x(t) ∈ { x1, x2, x3 }). The random variable y(t) is the observation at time t (y(t) ∈ { y1, y2, y3, y4 }). The arrows in the diagram (often called a trellis diagram) denote conditional dependencies.
From the diagram, it is clear that the conditional probability distribution of the hidden variable x(t) at time t, given the values of the hidden variable x at all times, depends only on the value of the hidden variable x(t − 1): the values at time t − 2 and before have no influence. This is called the Markov property. Similarly, the value of the observed variable y(t) only depends on the value of the hidden variable x(t) (both at time t).
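Stated formally, these two conditional-independence properties mean that the joint distribution over a run of length T factorizes as the following standard identity, included here for reference:

P(x(1), …, x(T), y(1), …, y(T)) = P(x(1)) P(y(1) | x(1)) ∏_{t=2}^{T} P(x(t) | x(t−1)) P(y(t) | x(t))

Every term is one of the model's parameters: the transition probabilities a and the output probabilities b from the legend above, plus an initial-state distribution.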

Probability of an observed sequence

[Figure: transition and observation probabilities of an HMM, indicated by the line opacity.]
The probability of observing a sequence Y = y(0), y(1), …, y(L−1) of length L is given by

P(Y) = Σ_X P(Y | X) P(X),

where the sum runs over all possible hidden-node sequences X = x(0), x(1), …, x(L−1).
Brute-force calculation of P(Y) is intractable for most real-life problems, as the number of possible hidden-node sequences is typically extremely high, scaling exponentially with the length of the sequence. The calculation can, however, be sped up enormously using the forward algorithm.
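As an illustration, here is a minimal sketch of the forward algorithm in Python, reusing the states/transition/emission tables from the urn sketch above (which are themselves assumed values):

    # alpha[t][s] = P(y(0..t), x(t) = s): probability of the first t+1
    # observations together with being in state s at time t.
    def forward(obs, states, start_p, trans_p, emit_p):
        alpha = [{s: start_p.get(s, 0.0) * emit_p[s][obs[0]] for s in states}]
        for t in range(1, len(obs)):
            prev = alpha[-1]
            alpha.append({
                s: emit_p[s][obs[t]] *
                   sum(prev[r] * trans_p[r].get(s, 0.0) for r in states)
                for s in states
            })
        return sum(alpha[-1].values())  # P(Y): marginalize the last state

    start_p = {"X1": 1.0}  # the genie always starts at urn X1 in the story
    print(forward(["y1", "y1", "y1"], states, start_p, transition, emission))

Each step combines the previous alpha values with one transition and one emission, so the total cost grows as L·|S|² rather than exponentially in L.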
Using hidden Markov models
There are three canonical problems associated with HMM:
- Given the parameters of the model, compute the probability of a particular output sequence. This requires summation over all possible state sequences, but can be done efficiently using the forward algorithm, which is a form of dynamic programming.
- Given the parameters of the model and a particular output sequence, find the state sequence that is most likely to have generated that output sequence. This requires finding a maximum over all possible state sequences, but can similarly be solved efficiently by the Viterbi algorithm (a minimal sketch follows this list).
- Given an output sequence or a set of such sequences, find the most likely set of state transition and output probabilities. In other words, derive the maximum likelihood estimate of the parameters of the HMM given a dataset of output sequences. No tractable algorithm is known for solving this problem exactly, but a local maximum likelihood can be derived efficiently using the Baum-Welch algorithm or the Baldi-Chauvin algorithm. The Baum-Welch algorithm is an example of a forward-backward algorithm, and is a special case of the Expectation-maximization algorithm.
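The following is a minimal sketch of the Viterbi algorithm for the second problem, using the same model conventions as the forward sketch above; the tables it is run on are assumptions, not canonical values.

    # Returns the most likely hidden-state path and its probability.
    def viterbi(obs, states, start_p, trans_p, emit_p):
        # delta[s]: probability of the best path ending in state s;
        # paths[s]: that path itself.
        delta = {s: start_p.get(s, 0.0) * emit_p[s][obs[0]] for s in states}
        paths = {s: [s] for s in states}
        for t in range(1, len(obs)):
            new_delta, new_paths = {}, {}
            for s in states:
                # Pick the best predecessor state r for s at time t.
                r = max(states, key=lambda r: delta[r] * trans_p[r].get(s, 0.0))
                new_delta[s] = (delta[r] * trans_p[r].get(s, 0.0)
                                * emit_p[s][obs[t]])
                new_paths[s] = paths[r] + [s]
            delta, paths = new_delta, new_paths
        best = max(states, key=lambda s: delta[s])
        return paths[best], delta[best]

    # E.g., with the urn model above:
    # viterbi(["y1", "y1", "y1"], states, {"X1": 1.0}, transition, emission)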
For either of the first two problems one can additionally ask about statistical significance. What is the probability that a sequence drawn from some null distribution will have an HMM probability (in the case of the forward algorithm) or a maximum state sequence probability (in the case of the Viterbi algorithm) at least as large as that of a particular output sequence?[1] When an HMM is used to evaluate the relevance of a hypothesis for a particular output sequence, the statistical significance indicates the false positive rate associated with accepting the hypothesis for the output sequence.
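For illustration, one way to estimate such a significance value is by simulation: sample sequences from a null model and count how often their forward probability matches or exceeds that of the observed sequence. The i.i.d. uniform null below is an assumed choice, not one prescribed by the text; the sketch reuses the forward function and urn tables from above.

    import random

    def empirical_pvalue(obs, alphabet, n_samples=10000):
        # Forward probability of the actually observed sequence.
        target = forward(obs, states, {"X1": 1.0}, transition, emission)
        hits = 0
        for _ in range(n_samples):
            # Null model (assumed): i.i.d. uniform over the alphabet.
            null_seq = [random.choice(alphabet) for _ in obs]
            p = forward(null_seq, states, {"X1": 1.0}, transition, emission)
            hits += (p >= target)
        return hits / n_samples  # fraction of null draws at least as likely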
A concrete example
This example is further elaborated in the Viterbi algorithm page.
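The elaborated example itself is not reproduced here. As a stand-in, the following sketch shows how such a concrete model can be written down in Python; the states, observations, and probabilities are illustrative assumptions, not values taken from that page.

    # Illustrative model (assumed values): the weather is the hidden
    # state, a friend's daily activity is the observation.
    states = ("Rainy", "Sunny")
    observations = ("walk", "shop", "clean")
    start_probability = {"Rainy": 0.6, "Sunny": 0.4}
    transition_probability = {
        "Rainy": {"Rainy": 0.7, "Sunny": 0.3},
        "Sunny": {"Rainy": 0.4, "Sunny": 0.6},
    }
    emission_probability = {
        "Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
    }

    # With the viterbi sketch from the previous section:
    # viterbi(["walk", "shop", "clean"], states, start_probability,
    #         transition_probability, emission_probability)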
Applications of hidden Markov models
- Cryptanalysis
- Speech recognition
- Machine translation
- Partial discharge
- Gene prediction
- Alignment of bio-sequences
History
Hidden Markov Models were first described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s.[2]
In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences,[3] in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.[4]
Types of Hidden Markov Models
Hidden Markov models can model complex Markov processes where the states emit observations according to some probability distribution. One such distribution is the Gaussian distribution; in such a hidden Markov model the output of each state is governed by a Gaussian distribution.
Moreover, even more complex behavior can be modeled when the output of a state is represented as a mixture of two or more Gaussians, in which case the probability of generating an observation is obtained by summing, over the mixture components, the product of the probability of selecting that component and the probability of generating the observation from it.
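As a minimal sketch in Python, with assumed component weights, means, and variances, the mixture emission density is the weighted sum just described:

    # Emission density for one state whose output is a two-component
    # Gaussian mixture. All parameter values are illustrative assumptions.
    import math

    def gaussian_pdf(x, mean, var):
        """Density of a univariate Gaussian at x."""
        return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    mixture = [
        {"weight": 0.3, "mean": -1.0, "var": 0.5},
        {"weight": 0.7, "mean": 2.0, "var": 1.0},
    ]

    def emission_density(x, mixture):
        # Sum over components of P(select component) * P(x | component).
        return sum(c["weight"] * gaussian_pdf(x, c["mean"], c["var"])
                   for c in mixture)

    print(emission_density(0.5, mixture))  # density of observing 0.5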
See also
- Bayesian inference
- Estimation theory
- Hierarchical hidden Markov model
- Layered hidden Markov model
- Hidden semi-Markov model
- Variable-order Markov model
- Sequential dynamical system
- Conditional random field
- Baum-Welch algorithm
- Viterbi algorithm
- Poisson hidden Markov model
- Hidden Bernoulli Model
- HMMER, a free hidden Markov model program for protein sequence analysis
- HHpred / HHsearch free server and software for protein sequence searching
Notes
References
- Lawrence R. Rabiner (1989). "A tutorial on Hidden Markov Models and selected applications in speech recognition" (PDF). Proceedings of the IEEE. 77 (2): 257–286. doi:10.1109/5.18626.
- Xuedong Huang, M. Jack, and Y. Ariki (1990). Hidden Markov Models for Speech Recognition. Edinburgh University Press. ISBN 0748601627.
- Richard Durbin, Sean R. Eddy, Anders Krogh, and Graeme Mitchison (1999). Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press. ISBN 0-521-62971-3.
- M. Bishop and E. Thompson (1986). "Maximum Likelihood Alignment of DNA Sequences". Journal of Molecular Biology. 190: 159–165.
- Xuedong Huang, Alex Acero, and Hsiao-Wuen Hon (2001). Spoken Language Processing. Prentice Hall. ISBN 0-13-022616-5.
- Lior Pachter and Bernd Sturmfels (2005). Algebraic Statistics for Computational Biology. Cambridge University Press. ISBN 0-521-85700-7.
- Olivier Cappé, Eric Moulines, and Tobias Rydén (2005). Inference in Hidden Markov Models. Springer. ISBN 0-387-40264-0.
- Kristie Seymore, Andrew McCallum, and Roni Rosenfeld (1999). "Learning Hidden Markov Model Structure for Information Extraction". AAAI 99 Workshop on Machine Learning for Information Extraction.
- Li J, Najmi A, and Gray RM (2000). "Image classification by a two dimensional hidden Markov model". IEEE Transactions on Signal Processing. 48 (2): 517–533. doi:10.1109/78.823977.
- Ephraim Y and Merhav N (2002). "Hidden Markov processes". IEEE Transactions on Information Theory. 48: 1518–1569. doi:10.1109/TIT.2002.1003838.
- Newberg LA (2009). "Error statistics of hidden Markov model and hidden Boltzmann model results". BMC Bioinformatics. 10: article 212. doi:10.1186/1471-2105-10-212. PMC 2722652. PMID 19589158.
- B. Pardo and W. Birmingham (2005). "Modeling Form for On-line Following of Musical Performances". AAAI-05 Proceedings, July 2005.
- Thad Starner and Alex Pentland (1995). Visual Recognition of American Sign Language Using Hidden Markov Models. Master's Thesis, MIT Program in Media Arts, February 1995.
- Satish L and Gururaj BI (April 2003). "Use of hidden Markov models for partial discharge pattern classification". IEEE Transactions on Dielectrics and Electrical Insulation.
The path-counting algorithm, an alternative to the Baum-Welch algorithm:
- Davis RIA and Lovell BC (2000). "Comparing and evaluating HMM ensemble training algorithms using train and test and condition number criteria". Journal of Pattern Analysis and Applications. 0 (0): 1–7.
External links
- Hidden Markov Model (HMM) Toolbox for Matlab (by Kevin Murphy)
- Hidden Markov Model Toolkit (HTK) (a portable toolkit for building and manipulating hidden Markov models)
- Hidden Markov Model R-Package (to set up, apply, and make inference with discrete-time and discrete-space hidden Markov models)
- Hidden Markov Models (an exposition using basic mathematics)
- GHMM Library (home page of the GHMM Library project)
- CL-HMM Library (HMM Library for Common Lisp)
- Jahmm Java Library (general-purpose Java library)
- A step-by-step tutorial on HMMs (University of Leeds)
- Software for Markov Models and Processes (TreeAge Software)
- Hidden Markov Models (by Narada Warakagoda)
- HMM and other statistical programs (Implementation in C by Tapas Kanungo)
- The hmm package (a Haskell library for working with hidden Markov models)
- GT2K (Georgia Tech Gesture Toolkit)
- Forward algorithm
- Switching Autoregressive Hidden Markov Model (SAR HMM)
- Hidden Markov Models (online calculator for Viterbi path and probabilities; examples with Perl source code)