Talk:Hidden Markov model
WikiProject Statistics (Unassessed)
WikiProject Robotics (Start‑class, High‑importance)
Is the reference to Plato's allegory absolutely necessary? I don't feel like it helps the explanation here.
- Agreed, Plato's cave seems like a bit of a stretch here. Probably a more down-to-earth explanation would be appropriate. Feel free to rework it if you have a better expression. Happy editing, Wile E. Heresiarch 15:31, 31 May 2004 (UTC)
Mental Model
I've added the genie-ball-urn model from Rabiner 89 right at the start of the article. People who search for HMM on Wikipedia are looking for something understandable. I think this model is much more readable to someone who has never attended a lecture on Markov processes. Maximilianh (talk) 11:28, 7 July 2010 (UTC)
error in image File:HMMsequence.svg (3rd diagram)
State 2 doesn't produce the star, so sequences 2 and 4 aren't possible for the observed sequence. I don't know how to correct this error. —Preceding unsigned comment added by Leven101 (talk • contribs) 15:42, 1 October 2009 (UTC)
bioinformatics reference
The use of HMMs in bioinformatics cites Durbin et al. from 1999:
In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.[3]
I think a far better reference would be Churchill GA (1989) Stochastic models for heterogeneous DNA sequences. Bull Math Biol 51:79–94, which I think might be the first paper suggesting that a DNA sequence is determined by states in a Markov model. —Preceding unsigned comment added by 209.222.206.50 (talk) 16:03, 22 April 2009 (UTC)
What is a token in this context?
"over the possible output tokens. "
About rain.
" In this example, there is only a 30% chance that tomorrow will be sunny if today is rainy. "
Rationally, after a rainy day, the chance of a sunny day should be much higher, since rain releases water that was deposited in the sky as clouds during sunny days...
Shushinla 06:10, 14 October 2005 (UTC)
- It's only a hypothetical example. --MarkSweep✍ 06:41, 14 October 2005 (UTC)
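For what it's worth, the 30% figure in the quoted sentence is just one entry of a row-stochastic transition matrix. A toy version of the weather example (all numbers except the 0.3 are made up for illustration):

```python
# Toy transition matrix for the rainy/sunny example.
# Each row is the distribution over tomorrow's weather given today's,
# so every row must sum to 1. Only the 0.3 (rainy -> sunny) comes from
# the quoted sentence; the other entries are invented.
transition = {
    "rainy": {"rainy": 0.7, "sunny": 0.3},   # 30% chance of sun after rain
    "sunny": {"rainy": 0.4, "sunny": 0.6},
}

for today, row in transition.items():
    assert abs(sum(row.values()) - 1.0) < 1e-9, f"row for {today} must sum to 1"

# Probability that a rainy day is followed by a sunny one:
print(transition["rainy"]["sunny"])  # 0.3
```

Whether 0.3 is realistic is beside the point; the model only requires each row to be a valid probability distribution.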
User:Jiali's references
Not every paper that makes use of HMMs belongs in the References section (e.g. this one doesn't) - what makes these papers suitable? — ciphergoth 08:54, 2 March 2006 (UTC)
I found this interesting as well: HMM used to analyze sequences of HTTPS requests to find the most plausible resources accessed. 88.217.80.238 (talk) 14:49, 6 July 2008 (UTC)
The three functions of HMM
Maybe we can add a paragraph describing the three main functions: Evaluate, Decode and Learn. What do you think? JeDi 09:46, 24 July 2006 (UTC)
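For reference, the three problems can be stated concretely by brute force over all hidden paths (a made-up 2-state, 2-output model; the efficient algorithms are forward for Evaluate, Viterbi for Decode, and Baum-Welch for Learn):

```python
from itertools import product

# Made-up 2-state, 2-output model, for illustration only.
states = [0, 1]
pi = [0.5, 0.5]                      # initial state distribution
a = [[0.8, 0.2], [0.3, 0.7]]         # a[i][j] = P(next=j | cur=i)
b = [[0.6, 0.4], [0.1, 0.9]]         # b[i][k] = P(obs=k | state=i)

def path_prob(path, obs):
    """Joint probability P(path, obs) under the model."""
    p = pi[path[0]] * b[path[0]][obs[0]]
    for t in range(1, len(obs)):
        p *= a[path[t - 1]][path[t]] * b[path[t]][obs[t]]
    return p

def evaluate(obs):
    """Evaluate: P(obs), summed over all hidden paths (forward does this efficiently)."""
    return sum(path_prob(p, obs) for p in product(states, repeat=len(obs)))

def decode(obs):
    """Decode: most likely hidden path (Viterbi does this efficiently)."""
    return max(product(states, repeat=len(obs)), key=lambda p: path_prob(p, obs))

# Learn (estimating pi, a, b from observed data) is done with Baum-Welch / EM;
# it has no comparably short brute-force statement and is omitted here.
```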
Formal definition needed, confusing diagram
I think this article needs a more formal definition where the various components (state transition matrix, output probabilities, state space, etc) are listed explicitly.

x — hidden states
y — observable outputs
a — transition probabilities
b — output probabilities
Moreover, I think there is a problem with the diagram describing state transitions (the one at the top right), which, as far as I can tell, is supposed to graphically illustrate the probabilistic parameters of an HMM, i.e. the probabilities of state transitions and of making observations. First, the diagram uses x1, x2, x3 to denote states, yet x(t) is used elsewhere to denote the hidden state variable at time step t; that is, x(t) is a variable that can take on one of the values in a state space S (which here contains x1, x2, x3). Perhaps s1, s2, s3 would be more appropriate for the states. Second, quite a few transitions are missing from the diagram. Third, the diagram apparently contains three outputs y1, y2, y3, yet not all output probabilities are specified through appropriate edges; only three are given: the probability of observing y1 given state x1 (b1), of observing y2 given state x2 (b2), and of observing y3 given state x3 (b3). Shouldn't the model specify a probability for observing any output given any state, i.e. for every output in the observation space and every state in the state space? As it is, I think the diagram creates more confusion than it resolves; the second diagram, showing the general architecture, is what I expect to see in an article on HMMs.
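To make the requested formal definition concrete, the components could be written out explicitly like this (a hypothetical 2-state, 3-output toy model with made-up numbers). Note that b gives a full output distribution for every state, which is exactly what the diagram fails to show:

```python
# Explicit components of a discrete HMM (toy numbers, for illustration only).
states = ["s1", "s2"]              # hidden state space S
outputs = ["y1", "y2", "y3"]       # observation space

# pi: initial state distribution
pi = {"s1": 0.6, "s2": 0.4}

# a: transition probabilities, a[i][j] = P(x(t+1) = j | x(t) = i)
a = {
    "s1": {"s1": 0.7, "s2": 0.3},
    "s2": {"s1": 0.4, "s2": 0.6},
}

# b: output probabilities, b[i][k] = P(y(t) = k | x(t) = i).
# Every state carries a distribution over ALL outputs, not just one
# edge per state as in the criticized diagram.
b = {
    "s1": {"y1": 0.5, "y2": 0.4, "y3": 0.1},
    "s2": {"y1": 0.1, "y2": 0.3, "y3": 0.6},
}

for name, rows in [("pi", [pi]), ("a", a.values()), ("b", b.values())]:
    for row in rows:
        assert abs(sum(row.values()) - 1.0) < 1e-9, f"{name} rows must sum to 1"
```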
Graphs
What does everyone think of the following replacement graphs/diagrams in SVG format? --Thorwald 22:51, 10 June 2007 (UTC)


- The fonts in the second one are unreadably tiny. linas 02:10, 1 September 2007 (UTC)
Computer vision category
I removed this article from the computer vision category. HMMs are not unrelated to CV, but:
- It is not a methodology developed within CV or specific to CV
- HMM is already listed under the machine learning category which is linked to the CV category.
--KYN 08:39, 28 July 2007 (UTC)
Probability of an observed sequence
"There exist a variety of algorithms that, while not providing exact results, do provide reasonably good results, with considerably less demand on storage and compute time. These include the forward-backward algorithm ..." (emphasis mine)
Please correct me if I'm wrong, but doesn't the forward-backward algorithm actually compute the exact probability of an observed sequence? I'm referring to my Biological Sequence Analysis book (Durbin, Eddy, Krogh, & Mitchison), page 58:
"...the full probability can itself be calculated by ... the forward algorithm."
--Loniousmonk 23:22, 30 September 2007 (UTC)
Yes indeed, the forward-backward algorithm is exact. Tomixdf 05:58, 1 October 2007 (UTC)
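For anyone following along: the forward algorithm is exact because it merely reorganizes the sum over all hidden paths via dynamic programming. A minimal sketch (toy model with made-up parameters) checking it against brute-force enumeration:

```python
from itertools import product

# Toy 2-state HMM with made-up parameters.
states = [0, 1]
pi = [0.6, 0.4]                      # initial distribution
a = [[0.7, 0.3], [0.4, 0.6]]         # a[i][j] = P(next=j | cur=i)
b = [[0.9, 0.1], [0.2, 0.8]]         # b[i][k] = P(obs=k | state=i)

def forward(obs):
    """P(obs) via the forward recursion: O(T * N^2)."""
    alpha = [pi[s] * b[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * a[i][j] for i in states) * b[j][o]
                 for j in states]
    return sum(alpha)

def brute_force(obs):
    """P(obs) by summing over every hidden path: O(N^T), but exact by definition."""
    total = 0.0
    for path in product(states, repeat=len(obs)):
        p = pi[path[0]] * b[path[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= a[path[t - 1]][path[t]] * b[path[t]][obs[t]]
        total += p
    return total

obs = [0, 1, 1, 0]
assert abs(forward(obs) - brute_force(obs)) < 1e-12  # identical, not approximate
```

The two agree up to floating-point rounding, which is what "exact" means here; the speedup comes purely from factoring the sum, not from approximation.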
Hi, the "textbook explanation of HMMs" link seems to be broken. —Preceding unsigned comment added by 68.162.14.226 (talk) 03:50, 11 January 2008 (UTC)
Indeed, I wondered about the reference to the Viterbi algorithm for speeding up the calculation of posterior probabilities of an observed sequence. My understanding is that the forward-backward algorithm calculates the posterior probability of a sequence, while Viterbi calculates the most likely sequence of hidden states. This is at least mentioned in the section "Using hidden Markov models". Therefore the sentence "The calculation can however be sped up enormously using the Forward algorithm [1] or the equivalent Backward algorithm" is inconsistent with the rest of the article and references the wrong algorithm, IMHO. So as not to confuse the article further, I will leave any modifications to the original author and instead start editing the forward-backward page. Hopefully this will be less confusing.
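To spell out the distinction being made here: the forward algorithm sums over hidden paths to get P(observations), whereas Viterbi maximizes over them to get the single most likely state sequence. A sketch of Viterbi on a made-up 2-state model (same recursion shape as forward, with max in place of sum):

```python
# Toy parameters, for illustration only.
states = [0, 1]
pi = [0.6, 0.4]                      # initial distribution
a = [[0.7, 0.3], [0.4, 0.6]]         # transition probabilities
b = [[0.9, 0.1], [0.2, 0.8]]         # emission probabilities

def viterbi(obs):
    """Return the most probable hidden state sequence for obs."""
    delta = [pi[s] * b[s][obs[0]] for s in states]  # best path prob ending in s
    back = []                                        # backpointers per step
    for o in obs[1:]:
        ptr, new = [], []
        for j in states:
            best_i = max(states, key=lambda i: delta[i] * a[i][j])
            new.append(delta[best_i] * a[best_i][j] * b[j][o])
            ptr.append(best_i)
        back.append(ptr)
        delta = new
    # Backtrack from the best final state.
    path = [max(states, key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    path.reverse()
    return path
```

Summing where Viterbi takes a max (and dropping the backpointers) turns this into the forward algorithm, which is why the two are so easy to conflate.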