Maximum-entropy Markov model

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Qwertyus (talk | contribs) at 17:08, 30 June 2010 (stub).

In machine learning, a maximum-entropy Markov model (MEMM) is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models. MEMMs find applications in natural language processing, specifically in information extraction.
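The combination described above means that, unlike an HMM, an MEMM models the conditional probability of the next state given the previous state and the current observation directly, as a maximum-entropy (softmax) distribution over arbitrary feature functions. The sketch below illustrates this per-state distribution; the function names, toy feature functions, and weights are all hypothetical, and real MEMMs would learn the weights from data.

```python
import math

def memm_transition_probs(prev_state, observation, states, features, weights):
    """Illustrative MEMM local model: P(s | s', o) proportional to
    exp(sum_a lambda_a * f_a(s', o, s)), normalized over states s.
    `features` is a list of functions f(prev_state, obs, state) -> float,
    `weights` the corresponding lambda parameters (hypothetical values here)."""
    scores = {}
    for s in states:
        scores[s] = math.exp(sum(w * f(prev_state, observation, s)
                                 for w, f in zip(weights, features)))
    z = sum(scores.values())  # per-state normalizer, as in a MaxEnt classifier
    return {s: v / z for s, v in scores.items()}

# Toy feature functions for a NOUN/OTHER labeling task (made up for illustration):
features = [
    # capitalized observations tend to be labeled NOUN
    lambda sp, o, s: 1.0 if o[0].isupper() and s == "NOUN" else 0.0,
    # labels tend to persist from the previous state
    lambda sp, o, s: 1.0 if sp == s else 0.0,
]
weights = [1.5, 0.5]

probs = memm_transition_probs("OTHER", "Paris", ["NOUN", "OTHER"], features, weights)
```

Because each state's outgoing distribution is normalized independently, these local conditionals multiply along a sequence; this per-state normalization is also the source of the label bias problem that conditional random fields later addressed.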

Linear-chain conditional random fields were designed to overcome certain weaknesses of MEMMs, notably the label bias problem that arises from per-state normalization.