Factored language model

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Gang Ji (talk | contribs) at 23:30, 13 July 2005. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
A factored language model (FLM) is an extension of a conventional language model. In an FLM, each word w_i is viewed as a vector of k factors: w_i = {f_i^1, ..., f_i^k}, where a factor may be, for example, the surface form, stem, or part-of-speech tag of the word. An FLM provides the probabilistic model P(f | f_1, ..., f_N), in which the prediction of a factor f is conditioned on N parent factors.
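The factored word representation and the conditional model P(f | f_1, ..., f_N) can be sketched as follows. This is a minimal illustration with hypothetical data, assuming a word is a (surface, stem, tag) triple and the tag is predicted from a single parent factor, the stem:

```python
from collections import Counter

# Hypothetical example: each word is a vector of factors,
# here (surface form, stem, part-of-speech tag).
tokens = [
    ("walks", "walk", "VERB"),
    ("walked", "walk", "VERB"),
    ("dog", "dog", "NOUN"),
]

# Estimate P(f | parent) by maximum likelihood from counts:
# predict the tag (factor index 2) from the stem (factor index 1).
joint = Counter()
context = Counter()
for word in tokens:
    stem, tag = word[1], word[2]
    joint[(stem, tag)] += 1
    context[stem] += 1

def prob(tag, stem):
    """MLE estimate of P(tag | stem); 0.0 for unseen stems."""
    if context[stem] == 0:
        return 0.0
    return joint[(stem, tag)] / context[stem]

print(prob("VERB", "walk"))  # 1.0: both "walk"-stem tokens are verbs
```

In practice an FLM conditions on several parent factors at once (e.g. previous stems and tags), which is where the smoothing discussed below becomes essential.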

As in N-gram models, smoothing techniques are necessary for parameter estimation. In particular, generalized back-off is used when training an FLM: if a full context of parent factors is too rarely observed, the model backs off to estimates that drop one or more parents, and the resulting lower-order estimates are combined.
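The back-off idea above can be sketched as follows. This is a simplified illustration, not the exact formulation from the FLM literature: the event data, the count threshold, and the use of max() to combine backed-off estimates are all assumptions for the sketch (real generalized back-off also applies discounting and supports other combination functions):

```python
# Toy training data: each event is (predicted factor, parent_1, parent_2),
# e.g. predicting a word from the previous word and the previous tag.
events = [
    ("dog", "the", "DET"),
    ("dog", "a", "DET"),
    ("cat", "the", "DET"),
]

def mle(f, parents, events):
    """MLE of P(f | parents), where parents maps a factor index to a value."""
    den = sum(1 for e in events
              if all(e[i] == v for i, v in parents.items()))
    num = sum(1 for e in events
              if e[0] == f and all(e[i] == v for i, v in parents.items()))
    return num / den if den else 0.0

def generalized_backoff(f, parents, events, threshold=1):
    """If the full parent context is seen at least `threshold` times, use the
    MLE directly; otherwise drop each parent in turn (any parent may be
    dropped, not just the most distant one) and combine the recursive
    estimates, here with max()."""
    den = sum(1 for e in events
              if all(e[i] == v for i, v in parents.items()))
    if den >= threshold or not parents:
        return mle(f, parents, events)
    backed_off = [
        generalized_backoff(f,
                            {i: v for i, v in parents.items() if i != drop},
                            events, threshold)
        for drop in parents
    ]
    return max(backed_off)

# Seen context: plain MLE applies.
print(generalized_backoff("dog", {1: "the", 2: "DET"}, events))  # 0.5
# Unseen context ("an" never occurs): backs off to the DET-only estimate.
print(generalized_backoff("dog", {1: "an", 2: "DET"}, events))   # 2/3
```

The ability to drop parents in any order, rather than only the fixed oldest-first order of N-gram back-off, is what distinguishes the generalized scheme.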
