Talk:Word n-gram language model

From Wikipedia, the free encyclopedia
This is the current revision of this page, as edited by Qwerfjkl (bot) (talk | contribs) at 06:09, 31 January 2024 (Implementing WP:PIQA (Task 26)). The present address (URL) is a permanent link to this version.

Todos

  • Add a history section. Jurafsky and Martin have a short but useful section about this.
  • Add a section on smoothing
  • Add a section on applications

Colin M (talk) 17:25, 10 March 2023 (UTC)[reply]

I was also thinking of merging in a bunch of content from n-gram (which is currently an awkward combination of being about n-grams themselves and about n-gram models). But there's a complication: that article covers n-gram models as applied to a broader range of sequences, whereas this article currently focuses on modelling sequences of words. We could have yet another article about n-gram models more broadly, but there don't seem to be enough differences to make that distinction worthwhile. It would probably be better to broaden the scope of this article to match. Colin M (talk) 17:56, 10 March 2023 (UTC)[reply]