Small language model

From Wikipedia, the free encyclopedia

Small language models (SLMs) are artificial intelligence language models designed for natural language processing tasks, including text generation. Unlike large language models (LLMs), small language models are much smaller in scale and scope.

An LLM typically has a parameter count in the hundreds of billions, and some models exceed a trillion parameters. This scale lets an LLM encode a vast amount of information, which generally improves the quality of the content it generates, but it also demands enormous computational power, making it infeasible for an individual to train an LLM on a single computer with one GPU.
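
As a rough back-of-the-envelope illustration of this scale gap (a sketch using assumed values, not figures from the cited sources), the memory needed merely to store a model's weights grows linearly with its parameter count:

    # Rough estimate of the memory needed just to store model weights.
    # Assumes 2 bytes per parameter (fp16/bf16), a common inference format.
    def weight_memory_gb(num_parameters: int, bytes_per_param: int = 2) -> float:
        return num_parameters * bytes_per_param / 1024**3

    print(f"1B-parameter SLM:   {weight_memory_gb(1_000_000_000):.1f} GB")    # ~1.9 GB
    print(f"500B-parameter LLM: {weight_memory_gb(500_000_000_000):.0f} GB")  # ~931 GB

By this estimate, a 500-billion-parameter model's weights alone exceed the memory of any single consumer GPU, while a one-billion-parameter model fits comfortably.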

Small language models, by contrast, use far fewer parameters, typically ranging from a few million to a few billion. This makes them feasible to train and host in resource-constrained environments such as a single computer or even a mobile device.[1][2][3][4]
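
As a minimal sketch of what hosting an SLM on a single machine can look like, the following uses the Hugging Face transformers library; the model identifier is only an illustrative choice (microsoft/phi-2, a roughly 2.7-billion-parameter model), and any comparably sized causal language model would work the same way:

    # Minimal sketch: run a small language model on a single machine using the
    # Hugging Face "transformers" library. The model id is only an example.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/phi-2"  # example SLM, ~2.7B parameters
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer("Small language models are", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))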

References

  1. ^ Rina Diane Caballar (31 October 2024). "What are small language models?". IBM.
  2. ^ John Johnson (25 February 2025). "Small Language Models (SLM): A Comprehensive Overview". Hugging Face.
  3. ^ Kate Whiting. "What is a small language model and how can businesses leverage this AI tool?". World Economic Forum.
  4. ^ "SLM (Small Language Model) with your Data". Microsoft. 11 July 2024.