History of natural language processing


The history of natural language processing describes the advances of natural language processing. There is some overlap with the history of machine translation and the history of artificial intelligence.

In 1950, Alan Turing published his famous article "Computing Machinery and Intelligence"[1] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably - on the basis of the conversational content alone - between the program and a real human.

  • The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. The authors claimed that within three to five years, machine translation would be a solved problem.[2]
  • In the late 1960s, SHRDLU, a natural-language system working in a restricted "blocks world" with a restricted vocabulary, performed extremely well, leading researchers to great optimism.

An early success was Daniel Bobrow's program STUDENT, which could solve high school algebra word problems.[3]


However, real progress was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill expectations, funding for machine translation was dramatically reduced.

ELIZA, a simulation of a Rogerian psychotherapist written by Joseph Weizenbaum between 1964 and 1966, used almost no information about human thought or emotion, yet sometimes provided a startlingly human-like interaction. When the "patient" exceeded its very small knowledge base, ELIZA might provide a generic response, for example, responding to "My head hurts" with "Why do you say your head hurts?". ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. In fact, however, ELIZA had no idea what it was talking about: it simply gave a canned response or repeated back what was said to it, rephrasing its reply with a few grammar rules.[4]
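
The behaviour described above can be illustrated with a minimal, hypothetical sketch: keyword patterns trigger response templates, the captured text is echoed back with simple pronoun swaps, and a canned reply covers everything else. The rules and reflection table here are invented for illustration and are not Weizenbaum's original script.

    import random
    import re

    # Pronoun "reflections" used to rephrase the user's words back at them.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "am": "are",
        "you": "I", "your": "my", "are": "am",
    }

    # A tiny rule set in the spirit of ELIZA's keyword scripts: each pattern
    # captures part of the input so it can be echoed back in a template.
    RULES = [
        (re.compile(r"my (.*) hurts", re.I), ["Why do you say your {0} hurts?"]),
        (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"i am (.*)", re.I), ["Why are you {0}?"]),
    ]

    # Content-free replies for input that matches no keyword.
    GENERIC = ["Please tell me more.", "I see.", "Can you elaborate on that?"]

    def reflect(fragment: str) -> str:
        """Swap first- and second-person words so the echo sounds natural."""
        return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

    def respond(sentence: str) -> str:
        for pattern, templates in RULES:
            match = pattern.search(sentence)
            if match:
                return random.choice(templates).format(reflect(match.group(1)))
        return random.choice(GENERIC)

    print(respond("My head hurts"))        # Why do you say your head hurts?
    print(respond("I feel sad"))           # e.g. Why do you feel sad?
    print(respond("The weather is nice"))  # a generic, canned reply

A real ELIZA script contained many more keywords, ranked them by priority, and could remember earlier inputs, but its control structure was essentially this: match a keyword, transform the captured text, and otherwise fall back to a content-free reply.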


A semantic net represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes; a toy sketch of this representation appears after the list below. The first AI program to use a semantic net was written by Ross Quillian[5] and the most successful (and controversial) version was Roger Schank's Conceptual Dependency.[6] During the 1970s many programmers began to write 'conceptual ontologies', which structured real-world information into computer-understandable data:

  • MARGIE (Schank, 1975)
  • SAM (Cullingford, 1978)
  • PAM (Wilensky, 1978)
  • TaleSpin (Meehan, 1976)
  • QUALM (Lehnert, 1977)
  • Politics (Carbonell, 1979)
  • Plot Units (Lehnert, 1981)
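
As a rough illustration of the semantic-net representation described above, the toy sketch below stores concepts as nodes and labelled links as relations, and answers simple queries over them. The concepts and relations are invented examples, not structures taken from Quillian's program or from Conceptual Dependency.

    from collections import defaultdict

    class SemanticNet:
        """A toy semantic net: concept nodes connected by labelled relation links."""

        def __init__(self):
            # node -> relation label -> set of related nodes
            self.links = defaultdict(lambda: defaultdict(set))

        def add(self, source: str, relation: str, target: str) -> None:
            """Record a directed, labelled link such as house --has-a--> door."""
            self.links[source][relation].add(target)

        def related(self, source: str, relation: str) -> set:
            """Return every node reachable from `source` over `relation`."""
            return set(self.links[source][relation])

    net = SemanticNet()
    net.add("house", "has-a", "door")
    net.add("house", "has-a", "roof")
    net.add("door", "is-a", "opening")

    print(net.related("house", "has-a"))  # {'door', 'roof'}
    print(net.related("door", "is-a"))    # {'opening'}

Systems such as SAM and PAM layered richer structure (scripts, plans, goals) on top of this basic node-and-link idea.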

During this time, many chatterbots were written, including PARRY, Racter, and Jabberwacky.

Work on knowledge representation and natural-language understanding continued into the 1980s with systems such as:

  • KL-ONE (Sondheimer et al., 1984)
  • MOPTRANS (Lytinen, 1984)
  • KODIAK (Wilensky, 1986)
  • Absity (Hirst, 1987)

Starting in the late 1980s, as computational power increased and became less expensive, more interest began to be shown in statistical models for machine translation.

References

  1. ^ (Turing 1950)
  2. ^ Hutchins, J. (2005)
  3. ^ McCorduck 2004, p. 286; Crevier 1993, pp. 76–79; Russell & Norvig 2003, p. 19
  4. ^ McCorduck 2004, pp. 291–296; Crevier 1993, pp. 134–139
  5. ^ Crevier 1993, pp. 79–83
  6. ^ Crevier 1993, pp. 164–172