History of natural language processing

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by BusyCoder (talk | contribs) at 15:39, 17 February 2011 (Software). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

The history of natural language processing describes the advances in natural language processing. There is some overlap with the history of machine translation and the history of artificial intelligence.

Theoretical history

The history of machine translation dates back to the seventeenth century, when philosophers such as Leibniz and Descartes put forward proposals for codes which would relate words between languages. All of these proposals remained theoretical, and none resulted in the development of an actual machine.

The first patents for "translating machines" were applied for in the mid-1930s. One proposal, by Georges Artsrouni, was simply an automatic bilingual dictionary using paper tape. The other proposal, by Peter Troyanskii, a Russian, was more detailed. It included both the bilingual dictionary and a method for dealing with grammatical roles between languages, based on Esperanto.

In 1950, Alan Turing published his famous article "Computing Machinery and Intelligence"[1] which proposed what is now called the Turing test as a criterion of intelligence. This criterion depends on the ability of a computer program to impersonate a human in a real-time written conversation with a human judge, sufficiently well that the judge is unable to distinguish reliably, on the basis of the conversational content alone, between the program and a real human.

In 1957, Noam Chomsky's Syntactic Structures revolutionized linguistics with "universal grammar", a rule-based system of syntactic structures.[2]

However, real progress in NLP was much slower, and after the ALPAC report in 1966, which found that ten years of research had failed to fulfill expectations, funding was dramatically reduced internationally.

In 1969 Roger Schank introduced the conceptual dependency theory for natural language understanding.[3] This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural language input.[4] Instead of phrase structure rules, ATNs used an equivalent set of finite-state automata that were called recursively. ATNs and their more general format, called "generalized ATNs", continued to be used for a number of years.
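The recursive-network idea behind ATNs can be sketched in a few lines. The sketch below implements only a plain recursive transition network, without the registers and tests that make an ATN "augmented", and its grammar, lexicon, and function names are illustrative assumptions rather than anything from Woods's paper.

```python
# Sketch of a recursive transition network: each network is a finite-state
# automaton whose arcs either consume a word of a lexical category or
# recursively invoke another network (a "PUSH" arc in ATN terminology).
# Grammar and lexicon are toy examples for illustration only.

LEXICON = {
    "the": "DET", "a": "DET",
    "dog": "N", "cat": "N", "ball": "N",
    "saw": "V", "chased": "V",
}

# Each network maps a state to its outgoing arcs (label, next_state).
# Labels that name another network trigger recursion; "$" is acceptance.
NETWORKS = {
    "S":  {0: [("NP", 1)], 1: [("V", 2)], 2: [("NP", "$")]},
    "NP": {0: [("DET", 1)], 1: [("N", "$")]},
}

def walk(net, state, words, pos):
    """Return the set of positions where `net` can finish from `pos`."""
    if state == "$":
        return {pos}
    results = set()
    for label, nxt in NETWORKS[net].get(state, []):
        if label in NETWORKS:  # PUSH arc: recurse into the subnetwork
            for end in walk(label, 0, words, pos):
                results |= walk(net, nxt, words, end)
        elif pos < len(words) and LEXICON.get(words[pos]) == label:
            results |= walk(net, nxt, words, pos + 1)  # consume one word
    return results

def accepts(sentence):
    words = sentence.lower().split()
    return len(words) in walk("S", 0, words, 0)

print(accepts("the dog chased a cat"))  # True
print(accepts("the dog the cat"))       # False
```

Because the NP network is invoked recursively wherever an NP arc appears, the same small automaton parses noun phrases in both subject and object position, which is the economy that made transition networks attractive over flat phrase structure rules.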

Software

Software | Year | Creator | Description | Reference
Georgetown experiment | 1954 | Georgetown University and IBM | Involved fully automatic translation of more than sixty Russian sentences into English. |
STUDENT | 1964 | Daniel Bobrow | Could solve high school algebra word problems. | [5]
ELIZA | 1964 | Joseph Weizenbaum | A simulation of a Rogerian psychotherapist, rephrasing its responses with a few grammar rules. | [6]
SHRDLU | 1970 | Terry Winograd | A natural language system working in restricted "blocks worlds" with restricted vocabularies; worked extremely well. |
PARRY | 1972 | Kenneth Colby | A chatterbot. |
KL-ONE | 1974 | Sondheimer et al. | A knowledge representation system in the tradition of semantic networks and frames; it is a frame language. |
MARGIE | 1975 | Schank | |
TaleSpin (software) | 1976 | Meehan | |
QUALM | | Lehnert | |
SAM (software) | 1978 | Cullingford | |
PAM (software) | 1978 | Wilensky | |
LIFER/LADDER | 1978 | Hendrix | A natural language interface to a database of information about US Navy ships. |
Politics (software) | 1979 | Carbonell | |
Plot Units (software) | 1981 | Lehnert | |
Jabberwacky | 1982 | Rollo Carpenter | Chatterbot with the stated aim to "simulate natural human chat in an interesting, entertaining and humorous manner". |
MUMBLE (software) | 1982 | McDonald | |
Racter | 1983 | William Chamberlain and Thomas Etter | Chatterbot that generated English-language prose at random. |
MOPTRANS | 1984 | Lytinen | |
KODIAK (software) | 1986 | Wilensky | |
Absity (software) | 1987 | Hirst | |
Watson (artificial intelligence software) | 2011 | IBM | A question answering system that won the Jeopardy! contest, defeating the best human players. |
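ELIZA's rephrasing technique can be illustrated with a few keyword rules: match a pattern, reflect first-person words into second person, and reassemble the captured text into a reply. The patterns and names below are illustrative assumptions, not Weizenbaum's original DOCTOR script.

```python
import re

# Toy sketch of ELIZA-style rules: each rule pairs a regular-expression
# pattern with a reply template; captured text is "reflected" (I -> you,
# my -> your) before being slotted back into the reply.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r".*"), "Please tell me more."),  # catch-all fallback
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I need my notebook"))  # Why do you need your notebook?
print(respond("I am sad"))            # How long have you been sad?
```

The same mechanism, with a much larger rule set and ranked keywords, is what let ELIZA sustain the impression of a Rogerian therapist despite having no understanding of the conversation.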

References

  1. ^ Turing, Alan (1950). "Computing Machinery and Intelligence". Mind 59 (236): 433–460.
  2. ^ "SEM1A5 - Part 1 - A brief history of NLP". Retrieved 2010-06-25.
  3. ^ Roger Schank, 1969, A conceptual dependency parser for natural language Proceedings of the 1969 conference on Computational linguistics, Sång-Säby, Sweden, pages 1-3
  4. ^ Woods, William A. (1970). "Transition Network Grammars for Natural Language Analysis". Communications of the ACM 13 (10): 591–606.
  5. ^ McCorduck 2004, p. 286; Crevier 1993, pp. 76–79; Russell & Norvig 2003, p. 19
  6. ^ McCorduck 2004, pp. 291–296; Crevier 1993, pp. 134–139