Conceptual dependency theory

Conceptual dependency theory is a model of natural language understanding used in artificial intelligence systems.

Roger Schank at Stanford University introduced the model in 1969.[1] The model was used extensively by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.

Schank developed the model to represent knowledge from natural language input to computers. Partly influenced by the work of Sydney Lamb, Schank aimed to make the meaning representation independent of the words used in the input, i.e. two sentences identical in meaning would share a single representation. The system was also intended to draw logical inferences.[2]
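As an illustrative sketch only (the field names and Python form are assumptions, not Schank's original notation), the paraphrases "Mary gave John a book" and "John received a book from Mary" can both be reduced to one structure built around the conceptual dependency primitive act ATRANS (transfer of possession), and a simple inference can then be drawn from that structure rather than from the surface wording:

from dataclasses import dataclass

@dataclass(frozen=True)
class Conceptualization:
    """One conceptualization: a primitive act plus its role fillers."""
    act: str        # primitive act, e.g. "ATRANS" (transfer of possession)
    actor: str      # entity performing the act
    obj: str        # object acted upon
    source: str     # donor / starting point
    recipient: str  # receiver / end point

# Two different surface sentences...
# "Mary gave John a book."
gave = Conceptualization(act="ATRANS", actor="Mary", obj="book",
                         source="Mary", recipient="John")
# "John received a book from Mary."
received = Conceptualization(act="ATRANS", actor="Mary", obj="book",
                             source="Mary", recipient="John")

# ...map to a single, word-independent representation.
assert gave == received

# A toy inference rule over the representation: after an ATRANS,
# the recipient possesses the object.
def possessions(events):
    owns = {}
    for e in events:
        if e.act == "ATRANS":
            owns[e.obj] = e.recipient
    return owns

print(possessions([gave]))   # {'book': 'John'}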

Notes

  1. ^ Roger Schank (1969). "A conceptual dependency parser for natural language". Proceedings of the 1969 Conference on Computational Linguistics, Sång-Säby, Sweden, pp. 1–3.
  2. ^ Cardiff University, notes on conceptual dependency theory [1]