Never-Ending Language Learning

From Wikipedia, the free encyclopedia

The Never-Ending Language Learning system (NELL) is a semantic machine learning system developed by a research team at Carnegie Mellon University and supported by grants from DARPA, Google, and the NSF, with portions of the system running on a supercomputing cluster provided by Yahoo!.[1]

Process and goals

The goal of NELL and other semantic learning systems, such as IBM's Watson system, is to answer questions posed by users in natural language with no human intervention in the process.[2] Oren Etzioni of the University of Washington lauded the system's "continuous learning, as if NELL is exercising curiosity on its own, with little human help".[1]

By October 2010, NELL had doubled the number of relationships available in its knowledge base and had learned 440,000 new facts, with an accuracy of 87%.[3][1] Team leader Tom M. Mitchell, chairman of the machine learning department at Carnegie Mellon, described how NELL "self-corrects when it has more information, as it learns more", though it does sometimes arrive at incorrect conclusions, and such errors can compound: having deduced that Internet cookies were a kind of baked good, NELL went on to conclude from the phrases "I deleted my Internet cookies" and "I deleted my files" that "computer files" also belonged in the baked goods category.[4] Clear errors like these are corrected every few weeks by the members of the research team, and the system is then allowed to continue its learning process.[1]
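The compounding-error behavior described above can be illustrated with a toy sketch of bootstrapped, pattern-based category learning. This is an illustrative simplification, not NELL's actual algorithm (the corpus, seed instance, and pattern format below are invented for the example): once a wrong instance enters a category, the contexts it appears in become "trusted" patterns, which then pull in further wrong instances.

```python
# Toy bootstrapped category learner (illustrative only, not NELL's algorithm).
# A sentence containing a known category instance yields a context pattern;
# any word that fills a learned pattern is promoted into the category.

corpus = [
    "i baked some cupcakes",
    "i baked some cookies",
    "i deleted my cookies",   # "cookies" here means browser cookies
    "i deleted my files",
]

def learn(seeds, corpus, rounds=2):
    instances = set(seeds)
    patterns = set()
    for _ in range(rounds):
        # Learn: each sentence mentioning a known instance yields a template
        # with the instance position replaced by a wildcard "_".
        for sentence in corpus:
            words = sentence.split()
            for i, w in enumerate(words):
                if w in instances:
                    patterns.add(tuple(words[:i]) + ("_",))
        # Apply: any sentence matching a template promotes its final word.
        for sentence in corpus:
            words = sentence.split()
            for p in patterns:
                if len(words) == len(p) and all(
                    a == b or b == "_" for a, b in zip(words, p)
                ):
                    instances.add(words[-1])
    return instances

baked_goods = learn({"cupcakes"}, corpus)
# Round 1: "i baked some _" admits "cookies" (the baked kind).
# Round 2: "i deleted my cookies" now yields the pattern "i deleted my _",
# which wrongly admits "files" -- the error pattern the article describes.
```

The key point of the sketch is that the error is self-reinforcing: nothing inside the loop distinguishes a correct promotion from a wrong one, which is why periodic human correction, as described above, is needed.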

References

  1. ^ a b c d "Aiming to Learn as We Do, a Machine Teaches Itself". New York Times. October 4, 2010. Retrieved October 5, 2010. Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human.
  2. ^ Trader, Tiffany. "Machine Learns Language Starting with the Facts", HPCwire, October 5, 2010. Accessed October 5, 2010.
  3. ^ "NELL: Never-Ending Language Learning", Carnegie Mellon University. Accessed October 5, 2010.
  4. ^ VanHemert, Kyle. "Right Now A Computer Is Reading Online, Teaching Itself Language", Gizmodo, October 6, 2010. Accessed October 5, 2010.