Retroactive learning

From Wikipedia, the free encyclopedia

Retroactive learning is the reviewing of experiences and learning from them when sufficient time (or another resource) becomes available.[1]

Often it is not possible to learn while an event is occurring, because the agent lacks the specific information or resources it needs to learn at that moment. For example, an agent in a real-time environment may not have time to apply an iterative learning algorithm while it is performing a task. When time later becomes available, however, the agent can replay the events and learn from them. Episodic memory allows previous experiences to be relived or rehearsed once the necessary resources are available, so that they can be reanalyzed in light of new knowledge or additional experiences.
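The following sketch illustrates the general idea in Python; it is not taken from the cited dissertation, and all class and method names (Episode, RetroactiveLearner, act, replay_and_learn) are assumptions chosen for illustration. The agent only records episodes while acting, then replays them later to update its estimates.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Illustrative sketch: during real-time operation the agent only appends
# episodes to memory; learning is deferred until a later phase, when the
# stored episodes are replayed through a learning routine.

@dataclass
class Episode:
    observation: Tuple[str, ...]  # what the agent perceived
    action: str                   # what the agent did
    outcome: float                # feedback it received (e.g. reward)

@dataclass
class RetroactiveLearner:
    episodic_memory: List[Episode] = field(default_factory=list)
    action_values: Dict[Tuple, float] = field(default_factory=dict)

    def act(self, observation: Tuple[str, ...], action: str, outcome: float) -> None:
        """Real-time phase: record the experience, but do no learning yet."""
        self.episodic_memory.append(Episode(observation, action, outcome))

    def replay_and_learn(self, learning_rate: float = 0.1) -> None:
        """Offline phase: replay stored episodes and update value estimates."""
        for ep in self.episodic_memory:
            key = (ep.observation, ep.action)
            old = self.action_values.get(key, 0.0)
            # simple incremental update, applied retroactively during replay
            self.action_values[key] = old + learning_rate * (ep.outcome - old)


# Usage: act now, learn later when time permits.
agent = RetroactiveLearner()
agent.act(("low_battery",), "recharge", outcome=1.0)
agent.act(("low_battery",), "keep_working", outcome=-1.0)
agent.replay_and_learn()
print(agent.action_values)
```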

  1. ^ Nuxoll, Andrew M. (2007). Enhancing Intelligent Agents with Episodic Memory (Dissertation). http://deepblue.lib.umich.edu/bitstream/2027.42/57720/2/anuxoll_1.pdf