Retroactive learning

Retroactive learning[1] is the process of reviewing past experiences and learning from them once sufficient time (or another resource) becomes available.

Often, it is not possible to learn while an event is occurring because the agent lacks the specific information or resources that it needs to learn. For example, an agent in a real-time environment may not have time to apply an iterative learning algorithm while it is performing a task. However, when time becomes available, the agent can replay the events and learn from them. Episodic memory allows previous experiences to be relived or rehearsed once the resources are available, so that they can be reanalyzed in light of new knowledge or additional experiences.
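The idea can be illustrated with a minimal sketch: the agent only records cheap event descriptions while it is acting, and runs the expensive learning step later by replaying its episodic memory. The names below (Experience, EpisodicMemory, update_from_experience) are illustrative assumptions, not structures from Nuxoll's dissertation or the Soar architecture.

    # A sketch of retroactive learning via episodic memory (assumed names,
    # not the author's implementation).
    from dataclasses import dataclass, field
    from typing import Any, Callable, List

    @dataclass
    class Experience:
        """One recorded event: what the agent saw, did, and what followed."""
        observation: Any
        action: Any
        outcome: Any

    @dataclass
    class EpisodicMemory:
        """Stores experiences as they happen so they can be rehearsed later."""
        episodes: List[Experience] = field(default_factory=list)

        def record(self, experience: Experience) -> None:
            # Recording is cheap, so it can run under real-time constraints.
            self.episodes.append(experience)

        def replay(self, learn: Callable[[Experience], None]) -> None:
            # Retroactive learning: when time is available, relive each stored
            # experience and apply the (possibly expensive) learning step.
            for experience in self.episodes:
                learn(experience)

    # During the task: record only. When idle: replay and learn.
    memory = EpisodicMemory()
    memory.record(Experience("obstacle ahead", "turn left", "collision"))

    def update_from_experience(exp: Experience) -> None:
        # Placeholder for an iterative learning algorithm.
        print(f"learning: reconsider '{exp.action}' when seeing '{exp.observation}'")

    memory.replay(update_from_experience)

The key design point is the separation of concerns: recording during the task only appends to memory, while the learning algorithm is deferred until the agent has the time or resources to run it.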

  1. ^ Nuxoll, Andrew M. (2007). Enhancing Intelligent Agents with Episodic Memory. Dissertation. http://deepblue.lib.umich.edu/bitstream/2027.42/57720/2/anuxoll_1.pdf