Dynamic programming

Dynamic programming is a method for solving problems that is used in computer science, mathematics and economics. Using this method, a complex problem is split into simpler problems, which are then solved. At the end, the solutions of the simpler problems are used to find the solution of the original problem. Richard Bellman, a US mathematician, first used the term in the 1940s when he wanted to solve problems in the field of control theory. Dynamic programming can be used when a problem can be split into many smaller problems that are all quite similar. Bellman also stated what is today known as Bellman's Principle of Optimality:

"An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."

— Bellman, 1957
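
A simple example of this idea is computing the Fibonacci numbers, where each number is the sum of the two numbers before it, so the same smaller problems appear again and again. The following short Python sketch (the function name fib and the use of a list as the table are only choices made for this illustration) stores the answers to the smaller problems in a table and builds the final answer from them:

def fib(n):
    # Table that stores the answers to the smaller problems.
    table = [0, 1]
    for i in range(2, n + 1):
        # Each new answer is built from two answers already in the table.
        table.append(table[i - 1] + table[i - 2])
    return table[min(n, len(table) - 1)]

print(fib(10))  # prints 55

Because every smaller problem is solved only once and then looked up in the table, this takes far fewer steps than recomputing the same values over and over.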