Value function
The value function of an optimization problem gives the value attained by the objective function at a solution, as a function of the parameters of the problem.[1] In an economic context, where the objective function usually represents utility, the value function is conceptually equivalent to the indirect utility function.
In a problem of optimal control, the value function is defined as the supremum of the objective function taken over the set of admissible controls. The parameters of the problem are the initial conditions of the system: the initial value of the state variable and the initial time.[2] Generally, given $(t_0, x_0)$, an optimal control problem may be written as

$$ \text{maximize} \quad J(t_0, x_0; u) = \int_{t_0}^{t_1} I(t, x(t), u(t)) \, dt + \phi(x(t_1)) $$

to be maximized over all admissible controls $u$ for which the corresponding trajectory of the state satisfies $\dot{x}(t) = f(t, x(t), u(t))$, with initial condition $x(t_0) = x_0$ and some terminal constraint on $x(t_1)$. Here $I$ is the running payoff and $\phi$ is the terminal (scrap) value. Then the value function is defined as

$$ V(t_0, x_0) = \sup_{u} J(t_0, x_0; u), \qquad V(t_1, x) = \phi(x). $$
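As a concrete illustration (the particular dynamics, payoff, and scrap value below are chosen for simplicity and are not taken from the cited references), consider a scalar problem that can be solved in closed form:

```latex
\[
  \text{maximize } J(t_0, x_0; u)
    = \int_{t_0}^{t_1} -\tfrac{1}{2}\, u(t)^2 \, dt + x(t_1)
  \quad \text{subject to } \dot{x}(t) = u(t),\; x(t_0) = x_0 .
\]
Substituting $x(t_1) = x_0 + \int_{t_0}^{t_1} u(t)\,dt$ gives
\[
  J = x_0 + \int_{t_0}^{t_1} \bigl( u(t) - \tfrac{1}{2}\, u(t)^2 \bigr)\, dt
    \;\le\; x_0 + \tfrac{1}{2}\,(t_1 - t_0),
\]
since $u - \tfrac{1}{2}u^2 \le \tfrac{1}{2}$ pointwise, with equality iff $u \equiv 1$. Hence
\[
  V(t_0, x_0) = x_0 + \tfrac{1}{2}\,(t_1 - t_0),
\]
attained by the constant control $u(t) \equiv 1$.
```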
Bellman's principle of optimality, which roughly states that any optimal policy at time $t$, taking the current state $x(t)$ as its initial condition, must be optimal for the remaining problem, gives rise to an important functional recurrence equation for the value function, known as the Bellman equation.
Although unknown until a solution to the optimization problem is found, the value function itself can be used to find a solution.[3]
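In discrete time, the Bellman recurrence can be solved by backward induction from the terminal condition, and the resulting value table simultaneously yields an optimal control via the argmax at each stage. The following sketch uses a toy finite-horizon problem; the horizon, state grid, control set, dynamics, reward, and scrap value are all illustrative assumptions, not part of the cited references.

```python
# Backward-induction sketch of the Bellman equation for a toy
# finite-horizon, discrete-time problem (all problem data are
# illustrative assumptions).

T = 3                      # number of decision stages
states = range(5)          # admissible states 0..4
controls = (-1, 0, 1)      # admissible controls

def f(x, u):
    """Deterministic dynamics: next state, clipped to the grid."""
    return max(0, min(4, x + u))

def r(x, u):
    """Per-stage reward: prefer high states, penalise control effort."""
    return x - abs(u)

def scrap(x):
    """Terminal (scrap) value phi(x)."""
    return 2 * x

# V[t][x] = value of being in state x at time t;
# policy[t][x] = maximizing control there.
V = [dict() for _ in range(T + 1)]
policy = [dict() for _ in range(T)]

for x in states:
    V[T][x] = scrap(x)               # boundary condition V(T, x) = phi(x)

for t in reversed(range(T)):         # Bellman recursion, backwards in time
    for x in states:
        best_u = max(controls, key=lambda u: r(x, u) + V[t + 1][f(x, u)])
        policy[t][x] = best_u
        V[t][x] = r(x, best_u) + V[t + 1][f(x, best_u)]

print(V[0])       # value at the initial time -> {0: 6, 1: 11, 2: 15, 3: 18, 4: 20}
print(policy[0])  # optimal first-stage control for each initial state
```

Once `V` is known, an optimal trajectory is recovered forward in time by applying `policy[t]` at each stage, which is the sense in which the value function itself can be used to find a solution.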
References
- ^ Mas-Colell, Andreu; Whinston, Michael D.; Green, Jerry R. (1995). Microeconomic Theory. New York: Oxford University Press. p. 964. ISBN 0-19-507340-1.
- ^ Kamien, Morton I.; Schwartz, Nancy L. (1991). Dynamic Optimization : The Calculus of Variations and Optimal Control in Economics and Management (2nd ed.). Amsterdam: North-Holland. p. 259. ISBN 0-444-01609-0.
- ^ Stokey, Nancy L.; Lucas, Robert E. Jr. (1987). Recursive Methods in Economic Dynamics. Cambridge: Harvard University Press. pp. 13–14. ISBN 0-674-75096-9.