Markov reward model

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by Gareth Jones (talk | contribs) at 17:15, 23 October 2013 (Gareth Jones moved page Markov reward process to Markov reward model: more common term for the model). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

In probability theory, a Markov reward process is a stochastic process that extends either a discrete-time Markov chain or a continuous-time Markov chain by attaching a reward rate to each state. An additional variable records the reward accumulated up to the current time.[1] Quantities of interest in the model include the expected reward at a given time and the expected time to accumulate a given reward.[2] The model appears in Ronald A. Howard's book.[3]
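For illustration, the discrete-time case can be sketched numerically. The expected accumulated reward after n steps is the sum over steps of the state distribution times the per-state rewards. The two-state chain, its transition probabilities, and the reward values below are invented for the example and are not from the article:

```python
import numpy as np

# Hypothetical 2-state chain (all numbers are illustrative):
# state 0 = "up" earns reward 1 per step, state 1 = "down" earns 0.
P = np.array([[0.9, 0.1],      # transition probabilities per step
              [0.5, 0.5]])
r = np.array([1.0, 0.0])       # reward earned per step in each state
pi = np.array([1.0, 0.0])      # initial distribution: start in "up"

# Expected accumulated reward after n steps:
#   E[R_n] = sum_{k=0}^{n-1} pi P^k r
n = 10
expected_reward = 0.0
dist = pi.copy()
for _ in range(n):
    expected_reward += dist @ r  # expected reward collected this step
    dist = dist @ P              # propagate the state distribution

print(expected_reward)
```

The same recursion generalizes to any finite state space; only `P`, `r`, and the initial distribution change.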

The accumulated reward can be computed numerically over the time domain, or by solving the linear hyperbolic system of equations that describes the accumulated reward, using transform methods or finite difference methods.[4]
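As a minimal sketch of the time-domain approach for the continuous-time case: the expected accumulated reward E[Y(t)] is the integral of pi(s)·r, where pi(s) solves the forward equations dpi/ds = pi Q. A simple explicit (Euler) finite-difference scheme suffices for small problems; the generator, reward rates, and horizon below are invented for the example:

```python
import numpy as np

# Hypothetical 2-state continuous-time chain (numbers are illustrative):
Q = np.array([[-1.0,  1.0],    # generator matrix (rows sum to zero)
              [ 2.0, -2.0]])
r = np.array([1.0, 0.0])       # reward rate while in each state
pi = np.array([1.0, 0.0])      # initial distribution

# E[Y(t)] = integral_0^t pi(s) r ds, with dpi/ds = pi Q.
t, steps = 5.0, 10000
h = t / steps
expected = 0.0
for _ in range(steps):
    expected += (pi @ r) * h   # rectangle rule for the integral
    pi = pi + h * (pi @ Q)     # Euler step for the forward equations

print(expected)
```

Transform methods and uniformization give higher accuracy for stiff generators; the explicit scheme above is only a first-order approximation whose error shrinks with the step size `h`.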

References

  1. ^ doi:10.1007/978-1-4615-1387-2_2.
  2. ^ doi:10.1007/978-3-642-11492-2_10.
  3. ^ Howard, R.A. (1971). Dynamic Probabilistic Systems, Vol II: Semi-Markov and Decision Processes. New York: Wiley. ISBN 0471416657.
  4. ^ doi:10.1016/0377-2217(89)90335-4.