Talk:Partially observable Markov decision process

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by 99.237.210.206 (talk) at 03:34, 27 June 2008. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

This page is missing a lot of information. A better description of the algorithms, some clearer examples, and links to introductory texts would be really good.

It is also sort of biased: only one application is mentioned, when there is actually a lot of great work done with POMDPs. —Preceding unsigned comment added by 201.37.187.41 (talk) 09:47, 24 October 2007 (UTC)[reply]

I agree with this. Hoey et al. is not a notable application (in fact, POMDPs are a poor choice for this problem domain anyway). Look at Nick Roy's, Sebastian Thrun's, or Joelle Pineau's work. Also, Hoey et al. did not use millions of states; POMDPs do not scale well to millions of states. So this reference is wrong and should be removed. I won't do it myself because I don't have an account and will invariably piss off an Admin if I try to do this. A good source of info is the POMDP tutorial by Cassandra (pomdp.org). 99.237.210.206 (talk) 03:34, 27 June 2008 (UTC)[reply]

rework

I am working on this page whenever I can. I believe it should include some information on the derivation of the belief-space MDP, and distinguish between exact and approximate solution techniques. I will add new sections and subsections when they're ready. I'll link to some new developments, such as continuous-state POMDPs, Bayes-adaptive POMDPs, and POMDPs with imprecise parameters. Please let me know if you plan on working on this and let's do it together. Beniz (talk) 21:57, 12 April 2008 (UTC)[reply]
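For reference, the belief-space MDP mentioned above rests on the standard Bayesian belief update: after taking action a and observing o, the new belief is b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s). A minimal sketch of that update follows; the function name, array layouts, and the tiny two-state model are illustrative assumptions, not from any particular library or from the article:

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """One POMDP belief update (illustrative sketch).

    b: belief over states, shape [S]
    T: transition probabilities T[a, s, s'], shape [A, S, S]
    O: observation probabilities O[a, s', o], shape [A, S, Z]
    Returns the normalized posterior belief b', shape [S].
    """
    predicted = b @ T[a]           # prediction step: sum_s T(s'|s,a) b(s)
    unnorm = O[a, :, o] * predicted  # correction step: weight by O(o|s',a)
    return unnorm / unnorm.sum()     # normalize by Pr(o | b, a)

# Made-up two-state example: a "listen" action that leaves the state
# unchanged and yields a noisy observation of the true state.
T = np.array([[[1.0, 0.0],
               [0.0, 1.0]]])        # one action; states persist
O = np.array([[[0.85, 0.15],
               [0.15, 0.85]]])      # observation is correct 85% of the time
b0 = np.array([0.5, 0.5])           # uniform prior belief
b1 = belief_update(b0, a=0, o=0, T=T, O=O)
print(b1)                           # belief shifts toward state 0
```

Starting from the uniform prior, observing o=0 moves the belief to (0.85, 0.15), matching the observation model's accuracy; iterating this update over an action–observation history is exactly what turns the POMDP into an MDP over beliefs.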