Talk:Partially observable Markov decision process

This article is within the scope of WikiProject Robotics, a collaborative effort to improve the coverage of Robotics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-class on Wikipedia's content assessment scale.
This article has been rated as Low-importance on the project's importance scale.

This page is missing a lot of information. A better description of the algorithms, some nicer examples, and links to introductory texts would be really helpful.

It is also somewhat biased: only one application is mentioned, when there is actually a lot of great work being done with POMDPs. —Preceding unsigned comment added by 201.37.187.41 (talk) 09:47, 24 October 2007 (UTC)


Actually, you can solve POMDPs with billions of states. The problem is not how large the state space is, but how dense your transition function is (or, similarly, how large the A_s sets are). :-) I actually think people should be more careful when they claim to have solved "large" POMDPs. They usually cite the number of states, but the complexity of solving POMDPs is NOT exponential in the number of states. The number of actions and observations is much more of a problem: value iteration is O(|S| · |A|^(|Z|^h)) if you consider a single action set, and it will be similar if you define action sets A_s per state.
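To make that growth concrete, here is a minimal sketch of one exact value-iteration backup (illustrative only, not code from the article; the tabular layouts T[a, s, s'], O[a, s', z] and R[a, s] are assumptions). It shows that the set of alpha-vectors multiplies by |A|·|Γ|^|Z| per backup, so the blow-up is driven by observations and actions rather than by |S|:

```python
import itertools
import numpy as np

def exact_backup(Gamma, T, O, R, gamma):
    """One exact value-iteration backup for a finite POMDP (illustrative sketch).

    Gamma : list of alpha-vectors, each of shape (S,)
    T     : T[a, s, s']   transition probabilities (assumed layout)
    O     : O[a, s', z]   observation probabilities (assumed layout)
    R     : R[a, s]       immediate rewards
    Returns the unpruned new set of alpha-vectors; its size is
    |A| * len(Gamma)**|Z|, i.e. the growth comes from the number of
    observations and actions, not from the number of states.
    """
    A, S, _ = T.shape
    Z = O.shape[2]
    new_Gamma = []
    for a in range(A):
        # g[z][i](s) = sum_{s'} T(s'|s,a) * O(z|s',a) * alpha_i(s')
        g = [[T[a] @ (O[a][:, z] * alpha) for alpha in Gamma] for z in range(Z)]
        # one new vector per assignment of an old alpha-vector to each observation
        for choice in itertools.product(range(len(Gamma)), repeat=Z):
            vec = R[a] + gamma * sum(g[z][i] for z, i in enumerate(choice))
            new_Gamma.append(vec)
    return new_Gamma
```

Pruning dominated vectors keeps practical solvers much smaller, but without pruning the set grows doubly exponentially in the horizon through the len(Gamma)**|Z| factor, which is the point made above.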

rework

I am working on this page whenever I can. I believe it should include some information on the derivation of the belief-space MDP and distinguish between exact and approximate solution techniques. I will add new sections and subsections when they're ready. I'll link to some new developments, such as continuous-state POMDPs, Bayes-adaptive POMDPs, and POMDPs with imprecise parameters. Please let me know if you plan on working on this, and let's do it together. Beniz (talk) 21:57, 12 April 2008 (UTC)
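For anyone picking this up, a minimal sketch of the Bayesian belief update that the belief-space MDP derivation rests on (illustrative only; the array layouts T[a, s, s'] and O[a, s', z] are assumed, matching the sketch in the section above):

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """Belief update b' = tau(b, a, z) for a finite POMDP (illustrative sketch).

    b : current belief over states, shape (S,)
    T : T[a, s, s']  transition probabilities (assumed layout)
    O : O[a, s', z]  observation probabilities (assumed layout)
    Returns the next belief and Pr(z | b, a); the latter is the normalizer
    and also defines the transition probabilities of the belief-space MDP.
    """
    # unnormalized b'(s') = O(z | s', a) * sum_s T(s' | s, a) * b(s)
    unnormalized = O[a][:, z] * (b @ T[a])
    pr_z = unnormalized.sum()
    if pr_z == 0.0:
        raise ValueError("observation z has zero probability under belief b and action a")
    return unnormalized / pr_z, pr_z
```

The belief-space MDP's reward is then the expectation r(b, a) = sum_s b(s) R(a, s), which is what makes the reduction to a (continuous-state) MDP go through.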


software

There are several POMDP-solving programs out there... I guess there should be either a more complete list of them in the article, or none.