
Talk:Examples of Markov chains

WikiProject Mathematics (Start-class, Low-priority)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has been rated as Start-class on Wikipedia's content assessment scale and as Low-priority on the project's priority scale.

WikiProject Statistics (Unassessed)
This article is within the scope of WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has not yet received a rating on Wikipedia's content assessment scale or on the project's importance scale.

Hello all. It was I who originally added the link to "Markov chain example" from Markov chain, so I claim most of the responsibility for this article being empty(ish) for so long. :-) I have finally added a worked example to the page! I'm not an expert on them (just had a short introduction in linear algebra) so some peer review of my contribution would be most welcome.

I have moved some of the previous content of the page here for discussion:

[This page has so little content now that I trust adding this comment to this page will not upset much. The statement above may not be true: one could construct a game in which the present move is determined not only by the present roll of the die and the current state, but also by the history of rolls of the die on previous turns.
This page could be very good if it were called Examples (plural) of Markov chains and listed a variety of examples.]
Please add more examples.
I would like to add examples like 1. customer brand loyalty, 2. the lifetime of newspaper subscriptions, etc.
I don't know much more about these Markov chains, and if anyone would like to help me out, please mail me at rohang82@hotmail.com (thanks)

Please add further examples (short descriptions like snakes and ladders, or longer worked examples). Customer brand loyalty and newspaper subscription examples could well be possible, if there's someone who knows enough about them to contribute examples!


I also agree that "Examples of Markov chains" is a better title for the article.--Ejrh 04:14, 18 Dec 2003 (UTC)


Moved. --Ejrh

PS. A special request to anyone who is good at mathematical writing: please fix up the "steady state vector" section of the weather example. I kind of ran out of inspiration at that point.  :-/ --Ejrh 04:18, 18 Dec 2003 (UTC)

Note: the following claim in this entry is, apparently, imprecise: "In a game such as poker or blackjack, a player can gain an advantage by remembering which hands have already been played (and hence which cards are no longer in the deck)". In most forms of poker currently spread, the deck(s) are reshuffled between hands, making card counting as such pointless. In poker, however, it is very advantageous to note patterns in player behavior (betting tendencies, hands raised pre-flop in games such as Omaha and Hold'em, general aggression, and so on). Anonymous, 18 Jun 2007

Directions of Markov chains

The examples on this page are given as the transpose of what I am used to as normal practice. On the Wikipedia page about Markov chains, and in all the textbooks I know, Markov chains are formulated using row vectors multiplied on the left of the transition matrix, not column vectors multiplied on the right, so the transition matrix here is the transpose of the usual one. I imagine there are textbooks using the other formulation, but I think it is confusing to use both notations on Wikipedia without at least some acknowledgement. --Richard Clegg 09:02, 28 March 2006 (UTC)
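For anyone hitting the same confusion, here is a minimal sketch of the two conventions side by side, using an invented two-state transition matrix (the numbers are illustrative, not taken from the article):

```python
import numpy as np

# Row-vector convention: each row of P sums to 1 and the distribution
# is a row vector updated by left multiplication, pi' = pi P.
P_row = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
pi = np.array([1.0, 0.0])      # start in state 0 with probability 1
pi_next = pi @ P_row           # distribution after one step

# Column-vector convention: the transpose of the same matrix, each
# column sums to 1, and the distribution is a column vector updated by
# right multiplication, p' = P p.
P_col = P_row.T
p = np.array([1.0, 0.0])
p_next = P_col @ p

# Both conventions describe the same chain, so the results agree.
assert np.allclose(pi_next, p_next)
```

Whichever convention an article adopts, stating it once up front (as the comments above do) removes the transpose ambiguity.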


I agree and think this should be made a high priority. It thoroughly confused me whilst researching Markov chains for an assignment, particularly since the Wikipedia Markov article refers to it the opposite way yet links to this page. -- Jono

The weather example is straight from [[1]]. 24.255.46.150 08:21, 5 May 2006 (UTC)

Thanks -- I'll deal with this by either giving a new example or requesting this page be deleted. --Richard Clegg 09:17, 6 May 2006 (UTC)
That page is copied from Wikipedia, not the other way around. Note that his formula images use Wikipedia's typesetting and all formulas elsewhere on the page use a different style. And guess what... at the bottom of the page he even cites Wikipedia, though he doesn't point to the right page or mention that the content was copied verbatim. Fredrik Johansson 10:51, 6 May 2006 (UTC)
Ooops -- *blush* Thanks. Apologies. Well, I guess we can consider that sorted then. --Richard Clegg 11:04, 6 May 2006 (UTC)

Blackjack and Markov Chains

I disagree with the claim that blackjack cannot be treated as a Markov chain because it depends on previous states. This claim relies on a very narrow idea of what the current state is, namely your current cards and the dealer's cards. However, if you expand the current state to include all cards seen since the last shuffle (equivalently, all cards remaining in the deck), then the next card drawn depends only on the current state. This is similar to the idea of a "Markov chain of order m" described in the Markov chain article: you can construct a classical (order 1) chain from any order m chain by changing the state space, as sketched below.
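The order-m reduction is mechanical. Here is a minimal sketch with a made-up order-2 chain on two states (all probabilities invented for illustration): the augmented states are pairs of consecutive states, and the result is an ordinary order-1 transition matrix.

```python
import itertools
import numpy as np

# Hypothetical order-2 chain on states {0, 1}: the next state depends
# on the previous two.  q[(i, j)] is the distribution of the next state
# given that the chain visited i and then j.
q = {
    (0, 0): [0.9, 0.1],
    (0, 1): [0.4, 0.6],
    (1, 0): [0.7, 0.3],
    (1, 1): [0.2, 0.8],
}

# Re-encode as an order-1 chain on the pairs (i, j): from (i, j) the
# chain can only move to a pair of the form (j, k), with probability
# q[(i, j)][k].
pairs = list(itertools.product([0, 1], repeat=2))
P = np.zeros((len(pairs), len(pairs)))
for a, (i, j) in enumerate(pairs):
    for b, (j2, k) in enumerate(pairs):
        if j2 == j:
            P[a, b] = q[(i, j)][k]

assert np.allclose(P.sum(axis=1), 1.0)   # each row is a distribution
```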

The problem is that the next state depends on the decision of the player, so you have to add a player strategy, which may be randomised, before you can make it a Markov chain. You can then investigate the value of a strategy by looking at the hitting probability of the "Win" states, integrated over all possible starting states. --Nzbuu 12:16, 30 November 2007 (UTC)
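For concreteness, a minimal sketch of that last step, with an invented four-state chain standing in for blackjack under some fixed strategy (all numbers are hypothetical):

```python
import numpy as np

# States 0 and 1 are transient "play" states; state 2 is an absorbing
# "Win" and state 3 an absorbing "Lose".  Rows sum to 1.
P = np.array([[0.2, 0.4, 0.3, 0.1],
              [0.3, 0.1, 0.2, 0.4],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# The hitting probabilities h_i of reaching "Win" satisfy h = 1 on Win,
# h = 0 on Lose, and h_i = sum_j P[i, j] h_j on the transient states,
# which rearranges to (I - Q) h = R[:, win] on the transient block.
Q = P[:2, :2]                  # transient -> transient block
R = P[:2, 2:]                  # transient -> absorbing block
h = np.linalg.solve(np.eye(2) - Q, R[:, 0])
print(h)                       # P(win | start in 0), P(win | start in 1)
```

Integrating h against a distribution over starting states then gives the value of the strategy.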

Monopoly

Actually, is Monopoly not also slightly non-Markovian? There is a certain amount of 'memory' in the sense that the 'Chance' cards are not shuffled after every draw (so the chance of drawing a certain card that has not yet been drawn is higher). --Victor Claessen 11:31, 2 December 2008 (UTC)
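As with the blackjack discussion above, that card memory can be folded into the state: the pair (board position, undrawn cards) is Markovian even though the position alone is not. A minimal sketch with a hypothetical three-card deck (card names invented, and the position crudely identified with the last card drawn):

```python
import random

FULL_DECK = frozenset({"advance_to_go", "go_to_jail", "bank_error"})

def draw_chance(state, rng=random):
    """One Chance draw; the next state depends only on the current one."""
    position, undrawn = state
    if not undrawn:                       # deck exhausted: reshuffle
        undrawn = FULL_DECK
    card = rng.choice(sorted(undrawn))    # remaining cards equally likely
    return (card, undrawn - {card})

state = ("start", FULL_DECK)
for _ in range(5):
    state = draw_chance(state)
    print(state[0], "| remaining:", sorted(state[1]))
```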

Players also decide whether to buy a property (if available) when landing on it, and whether to build houses on properties they own. These aspects don't seem to fit the Markov model. 24.46.211.89 (talk) 19:03, 30 October 2009 (UTC)

Stable vs Oscillatory Models

The examples given are nice; however, I think one behavior of these models is missing: oscillation. A worked example of a transition matrix whose powers oscillate rather than converge would make the set of examples more robust. —Preceding unsigned comment added by 96.248.79.229 (talk) 16:02, 16 April 2011 (UTC)
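The simplest such example is a two-state periodic chain; a minimal sketch (the standard deterministic swap, not tied to any matrix already in the article):

```python
import numpy as np

# A period-2 transition matrix: the chain deterministically swaps its
# two states, so any non-uniform distribution oscillates forever
# instead of converging.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

pi = np.array([1.0, 0.0])
for t in range(4):
    print(t, pi)               # alternates [1, 0], [0, 1], [1, 0], ...
    pi = pi @ P

# The uniform vector is still stationary, [0.5, 0.5] @ P == [0.5, 0.5],
# but it is not the limit of pi P^t from a non-uniform start.
```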