
Talk:Two envelopes problem/Archive 3


Archived Harder Problem

The solution above doesn’t explain what’s wrong if the player is allowed to open the first envelope before being offered the option to switch. In this case, A (in step 7 of the expected value calculation) is indeed a constant. Hence, the proposed solution in the first case breaks down and another explanation is needed.

reworded "constant" to "consistent" in first proposed solution.

Opening the envelope to check the amounts does not fix the inconsistent use of the variable A in step 7 of the original problem. 128.113.65.168 (talk) 17:55, 8 October 2008 (UTC)

Archived Proposed solution [to the harder problem]

Once the player has looked in the envelope, new information is available—namely, the value A. The subjective probability changes with new information, so the assessment of the probability that A is the smaller or the larger sum changes. Therefore step 2 above isn't always true and is thus the proposed cause of this paradox. [This paragraph is not true given that the problem with step 7 was inconsistent use of the variable A, and not the requirement that the amount in your opened envelope is a constant. Invoking "subjective probability" seems to be incorrect here, in that it doesn't invoke Bayes's formula in order to recalculate any probabilities based on the observation made. 128.113.65.168 (talk) 18:40, 8 October 2008 (UTC)]

Step 2 can be justified, however, if a prior distribution can be found such that every pair of possible amounts {X, 2X} is equally likely, where X = 2^n·A, n = 0, ±1, ±2, .... But as this set is unbounded (i.e., genuinely infinite), a uniform probability distribution over all values in this set cannot be constructed. In other words, if each pair of envelopes had a non-zero constant probability, all probabilities would add up to more than 1. So some values of A must be more likely than others. However, it is unknown which values are more likely than others; that is, the prior distribution is unknown. [The argument above is: S is a set with an infinite number of members each with non-zero probability, and it follows that the sum of the probabilities of all members must be greater than 1. This is not true, because the elements may then be assigned infinitesimal probabilities. The rest of the paragraph makes no sense because it proposes increasing the probabilities of certain members when the problem was that the total probability should be less than 1. I am deleting the "harder problem", while preserving the second paragraph of "the harder problem" as "twin argument". 128.113.65.168 (talk) 18:40, 8 October 2008 (UTC)]
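
To illustrate the point about step 2 and proper priors, here is a minimal sketch (my own toy prior, not taken from any source discussed here): give the lesser amount a uniform prior over a finite set of doublings and compute the posterior that the observed amount is the lesser of the pair. The "1/2 either way" assumption then holds only away from the endpoints, and the top endpoint cancels the apparent gain on average.

```python
from fractions import Fraction

N = 10
lesser = [2 ** n for n in range(N + 1)]      # possible lesser amounts, uniform prior
prior = Fraction(1, len(lesser))

def p_smaller_given_seen(a):
    """P(the seen amount a is the lesser of the pair), under this finite prior."""
    w_small = prior / 2 if a in lesser else Fraction(0)                       # a is the lesser
    w_large = prior / 2 if a % 2 == 0 and a // 2 in lesser else Fraction(0)   # a is the larger
    return w_small / (w_small + w_large)

for a in (1, 2, 1024, 2048):
    q = p_smaller_given_seen(a)
    gain = q * 2 * a + (1 - q) * Fraction(a, 2) - a    # expected change from switching
    print(a, q, gain)
# Interior values give q = 1/2 and the naive +a/4 gain; the largest possible
# value gives q = 0 and a loss of a/2, which balances the books overall.
```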

Other comments

This isn't harder. If you say to me "I'm going to flip a coin. If it's heads, I'll give you $20, tails you give me $10," I will say OK, let's go. Then if you ask me again, I'll keep at it until either you're bankrupt or we've flipped one hundred tails in a row, in which case, $1000 is a small price to pay to be able to say that I paid a thousand dollars to watch a man flip 100 tails in a row. I'd probably drop $100 to tattoo those words onto my hands so every time I flipped a coin I got to remember that ridiculous event. A simpler way to see this is to suggest I put ten dollars in your hat, you put twenty dollars in your hat, we flip a coin, and the winner gets all the money in the hat. I will walk away wearing your hat because it's a stupid game.

Point being, there is no contradiction when two people have opened their envelopes, looked inside, and decided to switch. They both see that they have a 50% chance of losing half their money and a 50% chance of doubling it. The potential gain outweighs the potential loss. This is a simple gamble and I'd take it every time. If the problem was "One envelope has an extra $20 bill in it" it would not be any better to switch, since you might lose twenty or gain twenty. —Preceding unsigned comment added by 129.97.194.132 (talk) 17:12, 25 September 2008 (UTC)
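
As a quick check of the arithmetic in the comment above, taking its 50/50 assumption at face value (a sketch of that comment's two framings only, not a resolution of the paradox):

```python
def expected_gain_multiplicative(a):
    """See amount a; assume the other envelope is 2a or a/2 with probability 1/2 each."""
    return 0.5 * (2 * a) + 0.5 * (a / 2) - a           # = +a/4

def expected_gain_additive(a, extra=20):
    """See amount a; the other envelope holds $20 more or $20 less, 1/2 each."""
    return 0.5 * (a + extra) + 0.5 * (a - extra) - a   # = 0

print(expected_gain_multiplicative(100))   # 25.0 -> the double-or-half gamble looks favourable
print(expected_gain_additive(100))         # 0.0  -> no reason to prefer switching
```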

If you are offered an envelope and you're told the other envelope contains either half or double the amount in yours, would you swap? I'd say, based on this information, you should not make a decision at all. If you'd also been told that the amount in the other envelope was calculated from the amount in yours, I would swap. If, on the other hand, you'd been told that the amount in your envelope was calculated from the amount in the other envelope, I would definitely keep my envelope. The 2-envelope problem puts you in the situation where you don't know which amount was calculated from which, and both possibilities are equally likely to have occurred, so you should not make a decision whether to stay or swap. —Preceding unsigned comment added by 193.29.5.6 (talk) 11:27, 1 April 2010 (UTC)

So-called open problem

Currently this problem is called an "open problem" without even providing a reference for that statement. This paradox has a trivial solution according to statisticians (see, for example, http://www.maa.org/devlin/devlin_0708_04.html). In philosophy, it still seems to be a matter of some discussion. Rather than simply stating "open problem", we should discuss who considers it an open problem (some philosophers) and who does not (statisticians), and provide references for both sides. Simply calling it an "open problem" is grossly misleading and incorrect. Tomixdf (talk) 08:16, 12 October 2009 (UTC)

The page you are linking to refers only to the first version of the paradox, not to the second and harder one; indeed, the conclusion of the author is:
To summarize: the paradox arises because you use the prior probabilities to calculate the expected gain rather than the posterior probabilities. As we have seen, it is not possible to choose a prior distribution which results in a posterior distribution for which the original argument holds; there simply are no circumstances in which it would be valid to always use probabilities of 0.5.
so apparently he is not considering the harder problem, when a probability distribution is given and the paradox still holds.--Pokipsy76 (talk) 17:27, 17 May 2010 (UTC)

I suggest we put in Devlin's nice explanation for the statistics side (case closed for statistics), with references. Then we can have a section on why this is still considered an open problem in philosophy, again with references. Tomixdf (talk) 09:08, 12 October 2009 (UTC)

Devlin's article is already represented in the article as an external link which is appropriate for that kind of text. What makes you think his solution is without opponents? The irony here is that you start out to claim, as if it were a fact, that there is no controversy here, only to reveal that you think that Devlin's very short explanation from 2004 ended all controversies. This is simply not true. Unfortunately we can't rearrange the article according to your personal opinions, however strong your feelings might be. iNic (talk) 22:19, 12 October 2009 (UTC)
We need to find a consensus as this is turning into an edit war. You can't simply keep on calling this an "open problem" without providing a decent reference. Quantum gravity is an open problem: everyone agrees that it is not solved. The envelope paradox is NOT widely accepted as an open problem. For statisticians, it simply arises from not applying the rules of the Bayesian calculus correctly. We can certainly mention that _some_ still see it as an open problem. But the way in which it is presented now is totally misleading. Tomixdf (talk) 07:59, 13 October 2009 (UTC)
Calling frequentist statistics "more technical" than Bayesian statistics illustrates the extremely low quality of this article quite nicely. What on earth is meant by that? No wonder the article is flagged as problematic. Tomixdf (talk) 08:04, 13 October 2009 (UTC)
I read through the reference that was provided for the "Note". This reference in no way mentions that "frequentist statistics is more technical", or any other of the nonsense statements in the Note. Tomixdf (talk) 18:40, 13 October 2009 (UTC)

Please try to calm down. Simply deleting sections because you don't like the content isn't called edit warring, it's called vandalism. Your "arguments" for your vandalism don't make sense. Try to improve the article instead of destroying it. It can be greatly improved by anyone who cares to read a substantial part of the articles (and not just one or two!). When you have studied the subject you are welcome back to help improve the article. iNic (talk) 22:41, 13 October 2009 (UTC)

Let's stick to the topic and to arguments, please. (a) Where in the reference that you provide does it say that "frequentist statistics is more technical than Bayesian statistics"? (b) Where is the reference that this is considered to be an open problem in statistics? One can find many articles that attack the theory of evolution. That does not mean it is an "open problem in science". (c) There is a controversy out there - I do not dispute that at all. It just needs to be described in a correct way. For example: this is a solved problem according to A and B, but C and D dispute this, because of this and this reason. Again, there is no agreement that this is an open problem, so it should not be stated as an undisputed fact. (d) If this is an article on Bayesian decision theory, then why is there no example of the application of the Bayesian calculus? The (again unreferenced) "proposed solution" is clearly wrong in that respect - it does not result from any Bayesian reasoning. Tomixdf (talk) 07:14, 14 October 2009 (UTC)
This article IS disputed. We are disputing it right now, and there are numerous complaints about the article in the talk page. So removing the "disputed" box is unreasonable. Tomixdf (talk) 07:17, 14 October 2009 (UTC)

But please go ahead and write your own account of this problem and its true solution, according to you! There is lots of space on the internet. There is space for both of us, believe me. One other editor did exactly that in the past because he thought that this article didn't show the "true" solution. He had studied three published articles and seen the light. You find his article here: Exchange paradox. It will be very interesting to read your account where Devlin is the Darwin of the two envelopes problem, and the rest of us are just irrational dumbfucks. If this trend continues we will in the end have a bouquet of articles at Wikipedia all claiming that this problem is solved and not an open problem. But of course, they will not claim that the same solution is the true solution... Rhetorical question: who are then the real dumbfucks? iNic (talk) 19:31, 14 October 2009 (UTC)

iNic, you are now edit-warring. You've made three reverts in 24 hours, and you are using reverts as a substitute for answering the valid points raised above by Tomixdf. You're also calling our edits "vandalism" which misrepresents what you are doing. If you really think we are vandalising the article, report us to the admins and see what happens. Removing a paragraph which is poorly referenced and contains multiple absurdities is necessary to stop the article being misleading. Are you going to engage in constructive discussion or are we going to involve the edit-warring noticeboard? MartinPoulter (talk) 22:21, 14 October 2009 (UTC)
Within Bayesian statistics, this is a solved problem, which is even used in teaching Bayesian statistics (see for example Teaching Statistics, Vol. 31:2, 2009, pp. 39-41 for a recent reference with the solution, and many others). The solution for the problem as formulated on Wikipedia has even been discussed in a peer reviewed publication (Teaching Statistics, Vol. 30:3, 2008, pp. 86-88). Nonetheless, it seems that this problem is still very much discussed in publications on the philosophy of probability (i.e. by philosophers). The article should simply reflect this situation: to Bayesian statisticians this is a solved problem with a trivial solution, but philosophers are still discussing it for this/that reason. An interesting story, in fact, which can be resolved without an edit war IMO. Tomixdf (talk) 06:51, 15 October 2009 (UTC)
So there are two articles on Wikipedia about the same problem? In one it is called an open problem in Bayesian statistics (without providing a reference), and in the other it isn't (with references)? What a mess - this needs to be fixed. On first glance, the other article is fine. It's the explanation that is found in Devlin and many other references, and it involves using the prior distribution of the amount in the envelopes (which you need to solve the problem). Tomixdf (talk) 07:44, 15 October 2009 (UTC)

Aha there is finally a solution that conclusively solves the problem? Wow that's really great news! Why didn't you tell me from the beginning? And why don't you add this final solution to the article??? At the same time you of course need to explain why all other suggestions for a solution are wrong. Also don't forget to mention who finally solved the problem first. I will applaud this kind of enhancement of the article, any day! iNic (talk) 23:14, 15 October 2009 (UTC)

But until you do this kind of total rewriting of the article I will consider all partial deletions of the article as it stands now as vandalism. The reason is that the article doesn't make sense if it tries to hide the fact that the problem is open when at the same time several different solutions are displayed in the article. Everyone that can read will be able to see that an article that says that a problem is solved while it displays several contradictory solutions is incoherent. iNic (talk) 23:14, 15 October 2009 (UTC)

As there already is a Wikipedia article of your liking partially covering this subject (the one by Xbert) I suggest that you start out editing that article to include your new (or old?) groundbreaking news. In this way we will have one article claiming the problem is solved and another one (this one) where all solutions are welcome, which will of course include your favorite solution. iNic (talk) 23:14, 15 October 2009 (UTC)

I will again request that you adopt a constructive attitude, stick to arguments, and avoid name-calling (calling people "dumbfucks" does not belong in a discussion on a Wikipedia talk page), threats and sarcasm. Often an article gets better when several editors with different initial opinions try to reach consensus - I suggest we do that. (a) I did not say the problem is solved: I said the problem is considered to be solved within the Bayesian statistics community (which it is). I also provided several references to back that up (see above), and can provide a ton more. (b) What are your _arguments_ for not including this? (c) Are you aware that in Wikipedia, having two articles on the same subject which contradict each other is not acceptable? I'm not interested in more threats or sarcasm, but expect arguments this time as answer to (a), (b) and (c). Thanks. Tomixdf (talk) 05:48, 16 October 2009 (UTC)
This article is clearly strongly disputed (see above discussion, and I'm not the only one taking part). I've added the disputed box again. I don't understand why you keep removing it: it simply flags that there is a discussion going on. We can remove it when we have reached consensus. Tomixdf (talk) 05:53, 16 October 2009 (UTC)

(a) It is obvious from the article itself that the problem isn't solved. Everyone who can read can see that. For everyone who bothers to read the references it's even more obvious. The article would be too long if all ideas were represented. Your claim that the problem is already solved "within the Bayesian statistics community" is of course unreferenced and absurd. All suggestions so far have come from "within the Bayesian statistics community" (except the ones that try to solve the non-probabilistic variant). The articles you refer to don't support your claim at all; on the contrary. The second sentence in Falk 2008 that you refer to states that the problem "has not yet been entirely settled." So those who can't infer that conclusion themselves can at least take Falk's word for it. (b) Sure I will include these references, no problem. (Falk and Nickerson's papers have in fact been included in the list on this page before as 'forthcoming.') (c) Sure I agree here too. There shouldn't be two articles covering the same subject, let alone a bouquet of articles. If you read the talk page of Xbert's article you will see that I've tried to merge the articles before. But sometimes I just give up when confronted with compact stupidity. I'm sorry for my strong language in this context. iNic (talk) 03:15, 19 October 2009 (UTC)

I note that INic is refusing to debate the specific points of the disputed text but is using claims of vandalism (which can't be backed up) as a substitute for proper procedure. The claim (twice) that Bayesianism is "less technical" than another interpretation is unsupported (and absurd) original research, and INic shows no interest in actually defending the disputed text. MartinPoulter (talk) 15:42, 17 October 2009 (UTC)

It's always vandalism when someone deletes an entire section aimed at placing the subject of an article into the correct context. It's definitely not "proper procedure." The readers who want to read more about Bayesianism, frequentism and their similarities and differences will follow those links to learn more. I suggest that you do the same. iNic (talk) 03:15, 19 October 2009 (UTC)

iNic: That is not a correct definition of vandalism. Please read WP:VAND (In particular, the second paragraph: "edits/reverts over a content dispute are never vandalism") -- Foogod (talk) 00:38, 27 October 2010 (UTC)

Consensus for progress

Just to spell it out, here is the argument for removing the edit-warred "Note".

  • "Because the subjectivistic interpretation of probability is closer to the layman's conception of probability, this paradox is understood by almost everybody." -two unreferenced, dubious factual statements.
  • "(This follows from the fact that Bayesianism is a project that tries to mathematically capture the lay man's conception of probability, without running into paradoxes.)" -unreferenced and obviously false "fact".
  • "However, for a working statistician or probability theorist endorsing the more technical frequency interpretation of probability this puzzle isn't a problem, as the puzzle can't even be properly stated when imposing those more technical restrictions." -two descriptions of frequentism as "more technical" are unreferenced and absurd.
  • <ref>Priest and Restall, ''Envelopes and Indifference'' [http://consequently.org/papers/envelopes.pdf PDF], February 2003</ref> -Self-published source rather than reliable source. Source doesn't back up the statements made in the note.

Since there are two editors setting out arguments in terms of policy why the note should be removed and one editor who is edit-warring rather than addressing these obvious problems, we have consensus for a change. MartinPoulter (talk) 14:42, 18 October 2009 (UTC)

Please see my comment above. iNic (talk) 03:15, 19 October 2009 (UTC)
...which doesn't even address the points I've raised. You do realise that Wikipedia has a no original research policy, and that this policy is not optional? MartinPoulter (talk) 12:15, 19 October 2009 (UTC)
Agreed. The "note" is OR and unreferenced: it has to go. Tomixdf (talk) 06:24, 20 October 2009 (UTC)

Generalizing the fallacy

Should we dedicate a page to the fallacies we can intuitively make when calculating the expected value and support it with a variety of examples, like this paradox? Are the fallacies even clear? —Preceding unsigned comment added by 94.225.129.117 (talk) 07:48, 3 April 2010 (UTC)

This would be a great idea. A nice example to think about is the following:

Suppose God gives you some money and gives you the choice to keep or switch it. He also tells you either A or B (assume the prior distribution of any money God has is linear and that a coin has a 50% chance of heads or tails):

A. If you decide to switch, I'll toss a coin. On tails, I'll double the amount; on heads, I'll halve it.
B. If you decide to switch, you'll get the initial amount from before I tossed the coin (tails -> it was doubled, heads -> it was halved).

What would you decide in situation A and in situation B, knowing your goal is to get the most money out of God?

Note: the two envelope paradox combines both situations. —Preceding unsigned comment added by 94.225.56.12 (talk) 21:28, 5 January 2011 (UTC)
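
A rough simulation of the two situations may help (a sketch only; the commenter's "linear" prior is replaced here by a plain uniform draw on an arbitrary range, purely for illustration):

```python
import random

def situation_A(trials=100_000):
    keep = switch = 0.0
    for _ in range(trials):
        x = random.uniform(1, 100)          # amount handed to you first
        keep += x
        switch += 2 * x if random.random() < 0.5 else x / 2   # coin tossed after you switch
    return keep / trials, switch / trials

def situation_B(trials=100_000):
    keep = switch = 0.0
    for _ in range(trials):
        x = random.uniform(1, 100)          # God's initial amount
        held = 2 * x if random.random() < 0.5 else x / 2      # coin already tossed
        keep += held
        switch += x                          # switching undoes the coin toss
    return keep / trials, switch / trials

print("A (keep, switch):", situation_A())   # switching averages ~1.25x keeping
print("B (keep, switch):", situation_B())   # keeping averages ~1.25x switching
```

In situation A your amount is fixed before the coin, so switching is the favourable side of the bet; in situation B the coin has already acted on the initial amount, so switching gives that advantage back.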

Calling all Probability Theorists: This article needs an edit

I'm not a probability expert, but from reading the article, talk page, and links, it does seem like there's a consensus solution, though it doesn't come through very well in the article.

Putting all the pieces together, it looks like step seven (in the first statement of the paradox) is the incorrect step, because it assumes a uniform distribution (of, say, the amount of money in the lesser envelope). I don't think "impossible probability distribution" is the right phrase (quoting one of the above comments), but in any case, the distribution is not specified in the problem setup, so it's a mistake to assume - in step seven - that it's uniform. For example, what if you were told in advance that the probability distribution is really trivial - like, the amount in the lesser envelope has a 100% chance of being $10? Given this information, step seven would be obviously wrong. Well, you're not told that the lesser envelope has a 100% chance of being $10 - but that doesn't make step seven any more reasonable - since you aren't told anything at all. The fallacy is to assume a uniform distribution when no distribution is specified.
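
A worked version of that trivial example, as a sketch (the 100%-at-$10 prior is the paragraph's own assumption, not a claim about the general problem):

```python
from fractions import Fraction

def p_other_is_larger(seen):
    """P(the other envelope holds more | you see `seen`), under the trivial prior
    where the lesser envelope is certainly $10 (so the pair is always ($10, $20))."""
    if seen == 10:
        return Fraction(1)    # you must be holding the lesser envelope
    if seen == 20:
        return Fraction(0)    # you must be holding the larger envelope
    raise ValueError("impossible observation under this prior")

for seen in (10, 20):
    q = p_other_is_larger(seen)
    expected_other = q * 2 * seen + (1 - q) * Fraction(seen, 2)
    print(seen, q, expected_other)
# Step seven's "1/2 each way" is only right if the prior happens to make it so;
# here the posterior is 1 or 0, never 1/2.
```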

This reasoning is hinted at in the article - for example, it shows that a uniform distribution would have divergent expected values, and that things come out better when you use different variables (for which you aren't assuming a distribution). Hopefully a subject-matter expert will come along and simplify things - it's too much of a re-write for a non-mathematician to attempt.

I'm not suggesting that all the flavor from the article be removed - clearly it has become almost a cultural phenomenon which is itself deserving of some attention - and maybe there are some philosophical implications as well - but it would be improved if the consensus solution were clearly stated somewhere in there.

206.124.141.187 (talk) 11:06, 12 April 2010 (UTC)

By the way, has anyone else noticed the silliness of this statement in the "non-probabilistic" section?

By swapping, the player may gain A (in the case when A = $10) or lose A/2 (in the case when A = $20). So the potential gain is strictly greater than the potential loss.

If I may gain A when A is $10, or lose A/2 when A is $20... turns out that's $10 either way. I think this version of the paradox is self-solving.  :)

206.124.141.187 (talk) 11:25, 12 April 2010 (UTC)

I agree, this article needs a serious edit. The given solutions do not make clear that the paradox is related to a priori distribution assumptions. It also needs some sources. Martin Hogbin (talk) 22:30, 16 May 2010 (UTC)

For an intuitive resolution does anyone think this argument is along the right lines:

Before we can consider whether to swap or not we should consider what is the average sum we should expect to find in our initial choice. Since there is no limit set in the question to the sum that might be placed in the envelope the average sum we should expect is infinite. Herein lies the resolution of the problem. The other envelope therefore might contain infinity/2 or infinity*2 both of which are infinity. Dealing with infinity is like dividing by zero. You can get any answer you want. Martin Hogbin (talk) 22:41, 16 May 2010 (UTC)
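
For what it's worth, here is a sketch of the kind of distribution being gestured at: the well-known Broome-style prior with infinite mean (my own choice of example, not taken from the discussion above), under which the conditional expectation of the other envelope exceeds the observed amount for every possible observation, even though switching cannot help overall.

```python
# Pairs of amounts are (2^n, 2^(n+1)) with probability (1/3)*(2/3)^n, n = 0, 1, 2, ...
from fractions import Fraction

def pair_prob(n):
    return Fraction(1, 3) * Fraction(2, 3) ** n

def expected_other_given_observed(k):
    """Conditional expectation of the other envelope given we see the amount 2^k."""
    a = 2 ** k
    if k == 0:
        return 2 * a  # 1 can only be the smaller amount of the pair (1, 2)
    # We see 2^k either as the smaller of pair n = k or the larger of pair n = k - 1.
    p_small = pair_prob(k) / 2
    p_large = pair_prob(k - 1) / 2
    total = p_small + p_large
    return (p_small / total) * 2 * a + (p_large / total) * Fraction(a, 2)

for k in range(5):
    a = 2 ** k
    print(k, a, expected_other_given_observed(k), expected_other_given_observed(k) > a)
# For every observed amount the conditional expectation of the other envelope
# exceeds it (11a/10 for k >= 1), yet by symmetry switching cannot help overall:
# the prior has infinite mean, so the unconditional "gain" is not well defined.
```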

Much better now

Well done, Dilaudid. The article is much better after your recent changes. Martin Hogbin (talk) 10:27, 27 May 2010 (UTC)

I think that Clark and Shackel's argument is bogus. Finding an example with infinite sets where there is no paradox does not mean that infinite sets cannot be responsible for this paradox. It is just as easy to use division by zero to show that 2=2 as it is to use division by zero to show that 2=3. Martin Hogbin (talk) 10:33, 27 May 2010 (UTC)

Disruptive edits by iNic again

We're having trouble with iNic again. This time he's using a reference that clearly states that the problem is solved to claim the exact opposite. I've got the article in front of me, and it's MILES away from stating that it is an "open problem". I've reverted using essentially verbatim quotes from the article, but I'm sure that won't be much help. Opinions welcome. I'd welcome a consensus but past experiences indicate that this user does not listen to reason and is abusive (calling people "dumbfucks", stupid etc.). Tomixdf (talk) 10:37, 9 June 2010 (UTC)

I do not think it is fair to call any edits to this article disruptive considering its present state. It is not even clear what the exact problem is. We need to find more sources, preferably online ones that can be easily checked. Martin Hogbin (talk) 12:55, 9 June 2010 (UTC)
It's disruptive to willfully attribute statements to references that do not contain them, in order to push a POV, while refusing to find a consensus. IMO it's easy to find a consensus. Something like "In statistics, this is widely considered a trivially solved problem [references from stat journals here]. In philosophy, some variants of the paradox still cause debate [references from philosophy journals here]." Also, in WP, a decent peer reviewed article that is accessible online from any university library (such as the ref I provided) is just about as good as it can get. Tomixdf (talk) 13:57, 9 June 2010 (UTC).

I agree with Martin Hogbin; considering the present bad state of the article any (sane) edit will be a change for the better. I will have more time for Wikipedia now and will try to restore this article. I say "try" because I have the unpleasant feeling that Tomixdf will revert almost every edit I make. User Tomixdf has his own peculiar view of the state of this problem. He claims, without any proof or reference, that this problem is solved "among Bayesian statisticians" but still unsolved "among philosophers." How a problem can be both solved and unsolved at the same time is of course very hard to understand for anyone who adheres to ordinary logic. Who supposedly solved the problem and when it was solved he has failed to explain. This is not a bit surprising of course because it is still unsolved. But this basic, simple and obvious fact (that the problem is still unsolved) can't be stated explicitly in the article as long as Mr Tomixdf is in control of the article. For lack of valid arguments he even resorts to ad hominem attacks to keep the article in a bad shape. The paper he refers to above is a short article by Ruma Falk whose first two sentences read: "This probability paradox is one of the most widely discussed problems in recent years. It has not yet been entirely settled, and many find it still disturbing notwithstanding the many insights gained in studies that have analysed the problem from diverse angles." And yet, according to Tomixdf, this paper "clearly states that the problem is solved"(!) I hate to get personal, but either I can't read or he can't read. iNic (talk) 01:30, 20 June 2010 (UTC)

There are thousands of creationists who do not accept the theory of evolution; that does not mean that evolution theory should be presented as an "open problem in science" in Wikipedia. Also, you know very well that Falk's article calls this a solved problem with a trivial solution; he simply mentions that there are many subtle VARIANTS of the problem, which lead to "futile controversies". I simply cannot understand why you keep misrepresenting the sources. For the final time, I request that you adopt a constructive attitude. Tomixdf (talk) 08:51, 20 June 2010 (UTC)

Your "creationist analogy" is totally unfounded and absurd as an analogy in this case. This analogy showcases your personal view on the subject but you have never been able to substantiate this view in any way. I have long ago asked you who, if this analogy were correct, would be the "Darwin" of the Two Envelopes problem. I never got an answer. It would also be interesting to know who you think plays the role of "creationists" in this case. I never got an answer to that either. Please keep in mind that this is your silly analogy, not mine. Last time I asked you about this you complained to the administrators that my tone in my questions was too ironic for your taste. But if you push a POV that is silly, you must be prepared to get questions about it that might have to be on the same level of silliness. iNic (talk) 02:13, 21 June 2010 (UTC)

And no, Falk doesn't call this a solved problem at all. On the contrary, she correctly states that this problem "has not been entirely settled," as you can read in the second sentence of her article (quoted in full above for your convenience). It is easy to find other papers stating the same thing, for example the recently added paper in the Further reading section by McDonnell and Abbott from 2009. The first two sentences in their paper read: "The two-envelope problem has a long history, and it is sometimes called the exchange paradox (Zabell 1988) or the two-box paradox (Agnew 2004). It is a difficult, yet, important problem in probability theory that has intrigued mathematicians for decades and has evaded consensus on how it should be treated." iNic (talk) 02:13, 21 June 2010 (UTC)

And no again, Falk doesn't at all say that all other variants of the problem that differ from her own only lead to futile controversies. What she says instead is this: "There are many variations of this elusive problem in which subtle changes in assumptions, or details of the underlying experiment, call for different solutions. Those apparently minor distinctions have sometimes been overlooked by readers and resulted in futile controversies. To avoid ambiguity, I start by referring to Wikipedia's puzzle entitled 'Two Envelopes Problem'." So, to be sure to avoid all potential ambiguities in her paper, she uses the statement of the problem that she could find, at that time, in this Wikipedia article. iNic (talk) 02:13, 21 June 2010 (UTC)

I leave your personal attacks on me without comment. iNic (talk) 02:13, 21 June 2010 (UTC)

I have never reported you to the administrators, even after you started calling your opponents in the discussion "dumbfucks" and "stupid", but I certainly will if you start butchering the article again. The "creationist analogy" is spot on: even though there is a trivial solution in Bayesian probability theory (see Falk: "I try to dispell the doubts that sometimes linger despite the sound arguments in the above sources"), the problem continues to raise discussion from outsiders. Within the Bayesian community, the basic form of the paradox is a solved, trivial problem. And once again, I think it is fine to point out that there is a discussion about certain formulations of the paradox in certain communities. So again, a simple consensus is possible, IMO. Tomixdf (talk) 08:59, 21 June 2010 (UTC)

Do you know that POV-pushing is not at all accepted at Wikipedia? I already know that you think that your silly analogy is "spot on," so I didn't ask for yet another statement of that kind. What I asked for was the sources you have for this claim. If the only source you have is your own opinion we need to move on. (If you want to promote your own interpretation of this problem you are welcome do so on the arguments page. The talk page is devoted to serious editorial discussions only. Please respect that.) iNic (talk) 22:52, 21 June 2010 (UTC)

My issues with the current statement of the problem and solution

The player is presented with two indistinguishable envelopes, each of which contains a positive sum of money. One envelope contains twice as much as the other (say $1 and $2).

There are two possible interpretations of this: (1) the player is told what sums the envelopes contain, for example $1 and $2, or (2) the sums of money can be anything, with $1 and $2 in the statement of the problem being just an example of sums of money with one being twice the other.

The given "solution" -- that A and B both have expected value $1.5 -- is of course only consistent with interpretation (1). I have to say that when I've encountered the problem before (e.g. in Smullyan's Satan, Cantor and Infinity) interpretation (2) has been explicit. Indeed, interpretation (2) is much more troubling, since you can't simply say "by symmetry, the conclusion is obvious" -- if you don't know the amounts, there is still a paradox about whether or not to switch after you've opened one envelope but in that case there is no symmetry. So surely the presentation ought to focus on interpretation (2) -- otherwise the reader who notices the two interpretations may go away saying, "Well, (1) has been solved, but what about (2)?"

"The error is at the sixth step above. It imitates a calculation of expected value, but is mathematically nonsense. Demonstrably, the result 1.25 A is not a value but a random variable. Any rigorous investigation of the game with conditional probability concludes that A and B both have expected value $1.5."

"Demonstrably" is an infuriating word. It here seems to mean "There is a proof of this, but this Wikipedia page is too small to contain it." What is the proof?91.105.61.167 (talk) 21:41, 15 June 2010 (UTC)

Exactly right. The real problem is that this article is not based on what is said in reliable sources but is just one editors opinion on the subject. We need to find out what the scientific literature says about this problem. Martin Hogbin (talk) 23:02, 15 June 2010 (UTC)

This article has been in free fall for quite some time. A lot of bad, unsourced edits from anonymous editors have been added and some good sourced sections have been removed. I will try to restore the article as it once was and start to improve it from there. That is, if user Tomixdf doesn't revert every change I make... iNic (talk) 01:49, 20 June 2010 (UTC)

That previous version claimed absurdities such as "the paradox cannot be stated in frequentist statistics" and "frequentist statistics is more technical than Bayesian statistics". This was all backed up with "references" that did not back up these statements at all. I was not the only editor who judged the article an absolute mess (see discussion above). Tomixdf (talk) 11:55, 20 June 2010 (UTC)
Can you provide some references for your statement that 'Within the Bayesian community, the basic form of the paradox is a solved, trivial problem'? Surely we need to start by reading and understanding these references and then incorporating what they say into the article. I would be happy to help with this. If there is a significant minority view, or if another solution holds in some circumstances, we should say this too. Martin Hogbin (talk) 11:06, 21 June 2010 (UTC)
I agree. Apart from the article that I already provided in the introduction of the article, this article seems to give a good overview of the situation. It states: "The original version of the two envelope paradox is not all that paradoxical", but also "there are strengthened versions of the two envelope paradox where the familiar reasoning seems to hold, but it still does not make any sense to prefer the other envelope." It is these strengthened versions of the paradox (involving infinite expected utilities) that still elicit discussion. Tomixdf (talk) 13:25, 21 June 2010 (UTC)
Do we have any papers on the subject from peer-reviewed mathematical or statistical journals? We also need sources stating the different versions of the problem. Martin Hogbin (talk) 23:07, 21 June 2010 (UTC)
The Falk paper (which is now being abused by iNic to call ALL versions of the paradox "unsolved") is a peer-reviewed paper in a statistical journal. The other reference I provided (which is also peer reviewed, BTW) contains many other peer reviewed references. It's very sad that the article is turning bogus again. Tomixdf (talk) 17:59, 22 June 2010 (UTC)
I have some statistical background - I think it's pretty clear that the exchange paradox (or two envelope problem, if you prefer) is resolved. This link has a nice summary of the literature: [1]. "I think it fair to say that most mathematicians would consider the two-envelope paradox to be resolved for all practical purposes. But there are some loose ends that continue to trouble some thinkers (philosophers particularly)." I appreciate that this may be an unsolved philosophical problem (is there any other kind?) but it is not an unsolved statistical problem. If it were, statistics would be in a very bad state. --Dilaudid (talk) 15:15, 23 July 2010 (UTC)

This is an unsolved problem in decision theory, not in statistics. No one ever claimed that it's an unsolved problem in statistics. This article has been in a bad state for a long time due to anonymous attacks but at least this part is restored now. I'm sorry if the article confused you before. I will restore the whole article as soon as I find time. iNic (talk) 00:03, 24 July 2010 (UTC)
I do get confused a lot :) I'm glad that you agree that this is a solved problem in statistics. If it's an unsolved problem in decision theory - is it worth adding a short note to the article explaining what the decision theory paradox is? I'm not 100% clear what decision theory is. Dilaudid (talk) 21:08, 24 July 2010 (UTC)
It's not the case that we have two problems here, one in "statistics" and one in "decision theory" where the first one is solved and the second unsolved. Sorry if my wording made you think that. The problem itself is safely within decision theory -- and only within decision theory. It is not a statistical problem at all. So that this problem isn't an unsolved problem in statistics DOES NOT mean that it is a "solved problem in statistics." I repeat: it's not a problem in the domain of statistics at all. Therefore it's neither solved nor unsolved in statistics. It's a non-statistical problem. This is made clear in the very first sentence of the article where you also find a link to decision theory. "The two envelopes problem is a puzzle or paradox within the subjectivistic interpretation of probability theory; more specifically within Bayesian decision theory." I don't know how to state this in a more clear manner. If you have an idea how to improve it please let me know. iNic (talk) 15:42, 25 July 2010 (UTC)

Two Envelopes "Paradox" is Resolved - New sources

Hi Folks, I think this "unsolved paradox" is actually solved. In order to get this information out to as many contributors as possible I thought I'd make a new section. Check this source out, and we can process it into the article. [2] --Dilaudid (talk) 15:25, 23 July 2010 (UTC)

Where was this source published? Martin Hogbin (talk) 00:31, 26 July 2010 (UTC)
No this is not a solved problem and your new source doesn't disprove all other solutions, let alone is there a general consensus that Federico O’Reilly finally resolved all problems in this paper. To claim that is just silly. iNic (talk) 00:12, 24 July 2010 (UTC)
iNic - I take your point (section above) that you consider this an unsolved paradox in decision theory, hopefully we can get some clarification on this. Since we both agree that this is a solved problem in statistics (indeed a fallacy, as the "Teaching Statistics" source maintains) - so can anyone who regards this as an unsolved statistical paradox please chat here? If everyone agrees this is a fallacy, we can start to move towards de-mystifying this article. Dilaudid (talk) 21:14, 24 July 2010 (UTC)
No I never said that this is a "solved problem in statistics." And yes, everyone agrees that the reasoning contains at least one fallacy. Where different writers disagree are where the fallacy is and what kind of fallacy it is. If you want to improve the article please read at least ten of the published papers to get an overview of the debate. This discussion page doesn't reflect the academic debate about this problem at all. iNic (talk) 16:00, 25 July 2010 (UTC)
Are any of the published papers freely available online? Martin Hogbin (talk) 13:23, 26 July 2010 (UTC)
Some are available online for free, others are available online but not for free, and still others are available for free but only at the library. iNic (talk) 14:21, 26 July 2010 (UTC)
I have looked at the few papers available online and I have to agree with you that there is no universally agreed solution to this problem, there is hardly an agreed problem. The solution given in this article is actually part of the paradox according to one paper. Martin Hogbin (talk) 17:36, 27 July 2010 (UTC)
Exactly. The simple form of the paradox has a simple solution; nobody disputes that. But there are many intricate variants (typically involving infinity somehow, or ambiguously formulated) that can most definitely be regarded as "unsolved", or at least "controversial". You can find a very good discussion of this in Falk (Teaching statistics, vol. 30, 2008), but there are numerous other references. So the current Wikipedia article gives a completely false view: it states that ALL versions of the paradox are unsolved, which is totally incorrect. On top of that, it limits the paradox to the "subjective interpretation of probability", which is equally incorrect. Note that none of these statements are backed up by the two references provided. Tomixdf (talk) 08:52, 28 July 2010 (UTC)
You will see that I have deleted the 'Solution' section given because it was completely unsourced. It seemed to be the OR of one particular editor. Solutions of that type are presented by some sources but are discredited by others. We need to give a balanced view of the situation in the article showing the generally accepted academic consensus on the subject if there is one. Martin Hogbin (talk) 09:13, 28 July 2010 (UTC)
That is fine - I was talking about the introduction. I hope you at least agree that the current introduction is not acceptable? Why don't we put the Falk discussion in, as an example of a solution for one version of the paradox? Tomixdf (talk) 09:26, 28 July 2010 (UTC)
Why not? I am new to this problem but it is clear that there is not one simple, clear, universally accepted solution. Martin Hogbin (talk) 12:12, 28 July 2010 (UTC)

Martin - continuing from your line above, there seem to be at least 3 separate paradoxes that have come out of this. Two are trivial, one appears to be unresolved. 1) Smullyan's "logical paradox", a trivial fallacy, see Albers' "Trying to Resolve the Two-Envelope Problem" [3]. Smullyan uses the same word to refer to two separate things. This is the same as the random variable argument that some have tried to add to the (wiki) article. 2) A second fallacy - the failure to incorporate the information about the amount in the envelope into a posterior/conditional probability (see Albers, p.90, top). This is also trivial, and is the solution mentioned in the current article. 3) A decision theory problem, [4]. Dietrich and List are using the paradox as a highly unrealistic test case for decision theory, to see if decision theory can be applied to infinite expectations. This loses any basis in the real world, since real world expectations of quantities of money cannot be infinite, but it appears to be an interesting problem. So I would regard the first two paradoxes as "explained" or "resolved" and the third as open (this is the point that iNic was trying to make).

I might be wrong, but I think that the difference between 1 and 2 is that before you look at the amount in the envelope, 1 applies - X is a random variable. After you look at the amount in the envelope, X needs to be treated as evidence - it's a definite amount of money.

A few more things - the word "solved" was an unfortunate word for me to use, the paradox may be resolved/explained without the problem (of whether to swap or not) being solved. In answer to Martin's question - I don't think O'Reilly was published anywhere. Is it worth grouping the sources by which paradox they are referring to? Also Tomixdf/Martin - do you have a free source for Falk? I'd like to read the paper. Cheers all. Dilaudid (talk) 17:37, 31 July 2010 (UTC)

I agree with several things that you say. As with all problems of this nature it is important to decide exactly what the question is. There seem to be several versions of the problem, some where you look inside the envelope before deciding and others where you do not. The question also needs to be quite clear on what sums could possibly be in the envelopes.
Once all this is settled we need to see which situations are indeed paradoxical in that two different lines of reasoning give two different answers. In some formulations it could be correct that you should swap.
What notable formulations are there and is one regarded as the 'classic' formulation? Martin Hogbin (talk) 18:19, 31 July 2010 (UTC)

Lack of explanation

There is not enough substance in the article to show there even being a problem. For example: why does Bayesian probability even kick in with these few outcomes and so little data, especially because, as any middle school algebra student can tell you, this word problem can only result, by its own definition, in two outcomes, x or 2x. Any math that cannot stand up to its own definition is suspect, to say the least.--67.170.10.189 (talk) 01:56, 29 August 2010 (UTC)

I'd like to disagree. This is a well known problem and this article is both comprehensive and concise, and more details can be found via the lists of further reading and external links. E.g. see my contribution to the latter for one view of how a closely related problem remains even when a trivial resolution is realised, which was in last May's edition of The Reasoner. Username12321 (talk) 09:19, 3 September 2010 (UTC)
I agree there clearly is some kind of paradox here, depending on exactly what the question is. This article ideally needs to state exactly what problem formulations are being considered, what paradoxes there are for each, and how each paradox is resolved, all based on reliable sources of course. Not easy. Martin Hogbin (talk) 10:01, 27 October 2010 (UTC)

Some organisation

Reading the cited and other sources, it would seem that two important distinctions are made in the formulation of this problem. One distinction is whether or not the player opens his envelope. Cases where the envelope is opened are generally easier to understand and explain, and they are sometimes used as a basis for deductions about cases where the envelope is not opened.

The other important distinction is in the range or distribution of sums that might be in the envelopes. For cases where there is a simple finite distribution of possible sums (pretty well any realistic formulation) it seems to be generally agreed that the problem is easily solved. It is in the case of infinite distributions that there appears to be no general agreement.

I would like to rewrite this page to show the various possibilities and to help make clear where there is agreement between sources and where there is not. Is anyone interested in joining me? Martin Hogbin (talk) 12:20, 1 January 2011 (UTC)

Sure I'm interested! There are a lot of interesting ideas around this paradox that can easily be stated in the article, so a lot of improvements are for sure possible. The article had the kind of structure you talk about a couple of years ago, with different sections for different versions/interpretations/formulations of the paradox. Another thing that I think would be an improvement is to create a separate page with links to further reading. That page could be the 'complete' reference guide for anyone who wants to learn more about this. iNic (talk) 16:01, 7 January 2011 (UTC)

Merge with Necktie Paradox

I propose that Necktie Paradox is merged into this page, perhaps as a short paragraph or section of its own. It seems to fundamentally be the same problem. Thoughts? Andeggs 22:29, 22 December 2006 (UTC)

In a way that paradox is already mentioned here in the history section in the form of a wallet game. Maurice Kraitchik wrote about the necktie paradox as early as 1943; please see the text by Caspar Albers in the bibliography for a reference and citation. So I do agree we have a historical connection here, but I also think that the Necktie Paradox deserves an article of its own. Many of the ideas developed around the two envelopes problem aren't directly applicable to the Necktie Paradox. So historically it's the same problem, but now they are separate problems. However, this historical connection could be stressed more in both articles. iNic 00:28, 23 December 2006 (UTC)
Do not merge. Although essentially the same problem, they are expressed very differently and have different existences in the literature etc. Cross-refer but keep separate. Snalwibma 14:01, 4 January 2007 (UTC)
Do not merge. I agree with Snalwibma.--Pokipsy76 12:08, 30 January 2007 (UTC)
Merge The "history section (Two-envelope paradox#History of the paradox seems to be exactly this problem. —ScouterSig 19:12, 26 March 2007 (UTC)
Do not merge I agree that the history of the two-envelope paradox (the wallet switch) is the same as the necktie paradox, but as noted therein, the envelope problem differs in the presented relationship between values. The envelope problem is very much different to consider than Necktie. I fully agree with iNic above that perhaps the solution is further stressing of the common historic root.130.113.110.75 06:01, 12 April 2007 (UTC)

Also, this page is very similar to Exchange paradox— Preceding unsigned comment added by 64.81.53.69 (talk) 05:00, 11 April 2007

I definitely agree with merging with the Exchange paradox. I support further discussion on merging the necktie paradox, as they do seem similar, and the necktie paradox lacks a lot of the analysis of the other two articles, so it would suit being mentioned as a variation of the problem. Jamesdlow (talk) 04:54, 11 April 2008 (UTC)

I think it would be a bad idea to merge with the Exchange paradox. As the discussion page on that article mentions, that page seems more like an article on the paradox, while this page is more like a puzzle giving versions of the paradox and solutions. I also think this article is not in very good shape; citations are thin and the ones I checked don't actually say what they are cited for. Warren Dew (talk) 02:37, 3 May 2008 (UTC)

The envelope is slightly different than the necktie, but the necktie is 100% the same as the wallet. Cheaper guy gets the booty! 76.112.206.81 (talk) 07:13, 30 September 2008 (UTC)

Do not merge. The envelope model is superior to the neck-tie device - in the neck-tie conundrum it is an artificial construct to say that a man could see it is a 50/50 chance that his neck-tie was cheapest (in reality a man would guess his tie was cheap/average/expensive) - but in the envelope model the statement that one envelope contains double the other can be considered absolute. Gomez2002 (talk) 13:53, 14 February 2011 (UTC)

Stop substantially editing this if you aren't an expert

I understand referencing sources for citation, but if you need to check the literature to understand how to unravel this 'paradox,' you're not qualified to edit this page. Please be humble and don't. You can argue that 'experts' disagree how to resolve it, but the reason we need experts writing technical articles is that not every PhD who publishes is good at what they (or especially others) do, and experts can sift the noise. This leaves the question of who the real experts are, which we don't need to answer. It's enough to say that if you aren't certain that you're an expert on this topic, you aren't, even if you've read a dozen papers about it.

Paradoxes arise when two different formulations of supposedly the same problem contain divergent, implicit assumptions. The paradox disappears when you identify the missing or altered assumptions in the flawed model. This has been done ad nauseam on these pages. The answers have been expounded comprehensively, but that work is either absent, or marginalized by unwarranted hedging and convolutions in the article, particularly 1) the last sentence of the intro, 2) the title 'Possible Solutions,' 3) the last sentence of that section, and 4) most of the 'A second problem' section. If you claim ultimate editorial authority over this article and you aren't sure of this, please pass the torch. I'm not embarrassed for you, but I cringe a little that, even as a work in progress, you default to favoring ambiguous quibbling over rigorous lucidity. This is not a perplexing philosophical debate, it's semantic obfuscation that has been exposed and corrected. Hand the reins off to workhorses who are at least clever enough to have solved the puzzle themselves. — Preceding unsigned comment added by Nimblecymbal (talkcontribs) 18:56, 26 March 2011 (UTC)