You are presented with two envelopes, each of which contains a sum of money. You are told that one envelope contains an amount exactly double the amount in the other envelope. You will be permitted to keep the money in the envelope you choose.

You choose an envelope at random, but before you open it, you are given the option to change your selection. You reason that if the amount of money in the first envelope is *x*, then switching gives you an equal chance of receiving either 2*x* or .5*x*. You calculate your expected return by adding 2*x* and .5*x* to reach 2.5*x*, then dividing by 2 to reach 1.25*x*. This is more than you have now, so you decide to switch.

Is there anything wrong with your reasoning? What if you are given the option to change your selection again?

The line of reasoning that makes switching seem favorable leads to the paradoxical prospect of switching back and forth endlessly, because it is always favorable to switch. This is absurd, so the reasoning must be flawed in some way. This problem has led to much debate among mathematicians and philosophers.

One analysis of the flaw is that *x* stands for two different things in the equation. When referring to 2*x*, *x* refers to the smaller amount; when referring to .5*x*, *x* refers to the larger amount. This invalidates the equation.

A common-sense way of resolving the paradox is to observe that, unlike in the Monty Hall problem, there is no change in the situation before you are given the opportunity to switch. You had a 50-50 chance of choosing the larger amount, and you either did or you didn’t. Switching does not improve your odds.
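This claim is easy to check empirically. The sketch below (the function name, trial count, and illustrative amounts of $5 and $10 are my own choices, not part of the problem statement) simulates the game many times with and without switching; both strategies average the same payoff.

```python
import random

def average_payoff(switch, amounts=(5, 10), n_trials=100_000, seed=1):
    """Simulate the two-envelope game and return the mean payoff."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        envelopes = list(amounts)
        rng.shuffle(envelopes)      # one envelope holds double the other
        pick = rng.randrange(2)     # your random initial choice
        if switch:
            pick = 1 - pick         # take the other envelope instead
        total += envelopes[pick]
    return total / n_trials

print(average_payoff(switch=False))  # ~7.5, the mean of $5 and $10
print(average_payoff(switch=True))   # ~7.5 as well: switching doesn't help
```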

I don’t like the common-sense resolution, because it isn’t a resolution. A resolution should tell you what is wrong with the flawed argument, and all this one does is provide a second argument. Two arguments that disagree, with no explanation of why one is wrong, is a paradox, not a resolution.

I also don’t like the explanation given above, “x stands for two different things in the equation … the smaller amount [or] the larger amount.” The point of a random variable is that it “stands for” the set of all possibilities, not any single one. So while this is attempting to explain the error, it is technically incorrect.

I prefer an example of what that explanation was trying to say, which is that treating the envelope as having a specific value, even the unknown X, requires knowing how X was picked.

1) Say I prepare two envelopes, with $5 and $10, and give them to you for the above experiment. I tell you these values. But since the envelopes are sealed, all it means to you is that one has twice as much as the other. The point is that you can’t treat any one value as having the potential to both win and lose: if your envelope has $5, switching can only increase the value, and if it has $10, switching can only decrease it.

2) Say I prepare two pairs, one with ($5,$10) and one with ($10,$20). You pick one at random for the above experiment. Again, all you know for certain is that one of the two envelopes has twice as much as the other. If your envelope has $10, then the “you should switch” argument above is actually correct! You have a 50% chance to lose $5, and a 50% chance to gain $10, so the expected gain is $2.50. But if you have $20, switching loses $10.
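Example 2 can be checked by simulation. This sketch (function name, trial count, and seed are my own) prepares one of the two pairs at random, assigns you an envelope at random, and conditions on the event that your envelope holds $10:

```python
import random

def expected_switch_gain(pairs, observed, n_trials=200_000, seed=0):
    """Estimate the expected gain from switching, given that your
    envelope turns out to contain `observed` dollars."""
    rng = random.Random(seed)
    gains = []
    for _ in range(n_trials):
        low, high = rng.choice(pairs)        # prepare a pair at random
        if rng.random() < 0.5:
            mine, other = low, high
        else:
            mine, other = high, low
        if mine == observed:                 # condition on what you hold
            gains.append(other - mine)
    return sum(gains) / len(gains)

# One ($5,$10) pair and one ($10,$20) pair; condition on holding $10.
print(expected_switch_gain([(5, 10), (10, 20)], observed=10))  # ~ +2.5
```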

3) Say I prepare ten pairs, nine with ($5,$10) and one with ($10,$20). Now, if your envelope has $10, there is a 90% chance you will lose $5 by switching, and only a 10% chance you will gain $10. So the expectation is a loss of $3.50.
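Here no simulation is needed; the twenty equally likely (held, other) cases can be enumerated directly (a sketch in Python; the enumeration style is my own):

```python
# Ten equally likely pairs; within each pair, either envelope is
# equally likely, so all twenty (held, other) cases get equal weight.
pairs = [(5, 10)] * 9 + [(10, 20)]

cases = []
for low, high in pairs:
    cases.append((low, high))   # you happened to pick the low envelope
    cases.append((high, low))   # you happened to pick the high envelope

holding_10 = [(mine, other) for mine, other in cases if mine == 10]
gain = sum(other - mine for mine, other in holding_10) / len(holding_10)
print(gain)  # -3.5: nine cases lose $5, one case gains $10
```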

The point is that the 50%:50% split applies only to whether you picked the higher or lower envelope of its pair, not to whether the specific value in your envelope is the higher or lower value. That is determined by both the 50%:50% split and the relative probabilities that the pair was prepared with ($X/2,$X) or ($X,$2X).
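That dependence fits in a small closed-form calculation. In this sketch (the function and parameter names are mine), `p_lower_pair` and `p_higher_pair` are the prior probabilities that the pair was prepared as ($X/2,$X) or ($X,$2X); the 1/2 chance of picking either envelope within a pair cancels when you condition on holding $X:

```python
def switch_gain(x, p_lower_pair, p_higher_pair):
    """Expected gain from switching, given that your envelope holds x.

    p_lower_pair : prior probability the pair was (x/2, x)
    p_higher_pair: prior probability the pair was (x, 2x)
    """
    total = p_lower_pair + p_higher_pair
    p_lose = p_lower_pair / total    # you hold the larger of (x/2, x)
    p_win = p_higher_pair / total    # you hold the smaller of (x, 2x)
    return p_lose * (-x / 2) + p_win * x

print(switch_gain(10, 0.5, 0.5))  # example 2: +2.5
print(switch_gain(10, 0.9, 0.1))  # example 3: -3.5
```

Note that the naive 1.25*x* argument amounts to assuming the two pair types are equally likely for every possible value of *x* at once, which no actual preparation procedure achieves.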