< Earlier Kibitzing · PAGE 243 OF 243 ·
Later Kibitzing> 
Oct-11-14   ljfyffe: <just ran into a paradox>  it's bad enough banging into just one dox! 

Oct-12-14
  al wazir: <Sneaky>: I'll answer while trying to work it out, thinking on my feet, so to speak. In the first version of the game, here are the possible results from the first three throws:
25% HH (I win)
25% HT (you win)
12.5% THH (I win)
12.5% THT (you win)
12.5% TTH (next toss ends the game)
12.5% TTT (inconclusive)
Obviously this can be extended indefinitely. After four tosses beginning with TTTH, one of us has to win on the next, with equal chances, but after TTTT we're still waiting. In general, as long as the coin comes up tails nothing is settled, but as soon as a head shows, one or the other of us has to win, with equal probability. Now look at the second version. I'll list the possible starting sequences, my coin first and then yours:
HH HH (I win)
HH HT (tie)
HH TH (I win)
HH TT (I win)
HT HH (inconclusive)
HT HT (you win)
HT TH (inconclusive)
HT TT (inconclusive)
TH HH (inconclusive)
TH HT (you win)
TH TH (inconclusive)
TH TT (inconclusive)
TT HH (inconclusive)
TT HT (you win)
TT TH (inconclusive)
TT TT (inconclusive)
Summary: Each of us wins three, there is one tie, and nine are inconclusive, all with equal probability. After the tie, I have a 50:50 chance of winning on the next toss, whereas you will need at least two more throws. For two of the inconclusives, we both got H on our second toss, which means that we have equal chances of winning on the next toss, and for one inconclusive neither of us got an H on our second, so neither of us can win on the next toss; but there are *five* inconclusives where you got an H on your second toss and I didn't, versus only *one* where I got an H on my second toss and you didn't. Consequently, your chances of winning on the third toss are better than mine. There are five equally probable ways for you to win outright on your next toss, versus only two where I can. I'm not going to carry the analysis any further, but will just argue that something similar must happen after any number of tosses. In any case, the longer the sequence, the less likely it will be for the game to last that long, so we're talking about diminishing probabilities. I conclude that if the probabilities of all the (infinite) possible sequences are added up, the HT player will come out ahead. 
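For anyone who wants to run the experiment rather than extend the tree, here is a minimal Python sketch of the simultaneous game (my own code, not from the thread), using the tie rule implied above: if both players complete their patterns on the same toss, the round cancels and play continues.

```python
import random

def play(rng):
    """One simultaneous game.  Each round both players flip their own
    coin; HH wins on H followed by H, HT wins on H followed by T.  If
    both complete their pattern on the same toss, the tie cancels and
    play continues."""
    prev1 = prev2 = 'T'  # the first toss can't complete either pattern
    while True:
        c1, c2 = rng.choice('HT'), rng.choice('HT')
        hh_wins = (prev1 == 'H' and c1 == 'H')
        ht_wins = (prev2 == 'H' and c2 == 'T')
        if hh_wins != ht_wins:         # exactly one pattern completed
            return 'HH' if hh_wins else 'HT'
        prev1, prev2 = c1, c2          # tie or neither: keep flipping

rng = random.Random(1)
trials = 200_000
ht = sum(play(rng) == 'HT' for _ in range(trials))
print(ht / trials)  # comes out near 0.588, not 0.5
```

With a couple hundred thousand trials the HT fraction settles visibly above one half, in line with the enumeration above.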

Oct-12-14
  Sneaky: <Al> I've concluded that HT wins 5/8 of the time but I don't have a proof yet. Your analysis is surely on the right track. I find it surprising that the simultaneous version should work out differently, don't you? Not that it's so hard to understand, but my gut instinct told me it should be the same. 

Oct-13-14
  Sneaky: OK here's my English analysis. Common sense shall prevail, no need for maths (yet). In variation #1: We keep flipping the single coin until an H appears. At that point, the very next flip will determine the winner, with a fair 50% chance for each. So that game is fair. In variation #2: It's easier to get HT than to get HH. Here's why: If you are going for HT: You wait until you get an H, then you have a 50% chance to achieve your goal on the next flip. However if you don't (you get an H instead) you have a 50% chance to achieve your goal on the flip after that. You will win whenever your streak of Hs come to an end. If you are going for HH: As above, you wait until you get a H, then you have a 50% chance to achieve your goal on the next flip. However if you don't (you get a T instead) then you are *two* flips away from achieving your goal minimum. So it's not symmetrical. Now here's the paradox stated in English: The analysis above for variation #2 shows that it takes fewer flips to get an HT sequence than an HH sequence. Given that fact, how then can the first variation of the game possibly be fair? 
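Sneaky's waiting-time argument can be made exact with first-step analysis; a short Python sketch using exact rationals (the equations are a standard derivation, not from the post). The asymmetry is visible in the equations: from the "just saw an H" state, HT ends on a T while an extra H costs nothing, whereas HH falls all the way back to the start on a T.

```python
from fractions import Fraction

half = Fraction(1, 2)

# Expected flips to see H-then-T.
# E1 = expected extra flips once an H has just appeared:
#   a T ends the game, another H leaves us in the same state,
#   so E1 = 1 + half*E1, giving E1 = 2.
E1_ht = 1 / (1 - half)
# E0 = expected flips from scratch: a T keeps us at E0, an H moves us
#   to E1, so E0 = 1 + half*E0 + half*E1, giving E0 = 4.
E0_ht = (1 + half * E1_ht) / (1 - half)

# Expected flips to see H-then-H.
# After an H, another H ends it, but a T sends us back to the start:
#   E1 = 1 + half*E0 and E0 = 1 + half*E0 + half*E1, giving E0 = 6.
E0_hh = (1 + half) / (1 - half - half * half)
E1_hh = 1 + half * E0_hh

print(E0_ht, E0_hh)  # 4 and 6, as the linked page says
```

So on average HT needs four flips and HH needs six, exactly the 4-versus-6 gap the MIT page quotes.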

Oct-13-14
  al wazir: <Sneaky>: Five-eighths sounds about right. I think the easiest way to check it would be to write a little program (20 or 30 lines should be all it takes) to run a million or so trials and find out empirically the proportion each player wins. If I feel sufficiently motivated I'll do it. Your argument about asymmetry overlooks something: In the event of a tie, the HH player is in a position to win on the next toss, but the HT player has to wait *two* tosses at a minimum. (I'm assuming that the game continues as usual after a tie; if it starts all over again _de novo_ then the above statement isn't correct.) I don't really see anything paradoxical about the result. Though the HT player is more likely to win than the HH player after getting an H in the second game, a "streak of Hs" guarantees a win for the HH player in the first game. Different games, different outcomes. 

Oct-13-14   nok: <If you are going for HT: You wait until you get an H, then you have a 50% chance to achieve your goal on the next flip. However if you don't (you get an H instead) you have a 50% chance to achieve your goal on the flip after that.> The first game would be over by then, so you don't get to show your advantage. 

Oct-13-14   ljfyffe: Why is there being, and not rather nothing? 

Oct-14-14
  al wazir: All right, I couldn't stand the suspense any longer. Here is the Fortran code I wrote:

program Toss
! Declaration
integer*4 coin(2,2), HH /0/, HT /0/, kount /0/, ktot /1000000/, null
integer*4 idum /654321/
logical HHwin, HTwin
! Execution
do kount = 1, ktot
  coin(1,2) = 0
  coin(2,2) = 0
  HHwin = .false.
  HTwin = .false.
  do while (HHwin .eqv. HTwin)
    HHwin = .false.
    HTwin = .false.
    coin(1,1) = coin(1,2)
    coin(2,1) = coin(2,2)
    coin(1,2) = 2*ran2(idum)
    coin(2,2) = 2*ran2(idum)
    if (coin(1,1) .eq. 1 .and. coin(1,2) .eq. 1) then
      HH = HH + 1
      HHwin = .true.
    endif
    if (coin(2,1) .eq. 1 .and. coin(2,2) .eq. 0) then
      HT = HT + 1
      HTwin = .true.
    endif
    if (HHwin .and. HTwin) then
      HH = HH - 1
      HT = HT - 1
      HHwin = .false.
      HTwin = .false.
    endif
  enddo
enddo
null = ktot - HH - HT
print 1, HH, HT, null
1 format ('HH wins =', I7, ', HT wins =', I7, ', no one =', I7)
end
The subroutine ran2 is the long-period random number generator of L'Ecuyer with Bays-Durham shuffle, which _Numerical Recipes_ rates as top of the line. The result was 587739 wins for HT and 412261 for HH. I ran it a second time with a different seed (654321) and got 587875 wins for HT and 412125 for HH. That's as close to being the same answer as can be expected, since the relative statistical error in a million trials is on the order of 1/1000, i.e., they should agree to within ~1000, and they do. This is *not* 5/8 vs. 3/8; to within the statistical error it's 10/17 vs. 7/17 (more precisely, 47/80 vs. 33/80). The rigorous proof that HT should win 10/17 (or 47/80) of the time is left as an exercise for the reader. 
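For readers who want the exercise done: the simultaneous game is a four-state Markov chain whose state is the previous flip of each coin, and the resulting linear system can be solved exactly. Here is a Python sketch (my own formulation of the states, using the same tie-cancellation rule as the Fortran above); solved over exact rationals it gives 67/114 ≈ 0.58772, which is consistent with both million-trial runs and close to, but not equal to, either 10/17 or 47/80.

```python
from fractions import Fraction

F = Fraction
q = F(1, 4)

# Unknowns x = (t, u, v, w) = P(HT player eventually wins) from the
# states (T,T), (H,T), (T,H), (H,H), where the state records the
# previous flip of each coin and a tie round cancels and play goes on:
#   t = (t + u + v + w)/4      (no one can win this round)
#   u = (t + v)/4              (HH wins outright with prob 1/2)
#   v = 1/2 + (v + w)/4        (HT wins outright with prob 1/2)
#   w = 1/4 + (u + v)/4        (each wins alone with prob 1/4; the
#                               double win is the cancelled tie)
A = [[F(3, 4), -q, -q, -q],
     [-q, F(1), -q, F(0)],
     [F(0), F(0), F(3, 4), -q],
     [F(0), -q, -q, F(1)]]
rhs = [F(0), F(0), F(1, 2), q]

# Gauss-Jordan elimination over exact rationals, with row swaps.
n = len(A)
for i in range(n):
    p = next(j for j in range(i, n) if A[j][i] != 0)
    A[i], A[p] = A[p], A[i]
    rhs[i], rhs[p] = rhs[p], rhs[i]
    piv = A[i][i]
    A[i] = [a / piv for a in A[i]]
    rhs[i] /= piv
    for j in range(n):
        if j != i and A[j][i] != 0:
            f = A[j][i]
            A[j] = [a - f * b for a, b in zip(A[j], A[i])]
            rhs[j] -= f * rhs[i]

t, u, v, w = rhs
print(t, float(t))  # the game starts, in effect, in state (T,T)
```

Since neither pattern can complete on the very first toss, the starting value is t, the (T,T) entry.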

Oct-14-14
  Sneaky: <Why is there being, and not rather nothing?> This flax weighs three pounds. 

Oct-14-14
  Sneaky: By the way, here's the site I found the coin flipping paradox
http://web.mit.edu/~emin/www.old/wr... The author writes <At first glance it seems like the second game also results in each player having a probability of winning of 1/2. However this is not true! The player who bets on (Heads, Tails) will win much more often. The simplest way to see this is by calcuting the average number of flips it takes each player to win.> Unfortunately the notation he uses to calculate the probabilities looks foreign to me, but I get the general idea. An average of 6 flips to get HH, an average of 4 flips to get HT. He also has some good pages on the ever perplexing envelope paradox, first the general paradox http://web.mit.edu/~emin/www.old/wr... and then a very interesting variation in which the envelopes always contain powers of 3: http://web.mit.edu/~emin/www.old/wr... 

Oct-14-14
  al wazir: <Sneaky>: I read as far as <Let the random variable N represent the resulting number of flips>, and then stopped. The concept of a "randomly chosen" integer is not well defined. Since there are an infinite number of integers, the average value is infinite. Any finite number is therefore less than average, which means the choice cannot be "random." 

Oct-14-14
  Sneaky: <al wazir> <The concept of a "randomly chosen" integer is not well defined.> I disagree, it's very well defined.
(For those who didn't read the article, we're talking here about a distribution for n expressed as this: "You flip a coin until heads appears. Let n=the number of flips.") <Since there are an infinite number of integers, the average value is infinite.> That doesn't follow. Nothing prevents us from defining a distribution where the larger numbers are so unlikely that their contribution to the average is finite; and in fact, that's exactly what we've done here. To compute the average of the distribution above, it's like computing any average. We weight each integer with the odds of it happening. In big-sigma notation (as best as I'm able to do it here) we are looking at this:
Σ_{n=1 to ∞} n*(1/2^n)
That, my friend, has a very finite average: it converges to 2. 1*(1/2) + 2*(1/4) + 3*(1/8) + 4*(1/16) + ... = 2
All this means is: "on the average, it takes two flips of a coin to get heads." We should be shocked if it came out to anything else. <Any finite number is therefore less than average, which means the choice cannot be "random."> You are thinking in the right way to crack this mystery but the problem definition is not the time to break out the big guns. The problem of infinite averages does indeed enter into this, but not when it comes to defining n. Consider: there's nothing stopping us from actually playing this game in real life. Take a coin and flip it until a head appears. Count the flips. Now you know a number that I don't know. Why is that not random? 
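The convergence is easy to check by machine; a quick sketch with exact rationals (mine, just summing the series above term by term):

```python
from fractions import Fraction

# Partial sums of the series sum over n >= 1 of n*(1/2^n),
# which should close in on 2.
s = Fraction(0)
for n in range(1, 41):
    s += n * Fraction(1, 2) ** n
print(float(s))  # 1.99999999996..., closing in on 2
```

After forty terms the partial sum is within about 4e-11 of 2, so "on average it takes two flips to get heads" checks out numerically.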

Oct-14-14
  al wazir: <Sneaky: Nothing prevents us from defining a distribution where the larger numbers are so unlikely that their contribution to the average is finite; and in fact, that's exactly what we've done here.> Of course. Random numbers can be generated with infinitely many distributions. But when someone says, e.g., "choose a number at random," the implication is that all available numbers are equally probable. I glanced at the first site you linked to and felt it was unnecessarily turgid and complicated. I think I already understood the resolution of the (original) envelope paradox. So when I started reading the second one and came to the sentence "Let the random variable N represent the resulting number of flips...," I said to myself, here we go again, and stopped there, as I said. But I didn't make a vow never to read on. 

Oct-15-14
  al wazir: <Sneaky>: I went back and read the description of the second envelope problem carefully. (The first time I only skimmed it, and that's why I misunderstood what the problem poser meant by "random." Sorry about that.) It's really a cute problem. Let me try to analyze it step by step. As he says, if the envelope you get contains 1, then the other is certain to contain 3, so of course you switch. Now suppose the envelope you get first contains 3. Then the other one can contain either 1 or 9; there are no other possibilities. If it contains 1, the first toss of the coin must have yielded a head, i.e., the sequence of tosses must have been simply H; if 9, then the sequence of tosses must have been TH. The probability of the first sequence is 1/2; the probability of the second sequence is 1/4, so the probability that the second envelope contains 1 is twice the probability that it contains 9. Therefore the probability that it contains 1 is 2/3 and the probability that it contains 9 is 1/3, because 2/3 is twice 1/3, and the two probabilities have to add up to 1. This means your *expectation* when you switch is (2/3)x1 + (1/3)x9 = 3 2/3. Since 3 2/3 is more than 3, it makes sense to switch. And likewise, if your envelope contains 9, the other one contains either 3 or 27, with respective probabilities of 2/3 and 1/3. So the expected value of the contents of the other envelope is (2/3)x3 + (1/3)x27 = 11, which is more than 9, so again you switch. And in general, if the envelope you start with contains 3^N, then the other one contains either 3^(N-1) or 3^(N+1), and the expected value of the contents resulting from a switch is (2/3)x3^(N-1) + (1/3)x3^(N+1) = 2x3^(N-2) + 3^N, so by switching you stand to gain 2x3^(N-2). (This formula reduces to the result for the two special cases I already discussed.) Simple. BUT WAIT A MINUTE. If you always gain by switching, you don't even have to look at the contents of the first envelope. 
You should *automatically* switch, every time. And on average, you must come out ahead. Huh?
The resolution of this paradox is similar to that for the old envelope paradox. Whatever units the numbers in the envelopes refer to (dollars, pounds, euros, rubles), the amount that exists is finite. Under some circumstances three times the number in the envelope *might* exceed the total in the entire known universe. That is, if the result of the coin tosses is TTT...TH (a long, long, long string of tails before the first head), the number 3^(N+1) might be so big that it exceeds the total of the dollars, pounds, etc., that exist. In that case, the number in the second envelope *must* be smaller, and if you switch you'll take a bath. You'll lose the biggest fortune anyone ever lost in the history of the world. Of course that TTT...TH string of tosses is highly unlikely, but in the highly unlikely case that it occurs, the amount you'll lose will just balance the gains from all the other swaps that can occur. And in fact, it doesn't take an impossibly long streak of tails: 3^19 is more than a billion, but the odds against the corresponding sequence of tosses (18 tails before the first head) are "only" a little more than a half million to one. Now I'll go back to the poser's site and read his answer. Once again, his explanation exceeded my attention span, so I'll stick with my own. 
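The arithmetic in the step-by-step analysis above can be tabulated directly; a small Python sketch with exact fractions (the 2/3-vs-1/3 posterior weighting is taken straight from the argument above):

```python
from fractions import Fraction

def ev_other(N):
    """Expected contents of the other envelope when yours holds 3**N
    (N >= 1): the smaller value 3**(N-1) has posterior probability 2/3,
    the larger 3**(N+1) has probability 1/3."""
    return Fraction(2, 3) * 3**(N - 1) + Fraction(1, 3) * 3**(N + 1)

for N in (1, 2, 5):
    gain = ev_other(N) - 3**N
    # the gain comes out to 2*3^(N-2): 2/3, 2, 54 for N = 1, 2, 5
    print(N, ev_other(N), gain)
```

N = 1 and N = 2 reproduce the 3 2/3 and 11 worked out above, and the apparent gain 2x3^(N-2) is positive for every N, which is exactly the trap.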

Oct-15-14
  Sneaky: <Whatever units the numbers in the envelopes refer to (dollars, pounds, euros, rubles), the amount that exists is finite.> I think you have the right idea in mind but just complaining that we might run out of money isn't good enough. For the sake of argument let's say that this is a game show and the values in the envelopes are merely "points" that you accumulate on the show. If you get more points than the other contestant you win a brand new car. So now you should have no problem if we have any arbitrarily large number n with ridiculously huge 3^n envelope values. But the fact that 3^n gets big so quickly is at the heart of this riddle. Here's my resolution: You compute that if the envelope you have contains X, the other one has an EV of 11/9 * X. Since that's bigger than X, you conclude it's smart to switch. However, what happens when you try to solve the question "What is the EV of one randomly selected envelope of a pair of envelopes?" You learn that it has an infinite EV, just like the EV in the St. Petersburg paradox. So given that, we know that the above conclusion 11/9 * X > X is as meaningless as saying 11/9 * infinity > infinity. That cursed infinity has led us astray once more. 

Oct-15-14
  al wazir: <Sneaky: I think you have the right idea in mind but just complaining that we might run out of money isn't good enough.> Yes, that was more in the nature of a motivation than a proof. <For the sake of argument let's say that this is a game show and the values in the envelopes are merely "points" that you accumulate on the show. If you get more points than the other contestant you win a brand new car.> Suppose you are a contestant in that game show. There must be an (unknown, possibly unknowable) number beyond which any number is "too big" to be contained in an envelope, even in the game-show version of the problem. Suppose you have been *told* what that biggest number is. Suppose the biggest number an envelope can possibly contain is 3^20 (approximately 3.5 billion). You open your envelope and see that the number it contains is 59049 (3^10). Do you switch? Yes, you know that on average switching will increase your point total. But two times out of three it will get you a smaller number, 19683. You are competing against other contestants who have their own envelopes and have to make their own decisions. They *might* have the same number you have, but how likely is that? It would have taken ten coin tosses to generate your number, 59049, and the odds against that are more than 1000 to 1. Unless there are at least a thousand contestants competing against you, or the game lasts more than a thousand rounds, or some combination of these, your number is probably going to win. It will be almost as likely to win even if scores are accumulated over many rounds. And in the extremely unlikely case that one of your competitors has an envelope containing a bigger number, you can't do better than earn a tie even by switching. So you would be foolish to switch. <"What is the EV of one randomly selected envelope of a pair of envelopes?" You learn that it has an infinite EV, just like the EV in the St. 
Petersburg paradox.> I thought of pointing out the connection with martingales, but for that I would have had to do a formal calculation. And then I would have gotten bogged down in something as hard to plow through as the answer the proponent of the problem gave. I will simply state, again without proof, that once an upper limit is imposed, the expected value of switching is zero, because the extremely unlikely risk of a big loss from switching away from the biggest value cancels out all the (more likely but smaller) expected gains from switching when the first envelope contains a smaller number. 

Oct-15-14
  Sneaky: <Suppose you are a contestant in that game show. There must be an (unknown, possibly unknowable) number beyond which any number is "too big" to be contained in an envelope, even in the game-show version of the problem.> I thought of that while composing my game show metaphor. True, even if it's just a number written down on a card, or displayed by a computer, even if the number is presented in scientific notation, or with any fancy big-number notation you can devise, it's still possible to have a number that can't be expressed that way using every particle in the universe. But this is an idealized math problem, so we can ignore such physical constraints for the sake of argument, can't we? <You are competing against other contestants who have their own envelopes and have to make their own decisions. They *might* have the same number you have, but how likely is that?> I had thought for a moment about stipulating that both contestants receive the same envelopes simultaneously, e.g. my "n" value is your "n" value each round. But I didn't want to complicate matters needlessly. I reasoned that if switching conveys a benefit, it would do so with or without that stipulation. <It would have taken ten coin tosses to generate your number, 59049, and the odds against that are more than 1000 to 1. Unless there are at least a thousand contestants competing against you, or the game lasts more than a thousand rounds, or some combination of these, your number is probably going to win.> I see what you're saying, but I think you've taken the game show metaphor further than is useful. I'm just trying to answer the question: what strategy will net you the most points, on the average? To pursue that, we have to assume the contestants are trying to maximize their total score (even though, as you pointed out, that's not a good strategy to actually win the car). 
If you want, perhaps stipulate that there is some bonus you get if you win by a very large amount, so they have a motivation to turn a big score into a bigger one. <the expected value of switching is zero, because the extremely unlikely risk of a big loss from switching away from the biggest value cancels out all the (more likely but smaller) expected gains from switching when the first envelope contains a smaller number.> The fellow at http://web.mit.edu/~emin/www.old/wr... covered the upper-limit scenario to help clarify the paradox but he reached a different conclusion. However his conditions were different. In his variation, the upper limit is known to the contestants. He writes: <Let us modify the original game so that we always stop after T coin flips. This modification prevents infinities from entering the picture and signficantly affects the situation. Specifically, in the modified game the always switch strategy is no longer optimal. This is because the maximum possible amount of money in an envelope is 3^T. Thus the optimal strategy is to keep an envelope if its value is 3^T and otherwise exchange it.> Let's look at a very small case, the most degenerate case that still has meaning. Suppose T=3, so the only possible values are 1, 3, and 9. I'm sure we are both in agreement that if you get a 1 you should switch it, and if you get a 9 you should keep it. The question then is, what to do when you get a 3? According to the author, you should switch anything except the 9s, so the 3 should be switched. I find it hard to argue against his logic, given those very specific conditions. But here's where the weirdness creeps back in: if T=3, we say "always switch unless you get 3^3." If T=100, we say "always switch unless you get 3^100". So what do we do if T is infinity? Always switch! 
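The capped variant can be compared strategy by strategy. Here is a Python sketch under an assumed truncation rule (I put the leftover probability mass on the top pair so the pair probabilities sum to 1; the linked page's exact rule may differ, and T = 5 is just an illustrative cap): always-switch does exactly as well as always-keep, while "switch unless you hold 3^T" beats both.

```python
from fractions import Fraction

T = 5  # assumed cap: the largest possible envelope value is 3**T

# Pair k holds (3**k, 3**(k+1)) for k = 0..T-1.  Pair k occurs with
# probability (1/2)**(k+1); the leftover mass goes to the top pair so
# the probabilities sum to 1 (one way to impose the cap).
pair_prob = [Fraction(1, 2) ** (k + 1) for k in range(T)]
pair_prob[-1] += 1 - sum(pair_prob)

def expected_value(strategy):
    """EV of your final contents: you receive either envelope of the
    pair with probability 1/2, then strategy(value) says whether to
    switch to the other one."""
    ev = Fraction(0)
    for k, p in enumerate(pair_prob):
        lo, hi = 3**k, 3**(k + 1)
        ev += p * Fraction(1, 2) * (hi if strategy(lo) else lo)
        ev += p * Fraction(1, 2) * (lo if strategy(hi) else hi)
    return ev

keep = expected_value(lambda v: False)
always = expected_value(lambda v: True)
smart = expected_value(lambda v: v < 3**T)  # switch unless at the max
print(keep, always, smart)
```

The tie between keep and always-switch is visible inside `expected_value`: within each pair the two strategies just trade `lo` and `hi`, so only the keep-the-maximum rule gains anything, and only on the top pair.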

Oct-15-14
  al wazir: <Sneaky: If T=100, we say "always switch unless you get 3^100". So what do we do if T is infinity? Always switch!> And this is paradoxical. But as you yourself have pointed out, when infinity is treated as if it is an actual, attainable number, paradoxes always result. I don't think we're disagreeing here. 

Oct-16-14
  Sneaky: No, we're not disagreeing; I took some exception to the way you were expressing your discomfort with the problem. The mere existence of infinities in a problem doesn't render it meaningless, but it does mean that you have to be on your toes, lest you arrive at absurd conclusions. I'm sure we can agree with what David Chalmers said of the more orthodox envelope problem: this is "just another example of a familiar phenomenon, the strange behaviour of infinity." (By the way, I said T=3 in my previous post, but of course I meant T=2. T can be 0, 1, or 2.) 

Oct-16-14
  WannaBe: Anyone who disputes/disagrees with or finds fault with an MIT paper must be a super-brilliant guy. 

Oct-16-14
  al wazir: <WannaBe: Anyone who disputes/disagrees with or finds fault with an MIT paper must be a super-brilliant guy.> Or from Princeton. 

Oct-16-14
  Shams: <al wazir> Did you ever work with anyone at the IAS? 

Oct-16-14
  al wazir: <Shams>: I had to google to find what 'IAS' refers to, which should tell you something. Princeton University and the Institute for Advanced Study are separate organizations, though there are some people who have connections with both. I know Freeman Dyson from when he was working there, but in connection with something else, not from collaborating with him in research. Possibly someone I have worked with at some time has been employed there (academics move around a lot), but I can't think of anyone offhand. My work has been almost entirely in applied physics, which is by definition not the "advanced" kind. 

Oct-17-14
  Sneaky: You know Freeman Dyson? I am humbled. 

Oct-17-14
  al wazir: <Sneaky: You know Freeman Dyson? I am humbled.> It's no big deal. Why, Dyson's next-door neighbors, his grocer and bank teller and dentist and postman know him. Lots of people do. 

