# The Monty Hall Problem

5 replies
Yamahako
Offline
Joined: 12/01/2010

I'm probably slow to this, but I recently read the Wikipedia article on the Monty Hall Problem (http://en.wikipedia.org/wiki/Monty_Hall_Problem) and found it fascinating. The basic concept is this: you are given a choice of 3 concealed options, 1 of which you want and 2 of which you don't. You select (but do not reveal) one of them; a knowledgeable observer then eliminates one of the remaining options (always an undesirable one) and offers to let you switch to the other concealed option. By switching, you are 33% MORE likely to get the option you are looking for (that is, you'll have a 66% chance of getting what you want instead of 33%). Please read the wiki link for a better understanding, as it explains it better than I can.
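
For anyone who'd rather see the numbers than trust the argument, here's a quick Python simulation of the classic setup (just a sketch; the function name and structure are my own, not anything standard):

```python
import random

def play(switch, rng=random):
    """One round: pick one of 3 doors, the host opens a losing door
    from the other two, and the player either stays or switches."""
    prize = rng.randrange(3)
    choice = rng.randrange(3)
    # The host opens a door that is neither the prize nor the player's pick.
    opened = rng.choice([d for d in range(3) if d not in (prize, choice)])
    if switch:
        # Only one door is still closed besides the player's current pick.
        choice = next(d for d in range(3) if d not in (choice, opened))
    return choice == prize

trials = 100_000
stay = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay wins ~{stay:.3f}, switch wins ~{swap:.3f}")
```

Run it a few times: staying hovers around 0.333 and switching around 0.667.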

Part of what intrigues me is how counter-intuitive it is. And I've been wondering how to incorporate this concept into game design. Partly as a skill test (knowledgeable players will be able to gain an advantage if they understand how the system works) and partly as ways to tweak the scenario to give desired results.

For a couple of examples of what I mean - I'll describe some other similar scenarios. You'll need to understand how the above works to get what I'm talking about...

Gambling Game Scenario
You pay \$50. There are 3 cards, 2 with \$0 on them and 1 with \$100. Utilize the Monty Hall scenario: you pick a card, the dealer flips over one of the remaining cards, revealing a \$0, then gives you the option to pay \$15 to switch.

Optimal play (switching) leaves the player ahead by roughly \$1 per game over infinitely many games. Non-optimal play (staying) leaves the player down roughly \$17 per game.
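
The arithmetic behind those (rounded) figures, assuming the standard Monty Hall odds of 1/3 for staying and 2/3 for switching, works out like this:

```python
# Expected value per game. Staying wins $100 with probability 1/3
# after paying the $50 entry; switching wins with probability 2/3
# but costs an extra $15 on top of the entry fee.
p_stay, p_switch = 1/3, 2/3
ev_stay = p_stay * 100 - 50           # about -16.67 per game
ev_switch = p_switch * 100 - 50 - 15  # about +1.67 per game
print(f"stay: {ev_stay:+.2f}, switch: {ev_switch:+.2f}")
```

So the un-rounded figures are a \$16.67 loss per game for staying and a \$1.67 gain per game for switching.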

4-door scenario - MH opens 1
This time there are 4 concealed objects (for purposes of board game design, I see these working best as cards). You choose one, then the opponent (dealer, GM, moderator, etc.) looks at the remaining 3 cards and flips 1 over (a negative one). Choosing to switch only imparts a 12.5% improved chance at getting what you want (from 25% to 37.5%).

4-door scenario - MH opens 2
Same as above, but the opponent flips 2 negative cards over. Chances improve by switching to 75%.
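
Both 4-card variants are easy to sanity-check with a simulation (a sketch; for the "opens 1" case I assume the switching player picks uniformly at random between the two remaining face-down cards):

```python
import random

def play4(switch, reveal, rng=random):
    """4 cards, one winner. The dealer flips `reveal` losing cards from
    the 3 the player didn't pick; if switching, the player moves to a
    uniformly random one of the remaining face-down cards."""
    prize = rng.randrange(4)
    choice = rng.randrange(4)
    losers = [c for c in range(4) if c not in (prize, choice)]
    opened = set(rng.sample(losers, reveal))
    if switch:
        choice = rng.choice([c for c in range(4)
                             if c != choice and c not in opened])
    return choice == prize

n = 200_000
results = {}
for reveal in (1, 2):
    results[reveal] = sum(play4(True, reveal) for _ in range(n)) / n
    print(f"dealer flips {reveal}: switching wins ~{results[reveal]:.3f}")
```

This lands on about 0.375 when the dealer flips 1 card and about 0.750 when the dealer flips 2, matching the figures above.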

2 Colors of Doors
This one was proposed by a friend of mine.
2 Red Backed Cards, 2 Blue Backed Cards - still 1 object that's desirable.
You choose a card, the opponent looks at the cards of the OPPOSITE color, and flips 1 undesirable one over - and offers a chance to switch.
Stay - 25% chance at the good card; switch to the remaining opposite-color card - 50% chance at the good card. And I THINK (please confirm or deny my suspicions on this) that if you switch to the other card of the same color you picked, you'd have a 33% chance at the good card (this seems weird though...)

Anyway - anyone have any interesting ways to incorporate these into game design? I'll post more as I come up with them - or further implementations.

55cards
Offline
Joined: 10/11/2008

I too think the Monty Hall Problem is a fascinating little bit of maths, although because those maths are so fixed, I'm not sure how useful they are to the game designer (the relative wisdom of the player's choices will always be known and always be the same, hence the player will always behave in the same way).

Two things:

1. The numbers of your first example are slightly misleading, because you've rounded them up and down (as I'm sure you're aware). The expected return per game of switching is really \$1.67, and the expected loss of not doing so is \$16.67. Essentially the players either pay \$50 to have a one-third chance of winning \$100 (an obvious loss, and a choice no player would make) or pay \$65 to have a two-thirds chance of winning \$100 (a small but worthwhile return). But there's no game (I would say) since there's only one possible 'strategy' that doesn't result in an obvious and predictable loss for the player.

2. I'm confident that the analysis of your last example, where the doors are coloured in pairs, is incorrect. When the opponent flips over a card of the opposite colour, no new information about the position of the 'good' card is introduced. Since there is only one good card and two cards of each colour, the opponent can always choose to reveal an undesirable card of the opposite colour to the player's initial choice, and so by doing so reveals nothing new. The situation in this example is then exactly the same as your second example, and the player should always switch and may pick a card of either colour when he does so. Provided he switches, his chance of success goes from 25% to 37.5%, and this is regardless of the colour of the card he chooses. In a sense you can split the player's objective into two independent goals: trying to guess the location of the money (initially 1 out of 4), and trying to guess the colour of the money card (1 out of 2: red or blue). When the opponent reveals an undesirable card, the player can switch to improve his chances of guessing the location, but the chances that his initial guess about the colour was correct remain exactly the same, so switching makes no difference since it's 50:50 either way.

Yamahako
Offline
Joined: 12/01/2010

55cards wrote:
I too think the Monty Hall Problem is a fascinating little bit of maths, although because those maths are so fixed, I'm not sure how useful they are to the game designer (the relative wisdom of the player's choices will always be known and always be the same, hence the player will always behave in the same way).

Two things:

1. The numbers of your first example are slightly misleading, because you've rounded them up and down (as I'm sure you're aware). The expected return per game of switching is really \$1.67, and the expected loss of not doing so is \$16.67. Essentially the players either pay \$50 to have a one-third chance of winning \$100 (an obvious loss, and a choice no player would make) or pay \$65 to have a two-thirds chance of winning \$100 (a small but worthwhile return). But there's no game (I would say) since there's only one possible 'strategy' that doesn't result in an obvious and predictable loss for the player.

Yes, I was rounding. And I agree it's not a "game" outside of the way the term would be used for other games of chance (like Candy Land, or Chutes and Ladders). But it could be interesting in the context of another game, especially if the answer is less certain than just 1 card you want and 2 you don't want - like if it were 1 card you want, 1 you were indifferent about, and 1 that would be certainly negative to you. Though I think the 3-card scenario is far less interesting than the 4-card scenario for game design purposes, because while the player is better off switching, they have 2 choices to switch to, making it still a gamble.

55cards wrote:
2. I'm confident that the analysis of your last example, where the doors are coloured in pairs, is incorrect. When the opponent flips over a card of the opposite colour, no new information about the position of the 'good' card is introduced. Since there is only one good card and two cards of each colour, the opponent can always choose to reveal an undesirable card of the opposite colour to the player's initial choice, and so by doing so reveals nothing new. The situation in this example is then exactly the same as your second example, and the player should always switch and may pick a card of either colour when he does so. Provided he switches, his chance of success goes from 25% to 37.5%, and this is regardless of the colour of the card he chooses. In a sense you can split the player's objective into two independent goals: trying to guess the location of the money (initially 1 out of 4), and trying to guess the colour of the money card (1 out of 2: red or blue). When the opponent reveals an undesirable card, the player can switch to improve his chances of guessing the location, but the chances that his initial guess about the colour was correct remain exactly the same, so switching makes no difference since it's 50:50 either way.

Well that's what I'm wondering...

If you look at it this way: there's a 50% chance that the card is a red-backed card, and a 50% chance that it's a blue-backed card. I choose a red card. There's a 25% chance it's the card I chose. My opponent looks at the blue cards, and then flips over a "goat". That means the unflipped blue card has a 50% chance to be the "car", doesn't it?

Now, the unchosen red card is where (to me) it gets confusing. The card I picked doesn't have improved chances of being the "car", but the card I didn't pick previously now has a 1 in 3 chance of being the car...

I know it's counter-intuitive... especially since the percentages all add up. But the other red card just can't really be a 25% chance, can it? I'm also wondering if the new chances would change to 50% for the blue card, 37.5% for the unchosen red card, and 12.5% for the chosen red card. But it also doesn't make sense that the originally chosen card has a lower than 25% chance.

If I'm understanding your response correctly, you feel the 2 red cards have 25% chance, and the unchosen blue card has a 50% chance? Is that correct?

DogBoy
Offline
Joined: 12/15/2009

Yamahako wrote:

55cards wrote:
2. I'm confident that the analysis of your last example, where the doors are coloured in pairs, is incorrect. When the opponent flips over a card of the opposite colour, no new information about the position of the 'good' card is introduced. Since there is only one good card and two cards of each colour, the opponent can always choose to reveal an undesirable card of the opposite colour to the player's initial choice, and so by doing so reveals nothing new. The situation in this example is then exactly the same as your second example, and the player should always switch and may pick a card of either colour when he does so. Provided he switches, his chance of success goes from 25% to 37.5%, and this is regardless of the colour of the card he chooses. In a sense you can split the player's objective into two independent goals: trying to guess the location of the money (initially 1 out of 4), and trying to guess the colour of the money card (1 out of 2: red or blue). When the opponent reveals an undesirable card, the player can switch to improve his chances of guessing the location, but the chances that his initial guess about the colour was correct remain exactly the same, so switching makes no difference since it's 50:50 either way.

Well that's what I'm wondering...

If you look at it this way: there's a 50% chance that the card is a red-backed card, and a 50% chance that it's a blue-backed card. I choose a red card. There's a 25% chance it's the card I chose. My opponent looks at the blue cards, and then flips over a "goat". That means the unflipped blue card has a 50% chance to be the "car", doesn't it?

Now, the unchosen red card is where (to me) it gets confusing. The card I picked doesn't have improved chances of being the "car", but the card I didn't pick previously now has a 1 in 3 chance of being the car...

I know it's counter-intuitive... especially since the percentages all add up. But the other red card just can't really be a 25% chance, can it? I'm also wondering if the new chances would change to 50% for the blue card, 37.5% for the unchosen red card, and 12.5% for the chosen red card. But it also doesn't make sense that the originally chosen card has a lower than 25% chance.

If I'm understanding your response correctly, you feel the 2 red cards have 25% chance, and the unchosen blue card has a 50% chance? Is that correct?

I'm not sure if that's what the previous poster intended, but those numbers are correct.
The 2 red cards have 25% chance, and the unchosen blue card has 50% chance.

50% of the time, the winning card will be red. In these cases, the blue card which is flipped can tell you nothing about whether you picked the winning red card or the losing red card. Both red cards will be at 50% and the unflipped blue card will be 0%.
50% of the time, the winning card will be blue. In these cases, the unflipped blue card will be at 100% and the red cards 0% each.

On average,

P(Red chosen) = (50% x 50%) + (50% x 0%) = 25%
P(Red unchosen) = (50% x 50%) + (50% x 0%) = 25%
P(Blue unflipped) = (50% x 0%) + (50% x 100%) = 50%

In other words, you double your return by switching to the remaining card of the same colour as the revealed card.

If you doubt this, give it a try and find out!
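
Taking the "give it a try" literally, here's a Python simulation of the 2-colour setup (a sketch; fixing the player's pick as the first red card is justified by symmetry, and the card numbering is my own):

```python
import random

def play_colors(action, rng=random):
    """2 red cards (0, 1) and 2 blue cards (2, 3), one winner at random.
    The player picks red card 0; the dealer flips a losing blue card;
    the player then stays, switches to the other red card, or switches
    to the unflipped blue card."""
    prize = rng.randrange(4)
    choice = 0  # by symmetry, fix the player's pick as the first red card
    blue_losers = [c for c in (2, 3) if c != prize]
    opened = rng.choice(blue_losers)
    unflipped_blue = 5 - opened  # the other of cards 2 and 3
    final = {"stay": 0, "other_red": 1, "blue": unflipped_blue}[action]
    return final == prize

n = 200_000
wins = {a: sum(play_colors(a) for _ in range(n)) / n
        for a in ("stay", "other_red", "blue")}
print(wins)
```

The win rates come out around 0.25 for staying, 0.25 for switching to the other red card, and 0.50 for switching to the unflipped blue card, exactly as the case analysis above predicts.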

55cards
Offline
Joined: 10/11/2008
Now I get it...

DogBoy wrote:

I'm not sure if that's what the previous poster intended, but those numbers are correct.
The 2 red cards have 25% chance, and the unchosen blue card has 50% chance.

It wasn't what I intended, but the numbers ARE indeed correct! (Took me a while to realise it though.) Just like in the original 3-card situation, the player is swapping his initial 1 card for (effectively) 2 cards, hence doubling his chances. In the 3-card situation this means going from 33% to 66%; in the 4-card/2-colour situation this means going from 25% to 50% (the extra card doesn't change the game, but it does reduce the absolute, though not the relative, percentages).

Think of it this way (this is how I rationalised it!): If the player picks a red card, the opponent must choose the blue cards. Now, whether or not the opponent flips a card, the choice for the player is the same: keep your red card, or switch and get *both* the blue cards. That's the effective outcome for the player: swapping one card for two (the fact that the opponent may helpfully reveal a blank blue card doesn't change the equation). As I say, that is of course exactly the choice given to the player in the classic 3-card scenario. The 4th card changes the percentages, but not the rationale of the player's choice.

Thanks for proposing this twist on the problem — it's an interesting thought experiment! — and report back if you find a way to incorporate it into a game prototype.

Yamahako
Offline
Joined: 12/01/2010

55cards wrote:
DogBoy wrote:

I'm not sure if that's what the previous poster intended, but those numbers are correct.
The 2 red cards have 25% chance, and the unchosen blue card has 50% chance.

It wasn't what I intended, but the numbers ARE indeed correct! (Took me a while to realise it though.) Just like in the original 3-card situation, the player is swapping his initial 1 card for (effectively) 2 cards, hence doubling his chances. In the 3-card situation this means going from 33% to 66%; in the 4-card/2-colour situation this means going from 25% to 50% (the extra card doesn't change the game, but it does reduce the absolute, though not the relative, percentages).

Think of it this way (this is how I rationalised it!): If the player picks a red card, the opponent must choose the blue cards. Now, whether or not the opponent flips a card, the choice for the player is the same: keep your red card, or switch and get *both* the blue cards. That's the effective outcome for the player: swapping one card for two (the fact that the opponent may helpfully reveal a blank blue card doesn't change the equation). As I say, that is of course exactly the choice given to the player in the classic 3-card scenario. The 4th card changes the percentages, but not the rationale of the player's choice.

Thanks for proposing this twist on the problem — it's an interesting thought experiment! — and report back if you find a way to incorporate it into a game prototype.

Here's one way that I'm working on incorporating. Player A wants a particular card. They select the card. Player B chooses 2 other cards that presumably Player A doesn't want.

Player B randomizes the cards and then places them face down. Player A chooses a card, and then Player B flips one of the other cards face up. Player B may then set a resource cost for Player A to change his or her selection (possibly within a range).

Depending on how badly Player A needs the card he's looking for, he may or may not want to pay the resource cost to switch - especially since Player A knows that Player B knows where the desired card is.

Player B gets some interesting choices out of this interaction, because they may not want to set the price so high that Player A won't pay it (if they want to collect resources); however, they may ALSO want to bait Player A into making a specific choice (to either stay or switch), which will influence how they set the price.

Potentially, Player A could also switch to the flipped-over card. Presumably this would be free (or would even gain him a resource payout) to compensate for that card being less useful to him (or even detrimental).

-OR- Player B might have to split a total of Y resources across the two alternatives: Player A can keep his choice for free, pay X to switch to the other unflipped card, or GAIN Y-X resources by taking the flipped card. For example, Player B has to assign 10 resources across the 2 cards: she sets a price of 3 for Player A to buy the unflipped card, and a payout of 7 that Player A would gain by taking the flipped card.

Obviously this is all abstract, and would require a good game around it for this to work. But it at least creates some interesting interactions.