Saturday 14 December 2013

The Game Theory Behind Golden Balls


Golden Balls is a simple, fascinating and sometimes hilarious UK game show that must have been created by an economist. It is a show where, in theory, prizes should never be won. Yet, to the dismay of the economist who orchestrated it, something very interesting happens in reality.

The game involves players accumulating prize money in two rounds of pre-play. In the final round players have the opportunity to share or steal the accumulated prize money. This round is a one-stage game where two players can communicate briefly before simultaneously choosing whether to “Split” or “Steal”. If both players Split, the prize money is distributed evenly. If one Splits and the other Steals, the latter wins all the money. Finally, both Stealing means no prize money is won at all. The game can be written in normal form below, where M represents the accumulated prize money and each cell lists the first (row) player's payoff followed by the second (column) player's:

P1\P2  |  Split        |  Steal
-------+---------------+--------
Split  |  M/2 , M/2    |  0 , M
Steal  |  M , 0        |  0 , 0

This game is a “weak prisoner’s dilemma.” For the economists reading, it has three pure-strategy Nash equilibria in (Split, Steal), (Steal, Split) and (Steal, Steal), as well as an infinite number of mixed-strategy equilibria where one player Steals with certainty and the other randomises. For those who have not studied game theory, the analysis of the game is simple. If your opponent chooses to Split, you should choose to Steal, since winning all the money is better than winning half of it. If your opponent chooses to Steal, you are indifferent between Splitting and Stealing. Given that Nash equilibrium relies on correctly anticipating what the other player will do, it seems most rational to always play Steal. In other words, if there is a non-zero probability of your opponent ever playing Split, then you should always play Steal. This is because Steal is a weakly dominant strategy (you are never worse off by playing it). Only if you correctly believe your opponent is choosing to Steal with certainty can a (Split, Steal) equilibrium be possible.
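For concreteness, here is a minimal sketch in Python (with M normalised to 1) verifying the two claims above: Steal weakly dominates Split, and the pure-strategy Nash equilibria are exactly (Split, Steal), (Steal, Split) and (Steal, Steal).

from itertools import product

M = 1.0
ACTIONS = ("Split", "Steal")

# payoffs[(a1, a2)] = (player 1's payoff, player 2's payoff)
payoffs = {
    ("Split", "Split"): (M / 2, M / 2),
    ("Split", "Steal"): (0.0, M),
    ("Steal", "Split"): (M, 0.0),
    ("Steal", "Steal"): (0.0, 0.0),
}

def weakly_dominates(a, b):
    # True if player 1 is never worse off playing a instead of b
    # (by symmetry the same holds for player 2).
    return all(payoffs[(a, opp)][0] >= payoffs[(b, opp)][0] for opp in ACTIONS)

def pure_nash():
    # Profiles where neither player can gain by a unilateral deviation.
    return [(a1, a2) for a1, a2 in product(ACTIONS, ACTIONS)
            if all(payoffs[(d, a2)][0] <= payoffs[(a1, a2)][0] for d in ACTIONS)
            and all(payoffs[(a1, d)][1] <= payoffs[(a1, a2)][1] for d in ACTIONS)]

print(weakly_dominates("Steal", "Split"))  # True
print(pure_nash())  # [('Split', 'Steal'), ('Steal', 'Split'), ('Steal', 'Steal')]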

It is called a prisoner’s dilemma because the most likely equilibrium of mutual Stealing is Pareto dominated by mutual Splitting. In other words, both players can be made strictly better off if they deviate together to Split. However, in a stage game without repetition, this outcome should never occur: if you believe your opponent will Split, you should always Steal. A working paper at the University of Zurich* found some very interesting results. Studying outcomes of the game, it finds a 33% rate of mutual cooperation, where both players Split. The paper then goes on to investigate what increases the likelihood of both players Splitting, such as handshakes, racial bias and so on. However, I am more interested in why Splitting could ever possibly result in a one-stage game such as this. I came up with two potential explanations: one which I dislike but feel the need to mention, and another which, despite its pitfalls, does seem to explain the data.

1) Miscoordination

It is important to realise, given the payoff structure, that both players Splitting is not a matter of mutual cooperation but rather a case of miscoordination. The reason is that, without repetition of the game, there is no incentive to coordinate. Once one player Splits, the other should Steal. Hence the cheap talk before choosing your action is meaningless. If you have convinced your opponent to irrationally Split, then you should rationally choose to Steal. As a side note, I find it ironic that players try to convince each other to Split with the lure of Splitting as well. Theoretically, the best strategy to ensure you win the entire prize is to convince her you will Steal. Only in this situation will she be indifferent between her two actions. When she believes you will Split, she will be better off Stealing.

The reason both players Splitting can result from miscoordination is that each may play their part of a different Nash equilibrium. If one believes the equilibrium will be (Split, Steal) and the other (Steal, Split), then both Splitting may result. In other words, if both players believe their opponent is Stealing with certainty, an outcome where they both Split is possible. The problem with this explanation is that Nash equilibrium relies on correctly knowing your opponent's strategy. As I mentioned above, if there is any non-zero probability of a player choosing to Split, then the opponent should always choose Steal. Hence, you would only really choose to Split if you are 100% sure your opponent is Stealing, so that you are indifferent.
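As a quick sanity check of this story, here is a small sketch (again with M normalised to 1) of player 1's problem under the belief that the opponent Steals with certainty: both actions pay zero, so Splitting is a permissible best response, yet the resulting (Split, Split) profile is not itself stable.

M = 1.0
# Player 1's monetary payoff for each (own action, opponent action) pair.
payoff_p1 = {("Split", "Split"): M / 2, ("Split", "Steal"): 0.0,
             ("Steal", "Split"): M, ("Steal", "Steal"): 0.0}

# Under the belief Pr(opponent Steals) = 1, both actions pay the same...
for action in ("Split", "Steal"):
    print(action, payoff_p1[(action, "Steal")])  # both print 0.0: indifferent

# ...but if both players act on that belief and Split, each has left
# M - M/2 on the table, so (Split, Split) is not an equilibrium.
print(payoff_p1[("Steal", "Split")] - payoff_p1[("Split", "Split")])  # 0.5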

Furthermore, your payoff from mutual Stealing may actually be higher than when you Split and your opponent Steals. This is because in the former you have “failed to win” but in the latter you have “lost.” Or, put more simply: if you know your opponent is going to Steal, are you really indifferent between Stealing and Splitting? Perhaps I am filled with too much animosity, but I would rather my opponent got nothing than stole everything. In this sense, Stealing is a strictly dominant strategy and miscoordination cannot be a good explanation.

2) Payoff Re-model

The results of the Zurich study are interesting. Stake size and communication do have significant effects on the outcome. There is a negative correlation between cooperation and stake size, while actions such as mutually promising to Split or shaking hands increase cooperation. This suggests that players' payoffs do not wholly depend on monetary gain. I believe we can crudely remodel the game by adding a term k(x) to the Split payoffs, where k is increasing in x and x is a set of characteristics such as reputational concerns. The key point is that x is endogenously determined, in the sense that the cheap talk and your affection towards your opponent can affect it. For example, if you do not want to be seen stealing from an old lady whom you really admire on national television, then x may be very high. The normal form becomes:

P1\P2  |  Split                      |  Steal
-------+-----------------------------+---------------
Split  |  M/2 + k(x) , M/2 + k(x)    |  0 + k(x) , M
Steal  |  M , 0 + k(x)               |  0 , 0

Now Splitting is a strictly dominant strategy for a particular player if their value of k(x) exceeds M/2, meaning that regardless of what the other player does, that player should Split. If this is the case for both players then (Split, Split) is a dominant-strategy equilibrium. For example, a handshake and a mutual promise will greatly increase the reputational concern or guilt attached to not Splitting and hence lead to a large k(x) for both players. Note that we could equally have modelled the game by subtracting k(x) from the Steal payoffs.
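The dominance condition can be checked directly; a sketch with M normalised to 1 and purely illustrative values of k(x):

M = 1.0

def split_strictly_dominates(k):
    # Split beats Steal against both opponent actions for a player
    # whose Split bonus is k.
    vs_split = (M / 2 + k) > M   # opponent Splits: requires k > M/2
    vs_steal = (0.0 + k) > 0.0   # opponent Steals: any k > 0 suffices
    return vs_split and vs_steal

print(split_strictly_dominates(0.6))  # True:  k > M/2
print(split_strictly_dominates(0.3))  # False: k < M/2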

The larger the monetary prize for a given level of x, the less likely it is that k(x) exceeds M/2, which explains the negative correlation between prize money and cooperation. In other words, at some prize level the potential gain from Stealing all the money exceeds the cost of a damaged reputation or guilt.

Where one player's k(x) exceeds M/2 but the other player's does not, the only pure Nash equilibrium is for the first to Split and the second to Steal. Where neither player's k(x) exceeds M/2, both (Split, Steal) and (Steal, Split) are Nash equilibria. These cases could explain the unilateral cooperation rate of 55% found in the data.
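These cases can be confirmed by enumerating the pure equilibria of the remodelled game; the sketch below uses hypothetical k(x) values of 0.6 and 0.3, either side of the M/2 = 0.5 threshold.

from itertools import product

M, ACTIONS = 1.0, ("Split", "Steal")

def u(a1, a2, k1, k2):
    # Monetary payoffs plus each player's Split bonus k(x).
    money = {("Split", "Split"): (M / 2, M / 2), ("Split", "Steal"): (0.0, M),
             ("Steal", "Split"): (M, 0.0), ("Steal", "Steal"): (0.0, 0.0)}
    u1, u2 = money[(a1, a2)]
    return u1 + (k1 if a1 == "Split" else 0), u2 + (k2 if a2 == "Split" else 0)

def pure_nash(k1, k2):
    eqs = []
    for a1, a2 in product(ACTIONS, ACTIONS):
        u1, u2 = u(a1, a2, k1, k2)
        if (all(u(d, a2, k1, k2)[0] <= u1 for d in ACTIONS) and
                all(u(a1, d, k1, k2)[1] <= u2 for d in ACTIONS)):
            eqs.append((a1, a2))
    return eqs

print(pure_nash(0.6, 0.6))  # [('Split', 'Split')]
print(pure_nash(0.6, 0.3))  # [('Split', 'Steal')]
print(pure_nash(0.3, 0.3))  # [('Split', 'Steal'), ('Steal', 'Split')]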

The problem with this revised model is that (Steal, Steal) is no longer a Nash equilibrium whenever k(x) is non-zero. In light of this, it might be more appropriate to remove k(x) when the other player Steals, meaning that players only avoid reputational damage by Splitting when the opponent is Splitting as well.
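A small modification of the previous sketch illustrates this fix: paying the bonus only on mutual Splitting (again with hypothetical k values of 0.3 for both players) restores (Steal, Steal) as an equilibrium.

from itertools import product

M, ACTIONS = 1.0, ("Split", "Steal")

def u(a1, a2, k1, k2):
    money = {("Split", "Split"): (M / 2, M / 2), ("Split", "Steal"): (0.0, M),
             ("Steal", "Split"): (M, 0.0), ("Steal", "Steal"): (0.0, 0.0)}
    u1, u2 = money[(a1, a2)]
    mutual = (a1 == "Split" and a2 == "Split")  # bonus only on mutual Splitting
    return u1 + (k1 if mutual else 0), u2 + (k2 if mutual else 0)

def pure_nash(k1, k2):
    eqs = []
    for a1, a2 in product(ACTIONS, ACTIONS):
        u1, u2 = u(a1, a2, k1, k2)
        if (all(u(d, a2, k1, k2)[0] <= u1 for d in ACTIONS) and
                all(u(a1, d, k1, k2)[1] <= u2 for d in ACTIONS)):
            eqs.append((a1, a2))
    return eqs

print(pure_nash(0.3, 0.3))
# [('Split', 'Steal'), ('Steal', 'Split'), ('Steal', 'Steal')]

With k(x) below M/2 this conditional-bonus variant has the same pure equilibria as the original game, while Splitting against a Stealer no longer carries a consolation payoff.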

Another problem with this argument is that it simply restates the payoffs rather than providing a rational explanation of cooperation in the original game. However, if players are only concerned with their monetary payoff, then how can you explain a situation where both Split?

* http://www.econ.uzh.ch/faculty/graetz/publications/wp1006.pdf
