The Dilemma of Giving Christmas Gifts
This post gives a game theoretic explanation as to why we exchange gifts. On Twitter, @alexip tweeted: “‘Let’s agree not to give each other presents for Christmas’ is just another case of the prisoner’s dilemma #gametheory”. This post builds on that and investigates the premise fully in an evolutionary context, exploring different values of how good it feels to give and to receive a gift :)
To illustrate this consider the situation where Alex and Camille are approaching Christmas:
Alex: How about we don’t buy Christmas presents for each other this year?
Camille: Sounds great.
Let us describe how this situation corresponds to a prisoner’s dilemma.
- If Alex and Camille cooperate and indeed keep their promise of not getting gifts then let us assume they both get a utility of \(R\) (reward).
- If Alex cooperates but Camille decides to defect and nonetheless give a gift then Alex will feel a bit bad and Camille will feel good, so Alex gets a utility of \(S\) (sucker) and Camille a utility of \(T\) (temptation).
- Vice versa if Camille cooperates but Alex decides to give a gift.
- If both Alex and Camille go against their promise then they both get a utility of \(P\) (punishment).
This looks something like:
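With the rows giving Alex’s choice (cooperate: keep the promise; defect: give a gift), the columns giving Camille’s choice, and each entry listing the pair (Alex’s utility, Camille’s utility), the bullets above give the payoff matrix:

\[
\begin{pmatrix}
(R, R) & (S, T)\\
(T, S) & (P, P)
\end{pmatrix}
\]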
If we assume that we feel better when we give gifts and will be keen to ‘cheat’ a promise of not giving then that corresponds to the following inequality of utilities:
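\[
T > R > P > S
\]

This is the classic ordering of utilities that defines a Prisoner’s Dilemma.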
In this case we see that if Camille chooses to cooperate then Alex’s best response is to play defect (as \(T>R\)):
If Camille is indeed going to not give a gift, then Alex should give a gift.
Also if Camille chooses to defect then Alex’s best response is to defect once again (as \(P>S\)):
If Camille is going to ‘break the promise’ then Alex should give a gift.
So no matter what happens: Alex should defect.
In game theory this is what is called a dominant strategy; indeed, this situation is referred to as a Prisoner’s Dilemma, and is what Alex was referring to in his original tweet.
## How does reputation affect gift giving?
So far all we are really modelling is a SINGLE exchange of gifts. If we were to exchange gifts every year we would perhaps learn to trust each other, so that when Camille says they are not going to give a gift Alex has reason to believe that they will indeed not do so.
This is called an iterated Prisoner’s dilemma and has been the subject of a great amount of academic work.
Let us consider two types of behaviour that Camille and Alex could choose to exhibit; they could be:
- Alternator: give gifts one year and not give gifts the next.
- TitForTat: do whatever the other does the previous year.
Let us assume that Alex and Camille will be faced with this situation for 10 years. I’m going to use the Python Axelrod library to illustrate things:
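Something along these lines: a minimal sketch using the library’s `Match` class, assuming a recent version of the library (where actions are the `Action` enum) and using emoji to display the play described just below:

```python
import axelrod as axl

# Alex plays Alternator, Camille plays TitForTat, for 10 years (turns)
players = (axl.Alternator(), axl.TitForTat())
match = axl.Match(players, turns=10)
interactions = match.play()

# Display cooperation (keeping the promise) as 😀 and defection (giving a gift) as 🎁
symbols = {axl.Action.C: "😀", axl.Action.D: "🎁"}
for alex_action, camille_action in interactions:
    print(symbols[alex_action], symbols[camille_action])
```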
We see that Alex and Camille never actually exchange gifts in the same year (the 😀 means that the particular player cooperates and keeps the promise, the 🎁 that they defect and give a gift).
Most of the ongoing Iterated Prisoner’s Dilemma research is directly due to a computer tournament run by Robert Axelrod in the 1980s. In that work Axelrod invited a variety of computer strategies to be submitted and they then played against each other. You can read more about that here: axelrod.readthedocs.org/en/latest/reference/description.html but the important thing is that there are a bunch of ‘behaviours’ that have been well studied and that we will look at here:
- Cooperator: never give gifts
- Defector: always give gifts
- Alternator: give gifts one year and not give gifts the next.
- TitForTat: do whatever the other does the previous year.
- TwoTitsForTat: will start by not giving a gift but if the other player gives a gift will give a gift for the next two years.
- Grudger: start by not giving gifts but if at any time someone else goes against the promise: give a gift no matter what.
What we will now do is see how much utility (how people feel about their gift giving behaviour) each person ends up with if we have a situation where 6 people exchange gifts for 50 years and each person acts according to one of the above behaviours.
For our utility we will use the following values of \(R, P, S, T\):
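These are the Axelrod library’s default game values, which also match the lower bounds of the parameter sweep later in the post:

- \(R = 3\)
- \(P = 1\)
- \(S = 0\)
- \(T = 5\)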
Here is how we can do this in python:
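A sketch of how this could be done with the library’s `Tournament` class, passing the \(R, P, S, T\) values above explicitly via the `game` argument:

```python
import axelrod as axl

# The 6 people, each using one of the behaviours described above
players = [
    axl.Cooperator(),
    axl.Defector(),
    axl.Alternator(),
    axl.TitForTat(),
    axl.TwoTitsForTat(),
    axl.Grudger(),
]

# The R, P, S, T utilities (the library's defaults)
game = axl.Game(r=3, s=0, t=5, p=1)

# Everyone exchanges (or does not exchange) gifts with everyone else for 50 years
tournament = axl.Tournament(players, game=game, turns=50, repetitions=1)
results = tournament.play()
print(results.ranked_names)
```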
We see that the people that do the best are the last two: TwoTitsForTat and Grudger. These are people who are quick to react to those who won’t keep their promise but who still give hope to those who will!
## At a population level: evolution of gift giving
We can consider this in an evolutionary context where we see how the behaviour is allowed to evolve amongst a whole population of people. This particular type of game theoretic analysis is concerned not with micro interactions but with the long term macro stability of the system.
Here is how we can see this using Python:
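One way to do this with the library is its `Ecosystem` class, which takes the results of a tournament and repeatedly reproduces the population in proportion to how well each behaviour scores. A sketch, reusing the `results` object from above and assuming 1000 generations:

```python
import axelrod as axl

# Seed an ecological simulation with the tournament results from above
eco = axl.Ecosystem(results)
eco.reproduce(1000)  # evolve the population over 1000 generations

# Plot how the proportion of each behaviour in the population changes over time
plot = axl.Plot(results)
p = plot.stackplot(eco)
p.show()
```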
What we see is that over time, the population evolves to only Cooperator, TitForTat, Grudger and TwoTitsForTat, but of course in a population with only those strategies everyone is keeping their promise, cooperating and not giving gifts.
Let us see how this changes for different values of \(R, P, S, T\).
To check if not giving presents is evolutionarily stable we just need to see what the final population numbers are for the Alternator and Defector. Here is a Python function to do this:
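A hypothetical helper along these lines (the function name, arguments and tolerance are my own; the population sizes reported by `Ecosystem` are proportions, so we check that the gift givers have essentially died out):

```python
import axelrod as axl

def gifts_are_stable(r=3, p=1, s=0, t=5, turns=50, generations=1000):
    """Return True if, for the given utilities, the population evolves
    away from the gift giving behaviours (Defector and Alternator)."""
    players = [
        axl.Cooperator(),
        axl.Defector(),
        axl.Alternator(),
        axl.TitForTat(),
        axl.TwoTitsForTat(),
        axl.Grudger(),
    ]
    game = axl.Game(r=r, s=s, t=t, p=p)
    tournament = axl.Tournament(players, game=game, turns=turns, repetitions=1)
    results = tournament.play(progress_bar=False)

    eco = axl.Ecosystem(results)
    eco.reproduce(generations)

    # Final proportions of Defector (index 1) and Alternator (index 2)
    final = eco.population_sizes[-1]
    return final[1] + final[2] < 1e-3
```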
We see that for the default values of \(R, P, S, T\) we have:
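Using the hypothetical helper above with the default utilities:

```python
# With the default values the gift giving behaviours die out: the promise is kept
gifts_are_stable()
```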
As already seen, for these values we end up with everyone keeping the promise. Let us increase the value of \(T\) by a factor of 100:
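Again using the hypothetical helper:

```python
# With such a large temptation to give, gift giving takes over the population
gifts_are_stable(t=500)
```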
Here we see that if the utility of giving a gift when the receiver is not giving one in return is very large, the overall population will always give a gift:
## Seeing the effect of how good giving gifts makes us feel
The final piece of analysis I will carry out is a parameter sweep over the following ranges (a sketch of the sweep is given after the list):
- \(5\leq T \leq 100\)
- \(3\leq R < T\)
- \(1\leq P < R\)
- \(0\leq S < P\)
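A rough sketch of such a sweep, assuming integer steps and reusing the hypothetical `gifts_are_stable` helper from above (in practice this is a lot of simulations, so the real sweep would want coarser steps or parallelisation; the output filename is also hypothetical):

```python
import csv

# Sweep the utility space and record whether the promise is stable for each combination
with open("sweep.csv", "w", newline="") as f:  # hypothetical output filename
    writer = csv.writer(f)
    writer.writerow(["R", "P", "S", "T", "promise_kept"])
    for t in range(5, 101):
        for r in range(3, t):
            for p in range(1, r):
                for s in range(0, p):
                    writer.writerow([r, p, s, t, gifts_are_stable(r=r, p=p, s=s, t=t)])
```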
All of the data from this sweep is in this csv file. Here is the distribution of parameters for which everyone gives a gift (reneging on the promise):
Here is the distribution of parameters for which everyone keeps their promise and does not give gifts:
We see that people keep their promise unless the \(T\) utility (the utility of being tempted to break the promise) is very high compared to all the other utilities.
Carrying out a simple logistic regression, we see the following coefficients for each of the variables (a sketch of how this could be done is given after the list):
- \(P\): 3.121547
- \(R\): -2.942909
- \(S\): 0.007738
- \(T\): -0.107386
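For reference, a sketch of how such a regression could be fitted from the csv; the filename and column names here are hypothetical (matching the sweep sketch above), and the response variable is assumed to be whether everyone keeps the promise:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical csv layout: one column per utility and a boolean outcome column
df = pd.read_csv("sweep.csv")
X = df[["P", "R", "S", "T"]]
y = df["promise_kept"]

model = LogisticRegression()
model.fit(X, y)
print(dict(zip(X.columns, model.coef_[0])))
```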
The parameters that have a positive effect on keeping the promise are \(R\) and \(S\): the reward when the promise is kept and the utility of not giving a gift but receiving one.
## TLDR
Agreeing not to give gifts at Christmas can be an evolutionarily stable strategy, but only in the specific case where the utility of ‘giving’ is less than the utility of ‘not giving’. Given that in practice this promise is almost always broken (that’s my personal experience anyway), this would suggest that people enjoy giving gifts a lot more than receiving them.
Merry Christmas 🎄🎁⛄️.