A common misconception about evolution is that “The fittest organisms in a population are those that are strongest, healthiest, fastest, and/or largest.” However, as that link indicates, survival of the fittest applies at the genetic level: evolution favours the genes that are best able to persist into the next generation in a given environment. In this post, I’m going to take a look at a high-performing strategy from the Iterated Prisoner’s Dilemma that was obtained through an evolutionary algorithm. I want to see how well it does in other environments.

## Background

This is all based on the Python Axelrod package, which makes iterated prisoner’s dilemma research straightforward. Really, this is just taking a closer look at Martin Jones’s blog post, which described the evolutionary analysis performed to obtain a strategy (EvolvedLookerUp) that is currently winning the overall tournament for the Axelrod library (with 108 strategies):

The strategy in question is designed to do exactly that and, as you can see, does it really well (with a substantial gap between its median score and that of the runner-up: DoubleCrosser).

There are some things lacking in the analysis I’m going to present (which strategies I’m looking at, the number of tournaments, etc.), but hopefully the numerical analysis is still interesting. In essence I’m taking a look at the following question:

If a strategy is good in a big environment, how good is it in any given environment?

From an evolutionary point of view this is kind of akin to seeing how good a predator a shark would be in any random (potentially land based) environment…

## Generating the data

Thanks to the Axelrod library, it’s pretty straightforward to quickly experiment with a strategy (or group of strategies) in a random tournament:
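The helper function from the original post isn’t reproduced here. As a rough, library-free sketch of what such a function does, the following plays a round robin among zero-argument strategy callables using the standard tournament payoffs (R, S, T, P) = (3, 0, 5, 1). All names here (`match_scores`, `run_tournament`) are illustrative, not the Axelrod API:

```python
# Library-free sketch: strategies are zero-argument callables returning 'C' or 'D'
# (they ignore history, unlike real Axelrod strategies).
def match_scores(s1, s2, turns=10):
    """Play two strategies against each other and return their total scores."""
    # Standard IPD payoffs: (R, S, T, P) = (3, 0, 5, 1).
    payoffs = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
               ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    a = b = 0
    for _ in range(turns):
        pa, pb = payoffs[(s1(), s2())]
        a += pa
        b += pb
    return a, b

def run_tournament(players, turns=10):
    """Round robin; return rankings (0 = best) and win counts per player."""
    n = len(players)
    scores = [0] * n
    wins = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            si, sj = match_scores(players[i], players[j], turns)
            scores[i] += si
            scores[j] += sj
            if si > sj:
                wins[i] += 1
            elif sj > si:
                wins[j] += 1
    order = sorted(range(n), key=lambda k: -scores[k])
    ranks = [order.index(k) for k in range(n)]
    return ranks, wins

cooperator = lambda: 'C'
defector = lambda: 'D'
```

The real helper builds an `axelrod` tournament from the test strategies plus a random sample of others; the round-robin scoring logic above is the same idea in miniature.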

This runs a tournament and returns the rankings and wins for the input strategies. For example, let’s see how Cooperator and Defector do in a random tournament with 2 other strategies:

We can then use the above function to see how Cooperator and Defector do:

We see that Cooperator ranks last (getting no wins) and Defector just before last (getting 2 wins). This is confirmed by the actual tournament results:
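For intuition, a quick check of the head-to-head arithmetic with the standard payoffs (R = 3, S = 0, T = 5, P = 1) shows why Defector wins any direct match against Cooperator, yet mutual cooperation still outscores mutual defection over the same number of turns (the turn count here is arbitrary):

```python
# Standard IPD payoffs: both cooperate -> R each; both defect -> P each;
# a lone defector scores T while the exploited cooperator scores S.
R, S, T, P = 3, 0, 5, 1
turns = 200

# Head to head, Defector exploits Cooperator on every turn:
cooperator_score = S * turns   # 0
defector_score = T * turns     # 1000

# But two Cooperators outscore two Defectors over the same match:
mutual_cooperation = R * turns  # 600 each
mutual_defection = P * turns    # 200 each
```

This is why Defector picks up match wins while still ranking poorly on total score in cooperative company.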

The idea is to reproduce the above for a variety of tournament sizes, repeating random samples for each size and looking at the wins and ranks for the strategies we’re interested in.

This script generates our data:

The above creates tournaments with up to 25 other strategies, with 20 random tournaments for each size, producing six data files:
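The script itself isn’t shown here. As a minimal sketch of its shape, the loop below iterates over pool sizes and repetitions and writes one CSV row per sampled tournament; the `sampled_tournament_rank` stub just draws random numbers so the scaffolding runs, whereas the real script plays an actual Axelrod tournament for each sample:

```python
import csv
import random

def sampled_tournament_rank(test_player, pool_size, rng):
    """Stand-in for one random tournament: draws a random rank and win count.
    The real script plays `test_player` against `pool_size` sampled strategies."""
    rank = rng.randrange(pool_size + 1)
    wins = rng.randrange(pool_size + 1)
    return rank, wins

rng = random.Random(0)
rows = []
for pool_size in range(1, 26):       # tournaments with up to 25 other strategies
    for repetition in range(20):     # 20 random samples per size
        rank, wins = sampled_tournament_rank("EvolvedLookerUp", pool_size, rng)
        rows.append((pool_size, repetition, rank, wins))

# One file per test strategy; the real script produces six such files.
with open("evolvedlookerup.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```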

## Analysing the data

I then used this Jupyter notebook to analyse the data.

Here is what we see for the EvolvedLookerUp strategy:

The line is fitted to the median rank and number of wins (recall that for each number of strategies, 20 different sampled tournaments are considered). We see that, as expected, as the number of strategies increases both the median rank and the number of wins increase, but what is of interest is the rate at which that increase happens.
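The fits are straight lines $$y = mx + c$$ through the median values. A minimal closed-form least-squares sketch (the original analysis presumably used a standard fitting routine) looks like:

```python
def least_squares_fit(xs, ys):
    """Closed-form least squares for y = m*x + c."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    c = mean_y - m * mean_x
    return m, c
```

Feeding in the tournament sizes as `xs` and the median ranks (or wins) as `ys` gives the slopes and intercepts reported below.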

Below are the fitted lines for all the considered strategies:

Here are the fits (and corresponding plots) for the ranks:

• EvolvedLookerUp: $$y=0.49x-0.10$$ plot
• TitForTat: $$y=0.53x-0.45$$ plot
• Cooperator: $$y=0.42x+1.40$$ plot
• Defector: $$y=0.75x-0.33$$ plot
• DoubleCrosser: $$y=0.51x-0.47$$ plot

Here are the fits (and corresponding plots) for the wins:

• EvolvedLookerUp: $$y=0.28x+0.06$$ plot
• TitForTat: $$y=0.00x+0.00$$ plot
• Cooperator: $$y=0.00x+0.00$$ plot
• Defector: $$y=0.89x+0.14$$ plot
• DoubleCrosser: $$y=0.85x-0.10$$ plot

It seems that the EvolvedLookerUp strategy does continue to do well (with a low coefficient of 0.49) in these random environments. However, what’s interesting is that the simple Cooperator strategy also seems to do well (this might indicate that the random samples are creating ‘overly nice’ conditions).

All of the above keeps the 5 considered strategies separate from each other; here is the analysis repeated with the strategies combined with the random sample:

Below are the fitted lines for all the considered strategies:

Here are the fits (and corresponding plots) for the ranks:

• EvolvedLookerUp: $$y=0.42x+2.05$$ plot
• TitForTat: $$y=0.44x+1.95$$ plot
• Cooperator: $$y=0.64x+0.00$$ plot
• Defector: $$y=0.47x+1.87$$ plot
• DoubleCrosser: $$y=0.63x+1.88$$ plot

Here are the fits (and corresponding plots) for the wins:

• EvolvedLookerUp: $$y=0.28x+0.05$$ plot
• TitForTat: $$y=0.00x+0.00$$ plot
• Cooperator: $$y=0.00x+0.00$$ plot
• Defector: $$y=0.89x+4.14$$ plot
• DoubleCrosser: $$y=0.85x+2.87$$ plot

## Conclusion

It looks like the EvolvedLookerUp strategy continues to perform well in environments that are not the ones it evolved in.

The Axelrod library makes this analysis possible, as you can quickly create tournaments from a wide library of strategies. You could also refine the analysis by considering strategies of a particular type. For example, you could sample only from strategies that act deterministically (no random behaviour):
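In the Axelrod library each strategy class carries a classifier dictionary that includes a 'stochastic' flag. A filter on that flag looks roughly like the following, where the three-class pool is a made-up stand-in for the library’s full strategy list:

```python
# Each Axelrod strategy class exposes a `classifier` dict; the 'stochastic'
# key records whether the strategy uses randomness. These three classes are
# a made-up stand-in for the library's `axelrod.strategies` list.
class Cooperator:
    classifier = {'stochastic': False}

class Defector:
    classifier = {'stochastic': False}

class Random:
    classifier = {'stochastic': True}

strategies = [Cooperator, Defector, Random]

# Keep only the strategies that act deterministically.
deterministic = [s for s in strategies if not s.classifier['stochastic']]
```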

It would probably be worth gathering even more data to be able to make substantial claims about the performances, as well as considering other test strategies, but ultimately this gives some insight into the performance of the strategies in other environments.

## For fun

The latest release of the library (v0.0.21) includes the ability to draw sparklines that give a visual representation of the interactions between pairs of strategies. If you’re running Python 3 you can include emoji, so here are the sparklines for the test strategies considered:
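In the library’s sparklines, cooperation is drawn as a solid block and defection as a gap. A minimal reimplementation of that idea (with the symbols left configurable) might look like:

```python
def sparkline(actions, c_symbol='█', d_symbol=' '):
    """Render a sequence of 'C'/'D' actions as a one-line sparkline."""
    return ''.join(c_symbol if a == 'C' else d_symbol for a in actions)

# e.g. TitForTat against Defector: cooperates once, then copies the defections.
print(sparkline('CDDDD'))  # a block followed by four gaps
```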