
Replicator Dynamics

Motivating Example: Coffee clubs and the common good game

Across a large university campus, students form informal coffee clubs. Each day, students join clubs to share a cafetière of coffee and discuss algebraic topology or the latest student gossip. There are three types of students:

- **Cooperators**, who attend a club and bring coffee;
- **Defectors**, who attend a club but bring no coffee;
- **Loners**, who do not attend a club at all.

In each club, all coffee brought is pooled, brewed, and the resulting pot is shared equally among the attendees. If there are $k$ cooperators, the pot yields $rk$ units of value to be divided among all participants (cooperators and defectors). The loners, who don’t attend clubs, receive a fixed payoff $\sigma$.

Let the total population be normalized to 1, with:

- $x_c$ the proportion of cooperators;
- $x_d$ the proportion of defectors;
- $x_l$ the proportion of loners;

so that $x_c + x_d + x_l = 1$.

Assuming the population is large and well-mixed, we can model the average payoff to each strategy as:

$$f_C(x) = \frac{rx_c}{x_c + x_d} - 1, \qquad f_D(x) = \frac{rx_c}{x_c + x_d}, \qquad f_L(x) = \sigma$$

Here, $r > 1$ is the return factor of the public good (coffee), and the subtraction of 1 from $f_C$ reflects the cost of bringing coffee.

These payoffs are used in the replicator dynamics equation, which models how the frequency of each strategy changes over time:

$$\dot{x}_i = x_i \left( f_i(x) - \phi \right)$$

where $\phi$ is the population average payoff.

Figure 1: A simplex showing the evolutionary trajectory of the replicator dynamics: the direction of the derivative as given by (1) for $r=3$ and $\sigma=.6$.

Strategies that do better than the average will increase in frequency; those that do worse will decline. This feedback mechanism drives the evolution of behavior in the population — capturing the shifting fortunes of cooperators, defectors, and loners on campus.

As we will see, this system often exhibits cyclical behavior: when most students cooperate, defectors thrive; when defection becomes too common, students prefer to be loners; when clubs are small and rare, cooperation becomes appealing again. These cycles — and their stability — are precisely what replicator dynamics helps us understand.

Theory

Definition: Replicator Dynamics Equation

Consider a population with $N$ types of individuals. Let $x_i$ denote the proportion of individuals of type $i$, so that $\sum_{i=1}^N x_i = 1$.

Suppose the fitness of an individual of type $i$ depends on the state of the entire population through a population-dependent fitness function $f_i(x)$.

The replicator dynamics equation is given by:

$$\frac{dx_i}{dt} = x_i(f_i(x) - \phi) \quad \text{for all } i$$

where the average population fitness $\phi$ is defined as:

$$\phi = \sum_{i=1}^N x_i f_i(x)$$

This equation describes the change in frequency of each type as proportional to how much better (or worse) its fitness is compared to the population average.

Example: The common good game

In the common good game around coffee clubs the replicator dynamics equation can be written as:

$$\begin{align*} \dot{x}_c &= x_c\left(\frac{rx_c}{x_c + x_d} - 1 - \phi\right)\\ \dot{x}_d &= x_d\left(\frac{rx_c}{x_c + x_d} - \phi\right)\\ \dot{x}_l &= x_l\left(\sigma - \phi\right) \end{align*}$$

where:

$$\begin{align*} \phi &= x_c \left(\frac{rx_c}{x_c + x_d} - 1\right) + x_d \left(\frac{rx_c}{x_c + x_d}\right) + x_l\sigma\\ &= \frac{rx_c^2}{x_c + x_d} - x_c + \frac{rx_cx_d}{x_c + x_d} + x_l\sigma\\ &= \frac{rx_c^2 - x_c(x_c+x_d) + rx_cx_d + x_l\sigma(x_c + x_d)}{x_c + x_d}\\ &= \frac{rx_c(x_c + x_d) - x_c(x_c+x_d) + x_l\sigma(x_c + x_d)}{x_c + x_d}\\ &= \frac{(x_c + x_d)\left(rx_c - x_c + x_l\sigma\right)}{x_c + x_d}\\ &= x_c(r - 1) + x_l\sigma \end{align*}$$
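This simplification can be sanity-checked numerically. Below is a minimal sketch, assuming the example parameter values $r=3$ and $\sigma=0.6$ from Figure 1 and a randomly sampled interior population:

```python
import numpy as np

rng = np.random.default_rng(0)
r, sigma = 3, 0.6  # example parameter values, as in Figure 1

# Sample a random point in the interior of the simplex.
x_c, x_d, x_l = rng.dirichlet(np.ones(3))

# Average payoff computed directly from the three strategy payoffs.
f_c = r * x_c / (x_c + x_d) - 1
f_d = r * x_c / (x_c + x_d)
phi_direct = x_c * f_c + x_d * f_d + x_l * sigma

# The simplified closed form derived above.
phi_closed = x_c * (r - 1) + x_l * sigma

print(phi_direct, phi_closed)
```

The two values agree for any point on the simplex.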

Definition: Stable Population


For a given population game with $N$ types of individuals and fitness functions $f_i$, a stable population $\tilde x$ is one for which $\dot x_i = 0$ for all $i$.


For a stable population $\tilde x$:

$$\dot{x}_i = \tilde x_i(f_i(\tilde x) - \phi) = 0$$

so either $\tilde x_i = 0$ or $f_i(\tilde x) = \phi$.

Example: No interior point stable population in the common goods game

For the common good game around coffee clubs we have some immediate stable populations:

- $\tilde x = (1, 0, 0)$: everyone cooperates and $\phi = r - 1 = f_c(\tilde x)$;
- $\tilde x = (0, 1, 0)$: everyone defects and $\phi = 0 = f_d(\tilde x)$;
- $\tilde x = (0, 0, 1)$: everyone is a loner and $\phi = \sigma = f_l(\tilde x)$.

A question remains: is there a point in the interior of the simplex of Figure 1 that is stable? Such a point has $x_c>0$, $x_d>0$ and $x_l > 0$, which implies:

$$f_c(x) = f_d(x) = f_l(x)$$

This is not possible as $f_c(x) = f_d(x) - 1$ for all $x$.

Definition: Fitness of a strategy in a population

In a population with $N$ types, let $f(y, x)$ denote the fitness of an individual playing strategy $y$ in a population $x$:

$$f(y, x) = \sum_{i=1}^N y_i f_i(x)$$

Example: A new student in the Common Goods Game

For the common good game, if we consider the stable population $x=(0, 1, 0)$ where everyone is defecting and assume that a new student enters planning to cooperate 50% of the time and defect 50% of the time, their fitness is given by:

$$f((1/2, 1/2, 0), (0, 1, 0)) = \frac{1}{2} f_c(0, 1, 0) + \frac{1}{2}f_d(0, 1, 0) = \frac{1}{2}(-1) + \frac{1}{2}\cdot 0 = -\frac{1}{2}$$

since $f_c(0, 1, 0) = \frac{r \cdot 0}{0 + 1} - 1 = -1$ and $f_d(0, 1, 0) = 0$.
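This computation can be sketched in Python. The helper functions below are illustrative (not part of any library); the `club == 0` branch encodes an assumed convention for populations with no club attendees, and `r` and `sigma` take the example values used in Figure 1:

```python
import numpy as np

r, sigma = 3, 0.6  # example parameter values, as in Figure 1

def fitnesses(x):
    """Return (f_c, f_d, f_l) for a population x = (x_c, x_d, x_l)."""
    x_c, x_d, x_l = x
    club = x_c + x_d
    f_c = r * x_c / club - 1 if club > 0 else -1  # assumed convention when no one attends a club
    f_d = r * x_c / club if club > 0 else 0
    return np.array([f_c, f_d, sigma])

def strategy_fitness(y, x):
    """f(y, x) = sum_i y_i f_i(x)."""
    return np.dot(y, fitnesses(x))

print(strategy_fitness([1 / 2, 1 / 2, 0], [0, 1, 0]))  # -0.5
```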

Definition: Post Entry Population

For a population with $N$ types of individuals, given a population $x \in \mathbb{R}^N_{[0, 1]}$ (with $\sum_{i=1}^N x_i=1$), some $\epsilon>0$ and a strategy $y \in \mathbb{R}^N_{[0, 1]}$ (with $\sum_{i=1}^N y_i=1$), the post entry population $x_{\epsilon}$ is given by:

$$x_{\epsilon} = x + \epsilon(y - x)$$

Example: Post Entry Population for the Common Goods Game

For the common good game, if we consider the stable population $x=(0, 1, 0)$ where everyone is defecting and assume that a new student enters the population planning to cooperate with a coffee club 50% of the time and defect 50% of the time, the post entry population will be:

$$x_{\epsilon} = (0, 1, 0) + \epsilon((1/2, 1/2, 0) - (0, 1, 0)) = (0, 1, 0) + (\epsilon/2, -\epsilon/2, 0) = (\epsilon / 2, 1 - \epsilon / 2, 0)$$
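As a quick sketch, the post entry population is a one-line computation with NumPy (here with a hypothetical invasion size $\epsilon = 0.1$):

```python
import numpy as np

x = np.array([0, 1, 0])          # the resident population: everyone defects
y = np.array([1 / 2, 1 / 2, 0])  # the entrant's strategy
epsilon = 0.1                    # hypothetical invasion size

x_epsilon = x + epsilon * (y - x)
print(x_epsilon)
```

The result is $(\epsilon/2, 1 - \epsilon/2, 0)$ as expected.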

What is of interest in the field of evolutionary game theory is what happens to the post entry population: does this new student change the stability of the system or is the system going to go back to all students defecting?

Definition: Evolutionarily Stable Strategy

Given a stable population $x^*$, $x^*$ represents an Evolutionarily Stable Strategy (ESS) if and only if there exists $\bar\epsilon>0$ such that, for all $y \ne x^*$ and for all $0 < \epsilon < \bar\epsilon$, the post entry population $x_{\epsilon} = x^* + \epsilon(y - x^*)$ satisfies:

$$f(x^*, x^*) > f(x_{\epsilon}, x^*)$$

In other words, a stable population is evolutionarily stable if it attains a strictly higher fitness than any post entry population obtained from a sufficiently small invasion.

Example: Are Loners Evolutionarily Stable in the Common Goods Game

For the common good game we have seen that a population $x^*=(0, 0, 1)$ where everyone is a loner is stable. Let us check if it is evolutionarily stable.

We have:

$$f(x^*, x^*) = \sigma$$

Now to calculate the right hand side of (12):

$$\begin{align*} f(x_{\epsilon}, x^*) &= {x_\epsilon}_{c}f_c(x^*) + {x_\epsilon}_{d}f_d(x^*) + {x_\epsilon}_{l}f_l(x^*)\\ &= {x_\epsilon}_{c}(-1) + {x_\epsilon}_{d}(0) + {x_\epsilon}_{l}\sigma\\ &= -{x_\epsilon}_{c} + {x_\epsilon}_{l}\sigma \end{align*}$$

which is strictly less than $f(x^*, x^*)=\sigma$ whenever ${x_{\epsilon}}_c>0$. If instead ${x_{\epsilon}}_c=0$, then ${x_{\epsilon}}_d>0$ (as $x_{\epsilon}\ne x^*$), so that ${x_\epsilon}_{l} < 1$ and $f(x_{\epsilon}, x^*) = {x_\epsilon}_{l}\sigma < \sigma$. In both cases the inequality holds, so the loner population is evolutionarily stable.
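A small numerical check of this argument, assuming the example value $\sigma = 0.6$ and the fitness values $f_c(x^*) = -1$, $f_d(x^*) = 0$ used above:

```python
import numpy as np

sigma = 0.6  # example value, matching Figure 1

def post_entry_fitness(y, epsilon):
    """f(x_epsilon, x*) for x* = (0, 0, 1), using f_c(x*) = -1, f_d(x*) = 0 and f_l(x*) = sigma."""
    x_star = np.array([0, 0, 1])
    x_eps = x_star + epsilon * (np.array(y) - x_star)
    return x_eps @ np.array([-1, 0, sigma])

# Any entrant strategy leaves the post entry population with a fitness below sigma.
for y in ([1, 0, 0], [0, 1, 0], [1 / 3, 1 / 3, 1 / 3]):
    print(post_entry_fitness(y, 0.01))
```

Each printed value is strictly below $\sigma = 0.6$.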

Definition: Pairwise Interaction Game


Given a population of $N$ types of individuals and a payoff matrix $M \in \mathbb{R}^{N\times N}$, a pairwise interaction game is a game where the fitness $f_i(x)$ is given by:

$$f_i(x) = \sum_{j=1}^{N}x_jM_{ij}$$

This corresponds to a population where all individuals interact with all other individuals in the population and obtain a fitness given by the matrix MM.

Note that there is a linear algebraic equivalent to (15). Treating $x$ as a column vector:

$$f = Mx$$

and then:

$$\phi = x^{\top} f$$
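For instance, using the sharing/aggressive matrix from the Hawk Dove example below, the fitness vector and average fitness are two matrix products (a sketch with NumPy):

```python
import numpy as np

M = np.array([[2, 1], [3, 0]])  # payoff matrix of the Hawk Dove example below
x = np.array([1 / 2, 1 / 2])    # an example population state

f = M @ x    # fitness of each type: f_i = sum_j M_ij x_j
phi = x @ f  # average population fitness

print(f, phi)
```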

Example: The Hawk Dove Game

Consider a population of animals. These animals, when they interact, will always share their food. Due to a genetic mutation, some of these animals may act in an aggressive manner and not share their food. If two aggressive animals meet, they both compete and end up with no food. If an aggressive animal meets a sharing one, the aggressive one will take most of the food.

These interactions can be represented using the matrix $M$:

$$M = \begin{pmatrix} 2 & 1\\ 3 & 0 \end{pmatrix}$$

In this scenario: what is the likely long-term effect of the genetic mutation?

Over time will:

- the aggressive animals die out;
- the sharing animals die out;
- or both types coexist?

To answer this question, we will model it as a pairwise interaction game with $x\in\mathbb{R}_{[0, 1]}^2$ representing the population distribution: $x_1$ is the proportion of sharing animals and $x_2$ the proportion of aggressive ones. The fitness functions are then $f_1(x) = 2x_1 + x_2$ and $f_2(x) = 3x_1$.

In this case, the replicator dynamics equation becomes:

$$\begin{align*} \frac{dx_1}{dt} &= x_1(2x_1 + x_2 - \phi) \\ \frac{dx_2}{dt} &= x_2(3x_1 - \phi) \end{align*}$$

where:

$$\phi = x_1(2x_1 + x_2) + x_2(3x_1)$$

Note that $x_2 = 1 - x_1$, so for simplicity of notation we will only use $x$ to represent the proportion of the population that shares.

Thus we can write the single differential equation:

$$\begin{align*} \frac{dx}{dt} &= x (2x + (1 - x) - \phi)\\ &= x (2x + (1 - x) - x(2x + (1-x)) - (1 - x)(3x))\\ &= x (2x^2-3x+1)\\ &= x(x -1)(2x-1) \end{align*}$$
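The algebra above can be verified symbolically; here is a short sketch using SymPy:

```python
import sympy as sym

x = sym.Symbol("x")

# Average fitness and right-hand side of the replicator equation.
phi = x * (2 * x + (1 - x)) + (1 - x) * (3 * x)
rhs = x * (2 * x + (1 - x) - phi)

# The derivative factorises as x(x - 1)(2x - 1).
assert sym.expand(rhs - x * (x - 1) * (2 * x - 1)) == 0
print(sym.factor(rhs))
```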

We see that there are 3 stable populations:

- $x = 0$: everyone is aggressive;
- $x = 1/2$: a mixture of sharing and aggressive animals;
- $x = 1$: everyone shares.

This differential equation can then be solved numerically, for example using Euler’s method, to show the evolution of the population over time.

Let us do this with a step size $h=.1$ and an initial population of $x=3/5$:

Recall, we use the update rule:

$$x_{n+1} = x_n + h \cdot f(x_n)$$

to give Table 1.

Table 1: Step by step application of Euler’s method to the Hawk Dove game with step size $h=.1$ and $x_0=3/5$.

| $n$ | $t_n$ | $x_n$ | $f(x_n)$ | $x_{n+1}$ |
| --- | --- | --- | --- | --- |
| 0 | 0.0 | 0.600 | -0.048 | 0.595 |
| 1 | 0.1 | 0.595 | -0.046 | 0.591 |
| 2 | 0.2 | 0.591 | -0.044 | 0.586 |
| 3 | 0.3 | 0.586 | -0.042 | 0.582 |
| 4 | 0.4 | 0.582 | -0.040 | 0.578 |
| 5 | 0.5 | 0.578 | -0.038 | 0.574 |
| 6 | 0.6 | 0.574 | -0.036 | 0.571 |
| 7 | 0.7 | 0.571 | -0.035 | 0.567 |
| 8 | 0.8 | 0.567 | -0.033 | 0.564 |
| 9 | 0.9 | 0.564 | -0.031 | 0.561 |
| 10 | 1.0 | 0.561 | -0.030 | 0.558 |

If we repeat this with x=2/5x=2/5 we obtain Table 2.

Table 2: Step by step application of Euler’s method to the Hawk Dove game with step size $h=.1$ and $x_0=2/5$.

| $n$ | $t_n$ | $x_n$ | $f(x_n)$ | $x_{n+1}$ |
| --- | --- | --- | --- | --- |
| 0 | 0.0 | 0.400 | 0.048 | 0.405 |
| 1 | 0.1 | 0.405 | 0.046 | 0.409 |
| 2 | 0.2 | 0.409 | 0.044 | 0.414 |
| 3 | 0.3 | 0.414 | 0.042 | 0.418 |
| 4 | 0.4 | 0.418 | 0.040 | 0.422 |
| 5 | 0.5 | 0.422 | 0.038 | 0.426 |
| 6 | 0.6 | 0.426 | 0.036 | 0.429 |
| 7 | 0.7 | 0.429 | 0.035 | 0.433 |
| 8 | 0.8 | 0.433 | 0.033 | 0.436 |
| 9 | 0.9 | 0.436 | 0.031 | 0.439 |
| 10 | 1.0 | 0.439 | 0.030 | 0.442 |
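Both tables can be reproduced with a few lines of Python; this is a minimal sketch of Euler's method applied to $f(x) = x(x-1)(2x-1)$:

```python
def f(x):
    """Right-hand side dx/dt = x(x - 1)(2x - 1) of the Hawk Dove replicator equation."""
    return x * (x - 1) * (2 * x - 1)

def euler_table(x0, h=0.1, steps=11):
    """Apply Euler's method, returning rows (n, t_n, x_n, f(x_n), x_{n+1})."""
    rows, x = [], x0
    for n in range(steps):
        fx = f(x)
        x_next = x + h * fx
        rows.append((n, round(n * h, 1), round(x, 3), round(fx, 3), round(x_next, 3)))
        x = x_next
    return rows

for row in euler_table(3 / 5):
    print(row)
```

Starting from $x_0 = 2/5$ instead reproduces Table 2.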

It looks like the population is converging to the population with a mix of both sharers and aggressive types: $x=1/2$. Figure 2 confirms this.

A plot of two lines, one starting at 3/5 and the other at 2/5. The lines slowly converge to 1/2.

Figure 2: The numerical integration of the differential equation (21) with two different initial values of $x$.

This indicates that $x=1/2$ is an evolutionarily stable strategy. To confirm this we could repeat the calculations using the definition of an evolutionarily stable strategy; however, for pairwise interaction games there is a theoretical result that can be used instead.

Theorem: Characterisation of ESS in two-player games


Let $\sigma^*$ be a strategy in a symmetric two-player game (so that $M_r=M_c^{\top}$). Then $\sigma^*$ is an evolutionarily stable strategy (ESS) if and only if, for all $\sigma \ne \sigma^*$, one of the following conditions holds:

  1. $f(\sigma^*, \sigma^*) > f(\sigma, \sigma^*)$
  2. $f(\sigma^*, \sigma^*) = f(\sigma, \sigma^*)$ and $f(\sigma^*, \sigma) > f(\sigma, \sigma)$

Conversely, if either of the above conditions holds for all σσ\sigma \ne \sigma^*, then σ\sigma^* is an ESS in the corresponding population game.


Proof


Assume σ\sigma^* is an ESS. Then, by definition, there exists ε>0\varepsilon > 0 such that for all σσ\sigma \ne \sigma^* and all 0<ϵ<ε0 < \epsilon < \varepsilon, we have:

$$f(\sigma^*, \chi_\epsilon) > f(\sigma, \chi_\epsilon)$$

where $\chi_\epsilon = (1 - \epsilon)\sigma^* + \epsilon \sigma$ is the mixed population state. Substituting into the expected fitness, we obtain:

$$(1 - \epsilon) f(\sigma^*, \sigma^*) + \epsilon f(\sigma^*, \sigma) > (1 - \epsilon) f(\sigma, \sigma^*) + \epsilon f(\sigma, \sigma)$$

Rearranging, this inequality holds for all sufficiently small $\epsilon > 0$ if either:

  1. $f(\sigma^*, \sigma^*) > f(\sigma, \sigma^*)$, so that the $(1-\epsilon)$ terms dominate as $\epsilon \to 0$; or
  2. $f(\sigma^*, \sigma^*) = f(\sigma, \sigma^*)$ and $f(\sigma^*, \sigma) > f(\sigma, \sigma)$, so that the comparison is decided by the $\epsilon$ terms.

For the converse, suppose neither condition holds. Then either:

  1. $f(\sigma^*, \sigma^*) < f(\sigma, \sigma^*)$ for some $\sigma$, in which case the inequality fails for all sufficiently small $\epsilon$; or
  2. $f(\sigma^*, \sigma^*) = f(\sigma, \sigma^*)$ and $f(\sigma^*, \sigma) \leq f(\sigma, \sigma)$ for some $\sigma$, in which case the inequality also fails for small $\epsilon$.

Hence, the two conditions are necessary and sufficient for evolutionary stability.


This theorem gives us a practical method for identifying ESS:

  1. Construct the associated symmetric two-player game.
  2. Identify all symmetric Nash equilibria of the game.
  3. For each symmetric Nash equilibrium, test the two conditions above.

Note that the first condition is very close to the condition for a strict Nash equilibrium, while the second adds a refinement that removes certain non-strict symmetric equilibria. This distinction is especially important when considering equilibria in mixed strategies.


Example: Evolutionary stability in the Hawk-Dove game

Let us consider the Hawk-Dove game. The associated symmetric two-player game can be written in a general form. Let $v$ denote the value of the resource and $c$ the cost of conflict, with $v < c$.

Row player’s payoff matrix:

$$A = \begin{pmatrix} \frac{v-c}{2} & v \\ 0 & \frac{v}{2} \end{pmatrix}$$

Column player’s payoff matrix:

$$B = \begin{pmatrix} \frac{v-c}{2} & 0 \\ v & \frac{v}{2} \end{pmatrix}$$

To find symmetric Nash equilibria, we apply the support enumeration algorithm. Writing $\sigma^* = (q^*, 1 - q^*)$ where $q^*$ is the probability of being aggressive, indifference between the two actions requires:

$$f(H, \sigma^*) = f(D, \sigma^*) \implies q^* = \frac{v}{c}$$

This gives

$$\sigma^* = \left(\frac{v}{c}, 1 - \frac{v}{c}\right)$$

where individuals are aggressive with probability $\frac{v}{c}$ and share otherwise.

To determine whether this strategy is evolutionarily stable, we check the conditions of the characterisation theorem.

Crucially, because $f(\sigma^*, \sigma^*) = f(\text{Aggressive}, \sigma^*) = f(\text{Sharing}, \sigma^*)$, the first condition does not hold. We must therefore verify the second condition:

$$f(\sigma^*, \sigma) > f(\sigma, \sigma)$$

Let $\sigma = (\omega, 1 - \omega)$ be an arbitrary strategy. Then:

$$f(\sigma^*, \sigma) = \frac{v}{c} \cdot \omega \cdot \frac{v - c}{2} + \frac{v}{c} \cdot (1 - \omega) \cdot v + \left(1 - \frac{v}{c} \right) \cdot (1 - \omega) \cdot \frac{v}{2}$$
$$f(\sigma, \sigma) = \omega^2 \cdot \frac{v - c}{2} + \omega(1 - \omega) \cdot v + (1 - \omega)^2 \cdot \frac{v}{2}$$

Subtracting the two expressions gives:

$$f(\sigma^*, \sigma) - f(\sigma, \sigma) = \frac{c}{2} \left( \frac{v}{c} - \omega \right)^2$$

This is strictly positive for all $\omega \ne \frac{v}{c}$, so the second condition holds and $\sigma^*$ is an evolutionarily stable strategy.
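The subtraction above is easily checked symbolically; a sketch with SymPy (using `omega` for $\omega$):

```python
import sympy as sym

v, c, omega = sym.symbols("v c omega", positive=True)

def fitness(p, w):
    """Payoff of (p, 1 - p) against (w, 1 - w) in the general Hawk-Dove game."""
    return p * w * (v - c) / 2 + p * (1 - w) * v + (1 - p) * (1 - w) * v / 2

difference = sym.simplify(fitness(v / c, omega) - fitness(omega, omega))
print(difference)
```

The result simplifies to $\frac{c}{2}\left(\frac{v}{c} - \omega\right)^2$.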

Definition: Replicator Mutator Dynamics Equation

An extension of the replicator equation is to allow for mutation. In this setting, reproduction is imperfect: individuals of a given type can give rise to individuals of another type.

This process is represented by a matrix $Q$, where $Q_{ij}$ denotes the probability that an individual of type $j$ is produced by an individual of type $i$.

In this case, the replicator dynamics equation can be modified to yield the replicator-mutator equation:

$$\frac{dx_i}{dt} = \sum_{j=1}^N x_j f_j Q_{ji} - x_i \phi \quad \text{for all } i$$

Example: The Replicator Mutator Dynamics Equation for the Hawk Dove Game

Let there be a 10% chance that aggressive individuals produce sharing ones, in which case the matrix $Q$ is given by:

$$Q = \begin{pmatrix} 1 & 0\\ 1 / 10 & 9 / 10 \end{pmatrix}$$
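A minimal sketch of how this mutation structure changes the long-run outcome, using the fitness functions $f_1(x) = 2x_1 + x_2$ (sharing) and $f_2(x) = 3x_1$ (aggressive) from the Hawk Dove example and a simple Euler integration (the step size and horizon are illustrative choices):

```python
import numpy as np

Q = np.array([[1, 0], [1 / 10, 9 / 10]])  # row i: offspring distribution of a type-i parent

def step(x, h=0.01):
    """One Euler step of the replicator-mutator equation."""
    f = np.array([2 * x[0] + x[1], 3 * x[0]])  # fitness of sharing and aggressive types
    phi = x @ f
    return x + h * ((x * f) @ Q - x * phi)

x = np.array([1 / 2, 1 / 2])
for _ in range(2000):
    x = step(x)
print(x)
```

The mix no longer settles at $(1/2, 1/2)$: mutation toward sharing shifts the interior fixed point, and the trajectory settles near $(0.65, 0.35)$.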

Example: Recovering the Replicator Dynamics Equation from the Replicator Mutator Dynamics Equation

Show that when $Q=I_N$ (the identity matrix of size $N$), the replicator-mutator dynamics equation reduces to the replicator dynamics equation.

The replicator-mutator equation is:

$$\frac{dx_i}{dt} = \sum_{j=1}^N x_j f_j Q_{ji} - x_i \phi \quad \text{for all } i$$

As $Q = I_N$:

$$Q_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}$$

This gives:

$$\begin{align*} \frac{dx_i}{dt} &= x_i f_i Q_{ii} - x_i \phi && Q_{ij} = 0 \text{ for all } i \ne j \\ &= x_i f_i - x_i \phi && Q_{ii} = 1 \\ &= x_i (f_i - \phi) \end{align*}$$

for all $i$, which is the replicator dynamics equation.

Definition: Asymmetric Replicator Dynamics Equation


A further extension of the replicator dynamics framework accounts for populations divided into two distinct subsets. Individuals in the first population are one of $M$ possible types, while those in the second population are one of $N$ possible types.

This setting arises naturally in asymmetric games, where the roles of the players differ and the strategy sets need not be the same (i.e., $M \ne N$). In such cases, the standard replicator equation does not apply directly.

The asymmetric replicator dynamics equations describe the evolution of strategy distributions $x$ and $y$ in each population:

$$\frac{dx_i}{dt} = x_i\left((f_x)_i - \phi_x\right) \quad \text{for all } 1 \leq i \leq M$$
$$\frac{dy_j}{dt} = y_j\left((f_y)_j - \phi_y\right) \quad \text{for all } 1 \leq j \leq N$$

Here:

- $f_x = M_r y$ and $f_y = x^{\top} M_c$ are the fitness vectors of the two populations, where $M_r$ and $M_c$ are the payoff matrices of the row and column populations;
- $\phi_x = x^{\top} M_r y$ and $\phi_y = x^{\top} M_c y$ are the corresponding average fitnesses.


Example: Tennis Serve and Return

In tennis, serving and receiving form an asymmetric interaction. The server (row player) chooses one of two serves, while the receiver (column player) chooses one of three possible return strategies.

The server can deliver a power or spin serve. The receiver can either prepare for power, cover a wide spin, or take an early aggressive position.

This leads to an asymmetric game where the server has 2 strategies and the receiver has 3. The game matrices are:

$$M_r = \begin{pmatrix} 3 & 1 & 2 \\ 4 & 2 & 1 \end{pmatrix}$$
$$M_c = \begin{pmatrix} 1 & 3 & 2 \\ 0 & 2 & 4 \end{pmatrix}$$

These matrices are based on the following assumptions:

Let $x = (x_1, x_2)$ be the strategy distribution of the server and $y = (y_1, y_2, y_3)$ that of the receiver. The asymmetric replicator dynamics for this game are:

$$\frac{dx_i}{dt} = x_i\left((M_r y)_i - x^\top M_r y\right) \quad \text{for } 1 \leq i \leq 2$$
$$\frac{dy_j}{dt} = y_j\left((x^\top M_c)_j - x^\top M_c y\right) \quad \text{for } 1\leq j \leq 3$$

Figure 3 shows the numerical solutions of these differential equations over time.

Two plots showing the numerical solutions of the asymmetric replicator dynamics equation

Figure 3: Numerical solutions to the asymmetric replicator dynamics equation. Preparing for power quickly dies out as a played strategy in the population. There is a cycle between the 2 remaining strategies for the returner and for the server, although the power serve remains the strategy played most often.

Exercises

Exercise: Stability from fitness functions

Consider a population with two types of individuals: $x = (x_1, x_2)$ such that $x_1 + x_2 = 1$. Obtain all the stable populations for the system defined by the following fitness functions:

  1. $f_1(x) = x_1 - x_2 \qquad f_2(x) = x_2 - 2x_1$
  2. $f_1(x) = x_1 x_2 - x_2 \qquad f_2(x) = x_2 - x_1 + \frac{1}{2}$
  3. $f_1(x) = x_1^2 \qquad f_2(x) = x_2^2$

For each stable population, choose a nearby post-entry state and solve the replicator dynamics equation numerically using Euler’s method with step size $h = 0.05$ for 10 steps, starting from $x = 3/5$.


Exercise: Stable populations from payoff matrices

For the following games, obtain all the stable populations for the associated pairwise interaction game:

  1. $A = \begin{pmatrix} 2 & 4 \\ 5 & 3 \end{pmatrix}$
  2. $A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$

Exercise: Evolutionarily stable strategies in symmetric games

Consider the pairwise contest games defined by the following associated two-player games. In each case, identify all evolutionarily stable strategies (ESS).

  1. $M_r = \begin{pmatrix} 2 & 4 \\ 5 & 1 \end{pmatrix} \qquad M_c = \begin{pmatrix} 2 & 5 \\ 4 & 1 \end{pmatrix}$
  2. $M_r = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad M_c = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$

Exercise: Typesetting conventions in a mathematics department

In a mathematics department, researchers can choose to use one of two systems for typesetting their research papers: LaTeX or Word. We will refer to these two strategies as $L$ and $W$ respectively. A user of $W$ receives a basic utility of 1. As $L$ is more widely used by mathematicians outside the department and is generally considered the superior system, a user of $L$ receives a basic utility of $\alpha > 1$. Since collaboration is common, it is beneficial for researchers to use the same system. If $\mu$ denotes the proportion of $L$ users, we define:

$$u(L, \chi) = \alpha + 2\mu$$
$$u(W, \chi) = 1 + 2(1 - \mu)$$

What are the evolutionarily stable strategies?

Programming

In Appendix: Numerical Integration, we introduce general programming approaches for numerically solving differential equations. These apply directly to the replicator dynamics equation. Here, we focus on tools specifically tailored to population interaction games.


Solving symmetric replicator dynamics

The Nashpy library provides built-in functionality for solving the replicator dynamics equation in a pairwise interaction game.

Let us consider the classic Rock–Paper–Scissors game:

```python
import nashpy as nash
import numpy as np

M_r = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]])
game = nash.Game(M_r)
```

We can compute the population trajectory from an initial distribution:

```python
x0 = np.array([1 / 6, 1 / 6, 2 / 3])
timepoints = np.linspace(0, 10, 1500)
xs = game.replicator_dynamics(y0=x0, timepoints=timepoints).T
xs
```

To visualize the evolution of strategy frequencies over time:

```python
import matplotlib.pyplot as plt

plt.figure()
plt.plot(xs.T)
plt.ylim(0, 1)
plt.legend(["$x_1$", "$x_2$", "$x_3$"])
plt.ylabel("Distribution")
plt.xlabel("Time")
```

Plotting a simplex with ternary

The ternary library (Harper, 2019) allows for plotting trajectories on a simplex, ideal for representing three-component distributions that sum to one.

We can use it to plot the Rock–Paper–Scissors trajectory:

```python
import ternary

figure, tax = ternary.figure(scale=1.0)
tax.boundary()
tax.gridlines(multiple=0.2, color="black")
# Plot the data
tax.plot(xs.T, linewidth=2.0, label="$x$")
tax.ticks(axis='lbr', multiple=0.2, linewidth=1, tick_formats="%.1f")
tax.legend()
tax.left_axis_label("Scissors")
tax.right_axis_label("Paper")
tax.bottom_axis_label("Rock")
tax.ax.axis('off')
tax.show()
```

Solving asymmetric replicator dynamics

The Nashpy library also supports numerical solutions for the asymmetric replicator dynamics equation.

```python
M_r = np.array([[3, 1, 2], [4, 2, 1]])
M_c = np.array([[1, 3, 2], [0, 2, 4]])
game = nash.Game(M_r, M_c)

x0 = np.array([1 / 2, 1 / 2])
y0 = np.array([1 / 3, 1 / 3, 1 / 3])
timepoints = np.linspace(0, 20, 1000)

xs, ys = game.asymmetric_replicator_dynamics(x0=x0, y0=y0, timepoints=timepoints)
xs
```

The corresponding trajectory for the column player’s strategy distribution:

```python
ys
```

Notable Research

The original conceptual idea of an evolutionarily stable strategy (ESS) was formulated by Maynard Smith (Smith & Price, 1973; Smith, 1982). Although these works did not explicitly introduce the replicator dynamics equation, they were foundational in connecting game theory with evolutionary biology.

The first formal presentation of the replicator dynamics equation appeared in Taylor & Jonker, 1978, which directly built upon Maynard Smith’s ESS framework. This formulation was later extended to multi-player games in Palm, 1984, and to asymmetric populations in Accinelli & Carrera, 2011.

Several influential applications of replicator dynamics have since emerged. For example, Komarova et al., 2001 used replicator-mutator dynamics to model the spread of grammatical structures in language populations. In the context of cooperation, Hilbe et al., 2013 applied the model to study the evolution of reactive strategies, while Knight et al., 2024 recently demonstrated how extortionate strategies fail to persist under evolutionary pressure.

A particularly notable extension is found in Weitz et al., 2016, where the game itself changes dynamically depending on the population state. This approach is especially relevant in modeling the tragedy of the commons and other environmental feedback systems.

In Lv et al., 2023, a model similar to the one in Section: Motivating Example is examined using both replicator dynamics and a discrete population model. The latter is explored in detail in Chapter: Moran Process. Remarkably, the replicator dynamics equation emerges as the infinite-population limit of the discrete model—a connection rigorously established in Traulsen et al., 2005.

Conclusion

The replicator dynamics equation provides a powerful lens through which to study strategy evolution in large populations. By linking the fitness of strategies to their growth or decline in the population, it captures the essence of selection and adaptation.

Throughout this chapter, we explored how replicator dynamics:

- model the change in strategy frequencies according to relative fitness;
- identify stable populations and evolutionarily stable strategies;
- extend to settings with mutation and with asymmetric populations.

From modelling simple two-strategy contests to rich three-strategy dynamics on a simplex, replicator dynamics offer an interpretable and analytically rich framework for evolutionary game theory. Table 3 gives a summary of the main concepts of this chapter.

Table 3: Summary of key concepts in replicator dynamics.

| Concept | Description |
| --- | --- |
| Replicator Dynamics Equation | Models strategy frequency change based on relative fitness |
| Average Population Fitness ($\phi$) | Weighted average of individual fitnesses |
| Stable Population | A distribution where no strategy’s frequency changes over time |
| Evolutionarily Stable Strategy (ESS) | A stable strategy resistant to invasion by nearby alternatives |
| Post Entry Population | Perturbed population after a rare mutant enters |
| Replicator-Mutator Equation | Extension accounting for imperfect strategy transmission |
| Asymmetric Replicator Dynamics | Models evolution in multi-population or role-asymmetric settings |
| Pairwise Interaction Game | Fitness determined by payoffs in repeated pairwise interactions |
References
  1. Harper, M. (2019). python-ternary: Ternary Plots in Python. Zenodo. 10.5281/zenodo.594435
  2. Smith, J. M., & Price, G. R. (1973). The logic of animal conflict. Nature, 246(5427), 15–18.
  3. Smith, J. M. (1982). Evolution and the Theory of Games. In Did Darwin get it right? Essays on games, sex and evolution (pp. 202–215). Springer.
  4. Taylor, P. D., & Jonker, L. B. (1978). Evolutionary stable strategies and game dynamics. Mathematical Biosciences, 40(1–2), 145–156.
  5. Palm, G. (1984). Evolutionary stable strategies and game dynamics for n-person games. Journal of Mathematical Biology, 19, 329–334.
  6. Accinelli, E., & Carrera, E. J. S. (2011). Evolutionarily stable strategies and replicator dynamics in asymmetric two-population games. In Dynamics, Games and Science I: DYNA 2008, in Honor of Maurı́cio Peixoto and David Rand, University of Minho, Braga, Portugal, September 8-12, 2008 (pp. 25–35). Springer.
  7. Komarova, N. L., Niyogi, P., & Nowak, M. A. (2001). The evolutionary dynamics of grammar acquisition. Journal of Theoretical Biology, 209(1), 43–59.
  8. Hilbe, C., Nowak, M. A., & Sigmund, K. (2013). Evolution of extortion in iterated prisoner’s dilemma games. Proceedings of the National Academy of Sciences, 110(17), 6913–6918.
  9. Knight, V., Harper, M., Glynatsi, N. E., & Gillard, J. (2024). Recognising and evaluating the effectiveness of extortion in the Iterated Prisoner’s Dilemma. PloS One, 19(7), e0304641.
  10. Weitz, J. S., Eksin, C., Paarporn, K., Brown, S. P., & Ratcliff, W. C. (2016). An oscillating tragedy of the commons in replicator dynamics with game-environment feedback. Proceedings of the National Academy of Sciences, 113(47), E7518–E7525.
  11. Lv, S., Li, J., & Zhao, C. (2023). The evolution of cooperation in voluntary public goods game with shared-punishment. Chaos, Solitons & Fractals, 172, 113552.
  12. Traulsen, A., Claussen, J. C., & Hauert, C. (2005). Coevolutionary dynamics: from finite to infinite populations. Physical Review Letters, 95(23), 238701.