The fitness of an app user is made up of:

For app A:

$$f_A(x) = \underbrace{1}_{\text{base utility of using any app}} + \underbrace{2x_1}_{\text{extra utility from many friends using app A}}$$

For app B:

$$f_B(x) = \underbrace{1}_{\text{base utility of using any app}} + \underbrace{2(1 - x_1)}_{\text{extra utility from many friends using app B}}$$

So users gain more by coordinating on the same app as their friends.
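For example, here is a minimal sketch evaluating both fitness functions at an illustrative population state (the share $x_1 = 0.9$ and the helper names `fitness_A`/`fitness_B` are made up for illustration, not part of the question):

```python
# Evaluate the two fitness functions at an illustrative state where 90%
# of users are on app A (x_1 = 0.9 is an assumption for illustration).
def fitness_A(x_1):
    return 1 + 2 * x_1  # base utility plus coordination benefit 2 * x_1

def fitness_B(x_1):
    return 1 + 2 * (1 - x_1)  # base utility plus benefit 2 * (1 - x_1)

print(fitness_A(0.9), fitness_B(0.9))  # approximately 2.8 and 1.2
```

Users of the majority app get the larger fitness, as the coordination story suggests.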
If $x_1$ is the proportion of the population using strategy 1 (here, app A), the replicator dynamics for a two-type population are:

$$\frac{dx_1}{dt} = x_1(f_1(x) - \phi(x)),$$

where the average fitness $\phi(x)$ is

$$\phi(x) = x_1 f_1(x) + (1 - x_1) f_2(x).$$
From Definition: Stable Population:

For a given population game with $N$ types of individuals and fitness functions $f_i$, a stable population $\tilde{x}$ is one for which $\dot{x}_i = 0$ for all $i$.
From Definition: Post Entry Population:

For a population with $N$ types of individuals: given a population $x \in \mathbb{R}_{[0,1]}^N$ (with $\sum_{i=1}^N x_i = 1$), some $\epsilon > 0$ and a strategy $y \in \mathbb{R}_{[0,1]}^N$ (with $\sum_{i=1}^N y_i = 1$), the post entry population $x_\epsilon$ is given by:

$$x_\epsilon = x + \epsilon(y - x)$$
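As a small illustration of this definition, the post entry population can be computed directly; the population $x$, entrant strategy $y$ and $\epsilon$ below are made-up values:

```python
import numpy as np

# Made-up values for illustration: an evenly split population invaded by
# a small fraction epsilon of pure strategy 1 players.
x = np.array([0.5, 0.5])
y = np.array([1.0, 0.0])
epsilon = 0.01

x_epsilon = x + epsilon * (y - x)  # the post entry population
print(x_epsilon)  # [0.505 0.495]
```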
From Definition: Evolutionary Stable Strategy:

A strategy $x^*$ is an evolutionarily stable strategy if for all $x_\epsilon \neq x^*$ sufficiently close to $x^*$:

$$f(x^*, x^*) > f(x_\epsilon, x^*)$$

In practice, "sufficiently close" means that there exists some $\bar{\epsilon}$ such that for all $y \neq x^*$ and for all $0 < \epsilon < \bar{\epsilon}$ the post entry population $x_\epsilon = x + \epsilon(y - x)$ satisfies the above inequality.
We have

$$f_A(x) = 1 + 2x_1, \qquad f_B(x) = 1 + 2(1 - x_1).$$

The average fitness is

$$\begin{aligned}
\phi(x) &= x_1 f_A(x) + (1 - x_1) f_B(x) \\
        &= x_1(1 + 2x_1) + (1 - x_1)(1 + 2(1 - x_1)) \\
        &= x_1 + 2x_1^2 + (1 - x_1)(3 - 2x_1) \\
        &= x_1 + 2x_1^2 + 3 - 5x_1 + 2x_1^2 \\
        &= 3 - 4x_1 + 4x_1^2.
\end{aligned}$$

The replicator dynamics are

$$\begin{aligned}
\frac{dx_1}{dt} &= x_1(f_A(x) - \phi(x)) \\
                &= x_1\left((1 + 2x_1) - (3 - 4x_1 + 4x_1^2)\right) \\
                &= x_1(-2 + 6x_1 - 4x_1^2) \\
                &= -2x_1(2x_1^2 - 3x_1 + 1) \\
                &= -2x_1(2x_1 - 1)(x_1 - 1).
\end{aligned}$$

The fixed points are the solutions of $\frac{dx_1}{dt} = 0$:

$$x_1 \in \left\{0, \frac{1}{2}, 1\right\}.$$

The above answers the question; below is some code to confirm:
```python
import sympy as sym

# Symbol for the proportion of the population using app A.
x_1 = sym.Symbol("x_1")

# Fitness functions and the average fitness.
f_A = 1 + 2 * x_1
f_B = 1 + 2 * (1 - x_1)
phi = x_1 * f_A + (1 - x_1) * f_B
sym.simplify(phi)

# The replicator dynamics and its fixed points.
x_1_dash = x_1 * (f_A - phi)
sym.simplify(x_1_dash)
sym.solveset(x_1_dash, x_1)
```
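Run in a notebook, `sym.simplify(phi)` should return $4x_1^2 - 4x_1 + 3$ and `sym.solveset(x_1_dash, x_1)` should return $\left\{0, \frac{1}{2}, 1\right\}$, matching the hand calculations above.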
We use the replicator equation from Question 6:

$$\dot{x}_1 = \frac{dx_1}{dt} = -2x_1(2x_1 - 1)(x_1 - 1).$$

Euler's method with step size $h = 0.01$ gives

$$x_1^{(1)} = x_1^{(0)} + h\,\dot{x}_1\big|_{x_1^{(0)}}.$$

Here $x_1^{(0)} = 0.01$:

$$\dot{x}_1\big|_{0.01} = -2 \cdot 0.01 \cdot (2 \cdot 0.01 - 1) \cdot (0.01 - 1) = -2 \cdot 0.01 \cdot (-0.98) \cdot (-0.99) \approx -0.019404.$$

Thus

$$x_1^{(1)} = 0.01 + 0.01 \cdot (-0.019404) \approx 0.009806.$$

Interpretation:
The proportion of A-users decreases further, from 0.01 to about 0.0098. Starting near the pure B population, the dynamics move even closer to $(0, 1)$, consistent with the all-B population being evolutionarily stable in this model.
The above answers the question; here is some code to confirm the calculations.
```python
# One Euler step from x_1(0) = 0.01 using the symbolic derivative above.
h = 0.01
initial_x_1 = 0.01
initial_x_1 + h * x_1_dash.subs({x_1: initial_x_1})
```
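The last expression should evaluate to approximately $0.009806$, matching the hand calculation above.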
Here $x_1^{(0)} = 0.49$:

$$\dot{x}_1\big|_{0.49} = -2 \cdot 0.49 \cdot (2 \cdot 0.49 - 1) \cdot (0.49 - 1) = -2 \cdot 0.49 \cdot (-0.02) \cdot (-0.51) \approx -0.009996.$$

So

$$x_1^{(1)} = 0.49 + 0.01 \cdot (-0.009996) \approx 0.489900.$$

Interpretation:
Starting slightly below the mixed state $(\frac{1}{2}, \frac{1}{2})$, the proportion of A-users decreases further (from 0.49 to about 0.4899). A small perturbation away from the mixed state does not return; instead it is pushed further away. This indicates that the mixed state is not evolutionarily stable.
The above answers the question; here is some code to confirm the calculations.
```python
# One Euler step from x_1(0) = 0.49.
h = 0.01
initial_x_1 = 0.49
initial_x_1 + h * x_1_dash.subs({x_1: initial_x_1})
```
Here $x_1^{(0)} = 0.99$:

$$\dot{x}_1\big|_{0.99} = -2 \cdot 0.99 \cdot (2 \cdot 0.99 - 1) \cdot (0.99 - 1) = -2 \cdot 0.99 \cdot 0.98 \cdot (-0.01) \approx 0.019404.$$

Thus

$$x_1^{(1)} = 0.99 + 0.01 \cdot 0.019404 \approx 0.990194.$$

Interpretation:
The proportion of A-users increases slightly, from 0.99 to about 0.9902. Starting near the pure A population, the dynamics move even closer to $(1, 0)$, consistent with the all-A population also being evolutionarily stable.
The above answers the question; here is some code to confirm the calculations.
```python
# One Euler step from x_1(0) = 0.99.
h = 0.01
initial_x_1 = 0.99
initial_x_1 + h * x_1_dash.subs({x_1: initial_x_1})
```
Overall, the dynamics push the population towards one of the two pure
coordination states, and away from the mixed state.
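To illustrate this convergence numerically, here is a short sketch that iterates Euler's method from several starting points; the starting points, step size and number of steps are illustrative choices, and the factored replicator equation derived above is hard-coded for speed:

```python
# Iterate Euler's method on dx_1/dt = -2 x_1 (2 x_1 - 1)(x_1 - 1) from
# several illustrative starting points. The step size h = 0.01 and the
# 10,000 steps are arbitrary choices that give the flow time to settle.
h = 0.01
for x_start in (0.01, 0.49, 0.51, 0.99):
    x = x_start
    for _ in range(10_000):
        x += h * (-2 * x * (2 * x - 1) * (x - 1))
    print(x_start, "->", round(x, 6))
# Expected: 0.01 and 0.49 settle at 0, while 0.51 and 0.99 settle at 1.
```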
We now let the strength of the coordination benefit be a parameter $a \neq 0$:

$$f_A(x) = 1 + ax_1, \qquad f_B(x) = 1 + a(1 - x_1).$$

The average fitness is

$$\begin{aligned}
\phi(x) &= x_1 f_A(x) + (1 - x_1) f_B(x) \\
        &= x_1(1 + ax_1) + (1 - x_1)(1 + a(1 - x_1)) \\
        &= 2ax_1^2 - 2ax_1 + a + 1.
\end{aligned}$$

Then

$$\begin{aligned}
f_A(x) - \phi(x) &= (1 + ax_1) - (2ax_1^2 - 2ax_1 + a + 1) \\
                 &= a(-2x_1^2 + 3x_1 - 1) \\
                 &= -a(2x_1 - 1)(x_1 - 1).
\end{aligned}$$

Thus the replicator dynamics are

$$\frac{dx_1}{dt} = x_1(f_A(x) - \phi(x)) = -ax_1(2x_1 - 1)(x_1 - 1).$$

The fixed points satisfy $\frac{dx_1}{dt} = 0$, so again
$$x_1 \in \left\{0, \frac{1}{2}, 1\right\}.$$

To determine stability, we can consider all potential post entry populations or, equivalently, examine the sign of $\dot{x}_1$.

First let us consider the case $a > 0$:
For $0 < x_1 < \frac{1}{2}$: $x_1 > 0$, $2x_1 - 1 < 0$ and $x_1 - 1 < 0$, so

$$-ax_1(2x_1 - 1)(x_1 - 1) < 0.$$

Thus for the post entry population $x_1 = \frac{1}{2} - \epsilon$, $x_1(t)$ decreases and the flow moves away from $x_1 = \frac{1}{2}$ towards $x_1 = 0$.
For $\frac{1}{2} < x_1 < 1$: $x_1 > 0$, $2x_1 - 1 > 0$ and $x_1 - 1 < 0$, so

$$-ax_1(2x_1 - 1)(x_1 - 1) > 0.$$

Thus for the post entry population $x_1 = \frac{1}{2} + \epsilon$, $x_1(t)$ increases and the flow moves away from $x_1 = \frac{1}{2}$ towards $x_1 = 1$.
Hence for $a > 0$ the mixed state $x_1 = \frac{1}{2}$ is not evolutionarily stable: small perturbations are pushed further away, towards one of the pure states $x_1 = 0$ or $x_1 = 1$.
Now let us consider the case $a < 0$:
For $0 < x_1 < \frac{1}{2}$: $x_1 > 0$, $2x_1 - 1 < 0$ and $x_1 - 1 < 0$, so

$$-ax_1(2x_1 - 1)(x_1 - 1) > 0.$$

Thus for the post entry population $x_1 = \frac{1}{2} - \epsilon$, $x_1(t)$ increases and the flow moves back towards $x_1 = \frac{1}{2}$.
For $\frac{1}{2} < x_1 < 1$: $x_1 > 0$, $2x_1 - 1 > 0$ and $x_1 - 1 < 0$, so

$$-ax_1(2x_1 - 1)(x_1 - 1) < 0.$$

Thus for the post entry population $x_1 = \frac{1}{2} + \epsilon$, $x_1(t)$ decreases and the flow moves back towards $x_1 = \frac{1}{2}$.
If $a = 0$, the derivative is identically $0$ and thus all populations are stable.
Hence $x^* = (\frac{1}{2}, \frac{1}{2})$ is an evolutionarily stable strategy if and only if $a < 0$, that is, when the network effect is negative: users are worse off the more of their friends use the same app. There will only ever be an emergent population using both apps when the network effect is in fact negative.
The above answers the question. Here is some code to confirm the calculations and illustrate the sign of the derivative.
```python
# Introduce the network effect strength as a symbolic parameter.
a = sym.Symbol("a")

# Parametrised fitness functions and average fitness.
f_A = 1 + a * x_1
f_B = 1 + a * (1 - x_1)
phi = x_1 * f_A + (1 - x_1) * f_B
sym.simplify(phi)

# Parametrised replicator dynamics and its fixed points.
x_1_dash = x_1 * (f_A - phi)
sym.simplify(x_1_dash)
sym.solveset(x_1_dash, x_1)
```
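As a quick numerical sanity check of the $a < 0$ case (the values $a = -2$, $x_1 = 0.49$ and $h = 0.01$ are illustrative choices, not part of the question), a single Euler step moves a perturbed population back towards $\frac{1}{2}$:

```python
# One Euler step from a perturbed state under a negative network effect.
# a = -2 and the perturbed state 0.49 are illustrative values.
h = 0.01
perturbed = 0.49
perturbed + h * x_1_dash.subs({x_1: perturbed, a: -2})  # approximately 0.4901 > 0.49
```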
```python
import matplotlib.pyplot as plt
import numpy as np

# Rebuild the symbolic quantities so this cell is self-contained.
x_1 = sym.Symbol("x_1")
a = sym.Symbol("a")
f_A = 1 + a * x_1
f_B = 1 + a * (1 - x_1)
phi = x_1 * f_A + (1 - x_1) * f_B
x_1_dash = x_1 * (f_A - phi)

# Plot dx_1/dt over the unit interval for several values of a.
x_1_values = np.linspace(0, 1, 100)
plt.figure()
plt.plot(x_1_values, [x_1_dash.subs({x_1: x_value, a: -1}) for x_value in x_1_values], label="$a=-1$")
plt.plot(x_1_values, [x_1_dash.subs({x_1: x_value, a: -5}) for x_value in x_1_values], label="$a=-5$")
plt.plot(x_1_values, [x_1_dash.subs({x_1: x_value, a: 0}) for x_value in x_1_values], label="$a=0$")
plt.plot(x_1_values, [x_1_dash.subs({x_1: x_value, a: 1}) for x_value in x_1_values], label="$a=1$")
plt.plot(x_1_values, [x_1_dash.subs({x_1: x_value, a: 5}) for x_value in x_1_values], label="$a=5$")
plt.legend()
plt.xlabel("$x_1$")
plt.ylabel(r"$\frac{dx_1}{dt}$")
```