Example: Dividing the loot after a D&D battle¶
After a climactic boss fight, a party of adventurers must divide a hoard of
1,000 gold coins recovered from the defeated dragon.
The party includes:
- Ren the fighter, who absorbed the dragon’s attacks.
- Azel the wizard, who cast a decisive spell at the turning point.
- Quinn the bard, who provided support and distracted the dragon
with an improvised limerick.
Each character played a crucial role, but in very different ways. How can they
divide the treasure fairly?
Based on past encounters, the party has a sense of how much gold they might
have secured without the full team:
- Ren alone: 0 gold coins (they can’t get past the dragon alone)
- Azel alone: 0 gold coins (powerful spells, but too vulnerable on their own)
- Quinn alone: 0 gold coins (charming, but not a serious threat)
- Ren and Azel: 900 gold coins (they can defeat the dragon, but with great effort)
- Ren and Quinn: 400 gold coins (they can stall the dragon, but not defeat it)
- Azel and Quinn: 600 gold coins (Azel can burn the dragon while Quinn keeps it distracted)
- Ren, Azel, and Quinn: 1,000 gold coins (the full party defeats the dragon smoothly and decisively)
This is a classic example of a cooperative game, and understanding how to
allocate resources fairly in such settings is the focus of this chapter.
Theory¶
Definition: characteristic function game¶
A characteristic function game is given by a pair $G = (N, v)$
where $N$ is the number of players and
$v : 2^{\{1, 2, \dots, N\}} \to \mathbb{R}$ is a characteristic function which
maps every coalition of players $C \subseteq \{1, 2, \dots, N\}$ to a payoff.
Example: The characteristic function game of the D&D battle¶
For the D&D battle, we have $N = 3$ (labelling Ren as player 1, Azel as player 2, and Quinn as player 3) and the
characteristic function $v$ is given by:

$$
v(C) =
\begin{cases}
0 & \text{if } C \in \{\emptyset, \{1\}, \{2\}, \{3\}\} \\
900 & \text{if } C = \{1, 2\} \\
400 & \text{if } C = \{1, 3\} \\
600 & \text{if } C = \{2, 3\} \\
1000 & \text{if } C = \{1, 2, 3\}
\end{cases}
\tag{1}
$$
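As an aside, a characteristic function such as (1) can be written down directly as a Python dictionary that maps each coalition (a tuple of player labels) to its payoff. This is the same representation used by the CoopGT library in the Programming section below, and the short sketches later in this chapter reuse this characteristic_function variable:
# The characteristic function (1): keys are coalitions, values are payoffs in gold coins.
characteristic_function = {
    (): 0,
    (1,): 0,
    (2,): 0,
    (3,): 0,
    (1, 2): 900,
    (1, 3): 400,
    (2, 3): 600,
    (1, 2, 3): 1000,
}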
Definition: Monotone characteristic function game¶
A characteristic function game $G = (N, v)$ is called monotone if it satisfies
$v(C_2) \leq v(C_1)$ for all $C_2 \subseteq C_1 \subseteq \{1, 2, \dots, N\}$.
Example: The D&D battle is a monotone characteristic function game¶
The function $v$ defined in (1) is monotone.
However, any game in which $v(\{1, 2\}) > v(\{1, 2, 3\})$ is not:
Ren and Azel would receive more without Quinn than with them, violating monotonicity.
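Monotonicity can be checked by brute force in a few lines of Python. This is a minimal sketch reusing the characteristic_function dictionary defined after (1); is_monotone here is a local helper, not the CoopGT function used later in the chapter:
# Monotone if no coalition is worth more than any coalition that contains it.
def is_monotone(characteristic_function):
    coalitions = list(characteristic_function)
    return all(
        characteristic_function[smaller] <= characteristic_function[larger]
        for smaller in coalitions
        for larger in coalitions
        if set(smaller) <= set(larger)
    )

is_monotone(characteristic_function)  # True for the game in (1)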
Definition: superadditive characteristic function game¶
A characteristic function game $G = (N, v)$ is called superadditive if it satisfies
$v(C_1 \cup C_2) \geq v(C_1) + v(C_2)$ for all coalitions
$C_1, C_2 \subseteq \{1, 2, \dots, N\}$ with $C_1 \cap C_2 = \emptyset$.
Example: The D&D battle is a superadditive characteristic function game¶
The function $v$ defined in (1) is superadditive.
However, any game with disjoint coalitions $C_1$ and $C_2$ for which
$v(C_1) + v(C_2) > v(C_1 \cup C_2)$ is not:
such coalitions would be worth more apart than together, so the game violates superadditivity.
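Superadditivity can be checked by brute force in the same spirit. Again a minimal sketch over the characteristic_function dictionary defined after (1); is_superadditive is a local helper:
# Superadditive if disjoint coalitions are never worth more apart than together.
def is_superadditive(characteristic_function):
    coalitions = list(characteristic_function)
    for c_1 in coalitions:
        for c_2 in coalitions:
            if set(c_1) & set(c_2):
                continue  # the condition only applies to disjoint coalitions
            union = tuple(sorted(set(c_1) | set(c_2)))
            if characteristic_function[c_1] + characteristic_function[c_2] > characteristic_function[union]:
                return False
    return True

is_superadditive(characteristic_function)  # True for the game in (1)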
Definition: Payoff vector¶
When we talk about a solution to a characteristic function game $G = (N, v)$, we mean a
payoff vector $x = (x_1, x_2, \dots, x_N) \in \mathbb{R}^N$ that allocates the value of
the grand coalition among the players. In particular, $x$ must satisfy:

$$
\sum_{i=1}^{N} x_i = v(\{1, 2, \dots, N\})
$$
Example: A possible payoff vector¶
One possible payoff vector for (1) is one in which Quinn receives 450 of the 1,000 gold coins, with the remaining 550 shared between Ren and Azel. This would seem unfair, as Quinn gets 450 of the gold. To find a “fair” distribution of the value of the grand coalition we must define what is meant by “fair”. We require four desirable properties:
- Efficiency;
- Null player;
- Symmetry;
- Additivity.
Definition: Efficiency¶
Given a game $G = (N, v)$, a payoff vector $x$ is efficient if:

$$
\sum_{i=1}^{N} x_i = v(\{1, 2, \dots, N\})
$$
Example: An efficient Payoff Vector for the D&D Battle¶
An efficient payoff vector is one that shares all 1,000 gold coins among the party, like the payoff vector described above.
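Checking efficiency is a one-liner in Python. A minimal sketch, where the candidate payoff vector below is made up purely for illustration:
# A made-up payoff vector (Ren, Azel, Quinn) that shares all 1,000 coins.
payoff_vector = (300, 400, 300)
sum(payoff_vector) == characteristic_function[(1, 2, 3)]  # True: the vector is efficient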
Definition: Null Player Property¶
Given a game $G = (N, v)$, a payoff vector $x$ satisfies the null player property if, for every player $i$:
If $v(C \cup \{i\}) = v(C)$ for all coalitions $C \subseteq \{1, 2, \dots, N\} \setminus \{i\}$, then:

$$
x_i = 0
$$
Example: The Null Player Property in the D&D Battle¶
In Example: Dividing the loot after a D&D battle, Quinn contributes very little, yet still adds value to every non-empty coalition he joins. For instance, without Quinn, the grand coalition would only obtain 900 coins. If, however, a member of the party made no difference to the value of any coalition, the null player property dictates that this player should receive a payoff of 0.
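A rough way to search for null players in Python, reusing the characteristic_function dictionary defined after (1); null_players is a local helper, and for the game in (1) it returns an empty list:
# A player is null if adding them to any coalition never changes its value.
def null_players(characteristic_function, players=(1, 2, 3)):
    null = []
    for player in players:
        without_player = [c for c in characteristic_function if player not in c]
        if all(
            characteristic_function[tuple(sorted(set(c) | {player}))] == characteristic_function[c]
            for c in without_player
        ):
            null.append(player)
    return null

null_players(characteristic_function)  # []: every party member changes the value of some coalition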
Definition: Symmetry Property¶
Given a game $G = (N, v)$, a payoff vector $x$ satisfies the symmetry property if, for any pair of players $i, j$:
If $v(C \cup \{i\}) = v(C \cup \{j\})$ for all coalitions $C \subseteq \{1, 2, \dots, N\} \setminus \{i, j\}$, then:

$$
x_i = x_j
$$
Example: The Symmetry Property in the D&D Battle¶
In Example: Dividing the loot after a D&D battle, no two players contribute equally to every coalition they join. If there were such players, the symmetry property would guarantee that they each receive the same amount of gold.
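A similar sketch checks whether a given pair of players is symmetric; are_symmetric is a local helper, and in the game of (1) no pair is symmetric:
# Players i and j are symmetric if they add the same value to every coalition containing neither.
def are_symmetric(characteristic_function, i, j):
    coalitions = [c for c in characteristic_function if i not in c and j not in c]
    return all(
        characteristic_function[tuple(sorted(set(c) | {i}))]
        == characteristic_function[tuple(sorted(set(c) | {j}))]
        for c in coalitions
    )

are_symmetric(characteristic_function, 1, 2)  # False: Ren adds 400 to {Quinn} but Azel adds 600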
Definition: Additivity Property¶
Given two games $G_1 = (N, v_1)$ and $G_2 = (N, v_2)$, define their sum $G_1 + G_2 = (N, v_1 + v_2)$ by:

$$
(v_1 + v_2)(C) = v_1(C) + v_2(C) \quad \text{for all coalitions } C \subseteq \{1, 2, \dots, N\}
$$

A payoff vector $x$ satisfies the additivity property if:

$$
x_i(G_1 + G_2) = x_i(G_1) + x_i(G_2) \quad \text{for all players } i
$$
Example: The Additivity Property in the D&D Battle¶
In Example: Dividing the loot after a D&D battle, if there were in fact two separate battles,
distributing the gold for each battle separately and summing the results should
yield the same allocation as treating both battles as a single event with the
combined total reward.
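Constructing the sum of two games is a dictionary comprehension away. In the sketch below, characteristic_function is the game from (1) and second_battle is an entirely made-up second encounter used only to illustrate the construction:
# An entirely made-up second battle, used only to illustrate building a sum game.
second_battle = {
    (): 0,
    (1,): 0,
    (2,): 0,
    (3,): 100,
    (1, 2): 100,
    (1, 3): 200,
    (2, 3): 200,
    (1, 2, 3): 500,
}
# The sum game gives every coalition the total of its values in the two games.
sum_game = {
    coalition: characteristic_function[coalition] + second_battle[coalition]
    for coalition in characteristic_function
}
sum_game[(1, 2, 3)]  # 1500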
Definition: Predecessors¶
Given a permutation $\pi$ of $\{1, 2, \dots, N\}$, the set of predecessors of player $i$ in $\pi$ is denoted by $S_\pi(i)$ and defined as:

$$
S_\pi(i) = \{j \in \{1, 2, \dots, N\} \mid \pi(j) < \pi(i)\}
$$

where $\pi(j)$ denotes the position of player $j$ in the permutation $\pi$.
Example: The predecessors for the D&D Battle¶
For Example: Dividing the loot after a D&D battle we have $N = 3$ if we assume the ordering Ren $= 1$, Azel $= 2$, Quinn $= 3$. Table 1 gives the predecessors of each individual for each of the $3! = 6$ permutations of $\{1, 2, 3\}$.
Table 1: The predecessors of each individual for each permutation
$\pi$ | $S_\pi(1)$ (Predecessors of Ren) | $S_\pi(2)$ (Predecessors of Azel) | $S_\pi(3)$ (Predecessors of Quinn) |
---|---|---|---|
$(1, 2, 3)$ | $\emptyset$ | $\{1\}$ | $\{1, 2\}$ |
$(1, 3, 2)$ | $\emptyset$ | $\{1, 3\}$ | $\{1\}$ |
$(2, 1, 3)$ | $\{2\}$ | $\emptyset$ | $\{1, 2\}$ |
$(2, 3, 1)$ | $\{2, 3\}$ | $\emptyset$ | $\{2\}$ |
$(3, 1, 2)$ | $\{3\}$ | $\{1, 3\}$ | $\emptyset$ |
$(3, 2, 1)$ | $\{2, 3\}$ | $\{3\}$ | $\emptyset$ |
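The predecessor sets in Table 1 can be reproduced with a short Python sketch using itertools.permutations; predecessors is a local helper and the players are labelled 1, 2, 3 as above:
import itertools

# The predecessors of a player are those appearing before them in the permutation.
def predecessors(permutation, player):
    return set(permutation[: permutation.index(player)])

for permutation in itertools.permutations((1, 2, 3)):
    # Print the predecessor sets of Ren (1), Azel (2) and Quinn (3) in turn.
    print(permutation, [predecessors(permutation, player) for player in (1, 2, 3)])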
Definition: Marginal Contribution¶
Given a game $G = (N, v)$ and a permutation $\pi$ of $\{1, 2, \dots, N\}$, the marginal contribution of player $i$ with respect to $\pi$ is defined as:

$$
\Delta_\pi^G(i) = v(S_\pi(i) \cup \{i\}) - v(S_\pi(i))
$$
Example: Marginal Contributions in the D&D Battle¶
For Example: Dividing the loot after a D&D battle, with the ordering Ren $= 1$, Azel $= 2$, Quinn $= 3$ as before, Table 2 shows the marginal contribution of each player in each permutation using (1).
Table 2: Marginal contributions of each individual for each permutation
$\pi$ | $\Delta_\pi^G(1)$ (Ren) | $\Delta_\pi^G(2)$ (Azel) | $\Delta_\pi^G(3)$ (Quinn) |
---|---|---|---|
$(1, 2, 3)$ | 0 | 900 | 100 |
$(1, 3, 2)$ | 0 | 600 | 400 |
$(2, 1, 3)$ | 900 | 0 | 100 |
$(2, 3, 1)$ | 400 | 0 | 600 |
$(3, 1, 2)$ | 400 | 600 | 0 |
$(3, 2, 1)$ | 400 | 600 | 0 |
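Table 2 can likewise be reproduced programmatically. A minimal sketch reusing the characteristic_function dictionary defined after (1); marginal_contribution is a local helper:
import itertools

# The value a player adds when joining the coalition of their predecessors.
def marginal_contribution(characteristic_function, permutation, player):
    before = tuple(sorted(permutation[: permutation.index(player)]))
    after = tuple(sorted(before + (player,)))
    return characteristic_function[after] - characteristic_function[before]

for permutation in itertools.permutations((1, 2, 3)):
    # One row of Table 2: the marginal contributions of Ren (1), Azel (2) and Quinn (3).
    print(permutation, [marginal_contribution(characteristic_function, permutation, player) for player in (1, 2, 3)])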
We are now ready to define the Shapley value for any cooperative game $G = (N, v)$.
Definition: Shapley Value¶
Given a cooperative game $G = (N, v)$, the Shapley value of player $i$ is denoted by $\phi_i(G)$ and defined as:

$$
\phi_i(G) = \frac{1}{N!} \sum_{\pi \in \Pi_N} \Delta_\pi^G(i)
$$

where $\Pi_N$ denotes the set of all permutations of $\{1, 2, \dots, N\}$.
Example: The Shapley Value for the D&D Battle¶
For Example: Dividing the loot after a D&D battle the Shapley value is the vector with entries corresponding to the average of each of the columns of Table 2.
Giving:

$$
\phi(G) = \left(\frac{2100}{6}, \frac{2700}{6}, \frac{1200}{6}\right) = (350, 450, 200)
$$

so Ren receives 350 gold coins, Azel 450, and Quinn 200.
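This value can be confirmed with a short from-scratch sketch that averages the marginal contributions over all permutations; shapley_value here is a local helper, while the CoopGT library used in the Programming section below implements the same calculation:
import itertools

# The Shapley value: average each player's marginal contribution over all permutations.
def shapley_value(characteristic_function, players=(1, 2, 3)):
    permutations = list(itertools.permutations(players))
    values = {}
    for player in players:
        total = 0
        for permutation in permutations:
            before = tuple(sorted(permutation[: permutation.index(player)]))
            after = tuple(sorted(before + (player,)))
            total += characteristic_function[after] - characteristic_function[before]
        values[player] = total / len(permutations)
    return values

shapley_value(characteristic_function)  # {1: 350.0, 2: 450.0, 3: 200.0}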
Exercises¶
Exercise: Shapley Value Computation and Properties¶
For the following cooperative games:
- Verify whether the game is monotonic.
- Verify whether the game is superadditive.
- Obtain the Shapley value.
Exercise: Interpreting linear model performance with the Shapley value¶
In statistics and machine learning, a linear model predicts a target variable $y$ as a weighted sum of input features $x_1, x_2, \dots$. The coefficient of determination, denoted $R^2$, measures how well the model explains the variance in the data. It takes values between 0 and 1, with higher values indicating better explanatory power.
In cooperative game theory, a characteristic function maps every coalition (subset of players) to a value. In this exercise, we will interpret each feature as a player, and define the value of a coalition to be the $R^2$ of the model trained on those features.
Suppose we are predicting a target variable $y$ using a linear model with three features:

$$
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \epsilon
$$
Below are the $R^2$ values obtained from fitting models to different subsets of
the features. These were generated using synthetic data (you can refer to
main.py, available at /_static/data-for-shapley-regression-tutorial/main.py,
for the code). A rough sketch of how such values can be produced is also given after the exercise tasks below.
Model | $R^2$ |
---|---|
$x_1$ | 0.122 |
$x_2$ | 0.097 |
$x_3$ | 0.551 |
$x_1, x_2$ | 0.174 |
$x_1, x_3$ | 0.581 |
$x_2, x_3$ | 0.620 |
$x_1, x_2, x_3$ | 0.623 |
- Define the characteristic function $v$ for each subset of $\{x_1, x_2, x_3\}$ using the table above.
- Compute the Shapley value for each feature with respect to the characteristic function $v$.
- Interpret the result: which feature contributes the most explanatory power? Which contributes the least?
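The following sketch shows one way such $R^2$ values could be produced on synthetic data with scikit-learn. It is not the main.py script referenced above: the coefficients, noise, and random seed are made up, so the resulting numbers will differ from those in the table.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up synthetic data: three features with different amounts of signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(size=200)

# Fit a linear model on every non-empty subset of features and record its R^2.
r_squared = {}
for size in (1, 2, 3):
    for subset in itertools.combinations((0, 1, 2), size):
        model = LinearRegression().fit(X[:, subset], y)
        r_squared[subset] = model.score(X[:, subset], y)
print(r_squared)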
Exercise: Additivity and Symmetry¶
Consider the following two cooperative games, $G_1 = (N, v_1)$ and $G_2 = (N, v_2)$, defined on the same set of players:
Game $G_1$:
Game $G_2$:
- Verify that both games satisfy the symmetry property.
- Obtain the Shapley value for $G_1$ and $G_2$ individually.
- Construct the game $G_1 + G_2$ and calculate its Shapley value.
- Verify that the Shapley value of $G_1 + G_2$ equals the sum of the Shapley values of $G_1$ and $G_2$.
Exercise: Null Player and Marginal Contributions¶
Consider the following cooperative game $G = (N, v)$:
- Identify any null players in this game.
- Compute the marginal contributions of each player.
- Calculate the Shapley value and confirm it respects the null player property.
Exercise: Properties of the Shapley Value¶
Prove that the Shapley value satisfies the following properties:
- Efficiency
- Null player property
- Symmetry
- Additivity
Note that this does not prove uniqueness; that is, other vectors might satisfy these properties (though in fact, the Shapley value is uniquely defined by them).
Programming¶
The CoopGT library has functionality to verify properties of a characteristic function game and also compute the Shapley value.
Here let us define a characteristic function and check if it is valid:
import coopgt.characteristic_function_properties
characteristic_function = {
    (): 0,
    (1,): 0,
    (2,): 0,
    (3,): 0,
    (1, 2): 900,
    (1, 3): 400,
    (2, 3): 600,
    (1, 2, 3): 1000,
}
coopgt.characteristic_function_properties.is_valid(characteristic_function=characteristic_function)
Here we can check if it is monotone:
coopgt.characteristic_function_properties.is_monotone(
characteristic_function=characteristic_function
)
Here we can check if it is superadditive:
coopgt.characteristic_function_properties.is_superadditive(
characteristic_function=characteristic_function
)
Let us compute the Shapley value:
import coopgt.shapley_value
coopgt.shapley_value.calculate(characteristic_function=characteristic_function)
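For the game defined above this should give a Shapley value of 350 for player 1 (Ren), 450 for player 2 (Azel), and 200 for player 3 (Quinn), matching the calculation in the Theory section; the exact format of the returned object depends on the version of CoopGT.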
Notable Research¶
The Shapley value was first introduced by Shapley in Shapley & others, 1953, a foundational paper in cooperative game theory. Although Shapley is best known for this concept, he was awarded the Nobel Prize in Economics for his later work on matching games.
In large games, the computational cost of calculating the Shapley value can be prohibitive. While the problem is not necessarily intractable in the formal sense, as discussed in Deng & Papadimitriou, 1994, the factorial number of permutations makes exact computation impractical. One of the earliest practical approaches to this challenge is Castro et al., 2009, which proposes Monte Carlo sampling of permutations as an efficient approximation method.
One of the most impactful application areas of the Shapley value in recent years has been in model explainability. As explored in the exercise on interpreting linear model performance with the Shapley value, the Shapley value provides a principled way to attribute a model’s output to its input features. One of the first papers to formalize this idea was Strumbelj & Kononenko, 2010. This approach has since seen wide application, particularly in healthcare, where a comprehensive overview is provided in Jin et al., 2022.
Conclusion¶
In this chapter, we introduced the theory of cooperative games, where groups of players can form coalitions and share the resulting value. We focused on characteristic function games, where the value of each coalition is known, and explored how to fairly distribute this value using the Shapley value.
We discussed the key axioms that characterize fairness (efficiency, null player, symmetry, and additivity) and saw how they uniquely determine the Shapley value. We also explored computational aspects, applications to machine learning, and interactive programming tools to calculate Shapley values.
Table 3 gives a summary of the concepts of this chapter.
Table 3: Summary of cooperative game concepts
Concept | Description |
---|---|
Characteristic function | Maps each subset (coalition) of players to a real-valued payoff |
Monotonicity | Adding players to a coalition cannot decrease its value |
Superadditivity | The sum of the values of two disjoint coalitions is at most the value of their union |
Payoff vector | A distribution of the total value among all players |
Marginal contribution | The added value a player brings to a coalition |
Predecessors in permutation | Players who appear before a given player in a permutation |
Shapley value | The average marginal contribution: a unique, fair payoff satisfying four axioms |
- Shapley, L. S., & others. (1953). A value for n-person games.
- Deng, X., & Papadimitriou, C. H. (1994). On the complexity of cooperative solution concepts. Mathematics of Operations Research, 19(2), 257–266.
- Castro, J., Gómez, D., & Tejada, J. (2009). Polynomial calculation of the Shapley value based on sampling. Computers & Operations Research, 36(5), 1726–1730.
- Strumbelj, E., & Kononenko, I. (2010). An efficient explanation of individual classifications using game theory. The Journal of Machine Learning Research, 11, 1–18.
- Jin, D., Sergeeva, E., Weng, W.-H., Chauhan, G., & Szolovits, P. (2022). Explainable deep learning in healthcare: A methodological survey from an attribution view. WIREs Mechanisms of Disease, 14(3), e1548.