Game Theory and Morality
Moshe Hoffman, Erez Yoeli, and Carlos David Navarrete
M. Hoffman (*) • E. Yoeli
Program for Evolutionary Dynamics, Harvard University, One Brattle Square, Suite 6, Cambridge, MA 02138, USA
e-mail: moshehoffman@fas.harvard.edu

C.D. Navarrete
Department of Psychology, and the Ecology, Evolutionary Biology and Behavior Program, Michigan State University, East Lansing, MI, USA

© Springer International Publishing Switzerland 2016
T.K. Shackelford, R.D. Hansen (eds.), The Evolution of Morality, Evolutionary Psychology, DOI 10.1007/978-3-319-19671-8_14
Introduction
Consider the following puzzling aspects of our morality:
1. Many of us share the view that one should not use people, even if it benefits them
to be used, as Kant intoned in his second formulation of the categorical imperative:
“Act in such a way that you treat humanity, whether in your own person or
in the person of any other, never merely as a means to an end, but always at the
same time as an end” (Kant, 1997 ). Consider dwarf tossing, where dwarfs wearing
protective padding are thrown for amusement, usually at a party or pub. It is
viewed as a violation of dwarfs’ basic dignity to use them as a means for amusement,
even though dwarfs willingly engage in the activity for economic gain.
Many jurisdictions ban dwarf tossing on the grounds that the activity violates
dwarfs’ basic human rights, and these laws have withstood lawsuits raised by
dwarfs suing over the loss of employment (!).
2. Charitable giving is considered virtuous, but little attention is paid to how just the cause is or how efficiently the charity operates. For example, Jewish and Christian traditions advocate giving 10% of one's income to charity, but make no mention of the
importance of evaluating the cause or avoiding wasteful charities. The intuition
that giving to charity is a moral good regardless of efficacy results in the persistence
of numerous inefficient and corrupt charities. For example, the Wishing
Well Foundation has, for nearly a decade, ranked as one of CharityNavigator.com's most inefficient charities. Yet its mission of fulfilling wishes for children
with terminal illnesses is identical to that of the more efficient Make-A-Wish
Foundation. Worse yet, scams masquerading as charities persist. One man operating
as the US Navy Veterans Association collected over 100 million dollars—over 7 years!—before anyone bothered to investigate the charity.
3. In every culture and age, injunctions against murder have existed. If there is one
thing much of humanity seems to agree on, it’s that ending the life of another
without just cause is among the worst of moral violations. Yet cultures
don’t consider the loss of useful life years in their definition, even though it is
relevant to the measure of harm done by the murder. Why is our morality so
much more sensitive to whether a life was lost than to how much life was lost?
There are numerous other examples of how our moral intuitions appear to be rife
with logical inconsistencies. In this chapter, we use game theory to provide insight
into a range of moral puzzles like those described above.
What Is Game Theory and Why Is It Relevant?
In this section, we review the definition of a game and of a Nash equilibrium, then discuss how evolution and learning processes would yield moral intuitions consistent with Nash equilibria.
Game theory is a tool for the analysis of social interactions. In a game, the payoff
to each player depends on their own actions as well as the actions of others. Consider the Prisoner's Dilemma (Chammah & Rapoport, 1965; see Fig. 1), a model that captures the paradox of cooperation. Each of two players chooses whether to cooperate or to defect. Cooperating reduces a player's payoff by c > 0 while increasing the other player's payoff by b > c. Players could be vampire bats with the option of
sharing blood, or firms with the option of letting each other use their databases, or
premed students deciding whether to take the time to help one another to study. The
payoffs, b and c , may represent likelihood of surviving and leaving offspring, profits,
or chance of getting into a good medical school.
Solutions to such games are analyzed using the concept of a Nash equilibrium¹—
a specification of each player’s action such that no player can increase his payoff by
deviating unilaterally. In the Prisoner’s Dilemma, the only Nash equilibrium is for
neither player to cooperate, since regardless of what the other player does, cooperation
reduces one’s own payoff.
¹ Note that we focus on the concept of Nash equilibrium in this chapter and not evolutionarily stable strategy (ESS), a refinement of Nash that might be more familiar to an evolutionary audience. ESSs are the Nash equilibria that are most relevant in evolutionary contexts. However, ESS is not well defined in many of our games, so we will focus on the insights garnered from Nash and directly discuss evolutionary dynamics when appropriate.
Fig. 1 The Prisoner's Dilemma. Player 1's available strategies (C and D, which stand for cooperate and defect, respectively) are represented as rows. Player 2's available strategies (also C and D) are represented as columns. Player 1's payoffs are represented at the intersection of each row and column. For example, if player 1 plays D and player 2 plays C, player 1's payoff is b. The Nash equilibrium of the game is (D, D). It is indicated with a circle.
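To make the equilibrium logic concrete, here is a minimal Python sketch (ours, not from the chapter; the values b = 3 and c = 1 are illustrative, but the payoff structure follows Fig. 1) that enumerates all strategy profiles and checks which survive unilateral deviation:

```python
from itertools import product

b, c = 3.0, 1.0  # benefit conferred on the other and cost to oneself, b > c > 0

# payoffs[(s1, s2)] = (player 1's payoff, player 2's payoff), as in Fig. 1
payoffs = {
    ("C", "C"): (b - c, b - c),
    ("C", "D"): (-c, b),
    ("D", "C"): (b, -c),
    ("D", "D"): (0.0, 0.0),
}

def is_nash(s1, s2):
    """True if neither player can gain by deviating unilaterally."""
    u1, u2 = payoffs[(s1, s2)]
    return (all(payoffs[(d, s2)][0] <= u1 for d in "CD")
            and all(payoffs[(s1, d)][1] <= u2 for d in "CD"))

print([s for s in product("CD", repeat=2) if is_nash(*s)])  # -> [('D', 'D')]
```

Running the check prints only (D, D): mutual defection is the sole profile from which neither player can profitably deviate.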
Game theory has traditionally been applied in situations where players are rational
decision makers who deliberately maximize their payoffs, such as pricing
decisions of firms (Tirole, 1988 ) or bidding in auctions (Milgrom & Weber, 1982 ).
In these contexts, behavior is expected to be consistent with a Nash equilibrium,
otherwise one of the agents—who are actively deliberating about what to do—
would realize she could benefit from deviating from the prescribed strategy.
However, game theory also applies to evolutionary and learning processes, where
agents do not deliberately choose their behavior in the game, but play according to
strategies with which they are born, imitate, or otherwise learn. Agents play a game
and then “reproduce” based on their payoffs, where reproduction represents offspring,
imitation, or learning. The new generation then plays the game, and so on. In such settings, if a mutant does better (mutation can be genetic or can happen when agents experiment), then she is more likely to reproduce, or to have her behavior imitated or reinforced, causing the behavior to spread. This intuition is formalized using models
of evolutionary dynamics (e.g., Nowak, 2006 ).
The key result for evolutionary dynamic models is that, except under extreme
conditions, behavior converges to Nash equilibria. This result rests on one simple,
noncontroversial assumption shared by all evolutionary dynamics: Behaviors that
are relatively successful will increase in frequency. Based on this logic, game theory
models have been fruitfully applied in biological contexts to explain phenomena
such as animal sex ratios (Fisher, 1958), territoriality (Maynard Smith & Price, 1973), cooperation
(Trivers, 1971 ), sexual displays (Zahavi, 1975 ), and parent–offspring conflict
(Trivers, 1974 ). More recently, evolutionary dynamic models have been applied
in human contexts where conscious deliberation is believed to not play an important
role, such as in the adoption of religious rituals (Sosis & Alcorta, 2003 ), in the
expression and experience of emotion (Frank, 1988 ; Winter, 2014 ), and in the use
of indirect speech (Pinker, Nowak, & Lee, 2008 ).
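To illustrate the convergence claim above, here is a minimal replicator-dynamics sketch (again ours, with the same illustrative b and c) in which a population playing the one-shot Prisoner's Dilemma starts at 99% cooperators and is driven to the all-defect Nash equilibrium, since defectors always earn a strictly higher average payoff:

```python
b, c = 3.0, 1.0   # benefit and cost of cooperating, b > c > 0
x = 0.99          # initial fraction of cooperators in the population
offset = c + 1.0  # constant added to keep fitnesses positive

for generation in range(200):
    # expected payoff of each strategy against a randomly matched opponent
    f_coop = x * (b - c) + (1 - x) * (-c) + offset
    f_defect = x * b + (1 - x) * 0.0 + offset
    f_avg = x * f_coop + (1 - x) * f_defect
    # replicator update: a strategy's share grows with its relative payoff
    x = x * f_coop / f_avg

print(f"fraction cooperating after 200 generations: {x:.6f}")  # ~0
```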
Crucially for this chapter, because our behaviors are mediated by moral intuitions
and ideologies, if our moral behaviors converge to Nash, so must the intuitions and
ideologies that motivate them. The resulting intuitions and ideologies will bear
the signature of their game-theoretic origins, and this signature will shed light on
the puzzling, counterintuitive, and otherwise hard-to-explain features of our moral
intuitions, as exemplified by our motivating examples.
In order for game theory to be relevant to understanding our moral intuitions and
ideologies, we need only the following simple assumption: Moral intuitions and
ideologies that lead to higher payoffs become more frequent . This assumption can
be met if moral intuitions that yield higher payoffs are held more tenaciously, are
more likely to be imitated, or are genetically encoded. For example, if every time
you transgress by commission you are punished, but every time you transgress by
omission you are not, you will start to intuit that commission is worse than
omission.
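As a toy illustration of this learning story (our own sketch, not a model from the chapter), the loop below reinforces moral aversions in proportion to experienced punishment; because commissions are reliably punished and omissions never are, the agent ends up intuiting that commission is far worse:

```python
import random

random.seed(0)
aversion = {"commission": 0.0, "omission": 0.0}  # learned moral aversions
lr = 0.1  # learning rate

for _ in range(1000):
    act = random.choice(["commission", "omission"])
    punished = (act == "commission")  # commissions are reliably punished
    # reinforcement: nudge the aversion toward 1 if punished, toward 0 if not
    target = 1.0 if punished else 0.0
    aversion[act] += lr * (target - aversion[act])

print(aversion)  # aversion to commission ~1.0, to omission ~0.0
```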
Rights and the Hawk–Dove Game
In this section we will argue that just as the Hawk–Dove model explains animal territoriality (Maynard Smith & Price, 1973, reviewed shortly), it also sheds light on our sense of rights (DeScioli & Karpoff, 2014; Gintis, 2007; Myerson, 2004).
Let us begin by asking the following question (Myerson, 2004 ): “Why [does] a
passenger pay a taxi driver after getting out of the cab in a city where she is visiting
for one day, not expecting to return?” If the cabby complains to the authorities, the
passenger could plausibly claim that she had paid in cash. The answer, of course, is
that the cabby would feel that the money the passenger withheld was his—that he
had a right to be paid for his service—and get angry, perhaps making a scene or
even starting a fight. Likewise, if the passenger did in fact pay, but the cabby
demanded money a second time, the passenger would similarly be infuriated. This
example illustrates that people have powerful intuitions regarding rightful ownership.
In this section, we explore what the Hawk–Dove game can teach us about our
sense of property rights.
The reader is likely familiar with the Hawk–Dove game, a model of disputes over contested resources. In the Hawk–Dove game, each player decides whether to fight over a resource or to acquiesce (i.e., play Hawk or Dove). If one fights and the other does not, the fighter gets the resource, worth v. If both fight, each pays a cost c and they split the resource; that is, each gets v/2 − c. If neither fights, they split the resource and each gets v/2. As long as v/2 < c, then in any stable Nash equilibrium, one player fights and the other acquiesces. That is, if one player expects the other to fight, she is better off acquiescing, and vice versa (see Fig. 2).
Crucially, it is not just a Nash equilibrium for one player to always play Hawk
and the other to always play Dove. It is also an equilibrium for both players to condition
whether they play Hawk on an uncorrelated asymmetry—a cue or event that does not necessarily affect the payoffs, but does distinguish between the players, such as who arrived at the territory first or who built the object. If one conditions on the event (say, plays Hawk when she arrives first), then it is optimal for the other to condition on the event (to play Dove when the other arrives first).

Fig. 2 The Hawk–Dove game. The Nash equilibria of the game are circled.
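Replacing the payoff matrix in the earlier sketch with Hawk–Dove payoffs (again our own illustrative numbers, chosen so that v/2 < c) confirms that the pure Nash equilibria are exactly the asymmetric profiles:

```python
from itertools import product

v, c = 2.0, 2.0  # resource value and cost of fighting, chosen so v/2 < c

# payoffs[(s1, s2)] = (player 1's payoff, player 2's payoff), as in Fig. 2
payoffs = {
    ("H", "H"): (v / 2 - c, v / 2 - c),
    ("H", "D"): (v, 0.0),
    ("D", "H"): (0.0, v),
    ("D", "D"): (v / 2, v / 2),
}

def is_nash(s1, s2):
    """True if neither player can gain by deviating unilaterally."""
    u1, u2 = payoffs[(s1, s2)]
    return (all(payoffs[(d, s2)][0] <= u1 for d in "HD")
            and all(payoffs[(s1, d)][1] <= u2 for d in "HD"))

print([s for s in product("HD", repeat=2) if is_nash(*s)])
# -> [('H', 'D'), ('D', 'H')]: one fights, the other acquiesces
```

Conditioning on who arrived first simply selects one of these two equilibria in each encounter, so neither party gains by deviating.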
As our reader is likely aware, this was the logic provided by Maynard Smith to
explain animal territoriality—why animals behave aggressively to defend territory
that they have arrived at first, even if incumbency does not provide a defensive
advantage and even when facing a more formidable intruder. Over the years, evidence
has amassed to support Maynard Smith’s explanation, such as experimental
manipulation of which animal arrives first (Davies, 1978 ; Sigg & Falett, 1985 ).
Like other animals, we condition how aggressively we defend a resource on
whether we arrive first. Because our behaviors are motivated by beliefs, we are also
more likely to believe that the resource is “ours” when we arrive first. Studies have
shown these effects with children’s judgments of ownership, in ethnographies of
prelegal societies, and in computer games. In one such illustration, DeScioli and
Wilson ( 2011 ) had research subjects play a computer game in which they contested
a berry patch. Subjects who ended up keeping control of the patch usually arrived
first, and this determined the outcome more often than differences in fighting ability
in the game.
This sense of ownership is codified in our legal systems, as illustrated by the quip
“possession is 9/10ths of the law,” and in a study involving famous legal property
cases conducted by DeScioli and Karpoff (2014). In a survey, these researchers
asked participants to identify the rightful owner of a lost item, after reading vignettes
based on famous property rights legal cases. Participants consistently identified the
possessor of the found item as its rightful owner (as the judges had at the time of the
case). This sense of ownership is also codified in our philosophical tradition, e.g., in Locke (1988), who grounded property rights in initial possession. Note that, as has also
been found in animals, possession extends to objects on one’s land: In DeScioli and
Karpoff’s survey, another dictate of participants’ (and the judges’) property rights
intuitions was who owned the land on which the lost item was found.
Also like animals, our sense of property rights is influenced by who created or
invested in the resource, another uncorrelated asymmetry. In locales that sometimes
grant property rights to squatters—individuals who occupy lands others have purchased—a
key determinant of whether the squatters are granted the land is whether
they have invested in it (Cone vs. West Virginia Pulp & Paper Co., 1947 ; Neuwirth,
2005 ). Locke also intuited that investment in land is part of what makes it ours:
In Second Treatise on Civil Government (1689), Locke wrote, “everyman has a
property in his person; this nobody has a right to but himself. The labor of his body
and the work of his hand, we may say, are properly his.”
If the Hawk–Dove model underlies our sense of property rights, we would expect
to see psychological mechanisms that motivate us to feel entitled to an object when
we possess it or have invested in it. Here are three such mechanisms, which can be
seen by reinterpreting some well-documented “biases” in the behavioral economics
literature. The first such bias is the endowment effect : We value items more if we are
in possession of them. The endowment effect has been documented in dozens of
experiments, where subjects are randomly given an item (mug, pen, etc.) and
subsequently state that they are willing to sell the mug for much more than those
who were not given the mug are willing to pay (Kahneman, Knetsch, & Thaler,
1990). In the behavioral economics literature, the endowment effect has sometimes been explained by loss aversion: We are harmed more by a loss than we benefit from an equivalent gain. However, the source of loss aversion itself is rarely questioned or explained. When it is, loss aversion, too, is readily explained by the Hawk–Dove game (Gintis, 2007).
A second bias that also fits the Hawk–Dove model is the IKEA effect: Our valuation of an object is influenced by whether we developed or built it ourselves.
The IKEA effect has been documented by asking people how much they would pay
for items like Lego structures or IKEA furniture after randomly being assigned to
build them or receive them pre-built. Subjects are willing to pay more for items they
build themselves.
A third such bias that fits the Hawk–Dove model is the sunk cost fallacy (Mankiw,
2007 ; Thaler, 1980 ), which leads us to “throw good money after bad” when we
invest in ventures simply because we have already put so much effort into them,
arguably because our prior efforts lead us to value those ventures more.
Possession and past investment are not the only uncorrelated asymmetries that
can dictate rights. Rights can be dictated by a history of agreements, as happens
when one party sells another the deed to a house or car or, as in our taxicab example,
by whether a service was provided. There are also countless examples in which
rights were determined by perhaps unfair or arbitrary characteristics such as race
and sex: Black Americans were expected to give up their seat for Whites in the Jim
Crow South and women to hand over their earnings or property to their husbands
throughout the ages.
Hawk–Dove is not just a post hoc explanation for our sense of rights; it also leads to the following novel insight: We can formally characterize the properties that
uncorrelated asymmetries must have. This requires a bit more game theory to illustrate;
the logic is detailed in the section on categorical distinctions but the implications
are straightforward: Uncorrelated asymmetries must be discrete (as in who
arrived first or whether someone has African ancestry) and cannot be continuous
(who is stronger, whether someone has darker skin). Indeed, we challenge the reader
to identify a case where our sense of rights depends on surpassing a threshold in a
continuous variable (stronger than? darker than?). More generally, an asymmetry
must have the characteristic that, when it occurs, every observer believes it occurred
with a sufficiently high probability, where the exact level of confidence is determined
by the payoffs of the game. This is true of public, explicit speech and handshakes,
but not innuendos or rumors. (Formally, explicit speech and handshakes
induce what game theorists term common p-beliefs.)
The Hawk–Dove explanation of our sense of rights also clarifies when there will be conflict. Conflict will arise if both players receive opposing signals
regarding the uncorrelated asymmetry, such as two individuals each believing
they arrived first, or when there are two uncorrelated asymmetries that point in
conflicting directions, such as when one person invested more and the other arrived
first. The former source of conflict appears to be the case in the Israeli–Palestinian
conflict. Indeed, both sides pour great resources into demonstrating their early
possession, especially Israel, through investments in and public displays of archeology
and history. The latter source of conflict appears to be the case in many of the
contested legal disputes in the study by DeScioli and Karpoff ( 2014 ) mentioned
above. An example is when one person finds an object on another's land. Indeed, this turns
out to be a source of many legal conflicts over property rights, and a rich legal tradition
has developed to assign precedence to one uncorrelated asymmetry over another
(DeScioli & Karpoff, 2014). As usual, we see similar behavior in animals in studies
that provide empirical support for Maynard Smith’s model for animal territoriality:
When two animals are each given the impression they arrived first by, for example,
clever use of mirrors, a fight ensues (Davies, 1978 ).
Authentic Altruism, Motives, and the Envelope Game
In this section, we present a simple extension of the Repeated Prisoner's Dilemma to explain why morality depends not just on what people do but also on what they think or consider.
In the Repeated Prisoner’s Dilemma and other models of cooperation, players
judge others by their actions—whether they cooperate or defect. However, we not
only care about whether others cooperate but also about their decision-making process:
We place more trust in cooperators who never even considered defecting. To
quote Kant, “In law a man is guilty when he violates the rights of others. In ethics
he is guilty if he only thinks of doing so.”
The Envelope Game (Fig. 3 ) models why we care about thoughts and considerations
and not just actions (Hoffman, Yoeli, & Nowak, 2015).

Fig. 3 A single stage of the Envelope Game

The Envelope Game is a repeated game with two players. In each round, player 1 receives a sealed envelope,
which contains a card stating the costs of cooperation (high temptation to
defect vs. low temptation to defect). The temptation is assigned randomly and is
usually low. Player 1 can choose to look inside the envelope and thus find out the
magnitude of the temptation or choose not to look. Then player 1 decides to cooperate
or to defect. Subsequently, player 2 can either continue to the next round or end
the game. As in the Repeated Prisoner’s Dilemma, the interaction repeats with a
given likelihood, and if it does, an envelope is stuffed with a new card and presented
to player 1, etc.
In this model, as long as temptations are rare, large, and harmful to player 2, it is
a Nash equilibrium for player 1 to “cooperate without looking” in the envelope and
for player 2 to continue if and only if player 1 has cooperated and not looked. We
refer to this as the cooperate without looking (CWOL) equilibrium.² This equilibrium emerges in agent-based simulations of evolution and learning processes.³ Notice
that if player 1 could not avoid looking inside the envelope, or player 2 could not
observe whether player 1 looked, there would not be a cooperative equilibrium
since player 1 would benefit by deviating to defection in the face of large temptations.
Not looking permits cooperative equilibria in the face of large temptations.
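As a numerical illustration, the sketch below plugs illustrative parameter values (our own; the inequalities themselves are the ones stated in footnote 2) into the conditions under which cooperating without looking is an equilibrium: the high temptation must exceed the value of the relationship, the relationship must be worth more than the expected temptation, and a partner who defects at high temptations must harm player 2 in expectation:

```python
# Illustrative parameters (our own, not from the chapter), chosen to
# satisfy the CWOL conditions stated in footnote 2.
p = 0.9      # probability the temptation is low
w = 0.95     # continuation probability of the repeated game
a = 1.0      # player 1's per-round payoff from ongoing cooperation
c_l = 1.0    # low temptation (payoff from defecting then)
c_h = 30.0   # high temptation: rare but large
b = 1.0      # player 2's payoff when player 1 cooperates
d = -20.0    # player 2's payoff when player 1 defects

relationship_value = a / (1 - w)              # value of continuing, to player 1
expected_temptation = c_l * p + c_h * (1 - p)

cond1 = c_h > relationship_value > expected_temptation
cond2 = b * p + d * (1 - p) < 0               # player 2 loses against a "looker"

print(f"relationship value = {relationship_value:.1f}, "
      f"expected temptation = {expected_temptation:.1f}")
print("CWOL is an equilibrium:", cond1 and cond2)  # True for these values
```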
The Envelope Game is meant to capture the essential features of many interesting
aspects of our morality, as described next.
Authentic Altruism. Many have asked whether "[doing good is] always and exclusively
motivated by the prospect of some benefit for ourselves, however subtle”
(Batson, 2014 ), for example, the conscious anticipation of feeling good (Andreoni,
² Technically, the conditions under which we expect players to avoid looking and to attend to looking are c_h > a/(1 − w) > c_l·p + c_h·(1 − p) and b·p + d·(1 − p) < 0, where c_h and c_l are the magnitudes of the high and low temptations, respectively; p is the likelihood of the low temptation; a/(1 − w) is the value of a repeated, cooperative interaction to player 1; and b·p + d·(1 − p) is the expected payoff to player 2 if player 1 only cooperates when the temptation is low.
³ The simulations employ numerical estimation of the replicator dynamics for a limited strategy space: for player 1, cooperate without looking, cooperate with looking, look and cooperate only when the temptation is low, and always defect; for player 2, end if player 1 looks, end if player 1 defects, and always end.
1990 ), avoidance of guilt (Cain, Dana, & Newman, 2014 ; Dana, Cain, & Dawes,
2006 ; DellaVigna, List, & Malmendier, 2012 ), anticipation of reputational benefits
or reciprocity (as Plato’s Glaucon suggests, when he proffers that even a pious man
would do evil if given a ring that makes him invisible; Trivers, 1971 ). At the extreme,
this amounts to asking if saintly individuals such as Gandhi or Mother Teresa were
motivated thus, or if they were “authentic” altruists who did good without anticipating
any reward and would be altruistic even in the absence of such rewards.
Certainly, religions advocate doing good for the “right” reasons. In the Gospel of
Matthew, Chapter 6, Jesus advocates, “Be careful not to practice your righteousness
in front of others to be seen by them. If you do, you will have no reward from your
Father in heaven,” after which he adds, “But when you give to the needy, do not let
your left hand know what your right hand is doing, so that your giving may be in
secret. Then your Father, who sees what is done in secret, will reward you.”
The Envelope Game suggests authentic altruism is indeed possible: By focusing
entirely on the benefits to others and ignoring the benefits to themselves, authentic
altruists are trusted more, and the benefits from this trust outweigh the risk of, for
example, dying a martyr’s death. Moreover, this model helps explain why we think
so highly of authentic altruists, as compared to others who do good, but with an
ulterior motive (consider, as an example, the mockery Sean Penn has faced for showing up at disaster sites such as post-earthquake Haiti and post-Katrina New Orleans with a photographer in tow).
Principles. Why do we like people who are "principled" and not those who are
“strategic”? For example, we trust candidates for political office whose policies are
the result of their convictions and are consistent over time and distrust those whose
policies are carefully constructed in consultation with their pollsters and who "flip-flop" in response to public opinion (as caricatured by the infamous 2004 Republican
presidential campaign television ad showing John Kerry windsurfing and tacking
from one direction to another). CWOL offers the following potential explanation.
Someone who is strategic considers the costs and benefits to themselves of every
decision and will defect when faced with a large temptation, whereas someone who
is guided by principles is less sensitive to the costs and benefits to themselves
and thus less likely to defect. Imagine our flip-flopping politician was once against
gay marriage but supports it now that it is popular. This indicates the politician is
unlikely to fight for the cause if it later becomes unpopular with constituents or risks
losing a big donor. Moreover, this model may help explain why ideologues who are
wholly devoted to a cause (e.g., Hitler, Martin Luther King, and Gandhi) are able to
attract so many followers.
Don't Use People. Recall Kant's second formulation of the categorical imperative:
“Act in such a way that you always treat humanity, whether in your own person or
in the person of any other, never simply as a means but always at the same time as
an end.” In thinking this through, let’s again consider dwarf tossing. Many see it as
a violation of dwarfs’ basic dignity to use them as a means for amusement, even
though they willingly engage in the activity for economic gain. Our aversion to
using people may explain many important aspects of our moral intuitions, such as
why we judge torture as worse than imprisonment or punishment (torture is harming
someone as a means to obtaining information) and perhaps one of the (many) reasons
we oppose prostitution (prostitution is having sex with someone as a means to
obtaining money). The Envelope Game clarifies the function of adhering to this
maxim. Whereas those who treat someone well as means to an end would also
mistreat them if expedient, those who treat someone well as an end can be trusted
not to mistreat them when expedient.
Attention to Motives. The previous two applications are examples of a more general
phenomenon: that we judge the moral worth of an action based on the motivation
of the actor, as argued by deontological ethicists, but contested by
consequentialists. The deontological argument is famously invoked by Kant:
“Action from duty has its moral worth not in the purpose to be attained by it but in
the maxim in accordance with which it is decided upon, and therefore does not
depend upon the realization of the object of the action but merely upon the principle
of volition in accordance with which the action is done without regard for any object
of the faculty of desire” (Kant, 1997 ). These applications illustrate that we attend to
motives because they provide valuable information on whether the actor can be
trusted to treat others well even when it is not in her interest.
Altruism Without Prospect of Reciprocation. CWOL also helps explain why people
cooperate in contexts where there is no possibility of reciprocation, such as in
one-shot anonymous laboratory experiments like the dictator game (Fehr &
Fischbacher, 2003 ), as well as when performing heroic and dangerous acts. Consider
soldiers who throw themselves on a grenade to save their compatriots or stories like
that of Liviu Librescu, a professor at Virginia Tech and a Holocaust survivor, who saved his students during a school shooting. When he heard the shooter coming toward his classroom, Librescu stood behind the door,
expecting that when the shooter tried to shoot through the door, it would kill him and
his dead body would block the door. Mr. Librescu, clearly, did not expect this act to
be reciprocated. Such examples have been used as evidence for group selection
(Wilson, 2006 ), but can be explained by individuals “not looking” at the chance of
future reciprocation. Consistent with this interpretation, cooperation during extreme
acts of altruism is more likely to be intuitive than deliberative (Rand & Epstein,
2014 ), and those who cooperate without considering the prospect of reciprocation
are more trusted (Critcher, Inbar, & Pizarro, 2013 ). We also predict that people are
more likely to cooperate intuitively when they know they are being observed.
The Omission–Commission Distinction
and Higher-Order Beliefs
We explain the omission–commission distinction and the means–by-product distinction by arguing that these moral intuitions evolved in contexts where punishment is coordinated. Then, even when the intentions behind omissions and by-products are clear to one witness, that witness will think intentions are less clear to the other witnesses.
Why don't we consider it murder to let someone die whom we could easily have saved? For example, we sometimes treat ourselves to a nice meal at a fancy restaurant
rather than donating the cost of that meal to a charity that fights deadly diseases.
This extreme example illustrates a general phenomenon: that people have a tendency
to assess harmful commissions (actions such as killing someone) as worse, or
more morally reprehensible, than equally harmful omissions (inactions such as letting
someone die). Examples of this distinction abound, in ethics (we assess withholding
the truth as less wrong than lying (Spranca, Minsk, & Baron, 1991 )), in law
(it is legal to turn off a patient’s life support and let the patient die, as long as one
has the consent of the patient’s family; however, it is illegal to assist the patient in
committing suicide even with the family’s consent), and in international relations.
For example, consider the Struma, a boat carrying Jewish refugees fleeing Nazi
persecution in 1942. En route to Palestine, the ship’s engine failed, and it was towed
to a nearby port in Turkey. At the behest of the British authorities then in control of
Palestine, passengers were denied permission to disembark and find their way to
Palestine by land. For weeks, the ship sat at port. Passengers were brought only
minimal supplies, and their requests for safe haven were repeatedly denied by the
British and others. Finally, the ship was towed to known hostile waters in the Black
Sea, where it was torpedoed by a Soviet submarine almost immediately, killing
791 of 792 passengers. Crucially, though, the British did not torpedo the ship themselves
or otherwise execute passengers—an act of commission that they and their
superiors would undoubtedly have found morally reprehensible.
Why do we distinguish between transgressions of omission and commission? To
address this question, we present a simple game theory model based on the insight
by DeScioli, Bruening, and Kurzban ( 2011 ). The intuition can be summarized in
four steps:
1. We note that moral condemnation motivates us to punish transgressors. Such
punishment is potentially costly, e.g., due to the risk of retaliation. We expect
people to learn or evolve to morally condemn only when such costs are worth
paying.
2. Moral condemnation can be less costly when others also condemn, perhaps
because the risk of retaliation is diffused, because some sanctions do not work unless universally enforced, or, worse, because others may sanction individuals whom they believe to have sanctioned wrongly. This can be modeled using any game with multiple Nash equilibria, including the Repeated Prisoner's Dilemma and the Side-Taking Game. The Coordination Game is the simplest game with multiple equilibria, so we present it to convey the basic intuition. In the Coordination Game, two players each simultaneously choose between two actions, say punish and don't punish. The key assumption is that each player prefers to do what she expects the other to do, which can be captured by assuming each receives a if both punish, d if neither punishes, b < d if one punishes and the other does not, and c < a if one does not punish while the other does (Fig. 4; see the sketch following this list).
Fig. 4 The Coordination Game. In our applications, A stands for punish, and B stands for don't punish.

3. Transgressions of omission that are intended are difficult to distinguish from unintended transgressions, as when perpetrators are simply not paying attention or do not have enough time to react with better judgment (DeScioli et al., 2011). Consider, for example, a tennis player whose opponent has a severe food allergy: It is usually hard to distinguish between a competitor who does
not notice his opponent orders the dish with the allergen versus one who notices
but does not care. In contrast, transgressions of commission must be intended
almost by definition.
4. Suppose the witness knows an omission was intentional: In the above example,
the tennis player’s opponent’s allergy is widely known, and the witness saw the
player watch his opponent order the offending dish, had time to react, thought
about it, but did not say anything. The witness suspects that others do not
know the competitor was aware his opponent ordered the dish, but believes the
tennis player should be condemned for purposely withholding information from
his competitor. However, since the witness does not wish to be the sole condemner,
she is unlikely to condemn. In contrast, when a witness observes a transgression
of commission (e.g., the player recommends the dish), the witness is
relatively confident that others present interpret the transgression as purposely
harmful, since his recommendation reveals that the player was obviously paying
attention and therefore intended to harm his opponent. So, if all other individuals
present condemn the tennis player when they observe the commission, each does
not anticipate being the sole condemner.
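Here is the sketch promised in step 2 (our own illustrative payoffs, satisfying b < d and c < a): it encodes the punishment Coordination Game of Fig. 4 and confirms that both punishing together and abstaining together are Nash equilibria, which is why a witness's willingness to condemn hinges on what she expects other witnesses to do:

```python
from itertools import product

a, d = 2.0, 1.0  # payoffs when both punish / when neither punishes
b, c = 0.0, 0.0  # lone punisher gets b < d; lone abstainer gets c < a

P, N = "punish", "don't punish"
# payoffs[(s1, s2)] = (player 1's payoff, player 2's payoff), as in Fig. 4
payoffs = {
    (P, P): (a, a),
    (P, N): (b, c),
    (N, P): (c, b),
    (N, N): (d, d),
}

def is_nash(s1, s2):
    """True if neither witness can gain by deviating unilaterally."""
    u1, u2 = payoffs[(s1, s2)]
    return (all(payoffs[(t, s2)][0] <= u1 for t in (P, N))
            and all(payoffs[(s1, t)][1] <= u2 for t in (P, N)))

print([s for s in product((P, N), repeat=2) if is_nash(*s)])
# -> both ('punish', 'punish') and ("don't punish", "don't punish")
```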
For the above result to hold, all that is needed is the following: (1) The costs of punishment decrease as more others punish, and (2) omissions are usually unintended (Dalkiran, Hoffman, Paturi, Ricketts, & Vattani, 2012; Hoffman et al., 2015).⁴
⁴ In fact, even if one knows that others know that the transgression was intended, omission will still be judged as less wrong, since the transgression still won't create what game theorists call common p-belief, which is required for an event to influence behavior in a game with multiple equilibria.
This explanation for the omission–commission distinction leads to two novel
predictions: First, for judgments and emotions that evolved not to motivate witnesses to punish but to, say, motivate witnesses to avoid dangerous partners (such as the
emotion of fear; in contrast to anger or moral disgust), the omission–commission
distinction is expected to be weaker or disappear altogether. Second, for transgressions
of omission that, without any private information, can be presumed intentional
(such as a mother who allows her child to go hungry or a person who does not give