The moral prisoner's dilemma

Introduction

The prisoner's dilemma is a formal, abstract model of interaction between two parties who must decide whether to cooperate or to cheat each other. Since its invention, or discovery, in 1950, it has provided a game-theoretic model for examining how egoism thwarts itself, how cooperation arises from self-interest, and how individual rationality undermines collective benefit. In its repeated form, it has shown how strategies of "trust", "punishment" and "forgiveness" emerge in a competitive environment. However, because the model is so abstract, and has artificial constraints against communication, its application to real-world problems must be done with care. In this chapter I provide concrete examples in several places, to enhance readability, but the reader should remember that the mapping between the limited world of the dilemma and the complicated real problem is not always as direct as I have portrayed it.

The purpose of the chapter is to use the Prisoner's Dilemma as a tool to probe various traditional moral theories. Although these theories are set up in the real world, they should be applicable in the sterile environment of the prisoner's dilemma, and testing them with the dilemma may yield new insights about them. In the dilemma's usual form, both parties are egoists - they each want the outcome to be as good as possible for themselves (individually, not collectively). But we can, instead, assume that the parties have some other underlying moral disposition. I am not aware of any study of what happens in such situations, for example when the two parties seek to maximize total utility rather than their own individual utilities. Rather, with the a priori assumption of egoism, the claimed success of the prisoner's dilemma is that, even so, it yields outcomes that look moral. So we will consider a priori moral players, and, as an experiment, see what outcomes they produce. We will also challenge the remarkable success of the Prisoner's Dilemma in explaining cooperative behaviour: when the model parameters are changed, egoists playing the game refuse to cooperate, and the appearance of moral behaviour arising out of self-interest is undermined. In these situations, we will see whether the other, non-egoistic, participants fare better.

Throughout the chapter I talk about transactions or exchanges of value. The Prisoner's Dilemma works no matter what constitutes value. In some cases, for brevity, I associate the goods that are given and received with wealth. But it is probably better to think in terms of the exchange of time, expertise, and effort or even values like happiness and freedom from pain. Also I sometimes use numbers to compare value, but these are just to indicate an ordering, not absolute measures of "utility". So, for example, when I say that something worth 1 unit of value to you is worth 2 to me, I don't necessarily mean this in monetary terms. It could be that 5 minutes of effort on your part saved me 20 minutes of pain. But 20/5 is 4, not 2, and pain is not equivalent to effort. These quantitative details don't affect the arguments. The important thing is that the cost to you of those 5 minutes of time was in some sense less than the value of your 5 minutes to me.

The Basic (Single-run) Prisoner's Dilemma

The electrical equipment that runs the mines of Diamondtown must be renewed. Diamondtowners are resourceful builders, but they need gold for the contacts of their relays and that can come from only one place: Goldville. The Goldvillers have a different problem: in their mines the diamond tips of the drills need renewal. So it is that the representatives of Diamondtown and Goldville meet to scowl at each other across no-man's-land and then venture forward to make a resentful trade. The Goldviller approaches with an ingot, the Diamondtowner with a bag of stones.

"How do I know they are not just cut glass?" demands the Goldviller.

"How do I know that isn't a bar of gold-plated lead?" retorts the Diamondtowner.

Distrustfully they do the trade then rush home to test their purchases. Neither is surprised to find their fears were justified - both products are fakes. "Just what you'd expect of a Goldviller/Diamondtowner" they say, "Good thing we weren't suckered into giving them the real goods". Then they go back to running their mines with decaying equipment.

It might seem that Goldville and Diamondtown are needlessly petty, but look at it from their point of view. Yes, by both cheating, both sides are no better off. But suppose our side had played fair and the other had cheated. We'd lose a valuable resource and gain nothing. It's better to cheat and at least keep what we have. And what if (against all expectation) the other side had played fair? Then, if we too cooperated, trading fairly, both sides would gain. But it would be even more advantageous for us to keep our resource and sucker the other side! Again, it's better (in self-interested terms) to cheat. So both Goldville and Diamondtown rationally decided that whatever the other did, their own best interest would be served by not cooperating. Yet both would have been better off if, instead, both had cooperated.

This paradox can be presented in more formal and abstract terms. Suppose I have some resource (which may be goods, money, my talents or time). A certain amount of this resource represents to me one unit of value (for example, one hour of my time). But to you it is worth more, say two units. Similarly you have a resource, and what represents one unit of value to you, is worth two to me. We don't know each other well, so we can't predict what the other will do. (This means that metagame analysis is impossible (Howard, 1971).) We have no feelings towards each other, so we aren't motivated by sentiment in any way. We're only ever going to do one transaction, and there are no other complications - no opportunity for retaliation, for example. We'll deliver our resources simultaneously, and if one of us cooperates and the other does not, the cooperator won't be able to withdraw. Here are the possible outcomes arranged as a table where in each cell the first number represents the value to me and the second number the value to you. The term "Defect" is used instead of "Cheat".

                   You
              Cooperate   Defect
I  Cooperate  (2, 2)      (0, 3)
   Defect     (3, 0)      (1, 1)

A table like this is often called a "payoff matrix". What it says is that we can stay as we are and each keep our one unit of value (I defect, you defect), we can trade and double each of our values (I cooperate, you cooperate), or one of us - you for instance - can hand over your resource without getting anything back, leaving you with nothing and me with my original unit plus the two units of value I get from your resource.

More generally, if your one unit of resource is worth X units to me, and my one unit is worth Y units to you, we have the payoff matrix.

                   You
              Cooperate   Defect
I  Cooperate  (X, Y)      (0, Y+1)
   Defect     (X+1, 0)    (1, 1)

Assuming X and Y are both greater than 1, that is, we both will benefit from exchanging units, then this table has the same property as the previous one. Either of us gains by defecting if the other person cooperates; either of us loses less by defecting if the other person defects; therefore the rational self-interested choice is always to defect. But collectively we'd be better off both cooperating than both defecting.
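
As a quick check of this dominance argument, here is a tiny sketch (my own illustration, with X = Y = 2 assumed) confirming that defection is the better move for each player whatever the other does, while mutual cooperation still beats mutual defection.

```python
# A minimal check (illustrative only) of the dominance argument, using X = Y = 2.
X, Y = 2, 2   # assumed example values, both greater than 1

# payoff[(my_move, your_move)] = (value to me, value to you)
payoff = {
    ("C", "C"): (X, Y),
    ("C", "D"): (0, Y + 1),
    ("D", "C"): (X + 1, 0),
    ("D", "D"): (1, 1),
}

# Whatever move you make, my payoff is higher if I defect ...
for your_move in ("C", "D"):
    assert payoff[("D", your_move)][0] > payoff[("C", your_move)][0]

# ... and yet mutual cooperation leaves each of us better off than mutual defection.
assert payoff[("C", "C")][0] > payoff[("D", "D")][0]
assert payoff[("C", "C")][1] > payoff[("D", "D")][1]
print("Defection dominates for each player, but joint cooperation beats joint defection.")
```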

We can rank how the various outcomes serve each player's interests in a similar table.

                   You
              Cooperate                    Defect
I  Cooperate  (second best, second best)   (worst, best)
   Defect     (best, worst)                (third best, third best)

The examples I have given have been somewhat artificial. But they are simplified versions of situations that actually occur in real life. The classic two-party example is the arms race: if our two countries are balanced in weaponry, we can agree not to develop more. This will save us both money. But if you develop new weapons and I don't, you might be able to destroy me, and that is my worst case. On the other hand, if I develop new weapons and you don't, I'll be able to sleep easier at night, knowing that I am ahead of you. This might be my best case. Thus we both develop new weapons (third best for both) instead of having an arms freeze (second best for both). The argument works in the same way for disarmament, trade restrictions, and international relations in general. More local versions of the dilemma (within a country, a city or a community) tend to involve several players, but they have the same properties. There are many examples. If every commuter in a crowded city took public transport, there would be fewer traffic snarl-ups and everyone would get to work quicker. But when everyone is going by bus, leaving the roads relatively clear, why shouldn't I take my car and get there even faster? When an ocean is overfished, it is better for each individual fisher to catch more, but worse for each if all do. In poor countries, with overcrowding, it may be better for each family to have more children, but worse for each if all do. Excusing wrongdoing with the claim "Everybody does it" exemplifies the defect/defect choice.

The paradox of self-interest and cooperation that we have been discussing is usually called the "Prisoner's Dilemma", because of the terms in which it was originally posed. For completeness I will give a concise version of this story. Suppose that you and a supposed accomplice are both arrested for a crime. You have no feelings for each other (whether or not you really were partners in crime), and now you are both in the same hole, with no means to communicate. The police offer you a deal (and tell you that your alleged accomplice is being given the same offer): "If you both claim innocence, we've enough circumstantial evidence to send you both to jail for two years. But if you help us by confessing, fingering your accomplice, we'll let you out after one. Unless, of course, your friend confesses too, when we send you both down for five years." You sit and ponder the risk. Two years is certain if you keep silent, and it's either one or five years if you confess. Seems like confession involves higher stakes. But then you realize that there's a piece of information missing, so you ask, "And what if my accomplice - I mean alleged accomplice - confesses and implicates me, even though I still say I'm innocent?" "Ah," comes the reply, "then we send you down for ten years." Now it's clear that whether or not the other prisoner confesses, your situation will be better if you confess. Your accomplice comes to the same conclusion, you both confess and both get five years. If you'd only both kept quiet, you'd have been out after two. In this case confess corresponds to defecting, saying nothing corresponds to cooperating (with your accomplice).

The Prisoner's Dilemma has been studied by game theorists, economists, political scientists and philosophers. The terminology of any discussion comes from game theory: thus a single encounter is often called a game, and the participants are players. The form of the payoff matrix is what turns the game into a dilemma. Later in this chapter I will discuss modifications to the matrix which remove the dilemma for self-interested players. Most authors (at least, the economists, political scientists and game theorists) would say that these cases are not true Prisoner's Dilemmas. However they can give rise to dilemmas if the players happen to be altruists rather than egoists. This is a possibility that few authors entertain! Before turning to these interesting variations, however, I must explain the most important extension of the Basic Prisoner's Dilemma.

The Repeated Prisoner's Dilemma

The Repeated (or Iterated) Prisoner's Dilemma is the playing of the basic, single-run, version an indeterminate (usually large) number of times, with the same players, whose actions may be different on different occasions. Although neither party knows what the other will do this time, there are past and future encounters. Each of us can remember what the other did before and make our move accordingly; we can also bear in mind that our action this time may affect future encounters. Therefore it is now possible to retaliate and reward, and, perhaps, to build up "trust". For example, a player can adopt the strategy to cooperate with another until that person defects. Then the player can strike back, perhaps just on the following encounter, perhaps forever after.

It is important that the Prisoner's Dilemma is not repeated a fixed, known number of times. If both sides knew when the last encounter would be, that final game would be just like the basic, single-run case, so the rational self-interested choice in it would be to defect. But if you know that your opponent is going to defect next time, then you might as well defect this time - there is no way that a retaliation can follow. The argument then unravels backwards through the whole sequence. Thus, knowing the number of encounters seems to force the strategy of always defecting - making the same mistake as the egoist in the Basic Prisoner's Dilemma.

Some of the strategies for playing the repeated prisoner's dilemma are listed below; a short code sketch of a few of them follows the list.

ALL D                     Always defect.
ALL C                     Always cooperate.
TIT FOR TAT               On the first encounter cooperate; thereafter do what the other player did last time.
MASSIVE RETALIATION       Cooperate until the other player defects; thereafter always defect.
TIT FOR TWO TATS          Forgive a single defection; strike back once after two in a row.
TWO TITS FOR A TAT        Defect twice following a defection by the other side.
SNEAKY                    Normally play TIT FOR TAT, but try to sneak in a defection now and then.
OPPORTUNIST TIT FOR TAT   Play TIT FOR TAT unless the other player appears to be unresponsive; then always defect.
RANDOM                    Choose between defection and cooperation randomly.
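
Several of these strategies can be written down in a few lines each. The following sketch is my own illustration of the informal descriptions above, not a reconstruction of anyone's tournament entries; a strategy is represented (an assumption of this sketch) as a function of the two players' move histories, and the payoff matrix is the (2, 2) one used earlier.

```python
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3), ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def all_d(mine, theirs):
    return "D"                                  # ALL D: always defect

def all_c(mine, theirs):
    return "C"                                  # ALL C: always cooperate

def tit_for_tat(mine, theirs):
    return theirs[-1] if theirs else "C"        # cooperate first, then copy the other player

def massive_retaliation(mine, theirs):
    return "D" if "D" in theirs else "C"        # cooperate until the first defection, then always defect

def tit_for_two_tats(mine, theirs):
    return "D" if theirs[-2:] == ["D", "D"] else "C"   # retaliate only after two defections in a row

def play(strategy_a, strategy_b, rounds=200):
    """Play two strategies against each other and return their total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, all_d))    # prints (199, 202): TIT FOR TAT is exploited once, then defects for the rest
```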

The Repeated Prisoner's Dilemma was extensively studied during the 60s and 70s, principally as a tool for probing how human subjects play the game, but also as a model for social processes and in abstract terms. Work on cooperation without enforcement led to the very significant experiments between 1979 and 1987 by Robert Axelrod (Axelrod 1984, 1987). The remainder of this section reviews Axelrod's computer simulations, and the remarkable conclusions he drew.

In 1979 Axelrod invited professional game theorists to send in Repeated Prisoner's Dilemma strategies in the form of computer programs to be played against each other in a round robin tournament. The object of the contest was to score the maximum number of points, defined by the total value taken away from all the encounters. Therefore, it wasn't necessary to "win" each encounter, just to do well overall. Fourteen entries were submitted. Each played 200 rounds against itself, against RANDOM, and against each of the other entries, for a total of 3,000 rounds per program. (None of the programs "knew" that the 200th round was the last, avoiding the last-time-defect problem discussed above.)
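
The mechanics of the round-robin scoring can be illustrated with a deliberately tiny tournament. This is a hedged sketch only: the entries, the payoff values from earlier in the chapter, and the fixed 200 rounds are my assumptions, and this handful of strategies is nothing like Axelrod's fourteen-entry field.

```python
import random

PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3), ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

# A tiny field of entries: each is a function of (my_history, their_history).
entries = {
    "TIT FOR TAT": lambda mine, theirs: theirs[-1] if theirs else "C",
    "ALL D":       lambda mine, theirs: "D",
    "ALL C":       lambda mine, theirs: "C",
    "RANDOM":      lambda mine, theirs: random.choice("CD"),
}

def match(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Round robin: every entry meets every entry once, including itself.
totals = dict.fromkeys(entries, 0)
names = list(entries)
for i, a in enumerate(names):
    for b in names[i:]:
        sa, sb = match(entries[a], entries[b])
        totals[a] += sa
        if a != b:
            totals[b] += sb

for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{name:12s}{total}")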

TIT FOR TAT, submitted by Anatol Rapoport, was the simplest program and won the tournament. TIT FOR TAT was, by that time, the most discussed rule for playing the Repeated Prisoner's Dilemma, having shown itself successful in experiments between humans. Many of the programs that did worse than Rapoport's were simply "improvements" on TIT FOR TAT. However none of these more complex versions was able to perform as well as the original.

Following the first tournament, Axelrod did extensive analysis, identifying the key trait of the programs that did well - "niceness", which means never being the first to defect. He also showed how certain other strategies would have outperformed TIT FOR TAT if they had been entered. This information, together with detailed results from the tournament, was sent out with an invitation to participate in a second round. This time sixty-two programs were entered, and although anyone was allowed to enter anything, only one TIT FOR TAT was submitted, again by Rapoport. Most of the programs were "nice", except perhaps for trying to sneak in an occasional defection, or retaliating more than in kind, like TWO TITS FOR A TAT. The programs that would have beaten TIT FOR TAT in the first contest were also included.

Many thousands of runs later, the results were in: TIT FOR TAT won again!

Axelrod again analysed the results to find out what was common to the successful strategies. I summarize these in a table below. The reader can imagine how tempting and easy it is to apply these empirically successful principles from a computer simulation to relationships of all kinds: international and domestic policy, rights and duties in the community, personal morality.

Strategic principle: Be nice
  As applied in TIT FOR TAT: Don't be the first to defect.
  A possible wider implication: Begin a relationship with a stranger by showing kindness and cooperating.

Strategic principle: Be provocable
  As applied in TIT FOR TAT: Retaliate immediately to a defection.
  A possible wider implication: Don't appease evildoers.

Strategic principle: Be forgiving
  As applied in TIT FOR TAT: After retaliation, return to cooperation as soon as the other player does.
  A possible wider implication: Don't hold a grudge.

Strategic principle: Be simple
  As applied in TIT FOR TAT: If the other player can see you are nice, provocable and forgiving, they will cooperate. Deviousness leads to patterns of defection.
  A possible wider implication: Don't be devious.

Strategic principle: Play to gain most, not to win
  As applied in TIT FOR TAT: TIT FOR TAT never beats its opponents; it just maintains a mutually prosperous relationship.
  A possible wider implication: Don't be envious or resentful of others.

It is very important to emphasize that these traits were found empirically to be the ones that made a program prosper. In other words, if the program's ultimate aim was self interest (which it was, expressed in terms of scoring as many points as possible), then it best achieved this aim by having these traits.

Several authors have attempted to apply the Repeated Prisoner's Dilemma traits to practical problems of morality. An example is Singer (1995). He writes, for instance, of the successive stages of appeasement of Nazi Germany - rearmament, the Rhineland, the Sudetenland, Austria - as four TATs; it was only on the fifth - Poland - that the allies responded. Had there been a measured TIT FOR TAT response at the start (or perhaps after two TATs), then perhaps the Second World War could have been averted.

After his two computer tournaments, Axelrod was reluctant to repeat the process again, for reasons discussed below. Hofstadter and Lugowski did hold a further tournament (Hofstadter, 1983) which confirmed the general results, although this time TIT FOR TAT did not quite win. It was beaten by a version of OPPORTUNIST TIT FOR TAT which could tell that an opponent like RANDOM was unresponsive and switched to ALL D against such a player. This suggests adding a final line to the above table:

Strategic principle: Be realistic
  As applied in TIT FOR TAT: If an opponent doesn't respond to niceness, provocability, etc., just defect - i.e. make the best of a doomed encounter.
  A possible wider implication: Don't bother communicating with someone who doesn't listen.

Axelrod was concerned that the victory of TIT FOR TAT reflected optimality only in the context of the ecology of his tournaments. By ecology is meant the sum effect of the various competing strategies, collectively, on each other. To see that TIT FOR TAT is not certain to win, consider a tournament in which one entry was TIT FOR TAT, one was ALL D, and the remainder were ALL C. The ALL D entry would exploit the ALL C entries by always defecting against cooperation - TIT FOR TAT would merely cooperate. In the long run ALL D would prosper over TIT FOR TAT (and, of course, over ALL C). Because the entries to Axelrod's tournaments had come from people familiar with the benefits of TIT FOR TAT, Axelrod worried that the ecology was skewed towards niceness and provocability.

He therefore went a step further in the direction suggested by the title of his book - The Evolution of Cooperation - applying the tool of genetic algorithms to the Repeated Prisoner's Dilemma (Axelrod, 1987). In a genetic algorithm the behaviour of a program is specified in terms of a genetic code, which is then interpreted into actions during execution of a specific task - in Axelrod's case, a run of the Prisoner's Dilemma. Several instances of the program run at any time, each with its own genome, and thus its own behaviour. The environment is initialized with many "creatures" with random genes. Those which are most successful at the task (score most points) are allowed to reproduce with mutation and crossover of genomes. Those which are least successful simply "die". After many generations the behaviour of the creatures which are prospering can be examined, as can their genes. The result of this experiment was the evolution of "creatures" that behave in the Repeated Prisoner's Dilemma with many different traits. Over the course of time different periods of relative stability appear, where strategic types are fairly balanced. But in the long run the most stable strategies are those that follow the principles listed above.
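
The shape of the genetic-algorithm experiment can be gestured at with a stripped-down sketch of my own. Everything specific here is an assumption made for brevity: Axelrod's encoding was richer and, as noted above, used crossover as well as mutation, whereas this toy genome remembers only the previous round, reproduces by mutation alone, and simply lets the lower-scoring half of the population die each generation.

```python
import random

PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3), ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
OUTCOMES = [("C", "C"), ("C", "D"), ("D", "C"), ("D", "D")]   # (my last move, their last move)

def random_genome():
    # A genome: a first move plus a response to each possible previous-round outcome.
    genome = {outcome: random.choice("CD") for outcome in OUTCOMES}
    genome["first"] = random.choice("CD")
    return genome

def move(genome, mine, theirs):
    return genome["first"] if not mine else genome[(mine[-1], theirs[-1])]

def play(g1, g2, rounds=50):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        a, b = move(g1, h1, h2), move(g2, h2, h1)
        p1, p2 = PAYOFF[(a, b)]
        s1, s2 = s1 + p1, s2 + p2
        h1.append(a)
        h2.append(b)
    return s1, s2

def fitness(population):
    scores = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            si, sj = play(population[i], population[j])
            scores[i] += si
            scores[j] += sj
    return scores

def mutate(genome, rate=0.05):
    child = dict(genome)
    for key in child:
        if random.random() < rate:
            child[key] = random.choice("CD")
    return child

population = [random_genome() for _ in range(20)]
for generation in range(100):
    ranked = [g for _, g in sorted(zip(fitness(population), population), key=lambda pair: -pair[0])]
    survivors = ranked[: len(ranked) // 2]                       # the least successful half "dies"
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(population[0])   # inspect the genes of one of the surviving "creatures"
```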

The evolutionary aspect to Axelrod's work is not of concern in the later parts of this chapter. However the ecological aspect is. Just as running the Prisoner's Dilemma repeatedly removes some of its artificiality (because there are now consequences beyond the current game), considering ecological implications removes even more (because our interactions affect third parties).

In a final step towards realism, Wu and Axelrod looked at the repeated Prisoner's Dilemma in the presence of noise (Wu and Axelrod, 1995). In this experiment, strategies were matched against each other as before, but random errors were inserted into the players' moves. So, occasionally, a player would have its move changed from COOPERATE to DEFECT. Typically its opponent would then retaliate. Indeed, if two TIT FOR TAT players were cooperating when an error was introduced, then they would thereafter alternate COOPERATE and DEFECT. The point of the experiment was to simulate the mistakes that occur in real life, and to find out which strategies proved most robust to these mistakes. As suggested, TIT FOR TAT is jumpy - provoke it and it defects, perhaps setting off a chain of mutual retaliations. The best strategies in noise were found to be GENEROUS TIT FOR TAT and CONTRITE TIT FOR TAT. The first is simply a TIT FOR TAT that allows some percentage of the other player's defections to go unpunished. The optimal percentage depends on the amount of noise: too low and single errors echo for a long time, too high and the scheme is exploitable. CONTRITE TIT FOR TAT monitors its own output so that it knows when a COOPERATE has been turned into a DEFECT by an error. If this happens CONTRITE TIT FOR TAT allows itself to be punished immediately (by a TIT FOR TAT style defection) without defecting in turn.
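
A sketch of the noise idea, with the error rate and the 20% generosity figure chosen purely for illustration: a small chance of a move being flipped sets two TIT FOR TAT players echoing each other's accidental defections, while a GENEROUS variant that overlooks a fraction of defections damps the echo.

```python
import random

PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3), ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def tit_for_tat(mine, theirs):
    return theirs[-1] if theirs else "C"

def generous_tit_for_tat(mine, theirs, generosity=0.2):
    if theirs and theirs[-1] == "D" and random.random() > generosity:
        return "D"                       # usually retaliate ...
    return "C"                           # ... but let some defections go unpunished

def noisy_match(strat_a, strat_b, rounds=200, noise=0.02):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        if random.random() < noise:
            a = "D" if a == "C" else "C"     # an error flips the intended move
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)                     # histories record the moves as actually played
        hist_b.append(b)
    return score_a, score_b

print("TIT FOR TAT vs itself in noise:", noisy_match(tit_for_tat, tit_for_tat))
print("GENEROUS TIT FOR TAT vs itself:", noisy_match(generous_tit_for_tat, generous_tit_for_tat))
```

CONTRITE TIT FOR TAT would additionally need access to its own intended (as opposed to played) moves, which this simple representation deliberately leaves out.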

Axelrod and others (e.g. Gauthier, 1986) have written extensively on the implications of the Prisoner's Dilemma for biology, policy making and morality. Their claimed success is to show how cooperation arises from self interest, and is a stable strategy in many contexts. They have discovered a reason to be good, an evolutionary explanation for morality, that works even though, underneath it all, people are egoists.

The Moral Prisoner

What if, underneath it all, people are not egoists? Or if, having "evolved" cooperation, people can go beyond that level, towards a morality with a different basis? That is certainly the assumption of many ethicists. A system that advocates universalizability relies on people taking an agent-neutral, or unselfish, view of morals. Are such systems constructions that hide our underlying egoism? Do they merely limit egoism or transcend it?

In this section of the chapter I run several moralities through the controlled environment of the Prisoner's Dilemma. This reverses the usual approach of feeding in egoism and looking for morality to come out: instead, different moralities are fed in and their outputs compared. I don't know of any previous attempt to do this, but it is worth mentioning one philosopher who uses the Prisoner's Dilemma to attack egoism.

Derek Parfit uses the dilemma in his discussion of self interest and consequentialism (Parfit, 1979, 1984). Parfit is interested in whether theories can be self-defeating, that is, whether they contain within themselves the seeds of their own contradiction. The theory that he particularly wants to explode is the self-interest theory. He says that a theory T is directly collectively self-defeating when it is certain that, if we all successfully follow T, we will thereby cause the T-given aims of each to be worse achieved than they would have been if none of us had successfully followed T. This is exactly the situation in the Prisoner's Dilemma, where T is the self-interest theory. This is one strand in Parfit's argument against self interest. It seems rather weak, however, because it does not acknowledge that a self-interested agent who knows of experiments such as Axelrod's can see that long-term egoism is fully served by adopting an adaptive strategy.

I now consider several moral theories - simple statements of what one has most reason to do - and ask how a moralist of each persuasion would play the Prisoner's Dilemma. I begin with the simple case of the single-run dilemma, then consider the repeated case and its ecological implications. Finally I turn to transactions that have unbalanced payoff matrices. These are models of interaction between the weak and the strong, or the rich and the poor, and therefore may tell us something about morality that goes beyond economic maximization and towards virtue. My motivation is to begin a response to Axelrod et al, from what I see as the traditional moral camps. I'm interested in whether we can work backwards from moral theories to see how they compare with the evolved "proto-moral" behaviour, particularly when prisoner's dilemma variants expose problems in the egoist approach.

I list in the table below a number of moral theories. We will consider players of each of these types in the prisoner's dilemma. In the table, "I should" means "I have most reason to".

E    Egoist or Purely Self Interested: I should do what is in my own best interest.
A    Altruist: I should do what is in other people's (or another person's) best interest.
U    Utilitarian: I should do what maximizes overall utility (value/happiness).
K1   Kantian (1): I should not do anything that I cannot will to be a universal law.
K2   Kantian (2): I should treat other persons always as ends and not merely as means.
R1   Rawlsian (1): I should do the action that would be agreed to by all parties behind a veil of ignorance about their contingent circumstances.
R2   Rawlsian (2): I should act to maximize the status/utility of the least well off party.
X    Disciple: I should act according to the instructions of a higher authority. Here I consider a particular example of this: I should act according to the teaching of Christ.

These theories are all prescriptive. Axelrod's work gives a descriptive theory of morality based on a prior assumption of natural egoism. My theory E differs in that it doesn't say that I have to act in my own best interest (because that's my nature), but that acting in my own best interest is how I should act, if I am an egoist. Other descriptive theories, based on sentiment for example, are also excluded.

The Single Run Game with Prisoner's Dilemma Payoffs

We first consider the single-run prisoner's dilemma, played by moralists.

The ethical egoist makes decisions on the basis of self interest. The basic presumption of the Prisoner's Dilemma is that the players are egoists. They will defect because, whatever the other person does, defection is better for them. The fact that a dilemma arises because of mutual defection undermines the egoist position, suggesting that, at the very least, the egoist needs some additional rules to resolve paradoxes.

Pure altruism maximizes benefits to others, even at cost to oneself, so in the prisoner's dilemma the altruist always cooperates. When both players are altruists, neither fully achieves their aim but there is no paradox. An altruist player - you, say - while cooperating, nurtures a desire that the other player - me - will defect, so as to maximize my benefit. But when I cooperate (because I too am an altruist), I don't do as well as you hoped. However, both of us end up with more than we started with, so both our aims (for each other) are partly satisfied. The ranking of outcomes for two altruists is:

                   You
              Cooperate                    Defect
I  Cooperate  (second best, second best)   (best, worst)
   Defect     (worst, best)                (third best, third best)

Hence cooperation is always indicated whatever the opponent does, and the result of both taking this strategy is better than if neither did.

If the players in a prisoner's dilemma were both utilitarians, they would maximize the total benefit. Recalling the matrix for the general case, and substituting in each cell the sum of values to both players gives

                   You
              Cooperate   Defect
I  Cooperate  X + Y       Y + 1
   Defect     X + 1       2

Assuming, as above, that X and Y are both greater than 1, then collectively the best outcome is that of cooperation. With two utilitarians in a prisoner's dilemma, both would rationally choose collaboration and both would be maximally satisfied with the outcome.

Note that if one player - me, say - is an egoist and the other - you - is a utilitarian, then the egoist will certainly defect. For the utilitarian to know this is outside the constraints of the conventional prisoner's dilemma, but at this stage I want to begin considering such possibilities. Assuming a utilitarian player knows that the opponent is a defecting egoist, what is the correct move? The utilitarian should cooperate, because the total benefit is greater by doing so: X+1 > 2. A utilitarian might point out that the Prisoner's Dilemma, with a once-only transaction and no future implications, is highly artificial: while strict act utilitarianism means cooperating even in the face of a certain defection from the other side, real-world cases can have consequences, and rule utilitarianism would argue against appeasing defectors. Nonetheless, in the basic prisoner's dilemma, utilitarianism yields a purely altruistic solution.

Thus the utilitarian acts exactly like an altruist, no matter what the other player does - even if the other player's move is known (making the situation not a true prisoner's dilemma).

At first glance, Kant's categorical imperative addresses the Prisoner's Dilemma nicely. Willing defection as a universal law is irrational; therefore we should cooperate. However, if we consider K1 (the first formulation of the categorical imperative) in isolation, we may be uneasy about this quick answer. Why is universal defection irrational? It is because everyone's object is to gain value. (If everyone were indifferent to value, we could rationally universalize defection.) Thus it is tempting to say that the goal of self interest has a more fundamental role in deciding the Kantian response to prisoner's dilemma than does willing-as-universal-law. Now, if we consider K2 (the second formulation), the problem seems to be resolved: to defect means to treat the other person merely as a means to our own end. Therefore we must cooperate. But Kant claims that K1 and K2 are equivalent. Does the dependence of the answer from K1 on a priori interests undermine this equivalence? We can ask the same question about other cases. Consider Kant's classic example about the breaking of promises. The argument from K1 goes that the reason for promises is to establish trust; if everyone broke promises, no-one would trust them, therefore their very nature would be contradicted. Thus, since we cannot universalize breaking promises, we should not break any particular promise. But to promise is to put oneself under an obligation, and it is only because the obligation has value that the promise has meaning. If we were unconcerned about value, we would have no reason for promises.

A possible solution to this seeming Kantian dilemma lies in the precise wording of K1, in particular the use of the word "will". We might be able to imagine a world where everyone was indifferent to value, but could we will it? Does the nullification of value presuppose the nullification of will? If so, then will in the first formulation is the Trojan Horse that contains the second formulation. I am not sure about this, remaining unconvinced that the formulations really are equivalent, and skeptical that will can carry the weight demanded by this interpretation. The alternative, however, seems anti-Kantian, leading to the conclusion that only if we are fundamentally egoists does overlaying universalizability on our predispositions yield a rational requirement for promise keeping.

The Rawlsian may come from a Kantian tradition, but the issues are much simpler in Prisoner's Dilemma. The Rawlsian assumes that both players are initially behind a veil of ignorance and seek to maximize the lesser payoff. Therefore two Rawlsians cooperate. However, if a Rawlsian knows that the other player is a defecting egoist, then the proper thing to do is to defect. (This is analogous to OPPORTUNIST TIT FOR TAT in the repeated prisoner's dilemma.) These are straightforward conclusions, illustrating the simple application of the Rawlsian framework. (Both R1 and R2 directly support this reasoning.)

Christian morality is variegated. I am going to use it as an example of a morality based on religious authority, but will focus on a simple (and therefore extreme) version of it. I consider only the moral teaching of Jesus in the gospels, and call a follower of this teaching, a Christian. Note that even the most fundamentalist of New Testament believers would consider this unbalanced. The Apostle Paul's attitude towards morality is more complicated than Jesus's: his emphasis on the Gospel of Grace leads him to urge morality on the basis of gratitude for salvation, rather than on the basis of eternal reward or punishment. The law of love dominates much of the New Testament. However, Jesus's own teaching is unambiguous: moral action and one's own eternal self-interest are bound together. The Christian in Prisoner's Dilemma is playing the game at two levels. Jesus's teaching about moral action is: "Do not resist an evil person. If someone strikes you on the right cheek, turn to him the other also." (Matt 5:39) "Blessed are you when people insult you, persecute you and falsely say all kinds of evil against you." (Matt 5:11) "Love your enemies and pray for those who persecute you." (Matt 5:44). Therefore the Christian should cooperate even when the other player is certain to defect - that is, they should act like an altruist. However, the Christian has a higher-order reason for acting in this way, which itself can be expressed within a payoff matrix: "because great is your reward in heaven" (Matt 5:12), "that you may be sons of your Father in heaven" (Matt 5:45), "For if you forgive men when they sin against you, your heavenly Father will also forgive you. But if you do not forgive men their sins, your Father will not forgive your sins." (Matt 6:14,15) Thus for the Christian (i.e. a disciple of Jesus, without the mitigating influence of Paul and other Apostles), the prisoner's dilemma is dissolved. My eternal self interest is best served by cooperation even when the immediate payoff is negative. My reason for being an altruist is egoism.

The Repeated Game with Prisoner's Dilemma Payoffs

Moral systems are usually set up to be universal. We have already seen how the utilitarian can object to the artificiality of the single-run prisoner's dilemma game. This complaint can be summed up by saying that the game does not admit consideration of all relevant facts. In moving to the repeated game we situate it in time and thereby remove part of this objection. By considering the ecological implications of the repeated game we include the side-effects of actions. Later, by considering other payoff matrices, we will broaden the scope still further. As an example, the utilitarian may argue that the value of this particular transaction to each of us is of only secondary importance compared with our relative wealth. Since wealth is accrued over time, the repeated game begins to address the problem. Considering different payoff combinations will bring further convergence between model and reality.

If we assume that all our moral players are rational and informed, we can expect the following behaviour in the repeated game when ecological implications are not considered. Egoists will have read Axelrod and understand the empirical results: therefore they will play TIT FOR TAT or some more opportunist version of that basic strategy. Most of the other moralists will play exactly as for the single run game. This seems counterintuitive, but moral systems are temporally neutral. The consideration of long-run consequences in utilitarianism, for example, does not change the utilitarian's altruism in the repeated game: continuing to cooperate in the face of defection is still the "right" thing to do because it maximizes total value. The only exception might be the Kantian. Kant's view of punishment as necessary if we respect the human dignity of the offender seems to push the Kantian towards TIT FOR TAT. Certainly we could universalize the strategy of always doing what the other person did last time (K1), and respect for the opponent as an end (K2) could dictate a strategy that would "encourage" them to cease defection.

When ecological implications are considered, other important shifts will take place. The egoist will continue to play TIT FOR TAT. The utilitarian will make a major change, eschewing ALL C for a strategy that gives most benefit to the whole ecology. Having read Axelrod, the utilitarian will choose TIT FOR TAT: total value is maximized by everyone cooperating; the presence of defectors brings the value down; when defectors prosper, the total value suffers; defectors should be dealt with by robust, fair, prompt retaliation - in other words, TIT FOR TAT. The altruist now has a dilemma of a different sort. If altruism is directed towards just the other person I am now interacting with, then the altruist continues with ALL C. However, if it is directed towards every other person, then the altruist is just like the utilitarian (except that their own utility is not counted) and the altruist will switch to TIT FOR TAT. Kantians are already playing TIT FOR TAT in the repeated game; the ecological implications give even more reason to do so, for if I appease, I allow strategies to prosper that cause loss to others. We saw that the Rawlsian framework in the single-run game suggested cooperation unless the other player was known to be a defector. In the repeated game, considering ecology, the Rawlsian will seek to maximize the minimum value in the whole ecology. ALL C players are most likely to be suckered, and, if these exist, Rawlsians will seek to "protect" them by discouraging the tactics which make them poor. Thus Rawlsians will punish exploiters, if doing so will result in the exploiters' decline in the environment. TIT FOR TAT or a more extreme strategy may be appropriate. In particular, if a Rawlsian player is unconcerned for her or his own total score (unless it becomes the minimum of all players), but can cause the decline of exploitative players, the appropriate response may be MASSIVE RETALIATION. On the other hand, the possibility of noise (see below) will probably push the Rawlsian back towards a more measured response. The Christian player continues with ALL C: Christ's teaching said nothing about the dangers of appeasement. So although allowing cheek strikers to go on striking might endanger other people, the Christian is still obliged to take this approach.

The different strategies of the players are summarized in the table below.

                    Single-run            Repeated prisoner's        Repeated prisoner's
                    prisoner's dilemma    dilemma (no ecology)       dilemma with ecology

Egoist              DEFECT                OPPORTUNIST TIT FOR TAT    OPPORTUNIST TIT FOR TAT
Altruist            COOPERATE             ALL C                      TIT FOR TAT (or ALL C if dominated
                                                                     by current transaction)
Utilitarian         COOPERATE             COOPERATE                  TIT FOR TAT
Kantian (K1, K2)    COOPERATE             TIT FOR TAT                TIT FOR TAT
Rawlsian (R1, R2)   COOPERATE             COOPERATE (or perhaps      TIT FOR TAT
                                          TIT FOR TAT)
Christian           COOPERATE             ALL C                      ALL C

We have not considered the effect of noise on the different strategies. Since CONTRITE TIT FOR TAT seems to serve the player better and to survive optimally in noisy environments (Wu and Axelrod, 1995), we can expect that all TIT FOR TAT players will move in this direction. Possibly utilitarians will allow some generosity too, given that this is slightly better for the ecology at low noise levels.

It is remarkable that when all things are considered, most players will adopt TIT FOR TAT or a simple variant. The exception is the Christian, who will continue cooperating in all circumstances. Again, it should be emphasized that not only are actual Christians unlikely to behave this way, but also Christian theology outside the gospels tends to push morality away from pure ALL C. We might draw a parallel here with pacifism. Most moralities have a way of allowing a just war; only a few are as dogmatic as Jesus in refusing to use force in response to violence. (Jesus did, however, use force in response to unregulated trade (John 2:15-16).)

The broad agreement between the egoist strategy and moral strategies in the closed world of the prisoner's dilemma suggests a common core. In the next section we will undermine this commonality by looking at situations where the payoff matrix is unbalanced - a practically important case that yields different answers for egoism and most moralities.

The Single Run Game with Unbalanced Payoffs

The general payoff matrix, as before, is

                   You
              Cooperate   Defect
I  Cooperate  (X, Y)      (0, Y+1)
   Defect     (X+1, 0)    (1, 1)

What happens if X and Y have different values? First, if both are greater than 1 (as we have been assuming), the dilemma and the various moral actions discussed above hold true. Suppose, for illustration, that X = 1.1 (a slight value increment) and Y = 100. The payoff matrix becomes

                   You
              Cooperate    Defect
I  Cooperate  (1.1, 100)   (0, 101)
   Defect     (2.1, 0)     (1, 1)

Clearly it is much more advantageous for you to get my unit than for me to get yours. If we were trading in the real world (instead of in prisoner's dilemma) I might be able to "charge" you more, but as it is you are getting a bargain, assuming we both cooperate. Because I am getting a value increment too, the essential arguments above all still hold, even though our levels of motivation and need are different.

For X or Y less than 1, we no longer have a true prisoner's dilemma. These cases are interesting however because they too correspond to real-life possibilities. Moreover, because we are considering configurations where the players are not egoists, perhaps some cases will now become dilemmas.

If both X and Y were less than 1, egoists would have no motivation to cooperate and no dilemma because defect/defect would be better for each than cooperate/cooperate. Altruists and Christians would have a dilemma now because they would swap units, hoping to load value onto the other, and would end up each with less than they started. Utilitarians and Kantians would simply keep their units and achieve the best collective benefits.
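
A one-line check of this case, with X = Y = 0.5 assumed purely for illustration: mutual "cooperation" leaves each player with less than the single unit they would keep by simply not trading.

```python
X = Y = 0.5                       # assumed: each unit is worth less to its receiver than the unit given up
trade_me, trade_you = X, Y        # what each of us holds after both cooperate (we swap units)
keep_me, keep_you = 1, 1          # what each of us holds after both defect (no trade)
assert trade_me < keep_me and trade_you < keep_you
print("When X and Y are both below 1, swapping units leaves both players worse off than not trading.")
```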

But what if X were less than 1 and Y greater than 1? For illustration, consider X = 0.5 and Y = 2. The payoff matrix is

                   You
              Cooperate   Defect
I  Cooperate  (0.5, 2)    (0, 3)
   Defect     (1.5, 0)    (1, 1)

So when we cooperate, I end up with less than when I started. This could happen, for example, if you were unable even to offer me something of comparable worth to that which I am offering you. For example, if I were much richer than you, or much stronger, there might be no exchange on which I would gain. Thus we could be talking about a situation of possible charity. Let us assume that the potential rich giver does not gain in warm feelings, largeness of heart or any other intangible to the same value as she or he is about to invest in the situation.

For two egoists there is no longer a paradox. I am bound to defect because not only is this better for me whatever you do, but also both defecting is better for me than if we both cooperate. You don't have a paradox, you have a problem. You need me to cooperate but I'm not going to. You will defect.

For two altruists there is no paradox either. I will be a more successful altruist than you, because on every turn each of us will try to add to the other's value and only I will succeed. But you will go on cooperating because that is the best you can do - you can't force me to defect.

Two utilitarians will realize that in this case the best collective outcome is for me to cooperate and you to defect. So this is what will happen. Note that the actual action taken depends on the numbers. The utilitarians' choice will depend on finding the maximum of X+Y, X+1, Y+1, 2, and choosing the appropriate action to achieve that sum. There is no paradox.
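
The utilitarian calculation here is just a comparison of four sums; the short check below (an illustration only, using the X = 0.5, Y = 2 example) confirms that the collective maximum is for me to cooperate and you to defect.

```python
X, Y = 0.5, 2.0   # the unbalanced example above; first letter below is my move, second is yours

totals = {
    ("C", "C"): X + Y,        # 2.5
    ("C", "D"): 0 + (Y + 1),  # 3.0  (I hand over my unit, you keep yours)
    ("D", "C"): (X + 1) + 0,  # 1.5
    ("D", "D"): 1 + 1,        # 2.0
}

best = max(totals, key=totals.get)
print(best, totals[best])     # ('C', 'D') 3.0 : I cooperate, you defect
```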

The Kantian framework looks precarious in these conditions. Let us consider first the possibility that we can make our Kantian decisions independently. Then, for you, the issue is whether to cheat on an exchange when you are getting something worth more than you are giving. If you are self-interested, then, as discussed above, by universalizing your question, you reach the conclusion, no, you shouldn't cheat. Since we are both Kantians I can be sure you will cooperate. For me the issue is should I cheat on an exchange when the alternative is getting something worth less than I am giving? The Kantian question is, suppose everybody did that, what would happen? That is a difficult question to answer, because if everyone were in this situation, then everyone would defect because that would be better for everyone (as mentioned above). But not everyone is in this situation because we have a transaction where the other party is going to benefit, and this is an essential part of the problem. Is there a rational reason to count up the actual values involved to make a decision? For example if I only have to sacrifice a little for you to gain a lot, should that yield a different result to when I end up with nearly nothing and only help you a little? The answer to this is no, unless we're prepared to reveal that under our Kantian veneer we're really consequentialists. Therefore I have to find a single general solution to my very difficult question.

But things may not be as difficult as I have painted. Perhaps we should not make our Kantian decisions independently. Instead, when you are deciding whether to cheat, you should bear in mind the fact that the deal is bad for me. Your Kantian question then becomes, suppose all exchanges involved one party losing in value, would that nullify the reason for exchanges? As before, if the motivation for a transaction is to increase value, then we should say, yes, the reason for exchange would disappear if one party always lost out. Therefore you should not take part in the trade. Unfortunately the prisoner's dilemma does not give "don't take part" as an option. But you know that I know you're a Kantian and that you will be thinking this way. Therefore you decide you can defect in fairness to me, knowing that I will defect too, so that collectively we achieve the equivalent of a "don't take part" solution. It is interesting to see how the idea of autonomy seems to rise out of this argument: whatever the payoffs, you don't have the right to demand my sacrifice for your greater good. Thus we don't engage in the game: indeed we "trust" each other to defect!

We seem to have rescued the Kantian approach to some degree, even though there is still an assumption about adding value under the universalizable rules we have discovered. However, the above argument has actually undermined the first formulation of the categorical imperative. This is because I have argued that the players cannot consider their Kantian decisions independently. They have to consider the whole situation as a single question. Thus any moral decision becomes contextualized. It is impossible to give a single answer to the question "should I tell a lie?", because the embedding of this question in, for example, "should I tell a lie to the Gestapo officer at the door asking about the Jews in the attic?" changes the Kantian question. This is problematic for the decontextualized reasoning that Kant seeks for morality.

Once again, although the Rawlsian and the Kantian come from the same tradition, they reach different conclusions. Although the simple maximin solution to the unbalanced game is for both to defect, the Rawlsian will ask, what a priori wealth do the two parties have? In the initial condition we might well agree that when payoffs are not too unbalanced, it is appropriate for the rich to lose a little while the poor gain a lot. This would be applying the maximin criterion, R2, not to this particular transaction, but to our overall wealth or welfare. Thus it is that the Rawlsian approach does not give a single answer for what to do morally when the context is pre-existing inequity.

On a literal reading of Jesus, the Christian appears to be forced into the altruist's position in these cases. So the rich should give to the poor, even when there is no benefit to the giver. The morality also says that the poor should give to the rich on the same terms. But it may be that other scriptures can be brought into play to balance the teaching of Jesus and move the Christian closer to a Kantian position. I do not know, however, of any decision rule that would allow these issues to be resolved.

The Repeated Game with Unbalanced Payoffs

We now consider the extension of the unbalanced payoff matrix to the repeated game. In all cases it is the ecological implications that are important, not the fact of repeated encounters.

The repeated game with unbalanced payoffs is a very interesting case for some moralities, but let us first dispose of those for which repetition makes no difference. The Altruist and the Christian will continue to cooperate no matter what the payoffs and what the ecological implications. The Kantian will continue to defect (and expect the other player to be a Kantian and defect too.) This is an artifact of the model more than of those moralities. If, for example, two Christians, one rich, one poor, were to meet and consider a transaction, they might well agree that the poor should not sacrifice for the sake of the rich. However, neither would reach this conclusion on the basis of Jesus's teaching. Certainly the rich would be required to sell everything and give to the (impersonal) poor, but both sides would be required to lay down their lives for each other. The imbalance does not remove obligation from the poor. Similarly a Kantian has reasons for generosity that might transcend the defection that represents a rejection of the unbalanced game.

When egoists play one run of the unbalanced single run game, they always defect. What about the repeated game? Without ecological considerations, it clearly pays for the one who makes a loss to defect all the time. But suppose that by the rich giving to the poor, the poor is able to provide more value to the rich in future interactions. In other words, suppose X, the value of your unit to me, is currently less than 1. I certainly have no reason to trade my 1 for your X this time, unless your X is a monotonically increasing function of your total wealth. Then, if your wealth increases, X will too, so I have an interest in investing in you. With some (perhaps complicated) mathematical analysis, I could weigh up how much my "donation" would increase your wealth and therefore the future value of your units to me, and if these were sufficient to give me long term gain, I would decide to cooperate. This is the argument for development aid from self interest. It is unclear how well it works in practice, but, if we are fundamentally egoists, this kind of argument may be the best we have to persuade us to provide something for the unlucky. However, if we can be sure that whatever I give you X will never be greater than 1, then, as an egoist, I have no reason to cooperate at all. For example, if you are severely disabled and, in doing a job for me, will never be able to provide value at the level of the minimum wage, I will not hire you. Indeed, if I am an egoist, I won't restrict my self-interest to my capitalist workplace: I will have nothing to do with you at all, for interaction demands my time and effort and yields no value to me.
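
The weighing-up can be gestured at with a toy model in which every number is an assumption of mine: your wealth grows by 2 (the value of my unit to you) each time we trade, the value of your unit to me is an increasing, saturating function of your wealth starting below 1, and I compare my cumulative return over 100 rounds if I always trade versus if I never do.

```python
# Toy model (all figures assumed): does it pay a self-interested rich player to keep
# trading with a poor player whose usefulness to the rich grows with their wealth?

def x_of_wealth(wealth):
    # Assumed: the value to me of your unit starts at 0.5 and rises with your wealth, capped at 3.
    return min(3.0, 0.5 + 0.05 * wealth)

def my_total(always_trade, rounds=100):
    my_score, your_wealth = 0.0, 0.0
    for _ in range(rounds):
        if always_trade:
            my_score += x_of_wealth(your_wealth)  # mutual cooperation: I end the round holding X
            your_wealth += 2.0                    # ... and you end it holding Y = 2
        else:
            my_score += 1.0                       # no trade: I simply keep my unit each round
    return my_score

print("Always trade:", my_total(True))    # patient "investment" eventually pays the egoist more
print("Never trade: ", my_total(False))   # refusing to trade leaves X stuck below 1
```

With a flat x_of_wealth that never rises above 1, the comparison reverses, which matches the point that a pure egoist then has no reason to cooperate at all.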

Unbalanced matrices give utilitarians many problems. Not only does the argument for choosing the highest total value outcome extend to the repeated game, but considerations about the ecology don't change the strategy - they just make it seem more unfair. The utilitarian with the low payoff should keep on cooperating, and the utilitarian with the high payoff should keep on defecting. In this way they maximize total utility. The argument about the welfare of the whole ecology leading to TIT FOR TAT no longer applies, because the utilities are imbalanced. Other consequentialist considerations about the measurement of value crowd in to complicate this case.

The utilitarian may protest that even the repeated, ecologically sensitive game doesn't tell you the key information: our starting positions. How much utility do we already have? If I have lots and you have a little, then that should override the local question of this particular transaction. This, after all, was the kind of argument I used in developing the Rawlsian response. But the starting or current position can be captured in the game model by assuming that the values being exchanged are not absolute units of utility but weighted increments to our current positions. So if you are very poor and I am very rich, then my giving 1 unit (e.g. 1% of my income) provides you 50 units of value (a 50% increment in your income), and in return you lose nothing and I gain a warm fuzzy feeling that only slightly compensates for my 1% of income. Thus I should cooperate even though half of my 1% donation never makes it to you, being swallowed up in my favourite charity's administrative costs. Here the utilitarian agrees with the Rawlsian. But note that if you, though poor, could somehow make (rich) me very much more happy by giving me something that doesn't mean very much to you, then the utilitarian says you should do so. Rawls disagrees.
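
The weighting can be made concrete with a small calculation, using assumed figures that roughly match the example in the text (a donor a hundred times richer than the recipient, half the donation lost to administration, and a small "warm fuzzy" return to the donor).

```python
# Assumed figures for illustration only; value is measured as a percentage change
# in each party's current income, as suggested in the text.
rich_income, poor_income = 100_000.0, 1_000.0

donation  = 0.01 * rich_income        # 1% of the rich player's income
delivered = 0.5 * donation            # half is swallowed by administrative costs
warm_glow = 0.3                       # assumed intangible return to the donor, in the same units

cost_to_rich = 100 * donation / rich_income - warm_glow    # 1.0 - 0.3 = 0.7
gain_to_poor = 100 * delivered / poor_income               # 50.0

print("Change in total weighted value:", gain_to_poor - cost_to_rich)   # strongly positive
```

On this weighting the utilitarian sum comes out heavily in favour of the transfer, which is the sense in which the utilitarian here agrees with the Rawlsian.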

The outcomes are summarized in the following table.

                    Single-run unbalanced game    Repeated unbalanced game with (or without) ecology

Egoist              DEFECT                        COOPERATE if doing so will enhance the value of the
                                                  resource received from the opponent in future
                                                  encounters; otherwise DEFECT
Altruist            COOPERATE                     COOPERATE
Utilitarian         LOW-PAYOFF COOPERATE,         LOW-PAYOFF COOPERATE,
                    HIGH-PAYOFF DEFECT            HIGH-PAYOFF DEFECT
Kantian (K1, K2)    DEFECT                        DEFECT
Rawlsian (R1, R2)   DEFECT or COOPERATE           DEFECT or COOPERATE depending on overall current
                    depending on overall          position
                    current position
Christian           COOPERATE                     COOPERATE

Suppose I encounter an old person, someone with a debilitating disease, a desperately poor tramp or other needy person, who wants me to do something for them with no benefit to me. If I am an egoist, I refuse. If an altruist or a Christian, I cooperate. As a Utilitarian, I judge that their benefit would almost certainly be greater than my loss, so I cooperate. The Kantian analysis above shows an outcome of DEFECT, but this means simply that I am not obligated to help. As a Kantian, I can choose to be generous, but my autonomy means that I do not have to engage in a transaction that is bound to cause me loss. Similarly, for the Rawlsian, I may be motivated because their current situation is so much worse than mine, but I remember that current situation doesn't equate with initial condition, and that a single transaction does not have to maximize minimum payoffs. I may cooperate or not.

We see that the moral conclusions to a particular case of imbalanced payoffs are diverse. The egoist solution is repugnant because it advocates the strong exploiting the weak. The altruist and Christian responses look better except they imply that the desperately poor person has an obligation to sacrifice everything for the whim of the rich, on demand. The Kantian and Rawlsian positions are complicated, yielding not moral guidance but merely boundaries. The utilitarian is the most consistent of all positions, with no paradox in any case, but leads almost to the altruist extreme.

Are there other moral theories that would have fared better, giving simple and consistent answers in all the Prisoner's Dilemma contexts? Perhaps a theory of simple reciprocity would work. This would demand that I act as I would want to be acted towards if I were in the other person's shoes. Unfortunately this does not yield a solution to the Repeated Prisoner's Dilemma, and in the unbalanced case it draws in whatever prejudices I already have about justice and the like. However, its strength is that it wears its weaknesses on its sleeve - it is unambiguously an imaginative, sympathetic approach.

Conclusion

Axelrod's experiments with the repeated prisoner's dilemma seem to yield a reason why self-interested agents could act cooperatively, even in the absence of a Hobbesian central authority. Substituting other kinds of morality for egoism in the repeated prisoner's dilemma does not usually change the outcomes, suggesting that TIT FOR TAT is a very stable strategy and that alternative moralities cannot claim a practical advantage over egoism. This is a very interesting result: if the prisoner's dilemma were an accurate reflection of the real world, we would have little reason to prefer one moral theory over another.

When the players face an unbalanced payoff matrix, the differences between the egoist and the other moral theories become apparent. However, it is not as though the egoist stands alone against the united responses of the rest. There is considerable diversity among moral theories in responding to this case. What, then, can we say?

First, I do not think that any of the games discussed here gives a way of deciding between moral theories. Some outcomes may be more palatable than others, but equally some theories fit the models better than others. As I commented above, we have to be careful when we carry results from the formal, closed environment of games into the messiness of the world. Even so, the diversity of responses to even tightly constrained questions shows how different moral theories are. Moreover, some of the reasoning from theory to outcome is quite difficult, even for these tightly constrained cases.

Second, in a descriptive sense, the repeated prisoner's dilemma does give a persuasive explanation for elementary moral behaviour, assuming we are (a) fundamentally egoists, and (b) subject to no transcendent moral constraints or guidance. However, it does not explain why we have invented or discovered sophisticated moral theories. Despite their diversity of responses to the unbalanced game, the theories I have considered do have common features that set them against pure egoism. In particular they are agent-neutral (except for X), or universal.

There are a number of (descriptive) possibilities for why these theories exist. First, they could be demanded by reason (as Kant would argue); if this is the case then, presumably, only one of them is true. Second, they could have developed, or evolved, in a similar way to cooperation, as insurance moralities. Thus, if I can expect to experience a significant degree of pain and danger in my life, and there are ways in which these can be relieved by cooperation, then I have a reason to adopt a universalized morality which fosters altruism in unbalanced cases. Then, following an injury, I can be assured of a degree of care, even if I will never recover sufficiently to contribute value to other players. It is possible that this kind of reciprocity could be modelled in a computer simulation on the lines of Axelrod's. If group altruism can arise in this way, then it is not unreasonable that people try to sum up the situation in a moral rule or a sentimental response. As soon as this is done - for example, as soon as the actor starts looking at insurance morality in terms like "what if it were me?" - all the ethical questions about the scope of morality and its unifying principles get bootstrapped. Something that originates in self-interest can, through the parsimony of our thinking, come to be expressed as a universalized principle that extends beyond the group that provides my "insurance" to all persons, or all sentient beings. Whether or not this is how moral sentiments arise, my own feeling about virtues is that they exist to extend the sphere over which we ask questions like "what if it were me?". In the end this might be an emotivist rather than a rational normative moral stance, but I remain unpersuaded that we can do better.
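The suggestion above that insurance reciprocity could be modelled along the lines of Axelrod's simulations can be made a little more concrete. The following is a speculative sketch, not a reconstruction of any existing experiment: every parameter (the injury chance, the injury length, the cost and benefit of help) is an assumption, and the two rules compared - keep helping injured members versus withhold help from anyone who cannot reciprocate - are my own simplifications:

import random

# Speculative "insurance morality" sketch (not Axelrod's own model): players
# occasionally become injured and can contribute nothing for a while.
ROUNDS = 10_000
INJURY_CHANCE = 0.01      # per-round chance that a healthy player is injured (assumed)
INJURY_LENGTH = 50        # rounds spent unable to contribute anything (assumed)
HELP_COST, HELP_BENEFIT = 1, 3   # helping costs the helper 1, is worth 3 to the helped (assumed)

def average_payoff(insurance, n=10):
    """Average payoff per player in a small group.

    With insurance=True, healthy players also help injured partners, who
    cannot reciprocate; with insurance=False, they do not."""
    injured_until = [0] * n
    score = [0.0] * n
    for t in range(ROUNDS):
        # Injuries strike at random and last a fixed number of rounds.
        for i in range(n):
            if injured_until[i] <= t and random.random() < INJURY_CHANCE:
                injured_until[i] = t + INJURY_LENGTH
        # Each player meets one neighbour and decides whether to help.
        for i in range(n):
            j = (i + 1) % n
            if injured_until[i] > t:
                continue                  # the injured cannot give help
            if injured_until[j] > t and not insurance:
                continue                  # no help for non-reciprocators
            score[i] -= HELP_COST
            score[j] += HELP_BENEFIT
    return sum(score) / n

random.seed(0)
print("with insurance:   ", average_payoff(True))
print("without insurance:", average_payoff(False))

Because each act of help creates more value than it costs, the insurance rule raises the average payoff whenever injuries are common enough to matter. This does not show that such a rule would evolve in a tournament with selection; it only illustrates the welfare logic that might drive it.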

I write this conclusion in a week when two youths at Columbine High School in Colorado shot and killed 12 classmates and a teacher, then, apparently, turned their guns on themselves. Their action was irrational according to any theory of morality. In the face of such horrors, it seems almost impossible to attempt an explanation. Real life is so complicated that the right perspective is elusive. The artificial model we have been considering at least allows questions about such cases to be asked from a small number of different perspectives; within the limited framework of the Prisoner's Dilemma, we can make some headway.

For example, the idea of value is at the heart of the dilemma. Even egoism requires that some things are worth more than other things. We may, as moralists, argue about what things are worthy of value - happiness, dignity, honour, fulfilment, virtue, autonomy - but perhaps the crucial question is how we come to value anything at all. One explanation of the Colorado teenagers' actions is that they had no concept of value. Did they live in such a grey world that they saw nothing worth anything?

Another, more down-to-earth, explanation is the power of resentment. The prisoner's dilemma models this simply as repeated mutual defection, so it does not capture the cumulative effect of repeated insults and jibes. However, it does simulate resentment's irrational, immoral behaviour, and shows how difficult it is to escape. If two TIT FOR TAT players are in a state where both are always defecting, either of them has to forgive twice, or each of them has to forgive once, to move to a mutually cooperating state. But if both are playing TWO TITS FOR A TAT, either must forgive twice in a row. With N TITS FOR A TAT, there need to be N consecutive forgivenesses, and as N increases, the other player looks more and more like ALL D. What is the moral response? The New Testament emphasizes forgiveness and grace, and these are surely remedies of a kind. But X's forgiveness is very hard to achieve in practice; probably conversion is a prerequisite. The trick, in real life, is to take an action that leads as quickly as possible to a mutually beneficial relationship. The prisoner's dilemma tells us that if we, as paranoid N TITS FOR A TAT players, can be persuaded to cooperate at the same time for long enough, we can break the cycle. The problem then is that real-life cases have noise (i.e. misperceptions) that can throw us back to resentment very quickly. Finding ways to evoke spontaneous simultaneous cooperation, reducing noise, and limiting the lengths of retaliations are all strategies for breaking out of degenerative interactions. Could any of these have been applied in the antagonistic atmosphere of Columbine High School?

Finally, it is clear that the Colorado situation was one involving group allegiances, where intra-group cooperation and inter-group defection fed on each other. This situation cannot be changed by reducing the intra-group payoffs, only by adjusting the inter-group payoffs. In general, the manipulation of payoffs is a way for authorities to attempt to control behaviour, but inter-group antagonisms are very hard to break. This might be a fruitful area of research for future Prisoner's Dilemma simulations.
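Returning to the arithmetic of forgiveness above: the claim about N TITS FOR A TAT can be checked with a small simulation. The sketch below uses one natural formalization of N TITS FOR A TAT - cooperate only if the opponent has cooperated on each of the last N moves - which is an assumption on my part. It starts two such players in entrenched mutual defection and finds the smallest number of simultaneous, strategy-overriding cooperations that restores lasting cooperation:

def n_tits_for_a_tat(opponent_history, n):
    """Cooperate only if the opponent's last n moves were all cooperations
    (one plausible formalization of N TITS FOR A TAT, assumed here)."""
    return "C" if opponent_history[-n:] == ["C"] * n else "D"

def forgiveness_restores_peace(n, k, prelude=20, horizon=100):
    """Start both players in mutual defection, let both 'forgive' (cooperate
    regardless of strategy) for k rounds, then resume the strategy proper,
    and report whether the final rounds are mutual cooperation."""
    a_history = ["D"] * prelude
    b_history = ["D"] * prelude
    for t in range(horizon):
        if t < k:                    # simultaneous forgiveness
            a_move = b_move = "C"
        else:                        # back to N TITS FOR A TAT
            a_move = n_tits_for_a_tat(b_history, n)
            b_move = n_tits_for_a_tat(a_history, n)
        a_history.append(a_move)
        b_history.append(b_move)
    return a_history[-10:] == ["C"] * 10 and b_history[-10:] == ["C"] * 10

for n in (1, 2, 3, 4):
    needed = next(k for k in range(0, 10) if forgiveness_restores_peace(n, k))
    print(f"N = {n}: {needed} simultaneous forgiving moves needed")
# Prints 1, 2, 3, 4 - exactly N consecutive acts of forgiveness.

On this reading, one-sided forgiveness is even more costly, since the forgiver must absorb further defections while the opponent's memory of past defections clears - which is why a heavily retaliatory player comes to look so much like ALL D.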

I have tried to argue for using the Prisoner's Dilemma as a probe with which to compare moral theories in a constrained, sterile setting. As the example above indicates, there remains plenty of scope for Prisoner's Dilemma experimentation and thinking in analysing problems of the real world. There is no promise of new ethical solutions, but perhaps there is the possibility of finding moral implications that are usually submerged in complicating factors.

References

Robert Axelrod, The Evolution of Cooperation, Basic Books, 1984.

Robert Axelrod, "The Evolution of Strategies in the Interated Prisoner's Dilemma," in Genetic Algorithms and Simulated Annealing, ed. Lawrence Davis, Pitman, 1987, pp 32-41. Reprinted in Robert Axelrod, The Complexity of Cooperation, Princeton University Press, 1997.

David Gauthier, Morals by Agreement, Clarendon Press, 1986.

Douglas R Hofstadter, Metamagical Themas, Basic Books, 1985. (Chapter 29, titled "The Prisoner's Dilemma Computer Tournaments and the Evolution of Cooperation", is based on Hofstadter's May 1983 Scientific American column.)

Nigel Howard, Paradoxes of Rationality: Theory of Metagames and Political Behavior, MIT Press, 1971.

Derek Parfit, "Prudence, Morality and the Prisoner's Dilemma", Proceedings of the British Academy, Vol 65, 1979. (This is essentially the same material as Chapters 23 and 24 of Reasons and Persons below.)

Derek Parfit, Reasons and Persons, Clarendon Press, 1984.

Peter Singer, How Are We to Live? Ethics in an Age of Self-Interest, Prometheus Books, 1995. (Chapter 7 deals with the repeated prisoner's dilemma.)

Jianzhong Wu, Robert Axelrod, "How to Cope with Noise in the Iterated Prisoner's Dilemma", Journal of Conflict Resolution, Vol 39, No 1, March 1995, pp 183-189. Reprinted in Robert Axelrod, The Complexity of Cooperation, Princeton University Press, 1997.