Axelrod discusses how cooperation can develop through the lens of game theory--specifically, the prisoner's dilemma. For those of you who don't know it, I'll try to explain the dilemma briefly. For the rest of you who already know your game theory, skip to the stars.
The prisoner's dilemma is modeled on the following scenario:
Two criminals are arrested for a crime they committed together and put in separate rooms, where they cannot communicate. Both prisoners are told that if neither squeals (i.e., they both cooperate with each other), they can only be convicted of a minor crime and each will get a small sentence. If, however, one prisoner rats out the other (i.e., one defects) and the other stays silent (cooperates), then the rat will be released immediately, while the sucker gets hit with a harsh jail term. If both rat each other out (both defect), then both will be sentenced to moderately long terms in jail (longer than the minor charge, but shorter than the sucker's sentence).
Put in simpler terms, the prisoner's dilemma occurs when T > C > D > S, where T = temptation (you defect when your partner cooperates), C = cooperation (both parties cooperate), D = defection (both parties defect), and S = sucker (you cooperate and your partner defects). Often, it is expressed in terms of points. In a "T" scenario, you (the defector) get 5 points while your partner (the sucker) gets 0 (5,0), while in an "S" scenario it is inverted (0,5). If both parties cooperate, both get 3 points (3,3), while if both defect, they each get one point (1,1).
The prisoner's dilemma is used in many fields (prominently in International Relations) to demonstrate why sub-optimal outcomes can occur even among rational actors. Specifically, if the prisoner's dilemma "game" is played only once, rational actors will always choose to defect (thus getting one point each), even though the optimal outcome of mutual cooperation would give them both three. Some people say it is for fear of becoming the sucker (getting zero points, or the harsh prison term), but that's only part of the story. In reality, the reason both parties will defect is that defection is always the better option. If you think your partner will defect, then you should also defect, because it gives you one point instead of zero. If you think your partner will cooperate, you should still defect, because it gives you five points instead of three. This shows why cooperation--even when it is mutually beneficial--can be very difficult to bring into being.
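To make that logic concrete, here's a minimal sketch in Python (the encoding is my own, not anything from Axelrod's book) of the payoffs above, checking that defection wins no matter what the partner does:

```python
# One round of the prisoner's dilemma, scored from my perspective:
# (my_move, partner_move) -> my points. "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # sucker: I cooperate, my partner defects
    ("D", "C"): 5,  # temptation: I defect, my partner cooperates
    ("D", "D"): 1,  # mutual defection
}

# Whatever the partner does, defecting scores strictly better than cooperating.
for partner_move in ("C", "D"):
    print(f"partner plays {partner_move}: "
          f"defect -> {PAYOFF[('D', partner_move)]}, "
          f"cooperate -> {PAYOFF[('C', partner_move)]}")
# partner plays C: defect -> 5, cooperate -> 3
# partner plays D: defect -> 1, cooperate -> 0
```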
However, this logic only holds if the game is played one time. If the game is played repeatedly (is iterated) over an indefinite period, then it is possible for cooperation to develop, because one can build trust and thus convince partners to go for the mutually beneficial arrangement (which yields more benefits over the long run) rather than stay stuck in a mutually suboptimal cycle of defections.
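Some quick back-of-the-envelope arithmetic (my own illustration, using the point values from above) shows how the long run changes the calculus:

```python
ROUNDS = 10  # any reasonably long horizon makes the same point

# Mutual cooperation pays 3 per round; mutual defection pays 1 per round.
print("always cooperate together:", 3 * ROUNDS)  # 30 points each

# A single successful betrayal pays 5 points up front, but if it poisons
# the relationship into mutual defection for the remaining rounds...
print("defect once, then mutual defection:", 5 + 1 * (ROUNDS - 1))  # 14 points

print("mutual defection throughout:", 1 * ROUNDS)  # 10 points each
```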
***
Axelrod devised a prisoner's dilemma "tournament" in which different players would each submit a strategy, the goal being to get the most points possible. So, for example, a player could submit "defect every time," which would always choose to defect. Each strategy faced all the others (plus "random") in a round-robin format.
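To give a feel for the mechanics, here's a toy reconstruction of such a round-robin in Python. To be clear, this is my own sketch, not Axelrod's actual code; the strategy names, the 200-round match length, and the harness design are all my assumptions:

```python
import random
from itertools import combinations

# (my_move, partner_move) -> (my points, partner's points)
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# A strategy is a function from the partner's past moves to the next move.
def always_defect(partner_history):
    return "D"

def always_cooperate(partner_history):
    return "C"

def random_move(partner_history):
    return random.choice("CD")

def play_match(strat_a, strat_b, rounds=200):
    """Play one iterated match and return each side's total points."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(history_b)  # each side sees only the other's moves
        move_b = strat_b(history_a)
        pts_a, pts_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pts_a, score_b + pts_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tournament(entries, rounds=200):
    """Round-robin: every strategy plays every other; return total scores."""
    totals = {name: 0 for name in entries}
    for (name_a, strat_a), (name_b, strat_b) in combinations(entries.items(), 2):
        score_a, score_b = play_match(strat_a, strat_b, rounds)
        totals[name_a] += score_a
        totals[name_b] += score_b
    return totals

entries = {"always defect": always_defect,
           "always cooperate": always_cooperate,
           "random": random_move}
print(tournament(entries))
```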
The winner was called "tit-for-tat." It would always cooperate first, and thereafter would do whatever move its partner had made on the previous turn. So if the partner defected, it would defect the next turn, and if the partner cooperated, it would cooperate the next turn. This is an example of what Axelrod called a "nice" strategy, in that it would never be the first to defect--it would only defect in response to the partner defecting previously.
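The strategy itself is strikingly short. A sketch in the same style as the harness above (again, my reconstruction, not Axelrod's code):

```python
def tit_for_tat(partner_history):
    """Cooperate on the first move; thereafter copy the partner's last move."""
    if not partner_history:
        return "C"              # never the first to defect -- a "nice" strategy
    return partner_history[-1]  # echo whatever the partner just did
```

You can drop it into the toy tournament above with `entries["tit for tat"] = tit_for_tat` and re-run: it plays the same game as everyone else, yet it never throws the first punch.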
It turns out that "nice" strategies have some interesting qualities (finally, the meat of the post). The first is that the benefits of being nice were pervasively under-estimated: the vast majority of top-performing strategies turned out to be "nice," far out of proportion to the share of "nice" strategies submitted. Similarly, the bottom of the pack was almost entirely made up of "mean" strategies. This is very counter-intuitive--in a world with a strong assortment of "meanies," not only can "nice" guys survive, but they can thrive.
Axelrod posits that cooperation can occur when players meet each other frequently and the benefit of preserving a future relationship outweighs the short-term gain from defecting. So, for example, I'm far more likely to cheat a businessperson I know I'll never see again than one I have to work with day in and day out. Axelrod also establishes that such cooperation can develop without conscious thought (as in non-human symbiotic relations where one party could prey on the other), and even between supposed enemies.
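Axelrod makes this "shadow of the future" precise with a discount parameter w: the weight (or probability) attached to the next encounter. As I read him, tit-for-tat cannot be invaded by constant defection once w is high enough relative to the payoffs; a quick check with this post's point values (translating the R and P of his notation into the C and D used above):

```python
# Axelrod's stability condition for tit-for-tat, rewritten in this
# post's letters: cooperation holds once
#     w >= max((T - C)/(T - D), (T - C)/(C - S))
T, C, D, S = 5, 3, 1, 0
threshold = max((T - C) / (T - D), (T - C) / (C - S))
print(threshold)  # 0.666...: the future must matter at least ~2/3 as much
                  # as the present for cooperation to be stable
```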
This has interesting political implications. First, it implies that the world does not have to be dog-eat-dog. If people see each other and interact frequently, then cooperation becomes the optimal strategy. Specifically, it is an excellent argument for diversity (economic, racial, and otherwise). The short-term benefits of "selling out" another person are only worthwhile if one has no interest in maintaining a positive relationship with them. So, for example, a person who knows no homosexuals pays very little price for supporting their demonization. But if a gay person is their neighbor or grocer or banker or brother, the costs of defection become significantly higher. Hence, if we want to increase social cooperation between erstwhile feuding groups, it is quite possible--if we are willing to put resources into integrating the communities. The reverse, of course, is also true: if we want to maintain a conflict scenario, it is vitally important that we segregate the parties so they do not often encounter each other and are unlikely to "play the game" with the same person on multiple occasions.
What it also tells us is not to despair at the prospects for cooperation even among long-feuding foes. Given the right conditions (conditions which are very possible), cooperation becomes the most stable and most rational course of action for all parties. Of course, there are responses to this analysis (in IR, for example, realists would argue that short-term gains always outweigh long-term ones, because a country that is exploited [played for the sucker] in the short term might not survive to see the long term). But by and large, it is important to remember both that cooperation is quite feasible, and that, empirically, humans tend to be far too pessimistic about its possibilities for success.