In the early 1980s, Robert Axelrod published several articles and a monograph on The Evolution of Cooperation (Axelrod, 1984), discussing and interpreting the results of his well-known computer tournaments and of a series of subsequent simulations. Both the tournaments and the simulations were conducted in order to find a suitable, evolutionarily stable strategy for the iterated prisoner’s dilemma, which is generally considered an appropriate model of a certain type of social dilemma that arises when “the pursuit of self-interest by each leads to a poor outcome for all.” (Axelrod, 1984, p. 7)
Analyzing and inductively generalizing the data provided by the simulations, Axelrod drew conclusions concerning the conditions for the emergence of cooperative behavior among selfish individuals in the absence of a central authority. The results of the tournaments and simulations led to a generalized theory of the evolution of cooperation, which claims to provide an explanation for various historical, social and biological phenomena like the development of a live-and-let-live system during the trench warfare in World War I (Axelrod, 1984, ch. 4) or biological mutualisms as, for example, between cleaner fish and predatory fish (Axelrod, 1984, ch. 5).
Axelrod’s work contributed extensively to popularizing evolutionary game theory and promoting computer simulation as a scientific method in the social sciences. By 1994, more than 200 articles closely related to The Evolution of Cooperation were listed in an annotated bibliography (Axelrod and D’Ambrosio, 1994). Beyond its unquestionably high impact on subsequent research, which ushered in the “simulation era” (Hartmann, 1996, p. 77) in the social sciences, the use Axelrod made of computer simulations raises questions about their methodological and epistemological status: If, as Axelrod states in his paper Advancing the Art of Simulation in the Social Sciences, simulation can serve the purposes of prediction, proof and even scientific discovery (Axelrod, 2005, p. 3), what need is there for conducting experiments any longer? Can’t we simulate science? Admittedly, this suggestion sounds somewhat exaggerated, but why exactly do most of us share the intuition that there are persistent fundamental differences between simulations and experiments? What are the characteristic features distinguishing them? Do computer simulations in general - and Axelrod’s tournaments in particular - resemble experiments in their potential to provide us with surprising results that permit further theorizing? Or are they nothing more than “number-crunching techniques” (Winsberg, 1999, p. 275), using brute-force computational means to generate data from theoretical knowledge and assumptions already built into the underlying model?
The question of where to draw the conceptual line between simulation and experiment has turned out to be of great interest to philosophy of science, not least since the categorization might be relevant to the way simulation results are assessed and used. The objective of this paper is to elaborate on the distinctive characteristics of simulations in contrast to experiments and to propose an answer to the question of whether to classify simulation as a form of theorizing, experimenting, or as a “third way of doing science” (Axelrod, 2005, p. 5), somewhere in between deduction and induction, theory and experiment.
For this purpose, Axelrod’s work concerning the evolution of cooperation - a paradigmatic simulation-based approach in the social sciences - will be briefly discussed, followed by a more detailed consideration of the features that may serve to distinguish simulation from experiment. Different proposals such as material similarity between object and target (Morgan, 2003, 2005), the possibility of direct physical interaction or of unexpected “intervention” by nature, and the role of background knowledge in justifying external validity (Winsberg, 2009) are taken into account. Finally, the problem of assessing the epistemic power and external validity of simulations will be addressed: Can we learn from simulations? Are the “surprising” results of Axelrod’s tournaments reliable, does the theory of the evolution of cooperation tell us anything new and can it be empirically confirmed?
2 Axelrod’s computer tournament — a brief overview of its setup, results and further conclusions
In order to find a good strategy for situations of the type of a repeated prisoner’s dilemma (RPD) - which is considered an adequate game-theoretical model of a certain form of cooperation dilemma - Robert Axelrod initiated a computer tournament, asking participants from various countries and research fields to submit their decision rules. The first round, in which the number of moves was fixed at 200, consisted of 14 different strategies and RANDOM, a rule that defects and cooperates at random, playing against each other. “Amazingly” (Axelrod, 1984, p. 20), TIT FOR TAT (TFT), submitted by Anatol Rapoport - a simple strategy that cooperates in the first move and henceforward imitates the behavior of its opponent - won the game.
The scores reached by the different decision rules were conveyed to the participants, and a second round was announced. This time, the number of possible interactions was not fixed in advance, but determined by a certain probability w of two players meeting again after a move - a “more realistic assumption” (Axelrod and Hamilton, 1981, p. 1392). 62 competing strategies were submitted by amateurs and experts alike. TFT won again. According to Axelrod, the analysis of the results “revealed” (Axelrod, 1984, p. 20) four characteristics that account for a decision rule’s success: kindness, forgiveness, willingness to retaliate, and clarity. TFT was a kind rule because it always cooperated in the first move (Axelrod, 1984, p. 33); it was forgiving because it was willing to cooperate after another player had defected, rather than answering a single defection with permanent retaliation (Axelrod, 1984, p. 36); it was retaliatory because it immediately defected after an unwarranted defection by the other strategy (Axelrod, 1984, p. 44); and it was clear and easy to understand (Axelrod, 1984, p. 54).
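Mechanically, such a tournament is easy to reconstruct. The following Python sketch is an illustrative reconstruction, not Axelrod’s original code: it assumes his standard payoff values (T = 5, R = 3, P = 1, S = 0) and pits TIT FOR TAT against ALL D and RANDOM over the 200 fixed moves of the first round; the function and variable names are my own.

```python
import random

# Axelrod's standard payoffs: T(emptation)=5, R(eward)=3, P(unishment)=1, S(ucker)=0
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(own_history, opp_history):
    # Cooperate on the first move, then imitate the opponent's last move.
    return 'C' if not opp_history else opp_history[-1]

def all_d(own_history, opp_history):
    # Always defect.
    return 'D'

def random_rule(own_history, opp_history):
    # Defect and cooperate at random.
    return random.choice(['C', 'D'])

def play_match(rule_a, rule_b, moves=200):
    # One match of the first-round format: 200 moves, cumulative scores.
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(moves):
        a, b = rule_a(hist_a, hist_b), rule_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b

def round_robin(rules):
    # Every rule meets every rule (including a twin of itself);
    # the total score across all matches decides the tournament.
    totals = {name: 0 for name in rules}
    for name_a, rule_a in rules.items():
        for name_b, rule_b in rules.items():
            sa, _ = play_match(rule_a, rule_b)
            totals[name_a] += sa
    return totals
```

Against a cooperator TFT earns mutual cooperation throughout (600 points over 200 moves); against ALL D it loses only the first move and then settles into mutual defection, which illustrates why a rule can be both kind and retaliatory.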
TFT’s success in both tournaments led Axelrod to adopt an “evolutionary perspective” (Axelrod, 1984, p. viii), considering TFT’s robustness, stability and initial viability. To address the dynamic question of the decision rules’ future ecological success, Axelrod simulated their population dynamics over 1000 generations (rounds):
“The idea is that the more successful entries are more likely to be submitted in the next round, and the less successful entries are less likely to be submitted again. To make this precise, we can say that the number of copies of a given entry will be proportional to that entry’s tournament score.” (Axelrod, 1984, p. 49)
It could be shown that - with the payoffs and the participating decision rules resembling those of the second tournament - TFT “displaced all the other rules and went to fixation [...] in the long run” (Axelrod and Hamilton, 1981, p. 1393), meaning that TFT is a robust strategy that can do well in a varied environment. Furthermore, Axelrod concluded that TFT is collectively or evolutionarily stable, which means that - once established - it can “resist invasion by any possible mutant strategy provided that the individuals who interact have a sufficiently large probability w of meeting again.” (Axelrod and Hamilton, 1981, p. 1393) Lastly, TFT can become initially viable “in an environment composed overwhelmingly of ALL D” (Axelrod and Hamilton, 1981, p. 1394), another evolutionarily stable strategy that always defects, provided that TFT invades the population of ALL D in clusters. Clustering enables individuals playing TFT to interact with each other and, insofar as the probability p of interactions within the cluster is sufficiently high, to gain higher payoffs than the ALL D players. Conversely, a population consisting wholly of cooperative players can resist an invasion by noncooperative individuals as long as the chance of repeated interactions w is high enough. (Axelrod, 1984, pp. 66-69)
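The ecological simulation quoted above can be sketched as a simple replicator step: in each “generation”, a strategy’s population share grows in proportion to its average score against the current population mix. The following Python sketch is illustrative only: instead of the 62 rules of the second tournament it uses just TFT and ALL D, with expected match payoffs derived from the standard payoff values under a continuation probability w, and the values w = 0.9 and the initial 60/40 population split are assumptions made for the sake of the example.

```python
# Expected total payoffs in the indefinitely repeated game with continuation
# probability w, derived from the payoffs T=5, R=3, P=1, S=0 (illustrative).
w = 0.9
V = {
    ('TFT', 'TFT'):   3 / (1 - w),          # mutual cooperation every round
    ('TFT', 'ALLD'):  0 + w * 1 / (1 - w),  # exploited once, then mutual defection
    ('ALLD', 'TFT'):  5 + w * 1 / (1 - w),  # exploits once, then mutual defection
    ('ALLD', 'ALLD'): 1 / (1 - w),          # mutual defection every round
}

def replicator_step(shares):
    """One 'generation': each strategy's share grows in proportion to its
    average score against the current population mix."""
    fitness = {s: sum(V[(s, t)] * shares[t] for t in shares) for s in shares}
    mean = sum(fitness[s] * shares[s] for s in shares)
    return {s: shares[s] * fitness[s] / mean for s in shares}

# Assumed initial population: a large cluster of TFT among ALL D players.
shares = {'TFT': 0.6, 'ALLD': 0.4}
for _ in range(1000):
    shares = replicator_step(shares)
```

With a sufficiently large initial share of TFT, these dynamics drive TFT to (near) fixation, mirroring Axelrod’s result; started from a vanishingly small share among ALL D, TFT would instead be driven out, which is why clustering matters for initial viability.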
The results of the tournaments and subsequent simulations led Axelrod to the development of a theory of cooperation, which he considered capable of showing that and explaining how “cooperation based on reciprocity can get started in a predominantly noncooperative world, can thrive in a variegated environment, and can defend itself once fully established.” (Axelrod and Hamilton, 1981, p. 1394f.). That is, Axelrod does not merely state the logical possibility, but claims to have discovered the necessary and sufficient conditions for cooperation to emerge “in a world of egoists without central authority” (Axelrod, 1984, p. 20) or in biological contexts. Additionally, he emphasizes the explanatory force of the theory by applying it to certain social and biological phenomena, such as the development of a live-and-let-live system during the trench warfare in World War I or biological mutualisms between creatures with bounded rationality.
What is remarkable about Axelrod’s approach? Dealing with a fundamental question that can hardly be answered by conducting laboratory or field experiments - the question under which circumstances cooperation can evolve - Axelrod explored the repeated prisoner’s dilemma “in a novel way” (Axelrod, 1984, p. 20):
- The computer tournaments simulated a “battle of strategies” that yielded surprising results,
- which, along with the simulation of future ecological success and a strong dose of interpretation, seemed to allow for inductive generalization and theory-building.
- The theory of evolution of cooperation, in turn, claims to provide an explanation of how cooperative behavior in general and certain observable cooperative behavior patterns in particular could have emerged.
Given these noteworthy aspects of Axelrod’s approach, the question arises whether it may count as a form of experimenting. On the one hand, Axelrod’s tournaments offered surprising results that seemed to allow for further theorizing, and are therefore comparable to experiments; on the other hand, the mere “number-crunching” methods employed do not seem capable of revealing anything more, or better, than the consequences of the assumptions already built into the underlying model, such as the particular cardinal values of the payoffs or the number of interactions. In principle, Axelrod could have arrived at his conclusions using paper and pencil.
As these considerations show, the question of where to draw the conceptual line between simulation and experiment is notoriously problematic and has been the subject of lively discussion among philosophers of science. The following chapter offers a detailed consideration of the distinctive features of simulation and experiment and tries to answer the question of whether simulation differs from experiment in its ontological, methodological and epistemological aspects.
3 Simulating science? Simulations versus experiments
What is an experiment? Roughly speaking, experimenting means arranging (variable) circumstances in a systematic way for the purpose of scientific observation. That is, experimenting is an investigative and creative activity that partly consists in intervening in a particular system under (at least partially) controlled conditions in order to find out about that system. As Hacking (1983, p. 149) remarks, we must “‘twist the lion’s tail’, that is, manipulate our world in order to learn its secrets.” During the past century, philosophers of science made quite an effort to deepen our understanding of the term “experiment”. By now, it is largely agreed that experiments are not exhaustively characterized by ascribing to them the function of theory-testing; rather, they are a way of “questioning nature” - sometimes pursued out of pure scientific curiosity - that can even precede theory. Moreover, the awareness that scientific observation is a highly complex procedure - often relying upon instruments and, at least sometimes, theory-laden - has nowadays become commonplace among philosophers of science. To sum up, one can state that we already possess a fairly precise, not overly simplistic notion of experiments.
Unfortunately, when it comes to specifying a satisfactory definition of the term “simulation”, matters seem far more complicated. Humphreys (1990, p. 501) focuses on the technical superiority of simulations with respect to their ability to tackle otherwise intractable mathematical problems, when he characterizes simulation as “any computer-implemented method for exploring the properties of mathematical models where analytic methods are unavailable.” Likewise, Winsberg (2003, p. 107) refers to simulations as “techniques for studying mathematically complex systems.” In contrast to the abovementioned (working) definitions, Hartmann (1996, p. 83) stresses the inherently dynamic character of simulations, which are “closely related to dynamic models [..., imitating, I.B.] one process by another process.” Hartmann’s point of view is shared and spelled out in greater detail by Parker (2009, p. 486): “A simulation [is, I.B.] a time-ordered sequence of states that serves as a representation of some other time-ordered sequence of states; at each point in the former sequence, the simulating system’s having certain properties represents the target system’s having certain properties.” Finally, Morgan (2003, p. 217) considers the missing materiality an important characteristic of simulations, when she refers to them as “nonmaterial experiments.” Rohrlich (1990, pp. 515, 511) takes materiality into account, too, as he recognizes that simulations “have something in common with thought experiments,” although, regarding the visual output of some simulations, one can easily have the impression of “participating in an experiment rather than in a purely theoretical study.”
Each of the aforementioned suggestions manages to capture one of the salient features of simulations and, in this respect, is partly correct. But in trying to understand the specifics of simulations, neither narrowing our focus to analytically intractable differential equations nor broadening it too much by appealing to an abstract concept like “the imitation of one process by another process” seems very promising. In order to elaborate on the distinctive features of simulations in contrast to experiments, I will concentrate on three proposals by Morgan (2003, 2005), Parker (2009) and Winsberg (2009), revolving around the ideas of materiality and external validity.
 The payoff matrix presumed by Axelrod can be found in the appendix.
 Axelrod treats both terms as if they were exchangeable, but, as Binmore (1994, p. 197f.) emphasizes, “it should be noted that TIT-FOR-TAT is not evolutionary stable. In fact, no strategy is evolutionary stable in the indefinitely repeated prisoner’s dilemma.” For example, a population of TFTs can be invaded by a single DOVE or a group of DOVEs (that is, a nice strategy that never defects).
 For the mathematical proof, see Axelrod and Hamilton (1981, p. 1393).
- Inga Bones (2010): Simulating Science? Munich: GRIN Verlag. https://www.grin.com/document/148996