J Theor Biol. Author manuscript; available in PMC May 27, 2008.
PMCID: PMC2396517
NIHMSID: NIHMS47200

Evolutionary graph theory: breaking the symmetry between interaction and replacement

Abstract

We study evolutionary dynamics in a population whose structure is given by two graphs: the interaction graph determines who plays with whom in an evolutionary game; the replacement graph specifies the geometry of evolutionary competition and updating. First, we calculate the fixation probabilities of frequency dependent selection between two strategies or phenotypes. We consider three different update mechanisms: birth-death, death-birth and imitation. Then, as a particular example, we explore the evolution of cooperation. Suppose the interaction graph is a regular graph of degree h, the replacement graph is a regular graph of degree g and the overlap between the two graphs is a regular graph of degree l. We show that cooperation is favored by natural selection if b/c > hg/l. Here, b and c denote the benefit and cost of the altruistic act. This result holds for death-birth updating, weak selection and large population size. Note that the optimum population structure for cooperators is given by maximum overlap between the interaction and the replacement graph (g = h = l), which means that the two graphs are identical. We also prove that a modified replicator equation can describe how the expected values of the frequencies of an arbitrary number of strategies change on replacement and interaction graphs: the two graphs induce a transformation of the payoff matrix.

Keywords: Evolutionary dynamics, evolutionary game theory, population structure, evolutionary graph theory, replicator equation, cooperation

1 Introduction

Evolutionary game theory is a general description of evolutionary dynamics whenever the fitness of individuals is not constant but frequency dependent (Maynard Smith 1982, Nowak & Sigmund 2004, Nowak 2006). Many concepts of evolutionary game theory have their equivalent formulations in mathematical ecology (May 1973, Hofbauer & Sigmund 1998). The classical approach to evolutionary game dynamics is given by the replicator equation, which describes deterministic evolutionary dynamics in infinitely large populations (Taylor & Jonker 1978, Hofbauer et al. 1979, Zeeman 1980, Weibull 1995, Hofbauer & Sigmund 1998, 2003, Nowak & Sigmund 2004). More recently there have been many investigations into stochastic evolutionary game dynamics of finite populations (Nowak et al. 2004, Taylor et al. 2004, Wild & Taylor 2004, Imhof et al. 2005, Traulsen et al. 2005, Antal & Scheuring 2006a, Antal et al. 2006b, Imhof & Nowak 2006, Traulsen et al. 2006a,b,c,d, Fudenberg et al. 2006).

Evolutionary graph theory (Lieberman et al. 2005, Ohtsuki et al. 2006) is a powerful new method to study the effect of population structure or social networks on evolutionary dynamics. This approach is based on the long-standing interest in how spatial effects influence evolutionary and ecological dynamics (Levin & Paine 1974, Nowak & May 1992, Ellison 1993, Durrett & Levin 1994, Hassell et al. 1994, Killingback & Doebeli 1996, Nakamaru et al. 1997, 1998, Tilman & Karieva 1997, Szabó & Tőke 1998, van Baalen & Rand 1998, Eshel et al. 1999, Liggett 1999, Irwin & Taylor 2001, Hauert et al. 2002, Szabó & Hauert 2002, Le Galliard et al. 2003, Hauert & Doebeli 2004, Ifti et al. 2004, May 2006). Spatial games can lead to evolutionary kaleidoscopes, deterministic chaos, as well as the stable coexistence of cooperators and defectors in the Prisoner's Dilemma (Nowak & May 1992, 1993). For games on graphs we have found a very simple rule for the evolution of cooperation: if the benefit-to-cost ratio of an altruistic act exceeds the average number of neighbors, b/c > k, then selection on graphs favors cooperators (Ohtsuki et al. 2006). This phenomenon is called ‘network reciprocity’.

Here we extend our investigations of evolutionary graph theory by placing the members of a population on the vertices of two graphs. The interaction graph, H, determines who-meets-whom in an evolutionary game. The replacement graph, G, specifies evolutionary updating. Both graphs have the same vertices. Each vertex is occupied by one individual. There are no empty vertices. The population size and therefore the number of vertices of each graph is given by N. The graphs H and G may differ in their edges. For the analytic calculations we only consider (random) regular graphs, which are defined by the property that all vertices have the same degree (= number of edges). The degrees of H and G are respectively given by h and g. For analytical treatment we require g ≥ 3 throughout the paper.

The intersection of the sets of edges of graphs H and G defines the graph L. We only consider situations where L is again a random regular graph. The degree of L is given by l. Note that l cannot be larger than h or g. Therefore, we have l ≤ min{h, g}. Figure 1 shows two examples.

Figure 1
Possible graph layouts. a) Case when h = l = 2 and g = 4. The interaction graph H corresponds to the cycle drawn with thick orange lines. The overlap graph L (drawn with thin blue lines) corresponds to the same cycle (full overlap, L = H). Finally, the ...

Note that our analytic methods can only deal with regular graphs. Therefore, in this paper we restrict our attention to regular graphs. In a previous study, however, we have observed that the analytic results for regular graphs are also good approximations for non-regular graphs, such as random graphs and scale-free networks (Ohtsuki et al. 2006).

This paper is organized as follows. Section 2 describes our basic model. In Section 3 we study stochastic game dynamics on replacement and interaction graphs for finite populations and show numerical simulations for the Prisoner's Dilemma. These simulations further validate the nature of the approximations used in our analytical treatment. Section 4 investigates replicator dynamics on these graphs, which can be obtained by taking the infinite-population limit. Discussion and conclusions are provided in Section 5.

2 The basic model

Let us consider a 2 × 2 game with two pure strategies, A and B. The payoff matrix of the game is given by

$$\bordermatrix{ & A & B \cr A & a & b \cr B & c & d \cr}.$$
(1)

The entries represent the payoffs for the row player. Each individual uses either strategy A or B. We do not consider mixed strategies.

Individuals play the game with all of their h neighbors in the interaction graph, H. These interactions determine the total payoff, P, of each player. The fitness is given by

$$F = 1 - w + wP.$$
(2)

Here 0 ≤ w ≤ 1 represents the relative contribution of the game to fitness. If w = 1 then the payoff is equal to fitness. This is the case of ‘strong selection’. If w = 0 then the game is irrelevant for fitness; all players have the same fitness. This is the case of ‘neutral drift’. Throughout this paper we study the case w ≪ 1, which is the limit of ‘weak selection’ (Nowak et al. 2004).

Studying this limit can be justified in two different ways. First, in most real life situations we are involved in many different games, and each particular game only makes a small contribution to our overall performance. Second, weak selection leads to analytic insights which are not possible for strong selection. Numerical simulations suggest that these results are usually good approximations for larger values of w (Ohtsuki et al. 2006). When studying finite populations in Section 3, we require that the population size, N, fulfills the inequalities N ≫ max{h, g} and N ≪ 1/w (Traulsen et al. 2006b). For infinite populations, studied in Section 4, we take the limit N → ∞, keeping w fixed (w ≪ 1).

The evolutionary dynamics are determined by the ‘update-rules’ which govern how the population changes over time. As in previous papers (Ohtsuki et al. 2006, Ohtsuki & Nowak 2006a,b), we consider three different update rules:

  • Birth-Death (BD) updating. An individual is chosen for reproduction proportional to fitness; the offspring replaces at random one of the g neighbors on the replacement graph.
  • Death-Birth (DB) updating. A random individual is chosen to die; the g neighbors of the replacement graph compete for the empty site proportional to their fitness.
  • Imitation (IM) updating. A random player is chosen for updating his strategy; he either adopts a strategy of one of the g neighbors in the replacement graph or remains with his own strategy, proportional to fitness.

An elementary step of updating is an event where an individual is potentially replaced by another individual (or where an individual potentially changes his strategy). Reproduction can be genetic or cultural (Cavalli-Sforza & Feldman 1981, Boyd & Richerson 1985). Strategies breed true: we do not explicitly consider mutations. But we calculate the probability that a resident population is invaded and replaced by a mutant strategy.
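
To make one elementary step concrete, the following minimal Python sketch implements a single Death-Birth step on two networkx graphs. It is an illustration only: the use of networkx, the strategy dictionary, and the nested payoff dictionary A (e.g. A = {'A': {'A': a, 'B': b}, 'B': {'A': c, 'B': d}}) are our own assumed data structures, not part of the original model description.

    import random
    import networkx as nx

    def total_payoff(node, strategy, H, A):
        # Total payoff P of `node` from playing the game with all of its
        # neighbors on the interaction graph H (payoff matrix A, row player).
        return sum(A[strategy[node]][strategy[nb]] for nb in H.neighbors(node))

    def db_update_step(G, H, strategy, A, w=0.01):
        # One elementary Death-Birth step: a random individual dies, and its
        # neighbors on the replacement graph G compete for the empty site
        # proportional to fitness F = 1 - w + w*P (eq. 2); weak selection
        # (small w) keeps all fitness values positive.
        focal = random.choice(list(G.nodes()))
        competitors = list(G.neighbors(focal))
        fitness = [1.0 - w + w * total_payoff(nb, strategy, H, A) for nb in competitors]
        winner = random.choices(competitors, weights=fitness, k=1)[0]
        strategy[focal] = strategy[winner]

A BD or IM step can be written analogously by changing which individual is sampled proportional to fitness.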

The evolutionary outcomes are dependent on the updating rules. Our three updating rules are only a small subset of the possible dynamics on graphs. For example, some rules might use synchronous updating but others asynchronous updating (Nowak & May 1992, 1993, Nowak et al. 1994). The BD, DB, and IM updating rules assume ‘fertility-selection’ where the payoffs of the game affect the fertility (reproductive success) of players. However, one can also imagine ‘viability-selection’ where the payoffs affect the survival of players. For instance, in the score-dependent viability model of Nakamaru et al. (1997, 1998) and Nakamaru & Iwasa (2005, 2006), an individual is chosen to die according to his payoff (strictly speaking, a candidate is randomly chosen and he can die according to his payoff) and then one of the neighbors takes over the vacancy at random (while their so-called ‘score-dependent fertility model’ corresponds to our DB updating). This subtle change can lead to very different dynamics on graphs. We will discuss the effect of different updating rules in Section 5.

3 Fixation probabilities for games on replacement and interaction graphs

One of the most important quantities in stochastic game dynamics is the fixation probability, defined as the probability that a mutant invading a population of N − 1 resident individuals will produce a lineage which takes over the whole population (Nowak et al. 2004, Taylor et al. 2004, Imhof & Nowak 2006). We denote the fixation probability of strategy A in a B-population by ρA. We denote the fixation probability of strategy B in an A-population by ρB.

The fixation probability of a neutral mutant is given by the inverse of the population size, 1/N. If ρA > 1/N, then natural selection favors the fixation of strategy A.

In frequency-dependent selection, ρA > 1/N does not necessarily imply that ρB < 1/N (Nowak et al. 2004). We are also interested in deriving conditions for ρA > ρB, which means that the fixation of A is more likely than the fixation of B. Under recurrent but rare mutations, if ρA > ρB then strategy A dominates the population more often than strategy B (Fudenberg et al. 2006).

To estimate the fixation probabilities we use diffusion theory (Kimura 1962, Crow & Kimura 1970, Ewens 2004). For analytical convenience we assume that the population size, N, satisfies N ≫ max{h, g} and N ≪ 1/w, where w ≪ 1 is the intensity of selection. Let xX denote the global density of strategy X (=A or B). Let TA+ (TA−) be the probability that the number of A-strategists increases (decreases) by one in an elementary step of updating. These probabilities are defined as

$$T_A^{+} = \mathrm{Prob}\!\left(\Delta x_A = \tfrac{1}{N}\right), \qquad T_A^{-} = \mathrm{Prob}\!\left(\Delta x_A = -\tfrac{1}{N}\right).$$
(3)

We assume that N updating steps occur per unit time (Δt = 1/N). In this sense, the unit of time is considered to be one generation. The probability φA(yA) that strategy A ultimately takes over the whole population, when its initial frequency is yA, is given as the solution of the following backward Kolmogorov equation (Ewens 2004):

$$0 = m(y)\,\frac{d\varphi_A(y)}{dy} + \frac{v(y)}{2}\,\frac{d^2\varphi_A(y)}{dy^2}.$$
(4)

Here, the mean of the increment of xA per unit time, m(xA), is given by

$$m(x_A) = \frac{E[\Delta x_A]}{\Delta t} = \frac{E[\Delta x_A]}{1/N} = N\left\{ \left(\tfrac{1}{N}\right)\mathrm{Prob}\!\left(\Delta x_A = \tfrac{1}{N}\right) + \left(-\tfrac{1}{N}\right)\mathrm{Prob}\!\left(\Delta x_A = -\tfrac{1}{N}\right)\right\} = T_A^{+} - T_A^{-}.$$
(5)

The variance of the increment of xA per unit time, v(xA), is given by

$$v(x_A) = \frac{\mathrm{Var}[\Delta x_A]}{\Delta t} = \frac{E[(\Delta x_A)^2] - (E[\Delta x_A])^2}{1/N} \approx N\,E[(\Delta x_A)^2] \qquad \big(\text{since } (E[\Delta x_A])^2 = O(w^2)\big)$$
$$= N\left\{\left(\tfrac{1}{N}\right)^2 \mathrm{Prob}\!\left(\Delta x_A = \tfrac{1}{N}\right) + \left(-\tfrac{1}{N}\right)^2 \mathrm{Prob}\!\left(\Delta x_A = -\tfrac{1}{N}\right)\right\} = \frac{T_A^{+} + T_A^{-}}{N}.$$
(6)

From eq.(4), the fixation probability is calculated as ρA = φA(1/N). Hence, we need to calculate TA+ and TA−.
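
For readers who prefer to evaluate fixation probabilities numerically rather than via the closed-form expressions derived below, the standard diffusion-theory solution of eq.(4) can be coded directly. This is a generic sketch of our own: the functions m and v are whatever drift and variance one supplies (for instance the pair-approximation expressions of eqs.(5, 6)), and the scipy routine is a convenience choice, not part of the original analysis.

    import math
    from scipy.integrate import quad

    def fixation_probability(m, v, N):
        # Solve 0 = m(y) phi'(y) + (v(y)/2) phi''(y) with phi(0)=0, phi(1)=1:
        #   phi(p) = int_0^p psi(y) dy / int_0^1 psi(y) dy,
        #   psi(y) = exp( - int_0^y 2 m(z)/v(z) dz ).
        # Returns rho_A = phi(1/N).  Assumes m/v stays bounded on (0,1), which
        # holds here because m and v share the common factor x_A (1 - x_A).
        def psi(y):
            inner, _ = quad(lambda z: 2.0 * m(z) / v(z), 0.0, y)
            return math.exp(-inner)
        numerator, _ = quad(psi, 0.0, 1.0 / N)
        denominator, _ = quad(psi, 0.0, 1.0)
        return numerator / denominator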

However, the state of the population cannot be fully described by global densities of strategies, xA and xB. A particular configuration of the population corresponds to a two-coloring of the vertices. Each vertex can be either occupied by an A or a B individual. There are 2^N possible configurations on a graph, which is a huge number for large N. Therefore, in order to calculate TA+ and TA−, we must make approximations. Here we adopt the pair-approximation method (Matsuda et al. 1987, Nakamaru et al. 1997, 1998, Keeling 1999, Haraguchi & Sasaki 2000, van Baalen 2000) in order to describe the local configurations of strategies on graphs. Pair-approximation considers not only frequencies of strategies, but also frequencies of (connected) strategy-pairs, which enables us to estimate the correlation of strategies in two adjacent nodes. We must take into account that we have three different types of pairs: pairs connected only through the replacement graph, those connected only through the interaction graph, and those connected through both graphs. We label each of them (G), (H), or (L), respectively.

Let qX|Y denote the conditional probability that the focal node is occupied by strategy X (=A or B) given that strategy Y (=A or B) occupies the adjacent node. This conditional probability depends on the type of edges connecting X and Y. Therefore, we need to distinguish qX|Y(G), qX|Y(H), and qX|Y(L). In the weak-selection limit, we expect these ‘local’ frequencies to equilibrate much faster than global frequencies of strategies, xX. Hence, we are able to decouple the dynamics of local frequencies and that of global frequencies. In other words, the system reaches a local steady state while the global frequencies remain constant.

We first derive differential equations for the local frequencies, qX|Y. In order to do so, we count the increase and/or decrease of the number of XY pairs for each type of edge, (G), (H), and (L). Equations (A.1) in the Appendix show the dynamics of qX|Y(G), qX|Y(H), and qX|Y(L) for BD updating. For DB and IM updating we obtain the same equations as eqs.(A.1) except for a different constant prefactor, which does not change the position of the equilibrium. Hence from eqs.(A.1), the steady state values of the local frequencies are given by

$$
\begin{aligned}
q_{A|A}^{(G)} &= q_{A|A}^{(L)} = \frac{g-2}{g-1}\,x_A + \frac{1}{g-1}, & q_{A|A}^{(H)} &= x_A,\\
q_{B|A}^{(G)} &= q_{B|A}^{(L)} = \frac{g-2}{g-1}\,x_B, & q_{B|A}^{(H)} &= x_B,\\
q_{A|B}^{(G)} &= q_{A|B}^{(L)} = \frac{g-2}{g-1}\,x_A, & q_{A|B}^{(H)} &= x_A,\\
q_{B|B}^{(G)} &= q_{B|B}^{(L)} = \frac{g-2}{g-1}\,x_B + \frac{1}{g-1}, & q_{B|B}^{(H)} &= x_B
\end{aligned}
$$
(7)

for all three updating mechanisms.

An intuitive interpretation of eqs.(7) is as follows. It is obvious that correlations between two adjacent nodes build up only through reproduction, and not through interaction. Consequently, we obtain no local correlations through interaction-only edges, (H). Hence, the local frequency of strategy X, qX|Y(H), is equal to its global frequency, xX. Regarding the other edges, (G) and (L), with probability 1/(g − 1) a player shares a common ancestor with his neighbor. With the remaining probability, (g − 2)/(g − 1), his neighbor is a random individual.

Assuming that local frequencies are at steady state, we can now calculate the dynamics of global frequencies: that is, we can calculate TA+ and TA−. The details of the calculation are shown in the Appendix. Then by solving eq.(4) we derive the fixation probabilities, ρA and ρB. For BD updating we obtain

$$\rho_A > 1/N \iff (gh + l)(a - c) > (2gh - l)(d - b)$$
(8)

and

$$\rho_A > \rho_B \iff a + b > c + d.$$
(9)

For DB updating we obtain

$$\rho_A > 1/N \iff g^2 h\,(a + 2b - c - 2d) + gl\,(2a - 2b + c - d) + l\,(a - b - c + d) > 0$$
(10)

and

$$\rho_A > \rho_B \iff (gh + l)(a - d) > (gh - l)(c - b).$$
(11)

For IM updating we obtain

$$\rho_A > 1/N \iff (g+2)gh\,(a + 2b - c - 2d) + gl\,(2a - 2b + c - d) + 3l\,(a - b - c + d) > 0$$
(12)

and

$$\rho_A > \rho_B \iff (gh + l + 2h)(a - d) > (gh - l + 2h)(c - b).$$
(13)

All results hold for weak selection, w ≪ 1, and large population size, N ≫ h, g ≥ l. Moreover, we need Nw ≪ 1. Note that we always have g ≥ 3.

There is an exceptional case in which we do not require Nw ≪ 1 for the results (8-13) to hold. We show in the Appendix that if a − c = b − d holds then eqs.(8-13) remain true for any value of Nw as long as w ≪ 1 and N ≫ h, g ≥ l are satisfied. The condition a − c = b − d is called ‘equal gains from switching’ (Nowak & Sigmund 1990).

3.1 The limit of well-mixed populations

Nowak et al. (2004) studied evolutionary game dynamics in finite and well-mixed populations. For the 2 × 2 game given by eq.(1), they found that for large population size and weak selection, the following results hold:

$$\rho_A > 1/N \iff a + 2b > c + 2d,$$
(14)
$$\rho_A > \rho_B \iff a + b > c + d.$$
(15)

These results, which hold for the well-mixed population, are obtained from our results for the three different update rules on replacement and interaction graphs (8-13) in three different limiting cases: (i) g ≫ h, (ii) h ≫ g, and (iii) l → 0. All three update rules, BD, DB, and IM, lead to the results of the well-mixed population (14, 15) for any of the three limits (i-iii). Therefore, the population becomes essentially well-mixed if the degree of the overlap, l, becomes small relative to either h or g.

3.2 The Prisoner's Dilemma

As a particular example, we explore the interaction between cooperators and defectors. Let us study a simplified Prisoner's Dilemma given by two parameters. A cooperator, C, pays a cost c for every edge, and the partner of this edge receives a benefit b. Defectors, D, pay no cost and distribute no benefits. We assume b > c; otherwise cooperation has no net benefit. The payoff matrix becomes

$$\bordermatrix{ & A & B \cr A & a & b \cr B & c & d \cr} \;=\; \bordermatrix{ & C & D \cr C & b-c & -c \cr D & b & 0 \cr}.$$
(16)

For BD updating, we find from eqs.(8-9) that for any b and c we have ρC < 1/N and ρC < ρD. Therefore, selection never favors cooperators.

For DB updating, we find from eq.(10) that ρC > 1/N if

$$\frac{b}{c} > \frac{hg}{l}.$$
(17)

The same condition implies that ρC > ρD.

For IM updating, we find from eq.(12) that ρC > 1/N if

$$\frac{b}{c} > \frac{h(g+2)}{l}.$$
(18)

The same condition implies that ρC > ρD.

The inequalities (17) and (18) suggest that the optimum configuration for evolution of cooperation is h = g = l. The degree l of the overlap should be as large as possible, while the degrees h and g should be as small as possible. This optimum is reached when the replacement graph and the interaction graph are identical. In this limit we recover our previous condition, b/c > k (using k = h = g = l) (Ohtsuki et al. 2006). Any deviation from the identity between the interaction and the replacement graphs makes evolution of cooperation more difficult. Note that cooperation is never favored if the overlap between interaction and replacement graph is empty (l = 0).

Note that for DB updating the critical threshold condition (17) is symmetric in the degrees of the replacement and interaction graph, g and h. Therefore, a highly connected replacement graph (large g) and a sparsely connected interaction graph (small h) have the same threshold as the reverse situation (for a fixed overlap l). The symmetry is broken for IM updating. Here it is better to have a smaller degree for the interaction graph than for the replacement graph.

For a general 2 × 2 game with payoff matrix given by (1), we can prove that the degrees of the replacement and interaction graphs play a symmetric role in eq.(10) for DB updating if and only if a − c = b − d holds, which is the condition for ‘equal gains from switching’ (Nowak & Sigmund 1990). We can easily confirm that the Prisoner's Dilemma game with the payoff matrix (16) satisfies this condition.
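
The reduction of the general conditions (10) and (12) to the simple thresholds (17) and (18) is easy to verify symbolically. The following sympy snippet is an independent check of our own, not part of the original derivation; it substitutes the Prisoner's Dilemma payoffs of matrix (16) into the left-hand sides of conditions (10) and (12).

    import sympy as sp

    b, c, g, h, l = sp.symbols('b c g h l', positive=True)
    # Prisoner's Dilemma payoffs of matrix (16): a -> b - c, b -> -c, c -> b, d -> 0
    a11, a12, a21, a22 = b - c, -c, b, sp.Integer(0)

    cond_db = (g**2*h*(a11 + 2*a12 - a21 - 2*a22)        # condition (10), DB updating
               + g*l*(2*a11 - 2*a12 + a21 - a22)
               + l*(a11 - a12 - a21 + a22))
    cond_im = ((g + 2)*g*h*(a11 + 2*a12 - a21 - 2*a22)   # condition (12), IM updating
               + g*l*(2*a11 - 2*a12 + a21 - a22)
               + 3*l*(a11 - a12 - a21 + a22))

    print(sp.factor(cond_db))   # positive exactly when b*l > c*g*h,     i.e. b/c > hg/l
    print(sp.factor(cond_im))   # positive exactly when b*l > c*h*(g+2), i.e. b/c > h(g+2)/l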

Strictly speaking, pair approximation is only valid for infinite Bethe lattices (or Cayley trees), where each node has exactly the same number of links and there are no loops or leaves. The performance of pair approximation and its limitations have been studied by several authors (Matsuda et al. 1992, Harada & Iwasa 1994, Nakamaru et al. 1997, 1998, Keeling 1999, van Baalen 2000). Under weak selection, Ohtsuki et al. (2006) found that pair approximation is in excellent agreement with computer simulations for random regular graphs and other structures.

3.3 Numerical simulations on random regular graphs

We will now test our analytic results with numerical simulations on random regular graphs. The procedure to generate those graphs is straightforward: given values of h, g, l, we start by constructing a random regular graph (Santos et al. 2005) of degree g, ensuring that it is connected. Subsequently, we augment this graph by increasing the connectivity of all nodes by h − l, such that G has connectivity g, while H has connectivity h and L has connectivity l. In general, it is not always possible to generate a graph in this way, due to the stochastic nature of the construction algorithm. However, the algorithm for generation of each homogeneous random graph (Santos et al. 2005) is very efficient, and a few attempts have proven sufficient for generating the plethora of graphs that we used to compute the fixation probabilities by means of numerical simulations.
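
A possible implementation of such a construction is sketched below. Rather than augmenting a pre-built G as described above, this variant (a convenience of ours, not the algorithm of Santos et al. 2005) first draws the overlap L and then adds edge-disjoint regular 'extra' edge sets of degree g − l and h − l; failed attempts are simply retried, mirroring the stochastic nature of the construction.

    import networkx as nx

    def overlapping_regular_graphs(n, g, h, l, max_tries=1000):
        # Build a replacement graph G (degree g) and an interaction graph H
        # (degree h) on the same n nodes whose edge overlap L is regular of
        # degree l.  Requires n*l, n*(g-l) and n*(h-l) to be even.
        def edge_set(X):
            return {frozenset(e) for e in X.edges()}
        for _ in range(max_tries):
            L = nx.random_regular_graph(l, n)
            G = nx.compose(L, nx.random_regular_graph(g - l, n))
            H = nx.compose(L, nx.random_regular_graph(h - l, n))
            overlap_ok = (edge_set(G) & edge_set(H)) == edge_set(L)
            degrees_ok = (all(d == g for _, d in G.degree()) and
                          all(d == h for _, d in H.degree()))
            if overlap_ok and degrees_ok and nx.is_connected(G):
                return G, H, L
        raise RuntimeError("could not realize the requested (h, g, l) structure")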

Players interact on H and reproduce on G. We consider only two update rules for reproduction: DB and IM, since, for weak selection, the BD update rule never provides an evolutionary advantage for cooperators in the Prisoner's Dilemma. The fitness of each individual is related to its payoff by eq.(2). In all numerical simulations, we use w = 0.1. We start by stipulating given values for the connectivities (h, g, l), as well as for the cost to benefit ratio c/b. Subsequently we generate 3000 graphs G and H compatible with the fixed (h, g, l), and run 5000 simulations for each graph, reaching a grand total of 15000000 simulations for each set of parameters (N, h, g, l, b, c).

Simulations always start with a population consisting of only defectors and a single randomly placed cooperator. Its location changes from simulation to simulation. By changing the initial location of the cooperator and by changing the specific graph realization for given (h, g, l), we compute the average fixation probability of a single cooperator in a population of defectors. Simulations stop when one of the two absorbing states is reached: either 100% cooperators (fixation) or 100% defectors (extinction). We scan regions of the ratio c/b in order to find the critical value below which cooperators become advantageous mutants. We consider populations of size N = 100 and N = 500. Therefore we have Nw = 10 and Nw = 50. Remember that for games of ‘equal gains from switching’, such as the Prisoner's Dilemma (16), our analytical predictions (8-13), hence the conditions (17, 18), are valid for any value of Nw, as long as w ≪ 1 and N ≫ h, g ≥ l hold.
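
The core of one such simulation run for DB updating is sketched below, using the db_update_step function sketched in Section 2. It estimates the fixation probability of a single cooperator on one fixed pair of graphs; the averaging over many graph realizations and the scanning of c/b described above are straightforward outer loops and are omitted here.

    import random

    def fixation_probability_sim(G, H, A, w=0.1, runs=5000):
        # Fraction of runs in which a single, randomly placed cooperator 'C'
        # takes over a population of defectors 'D' under DB updating.
        nodes = list(G.nodes())
        fixed = 0
        for _ in range(runs):
            strategy = {v: 'D' for v in nodes}
            strategy[random.choice(nodes)] = 'C'
            n_coop = 1
            while 0 < n_coop < len(nodes):
                db_update_step(G, H, strategy, A, w)
                n_coop = sum(1 for s in strategy.values() if s == 'C')
            if n_coop == len(nodes):
                fixed += 1
        return fixed / runs

    # Example payoff dictionary for the simplified Prisoner's Dilemma (16):
    # A = {'C': {'C': b - c, 'D': -c}, 'D': {'C': b, 'D': 0.0}}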

The results are shown in Figs. 2 and 3 for the DB-updating and in Figs. 4 and 5 for the IM-updating. Excellent agreement with the theoretical predictions is obtained. In particular, the numerical simulations confirm the invariance of the condition above for DB-updating upon exchange of h and g (cf. Fig. 2), showing also how this symmetry is broken under IM-updating (cf. Fig. 4). Yet, for IM-updating, new combinations of h and g lead to the same threshold condition, a feature nicely corroborated by the results of computer simulations.

Figure 2
Results for the fixation probability under DB-updating. For a population size of N = 100, and for each combination of values (h, g, l), we ran 15000000 simulations starting from a single cooperator immersed in a sea of defectors. We did so for different ...
Figure 3
Results for the fixation probability under DB-updating. This figure shows the same results as Fig. 2 but for a population size of N = 500. In all cases the numerical results have been systematically shifted by a constant factor of 0.0035 towards higher ...
Figure 4
Results for the fixation probability under IM-updating. The population size is N = 100. The same simulation conditions employed in Fig. 2 also apply here. The only change occurs in what concerns the update rule which, as predicted theoretically, leads ...
Figure 5
Results for the fixation probability under IM-updating. This figure shows the same results as Fig. 4 but for a population size of N = 500. In all cases the numerical results have been systematically shifted by a constant factor of 0.0035 towards higher ...

A perfect agreement between theory and computer simulations would imply that the simulation results cross the 1/N horizontal line exactly at the point where this line crosses each appropriate vertical line. What we observe for N = 100 is a rigid shift of ≈ 0.018 towards lower values of c/b between the simulation results and the theoretical predictions (note the numerical shift of data by this amount in Figs. 2 and 4). This constant shift turns out to be a population-size effect. For N = 500 (Figs. 3 and 5), we find an entirely equivalent scenario with a reduced shift of 0.0035.

In other words, as the population size increases, the agreement between the pair-approximation-based predictions and computer simulations improves in the Prisoner's Dilemma (16). For a given finite value of N we need b/c to be slightly larger than the thresholds (17) and (18).

4 The replicator equation for games on replacement and interaction graphs

In the limit of N → ∞ with w ≪ 1 held constant, we obtain deterministic game dynamics on our two graphs. The key equation will be a replicator equation with a modified payoff matrix. We note that this deterministic equation gives a good approximation for the stochastic process on two graphs with large population size N ≫ max{h, g} and weak selection w ≪ 1 when Nw ≫ 1 holds.

Consider a game with n strategies, i = 1, …, n. The payoff for strategy i versus strategy j is aij. The payoff matrix is given by A = (aij). Let xi denote the global frequency of strategy i. We have $\sum_{i=1}^{n} x_i = 1$. In a well-mixed population, strategy i meets strategy j with probability xj. The average payoff of an i-player is given by $\sum_{j=1}^{n} a_{ij} x_j = e_i \cdot Ax$, where ei is the i-th unit column vector, x = (x1, …, xn)^T, and the dot denotes inner product. From this we obtain the replicator equation in a well-mixed population as

$$\dot{x}_i = x_i\left(e_i \cdot Ax - x \cdot Ax\right).$$
(19)

Here $x \cdot Ax = \sum_{i=1}^{n} (e_i \cdot Ax)\,x_i = \sum_{i,j=1}^{n} a_{ij} x_i x_j$ is the average payoff of the population (Taylor & Jonker 1978, Hofbauer & Sigmund 1998). This equation is defined on the simplex Sn = {(x1, …, xn) | x1 + … + xn = 1, xi ≥ 0}. Each face of the simplex is invariant under the dynamics.

If, however, the population is structured according to the two graphs, the probability that strategy i meets strategy j is no longer equal to xj. We will show that the replicator equation on graphs has the form:

$$\dot{x}_i = x_i\left[e_i \cdot (A+B)x - x \cdot (A+B)x\right].$$
(20)

The matrix B = (bij) contains the effect of local interactions. The matrix B is calculated from A and satisfies x · Bx = 0.

Consider the n × n game with payoff matrix A = (aij). Each individual plays the game with h neighbors on the interaction graph, H, and gains a total payoff which translates into biological fitness by eq.(2). The reproduction of strategies occurs on the replacement graph G. We consider the same update rules as before. We assume that an individual updates its strategy on average once per unit time. We also assume weak selection, w ≪ 1. We require g ≥ 3 throughout our calculation.

First, we estimate the local frequencies of strategies in a given environment, using the pair-approximation method. Let qi|j denote the conditional probability that the focal node is occupied by strategy i given strategy j is in an adjacent node. For each of three different types of edges, (G), (H), and (L), we consider such probabilities, denoted by qi|j(G), qi|j(H) and qi|j(L), respectively. Then the equilibrium values of these probabilities are calculated as (see Appendix)

$$q_{i|j}^{(G)} = q_{i|j}^{(L)} = \frac{g-2}{g-1}\,x_i + \frac{1}{g-1}\,\delta_{ij}, \qquad q_{i|j}^{(H)} = x_i.$$
(21)

Here δij is one if i = j and zero otherwise.

Let Ti+ (Ti−) denote the probability that the number of i-strategists increases (decreases) by one in an elementary step of updating. From

$$\dot{x}_i = \frac{E[\Delta x_i]}{\Delta t} = T_i^{+} - T_i^{-},$$
(22)

we obtain the equations for the rate of change of global frequencies, which have the form of a replicator equation on graphs.

For BD updating we obtain

$$\dot{x}_i = \frac{w(g-2)(gh - 2l)}{g(g-1)} \cdot x_i\left[e_i \cdot (A+B)x - x \cdot (A+B)x\right] + O(w^2),$$
(23)

where B = (bij) is given by

$$b_{ij} = \frac{l\,(a_{ii} + a_{ij} - a_{ji} - a_{jj})}{gh - 2l}.$$
(24)

By re-scaling the time and neglecting w^2 and higher order terms we obtain

$$\dot{x}_i = x_i\left[e_i \cdot (A+B)x - x \cdot (A+B)x\right].$$
(25)

For DB updating, we also recover this form after neglecting a constant, but in this case the matrix B is given by

$$b_{ij} = \frac{l\,\{(g+1)a_{ii} + a_{ij} - a_{ji} - (g+1)a_{jj}\}}{g^2 h - gl - 2l}.$$
(26)

Similarly, for IM updating the matrix B is given by

$$b_{ij} = \frac{l\,\{(g+3)a_{ii} + 3a_{ij} - 3a_{ji} - (g+3)a_{jj}\}}{g^2 h - gl - 6l + 2gh}.$$
(27)

In each of three different limits, (i) g ≫ h, (ii) h ≫ g, and (iii) l → 0, the correction term bij in eqs.(24, 26, 27) vanishes and we obtain the standard replicator equation describing a well-mixed population.
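
As an illustration of how eqs.(20) and (26) can be used in practice, the sketch below builds the correction matrix B for DB updating and integrates the resulting replicator equation numerically. The numerical integration with scipy is our own convenience choice, and the game matrix and parameter values in the trailing comment are arbitrary example values, not taken from the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    def b_matrix_db(A, g, h, l):
        # Correction matrix B = (b_ij) for DB updating, eq. (26).
        n = A.shape[0]
        denom = g * g * h - g * l - 2 * l
        B = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                B[i, j] = l * ((g + 1) * A[i, i] + A[i, j]
                               - A[j, i] - (g + 1) * A[j, j]) / denom
        return B

    def replicator_on_graphs(A, g, h, l, x0, t_max=50.0):
        # Replicator equation (20) with the transformed payoff matrix A + B.
        M = A + b_matrix_db(A, g, h, l)
        def rhs(t, x):
            f = M @ x                  # e_i . (A+B) x for every strategy i
            return x * (f - x @ f)     # x . (A+B) x is the population average
        return solve_ivp(rhs, (0.0, t_max), x0, dense_output=True)

    # Example: Prisoner's Dilemma (16) with b = 3, c = 1 on graphs with
    # g = h = l = 4 (so b/c < hg/l = 4): cooperators decline, as eq. (28) predicts.
    # sol = replicator_on_graphs(np.array([[2.0, -1.0], [3.0, 0.0]]), 4, 4, 4,
    #                            np.array([0.5, 0.5]))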

4.1 Prisoner's Dilemma

We study the Prisoner's Dilemma (16) using the replicator equation on graphs, (20).

For BD updating, we find that defectors always dominate cooperators, as predicted for a well-mixed population. For DB updating, cooperators dominate defectors if

$$\frac{b}{c} > \frac{hg}{l}.$$
(28)

In this case, ‘network reciprocity’ favors cooperators over defectors: the relative abundance of cooperators converges to unity from any initial fraction. Interestingly, the condition (28) is exactly the same as the one for ρC > 1/N (eq.(17)).

For IM updating, the equivalent condition is found to be

$$\frac{b}{c} > \frac{h(g+2)}{l},$$
(29)

which is the same as eq.(18).

4.2 The Snow-drift game

The Snow-drift game is given by the payoff matrix

$$\bordermatrix{ & C & D \cr C & b - \dfrac{c}{2} & b - c \cr D & b & 0 \cr}.$$
(30)

Two drivers are trapped on either side of a snow drift. Cooperation means getting out of the car and shoveling. Defection means remaining in the car and letting the other driver shovel. The benefit of getting home is b, which is larger than the cost of shoveling, c. If both cooperate, the cost is halved for each person. If both defect, neither can go home, and both drivers obtain zero payoff (Hauert & Doebeli 2004). In a well-mixed population, the traditional replicator equation (19) predicts stable coexistence of cooperators and defectors at an equilibrium where the relative abundance of cooperators is given by $x_C = 1 - c/(2b - c)$.

Let us consider the Snow-drift game played on graphs. We use the replicator equation on graphs (20). From eqs.(24), (26), and (27), the matrix B is calculated as

$$\bordermatrix{ & C & D \cr C & 0 & \chi \cr D & -\chi & 0 \cr}.$$
(31)

For BD updating, we obtain

$$\chi = \frac{l\,(2b - 3c)}{2(gh - 2l)}.$$
(32)

For DB updating, we obtain

$$\chi = \frac{l\,\{(2 + 2g)b - (3 + g)c\}}{2(g^2 h - gl - 2l)}.$$
(33)

For IM updating, we obtain

$$\chi = \frac{l\,\{(6 + 2g)b - (9 + g)c\}}{2(g^2 h - gl - 6l + 2gh)}.$$
(34)

We can show that the existence of the fixed point is not affected by the graph structure, but its location changes. From eq.(31) it is obvious that cooperation is favored on graphs compared to well-mixed populations if and only if χ is positive. For BD updating this condition is b/c > 3/2. For DB and IM updating this condition is always satisfied whenever g ≥ 3 and l > 0. If there is no overlap between replacement graph and interaction graph (i.e. l = 0) then χ is equal to zero, which leaves the dynamics unchanged with respect to that in a well-mixed population.
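
A small numerical illustration of this shift (with example values of our own, not taken from the paper) computes the interior fixed point of the transformed Snow-drift game for DB updating:

    def snowdrift_equilibrium_db(b, c, g, h, l):
        # chi for DB updating, eq. (33)
        chi = l * ((2 + 2 * g) * b - (3 + g) * c) / (2.0 * (g * g * h - g * l - 2 * l))
        # transformed payoffs (30) + (31): only the off-diagonal entries move
        R, S, T, P = b - c / 2.0, (b - c) + chi, b - chi, 0.0
        # interior fixed point of the replicator equation: payoff(C) = payoff(D)
        return (P - S) / (R - S - T + P)

    # With b = 2, c = 1 the well-mixed equilibrium is 1 - c/(2b - c) = 2/3 ~ 0.667;
    # on graphs with g = h = 4 and overlap l = 1 the same game gives ~ 0.74,
    # i.e. the equilibrium shifts in favor of cooperators (chi > 0).
    # print(snowdrift_equilibrium_db(2.0, 1.0, 4, 4, 1))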

Figures 6a-c show the equilibrium frequency of cooperators as a function of the benefit-to-cost ratio, b/c, for each of the three update rules. We observe that for fixed g and h the effect of spatial structure becomes more prominent with increasing degree of overlap, l.

Figure 6
The frequency of cooperators at the equilibrium in the Snow-drift game played on graphs for (a) BD, (b) DB, and (c) IM updating. The x-axis represents the benefit-cost ratio of cooperation, b/c. The degrees of replacement graph and interaction graph are ...

5 Discussion

In the prisoner's dilemma, BD-updating never confers an evolutionary advantage to cooperators, even in the limit of weak-selection. For DB and IM updating, cooperators can be favored over defectors by ‘network reciprocity’. Cooperators in clusters earn a higher payoff than defectors, and therefore the cluster can expand. In Ohtsuki & Nowak (2006a) an intuitive explanation has been proposed for the difference in behaviour associated with these two classes of updating, which stems from the fact that both DB and IM explore a larger neighbourhood than BD updating. In fact, under DB and IM updating the neighbors of a dying/learning individual, who themselves are not direct neighbors of each other, compete for reproduction. This favors the assortative correlation between cooperators, thereby enhancing the survival chances of cooperators. In contrast, under BD updating a player competes against his immediate neighbors with whom he played the Prisoner's Dilemma. In this case the advantage of a cooperator resulting from the assortativeness is exactly canceled out by the benefit a defector receives from the cooperator, thus cooperation never evolves (see also Taylor 1992 and Wilson et al. 1992).

Nakamaru et al. (1997, 1998) and Nakamaru & Iwasa (2005, 2006) compared fertility-selection and viability-selection. In their score-dependent fertility model, a random death is followed by reproduction with selection. In their score-dependent viability model, a selective death is followed by a random birth. For the Prisoner's Dilemma on one- and two-dimensional lattices, Nakamaru et al. (1997, 1998) found that the former is more favorable to cooperation than the latter (we note that the citation of the results of Nakamaru et al. (1997, 1998) in Ohtsuki & Nowak (2006) was mistakenly reversed). The observation by Nakamaru et al. is due to the same reason as the difference between BD and DB updating. In the score-dependent fertility model, the neighbors surrounding the empty site compete with one another, while in the score-dependent viability model only direct neighbors compete for survival.

In view of this argument, we expect that neither the order of birth and death nor the distinction between fertility-selection and mortality-selection yields a critical difference. We notice that in our BD, DB, and IM updating rules, as well as in Nakamaru et al.'s two selection mechanisms, either the birth or the death event is under selection while the other is a random process. A crucial difference stems from whether the second part of the update rule is affected by payoff or is purely random. Cooperation is favored only when a random step is followed by a selective step.

Recently, we have studied the role of population structure in finite (Ohtsuki et al. 2006) and infinite populations (Ohtsuki & Nowak 2006b), for the case where k = h = g = l. It is therefore worth investigating the consequences of the present study in the light of these previous findings. For DB updating, natural selection will favor cooperators whenever the condition b/c > k is satisfied, a result which assumes that graphs are regular, although numerical simulations have shown that the applicability of this result extends beyond the theoretical assumptions (Ohtsuki et al. 2006). In practice, this rule means that smaller connectivities k favor cooperators, in the sense that smaller clusters of cooperators can play a role at the start of the evolutionary process. Interestingly, for DB updating, our present results b/c > hg/l show that cooperation is maximally favored whenever h = g = l = k (since g ≥ l and h ≥ l always). In other words, even when we separate the interaction from the replacement graph, cooperation is maximized whenever the two actually coincide, in which case we recover the b/c > k rule. In retrospect, this result is intuitive. For the Prisoner's Dilemma, the only means for cooperators to fare better than defectors is to assort successfully and to use clustering to collect the benefits from cooperative interactions. Clearly, assortation is maximal through the replacement graph, since cooperators breed cooperators and defectors breed defectors. Consequently, in order for cooperators to benefit from such assortation, the overlap between H and G has to be maximal, and this leads to k = h = g = l.

Note that b/c > hg/l is symmetric in g and h. Therefore, it does not matter whether the interaction graph has high connectivity and the replacement graph has low connectivity or vice versa. This symmetry is broken for IM updating, which leads to b/c > h(g+2)/l. This result is also easy to understand: as opposed to DB updating, for IM updating the strategy of the focal individual also competes for ‘remaining’ in place. This affects exclusively the replacement graph G. Moreover, one can picture this competition of the focal individual's own strategy to remain ‘in place’ as if the replacement graph exhibited an additional ‘loop’ emerging from the focal individual and ending at the same focal individual. This loop, which does not overlap with H, raises the effective connectivity of G to (g + 2). Remarkably, this is precisely the result given by eq.(18).

So far we have discussed evolutionary dynamics for the special case of the Prisoner's Dilemma. Our results, however, extend to other 2 × 2 games in finite populations, as well as to n × n games in infinite populations. In this respect, it is noteworthy that our results, for weak-selection (for other intensities, cf. Traulsen et al. 2006a,b) also lead to the so-called 1/3-rule in appropriate limits (Nowak et al. 2004). For example, for DB updating the general condition for ρA > 1/N implies

$$g^2 h\,(a + 2b - c - 2d) + gl\,(2a - 2b + c - d) + l\,(a - b - c + d) > 0.$$
(35)

Surprisingly, this condition is equivalent to asking that the abundance of strategy A increases at frequency xA = 1/3 in the replicator equation on graphs, eq.(20), given that the local configuration of strategies on the graphs is at the steady state given by eq.(7).
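
This equivalence can again be checked symbolically. The sketch below is an independent verification of ours, not part of the original proof: it writes down the payoff advantage of strategy A over B in the transformed game A + B (with B given by eq.(26) for DB updating) at frequency xA = 1/3 and compares it with condition (35).

    import sympy as sp

    a, b, c, d, g, h, l, x = sp.symbols('a b c d g h l x')
    D = g**2 * h - g * l - 2 * l
    chi = l * ((g + 1) * a + b - c - (g + 1) * d) / D     # b_AB from eq. (26)

    # payoff of A minus payoff of B in the transformed game at frequency x of A
    advantage = (a * x + (b + chi) * (1 - x)) - ((c - chi) * x + d * (1 - x))

    lhs = sp.simplify(advantage.subs(x, sp.Rational(1, 3)) * 3 * D)
    rhs = (g**2 * h * (a + 2 * b - c - 2 * d)
           + g * l * (2 * a - 2 * b + c - d)
           + l * (a - b - c + d))
    print(sp.simplify(lhs - rhs))   # -> 0: the two conditions agree in sign (D > 0 for g >= 3)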

In infinite, structured populations, and in the limit of weak-selection, strategies evolve according to a replicator equation. The effect of population structure is to induce a transformation of the payoff matrix. Once such a transformation is performed, then evolution proceeds ‘as if’ the population were well-mixed (unstructured). This remarkable result also shares many of the peculiarities already found for the case where the replacement and interaction graphs are identical (Ohtsuki & Nowak 2006b). In particular, the transformation of the payoff matrix is of the form (for the case of 2 × 2 games)

$$\bordermatrix{ & A & B \cr A & a & b \cr B & c & d \cr} \;\longrightarrow\; \bordermatrix{ & A & B \cr A & a & b+\chi \cr B & c-\chi & d \cr}.$$
(36)

The specific form of χ depends on the update rule (cf. eqs.(24,26,27)). Yet, irrespective of the update rule, the population structure affects only the off-diagonal terms of the payoff-matrix, that is, population structure only affects the interactions between different strategies. Therefore, a transformed replicator equation can describe evolutionary dynamics on replacement and interaction graphs.

In the present paper we have studied fixed regular graphs. Future directions should include evolutionary dynamics on graphs of subdivided populations (Maruyama 1970, Slatkin 1981, Barton 1993), heterogeneous graphs (Antal et al. 2006b, Santos et al. 2006b), complex networks (Nakamaru & Levin 2004, Santos et al. 2005, 2006a, Santos & Pacheco 2005, May 2006), and dynamic networks (Bala & Goyal 2000, Mohtashemi & Mui 2003, Pacheco et al. 2006, Santos et al. 2006c).

Acknowledgments

We thank three anonymous referees for their fruitful comments. Support from the Japan Society for the Promotion of Science (H.O.), FCT-Portugal (J.M.P.) and the John Templeton Foundation and the NSF/NIH joint program in mathematical biology (NIH grant 1R01GM078986-01) (M.A.N.) is gratefully acknowledged. The Program for Evolutionary Dynamics at Harvard University is sponsored by Jeffrey Epstein.

Appendix A

In this Appendix, we study the general case of an n × n game whose payoff matrix is given by A = (aij). Subsequently, we consider the particular case of n = 2, that is, 2 × 2 games, and derive both the replicator equation for the frequencies of strategies in infinite populations and the fixation probabilities of strategies in finite populations.

We use the following notation. xi represents the global frequency of strategy i. xij represents the global frequency of i-j pairs. Since we have three different types of pairs, (G), (H), and (L) as in the main text, we must consider xij(G), xij(H), and xij(L) separately. The conditional probability that the focal node is occupied by strategy i given that strategy j occupies the adjacent node is denoted by qi|j. Similarly to pair-frequencies, we have qi|j(G), qi|j(H), and qi|j(L). In the pair approximation we neglect the effect of nodes that are two or more steps away from the focal one. We shall frequently make use of the following identities: qi|j(*) = xij(*)/xj and xij = xji, where (*) is a wild-card for (G), (H), and (L).

A.1 BD updating

First we study BD updating. We determine the rate of change of the number of i-j pairs for each of the three different types of pairs. Using qi|j(*) = xij(*)/xj we obtain

$$
\begin{aligned}
\frac{dq_{i|j}^{(G)}}{dt} &= \frac{1}{g}\Big[\,2\delta_{ij} + \sum_k q_{i|k}^{(G)}\big\{(g-l-1)\,q_{k|j}^{(G)} + l\,q_{k|j}^{(L)}\big\} + \sum_k \big\{(g-l-1)\,q_{i|k}^{(G)} + l\,q_{i|k}^{(L)}\big\}\,q_{k|j}^{(G)} - 2g\,q_{i|j}^{(G)}\Big] + O(w)\\[4pt]
\frac{dq_{i|j}^{(H)}}{dt} &= \frac{1}{g}\Big[\,\sum_k q_{i|k}^{(H)}\big\{(g-l)\,q_{k|j}^{(G)} + l\,q_{k|j}^{(L)}\big\} + \sum_k \big\{(g-l)\,q_{i|k}^{(G)} + l\,q_{i|k}^{(L)}\big\}\,q_{k|j}^{(H)} - 2g\,q_{i|j}^{(H)}\Big] + O(w)\\[4pt]
\frac{dq_{i|j}^{(L)}}{dt} &= \frac{1}{g}\Big[\,2\delta_{ij} + \sum_k q_{i|k}^{(L)}\big\{(g-l)\,q_{k|j}^{(G)} + (l-1)\,q_{k|j}^{(L)}\big\} + \sum_k \big\{(g-l)\,q_{i|k}^{(G)} + (l-1)\,q_{i|k}^{(L)}\big\}\,q_{k|j}^{(L)} - 2g\,q_{i|j}^{(L)}\Big] + O(w).
\end{aligned}
$$
(A.1)

Here O(w) represents terms which scale as w^n (n ≥ 1). From eqs.(A.1) the equilibrium values of the conditional probabilities qi|j(*) are calculated, leading to eq.(21).

Let Ti+ (Ti−) be the probability that the number of i-strategists increases (decreases) by one in an elementary step of BD updating. After some algebra we have

$$
\begin{aligned}
T_i^{+} + T_i^{-} &= \frac{2(g-2)}{g-1}\,x_i(1 - x_i) + O(w)\\
T_i^{+} - T_i^{-} &= \frac{w(g-2)(gh - 2l)}{g(g-1)}\,x_i\left[e_i \cdot (A+B)x - x \cdot (A+B)x\right] + O(w^2),
\end{aligned}
$$
(A.2)

where B = (bij) is given by eq.(24). In infinite populations, dxi/dt is given by Ti+ − Ti−.

Hence, neglecting higher order terms with respect to w and re-scaling the time we obtain the replicator equation on graphs, eq.(20).

Consider a game with n = 2 strategies in finite populations (N < ∞). We rename strategies i = 1 and 2 as A and B. For payoffs we substitute a, b, c, and d for a11, a12, a21, and a22, respectively. From eq.(A.2) we have

$$
\begin{aligned}
m(x_A) &= \frac{E(\Delta x_A \mid x_A)}{\Delta t} = T_A^{+} - T_A^{-} = \frac{w(g-2)}{g(g-1)}\,x_A(1 - x_A)(\alpha x_A + \beta) + O(w^2)\\
v(x_A) &= \frac{\mathrm{var}(\Delta x_A \mid x_A)}{\Delta t} = \frac{1}{N}\left(T_A^{+} + T_A^{-}\right) = \frac{2(g-2)}{N(g-1)}\,x_A(1 - x_A) + O(w),
\end{aligned}
$$
(A.3)

where

$$
\begin{aligned}
\alpha &= (gh - 2l)(a - b - c + d)\\
\beta &= la + (gh - l)b - lc - (gh - l)d.
\end{aligned}
$$
(A.4)

Solving the backward Kolmogorov equation (4) gives us

$$
\varphi_A(y_A) = \begin{cases}
\displaystyle \int_{\xi_+(0)}^{\xi_+(y_A)} e^{-z^2}\,dz \Big/ \int_{\xi_+(0)}^{\xi_+(1)} e^{-z^2}\,dz & \text{if } \alpha > 0\\[10pt]
\displaystyle \frac{1 - e^{-Nw\beta y_A / g}}{1 - e^{-Nw\beta / g}} & \text{if } \alpha = 0\\[10pt]
\displaystyle \int_{\xi_-(0)}^{\xi_-(y_A)} e^{z^2}\,dz \Big/ \int_{\xi_-(0)}^{\xi_-(1)} e^{z^2}\,dz & \text{if } \alpha < 0,
\end{cases}
$$
(A.5)

where

$$\xi_+(x) \equiv \sqrt{\frac{Nw}{2\alpha g}}\,(\alpha x + \beta) \qquad \text{and} \qquad \xi_-(x) \equiv \sqrt{-\frac{Nw}{2\alpha g}}\,(\alpha x + \beta).$$
(A.6)

From ρA = φA(1/N) we obtain ρA. For Nw ≪ 1, equation (A.5) is simplified to

$$\varphi_A(y_A) \approx y_A + \frac{Nw}{6g}\,y_A(1 - y_A)\,(\alpha + 3\beta + \alpha y_A).$$
(A.7)

We can calculate ρB in a similar manner. Assuming Nw ≪ 1, from eq.(A.7) we obtain

$$
\begin{aligned}
\rho_A &\approx \frac{1}{N} + \frac{w(N-1)}{6Ng}\left(\alpha + 3\beta + \frac{\alpha}{N}\right)\\
\frac{\rho_A}{\rho_B} &\approx 1 + \frac{w(N-1)h}{2}\,(a + b - c - d).
\end{aligned}
$$
(A.8)

Therefore we obtain eqs.(8, 9) in the main text.
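
As a consistency check (ours, not the paper's), one can verify symbolically that α + 3β, which controls the sign of ρA − 1/N for large N, is exactly the difference of the two sides of condition (8):

    import sympy as sp

    a, b, c, d, g, h, l = sp.symbols('a b c d g h l')
    alpha = (g * h - 2 * l) * (a - b - c + d)                  # eq. (A.4)
    beta = l * a + (g * h - l) * b - l * c - (g * h - l) * d
    condition_8 = (g * h + l) * (a - c) - (2 * g * h - l) * (d - b)
    print(sp.expand(alpha + 3 * beta - condition_8))           # -> 0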

From eq.(A.5), we see that when α = 0, or equivalently when a − c = b − d, the inequalities ρA > 1/N and ρA > ρB hold for any value of Nw if and only if β > 0. Therefore, if a − c = b − d holds, equations (8, 9) are valid not only for Nw ≪ 1 but for any value of Nw. Note that we always need weak selection, w ≪ 1, and large population size, N ≫ h, g ≥ l.

A.2 DB and IM updating

Next we study DB and IM updating. For convenience we introduce here an update rule called ‘generalized imitation’ (GIM) updating with resistance r ≥ 0. This is a generalization of the DB and IM updating of the main text. Under this rule a random player is chosen for updating her strategy; she either adopts the strategy of one of her g neighbors connected via the replacement graph or keeps her own strategy, proportional to fitness, where her own payoff is weighted by r and the payoffs of all her neighbours carry weight 1. Without resistance (r = 0) we recover the original DB updating. Choosing r = 1 gives us the IM updating. Below we will study this GIM updating with resistance r.
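
A minimal sketch of one GIM step, in the same style as the Death-Birth step sketched in Section 2 (and reusing its total_payoff helper), is given below; weighting the focal individual's own fitness by r in the competition is our reading of the rule above, offered only as an illustration.

    import random

    def gim_update_step(G, H, strategy, A, w=0.01, r=1.0):
        # One elementary 'generalized imitation' step with resistance r:
        # r = 0 recovers DB updating, r = 1 recovers IM updating.
        focal = random.choice(list(G.nodes()))
        neighbors = list(G.neighbors(focal))
        weights = [1.0 - w + w * total_payoff(nb, strategy, H, A) for nb in neighbors]
        candidates = neighbors + [focal]
        weights.append(r * (1.0 - w + w * total_payoff(focal, strategy, H, A)))
        chosen = random.choices(candidates, weights=weights, k=1)[0]
        strategy[focal] = strategy[chosen]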

As in the calculations for BD updating, we first explore the dynamics of the local frequencies of strategies. The resulting equations turn out to be the same as eqs.(A.1), except that the factor 1/g on the right-hand sides of eqs.(A.1) has to be replaced by 1/(g + r). As a result, we recover eq.(21) for the equilibrium local frequencies.

Given this fact, we can calculate Ti+ and Ti− as follows:

$$
\begin{aligned}
T_i^{+} + T_i^{-} &= \frac{2g(g-2)}{(g+r)(g-1)}\,x_i(1 - x_i) + O(w)\\
T_i^{+} - T_i^{-} &= \frac{w(g-2)\{g^2 h - gl - 2l + 2r(gh - 2l)\}}{(g+r)^2(g-1)} \times x_i\left[e_i \cdot (A+B)x - x \cdot (A+B)x\right] + O(w^2),
\end{aligned}
$$
(A.9)

where B = (bij) is given by

$$b_{ij} = \frac{l\,\{(g + 1 + 2r)a_{ii} + (1 + 2r)a_{ij} - (1 + 2r)a_{ji} - (g + 1 + 2r)a_{jj}\}}{g^2 h - gl - 2l + 2r(gh - 2l)}.$$
(A.10)

In infinite populations dxi/dt = Ti+ − Ti− holds. By neglecting higher order terms with respect to w and by re-scaling the time, we obtain the replicator equation on graphs as in the main text.

For finite populations, let us study the 2 × 2 game with strategies A and B with payoff matrix given by eq.(1). From (A.9) we obtain

$$
\begin{aligned}
m(x_A) &= T_A^{+} - T_A^{-} = \frac{w(g-2)}{(g+r)^2(g-1)}\,x_A(1 - x_A)(\alpha' x_A + \beta') + O(w^2)\\
v(x_A) &= \frac{1}{N}\left(T_A^{+} + T_A^{-}\right) = \frac{2g(g-2)}{N(g+r)(g-1)}\,x_A(1 - x_A) + O(w),
\end{aligned}
$$
(A.11)

where

$$
\begin{aligned}
\alpha' &= \{g^2 h - gl - 2l + 2r(gh - 2l)\}\,(a - b - c + d)\\
\beta' &= (gl + l + 2rl)a + \{g^2 h - gl - l + 2r(gh - l)\}b - (l + 2rl)c - \{g^2 h - l + 2r(gh - l)\}d.
\end{aligned}
$$
(A.12)

Solving the backward Kolmogorov equation (4) gives us

$$
\varphi_A(y_A) = \begin{cases}
\displaystyle \int_{\xi_+(0)}^{\xi_+(y_A)} e^{-z^2}\,dz \Big/ \int_{\xi_+(0)}^{\xi_+(1)} e^{-z^2}\,dz & \text{if } \alpha' > 0\\[10pt]
\displaystyle \frac{1 - e^{-Nw\beta' y_A / (g(g+r))}}{1 - e^{-Nw\beta' / (g(g+r))}} & \text{if } \alpha' = 0\\[10pt]
\displaystyle \int_{\xi_-(0)}^{\xi_-(y_A)} e^{z^2}\,dz \Big/ \int_{\xi_-(0)}^{\xi_-(1)} e^{z^2}\,dz & \text{if } \alpha' < 0,
\end{cases}
$$
(A.13)

where

$$\xi_+(x) \equiv \sqrt{\frac{Nw}{2\alpha' g(g+r)}}\,(\alpha' x + \beta') \qquad \text{and} \qquad \xi_-(x) \equiv \sqrt{-\frac{Nw}{2\alpha' g(g+r)}}\,(\alpha' x + \beta').$$
(A.14)

From ρA = φA(1/N) we obtain ρA. For Nw ≪ 1, equation (A.13) is approximated by

$$\varphi_A(y_A) \approx y_A + \frac{Nw}{6g(g+r)}\,y_A(1 - y_A)\,(\alpha' + 3\beta' + \alpha' y_A).$$
(A.15)

We can calculate ρB in a similar manner. Assuming Nw ≪ 1, from eq.(A.15) we obtain

$$
\begin{aligned}
\rho_A &\approx \frac{1}{N} + \frac{w(N-1)}{6Ng(g+r)}\left(\alpha' + 3\beta' + \frac{\alpha'}{N}\right)\\
\frac{\rho_A}{\rho_B} &\approx 1 + \frac{w(N-1)}{2(g+r)}\left[(gh + l + 2rh)(a - d) - (gh - l + 2rh)(c - b)\right].
\end{aligned}
$$
(A.16)

Hence for GIM updating with resistance r we have

$$\rho_A > 1/N \iff (g + 2r)gh\,(a + 2b - c - 2d) + gl\,(2a - 2b + c - d) + (1 + 2r)l\,(a - b - c + d) > 0$$
(A.17)

and

$$\rho_A > \rho_B \iff (gh + l + 2rh)(a - d) > (gh - l + 2rh)(c - b).$$
(A.18)

By substituting 0 or 1 for r we obtain results for DB or IM updating, eqs.(10-13).

From eq.(A.13), it is easy to see that when α′ = 0, or equivalently when a − c = b − d, the inequalities ρA > 1/N and ρA > ρB hold for any Nw if and only if β′ > 0. Thus, if a − c = b − d holds, equations (A.17, A.18) remain true for any value of Nw. We note that we always need weak selection, w ≪ 1, and large population size, N ≫ h, g ≥ l.

Contributor Information

Hisashi Ohtsuki, Program for Evolutionary Dynamics, Harvard University, Cambridge MA 02138, USA. Department of Biology, Faculty of Sciences, Kyushu University, 6-10-1 Hakozaki, Fukuoka 812-8581, Japan.

Jorge M. Pacheco, Program for Evolutionary Dynamics, Harvard University, Cambridge MA 02138, USA. Centro de Física Teórica e Computacional, Departamento de Física da Faculdade de Ciências, P-1649-003 Lisboa Codex, Portugal.

Martin A. Nowak, Program for Evolutionary Dynamics, Harvard University, Cambridge MA 02138, USA. Department of Organismic and Evolutionary Biology, Department of Mathematics, Harvard University, Cambridge, MA 02138, USA.

References

  • Antal T, Scheuring I. Fixation of strategies for an evolutionary game in finite populations. B Math Biol. 2006; in press.
  • Antal T, Redner S, Sood V. Evolutionary dynamics on degree-heterogeneous graphs. Phys Rev Lett. 2006;96:188104.
  • Bala V, Goyal S. A noncooperative model of network formation. Econometrica. 2000;68:1181–1229.
  • Barton NH. The probability of fixation of a favoured allele in a subdivided population. Genet Res. 1993;62:149–157.
  • Boyd R, Richerson PJ. Culture and the evolutionary process. University of Chicago Press; Chicago, IL: 1985.
  • Cavalli-Sforza LL, Feldman MW. Cultural transmission and evolution: a quantitative approach. Princeton University Press; Princeton, NJ: 1981.
  • Crow JF, Kimura M. An introduction to population genetics theory. Harper & Row; NY: 1970.
  • Durrett R, Levin SA. The importance of being discrete (and spatial). Theor Popul Biol. 1994;46:363–394.
  • Ellison G. Learning, local interaction, and coordination. Econometrica. 1993;61:1047–1071.
  • Eshel I, Sansone E, Shaked A. The emergence of kinship behavior in structured populations of unrelated individuals. Int J Game Theory. 1999;28:447–463.
  • Ewens WJ. Mathematical population genetics, vol 1. Theoretical introduction. Springer; New York: 2004.
  • Fudenberg D, Nowak MA, Taylor C, Imhof LA. Evolutionary game dynamics in finite populations with strong selection and weak mutation. Theor Popul Biol. 2006;70:352–363.
  • Hamilton WD. The genetical evolution of social behavior. J Theor Biol. 1964;7:1–16, 17–52.
  • Harada Y, Iwasa Y. Lattice population dynamics for plants with dispersing seeds and vegetative propagation. Res Popul Ecol. 1994;36:237–249.
  • Haraguchi Y, Sasaki A. The evolution of parasite virulence and transmission rate in a spatially structured population. J Theor Biol. 2000;203:85–96.
  • Hassell MP, Comins HN, May RM. Species coexistence and self-organizing spatial dynamics. Nature. 1994;370:290–292.
  • Hauert C, De Monte S, Hofbauer J, Sigmund K. Volunteering as red queen mechanism for cooperation in public goods game. Science. 2002;296:1129–1132.
  • Hauert C, Doebeli M. Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature. 2004;428:643–646.
  • Hofbauer J, Schuster P, Sigmund K. A note on evolutionarily stable strategies and game dynamics. J Theor Biol. 1979;81:609–612.
  • Hofbauer J, Sigmund K. Evolutionary Games and Population Dynamics. Cambridge Univ. Press; Cambridge, UK: 1998.
  • Hofbauer J, Sigmund K. Evolutionary game dynamics. B Am Math Soc. 2003;40:479–519.
  • Ifti M, Killingback T, Doebeli M. Effects of neighbourhood size and connectivity on the spatial Continuous Prisoner's Dilemma. J Theor Biol. 2004;231:97–106.
  • Imhof LA, Fudenberg D, Nowak MA. Evolutionary cycles of cooperation and defection. Proc Natl Acad Sci USA. 2005;102:10797–10800.
  • Imhof LA, Nowak MA. Evolutionary game dynamics in a Wright-Fisher process. J Math Biol. 2006;52:667–681.
  • Irwin A, Taylor PD. Evolution of altruism in a stepping-stone population with overlapping generations. Theor Popul Biol. 2001;60:315–325.
  • Keeling MJ. The effects of local spatial structure on epidemiological invasions. Proc R Soc Lond B. 1999;266:859–869.
  • Killingback T, Doebeli M. Spatial evolutionary game theory: Hawks and Doves revisited. Proc R Soc Lond B. 1996;263:1135–1144.
  • Kimura M. On the probability of fixation of mutant genes in a population. Genetics. 1962;47:713–719.
  • Le Galliard J, Ferrière R, Dieckman U. The adaptive dynamics of altruism in spatially heterogeneous populations. Evolution. 2003;57:1–17.
  • Levin SA, Paine RT. Disturbance, patch formation, and community structure. Proc Natl Acad Sci USA. 1974;71:2744–2747.
  • Lieberman E, Hauert C, Nowak MA. Evolutionary Dynamics on Graphs. Nature. 2005;433:312–316.
  • Liggett TM. Stochastic interacting systems: contact, voter and exclusion processes. Springer; Berlin: 1999.
  • Maruyama T. Effective number of alleles in a subdivided population. Theor Popul Biol. 1970;1:273–306.
  • Matsuda H, Tamachi N, Sasaki A, Ogita N. A lattice model for population biology. In: Teramot E, Yamaguti M, editors. Mathematical topics in biology, morphogenesis and neurosciences. 1987. pp. 154–161. Springer Lecture Notes in Biomathematics 71.
  • Matsuda H, Ogita N, Sasaki A, Sato K. Statistical mechanics of population - the lattice Lotka-Volterra model. Prog Theor Phys. 1992;88:1035–1049.
  • May RM. Stability and complexity in model ecosystems. Princeton University Press; 1973.
  • May RM. Network structure and the biology of populations. Trends Ecol Evol. 2006;21:394–399.
  • Maynard Smith J. Evolution and the theory of games. Cambridge University Press; 1982.
  • Mohtashemi M, Mui L. Evolution of indirect reciprocity by social information: the role of trust and reputation in evolution of altruism. J Theor Biol. 2003;223:523–531.
  • Nakamaru M, Matsuda H, Iwasa Y. The evolution of cooperation in a lattice structured population. J Theor Biol. 1997;184:65–81.
  • Nakamaru M, Nogami H, Iwasa Y. Score-dependent fertility model for the evolution of cooperation in a lattice. J Theor Biol. 1998;194:101–124.
  • Nakamaru M, Levin SA. Spread of two linked social norms on complex interaction networks. J Theor Biol. 2004;230:57–64.
  • Nakamaru M, Iwasa Y. The evolution of altruism by costly punishment in the lattice structured population: score-dependent viability versus score-dependent fertility. Evol Ecol Res. 2005;7:853–870.
  • Nakamaru M, Iwasa Y. The coevolution of altruism and punishment: role of the selfish punisher. J Theor Biol. 2006;240:475–488.
  • Nowak MA, Sigmund K. The evolution of stochastic strategies in the prisoner's dilemma. Acta Appl Math. 1990;20:247–265.
  • Nowak MA, May RM. Evolutionary games and spatial chaos. Nature. 1992;359:826–829.
  • Nowak MA, May RM. The spatial dilemmas of evolution. Int J Bifurcat Chaos. 1993;3:35–78.
  • Nowak MA, Bonhoeffer S, May RM. More spatial games. Int J Bifurcat Chaos. 1994;4:33–56.
  • Nowak MA, Sasaki A, Taylor C, Fudenberg D. Emergence of cooperation and evolutionary stability in finite populations. Nature. 2004;428:646–650.
  • Nowak MA, Sigmund K. Evolutionary Dynamics of Biological Games. Science. 2004;303:793–799.
  • Nowak MA. Evolutionary Dynamics. Harvard University Press; MA: 2006.
  • Ohtsuki H, Hauert C, Lieberman E, Nowak MA. A simple rule for the evolution of cooperation on graphs and social networks. Nature. 2006;441:502–505.
  • Ohtsuki H, Nowak MA. Evolutionary games on cycles. Proc R Soc B. 2006a;273:2249–2256.
  • Ohtsuki H, Nowak MA. The replicator equation on graphs. J Theor Biol. 2006b;243:86–97.
  • Pacheco JM, Traulsen A, Nowak MA. Active linking in evolutionary games. J Theor Biol. 2006;243:437–443.
  • Santos FC, Pacheco JM. Scale-free networks provide a unifying framework for the emergence of cooperation. Phys Rev Lett. 2005;95:098104.
  • Santos FC, Rodrigues JF, Pacheco JM. Epidemic spreading and cooperation dynamics on homogeneous small-world networks. Phys Rev E. 2005;72:056128.
  • Santos FC, Rodrigues JF, Pacheco JM. Graph topology plays a determinant role in the evolution of cooperation. Proc Roy Soc B. 2006;273:51–55.
  • Santos FC, Pacheco JM, Lenaerts T. Evolutionary dynamics of social dilemmas in structured heterogeneous populations. Proc Natl Acad Sci USA. 2006;103:3490–3494.
  • Santos FC, Pacheco JM, Lenaerts T. Cooperation prevails when individuals adjust their social ties. PLoS Comp Biol. 2006;2:1284–1291.
  • Slatkin M. Fixation probabilities and fixation times in a subdivided population. Evolution. 1981;35:477–488.
  • Szabó G, Tőke C. Evolutionary prisoner's dilemma game on a square lattice. Phys Rev E. 1998;58:69–73.
  • Szabó G, Hauert C. Phase transitions and volunteering in spatial public goods games. Phys Rev Lett. 2002;89:118101.
  • Taylor PD. Altruism in viscous populations - an inclusive fitness approach. Evol Ecol. 1992;6:352–356.
  • Taylor PD, Jonker L. Evolutionary stable strategies and game dynamics. Math Biosci. 1978;40:145–156.
  • Taylor C, Fudenberg D, Sasaki A, Nowak MA. Evolutionary game dynamics in finite populations. B Math Biol. 2004;66:1621–1644.
  • Tilman D, Karieva P, editors. Spatial ecology: the role of space in population dynamics and interspecific interactions. Monographs in population biology. Princeton University Press; Princeton: 1997.
  • Traulsen A, Claussen JC, Hauert C. Coevolutionary dynamics: from finite to infinite populations. Phys Rev Lett. 2005;95:238701.
  • Traulsen A, Nowak MA, Pacheco JM. Stochastic dynamics of invasion and fixation. Phys Rev E. 2006a;74:011909.
  • Traulsen A, Pacheco JM, Imhof LA. Stochasticity and evolutionary stability. Phys Rev E. 2006b;74:021905.
  • Traulsen A, Claussen JC, Hauert C. Coevolutionary dynamics in large, but finite populations. Phys Rev E. 2006c;74:011901.
  • Traulsen A, Pacheco JM, Nowak MA. The temperature of selection in evolutionary game dynamics. 2006d; submitted.
  • van Baalen M, Rand DA. The unit of selection in viscous populations and the evolution of altruism. J Theor Biol. 1998;193:631–648.
  • van Baalen M. Pair approximations for different spatial geometries. In: Dieckmann U, Law R, Metz JAJ, editors. The geometry of ecological interactions: simplifying spatial complexity. Cambridge University Press; Cambridge, UK: 2000. pp. 359–387.
  • Weibull J. Evolutionary Game Theory. MIT Press; Cambridge, USA: 1995.
  • Wild G, Taylor PD. Fitness and evolutionary stability in game theoretic models of finite populations. Proc R Soc Lond B. 2004;271:2345–2349.
  • Wilson DS, Pollock GB, Dugatkin LA. Can altruism evolve in purely viscous populations? Evol Ecol. 1992;6:331–341.
  • Zeeman EC. Population dynamics from game theory. In: Nitecki A, Robinson C, editors. Proceedings of an international conference on global theory of dynamical systems; Berlin: Springer; 1980. Lecture Notes in Mathematics 819.
