Research Article

Conditional cooperation and confusion in public-goods experiments

Maxwell N. Burton-Chellew, Claire El Mouden, and Stuart A. West
PNAS February 2, 2016 113 (5) 1291-1296; first published January 19, 2016; https://doi.org/10.1073/pnas.1509740113
Maxwell N. Burton-Chellew
aDepartment of Zoology, University of Oxford, Oxford OX1 3PS, United Kingdom;
bCalleva Research Centre for Evolution and Human Sciences, Magdalen College, Oxford OX1 4AU, United Kingdom;
cSociology Group, Nuffield College, Oxford OX1 1NF, United Kingdom
Claire El Mouden
aDepartment of Zoology, University of Oxford, Oxford OX1 3PS, United Kingdom;
cSociology Group, Nuffield College, Oxford OX1 1NF, United Kingdom
Stuart A. West
aDepartment of Zoology, University of Oxford, Oxford OX1 3PS, United Kingdom;
bCalleva Research Centre for Evolution and Human Sciences, Magdalen College, Oxford OX1 4AU, United Kingdom;
  • For correspondence: stuart.west@zoo.ox.ac.uk
Edited by Raghavendra Gadagkar, Indian Institute of Science, Bangalore, India, and approved December 9, 2015 (received for review May 18, 2015)


Significance

The finding that people vary in how they play economic games has led to the conclusion that people vary in their preference for fairness. Consequently, people have been divided into fair cooperators that make sacrifices for the good of the group and selfish free-riders that exploit the cooperation of others. This conclusion has been used to challenge evolutionary theory and economic theory and to guide social policy. We show that variation in behavior in the public-goods game is better explained by variation in understanding and that misunderstanding leads to cooperation.

Abstract

Economic experiments are often used to study whether humans altruistically value the welfare of others. A canonical result from public-goods games is that humans vary in how they value the welfare of others, dividing into fair-minded conditional cooperators, who match the cooperation of others, and selfish noncooperators. However, an alternative explanation for the data is that individuals vary in their understanding of how to maximize income, with misunderstanding leading to the appearance of cooperation. We show that (i) individuals divide into the same behavioral types when playing with computers, whose welfare they cannot be concerned with; (ii) behavior across games with computers and humans is correlated and can be explained by variation in understanding of how to maximize income; (iii) misunderstanding correlates with higher levels of cooperation; and (iv) standard control questions do not guarantee understanding. These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.

  • altruism
  • strategy method
  • inequity aversion
  • reciprocity
  • social preferences

It is an accepted paradigm that humans can be divided into fair-minded cooperators that act for the good of the group and selfish “free riders” that exploit the altruism of others (1–16). This conclusion comes from the results of economic experiments, where people in small groups are given some money to play games with. Individuals can either keep the money for themselves or contribute to some cooperative project. The experimenter then typically shares the contributions out equally, but only after multiplying them in such a way that ensures contributions are beneficial to the whole group but personally costly to the contributor. The canonical result from these public-goods games is that most people can be classified into one of two types, with about 50% being conditional cooperators, who approximately match the contributions of their groupmates, and about 25% being free riders, who sacrifice nothing (1, 2). The remaining players either contribute a relatively constant amount, regardless of what their groupmates do (unconditional cooperators), or they exhibit some other, more complex behavioral pattern (1–6).

This division of people into distinct social types has been the accepted basis for new fields of research investigating the cultural, genetic, and neuronal bases of this variation (3–6, 17–20). Some studies have suggested these differences can be exploited by policies to make societies behave in a more public-spirited way (21–25). The idea here is that traditional economic policies were erroneous because they appealed only to material self-interest (23). Instead, policies could encourage greater cooperation by taking into account how different social types interact (8, 10) and by appealing to people’s sense of fairness, especially in populations with more conditional cooperators (5, 25, 26).

This division of people into distinct social types relies on the assumption that an individual’s decisions in public-goods games can be used to accurately measure their social preferences; specifically, that greater contributions to the cooperative project in the game reflect a greater valuing of the welfare of others, termed “prosociality.” However, this assumption is problematic because there are other possible explanations for the variation in behavior, such as variation in the extent to which individuals understand the game. For example, individuals might cooperate, or cooperate conditionally, if they mistakenly think this will make them more money. There are many reasons why individuals might misunderstand the game, including responses to suggestive cues in the experimental setting or the game superficially reminding them of everyday scenarios where cooperation is favored (27–31).

If the variation in levels of cooperation during experiments were mainly due to variation in understanding, then the accepted division into behavioral types would be an artifact of how economic experiments are conducted, rather than of any underlying difference in social preferences. Consequently, any research or governmental policies based on the division would rest on false premises. The relative importance of these alternative explanations for variation in game behavior is controversial: whereas some have argued that confused players are responsible for around 50% of the observed cooperation in public-goods games (32, 33), others have argued that confused players make up only 6–10% of the population (1, 2). We therefore tested between two competing explanations: differences in social preferences and differences in understanding.

We examined differences in behavior and understanding in three ways (SI Methods). First, we tested whether the same social types arise when individuals know they are playing public-goods games with computers and not other people. Any variation in behavior in this game could not be explained by social preferences and so would pose a problem for the accepted explanation. Second, we then made these individuals play with each other, to directly test how behavior is influenced by whether contributions benefit others or not. Third, we examined whether players understood the essential social dilemma of the game, by asking them whether the income-maximizing decision does or does not depend on what others do. This design allows us to determine whether variation in behavior correlates with understanding and hence test whether misunderstanding is correlated with greater cooperation. Although behavior with computers has been examined previously, it is not known how such behavior correlates with understanding and play with humans (33, 34).

SI Methods

Experimental Methods.

For an overview of our experimental design and treatment orders, please see SI Appendix. We included 24 players in each of our three sessions. All decisions were anonymous, and our players did not know who else was in their group. The mean self-reported age of our players was 39.7 ± 1.9 y (SEM), ranging from 18 to 74 y; 38 players reported that they were female and 34 that they were male.

We first gave our participants the instructions (SI Appendix) and standard control questions (SI Appendix), both copied verbatim as much as possible from ref. 2. We then told the players that before they would play with people, they would first be playing in a special case with computer players. The full text is available in SI Appendix, but the key text is “Before you begin, you are going to play this game in a special case. In this special case, you will be in a group of just you and the COMPUTER”; “The computer will pick the decisions of the other 3 players. The computer will pick their decisions randomly and separately (so each computer player will make its own random decision)”; and “You are the only real person in the group, and only you will receive any money.” In contrast to a previous study (34), our initial game instructions did not mention playing with computer players, to prevent the possibility of our players being distracted during the instructions on the game’s payoffs or during the control questions by thoughts of computer players. All players had to click an on-screen button with the words, “I understand I am only playing the computer” before they could proceed.

After explaining that they would be playing with computers and that nobody else could benefit from their contributions, we then explained that they would be playing the strategy method, using the same instructions copied verbatim as much as possible from ref. 2 (SI Appendix). Again, we only told our players about the strategy method after the initial instructions and control questions, and after we told them they would be playing with computers (instructions copied verbatim as much as possible from ref. 2). This way their strategies were not merely recording the decisions of players that had already made up their minds about how to strategically respond to human players. Instead, they represented behavior in response to a choice that had no social consequences and therefore should not reflect any social preferences or concerns other than how to maximize personal income (“You are the only person in the group, and only you will receive any money”; SI Appendix). To prevent any learning, we did not provide our players with any information on their earnings from this game.

After the strategy method, which we used to categorize our participants as has been done previously (1–6), we then had our participants play a typical public-goods game, again with computers. In the typical public-goods game, players have to decide simultaneously on a single contribution with no knowledge of each other’s actions (SI Appendix). This way we could compare behavior in the strategy method with behavior in the typical public-goods game setting, where conditional responses to the behavior of other players are not possible. We then had our participants continue to play the typical public-goods game, but now with people, for six rounds (SI Appendix). We provided no feedback during these six rounds; therefore, they were essentially a one-shot game that prevented signaling, reciprocity, and learning and therefore minimized any order effects (51–53): “You WILL NOT receive any information about the decisions of the other players, nor about your earnings in these rounds, nor will anyone else at any time except for the experimenter after the experiment” (SI Appendix).

The first two treatments, with computers, are diagnostic tests of understanding and follow on naturally from the control questions used by others (1, 2). More specifically, our measure identified who did and did not maximize their earnings when there were no social concerns, providing a baseline measure of income-maximizing behavior. We could then compare our measure of income maximization with behavior in the subsequent public-goods game with people, where there were social concerns, rather than simply attributing all failures of income maximization to social preferences, as has been done before (1, 2, 35, 59).

We did not counterbalance the order of our treatments because we wanted to classify our participants on their ability to maximize their personal income before allowing them to play with humans. Overall, our treatment order provided a logical progression (49, 50), and there was no reason to suspect treatment-order effects: in all cases we provided no feedback, so any such effects should be minimal. We are also not arguing for any differences in behavior between treatments that could be correlated with or confounded by experience or time. Furthermore, we can compare our results in our opening treatments of games with computers to the highly stylized results of prior published research on public-goods games. As we show below (SI Results, Table 1, and Table S1), our distribution of social types is statistically equivalent to that of the prior research. In addition, as we also show below (SI Results), our players’ behavior in the unconditional game with computers is well in line with previous research. Our players contributed 39% of their endowment, which is very similar to the results reported in other studies with humans (2).

Statistical Methods.

Below is a glossary of the terms we use in the results.

  • FET: Fisher’s exact test, for analyzing two-way contingency tables of categorical data.

  • LM: Linear model. Used when modeling continuous data (e.g., mean contributions across six rounds of play) with one data point per individual.

  • GLM: Generalized linear model with binomial-logit link. Used when modeling binary or proportional data (e.g., correct/incorrect answers, or contributions from 0 to 20 MU) that came from a single observation per individual (e.g., contributions to the computers in the one-shot game). Parameter estimation used the Hybrid method, with the Pearson χ2 scale parameter and a model-based estimator of the covariance matrix. Effect statistics used the likelihood-ratio χ2 and profile-likelihood confidence intervals.

  • GLMM: Generalized linear mixed model with binomial-logit link. Used when modeling binary or proportional data that came from repeated measures of the same individuals. We specified a first-order autoregressive covariance structure for the residuals to control for repeated measures of the same individuals across rounds. Degrees of freedom were varied across tests according to the Satterthwaite approximation. Significance tests used model-based covariances. (A minimal fitting sketch for these binomial-logit models follows this list.)
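
The paper does not name its statistics software, so purely as an illustration, here is a minimal sketch of fitting a binomial-logit GLM of the kind listed above, using Python's statsmodels on simulated placeholder data (all variable names and values are hypothetical, not the authors' code or data):

```python
# A minimal, hypothetical sketch of a binomial-logit GLM: contributions out of
# 20 MU modeled as proportions, with one observation per individual.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
expectation = rng.integers(0, 21, size=72)                    # stated beliefs, 0-20 MU
contribution = np.clip(expectation + rng.integers(-4, 5, size=72), 0, 20)

# Binomial response: (successes, failures) = (MU contributed, MU kept).
y = np.column_stack([contribution, 20 - contribution])
X = sm.add_constant(expectation)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()       # logit link is the default
print(fit.summary())
```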

Results and Discussion

We set up a public-goods game in the same way as those that have been previously used to measure whether there are distinct social types (1–6), using the same instructions, control questions, and parameter settings (1, 2). We placed individuals into groups of four players, where each player is given 20 monetary units (MUs) that they can either keep for themselves or partially/fully contribute to a group project. We then multiplied all contributions to the group project by 1.6 before sharing them out equally among all four members. Therefore, each player lost 0.6 MU from each 1.0 MU they contributed to the public good, whereas their groupmates each gained 0.4 MU. Consequently, the strategy that maximizes individual financial gain is to contribute nothing (0 MU). Importantly, the return on any MU contributed is not altered by the contributions of others, and therefore the income-maximizing strategy does not depend on how others are playing. We first explained the public-goods game to all players, both on screen and on a piece of paper they kept throughout the experiment, before using the same control questions as used by previous studies (SI Appendix).
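
The payoff structure above reduces to simple arithmetic; the sketch below (our illustration, not the experimental software) makes the incentives explicit:

```python
# Payoff for one player in the public-goods game described above:
# 4 players, 20 MU endowment, contributions multiplied by 1.6 and shared equally.
def payoff(own, others, endowment=20, multiplier=1.6, group_size=4):
    """Income for a single round, given own contribution and groupmates' contributions."""
    pot = (own + sum(others)) * multiplier
    return endowment - own + pot / group_size

# Each MU contributed returns only 1.6 / 4 = 0.4 MU to the contributor (a net loss
# of 0.6 MU), regardless of what the others do, so contributing 0 MU maximizes income.
print(payoff(0, [20, 20, 20]))   # 44.0 MU
print(payoff(20, [20, 20, 20]))  # 32.0 MU
```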

Cooperating with Computers.

We explained to our players, after the initial instructions and control questions, that they would first be playing in a group with three computer players that would be playing randomly and that no other people would benefit from their contributions. All players had to click a button with the words, “I understand I am only playing the computer” before they could proceed (SI Methods). We also followed previous studies in using what is termed the strategy method to classify individuals according to how they vary their behavior depending on the possible behavior of their groupmates (1–6). In the strategy method, players have to make contributions for each and every possible mean integer contribution of their three groupmates. In our version, the appropriate amount is then contributed from their account after the contributions of the computer “players” have been generated. To prevent any learning, we did not provide our players with any information on their earnings from this game.
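
As an illustration of the mechanics (the rounding rule and function names here are our assumptions, not the authors' implementation), a strategy-method schedule can be thought of as a 21-entry lookup table applied after the computer players move:

```python
# Hypothetical sketch of applying a strategy-method schedule with computer groupmates.
import random

def play_strategy_method(schedule, rng=random):
    """schedule[m] = contribution when groupmates' mean integer contribution is m (0-20)."""
    computer_moves = [rng.randint(0, 20) for _ in range(3)]  # random, independent picks
    mean_move = round(sum(computer_moves) / 3)               # rounding rule assumed here
    return computer_moves, schedule[mean_move]

perfect_matcher = list(range(21))  # a textbook conditional cooperator
free_rider = [0] * 21              # the income-maximizing schedule
print(play_strategy_method(perfect_matcher))
```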

We found that when playing with computers, individuals can be divided into the same behavioral types that have previously been observed when playing with humans (34) (Fig. 1). Specifically, we found that 21% (n = 15) are noncooperators (free riders), who contribute 0 MU irrespective of the computer contribution, and 50% (n = 36) are conditional cooperators, who contribute more when the computer contributes more (1–6). These conditional cooperators are adjusting their behavior in response to the computer’s contribution, even though they have been told that their contributions will not benefit others and despite the fact that the income-maximizing strategy does not depend on how much the computer contributes. The remaining 29% (n = 21) of players exhibited some other pattern (SI Results).

Fig. 1.

Cooperating with computers. The average public-good contribution when playing with computers, grouped by behavioral type, for each possible mean contribution of their three computerized groupmates in the strategy method (n = 72). The dashed line equals perfect matching of contributions. We followed previous studies by dividing individuals into types on the basis of their contribution pattern (1–6) (SI Results). The distribution of types is not significantly different from that observed in previous experiments with humans (Table 1 and Table S1).

This distribution of different behavioral types playing with computers is strikingly similar to that previously observed when individuals are playing with other humans (χ2 test comparing our distribution to an amalgam of the distributions reported in refs. 1 and 2: χ2(3) = 5.2, P = 0.156; Table 1, Table S1, and SI Results). Consequently, a variation in the regard for the welfare of others, or a social preference, is not required to explain why individuals vary in their behavior. The data from games with computers suggest that the standard methodology of public-good games using the strategy method may not provide a reliable measure of underlying social preferences.

Table 1.

Distribution of behavioral types does not differ between games with computers and humans

Table S1.

Separate distribution of behavioral types reported from 14 trials presented in eight studies (including this one) measuring conditional cooperation using the strategy method of ref. 1

It could be argued that games with computers are uninformative of human psychology because they put players in unnatural situations. However, this argument could just as easily be applied to many other economic games. For example, is it any more natural to ask players to respond to the decisions of others when there is no strategic reason to do so (1–6), or to punish anonymous individuals that they will never interact with again (35), or to report the number they privately roll on a die to determine their payoff (36)? Laboratory studies are both advantageous, in that they allow precise control of the available incentives, and problematic, because they can remove important cues for natural behavior and because humans are not adapted for the laboratory (21, 37–46).

It could also be argued that theories of social preferences make no prediction for how people will play with computers and that therefore such treatments provide no relevant data for such theories (13). The key point here is not how individuals behave in a single scenario (1), but to experimentally test how behavior compares across different scenarios (39, 47), because theories of social preferences do imply differences between situations when individuals know that others will benefit or not. Consequently, after playing with computers, we had our players play with humans, so that we could directly test how their behavior is affected by the knowledge that others will benefit from contributions (47).

Play with Computers Predicts Play with Humans.

We next compared how well the above strategy method predicted play in unconditional games where players simultaneously and privately decide their contributions, as was done in refs. 2 and 48. We then had our players play one series of six such unconditional games with humans. We provided no feedback between decisions, so that these six decisions essentially represented a single “one-shot” decision with no opportunities to influence or respond to the decisions of other players. The instructions made four clear references to playing with people and required the players to click an on-screen button with the words “I understand I am now playing with real people” before they could proceed (SI Appendix). We did not counterbalance the order of our treatments because we wanted to first classify our players on their ability to maximize their personal income before allowing them to play with humans and to provide a logical progression to our treatments (49, 50). In all cases, communication was forbidden, and we provided no feedback on earnings or the behavior of groupmates. This design prevents signaling, reciprocity, and learning and therefore minimizes any order effects (51–53).

We found that the behavioral types from the strategy method significantly predicted the level of cooperation in the subsequent unconditional games, both with computers [generalized linear model (GLM), contribution ∼ type: F3,68 = 7.7, P < 0.001, R2adj from a linear model = 0.22] and with humans [linear model (LM), mean-contribution over six rounds ∼ type: F3,68 = 6.9, P < 0.001, R2adj from a linear model = 0.12; Fig. 2A and Fig. S1]. Furthermore, controlling for individuals, there was no significant difference in the mean unconditional contributions between games with computers or humans (paired t test t(71) = 0.7, P = 0.471; Fig. 2A and Table S2). These results show that individuals cooperate to the same degree, in the public-goods game, irrespective of whether they are playing computers or humans (correlation = 0.78, P < 0.001). These conclusions are based on the classification scheme of ref. 2 but hold if we use our classifications from Fig. 1 (SI Results).

Fig. 2.

Play with computers predicts play with humans, and conditional cooperators misunderstand the game. (A) The mean contribution (±95% CIs) to the public good, grouped by behavioral type (Fig. 1). For all types, the mean levels of cooperation were not significantly different when playing with computers (dark gray) vs. when playing with humans (light gray). (B) The percentage (±95% modified Wald method CIs) of players, separated by type, failing our beliefs test, which asked whether players knew that the payoff-maximizing decision did not depend on what groupmates contribute. Conditional cooperators were more likely to fail the beliefs test than noncooperators and were just as likely to fail as unclassified players, who were previously argued to be the only confused players (2).

Fig. S1.

Behavioral type predicts play with humans in each round. Shown are the mean contributions (0–20 MU) to the public good, grouped by behavioral type. We followed previous studies in dividing our players into separate types on the basis of their pattern of contributions when playing the strategy method with computers (Fig. 1). Different types contributed different amounts in unconditional games, but for each type, there was no significant difference between the mean contributions to groups with computers for groupmates (dark gray) and groups with humans (light gray). Dashed lines represent 95% binomial CIs.

Table S2.

Correlations and paired t tests between individual contributions to the computer and contributions to humans across six rounds (rounds 1–6; n = 72; Fig. S1)

We also found that how individuals conditioned their behavior on their beliefs about the behavior of their groupmates did not differ in response to whether they were playing with computers or humans. In unconditional public-goods games, individuals appear to still conditionally cooperate, by correlating their contributions with their stated beliefs about their groupmates (2, 54). Therefore, at the same time players made their contribution decision, we asked them what they expected their groupmates would do; specifically, what the mean contribution of their three groupmates would be. This way we could investigate whether our players conditioned their contributions on the basis of their expectations.

In contrast to some previous studies, we did not financially reward individuals who better estimated the behavior of their groupmates (2, 55). The reason for this is that such incentives increase the length and complexity of the instructions and have been shown to influence the level of cooperation (55). Furthermore, the hypothesis of conditional cooperation stipulates that players are motivated to form accurate beliefs about their groupmates to match them, such that “beliefs have a causal effect on contributions.” (54, p. 414). Our nonincentivized elicitation of beliefs is therefore merely asking putative conditional cooperators to record their already formed beliefs.

As in previous studies (2), we found that our players’ contributions were positively correlated with the amount that they expected their human groupmates to contribute [generalized linear mixed model (GLMM) on six rounds of data: F1,405 = 152.9, P < 0.001, β = 0.210 ± 0.017; Fig. S2]. This result demonstrates that financial rewards (incentives) for better estimates of groupmates’ behavior are not required to recreate the standard pattern of behavior. However, we also found the same positive relationship between contributions and expectations when playing with computers (GLM: F1,70 = 17.0, P < 0.001, β = 0.173 ± 0.046; Fig. S2). Analyzing all of the data together, the relationship between contributions and expectations did not differ significantly depending on whether groupmates were computers or humans (GLMM interaction: F1,486 = 2.5, P = 0.116, difference in β = 0.054 ± 0.034; SI Results).

Fig. S2.

People respond to computers and humans alike. Shown are the individual contributions (0–20 MU, n = 72) to the group project of players playing either with computers (one time, black circles) or humans (six times, gray circles). The x axis is the individual’s reported expectation of what they thought their groupmates would contribute. Players’ contributions correlated with their expectations regardless of playing with computers or humans. The color-coded slopes are predicted values from the parameter estimates of a GLMM fitting contributions as dependent on expectations and are not significantly different.

Overall, our results show that individuals behave in the same way, irrespective of whether they are playing computers or humans, even when controlling for beliefs (Figs. S2 and S3). Therefore, the previously observed differences in human behavior do not need to be explained by variation in the extent to which individuals care about fairness or the welfare of others.

Fig. S3.

No prosocial shift in response to playing with humans. For each player we used a record of what they had contributed to the group project (0–20 MU) in the first round of an unconditional game and of what they had expected their groupmates' mean contribution to be. This allowed us to compare their unconditional contribution as a response to their expectations, with what they had contributed in the prior strategy method (Fig. 1) for the same mean contribution of groupmates. If the contributions were the same, then the player scored a discrepancy value of 0. If they contributed more in the unconditional game than in the strategy method game, then they scored a positive discrepancy. For example, if they contributed 10 MU in the unconditional game and 5 MU in the strategy method, then they scored a discrepancy of +5 and vice versa. Overall, the mean discrepancy was identical, at +0.9028 MU (black line), when playing with computers (black circles) or humans (light gray circles). This was not significantly different to 0 (black/gray dashed lines show 95% CI).
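
The discrepancy score defined in the caption above is a simple lookup-and-subtract; a minimal sketch (our illustration; the rounding of expectations is an assumption):

```python
# Hypothetical sketch of the Fig. S3 discrepancy score.
def discrepancy(unconditional, schedule, expected_mean):
    """Unconditional contribution minus the strategy-method contribution
    recorded for the same expected mean groupmate contribution (0-20 MU)."""
    return unconditional - schedule[round(expected_mean)]

# The caption's example: 10 MU unconditionally vs. 5 MU in the strategy method -> +5.
print(discrepancy(10, [5] * 21, 12))  # 5
```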

Conditional Cooperators Misunderstand the Game.

We hypothesized that variation in behavior largely reflects variation in understanding of the game; specifically, that conditional cooperators tend to believe that the income-maximizing strategy depends on what others contribute, whereas noncooperators tend to realize that it does not. We tested this hypothesis by asking each player: “In the game, if a player wants to maximize his or her earnings in any one particular round, does the amount they should contribute depend on what the other people in their group contribute?” We allowed players to answer yes, sometimes, no, or unsure. We found that only 21 (29%) of our 72 players passed this beliefs test, correctly answering (no) that the income-maximizing strategy does not depend on what others contribute in a one-shot game, with 33 (46%) answering yes, that the contributions of others do matter; 11 (15%) answering that the contributions of others sometimes matter; and the remaining 7 (10%) answering that they were unsure.

As predicted by our hypothesis, we found that there was a significant correlation between beliefs about the game and behavior in the strategy method with computers (Fig. 2B, Tables S3 and S4, and SI Results). Specifically, individuals that correctly answered no tended to be noncooperative free riders, and individuals that answered otherwise tended to be conditional or humped cooperators (GLM: χ2(2) = 12.9, P = 0.002; Fig. 2B). Conditional cooperators were more likely to answer incorrectly than noncooperators: whereas 30 of the 36 (83%) conditional cooperators were incorrect, only 5 of the 15 (33%) noncooperators were incorrect [Fisher’s exact test (FET): P < 0.001; Fig. 2B]. Refs. 1 and 2 suggested that their 6–10% of unclassified players may have been confused players, yet Fig. 2B shows that conditional and humped cooperators are just as likely to answer incorrectly (36 of 43, 84%) as unclassified players (10 of 14, 71%; FET: P = 0.436).

Table S3.

Strategies and understanding about the game were correlated

Table S4.

Results of six logistic GLMs on the probability of understanding the game depending on behavioral type

As we did not incentivize responses to the above question, it might be argued that our players were not sufficiently motivated to answer correctly. However, there is no reason to believe that a lack of motivation can explain the significant correlation between type and response. Furthermore, if the incentives of the game with computers did not make players income maximizers, there is no reason to suppose that equally incentivizing this question would have motivated them to answer correctly.

It might also be argued that people playing with computers cannot help behaving as if they were playing with humans. However, this interpretation would: (i) be inconsistent with other studies showing that people discriminate behaviorally, neurologically, and physiologically between humans and computers when playing simpler games (19, 56–58); (ii) not explain why behavior significantly correlated with understanding (Fig. 2B and Tables S3 and S4); (iii) contradict the key assumption for theories of social preferences that players respond to the costs and benefits of the choices offered to them (59); and (iv) suggest that behavior reflects the payoffs of encounters in the real world, rather than the payoffs of the laboratory game (30, 38–46). Such ingraining of behavior would suggest a major problem for the way in which economic games have been used to measure social preferences (38, 41, 42, 60). In particular, behavior would reflect everyday expectations from the real world (39, 40), such as reputation concerns or the possibility of reciprocity, rather than the setup of the game and the true consequences of choices (43, 44). Although this could be useful for measuring cultural differences in how such games are perceived (29, 61), it would make the logic of measuring individual social preferences problematic (60). However, if players are bringing in their outside behavior, this could explain three results: (i) why many people have mistaken beliefs about the income-maximizing strategy; (ii) why players improve their income maximization with experience of economic games (53); and (iii) why people play games differently depending on how they are named or described (61).

Standard Control Questions Fail to Control for Understanding.

Previous studies have required that their players correctly answer a series of control questions before allowing them to play (1–6). We followed a previous study by describing four scenarios and asking the players what the resultant incomes would be (2) (SI Appendix). For example, if all players contribute 20 MU, then how much would each player receive? Previous studies have assumed that ensuring all players have given correct answers to these four questions allows one to “safely assume that the players understood the game” (2, p. 543); however, these same studies have still classified 6–10% of their players as confused (1, 2).

We tested the assumption that correct answers indicate understanding. We did this by letting our players answer the questions freely and then examining whether correct answers to the 10 control questions ensured that individuals correctly identified the income-maximizing strategy, either in games with computers or in our control question. We found that only 16 (22%) of our 72 players correctly answered all 10 control questions (SI Results). However, of these 16 players, only 6 (38%) got the income-maximizing strategy correct in both of the games with computers. In fact, these 16 players that answered all questions correctly were no less likely to be conditional or humped cooperators (8 of 16, 50%) than those that failed the standard control questions (36 of 56, 64%, FET: P = 0.744; Fig. 3A). Furthermore, only five (31%) replied correctly to our question about the game being interdependent or not, which was not significantly more than the 16 of the 56 (29%) players that failed the standard control questions (FET: P = 1.000; Fig. 3B).
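
For concreteness, the example control question above can be worked through directly from the game's parameters (a sketch of the arithmetic only, not the questionnaire software):

```python
# The example control question: all four players contribute their full 20 MU.
endowment, multiplier, group_size = 20, 1.6, 4
pot = group_size * endowment * multiplier  # 80 * 1.6 = 128 MU in the group project
share = pot / group_size                   # 128 / 4 = 32 MU returned to each player
print(endowment - 20 + share)              # each player's round income: 32.0 MU
```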

Fig. 3.

Standard control questions fail to control for understanding. We divided individuals into those that correctly answered all 10 standard control questions (n = 16) and those that did not (n = 56). The control questions involved calculating the payoffs in four hypothetical scenarios. Those that passed were (A) just as likely to play as conditional or humped cooperators when playing with computers and (B) no more likely to report that the income-maximizing decision did not depend on the contributions of others.

Tellingly, even when we only consider the 16 individuals that answered all 10 standard control questions correctly, the responses to our beliefs test still predicted who cooperates or not with computers. Specifically, of the 16 above, all 5 of those that also passed our beliefs test were noncooperators vs. just 2 of the 11 who failed our beliefs test (FET: P < 0.005). Although these sample sizes are small, we found the same qualitative results in a similar, but larger (n = 216) study that did not contain the strategy method but did contain the same control questions and one-shot games with the computer and humans (SI Results). Therefore, answering the standard control questions correctly, contrary to the assumptions in previous studies, does not guarantee understanding (1–6).

Comprehenders Are Not Cooperators.

It is possible that even if a large proportion of players misunderstand the game, those that do understand the game are still likely to be significantly altruistic. Some previous studies have concluded that around 50% of contributions are due to confusion (32, 33), leaving open the possibility that a substantial number of people who do understand the game still choose to cooperate. We investigated this possibility by examining the behavior of three different types of players, each of which could be argued to have understood the game. Specifically, we examined the individuals that: (i) contributed 0 MU in both the strategy method and in the one-shot game with the computer (n = 13/72, 18%); (ii) answered all of the standard control questions correctly (n = 16, 22%); and (iii) passed our beliefs test (n = 21, 29%).

First, overall, players that maximized their income when playing with computers did not contribute significantly more than 0 MU when playing with humans (paired t test: t(12) = 1.957, P = 0.074). Individually, none of these players gave significantly more to humans than to computers (Table S5). These results show that players that successfully maximize their earnings when playing with computers do not contribute significantly more when told their contributions will benefit others. Second, players that answered all of the standard control questions correctly showed no prosocial bias toward humans, not giving significantly more to humans (5.4 MU) than they did to computers (4.8 MU) (paired-samples t test: t(15) = 0.6, P = 0.529). Third, players that correctly answered our control question also showed no prosocial bias toward humans, not giving significantly more to humans (6.9 MU) than they did to computers (5.1 MU) (paired-samples t test: t(21) = 1.7, P = 0.098). Therefore, we find no evidence that there is a subpopulation of players that understand the game and have prosocial motives toward human players (SI Results).

Table S5.

Individual contributions to human groupmates of the 13 players that did not cooperate with computers

Measuring Motivations.

Finally, we investigated the motivations of all our players with a simple postgame questionnaire and found little evidence of prosociality. We asked our players, “What was your most important motivation in the games with real people? Please select the answer that best describes your motivations” and gave them a choice of five options (SI Results). Perhaps surprisingly, considering that there was no cost to players wishing to appear prosocial, 50% of players (n = 36 of 72) specified they had been motivated by making themselves the maximum money possible. Alternative options were making the most money for everyone (n = 13, 18%), for the group (n = 11, 15%), for others (n = 3, 4%), or making more than others (n = 1, 1%). The remaining players (n = 8, 11%) chose “other.” It is commonly assumed that the incentives in economic experiments make players more honest and thus appear less prosocial than they would in nonincentivized questionnaires. However, we found that the number declaring that they had been motivated by maximizing personal income (n = 36, 50%) was significantly more than the number of noncooperators in the strategy method (n = 15, 21%, FET: P < 0.001) or in the games with humans (n = 13, 18%, FET: P < 0.001).

When we compared motivations among different types of players, we found that nearly half (47%, 20/43 players) of the conditional and humped cooperators declared they were motivated by self-interest. This proportion was not significantly different to the proportion of noncooperators declaring a self-interested motivation (73%, 11/15 players, FET: P = 0.131; Table S6).

Table S6.

Measuring motivations

Economic Games and Social Preferences.

To conclude, our results strongly suggest that the previous division of humans into altruistic cooperators and selfish free riders was misleading. We showed that the strategy method reveals the same division even when individuals are playing with computers and nobody benefits from their cooperation. Instead, the variation in behavior, even in the strategy method, can be explained by variation in understanding rather than variation in social preferences. For example, individuals previously categorized as fair-minded conditional cooperators tend to be individuals who misunderstand the nature of the game and think that, even in one-shot games, the income-maximizing decision depends on others.

There are a number of reasons why individuals might incorrectly think that the way to best maximize their income depends on the behavior of others (SI Discussion). First, the strategy method places an emphasis on the behavior of others, possibly suggesting their behavior is important, as it would be in a threshold public-goods game, for example (62). Second, the wording of instructions to players, with words such as invest, could suggest the game is risky and hence dependent on how others invest (34). Third, the best strategy for many everyday situations may depend on what others are doing, and the game may remind players of such scenarios (30, 40). The second and third points could apply to a broad range of scenarios, not just games that used the strategy method. For example, cooperation was significantly reduced in another public-goods game experiment when players were explicitly informed that they “lose money on contributing” (63). We are not arguing that misunderstandings explain all aspects of behavior in economic games; rather, the possibility for misunderstanding needs to be considered when developing null hypotheses (47).

More generally, our results confirm that when attempting to measure social behaviors, even with the strategy method, it is not sufficient to merely record decisions with behavioral consequences and then infer social preferences (1–6). One also needs to manipulate these consequences to test whether this affects the behavior. Here, when we removed any social effects from the consequences of players’ decisions, by having them knowingly play with computerized groupmates, their behavior was unchanged in both the strategy method and the unconditional games. These results suggest that other existing paradigms from the field of behavioral economics might be built on incorrect conclusions from experimental studies. The question is, which aspects of human sociality are these games actually measuring (38, 60, 64)? Numerous studies have made the implicit assumption that behavior in economic experiments perfectly corresponds to the underlying behavioral preferences or intentions of individuals (1, 2, 35, 59). We showed, in public-goods games, that a competing hypothesis that does not assume perfect play and understanding is better able to explain the data. A major task for the future is to develop and test competing hypotheses that do not assume perfect understanding and perfect play in other economic games.

SI Results

The Distribution of Behavioral Types and Altruism Toward Computers.

Refs. 1 and 2 were the first to classify players on the basis of their pattern of contributions in the strategy method. They separated their players into four categories, defining three social types, termed conditional cooperators, humped/triangle cooperators, and free riders. Anyone they could not classify into these categories was described as other (1) or unclassifiable/confused (2). We classified our behavioral types according to this classification scheme but with the addition of two more types. Below we outline the criteria for all of the above categories and also how we categorized our players.

Conditional cooperators (n = 36; 50%) are defined following ref. 1, which classifies as conditional cooperators those whose contribution schedule has a significant, positive Spearman’s rank correlation with the mean contribution of the three groupmates. In total, 36 of our 72 participants met these criteria at the P < 0.01 level (players 2, 3, 4, 5, 6, 7, 8, 9, 10, 13, 14, 15, 16, 18, 23, 27, 29, 31, 32, 33, 35, 36, 39, 45, 46, 48, 49, 51, 56, 57, 58, 64, 67, 69, 70, and 72; SI Appendix). Two other participants (players 21 and 40; SI Appendix) also had significant correlations at the conventional P < 0.05 level (P = 0.013 and 0.020, respectively), but we did not classify them as conditional cooperators; instead, they were classified as humped and unclassified (see below), respectively.

Humped cooperators (n = 4; 6%) (also termed triangle cooperators, e.g., in ref. 2) are defined as those who both “increase their contributions with the contribution of others up to a point” and then “decrease their own contributions the more others contribute” (2, p. 542). We found that four of our players displayed such a pattern (players 21, 30, 42, and 62; SI Appendix), first increasing and then decreasing, on average, after an idiosyncratic threshold point, with a peak contribution when their computerized groupmates contributed 16, 10, 7, and 7 MU, respectively.

Negative cooperators (n = 3; 4%) are a further three of our players (players 26, 50, and 65; SI Appendix) who exhibited a pattern that was not evident in ref. 1 or described in ref. 2 but is similar to both of the above patterns. We termed these participants negative cooperators because they showed significantly conditional cooperation (a significant Spearman’s rank correlation), but with a negative correlation. When comparing the statistical distributions of our study with previously published studies, we needed to have the same number of categories, so we decided to categorize these negative cooperators as humped cooperators. This recategorization was because all of them can be described by the same rule as humped cooperators, increasing contributions and then decreasing contributions the more others contribute after a point (2, p. 542), if one considers their idiosyncratic turning point to be 0 MU.

Unconditional cooperators (n = 5; 7%) are any players that exhibit a constant positive contribution regardless of what their groupmates contribute. Ref. 1 classified their one such player as other. It is not clear whether ref. 2 classified such players as confused/unclassifiable or did not find any such participants. We found five (7%) such players (players 19, 24, 55, 60, and 63; SI Appendix).

Noncooperators (n = 15; 21%) are quite simply those that contribute 0 MU regardless of what their groupmates contribute and thus are in one sense similar to the above unconditional cooperators. These players in other studies are typically termed free riders, as they metaphorically enjoy the benefits of public goods such as a publicly funded transport service, without contributing to the costs. This is the payoff-maximizing strategy in the typical public-goods game and thus should be exhibited by all participants, regardless of social preference, when playing with computers, if they understand the game. We found 15 such players (players 1, 11, 20, 22, 25, 28, 37, 43, 44, 47, 52, 59, 61, 66, and 71; SI Appendix). We also had nine players (12.5%) that we chose not to categorize (unclassified; players 12, 17, 34, 38, 40, 41, 53, 54, and 68; SI Appendix).
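
Pulling the rules above together, a minimal classifier sketch (our illustration, not the authors' code; the humped criterion is a simplified stand-in for the rise-then-fall pattern described in ref. 2):

```python
# Hypothetical sketch of classifying a 21-point strategy-method schedule.
from scipy.stats import spearmanr

def classify(schedule):
    """Classify a schedule (list) indexed by groupmates' mean contribution (0-20 MU)."""
    if all(c == 0 for c in schedule):
        return "noncooperator"                    # free rider
    if len(set(schedule)) == 1:
        return "unconditional cooperator"         # constant positive contribution
    rho, p = spearmanr(range(21), schedule)
    if p < 0.01 and rho > 0:                      # threshold used in the text above
        return "conditional cooperator"
    if p < 0.01 and rho < 0:
        return "negative cooperator"
    peak = schedule.index(max(schedule))
    falling = schedule[peak:] == sorted(schedule[peak:], reverse=True)
    if 0 < peak < 20 and falling:                 # simplified rise-then-fall rule
        return "humped cooperator"
    return "unclassified"

print(classify(list(range(21))))                  # conditional cooperator
print(classify([0] * 21))                         # noncooperator
```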

As explained in the main text, the distribution of our different behavioral types when playing with computers is strikingly similar to that previously observed when individuals are playing with other humans. In comparison with refs. 1 and 2, we find no difference between all three distributions (χ2 test: χ2(6) = 6.4, P = 0.384); no difference between our results and the first study (1) (χ2(3) = 4.2, P = 0.240), or the latter study (2) (χ2(3) = 3.8, P = 0.288), or compared with a combination of both the first and the latter study (χ2(3) = 5.2, P = 0.156).

We primarily compared our distribution to those reported by refs. 1 and 2 as these were the pioneering studies that developed the method we replicated. However, there are also other published studies measuring the frequency of conditional cooperators, primarily for the purpose of cross-cultural comparison. We therefore also compared our distribution of behavioral types to a collection of prior published distributions generated using the strategy method of refs. 1 and 2.

Including this study, we found, in total, eight studies detailing a total of 14 distributions from various locations, using a total of 708 players. Two of the trials were with computerized groupmates and 12 with human groupmates (Table S1). In the 12 trials with humans, the frequency of conditional cooperators ranged from 42% to 63% for 11 of them, with one study reporting 81%; ours was 50%. The frequency of noncooperators (free riders) ranged from 2% to 36%; ours was 21%. The overall mean frequencies for each type for all human and computerized trials are reported in Table S1.

To compare the reported distributions for the 12 human trials with the two computerized trials, we used a χ2 test on the combined absolute distributions of all human trials compared with both computer trials, and we found no significant difference (χ2 test: χ2(3) = 2.6, P = 0.458; Table S1). For completeness, we also tested the distribution of each type separately, using FETs, and found no significant differences for any type (conditional cooperators: P = 0.470; humped cooperators: P = 0.600; unclassified: P = 0.602; free riders: P = 0.168) (Table S1).
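
A minimal sketch of such a comparison (the counts below are placeholders, not the reported data), using scipy:

```python
# Hypothetical sketch: comparing type distributions across two sets of trials.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: human trials (combined), computer trials.
# Columns: conditional, humped, free rider, unclassified (placeholder counts).
table = [[300, 40, 120, 80],
         [ 80, 12,  30, 25]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, P = {p:.3f}")

# Per-type FET on a 2x2 collapse, e.g., conditional cooperators vs. all other types.
odds, p_fet = fisher_exact([[300, 240], [80, 67]])
print(f"FET P = {p_fet:.3f}")
```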

Play with Computers Predicts Play with Humans.

We found that our players again cooperated with computers, contributing on average 39% of their endowment (7.7 MU, median = 8 MU, mode = 0 MU), a result in line with findings in other studies with humans (2). They then cooperated likewise with humans (mean = 41% of endowment, 8.1 MU, median = 8 MU, mode = 0 MU). As reported in the main text, we also found that the behavioral type in the strategy method with computers significantly predicted the contributions in an unconditional game with both computers and humans. The results in the main text are based on the classification scheme of ref. 2, which has the four categories presented in Fig. 2 (noncooperator, conditional cooperator, humped cooperator, and unclassified). The results are qualitatively the same when we use our classification scheme presented in Fig. 1, which has six categories (the same as ref. 2 plus negative cooperators and unconditional cooperators). The six-way classification scheme significantly predicted the one-shot contribution with computers (GLM: F5,66 = 4.9, P = 0.001) and all six rounds with humans (the complete absence of any feedback on the behavior or earnings of the players converted the six games with humans into essentially one-shot encounters), regardless of whether we ran an LM of each player’s mean contribution over six rounds (F5,66 = 4.3, P = 0.002) or a GLMM of all six contributions, controlling for individual (F5,95 = 5.5, P < 0.001). The six-way classification scheme also significantly predicted contributions for four of the six human rounds analyzed separately (R1: F5,66 = 2.6, P = 0.032; R2: F5,66 = 3.3, P = 0.010; R3: F5,66 = 4.3, P = 0.002; R4: F5,66 = 1.8, P = 0.134; R5: F5,66 = 1.6, P = 0.172; R6: F5,66 = 5.7, P < 0.001; Fig. S1). There was also no significant interaction between behavioral type and groupmate type (computers or humans), suggesting that all types treated humans and computers alike (GLMM testing for an interaction effect on contributions between behavioral type and nature of groupmates: F5,473 = 0.470, P = 0.799).

Furthermore, at the individual level, contributions made in one-shot games with humans were correlated with, and showed no significant difference from, contributions in the one-shot game with computers. This similarity held for the first round with humans (mean difference = 0.57 MU, correlation = 0.73, P < 0.001; paired t test t(71) = 0.96, P = 0.339), for each and every round, and for the mean contribution across all six rounds (mean difference = 0.38 MU, correlation = 0.78, P < 0.001; paired t test t(71) = 0.73, P = 0.471; Table S2).
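
The individual-level comparison above amounts to a correlation plus a paired t test per round; a minimal sketch with simulated stand-in data (not the study's data):

```python
# Hypothetical sketch of the paired comparison: contributions to computers vs. humans.
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(1)
to_computers = rng.integers(0, 21, size=72).astype(float)
to_humans = np.clip(to_computers + rng.normal(0, 3, size=72), 0, 20)

r, p_r = pearsonr(to_computers, to_humans)
t, p_t = ttest_rel(to_computers, to_humans)  # df = n - 1 = 71, as in the text
print(f"correlation = {r:.2f} (P = {p_r:.3g}); paired t(71) = {t:.2f}, P = {p_t:.3f}")
```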

As described in the main text, we also compared whether individuals differed in how they adjusted their behavior conditionally, depending on whether they were playing computers or humans. At the same time that they made their contribution decision, we asked all individuals what they expected the mean contribution of their groupmates would be. We did this to test whether (i) players’ contributions in the unconditional rounds are a function of their expectations about their groupmates’ mean contributions; (ii) their functions differed when playing with computers or humans; and (iii) their functions in the unconditional rounds corresponded to and were consistent with their behavior in the strategy method.

First, as in previous studies, and as reported in the main text, we found that contributions were positively correlated with the amount that players expected their human groupmates to contribute, confirming that one does not necessarily need to incentivize the reporting of expectations (GLMM on six rounds of data, contribution ∼ expectation: F1,405 = 152.9, P < 0.001, β = 0.210 ± 0.017; Fig. S2). This correlation was true overall and separately for each of the six rounds with humans (GLM round 1: F1,70 = 59.1, P < 0.001, β = 0.32 ± 0.054, R2adj = 0.49; GLM round 2: F1,70 = 51.4, P < 0.001, β = 0.26 ± 0.044, R2adj = 0.50; GLM round 3: F1,70 = 37.1, P < 0.001, β = 0.23 ± 0.047, R2adj = 0.41; GLM round 4: F1,70 = 41.6, P < 0.001, β = 0.21 ± 0.038, R2adj = 0.46; GLM round 5: F1,70 = 50.8, P < 0.001, β = 0.22 ± 0.039, R2adj = 0.50; GLM round 6: F1,70 = 49.3, P < 0.001, β = 0.23 ± 0.040, R2adj = 0.46; note that R2adj values are from LMs rather than GLMs that control for the binomial error structure).

Second, as we reported in the main text, we also found qualitatively the same relationship between contributions and expectations when playing with computers (GLM: F1,70 = 17.0, P < 0.001, β = 0.17 ± 0.046, R2adj from a linear model = 0.22; Fig. S2), and analyzing all of the data together showed that, quantitatively, this correlation did not significantly depend on whether groupmates were computers or humans (GLMM: interaction between nature of groupmates and expectations, F1,486 = 2.5, P = 0.116, difference in β = 0.05 ± 0.034; Fig. S2). The above results were robust to various forms of analysis. The overall result remained qualitatively the same if we switched from model-based covariances to robust covariances (F1,96 = 1.1, P = 0.296). The interaction also remained nonsignificant if we compared play with computers with each round of play with human groupmates separately (GLMM, computer vs. human round 1: F1,80 = 1.0, P = 0.330, difference in β = 0.039 ± 0.040; computer vs. human round 2: F1,83 = 0.4, P = 0.520, difference in β = 0.025 ± 0.039; computer vs. human round 3: F1,92 = 0.0, P = 1.000, difference in β = 0.000 ± 0.044; computer vs. human round 4: F1,94 = 0.1, P = 0.720, difference in β = 0.016 ± 0.045; computer vs. human round 5: F1,86 < 0.1, P = 0.885, difference in β = 0.006 ± 0.042; computer vs. human round 6: F1,76 < 0.1, P = 0.918, difference in β = 0.004 ± 0.040).
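The interaction test can be sketched with a linear mixed model, treating subject as the random grouping factor (a simplified stand-in for the GLMMs reported above, on simulated long-format data; column names are ours):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subj in range(72):
    # One computer decision and six human decisions per subject.
    for partner in ["computer"] + ["human"] * 6:
        e = rng.integers(0, 21)
        rows.append({"subject": subj, "partner": partner, "expectation": e,
                     "contribution": float(np.clip(e + rng.normal(0, 3), 0, 20))})
long_df = pd.DataFrame(rows)

# The expectation:partner coefficient estimates the difference in slopes
# (the "difference in beta" quoted above).
md = smf.mixedlm("contribution ~ expectation * partner",
                 data=long_df, groups=long_df["subject"]).fit()
print(md.summary())
```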

We also repeated the above analyses, comparing contributions as a function of expectations between the round with computers and the six rounds with humans, but separately for noncooperators (n = 15) and conditional cooperators (n = 36). Neither conditional cooperators nor noncooperators differed in how they did or did not condition their contributions on their expectations depending on the nature of their groupmates (GLMM: interaction between expectations and groupmates on contributions, conditional cooperators: F1,243 = 0.001, P = 0.973; noncooperators: F1,101 = 3.418, P = 0.067).

For completeness, we also compared the extent to which individuals contributed more or less than they expected their groupmates to contribute, and whether this differed when playing with humans or computers. For instance, did those that contributed a little less than they expected of their human groupmates (selfish-biased conditional cooperation; refs. 1 and 2) do the same when playing with computers? This allowed us to ask whether behavior differed between computer and human groupmates while controlling for expectations; for example, perhaps players cooperated with computers but became more generous with humans. For the sake of simplicity, we compared only the first, or final, round of play with humans with the round of play with computers, using a paired-samples t test to control for individual.

On average, individuals contributed 1.1 (first round) or 1.5 MU (final round) less than they expected of their human groupmates and 2.3 MU less than they expected of the computers. These decisions were not significantly different in magnitude (paired t tests, first round: t(71) = 1.9, P = 0.062; final round: t(71) = 0.9, P = 0.375). Repeating the above process for the behavioral types revealed the same result for the conditional cooperators (paired t tests, first round: t(35) = −0.1, P = 0.936; final round: t(35) = 0.4, P = 0.657), humped cooperators (paired t tests, first round: t(6) = 0.3, P = 0.746; final round: t(6) = −0.6, P = 0.601), and the unclassified players (paired t tests, first round: t(13) = −1.2, P = 0.258; final round: t(13) = 0.2, P = 0.813). Noncooperators, in contrast, did show a significant difference, contributing 7.8 MU less than their expectations of the computers but only 3.0 (first round) or 3.5 MU (final round) less than their expectations of humans (paired t tests, first round: t(14) = −2.6, P = 0.020; final round: t(14) = −1.9, P = 0.080). However, this was not because they contributed more to humans (they tended to contribute 0 MU regardless) but because they had higher expectations of their computerized groupmates (mean = 9.1 MU) than of their human groupmates (mean = 5.3 MU; GLMM: F1,99 = 9.5, P = 0.003), leading to a larger discrepancy.

Third, we investigated whether contributions in the unconditional rounds corresponded to, and were consistent with, behavior in the strategy method. Methodologically, if a participant contributed more in the one-shot game than they did in the strategy method for the corresponding expectation, then we scored them as having a positive discrepancy, and vice versa (Fig. S3). For the sake of simplicity, we only analyzed behavior from the first round of play with humans.
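The scoring rule can be written out as a short function (the names and the example schedule are ours; expectations are assumed to be rounded to the nearest MU so that they index the 21-row strategy-method schedule):

```python
def discrepancy(one_shot, expectation, schedule):
    """Score the gap between unconditional and strategy-method behavior.

    schedule[k] = strategy-method contribution entered for the case where
    groupmates contribute an average of k MU (k = 0..20).
    Positive = contributed more in the one-shot game than scheduled.
    """
    return one_shot - schedule[expectation]

# Example: a conditional cooperator who scheduled perfect matching (0..20)
# but contributed 10 MU in the one-shot game while expecting 8 MU:
matching_schedule = list(range(21))
print(discrepancy(10, 8, matching_schedule))  # +2 -> positive discrepancy
```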

When comparing individual contributions, as a function of expectations, with behavior in the strategy method, nearly 50% of the 72 participants were perfectly consistent, contributing the identical amount in the one-shot encounter as they did in the strategy method for the corresponding expectation. This was true regardless of whether their groupmates were computers (34/72, 47%) or humans (33/72, 46%; FET: P = 1.000), and it also suggests that affective forecasting was not a serious problem (Fig. S3).

We found that our participants, when playing with either computers or humans, were equally likely to be consistent with their prior decision in the strategy method. We also found that those that deviated did so equally, in both absolute terms (mean absolute discrepancy: vs. computers, ±2.46 MU; vs. humans, ±3.29 MU; paired t test: t(71) = 0.833, P = 0.074) and net terms (mean net discrepancy: vs. computers, +0.9028 MU; vs. humans, +0.9028 MU; paired t test: t(71) = 0.0, P = 1.0). These identical net discrepancies (+0.9028 MU) were not significantly different from 0 MU, either when playing with computers (LM: intercept, F1,71 = 3.085, P = 0.083) or, more importantly, when playing with humans (LM: intercept, F1,71 = 2.014, P = 0.160; Fig. S3).

In summary, although around 50% of players were not perfectly consistent when transferring from the strategy method to the one-shot game, their shift in behavior did not depend on their groupmates being computers or humans, and they showed no prosocial shift, being equally likely to become either more or less favorable to their human groupmates (mean net discrepancy of 0.90 MU, not significantly different from 0 MU; Fig. S3).

Conditional Cooperators Misunderstand the Game.

After our participants had first played the strategy method with computers, then a one-shot game with computers, and then six one-shot games with humans, all without feedback to prevent any learning about the game’s payoffs or the propensities of other players, we asked them a question to test whether they understood the game. Specifically, the question tested whether they knew that the payoff-maximizing strategy was independent of what other players contributed, or whether they incorrectly believed that the game’s payoffs were interdependent. The question read, “In the game, if a player wants to maximize his or her earnings in any one particular round, does the amount they should contribute depend on what the other people in their group contribute?” We allowed them to answer yes, sometimes, no, or unsure. Note that, as this was a multiple-choice format, some players may have answered correctly merely by chance. The full breakdown of results is available in Table S3, with responses separated by behavioral type as measured in the preceding strategy method with computerized groupmates.

As the classification scheme for different types can be made more or less complex, we analyzed the responses using several different classification schemes, using logistic regression (GLM with binary-logit link) to test whether players responded correctly (chose no) or not (chose yes/sometimes/unsure). In all cases, the behavioral types correlated with the answers to our question on understanding (Table S4). Our most complex classification scheme (expansive) is represented in Fig. 1. It contains six behavioral types: the four from ref. 1, plus our two extra types, negative cooperators and unconditional cooperators. This scheme significantly predicted whether a player correctly answered our question about the nonstrategic nature of the game (GLM: χ2(5) = 15.3, P = 0.009). The next most complex scheme uses the same criteria as ref. 1 but also includes negative cooperators (F01 + negative), a phenomenon not occurring in ref. 1 and not documented in ref. 2. Again, this scheme significantly predicted correct answers (GLM: χ2(4) = 14.8, P = 0.005). The next most complex classification scheme (F01) is simply that of ref. 1. Here we had to choose which category was most suitable for the negative cooperators; as mentioned above, we chose to classify them as humped cooperators because humped cooperators also show a negative correlation, after an idiosyncratic point, between their contributions and the contributions of others. Again, this scheme significantly predicted understanding (GLM: χ2(3) = 12.9, P = 0.005).
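These logistic regressions can be sketched as follows, with the chi-square obtained as a likelihood-ratio test against an intercept-only model (our illustrative data use three types rather than six, so the example test has 2 df):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical 72 players: behavioral type and whether they answered the
# understanding question correctly (1) or not (0).
df = pd.DataFrame({
    "btype": ["noncooperator"] * 15 + ["conditional"] * 36 + ["other"] * 21,
    "correct": [1] * 10 + [0] * 5 + [1] * 8 + [0] * 28 + [1] * 3 + [0] * 18,
})

full = smf.glm("correct ~ C(btype)", data=df,
               family=sm.families.Binomial()).fit()
null = smf.glm("correct ~ 1", data=df,
               family=sm.families.Binomial()).fit()
lr = 2 * (full.llf - null.llf)               # likelihood-ratio statistic
p = stats.chi2.sf(lr, df=full.df_model)
print(f"LR chi2({int(full.df_model)}) = {lr:.1f}, P = {p:.3f}")
```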

More simply, we have a three-way classification scheme, which combines conditional and humped cooperators into one category, alongside noncooperators and unclassified players. Again, this scheme significantly predicted understanding of the game (three-way, GLM: χ2(2) = 12.9, P = 0.002). Finally, we have two simple, binary classification schemes. The first asks whether players cooperate (contribute) or not: the division into cooperators and noncooperators significantly predicted understanding (GLM: χ2(1) = 11.9, P = 0.001). So did our final scheme, which simply classifies players as making constant contributions (noncooperators and unconditional cooperators) or inconstant contributions (conditional cooperators, humped cooperators, and unclassified players). The constant players may or may not cooperate, but they show no form of conditionality in any sense and thus act as though they believe the game’s payoffs are not interdependent. Again, this classification scheme significantly predicted the correct response to the question testing players’ beliefs and understanding about the game’s payoffs (GLM: χ2(1) = 12.1, P = 0.001).

Finally, evidence that many players were uncertain about how to play the game comes from the lack of stability in the contributions our players made over the preceding six rounds of play with human groupmates. Such instability suggests a degree of bet hedging by many players. In these rounds, in which there was no information feedback and thus no learning about the game or one’s groupmates could occur, only 21 (29%) of the 72 players remained constant, and 13 of these were noncooperators (contributed 0 MU). On average, players changed their contributions by ±3.2 MU from the previous round (mean absolute changes for rounds 2, 3, 4, 5, and 6, respectively: 2.4, 3.3, 3.8, 3.3, and 3.2 MU). The mean range in contributions (maximum − minimum contribution) was 7.3 MU, with 10 players (14%) exhibiting the full range possible (0–20 MU).
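These instability measures are simple to compute; a sketch, assuming a hypothetical 72 × 6 array of each player’s contributions over the six human rounds (randomly generated here, so the outputs will not match the quoted figures):

```python
import numpy as np

rng = np.random.default_rng(2)
contribs = rng.integers(0, 21, size=(72, 6))  # illustrative, 0-20 MU

abs_change = np.abs(np.diff(contribs, axis=1))   # round-to-round shifts
mean_abs_change = abs_change.mean()              # cf. the 3.2 MU figure
per_round = abs_change.mean(axis=0)              # rounds 2..6 separately
contrib_range = contribs.max(axis=1) - contribs.min(axis=1)
n_constant = int((contrib_range == 0).sum())     # players who never varied
n_full_range = int(((contribs.min(axis=1) == 0) &
                    (contribs.max(axis=1) == 20)).sum())  # full 0-20 MU range
print(mean_abs_change, per_round, contrib_range.mean(),
      n_constant, n_full_range)
```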

Standard Control Questions Fail to Control for Understanding.

The control questions are available in SI Appendix. We have sufficient experience of running public-goods experiments to confirm the difficulty of getting all players to answer the questions correctly in a timely manner. An experimenter nearly always has to get involved, either in person or on-screen, and try to help without helping too much, as is clear from the following quotes: “After completion of the questionnaire, the questions were publicly solved. Any remaining questions were answered in private.” (3, p. 176); “Subjects who showed problems in understanding the task were given assistance.” (4, p. 89); “Once all participants had completed the exercises, the experimenter solved them in public. Any remaining questions that the subjects had were then answered in private.” (6, p. 151); and “if any participant repeatedly failed to answer correctly, the experimenter provided an oral explanation.” (50, p. 200) (emphases added). An anonymous reviewer also claimed that in ref. 2, “Before the start of the experiment subjects had to read the instructions and to answer control questions and were instructed again when necessary.”

Clearly, such processes can influence the participants and can allow through participants who have not independently answered all of the questions correctly. For this reason, we allowed our players to answer as they wished, without any further instruction. This allowed us to identify where each participant went wrong, to identify sources of misunderstanding, and to quantify, for the first time to our knowledge, how well people respond to such questions.

It is not always clear whether a player answered the control questions correctly. In the strictest sense, 16 players (22%) answered all 10 questions, pertaining to the four scenarios, perfectly. The breakdown per scenario was as follows: scenario 1 (two questions), 47 of 72 players (65%) answered correctly; scenario 2 (two questions), 39 players (54%) answered correctly; scenario 3 (three questions), 23 players (32%) answered correctly; and scenario 4 (three questions), 37 players (51%) answered correctly. These correct-response rates differed significantly (χ2 test: χ2(3) = 16.6, P < 0.001). However, some participants answered in ways that could be interpreted as correct, based on certain ambiguities in the questions. For example, in scenario 1, nobody contributes, so nobody makes a profit on his or her initial endowment of 20 MU. Some players incorrectly entered 0 MU, perhaps meaning a gain of 0 MU relative to the endowment, although the question asked them to calculate the “total income (in MU).” Six players appeared to correctly calculate the returns from the public project but either forgot to add, or thought that one should not add, the saved, noncontributed MU. To be generous, one could also categorize these players as passing the test, giving a total of 22 players (31%) that answered the questions correctly.
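The comparison of correct-response rates across the four scenarios can be rerun directly from the counts reported above (a 2 × 4 chi-square test of homogeneity; scipy applies no continuity correction for tables larger than 2 × 2, and this recovers the quoted χ2(3) = 16.6):

```python
from scipy.stats import chi2_contingency

correct = [47, 39, 23, 37]                    # per scenario, out of 72 players
table = [correct, [72 - c for c in correct]]  # rows: correct vs. incorrect
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, P = {p:.4f}")
```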

As a robustness check of the results in the main text, we analyzed our data from another, larger study conducted in the same laboratory. This study had 216 players and was very similar, in that the players had near-identical instructions and control questions, which were copied verbatim from ref. 35 instead of ref. 2. Ref. 35 uses eight control questions that are an identical subset of those in ref. 2, covering the same four scenarios but with only two subquestions instead of three for scenarios 3 and 4. These players also played a one-shot unconditional game with the computers and the six-stage one-shot unconditional game with humans, and they answered our control question on whether the game is interdependent or not (but in the postexperiment questionnaire rather than in the experiment). The main difference was that they did not complete a strategy-method game.

Of these 216 players, 78 (36%) correctly answered all eight questions, and of these, only 41 (53%) maximized their income (contributed 0 MU) when playing with computers. These 78 players contributed no more on average when playing with humans (5.4 MU) than they did when playing with the computers (4.6 MU; paired t test: t(77) = 1.0, P = 0.311). Regarding our control question on whether the game is interdependent or not, we found that only 29 (37%) of the 78 correctly answered that the game is not interdependent. However, 21 of these 29 (72%) maximized their income when playing with the computers, significantly more than the 20 of the 49 (41%) that answered our question incorrectly (FET: P = 0.0097). Thus, our results are robust when analyzing a larger sample than that presented in the main text. This larger sample confirms that answering the standard control questions does not guarantee that players understand the game and that our control question significantly helps explain behavior, even among those that answered all of the standard control questions correctly.
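The Fisher’s exact test quoted above can likewise be rerun from the reported counts:

```python
from scipy.stats import fisher_exact

# Rows: answered the interdependence question correctly vs. incorrectly.
# Columns: maximized income with the computers vs. did not.
table = [[21, 29 - 21],
         [20, 49 - 20]]
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, P = {p:.4f}")
```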

Comprehenders Are Not Cooperators.

We conducted paired t tests between individual contributions to the computers and contributions to humans for those that could be deemed to understand the game (comprehenders). We examined the behavior of three types of players that could all be argued to be comprehenders. First, all of the individuals that contributed 0 MU in both the strategy method and the one-shot game with the computers (n = 13 of 72, 18%), who could be argued to comprehend the game best, as they demonstrated that they knew the income-maximizing strategy. Of course, some of these could be false positives, as some players may not fully understand the game but still contribute 0 MU for various reasons. Second, the individuals that answered all of the standard control questions correctly and thus have previously been assumed to understand the game (n = 16, 22%) (1–6). Third, the individuals that passed our beliefs test and arguably understood the essential dilemma of the game (n = 21, 29%). Again, some of these may be false positives, for two reasons: (i) the test was a multiple-choice question, meaning some could have guessed the correct answer; and (ii) some players may correctly think that the decisions of one’s groupmates are irrelevant to one’s attempts to maximize earnings but incorrectly believe that contributing is always profitable.

In all three cases, we found that comprehenders were no more cooperative with humans than they were with computers. First, the 13 players that maximized their income when playing with computers did not contribute significantly more than 0 MU in any individual round with humans (Table S5). The mean contributions by round were 1.6, 0.6, 1.0, 2.1, 1.9, and 0.0 MU for the six rounds, respectively. None of these were significantly different from 0 MU (paired t tests: R1, t(12) = 1.3, P = 0.224; R2, t(12) = 1.0, P = 0.337; R3, t(12) = 1.3, P = 0.227; R4, t(12) = 1.5, P = 0.168; R5, t(12) = 1.4, P = 0.175; R6, t(12) = 0.0, P = 1.0), nor was the mean across all six rounds combined (paired t test: t(12) = 1.957, P = 0.074).

Second, the 16 players that answered all of the standard control questions correctly showed no prosocial bias toward humans, giving just as much to computers (4.8 MU) as they did to humans in each and every round. They did not contribute significantly more than 4.8 MU in any individual round with humans. The mean contributions by round were 5.3, 5.7, 4.6, 6.4, 6.2, and 4.3 MU for the six rounds, respectively. None of these were significantly different from 4.8 MU (paired t tests: R1, t(15) = 0.4, P = 0.708; R2, t(15) = 0.7, P = 0.482; R3, t(15) = 0.2, P = 0.815; R4, t(15) = 0.8, P = 0.450; R5, t(15) = 1.0, P = 0.338; R6, t(15) = 0.6, P = 0.562), nor was the mean across all six rounds combined (paired t test: t(15) = 0.6, P = 0.529).

Third, the 21 players that answered our control question correctly also showed no prosocial bias toward humans. In five of the six rounds with humans, they did not contribute significantly more than the 5.1 MU they gave to the computers. The mean contributions by round were 7.0, 6.6, 6.7, 7.1, 7.6, and 6.4 MU for the six rounds, respectively (overall mean = 6.9 MU). Only one round was significantly different from 5.1 MU (paired t test: R5, t(20) = 2.1, P = 0.048), and this would no longer be the case if we were to control for multiple testing (e.g., with a Bonferroni correction). None of the other rounds were significantly different (R1, t(20) = 1.4, P = 0.186; R2, t(20) = 1.4, P = 0.183; R3, t(20) = 1.5, P = 0.139; R4, t(20) = 1.4, P = 0.188; R6, t(20) = 1.3, P = 0.224), nor was the mean across all six rounds combined (paired t test: t(20) = 1.7, P = 0.098).

Measuring Motivations.

We assumed that players playing with computers were trying to maximize their income as much as players are ever motivated to do so. We then inferred, because behavior was unchanged when players shifted from playing with computers to playing with humans, that players were equally motivated to maximize personal income when playing with humans. One may be tempted to ask: why not simply ask the players what their motivations were? Therefore, in a postgame questionnaire, conducted while the payments to our players were being prepared, we asked our players about their motivations during their games with humans.

As this was a hypothetical questionnaire, players could make their responses appear prosocial without paying a cost to do so, contrary to typical economic decisions in experiments measuring social preferences. Therefore, responses to this question could be argued to provide a lower bound on self-interested motivations among our players. We asked, “What was your most important motivation in the games with real people? Please select the answer that best describes your motivations?” (the number of players choosing each option is shown in parentheses).

  • Making myself the maximum money possible (36)

  • Making other people the maximum money possible (3)

  • Making everyone the maximum money possible (13)

  • Making the group the maximum money possible (11)

  • Making myself more money than other people (1)

  • Other (8)

We consider options 1 and 5 to reveal a self-interested motivation (n = 37), whereas options 2–4 are prosocial (n = 27, or 35 if including “Other”). Despite the opportunity to appear prosocial at no cost, and in contrast to the low frequency of noncooperators in our incentivized strategy method (n = 15, 21%), the most common response, chosen by 50% of players, was a self-interested one, “making myself the maximum money possible” (n = 36 of 72). Table S6 shows how these responses depended on behavioral type.

SI Discussion

Why Might Players Think That Contributing to the Public Good Is a Profitable Decision?

It has been claimed that the income-maximizing strategy is “simple to figure out” (13). However, this claim cannot simply be asserted; it has to be ascertained scientifically. We argue that there are several reasons why players may reach mistaken conclusions about the game. Although we do not wish to criticize the difficult job of writing instructions for economic games, we think it possibly significant that the instructions we copied from ref. 2 used the word invest when describing contribution decisions. The word invest has clear financial connotations to do with risk and gain (as confirmed in a postgame survey by ref. 34), suggesting that returns are not fixed and may depend on other factors, such as what other people choose to invest. The fact that the experimenter then offers players the opportunity to explicitly condition on the behavior of others would only further reinforce this belief (27, 28). Furthermore, the instructions specify that each MU contributed returns 0.4 MU, but they do not clearly specify that each MU contributed thereby incurs a net cost of 0.6 MU, and judging from the answers to the control questions, it seems that some participants thought that each MU contributed returned 1.4 MU (i.e., that the 0.4 MU return was pure profit). Evidence of this potential confusion comes from a study showing that cooperation is significantly reduced in the public-goods game when players are explicitly informed that they “lose money on contributing” (63). Finally, people may contribute because they think that contributing something is a necessary requirement for benefiting from the public good (e.g., paying to become a member of a club or group in the real world). These mixed reasons may explain why many of our players varied their contributions during the six rounds of play with humans. Such within-individual variation suggests that some players used a bet-hedging strategy in response to a risky or uncertain decision environment.
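The payoff structure at issue can be made explicit with a short sketch of the standard linear public-goods payoff implied by the instructions (our code; the function and parameter names are ours):

```python
ENDOWMENT = 20
MPCR = 0.4  # marginal per capita return: each MU contributed by anyone
            # in the group returns 0.4 MU to every member

def payoff(own_contribution, others_contributions):
    """Round payoff: MU kept plus the per capita return on all contributions."""
    total = own_contribution + sum(others_contributions)
    return ENDOWMENT - own_contribution + MPCR * total

# Each extra MU contributed changes one's own payoff by -1 + 0.4 = -0.6 MU,
# regardless of what the others do (the cost the instructions leave implicit):
print(payoff(1, [10, 10, 10]) - payoff(0, [10, 10, 10]))  # -0.6
```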

However, why do the standard control questions (SI Appendix) not prevent these erroneous beliefs? An examination of the specific questions shows that the first two scenarios do little other than reinforce these false beliefs. Specifically, they show that if no one contributes, then no one makes any additional money (scenario 1), but conversely that if everyone contributes fully, then everyone makes a lot of money (scenario 2). The only hint that contributions are costly comes from scenario 3, in which players can see that when the rest of the group invests 30 MU, contributing 0 MU is better than contributing 10 or 15 MU. However, scenario 4 then shows them that they benefit the more others contribute: specifically, if they invest 8 MU, they do better if the others contribute 22 MU instead of 12 or 7 MU. Tellingly, players struggled the most with scenario 3, with only 32% of players answering correctly, compared with 65%, 54%, and 51% for scenarios 1, 2, and 4, respectively. This significant difference means that if experimenter assistance is provided, it is most likely to be for scenario 3, which is the only scenario that demonstrates there is a social dilemma.
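Running the two scenarios through the same linear payoff confirms the asymmetry described above (our illustrative check, with the others’ contributions given as a group total, as in the control questions):

```python
def payoff(own, others_total, endowment=20, mpcr=0.4):
    # Linear public-goods payoff as sketched in the previous code block.
    return endowment - own + mpcr * (own + others_total)

for own in (0, 10, 15):        # scenario 3: rest of group invests 30 MU
    print("own =", own, "->", payoff(own, 30))          # 32.0, 26.0, 23.0
for others in (22, 12, 7):     # scenario 4: own contribution fixed at 8 MU
    print("others =", others, "->", payoff(8, others))  # 24.0, 20.0, 18.0
```

Scenario 3 shows contributing is costly (32.0 falls to 23.0 as one’s own contribution rises), whereas scenario 4 shows one earns more the more others contribute, which is the only impression most players will carry away if they never master scenario 3.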

Why Might Players Show a Correlation Between Contributions and Either Expectations of Others or the Actions of Others?

Again, without wishing to criticize the instructions, they were perhaps not ideal, as they specifically state to the participants that they can “condition their contribution on that of the other group members” (2). In addition, the input screen asks them to enter their “conditional contribution to the project” (2). This suggestive language implies that conditioning is something they may wish to consider doing and may therefore act as a demand characteristic for the game (27, 28).

There are also other reasons why contributions may correlate with either expectations or the decisions of others without reflecting concerns for fairness or inequity aversion (59). Conditional cooperators are considered to have social preferences because their decisions correlate with their knowledge and beliefs about the contributions of their groupmates. However, this correlation does not automatically imply social preferences. If we relax the assumption that players perfectly understand the game, with full confidence, then there are three alternative reasons why this correlation may exist, none of which requires any social preferences: (i) if a player thinks that “contributing X MU is best,” then it is reasonable for them to conclude that many others will think likewise, making their expectations match their own behavior, rather than vice versa; (ii) if a player is uncertain but knows that others are contributing X MU, then it is reasonable for them to conclude that others know better what to do and to copy their behavior in the hope of a better payoff, making their behavior match their knowledge of other players’ behavior; and (iii) if a player thinks that the income-maximizing response to what others contribute changes depending on what they themselves contribute, then they will in some sense condition their contributions on these other contributions. This will lead to their behavior matching both their knowledge and expectations of others, even when the others are merely computers playing randomly, as evidenced by our results.

A revealing insight from the strategy method is the presence of humped and negative cooperators. These players reduce their contributions the more their groupmates contribute. This is not consistent with a sense of fairness or social preferences. It is, however, consistent with the appropriate strategy for a threshold, or nonlinear, public-goods game (62). In such games, a self-interested player can benefit by compensating for a lack of contributions by other players. Clearly, in such games, the income-maximizing strategy does depend on knowing the behavior of others.

It is probably reasonable to consider that some of these behaviors may be unique artifacts of the strategy method. For instance, negative and humped cooperators may not occur in unconditional versions of the game, although they might. Because of their rarity, it would require a huge sample size to capture enough of them to detect their behavior in the unconditional game. It would also require repeated games that control for learning, through the omission of feedback between rounds, to sample enough decisions from each individual. However, conditional cooperators are much more common and show consistency between their play in the unconditional games and the strategy method. This means that even if the strategy method inflates the apparent frequency of conditional cooperators, concerns about the misinterpretation of their motives are not restricted to experiments using the strategy method.

Methods

Experiments were conducted using z-Tree (65) at the Centre for Experimental Social Sciences (CESS), Nuffield College, University of Oxford. Participants were recruited using ORSEE (66) from the general participant pool, with the sole specification that they had not previously participated in a public-goods experiment. The CESS laboratory has a policy of “no deception,” all experiments must pass the CESS ethical review board, and CESS obtains informed consent from all players. We provided all players with the same instructions and control questions (SI Appendix), which were copied verbatim as much as possible from the online appendix of ref. 2. We provided the instructions both on screen and on paper handouts that the players could keep during the experiment (SI Methods).

Acknowledgments

We thank Raghavendra Gadagkar, three anonymous reviewers, and Miguel dos Santos for comments; the Centre for Experimental Social Sciences; the European Research Council, the Calleva Research Centre, and the John Fell Fund, Oxford, for funding; and our players.

Footnotes

  • 1To whom correspondence should be addressed. Email: stuart.west@zoo.ox.ac.uk.
  • Author contributions: M.N.B.-C., C.E.M., and S.A.W. designed research, performed research, analyzed data, and wrote the paper.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission.

  • This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1509740113/-/DCSupplemental.

Freely available online through the PNAS open access option.


References

  1. Fischbacher U, Gächter S, Fehr E (2001) Are people conditionally cooperative? Evidence from a public goods experiment. Econ Lett 71(3):397–404.
  2. Fischbacher U, Gächter S (2010) Social preferences, beliefs, and the dynamics of free riding in public goods experiments. Am Econ Rev 100(1):541–556.
  3. Kocher MG, Cherry T, Kroll S, Netzer RJ, Sutter M (2008) Conditional cooperation on three continents. Econ Lett 101(3):175–178.
  4. Herrmann B, Thöni C (2009) Measuring conditional cooperation: A replication study in Russia. Exp Econ 12(1):87–92.
  5. Martinsson P, Villegas-Palacio C, Wollbrant C (2009) Conditional cooperation and social group: Experimental results from Colombia. Environment for Development, Discussion Paper Series, EfD DP 09-16. Available at www.rff.org/files/sharepoint/WorkImages/Download/EfD-DP-09-16.pdf. Accessed January 6, 2016.
  6. Martinsson P, Nam PK, Villegas-Palacio C (2013) Conditional cooperation and disclosure in developing countries. J Econ Psychol 34:148–155.
  7. Burlando RM, Guala F (2005) Heterogeneous agents in public goods experiments. Exp Econ 8(1):35–54.
  8. Camerer CF, Fehr E (2006) When does “economic man” dominate social behavior? Science 311(5757):47–52.
  9. Ones U, Putterman L (2007) The ecology of collective action: A public goods and sanctions experiment with controlled group formation. J Econ Behav Organ 62(4):495–521.
  10. Rustagi D, Engel S, Kosfeld M (2010) Conditional cooperation and costly monitoring explain success in forest commons management. Science 330(6006):961–965.
  11. Kosfeld M, von Siemens FA (2011) Competition, cooperation, and corporate culture. Rand J Econ 42(1):23–43.
  12. Volk S, Thöni C, Ruigrok W (2012) Temporal stability and psychological foundations of cooperation preferences. J Econ Behav Organ 81(2):664–676.
  13. Camerer CF (2013) Experimental, cultural, and neural evidence of deliberate prosociality. Trends Cogn Sci 17(3):106–108.
  14. Cheung SL (2014) New insights into conditional cooperation and punishment from a strategy method experiment. Exp Econ 17(1):129–153.
  15. Nielsen UH, Tyran JR, Wengström E (2014) Second thoughts on free riding. Econ Lett 122(2):136–139.
  16. Hartig B, Irlenbusch B, Kölle F (2015) Conditioning on what? Heterogeneous contributions and conditional cooperation. J Behav Exp Econ 55:48–64.
  17. Mertins V, Schote AB, Hoffeld W, Griessmair M, Meyer J (2011) Genetic susceptibility for individual cooperation preferences: The role of monoamine oxidase A gene (MAOA) in the voluntary provision of public goods. PLoS One 6(6):e20959.
  18. Mertins V, Schote AB, Meyer J (2013) Variants of the monoamine oxidase A gene (MAOA) predict free-riding behavior in women in a strategic public goods experiment. J Neurosci Psychol Econ 6(2):97–114.
  19. Suzuki S, Niki K, Fujisaki S, Akiyama E (2011) Neural basis of conditional cooperation. Soc Cogn Affect Neurosci 6(3):338–347.
  20. Dawes CT, et al. (2012) Neural basis of egalitarian behavior. Proc Natl Acad Sci USA 109(17):6479–6483.
  21. Gintis H (2000) Beyond Homo economicus: Evidence from experimental economics. Ecol Econ 35(3):311–322.
  22. Bowles S, Gintis H (2002) Homo reciprocans. Nature 415(6868):125–128.
  23. Bowles S, Hwang SH (2008) Social preferences and public economics: Mechanism design when social preferences depend on incentives. J Public Econ 92(8-9):1811–1820.
  24. Gsottbauer E, van den Bergh JCJM (2011) Environmental policy theory given bounded rationality and other-regarding preferences. Environ Resour Econ 49(2):263–304.
  25. Gächter S (2007) Conditional cooperation: Behavioural regularities from the lab and the field and their policy implications. Psychology and Economics: A Promising New Cross-Disciplinary Field, eds Frey BS, Stutzer A (MIT Press, Cambridge, MA), pp 19–50.
  26. Martinsson P, Villegas-Palacio C, Wollbrant C (2015) Cooperation and social classes: Evidence from Colombia. Soc Choice Welfare 45(4):829–848.
  27. Bardsley N (2008) Dictator game giving: Altruism or artefact? Exp Econ 11(2):122–133.
  28. Zizzo DJ (2010) Experimenter demand effects in economic experiments. Exp Econ 13(1):75–98.
  29. Henrich J, et al. (2005) “Economic man” in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behav Brain Sci 28(6):795–815.
  30. Rand DG, et al. (2014) Social heuristics shape intuitive cooperation. Nat Commun 5:3677.
  31. Heintz C (2013) What can’t be inferred from cross-cultural experimental games. Curr Anthropol 54(2):165–167.
  32. Andreoni J (1995) Cooperation in public-goods experiments: Kindness or confusion. Am Econ Rev 85(4):891–904.
  33. Houser D, Kurzban R (2002) Revisiting kindness and confusion in public goods experiments. Am Econ Rev 92(4):1062–1069.
  34. Ferraro PJ, Vossler CA (2010) The source and significance of confusion in public goods experiments. The B.E. Journal of Economic Analysis & Policy, 10.2202/1935-1682.2006.
  35. Fehr E, Gächter S (2002) Altruistic punishment in humans. Nature 415(6868):137–140.
  36. Fischbacher U, Föllmi-Heusi F (2013) Lies in disguise: An experimental study on cheating. J Eur Econ Assoc 11(3):525–547.
  37. Falk A, Heckman JJ (2009) Lab experiments are a major source of knowledge in the social sciences. Science 326(5952):535–538.
  38. Trivers R (2004) Mutual benefits at all levels of life. Science 304(5673):964–965.
  39. Smith VL (2005) Sociality and self interest. Behav Brain Sci 28(6):833.
  40. Heintz C (2005) The ecological rationality of strategic cognition. Behav Brain Sci 28(6):825.
  41. Burnham TC, Johnson DP (2005) The biological and evolutionary logic of human cooperation. Anal Kritik 27(1):113–135.
  42. Hagen EH, Hammerstein P (2006) Game theory and human evolution: A critique of some recent interpretations of experimental games. Theor Popul Biol 69(3):339–348.
  43. Burnham TC, Hare B (2007) Engineering human cooperation: Does involuntary neural activation increase public goods contributions? Hum Nat 18(2):88–108.
  44. Delton AW, Krasnow MM, Cosmides L, Tooby J (2011) Evolution of direct reciprocity under uncertainty can explain human generosity in one-shot encounters. Proc Natl Acad Sci USA 108(32):13335–13340.
  45. Pedersen EJ, Kurzban R, McCullough ME (2013) Do humans really punish altruistically? A closer look. Proc R Soc B 280:20122723.
  46. Raihani NJ, Bshary R (2015) Why humans might help strangers. Front Behav Neurosci 9:39.
  47. Burton-Chellew MN, West SA (2013) Prosocial preferences do not explain human cooperation in public-goods games. Proc Natl Acad Sci USA 110(1):216–221.
  48. Fischbacher U, Gächter S, Quercia S (2012) The behavioral validity of the strategy method in public good experiments. J Econ Psychol 33(4):897–913.
  49. Herrmann B, Thöni C, Gächter S (2008) Antisocial punishment across societies. Science 319(5868):1362–1367.
  50. Fischbacher U, Schudy S, Teyssier S (2014) Heterogeneous reactions to heterogeneity in returns from public goods. Soc Choice Welfare 43(1):195–217.
  51. Gintis H, Smith EA, Bowles S (2001) Costly signaling and cooperation. J Theor Biol 213(1):103–119.
  52. Trivers RL (1971) Evolution of reciprocal altruism. Q Rev Biol 46(1):35–57.
  53. Burton-Chellew MN, Nax HH, West SA (2015) Payoff-based learning explains the decline in cooperation in public goods games. Proc Biol Sci 282(1801):20142678.
  54. Smith A (2013) Estimating the causal effect of beliefs on contributions in repeated public good games. Exp Econ 16(3):414–425.
  55. Gächter S, Renner E (2010) The effects of (incentivized) belief elicitation in public goods experiments. Exp Econ 13(3):364–377.
  56. van ’t Wout M, Kahn RS, Sanfey AG, Aleman A (2006) Affective state and decision-making in the Ultimatum Game. Exp Brain Res 169(4):564–568.
  57. Rilling JK, Sanfey AG, Aronson JA, Nystrom LE, Cohen JD (2004) The neural correlates of theory of mind within interpersonal interactions. Neuroimage 22(4):1694–1703.
  58. Sanfey AG, Rilling JK, Aronson JA, Nystrom LE, Cohen JD (2003) The neural basis of economic decision-making in the Ultimatum Game. Science 300(5626):1755–1758.
  59. Fehr E, Schmidt KM (1999) A theory of fairness, competition, and cooperation. Q J Econ 114(3):817–868.
  60. Smith EA (2005) Making it real: Interpreting economic experiments. Behav Brain Sci 28(6):832.
  61. Gerkey D (2013) Cooperation in context: Public goods games and post-Soviet collectives in Kamchatka, Russia. Curr Anthropol 54(2):144–176.
  62. Croson R, Marks M (2000) Step returns in threshold public goods: A meta- and experimental analysis. Exp Econ 2(3):239–259.
  63. Tinghög G, et al. (2013) Intuition and cooperation reconsidered. Nature 498(7452):E1–E2, discussion E2–E3.
  64. Gurven M, Winking J (2008) Collective action in action: Prosocial behavior in and out of the laboratory. Am Anthropol 110(2):179–190.
  65. Fischbacher U (2007) z-Tree: Zurich toolbox for ready-made economic experiments. Exp Econ 10(2):171–178.
  66. Greiner B (2015) Subject pool recruitment procedures: Organizing experiments with ORSEE. J Econ Sci Assoc 1(1):114–125.