How social information can improve estimation accuracy in human groups
Edited by Burton H. Singer, University of Florida, Gainesville, FL, and approved October 2, 2017 (received for review March 5, 2017)
Significance
Digital technologies deeply impact the way that people interact. Therefore, it is crucial to understand how social influence affects individual and collective decision-making. We performed experiments where subjects had to answer questions and then revise their opinion after knowing the average opinion of some previous participants. Moreover, unbeknownst to the subjects, we added a controlled number of virtual participants always giving the true answer, thus precisely controlling social information. Our experiments and data-driven model show how social influence can help a group of individuals collectively improve its performance and accuracy in estimation tasks depending on the quality and quantity of information provided. Our model also shows how giving slightly incorrect information could drive the group to a better performance.
Abstract
In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects’ sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance.
In a globalized, connected, and data-driven world, people rely increasingly on online services to fulfill their needs. AirBnB, Amazon, Ebay, and Trip Advisor, to name just a few, have in common the use of feedback and reputation mechanisms (1) to rate their products, services, sellers, and customers. Ideas and opinions increasingly propagate through social networks, such as Facebook or Twitter (2–4), to the point that they have the power to cause political shifts (5). In this context, it is crucial to understand how social influence affects individual decision-making and its resulting effects at the level of a group.
Two observations can be made about these collective phenomena: (i) people often make decisions not simultaneously but sequentially (6, 7), and (ii) decision tasks involve judgmental/subjective aspects. Social psychological research on group decision-making has established that consensual processes vary greatly depending on the demonstrability of answers (8). When the solution is easy to show, people often follow the “truth-wins” process, whereas when the demonstrability is low, they are much more susceptible to “majoritarian” social influence (9). Thus, collective estimation tasks where correct solutions cannot be easily shown are particularly well suited for measuring the impact of social influence on individuals’ decisions. Galton’s original work (10) on estimation tasks shows that the median of independent estimates of a quantity can be impressively close to its true value. This phenomenon has been popularized as the wisdom of crowds (WOC) effect (11), and it is generally used to measure a group’s performance. However, because of the independence condition, it does not consider potential effects of social influence.
In recent years, it has been debated whether social influence is detrimental to the WOC or not: some works argue that it reduces group diversity without improving the collective error (12, 13), while others show that it is beneficial if one defines collective performance otherwise (14, 15). One or two of the following measures were used to define performance and diversity. Let us define E_i as the estimate of individual i, ⟨E⟩ as its average over all individuals, and T as the true value of the quantity to estimate. Then, d = ⟨(E_i − ⟨E⟩)^2⟩ is a measure of group diversity, and e1 = (⟨E⟩ − T)^2 and e2 = ⟨(E_i − T)^2⟩ are two natural measures of the group performance. However, these estimators are not independent, since e2 = e1 + d, which shows that a decrease in diversity is beneficial to group performance, as measured by e2, contrary to the general claim. Later research showed that social influence helps the group perform better if one considers only information coming from informed (16), successful (17), or confident (18) individuals. We will show that these traits are actually strongly related. The way that social information is defined also matters: providing individuals with the arithmetic or geometric mean of estimates of other individuals has different consequences (18).
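For concreteness, here is a minimal numerical check of this decomposition; the symbols e1, e2, and d above and the variable names below are ours, used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100.0                                                  # true value of the quantity
E = rng.lognormal(mean=np.log(T), sigma=0.8, size=1000)    # simulated individual estimates

mean_E = E.mean()
d = np.mean((E - mean_E) ** 2)    # group diversity
e1 = (mean_E - T) ** 2            # squared error of the mean estimate ("collective error")
e2 = np.mean((E - T) ** 2)        # mean squared individual error

print(np.isclose(e2, e1 + d))     # True: the three measures are linked by e2 = e1 + d
```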
Other than these methodological issues, it is difficult to precisely analyze and characterize the impact of social influence on individual estimates without controlling the quality and quantity of information that is exchanged between subjects. Indeed, human groups are often composed of individuals with heterogeneous expertise; therefore, in a collective estimation task, one cannot rigorously control the quality and quantity of shared social information, and the quantification of individual sensitivity to this information is hence very delicate. To overcome this problem, we performed experiments in which subjects were asked to estimate quantities about which they had very little prior knowledge (low demonstrability of answers) before and after having received social information. The interactions between subjects were sequential and local, while most previous works have used a global kind of interaction, with all individuals being provided some information (estimates of other individuals in the group) at the same time (12–14, 18, 19). From the individuals’ estimates and the social information that they received, we were able to deduce their sensitivity to social influence. Moreover, by introducing virtual experts (artificial subjects providing the true answer, thus affecting social information) in the sequence of estimates—without the subjects being aware of it—we were able to control the quantity and quality of information provided to the subjects and to quantify the impact of this information on the group performance.
Our results show that the subjects’ reaction to social influence is heterogeneous and depends on the distance between the personal and group opinions. We then use the data to build and calibrate a model of collective estimation, and use it to analyze and predict how the quantity and quality of information received by individuals affect performance at the group level.
Experimental Design
Subjects were asked to answer questions for which they had to estimate various social, geographical, or astronomical quantities or the number or length of objects in a picture. For each question, the experiment proceeded in two steps: subjects first had to provide their personal estimate E_p. Then, after receiving the social information I, they were asked to give a new estimate E_s. I is defined as the geometric mean of the τ previous estimates (τ = 1 or 3). Subjects answered each question sequentially (SI Appendix, Fig. S1) and were not told the value of τ. Since humans think in terms of orders of magnitude (20), we used the geometric mean for I—which averages orders of magnitude—rather than the arithmetic one.
Virtual “experts” providing the true value T for each question were inserted at random into the sequence of participants (SI Appendix, Fig. S1). For each sequence involving 20 human participants, we controlled the number n = 0, 5, 15, or 80 of virtual experts and hence their percentage ρ = n/(n + 20) = 0%, 20%, 43%, or 80%, respectively. The social information delivered to human participants, being the geometric mean of previous estimates, is hence strongly affected by these virtual experts.
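A minimal sketch of how such a sequence assembles the social information, assuming τ = 3 and purely illustrative numbers (the function and variable names are ours, not those of the experimental software):

```python
import numpy as np

def social_info(previous_estimates, tau=3):
    """Geometric mean of the last tau estimates in the sequence."""
    window = np.asarray(previous_estimates[-tau:], dtype=float)
    return float(np.exp(np.mean(np.log(window))))

# n virtual experts inserted among a sequence of 20 human participants
n = 15
rho = n / (n + 20)                      # fraction of expert answers in the sequence (43% here)

# example sequence mixing human estimates and expert answers (experts all answer the true value T)
T = 5_000.0
sequence = [1_200.0, T, 30_000.0, T, 800.0]
print(rho, social_info(sequence))       # social information shown to the next participant
```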
When providing their estimates E_p and E_s, subjects had to report their confidence level in their answer on a Likert scale ranging from one (very low) to five (very high) and were asked to choose, from a list of eight possibilities, the reason that best explained their second estimate. We used initial conditions for the social information chosen reasonably far from the true answer and imposed loose limits on the estimates that subjects could give, to prevent absurd answers. All graphs presented here are based on the 29 questions of the experiment performed in France (each subject providing a prior and a final estimate per question). A similar experiment was conducted in Japan; all results can be found in SI Appendix, where the full experimental protocol is described in detail.
The aims and procedures of the experiments conformed to the ethical rules imposed by the Toulouse School of Economics and the Center for Experimental Research in Social Sciences at Hokkaido University. All subjects in France and Japan provided written consent for their participation.
Results
Distribution of Individual Estimates.
Previous works have shown that distributions of independent individual estimates are generally highly right-skewed, while distributions of their common logarithm are much more symmetric (12, 13, 18). This is because humans think in terms of orders of magnitude, especially when large quantities are involved, which makes the logarithmic scale more natural for representing human estimates (20). In these works, participants were mostly asked “easy” questions for which they had good prior knowledge (high demonstrability), such that the answers ranged over one to two orders of magnitude at most (12–14, 17–19, 21–23). To ensure that little information was present before the inclusion of our virtual experts and to more clearly identify the impact of social influence, we selected “hard” questions (low demonstrability). These questions involve very large quantities, and answers span several orders of magnitude, making the log transform of estimates even more relevant. To compare quantities that can differ by orders of magnitude, we normalize each estimate E by the true answer T to the question at hand and define the log-transformed estimate X = log(E/T). Note that the log transform of the true answer is then log(T/T) = 0.
Fig. 1A shows the distribution of X before and after social information has been provided to the subjects (SI Appendix, Table S1). Although such distributions have often been presented as close to Gaussian distributions (13, 18), we find that they are much better described by Cauchy distributions because of their fat tails, which account for the nonnegligible probability of estimates extremely far from the truth. The Cauchy probability distribution function reads

f(X) = (1/π) σ/[(X − X_c)^2 + σ^2],   [1]

where X_c is the center/median and σ is the width of the distribution. SI Appendix, Fig. S2A shows the distribution of estimates in the Japan experiment, and SI Appendix, Fig. S2B shows that, when the same questions were asked, distributions of personal estimates in France and Japan are almost identical.
Fig. 1. (A) Distribution of the log-transformed estimates X before and after social influence (experiment and model). (B) Distribution of the sensitivity to social influence S (experiment and model; red curve).
For the Cauchy distribution, the mean and standard deviation (SD) are not defined. Therefore, good estimators of the center X_c and the width σ are, respectively, the median and one-half the interquartile range (the difference between the third and the first quartiles) of the experimental distribution. In the following, these two estimators, evaluated before and after social influence, will serve as our measures of the center and width of the corresponding distributions.
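A short sketch of these robust estimators, assuming the log-transformed estimates are available as an array:

```python
import numpy as np

def cauchy_center_width(X):
    """Robust estimators: median for the center, half the interquartile range for the width."""
    X = np.asarray(X, dtype=float)
    center = np.median(X)
    q1, q3 = np.percentile(X, [25, 75])
    width = 0.5 * (q3 - q1)
    return center, width

# For a standard Cauchy sample, the half-interquartile range converges to the true width (1.0 here)
rng = np.random.default_rng(1)
print(cauchy_center_width(rng.standard_cauchy(100_000)))
```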
Cauchy and Gaussian distributions belong to the so-called family of stable distributions. More generally, let X_1, …, X_N be a set of estimates drawn from a symmetric probability distribution f characterized by its center X_c and width σ, and define the weighted average X_w = Σ_i p_i X_i, with Σ_i p_i = 1; f is a stable distribution if X_w follows the same probability distribution as the original X_i, up to a new width σ_w. Indeed, the center remains the same because of the condition Σ_i p_i = 1, but the width may decrease after averaging (law of large numbers), depending on the stable distribution considered. Cauchy and Gaussian distributions represent two extremes of the stable family, with Lévy distributions being intermediate cases: for the Cauchy distribution, the width remains unchanged (σ_w = σ), whereas the narrowing of σ_w is maximal for the Gaussian distribution (SI Appendix). In the case of actual human estimates, the relevance of a given distribution can be related to the degree of prior knowledge of the group. When individuals have no idea about the answer to a question, the weighted average of arbitrary answers cannot be statistically better (σ_w < σ) or worse (σ_w > σ) than the arbitrary answers themselves, leading to a Cauchy distribution for these estimates (the only distribution for which σ_w = σ). However, when there is good prior knowledge, one expects that combining answers gives a better statistical estimate (σ_w < σ; Gaussian). When the quantity to estimate is closely related to general intuition (ages, dates, etc.), estimates should hence follow a Gaussian-like distribution, whereas when individuals have very little knowledge about the answer, as in our experiment, estimates should be Cauchy-like distributed. The rationale for naturally observing stable distributions is explained in SI Appendix.
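A quick numerical illustration of this contrast, assuming equal weights p_i = 1/N:

```python
import numpy as np

rng = np.random.default_rng(2)
N, groups = 10, 100_000          # average N estimates, repeated over many independent groups

def half_iqr(x):
    q1, q3 = np.percentile(x, [25, 75])
    return 0.5 * (q3 - q1)

cauchy_means = rng.standard_cauchy((groups, N)).mean(axis=1)
gauss_means  = rng.normal(size=(groups, N)).mean(axis=1)

print(half_iqr(cauchy_means))    # ~1: averaging leaves the Cauchy width unchanged
print(half_iqr(gauss_means))     # ~0.67/sqrt(10): the Gaussian width shrinks as 1/sqrt(N)
```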
We use the term Cauchy-like because Fig. 1A shows that the distributions of prior and final estimates are slightly skewed toward low estimates (X < 0), reminiscent of the human cognitive bias to underestimate numbers that results from the nonlinear internal representation of quantities (24). As we will show, this phenomenon has strong implications for the influence of information provided to the group. We also observe a clear sharpening of the distribution of estimates after social influence, mainly caused by the presence of the virtual experts, which affect the value of the social information and ultimately the final estimates of the actual subjects. This sharpening becomes stronger as the percentage ρ of experts increases (SI Appendix, Fig. S3).
Moreover, consistent with our introductory discussion of how group performance is measured, we propose the following two indicators: (i) the collective performance, which quantifies how close the center X_c of the distribution of estimates is to zero (the log transform of the true value T), and (ii) the collective accuracy, which quantifies the proximity of the individual estimates X to the true value.
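A sketch of how these two indicators can be computed from the log-transformed estimates; taking the median absolute deviation for the accuracy is an illustrative choice, not necessarily the exact estimator used in the article:

```python
import numpy as np

def collective_performance(X):
    """Center of the distribution of log-estimates; 0 means the group is centered on the truth."""
    return np.median(X)

def collective_accuracy(X):
    """Typical distance of individual log-estimates to the true value (illustrative: median of |X|)."""
    return np.median(np.abs(X))

X = np.log(np.array([1200.0, 4000.0, 9000.0]) / 5000.0)   # three estimates of a quantity whose true value is 5000
print(collective_performance(X), collective_accuracy(X))
```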
Distribution of Individual Sensitivities to Social Influence.
After having received social information, an individual may reconsider her personal estimate E_p. The natural way for humans to aggregate estimates is to use the median (22) or the geometric mean (18), which both tend to reduce the effect of outliers. Here, the social information that we provided to a subject was the geometric mean I of the τ previous answers (including those of the virtual experts providing the true answer T). Moreover, one can always represent the new estimate as a weighted geometric average of the personal estimate and the social information, E_s = E_p^(1−S) I^S, which uniquely defines the sensitivity to social influence S. The value S = 0 corresponds to subjects keeping their initial estimate, while S = 1 corresponds to subjects adopting the estimate of their peers. In terms of the log-transformed variables X = log(E/T), we obtain

X_s = (1 − S) X_p + S M,   [2]

where the log-transformed social information M is simply the arithmetic mean of the τ previous log-transformed estimates, and thus S = (X_s − X_p)/(M − X_p). Note that, in this language, S is simply the barycentric coordinate of the final estimate X_s with respect to the initial personal estimate X_p and the social information M.
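A sketch of how S can be extracted from the raw estimates via Eq. 2 (the variable names are ours):

```python
import numpy as np

def sensitivity(E_p, E_s, I, T):
    """Sensitivity to social influence: barycentric coordinate of the final estimate
    between the personal estimate and the social information, in log space (Eq. 2)."""
    X_p, X_s, M = np.log(E_p / T), np.log(E_s / T), np.log(I / T)
    return (X_s - X_p) / (M - X_p)

print(sensitivity(E_p=200.0, E_s=500.0, I=1000.0, T=800.0))   # ~0.57: a compromise between E_p and I
```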
Fig. 1B shows that the experimental distribution of S has a bell-shaped part, which we roughly assimilate to a Gaussian, with two additional Dirac peaks exactly at S = 0 and S = 1 (SI Appendix, Table S2 shows the numerical values). Five types of behavioral responses can be identified: keeping one’s opinion (peak at S = 0), adopting the group’s opinion (peak at S = 1), making a compromise between one’s opinion and the group’s opinion (0 < S < 1), overreacting to social information (S > 1), and contradicting it (S < 0). Quite surprisingly, the overreacting and contradicting responses were generally overlooked in previous works (21–23, 25): either considered as noise and simply discarded, or sometimes lumped into the peaks at S = 0 and S = 1, although these behaviors are not negligible (especially overreacting). We find that the median of S is close to 0.3, in agreement with previous results (15, 18, 25), meaning that individuals tend to give more weight to their own opinion than to information coming from others (14, 19). Moreover, the distributions of S for the experiment performed in Japan, and for men and women separately (in France), are very similar to that of Fig. 1B (SI Appendix, Fig. S4).
We find that the subjects’ behavioral reactions are highly consistent, reflecting robust differences in personality or general knowledge: in each session, according to the way that subjects modified their estimates on average over the first questions, we split the subjects into three subgroups. We first define the “confident” subjects as the one-quarter of the group minimizing the average over questions of |S| (i.e., the subjects who were on average closest to S = 0), and the “followers” as the one-quarter minimizing the average of |S − 1| (i.e., closest to S = 1). The other one-half of the group is defined as the “average” subjects. SI Appendix, Fig. S5 shows the distributions of S for the three subgroups, computed from questions 25–29. The differences are striking (SI Appendix, Fig. S6): for the group of confident subjects, the peak at S = 0 is about seven times higher than the peak at S = 1, while for the group of followers, it is less than twice as high. Moreover, the distribution for the average subjects is found to be very close to the global distribution shown in Fig. 1B.
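A sketch of this splitting procedure, assuming a per-subject, per-question array of sensitivities from the first questions (names and the handling of possible overlaps are ours):

```python
import numpy as np

def split_subjects(S):
    """S: array of shape (n_subjects, n_questions) of sensitivities on the first questions.
    Returns index arrays for the confident, follower, and average subjects."""
    closeness_to_keep  = np.mean(np.abs(S), axis=1)         # average distance of S to 0
    closeness_to_adopt = np.mean(np.abs(S - 1.0), axis=1)   # average distance of S to 1
    n = S.shape[0]
    confident = np.argsort(closeness_to_keep)[: n // 4]     # quarter closest to S = 0
    followers = np.argsort(closeness_to_adopt)[: n // 4]    # quarter closest to S = 1
    average = np.setdiff1d(np.arange(n), np.union1d(confident, followers))
    return confident, followers, average
```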
Impact of the Difference Between Personal and Group’s Opinions on Individual Sensitivity to Social Influence.
Fig. 2A shows that, on average, S depends on the distance between personal and group estimates. Up to a threshold of a few orders of magnitude, there is a linear cusp relation between the mean sensitivity ⟨S⟩ and the distance |M − X_p|: the farther the social information M is from a subject’s personal estimate X_p, the more likely the subject is to trust the group, as ⟨S⟩ increases. Fig. 2B shows the origin of this correlation: as the social information gets farther from the personal opinion, the probability of keeping one’s opinion (S = 0) decreases, while the probability of compromising increases. Interestingly, the probability of the adopting behavior (S = 1) does not change with |M − X_p|. The same phenomena have been observed in the Japan experiment (SI Appendix, Fig. S8).
Fig. 2. (A) Mean sensitivity to social influence as a function of the distance between the personal estimate and the social information, in the experiment and in the model (red curve with open symbols). (B) Probabilities of the keep, compromise, and adopt behaviors as a function of this distance.
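The saturating cusp dependence of the mean sensitivity on the distance D = |M − X_p| can be summarized by a simple functional form; the following sketch uses placeholder parameter values, not the fitted ones reported in the article:

```python
import numpy as np

def mean_sensitivity(D, S0=0.3, slope=0.05, D_max=4.0):
    """Illustrative linear-cusp form: <S> grows linearly with the distance D = |M - X_p|
    up to a threshold D_max, beyond which it stays on a plateau.
    All parameter values here are placeholders, not the calibrated ones."""
    return S0 + slope * np.minimum(np.asarray(D, dtype=float), D_max)

print(mean_sensitivity([0.0, 2.0, 10.0]))   # [0.3, 0.4, 0.5]
```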
Model.
We now introduce an individual-based model to understand the respective effects of individual sensitivity to social influence and of information quality and quantity on the collective performance and accuracy observed at the group level. In the model, we simulate a sequence of successive estimates performed by the agents (not counting the virtual experts). A typical run of the model consists of the following steps for a given condition ρ (a minimal simulation sketch is given after the list).
i)
An initial condition is chosen at random according to the experimental ratios of initial conditions.
ii)
With probability ρ, the true value zero is introduced into the sequence, and with probability 1 − ρ, an agent plays.
iii)
The agent first determines its personal estimate X_p by drawing it from a Cauchy distribution restricted to the allowed range of estimates.
iv)
The agent receives, as social information, the average M of the τ previous final estimates X_s.
v)
The agent chooses its sensitivity to social influence S, consistent with the results of Figs. 1B and 2. In particular, S is drawn from a Gaussian distribution with probability 1 − P_keep − P_adopt, or takes the value S = 0 or S = 1 with probabilities P_keep and P_adopt, respectively. P_keep and the mean of the Gaussian have a linear cusp dependence on the distance |M − X_p|, while P_adopt is kept independent of it. For a given value of this distance, the average sensitivity ⟨S⟩ follows the linear cusp relation whose intercept and slope are extracted from Fig. 2A, and the remaining parameter is deduced from it. The threshold beyond which ⟨S⟩ saturates is determined consistently by requiring that ⟨S⟩ reaches the value of the plateau observed beyond this threshold in Fig. 2A. The values of all parameters are reported in SI Appendix, Table S3.
vi)
Once S is drawn, the final estimate X_s is given by Eq. 2. One then starts again from step ii for the next agent.
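A compact sketch of one such run, with placeholder parameter values standing in for the calibrated ones of SI Appendix, Table S3, and with a simplified initial condition (step i):

```python
import numpy as np

rng = np.random.default_rng(3)

def run_sequence(n_agents=1000, rho=0.43, tau=3, x_bounds=(-6.0, 6.0), sigma_p=1.0,
                 P_adopt=0.1, P_keep_base=0.4, slope=0.05, D_max=4.0, S0=0.3, gauss_sd=0.3):
    """Sequential estimation with a fraction rho of expert answers equal to the truth (X = 0).
    All parameter values are illustrative placeholders, not the calibrated ones."""
    # step i (simplified): the model draws the initial condition from the experimental ratios;
    # here the sequence is simply started at the truth.
    sequence = [0.0] * tau
    finals = []
    while len(finals) < n_agents:
        if rng.random() < rho:                    # step ii: an expert answer enters the sequence
            sequence.append(0.0)
            continue
        # step iii: personal estimate from a Cauchy distribution, restricted to the allowed range
        X_p = np.clip(sigma_p * rng.standard_cauchy(), *x_bounds)
        # step iv: social information = mean of the last tau final estimates
        M = np.mean(sequence[-tau:])
        # step v: sensitivity S, with keep/adopt peaks and a cusp-dependent mean
        D = abs(M - X_p)
        mean_S = S0 + slope * min(D, D_max)
        P_keep = max(P_keep_base - slope * min(D, D_max), 0.0)   # keeping less likely far from the group
        u = rng.random()
        if u < P_keep:
            S = 0.0
        elif u < P_keep + P_adopt:
            S = 1.0
        else:
            S = rng.normal(mean_S, gauss_sd)
        # step vi: final estimate (Eq. 2)
        X_s = (1.0 - S) * X_p + S * M
        sequence.append(X_s)
        finals.append(X_s)
    return np.array(finals)

X_s = run_sequence()
# center and width of the distribution of final estimates
print(np.median(X_s), 0.5 * np.subtract(*np.percentile(X_s, [75, 25])))
```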
Comparison Between Theoretical and Experimental Results.
For all graphs, we ran 100,000 simulations, so that the error bars on the model predictions are negligible. Fig. 1B shows that the distribution of sensitivities to social influence obtained in the model (red curve in Fig. 1B) is, by construction, similar to the experimental one. Also by construction of the model (step v above), the cusp dependence of the sensitivity to social influence on the distance between personal estimate and social information is well reproduced (Fig. 2A, red curve with open symbols). We now address several nontrivial predictions of the model.
Estimates after social influence.
Fig. 1A (all values of ρ aggregated) and SI Appendix, Fig. S3 (for each ρ) show that the distributions of estimates predicted by the model compare favorably with the experimental results (before and after social influence). Social influence leads to a sharpening of the distributions of estimates, and this effect increases as more information is provided to the group.
Impact of social information on collective performance.
Fig. 3 shows the collective performance (precisely defined above) and the width of the distribution of estimates for the different values of ρ and τ. The collective performance is zero when the distribution is centered on the true value, such that the closer it is to zero, the better. As expected, when ρ = 0, no significant improvement of the collective performance is observed. Then, as ρ increases, the center gets closer to the true value, and the width decreases accordingly, as was also observed in the experiments in Japan (SI Appendix, Fig. S9). Note that the experimental error bars (SI Appendix describes their computation) decrease after social influence, reflecting the narrowing of the distribution of estimates after social influence and the driving of people’s opinions by the virtual experts.
Fig. 3. Collective performance (center of the distribution of estimates) and width of the distribution, before and after social influence, for the different percentages ρ of experts: experimental results, full model predictions (open circles), and simplified analytical model (black lines).
The collective performance and the width of the estimate distribution predicted by the model (Fig. 3, open circles) are in good agreement with those observed in the experiment. The very small effect of τ, only reliably observed in the model in Fig. 3A, is explained in SI Appendix. As shown there, a simpler model, in which we neglect the dependence of S on the distance between personal estimate and social information (Fig. 2A), can be solved analytically. It leads to fair predictions (black lines in Fig. 3), although it tends to underestimate the improvement in collective performance and does not capture the reduction of the distribution width already observed at ρ = 0. This simpler model guided the design of our experiments, and its relative failure motivated us to investigate the phenomenon illustrated in Fig. 2 and included in the full model described above.
Impact of sensitivity to social influence on collective accuracy.
Fig. 4 (SI Appendix, Fig. S11 shows an alternative representation) shows the collective accuracy for the five categories of behavioral responses identified in Fig. 1B and for the whole group, before and after social information has been provided. Before social influence, keeping leads to the best accuracy, while the adopting and overreacting behaviors are associated with the worst accuracy. However, as more reliable information is indirectly provided by the experts, and in particular for the largest values of ρ, adopting and overreacting lead to the best accuracy after social influence (14, 19). The contradicting behavior is the only one for which accuracy deteriorates after social influence. Finally, compromising leads to a systematic improvement of accuracy as the percentage of experts increases (better than keeping for large enough ρ), very similar to that of the whole group. The collective accuracy for each behavioral category is again fairly well predicted by the model (we discuss below the disagreement between model predictions and experimental data in Fig. 4 for the adopters before social influence).
Fig. 4. Collective accuracy before and after social influence for the five behavioral categories and for the whole group, for the different percentages ρ of experts: experimental results and model predictions.
The sensitivity to social influence and the collective accuracy are strongly related to confidence (SI Appendix, Fig. S10). The more confident the subjects, the less they tend to follow the group and the better their accuracy, especially before social influence. This establishes the link between confident (18), informed (16), and successful (17) individuals: they are generally the same persons. However, individuals who are too confident (keeping behavior; arguably because they have some idea about the answer, hence their good accuracy before social influence) tend to discard others’ opinions. Although this might sometimes work—especially if no external information is provided (ρ = 0)—they lose the opportunity to benefit from valuable information learned by others. Meanwhile, adopting and overreacting subjects have poor confidence and accuracy before social influence, arguably because they do not know much about the questions. Note that the model, which does not include any notion of confidence or heterogeneous prior knowledge, overestimates the accuracy before social influence for the adopting behavior. However, even at ρ = 0, adopting subjects perform about as well as the other categories after social influence. In fact, if enough information is provided (large ρ), they are even able to reach an almost perfect collective accuracy. Similar results have been found in the Japan experiment, as shown in SI Appendix, Fig. S12. SI Appendix, Figs. S13–S15 show similar graphs for the collective performance in France and Japan.
Predicting the effect of incorrect information given to the human group by virtual agents.
We used the model to investigate how the quality and quantity of the information delivered to the group (i.e., the value V of the answer provided by the virtual agents and their percentage ρ) influence group performance. In our experiments, the group was provided with the (log transform of the) true value, V = 0 (the virtual agents were experts). We expect a deterioration of the collective performance and accuracy when V moves too far away from zero and when a greater amount of incorrect information is delivered to the group (by increasing ρ). The model predicts that the optimum collective accuracy is reached for a strictly positive V, whatever the value of ρ (SI Appendix, Fig. S16), as also predicted by our simple analytical model. Hence, incorrect information can be beneficial to the group: providing the group with overestimated values can counterbalance the human cognitive bias to underestimate quantities (24).
Discussion
Quantifying how social information affects individual estimations and opinions is a crucial step to understand and model the dynamics of collective choices or opinion formation (26). Here, we have measured and modeled the impact of social information at individual and collective scales in estimation tasks with low demonstrability. By controlling the quantity and quality of information delivered to the subjects, unbeknownst to them, we have been able to precisely quantify the impact of social influence on group performance. We also tested and confirmed the cross-cultural generality of our results by conducting experiments in France and Japan.
We showed and justified that, when individuals have poor prior knowledge about the questions, the distribution of their log-transformed estimates is close to a Cauchy distribution. The distribution of the sensitivity to social influence is bell-shaped (contradict, compromise, overreact), with two additional peaks exactly at S = 0 (keep) and S = 1 (adopt), which leads to the definition of robust social traits, as checked by further observing the subjects inclined to follow these behaviors. When subjects have little prior knowledge, we found that their sensitivity to social influence increases (linear cusp) with the difference between their estimate and that of the group, at variance with what was found in ref. 19 for questions about which subjects had high prior knowledge.
We used these experimental observations to build and calibrate a model that quantitatively predicts the sharpening of the distribution of individual estimates and the improvement in collective performance and accuracy as the amount of good information provided to the group increases. This model could be directly applied or straightforwardly adapted to similar situations where humans have to integrate information from other people or external sources.
We studied the impact of virtual experts on the group performance, a methodology allowing us to rigorously control the quantity (ρ) and quality (V) of the information provided to a group with little prior knowledge. These virtual experts can be seen either as an external source of information accessible to individuals (e.g., the Internet, social networks, media, etc.) or as a very cohesive (all sharing the same opinion V) and overconfident (all having S = 0) subgroup of the population, such as can happen with “groupthink” (27). When these experts provide reliable information to the group, a systematic improvement in collective performance and accuracy is obtained experimentally and is quantitatively reproduced by our model. Moreover, if the experts are not too numerous and the information that they give is slightly above the true value, the model predicts that social influence can help the group perform even better than when the truth is provided, as this incorrect information compensates for the human cognitive bias to underestimate quantities.
We also showed that the sensitivity to social influence is strongly related to confidence and accuracy: the most confident subjects are generally the best performers and tend to weight the opinion of others less. When the group has access to more reliable information, this behavior becomes detrimental to individual and collective accuracy, as too confident individuals lose the opportunity to benefit from this information.
Overall, we showed that individuals, even when they have very little prior knowledge about a quantity to estimate, are able to use information from their peers or from the environment to collectively improve the group performance as long as this information is not highly misleading. Ultimately, getting a better understanding of these influential processes opens perspectives to develop information systems aimed at enhancing cooperation and collaboration in human groups, thus helping crowds become smarter (28, 29).
Future research will have to focus on the experimental validation of our theoretical predictions when providing incorrect information to the group, with the intriguing possibility of actually improving its performance. It would also be interesting to study the impact on the group performance of the number of estimates given as social information (instead of only their mean) and of revealing the confidence and/or reputation of those who share these estimates.
Acknowledgments
We thank Ofer Tchernichovski for his valuable comments. This work was supported by Agence Nationale de la Recherche project 11-IDEX-0002-02–Transversalité–Multi-Disciplinary Study of Emergence Phenomena, a grant from the CNRS Mission for Interdisciplinarity (project SmartCrowd, AMI S2C3), and by Program Investissements d’Avenir under Agence Nationale de la Recherche program 11-IDEX-0002-02, reference ANR-10-LABX-0037-NEXT. B.J. was supported by a doctoral fellowship from the CNRS, and R.E. was supported by Marie Curie Core/Program Grant Funding Grant 655235–SmartMass. T.K. was supported by Japan Society for the Promotion of Science Grant-in-Aid for Scientific Research JP16H06324 and JP25118004.
Supporting Information
Appendix (PDF)
References
1
C Dellarocas, The digitization of word-of-mouth: Promise and challenges of online feedback mechanisms. Manag Sci 49, 1407–1424 (2003).
2
M Cha, H Haddadi, F Benevenuto, PK Gummadi, Measuring user influence in Twitter: The million follower fallacy. Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media (AAAI Press, Menlo Park, CA), pp. 10–17 (2010).
3
BJ Jansen, M Zhang, K Sobel, A Chowdury, Twitter power: Tweets as electronic word of mouth. J Am Soc Inform Sci Tech 60, 2169–2188 (2009).
4
B Gonçalves, N Perra Social Phenomena: From Data Analysis to Models (Springer, Heidelberg, 2015).
5
RM Bond, et al., A 61-million-person experiment in social influence and political mobilization. Nature 489, 295–298 (2012).
6
S Bikhchandani, D Hirshleifer, I Welch, A theory of fads, fashion, custom, and cultural change as informational cascades. J Polit Econ 100, 992–1026 (1992).
7
AV Banerjee, A simple model of herd behavior. Q J Econ 107, 797–817 (1992).
8
PR Laughlin Group Problem Solving (Princeton Univ Press, Princeton, 2011).
9
T Kameda, RS Tindale, JH Davis, Cognitions, preferences, and social sharedness: Past, present, and future directions in group decision making. Emerging Perspectives on Judgment and Decision Research, eds S Schneider, J Shanteau (Cambridge Univ Press, Cambridge, UK), pp. 215–240 (2009).
10
F Galton, Vox populi. Nature 75, 450–451 (1907).
11
J Surowiecki The Wisdom of Crowds (Anchor Books, New York, 2005).
12
J Lorenz, H Rauhut, F Schweitzer, D Helbing, How social influence can undermine the wisdom of crowd effect. Proc Natl Acad Sci USA 108, 9020–9025 (2011).
13
P Mavrodiev, CJ Tessone, F Schweitzer, Quantifying the effects of social influence. Sci Rep 3, 1360 (2013).
14
C Vande Kerckhove, et al., Modelling influence and opinion evolution in online collective behaviour. PLoS One 11, e0157685 (2016).
15
Y Luo, G Iyengar, V Venkatasubramanian, Social influence makes self-interested crowds smarter: An optimal control perspective. arXiv:1611.01558. (2016).
16
JJ Faria, JR Dyer, CR Tosh, J Krause, Leadership and social information use in human crowds. Anim Behav 79, 895–901 (2010).
17
AJ King, L Cheng, SD Starke, JP Myatt, Is the true ‘wisdom of the crowd’ to copy successful individuals? Biol Lett 8, 197–200 (2012).
18
G Madirolas, GG de Polavieja, Improving collective estimations using resistance to social influence. PLoS Comput Biol 11, e1004594 (2015).
19
I Yaniv, Receiving other people’s advice: Influence and benefit. Organ Behav Hum Decis Process 93, 1–13 (2004).
20
S Dehaene, V Izard, E Spelke, P Pica, Log or linear? Distinct intuitions of the number scale in Western and Amazonian indigene cultures. Science 320, 1217–1220 (2008).
21
M Moussaïd, JE Kämmer, PP Analytis, H Neth, Social influence and the collective dynamics of opinion formation. PLoS One 8, e78433 (2013).
22
C Harries, I Yaniv, N Harvey, Combining advice: The weight of a dissenting opinion in the consensus. J Behav Decis Making 17, 333–348 (2004).
23
A Chacoma, DH Zanette, Opinion formation by social influence: From experiments to modeling. PLoS One 10, e0140406 (2015).
24
T Indow, M Ida, Scaling of dot numerosity. Percept Psychophys 22, 265–276 (1977).
25
JB Soll, RP Larrick, Strategies for revising judgment: How (and how well) people use others’ opinions. J Exp Psychol Learn Mem Cogn 35, 780–805 (2009).
26
P Ball Why Society Is a Complex Matter: Meeting Twenty-First Century Challenges with a New Kind of Science (Springer, Berlin, 2012).
27
IL Janis Groupthink: Psychological Studies of Policy Decisions and Fiascoes (Houghton Mifflin, Boston, 1982).
28
D Helbing, Globally networked risks and how to respond. Nature 497, 51–59 (2013).
29
Inter-Cooperative Collective Intelligence: Techniques and Applications, Studies in Computational Intelligence, eds F Xhafa, N Bessis (Springer, Berlin, 2014).
Copyright
Copyright © 2017 the Author(s). Published by PNAS. This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).
Submission history
Published online: November 8, 2017
Published in issue: November 21, 2017
Notes
This article is a PNAS Direct Submission.
Competing Interests
The authors declare no conflict of interest.