Fighting misinformation on social media using crowdsourced judgments of news source quality

Edited by Susan T. Fiske, Princeton University, Princeton, NJ, and approved December 17, 2018 (received for review April 19, 2018)
January 28, 2019
116 (7) 2521–2526

Significance

Many people consume news via social media. It is therefore desirable to reduce social media users’ exposure to low-quality news content. One possible intervention is for social media ranking algorithms to show relatively less content from sources that users deem to be untrustworthy. But are laypeople’s judgments reliable indicators of quality, or are they corrupted by either partisan bias or lack of information? Perhaps surprisingly, we find that laypeople—on average—are quite good at distinguishing between lower- and higher-quality sources. These results indicate that incorporating the trust ratings of laypeople into social media ranking algorithms may prove an effective intervention against misinformation, fake news, and news content with heavy political bias.
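The intervention described above can be made concrete with a small illustration. The sketch below is purely hypothetical (it is not any platform's actual ranking algorithm, and the source names, scores, and weighting scheme are invented): it shows one simple way a feed item's engagement-based score could be multiplied by a crowdsourced trust weight for its source, so that content from less trusted outlets is shown relatively less often.

```python
# Illustrative sketch only (not a real platform algorithm): down-weight feed
# items by a crowdsourced trust score for their source. All names and numbers
# below are hypothetical.

# Crowdsourced trust ratings per news source, rescaled to [0, 1].
source_trust = {
    "mainstream-example.com": 0.80,
    "hyperpartisan-example.com": 0.35,
    "fakenews-example.com": 0.10,
}

def rerank(items, trust, default_trust=0.5):
    """Multiply each item's engagement-based score by its source's trust weight,
    so content from less trusted sources ranks relatively lower in the feed."""
    return sorted(
        items,
        key=lambda item: item["score"] * trust.get(item["source"], default_trust),
        reverse=True,
    )

feed = [
    {"id": 1, "source": "fakenews-example.com", "score": 0.9},
    {"id": 2, "source": "mainstream-example.com", "score": 0.7},
]
print([item["id"] for item in rerank(feed, source_trust)])  # -> [2, 1]
```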

Abstract

Reducing the spread of misinformation, especially on social media, is a major challenge. We investigate one potential approach: having social media platform algorithms preferentially display content from news sources that users rate as trustworthy. To do so, we ask whether crowdsourced trust ratings can effectively differentiate more versus less reliable sources. We ran two preregistered experiments (n = 1,010 from Mechanical Turk and n = 970 from Lucid) where individuals rated familiarity with, and trust in, 60 news sources from three categories: (i) mainstream media outlets, (ii) hyperpartisan websites, and (iii) websites that produce blatantly false content (“fake news”). Despite substantial partisan differences, we find that laypeople across the political spectrum rated mainstream sources as far more trustworthy than either hyperpartisan or fake news sources. Although this difference was larger for Democrats than Republicans—mostly due to distrust of mainstream sources by Republicans—every mainstream source (with one exception) was rated as more trustworthy than every hyperpartisan or fake news source across both studies when equally weighting ratings of Democrats and Republicans. Furthermore, politically balanced layperson ratings were strongly correlated (r = 0.90) with ratings provided by professional fact-checkers. We also found that, particularly among liberals, individuals higher in cognitive reflection were better able to discern between low- and high-quality sources. Finally, we found that excluding ratings from participants who were not familiar with a given news source dramatically reduced the effectiveness of the crowd. Our findings indicate that having algorithms up-rank content from trusted media outlets may be a promising approach for fighting the spread of misinformation on social media.
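As a rough illustration of the aggregation described in the abstract, the sketch below (not the authors' analysis code; all ratings are invented) computes a politically balanced trust score for each source by averaging Democrats' and Republicans' mean ratings with equal weight, and then correlates those scores with hypothetical fact-checker ratings using a Pearson correlation.

```python
# Minimal sketch of "politically balanced" crowd ratings, assuming made-up data.
from statistics import mean
from math import sqrt

# (source, party, trust rating on a 5-point scale) -- hypothetical responses
ratings = [
    ("mainstream-example.com", "D", 4.2), ("mainstream-example.com", "R", 3.1),
    ("hyperpartisan-example.com", "D", 1.8), ("hyperpartisan-example.com", "R", 2.4),
    ("fakenews-example.com", "D", 1.2), ("fakenews-example.com", "R", 1.5),
]
factchecker = {  # hypothetical fact-checker trust ratings
    "mainstream-example.com": 4.5,
    "hyperpartisan-example.com": 2.0,
    "fakenews-example.com": 1.1,
}

def balanced_trust(ratings, source):
    """Equal-weight average of Democrats' and Republicans' mean ratings."""
    by_party = lambda p: [r for s, party, r in ratings if s == source and party == p]
    return 0.5 * mean(by_party("D")) + 0.5 * mean(by_party("R"))

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))

sources = sorted(factchecker)
crowd = [balanced_trust(ratings, s) for s in sources]
experts = [factchecker[s] for s in sources]
print(round(pearson(crowd, experts), 2))  # crowd-expert correlation on toy data
```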

Acknowledgments

We thank Mohsen Mosleh for help determining the Twitter reach of various domains, Alexios Mantzarlis for assistance in recruiting professional fact-checkers for our survey, Paul Resnick for helpful comments, SJ Language Services for copyediting, and Jason Schwartz for suggesting that we investigate this issue. We acknowledge funding from the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the Social Sciences and Humanities Research Council of Canada, and Templeton World Charity Foundation Grant TWCF0209.

Supporting Information

Appendix (PDF)

Information & Authors

Information

Published in

Proceedings of the National Academy of Sciences
Vol. 116 | No. 7
February 12, 2019
PubMed: 30692252

Submission history

Published online: January 28, 2019
Published in issue: February 12, 2019

Keywords

news media, social media, media trust, misinformation, fake news

Notes

This article is a PNAS Direct Submission.

Authors

Affiliations

Gordon Pennycook1
Hill/Levene Schools of Business, University of Regina, Regina, SK S4S 0A2, Canada
David G. Rand1 [email protected]
Sloan School, Massachusetts Institute of Technology, Cambridge, MA 02138
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02138

Notes

1
To whom correspondence may be addressed. Email: [email protected] or [email protected].
Author contributions: G.P. and D.G.R. designed research; G.P. performed research; G.P. and D.G.R. analyzed data; and G.P. and D.G.R. wrote the paper.

Competing Interests

The authors declare no conflict of interest.
