Brief Report

Timing matters when correcting fake news

Nadia M. Brashier, Gordon Pennycook, Adam J. Berinsky, and David G. Rand
PNAS February 2, 2021 118 (5) e2020043118; https://doi.org/10.1073/pnas.2020043118
Nadia M. Brashier
a Department of Psychology, Harvard University, Cambridge, MA 02138
For correspondence: nbrashier@fas.harvard.edu

Gordon Pennycook
b Paul J. Hill School of Business, University of Regina, Regina, SK S4S 0A2, Canada
c Kenneth Levene Graduate School of Business, University of Regina, Regina, SK S4S 0A2, Canada
d Department of Psychology, University of Regina, Regina, SK S4S 0A2, Canada

Adam J. Berinsky
e Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA 02139

David G. Rand
f Sloan School, Massachusetts Institute of Technology, Cambridge, MA 02139
g Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139

Edited by Margaret Levi, Stanford University, Stanford, CA, and approved December 16, 2020 (received for review September 30, 2020)


Abstract

Countering misinformation can reduce belief in the moment, but corrective messages quickly fade from memory. We tested whether the longer-term impact of fact-checks depends on when people receive them. In two experiments (total N = 2,683), participants read true and false headlines taken from social media. In the treatment conditions, “true” and “false” tags appeared before, during, or after participants read each headline. Participants in a control condition received no information about veracity. One week later, participants in all conditions rated the same headlines’ accuracy. Providing fact-checks after headlines (debunking) improved subsequent truth discernment more than providing the same information during (labeling) or before (prebunking) exposure. This finding informs the cognitive science of belief revision and has practical implications for social media platform designers.

  • fake news
  • misinformation
  • correction
  • fact-checking
  • memory

Concern about fake news escalated during the run-up to the 2016 US presidential election, when an estimated 44% of Americans visited untrustworthy websites (1). Faced with mounting public pressure, social media companies enlisted professional fact-checkers to flag misleading content. However, misconceptions often persist after people receive corrective messages (continued influence effect; ref. 2). Detailed corrections increase the likelihood of knowledge revision (3), but social media platforms prioritize user experience and typically attach simple tags (e.g., “disputed”) to posts. Can we optimize the longer-term impact of these brief fact-checks by presenting them at the right time?

There are arguments for placing fact-checks before, during, or after disputed information. Presenting fact-checks before headlines might confer psychological resistance. Inoculating people to weakened arguments makes them less vulnerable to persuasion (4). As examples, reading about “fake experts” protects people from climate science myths (5), and playing a game involving common disinformation tactics (e.g., faking an official Twitter account) helps people detect fake news (6). Prebunking could direct attention to a headline’s questionable features (e.g., sensational details). On the other hand, people might ignore the content entirely and miss an opportunity to encode it as “false.”

Alternatively, reading fact-checks alongside news could facilitate knowledge revision. Encoding retractions requires building a coherent mental model (7), which is easiest when misinformation and its correction are coactive (8). This mechanism explains why corrections rarely reinforce the original false belief (i.e., do not “backfire”) (9)—it is actually best to restate a myth when retracting it (10, 11). Thus, labeling a headline as “true” or “false” could increase salience and updating.

Finally, providing fact-checks after people process news could act as feedback, boosting long-term retention of the tags. Corrective feedback facilitates learning (12), especially when errors are made with high confidence (13). Prediction error enhances learning of new facts that violate expectations (14). Surprise also occurs when low-confidence guesses turn out to be right, improving subsequent memory (15). Debunking after readers form initial judgments about headlines could boost learning, even if they did not make an error.

Despite the extensive previous work on corrections, no study has directly compared the efficacy of equivalent corrections delivered before, during, or after exposure. In two nearly identical experiments (total N = 2,683), we tested whether the timing of corrections to fake news impacts discernment 1 wk later. Participants were exposed to 18 true and 18 false news headlines taken from social media (Fig. 1); they saw “true” and “false” tags immediately before (prebunking), during (labeling), or immediately after (debunking) reading and rating the accuracy of each headline. In a control condition, participants received no veracity information. One week later, they rated the accuracy of the 36 headlines again. To maximize power, we analyzed the final accuracy ratings from the two experiments together using linear regression with robust SEs clustered on subject and headline. We included dummies for each treatment condition, headline veracity (0 = false, 1 = true), and study. We also included the interaction between veracity and the treatment dummies, and the interaction between veracity and the study dummy.
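The analysis just described maps onto a standard dummy-coded interaction regression. The snippet below is a minimal sketch of that specification in Python with statsmodels, not the authors' code; the column names and file name are hypothetical, and for simplicity it clusters standard errors on subject only, whereas the paper clusters on both subject and headline.

# A minimal sketch of the regression described above (not the authors' code).
# Assumes a hypothetical long-format table with columns:
#   rating      : week-later accuracy rating (1-4)
#   condition   : "control", "before", "during", or "after"
#   veracity    : 0 = false headline, 1 = true headline
#   study       : 0 = Experiment 1, 1 = Experiment 2
#   subject_id  : participant identifier (cluster variable)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ratings_long.csv")  # hypothetical file name

# Treatment dummies (control as the reference level), veracity and study,
# plus veracity x treatment and veracity x study interactions.
model = smf.ols(
    "rating ~ C(condition, Treatment(reference='control')) * veracity"
    " + C(study) * veracity",
    data=df,
)

# The paper uses robust SEs clustered on subject and headline; this sketch
# clusters on subject only for simplicity.
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["subject_id"]})
print(fit.summary())

With this coding, each treatment coefficient estimates that condition's effect on belief in false headlines relative to control, and its interaction with veracity captures the additional effect for true headlines.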

Fig. 1. Sample true and false headlines, as shown in the before, during, and after conditions. Fact-checks appeared on separate screens in the before and after conditions.

Results

Fig. 2A shows the distribution of accuracy ratings for false headlines after 1 wk. Presenting corrections after, b = −0.123, F(1, 96587) = 16.34, P < 0.001, Pstan < 0.001, and during, b = −0.081, F(1, 96587) = 7.45, P = 0.006, Pstan = 0.033, exposure to each headline decreased belief in false headlines relative to the control condition (to a similar extent, F(1, 96587) = 2.03, P = 0.154). Presenting corrections before exposure, conversely, did not significantly reduce belief in false headlines, b = 0.042, F(1, 96587) = 1.74, P = 0.188, and was less effective than presenting corrections after, F(1, 96587) = 25.39, P < 0.001, Pstan < 0.001, or during, F(1, 96587) = 15.11, P < 0.001, Pstan < 0.001, reading.

Fig. 2. Distribution of accuracy ratings for false (A) and true (B) headlines and discernment (C) 1 wk after exposure, by treatment. Error bars indicate 95% CIs.

Fig. 2B shows the distribution of accuracy ratings for true headlines after 1 wk. While all three treatments significantly increased belief in true headlines relative to the control condition (F(1, 96587) > 6.75, P < 0.01, Pstan < 0.05 for all), presenting corrections after exposure was significantly more effective than during, F(1, 96587) = 65.53, P < 0.001, Pstan < 0.001, or before, F(1, 96587) = 47.02, P < 0.001, Pstan < 0.001, exposure.

Fig. 2C shows that this led to significantly greater truth discernment (the difference in belief between true and false headlines) when corrections appeared after compared to during, F(1, 96587) = 37.74, P < 0.001, Pstan < 0.001, or before, F(1, 96587) = 65.08, P < 0.001, Pstan < 0.001, exposure (and during was marginally more effective than before, F(1, 96587) = 6.33, P = 0.012, Pstan = 0.062). Although before was more effective in Experiment 2 than in Experiment 1, after was still more effective than during or before exposure when considering each experiment separately (P < 0.001, Pstan < 0.01 for all comparisons).
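As a concrete illustration of the discernment measure, the snippet below computes it per participant as the mean week-later rating for true headlines minus the mean rating for false headlines. It is a minimal sketch using the same hypothetical column names as above, not the authors' analysis code.

# Hypothetical sketch of the discernment measure (not the authors' code).
import pandas as pd

df = pd.read_csv("ratings_long.csv")  # same hypothetical table as above

# Mean rating per participant, condition, and headline veracity.
mean_by_veracity = (
    df.groupby(["subject_id", "condition", "veracity"])["rating"]
      .mean()
      .unstack("veracity")          # columns: 0 (false), 1 (true)
)

# Discernment = mean rating for true headlines minus mean rating for false ones.
mean_by_veracity["discernment"] = mean_by_veracity[1] - mean_by_veracity[0]

# Average discernment per condition, mirroring the comparison in Fig. 2C.
print(mean_by_veracity.groupby("condition")["discernment"].mean())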

Interestingly, neither analytic thinking, as measured by the Cognitive Reflection Test, nor political knowledge moderated the treatment effects (Ps > 0.421), despite both measures being associated with better baseline discernment (P < 0.001, Pstan < 0.001 for both). Lastly, providing corrections after reading may have been less effective for headlines that aligned with participants’ partisanship than for headlines that did not, F(1, 96587) = 5.06, P = 0.025, Pstan = 0.129, while the effectiveness of during and before did not differ based on partisan alignment (Ps > 0.30). Nonetheless, after was more effective than before or during exposure even for politically aligned headlines (P < 0.001, Pstan < 0.001, for all comparisons).

For regression tables and separate analyses of each experiment, see Open Science Framework (OSF, https://osf.io/bcq6d/).

Discussion

We found consistent evidence that the timing of fact-checks matters: “True” and “false” tags that appeared immediately after headlines (debunking) reduced misclassification of headlines 1 wk later by 25.3%, compared to an 8.6% reduction when tags appeared during exposure (labeling), and a 6.6% increase (Experiment 1) or 5.7% reduction (Experiment 2) when tags appeared beforehand (prebunking).

These results provide insight into the continued influence effect. If misinformation persists because people refuse to “update” beliefs initially (16), prebunking should outperform debunking; readers know from the outset that news is false, so no updating is needed. We found the opposite pattern, which instead supports the concurrent storage hypothesis that people retain both misinformation and its correction (17); but over time, the correction fades from memory (e.g., ref. 18). Thus, the key challenge is making corrections memorable. Debunking was more effective than labeling, emphasizing the power of feedback in boosting memory.

Our implementation models real-time correction by social media platforms. However, delivering debunks farther in time from exposure may be beneficial, as delayed feedback can be more effective than immediate feedback (19). Similarly, while our stimulus set was balanced, true headlines far outnumber false headlines on social media. Debunking may improve discernment even more when “false” tags are infrequent, as they would be more surprising and thus more memorable (15). On the other hand, mindlessly scrolling, rather than actively assessing accuracy at exposure, may lead to weaker initial impressions to provide feedback on, thereby reducing the advantage of debunking over labeling.

Ideally, people would not see misinformation in the first place, since even a single exposure to a fake headline makes it seem truer (20). Moreover, professional fact-checkers only flag a small fraction of false content, but tagging some stories as “false” might lead readers to assume that unlabeled stories are accurate (implied truth effect; ref. 21). These practical limitations notwithstanding, our results emphasize the surprising value of debunking fake news after exposure, with important implications for the fight against misinformation.

Materials and Methods

We selected 18 true headlines from mainstream news outlets and 18 false headlines that Snopes.com, a third-party fact-checking website, identified as fabricated (Fig. 1). The Committee on the Use of Human Subjects at the Massachusetts Institute of Technology deemed these experiments exempt. After informed consent, participants evaluated the accuracy of these 36 headlines on a scale from 1 (not at all accurate) to 4 (very accurate). In the treatment conditions, participants saw “true” and “false” tags immediately before, during, or immediately after reading. In the control condition, participants rated the headlines alone, with no tags. One week later, all participants judged the same 36 headlines for accuracy, this time with no veracity information. See SI Appendix for our full methods and preregistrations.

Data Availability.

Our preregistrations, materials, and anonymized behavioral data are available on OSF (https://osf.io/nuh4q/). Regression tables and separate analyses of each experiment are also on OSF (https://osf.io/bcq6d/).

Acknowledgments

We thank Antonio Arechar for assistance with data collection. We gratefully acknowledge funding from the NSF (N.M.B.), Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation (D.G.R. and G.P.), William and Flora Hewlett Foundation (D.G.R. and G.P.), Reset Project of Luminate (D.G.R. and G.P.), Social Sciences and Humanities Research Council of Canada (G.P.), and Google (D.G.R., A.J.B., and G.P.).

Footnotes

  • To whom correspondence may be addressed. Email: nbrashier@fas.harvard.edu.
  • Author contributions: N.M.B., G.P., A.J.B., and D.G.R. designed research; N.M.B. performed research; N.M.B. and D.G.R. analyzed data; N.M.B. wrote the paper; and G.P., A.J.B., and D.G.R. provided critical revisions.

  • The authors declare no competing interest.

  • This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2020043118/-/DCSupplemental.

  • Copyright © 2021 the Author(s). Published by PNAS.

This open access article is distributed under Creative Commons Attribution License 4.0 (CC BY).

References

  1. A. M. Guess, B. Nyhan, J. Reifler, Exposure to untrustworthy websites in the 2016 US election. Nat. Hum. Behav. 4, 472–480 (2020).
  2. M. S. Chan, C. R. Jones, K. Hall Jamieson, D. Albarracín, Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychol. Sci. 28, 1531–1546 (2017).
  3. U. K. H. Ecker, Z. O’Reilly, J. S. Reid, E. P. Chang, The effectiveness of short-format refutational fact-checks. Br. J. Psychol. 111, 36–54 (2020).
  4. J. A. Banas, S. A. Rains, A meta-analysis of research on inoculation theory. Commun. Monogr. 77, 281–311 (2010).
  5. J. Cook, S. Lewandowsky, U. K. H. Ecker, Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS One 12, e0175799 (2017).
  6. J. Roozenbeek, S. van der Linden, T. Nygren, Prebunking interventions based on “inoculation” theory can reduce susceptibility to misinformation across cultures. Harvard Kennedy School Misinformation Rev., doi:10.37016//mr-2020-008 (2020).
  7. A. Gordon, J. C. W. Brooks, S. Quadflieg, U. K. H. Ecker, S. Lewandowsky, Exploring the neural substrates of misinformation processing. Neuropsychologia 106, 216–224 (2017).
  8. P. Kendeou, R. Butterfuss, J. Kim, M. Van Boekel, Knowledge revision through the lenses of the three-pronged approach. Mem. Cognit. 47, 33–46 (2019).
  9. T. Wood, E. Porter, The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Polit. Behav. 41, 135–163 (2019).
  10. U. K. H. Ecker, J. L. Hogan, S. Lewandowsky, Reminders and repetition of misinformation: Helping or hindering its retraction? J. Appl. Res. Mem. Cogn. 6, 185–192 (2017).
  11. C. N. Wahlheim, T. R. Alexander, C. D. Peske, Reminders of everyday misinformation can enhance memory for and beliefs in corrections of those statements in the short term. Psychol. Sci. 31, 1325–1339 (2020).
  12. J. Hattie, H. Timperley, The power of feedback. Rev. Educ. Res. 77, 81–112 (2007).
  13. B. Butterfield, J. Metcalfe, Errors committed with high confidence are hypercorrected. J. Exp. Psychol. Learn. Mem. Cogn. 27, 1491–1494 (2001).
  14. A. Pine, N. Sadeh, A. Ben-Yakov, Y. Dudai, A. Mendelsohn, Knowledge acquisition is governed by striatal prediction errors. Nat. Commun. 9, 1673 (2018).
  15. L. K. Fazio, E. J. Marsh, Surprising feedback improves later memory. Psychon. Bull. Rev. 16, 88–92 (2009).
  16. A. E. O’Rear, G. A. Radvansky, Failure to accept retractions: A contribution to the continued influence effect. Mem. Cognit. 48, 127–144 (2020).
  17. A. Gordon, S. Quadflieg, J. C. W. Brooks, U. K. H. Ecker, S. Lewandowsky, Keeping track of ‘alternative facts’: The neural correlates of processing misinformation corrections. Neuroimage 193, 46–56 (2019).
  18. B. Swire, U. K. H. Ecker, S. Lewandowsky, The role of familiarity in correcting inaccurate information. J. Exp. Psychol. Learn. Mem. Cogn. 43, 1948–1961 (2017).
  19. A. C. Butler, J. D. Karpicke, H. L. Roediger III, The effect of type and timing of feedback on learning from multiple-choice tests. J. Exp. Psychol. Appl. 13, 273–281 (2007).
  20. G. Pennycook, T. D. Cannon, D. G. Rand, Prior exposure increases perceived accuracy of fake news. J. Exp. Psychol. Gen. 147, 1865–1880 (2018).
  21. G. Pennycook, A. Bear, E. T. Collins, D. G. Rand, The implied truth effect: Attaching warnings to a subset of fake news headlines increases perceived accuracy of headlines without warnings. Manage. Sci. 66, 4944–4957 (2020).