Placebo effects in cognitive training

Cyrus K. Foroughi, Samuel S. Monfort, Martin Paczynski, Patrick E. McKnight, and P. M. Greenwood
PNAS published ahead of print June 20, 2016 https://doi.org/10.1073/pnas.1601243113
Cyrus K. Foroughi
aDepartment of Psychology, George Mason University, Fairfax, VA 22030
  • Find this author on Google Scholar
  • Find this author on PubMed
  • Search for this author on this site
  • ORCID record for Cyrus K. Foroughi
  • For correspondence: cyrus.foroughi@gmail.com
Samuel S. Monfort
aDepartment of Psychology, George Mason University, Fairfax, VA 22030
  • Find this author on Google Scholar
  • Find this author on PubMed
  • Search for this author on this site
Martin Paczynski
aDepartment of Psychology, George Mason University, Fairfax, VA 22030
  • Find this author on Google Scholar
  • Find this author on PubMed
  • Search for this author on this site
Patrick E. McKnight
aDepartment of Psychology, George Mason University, Fairfax, VA 22030
  • Find this author on Google Scholar
  • Find this author on PubMed
  • Search for this author on this site
P. M. Greenwood
aDepartment of Psychology, George Mason University, Fairfax, VA 22030
  • Find this author on Google Scholar
  • Find this author on PubMed
  • Search for this author on this site
  1. Edited by Michael S. Gazzaniga, University of California, Santa Barbara, CA, and approved May 17, 2016 (received for review January 22, 2016)


Significance

Placebo effects pose problems for some intervention studies, particularly those with no clearly identified mechanism. Cognitive training falls into that category, and yet the role of placebos in cognitive interventions has not yet been critically evaluated. Here, we show clear evidence of placebo effects after a brief cognitive training routine that led to significant fluid intelligence gains. Our goal is to emphasize the importance of ruling out alternative explanations before attributing the effect to interventions. Based on our findings, we recommend that researchers account for placebo effects before claiming treatment effects.

Abstract

Although a large body of research shows that general cognitive ability is heritable and stable in young adults, there is recent evidence that fluid intelligence can be heightened with cognitive training. Many researchers, however, have questioned the methodology of the cognitive-training studies reporting improvements in fluid intelligence: specifically, the role of placebo effects. We designed a procedure to intentionally induce a placebo effect via overt recruitment in an effort to evaluate the role of placebo effects in fluid intelligence gains from cognitive training. Individuals who self-selected into the placebo group by responding to a suggestive flyer showed improvements after a single, 1-h session of cognitive training that equates to a 5- to 10-point increase on a standard IQ test. Controls responding to a nonsuggestive flyer showed no improvement. These findings provide an alternative explanation for effects observed in the cognitive-training literature and the brain-training industry, revealing the need to account for confounds in future research.

  • placebo effects
  • cognitive training
  • brain training
  • fluid intelligence

What’s more, working memory is directly related to intelligence—the more you train, the smarter you can be.

NeuroNation (www.neuronation.com/, May 8, 2016)

The above quotation, like many others from the billion-dollar brain-training industry (1), suggests that cognitive training can make you smarter. However, the desire to become smarter may blind us to the role of placebo effects. Placebo effects are well known in the context of drug and surgical interventions (2, 3), but the specter of a placebo may arise in any intervention when the desired outcome is known to the participant—an intervention like cognitive training. Although a large body of research shows that general cognitive ability, g, is heritable (4, 5) and stable in young adults (6), recent research stands in contrast to this, indicating that intelligence can be heightened by cognitive training (7–12). General cognitive ability and IQ are related to many important life outcomes, including academic success (13, 14), job performance (15), health (16, 17), morbidity (18), mortality (18, 19), income (20, 21), and crime (13). In addition, the growing population of older people seeks ways to stave off devastating cognitive decline (22). Thus, becoming smarter or maintaining cognitive abilities via cognitive training is a powerful lure, raising important questions about the role of placebo effects in training studies.

The question of whether intelligence can be increased through training has generated a lively scientific debate. Recent research claims that it is possible to improve fluid intelligence (Gf: a core component of general cognitive ability, g) by means of working memory training (7–12, 23, 24); even meta-analyses support these claims (25, 26), concluding that improvements from cognitive training equate to an increase “…of 3–4 points on a standardized IQ test” (ref. 25; but cf. ref. 27). However, researchers have yet to identify, test, and confirm a clear mechanism underlying fluid intelligence gains after cognitive training (28). One potential mechanism that has yet to be tested is that the observed effects are partially due to positive expectancy or placebo effects.

Researchers now recognize that placebo effects may potentially confound cognitive-training [i.e., “brain training” (29)] outcomes and may underlie some of the posttraining fluid intelligence gains (24, 27, 29–32). Specifically, it has been argued that “overt” recruitment methods in which the expected benefits of training are stated (or implied) may lead to a sampling bias in the form of self-selection, such that individuals who expect positive results will be overrepresented in any sample of participants (29, 33). If an individual volunteers to participate in a study entitled “Brain Training and Cognitive Enhancement” because he or she thinks the training will be effective, any effect of the intervention may be partially or fully explained by participant expectations.

Expectations regarding the efficacy of cognitive training may be rooted in beliefs regarding the malleability of intelligence (34). Dweck’s (34) work showed that people tend to hold strong implicit beliefs regarding whether or not intelligence is malleable and that these beliefs predict a number of learning and academic outcomes. Consistent with that work, there is evidence that individuals with stronger beliefs in the malleability of intelligence have greater improvements in fluid intelligence tasks after working-memory training (10). If individuals who believe that intelligence is malleable are overrepresented in a sample, the apparent effect of training may be related to the belief of malleability, rather than to the training itself.

The present study was motivated by concerns about overt recruitment and self-selection bias (29, 33), as well as our own observation that few published articles on cognitive training provide details regarding participant recruitment. In fact, of the primary studies included in the meta-analysis of Au et al. (25), only two provided sufficient detail to determine whether participants were recruited overtly [e.g., “sign up for a brain training study” (10)] or covertly [e.g., “did not inform subjects that they were participating in a training study” (24)]. (We were able to assess 18 of the 20 studies.) We later emailed the corresponding authors from all of the studies in the Au et al. (25) meta-analysis for more detailed recruitment information. (This step was done at the suggestion of a reviewer and occurred after data collection was complete. We chose to place this information here instead of in the discussion to accurately portray the current recruitment standards within the field.) All but one author responded. We determined that 17 (of 19) studies used overt recruitment methods that could have introduced a self-selection bias. Specifically, 17 studies explicitly mentioned “cognitive” or “brain” training. Of those 17, we found that 11 studies further suggested the potential for improvement or enhancement. Only two studies mentioned neither (Table S1). A comparison of effect sizes listed in the Au et al. (25) meta-analysis by these three methods of recruitment (i.e., overt, overt and suggestive, and covert) lends further credence to the possibility of a confounded placebo effect: for all of the studies that overtly recruited, Hedges' g = 0.27; for all of the studies that overtly recruited and suggested improvement, Hedges' g = 0.28; and for the studies that covertly recruited, Hedges' g = 0.11. Lastly, we searched the internet (via Google) for the terms “participate in a brain training study” and “brain training participate.” The top 10 results for both searches revealed six separate laboratories that are actively and overtly recruiting individuals to participate in either a “brain training study” or a “cognitive training study.” Taken together, these findings provide clear evidence that suggestive recruitment methods are common and that such recruitment may contribute to the positive outcomes reported in the cognitive-training literature. We therefore hypothesized that overt and suggestive recruitment would be sufficient to induce positive posttraining outcomes.
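For readers who want to reproduce this kind of effect-size comparison, the following is a minimal sketch of how Hedges' g can be computed from group summary statistics. The input values at the bottom are hypothetical placeholders, not data from the Au et al. (25) meta-analysis.

```python
import numpy as np

def hedges_g(mean_1, mean_2, sd_1, sd_2, n_1, n_2):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    # Pooled standard deviation across the two groups
    sd_pooled = np.sqrt(((n_1 - 1) * sd_1**2 + (n_2 - 1) * sd_2**2)
                        / (n_1 + n_2 - 2))
    d = (mean_1 - mean_2) / sd_pooled
    # Hedges' small-sample correction factor J (approximation)
    j = 1 - 3 / (4 * (n_1 + n_2) - 9)
    return d * j

# Hypothetical gain scores for a treatment vs. control comparison
print(round(hedges_g(2.1, 1.0, 4.0, 4.2, 25, 25), 2))
```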

Table S1. Responses from authors when queried about recruitment methods

Materials and Methods

We designed a procedure to intentionally induce a placebo effect via overt recruitment. Our recruitment targeted two populations of participants using different advertisements varying in the degree to which they evoked an expectation of cognitive improvement (Fig. 1). Once participants self-selected into the two groups, they completed two pretraining fluid intelligence tests followed by 1 h of cognitive training and then completed two posttraining fluid intelligence tests on the following day. Two individual difference metrics regarding beliefs about cognition and intelligence were also collected as potential moderators. The researchers who interacted with participants were blind to the goal of the experiment and to the experimental condition. Aside from their means of recruitment, all participants completed identical cognitive-training experiments. All participants read and signed an informed consent form before beginning the experiment. The George Mason University Institutional Review Board approved this research.

Fig. 1. Recruitment flyers for placebo (Left) and control (Right) groups.

We recruited the placebo group (n = 25) with flyers overtly advertising a study for brain training and cognitive enhancement (Fig. 1). The text “Numerous studies have shown working memory training can increase fluid intelligence” was clearly visible on the flyer. We recruited the control group (n = 25) with a visually similar flyer containing generic content that did not mention brain training or cognitive enhancement. We determined the sample sizes for both groups based upon two a priori criteria: (i) Previous, significant training studies had sample sizes of 25 or fewer (7, 8); and (ii) statistical power analyses (power ≥ 0.7) on between-group designs dictated a sample size of 25 per group for a moderate to large effect size (d ≥ 0.7). Our rationale for the first criterion was that we were trying to replicate previous training studies, but with the additional manipulation of a placebo that had been omitted in those studies. The second criterion simply allowed us a good chance to find a reasonably large and important effect with the sample size we selected. In sum, we felt that the sample size allowed for a good replication of prior studies, but restricted us to finding only worthwhile results to report. The final sample of participants consisted of 19 males and 31 females, with an average age of 21.5 y (SD = 2.3). The groups (n = 50; 25 for each condition) did not differ by age [t(48) = 0.18, P = 0.856] or by gender composition [χ2(1) = 0.76, P = 0.382].
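As an illustration, the second sample-size criterion can be reproduced with a standard a priori power analysis. The α = 0.05, two-sided settings below are our assumptions, since the text states only the power (≥0.7) and effect-size (d ≥ 0.7) thresholds.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group n of an independent-samples t test
# with d = 0.7 and power = 0.7 (alpha = 0.05 assumed).
n_per_group = TTestIndPower().solve_power(
    effect_size=0.7, power=0.7, alpha=0.05,
    ratio=1.0, alternative='two-sided')
print(round(n_per_group))  # ~26, in line with the 25 per group used here
```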

After the pretests (Gf assessments described below), participants completed 1 h of cognitive training with an adaptive dual n-back task (SI Materials and Methods). We chose this task for two reasons: First, it is commonly used in cognitive training research, and, second, a high-face-validity task was required to maintain the credibility of the training regimen [compare placebo pain medication appearing identical to the real medication (35)]. In this task, participants were presented with two streams of information: auditory and visuospatial. There were eight stimuli per modality, presented at a rate of 3 s per stimulus. For each stream, participants decided whether the current stimulus matched the stimulus that was presented n items ago. Our n-back task was an adaptive version in which the level of n changed as performance increased or decreased within each block.
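To make the task mechanics concrete, here is a minimal sketch of one adaptive dual n-back block. The promotion and demotion thresholds are illustrative assumptions; the exact staircase rule is not reported here.

```python
import random

def make_block(n, trials=20, n_stimuli=8):
    """Generate one dual n-back block: an auditory and a visuospatial
    stream, each drawn from 8 possible stimuli, presented trial by trial."""
    audio = [random.randrange(n_stimuli) for _ in range(trials)]
    visual = [random.randrange(n_stimuli) for _ in range(trials)]
    # Trial i is a target in a stream if stimulus i matches stimulus i - n
    targets = [(i >= n and audio[i] == audio[i - n],
                i >= n and visual[i] == visual[i - n])
               for i in range(trials)]
    return audio, visual, targets

def next_n(current_n, hit_rate, up=0.90, down=0.75):
    """Adapt n between blocks: raise it after a strong block, lower it
    after a weak one (the 0.90/0.75 thresholds are assumed)."""
    if hit_rate >= up:
        return current_n + 1
    if hit_rate <= down and current_n > 1:
        return current_n - 1
    return current_n
```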

SI Materials and Methods

Tasks.

RAPM.

RAPM is a test of inductive reasoning (37, 38). Each question has nine geometric patterns arranged in a 3 × 3 matrix according to an unknown set of rules. The bottom right pattern is always missing, and the objective is to select the correct missing pattern from a set of eight alternatives. We followed the protocol designed by Jaeggi et al. (10) to create parallel versions of the RAPM, resulting in the RAPM-A and -B, both consisting of 18 problems.

BOMAT.

The BOMAT is similar to the RAPM. However, it suffers from less of a ceiling effect in university participants compared with RAPM (8, 10). Each problem has 15 geometric patterns arranged in a 5 × 3 matrix, and the missing pattern can occur in any location. Participants must choose one correct pattern from six available options. We followed the protocol designed by Jaeggi et al. (10) to create parallel versions of the test, resulting in the BOMAT-A and -B, both consisting of 27 problems.

Questionnaires.

Theories of Intelligence Scale.

The Theories of Intelligence Scale (34) measures beliefs regarding the malleability of intelligence. This six-point scale has eight questions that can be answered from strongly agree to strongly disagree.

Need for Cognition Scale.

The Need for Cognition Scale measures “the tendency for an individual to engage in and enjoy thinking” (41). This 9-point scale has 18 questions that can be answered from very strong agreement to very strong disagreement.

Training Task.

Dual n-back task.

The dual n-back task is commonly used in cognitive training research (7, 8, 10). In this task, participants were presented with two streams of information: auditory and visuospatial. There were eight stimuli per modality, presented at a rate of 3 s per stimulus. For each stream, participants decided whether the current stimulus matched the stimulus that was presented n items ago. Our n-back task was an adaptive version in which the level of n changed as performance increased or decreased within each block. This task is freely available online at brainworkshop.sourceforge.net/.

Recruitment.

Participants self-selected into either the placebo or control group by sending an email in response to one of the two flyers presented in the main text (Fig. 1). Flyers were placed throughout buildings on the George Mason University Fairfax campus and were evenly distributed within each building. That is, if the placebo flyer was placed in a building, a control flyer was also placed in it. Flyers were replaced weekly to ensure that pull tags were always available. We also responded to all emails in an identical manner. Specifically, when a participant emailed to inform us that they were interested in either study, we responded with the following:

“Thank you for your email and interest in participating. To complete this experiment, you will need to be available for two sessions that occur on back to back days (e.g., Monday and Tuesday). Please allow up to 3 hours for session one and up to 2 hours for session two. We currently have the following sets of sessions available over the next two weeks. Please let us know what sets of times work for you so that we can schedule you. If you have any questions, feel free to email us back.”

When participants emailed back with their available time, we then responded with this email:

“Thank you for agreeing to participate. You are scheduled to complete session one at [TIME AND DATE HERE] and session two at [TIME AND DATE HERE] in [BUILDING NAME AND ROOM NUMBER]. [RESEARCHER’S NAME] will be working with you at both sessions. Please arrive on time, use the restroom beforehand, and turn off your cell phone for the duration of the experiment. A break will be offered during session one. If you have any questions, please email us back. Thank you and have a nice day.”

Procedure.

All participants read and signed an informed consent form before beginning the experiment. Importantly, the consent form did not mention the goal of the study. Additionally, all researchers, who were blind to the goal of the experiment anyway, were instructed to not talk to the participants about anything other than the experiment itself (i.e., the instructions) because we did not want to introduce any additional biases. On day 1, participants completed the RAPM-A and BOMAT-A, took a short break, and then completed ∼1 h of the dual n-back training task. On day 2, participants completed the RAPM-B and BOMAT-B, then the Theories of Intelligence Scale, the Need for Cognition Scale, and a short demographic survey.

RAPM to IQ Conversion.

We used two different methods to determine that our RAPM improvement (d = 0.50) equated to a 5- to 10-point increase in IQ scores. We opted to widen the range to 5–10 to better capture the improvement within a wider range of error. (i) Following the approach of the Au et al. (2015; ref. 25) meta-analysis, we multiplied the SD increase of 0.50 by the SD of common IQ tests, 15; therefore, 0.50 × 15 = 7.5 points of increase on an IQ test. (ii) By using Table APM36 from the RAPM Manual (40), which compares RAPM scores to IQ scores, an increase of 0.50 SD units equated to an approximate increase of 6–8 IQ points.
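Method (i) is a one-line calculation; as a worked check:

```python
d_rapm = 0.50          # pre-to-post RAPM gain, in SD units
iq_sd = 15             # SD of common IQ tests
print(d_rapm * iq_sd)  # 7.5 IQ points, within the reported 5-10 range
```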

Results

All analyses were conducted by using mixed-effects linear regression with restricted maximum likelihood. As expected, both groups’ training performance improved over time [B = 0.016, SE = 0.002, t(48) = 10.5, P < 0.001]. All participants began at 2-back; 18% did not advance beyond a 2-back, 14% finished training at a 4-back, and 68% at a 3-back. Training performance did not differ by group [B = −0.002, SE = 0.002, t(48) = −1.00, P = 0.321]: Both the placebo and control groups completed training with a similar degree of success. A placebo effect can occur in the absence of training differences between groups.

The placebo effect does, however, necessitate an effect on the outcome of interest. Pretraining and posttraining fluid intelligence was measured with Raven’s Advanced Progressive Matrices (RAPM) and Bochumer Matrices Test (BOMAT), two tests of inductive reasoning widely used to assess Gf (36–38). No baseline differences were found between groups on either test [t(48) = −0.063, P = 0.939 and t(48) = −0.123, P = 0.938, respectively]. We observed a main effect of time on test performance in which scores on both intelligence tests increased from pretraining to posttraining. These main effects of time on both intelligence measures, however, were qualified by an interaction by group [RAPM: B = 0.65, SE = 0.19, t(48) = 3.41, P = 0.0013, d = 0.98; and BOMAT: B = 0.82, SE = 0.18, t(48) = 4.63, P < 0.0001, d = 1.34]. Specific contrasts showed that these moderation effects were entirely driven by the participants in the placebo group—the only individuals in the study to score significantly higher on posttraining compared with pretraining sessions for both RAPM [B = −1.04, SE = 0.19, t(48) = −5.46, P < 0.0001, d = 0.50] and for the BOMAT [B = −1.28, SE = 0.18, t(48) = −7.22, P < 0.0001, d = 0.39]. Extrapolating RAPM to IQ (25, 39, 40), these improvements equate to a 5- to 10-point increase on a standardized 100-point IQ test (SI Materials and Methods). In contrast, the pretraining and posttraining scores for participants in the control group were statistically indistinguishable, both for RAPM [B = −0.12, SE = 0.19, t(48) = −0.63, P = 0.922] and for the BOMAT [B = −0.12, SE = 0.18, t(48) = −0.68, P = 0.905]. The results are summarized in Tables S2–S4 and depicted in Fig. 2. Interestingly, pooling the data across groups to form one sample (combining the self-selection and control groups) revealed significant posttraining outcomes [B = 0.41, SE = 0.11, t(49) = 3.90, P = 0.0003, d = 0.28 (RAPM); and B = 0.50, SE = 0.15, t(49) = 4.69, P < 0.0001, d = 0.21 (BOMAT)]. That is, the effect from the placebo group was strong enough to overcome the null effect from the control group (when pooled).
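As a sketch of this analysis approach (not the authors' actual script), a time × group mixed-effects model with a random intercept per participant can be fit in Python as follows. The file name and column names are assumptions about the archived dataset's layout.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per participant x test session
# (column names assumed, e.g., subject, group, time, rapm).
df = pd.read_csv("Placebo.csv")

# Random intercept per participant; REML estimation, as in the paper.
model = smf.mixedlm("rapm ~ time * group", data=df, groups=df["subject"])
result = model.fit(reml=True)
print(result.summary())
```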

Fig. 2. Estimated marginal means of the RAPM (Left) and BOMAT (Right) scores by time and group; error bars represent SEs.

Table S2. Regression results for RAPM predicted by time and group

Table S3. Regression results for BOMAT predicted by time and group

Table S4. Means and SDs for RAPM and BOMAT

We also observed differences between groups for scores on the Theories of Intelligence scale, which measures beliefs regarding the malleability of intelligence (34). The participants in the placebo group reported substantially higher scores on this index compared with controls [B = 14.96, SE = 1.93, t(48) = 7.75, P < 0.0001, d = 2.15], indicating a greater confidence that intelligence is malleable. These findings indicate that our manipulation via recruitment flyer produced significantly different groups with regard to expectancy. We did not detect differences in Need for Cognition scores (41) [B = 0.56, SE = 5.67, t(48) = 0.10, P = 0.922] (Fig. 3). Together, these results support the interpretation that participants self-selected into groups based on differing expectations.

Fig. 3. Estimated marginal means of the Theories of Intelligence and Need for Cognition scales by group; error bars represent SEs.

We also tested whether the response time to volunteer for our study influenced the aforementioned findings. Specifically, we noticed that the placebo condition appeared to fill faster than the control condition did (366 vs. 488 h). It is possible that speed of signup might represent another measure for—or perhaps gradations within—the strength of the placebo effect. The volunteer response time differences by group failed to produce a significant effect on either the RAPM [B = 0.04, SE = 0.17, t(46) = 0.23, P = 0.819] or the BOMAT [B = 0.20, SE = 0.16, t(46) = 1.28, P = 0.201]. Volunteer response time also failed to explain the improvement observed within the placebo group alone, on RAPM [B = 0.20, SE = 0.20, t(23) = 0.95, P = 0.341] and BOMAT [B = 0.26, SE = 0.22, t(23) = 1.22, P = 0.237] (Fig. 4).

Fig. 4. Improvement in test scores from pretraining to posttraining by group and speed of participant sign-up, split into fast and slow (z = −1 and z = 1 of minutes since experiment onset, respectively). Error bars represent SE.

Researchers have hypothesized that a training dosage effect may exist, such that the quality of performance on a training task is associated with the degree of subsequent skill transfer (7). However, as discussed previously, no pre–post improvements occurred within the control group, even though all participants performed equally well on the training task. Consequently, training performance did not predict subsequent performance improvement on its own [B = 0.017, SE = 0.20, t(46) = 0.09, P = 0.930], nor did it moderate the effect of group on the observed test performance improvements [B = −0.16, SE = 0.28, t(46) = −0.58, P = 0.567] (Fig. 5). Therefore, our data do not support the dosage-effect hypothesis.

Fig. 5. Improvement in test scores from pretraining to posttraining by group and performance on training task (z = −1 and z = 1 of training performance, respectively). Error bars represent SE.

Discussion

We provide strong evidence that placebo effects from overt and suggestive recruitment can affect cognitive training outcomes. These findings support the concerns of many researchers (24, 27, 29–32), who suggest that placebo effects may underlie positive outcomes seen in the cognitive-training literature. By capitalizing on the self-selecting tendencies of participants with strong positive beliefs about the malleability of intelligence, we were able to induce an improvement in Gf after 1 h of working memory training. We acknowledge that the flyer itself could have induced the positive beliefs about the malleability of intelligence. Either way, these findings present an alternative explanation for effects reported in the cognitive-training literature and in the brain-training industry, demonstrating the need to account for placebo effects in future research.

Importantly, we do not claim that our study revealed a population of individuals whose intelligence was truly changed by the training that they received in our study. It is extremely unlikely that individuals in the placebo group increased their IQ by 5–10 points with 1 h of cognitive training. Three elements of our design and results support this position. First, a single, 1-h training session is far less than the traditional 15 or more hours spread across weeks commonly used in training studies (8, 10, 23). We argue that the use of a very short training period was sufficient to avoid a true training effect. Second, we observed similar baseline scores on both of the fluid intelligence tests between groups, suggesting that both groups were equally engaged in the experiment. Thus, initial nonequivalence between groups or regression artifacts are likely absent from our design. Third, equivalent performance on the training task between groups suggests that the differences in posttraining intelligence were not the (direct) result of training. If groups showed dramatically different training effects on the dual n-back task, it might follow that one group showed higher posttraining scores on the test of general cognitive ability.

Therefore, our study, to our knowledge, is the first to explicitly model the main effect of expectancy while controlling for the effect of training. That is, because our design was unlikely to have produced true training effects, our positive effects on Gf are solely the result of overt and suggestive recruitment. Although posttraining gains in fluid intelligence are typically discussed in terms of a main effect of training (7, 8, 10, 11), we argue that such studies cannot rule out an interaction between training and effects from overt and suggestive recruitment. Furthermore, based on the evidence we reviewed above, we are unaware of any previous studies that obtained a positive main effect of training in the absence of expectation or self-selection. Indeed, to our knowledge, the rigor of double-blind randomized clinical trials is nonexistent in this research area.

Moving forward, we suggest that researchers exercise care in their design of cognitive training studies. Our findings raise philosophical concerns and questions that merit discussion within the scientific field so that this area of inquiry can advance. We discuss two different schools of thought about how to recruit participants and design training studies. We hope that this work can begin a conversation leading to a consensus on how to best design future research in this field.

First, following in the tradition of randomized controlled trials used in medicine, one approach suggests that recruitment and study design should be as covert as possible (29, 32). Specifically, several research groups have argued for the need to remove study-specific information from the recruitment and briefing procedures, avoid providing the goals of the research to participants, and omit mention of any anticipated outcomes (29, 32, 33). The purpose of such a design would be to minimize any confounding effects (e.g., placebo or expectation). Our earlier review of the Au et al. (25) meta-analysis revealed two studies that followed this approach.

Alternatively, the second approach suggests that we should only recruit participants who believe that the training will work and that we should do this using overt methods. Such a screening process would eliminate participants whose prior beliefs would prevent an otherwise effective treatment from having an effect. That is, if a participant does not care about the training, puts little effort in, and/or is motivated solely by something else (e.g., money), they are not likely to improve with any intervention, including cognitive training. Although positive expectancies would be overrepresented in such an overtly recruited sample, proper use of active controls should allow for training effects to be isolated from expectation. This view is in line with some from the medical domain who argue that researchers can make use of participant expectation to better test treatment effects in randomized controlled trials (42). This view is also in line with some from the psychotherapy domain who argue that motivation is important for treatment effectiveness (43).

One interesting consideration is the likelihood that these two design approaches recruit from different subpopulations. Dweck (34) has shown that individuals hold implicit beliefs regarding whether or not intelligence is malleable and that these beliefs predict a number of learning and academic outcomes. Thus, it is possible that the benefits from cognitive training occur only in individuals who believe the training will be effective. That being said, this possibility is not applicable to our data because our design eliminated a main effect of training. It will be important in future work to investigate the relation between expectation and processes of learning during cognitive training.

Our data do not allow us to understand the field as a whole; instead, they allow us to understand existing limitations to current research that require further exploration. To wit, we identified expectancy as a major factor that needs to be considered for a fuller understanding of training effects. More rigorous designs such as double-blind, block randomized controlled trials that measure multiple outcomes may offer a better “test” of these cognitive training effects. Blinding subjects to cognitive training may be the biggest obstacle in these designs because, as Boot et al. (29) point out, participants become aware of the goals of the study. Furthermore, assessing expectancy and personal theories of intelligence malleability (cf. ref. 34) before randomization to ensure adequate representation in all groups would allow us to better assess the true training effects and the potential for expectancy to produce effects alone or in interaction with training. Finally, researchers should use more measures of Gf to determine whether positive outcomes are the result of latent changes or changes in test-specific performance. We are aware of no study to date—including the present one—that uses these rigorous methods. (We include the present one by design. Our goal was to determine whether a main effect of expectation existed using methods similar to published research.) By using such methods, we can begin to understand whether true training effects exist and are generalizable to samples (and perhaps populations) beyond those who expect to improve.

Conclusion

Our findings have important implications for cognitive-training research and the brain-training industry at large. Previous cognitive-training results may have been inadvertently influenced by placebo effects arising from recruitment or design. For the field of cognitive training to advance, it is important that future work report recruitment information and include the Theories of Intelligence Scale (34) to determine the relation between observed effects of training and of expectancy. The brain-training industry would be well advised to temper its claims until the role of placebo effects is better understood. Many commercial brain-training websites make explicit claims about the effectiveness of their training that are not currently supported by many in the scientific community (ref. 44; cf. ref. 45). Consistent with that concern, one of the largest brain-training companies in the world agreed in January 2016 to pay a $2 million fine to the Federal Trade Commission for deceptive advertising about the benefits of their programs (46). The deception—exaggerated claims of training efficacy—may be fueling a placebo effect that may contaminate actual brain-training effects.

We argue that our findings also have broad implications for the advancement of the science of human cognition; in a recent replication effort published in Science, only 36% (35 of 97) of the psychological science studies (including those that fall under the broad category of neuroscience) were successfully replicated (47). Failure to control or account for placebo effects could have contributed to some of these failed replications. Our goal in any experiment should be to take every step possible to ensure that the effects we seek are the result of manipulated interventions—not confounds that go unreported or undetected.

Acknowledgments

This work was supported by George Mason University; the George Mason University Provost PhD Awards; Office of Naval Research Grant N00014-14-1-0201; and Air Force Office of Scientific Research Grant FA9550-10-1-0385.

Footnotes

  • 1To whom correspondence should be addressed. Email: cyrus.foroughi@gmail.com.
  • Author contributions: C.K.F. designed research; C.K.F. and S.S.M. analyzed data; and C.K.F., S.S.M., M.P., P.E.M., and P.M.G. wrote the paper.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission.

  • Data deposition: The data have been archived on Figshare, https://figshare.com/articles/Placebo_csv/2062479.

  • This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1601243113/-/DCSupplemental.

References

  1. Selk J (2013) Amidst billion-dollar brain fitness industry, a free way to train your brain. Forbes. Available at www.forbes.com/sites/jasonselk/2013/08/13/amidst-billion-dollar-brain-fitness-industry-a-free-way-to-train-your-brain/#6b3457647d41. Accessed May 8, 2016.
  2. Shapiro AK (1971) Handbook of Psychotherapy and Behavior Change (Wiley, New York).
  3. Turner JA, Deyo RA, Loeser JD, Von Korff M, Fordyce WE (1994) The importance of placebo effects in pain treatment and research. JAMA 271(20):1609–1614.
  4. Bouchard TJ Jr, Lykken DT, McGue M, Segal NL, Tellegen A (1990) Sources of human psychological differences: The Minnesota Study of Twins Reared Apart. Science 250(4978):223–228.
  5. Plomin R, Pedersen NL, Lichtenstein P, McClearn GE (1994) Variability and stability in cognitive abilities are largely genetic later in life. Behav Genet 24(3):207–215.
  6. Larsen L, Hartmann P, Nyborg H (2008) The stability of general intelligence from early adulthood to middle-age. Intelligence 36(1):29–34.
  7. Jaeggi SM, Buschkuehl M, Jonides J, Perrig WJ (2008) Improving fluid intelligence with training on working memory. Proc Natl Acad Sci USA 105(19):6829–6833.
  8. Jaeggi SM, et al. (2010) The relationship between n-back performance and matrix reasoning—Implications for training and transfer. Intelligence 38(6):625–635.
  9. Jaeggi SM, Buschkuehl M, Jonides J, Shah P (2011) Short- and long-term benefits of cognitive training. Proc Natl Acad Sci USA 108(25):10081–10086.
  10. Jaeggi SM, Buschkuehl M, Shah P, Jonides J (2014) The role of individual differences in cognitive training and transfer. Mem Cognit 42(3):464–480.
  11. Rudebeck SR, Bor D, Ormond A, O’Reilly JX, Lee AC (2012) A potential spatial working memory training task to improve both episodic memory and fluid intelligence. PLoS One 7(11):e50431.
  12. Strenziok M, et al. (2014) Neurocognitive enhancement in older adults: Comparison of three cognitive training tasks to test a hypothesis of training transfer in brain connectivity. Neuroimage 85(Pt 3):1027–1039.
  13. Neisser U, et al. (1996) Intelligence: Knowns and unknowns. Am Psychol 51(2):77–101.
  14. Watkins MW, Lei P-W, Canivez GL (2007) Psychometric intelligence and achievement: A cross-lagged panel analysis. Intelligence 35(1):59–68.
  15. Schmidt FL, Hunter JE (1998) The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychol Bull 124(2):262–275.
  16. Gottfredson LS (2004) Intelligence: Is it the epidemiologists’ elusive “fundamental cause” of social class inequalities in health? J Pers Soc Psychol 86(1):174–199.
  17. Gottfredson LS, Deary IJ (2004) Intelligence predicts health and longevity, but why? Curr Dir Psychol Sci 13(1):1–4.
  18. Whalley LJ, Deary IJ (2001) Longitudinal cohort study of childhood IQ and survival up to age 76. BMJ 322(7290):819.
  19. O’Toole BI, Stankov L (1992) Ultimate validity of psychological tests. Pers Individ Dif 13(6):699–716.
  20. Ceci SJ, Williams WM (1997) Schooling, intelligence, and income. Am Psychol 52(10):1051–1058.
  21. Strenze T (2007) Intelligence and socioeconomic success: A meta-analytic review of longitudinal research. Intelligence 35(5):401–426.
  22. Willis SL, et al., ACTIVE Study Group (2006) Long-term effects of cognitive training on everyday functional outcomes in older adults. JAMA 296(23):2805–2814.
  23. Harrison TL, et al. (2013) Working memory training may increase working memory capacity but not fluid intelligence. Psychol Sci 24(12):2409–2419.
  24. Redick TS, et al. (2013) No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. J Exp Psychol Gen 142(2):359–379.
  25. Au J, et al. (2015) Improving fluid intelligence with training on working memory: A meta-analysis. Psychon Bull Rev 22(2):366–377.
  26. Karbach J, Verhaeghen P (2014) Making working memory work: A meta-analysis of executive-control and working memory training in older adults. Psychol Sci 25(11):2027–2037.
  27. Dougherty MR, Hamovitz T, Tidwell JW (2016) Reevaluating the effectiveness of n-back training on transfer through the Bayesian lens: Support for the null. Psychon Bull Rev 23(1):306–316.
  28. Greenwood PM, Parasuraman R (November 16, 2015) The mechanisms of far transfer from cognitive training: Review and hypothesis. Neuropsychology, doi:10.1037/neu0000235.
  29. Boot WR, Simons DJ, Stothart C, Stutts C (2013) The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspect Psychol Sci 8(4):445–454.
  30. Melby-Lervåg M, Hulme C (2013) Is working memory training effective? A meta-analytic review. Dev Psychol 49(2):270–291.
  31. Melby-Lervåg M, Hulme C (2016) There is no convincing evidence that working memory training is effective: A reply to Au et al. (2014) and Karbach and Verhaeghen (2014). Psychon Bull Rev 23(1):324–330.
  32. Shipstead Z, Redick TS, Engle RW (2012) Is working memory training effective? Psychol Bull 138(4):628–654.
  33. Boot WR, Blakely DP, Simons DJ (2011) Do action video games improve perception and cognition? Front Psychol 2:226.
  34. Dweck C (2000) Self-Theories: Their Role in Motivation, Personality, and Development (Psychology Press, New York).
  35. Evans FJ (1974) The placebo response in pain reduction. Adv Neurol 4:289–296.
  36. Hossiep R, Turck D, Hasella M (1999) Bochumer Matrizentest (BOMAT) Advanced (Hogrefe, Goettingen, Germany).
  37. Raven JC, Court JH, Raven J, Kratzmeier H (1994) Advanced Progressive Matrices (Oxford Psychologists Press, Oxford).
  38. Raven JC, Court JH (1998) Raven’s Progressive Matrices and Vocabulary Scales (Oxford Psychologists Press, Oxford).
  39. Frey MC, Detterman DK (2004) Scholastic assessment or g? The relationship between the Scholastic Assessment Test and general cognitive ability. Psychol Sci 15(6):373–378.
  40. Raven J, Raven JC, Court JH (1998) Manual for the Raven’s Progressive Matrices and Vocabulary Scales (Oxford Psychologists Press, Oxford).
  41. Cacioppo JT, Petty RE (1982) The need for cognition. J Pers Soc Psychol 42(1):805–818.
  42. Torgerson DJ, Klaber-Moffett J, Russell IT (1996) Patient preferences in randomised trials: Threat or opportunity? J Health Serv Res Policy 1(4):194–197.
  43. Ryan RM, Lynch MF, Vansteenkiste M, Deci EL (2010) Motivation and autonomy in counseling, psychotherapy, and behavior change: A look at theory and practice. Couns Psychol 39(2):193–260.
  44. Stanford Center on Longevity (2014) A consensus on the brain training industry from the scientific community. Available at longevity3.stanford.edu/blog/2014/10/15/the-consensus-on-the-brain-training-industry-from-the-scientific-community-2/. Accessed May 8, 2016.
  45. Cognitive Training Data (2015) Open letter response to the Stanford Center on Longevity. Available at www.cognitivetrainingdata.org/. Accessed May 8, 2016.
  46. Federal Trade Commission (2016) Lumosity to pay $2 million to settle FTC deceptive advertising charges for its “brain training” program. Available at https://www.ftc.gov/news-events/press-releases/2016/01/lumosity-pay-2-million-settle-ftc-deceptive-advertising-charges. Accessed May 8, 2016.
  47. Open Science Collaboration (2015) Estimating the reproducibility of psychological science. Science 349(6251):aac4716.