People use less information than they think to make up their minds

Edited by Susan T. Fiske, Princeton University, Princeton, NJ, and approved November 2, 2018 (received for review March 27, 2018)
December 10, 2018
115 (52) 13222-13227

Significance

People readily categorize things as good or bad, a welcome adaptation that enables action and reduces information overload. The present research reveals an unforeseen consequence: People do not fully appreciate this immediacy of judgment, instead assuming that they and others will consider more information before forming conclusions than they and others actually do. This discrepancy in perceived versus actual information use reveals a general psychological bias that bears particular relevance in today’s information age. Presumably, one hopes that easy access to abundant information fosters uniformly more-informed opinions and perspectives. The present research suggests mere access is not enough: Even after paying costs to acquire and share ever-more information, people then stop short and do not incorporate it into their judgments.

Abstract

A world where information is abundant promises unprecedented opportunities for information exchange. Seven studies suggest these opportunities work better in theory than in practice: People fail to anticipate how quickly minds change, believing that they and others will evaluate more evidence before making up their minds than they and others actually do. From evaluating peers, marriage prospects, and political candidates to evaluating novel foods, goods, and services, people consume far less information than expected before deeming things good or bad. Accordingly, people acquire and share too much information in impression-formation contexts: People overvalue long-term trials, overpay for decision aids, and overwork to impress others, neglecting the speed at which conclusions will form. In today’s information age, people may intuitively believe that exchanging ever-more information will foster better-informed opinions and perspectives—but much of this information may be lost on minds long made up.
Opinions come easy. With almost no information at their disposal, people nonetheless form lasting impressions of strangers (1–3), feel connected or disconnected with new doctors (4), teachers (5), and salespeople (6), and like or dislike new consumer goods and experiences (7, 8). “Preferences,” as once put, “need no inferences” (9). People rarely remain neutral—even when encountering entirely novel situations—due to a system 1 suite of affective responses designed to provide rapid online feedback about the current environment (10–12).
This immediacy of judgment enables action and simplifies the overwhelming amount of information that people otherwise would have to process at each step (13–15). However, this immediacy may also carry an important disadvantage: People may fail to anticipate the speed at which opinions will form. Indeed, people are generally unaware of their own mental processes (16, 17) and tend to view the mind as a rational system 2 arbiter (18–21). As a result, people may believe that they and others would patiently evaluate more evidence before forming conclusions than they and others actually do—insensitive to the fact that once people begin to experience evidence in real time they will simultaneously react to it, taking a stance right from the first piece. Misunderstanding how quickly minds change is especially costly in today’s information age, with more access to more information than ever before. With such an abundance of information available, people might be compelled to assume that more information is uniformly more useful to acquire and to share—working in vain to change minds that will long be made up.
Seven studies test this hypothesis, including over 2,000 study participants from a diversity of backgrounds. In a typical study, we compare the estimates of “experiencers,” who experience a stimulus piece by piece and stop when they have made up their minds about it, with those of “predictors,” who first experience a sample of the stimulus (so they know exactly what to imagine) and then predict how many pieces they would need to see before making up their minds about it. This approach is adapted from research on how people judge tipping points of change, in which participants assess streaks of information piece by piece and are asked to stop whenever they feel they have seen enough to diagnose a pattern (22, 23). The current research adds the critical comparison of predicted versus actual tipping points. Critically, we emphasize that nearly all studies follow a “preexperience” paradigm, such that all participants, including predictors, first experience the stimulus once in full before rating it (beyond just reading a description). Thus, predictors are fully informed about “what” to imagine, with any subsequent mispredictions reflecting “how much” they thought they would need to experience before making up their minds. Studies 1 to 4 document this discrepancy across many judgments, from the (unforeseen) speed at which people form preferences to the (unforeseen) speed at which people judge others. Studies 5 to 7 highlight its problematic consequences: Gaining and providing access to information are not nearly as valuable as people think.

Results

Study 1.

In study 1, participants viewed different paintings featuring the same novel style of art, with no variation between the pieces beyond simple colors and shapes. They could view up to 40 paintings in total. Participants were randomly assigned to one of two conditions. Experiencers viewed individual paintings one by one and were asked to stop at the very first point when they made up their minds about whether they liked or disliked this general style of art (after doing so, they reported their verdict: like or dislike). Predictors were asked to predict how many paintings they would need to see before hitting this very first point (and then predicted their verdict). This study followed the preexperience paradigm: All participants began by seeing a thumbnail collage of all of the paintings (calibrating expectations of low variance) and then completed one practice trial in full, exactly as in the real task (giving predictors full knowledge of what to expect and a fair shot at accuracy in predicting how much of it they would need to experience).
And yet, in the test trial, participants made up their minds sooner than they expected: Predictors expected they would need to see many more paintings to make up their minds (M = 16.29, SD = 12.04) than experiencers actually needed to see to make up their minds (M = 3.48, SD = 3.44), t(205) = 10.60, P < 0.001, d = 1.45. This discrepancy held regardless of participants’ actual verdict [74.07% of experiencers concluded liking the style and 72.72% of predictors thought they would conclude liking it; key difference between predictors and experiencers after adding verdict as a covariate in an ANOVA, F(1, 204) = 112.01, P < 0.001, ηp2 = 0.35].
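As a minimal sketch (not the authors' analysis code), the key comparison above is an independent-samples t test on tipping points together with a pooled-SD Cohen's d; the arrays below are hypothetical stand-ins for the two conditions' data.
```python
# Sketch of the predictor-vs-experiencer comparison in study 1 (illustrative data,
# not the actual dataset): independent-samples t test plus Cohen's d.
import numpy as np
from scipy import stats

predicted_paintings = np.array([20, 10, 35, 8, 15, 12])  # hypothetical predictor estimates
actual_paintings = np.array([2, 5, 1, 4, 3, 6])           # hypothetical experiencer stopping points

t, p = stats.ttest_ind(predicted_paintings, actual_paintings, equal_var=True)

# Cohen's d using the pooled standard deviation
n1, n2 = len(predicted_paintings), len(actual_paintings)
pooled_var = ((n1 - 1) * predicted_paintings.var(ddof=1) +
              (n2 - 1) * actual_paintings.var(ddof=1)) / (n1 + n2 - 2)
d = (predicted_paintings.mean() - actual_paintings.mean()) / np.sqrt(pooled_var)

print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```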
We included precautions to assess two potential confounds. First, perhaps some paintings inadvertently contained content that sped up tipping points, which experiencers would be more likely to come across than predictors. This is unlikely given the preexperience paradigm and the fact that predictors saw a collage of all possible paintings (emphasizing their low variance from piece to piece), and it is further belied by the accuracy of the predicted verdicts. Nonetheless, we asked participants to rate the paintings in terms of how surprising, shocking, and unique they were. Experiencers found the paintings no more surprising (M = 3.07, SD = 1.38) than predictors had expected (M = 2.97, SD = 1.28), no more shocking (M = 2.64, SD = 1.44) than predictors had expected (M = 2.61, SD = 1.33), and no more unique (M = 3.56, SD = 1.00) than predictors had expected (M = 3.41, SD = 1.11), ts ≤ 0.97, Ps ≥ 0.334, ds ≤ 0.14. Moreover, at the end of the study, nearly all experiencers (89.81%) reported that the initial instructions accurately, fairly, and fully described what followed (all results hold when excluding participants who disagreed). Second, perhaps experiencers simply wanted to end their participation as early as possible. To exclude this possibility, experiencers were clearly informed beforehand that they would have to view all 40 paintings regardless of when they indicated their response. All experiencers reported at the end of the study that their stopping point reflected the point when their impressions tipped rather than other possibilities.

Study 2.

In study 2, we replicated this effect in the speed at which people form consumption preferences. Participants drank identical sample cups of the same vegetable juice, described as novel in the marketplace and shown without any brand identifiers. Based on random assignment, experiencers were asked to drink as many 0.5-oz sample cups of the juice as they needed before they hit the very first point when they made up their minds about whether they liked or disliked the drink (after doing so, they reported their verdict: like or dislike). Predictors first drank one 0.5-oz sample cup of the juice before proceeding. Again, this procedure follows the preexperience paradigm: Predictors learned first-hand how filling one sample cup was, and exactly what the juice tasted like. Then, they predicted how many additional 0.5-oz cups of the juice (beyond the sample cup already consumed) they would need to drink before hitting this point (and then predicted their verdict).
Again, however, participants made up their minds sooner than they expected: Predictors expected they would need to sample more total cups before making up their minds (M = 3.62, SD = 3.12) than experiencers actually sampled before making up their minds (M = 1.50, SD = 1.14), t(212) = 6.61, P < 0.001, d = 0.91. This discrepancy held regardless of participants’ actual verdict [87.85% of experiencers concluded liking the drink and 78.50% of predictors thought they would conclude liking it; key difference between predictors and experiencers after adding verdict as a covariate in an ANOVA, F(1, 211) = 46.65, P < 0.001, ηp2 = 0.18].
As in study 1, we took precautions to ensure that experiencers were not simply trying to end the task early. To exclude this possibility, all participants, including experiencers, were clearly informed beforehand that they would have to spend the same time on study tasks regardless of when they indicated their response. A full 95.33% of experiencers reported at the end of the study that their stopping point reflected the point when their impressions tipped rather than other possibilities (all results hold when excluding participants who indicated otherwise).

Study 3.

Next, study 3 tested whether this discrepancy generalizes across a wide variety of social judgments, both good and bad. Participants were tasked with making five piecemeal judgments one at a time in random order: judging a student’s intellect upon learning their grades from assignment to assignment, judging a neighbor’s character upon learning how they treated others from day to day, judging an athlete’s ability upon learning their performance from game to game, judging a person’s happiness upon learning their mood from day to day, and judging a gambler’s luck upon learning their outcomes from gamble to gamble. Based on random assignment, experiencers viewed each piece of evidence one by one and were asked to stop at the very first point when they had seen enough to make up their minds. Predictors were asked to predict the number of pieces they would need to see before hitting this point. We bounded the range to ensure direct comparisons across conditions. For example, experiencers read that they would learn a student’s next 10 assignment grades one by one. To begin, they learned that the grade on the first assignment was “low.” They indicated whether they had seen enough to conclude this was a bad student or whether they would need to see the next grade to know for sure. If the latter was selected, they learned the second grade was again low and made the same choice. This process repeated through 10 identical observations or until they indicated they had seen enough. Conversely, predictors were asked to report upfront how many low grades “in a row” (of the next 10) they would need to see before hitting this very first point. This procedure follows the preexperience paradigm because the full experience consists simply of reading each outcome. Note that predictors and experiencers completed the same task bounded by the same knowledge, with predictors explicitly predicting the number of consecutively identical outcomes of the next 10 outcomes (exactly as experiencers consecutively observed one by one).
In addition to our primary manipulation, participants were randomly assigned to evaluate either negative evidence for all five judgments (e.g., predictors and experiencers who observed streaks of low grades and judged when a student “officially” becomes a bad student) or positive evidence for all five judgments (e.g., predictors and experiencers who observed streaks of high grades and judged when a student officially becomes a good student). That is, the design of this study followed a 2 (experiencers or predictors, between-subjects) × 2 (positive verdict or negative verdict, between-subjects) × 5 (domain, within-subjects) design, extending studies 1 and 2 by methodologically accounting for participants’ verdict.
The discrepancy generalized (for full descriptive statistics, see SI Appendix, Table S1): For all domains and regardless of forming positive or negative opinions, predictors overestimated the amount of evidence that they would collect before making up their minds compared with the amount of evidence that experiencers actually collected, Fs ≥ 50.89, Ps ≤ 0.001, ηp2s ≥ 0.11. Taking all domains together (α = 0.89), this effect was robust: When forming negative impressions, predictors thought they would grant others 5.25 (SD = 1.63) bad actions on average before condemning them as bad actors, while experiencers did so after only 3.46 (SD = 2.11) bad actions, F(1, 199) = 46.23, P < 0.001, ηp2 = 0.19; and when forming positive impressions, predictors thought they would hold others to 5.50 (SD = 1.51) good actions on average before deeming them good actors, while experiencers did so after only 3.35 (SD = 2.09) good actions, F(1, 197) = 64.17, P < 0.001, ηp2 = 0.25. Valence of verdict did not interact with this effect for any of the domains, Fs ≤ 2.13, Ps ≥ 0.145, ηp2s ≤ 0.005.
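The composite reliability reported for the five domains (α = 0.89) is a standard Cronbach's alpha; below is a minimal sketch of that computation on a hypothetical participants-by-domains matrix of tipping points (our illustration, not the authors' code).
```python
# Cronbach's alpha for a participants x domains matrix of tipping points
# (hypothetical data; illustrates the alpha = 0.89 composite reported above).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items (here, 5 domains)
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed composite
    return (k / (k - 1)) * (1 - item_variances / total_variance)

example = np.array([[3, 4, 3, 5, 4],
                    [6, 5, 7, 6, 6],
                    [2, 3, 2, 2, 3],
                    [5, 5, 6, 4, 5]])
print(round(cronbach_alpha(example), 2))
```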
As in studies 1 and 2, we took precautions to ensure that experiencers were not simply trying to end the task early. To exclude this possibility, experiencers were clearly informed beforehand that they would have to view all 10 outcomes regardless of when they indicated their response, which they indeed did during the task. A full 92.57% of experiencers reported at the end of the study that their stopping point reflected the point at which their impressions tipped rather than other possibilities (all results hold when excluding participants who indicated otherwise).

Study 4.

Next, study 4 tested the discrepancy within a correlational design, in a noisy but naturalistic context: the (surprising) speed of falling in love. Married participants rated how long it took from the moment they met their spouse to the moment they were sure they found their lifelong partner. Never-married participants from the same population rated how long they “thought” it would take from the moment they met their future spouse to the moment they were sure they found their lifelong partner, on the same scale.
Never-married participants overestimated how long it would take for them to “know” (M = 5.17, SD = 2.20) compared with the actual experience of married participants (M = 4.38, SD = 2.13), t(198) = 2.59, P = 0.010, d = 0.36 (this effect held when controlling for participant gender, age, and ethnicity, P = 0.013). We also asked participants to specify the number of days this did or would take, from 1 to 365 (plus a choice option labeled “more than a year”). Never-married participants overestimated the specific number of days this would take compared with the married participants (M = 210.53, SD = 142.80 vs. M = 172.93, SD = 127.58), t(198) = 1.97, P = 0.051, d = 0.28 (controlling for demographic variables, P = 0.181), with a full 38.88% of never-married participants checking off “more than a year” but only 17.65% of married participants doing so, χ2 = 10.04, P = 0.002 (controlling for demographic variables, P = 0.024). Although causality cannot be interpreted from these correlational data and various other differences between these groups may contribute to the results (such as differences in past romantic success), these data simply provide converging evidence for the same effect from a different perspective (and the effects hold when controlling for demographic variables).
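The “more than a year” comparison is a 2 × 2 chi-square test of group membership by response. The sketch below uses approximate counts back-computed from the reported percentages and group sizes, so it is illustrative rather than a reproduction of the authors' analysis.
```python
# Chi-square test of the proportion of participants checking "more than a year"
# (study 4). Counts are approximations implied by the reported percentages and
# group sizes (98 never-married, 102 married), not the raw data.
import numpy as np
from scipy.stats import chi2_contingency

# rows: never-married, married; columns: checked "more than a year", did not
table = np.array([[38, 60],
                  [18, 84]])
chi2, p, dof, expected = chi2_contingency(table)  # Yates' correction applied by default
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```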
Studies 1 to 4 reveal a misperception of the immediacy of judgment: People think they will collect and evaluate more information before drawing conclusions than they actually do. This discrepancy means that people are insensitive to when more information is actually needed, suggesting that they will under-acquire information in contexts in which such information will reliably change their judgments and over-acquire information in contexts in which information is unlikely to change judgments. Studies 5 to 7 test some costs of this discrepancy: People may invest too many resources into impression-formation contexts, from overvaluing long-term product trials (study 5) to overpaying for decision aids (study 6) to overworking to make a good first impression (study 7). As in all previous studies, we tested these possibilities using designs that required all participants to complete the same number of tasks and to commit the same time regardless of their responses.

Study 5.

In study 5, participants signed up for a 5-d trial of an email service called “The Daily Cute.” They were sent a unique email at the same time each morning that contained a funny cat video, a funny quote, and links to share on social media. We followed a within-subjects design. Upon signing up, participants read a thorough description of the service and completed demographic measures. The next day, the 5-d trial began. At the end of day 1, participants rated how valuable that day was in contributing to their overall impression of the service. They also indicated whether they had seen enough at this point to definitively make up their minds about it. Then, participants were asked to make predictions about days 2 to 5. This procedure follows the preexperience paradigm because participants made predictions only after experiencing day 1 in full, and were explicitly told that days 2 to 5 would be exactly like day 1 with different, but similarly matched, videos and quotes, which was true. They predicted each day’s contribution to their overall impression, and also predicted on which day they thought they would definitively make up their minds (as well as what this verdict would be: like or dislike; 22.12% of participants had made up their minds after day 1 and thus did not make these day-level predictions). Thus, we compared participants’ own predictions of each of days 2 to 5 with their actual experiences of each of days 2 to 5, having been fully informed by day 1.
However, participants again made up their minds sooner than they expected (Fig. 1): They significantly overestimated the informational value of each of days 2 to 5, paired ts ≥ 2.86, Ps ≤ 0.005, ds ≥ 0.27. Later days of the trial experience were not as necessary as participants expected. The day-level data confirm that participants finalized their judgment of the service in real time sooner (M = 2.98 d, SD = 0.99) than they thought they would (M = 3.41 d, SD = 0.77), paired t(87) = 4.53, P < 0.001, d = 0.49. As in our other studies, this discrepancy was unaffected by verdict (92.05% of participants predicted that they would like the service and 88.64% concluded liking it at their tipping point; key differences on day 2 to 5 ratings after adding change in verdict as a covariate in an ANOVA, repeated Fs ≥ 5.41, Ps ≤ 0.022, ηp2s ≥ 0.046).
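As a minimal sketch (assumed data, not the authors' code), the within-subjects comparison of predicted versus actual tipping days is a paired t test, with the paired-samples effect size computed from the difference scores.
```python
# Paired t test comparing each participant's predicted tipping day with the day
# on which they actually made up their mind (study 5). Data are hypothetical.
import numpy as np
from scipy import stats

predicted_day = np.array([4, 3, 5, 3, 4, 2, 4, 3])  # hypothetical predicted tipping days
actual_day    = np.array([3, 2, 3, 3, 2, 2, 3, 2])  # hypothetical actual tipping days

t, p = stats.ttest_rel(predicted_day, actual_day)
diff = predicted_day - actual_day
d_z = diff.mean() / diff.std(ddof=1)  # one common paired-samples effect size

print(f"t({len(diff) - 1}) = {t:.2f}, p = {p:.4f}, d = {d_z:.2f}")
```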
Fig. 1.
Predicted versus actual necessity of sampling information from day to day, within-subjects (study 5). Error bars represent SEs. Day 1 did not include predicted judgments to allow participants to preexperience the video service before making predictions (for visual ease, the dotted line connects experiences in day 1 to predictions in day 2).
Although later information might matter more for judging other stimuli (thus suggesting an error in stopping too soon before getting an accurate reading), study 5 reveals an error for judging stimuli that may not require much evidence to judge accurately: Participants were mistaken in their specific predictions about specific contexts for which later information truly did not matter. This suggests people may generally assume that more information is more helpful for impression formation, without distinguishing when it is helpful versus unnecessary. Next, we tested the consequences of this assumption in a task in which acting on it carries an objective cost: paying to buy time to make up one’s mind in a context in which additional time is unnecessary.

Study 6.

In study 6, participants were asked to guess the winner of 20 US senatorial elections based on the photographs of the two candidates in each race, presented one election at a time in random order. For each correct guess, participants won money. Information was operationalized as allowing participants to view the photographs for varying lengths of time, ranging from 1 to 5 s. In essence, we conducted an incentivized replication of past research showing that people are remarkably adept at predicting election outcomes within milliseconds of exposure to candidates’ photographs, because competent-looking candidates tend to win elections and system 1 is highly attuned to cues of competence (24). We used the original stimuli and picked the 20 elections for which perceived competence was the strongest predictor of election outcome. From this past research, we presumed participants would perform well. However, our findings so far suggest participants may not anticipate how well they would perform and may therefore overpay for longer exposure times to the candidates’ photographs.
Based on random assignment, each pair flashed on the screen side by side for 1, 2, 3, 4, or 5 s. There was no effect of exposure time on accuracy rates, F(4, 839) = 0.65, P = 0.624, ηp2 = 0.003. The number of correct guesses across conditions ranged from 14.66 (earning a bonus of M = $1.47, SD = $0.28) to 15.05 (earning a bonus of M = $1.50, SD = $0.27), ts ≤ 1.32, Ps ≥ 0.191, ds ≤ 0.15. Consistent with past research, guesses after 5 s of exposure were just as accurate as guesses after 1 s of exposure.
Critically, we included a sixth condition in which the study was explained in detail and participants were asked to choose how long to view all pairs of photographs. However, we set increasing prices for longer exposures: 1 s of exposure time was free, 2 s cost 20% of participants’ total winnings, 3 s cost 30% of total winnings, 4 s cost 40% of total winnings, and 5 s cost 50% of total winnings. This price schedule followed the logic that longer trial periods generally cost more money. If people have insight into the immediacy of judgment, these participants should opt into the 1-s condition and maximize their payout. This increasing pay schedule is a conservative test of our hypothesis because the shortest time condition should be attractive merely by virtue of being free (25). And yet, two findings emerged. First, a full 60.12% of these participants paid for an exposure time longer than 1 s, χ2 (1, n = 163) = 6.68, P = 0.010. Second, they chose poorly: Additional exposure did not change participants’ minds and therefore did not change accuracy rates (Fig. 2A). The number of correct guesses across participants’ choices ranged from 14.58 to 15.60 on average—no different from our randomly assigned conditions, ts ≤ 1.25, Ps ≥ 0.215, ds ≤ 0.17. Accordingly, choosing the (more expensive) longer exposures reduced net earnings to $1.26 ± 0.32 (Fig. 2B). When including the choice condition in the full analyses, there was a significant effect of condition, F(5, 1001) = 17.43, P < 0.001, ηp2 = 0.080. The (small number of) participants who chose a 1-s exposure indeed did no worse than participants who were assigned the 1-s exposure and hence made the same high amount of money, t(230) = 1.24, P = 0.216, d = 0.16. However, the (large number of) participants who purchased longer exposures ended up earning less than participants who were assigned into the corresponding exposure times, ts ≤ 6.09, Ps ≤ 0.001, ds ≥ 0.83. Underappreciating the immediacy of judgment was objectively detrimental in this context.
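To make the incentive structure concrete, the sketch below works through the payout arithmetic as described in the text ($0.10 per correct guess, with chosen exposures beyond 1 s forfeiting a fixed percentage of winnings); the function and variable names are ours, not part of the study materials.
```python
# Payout arithmetic for study 6 as described: $0.10 per correct guess (20 guesses),
# with chosen exposures beyond 1 s costing a percentage of total winnings.
# Names are illustrative, not from the study materials.
COST_BY_EXPOSURE_S = {1: 0.00, 2: 0.20, 3: 0.30, 4: 0.40, 5: 0.50}

def net_bonus(n_correct: int, exposure_s: int) -> float:
    gross = 0.10 * n_correct                        # $0.10 per correct guess
    return gross * (1 - COST_BY_EXPOSURE_S[exposure_s])

# With accuracy flat at roughly 15/20 regardless of exposure time, paying for
# longer exposure only reduces net earnings:
for seconds in (1, 2, 3, 4, 5):
    print(seconds, "s ->", f"${net_bonus(15, seconds):.2f}")
```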
Fig. 2.
Number of correct guesses of election winners (A) and net earnings (B) as a function of assigned or chosen exposure times (study 6). Error bars represent SEs.
Notably, this study is the sole exception in not following the preexperience paradigm, although (like all the studies) it did provide a written description of the task beforehand. For more-direct validation that predictors were sufficiently informed, we conducted a posttest with new participants from the same population (SI Appendix). These participants read the original study instructions, reported their expectations of the content of the photographs, and then viewed all photographs and reported their experiences. We found no differences between predictions and experiences, suggesting our original participants in the main study were sufficiently informed.

Study 7.

Finally, the basic effect that people overestimate how much information they would use in making judgments also suggests interpersonal misunderstandings about the speed at which others judge us. In study 7, we tested this possibility in the context of making a good first impression in job applications. Job applicants assiduously polish the way they present themselves to prospective employers, presumably hoping that all of their minute efforts will be noticed and reviewed equally. However, to the extent that the mind is all but immediately made up, initial parts of job applications may influence employers’ judgments more than later parts, at a speed that applicants may not intuit. Evaluators may make up their minds faster than applicants realize, leaving much of one’s painstaking effort to impress unnoticed.
MBA students currently enrolled in a business school in the United States were asked to apply for a hypothetical management position, and wrote a list of essays about their past management experiences. They were informed that their lists would be evaluated essay by essay by a real hiring manager, and were tasked with writing the exact number of essays they thought the hiring manager would actually read. The MBA students were instructed to write the exact number of essays they believed would lead the hiring managers to “tip” in their impressions—the very first point when the hiring manager would have seen enough to get a general sense of the applicant as a manager and continue onward in the application. Accuracy was key: Participants were told to assume that writing too few essays or too many essays would cost them the job. Then, they completed the task as described and wrote real essays about their real management experiences. Afterward, professional hiring managers who currently work for companies in the United States were randomly yoked to one MBA application. They were asked to read the applicant’s essays and stop at the first point they had seen enough to get a general sense of the applicant as a manager and continue onward in the application. To prevent ceiling effects, the hiring managers were informed at the outset (and reminded before each essay judgment) that if they reached the last essay but still could not make up their minds about the applicant, they could specify the number of additional essays they wished the applicant had written.
Applicants wrote more essays (M = 3.81, SD = 1.25) than hiring managers read (M = 2.09, SD = 1.79), paired t(123) = 8.85, P < 0.001, d = 0.81. As hypothesized, the MBA students failed to anticipate the immediacy of the hiring managers’ judgment and in turn overworked to impress. Of course, beyond the laboratory, applicants might hedge by writing as much as possible given the high cost of coming up short; but in this study, participants were not hedging because their task was to write exactly the right number of essays—not too few and not too many, or else they lost the job—and on this task they got it wrong. Scaled to everyday life, this discrepancy likely works against self-presenters. Those looking to impress might be wiser to spend their time fine-tuning some information rather than fine-tuning all information, despite intuitions to do the latter (e.g., evaluators likely will not process each and every page of one’s 20-page resume, no matter how well crafted or informative).

Discussion

Minds are made up sooner than people think. People do not carefully weigh all possible evidence; good things strike us as good and bad things strike us as bad much faster than we expect to draw these conclusions. This lack of insight into the speed at which minds change highlights a difficulty in distinguishing the contexts in which more information or experience will inform judgment from the contexts in which it will not. Indeed, throughout our studies, we sought to calibrate expectations as well as possible by following a preexperience paradigm, endowing predictors with full knowledge of the experience that experiencers encountered first-hand. This critical feature rules out the possibility that the misprediction simply reflects a misperception about what one will experience, and instead reveals a misperception about how much one needs to experience to form an impression.

In everyday life, people often cannot preexperience a stimulus before making decisions about it, such as determining what products to buy or what cities to visit, suggesting that the misprediction may be even more miscalibrated than what we observed. This discrepancy suggests errors of two kinds. From the predictor’s perspective, it suggests errors in the amount of time, money, or worry that people may invest in sampling ever-novel products or in building an initial reputation (as tested in studies 5 to 7). From the experiencer’s perspective, it suggests that people may underutilize information even though it is available and indeed perhaps useful for longer-term learning and information exchange.

In an age of unprecedented availability of information at our fingertips, seekers of information (such as when we log online to research a new topic or to engage in debate) may seek only a sliver of what is available before forming an opinion anyway, whereas providers of information may assume that seekers have taken full advantage and heard them loud and clear. The promise of today’s information age may need to be carefully managed, requiring more than merely granting access to information. Other research suggests that access to information undermines memory, such that people stop encoding new information when they know they can retrieve it elsewhere [e.g., online repositories (26)]. Our research more broadly suggests that minds are less curious and less open to information than we assume they will be, fostering costly misunderstandings for real-time information exchange.
These findings raise three fruitful directions for research. First, future studies should manipulate variance. We accounted for variance in our studies but, in everyday life, predictors may assume a wider range of evidence than experiencers face first-hand. This might attenuate the effect, but it might also be more a feature than a bug (e.g., even when future possible variance is knowingly high, we suspect that people will be less compelled to actually consume it when given the chance). That is, our discrepancy suggests an error of assessing too little information in judging complex entities that require a lot of evidence to accurately judge (27). Ultimately, the amount of evidence that people intuitively collect should depend on whether the rational course of action in the current context is to seek out more rather than less information, but our findings reveal that people may not be well calibrated in distinguishing these contexts beforehand.
Second, future studies should manipulate stakes. When stakes are extremely high, such as assessing piecemeal evidence for sentencing a crime or for investing in retirement, experiencers will presumably collect more information than they collect under low stakes. Of critical interest is the extent to which predictors adjust their expectations in kind when they are fully aware of high stakes, perhaps taking on even more information than what would prove most helpful (28).
Third, future studies should assess how to calibrate beliefs. Perhaps learning about system 1 can encourage people to consider potential discrepancies when they are tasked with setting evidentiary thresholds in advance, as people often must do—from laypeople who must choose the length of product trials and manage time spent on résumé minutiae, to policymakers who must set benchmarks for rewards and punishments before constituents react to each individual act in isolation. However, past efforts to convince people that general psychological effects apply to one’s own circumstances have yielded mixed results (29–31). One promising route could map the conscious thought processes at each stage of judgment (32): In predicting thresholds, people may consider averages (“about how many behaviors are bad enough?”), whereas experiencing the evidence unfold is a serial hypothesis test with each unique piece considered in isolation (“how bad is this behavior?”). It may be possible to develop a set of effective questions and framings for people to consider at each and every stage of evaluation.
Of course, hastening people’s predictions may sometimes leave them none the wiser. Some tasks do benefit from more information and more time to judge (27, 33, 34), and in these cases a tendency to make up one’s mind too soon may be especially costly. The problem revealed here is that people may generally overestimate the amount of evidence they will patiently evaluate before making up their minds anyway, paying costs to acquire information that will go unused (regardless of whether that additional information would or would not be informative). Our findings therefore may explain many failures of self-insight, from why people do not anticipate anchoring effects (35) to why people underestimate the power of defaults (36) and emotions (37) on future judgments. Many cognitive biases may stem from a broader misunderstanding about how quickly minds change, with people assuming they can and will use more information when making decisions than they actually end up using. Opinions come easy, but understanding how easily they come is far more difficult.

Materials and Methods

The Institutional Review Board at The University of Chicago approved all experiments. Informed consent was obtained at the beginning of all experiments. For all studies, additional details are in SI Appendix. All original surveys, stimuli, and data are publicly available.

Study 1.

Participants (207; Mage = 35.06 y, 35.27% women) were recruited from Amazon’s Mechanical Turk (38) for a small monetary sum. All participants viewed a thumbnail collage of all of the paintings and completed a practice trial in which one of the pieces was displayed in full size, exactly like a real trial. They then read that they would be asked to provide their “tipping point”—the “very first point” they made up their mind about whether they liked or disliked this style—and that they would have to view all 40 paintings regardless of when they tipped. After indicating their tipping point, experiencers reported their verdict about whether they liked or disliked this general style of art (forced choice). Predictors predicted their verdict.

Study 2.

Participants (214; Mage = 31.85 y, 48.13% women) were recruited from a university laboratory for a small monetary sum (n = 161) and from Chicago public parks for a small gift (n = 53). Participants were asked to drink 0.5-oz sample cups of an unmarked juice. Unbeknownst to participants, the juice was V8 Veggie Blend Caribbean Greens. All participants first sampled a 0.5-oz cup of the juice. Experiencers then sampled as many additional cups as they needed to make up their minds about the drink. Predictors predicted how many additional cups they would need to sample to reach this point. After indicating their tipping point, experiencers reported their verdict about whether they “like the drink” or “dislike the drink” (forced choice). Predictors predicted their verdict.

Study 3.

Participants (400; Mage = 33.90 y, 41.75% women; 1 participant did not report gender) were recruited from Amazon’s Mechanical Turk for a small monetary sum. All participants read that they would be asked to provide their tipping point—the very first point they made up their mind about each target—and that they would have to view all information for all scenarios regardless of when they tipped. They were informed they would evaluate five different scenarios, and that each scenario included 10 observations. For each scenario, if an experiencer clicked through all 10 observations without tipping, the task simply continued to the next part of the survey. For the most conservative test of our hypothesis, these responses were coded as 11.
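As a minimal sketch of this coding rule (our own illustration, not the original survey logic), an experiencer trial can be represented as a loop over up to 10 identical observations that returns the tipping point, or 11 if the participant never tips.
```python
# Illustration of how a study 3 experiencer trial unfolds and how never-tipping
# responses are coded (a sketch, not the original survey code).
def experiencer_tipping_point(has_seen_enough, n_observations: int = 10) -> int:
    """has_seen_enough(i) stands in for the participant's choice after piece i."""
    for i in range(1, n_observations + 1):
        # the i-th identical observation (e.g., another "low" grade) is shown here
        if has_seen_enough(i):
            return i               # tipping point: pieces seen before concluding
    return n_observations + 1      # never tipped within 10 pieces; coded as 11

# Example: a participant whose impression tips after the third identical observation
print(experiencer_tipping_point(lambda i: i >= 3))  # -> 3
```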

Study 4.

Participants were recruited from TurkPrime’s prescreen panel services for a small monetary sum. We instructed 102 married participants (Mage = 36.64 y, 63.73% women) to “take a few moments to think about how you met your lifelong romantic partner and came to know that he or she is the one.” They were asked: “How long did it take from the time you first met this person for you to know that he or she was the person you wanted to spend the rest of your life with?” (1 = I knew extremely quickly/almost immediately, 7 = I knew extremely slowly/it took a very long time to know). They also specified the number of days (any number from 1 to 365, with a box labeled “it took longer than 1 y”). For comparison, we instructed 98 never-married participants from the same population (Mage = 29.69 y, 30.61% women) to “take a few moments to think about how you will eventually meet your lifelong partner and come to know that he or she is the one.” They were asked to predict their responses.

Study 5.

Participants (150; Mage = 36.89 y, 49.33% women) were recruited from Amazon’s Mechanical Turk. They were paid $0.50 for completing an initial survey and an additional $7.00 if they successfully completed each daily survey. Our final sample resulted in 113 participants (Mage = 37.04 y, 51.33% women) who completed all measures. The study followed a fully within-subjects design. All participants had to complete all 5 d regardless of their daily responses.

Study 6.

Participants (1,007; Mage = 37.69 y, 49.85% women) were recruited from Amazon’s Mechanical Turk for a $2.00 fixed payment. Participants were asked to guess the winner of 20 US senatorial elections by observing photographs of the two candidates. The photographs were from existing research (24). For each correct guess, participants earned $0.10, for a possible bonus of $2.00 (small in absolute terms but large relative to the $2.00 fixed payment, giving participants the chance to double their earnings). Based on existing research, participants were instructed to make their guesses based on which of the candidates looked more competent, which is a valid cue for accurate guesses. After each pair flashed, blank boxes appeared in their place, each labeled “the candidate on this side.” Participants were asked: “Who do you think won the election?” and indicated their response.

Study 7.

In phase 1, we recruited 124 MBA students enrolled in a business school in the United States (Mage = 30.53 y, 23.39% women) to complete the study. They were instructed to write brief essays about their past management experiences as part of a hypothetical application. In phase 2, we recruited 124 professional hiring managers who work for companies in the United States (Mage = 37.30 y, 50.81% women) for a small monetary sum. Each hiring manager was randomly yoked to one MBA application. They read identical information and were instructed to complete the task as described. They were asked to indicate their tipping point—the point at which they had read enough to determine their verdict about the applicant.

Acknowledgments

We thank Linda Hagen, Nick Epley, Anuj Shah, Emma Levine, and George Wu for helpful feedback, along with the entire Chicago Booth research community. Chicago Booth’s Center for Decision Research assisted with data collection.

Supporting Information

Appendix (PDF)

References

1. AG Greenwald, MR Banaji, Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychol Rev 102, 4–27 (1995).
2. L Ten Brinke, D Stimson, DR Carney, Some evidence for unconscious lie detection. Psychol Sci 25, 1098–1105 (2014).
3. J Willis, A Todorov, First impressions: Making up your mind after a 100-ms exposure to a face. Psychol Sci 17, 592–598 (2006).
4. JA Hall, DL Roter, CS Rand, Communication of affect between patient and physician. J Health Soc Behav 22, 18–30 (1981).
5. N Ambady, R Rosenthal, Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. J Pers Soc Psychol 64, 431–441 (1993).
6. N Ambady, MA Krabbenhoft, D Hogan, The 30-sec sale: Using thin slice judgments to evaluate sales effectiveness. J Consum Psychol 16, 4–13 (2006).
7. KW Kendall, I Fenwick, What do you learn standing in a supermarket aisle? Advances in Consumer Research, ed WL Wilkie (Assoc Consum Res, Ann Arbor, MI), pp. 153–160 (1979).
8. JE Russo, F Leclerc, An eye-fixation analysis of choice processes for consumer nondurables. J Consum Res 21, 274–290 (1994).
9. RB Zajonc, Feeling and thinking: Preferences need no inferences. Am Psychol 35, 151–175 (1980).
10. D Kahneman, Thinking, Fast and Slow (Farrar, New York, 2011).
11. RM Nesse, PC Ellsworth, Evolution, emotions, and emotional disorders. Am Psychol 64, 129–139 (2009).
12. RB Zajonc, Attitudinal effects of mere exposure. J Pers Soc Psychol 9, 1–27 (1968).
13. ST Fiske, SE Taylor, Social Cognition (Addison-Wesley, Reading, MA, 1984).
14. G Gigerenzer, Why heuristics work. Perspect Psychol Sci 3, 20–29 (2008).
15. A Tversky, D Kahneman, Judgment under uncertainty: Heuristics and biases. Science 185, 1124–1131 (1974).
16. RE Nisbett, TD Wilson, Telling more than we can know: Verbal reports on mental processes. Psychol Rev 84, 231–259 (1977).
17. TD Wilson, Strangers to Ourselves: Discovering the Adaptive Unconscious (Harvard Univ Press, Cambridge, MA, 2002).
18. J Haidt, The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychol Rev 108, 814–834 (2001).
19. L Kohlberg, Stage and sequence: The cognitive-developmental approach to socialization. Handbook of Socialization Theory and Research, ed D Goslin (Rand McNally, Chicago), pp. 347–480 (1969).
20. DG Myers, Social Psychology (McGraw-Hill, New York, 2010).
21. E O’Brien, Mapping out past and future minds: The perceived trajectory of rationality versus emotionality over time. J Exp Psychol Gen 144, 624–638 (2015).
22. E O’Brien, N Klein, The tipping point of perceived change: Asymmetric thresholds in diagnosing improvement versus decline. J Pers Soc Psychol 112, 161–185 (2017).
23. N Klein, E O’Brien, The tipping point of moral change: When do good and bad acts make good and bad actors? Soc Cogn 34, 149–166 (2016).
24. A Todorov, AN Mandisodza, A Goren, CC Hall, Inferences of competence from faces predict election outcomes. Science 308, 1623–1626 (2005).
25. K Shampanier, N Mazar, D Ariely, Zero as a special price: The true value of free products. Mark Sci 26, 742–757 (2007).
26. B Sparrow, J Liu, DM Wegner, Google effects on memory: Cognitive consequences of having information at our fingertips. Science 333, 776–778 (2011).
27. M Kardas, E O’Brien, Easier seen than done: Merely watching others perform can foster an illusion of skill acquisition. Psychol Sci 29, 521–536 (2018).
28. A Dijksterhuis, MW Bos, LF Nordgren, RB van Baaren, On making the right choice: The deliberation-without-attention effect. Science 311, 1005–1007 (2006).
29. B Fischhoff, Latitude and platitudes: How much credit do people deserve? Decision Making: An Interdisciplinary Inquiry, eds GR Ungson, DN Braunstein (Kent, Boston), pp. 116–120 (1982).
30. RE Nisbett, GT Fong, DR Lehman, PW Cheng, Teaching reasoning. Science 238, 625–631 (1987).
31. TD Wilson, N Brekke, Mental contamination and mental correction: Unwanted influences on judgments and evaluations. Psychol Bull 116, 117–142 (1994).
32. DB Markant, TM Gureckis, Is it better to select or to receive? Learning via active and passive hypothesis testing. J Exp Psychol Gen 143, 94–122 (2014).
33. S Frederick, Cognitive reflection and decision making. J Econ Perspect 19, 25–42 (2005).
34. S Hoeffler, D Ariely, P West, Path dependent preferences: The role of early experience and biased search in preference development. Organ Behav Hum Decis Process 101, 215–229 (2006).
35. AD Galinsky, T Mussweiler, First offers as anchors: The role of perspective-taking and negotiator focus. J Pers Soc Psychol 81, 657–669 (2001).
36. JJ Zlatev, DP Daniels, H Kim, MA Neale, Default neglect in attempts at social influence. Proc Natl Acad Sci USA 114, 13643–13648 (2017).
37. G Loewenstein, Hot-cold empathy gaps and medical decision making. Health Psychol 24, S49–S56 (2005).
38. M Buhrmester, T Kwang, SD Gosling, Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspect Psychol Sci 6, 3–5 (2011).

Information & Authors

Information

Published in

Proceedings of the National Academy of Sciences
Vol. 115 | No. 52
December 26, 2018
PubMed: 30530692

Submission history

Published online: December 10, 2018
Published in issue: December 26, 2018

Keywords

tipping point, change, self-insight, judgment, information processing

Notes

This article is a PNAS Direct Submission.

Authors

Affiliations

Nadav Klein
Harris School of Public Policy, University of Chicago, Chicago, IL 60637;
Ed O’Brien1 [email protected]
Booth School of Business, University of Chicago, Chicago, IL 60637

Notes

1. To whom correspondence may be addressed. Email: [email protected] or [email protected].
Author contributions: N.K. and E.O. designed research, performed research, analyzed data, and wrote the paper.

Competing Interests

The authors declare no conflict of interest.
