The default-mode network represents aesthetic appeal that generalizes across visual domains
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany;
- Department of Psychological Science, Missouri University of Science and Technology, Rolla, MO 65409;
- Department of Psychology, The Ohio State University, Columbus, OH 43210;
- Departments of English and Neuroscience, Pomona College, Claremont, CA 91711
Edited by Moshe Bar, Bar-Ilan University, Ramat Gan, Israel, and accepted by Editorial Board Member Michael S. Gazzaniga July 15, 2019 (received for review February 13, 2019)

Significance
Despite being highly subjective, aesthetic experiences are powerful moments of interaction with one’s surroundings, shaping behavior, mood, beliefs, and even a sense of self. The default-mode network (DMN), which sits atop the cortical hierarchy and has been implicated in self-referential processing, is typically suppressed when a person engages with the external environment. Yet not only is the DMN surprisingly engaged when one finds a visual artwork aesthetically moving, here we present evidence that the DMN also represents aesthetic appeal in a manner that generalizes across visual aesthetic domains, such as artworks, landscapes, or architecture. This stands in contrast to ventral occipitotemporal cortex (VOT), which represents the content of what we see, but does not contain domain-general information about aesthetic appeal.
Abstract
Visual aesthetic evaluations, which impact decision-making and well-being, recruit the ventral visual pathway, subcortical reward circuitry, and parts of the medial prefrontal cortex overlapping with the default-mode network (DMN). However, it is unknown whether these networks represent aesthetic appeal in a domain-general fashion, independent of domain-specific representations of stimulus content (artworks versus architecture or natural landscapes). Using a classification approach, we tested whether the DMN or ventral occipitotemporal cortex (VOT) contains a domain-general representation of aesthetic appeal. Classifiers were trained on multivoxel functional MRI response patterns collected while observers made aesthetic judgments about images from one aesthetic domain. Classifier performance (high vs. low aesthetic appeal) was then tested on response patterns from held-out trials from the same domain to derive a measure of domain-specific coding, or from a different domain to derive a measure of domain-general coding. Activity patterns in category-selective VOT contained a degree of domain-specific information about aesthetic appeal, but did not generalize across domains. Activity patterns from the DMN, however, were predictive of aesthetic appeal across domains. Importantly, the ability to predict aesthetic appeal varied systematically; predictions were better for observers who gave more extreme ratings to images subsequently labeled as “high” or “low.” These findings support a model of aesthetic appreciation whereby domain-specific representations of the content of visual experiences in VOT feed in to a “core” domain-general representation of visual aesthetic appeal in the DMN. Whole-brain “searchlight” analyses identified additional prefrontal regions containing information relevant for appreciation of cultural artifacts (artwork and architecture) but not landscapes.
Aesthetic appeal is a fundamental aspect of human experience that touches many aspects of life. Aesthetic considerations affect purchasing decisions (1–3), choice of leisure time activities (4–6), stress levels (7–9), recovery from illness (10, 11), and well-being more generally (12–15). Aesthetics affect central aspects of perception and cognition, including visual orienting (16), learning (17, 18), and valuation (19, 20), and also affect our interactions with other people, from evaluations of attractiveness to judgments of trustworthiness and competence (21).
An aesthetic experience is, in general, a perceptual experience that is evaluative and affectively absorbing and engages comprehension (meaning) processes (22, 23). Aesthetic experiences often have a significant conceptual component, such as during encounters with conceptual art or abstract mathematical problem solving (24, 25), and may also emerge in response to imagined objects. Such experiences may include feelings of pleasure or beauty from engaging with an object, and judgments of liking or attractiveness, but can also include more complex responses such as being moved and feelings of awe and the sublime (26, 27).
People have strong aesthetic experiences with images from a variety of different visual aesthetic domains, such as landscapes, faces, architecture, and artwork. Yet the perceptual features that support such aesthetic experiences are domain-specific. For example, the features that support perception and aesthetic appreciation of a mountain vista are different from those used to perceive and aesthetically evaluate a Gothic church (28–31). Recent behavioral studies support a distinction between aesthetic appraisals of natural kinds, such as faces and landscapes, versus those of cultural artifacts, such as artwork and architecture: Aesthetic appraisals of cultural artifacts are highly individual, whereas assessments of faces and landscapes tend to contain a strong degree of “shared taste” (32, 33).
Is there a domain-general brain system that represents aesthetic appeal regardless of aesthetic domain, or, alternatively, do different aesthetic domains engage domain-specific processes for aesthetic valuation? A number of studies on decision-making point to a region of the ventral or anterior medial prefrontal cortex (vMPFC/aMPFC) as containing a domain-general representation of value (using money, food, water, arousing pictures, social feedback) (34, 35), and this same brain region has been shown to be important for aesthetic appeal (36–38). However, the relationship between aesthetic appeal and valuation is not straightforward. Aesthetic considerations represent only a subset of inputs to valuation and subsequent decision-making (e.g., ref. 19), and aesthetic judgments can be made in the absence of approach motivation, a signature of reward (39). In addition, aesthetic experiences need not be based on either primary rewards or simple associations with primary rewards, such as when imagery (40) or information foraging, sense making, and uncertainty reduction (41–43) lead to pleasing aesthetic experiences.
Yet if aesthetic appreciation is derived from evolutionarily older processes of object appraisal, there should be overlap in the neural mechanisms for the evaluation of a variety of aesthetic domains, at least partially coinciding with a general system for representing subjective value. Indeed, a functional MRI (fMRI) study of beauty judgments of music and visual art found overlap at the group average level in a single region in vMPFC/ACC (anterior cingulate cortex) (44). Conversely, a metaanalysis of over 93 studies of pleasurable experiences across 4 sensory modalities (vision, taste, hearing, and smell) did not find evidence for overlap in either vMPFC/aMPFC or striatum, and instead identified the anterior insula as a region with the highest overlap (in 3 of 4 domains) (45). A pattern of adjacency, rather than overlap, was seen in orbitofrontal cortex (OFC) and vMPFC. Unfortunately, the methods used in both of these studies were inadequate for resolving the question of whether there was true domain-generality at the level of individual observers: Adjacent but nonoverlapping activations at the level of individuals can lead to spurious overlap at the group or metaanalytic levels, given the difficulty in precisely aligning cortical surfaces across observers and studies.
A recent study using a more precise multivoxel pattern analysis (MVPA) method found evidence for coding of valence in OFC and vMPFC for both strongly valenced (but not “aesthetic”) visual images and gustatory stimuli (46). In the aesthetic domain, Pegors et al. (47) found evidence for overlapping activations for positive aesthetic judgments of landscapes and faces in regions of the vMPFC, as well as domain-specific regions in other anterior parts of the brain. Yet, given the distinction between behavioral responses to natural vs. artifactual aesthetic domains (see above), it is notable that none of these studies addresses this distinction.
In addition to their importance for valuation, the vMPFC and aMPFC are also central nodes of the default-mode network (DMN), a set of interconnected brain regions that are engaged by tasks that require inwardly directed attention or an assessment of self-relevance (48–51). Surprisingly, the DMN is responsive to strongly moving aesthetic experiences with visual artwork (52, 53). Several nodes of the DMN, which are typically suppressed when attention is directed to an external object such as a visual image (54–56), were found to be released from suppression when an observer viewed paintings rated as strongly aesthetically moving, but not for paintings rated as merely pleasing or as not aesthetically appealing. This effect was particularly strong in the aMPFC, a “hub” region of the DMN (48), but was also present in other nodes.
We tested the hypothesis that the DMN contains a representation of aesthetic appeal that generalizes across different visual domains, using a strong multivariate test. We trained classifiers on voxelwise patterns of fMRI blood-oxygen-level-dependent (BOLD) activity to distinguish trials of high versus low aesthetic appeal for one domain (artworks, natural landscapes, or architecture; Fig. 1), and tested the classifiers’ ability to predict observers’ judgments on the other domains. Given the hypothesized importance of DMN regions for representing and processing self-relevant, internally directed information (51, 57), we predicted that classifiers trained on data from the DMN would perform well on across-domain classifications, providing strong evidence for a domain-general representation. Alternatively, it is possible that the DMN represents information about a variety of aesthetic domains, but in a manner that is domain-specific. This would lead to accurate classification within domain, but poor across-domain generalization. Lastly, it is also possible that only specific subregions of the DMN such as vMPFC or aMPFC, but not the DMN as a whole, contain domain-general representations of aesthetic appeal.
Fig. 1. Stimuli and experimental design. Examples of images used in the experiment: (A) visual art, (B) interior and exterior architecture, and (C) natural landscapes. (D) Each trial began with a fixation point (1 s), followed by an image of an artwork (4 s) and a rating period (4 s) during which the observer used a trackball to indicate their response on a visual slider. (E) Classifiers were trained on multivoxel patterns of trialwise (beta-series) estimates taken from individual observer ROIs. Three sets of “within-domain” classification scores and 6 sets of “across-domain” classification scores were derived for each observer using cross-validation. (Fig. 1 A, Left) Reprinted from ref. 87. (Fig. 1 A, Right) Reprinted from ref. 88. (Fig. 1 B, Left) Image courtesy of Alec Hartill (photographer). (Fig. 1 B, Right) Image courtesy of R. Hoekstra (photographer). (Fig. 1C) Images are examples from the SUN database, https://groups.csail.mit.edu/vision/SUN/.
Additionally, we used a “searchlight” multivariate mapping technique to identify regions of cortex that contained domain-specific representations of aesthetic appeal for artifacts of human culture (artworks, architecture) or natural landscapes. Similar approaches have been used to understand brain systems for valuation of money, food, and consumer goods (20, 34, 35, 58). While this approach has previously been extended to the aesthetic domain (47), no direct comparison between natural and artifactual aesthetic domains has been made, nor has there been any attempt to topographically map domain-specific and domain-general aesthetic processing across the cortex.
We found evidence for a domain-general representation of aesthetic appeal in the DMN, whereas higher-level visual regions (ventral occipitotemporal cortex, VOT) exhibited a domain-specific pattern. Importantly, our ability to classify trials as high or low in aesthetic appeal on the basis of fMRI signal was strongly related to the degree to which each observer behaviorally distinguished those high versus low trials. The searchlight analysis revealed several additional regions in prefrontal cortex that contained information about aesthetic appeal of art and architecture, whereas domain-specific information about aesthetic appeal of landscapes was mostly confined to the ventral visual pathway.
Results
Classifier Performance in the DMN.
Patterns of multivoxel activity across the entire DMN contained a strong signature of domain-general aesthetic processing (Fig. 2). The 3 within-domain classifiers (Fig. 2A) achieved an average classification of high vs. low preferred trials at a rate of 63.8% (95% CI [61.4 66.4], average area under the curve [AUC] 0.68, d = 2.78; Fig. 2C, gray bars), which was highly significant compared to chance performance of 50% (permutation testing of individual trial labels, P < 0.005). The 6 across-domain classifiers also performed well above chance, with an average classification rate of 60.7% (95% CI [57.2 64.3], average AUC 0.64, d = 1.61, P < 0.005).
Fig. 2. Predicting aesthetic appeal from activity patterns in DMN and VOT. (A) For each region, a set of classifiers were trained on voxelwise activity patterns from trials of one stimulus domain and tested on separate trials of another domain to produce a 3 × 3 performance matrix. Classification performance (percent correct) was averaged across the 3 diagonal elements to produce a within-domain accuracy score, and the 6 off-diagonal elements were averaged to produce an across-domain score. (B) Six bilateral DMN ROIs (Left) were identified in individual participants: aMPFC, vMPFC, dMPFC, PCC, IPL, and LTC. The regions shown here are the “master” ROIs drawn on an average template brain—these regions were then used to mask individual DMN maps derived from a separate “rest” scan. In addition, 3 category-selective, bilateral VOT regions (Right) were identified in individual participants using a separate object/face/place localizer scan (approximate location on the average template brain shown here): PPA, VOA, and FFA. An overall DMN mask was created by summing all 6 DMN subregions together, and an overall VOT mask was created by projecting the larger VOT region shown here onto individual participant surfaces. (C) Within- and across-domain classification accuracy scores for each region. DMN regions are left of the dashed line in cool colors, and VOT regions are right of the dashed line in warm colors; n = 16. Error bars are 95% CIs; **P < 0.005, tested by comparison to a null distribution derived from 5,000 permutations of individual trial labels.
Above-chance domain-general classification of aesthetic appreciation was present in most of the DMN subregions tested, although the strength of both the domain-general and domain-specific signals varied from region to region (Fig. 2 B and C, cool colors). The strongest within- and across-domain classification performance was observed in the aMPFC (within-domain 60.1%, 95% CI [57.5 62.8], average AUC 0.63, d = 2.03, P < 0.005; across-domain 58.3%, 95% CI [54.7 61.9], average AUC 0.61, d = 1.23, P < 0.005) and dorsomedial prefrontal cortex (dMPFC, within-domain 59.7%, 95% CI [57.4 61.9], average AUC 0.63, d = 2.30, P < 0.005; across-domain 58.5%, 95% CI [55.5 61.5], average AUC 0.61, d = 1.50, P < 0.005). Classifiers trained on data from the inferior parietal lobule (IPL) also performed well, both within (58.8%, 95% CI [56.2 61.4], average AUC 0.61, d = 1.81, P < 0.005) and across domains (55.8%, 95% CI [52.9 58.6], average AUC 0.58, d = 1.08, P < 0.005). Patterns of activation in posterior cingulate cortex (PCC, within-domain 56.8%, 95% CI [54.0 59.5], average AUC 0.59, d = 1.32, P < 0.005; across-domain 54.0%, 95% CI [51.6 56.4], average AUC 0.55, d = 0.89, P < 0.005) and lateral temporal cortex (LTC, within-domain 54.5%, 95% CI [51.8 57.1], average AUC 0.55, d = 0.90, P < 0.005; across-domain 53.5%, 95% CI [51.4 55.6], average AUC 0.54, d = 0.89, P < 0.005) also contained domain-general information, although to a lesser degree than the other DMN regions.
Surprisingly, the vMPFC did not show within-domain classification performance better than chance (52.1%, 95% CI [50.6 53.5], average AUC 0.52, d = 0.77, P = 0.37) and produced across-domain classification that was higher than chance performance but with a lower effect size (52.6%, 95% CI [51.0 54.2], average AUC 0.54, d = 0.84, P < 0.005).
Classifier Performance in Category-Selective VOT Regions.
Classifiers trained on signal from the category-selective regions of the VOT showed some evidence of domain selectivity, but little evidence of domain generality (Fig. 2 B and C, warm colors). All 3 category-selective regions showed average within-domain performance that was better than chance (fusiform face area [FFA] 55.6%, 95% CI [53.6 57.6], average AUC 0.57, d = 1.49, P < 0.005; parahippocampal place area [PPA] 54.5%, 95% CI [51.2 57.8], average AUC 0.55, d = 0.73, P < 0.005; ventral object area [VOA] 53.5%, 95% CI [51.3 55.7], average AUC 0.53, d = 0.83, P < 0.005), but did not differ from chance for average across-domain classification (FFA 51.5%, 95% CI [49.2 53.7], average AUC 0.52, d = 0.34, P = 0.45; PPA 50.9%, 95% CI [49.1 52.6], average AUC 0.51, d = 0.26, P = 1; VOA 50.7%, 95% CI [48.7 52.7], average AUC 0.51, d = 0.17, P = 1). A classifier trained on a large region of interest (ROI) covering the entire VOT performed better than the individual ROIs for both within-domain (57.5%, 95% CI [55.6 59.4], average AUC 0.61, d = 2.12, P < 0.005) and across-domain classifications (53.6%, 95% CI [51.3 55.9], average AUC 0.55, d = 0.84, P < 0.005), although at a level that was still inferior to individual regions of the DMN.
Comparison of Classifier Performance in DMN and VOT.
A comparison across all ROIs (Fig. 3) revealed that classifiers trained on data from DMN ROIs, despite a range of overall performance, tended to have similar domain-general and domain-specific performance, whereas classifiers trained on VOT ROIs tended to show better performance within domain than across domain. When within- and across-domain performances were plotted against each other, DMN ROIs tended to lie on or just below the diagonal (Fig. 3, cool colors), whereas VOT ROIs tended to lie significantly below the diagonal, closer to the line marking chance across-domain performance. These effects were not a consequence of the independent method of ROI identification: A complementary analysis of regions activated by all 3 domains versus a resting baseline also found domain-general behavior in MPFC but a more domain-specific signature in a large visual ROI covering the entire ventral visual pathway (SI Appendix, Fig. S1 and Supplementary Results).
Fig. 3. Characterization of domain-general and domain-specific ROI signatures. Each dot represents one ROI (color key same as Fig. 2; gray, DMN; purple, aMPFC; maroon, vMPFC; indigo, dMPFC; blue, PCC; green, IPL; light green, LTC; beige, VOT; yellow, PPA; orange, VOA; red, FFA). Average across-domain performance (y axis) is plotted against average within-domain performance (x axis). The DMN ROIs (cool colors) all tend to fall near the diagonal, illustrating a “domain-general” signature (similar across-domain and within-domain performance levels). The VOT ROIs (warm colors), however, tend to fall along the horizontal and are thus better characterized as “domain-specific” (significant within-domain performance but poor across-domain performance), with the exception of the overall VOT ROI (tan), which is above the horizontal line; n = 16. Error bars are 95% CIs.
This pattern—domain-general discriminability between high- and low-rated trials in the DMN but poorer, nongeneralized performance in VOT—stands in contrast to standard activation-based measures for these same regions (SI Appendix, Fig. S2). VOT regions were activated by images of all 3 domains (all P < 0.005, significance tested by group-level randomization test; see SI Appendix, Supplementary Results) and were generally more activated for high vs. low trials (VOT and PPA P < 0.005 for all domains; FFA P < 0.005 for architecture; VOA P < 0.005 for landscape and P < 0.05 for architecture). DMN regions, on the other hand, were deactivated by images of all 3 domains [all P < 0.005 except PCC landscape P < 0.05 and vMPFC all domains nonsignificant (n.s.)], and generally did not show differences in average activation for trials labeled high vs. low (all P n.s. except vMPFC architecture, P < 0.005). Specific tests of differences across aesthetic domains were significant only for Image vs. Baseline contrasts in VOT (all P < 0.005), but not for activation-based contrasts of high vs. low trials in VOT, nor for DMN.
Individual Variation in a Behavioral Distance Metric (d’) Predicted Performance of Domain-General Classification from DMN.
The ability to classify single trials as high or low aesthetic appeal based on domain-general activation patterns in the DMN varied from individual to individual. We performed an analysis to test whether this variation in classifier performance was related to the strength of observers’ aesthetic experiences. Behaviorally, we observed that some participants tended to use a much wider range of the rating scale than did others; these participants also tended to have longer response times (SI Appendix, Supplementary Results). The selection of 40 trials (10 from each run) as “high” and “low” for each domain in each observer, regardless of the underlying distribution, resulted in high and low distributions with different distances between them for different observers. This behavioral distance (expressed as a d’) strongly predicted how well classifiers trained on data from that participant’s DMN performed across domains [r = 0.73, linear regression R2 = 0.53, F(1,14) = 15.9, P = 0.0013; Fig. 4]. This suggests that differences in scale use accurately reflected the distinctiveness of brain states subsequently labeled as “high” and “low” aesthetic appeal, and that variation in classification accuracy largely reflected this distinctiveness. This relationship was specific for signal from the DMN: VOT classifier performance was not related to behavioral d’ [r = 0.27, R2 = 0.075, F(1,14) = 1.1, P = 0.31].
Fig. 4. Variability in DMN classifier performance reflects the strength of observers’ aesthetic experiences. For each observer, the distance between the top-rated trials (labeled “high”) and the bottom-rated trials (labeled “low”) was calculated based on behavioral ratings (d’). Therefore, observers with greater variability in their ratings produced higher d’ values. Separately, classifiers were trained on BOLD signal patterns from each observer’s DMN to distinguish trials labeled as “high” vs. “low.” Across observers, classifier performance (vertical axis) was strongly correlated with each observer’s behavioral d’ distance measure (horizontal axis); n = 16.
Cortical Topography of Domain-Specific and Domain-General Information.
As a complement to the characterization of specific a priori ROIs, cortical maps of across-domain and within-domain performance were generated using a “searchlight” technique (see Methods). Several clusters of better-than-chance across-domain classification performance were found on the medial surface of PFC in both hemispheres (Fig. 5A and SI Appendix, Table S1; dMPFC, aMPFC/rostral paracingulate [rPaC], right ACC/paracingulate [PaC]), consistent with the DMN ROI findings. Additional prefrontal clusters were found in left inferior frontal sulcus (IFS)/inferior frontal gyrus, pars opercularis (IFGop), right dorsolateral prefrontal cortex (dlPFC), right frontopolar lateral (FPl), and left OFC. In posterior cortex, significant clusters were observed on the left hemisphere in higher-level visual regions (medial occipital gyrus [MOG], occipitotemporal sulcus [OTS]/inferior occipital gyrus [IOG]) and posterior cingulate, as well as in bilateral early visual cortex (see SI Appendix, Table S1 for a complete listing of significant clusters). These results indicate that domain-general representations of aesthetic appeal exist not only across large-scale patterns of activity but also in local regions of cortex, including MPFC and OFC.
Fig. 5. Topography of domain-general and domain-specific information. (A) Average of the 6 across-domain classifier maps (orange), rendered on an average flattened cortical surface. (B) Three within-domain classifier maps and their overlap, as indicated by the color key. Outlines from the across-domain maps are shown in orange. The right hemisphere is on the left side. Maps were cluster-corrected for multiple comparisons using Monte Carlo simulations (P < 0.05); n = 16. Dark gray areas indicate regions of cortex where data from all participants were not available and were therefore not included in the map.
The 3 maps of within-domain classification performance (one each for art, architecture, and natural landscapes) contained several prefrontal clusters with domain-specific signatures (Fig. 5B and SI Appendix, Table S1). For art (red), these clusters were located in the frontopolar regions (left FPl, left frontopolar medial [FPm], bilateral dMPFC) and inferior frontal cortex (bilateral inferior frontal gyrus [IFGt], right IFS). For architecture (blue), clusters were observed in frontopolar regions (left FPm, right FPl/frontomarginal gyrus [FM], bilateral dMPFC, left superior frontal gyrus [SFG]), dorsolateral prefrontal (left MFG/IFS), lateral orbitofrontal, and medial prefrontal cortex (rostral anterior cingulate cortex [rACC]/rPaC). A comparison to the domain-general map (orange outlines in Fig. 5B) revealed several left hemisphere frontopolar clusters that are adjacent to, but not overlapping with, the domain-general regions. Similarly, the clusters in left OFC/IFG are near, but not overlapping with, a domain-general cluster. The landscape map contained 2 prefrontal domain-specific clusters on the lateral surface, in the precentral sulcus and inferior frontal cortex (IFS/IFGt).
In addition to these prefrontal clusters, the landscape map contained large domain-specific clusters bilaterally in VOT (Fig. 5B, green) and in some parts of early visual cortex that overlap with those observed in the domain-general map. Domain-specific clusters were also observed in visual regions for artwork (IOG, early visual) and architecture (bilateral MOG, early visual) that mostly, but not entirely, overlapped with domain-general clusters.
Discussion
Despite differences in how images of natural landscapes, architecture, and visual art are encoded and evaluated, the DMN contains a representation of aesthetic appeal that generalizes across these visual aesthetic domains. A series of multivariate classifiers were trained to distinguish trials rated as “high” vs. “low” aesthetic appeal using multivoxel patterns of BOLD signal from regions of the DMN. When the full network was used, robust performance was observed for classifiers trained and tested within the same domain as well as for classifiers trained and tested across different domains. This is strong evidence that the DMN contains a representation of aesthetic appeal that is domain-general. Additionally, domain-general classification performance in individual participants was strongly predicted by the distance (d’) between each person’s behavioral ratings of exemplars labeled “high” (top 27% of images in each domain) versus exemplars labeled “low” (bottom 27% of images in each domain), suggesting that variability in classifier performance reflected the psychological and neural span between the states evoked by the most and least aesthetically appealing images in each observer, rather than measurement noise. Strong within- and across-domain performance was also observed in several subregions of the DMN, namely aMPFC, dMPFC, and IPL. On the other hand, classifiers trained on data from VOT did not perform as well and were strongly domain-specific: The voxel patterns that predicted aesthetic appeal for one domain did not generalize to the other domains.
Maps of cortical domain-general and domain-specific signals, measured using a multivariate “searchlight” technique, confirmed the presence of a domain-general representation in aMPFC and dMPFC, and also revealed adjacent domain-specific regions in frontal pole and inferior/orbital frontal cortex, primarily for architecture and art. Domain specificity for natural landscapes, in contrast, was primarily found in the ventral visual pathway. Finally, additional cortical fields containing domain-general information about aesthetic appeal were identified in the lateral and orbital prefrontal cortex, as well as in posterior visual regions.
Unlike metaanalyses or activation analyses, across-domain classification based on voxelwise patterns is a very strong test of domain generality. A univariate activation analysis of the same regions, which found sensitivity to ratings in VOT but not in DMN, might have led one to conclude that VOT possessed greater sensitivity and domain generality for aesthetic appeal (despite discriminating domain identity). This discrepancy suggests that aesthetic appeal, like valence of nonaesthetic images (46), is encoded at a spatial scale that is smaller than brain regions, but still detectable at the voxel level, and may not have a consistent local topography from one person to the next (see also SI Appendix, Supplementary Results). Although high multivariate classification accuracy cannot prove that the identical neurons or columns are representing aesthetic appeal across different domains, it does show that individual voxels, in individual observers, respond to high versus low exemplars in a consistent manner regardless of aesthetic domain.
A Core System for Assessing Aesthetic Appeal.
It is notable that the DMN representation of aesthetic appeal generalized across both natural kinds (landscapes) and cultural artifacts (architecture and artwork). Recent behavioral findings suggest that aesthetic evaluations of natural kinds such as landscapes and faces rely on similar information across people, whereas evaluations of cultural artifacts are highly individual (32). Attractive faces and attractive landscapes engage a region in MPFC (likely overlapping with the DMN) in a similar manner (47). A potential interpretation of our results is that the DMN is part of a “core” system for assessing aesthetic appeal that is engaged by all domains.
Aesthetic experiences are integrative in nature, drawing on perception and imagery across multiple senses as well as on memories, emotion, and associated meanings. The DMN’s anatomical position at one extreme of a cortical hierarchy (59) makes it well positioned to integrate information across multiple brain systems. While it remains unclear whether DMN involvement is necessary for an aesthetic experience to be perceived as strongly moving, it is the case that its activity reflects engagement with artworks: DMN signal fluctuations “lock on” to aesthetically appealing artworks, but are independent for nonappealing artworks (60). In addition, MPFC damage has been shown to reduce the influence of certain types of affective information on aesthetic valuation (61). The current findings add to this understanding by showing that the DMN encodes the aesthetic appeal of visual images in a domain-general manner. Given the DMN’s strong link to assessments of self-relevance and self-referential processes (57, 62, 63), it is possible that the DMN’s engagement by and representation of aesthetically appealing events reflects an assessment of self-relevance—in this case, the potential self-relevance of an external object.
Alternatively, the DMN’s engagement during aesthetic experience may reflect its theorized role in the construction of mental scenes (48). Such mental imagery, involving an interplay of top-down information with bottom-up stimulus properties, is a key aspect of many aesthetic experiences (40, 64). This balance of activation between higher-tier visual regions and DMN regions likely depends on the degree to which an observer is able to recognize familiar content (65). While this study was not designed to tease apart responses to abstract versus representational artwork (out of 148 artworks, 11 were abstract), it is possible that aesthetic experiences with abstract artworks rely more on top-down processes of sense-making and imagery, consistent with the fact that aesthetic judgments for images of indeterminate content take longer than for representational images (66). Yet the ability to decode aesthetic appeal across aesthetic domains, including photographs of landscapes and architecture, suggests that even if the DMN activates differentially to representational or abstract content, the multivoxel patterns for images experienced as high vs. low appeal are similar.
Additionally, the fact that DMN contains information about aesthetic appeal for artworks and architecture suggests that the DMN is able to integrate information about nonperceptual aspects of aesthetic experience, as the low degree of shared taste for these domains indicates that visual features do not uniquely determine felt aesthetic appeal (32, 67, 68). It remains to be seen whether the domain-general aesthetic appeal observed in the DMN for the visual domains studied here would also generalize to nonvisual domains (music, poetry) or to highly conceptual experiences such as the appreciation of mathematical beauty.
The domain-specific regions identified in the searchlight analysis likely complement this core system. Interestingly, domain specificity for artwork and architecture was primarily observed in prefrontal regions. It is unlikely that these regions of cortex have functionality that is specifically relevant for artwork or architecture—a more parsimonious explanation is that these regions support the idiosyncratic aspects of personal taste and experience that underlie aesthetic assessments of cultural artifacts. The fact that domain specificity for landscapes was primarily observed in the ventral visual pathway is in line with the fact that aesthetic appreciation of landscapes was more consistent across people (SI Appendix, Supplementary Results), and therefore more closely related to semantic and structural information present in specific images.
We found a strong brain−behavior correlation between how strongly an individual participant discriminated between images labeled “high” and “low” aesthetic appeal (d’) and the ability to predict aesthetic appeal from the DMN. This finding is remarkable because the classifiers were trained solely with the labels “high” or “low”—they received no information about the actual rating given by the participant. That the rated distance between these 2 sets of trials correlated with classifier performance indicates that 1) the different use of the rating scale by different participants was not arbitrary, but actually reflected their psychological reality, and 2) this psychological difference was reflected in the discriminability of the associated multivoxel patterns in the DMN. A potential consequence is that overall classification accuracy was likely limited by the degree to which observers remained engaged with the images over the course of the entire experiment (444 images over 2 d). Selection of participants based on affinity for the domains selected, and presentation of fewer images in a session to fight potential “museum fatigue” (69), would likely result in higher classification accuracies.
The poor ability of signal from vMPFC to predict aesthetic appeal is surprising, given the number of studies in both aesthetics (36, 44, 47) and decision-making (34, 35, 70) that report valuation signals here. There are several possible explanations for this discrepancy. The first is the fact that we only selected voxels in the vMPFC that were also part of the DMN, as defined by individual maps derived from resting-state fMRI. This likely resulted in significantly smaller ROIs than a pure anatomical definition of the vMPFC. Second, there is a potential inconsistency in naming conventions across studies. We have used the term vMPFC to describe the medial part of the gyrus rectus, lying below the superior rostral sulcus (SRS) and including Brodmann areas 25 and 14, whereas others may include cortex above the SRS in their definition of vMPFC. Finally, there is also the issue of distortion and dropout in this region of the brain and inconsistent processing of these distortions by different research groups. To combat these issues, we used a state-of-the-art multiecho (ME) sequence to recover as much signal as possible from OFC/vMPFC and correct for distortions in a manner that is less prone to mixing and spreading of signals across this entire region. Clarification of the exact topography in this region will require that all researchers consistently report relevant imaging parameters, such as sequence type, phase-encoding direction, echo time (TE), and the method used for distortion correction.
Aesthetic Appeal vs. Visual Features in the Ventral Visual Pathway.
Despite the lack of domain generality in VOT, it is noteworthy that patterns of activation in this region were informative for predicting within-domain judgments of aesthetic appeal. While the ventral visual pathway is primarily viewed as important for extraction of visual characteristics of objects and scenes, a number of previous studies report correlations between signal in the ventral visual pathway and aesthetic appeal (52, 71–73). One possible explanation is that certain visual features are correlated with aesthetic appeal (at least on average), and it is these visual features that drive the observed correlations between brain activity and appeal. Yet activations within VOT (52) and VOT response patterns (this paper) appear to correlate with aesthetic appeal even for visual artworks and architecture, categories that produce very low interrater agreement (i.e., low “shared taste”) (32). One possibility is that these regions do extract specific visual features, but that observers are differentially sensitive to these features. This would maintain a relationship between attention to the feature and aesthetic appeal but also allow for different observers to express divergent tastes. Alternatively, these regions may not represent stable visual attributes in all observers, but may instead extract more subjective properties of visual experience that have a positive relationship to aesthetic appeal.
In addition to regions in prefrontal cortex and the ventral visual pathway, the searchlight analysis also identified early visual cortex as containing better-than-chance domain-general classification performance. It is unclear whether this decoding ability was a result of bottom-up stimulus-driven differences in activation patterns for high vs. low appeal images or of differential top-down modulation of early visual activity, such as by attention (74) or imagery (75). While the low across-observer agreement for individuals’ aesthetic ratings (SI Appendix, Supplementary Results) means that there was substantial overlap at the group level for images shown on “high” trials and “low” trials, this overlap was not 100%, leaving room for potential residual differences in low-level features. As both the natural landscape and architecture image sets were contrast-equalized, image contrast is unlikely to be a major factor.
Conclusions
Moving aesthetic experiences are highly integrative. Situated at the top of the cortical hierarchy, the DMN is in an ideal network position to integrate information across many sources. Using a strong test, we found evidence that the DMN represents aesthetic appeal in a domain-general manner. In contrast, higher-level visual regions were found to contain only weak and domain-specific information about aesthetic appeal. A searchlight analysis confirmed the MPFC as part of a putative “core” domain-general system for assessing visual aesthetic appeal, and identified additional domain-specific regions near the frontal poles that contained information relevant for aesthetic judgments of artifacts of human culture (architecture, artwork) but not for natural landscapes. While the exact role of the DMN in aesthetic appeal remains unclear, this work confirms that the DMN has access to detailed information about aesthetic appeal, and that this information is not confined to a single node of the DMN.
Methods
Participants.
Eighteen participants were recruited at New York University (NYU) and paid for their participation. Two participants were excluded due to excessive head motion that led to visible signal distortions and difficulties with registration, leaving a final group of 16 participants (10 female, 16 right-handed; 25.7 ± 6.3 y). All had normal or corrected-to-normal vision and no history of neurological disorders. Informed consent was obtained from all participants. All experimental procedures, including informed consent, were approved by the NYU Committee on Activities Involving Human Subjects.
Stimuli.
Images were presented using back-projection (Eiki LC-XG250 projector) onto a screen mounted in the scanner and viewed through a mirror on the head coil. Stimulus presentation was controlled by a Macintosh Pro running OS 10.6 and MATLAB R2011b (Mathworks) with Psychophysics Toolbox-3 extensions (http://psychtoolbox.org) (76, 77).
Visual art.
The set consisted of 148 photographs of visual artworks (paintings, collages, woven silks, excluding sculpture) sourced from the Catalog of Art Museum Images Online database (Fig. 1A). A subset of these (109) were used in a previous study (52). The set covered a variety of periods (fifteenth century to the twentieth), styles, and genres (landscape, portrait, abstract, still life), and represented diverse cultures of Europe, the Americas, and Asia. While all of the images were taken from museum collections, special care was taken to ensure that only lesser-known artworks were included. When necessary, stimuli were cropped to remove the artist’s signature. Due to the large differences in size and color content across different artworks, contrast equalization was not possible.
Architecture.
For architecture, 148 photographs (74 exterior, 74 interior) were selected, with the majority collected from ArtStor, a database of high-quality images representing multiple cultures and periods (Fig. 1B). Images containing people were excluded. Interior images were chosen to highlight architectural detail, not interior décor. An effort was also made to utilize exterior images that gave an impression of building detail as well as its place in a given setting, while excluding images that gave primary emphasis to features of the landscape. Images represented a variety of structures (e.g., skyscraper vs. single residence), styles (e.g., Gothic vs. classical), materials, and time periods. Images were cropped to a 4:3 (landscape) or 3:4 (portrait) aspect ratio and presented at a size of 13° of visual angle for the longer dimension. The images were contrast-equalized (SI Appendix, Supplementary Methods) and displayed using a linearized color look-up table.
Natural landscapes.
For natural landscapes, 148 photographs were obtained from a variety of sources, including the SUN Database (78), IMSI MasterClips, and MasterPhotos Premium Image Collection, and also from images publicly available on the internet (Fig. 1C). Images were cropped to a 4:3 aspect ratio and presented at 13° of visual angle for the horizontal dimension. The images were contrast-equalized (SI Appendix, Supplementary Methods) and displayed using a linearized color look-up table.
Procedure.
The experiment took place over 2 sessions. Participants were instructed in the task and given a short practice (10 trials) to familiarize them with the types of images they would be seeing, the task timing, and the response method. Participants were told that they would be viewing and evaluating images from 3 different aesthetic domains: natural landscapes, architecture, and visual art (Fig. 1 A–C). They were asked to rate how aesthetically “moving” they found each depicted scene/structure/artwork (SI Appendix, Supplementary Methods).
In each session, participants completed 6 experimental scans composed of 37 trials each. Each scan contained images of only a single domain (artwork, natural landscape, interior or exterior architecture) in order to allow participants to fully engage with this domain over a several-minute period. One session therefore contained 2 artwork scans, 2 landscape scans, 1 interior architecture scan, and 1 exterior architecture scan. Scan order was counterbalanced across participants, and image order within each scan was pseudorandomized and also counterbalanced by assigning alternating participants the reverse order of another participant. Each participant saw each image only once. Across the 2 sessions, this resulted in a total of 148 trials for each of the 3 domains.
Each trial began with a blinking 1-s fixation cross followed by image presentation for 4 s (Fig. 1D). After the image disappeared, a visual “slider” bar appeared on the screen, and participants had up to 4 s to indicate their response on a continuous scale (marked with “L” and “H” at the ends) using a trackball. The mapping between observer movement (up/down) and movement of the slider (left/right) was counterbalanced across participants to remove any confounds associated with direction of hand movement. This was followed by a variable intertrial interval drawn from a discrete approximation of an exponential function with a mean of 2.6 s. Each scan also included 20 s of blank screen at the beginning and 10 s at the end to allow for better baseline signal estimation and removal of T1 saturation effects.
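As an illustration of this timing scheme, the sketch below (in Python, not the MATLAB/Psychtoolbox code used in the study) shows one way to draw intertrial intervals from a discrete approximation of an exponential distribution with a mean near 2.6 s; the candidate durations and the weighting scheme are assumptions for illustration only.

```python
import numpy as np

def sample_itis(n_trials, mean_iti=2.6, candidates=(1.5, 2.5, 3.5, 4.5, 6.5), seed=0):
    """Draw ITIs from a discrete approximation of an exponential distribution.
    The candidate durations are illustrative assumptions; only the ~2.6-s mean
    is taken from the paper, and the realized mean is only approximate."""
    rng = np.random.default_rng(seed)
    durations = np.asarray(candidates, dtype=float)
    weights = np.exp(-durations / mean_iti)   # exponential weighting over the discrete support
    weights /= weights.sum()
    return rng.choice(durations, size=n_trials, p=weights)

itis = sample_itis(37)                        # one scan contained 37 trials
print(round(float(itis.mean()), 2))
```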
fMRI acquisition and reconstruction.
All fMRI scans took place at the NYU Center for Brain Imaging (CBI) using a 3T Siemens Allegra scanner with a Nova Medical head coil (NM011 head transmit coil). Whole-brain BOLD signal was measured from thirty-four 3-mm slices using a custom ME echo-planar imaging (EPI) sequence (2 s repetition time [TR], 80 × 64 matrix of 3-mm voxels, right-to-left phase encoding, flip angle = 75°). The ME EPI sequence and a tilted slice prescription (15° to 20° tilt relative to the anterior commissure–posterior commissure [AC–PC] line) were used to minimize dropout near the orbital sinuses. Cardiac and respiration signals were collected using Biopac hardware and AcqKnowledge software (Biopac). We collected a custom calibration scan to aid in ME reconstruction, unwarping, and alignment. ME EPI images were reconstructed using a custom algorithm designed by the NYU CBI to minimize dropout and distortion, and were tested for data quality (e.g., spikes, changes in signal-to-noise) using custom scripts.
Following the session 1 experimental runs, we collected a high-resolution (1 mm³) anatomical volume (T1 magnetization-prepared rapid gradient echo [MPRAGE]) for registration and segmentation using FreeSurfer (http://surfer.nmr.mgh.harvard.edu). Following the session 2 experimental runs, participants completed a 360-s eyes-open “rest” scan plus a 320-s visual localizer scan that contained blocks of objects, places, faces, and scrambled objects. The visual localizer scan was fully described in Vessel et al. (52).
Analysis.
Identification of individual DMN maps and DMN sub-ROIs.
Participant-specific maps of the DMN were obtained using the rest scan. Motion correction, high-pass filtering at 0.005 Hz, and spatial smoothing with 6-mm FWHM Gaussian filter were applied using the FMRIB Software Library (FSL, https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/). Independent component analysis (ICA) was then performed on individual subjects’ scans using FSL's Multivariate Exploratory Linear Optimized Decomposition into Independent Components (MELODIC) tool. MELODIC determines the appropriate size of the lower-dimensional space using the Laplace approximation to the Bayesian evidence of the model order (79, 80). This process resulted in an average of 21.88 spatial components (SD = 3.70) for each subject. These ICA components were spatially thresholded at a z-score cutoff of 2.3, moved into MNI standard space, and compared to a set of predefined network maps (81) using Pearson correlation. The component with the highest correlation to the Smith et al. (81) DMN map was then visually inspected to ensure that its spatial distribution appeared similar to the canonical DMN. For 3 participants, the DMN was split between 2 ICA components similar to those seen in ref. 48, which were combined to form a single map. The final DMN ROI for each subject was then defined as the voxels from this component that also belonged to gray matter (as defined by the FreeSurfer gray matter segmentation).
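The component-selection step can be summarized with a short sketch. This is not the authors' FSL-based pipeline; it is a minimal Python/NumPy illustration, assuming the MELODIC component maps and a canonical DMN template (e.g., one of the Smith et al. maps) are available as NIfTI volumes in the same MNI space, with hypothetical file names.

```python
import numpy as np
import nibabel as nib  # assumption: maps have been resampled into a common MNI space

def pick_dmn_component(melodic_ic_path, template_path, z_thresh=2.3):
    """Select the ICA component whose spatially thresholded map correlates
    most strongly (Pearson r) with a canonical DMN template."""
    ic_maps = nib.load(melodic_ic_path).get_fdata()     # shape: x, y, z, n_components
    template = nib.load(template_path).get_fdata().ravel()
    best_idx, best_r = -1, -np.inf
    for k in range(ic_maps.shape[-1]):
        comp = ic_maps[..., k].ravel().copy()
        comp[np.abs(comp) < z_thresh] = 0.0             # z-score cutoff of 2.3 (applied to |z| here)
        r = np.corrcoef(comp, template)[0, 1]
        if r > best_r:
            best_idx, best_r = k, r
    return best_idx, best_r

# Usage (hypothetical file names):
# idx, r = pick_dmn_component("melodic_IC_mni.nii.gz", "smith_dmn_template.nii.gz")
```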
Volumetric DMN maps were transformed to participant-specific surface space. The following 6 subregions of the DMN were then identified in both hemispheres from the DMN ICA maps of individual subjects by combining anatomically defined boundaries with the participant-specific DMN maps: aMPFC, dMPFC, vMPFC, PCC, IPL, and LTC (see SI Appendix, Supplementary Methods for additional information).
Identification of VOT ROIs.
For comparison to the DMN, 3 category-selective regions of the VOT were identified using standard methods (SI Appendix, Supplementary Methods).
Finally, an overall VOT ROI was created by identifying a larger region of the VOT that included all 3 of these category-selective ROIs plus the voxels between them (82). This ROI was first drawn on the FreeSurfer fsaverage cortical surface bilaterally using category-selective ROIs from 3 representative participants as a guide and then projected onto individual hemispheres. The borders of the resulting ROI (tan-colored region in Fig. 2B) extended from the depth of the occipitotemporal sulcus laterally to the center of the lingual/parahippocampal gyrus medially, and from the posterior collateral transverse sulcus to the anterior collateral transverse sulcus (approximate MNI coordinates y = −72 to y = −31). All ROIs were combined across left and right hemispheres to form bilateral ROIs.
ROI classification analysis of aesthetic appreciation.
A beta-series general linear model (83) implemented using custom MATLAB code was used to extract an estimate of response amplitude for each trial, at each voxel in gray matter. Experimental scans were first preprocessed using FSL to correct for motion, align data across scans, and apply a high-pass filter (0.01-Hz cutoff). Nuisance signals were then removed from the BOLD time courses of all 12 scans by projecting out motion estimates and nuisance time series derived from a second-order Taylor series expansion of cardiac and respiration measurements (RETROICOR method) (84). The cleaned time courses were then converted to percent signal change. Individual trials were modeled using a 4-s “on” period convolved with a canonical hemodynamic response function from the SPM12 Toolbox (Wellcome Trust Centre for Neuroimaging, University College London), and the resulting trial-wise amplitude estimates were z-scored separately for each scan.
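A minimal sketch of such a beta-series design is given below, in Python rather than the authors' custom MATLAB code: one boxcar regressor per 4-s trial is convolved with an approximate double-gamma HRF, per-trial amplitudes are estimated by ordinary least squares, and the resulting betas are z-scored within the scan. The HRF parameters and the omission of nuisance regressors are simplifying assumptions.

```python
import numpy as np
from scipy.stats import gamma

def double_gamma_hrf(tr, duration=32.0):
    """Approximate canonical (double-gamma) HRF sampled at the TR
    (peak near 6 s, undershoot near 16 s; parameters are approximate)."""
    t = np.arange(0.0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

def beta_series(bold, onsets, tr=2.0, trial_dur=4.0):
    """bold: (n_timepoints, n_voxels) percent-signal-change data from one scan.
    onsets: trial onset times in seconds. Returns (n_trials, n_voxels) per-trial
    amplitude estimates, z-scored across trials within the scan."""
    n_tp = bold.shape[0]
    hrf = double_gamma_hrf(tr)
    X = np.zeros((n_tp, len(onsets) + 1))
    X[:, -1] = 1.0                                               # intercept column
    for i, onset in enumerate(onsets):
        boxcar = np.zeros(n_tp)
        start = int(round(onset / tr))
        boxcar[start:start + int(round(trial_dur / tr))] = 1.0   # 4-s "on" period
        X[:, i] = np.convolve(boxcar, hrf)[:n_tp]                # one regressor per trial
    betas = np.linalg.lstsq(X, bold, rcond=None)[0][:-1]         # drop the intercept row
    return (betas - betas.mean(axis=0)) / betas.std(axis=0)      # z-score per voxel within scan
```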
A series of participant-specific classifiers were then trained to distinguish “high” versus “low” aesthetic appreciation trials using the beta-series amplitude estimates from all voxels in an individual ROI (Fig. 1E). This analysis used the 10 trials with the highest rating (out of 37; top 27%) and the 10 trials with the lowest rating from each scan (bottom 27%). “Within-domain” classification performance was evaluated by training logistic regression classifiers (logreg.m from Princeton MVPA Toolbox, https://github.com/princetonuniversity/princeton-mvpa-toolbox) with high and low trials from 3 scans of one domain and then measuring classification accuracy on the high and low trials of 1 held-out scan of the same domain (4-fold cross-validation). The classifier penalty parameter was set to equal 5% of the total number of voxels in an ROI. “Across-domain” classification performance was evaluated by training on high and low trials from 3 scans of one domain and then measuring classification accuracy on high and low trials in 1 scan from a different domain (again with 4-fold cross-validation). This resulted in a 3-by-3 matrix of classification scores with within-domain scores along the diagonal and across-domain scores off the diagonal.
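For concreteness, a sketch of this train/test scheme is shown below using scikit-learn's LogisticRegression in place of the toolbox's logreg.m. The arrays betas, labels, folds, and domains are placeholders for the preprocessed data described above, and the mapping of the toolbox's penalty parameter onto scikit-learn's C is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cross_domain_accuracy(betas, labels, folds, domains, train_dom, test_dom, penalty):
    """betas: (n_trials, n_voxels) beta-series estimates for the selected trials;
    labels: 1 = "high", 0 = "low"; folds: scan index (0-3) within each domain;
    domains: per-trial domain name (all NumPy arrays). For each of 4 folds, train
    on the other 3 scans of train_dom and test on the held-out scan of test_dom."""
    accuracies = []
    for f in range(4):
        train = (domains == train_dom) & (folds != f)
        test = (domains == test_dom) & (folds == f)
        # Assumption: approximate the toolbox's ridge penalty via C = 1 / penalty.
        clf = LogisticRegression(C=1.0 / penalty, max_iter=1000)
        clf.fit(betas[train], labels[train])
        accuracies.append(clf.score(betas[test], labels[test]))
    return float(np.mean(accuracies))

# Filling the 3 x 3 matrix (diagonal = within-domain, off-diagonal = across-domain):
# doms = ["art", "architecture", "landscape"]
# perf = [[cross_domain_accuracy(betas, labels, folds, domains, a, b,
#                                penalty=0.05 * betas.shape[1]) for b in doms] for a in doms]
```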
The 3 within-domain scores along the diagonal were averaged for each participant to produce an overall within-domain score, and the 6 off-diagonal across-domain scores were also averaged to produce an overall across-domain score. Maximum likelihood estimation was then used to compute the average and 95% confidence interval for both of these scores, across all 11 ROIs. An AUC measure was also computed for each ROI and participant by averaging probabilistic classifier predictions. Significance of classification performance across all participants was assessed through permutation testing. For each subject, 5,000 permutations of the high/low trialwise labels were computed and used to generate a null distribution of the 2 summary statistics in each ROI. We tested the hypothesis that classification performance was better than chance (one-tailed) at 2 critical alpha values, 0.05 and 0.005, Bonferroni-corrected for 22 total tests (11 ROIs by 2 statistics).
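The permutation test can be sketched as follows (a generic Python illustration, not the authors' code): the high/low labels are shuffled, the full classification pipeline is rerun via a user-supplied score_fn, and the observed score is compared against the resulting null distribution with a one-tailed test.

```python
import numpy as np

def permutation_p(observed, score_fn, labels, n_perm=5000, seed=0):
    """One-tailed permutation p-value: the proportion of label permutations whose
    score meets or exceeds the observed score (+1 correction so p is never zero).
    score_fn(permuted_labels) should rerun the classification with shuffled labels."""
    rng = np.random.default_rng(seed)
    null = np.array([score_fn(rng.permutation(labels)) for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

# One way to apply the Bonferroni correction for 22 tests (11 ROIs x 2 statistics):
# report significance at alpha = 0.05 if p < 0.05 / 22, and at alpha = 0.005 if p < 0.005 / 22.
```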
Searchlight analysis.
Maps of classification performance were computed across the cortical surface using a “searchlight” approach. For each participant, linear support-vector machine classifiers were trained and tested in a series of 5-mm-radius spheres (thirty-three 3-mm voxels) centered on each voxel where data were collected. Three within-domain maps and 6 across-domain maps were created from the average of a 4-fold cross-validation procedure (see above) for each participant. Maps of above-chance performance (>0.5) for the average of the across-domain classifiers (e.g., train on art, test on landscape) and all 3 within-domain classifiers (art, architecture, landscape) were created for each participant, and then averaged across participants on the FreeSurfer “fsaverage” surface. The resulting 4 maps were corrected for multiple comparisons using clusterwise thresholding derived from Monte Carlo simulations (mri_glmfit-sim; ref. 85) with a voxelwise threshold of P < 0.001 and a clusterwise threshold of P < 0.05 across the entire cortical surface (2 hemispheres).
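A simplified volumetric version of such a searchlight is sketched below in Python with scikit-learn; the actual analysis was surface-based and used the leave-one-scan-out scheme described above, so the neighborhood construction and the generic 4-fold cross-validation here are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sphere_offsets(radius_mm=5.0, voxel_size=3.0):
    """Integer voxel offsets whose centers fall within the given radius.
    (The exact voxel count depends on how the radius is measured; the paper
    reports 33 voxels per 5-mm sphere.)"""
    r = int(np.ceil(radius_mm / voxel_size))
    offsets = [(i, j, k)
               for i in range(-r, r + 1)
               for j in range(-r, r + 1)
               for k in range(-r, r + 1)
               if (i**2 + j**2 + k**2) * voxel_size**2 <= radius_mm**2]
    return np.array(offsets)

def searchlight_accuracy(data4d, mask, labels):
    """data4d: (x, y, z, n_trials) beta estimates; mask: boolean brain mask.
    Returns a volume of cross-validated linear-SVM accuracies, one per center voxel."""
    offsets = sphere_offsets()
    acc = np.full(mask.shape, np.nan)
    for center in zip(*np.nonzero(mask)):
        vox = offsets + np.array(center)
        inside = np.all((vox >= 0) & (vox < np.array(mask.shape)), axis=1)
        vox = vox[inside]
        X = data4d[vox[:, 0], vox[:, 1], vox[:, 2], :].T          # trials x sphere voxels
        acc[center] = cross_val_score(SVC(kernel="linear"), X, labels, cv=4).mean()
    return acc
```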
Correlation with behavioral discrimination.
The distribution of rating responses differed from observer to observer; some generated largely bimodal distributions, some uniform, and some strongly unimodal near the neutral point. Therefore, the rating distributions for images subsequently labeled “high” (top 40 of 148 in each domain) and those labeled “low” (bottom 40 of 148 in each domain) also differed across observers, with some showing a greater separation than others. In order to quantify this individual variability, a distance measure (d’; i.e., the distance between the means of the 2 distributions, rescaled in units of SD) was computed between the distributions of raw ratings for “high” and “low” labeled trials for each participant, in each aesthetic domain. The average d’ scores across the 3 domains were then used in a linear regression analysis as predictors for DMN domain-general classification performance. Data are available at https://dx.doi.org/10.17617/3.2r (86).
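A sketch of the behavioral distance measure and the brain−behavior regression is given below (Python, illustrative only); the pooled-SD form of d’ is an assumption, since the paper specifies only a rescaling in units of SD.

```python
import numpy as np

def behavioral_dprime(high_ratings, low_ratings):
    """Separation between the 'high' and 'low' rating distributions, expressed
    in units of their pooled SD (one plausible d' definition; the paper's exact
    rescaling is described only as 'units of SD')."""
    pooled_sd = np.sqrt((np.var(high_ratings, ddof=1) + np.var(low_ratings, ddof=1)) / 2)
    return (np.mean(high_ratings) - np.mean(low_ratings)) / pooled_sd

def regress_accuracy_on_dprime(dprimes, accuracies):
    """Simple linear regression of DMN across-domain accuracy on behavioral d'.
    Returns slope, intercept, and R^2."""
    dprimes, accuracies = np.asarray(dprimes), np.asarray(accuracies)
    slope, intercept = np.polyfit(dprimes, accuracies, 1)
    residuals = accuracies - (slope * dprimes + intercept)
    r2 = 1 - residuals.var() / accuracies.var()
    return slope, intercept, r2
```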
Acknowledgments
We thank Tyra Lindstrom, Lucy Owen, Oriana Neidecker, and Melanie Wen for help with stimulus and data collection; Jean-Remi King for analysis advice; and David Poeppel for feedback on a previous version of this manuscript. This work was supported by an NYU Research Challenge Fund Grant to G.G.S. and E.A.V.
Footnotes
- To whom correspondence may be addressed. Email: ed.vessel@ae.mpg.de.
Author contributions: E.A.V., J.L.S., and G.G.S. designed research; E.A.V., A.M.B., and J.L.S. performed research; E.A.V., A.I.I., and A.M.B. contributed new reagents/analytic tools; E.A.V. and A.I.I. analyzed data; and E.A.V., A.I.I., A.M.B., J.L.S., and G.G.S. wrote the paper.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission. M.B. is a guest editor invited by the Editorial Board.
Data deposition: MRI and behavioral data are posted in anonymized form on Edmond (https://edmond.mpdl.mpg.de), the Open Access Data Repository of the Max Planck Society, and can be accessed at https://dx.doi.org/10.17617/3.2r.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1902650116/-/DCSupplemental.
Published under the PNAS license.