Research Article

The default-mode network represents aesthetic appeal that generalizes across visual domains

Edward A. Vessel, Ayse Ilkay Isik, Amy M. Belfi, Jonathan L. Stahl, and G. Gabrielle Starr
  a. Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany;
  b. Department of Psychological Science, Missouri University of Science and Technology, Rolla, MO 65409;
  c. Department of Psychology, The Ohio State University, Columbus, OH 43210;
  d. Departments of English and Neuroscience, Pomona College, Claremont, CA 91711


PNAS September 17, 2019, 116 (38) 19155–19164; first published September 4, 2019; https://doi.org/10.1073/pnas.1902650116
Edited by Moshe Bar, Bar-Ilan University, Ramat Gan, Israel, and accepted by Editorial Board Member Michael S. Gazzaniga July 15, 2019 (received for review February 13, 2019)


Significance

Despite being highly subjective, aesthetic experiences are powerful moments of interaction with one’s surroundings, shaping behavior, mood, beliefs, and even a sense of self. The default-mode network (DMN), which sits atop the cortical hierarchy and has been implicated in self-referential processing, is typically suppressed when a person engages with the external environment. Yet not only is the DMN surprisingly engaged when one finds a visual artwork aesthetically moving; here we present evidence that the DMN also represents aesthetic appeal in a manner that generalizes across visual aesthetic domains, such as artworks, landscapes, or architecture. This stands in contrast to ventral occipitotemporal cortex (VOT), which represents the content of what we see, but does not contain domain-general information about aesthetic appeal.

Abstract

Visual aesthetic evaluations, which impact decision-making and well-being, recruit the ventral visual pathway, subcortical reward circuitry, and parts of the medial prefrontal cortex overlapping with the default-mode network (DMN). However, it is unknown whether these networks represent aesthetic appeal in a domain-general fashion, independent of domain-specific representations of stimulus content (artworks versus architecture or natural landscapes). Using a classification approach, we tested whether the DMN or ventral occipitotemporal cortex (VOT) contains a domain-general representation of aesthetic appeal. Classifiers were trained on multivoxel functional MRI response patterns collected while observers made aesthetic judgments about images from one aesthetic domain. Classifier performance (high vs. low aesthetic appeal) was then tested on response patterns from held-out trials from the same domain to derive a measure of domain-specific coding, or from a different domain to derive a measure of domain-general coding. Activity patterns in category-selective VOT contained a degree of domain-specific information about aesthetic appeal, but did not generalize across domains. Activity patterns from the DMN, however, were predictive of aesthetic appeal across domains. Importantly, the ability to predict aesthetic appeal varied systematically; predictions were better for observers who gave more extreme ratings to images subsequently labeled as “high” or “low.” These findings support a model of aesthetic appreciation whereby domain-specific representations of the content of visual experiences in VOT feed into a “core” domain-general representation of visual aesthetic appeal in the DMN. Whole-brain “searchlight” analyses identified additional prefrontal regions containing information relevant for appreciation of cultural artifacts (artwork and architecture) but not landscapes.

  • visual aesthetics
  • default-mode network
  • artwork
  • architecture
  • natural landscape

Aesthetic appeal is a fundamental aspect of human experience that touches many areas of life. Aesthetic considerations affect purchasing decisions (1–3), choice of leisure time activities (4–6), stress levels (7–9), recovery from illness (10, 11), and well-being more generally (12–15). Aesthetics affect central aspects of perception and cognition, including visual orienting (16), learning (17, 18), and valuation (19, 20), and also affect our interactions with other people, from evaluations of attractiveness to judgments of trustworthiness and competence (21).

An aesthetic experience is, in general, a perceptual experience that is evaluative and affectively absorbing and engages comprehension (meaning) processes (22, 23). Aesthetic experiences often have a significant conceptual component, such as during encounters with conceptual art or abstract mathematical problem solving (24, 25), and may also emerge in response to imagined objects. Such experiences may include feelings of pleasure or beauty from engaging with an object, and judgments of liking or attractiveness, but can also include more complex responses such as being moved and feelings of awe and the sublime (26, 27).

People have strong aesthetic experiences with images from a variety of visual aesthetic domains, such as landscapes, faces, architecture, and artwork. Yet the perceptual features that support such aesthetic experiences are domain-specific. For example, the features that support perception and aesthetic appreciation of a mountain vista are different from those used to perceive and aesthetically evaluate a Gothic church (28–31). Recent behavioral studies support a distinction between aesthetic appraisals of natural kinds, such as faces and landscapes, versus those of cultural artifacts, such as artwork and architecture: Aesthetic appraisals of cultural artifacts are highly individual, whereas assessments of faces and landscapes tend to contain a strong degree of “shared taste” (32, 33).

Is there a domain-general brain system that represents aesthetic appeal regardless of aesthetic domain, or, alternatively, do different aesthetic domains engage domain-specific processes for aesthetic valuation? A number of studies on decision-making point to a region of the ventral or anterior medial prefrontal cortex (vMPFC/aMPFC) as containing a domain-general representation of value (using money, food, water, arousing pictures, social feedback) (34, 35), and this same brain region has been shown to be important for aesthetic appeal (36–38). However, the relationship between aesthetic appeal and valuation is not straightforward. Aesthetic considerations represent only a subset of inputs to valuation and subsequent decision-making (e.g., ref. 19), and aesthetic judgments can be made in the absence of approach motivation, a signature of reward (39). In addition, aesthetic experiences need not be based on either primary rewards or simple associations with primary rewards, such as when imagery (40) or information foraging, sense making, and uncertainty reduction (41–43) lead to pleasing aesthetic experiences.

Yet if aesthetic appreciation is derived from evolutionarily older processes of object appraisal, there should be overlap in the neural mechanisms for the evaluation of a variety of aesthetic domains, at least partially coinciding with a general system for representing subjective value. Indeed, a functional MRI (fMRI) study of beauty judgments of music and visual art found overlap at the group average level in a single region in vMPFC/ACC (anterior cingulate cortex) (44). Conversely, a metaanalysis of over 93 studies of pleasurable experiences across 4 sensory modalities (vision, taste, hearing, and smell) did not find evidence for overlap in either vMPFC/aMPFC or striatum, and instead identified the anterior insula as a region with the highest overlap (in 3 of 4 domains) (45). A pattern of adjacency, rather than overlap, was seen in orbitofrontal cortex (OFC) and vMPFC. Unfortunately, the methods used in both of these studies were inadequate for resolving the question of whether there was true domain-generality at the level of individual observers: Adjacent but nonoverlapping activations at the level of individuals can lead to spurious overlap at the group or metaanalytic levels, given the difficulty in precisely aligning cortical surfaces across observers and studies.

A recent study using a more precise multivoxel pattern analysis (MVPA) method found evidence for coding of valence in OFC and vMPFC for both strongly valenced (but not “aesthetic”) visual images and gustatory stimuli (46). In the aesthetic domain, Pegors et al. (47) found evidence for overlapping activations for positive aesthetic judgments of landscapes and faces in regions of the vMPFC, as well as domain-specific regions in other anterior parts of the brain. Yet, given the distinction between behavioral responses to natural vs. artifactual aesthetic domains (see above), it is notable that none of these studies addresses this distinction.

In addition to their importance for valuation, the vMPFC and aMPFC are also central nodes of the default-mode network (DMN), a set of interconnected brain regions that are engaged by tasks that require inwardly directed attention or an assessment of self-relevance (48–51). Surprisingly, the DMN is responsive to strongly moving aesthetic experiences with visual artwork (52, 53). Several nodes of the DMN, which are typically suppressed when attention is directed to an external object such as a visual image (54–56), were found to be released from suppression when an observer viewed paintings rated as strongly aesthetically moving, but not for paintings rated as merely pleasing or as not aesthetically appealing. This effect was particularly strong in the aMPFC, a “hub” region of the DMN (48), but was also present in other nodes.

We tested the hypothesis that the DMN contains a representation of aesthetic appeal that generalizes across different visual domains, using a strong multivariate test. We trained classifiers on voxelwise patterns of fMRI blood-oxygen-level-dependent (BOLD) activity to distinguish trials of high versus low aesthetic appeal for one domain (artworks, natural landscapes, or architecture; Fig. 1), and tested the classifiers’ ability to predict observers’ judgments on the other domains. Given the hypothesized importance of DMN regions for representing and processing self-relevant, internally directed information (51, 57), we predicted that classifiers trained on data from the DMN would perform well on across-domain classifications, providing strong evidence for a domain-general representation. Alternatively, it is possible that the DMN represents information about a variety of aesthetic domains, but in a manner that is domain-specific. This would lead to accurate classification within domain, but poor across-domain generalization. Lastly, it is also possible that only specific subregions of the DMN such as vMPFC or aMPFC, but not the DMN as a whole, contain domain-general representations of aesthetic appeal.

Fig. 1.

Stimuli and experimental design. Examples of images used in the experiment: (A) visual art, (B) interior and exterior architecture, and (C) natural landscapes. (D) Each trial began with a fixation point (1 s), followed by an image of an artwork (4 s) and a rating period (4 s) during which the observer used a trackball to indicate their response on a visual slider. (E) Classifiers were trained on multivoxel patterns of trialwise (beta-series) estimates taken from individual observer ROIs. Three sets of “within-domain” classification scores and 6 sets of “across-domain” classification scores were derived for each observer using cross-validation. (Fig. 1 A, Left) Reprinted from ref. 87. (Fig. 1 A, Right) Reprinted from ref. 88. (Fig. 1 B, Left) Image courtesy of Alec Hartill (photographer). (Fig. 1 B, Right) Image courtesy of R. Hoekstra (photographer). (Fig. 1C) Images are examples from the SUN database, https://groups.csail.mit.edu/vision/SUN/.

Additionally, we used a “searchlight” multivariate mapping technique to identify regions of cortex that contained domain-specific representations of aesthetic appeal for artifacts of human culture (artworks, architecture) or natural landscapes. Similar approaches have been used to understand brain systems for valuation of money, food, and consumer goods (20, 34, 35, 58). While this approach has previously been extended to the aesthetic domain (47), no direct comparison between natural and artifactual aesthetic domains has been made, nor has there been any attempt to topographically map domain-specific and domain-general aesthetic processing across the cortex.

We found evidence for a domain-general representation of aesthetic appeal in the DMN, whereas higher-level visual regions (ventral occipitotemporal cortex, VOT) exhibited a domain-specific pattern. Importantly, our ability to classify trials as high or low in aesthetic appeal on the basis of fMRI signal was strongly related to the degree to which each observer behaviorally distinguished those high versus low trials. The searchlight analysis revealed several additional regions in prefrontal cortex that contained information about aesthetic appeal of art and architecture, whereas domain-specific information about aesthetic appeal of landscapes was mostly confined to the ventral visual pathway.

Results

Classifier Performance in the DMN.

Patterns of multivoxel activity across the entire DMN contained a strong signature of domain-general aesthetic processing (Fig. 2). The 3 within-domain classifiers (Fig. 2A) achieved an average classification of high vs. low preferred trials at a rate of 63.8% (95% CI [61.4 66.4], average area under the curve [AUC] 0.68, d = 2.78; Fig. 2C, gray bars), which was highly significant compared to chance performance of 50% (permutation testing of individual trial labels, P < 0.005). The 6 across-domain classifiers also performed well above chance, with an average classification rate of 60.7% (95% CI [57.2 64.3], average AUC 0.64, d = 1.61, P < 0.005).

Fig. 2.

Predicting aesthetic appeal from activity patterns in DMN and VOT. (A) For each region, a set of classifiers was trained on voxelwise activity patterns from trials of one stimulus domain and tested on separate trials of another domain to produce a 3 × 3 performance matrix. Classification performance (percent correct) was averaged across the 3 diagonal elements to produce a within-domain accuracy score, and the 6 off-diagonal elements were averaged to produce an across-domain score. (B) Six bilateral DMN ROIs (Left) were identified in individual participants: aMPFC, vMPFC, dMPFC, PCC, IPL, and LTC. The regions shown here are the “master” ROIs drawn on an average template brain; these regions were then used to mask individual DMN maps derived from a separate “rest” scan. In addition, 3 category-selective, bilateral VOT regions (Right) were identified in individual participants using a separate object/face/place localizer scan (approximate location on the average template brain shown here): PPA, VOA, and FFA. An overall DMN mask was created by summing all 6 DMN subregions together, and an overall VOT mask was created by projecting the larger VOT region shown here onto individual participant surfaces. (C) Within- and across-domain classification accuracy scores for each region. DMN regions are left of the dashed line in cool colors, and VOT regions are right of the dashed line in warm colors; n = 16. Error bars are 95% CIs; **P < 0.005, tested by comparison to a null distribution derived from 5,000 permutations of individual trial labels.

Above-chance domain-general classification of aesthetic appreciation was present in most of the DMN subregions tested, although the strength of both the domain-general and domain-specific signals varied from region to region (Fig. 2 B and C, cool colors). The strongest within- and across-domain classification performance was observed in the aMPFC (within-domain 60.1%, 95% CI [57.5 62.8], average AUC 0.63, d = 2.03, P < 0.005; across-domain 58.3%, 95% CI [54.7 61.9], average AUC 0.61, d = 1.23, P < 0.005) and dorsomedial prefrontal cortex (dMPFC, within-domain 59.7%, 95% CI [57.4 61.9], average AUC 0.63, d = 2.30, P < 0.005; across-domain 58.5%, 95% CI [55.5 61.5], average AUC 0.61, d = 1.50, P < 0.005). Classifiers trained on data from the inferior parietal lobule (IPL) also performed well, both within (58.8%, 95% CI [56.2 61.4], average AUC 0.61, d = 1.81, P < 0.005) and across domains (55.8%, 95% CI [52.9 58.6], average AUC 0.58, d = 1.08, P < 0.005). Patterns of activation in posterior cingulate cortex (PCC, within-domain 56.8%, 95% CI [54.0 59.5], average AUC 0.59, d = 1.32, P < 0.005; across-domain 54.0%, 95% CI [51.6 56.4], average AUC 0.55, d = 0.89, P < 0.005) and lateral temporal cortex (LTC, within-domain 54.5%, 95% CI [51.8 57.1], average AUC 0.55, d = 0.90, P < 0.005; across-domain 53.5%, 95% CI [51.4 55.6], average AUC 0.54, d = 0.89, P < 0.005) contained a degree of domain-general decoding information, although less than in the other regions.

Surprisingly, the vMPFC did not show within-domain classification performance better than chance (52.1%, 95% CI [50.6 53.5], average AUC 0.52, d = 0.77, P = 0.37) and produced across-domain classification that was higher than chance performance but with a lower effect size (52.6%, 95% CI [51.0 54.2], average AUC 0.54, d = 0.84, P < 0.005).

Classifier Performance in Category-Selective VOT Regions.

Classifiers trained on signal from the category-selective regions of the VOT showed some evidence of domain selectivity, but little evidence of domain generality (Fig. 2 B and C, warm colors). All 3 category-selective regions showed average within-domain performance that was better than chance (fusiform face area [FFA] 55.6%, 95% CI [53.6 57.6], average AUC 0.57, d = 1.49, P < 0.005; parahippocampal place area [PPA] 54.5%, 95% CI [51.2 57.8], average AUC 0.55, d = 0.73, P < 0.005; ventral object area [VOA] 53.5%, 95% CI [51.3 55.7], average AUC 0.53, d = 0.83, P < 0.005), but did not differ from chance for average across-domain classification (FFA 51.5%, 95% CI [49.2 53.7], average AUC 0.52, d = 0.34, P = 0.45; PPA 50.9%, 95% CI [49.1 52.6], average AUC 0.51, d = 0.26, P = 1; VOA 50.7%, 95% CI [48.7 52.7], average AUC 0.51, d = 0.17, P = 1). A classifier trained on a large region of interest (ROI) covering the entire VOT performed better than the individual ROIs for both within-domain (57.5%, 95% CI [55.6 59.4], average AUC 0.61, d = 2.12, P < 0.005) and across-domain classifications (53.6%, 95% CI [51.3 55.9], average AUC 0.55, d = 0.84, P < 0.005), although at a level that was still inferior to individual regions of the DMN.

Comparison of Classifier Performance in DMN and VOT.

A comparison across all ROIs (Fig. 3) revealed that classifiers trained on data from DMN ROIs, despite a range of overall performance, tended to have similar domain-general and domain-specific performance, whereas classifiers trained on VOT ROIs tended to show better performance within domain than across domain. When within- and across-domain performances were plotted against each other, DMN ROIs tended to lie on or just below the diagonal (Fig. 3, cool colors), whereas VOT ROIs tended to lie significantly below the diagonal, closer to the line marking chance across-domain performance. These effects were not a consequence of the independent method of ROI identification: A complementary analysis of regions activated by all 3 domains versus a resting baseline also found domain-general behavior in MPFC but a more domain-specific signature in a large visual ROI covering the entire ventral visual pathway (SI Appendix, Fig. S1 and Supplementary Results).

Fig. 3.

Characterization of domain-general and domain-specific ROI signatures. Each dot represents one ROI (color key same as Fig. 2; gray, DMN; purple, aMPFC; maroon, vMPFC; indigo, dMPFC; blue, PCC; green, IPL; light green, LTC; beige, VOT; yellow, PPA; orange, VOA; red, FFA). Average across-domain performance (y axis) is plotted against average within-domain performance (x axis). The DMN ROIs (cool colors) all tend to fall near the diagonal, illustrating a “domain-general” signature (similar across-domain and within-domain performance levels). The VOT ROIs (warm colors), however, tend to fall along the horizontal and are thus better characterized as “domain-specific” (significant within-domain performance but poor across-domain performance), with the exception of the overall VOT ROI (tan), which is above the horizontal line; n = 16. Error bars are 95% CIs.

This pattern—domain-general discriminability between high- and low-rated trials in the DMN but poorer, nongeneralized performance in VOT—stands in contrast to standard activation-based measures for these same regions (SI Appendix, Fig. S2). VOT regions were activated by images of all 3 domains (all P < 0.005, significance tested by group-level randomization test; see SI Appendix, Supplementary Results) and were generally more activated for high vs. low trials (VOT and PPA P < 0.005 for all domains; FFA P < 0.005 for architecture; VOA P < 0.005 for landscape and P < 0.05 for architecture). DMN regions, on the other hand, were deactivated by images of all 3 domains [all P < 0.005 except PCC landscape P < 0.05 and vMPFC all domains nonsignificant (n.s.)], and generally did not show differences in average activation for trials labeled high vs. low (all P n.s. except vMPFC architecture, P < 0.005). Specific tests of differences across aesthetic domains were significant only for Image vs. Baseline contrasts in VOT (all P < 0.005), but not for activation-based contrasts of high vs. low trials in VOT, nor for DMN.

Individual Variation in a Behavioral Distance Metric (d’) Predicted Performance of Domain-General Classification from DMN.

The ability to classify single trials as high or low aesthetic appeal based on domain-general activation patterns in the DMN varied from individual to individual. We performed an analysis to test whether this variation in classifier performance was related to the strength of observers’ aesthetic experiences. Behaviorally, we observed that some participants tended to use a much wider range of the rating scale than did others; these participants also tended to have longer response times (SI Appendix, Supplementary Results). The selection of 40 trials (10 from each run) as “high” and “low” for each domain in each observer, regardless of the underlying distribution, resulted in high and low distributions with different distances between them for different observers. This behavioral distance (expressed as a d’) strongly predicted how well classifiers trained on data from that participant’s DMN performed across domains [r = 0.73, linear regression R2 = 0.53, F(1,14) = 15.9, P = 0.0013; Fig. 4]. This suggests that differences in scale use accurately reflected the distinctiveness of brain states subsequently labeled as “high” and “low” aesthetic appeal, and that variation in classification accuracy largely reflected this distinctiveness. This relationship was specific for signal from the DMN: VOT classifier performance was not related to behavioral d’ [r = 0.27, R2 = 0.075, F(1,14) = 1.1, P = 0.31].

Fig. 4.

Variability in DMN classifier performance reflects the strength of observers’ aesthetic experiences. For each observer, the distance between the top-rated trials (labeled “high”) and the bottom-rated trials (labeled “low”) was calculated based on behavioral ratings (d’). Therefore, observers with greater variability in their ratings produced higher d’ values. Separately, classifiers were trained on BOLD signal patterns from each observer’s DMN to distinguish trials labeled as “high” vs. “low.” Across observers, classifier performance (vertical axis) was strongly correlated with each observer’s behavioral d’ distance measure (horizontal axis); n = 16.

Cortical Topography of Domain-Specific and Domain-General Information.

As a complement to the characterization of specific a priori ROIs, cortical maps of across-domain and within-domain performance were generated using a “searchlight” technique (see Methods). Several clusters of better-than-chance across-domain classification performance were found on the medial surface of PFC in both hemispheres (Fig. 5A and SI Appendix, Table S1; dMPFC, aMPFC/rostral paracingulate [rPaC], right ACC/paracingulate [PaC]), consistent with the DMN ROI findings. Several additional prefrontal clusters were found in left inferior frontal sulcus (IFS)/inferior frontal gyrus, pars opercularis (IFGop), right dorsolateral prefrontal cortex (dlPFC), right frontopolar lateral (FPl), and left OFC. In posterior cortex, significant clusters were observed on the left hemisphere in higher-level visual regions (medial occipital gyrus [MOG], occipitotemporal sulcus [OTS]/inferior occipital gyrus [IOG]) and posterior cingulate, as well as in bilateral early visual cortex (see SI Appendix, Table S1 for a complete listing of significant clusters). These results indicate that domain-general representations of aesthetic appeal exist not only across large-scale patterns of activity but also in local regions of cortex, including MPFC and OFC.

Fig. 5.

Topography of domain-general and domain-specific information. (A) Average of the 6 across-domain classifier maps (orange), rendered on an average flattened cortical surface. (B) Three within-domain classifier maps and their overlap, as indicated by the color key. Outlines from the across-domain maps are shown in orange. The right hemisphere is on the left side. Maps were cluster-corrected for multiple comparisons using Monte Carlo simulations (P < 0.05); n = 16. Dark gray areas indicate regions of cortex where data from all participants were not available and were therefore not included in the map.

The 3 maps of within-domain classification performance (one each for art, architecture, and natural landscapes) contained several prefrontal clusters with domain-specific signatures (Fig. 5B and SI Appendix, Table S1). For art (red), these clusters were located in the frontopolar regions (left FPl, left frontopolar medial [FPm], bilateral dMPFC) and inferior frontal cortex (bilateral inferior frontal gyrus [IFGt], right IFS). For architecture (blue), clusters were observed in frontopolar regions (left FPm, right FPl/frontomarginal gyrus [FM], bilateral dMPFC, left superior frontal gyrus [SFG]), dorsolateral prefrontal (left MFG/IFS), lateral orbitofrontal, and medial prefrontal cortex (rostral anterior cingulate cortex [rACC]/rPaC). A comparison to the domain-general map (orange outlines in Fig. 5B) revealed several left hemisphere frontopolar clusters that are adjacent to, but not overlapping with, the domain-general regions. Similarly, the clusters in left OFC/IFG are near, but not overlapping with, a domain-general cluster. The landscape map contained 2 prefrontal domain-specific clusters on the lateral surface, in the precentral sulcus and inferior frontal cortex (IFS/IFGt).

In addition to these prefrontal clusters, the landscape map contained large domain-specific clusters bilaterally in VOT (Fig. 5B, green) and in some parts of early visual cortex that overlap with those observed in the domain-general map. Domain-specific clusters were also observed in visual regions for artwork (IOG, early visual) and architecture (bilateral MOG, early visual) that mostly, but not entirely, overlapped with domain-general clusters.

Discussion

Despite differences in how images of natural landscapes, architecture, and visual art are encoded and evaluated, the DMN contains a representation of aesthetic appeal that generalizes across these visual aesthetic domains. A series of multivariate classifiers were trained to distinguish trials rated as “high” vs. “low” aesthetic appeal using multivoxel patterns of BOLD signal from regions of the DMN. When the full network was used, robust performance was observed for classifiers trained and tested within the same domain as well as for classifiers trained and tested across different domains. This is strong evidence that the DMN contains a representation of aesthetic appeal that is domain-general. Additionally, domain-general classification performance in individual participants was strongly predicted by the distance (d’) between each person’s behavioral ratings of exemplars labeled “high” (top 27% of images in each domain) versus exemplars labeled “low” (bottom 27% of images in each domain), suggesting that variability in classifier performance reflected the psychological and neural span between the states evoked by the most and least aesthetically appealing images in each observer, rather than measurement noise. Strong within- and across-domain performance was also observed in several subregions of the DMN, namely aMPFC, dMPFC, and IPL. On the other hand, classifiers trained on data from VOT did not perform as well and were strongly domain-specific: The voxel patterns that predicted aesthetic appeal for one domain did not generalize to the other domains.

Maps of cortical domain-general and domain-specific signals, measured using a multivariate “searchlight” technique, confirmed the presence of a domain-general representation in aMPFC and dMPFC, and also revealed adjacent domain-specific regions in frontal pole and inferior/orbital frontal cortex, primarily for architecture and art. Domain specificity for natural landscapes, in contrast, was primarily found in the ventral visual pathway. Finally, additional cortical fields containing domain-general information about aesthetic appeal were identified in the lateral and orbital prefrontal cortex, as well as in posterior visual regions.

Unlike metaanalyses or activation analyses, across-domain classification based on voxelwise patterns is a very strong test of domain generality. A univariate activation analysis of the same regions, which found sensitivity to ratings in VOT but not in DMN, might have led one to conclude that VOT possessed greater sensitivity and domain generality for aesthetic appeal (despite discriminating domain identity). This discrepancy suggests that aesthetic appeal, like valence of nonaesthetic images (46), is encoded at a spatial scale that is smaller than brain regions, but still detectable at the voxel level, and may not have a consistent local topography from one person to the next (see also SI Appendix, Supplementary Results). Although high multivariate classification accuracy cannot prove that the identical neurons or columns are representing aesthetic appeal across different domains, it does show that individual voxels, in individual observers, respond to high versus low exemplars in a consistent manner regardless of aesthetic domain.

A Core System for Assessing Aesthetic Appeal.

It is notable that the DMN representation of aesthetic appeal generalized across both natural kinds (landscapes) and cultural artifacts (architecture and artwork). Recent behavioral findings suggest that aesthetic evaluations of natural kinds such as landscapes and faces rely on similar information across people, whereas evaluations of cultural artifacts are highly individual (32). Attractive faces and attractive landscapes engage a region in MPFC (likely overlapping with the DMN) in a similar manner (47). A potential interpretation of our results is that the DMN is part of a “core” system for assessing aesthetic appeal that is engaged by all domains.

Aesthetic experiences are integrative in nature, drawing on perception and imagery across multiple senses as well as on memories, emotion, and associated meanings. The DMN’s anatomical position at one extreme of a cortical hierarchy (59) makes it well positioned to integrate information across multiple brain systems. While it remains unclear whether DMN involvement is necessary for an aesthetic experience to be perceived as strongly moving, it is the case that its activity reflects engagement with artworks: DMN signal fluctuations “lock on” to aesthetically appealing artworks, but are independent for nonappealing artworks (60). In addition, MPFC damage has been shown to reduce the influence of certain types of affective information on aesthetic valuation (61). The current findings add to this understanding by showing that the DMN encodes the aesthetic appeal of visual images in a domain-general manner. Given the DMN’s strong link to assessments of self-relevance and self-referential processes (57, 62, 63), it is possible that the DMN’s engagement by and representation of aesthetically appealing events reflects an assessment of self-relevance—in this case, the potential self-relevance of an external object.

Alternatively, the DMN’s engagement during aesthetic experience may reflect its theorized role in the construction of mental scenes (48). Such mental imagery, involving an interplay of top-down information with bottom-up stimulus properties, is a key aspect of many aesthetic experiences (40, 64). This balance of activation between higher-tier visual regions and DMN regions likely depends on the degree to which an observer is able to recognize familiar content (65). While this study was not designed to tease apart responses to abstract versus representational artwork (out of 148 artworks, 11 were abstract), it is possible that aesthetic experiences with abstract artworks rely more on top-down processes of sense-making and imagery, consistent with the fact that aesthetic judgments for images of indeterminate content take longer than for representational images (66). Yet the ability to decode aesthetic appeal across aesthetic domains, including photographs of landscapes and architecture, suggests that even if the DMN activates differentially to representational or abstract content, the multivoxel patterns for images experienced as high vs. low appeal are similar.

Additionally, the fact that DMN contains information about aesthetic appeal for artworks and architecture suggests that the DMN is able to integrate information about nonperceptual aspects of aesthetic experience, as the low degree of shared taste for these domains indicates that visual features do not uniquely determine felt aesthetic appeal (32, 67, 68). It remains to be seen whether the domain-general aesthetic appeal observed in the DMN for the visual domains studied here would also generalize to nonvisual domains (music, poetry) or to highly conceptual experiences such as the appreciation of mathematical beauty.

The domain-specific regions identified in the searchlight analysis likely complement this core system. Interestingly, domain specificity for artwork and architecture was primarily observed in prefrontal regions. It is unlikely that these regions of cortex have functionality that is specifically relevant for artwork or architecture; a more parsimonious explanation is that these regions support the idiosyncratic aspects of personal taste and experience that underlie aesthetic assessments of cultural artifacts. The fact that domain specificity for landscapes was primarily observed in the ventral visual pathway is in line with the fact that aesthetic appreciation of landscapes was more consistent across people (SI Appendix, Supplementary Results), and therefore more closely related to semantic and structural information present in specific images.

We found a strong brain−behavior correlation between how strongly an individual participant discriminated between images labeled “high” and “low” aesthetic appeal (d’) and the ability to predict aesthetic appeal from the DMN. This finding is remarkable because the classifiers were trained solely with the labels “high” or “low”—they received no information about the actual rating given by the participant. That the rated distance between these 2 sets of trials correlated with classifier performance indicates that 1) the different use of the rating scale by different participants was not arbitrary, but actually reflected their psychological reality, and 2) this psychological difference was reflected in the discriminability of the associated multivoxel patterns in the DMN. A potential consequence is that overall classification accuracy was likely limited by the degree to which observers remained engaged with the images over the course of the entire experiment (444 images over 2 d). Selection of participants based on affinity for the domains selected, and presentation of fewer images in a session to fight potential “museum fatigue” (69), would likely result in higher classification accuracies.

The poor ability of signal from vMPFC to predict aesthetic appeal is surprising, given the number of studies in both aesthetics (36, 44, 47) and decision-making (34, 35, 70) that report valuation signals here. There are several possible explanations for this discrepancy. The first is the fact that we only selected voxels in the vMPFC that were also part of the DMN, as defined by individual maps derived from resting-state fMRI. This likely resulted in significantly smaller ROIs than a pure anatomical definition of the vMPFC. Second, there is a potential inconsistency in naming conventions across studies. We have used the term vMPFC to describe the medial part of the gyrus rectus, lying below the superior rostral sulcus (SRS) and including Brodmann areas 25 and 14, whereas others may include cortex above the SRS in their definition of vMPFC. Finally, there is also the issue of distortion and dropout in this region of the brain and inconsistent processing of these distortions by different research groups. To combat these issues, we used a state-of-the-art multiecho (ME) sequence to recover as much signal as possible from OFC/vMPFC and correct for distortions in a manner that is less prone to mixing and spreading of signals across this entire region. Clarification of the exact topography in this region will require that all researchers consistently report relevant imaging parameters, such as sequence type, phase-encoding direction, echo time (TE), and the method used for distortion correction.

Aesthetic Appeal vs. Visual Features in the Ventral Visual Pathway.

Despite the lack of domain generality in VOT, it is noteworthy that patterns of activation in this region were informative for predicting within-domain judgments of aesthetic appeal. While the ventral visual pathway is primarily viewed as important for extraction of visual characteristics of objects and scenes, a number of previous studies report correlations between signal in the ventral visual pathway and aesthetic appeal (52, 71–73). One possible explanation is that certain visual features are correlated with aesthetic appeal (at least on average), and it is these visual features that drive the observed correlations between brain activity and appeal. Yet activations within VOT (52) and VOT response patterns (this paper) appear to correlate with aesthetic appeal even for visual artworks and architecture, categories that produce very low interrater agreement (i.e., little “shared taste”) (32). One possibility is that these regions do extract specific visual features, but that observers are differentially sensitive to these features. This would maintain a relationship between attention to the feature and aesthetic appeal but also allow for different observers to express divergent tastes. Alternatively, these regions may not represent stable visual attributes in all observers, but may instead extract more subjective properties of visual experience that have a positive relationship to aesthetic appeal.

In addition to regions in prefrontal cortex and the ventral visual pathway, the searchlight analysis also identified early visual cortex as containing better-than-chance domain-general classification performance. It is unclear whether this decoding ability was a result of bottom-up stimulus-driven differences in activation patterns for high vs. low appeal images or of differential top-down modulation of early visual activity, such as by attention (74) or imagery (75). While the low across-observer agreement for individuals’ aesthetic ratings (SI Appendix, Supplementary Results) means that there was substantial overlap at the group level for images shown on “high” trials and “low” trials, this overlap was not 100%, leaving room for potential residual differences in low-level features. As both the natural landscape and architecture image sets were contrast-equalized, image contrast is unlikely to be a major factor.

Conclusions

Moving aesthetic experiences are highly integrative. Situated at the top of the cortical hierarchy, the DMN is in an ideal network position to integrate information across many sources. Using a strong test, we found evidence that the DMN represents aesthetic appeal in a domain-general manner. In contrast, higher-level visual regions were found to contain only weak and domain-specific information about aesthetic appeal. A searchlight analysis confirmed the MPFC as part of a putative “core” domain-general system for assessing visual aesthetic appeal, and identified additional domain-specific regions near the frontal poles that contained information relevant for aesthetic judgments of artifacts of human culture (architecture, artwork) but not for natural landscapes. While the exact role of the DMN in aesthetic appeal remains unclear, this work confirms that the DMN has access to detailed information about aesthetic appeal, and that this information is not confined to a single node of the DMN.

Methods

Participants.

Eighteen participants were recruited at New York University (NYU) and paid for their participation. Two participants were excluded due to excessive head motion that led to visible signal distortions and difficulties with registration, leaving a final group of 16 participants (10 female, 16 right-handed; 25.7 ± 6.3 y). All had normal or corrected to normal vision and no history of neurological disorders. Informed consent was obtained from all participants. All experimental procedures, including informed consent, were approved by the NYU Committee on Activities Involving Human Subjects.

Stimuli.

Images were presented using back-projection (Eiki LC-XG250 projector) onto a screen mounted in the scanner and viewed through a mirror on the head coil. Stimulus presentation was controlled by a Macintosh Pro running OS 10.6 and MATLAB R2011b (Mathworks) with Psychophysics Toolbox-3 extensions (http://psychtoolbox.org) (76, 77).

Visual art.

The set consisted of 148 photographs of visual artworks (paintings, collages, woven silks, excluding sculpture) sourced from the Catalog of Art Museum Images Online database (Fig. 1A). A subset of these (109) were used in a previous study (52). The set covered a variety of periods (fifteenth century to the twentieth), styles, and genres (landscape, portrait, abstract, still life), and diversely represented cultures of Europe, the Americas, and Asia. While all of the images were taken from museum collections, special care was used to ensure that only lesser-known artworks were included. When necessary, stimuli were cropped to remove the artist’s signature. Due to the large differences in size and color content across different artworks, contrast equalization was not possible.

Architecture.

For architecture, 148 photographs (74 exterior, 74 interior) were selected, with the majority collected from ArtStor, a database of high-quality images representing multiple cultures and periods (Fig. 1B). Images containing people were excluded. Interior images were chosen to highlight architectural detail, not interior décor. An effort was also made to utilize exterior images that gave an impression of building detail as well as its place in a given setting, while excluding images that gave primary emphasis to features of the landscape. Images represented a variety of structures (e.g., skyscraper vs. single residence), styles (e.g., Gothic vs. classical), materials, and time periods. Images were cropped to a 4:3 (landscape) or 3:4 (portrait) aspect ratio and presented at a size of 13° of visual angle for the longer dimension. The images were contrast-equalized (SI Appendix, Supplementary Methods) and displayed using a linearized color look-up table.

Natural landscapes.

For natural landscapes, 148 photographs were obtained from a variety of sources, including the SUN Database (78), IMSI MasterClips, and MasterPhotos Premium Image Collection, and also from images publicly available on the internet (Fig. 1C). Images were cropped to a 4:3 aspect ratio and presented at 13° of visual angle for the horizontal dimension. The images were contrast-equalized (SI Appendix, Supplementary Methods) and displayed using a linearized color look-up table.
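
The contrast-equalization step itself is detailed in SI Appendix. Purely for illustration, the MATLAB sketch below shows one common form of contrast equalization: matching each grayscale image’s mean luminance and RMS contrast to fixed targets. The target values and the placeholder image are assumptions, not values from the paper.

    % Illustrative sketch of RMS-contrast equalization for one grayscale image.
    % Target values below are hypothetical; the actual procedure is described
    % in the paper's SI Appendix, Supplementary Methods.
    targetMean = 0.5;                  % hypothetical target mean luminance
    targetStd  = 0.15;                 % hypothetical target RMS contrast
    img = rand(480, 640);              % placeholder image with values in [0, 1]
    img = (img - mean(img(:))) / std(img(:));    % zero mean, unit variance
    img = img * targetStd + targetMean;          % impose target contrast and mean
    img = min(max(img, 0), 1);                   % clip to the displayable range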

Procedure.

The experiment took place over 2 sessions. Participants were instructed in the task and given a short practice (10 trials) to familiarize them with the types of images they would be seeing, the task timing, and the response method. Participants were told that they would be viewing and evaluating images from 3 different aesthetic domains: natural landscapes, architecture, and visual art (Fig. 1 A–C). They were asked to rate how aesthetically “moving” they found each depicted scene/structure/artwork (SI Appendix, Supplementary Methods).

In each session, participants completed 6 experimental scans composed of 37 trials each. Each scan contained images of only a single domain (artwork, natural landscape, interior or exterior architecture) in order to allow participants to fully engage with this domain over a several-minute period. One session therefore contained 2 artwork scans, 2 landscape scans, 1 interior architecture scan, and 1 exterior architecture scan. Scan order was counterbalanced across participants, and image order within each scan was pseudorandomized and also counterbalanced by assigning alternating participants the reverse order of another participant. Each participant saw each image only once. Across the 2 sessions, this resulted in a total of 148 trials for each of the 3 domains.

Each trial began with a blinking 1-s fixation cross followed by image presentation for 4 s (Fig. 1D). After the image disappeared, a visual “slider” bar appeared on the screen, and participants had up to 4 s to indicate their response on a continuous scale (marked with “L” and “H” at the ends) using a trackball. The mapping between observer movement (up/down) and movement of the slider (left/right) was counterbalanced across participants to remove any confounds associated with direction of hand movement. This was followed by a variable intertrial interval drawn from a discrete approximation of an exponential function with a mean of 2.6 s. Each scan also included 20 s of blank screen at the beginning and 10 s at the end to allow for better baseline signal estimation and removal of T1 saturation effects.
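
As a rough illustration of how such intertrial intervals can be generated, the MATLAB sketch below samples from a discrete approximation of an exponential distribution with a 2.6-s mean. The candidate ITI grid and its truncation are assumptions for illustration; the paper does not specify the discretization, and truncating the tail shifts the realized mean slightly above 2.6 s.

    % Sketch: sample ITIs from a discretized, truncated exponential (mean ~2.6 s).
    meanITI    = 2.6;               % mean of the underlying exponential (s)
    candidates = 1:8;               % hypothetical discrete ITI values (s)
    p   = exp(-candidates / meanITI);   % exponential weights on the grid
    p   = p / sum(p);                   % normalize to probabilities
    cdf = cumsum(p);
    nTrials = 37;                   % trials per scan
    iti = zeros(nTrials, 1);
    for t = 1:nTrials               % inverse-CDF sampling on the discrete grid
        iti(t) = candidates(find(cdf >= rand, 1, 'first'));
    end
    fprintf('mean sampled ITI = %.2f s\n', mean(iti));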

fMRI acquisition and reconstruction.

All fMRI scans took place at the NYU Center for Brain Imaging (CBI) using a 3T Siemens Allegra scanner with a Nova Medical head coil (NM011 head transmit coil). Whole-brain BOLD signal was measured from thirty-four 3-mm slices using a custom ME echo-planar imaging (EPI) sequence (2-s repetition time [TR], 80 × 64 matrix of 3-mm voxels, right-to-left phase encoding, flip angle = 75°). The ME EPI sequence and a tilted slice prescription (15° to 20° tilt relative to the anterior commissure–posterior commissure [AC–PC] line) were used to minimize dropout near the orbital sinuses. Cardiac and respiration signals were collected using Biopac hardware and AcqKnowledge software (Biopac). We collected a custom calibration scan to aid in ME reconstruction, unwarping, and alignment. ME EPI images were reconstructed using a custom algorithm designed by the NYU CBI to minimize dropout and distortion, and were tested for data quality (e.g., spikes, changes in signal-to-noise) using custom scripts.

Following the session 1 experimental runs, we collected a high-resolution (1 mm³) anatomical volume (T1 magnetization-prepared rapid gradient echo [MPRage]) for registration and segmentation using FreeSurfer (http://surfer.nmr.mgh.harvard.edu). Following the session 2 experimental runs, participants completed a 360-s eyes-open “rest” scan plus a 320-s visual localizer scan that contained blocks of objects, places, faces, and scrambled objects. The visual localizer scan was fully described in Vessel et al. (52).

Analysis.

Identification of individual DMN maps and DMN sub-ROIs.

Participant-specific maps of the DMN were obtained using the rest scan. Motion correction, high-pass filtering at 0.005 Hz, and spatial smoothing with 6-mm FWHM Gaussian filter were applied using the FMRIB Software Library (FSL, https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/). Independent component analysis (ICA) was then performed on individual subjects’ scans using FSL's Multivariate Exploratory Linear Optimized Decomposition into Independent Components (MELODIC) tool. MELODIC determines the appropriate size of the lower-dimensional space using the Laplace approximation to the Bayesian evidence of the model order (79, 80). This process resulted in an average of 21.88 spatial components (SD = 3.70) for each subject. These ICA components were spatially thresholded at a z-score cutoff of 2.3, moved into MNI standard space, and compared to a set of predefined network maps (81) using Pearson correlation. The component with the highest correlation to the Smith et al. (81) DMN map was then visually inspected to ensure that its spatial distribution appeared similar to the canonical DMN. For 3 participants, the DMN was split between 2 ICA components similar to those seen in ref. 48, which were combined to form a single map. The final DMN ROI for each subject was then defined as the voxels from this component that also belonged to gray matter (as defined by the FreeSurfer gray matter segmentation).
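
The component-selection step can be illustrated with the short MATLAB sketch below, which thresholds each component map and picks the one whose spatial pattern correlates best (Pearson) with a DMN template. The synthetic data and variable names are placeholders; the real analysis used MELODIC component maps resampled to MNI space and the Smith et al. (81) template.

    % Sketch: choose the ICA component that best matches a DMN template map.
    nVoxels = 5000; nComp = 20;
    icaMaps     = randn(nVoxels, nComp);   % placeholder component z-score maps
    dmnTemplate = randn(nVoxels, 1);       % placeholder template DMN map
    zCutoff = 2.3;                          % spatial threshold used in the paper
    thresholded = icaMaps .* (icaMaps > zCutoff);   % zero out sub-threshold voxels
    r = zeros(nComp, 1);
    for c = 1:nComp
        cc   = corrcoef(thresholded(:, c), dmnTemplate);   % Pearson correlation
        r(c) = cc(1, 2);
    end
    [bestR, bestComp] = max(r);
    fprintf('best-matching component: %d (r = %.2f)\n', bestComp, bestR);
    % The selected component would then be inspected visually and masked by the
    % FreeSurfer gray matter segmentation, as described above.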

Volumetric DMN maps were transformed to participant-specific surface space. The following 6 subregions of the DMN were then identified in both hemispheres from the DMN ICA maps of individual subjects by combining anatomically defined boundaries with the participant-specific DMN maps: aMPFC, dMPFC, vMPFC, PCC, IPL, and LTC (see SI Appendix, Supplementary Methods for additional information).

Identification of VOT ROIs.

For comparison to the DMN, 3 category-selective regions of the VOT were identified using standard methods (SI Appendix, Supplementary Methods).

Finally, an overall VOT ROI was created by identifying a larger region of the VOT that included all 3 of these category-selective ROIs plus the voxels between them (82). This ROI was first drawn on the FreeSurfer fsaverage cortical surface bilaterally using category-selective ROIs from 3 representative participants as a guide and then projected onto individual hemispheres. The borders of the resulting ROI (tan-colored region in Fig. 2B) extended from the depth of the occipitotemporal sulcus laterally to the center of the lingual/parahippocampal gyrus medially, and from the posterior collateral transverse sulcus to the anterior collateral transverse sulcus (approximate MNI coordinates y = −72 to y = −31). All ROIs were combined across left and right hemispheres to form bilateral ROIs.

ROI classification analysis of aesthetic appreciation.

A beta-series general linear model (83) implemented using custom MATLAB code was used to extract an estimate of response amplitude for each trial, at each voxel in gray matter. Experimental scans were first preprocessed using FSL to correct for motion, align data across scans, and apply a high-pass filter (0.01-Hz cutoff). Nuisance signals were then removed from the BOLD time courses of all 12 scans by projecting out motion estimates and nuisance time series derived from a second-order Taylor series expansion of cardiac and respiration measurements (RetroIcor method) (84). The cleaned time courses were then converted to percent signal change. Individual trials were modeled using a 4-s “on” period convolved with a canonical hemodynamic response function from the SPM12 Toolbox (Wellcome Trust Centre for Neuroimaging, University College London), and the resulting trial-wise amplitude estimates were z-scored separately for each scan.
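
To make the beta-series step concrete, the MATLAB sketch below builds one regressor per trial (a 4-s boxcar convolved with a canonical-style HRF), estimates trialwise amplitudes by least squares, and z-scores them within the scan. The data sizes, onsets, and double-gamma HRF parameters are illustrative assumptions (the paper used SPM12’s canonical HRF and custom code), and preprocessing, nuisance regression, and conversion to percent signal change are assumed to have already been applied.

    % Sketch of a beta-series GLM for a single scan (placeholder data sizes).
    TR = 2; nVols = 200; nTrials = 37; nVox = 500;
    Y = randn(nVols, nVox);                              % placeholder time courses
    onsets = sort(randperm(nVols - 10, nTrials))' * TR;  % placeholder onsets (s)

    % Double-gamma HRF sampled at the TR (parameters chosen to resemble a
    % canonical HRF; they are assumptions, not taken from the paper).
    t = (0:TR:30)';
    gpdf = @(x, a, b) (x.^(a - 1) .* exp(-x ./ b)) ./ (gamma(a) .* b.^a);
    hrf = gpdf(t, 6, 1) - (1/6) * gpdf(t, 16, 1);
    hrf = hrf / max(hrf);

    % One regressor per trial: 4-s "on" period convolved with the HRF.
    X = zeros(nVols, nTrials);
    for k = 1:nTrials
        box = zeros(nVols, 1);
        idx = round(onsets(k) / TR) + (1:ceil(4 / TR));
        box(idx(idx <= nVols)) = 1;
        reg = conv(box, hrf);
        X(:, k) = reg(1:nVols);
    end
    X = [X ones(nVols, 1)];                              % constant term

    beta = X \ Y;                                        % trialwise amplitude estimates
    beta = beta(1:nTrials, :);                           % drop the constant
    beta = (beta - mean(beta, 1)) ./ std(beta, 0, 1);    % z-score within scan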

A series of participant-specific classifiers were then trained to distinguish “high” versus “low” aesthetic appreciation trials using the beta-series amplitude estimates from all voxels in an individual ROI (Fig. 1E). This analysis used the 10 trials with the highest rating (out of 37; top 27%) and the 10 trials with the lowest rating from each scan (bottom 27%). “Within-domain” classification performance was evaluated by training logistic regression classifiers (logreg.m from Princeton MVPA Toolbox, https://github.com/princetonuniversity/princeton-mvpa-toolbox) with high and low trials from 3 scans of one domain and then measuring classification accuracy on the high and low trials of 1 held-out scan of the same domain (4-fold cross-validation). The classifier penalty parameter was set to equal 5% of the total number of voxels in an ROI. “Across-domain” classification performance was evaluated by training on high and low trials from 3 scans of one domain and then measuring classification accuracy on high and low trials in 1 scan from a different domain (again with 4-fold cross-validation). This resulted in a 3-by-3 matrix of classification scores with within-domain scores along the diagonal and across-domain scores off the diagonal.
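
The cross-validation scheme can be sketched as follows in MATLAB, using a simple gradient-descent implementation of L2-penalized logistic regression as a stand-in for the Princeton MVPA Toolbox logreg routine (the toolbox call itself is not reproduced here). Data are synthetic placeholders; in the real analysis each row would be a trialwise beta pattern from one ROI, and the across-domain score would be obtained by testing on a scan from a different domain instead of a held-out scan of the same domain.

    % Sketch: 4-fold (leave-one-scan-out) classification of high vs. low trials.
    nScans = 4; nPerScan = 20; nVox = 300;               % 10 high + 10 low per scan
    X = randn(nScans * nPerScan, nVox);                  % placeholder beta patterns
    y = repmat([ones(10, 1); zeros(10, 1)], nScans, 1);  % 1 = "high", 0 = "low"
    scanId = kron((1:nScans)', ones(nPerScan, 1));       % scan label per trial
    lambda = 0.05 * nVox;        % L2 penalty; the paper set this to 5% of voxel count

    acc = zeros(nScans, 1);
    for testScan = 1:nScans
        tr = scanId ~= testScan;  te = ~tr;
        Xtr = [X(tr, :) ones(sum(tr), 1)];               % append a bias column
        ytr = y(tr);
        w = zeros(nVox + 1, 1);
        for it = 1:500                                   % plain gradient descent
            p = 1 ./ (1 + exp(-Xtr * w));                % predicted probabilities
            grad = Xtr' * (p - ytr) + lambda * [w(1:end-1); 0];  % penalize weights only
            w = w - 1e-3 * grad;
        end
        Xte  = [X(te, :) ones(sum(te), 1)];
        pred = (1 ./ (1 + exp(-Xte * w))) > 0.5;
        acc(testScan) = mean(pred == y(te));
    end
    fprintf('within-domain accuracy: %.1f%%\n', 100 * mean(acc));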

The 3 within-domain scores along the diagonal were averaged for each participant to produce an overall within-domain score, and the 6 off-diagonal across-domain scores were also averaged to produce an overall across-domain score. Maximum likelihood estimation was then used to compute the average and 95% confidence interval for both of these scores, across all 11 ROIs. An AUC measure was also computed for each ROI and participant by averaging probabilistic classifier predictions. Significance of classification performance across all participants was assessed through permutation testing. For each subject, 5,000 permutations of the high/low trialwise labels were computed and used to generate a null distribution of the 2 summary statistics in each ROI. We tested the hypothesis that classification performance was better than chance (one-tailed) at 2 critical alpha values, 0.05 and 0.005, Bonferroni-corrected for 22 total tests (11 ROIs by 2 statistics).
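
A minimal sketch of the permutation test, reduced to a single ROI in a single participant for clarity: shuffle the high/low labels, recompute the cross-validated accuracy, and locate the observed accuracy within the resulting null distribution. Here classifyCV is a hypothetical helper wrapping the cross-validation loop sketched above, and X, y, and scanId are the variables from that sketch; in the full analysis the permuted labels were generated per subject and used to build null distributions of the group-level summary statistics.

    % Sketch: one-tailed permutation test for above-chance classification.
    nPerm = 5000;
    observedAcc = classifyCV(X, y, scanId);       % classifyCV: hypothetical helper
    nullAcc = zeros(nPerm, 1);
    for k = 1:nPerm
        yPerm = y(randperm(numel(y)));            % shuffle trialwise labels
        nullAcc(k) = classifyCV(X, yPerm, scanId);
    end
    pValue = (1 + sum(nullAcc >= observedAcc)) / (nPerm + 1);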

Searchlight analysis.

Maps of classification performance were computed across the cortical surface using a “searchlight” approach. For each participant, linear support-vector machine classifiers were trained and tested in a series of 5-mm-radius spheres (thirty-three 3-mm voxels) centered on each voxel where data were collected. Three within-domain maps and 6 across-domain maps were created from the average of a 4-fold cross-validation procedure (see above) for each participant. Maps of above-chance performance (>0.5) for the average of the across-domain classifiers (e.g., train on art, test on landscape) and all 3 within-domain classifiers (art, architecture, landscape) were created for each participant, and then averaged across participants on the FreeSurfer “fsaverage” surface. The resulting 4 maps were corrected for multiple comparisons using clusterwise thresholding derived from Monte Carlo simulations (mri_glmfit-sim; ref. 85) with a voxelwise threshold of P < 0.001 and a cluster threshold of 0.05 across the entire cortical surface (2 hemispheres).
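
The searchlight loop amounts to repeating a small classification analysis inside a sphere around every voxel. The minimal sketch below is illustrative Python under stated assumptions: a linear SVM from scikit-learn, a KD-tree neighbor lookup, and a generic 4-fold split (the paper's folds were organized by scan, which this simplification does not reproduce).

```python
# Minimal searchlight sketch: train/test a linear SVM within a 5-mm-radius
# sphere centered on every voxel. Data layout, voxel coordinates, and the
# generic 4-fold cross-validation (rather than leave-one-scan-out) are assumptions.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def searchlight(X, y, voxel_xyz_mm, radius=5.0):
    """X: (n_trials, n_voxels) trial patterns; y: high/low labels;
    voxel_xyz_mm: (n_voxels, 3) voxel centers in mm.
    Returns one mean cross-validated accuracy per center voxel."""
    tree = cKDTree(voxel_xyz_mm)
    accuracy = np.zeros(X.shape[1])
    for v in range(X.shape[1]):
        sphere = tree.query_ball_point(voxel_xyz_mm[v], r=radius)  # ~33 voxels
        accuracy[v] = cross_val_score(LinearSVC(max_iter=5000),
                                      X[:, sphere], y, cv=4).mean()
    return accuracy
```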

Correlation with behavioral discrimination.

The distribution of rating responses differed from observer to observer: some observers generated largely bimodal distributions, some uniform, and some strongly unimodal near the neutral point. As a result, the distributions of ratings for images subsequently labeled “high” (top 40 of 148 in each domain) and those labeled “low” (bottom 40 of 148 in each domain) also differed across observers, with some showing a greater separation than others. To quantify this individual variability, a distance measure (d′; i.e., the distance between the means of the 2 distributions, rescaled in units of SD) was computed between the distributions of raw ratings for “high”- and “low”-labeled trials for each participant, in each aesthetic domain. The d′ scores, averaged across the 3 domains, were then used in a linear regression analysis as predictors for DMN domain-general classification performance. Data are available at https://dx.doi.org/10.17617/3.2r (86).
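
The behavioral discriminability measure and the follow-up regression can be sketched as below (illustrative Python; using the pooled SD to rescale the mean difference is one reasonable reading of “rescaled in units of SD,” and all variable names are hypothetical).

```python
# d' between the rating distributions of "high"- and "low"-labeled trials,
# averaged over domains and used to predict DMN across-domain classification.
# The pooled-SD rescaling and variable names are assumptions for illustration.
import numpy as np
from scipy import stats

def d_prime(high_ratings, low_ratings):
    """Distance between the two distribution means, in pooled-SD units."""
    pooled_sd = np.sqrt((np.var(high_ratings, ddof=1) +
                         np.var(low_ratings, ddof=1)) / 2.0)
    return (np.mean(high_ratings) - np.mean(low_ratings)) / pooled_sd

def mean_dprime(ratings_by_domain):
    """ratings_by_domain: {domain: (high_ratings, low_ratings)} for one participant."""
    return np.mean([d_prime(hi, lo) for hi, lo in ratings_by_domain.values()])

# Linear regression across participants (hypothetical arrays):
# slope, intercept, r, p, se = stats.linregress(dprime_scores, dmn_accuracy)
```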

Acknowledgments

We thank Tyra Lindstrom, Lucy Owen, Oriana Neidecker, and Melanie Wen for help with stimulus and data collection; Jean-Remi King for analysis advice; and David Poeppel for feedback on a previous version of this manuscript. This work was supported by an NYU Research Challenge Fund Grant to G.G.S. and E.A.V.

Footnotes

  • 1To whom correspondence may be addressed. Email: ed.vessel@ae.mpg.de.
  • Author contributions: E.A.V., J.L.S., and G.G.S. designed research; E.A.V., A.M.B., and J.L.S. performed research; E.A.V., A.I.I., and A.M.B. contributed new reagents/analytic tools; E.A.V. and A.I.I. analyzed data; and E.A.V., A.I.I., A.M.B., J.L.S., and G.G.S. wrote the paper.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission. M.B. is a guest editor invited by the Editorial Board.

  • Data deposition: MRI and behavioral data are posted in anonymized form on Edmond (https://edmond.mpdl.mpg.de), the Open Access Data Repository of the Max Planck Society, and can be accessed at https://dx.doi.org/10.17617/3.2r.

  • This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1902650116/-/DCSupplemental.

Published under the PNAS license.

References

  1. M. Reimann, J. Zaichkowsky, C. Neuhaus, T. Bender, B. Weber, Aesthetic package design: A behavioral, neural, and psychological investigation. J. Consum. Psychol. 20, 431–441 (2010).
  2. P. Silayoi, M. Speece, The importance of packaging attributes: A conjoint analysis approach 41, 1495–1517 (2007).
  3. U. Ritterfeld, G. C. Cupchik, Environmental psychology perceptions of interior spaces. J. Environ. Psychol. 16, 349–360 (1996).
  4. J. H. Falk, Identity and the Museum Visitor Experience (Left Coast Press, Walnut Creek, CA, 2009).
  5. P. P. L. Tinio, J. K. Smith, L. F. Smith, “The walls do speak: Psychological aesthetics and the museum experience” in The Cambridge Handbook of the Psychology of Aesthetics and the Arts, P. P. L. Tinio, Ed. (Cambridge University Press, 2015), pp. 195–218.
  6. I. C. McManus, A. Furnham, Aesthetic activities and aesthetic attitudes: Influences of education, background and personality on interest and involvement in the arts. Br. J. Psychol. 97, 555–587 (2006).
  7. D. Haluza, R. Schönbauer, R. Cervinka, Green perspectives for public health: A narrative review on the physiological effects of experiencing outdoor nature. Int. J. Environ. Res. Public Health 11, 5445–5461 (2014).
  8. A. Clow, C. Fredhoi, Normalisation of salivary cortisol levels and self-report stress by a brief lunchtime visit to an art gallery by London City workers. J. Holist. Healthcare 3, 29–32 (2006).
  9. M. L. Chanda, D. J. Levitin, The neurochemistry of music. Trends Cogn. Sci. 17, 179–193 (2013).
  10. R. S. Ulrich, View through a window may influence recovery from surgery. Science 224, 420–421 (1984).
  11. R. S. Ulrich et al., A review of the research literature on evidence-based healthcare design. HERD 1, 61–125 (2008).
  12. K. Cuypers et al., Patterns of receptive and creative cultural activities and their association with perceived health, anxiety, depression and satisfaction with life among adults: The HUNT study, Norway. J. Epidemiol. Community Health 66, 698–703 (2012).
  13. R. Kaplan, The nature of the view from home. Environ. Behav. 33, 507–542 (2001).
  14. C. I. Seresinhe, T. Preis, H. S. Moat, Quantifying the impact of scenic environments on health. Sci. Rep. 5, 16899 (2015).
  15. P. Leather, M. Pyrgas, D. Beale, C. Lawrence, Windows in the workplace: Sunlight, view, and occupational stress. Environ. Behav. 30, 739–762 (1998).
  16. S. Shimojo, C. Simion, E. Shimojo, C. Scheier, Gaze bias both reflects and influences preference. Nat. Neurosci. 6, 1317–1322 (2003).
  17. J. L. Plass, S. Heidig, E. O. Hayward, B. D. Homer, E. Um, Emotional design in multimedia learning: Effects of shape and color on affect and learning. Learn. Instr. 29, 128–140 (2014).
  18. P. J. Silvia, Interest–The curious emotion. Curr. Dir. Psychol. Sci. 17, 57–60 (2008).
  19. S.-L. Lim, J. P. O’Doherty, A. Rangel, Stimulus value signals in ventromedial PFC reflect the integration of attribute value signals computed in fusiform gyrus and posterior superior temporal gyrus. J. Neurosci. 33, 8729–8741 (2013).
  20. I. Levy, S. C. Lazzaro, R. B. Rutledge, P. W. Glimcher, Choice from non-choice: Predicting consumer preferences from blood oxygenation level-dependent signals obtained during passive viewing. J. Neurosci. 31, 118–125 (2011).
  21. A. Todorov, C. P. Said, A. D. Engell, N. N. Oosterhof, Understanding evaluation of faces on social dimensions. Trends Cogn. Sci. 12, 455–460 (2008).
  22. M. T. Pearce et al., Neuroaesthetics: The cognitive neuroscience of aesthetic experience. Perspect. Psychol. Sci. 11, 265–279 (2016).
  23. A. Chatterjee, O. Vartanian, Neuroaesthetics. Trends Cogn. Sci. 18, 370–375 (2014).
  24. S. Zeki, J. P. Romaya, D. M. T. Benincasa, M. F. Atiyah, The experience of mathematical beauty and its neural correlates. Front. Hum. Neurosci. 8, 68 (2014).
  25. S. Zeki, O. Y. Chén, J. P. Romaya, The biological basis of mathematical beauty. Front. Hum. Neurosci. 12, 467 (2018).
  26. W. Menninghaus et al., What are aesthetic emotions? Psychol. Rev. 126, 171–195 (2019).
  27. W. Menninghaus et al., Towards a psychological construct of being moved. PLoS One 10, e0128451 (2015).
  28. A. Oostendorp, D. E. Berlyne, Dimensions in the perception of architecture: I. Identification and interpretation of dimensions of similarity. Scand. J. Psychol. 19, 73–82 (1978).
  29. J. Meyers-Levy, R. Zhu, The influence of ceiling height: The effect of priming on the type of processing that people use. J. Consum. Res. 34, 174–186 (2007).
  30. M. R. Greene, A. Oliva, Recognition of natural scenes from global properties: Seeing the forest without representing the trees. Cognit. Psychol. 58, 137–176 (2009).
  31. A. Oliva, P. G. Schyns, Coarse blobs or fine edges? Evidence that information diagnosticity changes the perception of complex visual stimuli. Cognit. Psychol. 34, 72–107 (1997).
  32. E. A. Vessel, N. Maurer, A. H. Denker, G. G. Starr, Stronger shared taste for natural aesthetic domains than for artifacts of human culture. Cognition 179, 121–131 (2018).
  33. H. Leder, J. Goller, T. Rigotti, M. Forster, Private and shared taste in art and face appreciation. Front. Hum. Neurosci. 10, 155 (2016).
  34. O. Bartra, J. T. McGuire, J. W. Kable, The valuation system: A coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. Neuroimage 76, 412–427 (2013).
  35. D. J. Levy, P. W. Glimcher, Comparing apples and oranges: Using reward-specific and reward-general subjective value representation in the brain. J. Neurosci. 31, 14693–14707 (2011).
  36. H. Kawabata, S. Zeki, Neural correlates of beauty. J. Neurophysiol. 91, 1699–1705 (2004).
  37. T. Jacobsen, R. I. Schubotz, L. Höfel, D. Y. Cramon, Brain correlates of aesthetic judgment of beauty. Neuroimage 29, 276–285 (2006).
  38. H. Kim, R. Adolphs, J. P. O’Doherty, S. Shimojo, Temporal isolation of neural processes underlying face preference decisions. Proc. Natl. Acad. Sci. U.S.A. 104, 18253–18258 (2007).
  39. I. Aharon et al., Beautiful faces have variable reward value: fMRI and behavioral evidence. Neuron 32, 537–551 (2001).
  40. A. M. Belfi, E. A. Vessel, G. G. Starr, Individual ratings of vividness predict aesthetic appeal in poetry. Psychol. Aesthet. Creat. Arts 12, 341–350 (2017).
  41. C. Muth, V. M. Hesslinger, C.-C. Carbon, The appeal of challenge in the perception of art: How ambiguity, solvability of ambiguity, and the opportunity for insight affect appreciation. Psychol. Aesthet. Creat. Arts 9, 206–216 (2015).
  42. C. Muth, C. C. Carbon, The aesthetic aha: On the pleasure of having insights into gestalt. Acta Psychol. (Amst.) 144, 25–30 (2013).
  43. I. Biederman, E. A. Vessel, Perceptual pleasure and the brain: A novel theory explains why the brain craves information and seeks it through the senses. Am. Sci. 94, 247–253 (2006).
  44. T. Ishizu, S. Zeki, Toward a brain-based theory of beauty. PLoS One 6, e21852 (2011).
  45. S. Brown, X. Gao, L. Tisdelle, S. B. Eickhoff, M. Liotti, Naturalizing aesthetics: Brain areas for aesthetic appraisal across sensory modalities. Neuroimage 58, 250–258 (2011).
  46. J. Chikazoe, D. H. Lee, N. Kriegeskorte, A. K. Anderson, Population coding of affect across stimuli, modalities and individuals. Nat. Neurosci. 17, 1114–1122 (2014).
  47. T. K. Pegors, J. W. Kable, A. Chatterjee, R. A. Epstein, Common and unique representations in pFC for face and place attractiveness. J. Cogn. Neurosci. 27, 959–973 (2015).
  48. J. R. Andrews-Hanna, J. S. Reidler, J. Sepulcre, R. Poulin, R. L. Buckner, Functional-anatomic fractionation of the brain’s default network. Neuron 65, 550–562 (2010).
  49. G. Northoff et al., Self-referential processing in our brain–A meta-analysis of imaging studies on the self. Neuroimage 31, 440–457 (2006).
  50. J. M. Moran, T. F. Heatherton, W. M. Kelley, Modulation of cortical midline structures by implicit and explicit self-relevance evaluation. Soc. Neurosci. 4, 197–211 (2009).
  51. A. D’Argembeau et al., The neural basis of personal goal processing when envisioning future events. J. Cogn. Neurosci. 22, 1701–1713 (2010).
  52. E. A. Vessel, G. G. Starr, N. Rubin, The brain on art: Intense aesthetic experience activates the default mode network. Front. Hum. Neurosci. 6, 66 (2012).
  53. E. A. Vessel, G. G. Starr, N. Rubin, Art reaches within: Aesthetic experience, the self and the default mode network. Front. Neurosci. 7, 258 (2013).
  54. M. D. Fox et al., The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proc. Natl. Acad. Sci. U.S.A. 102, 9673–9678 (2005).
  55. G. L. Shulman et al., Common blood flow changes across visual tasks. 2. Decreases in cerebral cortex. J. Cogn. Neurosci. 9, 648–663 (1997).
  56. J. R. Simpson Jr., A. Z. Snyder, D. A. Gusnard, M. E. Raichle, Emotion-induced changes in human medial prefrontal cortex: I. During cognitive task performance. Proc. Natl. Acad. Sci. U.S.A. 98, 683–687 (2001).
  57. P. Qin, G. Northoff, How is our self related to midline regions and the default-mode network? Neuroimage 57, 1221–1233 (2011).
  58. F.-X. Neubert, R. B. Mars, J. Sallet, M. F. S. Rushworth, Connectivity reveals relationship of brain areas for reward-guided learning and decision making in human and monkey frontal cortex. Proc. Natl. Acad. Sci. U.S.A. 112, E2695–E2704 (2015).
  59. D. S. Margulies et al., Situating the default-mode network along a principal gradient of macroscale cortical organization. Proc. Natl. Acad. Sci. U.S.A. 113, 12574–12579 (2016).
  60. A. M. Belfi et al., Dynamics of aesthetic experience are reflected in the default-mode network. Neuroimage 188, 584–597 (2019).
  61. A. R. Vaidya, M. Sefranek, L. K. Fellows, Ventromedial frontal lobe damage alters how specific attributes are weighed in subjective valuation. Cereb. Cortex 28, 3857–3867 (2018).
  62. A. Abraham, The world according to me: Personal relevance and the medial prefrontal cortex. Front. Hum. Neurosci. 7, 341 (2013).
  63. R. N. Spreng, C. L. Grady, Patterns of brain activity supporting autobiographical memory, prospection, and theory of mind, and their relationship to the default mode network. J. Cogn. Neurosci. 22, 1112–1123 (2010).
  64. G. G. Starr, “Multisensory imagery” in Introduction to Cognitive Cultural Studies, L. Zunshine, Ed. (The Johns Hopkins University Press, Baltimore, MD, 2009), pp. 1–29.
  65. S. L. Fairhall, A. Ishai, Neural correlates of object indeterminacy in art compositions. Conscious. Cogn. 17, 923–932 (2008).
  66. A. Ishai, S. L. Fairhall, R. Pepperell, Perception, memory and aesthetics of indeterminate art. Brain Res. Bull. 73, 319–324 (2007).
  67. E. A. Vessel, N. Rubin, Beauty and the beholder: Highly individual taste for abstract, but not real-world images. J. Vis. 10, 1–14 (2010).
  68. A. Schepman, P. Rodway, S. J. Pullen, J. Kirkham, Shared liking and association valence for representational art but not abstract art. J. Vis. 15, 11 (2015).
  69. B. I. Gilman, Museum fatigue. Sci. Mon. 2, 62–74 (1916).
  70. F. Grabenhorst, E. T. Rolls, Value, pleasure and choice in the ventral prefrontal cortex. Trends Cogn. Sci. 15, 56–67 (2011).
  71. O. Vartanian, V. Goel, Neuroanatomical correlates of aesthetic preference for paintings. Neuroreport 15, 893–897 (2004).
  72. A. Chatterjee, A. Thomas, S. E. Smith, G. K. Aguirre, The neural response to facial attractiveness. Neuropsychology 23, 135–143 (2009).
  73. X. Yue, E. A. Vessel, I. Biederman, The neural basis of scene preferences. Neuroreport 18, 525–529 (2007).
  74. D. C. Somers, A. M. Dale, A. E. Seiffert, R. B. Tootell, Functional MRI reveals spatially specific attentional modulation in human primary visual cortex. Proc. Natl. Acad. Sci. U.S.A. 96, 1663–1668 (1999).
  75. S. H. Lee, D. J. Kravitz, C. I. Baker, Disentangling visual imagery and perception of real-world objects. Neuroimage 59, 4064–4073 (2012).
  76. D. H. Brainard, The psychophysics toolbox. Spat. Vis. 10, 433–436 (1997).
  77. D. G. Pelli, The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spat. Vis. 10, 437–442 (1997).
  78. J. Xiao et al., Basic level scene understanding: Categories, attributes and structures. Front. Psychol. 4, 506 (2013).
  79. C. F. Beckmann, S. M. Smith, Probabilistic independent component analysis for functional magnetic resonance imaging. IEEE Trans. Med. Imaging 23, 137–152 (2004).
  80. T. P. Minka, Automatic choice of dimensionality for PCA. Adv. Neural Inf. Process. Syst. 13, 598–604 (2000).
  81. S. M. Smith et al., Correspondence of the brain’s functional architecture during activation and rest. Proc. Natl. Acad. Sci. U.S.A. 106, 13040–13045 (2009).
  82. U. Hasson, M. Harel, I. Levy, R. Malach, Large-scale mirror-symmetry organization of human occipito-temporal object areas. Neuron 37, 1027–1041 (2003).
  83. J. Rissman, A. Gazzaley, M. D’Esposito, Measuring functional connectivity during distinct stages of a cognitive task. Neuroimage 23, 752–763 (2004).
  84. G. H. Glover, T. Q. Li, D. Ress, Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR. Magn. Reson. Med. 44, 162–167 (2000).
  85. D. J. Hagler Jr., A. P. Saygin, M. I. Sereno, Smoothing and cluster thresholding for cortical surface-based group analysis of fMRI data. Neuroimage 33, 1093–1103 (2006).
  86. E. A. Vessel, A. I. Isik, A. M. Belfi, J. L. Stahl, G. G. Starr, The default-mode network represents aesthetic appeal that generalizes across visual domains. Max Planck Society. https://dx.doi.org/10.17617/3.2r. Deposited 20 August 2019.
  87. H. Regnauld, Seated African Woman, 1860s, oil on fabric, bequest of Noah L. Butkin, 1980.280, The Cleveland Museum of Art, Cleveland.
  88. J. Wright of Derby, Cottage on Fire, ca. 1786-1787, oil on canvas, The Putnam Dana McMillan Fund and bequest of Lillian Malcolm Larkin, by exchange, 84.53, Minneapolis Institute of Arts, Minneapolis.