A neural link between affective understanding and interpersonal attraction

Edited by Susan T. Fiske, Princeton University, Princeton, NJ, and approved February 11, 2016 (received for review August 20, 2015)
April 4, 2016
113 (16) E2248-E2257

Significance

Humans interacting with other humans must be able to understand their interaction partner’s affect and motivations, often without words. We examined whether people are attracted to others whose affective behavior they can easily understand. For this, we asked participants to watch different persons experiencing different emotions. We found that the better a participant thought they could understand another person’s emotion, the more they felt attracted toward that person. Importantly, these individual changes in interpersonal attraction were predicted by activity in the participant’s reward circuit, which in turn signaled how well the participant’s “neural vocabulary” was suited to decode the other’s behavior. This research elucidates neurobiological processes that might play an important role in the formation and success of human social relations.

Abstract

Being able to comprehend another person’s intentions and emotions is essential for successful social interaction. However, it is currently unknown whether the human brain possesses a neural mechanism that attracts people to others whose mental states they can easily understand. Here we show that the degree to which a person feels attracted to another person can change while they observe the other’s affective behavior, and that these changes depend on the observer’s confidence in having correctly understood the other’s affective state. At the neural level, changes in interpersonal attraction were predicted by activity in the reward system of the observer’s brain. Importantly, these effects were specific to individual observer–target pairs and could not be explained by a target’s general attractiveness or expressivity. Furthermore, using multivoxel pattern analysis (MVPA), we found that neural activity in the reward system of the observer’s brain varied as a function of how well the target’s affective behavior matched the observer’s neural representation of the underlying affective state: The greater the match, the larger the brain’s intrinsic reward signal. Taken together, these findings provide evidence that reward-related neural activity during social encounters signals how well an individual’s “neural vocabulary” is suited to infer another person’s affective state, and that this intrinsic reward might be a source of changes in interpersonal attraction.
Finding the “right” cooperation partner is an important task for individuals living in complex environments that require social interaction and cooperation. To accomplish a common goal, interaction partners must understand and continuously update information about their partner’s current intentions, motivation, and affect, anticipate the other’s behavior, and adapt their own behavior accordingly. From a sociobiological point of view, one might thus expect that evolution has favored a neural mechanism that permits individuals to select as cooperation partners those individuals whose behavior and communication signals they can easily decode. However, the neural mechanisms that control human interpersonal attraction and the selection of cooperation partners are not well understood.
Several influential theories in social psychology have stressed the role of reward in interpersonal attraction (1, 2). The idea is that if a social encounter with another person is rewarding, then the reward will become associated with the other person, resulting in interpersonal attraction (2–4). Until recently, neuroscientific research into interpersonal attraction has focused mainly on determining the neural mechanism underlying the evaluation of others based on the physical attractiveness of their faces (e.g., 5–11). These studies consistently show that neural activity in the ventral striatum and medial orbitofrontal cortex (mOFC), core regions of the brain’s reward system that also respond to food and money (12, 13), increases in response to faces that are perceived as attractive. Other studies show that these brain regions also respond to another person’s prosocial behavior (14–20). Although this research documents the role of the brain’s reward system in interpersonal attraction, it does not explain why social encounters often result in relational effects in interpersonal attraction (21), such that one individual is particularly attracted to one person whereas another individual is more attracted to another person. Here we focus on the role of nonverbal understanding in interpersonal attraction. Specifically, we ask whether the human brain possesses a neural mechanism that permits individuals to select and approach as interaction partners those individuals whose affective behavior they can easily understand.
Recent work on perceptual learning provides a first hint that the brain’s reward system might play an important role not only in signaling facial attractiveness but also in the individual adjustment of interpersonal attraction during social interaction. This work suggests that whenever the brain evaluates sensory information, it generates a neural signal in the ventral striatum that reflects the amount of evidence available for stimulus evaluation (22) and that, at the experiential level, is associated with subjective confidence (23). Importantly, it has been proposed that such intrinsic confidence signals can act as positive reinforcement signals (24). We hypothesized that a similar confidence signal, reflecting the amount of evidence available to decode another person’s nonverbal behavior, might serve as an intrinsic reward that becomes associated with the interaction partner and thereby increases or decreases the perceiver’s interpersonal attraction toward the interaction partner during social encounters.
Considering further evidence from social neuroscience, we reasoned that an individual’s confidence in their understanding of the other’s behavior might reflect how well the individual’s “neural vocabulary” is suited to decode and interpret the target’s behavior. Neurobiological accounts of social cognition suggest that when humans evaluate the inner state of another person, they implicitly use neural representations of their own states as reference (e.g., 25–28), and empirical studies have provided evidence that is consistent with this idea (e.g., 29–39). Thus, we predicted that an individual’s intrinsic confidence that they correctly understood another person’s affective state would reflect how well the target’s behavior matches the observer’s neural representation of the underlying state.
To examine the role of mutual understanding in interpersonal attraction, we conducted two experiments, a behavioral experiment (experiment I) and a combined behavioral–fMRI (functional magnetic resonance imaging) experiment (experiment II), with two independent samples of volunteers. Participants in both experiments were shown short video clips of six different female targets who experienced and facially expressed two different emotions, fear or sadness. After each video, participants were asked to judge the target’s affective state (fear or sadness) and to indicate how confident they were that they had correctly understood the target’s affective state (Fig. 1). Before and after emotion observation, we assessed the participants’ interpersonal attraction toward each target at two levels. First, we asked participants to enlarge a small picture of each target on a computer screen by repeatedly pressing a button until the picture had a size that corresponded to a subjectively pleasant conversational distance. The number of button presses executed to enlarge the picture of a given target was taken as a measure of the participant’s approach behavior toward that target (modified after ref. 6). Next, participants were given three statements about each target and asked to indicate how much they agreed with each of these statements (Table 1). This way, we derived a motivational–behavioral measure (approach behavior) and a self-report measure of each participant’s interpersonal attraction toward each target before and after emotion observation. This experimental design allowed us to link the participants’ confidence that they correctly understood the targets’ affective state to individual changes in interpersonal attraction. In experiment II, we additionally measured the participants’ brain activity during emotion observation. This enabled us to identify neural activity in the brain’s reward system that predicted the participants’ self-reported confidence in their emotion judgments and to link this neural activity to changes in interpersonal attraction. Finally, participants in experiment II completed an emotion experience task immediately after the emotion observation task, in which they were asked to experience and express the two emotions (fear and sadness) themselves, using instructions similar to those used when recording the videos of the targets (37). This permitted us to compare the patterns of neural activity elicited during emotion observation to those associated with the observer’s own emotional experience. We refer to the level of correspondence between these patterns as neural observation–experience matching (NOE matching).
Fig. 1.
Experimental design. To measure changes in interpersonal attraction during emotion observation, the participants’ attraction toward each target was assessed before (Left) and after (Right) emotion observation. (Middle) A sample emotion observation trial is shown (time intervals and screen shots are taken from experiment II). A trial consisted of a facial observation period during which a short video clip of the target experiencing fear or sadness was shown, followed by a fixation cross, an emotion judgment period, and a confidence rating period. Responses were given with a button box and fed back to the observer (orange frame around the selected emotion and orange dot on the confidence scale).
Table 1.
Statements used to assess interpersonal attraction
Original statement (German) | Translation
Willingness to meet
 Ich würde Sie gerne im echten Leben treffen | I would like to meet her in real life
Expectation of intimate communication
 Ich habe das Gefühl, dass sie mich verstehen würde | I feel that she would understand me
 Ich glaube, dass ich mit ihr über persönliche Probleme reden könnte | I think I could discuss personal problems with her
Participants were asked to indicate how much they agreed with each statement on a Likert-type 7-point scale ranging from 1 (not at all) to 7 (definitely). The first statement estimated the participant’s overt willingness to meet the target; the last two statements were averaged to estimate the participant’s expectation that they could have an intimate communication with the target.
Experiment I tested whether observing another person’s affective behavior can lead to individual changes in interpersonal attraction, and whether these changes are predicted by the observer’s subjective confidence that they correctly understood the other’s affective state. Experiment II validated the findings of experiment I and additionally investigated the neural mechanisms that might mediate between affective understanding and interpersonal attraction. To this end, we first examined whether the participants’ subjective confidence in their emotion judgments was predicted by neural activity in the reward system of their brains. Second, we used multivoxel pattern analysis (MVPA) (40) to examine the relation between subjective confidence, neural confidence signals, and NOE matching. Both analyses were performed in a cross-validated hierarchical approach, using whole-brain analyses to identify relevant brain regions, followed by region-of-interest (ROI) analyses to examine the relation between neural signals within these regions and individual changes in interpersonal attraction (Fig. S1).
Fig. S1.
Graphical summary of data analysis steps. (i) Experiment I and experiment II tested whether an observer’s subjective confidence that they correctly understood a target’s affective state predicted postobservation attraction scores (red arrow) if all variance that could be explained by preobservation attraction scores was removed from both variables (dashed blue arrows). (ii–v) Experiment II additionally examined the neural mechanisms that might mediate between subjective confidence and changes in interpersonal attraction. (ii) First, regions in the brain’s reward system were identified whose activity covaried with subjective confidence (whole-brain analysis; red arrow). (iii) Second, we tested whether these neural confidence signals predicted changes in interpersonal attraction (ROI analysis; red arrow) if all variance that could be explained by preobservation attraction was removed from both variables (dashed blue arrows). (iv) Third, brain regions were identified where NOE matching covaried with subjective confidence and/or neural confidence signals in the reward system (whole-brain analyses; red arrows). (v) Fourth, we tested whether NOE matching in the anterior insula predicted changes in interpersonal attraction (ROI analysis; red arrow) if all variance that could be explained by preobservation attraction was removed from both variables (dashed blue arrows). Red double lines indicate cross-validated effects. Red circles indicate ROIs. Note that the independence of all ROI analyses was ensured by cross-validation (please see Materials and Methods for details).

Results

Experiment I.

Emotion judgments and self-reported confidence.

In the behavioral experiment, observers (21 women, 19 men) correctly labeled the target’s emotional state in the majority of trials (hit rate 74 ± 3.6% [mean ± SEM], T[39] = 21, P < 0.001), and the observers’ average self-reported confidence for a given target closely reflected the actual correctness of their emotion judgments for this target (mean r = 0.71 ± 0.07 [back-transformed mean of Fisher-transformed correlation coefficients], T[39] = 9.8, P < 0.001). This indicates that observers had a valid internal model that allowed them to infer the targets’ affective state and to accurately estimate the correctness of their understanding. A two-way ANOVA with between-subject factor observer (40 levels), within-subject factor target (6 levels), and the observers’ self-reported confidence as dependent variable (n = 40 × 6 × 16 = 3,840 trials) revealed, in addition to a significant main effect of target (F[5,195] = 95, P < 0.001, eta2 = 0.71), a significant observer-by-target interaction (F[195,3600] = 2.4, P < 0.001, eta2 = 0.12). This indicates that an observer’s subjective confidence that they correctly understood a target’s affective state depended not only on the observer’s general ability to recognize facial emotions or the target’s general ability to express their emotion, but also on how well a particular observer could “tune in” to a particular target’s affect.
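The confidence–accuracy analysis described above can be summarized in a few lines of code. The following Python sketch is illustrative only: randomly generated numbers stand in for the real per-target confidence ratings and hit rates, and variable names such as mean_confidence and hit_rate are hypothetical. It computes each observer's correlation between confidence and correctness across targets, averages the coefficients after Fisher transformation, and tests them at the random-effects group level, mirroring the reporting of back-transformed mean r and T values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_observers, n_targets = 40, 6

# Hypothetical per-target summaries: mean confidence rating and proportion of
# correct emotion judgments for each observer-target pair (stand-ins for real data).
mean_confidence = rng.uniform(1, 5, size=(n_observers, n_targets))
hit_rate = np.clip(0.5 + 0.08 * mean_confidence
                   + rng.normal(0, 0.08, size=(n_observers, n_targets)), 0, 1)

# Per-observer correlation between confidence and correctness across the six targets.
r_per_observer = np.array([stats.pearsonr(mean_confidence[i], hit_rate[i])[0]
                           for i in range(n_observers)])

# Average after Fisher transformation, back-transform for reporting, and test
# against zero at the random-effects group level (one-sample t-test).
z = np.arctanh(r_per_observer)
t, p = stats.ttest_1samp(z, 0.0)
print(f"back-transformed mean r = {np.tanh(z.mean()):.2f}, "
      f"T[{n_observers - 1}] = {t:.1f}, P = {p:.3g}")
```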

Changes in interpersonal attraction.

Overall, observers executed more button presses to “approach” the observed targets after than before emotion observation (T[39] = 3.7, P < 0.001; please see Table S1 for results for self-reported interpersonal attraction). This is in line with previous research that shows that familiarity increases interpersonal attraction (e.g., 41). However, the critical question in the current study was whether emotion observation can lead to changes in interpersonal attraction that differ between observers, such that one observer feels more attracted to a particular target after emotion observation whereas another observer feels less attracted to the same target, even though both observers saw exactly the same behavior. Intriguingly, this was the case. Between-subject variability of the number of button presses observers executed to approach a given target (measured as the width of the 66% interval of button presses for each target) was significantly larger after emotion observation (mean width of the 66% interval for each target, 7.4 ± 0.6 button presses) than before emotion observation (mean width of the 66% interval for each target, 6.2 ± 0.4 button presses) (T[5] = 2.7, P = 0.040; Fig. 2; please see Table S1 for results for self-reported interpersonal attraction).
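As a concrete illustration of this variability measure, the Python sketch below uses simulated button-press counts; the choice of the 17th and 83rd percentiles as the bounds of the central 66% interval is our assumption, since the exact computation is not spelled out here. It derives the per-target interval width before and after observation and compares the two with a paired t-test across the six targets.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_observers, n_targets = 40, 6

# Simulated button-press counts per observer and target, pre- and postobservation.
pre = rng.poisson(10, size=(n_observers, n_targets)).astype(float)
post = pre + rng.normal(1.0, 3.0, size=(n_observers, n_targets))

def width_66(x):
    """Width of the central 66% interval across observers (assumed here to be the
    17th to 83rd percentile), computed separately for each target."""
    lo, hi = np.percentile(x, [17, 83], axis=0)
    return hi - lo

width_pre, width_post = width_66(pre), width_66(post)

# Paired t-test across targets (df = n_targets - 1 = 5), as in the main text.
t, p = stats.ttest_rel(width_post, width_pre)
print(f"mean width pre = {width_pre.mean():.1f}, post = {width_post.mean():.1f}, "
      f"T[{n_targets - 1}] = {t:.1f}, P = {p:.3f}")
```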
Fig. 2.
Interindividual variability in the observers’ approach behavior toward the targets before and after emotion observation. Data were first centered, separately for each target and pre- and postobservation runs (i.e., the mean number of button presses executed for each target in each run was set to 0), and then averaged across all targets, separately for pre- and postobservation runs.
Table S1.
Changes in interpersonal attraction during emotion observation
Measure | Target 1 | Target 2 | Target 3 | Target 4 | Target 5 | Target 6 | Mean | T (postobservation minus preobservation)
Level of interpersonal attraction before and after emotion observation
 Experiment I (n = 40)
  Approach behavior
   Preobservation | 9.1 | 10.5 | 10.6 | 10.7 | 8.6 | 8.7 | 9.7 |
   Postobservation | 11.5 | 12.4 | 12.7 | 12.5 | 8.7 | 9.3 | 11.2 | 3.7*
  Willingness to meet
   Preobservation | 3.8 | 4.1 | 4.4 | 4.6 | 2.6 | 2.6 | 3.7 |
   Postobservation | 4.0 | 4.3 | 4.2 | 4.7 | 2.1 | 2.8 | 3.7 | 0.2
  Expected intimacy of communication
   Preobservation | 3.9 | 3.8 | 4.1 | 4.1 | 2.7 | 2.5 | 3.5 |
   Postobservation | 4.0 | 4.4 | 4.5 | 4.7 | 2.4 | 2.8 | 3.8 | 2.5*
 Experiment II (n = 52)
  Approach behavior
   Preobservation | 10.8 | 12.9 | 12.5 | 13.3 | 10.4 | 10.4 | 11.7 |
   Postobservation | 12.0 | 14.8 | 13.4 | 14.4 | 10.7 | 10.8 | 12.7 | 3.2*
  Willingness to meet
   Preobservation | 4.3 | 4.3 | 4.3 | 4.5 | 2.9 | 3.0 | 3.9 |
   Postobservation | 4.5 | 4.7 | 4.5 | 5.0 | 2.9 | 3.2 | 4.1 | 2.8*
  Expected intimacy of communication
   Preobservation | 3.9 | 4.0 | 4.1 | 4.3 | 2.9 | 2.8 | 3.6 |
   Postobservation | 4.1 | 4.5 | 4.2 | 4.5 | 2.6 | 2.9 | 3.8 | 1.9*
Interindividual variability of interpersonal attraction before and after emotion observation (width of 66% interval)
 Experiment I (n = 40)
  Approach behavior
   Preobservation | 6.7 | 7.0 | 6.7 | 7.0 | 5.0 | 5.0 | 6.2 |
   Postobservation | 8.4 | 8.7 | 9.0 | 6.7 | 5.0 | 6.7 | 7.4 | 2.7*
  Willingness to meet
   Preobservation | 3.0 | 4.0 | 2.7 | 1.7 | 3.0 | 2.0 | 2.7 |
   Postobservation | 3.0 | 3.0 | 3.0 | 2.7 | 2.0 | 2.0 | 2.6 | −0.4
  Expected intimacy of communication
   Preobservation | 3.5 | 2.8 | 3.0 | 3.0 | 2.4 | 2.5 | 2.8 |
   Postobservation | 2.0 | 3.0 | 3.0 | 3.0 | 2.5 | 2.0 | 2.6 | −1.1
 Experiment II (n = 52)
  Approach behavior
   Preobservation | 8.7 | 9.3 | 7.0 | 8.7 | 6.0 | 7.7 | 7.9 |
   Postobservation | 9.7 | 10.0 | 9.7 | 9.7 | 7.0 | 8.7 | 9.1 | 4.2*
  Willingness to meet
   Preobservation | 3.0 | 2.0 | 2.7 | 3.0 | 3.7 | 2.0 | 2.7 |
   Postobservation | 3.0 | 3.0 | 3.0 | 2.0 | 2.0 | 2.0 | 2.5 | −0.6
  Expected intimacy of communication
   Preobservation | 2.0 | 2.9 | 2.7 | 2.5 | 2.5 | 2.5 | 2.5 |
   Postobservation | 2.3 | 2.8 | 2.5 | 2.8 | 2.5 | 2.5 | 2.6 | 0.6
Approach behavior is measured as number of button presses. Self-reported attraction is measured on a 7-point visual scale (please see main text).
* Significant differences between pre- and postobservation values (P < 0.05, two-tailed; level of interpersonal attraction, df = 39 in experiment I and df = 51 in experiment II; interindividual variability, df = 5 in both experiments).

Self-reported confidence and individual changes in interpersonal attraction.

Next, we asked whether the observed changes in interpersonal attraction were predicted by the observers’ confidence that they correctly understood the targets’ affective state. To test this, we computed partial correlations between each observer’s confidence ratings and postobservation attraction scores for each target. Importantly, to ensure that this correlation was not driven by the observer’s initial attraction toward the targets, any variance that could be explained by preobservation attraction (Table S2) was removed from confidence ratings and postobservation attraction scores. This revealed significant positive partial correlations between the observer's confidence and postobservation attraction both at the behavioral–motivational level (approach behavior) and at the level of self-report (Table 2). To further control for potential differences between targets in physical attractiveness and facial expressivity, we removed average confidence ratings and average attraction scores for each target (“general target effects,” ref. 21) from each individual dataset and performed the same partial correlation analyses as above. As predicted, partial correlations between self-reported confidence and self-reported interpersonal attraction remained significant after general target effects had been removed (Table 2). This indicates that the link between confidence and changes in interpersonal attraction cannot be fully explained by a target’s general attractiveness or expressivity.
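The partial-correlation logic can be made explicit with a short sketch. The Python code below uses simulated data, and residualization via a simple per-observer linear regression is our assumption about the implementation. It removes the variance explained by preobservation attraction from both confidence and postobservation attraction, correlates the residuals within each observer, and tests the Fisher-transformed coefficients at the group level; the final two lines show how general target effects can additionally be removed by subtracting each target's mean across observers before rerunning the same analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_observers, n_targets = 40, 6

# Simulated per-observer, per-target scores (stand-ins for real data).
pre_attraction = rng.normal(10, 3, size=(n_observers, n_targets))
confidence = rng.normal(3.5, 0.8, size=(n_observers, n_targets))
post_attraction = pre_attraction + 0.5 * confidence + rng.normal(0, 1, size=(n_observers, n_targets))

def residualize(y, x):
    """Remove the variance in y that is explained by x (ordinary least squares)."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

def partial_r(conf, pre, post):
    """Partial correlation of confidence and postobservation attraction,
    controlling both variables for preobservation attraction."""
    return stats.pearsonr(residualize(conf, pre), residualize(post, pre))[0]

r = np.array([partial_r(confidence[i], pre_attraction[i], post_attraction[i])
              for i in range(n_observers)])
t, p_two_tailed = stats.ttest_1samp(np.arctanh(r), 0.0)
print(f"mean r = {np.tanh(np.arctanh(r).mean()):.2f}, T = {t:.1f}, "
      f"P (one-tailed) = {p_two_tailed / 2:.3g}")

# Removing general target effects: subtract each target's mean across observers,
# then repeat the same partial-correlation analysis on the centered data.
confidence_centered = confidence - confidence.mean(axis=0)
post_attraction_centered = post_attraction - post_attraction.mean(axis=0)
```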
Table 2.
Partial correlations between self-reported confidence, neural confidence signals in the brain’s reward system, neural observation–experience matching in the anterior insula, and interpersonal attraction after emotion observation
Postobservation attraction
Partial correlations (first three data columns); residual partial correlations with general target effects removed (last three data columns); values are r (T)
Measure | Approach behavior | Willingness to meet | Expected intimacy of communication | Approach behavior | Willingness to meet | Expected intimacy of communication
Experiment I (n = 40)
 Self-reported confidence | 0.45* (4.5) | 0.58* (5.4) | 0.61* (7.5) | 0.03 (0.3) | 0.23* (1.7) | 0.24* (1.7)
Experiment II (n = 52)
 Self-reported confidence | 0.44* (4.5) | 0.45* (5.2) | 0.59* (6.9) | 0.17 (1.6) | 0.19* (1.7) | 0.30* (2.8)
 Confidence signals in VS | 0.20* (2.2) | 0.06 (0.1) | 0.12 (1.3) | 0.21* (2.0) | |
 Confidence signals in mOFC | 0.26* (3.1) | −0.03 (−0.3) | 0.01 (0.1) | 0.22* (2.8) | |
 NOE matching (cluster 1) | 0.10 (1.2) | 0.02 (0.2) | 0.04 (0.4) | | |
 NOE matching (cluster 2) | 0.11 (1.2) | 0.00 (0.0) | −0.02 (−0.2) | | |
Variance that can be explained by the observer’s initial interpersonal attraction toward the targets is removed from both variables in all analyses. Residual correlations (i.e., correlations after general target effects are removed from both variables) are only reported for significant main correlations. r, back-transformed average partial correlation coefficients; T, t values at random-effects group level; VS, ventral striatum. Please see Fig. 4 for the location of the two clusters in the anterior insula.
* Significant correlations (P < 0.05, one-tailed).
Correlations that remain significant in the split-half analysis (P < 0.05, one-tailed) (see SI Materials and Methods for details).
Table S2.
Correlations between initial interpersonal attraction and self-reported confidence, neural confidence signals in the brain’s reward system, and NOE matching in the anterior insula
Preobservation attraction
Correlations (first three data columns); residual correlations with general target effects removed (last three data columns); values are r (T)
Measure | Approach behavior | Willingness to meet | Expected intimacy of communication | Approach behavior | Willingness to meet | Expected intimacy of communication
Experiment I (n = 40)
 Self-reported confidence | 0.45* (5.4) | 0.61* (6.5) | 0.50* (6.4) | 0.04 (0.4) | −0.01 (−0.1) | 0.01 (0.1)
 Postobservation attraction | 0.64* (6.3) | 0.79* (14.8) | 0.64* (6.9) | 0.41* (4.4) | 0.64* (8.0) | 0.33* (3.1)
Experiment II (n = 52)
 Self-reported confidence | 0.49* (8.7) | 0.40* (5.5) | 0.43* (4.4) | 0.18* (2.5) | 0.02* (0.4) | 0.21* (2.5)
 Confidence signal in VS | 0.21* (2.6) | 0.14* (2.2) | 0.11 (1.6) | 0.12 (1.2) | 0.11 (1.6) | 0.06 (0.7)
 Confidence signal in mOFC | −0.01 (−0.2) | 0.11 (1.3) | 0.11 (1.2) | | |
 NOE matching (cluster 1) | −0.03 (−0.3) | −0.03 (−0.3) | −0.02 (−0.3) | | |
 NOE matching (cluster 2) | −0.15 (−2.3) | −0.13 (−1.2) | −0.09 (−2.0) | | |
 Postobservation attraction | 0.56* (9.7) | 0.68* (9.2) | 0.70* (11.8) | 0.28* (3.5) | 0.47* (5.1) | 0.56* (7.7)
Residual correlations (i.e., correlations after general target effects are removed from both variables) are only reported for significant main correlations. r, back-transformed average correlation coefficients; T, t values at random-effects group level; VS, ventral striatum.
* Significant effects (P < 0.05, one-tailed).

Experiment II.

Behavioral data: Emotion judgments, self-reported confidence, and individual changes in interpersonal attraction.

Behavioral data of experiment II largely replicated those of experiment I. Observers (28 women, 24 men) correctly labeled the target’s emotional state in the majority of trials (mean hit rate 75 ± 2.5%, T[51] = 30, P < 0.001), and their self-reported confidence closely reflected the actual correctness of their emotion judgments for a given target (mean back-transformed r = 0.42 ± 0.01, T[51] = 4.1, P < 0.001). Furthermore, there was a significant observer-by-target interaction in self-reported confidence similar to that observed in experiment I (two-way ANOVA with between-subject factor observer [52 levels], within-subject factor target [6 levels], and the observers’ confidence ratings as dependent variable [n = 52 × 6 × 16 = 4,992 trials]; main effect target, F[5,255] = 41, P < 0.001, eta2 = 0.45; observer-by-target interaction, F[255,4624] = 2.9, P < 0.001, eta2 = 0.14). As in experiment I, observers executed more button presses to approach the observed targets after than before emotion observation (T[51] = 2.7, P = 0.005), and between-subject variability of the number of button presses observers executed to approach a given target increased significantly from preobservation (mean width of the 66% interval for each target 7.8 ± 0.5 button presses) to postobservation (mean width of the 66% interval for each target 9.1 ± 0.5 button presses) (T[5] = 4.2, P < 0.001; Fig. 2; please see Table S1 for results for self-reported interpersonal attraction). Finally, the pattern of correlations between self-reported confidence and interpersonal attraction closely reflected that of experiment I, with partial correlations between self-reported confidence and self-reported interpersonal attraction remaining significant after general target effects had been removed from each individual dataset (Table 2).

Neural confidence signals in the brain’s reward system.

In the first step of the fMRI data analysis, we examined whether the observers’ subjective confidence that they correctly understood the target’s affective state was associated with neural activity in reward-related brain regions. For this, we used whole-brain univariate correlation analyses. First, we computed the correlation between each observer’s trial-by-trial confidence ratings and trial-by-trial neural activity during the facial observation period. This revealed a significant positive correlation between self-reported confidence and neural activity in the right ventral striatum (x = 18, y = 6, z = −15; T[51] = 5.7, P = 0.022, familywise error [FWE]-corrected at voxel level; Fig. 3 A and B). Second, we computed the correlation between each observer’s trial-by-trial confidence ratings and trial-by-trial neural activity during the emotion judgment period. This revealed a significant positive correlation between self-reported confidence and neural activity in the mOFC (x = −3, y = 39, z = −18; T[51] = 4.9, k = 503, P < 0.001, FWE-corrected at cluster level; Fig. 3 E and F) (please see Table S3 for brain regions outside the reward system where neural activity increased significantly with increasing subjective confidence).
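At its core, this analysis amounts to correlating trialwise confidence ratings with trialwise activity estimates at every voxel within each observer, and then testing the resulting maps across observers. The Python sketch below illustrates that logic with simulated data; the published analysis was presumably run on SPM-style GLM parameter estimates, and array names such as betas are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_observers, n_trials, n_voxels = 52, 96, 1000   # illustrative dimensions

# Simulated trialwise confidence ratings and trialwise activity estimates
# (one parameter estimate per trial and voxel); an effect is injected at voxel 0.
confidence = rng.integers(1, 6, size=(n_observers, n_trials)).astype(float)
betas = rng.normal(size=(n_observers, n_trials, n_voxels))
betas[:, :, 0] += 0.3 * (confidence - confidence.mean())

def voxelwise_r(conf, b):
    """First level: Pearson correlation of confidence with activity at every voxel."""
    conf_z = (conf - conf.mean()) / conf.std()
    b_z = (b - b.mean(axis=0)) / b.std(axis=0)
    return (conf_z[:, None] * b_z).mean(axis=0)

r_maps = np.array([voxelwise_r(confidence[i], betas[i]) for i in range(n_observers)])

# Second level: random-effects t-test on the Fisher-transformed maps across observers.
t_map, p_map = stats.ttest_1samp(np.arctanh(r_maps), 0.0, axis=0)
print(f"peak T = {t_map.max():.1f} at voxel {int(t_map.argmax())}")
```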
Fig. 3.
Confidence-related neural activity in the brain’s reward system and individual changes in interpersonal attraction. (A) Brain regions where neural activity during the facial observation period covaried with self-reported confidence. (B) Scatter plot illustrating the correlation between neural activity in the right ventral striatum and self-reported confidence. (C) Scatter plot illustrating the partial correlation between confidence-related neural activity in the right ventral striatum and the observer's postobservation approach behavior (variance that can be explained by preobservation approach behavior is removed). (D) Scatter plot illustrating the partial correlation between confidence-related neural activity in the right ventral striatum and the observer's postobservation approach behavior (variance that can be explained by preobservation approach behavior and general target effects are removed). (E) Brain regions where neural activity during the emotion judgment period covaried with self-reported confidence. (F) Scatter plot illustrating the correlation between neural activity in the mOFC and self-reported confidence. (G) Scatter plot illustrating the partial correlation between confidence-related neural activity in the mOFC and the observer's postobservation approach behavior (variance that can be explained by preobservation approach behavior is removed). (H) Scatter plot illustrating the partial correlation between confidence-related neural activity in the mOFC and the observer's postobservation approach behavior (variance that can be explained by preobservation approach behavior and general target effects are removed). Note: SPMs (height threshold T[51] = 5.5, P = 0.05, FWE-corrected at voxel level in A; height threshold T[51] = 3.2, extent threshold k = 100 voxels, P = 0.001, FWE-corrected at cluster level in E) are superimposed onto a rendered surface and coronal/axial sections of a T1-weighted map of a standard brain (MNI space). For the scatter plots in B and F, trialwise data of each observer (i.e., 96 data points) were z-standardized and rank-ordered according to BOLD parameter estimates and then averaged across observers, separately for each rank. For the scatter plots in C, D, G, and H, targetwise data of each observer (i.e., 6 data points) were z-standardized and rank-ordered according to BOLD parameter estimates and then averaged across observers, separately for each rank. Error bars represent SEMs.
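For reference, the rank-ordering procedure used for the scatter plots can be written compactly. The following Python sketch uses simulated data and hypothetical array names: it z-standardizes each observer's trialwise values, sorts them by the BOLD parameter estimates, and averages across observers separately for each rank, yielding the plotted means and SEMs.

```python
import numpy as np

rng = np.random.default_rng(5)
n_observers, n_trials = 52, 96

# Simulated trialwise BOLD parameter estimates and confidence ratings.
betas = rng.normal(size=(n_observers, n_trials))
confidence = 3 + 0.5 * betas + rng.normal(0, 1, size=(n_observers, n_trials))

def rank_ordered_average(x, y):
    """z-standardize each observer's data, rank-order trials by x (e.g., BOLD
    estimates), and average across observers separately for each rank."""
    xz = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    yz = (y - y.mean(axis=1, keepdims=True)) / y.std(axis=1, keepdims=True)
    order = np.argsort(xz, axis=1)
    xz_sorted = np.take_along_axis(xz, order, axis=1)
    yz_sorted = np.take_along_axis(yz, order, axis=1)
    sem = yz_sorted.std(axis=0) / np.sqrt(x.shape[0])
    return xz_sorted.mean(axis=0), yz_sorted.mean(axis=0), sem

x_mean, y_mean, y_sem = rank_ordered_average(betas, confidence)
print(x_mean[:3], y_mean[:3], y_sem[:3])
```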
Table S3.
Results of the whole-brain correlation analysis of neural activity during the facial observation period/during the emotion judgment period and self-reported confidence
Contrast/anatomical structure | Coordinate at peak | T value at peak | Total cluster size | Percent of cluster
Neural activity during the facial observation period
 Calcarine gyrus R | 18, −69, 6 | 6.2 | 32 |
 Lingual gyrus R | 15, −78, −6 | 6.3 | 86 |
 Supramarginal gyrus L | −57, −42, 30 | 7.1 | 92 |
 Supramarginal gyrus R | 54, −30, 30 | 6.1 | 20 |
 Insula L | −39, 6, 3 | 6.4 | 20 |
 Rolandic operculum R | 66, 6, 6 | 6.2 | 11 |
 Ventral striatum R | 18, 6, −15 | 5.7 | 3 |
Neural activity during the emotion judgment period
 Medial orbitofrontal gyrus L | −3, 39, −18 | 4.9 | 503 | 25%
 Medial superior frontal gyrus L | | | | 24%
 Gyrus rectus L | | | | 15%
 Middle temporal gyrus L | −63, −15, −9 | 4.7 | 149 | 79%
 Inferior temporal gyrus L | | | | 10%
 Paracentral lobule L | −33, −6, 21 | 4.8 | 419 | 26%
 Supplementary motor area R | | | | 17%
 Superior parietal gyrus L | | | | 13%
 Postcentral gyrus L | | | | 12%
Anatomical structures were labeled according to the automated anatomical labeling (AAL) atlas (62), except for the ventral striatum, which was defined according to ref. 13. Listed are all anatomical structures that contain at least 10% of all voxels of a cluster. Regions are ordered from posterior to anterior. For convenience, SPMs are thresholded at the same levels as in Fig. 3 (i.e., facial observation period, height threshold T = 5.5, P = 0.05, FWE-corrected at voxel level; emotion judgment period, height threshold T = 3.3, P = 0.001, extent threshold k = 100 voxels, P = 0.001, FWE-corrected at cluster level). Cluster sizes are the numbers of voxels (3 × 3 × 3 mm3) per cluster. L, left hemisphere; R, right hemisphere.

ROI analysis: Neural confidence signals in the brain’s reward system and individual changes in interpersonal attraction.

Having shown that the observer’s self-reported confidence reflected neural activity in the brain’s reward system, we next asked whether these confidence-related neural signals would predict changes in interpersonal attraction. For this, we averaged the neural activity within the two clusters in the ventral striatum and mOFC, separately for each observer and target, and performed a partial correlation analysis between this neural activity and the observer’s postobservation attraction scores for each target (with variance explained by preobservation attraction removed from both variables). This revealed significant positive partial correlations between confidence-related neural activity and the observer’s approach behavior in both clusters (Fig. 3 C and G and Table 2). Again, these partial correlations remained significant when general target effects (i.e., average levels of confidence-related neural activity and average attraction scores for each target across all observers) were removed from each individual dataset (Fig. 3 D and H and Table 2). The only brain region outside the reward system that showed a similar pattern of partial correlations was the lingual gyrus (Table S4).
Table S4.
Results of the partial correlation analysis of neural confidence signals outside the reward system and postobservation interpersonal attraction
Postobservation attraction
Partial correlations (first three data columns); residual partial correlations with general target effects removed (last three data columns); values are r (T)
Contrast/anatomical structure | Approach behavior | Willingness to meet | Expected intimacy of communication | Approach behavior | Willingness to meet | Expected intimacy of communication
Confidence-related neural activity during the facial observation period
 Calcarine gyrus R | 0.30* (3.1) | 0.35* (4.3) | 0.39* (4.0) | 0.14 (1.2) | 0.14 (1.6) | 0.07 (0.7)
 Lingual gyrus R | 0.42* (3.9) | 0.33* (4.4) | 0.38* (3.9) | 0.30* (2.7) | 0.21 (2.3) | 0.15 (1.6)
 Supramarginal gyrus L | 0.27* (3.1) | 0.21 (2.4) | 0.31* (3.3) | 0.08 (0.7) | 0.01 (0.1) | 0.01 (0.1)
 Supramarginal gyrus R | 0.30* (3.8) | 0.29* (3.2) | 0.51* (5.0) | 0.18 (1.7) | 0.14 (1.1) | 0.11 (1.0)
 Insula L | 0.24 (2.4) | 0.17 (2.1) | 0.21 (2.2) | | |
Confidence-related neural activity during the emotion judgment period
 Rolandic operculum R | 0.20 (2.0) | 0.17 (1.8) | 0.17 (1.5) | | |
 Middle/inferior temporal gyrus L | −0.02 (0.0) | −0.02 (0.0) | 0.19 (0.2) | | |
 Parietal cortex L | 0.06 (0.1) | 0.02 (0.0) | 0.11 (0.1) | | |
Variance that can be explained by the observer’s initial interpersonal attraction toward the targets is removed from both variables in all analyses. Residual correlations (i.e., correlations after general target effects are removed from both variables) are only reported for significant main correlations. r, back-transformed average correlation coefficients; T, t values at random-effects group level.
* Significant effects [P < 0.05, one-tailed, Bonferroni-corrected for five regions (corresponding to T > 2.5) and three regions (corresponding to T > 2.1), respectively].
To ensure that the observed partial correlations between confidence-related neural activity and postobservation attraction were not due to the fact that we used the observer’s confidence ratings (which we already knew predicted changes in interpersonal attraction) to identify clusters in the reward system that showed confidence-related neural activity, we performed a split-half cross-validation analysis (42, 43) (see SI Materials and Methods for details). This analysis replicated the significant partial correlations between confidence-related neural activity and postobservation attraction in the ventral striatum and mOFC (ventral striatum, mean back-transformed r = 0.17, T[51] = 1.9, P = 0.033; mOFC, mean back-transformed r = 0.23, T[50] = 2.9, P = 0.003) and the significant partial correlation between confidence-related neural activity and postobservation attraction after general target effects had been removed in the mOFC (mean back-transformed r = 0.11, T[50] = 1.7, P = 0.047). This indicates that the observed correlations between neural confidence signals in the reward system and interpersonal attraction cannot be explained by nonindependencies.

NOE matching.

In the next step of our fMRI data analysis, we asked whether the observer’s self-reported confidence and neural confidence signals are linked to the degree to which the patterns of neural activity elicited during emotion observation matched those associated with the observer's own emotional experience. For this, we used searchlight-based MVPA (44, 45), a technique that allows estimation of the level of correspondence between local patterns of neural activity within spherical neighborhoods (the “searchlights”) across the entire brain volume (we refer to this level of correspondence as neural observation–experience matching; please see Materials and Methods for details). NOE-matching maps were computed for each trial and observer. These maps were then subjected to whole-brain correlation analyses with (i) the observer’s trial-by-trial confidence ratings and (ii) the observer’s trial-by-trial confidence-related neural activity in the reward system (ventral striatum and mOFC, respectively). This revealed (i) a significant positive correlation between the observer’s self-reported confidence and NOE matching in a cluster in the anterior insula (x = −33, y = 21, z = −9; T[51] = 3.7, P = 0.001, FWE-corrected at cluster level) and (ii) a significant positive correlation between neural confidence signals in the mOFC and NOE matching in a second, adjacent, cluster in the anterior insula (x = −27, y = 30, z = −12; T[51] = 4.2, P = 0.001, FWE-corrected at cluster level) (Fig. 4; please note that the actual overlap of the two clusters in the anterior insula is much larger than shown in Fig. 4 because each voxel represents a 9-mm spherical searchlight). No significant correlation was observed between confidence-related activity in the ventral striatum and NOE matching (Table S5). As above, a split-half cross-validation analysis, performed to ensure that the correlation between NOE matching and confidence-related neural activity in the mOFC was not due to nonindependencies, replicated this effect (x = −27, y = 30, z = −12; T[51] = 3.5, P = 0.040, FWE-corrected at cluster level).
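The following Python sketch illustrates the core of the NOE-matching computation for a single observer and trial: within a spherical searchlight centered at each voxel, the local activity pattern elicited by observing the target is correlated with the pattern recorded while the observer experienced the same emotion. This is a simplified, stand-alone illustration with simulated arrays; the published analysis operated on preprocessed fMRI volumes with 9-mm searchlights and additional steps not shown here.

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (20, 20, 20)   # small illustrative voxel grid

# Simulated patterns: activity for one observation trial (e.g., a "fear" video)
# and the observer's own fear-experience pattern.
observation = rng.normal(size=shape)
experience = rng.normal(size=shape)

def sphere_offsets(radius_vox):
    """Voxel offsets lying within a sphere of the given radius (in voxels)."""
    r = int(np.ceil(radius_vox))
    grid = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    inside = (grid ** 2).sum(axis=0) <= radius_vox ** 2
    return grid[:, inside].T

def noe_matching_map(obs, own, radius_vox=3):
    """For each voxel, correlate the local observation pattern with the local
    experience pattern within a spherical searchlight."""
    offsets = sphere_offsets(radius_vox)
    out = np.full(obs.shape, np.nan)
    dims = np.array(obs.shape)
    for center in np.ndindex(obs.shape):
        coords = offsets + np.array(center)
        valid = np.all((coords >= 0) & (coords < dims), axis=1)
        idx = tuple(coords[valid].T)
        a, b = obs[idx], own[idx]
        if a.std() > 0 and b.std() > 0:
            out[center] = np.corrcoef(a, b)[0, 1]
    return out

noe_map = noe_matching_map(observation, experience)
print("mean NOE matching:", np.nanmean(noe_map))
```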
Fig. 4.
Neural observation–experience matching (NOE matching), self-reported confidence, and neural confidence signals in the mOFC. (A) Brain regions where NOE matching covaried with self-reported confidence (orange, cluster 1) and neural confidence signals in the mOFC (red, cluster 2), respectively. (B and C) Scatter plots illustrating the correlation between NOE matching in each cluster and self-reported confidence (orange)/neural confidence signals in the mOFC (red). Note: SPMs (height threshold T[51] = 3.2, extent threshold k = 10 voxels, P = 0.05, FWE-corrected at cluster level) are superimposed onto a rendered surface and coronal/axial sections of a T1-weighted map of a standard brain (MNI space). For the scatter plots in B and C, trialwise data of each observer (i.e., 96 data points) were z-standardized and rank-ordered according to NOE matching and then averaged across observers, separately for each rank. Error bars represent SEMs.
Table S5.
Results of the whole-brain correlation analysis of NOE matching and self-reported confidence/neural confidence signals in the mOFC
Contrast/anatomical structure | Coordinate at peak | T value at peak | Total cluster size | Percent of cluster
Self-reported confidence
 Insula L | −33, 21, −9 | 3.7 | 9 | 44%
 Inferior frontal gyrus L | | | | 44%
Neural confidence signals in mOFC
 Insula L | −27, 30, −12 | 4.2 | 8 | 50%
 Inferior frontal gyrus L | | | | 50%
 Middle cingulate cortex L | −3, −6, 45 | 3.6 | 7 | 86%
 Supplementary motor area | | | | 14%
 Inferior parietal gyrus L | −51, −27, 36 | 3.8 | 7 | 71%
 Supramarginal gyrus L | | | | 29%
 Fusiform gyrus L | −36, −48, −21 | 4.0 | 11 | 91%
Anatomical structures are labeled according to the AAL atlas (62). Listed are all anatomical structures that contain at least 10% of all voxels of a cluster. Regions are ordered from posterior to anterior. Height threshold T = 3.3, P = 0.001, FWE-corrected at cluster level. Cluster sizes are the numbers of voxels (3 × 3 × 3 mm3) per cluster. L, left hemisphere; R, right hemisphere.

ROI analysis: NOE matching and individual changes in interpersonal attraction.

Finally, we tested whether NOE matching in the anterior insula also predicted changes in interpersonal attraction directly. Interestingly, this was not the case (Table 2). This is in line with our hypothesis that NOE matching is not directly associated with changes in interpersonal attraction but that these changes are mediated by neural confidence signals in the brain’s reward system.

SI Materials and Methods

Instructions Given to Participants Before the Emotion Experience Task (Experiment II).

Participants were informed about the emotion experience task after they had completed the emotion observation task. This was done to ensure that the instruction for the emotion experience task did not increase the participants’ tendency to feel with the targets during emotion observation. However, to ensure that participants were not caught completely off guard by the instruction to submerge themselves in emotional situations, they were instructed to think about, and note down, a few fear- and sadness-provoking situations in a preexperiment phone call a few days before the actual experiment. During the actual fMRI scanning session, the following instruction was read to the participants after the emotion observation task had been completed:
Original German wording: “Im ersten Teil der fMRT Untersuchung haben Sie Videos gesehen, die wir von Studentinnen aufgenommen haben, die sich während einer fMRT Untersuchung in furchterregende oder traurige Situationen versetzt haben. Wir bitten Sie jetzt, das gleiche zu tun: Bitte versetzen Sie sich in furchterregende bzw. traurige Situationen und versuchen Sie dabei, die Emotion wirklich zu empfinden. Dabei können Sie die emotionalen Situationen benutzen, die Sie aufgeschrieben haben. Es gibt 4 Durchgänge. Jeder Durchgang besteht aus 4 emotionalen Phasen und 4 neutralen Phasen, in denen Sie sich entspannen sollen. Während der emotionalen Phasen sehen Sie das jeweilige Wort, also „Furcht“ oder „Trauer“. Während der neutralen Phasen sehen Sie ein Fixationskreuz. Wichtig ist, dass Sie in den Fixationsphasen entspannt sind und dass Sie sich nur in die emotionale Situation versetzen, solange das jeweilige Wort zu sehen ist. Ihren Gesichtsausdruck werden wir nicht filmen”.
English translation: “In the first part of the experiment you saw videos recorded of students who were putting themselves into a frightening or sad state during fMRI. Now we would like you to do the same. Please try and put yourself into frightening or sad states. It is important that you try to really experience the emotion. For this, you may use the situations you have noted down. There will be four experimental runs. Each run comprises four emotional periods and four neutral periods. During the emotional periods you will see the respective word, namely ‘fear’ or ‘sadness.’ During the neutral periods you will see a cross-hair. Please try to relax during the neutral periods and submerge yourself into the emotional situations only while you see the word ‘fear’ or ‘sadness.’ We will not videotape your facial expression.”

Cross-Validation Analysis.

Split-half cross-validation was performed for all correlation analyses and partial correlation analyses in experiment II that used confidence-related neural activity as one variable and that detected a significant effect (i.e., confidence-related neural activity in the ventral striatum–postobservation attraction, confidence-related neural activity in the mOFC–postobservation attraction, confidence-related neural activity in the mOFC–NOE matching) to ensure that these effects were not due to nonindependencies between self-reported confidence and confidence-related neural activity. In these analyses, we used the data of half of all participants to identify voxels that showed confidence-related neural activity and the data of the remaining participants to test for the correlations of interest, and vice versa (42, 43). For this purpose, participants were divided into two equally sized subsamples; one subsample comprised every other female participant starting with the first female participant and every other male participant starting with the second male participant, and the other comprised the remaining participants. SPMs were thresholded at a height threshold lowered by a factor of 1/√2 relative to the main analysis [i.e., T = 5.5/√2 = 3.9 for the ventral striatum and T = 3.2/√2 = 2.3 for the mOFC and insula; this accounted for the fact that the sample size for each subsample was half the sample size in the main analysis]. Suprathreshold voxels were identified within two predefined anatomical regions of interest (a 9-mm sphere centered at [12 9 −6] [table 1 in ref. 13] for the ventral striatum, and a 6-mm sphere centered at [0 45 −9] [table 1 in ref. 13] for the mOFC). This resulted in cluster sizes of k = 3 voxels/k = 3 voxels (cross-validation run 1/cross-validation run 2) in the right ventral striatum, and cluster sizes of k = 5 voxels/k = 33 voxels in the mOFC. To keep the number of voxels approximately equal across cross-validation runs, voxels that were more than 3 mm away from the center coordinate in the mOFC were discarded in run 2 (keeping 7 voxels). As in the main analyses, trialwise BOLD parameter estimates were extracted from these voxels, averaged separately for each target and participant, and used for correlation analyses.
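To make the split rule and the threshold adjustment concrete, here is a small Python sketch; the participant list is fabricated for illustration, and only the assignment rule and the 1/√2 scaling are taken from the description above.

```python
import numpy as np

# Fabricated enrollment-ordered sex labels; the real study had 52 participants.
sexes = ["F", "M", "F", "F", "M", "M", "F", "M", "F", "M"]

def split_half_by_sex(sexes):
    """Subsample A: every other female starting with the first female and every
    other male starting with the second male; subsample B: the remaining participants."""
    females = [i for i, s in enumerate(sexes) if s == "F"]
    males = [i for i, s in enumerate(sexes) if s == "M"]
    sub_a = set(females[0::2]) | set(males[1::2])
    sub_b = set(range(len(sexes))) - sub_a
    return sorted(sub_a), sorted(sub_b)

sub_a, sub_b = split_half_by_sex(sexes)
print("subsample A:", sub_a, "subsample B:", sub_b)

# Height thresholds for the subsample SPMs are lowered by a factor of 1/sqrt(2)
# relative to the main analysis, since each subsample is half the size.
for t_main in (5.5, 3.2):
    print(f"main-analysis T = {t_main} -> subsample T = {t_main / np.sqrt(2):.1f}")
```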

Discussion

The goal of this study was to examine whether the human brain possesses a neural mechanism that attracts individuals to other individuals whose nonverbal signals they can easily understand. To pursue this goal, we conducted two experiments. In line with our first prediction, data of experiment I and behavioral data of experiment II show that an individual’s interpersonal attraction toward another person can change after a few minutes of emotion observation, depending on the individual’s subjective confidence that they correctly understood the other’s affective state. fMRI data from experiment II provide an initial understanding of the neural processes that mediate between subjective understanding and interpersonal attraction. First, we found that individual changes in interpersonal attraction were predicted by confidence-related neural signals in the ventral striatum and the mOFC, core regions of the brain’s reward system (12, 13). Second, we found that both the observer’s subjective confidence and neural confidence signals in the mOFC covaried with the degree of similarity between patterns of neural activity elicited during emotion observation and those associated with the observer’s own emotional experience (NOE matching). This suggests that an individual’s confidence in their interpersonal judgments of affect, and ensuing changes in interpersonal attraction, is partly determined by how well the other person’s affective behavior matches the observer’s neural representation of the underlying state.

Confidence Signals in the Brain’s Reward System.

The first important finding of the current study is that confidence-related neural activity in the brain’s reward system can act as an intrinsic reward signal, predicting individual changes in interpersonal attraction. Previous studies have shown that neural activity in the ventral striatum signals internal confidence when subjects make perceptual judgments about physical stimuli such as circles, lines, and moving dots (23, 24). The current study shows that neural activity in the ventral striatum/mOFC covaries with subjective confidence when participants try to infer another person’s current affective state from their facial expression. This underlines a modality-independent role of the ventral striatum/mOFC in signaling confidence. Furthermore, confidence-related neural activity in the ventral striatum/mOFC predicted changes in the observer’s interpersonal attraction toward the target, providing behavioral evidence that confidence-related activity in the brain’s reward system can act as a positive reinforcement signal (24).
Three details of these findings are worth further discussion. First, we observed a temporal dissociation of neural confidence signals in the ventral striatum and in the mOFC: Confidence-related neural activity in the ventral striatum occurred during the facial observation period of each trial, whereas confidence-related neural activity in the mOFC occurred later, during the emotion judgment period. This is in line with previous work that has shown a similar temporal dissociation of neural activity in the ventral striatum/mOFC during face evaluation, which has led to the suggestion that the mOFC has a particular role in holding the outcome of stimulus evaluations online for further processing (9).
Second, neural confidence signals in both the ventral striatum and the mOFC were more closely associated with subsequent changes in the observer’s approach behavior toward the target than with changes in self-reported interpersonal attraction, whereas the observer’s subjective confidence that they correctly understood the target’s affective state was more closely associated with changes in self-reported interpersonal attraction. This is in line with previous studies that have stressed the role of the ventral striatum in motivated behavior (6) and suggests a partial dissociation between neural processes underlying motivational–behavioral and cognitive components of interpersonal attraction.
Third, confidence-related activity in the mOFC, but not in the ventral striatum, reflected the degree of similarity between patterns of neural activity elicited during emotion observation and those associated with the observer’s own emotional states (NOE matching). Again, this supports a particular role of the mOFC in holding the outcome of stimulus evaluations online for further processing (9).

“Common Coding” and Success of Affective Communication.

The second important finding of the current study is that both the observer’s subjective confidence and neural confidence signals in the mOFC reflected the level of correspondence between patterns of neural activity elicited in the anterior insula during emotion observation and those associated with the observer’s own emotional experience (NOE matching). This extends previous studies that observed overlapping activity in the anterior insula when participants experienced and observed pain (46, 47), disgust (30), or joy (32) and more recent studies that used MVPA to examine whether one’s own pain and emotional experience and another person’s pain and emotional experience are encoded in similar patterns of neural activity (37, 38).
Importantly, the results of the current study not only provide evidence that confidence can signal correspondence, they also indicate that confidence and correspondence covary across individual observer–target pairs. This provides empirical evidence for theoretical models of social interaction and communication that propose that the more similar the observer’s and the target’s internal model of a given behavior, the easier it should be for the observer to understand the target’s inner state and to react accordingly (25, 48).
A similar link between correspondence of neural activity and success of communication has been observed in the medial prefrontal cortex (mPFC). A study using pseudohyperscanning (a technique where a “sender” and a “perceiver” are scanned one after the other in the same scanner but are connected by audio or video recordings such that their brain activity can be temporally aligned after scanning) showed that neural activity in the mPFC is time-locked between speakers and listeners involved in verbal communication. Strikingly, the listener’s semantic understanding of the story told by the speaker varied as a function of the degree to which neural activity in the mPFC of the listener’s brain at time point t1 predicted the speaker’s neural activity in that region at time point t2 (49). However, in that study, all stories were told by a single speaker, so it remains unclear whether there was a specific listener-by-speaker interaction, similar to the observer-by-target interaction observed in the current study.

Success of Communication and Interpersonal Attraction.

Until recently, neuroscientific research into interpersonal attraction has been guided by the view that an individual’s primary goal when evaluating other individuals must be to identify potential mating partners who possess high genetic fitness and fertility (e.g., 11, 50). This research builds on a large literature that links physical attractiveness to genetic fitness (for a review, see, e.g., ref. 51). However, for species that live in complex environments that require social interaction and cooperation to maximize reproductive success, being able to identify the right cooperation partners might be equally important. The current study provides evidence that potential cooperation partners qualify as “right” not only by their willingness and competence to cooperate (14–17) (for a theoretical account see, e.g., ref. 52) but also by the degree to which their communication signals can be reliably decoded by the other individual. Importantly, unlike interpersonal attraction due to a target’s fitness-signaling physical features, which seems to be fairly consistent across perceivers (53, 54), the confidence-dependent adjustment of interpersonal attraction found in the current study appears to be specific to particular interaction partners. Indeed, observers showed more disagreement about which target they felt attracted to after than before emotion observation. In social psychology it has long been recognized (21) that attraction between individuals is determined not only by general target effects (e.g., a target’s physical attractiveness) but also by specific perceiver-by-target effects (relational effects), such as the match we describe here between a target’s affective behavior and a perceiver’s neural vocabulary. Interestingly, it has been suggested that such interaction partner-specific effects could underlie the forming of social cliques within larger groups (1, 55). The current study provides evidence that the brain’s reward system, signaling how well one’s neural vocabulary is suited to decode another person’s behavior, might play an important role in these social processes.

Conclusion.

In sum, we have shown that subjective understanding during social interaction can modulate interpersonal attraction. Interestingly, the findings of the current study suggest that the neural mechanisms underlying individual adjustments of interpersonal attraction during social encounters might act through internal reward signals that are partly independent of external feedback, which makes them perhaps less prone to cheating by potential cooperation partners. To investigate the interaction between intrinsic confidence signals and other—honest or manipulative—signals sent back and forth between communication partners and to examine the neural determinants of the dynamics of human social relations in larger groups (“social connectomes”) remain challenging tasks for future studies. The current study suggests that mutual understanding is an important factor in interpersonal attraction, and that further research into the role of a common neural vocabulary in interpersonal attraction will lead to a better understanding of the neurobiological factors that define human social relations.

Materials and Methods

Participants.

Forty volunteers (21 women, 19 men, all Caucasian, mean age 22.3 y, range 18–30 y) completed experiment I, and 54 volunteers completed experiment II. In experiment II, data of two participants were discarded because of estimated head movements >3 mm within a functional imaging run. The final sample in experiment II comprised 52 participants (28 women, 24 men, all right-handed and Caucasian, mean age 25.3 y, range 18–35 y). Participants reported no history of neurological or psychiatric disorders and had normal or corrected-to-normal vision. All participants gave written consent before participation and both studies were approved by the local ethics committee (Universität zu Lübeck).

Stimuli.

Videos of women experiencing fear or sadness were recorded in a previous fMRI study in which participants were asked to imagine and submerge themselves into a cued emotional situation and to facially express their feeling to their romantic partner (36). Using prerecorded videos of women who experienced and facially expressed fear and sadness toward their romantic partner ensured that all participants saw exactly the same behavior, and allowed us to exclude the possibility that individual changes in the participants’ interpersonal attraction toward the targets were due to differences in the target’s behavior toward different participants. Women were chosen as targets because women have been shown to express their emotions more accurately than men (56, 57). For the current study, videos of fear and sadness of six different women (all Caucasian, age 20–25 y) were selected. Videos were cut into short clips, each covering the first 8 s of a 20-s emotional period. The final set consisted of 48 different video clips (4 videos for each target and emotion). Each of the 48 emotion video clips was shown twice, resulting in a total of 96 emotion observation trials per participant. For preexperiment familiarization with each target and the assessment of interpersonal attraction (see below), a still picture of each target was cut from the original recordings, showing the target’s face during a 20-s rest period.

Cover Story and Preexperiment Familiarization.

Experimental procedures were similar in both studies (Fig. 1). Upon arrival in the laboratory, participants were told that the aim of the study was to investigate the relation between response times and the neural processing of faces and emotional expressions. Participants were then seated in front of a computer screen with headphones and a chin rest to avoid distraction and to ensure that they viewed all facial stimuli at the same distance. To support the cover story and to familiarize participants with the targets, the assessment of interpersonal attraction was preceded by a motor task in which participants were required to press one of two response buttons in response to a visual cue (an arrow pointing to the left or to the right, or a negative or positive word) as quickly as possible. Response time trials were intermixed with a total of 96 short presentations (200 ms) of still pictures of the targets (16 presentations per target), and targets were fully balanced over arrows and words, so that participants were well familiarized with each target after completion of the motor task. Participants in experiment I completed all parts of the study on the same computer screen; participants in experiment II completed all parts except emotion observation and emotion experience (which were performed during fMRI) on the same computer screen.

Assessment of Interpersonal Attraction.

First, to assess the participants’ interpersonal attraction toward each target at the motivational–behavioral level, participants were asked to imagine that they would approach targets, one after the other, for a casual conversation. At the beginning of each trial, a small picture of the target appeared on the computer screen (about 40% of the original picture size) and participants were asked to increase the size of the picture by repeatedly pressing a button (increase about 4% per button press, no decrease button) until a pleasant conversational distance was reached. This task is a modified version of a task originally introduced by Aharon and colleagues (6) to measure the reward value of faces, except that the task used in the current study additionally mimics approach behavior by increasing the size of the target with each button press.
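The following minimal sketch illustrates how such an approach measure can be derived. It assumes that each button press enlarges the picture multiplicatively by about 4% of its current size; this scaling rule and the function name are illustrative assumptions, not details taken from the study. The number of presses until the participant stops serves as the behavioral index of attraction.

```python
def picture_scale(n_presses, start=0.40, step=0.04):
    """Relative picture size after n button presses, starting at 40% of the original size.
    Assumes a multiplicative ~4% increase per press (illustrative assumption)."""
    return start * (1.0 + step) ** n_presses

# Example: after 23 presses the picture has grown from 40% to roughly 99% of its original size.
print(round(picture_scale(0), 3), round(picture_scale(23), 3))
```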
Second, to assess the participants’ self-reported attraction toward each target, they were shown a still picture of each target (1 s) followed by three statements about their interpersonal attraction toward the target. The statements were adapted from the “social attraction” items in McCroskey and McCain (58) and related to the participant’s subjective motivation to meet the target in real life (willingness to meet) and the participant’s expectation that they could have an intimate communication with the target (expectation of intimate communication) (Table 1). Participants were asked to indicate how much they agreed with each statement on a Likert-type 7-point visual scale ranging from 1 (not at all) to 7 (definitely) by pressing the corresponding key on a keyboard. In both tasks, targets were presented in different random orders before and after emotion observation.

Emotion Observation.

In experiment I, emotion observation was divided into four runs. During each run, 24 video clips, balanced across the six targets and two emotions, were presented in randomized order. Each video clip was followed by an emotion judgment question [“Hat sie Furcht oder Trauer empfunden?” (“Did she feel fearful or sad?”)]. After the participant had entered their emotion judgment by pressing the corresponding button on the keyboard, a confidence question [“Wie sicher bist Du, dass sie Furcht/Trauer empfunden hat?” (“How confident are you that she felt fearful/sad?”)] and a 5-point visual scale ranging from 1 (I am guessing) to 5 (I am absolutely sure) appeared on the screen. Responses were entered by pressing the corresponding number on the keyboard. The orientation of the scale (increasing confidence values from left to right or right to left) was balanced across participants. Each trial terminated with an intertrial interval of 1 s, during which a fixation cross was shown.
In experiment II, emotion observation was divided into eight runs. During each run, 12 video clips, balanced across the six targets and two emotions, were presented in randomized order. Each video clip was followed by a fixation cross (1 s), an emotion judgment screen (2 s), and a confidence screen (3 s). The emotion judgment screen showed the words “Trauer” (sadness) and “Furcht” (fear) side by side at the center of the screen, indicating that the participant should convey their emotion judgment by pressing the response button in their left or right hand, respectively. The order of emotion words (left or right) was balanced across targets and emotions within participants. After the participant had entered their response, an orange frame appeared around the chosen emotion word, providing the participant with feedback about their response. The confidence screen showed a five-dot visual scale ranging from 1 (I am guessing) to 5 (I am absolutely sure). An orange dot at the central position of the scale indicated the starting position of the cursor. Participants were asked to move the orange dot to the left or to the right by pressing the button in their left or right hand, respectively, to indicate their confidence in their emotion judgment. The orientation of the scale (increasing confidence from left to right or right to left) was balanced across participants. Each trial terminated with an intertrial interval of 8 or 10 s, during which a fixation cross was shown (Fig. 1).
Stimulus presentation and response logging were controlled with Presentation software (Neurobehavioral Systems).

Emotion Experience.

After completion of the emotion observation part, participants in experiment II completed four additional fMRI runs during which they were asked to experience and express fear and sadness themselves. Participants were informed that the experimental setup during this part of the experiment would be very similar to that for the women they had just observed, except that their facial expression would not be recorded, and that they would be asked to submerge themselves into frightening or sad situations and to feel and express their feelings as soon as they saw the corresponding word [Furcht (fear) or Trauer (sadness)] on the screen (see SI Materials and Methods for details). Each run (two runs per emotion) comprised four emotional periods (20 s each) and five interspersed rest periods (20 s each), during which participants were asked to relax. The order of emotions was balanced across participants.

MRI Data Acquisition.

MRI data were acquired on a 3-T scanner (Siemens MAGNETOM Trio). A T1-weighted magnetization-prepared rapid gradient-echo (MPRAGE) image [176 sagittal slices, resolution 1 × 1 × 1 mm3, field of view (FOV) 256 × 256 mm2, flip angle 8°, inversion time 1,100 ms], used for spatial normalization of individual data, and a T2-weighted gradient-echo image [39 axial slices per volume, slice thickness 3 mm + 1-mm gap, interleaved order, in-plane resolution 3 × 3 mm2, FOV 192 × 192 mm2, echo time 1 (TE1) 5.19 ms, TE2 7.65 ms, repetition time (TR) 425 ms], used to compute individual field maps for correction of image distortions, were obtained from each participant before functional imaging. One hundred forty-five T2*-weighted echoplanar images (EPIs) covering the whole brain were acquired during each emotion observation run, and 96 EPIs were acquired during each emotion experience run (35 axial slices per volume, slice thickness 4 mm + 0.4-mm gap, interleaved order, in-plane resolution 3 × 3 mm2, FOV 192 × 192 mm2, TE 30 ms, TR 2,000 ms, generalized autocalibrating partially parallel acquisition, acceleration factor 2). Functional runs were preceded by five functional images that were not included in the analysis to allow for T1 saturation.

Data Analysis.

MRI data were preprocessed with SPM8 (Wellcome Department of Imaging Neuroscience, University College London; www.fil.ion.ucl.ac.uk/spm/software/spm8/). Preprocessing followed standard procedures and included concurrent spatial realignment and correction of image distortions, and normalization into standard Montreal Neurological Institute (MNI) space (59) at a spatial resolution of 3 × 3 × 3 mm3 using DARTEL (60). An additional receiver coil sensitivity bias correction [using the New Segment tool of SPM8 with very light regularization (0.0001) and 60-mm smoothness] was conducted after realignment and unwarping to correct for differences in the scanner’s bias correction between functional runs that arose from a technical problem.
For the analysis of confidence-related neural activity, individual maps of parameter estimates were computed for each participant based on a standard general linear model (GLM) that accounted for first-order autocorrelations and low-frequency drifts (high-pass cutoff period 128 s). BOLD (blood oxygen level-dependent) activity was modeled separately for each emotion observation trial (n = 96) using boxcar functions (three per trial), convolved with a standard hemodynamic response function (HRF), that modeled (i) video onset and duration (8 s), (ii) emotion judgment onset and duration (3 s), and (iii) confidence rating onset and duration (3 s) (the latter were included as regressors of no interest). For the analysis of neural observation–experience matching (see below), a second set of maps of parameter estimates (n = 16, 20-s boxcar functions convolved with the HRF) was obtained for the emotion experience runs of each participant.
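As an illustration of this trialwise modeling approach, the following sketch builds the three boxcar regressors of a single trial and convolves them with a double-gamma HRF similar to the canonical SPM HRF. It is a simplified stand-in for SPM8’s design machinery; the trial onsets and function names are hypothetical.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0  # repetition time in seconds (from the acquisition parameters above)

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF roughly matching the canonical SPM HRF (peak ~6 s, undershoot ~16 s)."""
    t = np.arange(0.0, duration, tr)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.sum()

def event_regressor(onset, dur, n_scans, tr):
    """Boxcar for one event (e.g., an 8-s video), convolved with the HRF and cut to run length."""
    frame_times = np.arange(n_scans) * tr
    box = ((frame_times >= onset) & (frame_times < onset + dur)).astype(float)
    return np.convolve(box, canonical_hrf(tr))[:n_scans]

# Three regressors for one trial: video (8 s), emotion judgment (3 s), confidence rating (3 s).
n_scans = 145                                 # EPIs per emotion observation run
events = [(10.0, 8.0), (19.0, 3.0), (23.0, 3.0)]  # hypothetical (onset, duration) pairs
X_trial = np.column_stack([event_regressor(on, du, n_scans, TR) for on, du in events])
```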
For the whole-brain analysis of confidence-related neural activity, trial-by-trial correlation maps (BOLD parameter estimates–self-reported confidence) were computed for each participant, Fisher-transformed, spatially smoothed (8-mm isotropic Gaussian kernel), and tested at random-effects group level (using T statistics).
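A minimal numerical sketch of this step, with random data standing in for one participant’s trialwise parameter estimates and confidence ratings (array shapes and names are illustrative), could look as follows.

```python
import numpy as np

def fisher_z_correlation(betas, confidence):
    """Correlate trialwise BOLD parameter estimates with self-reported confidence,
    voxel by voxel, and Fisher-transform the resulting r values.

    betas      : array of shape (n_trials, n_voxels), one parameter estimate per trial and voxel
    confidence : array of shape (n_trials,), trialwise confidence ratings
    """
    b = betas - betas.mean(axis=0)
    c = confidence - confidence.mean()
    r = (b * c[:, None]).sum(axis=0) / (
        np.sqrt((b ** 2).sum(axis=0)) * np.sqrt((c ** 2).sum()) + 1e-12
    )
    return np.arctanh(r)  # Fisher z transform

# Toy example standing in for one participant (96 trials, 1,000 voxels):
rng = np.random.default_rng(0)
z_map = fisher_z_correlation(rng.normal(size=(96, 1000)), rng.integers(1, 6, size=96))
```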
For the analysis of NOE matching, we used a linear support vector machine (SVM) as implemented in LIBSVM (https://www.csie.ntu.edu.tw/∼cjlin/libsvm) with a linear kernel and a hard margin. The searchlight radius was set to 9 mm (123 voxels), and the searchlight was moved in steps of one voxel through the entire brain volume. The classifier was trained on patterns of neural activity associated with the participant’s own emotional experience (n = 8 samples per class) and tested on patterns of neural activity elicited during video observation (n = 96). To ensure that classification was based on multivoxel patterns of neural activity and not on the average level of activity within a sphere, the spatial mean of each local pattern was set to zero. Because we reasoned that the classifier’s decision confidence (the distance between the test sample and the decision border) would provide a more accurate estimate of the level of correspondence between a test pattern and the reference patterns than the classifier’s decision accuracy alone (which is a binary variable), we computed a measure that reflects both the classifier’s decision accuracy and its confidence in this decision, the weighted decision confidence. Mathematically, the weighted decision confidence is the product of the classifier’s decision accuracy [set to +1 for correct decisions (the classifier’s decision matched the participant’s judgment) and to −1 for incorrect decisions (the classifier’s decision did not match the participant’s judgment)] and the classifier’s decision confidence (defined for a linear SVM as the distance between the test sample and the hyperplane that separates the two classes).
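The sketch below illustrates the weighted decision confidence for one searchlight position. It uses scikit-learn’s LIBSVM-based linear SVM as a stand-in for the study’s LIBSVM setup (a large C value approximates a hard margin); all variable names and array shapes are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def weighted_decision_confidence(train_X, train_y, test_X, participant_judgments):
    """Weighted decision confidence per test trial: +distance to the hyperplane if the
    classifier's decision matches the participant's emotion judgment, -distance otherwise.

    train_X : emotion experience patterns within one searchlight, shape (n_samples, n_voxels)
    train_y : emotion labels of the experience patterns (e.g., 0 = sadness, 1 = fear)
    test_X  : emotion observation patterns within the same searchlight, shape (n_trials, n_voxels)
    participant_judgments : the participant's emotion judgment for each observation trial
    """
    # Remove the spatial mean of each local pattern so that classification relies on the
    # multivoxel pattern rather than the overall level of activity within the sphere.
    train_X = train_X - train_X.mean(axis=1, keepdims=True)
    test_X = test_X - test_X.mean(axis=1, keepdims=True)

    clf = SVC(kernel="linear", C=1e6)  # large C approximates a hard margin
    clf.fit(train_X, train_y)

    # Signed distance of each test pattern to the separating hyperplane.
    distance = clf.decision_function(test_X) / np.linalg.norm(clf.coef_)

    predicted = clf.predict(test_X)
    accuracy = np.where(predicted == participant_judgments, 1.0, -1.0)
    return accuracy * np.abs(distance)
```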
For the whole-brain analyses of neural observation–experience matching, three trial-by-trial correlation maps (NOE-matching–self-reported confidence, NOE-matching–neural confidence signals in the ventral striatum, NOE-matching–neural confidence signals in the mOFC) were computed for each participant. As above, these maps were Fisher-transformed, spatially smoothed (4-mm isotropic Gaussian kernel; the small kernel size accounted for the fact that a searchlight-based SVM already introduces some smoothness into the data), and tested at random-effects group level (using T statistics) separately for each comparison.
Statistical significance of all random-effects statistical parametric maps (SPMs) was assessed allowing for a probability of false positives of P = 0.05, corrected for multiple tests [familywise error (FWE)] across the whole volume according to random-field theory (61). FWE correction was performed at the voxel level for the ventral striatum and at the cluster level (using a height threshold of T[51] = 3.2, corresponding to P = 0.001) for all other regions. This accounted for the fact that clusters of activity were expected to be more distributed in cortical regions than in the ventral striatum.
For the ROI analyses (i.e., neural confidence signals in the ventral striatum–interpersonal attraction, neural confidence signals in the mOFC–interpersonal attraction, NOE matching in the anterior insula–interpersonal attraction), trialwise BOLD parameter estimates/trialwise weighted decision confidences were extracted from the corresponding cluster (identified in the whole-brain analyses) and averaged separately for each target and participant. For the cluster in the mOFC, BOLD parameter estimates were extracted from a 3-mm sphere centered at the peak voxel ([−3 39 −18]) because the mOFC cluster was a large cluster that extended into the dorsomedial OFC.
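For illustration, trialwise estimates for such a small spherical ROI could be extracted as in the following sketch, which uses nilearn rather than the SPM-based tools of the study; the file name is hypothetical.

```python
from nilearn.maskers import NiftiSpheresMasker  # in older nilearn versions: nilearn.input_data

# 3-mm sphere centered at the mOFC peak voxel reported above (MNI coordinates).
masker = NiftiSpheresMasker(seeds=[(-3, 39, -18)], radius=3)

# Hypothetical 4D image holding the 96 trialwise parameter estimate maps of one participant.
trialwise_betas = masker.fit_transform("trialwise_beta_maps_4d.nii")  # shape (96, 1)

# These values would then be averaged separately for each target (16 trials per target).
```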
All ROI-based correlation analyses were checked for outliers, defined as values that deviated more than three times the interquartile range from the first or third quartile. No outliers were detected in the main analysis, and two outliers were detected in the split-half cross-validation (indicated by degrees of freedom less than n − 1). For all ROI-based analyses, a probability of false positives of P = 0.05 (one-tailed) was accepted unless indicated otherwise, and exact P values are reported for 0.001 < P < 0.200.
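The outlier rule described above (values deviating more than three times the interquartile range from the first or third quartile) corresponds to the following check, shown here as a small illustrative function.

```python
import numpy as np

def iqr_outliers(values, factor=3.0):
    """Flag values more than `factor` times the interquartile range below Q1 or above Q3."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values < q1 - factor * iqr) | (values > q3 + factor * iqr)

x = np.array([0.10, 0.15, 0.18, 0.20, 0.22, 5.0])  # toy data; 5.0 would be flagged as an outlier
mask = iqr_outliers(x)
```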
Observer-by-target interactions in self-reported confidence were computed with SPSS (version 22.0.0.1; IBM).

Acknowledgments

The authors thank N. Dewies, E. Charyasz, and M. Erb for help with data acquisition, and N. Weiskopf for technical expertise. This work was funded by Bundesministerium für Bildung und Forschung (German Federal Ministry of Education and Research) Grant 01GQ1105 (to S.A.).

Supporting Information

Supporting Information (PDF)

References

1. TM Newcomb, The prediction of interpersonal attraction. Am Psychol 11, 575–586 (1956).
2. D Byrne, An overview (and underview) of research and theory within the attraction paradigm. J Soc Pers Relat 14, 417–431 (1997).
3. D Byrne, R Rhamey, Magnitude of positive and negative reinforcements as a determinant of attraction. J Pers Soc Psychol 2, 884–889 (1965).
4. C Gouaux, K Summers, Interpersonal attraction as a function of affective state and affective change. J Res Pers 7, 254–260 (1973).
5. K Nakamura, et al., Activation of the right inferior frontal cortex during assessment of facial emotion. J Neurophysiol 82, 1610–1614 (1999).
6. I Aharon, et al., Beautiful faces have variable reward value: fMRI and behavioral evidence. Neuron 32, 537–551 (2001).
7. J O’Doherty, et al., Beauty in a smile: The role of medial orbitofrontal cortex in facial attractiveness. Neuropsychologia 41, 147–155 (2003).
8. KKW Kampe, CD Frith, RJ Dolan, U Frith, Reward value of attractiveness and gaze. Nature 413, 589 (2001).
9. H Kim, R Adolphs, JP O’Doherty, S Shimojo, Temporal isolation of neural processes underlying face preference decisions. Proc Natl Acad Sci USA 104, 18253–18258 (2007).
10. J Cloutier, TF Heatherton, PJ Whalen, WM Kelley, Are attractive people rewarding? Sex differences in the neural substrates of facial attractiveness. J Cogn Neurosci 20, 941–951 (2008).
11. D Bzdok, et al., ALE meta-analysis on facial judgments of trustworthiness and attractiveness. Brain Struct Funct 215, 209–223 (2011).
12. SN Haber, B Knutson, The reward circuit: Linking primate anatomy and human imaging. Neuropsychopharmacology 35, 4–26 (2010).
13. O Bartra, JT McGuire, JW Kable, The valuation system: A coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. Neuroimage 76, 412–427 (2013).
14. J Rilling, et al., A neural basis for social cooperation. Neuron 35, 395–405 (2002).
15. JK Rilling, AG Sanfey, JA Aronson, LE Nystrom, JD Cohen, Opposing BOLD responses to reciprocated and unreciprocated altruism in putative reward pathways. Neuroreport 15, 2539–2543 (2004).
16. G Tabibnia, AB Satpute, MD Lieberman, The sunny side of fairness: Preference for fairness activates reward circuitry (and disregarding unfairness activates self-control circuitry). Psychol Sci 19, 339–347 (2008).
17. KL Phan, CS Sripada, M Angstadt, K McCabe, Reputation for reciprocity engages the brain reward center. Proc Natl Acad Sci USA 107, 13099–13104 (2010).
18. RM Jones, et al., Behavioral and neural properties of social reinforcement learning. J Neurosci 31, 13039–13045 (2011).
19. CW Korn, K Prehn, SQ Park, H Walter, HR Heekeren, Positively biased processing of self-relevant social feedback. J Neurosci 32, 16832–16844 (2012).
20. JP Bhanji, MR Delgado, The social brain and reward: Social information processing in the human striatum. Wiley Interdiscip Rev Cogn Sci 5, 61–73 (2014).
21. DA Kenny, TV West, TE Malloy, L Albright, Componential analysis of interpersonal perception data. Pers Soc Psychol Rev 10, 282–294 (2006).
22. A Kepecs, N Uchida, HA Zariwala, ZF Mainen, Neural correlates, computation and behavioural impact of decision confidence. Nature 455, 227–231 (2008).
23. MN Hebart, Y Schriever, TH Donner, J-D Haynes, The relationship between perceptual decision variables and confidence in the human brain. Cereb Cortex 26, 118–130 (2016).
24. R Daniel, S Pollmann, Striatal activations signal prediction errors on confidence in the absence of external feedback. Neuroimage 59, 3457–3467 (2012).
25. V Gallese, The manifold nature of interpersonal relations: The quest for a common mechanism. Philos Trans R Soc Lond B Biol Sci 358, 517–528 (2003).
26. J Decety, PL Jackson, The functional architecture of human empathy. Behav Cogn Neurosci Rev 3, 71–100 (2004).
27. JACJ Bastiaansen, M Thioux, C Keysers, Evidence for mirror systems in emotions. Philos Trans R Soc Lond B Biol Sci 364, 2391–2404 (2009).
28. M Iacoboni, Imitation, empathy, and mirror neurons. Annu Rev Psychol 60, 653–670 (2009).
29. R Adolphs, H Damasio, D Tranel, G Cooper, AR Damasio, A role for somatosensory cortices in the visual recognition of emotion as revealed by three-dimensional lesion mapping. J Neurosci 20, 2683–2690 (2000).
30. L Carr, M Iacoboni, M-C Dubeau, JC Mazziotta, GL Lenzi, Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas. Proc Natl Acad Sci USA 100, 5497–5502 (2003).
31. B Wicker, et al., Both of us disgusted in My insula: The common neural basis of seeing and feeling disgust. Neuron 40, 655–664 (2003).
32. G Pourtois, et al., Dissociable roles of the human somatosensory and superior temporal cortices for processing social face signals. Eur J Neurosci 20, 3507–3515 (2004).
33. A Hennenlotter, et al., A common neural basis for receptive and expressive communication of pleasant facial affect. Neuroimage 26, 581–591 (2005).
34. JP Mitchell, MR Banaji, CN Macrae, The link between social cognition and self-referential thought in the medial prefrontal cortex. J Cogn Neurosci 17, 1306–1315 (2005).
35. JP Mitchell, CN Macrae, MR Banaji, Dissociable medial prefrontal contributions to judgments of similar and dissimilar others. Neuron 50, 655–663 (2006).
36. TW Buchanan, D Bibas, R Adolphs, Associations between feeling and judging the emotions of happiness and fear: Findings from a large-scale field experiment. PLoS One 5, e10640 (2010).
37. S Anders, J Heinzle, N Weiskopf, T Ethofer, J-D Haynes, Flow of affective information between communicating brains. Neuroimage 54, 439–446 (2011).
38. C Corradi-Dell’Acqua, C Hofstetter, P Vuilleumier, Felt and seen pain evoke the same local patterns of cortical activity in insular and cingulate cortex. J Neurosci 31, 17996–18006 (2011).
39. B Lorey, et al., Confidence in emotion perception in point-light displays varies with the ability to perceive own emotions. PLoS One 7, e42169 (2012).
40. JV Haxby, et al., Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425–2430 (2001).
41. RL Moreland, RB Zajonc, Exposure effects in person perception: Familiarity, similarity, and attraction. J Exp Soc Psychol 18, 395–415 (1982).
42. E Vul, C Harris, P Winkielman, H Pashler, Reply to comments on “Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition.” Perspect Psychol Sci 4, 319–324 (2009).
43. RA Poldrack, JA Mumford, Independence in ROI analysis: Where is the voodoo? Soc Cogn Affect Neurosci 4, 208–213 (2009).
44. J-D Haynes, G Rees, Decoding mental states from brain activity in humans. Nat Rev Neurosci 7, 523–534 (2006).
45. N Kriegeskorte, R Goebel, P Bandettini, Information-based functional brain mapping. Proc Natl Acad Sci USA 103, 3863–3868 (2006).
46. T Singer, et al., Empathy for pain involves the affective but not sensory components of pain. Science 303, 1157–1162 (2004).
47. KN Ochsner, et al., Your pain or mine? Common and distinct neural systems supporting the perception of pain in self and other. Soc Cogn Affect Neurosci 3, 144–160 (2008).
48. DM Wolpert, K Doya, M Kawato, A unifying computational framework for motor control and social interaction. Philos Trans R Soc Lond B Biol Sci 358, 593–602 (2003).
49. GJ Stephens, LJ Silbert, U Hasson, Speaker-listener neural coupling underlies successful communication. Proc Natl Acad Sci USA 107, 14425–14430 (2010).
50. R Funayama, et al., Neural bases of human mate choice: Multiple value dimensions, sex difference, and self-assessment system. Soc Neurosci 7, 59–73 (2012).
51. C Senior, Beauty in the brain of the beholder. Neuron 38, 525–528 (2003).
52. ST Fiske, Journey to the edges: Social structures and neural maps of inter-group processes. Br J Soc Psychol 51, 1–12 (2012).
53. CP Said, JV Haxby, A Todorov, Brain systems for assessing the affective value of faces. Philos Trans R Soc Lond B Biol Sci 366, 1660–1670 (2011).
54. JB Freeman, RM Stolier, ZA Ingbretsen, EA Hehman, Amygdala responsivity to high-level social information from unseen faces. J Neurosci 34, 10573–10581 (2014).
55. R Hogan, D Mankin, Determinants of interpersonal attraction: A clarification. Psychol Rep 26, 235–238 (1970).
56. RW Buck, VJ Savin, RE Miller, WF Caul, Communication of affect through facial expressions in humans. J Pers Soc Psychol 23, 362–371 (1972).
57. RM Sabatelli, R Buck, A Dreyer, Communication via facial cues in intimate dyads. Pers Soc Psychol Bull 6, 242–247 (1980).
58. JC McCroskey, TA McCain, The measurement of interpersonal attraction. Speech Monogr 41, 261–266 (1974).
59. KJ Worsley, et al., A unified statistical approach for determining significant signals in images of cerebral activation. Hum Brain Mapp 4, 58–73 (1996).
60. J Ashburner, A fast diffeomorphic image registration algorithm. Neuroimage 38, 95–113 (2007).
61. DL Collins, P Neelin, TM Peters, AC Evans, Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. J Comput Assist Tomogr 18, 192–205 (1994).
62. N Tzourio-Mazoyer, et al., Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15, 273–289 (2002).


Submission history

Published online: April 4, 2016
Published in issue: April 19, 2016

Keywords

affective communication | confidence | intrinsic reward | multivoxel pattern analysis | human social relations

Notes

This article is a PNAS Direct Submission.

Authors

Affiliations

Silke Anders1 [email protected]
Social and Affective Neuroscience, Department of Neurology, Universität zu Lübeck, 23562 Luebeck, Germany;
Roos de Jong
Social and Affective Neuroscience, Department of Neurology, Universität zu Lübeck, 23562 Luebeck, Germany;
Christian Beck
Social and Affective Neuroscience, Department of Neurology, Universität zu Lübeck, 23562 Luebeck, Germany;
John-Dylan Haynes
Bernstein Center for Computational Neuroscience, Charité Universitätsmedizin Berlin, 10115 Berlin, Germany;
Berlin Center for Advanced Neuroimaging, Charité Universitätsmedizin Berlin, 10117 Berlin, Germany;
Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany;
Thomas Ethofer
Department of Biomedical Magnetic Resonance, University of Tübingen, 72076 Tuebingen, Germany;
Clinic for Psychiatry and Psychotherapy, University of Tübingen, 72076 Tuebingen, Germany

Notes

1To whom correspondence should be addressed. Email: [email protected].
Author contributions: S.A. and R.d.J. designed research; S.A. and R.d.J. performed research; S.A. analyzed data; C.B. provided analysis tools; and S.A., J.-D.H., and T.E. wrote the paper.

Competing Interests

The authors declare no conflict of interest.
