Mapping the information flow from one brain to another during gestural communication
Edited by Riitta Hari, Aalto University School of Science and Technology, Espoo, Finland, and approved April 6, 2010 (received for review February 17, 2010)
Abstract
Both the putative mirror neuron system (pMNS) and the ventral medial prefrontal cortex (vmPFC) are deemed important for social interaction: the pMNS because it supposedly “resonates” with the actions of others, the vmPFC because it is involved in mentalizing. Strictly speaking, the resonance property of the pMNS has never been investigated. Classical functional MRI experiments have only investigated whether pMNS regions augment their activity when an action is seen or executed. Resonance, however, entails more than only “going on and off together”. Activity in the pMNS of an observer should continuously follow the more subtle changes over time in activity of the pMNS of the actor. Here we directly explore whether such resonance indeed occurs during continuous streams of actions. We let participants play the game of charades while we measured brain activity of both gesturer and guesser. We then applied a method to localize directed influences between the brains of the participants: between-brain Granger-causality mapping. Results show that a guesser's brain activity in regions involved in mentalizing and mirroring echoes the temporal structure of a gesturer's brain activity. This provides evidence for resonance theories and indicates a fine-grained temporal interplay between regions involved in motor planning and regions involved in thinking about the mental states of others. Furthermore, this method enables experiments to be more ecologically valid by providing the opportunity to leave social interaction unconstrained. This, in turn, would allow us to tap into the neural substrates of social deficits such as autism spectrum disorder.
How do humans understand each other? In the last decade, two parallel lines of research have investigated this question. On the one hand, the finding that some brain regions and neurons involved in performing an action are also active while viewing the actions of others [jointly referred to as the mirror neuron system; MNS (1–13)] has led to the idea that we understand the actions of others in part by transforming them into the motor vocabulary of our own actions. On the other hand, reflecting on other people's thoughts and beliefs is mediated by another part of the brain, the ventromedial prefrontal cortex (vmPFC; including the anterior cingulate and paracingulate gyrus). This area is consistently activated when we think about other people's mental states (14–17). Given that we often deduce the beliefs and attitudes of others through their actions, it is intuitively appealing to believe that these two networks would work together to achieve a coherent representation of the mental states of others (18, 19). However, a recent meta-analysis has shown that these two networks are often found to be dissociated (20). In what follows, we will first briefly describe some key issues that have limited our understanding of how these two systems contribute to reading the mental states of other individuals during naturalistic situations, and then present an experimental paradigm to explore their role and interaction in a naturalistic communicative situation.
In humans, the dorsal and ventral premotor cortices, the somatosensory cortex, the anterior inferior parietal lobule, and the midtemporal gyrus have the peculiar property of being active not only when we perform an action but also when we witness similar actions of others (1–3, 5, 7–13, 21). This set of brain regions has therefore jointly been referred to as the putative MNS (pMNS) (10). It has been proposed that through this system, the brains of two interacting individuals “resonate” with each other: “other people's mental states are represented by […] tracking […] their states with resonant states of one's own” (22). In this context, the term “resonance” is used rather loosely and metaphorically, not in a strict physical sense but rather to suggest that the ups and downs in the activity of one person's motor system lead to sequences of actions and rest which trigger similar ups and downs in the activity of the observer's motor system (22–24). This concept of resonance is very influential; however, the only case in which this proposed temporal “tracking through resonant states” has really been tested is for viewing repetitive cyclic ups and downs of the wrist (25). Whether it applies to the natural streams of actions that typically lead us to read the minds of others, for example the sight of two gesticulating individuals on the side of the road, remains untested. This is because experimental designs so far have merely tested whether the pMNS becomes active at the transition between a control condition and the sight of a single complex action. This shows that the pMNS of the observer is indeed triggered by the sight of an action. Single-cell recordings (26, 27) and magnetoencephalography (28) show that the temporal profile of this activity is indeed similar during action observation and execution, potentially providing a neural basis for resonance. The concept of resonance, however, entails more than only “going on and off together” at the beginning and end of a single action.
It involves a continuous tracking of the more subtle changes in activity during the execution and observation of entire streams of action. Natural social interactions are composed of complex sequences of actions where it is often difficult to know when one action ends and another starts. Here we will directly explore whether such resonance indeed occurs within the pMNS during such continuous streams of actions.
The literature on the role of the vmPFC in social interaction suffers from another problem. The vmPFC not only seems to be involved in reflecting on the mental states of others (14), it is also one of the brain regions that systematically decrease their activity whenever participants process external stimuli (the default network) (29). Studies investigating mentalizing and the default network show strongly overlapping results while exploring seemingly very different functions (30). The fact that the vmPFC is not typically found to be active while people observe the behaviors of others (20) is therefore difficult to interpret. Is activity due to interpreting another person's mind masked because the region's overall level of activity is reduced below baseline by attention to external stimuli? A powerful way to examine this possibility would be to look at the activity of the vmPFC during the observation of longer streams of actions: the overall level of activity in the default network might then be decreased, but the subtle ups and downs could still reflect the mentalizing activity of this region in response to the sequence of actions.
Here participants played the game of charades in the magnetic resonance scanner to allow us to examine brain activity during longer streams of gestures. The game of charades was chosen because its success as a commercial game shows how powerfully it triggers the naturalistic motivation to communicate a mental state to a partner through hand actions. It also has the advantage of making the participants generate and observe streams of actions that are naturalistic both in duration and complexity. Given that the gestures involved in this game are hand actions, charades can serve to examine the unresolved issue of whether the pMNS would make two individuals’ brains resonate during longer streams of actions. Furthermore, because the aim of charades is to make one player guess a concept that is in the mind of the other player, it is also a powerful instrument to check whether fluctuations in the activity of the vmPFC during longer streams of gestures could reflect mentalizing processes triggered by the behavior of another individual. Indeed, both the pMNS (31–33) and the vmPFC (31) have been implicated in the observation of single gestures, maximizing our chances to examine the yet unexplored issue of whether the activity of these regions during longer streams of actions would indeed resonate with the activity of the brain of the gesturer.
We therefore asked couples to take turns in the functional MRI (fMRI) scanner while we measured their brain activity. Each partner knew that in half of the trials they would see a word on a screen and would have to gesture this word into a video camera for their partner to guess later, and in the other half they would see a video of their partner's gestures and have to guess what the word had been. Using this manipulation, a single fMRI scanner was enough to measure the brain activity both when one person generates gestures and (later) when another person decodes these very gestures. By aligning the time courses of the two brains’ activity, as measured using fMRI, relative to the video recording, we can then directly investigate the temporal coupling of the two brains’ activity during gestural communication. To do this quantitatively, we introduce an analysis method. We extend Granger-causality mapping originally used to track information flow within a brain (34) to a between-brain Granger-causality mapping (bbGCM; Fig. 1 and SI Discussion). BbGCM quantifies the influence from a selected seed region Y in the gesturer's brain to all voxels Xi of the guesser's brain by statistically comparing the G causalities in both directions, that is, (Y→Xi) − (Xi→Y) (34).
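The directed measure (Y→Xi) − (Xi→Y) can be illustrated with a minimal sketch. This is not the authors' implementation (which follows ref. 34) but a toy version under standard assumptions: for each direction, G causality is the log ratio of the residual variances of an autoregressive model of one series fit without versus with the past of the other series, and the two directions are then subtracted.

```python
import numpy as np

def g_causality(x, y, p=3):
    """G causality y -> x: log ratio of the residual variance of an AR(p)
    model of x from its own past to that of a model that also includes
    the past p samples of y. Larger values: y's past helps predict x."""
    n = len(x)
    # columns s[t-1], ..., s[t-p] for t = p .. n-1
    lag = lambda s: np.column_stack([s[p - k - 1:n - k - 1] for k in range(p)])
    target = x[p:]

    def resid_var(design):
        A = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return ((target - A @ beta) ** 2).mean()

    return np.log(resid_var(lag(x)) / resid_var(np.hstack([lag(x), lag(y)])))

def differential_g(x_guesser, y_gesturer, p=3):
    """(Y -> X) - (X -> Y): positive when the gesturer's recent past
    predicts the guesser's activity better than the reverse."""
    return (g_causality(x_guesser, y_gesturer, p)
            - g_causality(y_gesturer, x_guesser, p))
```

In this toy setting, a series driven by the lagged past of another yields a positive differential value in the driver-to-receiver direction, mirroring the gesturer→guesser logic of the paper.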
Fig. 1.

A preliminary analysis of the same data using a traditional general linear model (GLM) approach ignoring the temporal relationship between the brain activities of each couple has been published previously (33) and shows involvement of pMNS areas but not the vmPFC. Using bbGCM, however, we will show that even if one ignores the beginning and end of a gesture, activity in both the pMNS and the vmPFC of the observer does carry fine-grained information about the time course of the activity in the brain of the gesturer, providing a powerful demonstration of resonance across brains during gestural communication.
Results
BbGCM: Gesturer to Guesser.
The resulting bbGCMs are shown in Fig. 2 separately for each region of interest (ROI) (top four rows) as well as summarized over all ROIs (bottom row). They show that activity in the pMNS of the gesturer indeed predicts brain activity in the brain of the guesser more than the other way around (warm colors). Given that we used only the very recent past (4 s) of the gesturer's brain activity in the analysis, this provides evidence that the moment-to-moment activity in the guesser's pMNS indeed mirrors the close past of the gesturer's pMNS activity during gestural communication. Notably, regions of the vmPFC (including the anterior cingulate and paracingulate gyrus) are also G-caused by activity in the pMNS of the gesturer. The opposite directionality (guesser→gesturer) is much rarer (found only in a small region on the mesial wall; cold colors). This suggests that bbGCM is indeed able to track the prevalent direction of information flow between brains, which has to be in the gesturer→guesser direction given that the guesser could see the gesturer in the video but not vice versa.
Fig. 2.

BbGCM: Gesturer to Passive Observation.
To examine whether these results were dependent on the observer actively trying to guess the meaning of the observed gestures, we had participants watch the same gesture movies again but with an instruction not to actively interpret the movies (Materials and Methods). We computed between-brain influences between the gesturer's brain activity while generating the gestures and their partner's during this control condition (Fig. 3A and Fig. S1). We then directly contrasted these results with those during active guessing (Figs. 2 and 3B). Directed influence was significantly reduced within the ventral premotor and parietal regions associated with the pMNS when directly comparing the two situations (Fig. 3C and Fig. S2), and differential G causality was more consistent during deliberate guessing than during passive viewing (Fig. 3A vs. 3B). This suggests that a task to decode gestures does influence the consistency (and therefore statistical significance) with which an observer's brain time locks onto the gestures, and thereby pMNS activity, of the gesturer. An instruction not to interpret the gestures, however, cannot ensure that participants indeed refrained entirely from interpretation, and a traditional GLM analysis of the same dataset (33) showed that overall activity during active guessing and passive observation is indeed similar. Accordingly, contrasting bbGCM during active guessing and passive viewing is a very conservative approach to localizing the neural basis of gesture interpretation, as it excludes all neural processes that are triggered automatically by the sight of gestures. Nevertheless, at debriefing, participants reported interpreting the gestures less consistently during passive viewing than during active guessing, and we did find differences in bbGCM results. This shows that an instruction not to interpret the gestures of a partner does seem to partially decouple the observer's brain regions from the pMNS of the gesturer.
Fig. 3.

BbGCM: Gesturer to a Random Guesser.
To further test whether bbGCM is indeed identifying information flow based on the fine-grained temporal chain of behaviors that makes each social interaction unique, we recalculated bbGCMs while pairing each gesturer's brain activity with that of a randomly selected guesser who had viewed different gestures of another gesturer (Fig. 3D and Fig. S3). Virtually no vertices (the nodes of the cortical surface mesh; the surface-based equivalent of voxels) demonstrated significant differential G causality in this control analysis, and there was significantly more G causality from gesturer to guesser than from gesturer to random guesser (Fig. 3E and Fig. S4). Because the sequence of words used for each couple was randomized, the original guesser and his/her randomly selected control guesser saw a different sequence of words being gestured to them. As a yet stricter control, we therefore repeated this analysis by substituting, word by word, the time series of the original guesser with that of a randomly selected control viewing the same word being gestured to him/her, but by someone other than the original gesturer (Fig. S5). Differential G causality was again significantly stronger for the original gesturer-guesser pair. This provides direct evidence that the brain activity of the guesser was indeed an echo of the unique way in which his/her particular partner generated these gestures and suggests that bbGCM can indeed track the unique way in which two brains time lock onto one another during communication.
GLM: Using Instantaneous Motion Energy of the Gestures as Predictor.
Here we used bbGCM to identify brain regions involved in tracking the gestures of others. To test whether this technique can unravel the involvement of brain regions that more traditional techniques cannot, we compared bbGCM with a classical GLM in which we entered two regressors: one containing the timing of the gesture movies (as a boxcar function) and the other the fluctuation of instantaneous motion energy within each movie, both convolved with the hemodynamic response function. This analysis shows that whereas the brain responds strongly to the onset and offset of the movies (see figure 1a in ref. 33), it does not show a correlation between the fluctuations in the movement of the gesturer (as approximated using instantaneous motion energy in the movie) and fluctuations in brain activity of the guesser, as the parameter estimates for the motion-energy predictor were not significantly above zero in any cluster (Fig. S6).
Discussion
In the present study, we introduced bbGCM to investigate to which degree two brains resonate during gestural communication. We show that activity in the pMNS and the vmPFC of the guesser is Granger-caused by fluctuations in activity in the pMNS of the gesturer. These findings have three sets of implications. First, they show that pMNS regions indeed resonate across brains, thereby providing evidence for resonance theories (22–24). Second, they extend our understanding of the neural basis of gestural communication by providing evidence for a fine-grained temporal interplay between regions involved in motor planning (pMNS) and regions involved in thinking about the mental states of others (vmPFC). Third, they demonstrate more generally that G causality can be used to map directional information flow across brains during social interactions without the need for the experimenter to impose temporal structure on the social interaction.
Before going into details of each of these, we would like to discuss how such G causality should be interpreted (SI Discussion). Directed G causality between brains has to be mediated by what the guesser can perceive: the observable gestures of the gesturer. BbGCM as we apply it is therefore not a method to determine direct causal interactions across brains, but a method to map brain regions that are at opposite ends of a longer indirect chain of causality that goes through the external world. Neural activity in the gesturer causes both the execution of the gestures and the blood-oxygen-level-dependent (BOLD) signal that we measure with fMRI. The videotaped gestures are seen by the guesser, which causes brain activity and an observable BOLD response that we again measure with fMRI. Keeping the indirect nature of the causal pathway in mind, our bbGCM maps brain regions in the receiver that echo the brain activity of the sender. By “echo” in this context we mean providing temporal information about the state of the other person's brain region. In general, the directed bbGCM should be interpreted with a further issue in mind. Because it is calculated by contrasting bbGCM in two directions (brain A→brain B − brain B→brain A), a value larger than zero is evidence for an influence of brain A on brain B, whereas the opposite is not evidence for a lack of influence from A to B. This is because, in addition to a potential lack of statistical power, negative findings with directed bbGCM could originate from two very different scenarios: because there is no significant information flow in either direction, or because the influence is significant but equally strong in both directions. In our experiment, the latter possibility is unlikely, as the guesser can view the gesturer but the gesturer cannot see the guesser. It is therefore unlikely that the guesser's brain could influence the gesturer's. 
The second scenario will, however, be more likely in future experiments in which two interacting partners might be able to mutually observe each other in real time.
We still know relatively little about the neural basis of mind reading and gestural communication. However, two sets of brain regions could play a role: pMNS areas and the vmPFC. Our findings support the idea that both play a role. We show that BOLD activity in functionally defined regions of the pMNS and in the vmPFC is G-caused by BOLD activity in the pMNS of the gesturer. This strengthens the idea that simulation and mentalizing both contribute to our interpretation of other people's gestures (18, 35, 36). Because of the constraints of traditional data analysis (e.g., GLM analysis), the degree to which the sequences of complex actions composing a gestural phrase (37) would be parsed (38) similarly by the gesturer and guesser has never been explored. Our finding of significant G causality between the pMNS in the two brains using a temporal window of 3 (i.e., regressing the brain activity of the guesser onto the activity in the past 3 volumes = 4 s of the gesturer) provides evidence that the two brain activities go up and down together in naturalistic gestural communication. Reducing the temporal window to 1 (i.e., considering only the past 1.33 s of the gesturer) much reduces this differential G causality (Fig. S7). This shows that the pMNSs of communicative partners indeed resonate with each other (in the loose sense used in this literature), as had been suggested by simulation accounts of mind reading and communication (22–24, 39). Moreover, it informs us that this resonance is not most evident at the second-to-second timescale of time window 1 but at a more moderate timescale of several seconds (~4 s) that is commensurate with the time it takes to plan, generate, and perceive a gestural element (37). BbGCM is therefore a powerful tool to test the temporal resonance phenomenon at the core of simulation accounts of communication.
It furthermore shows that the pMNS indeed provides the time-resolved information about the state of the gesturer's motor system that would be required for motor simulation to be useful for communication in a naturalistic context. This adds to the evidence that the excitability of an observer's motor system fluctuates in sync with the repetitive wrist flexion of another individual by showing that the concept of resonance indeed applies to the complex, nonrepetitive, and nonrhythmical streams of gestures that are more typical of real mind-reading situations and may have been essential in the early stages of language evolution (40). Additionally, this finding dovetails with the observation that brain activity in both these regions predicts the accuracy with which participants can judge the moment-to-moment emotional state of another individual (41).
The fact that vmPFC activity was also G-caused by pMNS activity in the gesturer is surprising. This region is well-known to play a role in inferring mental states from written stories or cartoons (14), and activity in this region is increased while participants try to interpret certain gestures (31). These findings suggest that the vmPFC might be involved in attributing mental states to others, and could do so during gestures and actions. However, a previously published preliminary GLM analysis of our data revealed that this region does not demonstrate more brain activity while guessing gestures than while fixating a cross (33). In addition, during the free-viewing of a Hollywood movie, the vmPFC does not seem to synchronize across viewers (42). The inferential nature of these processes seems to detach brain activity in this region from the exact timing of the stimulus, leading it not to synchronize across viewers and therefore also making it difficult to link activity in this region directly to the stimulus itself. Indeed, activity in this region also does not simply correlate with the low-level motion contained in the stimuli (Fig. S6). Here, on the other hand, we show that activity in this brain region in a guesser does contain information (SI Discussion) about the time course of activity in the regions involved in planning and executing gestures in the gesturer. This supports the idea that the pMNS and mentalizing brain areas may work in concert to derive mental states from observed actions (18, 36). A traditional GLM analysis of the same data (33) may have been unable to detect the involvement of the vmPFC because this region is also part of the default network (29, 30). The default network is a set of brain regions that demonstrate augmented metabolism during passive baseline conditions, supposedly because they contain neurons that are involved in the self-referential processes we engage in while not performing a particular task (29, 30). 
During guessing, these self-referential processes would have been suspended, lowering the BOLD signal in this brain region below baseline. The activity of the smaller number of neurons engaging in the mental-state attribution required by the game of charades would then have been masked by the concurrent reduction of self-referential activity. Our bbGCM can nevertheless detect such activity, because it examines not whether activity overall goes up or down relative to a baseline condition but rather whether fluctuations in activity during the stream of gestures covary with the past activity of the gesture-execution system of the gesturer. This changes the interpretation of the same dataset compared with a classical GLM analysis, which showed that only the pMNS but not the vmPFC demonstrates augmented activity compared with baseline during the active decoding of gestures (33).
A number of control analyses served to establish that bbGCM indeed tracks a specific information flow between two communicating participants. When pairing the time course of a given gesturer with that of a randomly selected guesser, instead of the one who had actually observed the gestures, significantly less between-brain influence was observed compared with the analysis with the active guessing condition (Fig. 3E). This suggests that bbGCM indeed revealed the specific effect of a particular pattern of gestures on the brain activity of the guesser. Furthermore, we found that active guessing, but not passive viewing, of the same gestures leads to significant bbGCM of the pMNS and vmPFC (Fig. 3A). This shows that an instruction to actively decode the gestures increases the coherence between the activities in the two brains.
Given that the brain activity of our guesser is not directly caused by brain activity in the gesturer but by the prerecorded movie of his/her gestures, one might argue that measuring the brain activity of the gesturer is not necessary to map brain regions involved in social information transfer. Instead, quantifying what is in the stimulus would suffice to localize those brain regions in the guesser's brain that respond to that stimulus. We tested this approach by using instantaneous motion energy from the gesture movies as a predictor in a GLM analysis. Results show no correlation between brain activity in the guesser and the instantaneous motion energy from the movies, indicating that using this particular measure with a traditional GLM approach does not provide any extra information. Although alternative approaches to quantifying the content of the stimulus and introducing time-lagged versions of these predictors into a GLM may help, the fundamental problem with such a stimulus-centered approach is that quantifying the relevant dimensions of a naturalistic stream of gestures is far from trivial. It is a highly multidimensional stimulus, and transforming it into a univariate time series for a GLM requires knowledge of which aspects of the stimulus are relevant for the brain of the observer, knowledge that we often lack. BbGCM has the elegant property of circumventing this problem altogether and thereby directly testing those theories, like the pMNS resonance theory of mind reading (22, 40), which are formulated not as a link between a stimulus and a neural state but between the neural states of two individuals.
As an alternative to this method, one might show the same gestures to many participants and examine which brain regions synchronize across participants (42). Between-viewer correlation also has the elegant property of circumventing the problem of quantifying the stimulus, and is conceptually related to our approach. This approach, however, has other limitations. It requires many viewers of the same stimulus and, in its standard form, only examines instantaneous dependencies between brain regions (i.e., it does not allow for time shifts between the brain activity of different viewers). BbGCM overcomes these limitations. It can be applied to pairs of interacting partners, with only one participant viewing any particular communicative episode, and is therefore more suited for studying dyadic communication. Additionally, it allows examination of dependencies between brain activity over a longer time period (determined by the G-causality order; 4 s in our case), which is more appropriate for the analysis of brain activity during communication, where several seconds can separate the planning of a gesture from its execution and perception. BbGCM and between-viewer correlation could be combined into a between-viewer GCM. If the brain activity of one viewer contains information about the brain activity of another viewer in the absence of direct communication between the viewers, this shared information has to be information about the stimulus. Between-viewer GCM would thus map regions containing information about the stimulus while allowing for slight time shifts across viewers.
More generally, for the field of social neuroscience, our findings show that it is possible to map the brain regions involved in the flow of information across individuals with fMRI without imposing a temporal structure on the social interaction and without depending on certain choices for the quantification of the information in a stimulus. A similar approach seems to be suited for analyzing electroencephalogram data during social interactions (43). Demonstrating bbGCM between one brain region in partner 1 and another region in partner 2 then allows a data-driven identification of brain regions that could play a role in the information flow across participants. Much as for other data analysis techniques (42), further experiments that control the content of the stimulus seen by participants are then needed to isolate which aspects of the complex interaction were encoded in the brain activity in these brain regions, and virtual lesions using transcranial magnetic stimulation will be needed to examine whether these brain regions are necessary for normal social interactions.
In conclusion, bbGCM has advantages over existing techniques. It can map the information transfer from brain to brain in pairs of participants without having to impose temporal structure on social interactions and without requiring knowledge about the relevant dimensions of the complex social stimulus. Here we used this approach to show that even for naturalistic streams of gestural communication, the core prediction of MNS theories of communication and mind reading holds. The pMNS of the guesser does indeed reflect moment-to-moment information about the state of the motor system of the gesturer. In addition, using this method, we narrow the gap between the literature exploring the pMNS and that exploring mentalizing by showing that the vmPFC of the observer could add to this mirroring by also resonating with the motor system of the gesturer. More generally, we hope that this technique will enable and inspire the investigation of one of the most defining features of human beings: their capacity to transfer knowledge from one person to another. In particular, the opportunity to leave the social interaction unconstrained will enable experiments to be more ecologically valid. This, in turn, could allow us to tap into the neural substrates of social deficits and ask questions such as: which neural substrates are responsible for the difficulty autistic individuals have in taking turns during communication?
Materials and Methods
Participants.
Twelve couples (24 participants in total) were scanned while playing the game of charades. Four participants moved more than the voxel size during the gesturing phase, which led us to exclude the three couples containing these participants from the data analysis. All analyses in this paper were therefore performed on 18 participants. The mean age of the participants was 27.5 ± 3.8 years. Each couple consisted of a man and a woman who had been in a romantic relationship for at least 6 months. More details are described in SI Materials and Methods.
Task/Experimental Design.
The experiment consisted of two separate sessions on different days. In the first session, the couple was required to play the game of charades. In the second, detailed anatomical scans and a passive observation control condition were acquired. For the game of charades, participants took turns going into the scanner, alternating gesturing and guessing of words. Words were either objects (e.g., nutcracker, watch, pencil sharpener) or actions (e.g., painting, knitting, shaving) (Table S1). Each participant performed two gesture and two guess runs in which they gestured 14 words and guessed 14 words in total (7 per run). During a gesture run, the participant was presented with a word on the screen and was instructed to communicate this word to his or her partner by means of gestures. Every word had to be gestured for 90 s and was then followed by a 20-s fixation cross. During a guess run, the participant was shown the movies that were recorded in the gesture run of their partner. The task they had to perform was to guess what their partner was trying to gesture to them. Participants were asked to consider the gestures for at least 50 s before committing to a specific interpretation of the gestures. This was done to ensure at least 50 s of data in each trial to examine the time course of activity using between-brain Granger causality. As a control condition for the guess run, the participants watched the movies they had seen during the guessing condition again. This time, they were instructed not to guess what was gestured, but only to passively view them. More details are described in SI Materials and Methods and in Fig. S8.
Granger-Causality Analyses.
Granger-causality analyses were performed as described in Roebroeck et al. (34) but applied here to data from different brains (SI Discussion). In short, given two time series (for a seed and another point on the cortical surface), autoregressive models are estimated that quantify G causality. Given a seed, maps are created that specify the G-causal influence from the seed in the gesturer to all of the guesser's brain, as well as the influence in the reverse direction, that is, from anywhere in the guesser's brain to the seed in the gesturer's brain. These two directions of G causality are then subtracted from each other to generate differential G-causality maps, such that positive values indicate more G causality from the gesturer to the guesser than from the guesser to the gesturer. A separate differential G map was calculated for each of the eight seed regions (see below) for each participant. These differential G-causality maps were then taken, separately for each seed region, to the second level (see below) and thresholded for multiple comparisons at P < 0.05 using a cluster threshold determined by a Monte Carlo simulation method (44, 45). The order of the estimated autoregressive models was 3; that is, the three preceding time points are taken into account to predict the current activity, corresponding to ~4 s [3 repetition times (TRs)].
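For readers unfamiliar with the measure, the core computation can be sketched as follows. This is a simplified, hypothetical bivariate illustration in Python (the actual analysis operated on cortical surface maps in BrainVoyager): G causality from x to y is quantified as the log ratio of the residual variance of an autoregressive model of y alone to that of a model that also includes past values of x, and the differential map subtracts the two directions.

```python
import numpy as np

def lagged(s, p):
    """Design matrix of the p preceding values of s for each time point."""
    n = len(s)
    return np.column_stack([s[p - k - 1 : n - k - 1] for k in range(p)])

def resid_var(X, Y):
    """Residual variance of an ordinary least-squares fit of Y on X."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.var(Y - X @ beta)

def g_causality(x, y, p=3):
    """G causality x -> y: log variance ratio of the restricted model
    (y's own past) to the full model (y's past plus x's past)."""
    Y = y[p:]
    restricted = lagged(y, p)
    full = np.column_stack([lagged(y, p), lagged(x, p)])
    return np.log(resid_var(restricted, Y) / resid_var(full, Y))

def differential_g(x, y, p=3):
    """Positive values: more G causality from x to y than from y to x."""
    return g_causality(x, y, p) - g_causality(y, x, p)

# Synthetic check: y is driven by x at a lag of one sample, so the
# differential G value should be positive.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
dG = differential_g(x, y, p=3)
```

An autoregressive order of p = 3, as in the analysis above, means each prediction draws on the three preceding fMRI volumes.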
Seed ROIs.
The ROIs that were used as seeds in the between-brain Granger-causality analysis were defined as those “mirror” areas that were active both during gesturing and guessing using a traditional GLM analysis on the same data.
Instantaneous Motion-Energy GLM.
We extracted motion energy from the gesture movies using Matlab (MathWorks, Inc., Natick, MA). For each pair of consecutive frames of the recorded movies, motion energy was quantified at every pixel as the sum of the squared differences in the red, green, and blue channels, and then summed over all pixels. The resulting time course was mean-corrected, convolved with the hemodynamic response function, and sampled at the acquisition rate of the fMRI signal (TR = 1.33 s).
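The per-frame computation can be sketched as follows; this is an illustrative Python version (the paper used Matlab) operating on a hypothetical array of RGB frames, omitting the subsequent mean correction, HRF convolution, and resampling to the TR.

```python
import numpy as np

def motion_energy(frames):
    """Frame-to-frame motion energy: for each pair of consecutive RGB
    frames, sum the squared per-pixel differences over all pixels and
    all three color channels, yielding one value per frame transition."""
    frames = np.asarray(frames, dtype=float)  # shape (n_frames, h, w, 3)
    diffs = np.diff(frames, axis=0)           # frame-to-frame differences
    return (diffs ** 2).sum(axis=(1, 2, 3))

# Tiny synthetic movie: 3 frames of 2x2 RGB pixels in which one pixel's
# red channel changes by 1 between frame 0 and frame 1, and back again.
movie = np.zeros((3, 2, 2, 3))
movie[1, 0, 0, 0] = 1.0
energy = motion_energy(movie)  # one value per consecutive frame pair
```

The resulting time course would then be mean-corrected and convolved with an HRF model before being entered as a regressor.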
Acknowledgments
We thank V. Gazzola and R. Goebel for help in designing the experiment and the latter for providing BrainVoyager, and P. Toffanin, M. Spezio, and D. Arnstein for critical comments on the manuscript. The research was supported by a Vidi grant of the Dutch Science Foundation (NWO) and a Marie Curie Excellence Grant of the European Commission (to C.K.).
Supporting Information
Supporting Information (PDF)
References
1. L Aziz-Zadeh, SM Wilson, G Rizzolatti, M Iacoboni, Congruent embodied representations for visually presented actions and linguistic phrases describing actions. Curr Biol 16, 1818–1823 (2006).
2. TT Chong, R Cunnington, MA Williams, N Kanwisher, JB Mattingley, fMRI adaptation reveals mirror neurons in human inferior parietal cortex. Curr Biol 18, 1576–1580 (2008).
3. I Dinstein, U Hasson, N Rubin, DJ Heeger, Brain areas selective for both observed and executed movements. J Neurophysiol 98, 1415–1427 (2007).
4. L Fadiga, L Fogassi, G Pavesi, G Rizzolatti, Motor facilitation during action observation: A magnetic stimulation study. J Neurophysiol 73, 2608–2611 (1995).
5. F Filimon, JD Nelson, DJ Hagler, MI Sereno, Human cortical representations for reaching: Mirror neurons for execution, observation, and imagery. Neuroimage 37, 1315–1328 (2007).
6. V Gallese, L Fadiga, L Fogassi, G Rizzolatti, Action recognition in the premotor cortex. Brain 119, 593–609 (1996).
7. V Gazzola, L Aziz-Zadeh, C Keysers, Empathy and the somatotopic auditory mirror system in humans. Curr Biol 16, 1824–1829 (2006).
8. J Grèzes, JL Armony, J Rowe, RE Passingham, Activations related to “mirror” and “canonical” neurones in the human brain: An fMRI study. Neuroimage 18, 928–937 (2003).
9. M Iacoboni, et al., Cortical mechanisms of human imitation. Science 286, 2526–2528 (1999).
10. C Keysers, V Gazzola, Expanding the mirror: Vicarious activity for actions, emotions, and sensations. Curr Opin Neurobiol 19, 666–671 (2009).
11. JM Kilner, A Neal, N Weiskopf, KJ Friston, CD Frith, Evidence of mirror neurons in human inferior frontal gyrus. J Neurosci 29, 10153–10159 (2009).
12. E Ricciardi, et al., Do we really need vision? How blind people “see” the actions of others. J Neurosci 29, 9719–9724 (2009).
13. L Turella, M Erb, W Grodd, U Castiello, Visual features of an observed agent do not modulate human brain activity during action observation. Neuroimage 46, 844–853 (2009).
14. DM Amodio, CD Frith, Meeting of minds: The medial frontal cortex and social cognition. Nat Rev Neurosci 7, 268–277 (2006).
15. CD Frith, U Frith, The neural basis of mentalizing. Neuron 50, 531–534 (2006).
16. HL Gallagher, CD Frith, Functional imaging of ‘theory of mind’. Trends Cogn Sci 7, 77–83 (2003).
17. M Sommer, et al., Neural correlates of true and false belief reasoning. Neuroimage 35, 1378–1384 (2007).
18. C Keysers, V Gazzola, Integrating simulation and theory of mind: From self to social cognition. Trends Cogn Sci 11, 194–196 (2007).
19. M Brass, RM Schmitt, S Spengler, G Gergely, Investigating action understanding: Inferential processes versus action simulation. Curr Biol 17, 2117–2121 (2007).
20. F Van Overwalle, K Baetens, Understanding others’ actions and goals by mirror and mentalizing systems: A meta-analysis. Neuroimage 48, 564–584 (2009).
21. C Keysers, Mirror neurons. Curr Biol 19, R971–R973 (2009).
22. V Gallese, A Goldman, Mirror neurons and the simulation theory of mind-reading. Trends Cogn Sci 2, 493–501 (1998).
23. V Gallese, C Keysers, G Rizzolatti, A unifying view of the basis of social cognition. Trends Cogn Sci 8, 396–403 (2004).
24. G Rizzolatti, L Fogassi, V Gallese, Neurophysiological mechanisms underlying the understanding and imitation of action. Nat Rev Neurosci 2, 661–670 (2001).
25. P Borroni, M Montagna, G Cerri, F Baldissera, Cyclic time course of motor excitability modulation during the observation of a cyclic hand movement. Brain Res 1065, 115–124 (2005).
26. C Keysers, et al., Audiovisual mirror neurons and action recognition. Exp Brain Res 153, 628–636 (2003).
27. R Mukamel, AD Ekstrom, J Kaplan, M Iacoboni, I Fried, Single-neuron responses in humans during execution and observation of actions. Curr Biol, 10.1016/j.cub.2010.02.045 (April 8, 2010).
28. G Caetano, V Jousmäki, R Hari, Actor's and observer's primary motor cortices stabilize similarly after seen or heard motor actions. Proc Natl Acad Sci USA 104, 9058–9062 (2007).
29. ME Raichle, AZ Snyder, A default mode of brain function: A brief history of an evolving idea. Neuroimage 37, 1083–1090, discussion 1097–1099 (2007).
30. RN Spreng, RA Mar, AS Kim, The common neural basis of autobiographical memory, prospection, navigation, theory of mind, and the default mode: A quantitative meta-analysis. J Cogn Neurosci 21, 489–510 (2009).
31. KJ Montgomery, N Isenberg, JV Haxby, Communicative hand gestures and object-directed hand movements activated the mirror neuron system. Soc Cogn Affect Neurosci 2, 114–122 (2007).
32. M Pazzaglia, N Smania, E Corato, SM Aglioti, Neural underpinnings of gesture discrimination in patients with limb apraxia. J Neurosci 28, 3030–3041 (2008).
33. MB Schippers, V Gazzola, R Goebel, C Keysers, Playing charades in the fMRI: Are mirror and/or mentalizing areas involved in gestural communication? PLoS One 4, e6801 (2009).
34. A Roebroeck, E Formisano, R Goebel, Mapping directed influence over the brain using Granger causality and fMRI. Neuroimage 25, 230–242 (2005).
35. FP de Lange, M Spronk, RM Willems, I Toni, H Bekkering, Complementary systems for understanding action intentions. Curr Biol 18, 454–457 (2008).
36. M Thioux, V Gazzola, C Keysers, Action understanding: How, what and why. Curr Biol 18, R431–R434 (2008).
37. D McNeill, Hand and Mind: What Gestures Reveal About Thought (University of Chicago Press, Chicago, 1992).
38. RW Byrne, Imitation as behaviour parsing. Philos Trans R Soc Lond B Biol Sci 358, 529–536 (2003).
39. G Rizzolatti, L Craighero, The mirror-neuron system. Annu Rev Neurosci 27, 169–192 (2004).
40. G Rizzolatti, MA Arbib, Language within our grasp. Trends Neurosci 21, 188–194 (1998).
41. J Zaki, J Weber, N Bolger, K Ochsner, The neural bases of empathic accuracy. Proc Natl Acad Sci USA 106, 11382–11387 (2009).
42. U Hasson, Y Nir, I Levy, G Fuhrmann, R Malach, Intersubject synchronization of cortical activity during natural vision. Science 303, 1634–1640 (2004).
43. F Babiloni, et al., Hypermethods for EEG hyperscanning. Conf Proc IEEE Eng Med Biol Soc 1, 3666–3669 (2006).
44. SD Forman, et al., Improved assessment of significant activation in functional magnetic resonance imaging (fMRI): Use of a cluster-size threshold. Magn Reson Med 33, 636–647 (1995).
45. DJ Hagler, AP Saygin, MI Sereno, Smoothing and cluster thresholding for cortical surface-based group analysis of fMRI data. Neuroimage 33, 1093–1103 (2006).
Submission history
Published online: May 3, 2010
Published in issue: May 18, 2010
Notes
*This Direct Submission article had a prearranged editor.
Competing Interests
The authors declare no conflict of interest.
Cite this article
Mapping the information flow from one brain to another during gestural communication, Proc. Natl. Acad. Sci. U.S.A. 107 (20), 9388–9393 (2010). https://doi.org/10.1073/pnas.1001791107