Bodily maps of musical sensations across cultures

Significance
Music is inherently linked with the body. Here, we investigated how music's emotional and structural aspects influence bodily sensations and whether these sensations are consistent across cultures. Bodily sensations evoked by music varied depending on its emotional qualities, and the music-induced bodily sensations and emotions were consistent across the tested cultures. Musical features also influenced the emotional experiences and bodily sensations consistently across cultures. These findings show that bodily feelings contribute to the elicitation and differentiation of music-induced emotions and suggest similar embodiment of music-induced emotions in geographically distant cultures. Music-induced emotions may transcend cultural boundaries due to cross-culturally shared links between musical features, bodily sensations, and emotions.


Culture, music, and emotions
We investigated the relationship between the spatial patterns of music-induced bodily sensations and the emotional and acoustic attributes of music in a cross-cultural replication approach, with participants from Western regions (United States and Western Europe) and East-Asian regions (China) and both Western and East-Asian music. We refer to 'Western' and 'East-Asian' cultures for heuristic purposes but acknowledge that these terms do not fully capture the diverse musical backgrounds of the participants: there is a rich diversity of musical exposure within these cohorts, and individuals within them may have encountered multiple musical cultures and may identify with various sub-cultures. We argue that, despite these caveats, the participants within each cohort are more likely to share not only geographical location, nationality, and language but also customs, norms, and beliefs, as well as aspects of musical exposure that may be relevant for music-induced emotions. In accordance with the recommendations of Jacoby et al. (1), local experts were enlisted for guidance in selecting the cohorts to be studied and the stimulus materials. Furthermore, we explicitly define music and emotions below to elucidate the underlying assumptions of the study.
Music is a form of art and cultural expression that involves organized sounds created through the manipulation of various elements that may include pitch, rhythm, dynamics, timbre, and lyrics. Music is often performed and appreciated for its aesthetic, emotional, and communicative qualities. There is cultural variation in the relative importance of specific musical elements, such as harmony, and in what qualifies as music.
Emotions are psychological and physiological states induced by internal or external events that involve subjective feelings as well as physiological and behavioral responses. Emotions can range from putatively universal states experienced as happiness, sadness, anger, fear, and disgust to more complex and nuanced feelings. There is individual, cultural, and contextual variation in the intensity, duration, expression, and physiological and neural underpinnings of emotions. Music-induced emotions are often conceptualized as discrete categories (2), but these categories may have fuzzy boundaries (3) and/or may reflect more elemental dimensions such as valence and arousal (4).

Similarity analysis of the rating data
To conduct a similarity analysis of the dimensional ratings across the two cultures, we first calculated the mean ratings for each dimension separately for each song for both Western and East-Asian participants. Then we computed the mean values within each category (happy, sad, scary, tender, aggressive, and danceable) for both sets of stimuli (Western and Asian songs). Finally, we organized the resulting 120 category-specific mean ratings (6 categories × 2 stimulus sets × 10 dimensions) into arrays for both Western and East-Asian participants and computed the correlation between these arrays. These results are depicted in Figure 1 in the main text.
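The array-correlation step can be sketched in Python (the original analyses were run in R); the category-mean ratings below are hypothetical stand-ins for the real data, with the second group simulated as a noisy copy of the first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical category-mean ratings: 6 emotion categories x 2 stimulus
# sets x 10 rating dimensions = 120 values per participant group.
western = rng.random((6, 2, 10))
east_asian = western + rng.normal(0, 0.1, size=(6, 2, 10))  # correlated toy data

# Flatten each group's ratings into a 120-element array and correlate.
r = np.corrcoef(western.ravel(), east_asian.ravel())[0, 1]
print(f"cross-cultural rating similarity r = {r:.2f}")
```

With real data, a high correlation between the two arrays indicates that the two participant groups ordered the category-specific mean ratings similarly.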

Comparisons of dimension-wise ratings between Western and East Asian participants
The continuous ratings were analyzed separately for each of the 11 dimensions with a linear mixed model with Culture (Western vs. East Asian), Stimulus Set (Western vs. East Asian songs), and Category (Aggressive, Danceable, Happy, Sad, Scary, and Tender songs) as fixed factors and subject as a random factor. The analyses were performed using the lme4, lmerTest, and emmeans packages in R. These results are reported in the supplementary results below.
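A rough Python analogue of this kind of model, using statsmodels' MixedLM in place of lme4, is sketched below on simulated ratings. The data, factor levels, and effect sizes are invented, and the model is simplified to two fixed factors with a random intercept per subject:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Toy long-format data standing in for the continuous ratings:
# 20 hypothetical subjects, with Culture and Category as fixed factors
# and subject as a random intercept. Happy songs get higher true ratings.
rows = []
for subj in range(20):
    culture = "Western" if subj < 10 else "EastAsian"
    subj_offset = rng.normal(0, 0.5)        # random intercept per subject
    for category in ["happy", "sad", "scary"]:
        for song in range(4):
            rating = (1.0 if category == "happy" else 0.0) + subj_offset + rng.normal(0, 0.3)
            rows.append(dict(subject=subj, culture=culture, category=category, rating=rating))
df = pd.DataFrame(rows)

# Random-intercept model, analogous to lme4's
# rating ~ culture * category + (1 | subject)
model = smf.mixedlm("rating ~ culture * category", df, groups=df["subject"])
fit = model.fit()
print(fit.summary())
```

The fixed-effect estimate for the sad-vs-happy contrast should recover the simulated difference of about 1 rating point.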

Similarity analyses across ratings and BSMs
We generated distance matrices (Euclidean distance) for the 144 song-wise BSMs (72 Western and 72 East Asian songs) and the corresponding dimensional ratings, separately for the Western and East Asian participants. This yielded four 144 × 144 distance matrices: one BSM and one rating distance matrix for each participant group. The correlations between the matrices were computed using Mantel's test. The results are illustrated in Figure 3 of the main text.
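A minimal permutation Mantel test can be sketched as follows (the toy distance matrices below are built from 12 made-up "songs", not the real 144-song data):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mantel(d1, d2, n_perm=999, seed=0):
    """Permutation Mantel test on two square distance matrices.

    Correlates the upper triangles, then jointly permutes rows and
    columns of the second matrix to build a null distribution.
    """
    rng = np.random.default_rng(seed)
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        if np.corrcoef(d1[iu], d2[p][:, p][iu])[0, 1] >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy data: 12 "songs" with correlated profiles in two groups.
rng = np.random.default_rng(0)
x = rng.random((12, 5))
d_a = squareform(pdist(x))                                 # group A distances
d_b = squareform(pdist(x + rng.normal(0, 0.05, x.shape)))  # noisy copy, group B
r, p = mantel(d_a, d_b)
print(f"Mantel r = {r:.2f}, p = {p:.3f}")
```

The joint row/column permutation is what distinguishes a Mantel test from a naive correlation of distance entries, which are not independent.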

Hierarchical Clustering
To further examine the similarity in the organization of the dimension ratings and BSMs across the tested cultures, we performed hierarchical clustering of the ratings and BSMs using the hclust function in R. Here we used the average ratings and BSMs per category (happy, sad, scary, tender, aggressive, and danceable) for both stimulus sets (Western and Asian songs). The clustering was performed separately for the ratings and the BSMs and for the Western and East Asian subjects. These results are illustrated in Figure 4.
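The same clustering can be sketched in Python with scipy in place of R's hclust; the category-average profiles below are synthetic, constructed so that the same emotion category is similar across the two stimulus sets:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical category-average rating profiles: 6 emotion categories x 2
# stimulus sets, each a 10-dimensional mean rating vector.
categories = ["happy", "sad", "scary", "tender", "aggressive", "danceable"]
base = {c: rng.random(10) for c in categories}
profiles, labels = [], []
for stim in ("Western", "Asian"):
    for c in categories:
        profiles.append(base[c] + rng.normal(0, 0.05, 10))  # small set-specific noise
        labels.append(f"{c}-{stim}")
profiles = np.array(profiles)

# Average-linkage clustering on Euclidean distances (analogous to hclust).
Z = linkage(profiles, method="average", metric="euclidean")
clusters = fcluster(Z, t=6, criterion="maxclust")
for lab, cl in zip(labels, clusters):
    print(lab, cl)
```

If ratings are organized by emotion category rather than by stimulus set, the Western and Asian versions of each category should fall into the same cluster, as they do for this toy data.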

Correlation analysis for ratings and BSMs
We first calculated the mean BSMs for each of the 72 songs (song-wise BSMs) and the mean ratings for each song (song-wise ratings) for each of the 10 dimensions. The song-wise ratings were then subjected to principal component analysis (PCA). Two principal components explained over 90% of the variance in both cultures and were included in the subsequent analyses. Then, for every pixel, we computed a correlation between 72 data points representing the BSM "activation" in that particular pixel for each song and another 72 data points representing the scores on one of the PCs for each song (pixel-wise correlations). This procedure was repeated for each pixel within the body silhouette (N = 62320) and for both PCs. The False Discovery Rate (FDR) was used to control for false positives. These results are shown in Figure 5.
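The PCA-then-pixel-wise-correlation pipeline can be sketched as follows. The ratings and BSMs here are random toy data on a tiny 100-pixel "silhouette", with a true association planted in the first 20 pixels, so the variance-explained figures will not match the real ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical song-wise ratings: 72 songs x 10 dimensions.
ratings = rng.random((72, 10))

# PCA via SVD on the mean-centered ratings.
centered = ratings - ratings.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
pc_scores = centered @ vt.T              # song scores on each component
print("variance explained by first two PCs:", np.round(explained[:2], 2))

# Pixel-wise correlation: correlate PC1 scores with each pixel's
# "activation" across songs.
bsm = rng.random((72, 100))
bsm[:, :20] += pc_scores[:, [0]]         # plant a true association in 20 pixels
pc1 = pc_scores[:, 0]
r_map = np.array([np.corrcoef(pc1, bsm[:, j])[0, 1] for j in range(100)])
print("max |r| =", round(float(np.abs(r_map).max()), 2))
```

In the real analysis the resulting r-map is thresholded with FDR correction before being projected back onto the body silhouette.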

Similarity analysis for musical features
We computed the mean ratings for each song and dimension and extracted the features listed in supplementary Table S2 for each song using the MIR toolbox. In total, there were 144 songs, and each song had a mean rating for each of the 10 dimensions (shown on the x-axis in Figure 6) and a single value for each of the 21 musical features (listed on the y-axis of Figure 6). First, we computed the correlation for every dimension-feature pair. We then arranged these correlations into an array for both the Western and East-Asian participants and computed the correlation between the arrays. These results are shown in Figure 6 of the main text.
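The dimension-feature correlation array and the subsequent cross-group comparison can be sketched like this; ratings and features are random stand-ins with one planted dimension-feature link, and the second group's array is simulated as a noisy copy:

```python
import numpy as np

rng = np.random.default_rng(0)
n_songs, n_dims, n_feats = 144, 10, 21

# Hypothetical song-wise mean ratings and extracted acoustic features.
ratings = rng.random((n_songs, n_dims))
features = rng.random((n_songs, n_feats))
features[:, 0] += ratings[:, 0]          # plant one true dimension-feature link

# Correlation for every dimension-feature pair (21 x 10 array per group).
corr = np.empty((n_feats, n_dims))
for i in range(n_feats):
    for j in range(n_dims):
        corr[i, j] = np.corrcoef(features[:, i], ratings[:, j])[0, 1]

# Cross-cultural similarity: correlate the flattened arrays from two groups.
corr_other = corr + rng.normal(0, 0.05, corr.shape)   # toy "other group"
similarity = np.corrcoef(corr.ravel(), corr_other.ravel())[0, 1]
print(f"feature-dimension similarity r = {similarity:.2f}")
```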

Correlation analysis for musical features and BSMs
We first calculated the mean BSMs and extracted the musical features listed in supplementary Table S2 for each song. The song-wise feature values were then subjected to principal component analysis. Three components explained 58% of the variance and were included in the subsequent analyses. For every pixel, we computed a correlation between 72 data points representing the BSM "activation" in that particular pixel for each song and another 72 data points representing the scores on one of the PCs for each song. This procedure was repeated for each pixel within the body silhouette (N = 62320) and for the three PCs. FDR was used to control for false positives. These results are shown in Figure 7.
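The FDR step used in these pixel-wise analyses can be sketched with a plain Benjamini-Hochberg procedure; the p-values below are simulated, with a handful of "true" pixels among many nulls:

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: return a boolean mask of rejected tests."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    # Reject the k smallest p-values, where k is the largest rank that passes.
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

# Toy p-values mimicking the pixel-wise tests across the body silhouette.
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(0, 0.0001, 10),   # "true" pixels
                        rng.uniform(0, 1, 990)])      # null pixels
sig = fdr_bh(pvals)
print(f"{sig.sum()} of {len(pvals)} pixels survive FDR")
```

Unlike a Bonferroni correction, BH controls the expected proportion of false discoveries, which is far less conservative when many of the ~62000 pixels carry genuine signal.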

Comparisons of dimension-wise ratings between Western and East Asian participants
The tables below list the results of the LMM analyses for each dimension. The figures display the estimated means and 95% confidence intervals for each dimension per song category, culture (circle = Asian subjects, triangle = Western subjects), and stimulus set (blue = Asian songs, red = Western songs).
Despite the high correlation between the emotion ratings of Western and East-Asian participants (Figure 1 and Figure 2), the dimension-wise LMM analysis revealed some differences between the tested cultures. The most notable ones were as follows. The East Asian subjects rated the happy, danceable, and tender songs higher for tenderness than the Western participants did. The Western participants gave lower ratings for fear for the scary songs in general, and the East Asian scary songs in particular, than the East-Asian participants; similarly, the Western subjects gave lower ratings for sadness, particularly for the East Asian sad songs. For happiness, liking, danceability, relaxation, and energization, the Western participants gave lower ratings for the East-Asian happy and danceable songs; conversely, the Western subjects rated these songs higher for irritation than the East-Asian participants did. The East Asian subjects, in turn, rated the scary songs lower on liking and higher on irritation and aggression than the Western subjects. The East-Asian subjects also rated the tender songs higher on energization than the Western subjects did. Finally, as mentioned in the main text, Western subjects were more familiar with the Western songs while East Asian subjects were more familiar with the Asian songs.

Timbral features
Attack
Attack duration averaged across the events detected in the amplitude envelope.

Spectral novelty
Estimate of the frequency and amount of change in the spectrum per time unit.

Roughness, standard deviation
The standard deviation across the stimulus of sensory dissonance estimated from the frequency ratios of each pair of peaks in the frequency spectrum.
Roughness, mean
Averaged sensory dissonance across the segment.

Spectral entropy
An index of the complexity of the spectrum.

Spectral entropy SC
An index of the complexity of the spectrum (estimated from a smoothed and collapsed spectrum).

Spectral roll-off
Frequency boundary below which 85% of the total energy is contained.

Spectral centroid
Center of mass of the frequency spectrum, i.e., the amplitude-weighted mean frequency.
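As a quick illustration of this feature (a sketch, not the MIRtoolbox implementation, which operates frame by frame), the centroid of a pure tone recovers the tone's frequency:

```python
import numpy as np

# Spectral centroid as the amplitude-weighted mean frequency of the
# magnitude spectrum, computed for a pure 440 Hz tone.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), d=1 / sr)
centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
print(f"spectral centroid of a 440 Hz tone: {centroid:.0f} Hz")
```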

Spectral spread
Standard deviation of the frequency spectrum.

Spectral flatness
The ratio between the geometric mean and the arithmetic mean of the spectrum. An index of spikiness vs. flatness of the spectrum.

Zero crossing rate
The number of times the signal changes sign within a given time frame.
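This feature is simple enough to compute directly; a minimal sketch (counting sign changes over a one-second frame, where a 100 Hz sine yields roughly 2 × 100 = 200 crossings):

```python
import numpy as np

def zero_crossing_rate(signal):
    """Number of sign changes in the signal (one analysis frame)."""
    signs = np.sign(signal)
    return int(np.sum(signs[1:] != signs[:-1]))

# A 100 Hz sine sampled at 8 kHz for one second crosses zero
# roughly 200 times.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 100 * t)
print("zero crossings:", zero_crossing_rate(tone))
```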

Tonal Features
Key clarity
A measure of how clearly the pitch distribution of the song implies a specific key among all major and minor keys.

Majorness
Estimate of the modality (major vs. minor) of the song based on pitch distribution of the signal.

Chroma Peak Std
Standard deviation across the stimulus of the position of the maximum peak in the chromagram.
Tonal novelty
An estimate of the frequency and amount of change in tonality per time unit.

HCDF
An index of harmonic change (variation in tonal centroid) in the signal (8).

Rhythmic features
Tempo
Tempo estimated from periodicities in the amplitude envelope of the signal.

Fluctuation entropy
Shannon entropy of the fluctuation spectrum (i.e., periodicities at different frequency bands). High fluctuation entropy indicates rhythmic complexity.

Pulse clarity
Rhythmic clarity estimated from the periodicities of the amplitude envelope of the signal (9).
Fluctuation max
Maximum of the summarized fluctuation spectrum.

RMS
Variance of the root mean square (RMS) energy across the stimulus.

Fig. S2. BSMs for Western and Asian songs in Western and East Asian participants.

Fig. S3. Standard deviation maps for Western and East Asian participants.

Fig. S5. BSMs for the 10 continuous dimensions in Western (a) and East Asian (b) participants. Colour bar indicates T-value.

Table S1. Musical pieces used as stimuli.

Table S2. Description of the musical features.