- The study found “an increased sensitivity of emotional processing in musicians with respect to sadness expressed in speech, possibly reflecting empathic processes.”
- “…we only observed the difference between musicians and non-musicians in identifying sadness on the neural level, but we did not find any significant differences on the behavioral level.”
- Musicians exhibit different expressions of musical emotion, and show stronger emotional experience in response to music.
- Musicians possess higher skills for the recognition of emotions expressed in music, and they differ from non-musicians in the processing of sadness and fear conveyed in music.
- Musicians showed enhanced activations in several brain areas when responding to sentences spoken with sad prosody, suggesting higher sensitivity in emotion processing.
Musical training is associated with changes in cognitive and affective processing. Musicians exhibit different expressions of musical emotion and show stronger emotional experience in response to music. Musicians possess higher skills for the recognition of emotions expressed in music, and they differ from non-musicians in the processing of sadness and fear conveyed in music. However, the effects of musical training are not limited to the musical domain; in particular, certain aspects of speech processing have been shown to benefit from musical training.
Musicians show improved performance in:
- encoding speech sounds
- detecting speech in noise
- extracting rhythmic patterns from auditory sequences
- processing pitch in speech
- processing speech prosody
- processing extra-linguistic properties such as the emotional content of speech
The advantages musicians exhibit in both music and speech processing have been explained by enhanced acoustic skills that musicians acquire through continuous training. The transfer effect from musical training to speech processing is assumed to be due to acoustic and rhythmic similarities between the two functional domains… In order to express emotions, both music and speech make use of the same or similar acoustic elements, such as timbre and pitch. Similarities between music and speech are also observed in the temporal domain, as both musical and verbal expressions use “temporal windows” of a few seconds within which musical motives or speech utterances are represented.
These strong associations between music and speech have also been observed on the neural level. Similarities have been found in brain networks active during the processing of both music and language, and it has been assumed that the communication of emotion in both domains may be based on the same neural systems associated with social cognition… Similar to music, the processing of emotional speech prosody has been described by multi-phase models that assume several stages to be involved, recruiting both the left and the right hemisphere…
Music training has been shown to alter the neural processing of music, presumably based on functional and structural changes in the musician’s brain… A number of studies have described differences between musicians and non-musicians in speech and prosody processing on the neural level…
In response to sad prosody, musicians showed a significant increase of activation in the frontal cortex, posterior cingulate, and retrosplenial cortex. No differences in neural activation between musicians and non-musicians were observed in response to happy or fearful prosody, and no increases of activation for non-musicians relative to musicians were found in response to any of the emotions.
The present study revealed similarities and differences between musicians and non-musicians in the processing of emotional speech prosody expressing happiness, sadness, and fear. The shared activations suggest that musicians and non-musicians recruit similar neural mechanisms for early-stage processing of emotional vocal stimuli.
We also observed differences in neural responses to emotional speech prosody between the groups. Specifically, musicians showed enhanced activations in several brain areas when responding to sentences spoken with sad prosody, suggesting higher sensitivity in emotion processing.
Consistently, models of prosody processing agree in assuming that the frontal cortex plays a crucial role at higher levels of prosody processing, notably in the detection and judgment of emotional speech prosody. In particular:
- The middle frontal gyrus has previously been found to be associated with the processing of incongruity in emotional prosody and the detection of a sad emotional tone.
- The stronger activations in right prefrontal areas may thus reflect processes related to the evaluation and categorization of emotional prosody, and may also point to an enhanced sensitivity in the musician group specifically for the sad emotional content of the stimuli.
The ACC is assumed to be part of a network specifically sensitive to the monitoring of uncertainty and emotional saliency, and the ACC and the medial prefrontal cortex have been specifically associated with the induction of sadness… In a previous study on musically conveyed emotions, we found increased activation in prefrontal regions in musicians in response to sadness, with activations in the superior frontal cortex and the ACC during the processing of emotions that were expressed in music and through vocalization.
In fact, the medial prefrontal cortex and the ACC have consistently been associated with empathic processes and perspective taking, and in particular the medial prefrontal cortex has been termed a “hub of a system mediating inferences about one’s own and other individuals’ mental states”. The increased activations in the medial prefrontal cortex and the ACC in the group of musicians in response to sad sentences might thus suggest stronger emotional responses specifically related to the sad prosody of the stimuli. The increases of activation might furthermore point towards specific empathic processes related to the perceived sadness expressed in the stimuli.
It may seem puzzling that the only significant differences between the groups were observed in the neural response to prosody expressing sadness, but not in response to the other emotions. However, sadness is consistently found to be one of the emotions that are easiest to recognize… Furthermore, the expression of sadness in both music and speech prosody relies on similar acoustic features, which musicians, due to their enhanced acoustic skills, may be able to extract more readily.
In a previous study on musical emotions, we also found that musicians showed stronger neural activations to musical excerpts conveying negative emotions, including sadness, and indicated stronger arousal in response to sad music. It was hypothesized that musicians may be at an advantage in responding to the high social saliency of this emotion due to certain gains in social-emotional sensibility. In fact, the social functions and effects of music making have recently received increased attention. Listening to music has been shown to automatically engage theory-of-mind (ToM) processes such as mental state attributions, possibly implying that musicians, because of their ongoing training, may be particularly experienced in those specific aspects of social-emotional cognition.
Finally, while a transfer effect of musical training to speech processing may mainly depend on acoustic and rhythmic similarities between music and speech, temporal mechanisms might constitute another driving force for this cross-functional learning effect. Temporal mechanisms are of utmost importance in coordinating cognitive processes and can be considered an anthropological universal. Positive learning effects related to temporal training have been observed previously at the level of temporal order thresholds: native speakers of the tonal language Chinese show different thresholds compared to subjects from a non-tonal language environment… Since neuroimaging studies have shown music and language to rely on similar neural structures, and considering the temporal similarities between music and speech, it might be suspected that musical training also positively impacts temporal processing. The observed effects may thus reflect enhanced temporal sensitivity as an effect of inter-modal transfer, which might also involve a higher competence to detect sadness in speech.
Musical training also alters the neural processing of distinct emotions conveyed in speech prosody. In particular, while musicians and non-musicians do not differ in their performance in recognizing sadness in speech, they process this particular emotion significantly differently on the neural level. Musicians show distinct increases of neural activations only in response to sad prosody, possibly due to a higher affective saliency that the sentences spoken with sad intonation might possess.
Digested from: Sadness is unique: neural processing of emotions in speech prosody in musicians and non-musicians – Mona Park et al. – Front. Hum. Neurosci., 30 January 2015