From age 8 we spontaneously link vocal to facial emotion

UNIGE scientists have tracked the eye movements of children to show how they make the link, spontaneously and without instructions, between a vocal emotion (happiness or anger) and the matching emotion on a natural or virtual face.

Throughout life, our feelings influence the choices that we make.

The concept of emotion may seem simple, but scientists often have trouble agreeing on what it means. Most scientists believe that emotions involve things other than just feelings. They include bodily reactions, like when your heart races because you feel excited. They also involve expressive movements, including facial expressions and sounds.

The spontaneous amodal coding of emotions – i.e., independently of perceptual modalities and, therefore, the physical characteristics of faces or voices – is easy for adults, but how does the same capacity develop in children?

To find out the answer, scientists from the Faculty of Psychology and Educational Sciences, together with members of the Swiss Centre for Affective Sciences and led by Professor Edouard Gentaz, studied how the capacity to link a vocal emotion to the emotion conveyed by a natural or virtual face develops in children aged 5, 8 and 10 years, as well as in adults.

The scientists used an experimental paradigm initially designed for use with babies, a task known as intermodal emotional transfer.

Figure: Average durations of all eye fixations (in milliseconds) for all 80 participants looking at natural faces expressing anger or happiness, visualised as a colour map, after listening to a voice expressing happiness. Credit: UNIGE

The participants were presented with emotional voices and with faces expressing happiness and anger. In the first, auditory familiarisation stage, each participant sat facing a black screen and listened to three voices (neutral, happy and angry) for 20 seconds.

In the second, visual discrimination stage, which lasted 10 seconds, the same participants were shown two emotional faces side by side, one expressing happiness and the other anger, so that one facial expression matched the voice they had just heard and the other did not.
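
To make the trial structure concrete, here is a minimal sketch in plain Python. The two phases and their timings (20 and 10 seconds) come from the article; the `Trial` class and all names are hypothetical illustrations, not the authors' actual materials.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of one trial in the intermodal emotional transfer
# paradigm. Timings (20 s familiarisation, 10 s discrimination) are taken
# from the article; everything else here is invented for illustration.

@dataclass
class Trial:
    voice_emotion: str                  # "neutral", "happy" or "angry"
    left_face: str                      # emotion shown by the left face
    right_face: str                     # emotion shown by the right face
    familiarisation_s: float = 20.0     # black screen while the voice plays
    discrimination_s: float = 10.0      # two faces side by side, no sound

    def congruent_side(self) -> Optional[str]:
        """Return which face matches the voice (None for the neutral control)."""
        if self.voice_emotion == self.left_face:
            return "left"
        if self.voice_emotion == self.right_face:
            return "right"
        return None

# Example: a happy voice followed by a happy face (left) and an angry face (right).
trial = Trial(voice_emotion="happy", left_face="happy", right_face="angry")
print(trial.congruent_side())  # -> left
```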

Eye-tracking technology was used to record the eye movements of the 80 participants. From these recordings, the scientists could determine whether the time spent looking at one or other of the emotional faces, or at particular areas of the natural or virtual face (the mouth or the eyes), varied according to the voice heard.
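
The kind of measure described here can be illustrated with a short, hypothetical sketch: summing fixation durations that fall inside named areas of interest (AOIs) such as the eyes or mouth. The fixation format, coordinates and AOI boxes below are invented for illustration and are not the study's real data.

```python
from collections import defaultdict

# Each fixation: (x, y, duration_ms) in screen coordinates (assumed format).
fixations = [(210, 300, 180.0), (230, 320, 240.0), (760, 310, 150.0)]

# AOIs as named rectangles: (x_min, y_min, x_max, y_max). Boxes are made up.
aois = {
    "left_face_eyes":  (150, 250, 350, 350),
    "left_face_mouth": (150, 400, 350, 500),
    "right_face_eyes": (650, 250, 850, 350),
}

def total_fixation_time(fixations, aois):
    """Sum fixation durations (ms) landing inside each area of interest."""
    totals = defaultdict(float)
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dur
    return dict(totals)

print(total_fixation_time(fixations, aois))
# -> {'left_face_eyes': 420.0, 'right_face_eyes': 150.0}
```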

The use of a virtual face, produced with CISA’s FACSGen software, gave finer control over the emotional characteristics than a natural face allows.

Amaya Palama, a researcher in the Laboratory of Sensorimotor, Affective and Social Development in the Faculty of Psychology and Educational Sciences at UNIGE, said: “If the participants made the connection between the emotion in the voice they heard and the emotion expressed by the face they saw, we could assume that they recognize and code the emotion in an amodal manner, i.e., independently of the perceptual modalities.”

The results show that after a control phase (no voice, or a neutral voice), there was no difference in visual preference between the happy and angry faces. By contrast, after the emotional voices (happiness or anger), participants looked for longer at the face (natural or virtual) congruent with the voice.

More specifically, the results showed a spontaneous transfer of the emotional voice of happiness, with a preference for the congruent happy face from the age of 8, and a spontaneous transfer of the emotional voice of anger, with a preference for the congruent angry face from the age of 10.
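
As a rough illustration of how such a looking-time preference might be quantified, one common approach is the proportion of total looking time spent on the voice-congruent face, where 0.5 is chance. The sketch below, including its numbers, is an assumption for illustration, not the authors' analysis.

```python
# Hypothetical congruence preference index: the share of looking time spent
# on the face matching the voice. Values above 0.5 indicate a preference
# for the congruent face; the example durations are invented.

def congruence_preference(congruent_ms: float, incongruent_ms: float) -> float:
    """Proportion of looking time on the voice-congruent face (chance = 0.5)."""
    total = congruent_ms + incongruent_ms
    return congruent_ms / total if total > 0 else 0.5

# E.g., 6.2 s on the congruent face vs 3.8 s on the incongruent one:
print(round(congruence_preference(6200, 3800), 2))  # -> 0.62
```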

These results suggest a spontaneous amodal coding of emotions. The research was part of a project, funded by a Swiss National Science Foundation (SNSF) grant obtained by Professor Gentaz, designed to study the development of emotional discrimination capacities in childhood.

Current and future research aims to determine whether this task can reveal unsuspected abilities to understand emotions in children with multiple disabilities, who are unable to comprehend verbal instructions or produce verbal responses.

Journal Reference:
  1. Palama, A., Malsert, J., Grandjean, D., Sander, D., & Gentaz, E. (2020). The cross-modal transfer of emotional information from voices to faces in 5-, 8- and 10-year-old children and adults: An eye-tracking study. Emotion. Advance online publication. DOI: 10.1037/emo0000758
