Emotional recognition in autism spectrum conditions from voices and faces

Mary E. Stewart, Clair McAdam, Mitsuhiko Ota, Sue Peppé, Joanne Cleland

Research output: Contribution to journal › Article › peer-review

33 Citations (Scopus)


The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developed individuals on tests of emotion identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence was congruent, incongruent, or neutral with respect to the semantic content. We also included a condition with no semantic content (an 'mmm' uttered in an emotional tone). Performance was compared between 11 adults with ASC and 14 typically developed adults. Identification of emotion from sentences in which the vocal emotion and the meaning of the sentence were congruent was similar in people with ASC and the typically developed comparison group. However, the comparison group was more accurate at identifying the emotion in the voice on incongruent and neutral trials, and also on trials with no semantic content. Performance on the vocal emotion task was correlated with performance on a face emotion recognition task. In decoding emotion from spoken utterances, individuals with ASC relied more on verbal semantics than did typically developed individuals, presumably as a strategy to compensate for their difficulties in using prosodic cues to recognize emotions.

Original language: English
Pages (from-to): 6-14
Number of pages: 9
Issue number: 1
Early online date: 8 Oct 2012
Publication status: Published - Jan 2013


  • emotion
  • autism spectrum conditions
  • prosody
  • vocal emotion


