An investigation into audiovisual speech correlation in reverberant noisy environments

Simone Cifani*, Andrew Abel, Amir Hussain, Stefano Squartini, Francesco Piazza

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

As evidence of a link between the various human speech production domains has become more prominent in the last decade, the field of multimodal speech processing has undergone significant expansion. Many specialized processing methods have been developed to analyze and exploit the complex relationship between multimodal data streams. This work uses information extracted from an audiovisual corpus to investigate and assess the correlation between audio and visual speech features. A number of visual feature extraction techniques are assessed, with the intention of identifying the one that maximizes the audiovisual correlation. Additionally, this paper aims to demonstrate that a noisy, reverberant audio environment reduces the degree of audiovisual correlation, and that the application of a beamformer remedies this. Experimental results, obtained in a synthetic scenario, confirm the positive impact of beamforming not only on the audiovisual correlation itself but also within a complete audiovisual speech enhancement scheme. This work thus highlights an important consideration for the development of future bimodal speech enhancement systems.
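The abstract's central measurement — that acoustic noise degrades the correlation between audio and visual speech features — can be illustrated with a minimal sketch. This is not the paper's actual method (the paper evaluates several visual feature extraction techniques on a real audiovisual corpus); it simply computes plain Pearson correlation between synthetic frame-synchronous feature streams, and the feature dimensions, noise levels, and the `audiovisual_correlation` helper are all illustrative assumptions.

```python
import numpy as np

def audiovisual_correlation(audio, visual):
    """Matrix of Pearson correlations between every audio feature
    dimension and every visual feature dimension.

    audio:  (T, Da) array of frame-synchronous audio features
    visual: (T, Dv) array of frame-synchronous visual features
    """
    Da = audio.shape[1]
    # np.corrcoef treats rows as variables, so pass the transposed
    # feature matrices; the result is a (Da+Dv, Da+Dv) matrix.
    full = np.corrcoef(audio.T, visual.T)
    # Keep only the cross-block: audio rows vs. visual columns.
    return full[:Da, Da:]

# Synthetic demo: a visual feature that tracks the clean audio feature
# correlates strongly; corrupting the audio lowers the measured
# correlation, mirroring the effect the abstract describes.
rng = np.random.default_rng(0)
T = 2000
clean = rng.standard_normal((T, 1))
visual = 0.9 * clean + 0.1 * rng.standard_normal((T, 1))
noisy = clean + 2.0 * rng.standard_normal((T, 1))   # heavy additive noise

r_clean = audiovisual_correlation(clean, visual)[0, 0]
r_noisy = audiovisual_correlation(noisy, visual)[0, 0]
print(r_clean > r_noisy)  # noise degrades the audiovisual correlation
```

In the paper's setting the remedy is a beamformer that suppresses noise and reverberation before feature extraction, which in this toy picture corresponds to moving `r_noisy` back toward `r_clean`.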

Original language: English
Title of host publication: Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions
Subtitle of host publication: COST Action 2102 International Conference Prague
Editors: Anna Esposito, Robert Vích
Place of Publication: Cham, Switzerland
Publisher: Springer
Pages: 331-343
Number of pages: 13
ISBN (Electronic): 9783642033209
ISBN (Print): 3642033199, 9783642033193
Publication status: Published - 30 Jul 2009
Event: COST Action 2102 International Conference on Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions - Prague, Czech Republic
Duration: 15 Oct 2008 – 18 Oct 2008

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 5641 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: COST Action 2102 International Conference on Cross-Modal Analysis of Speech, Gestures, Gaze and Facial Expressions
Country/Territory: Czech Republic
City: Prague
Period: 15/10/08 – 18/10/08

Keywords

  • discrete cosine transform
  • automatic speech recognition
  • speech enhancement
  • noisy speech
  • active appearance model
