Abstract
Visual articulatory models can be used to visualize vocal tract speech movements. This information may be helpful in pronunciation training or in the therapy of speech disorders. To test this hypothesis, speech sound recognition rates were quantified for mute animations of vocalic and consonantal speech movements generated by a 2D and a 3D visual articulatory model. The visually based speech sound recognition test (mimicry test) was performed by two groups of eight children (five to eight years old) matched in age and sex. The children were asked to mimic the visually presented mute speech movement animations for different speech sounds. Recognition rates remained significantly above chance but showed no significant difference between the two models. Children older than five years are thus capable of interpreting vocal tract speech sound movements in a speech-adequate way without any preparatory training. The complex 3D display of vocal tract articulatory movements provides no significant advantage over the visually simpler 2D midsagittal display.
Original language | English |
---|---|
Publication status | Published - 26 Sept 2008 |
Event | Interspeech, 9th Annual Conference of the International Speech Communication Association - Brisbane, Australia |
Duration | 22 Sept 2008 → 26 Sept 2008 |
Conference
Conference | Interspeech, 9th Annual Conference of the International Speech Communication Association |
---|---|
City | Brisbane, Australia |
Period | 22/09/08 → 26/09/08 |
Keywords
- visual articulatory model
- audio-visual speech synthesis
- visual perception
- development of visual perception