The organization of a neurocomputational control model for articulatory speech synthesis

Bernd J. Kroger, Anja Lowit, Ralph Schnitker. Edited by N. Avouris, N. Bourbakis, A. Esposito, I. Hatzilygeroudis.

Research output: Chapter in Book/Report/Conference proceeding › Chapter

2 Citations (Scopus)

Abstract

The organization of a computational control model of articulatory speech synthesis is outlined in this paper. The model is based on general principles of neurophysiology and cognitive psychology: it comprises the neural control circuits, neural maps, and mappings hypothesized to exist in the human brain, and it relies on learning or training mechanisms similar to those occurring during human speech acquisition. The task of the control module is to generate articulatory data for controlling an articulatory-acoustic speech synthesizer. The result is a complete 'BIONIC' (i.e. BIOlogically motivated and techNICally realized) speech synthesizer, capable of generating linguistic, sensory, and motor neural representations of sounds, syllables, and words; of generating articulatory speech movements from neuromuscular activation; and subsequently of generating acoustic speech signals by controlling an articulatory-acoustic vocal tract model. The module developed thus far is capable of producing single sounds (vowels and consonants), simple CV- and VC-syllables, and first sample words. In addition, processes of human-human interaction occurring during speech acquisition (mother-child or carer-child interactions) are briefly discussed in this paper.
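
As a point of orientation for the abstract above, the sketch below illustrates the general idea of a self-organizing map that jointly stores sensory (auditory) and motor states, in the spirit of the keywords 'neural model' and 'self-organizing maps'. It is a minimal illustrative sketch in Python, not the authors' implementation: the map size, the feature dimensions (three auditory and five motor parameters), the learning schedule, and the random 'babbling' data are all hypothetical placeholders.

import numpy as np

# Minimal self-organizing map (SOM) sketch: a "phonetic map" whose nodes
# jointly store an auditory state (e.g. formant values) and a motor state
# (e.g. articulatory control parameters). All dimensions and training data
# below are hypothetical placeholders, not values from the chapter.

class PhoneticMap:
    def __init__(self, grid=(10, 10), n_auditory=3, n_motor=5, seed=0):
        rng = np.random.default_rng(seed)
        n_nodes = grid[0] * grid[1]
        # Each map node holds an auditory and a motor weight vector.
        self.w_aud = rng.random((n_nodes, n_auditory))
        self.w_mot = rng.random((n_nodes, n_motor))
        # Node coordinates on the 2-D grid, used by the neighborhood function.
        self.coords = np.array(
            [(i, j) for i in range(grid[0]) for j in range(grid[1])], dtype=float
        )

    def best_matching_unit(self, aud):
        # The winner is chosen on the auditory side (sensory-driven organization).
        return np.argmin(np.linalg.norm(self.w_aud - aud, axis=1))

    def train(self, aud_samples, mot_samples, epochs=50, lr0=0.5, sigma0=3.0):
        # Co-train auditory and motor weights so that neighboring nodes come to
        # represent similar sound/gesture pairs (a "babbling"-like phase).
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)
            sigma = sigma0 * (1.0 - epoch / epochs) + 0.5
            for aud, mot in zip(aud_samples, mot_samples):
                bmu = self.best_matching_unit(aud)
                dist = np.linalg.norm(self.coords - self.coords[bmu], axis=1)
                h = np.exp(-dist ** 2 / (2.0 * sigma ** 2))[:, None]
                self.w_aud += lr * h * (aud - self.w_aud)
                self.w_mot += lr * h * (mot - self.w_mot)

    def recall_motor(self, aud):
        # Given an auditory target, return the associated motor parameters.
        return self.w_mot[self.best_matching_unit(aud)]

# Usage with random stand-in data for paired sensory/motor samples.
rng = np.random.default_rng(1)
aud = rng.random((200, 3))   # e.g. normalized F1/F2/F3 values (hypothetical)
mot = rng.random((200, 5))   # e.g. articulatory control parameters (hypothetical)
som = PhoneticMap()
som.train(aud, mot)
print(som.recall_motor(aud[0]))

In a control model of the kind described in the chapter, such a map would be trained during speech acquisition so that an auditory target (e.g. a learned vowel) can be mapped back onto motor parameters that drive the articulatory synthesizer; the sketch only shows that associative recall step in miniature.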
Original language: English
Title of host publication: Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction
Place of publication: Berlin
Publisher: Springer-Verlag
Pages: 121-135
Number of pages: 14
Volume: 5042
ISBN (Print): 978-3-540-70871-1
DOI: https://doi.org/10.1007/978-3-540-70872-8_9
Publication status: Published - 2008

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer-Verlag


Keywords

  • computational model
  • neural model
  • speech production
  • articulatory speech synthesis
  • speech acquisition
  • self-organizing maps

Cite this

Kroger, B. J., Lowit, A., & Schnitker, R. (2008). The organization of a neurocomputational control model for articulatory speech synthesis. In N. Avouris, N. Bourbakis, A. Esposito, & I. Hatzilygeroudis (Eds.), Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction (Lecture Notes in Computer Science, Vol. 5042, pp. 121-135). Berlin: Springer-Verlag. https://doi.org/10.1007/978-3-540-70872-8_9