The organization of a neurocomputational control model for articulatory speech synthesis

Bernd J. Kroger, Anja Lowit, Ralph Schnitker (Authors); N. Avouris, N. Bourbakis, A. Esposito, I. Hatzilygeroudis (Editors)

Research output: Chapter in Book/Report/Conference proceeding › Chapter

2 Citations (Scopus)

Abstract

The organization of a computational control model of articulatory speech synthesis is outlined in this paper. The model is based on general principles of neurophysiology and cognitive psychology. Thus it is based on such neural control circuits, neural maps and mappings as are hypothesized to exist in the human brain, and the model is based on learning or training mechanisms similar to those occurring during the human process of speech acquisition. The task of the control module is to generate articulatory data for controlling an articulatory-acoustic speech synthesizer. Thus a complete 'BIONIC' (i.e. BIOlogically motivated and techNICally realized) speech synthesizer is described, capable of generating linguistic, sensory, and motor neural representations of sounds, syllables, and words, capable of generating articulatory speech movements from neuromuscular activation, and subsequently capable of generating acoustic speech signals by controlling an articulatory-acoustic vocal tract model. The module developed thus far is capable of producing single sounds (vowels and consonants), simple CV- and VC-syllables, and first sample words. In addition, processes of human-human interaction occurring during speech acquisition (mother-child or carer-child interactions) are briefly discussed in this paper.
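
To make the kind of sensory-motor mapping described in the abstract (and the "self-organizing maps" keyword below) more concrete, the following is a minimal illustrative sketch, not the authors' implementation: a Kohonen-style self-organizing map trained on paired motor and auditory vectors from babbling-like data, which can then retrieve motor parameters from an auditory target. All dimensions, parameter values, and the toy stand-in for the vocal tract model are assumptions made purely for illustration.

# Minimal sketch (assumptions only, not the published model): a self-organizing map
# over joint motor+auditory prototype vectors, illustrating a "phonetic map" that
# associates sensory and motor representations.
import numpy as np

rng = np.random.default_rng(0)

MOTOR_DIM = 10      # assumed size of a motor (articulatory) parameter vector
AUDITORY_DIM = 3    # assumed size of an auditory feature vector (e.g. formants)
GRID = 15           # SOM grid is GRID x GRID nodes

# Each SOM node stores a joint motor+auditory prototype vector.
weights = rng.random((GRID, GRID, MOTOR_DIM + AUDITORY_DIM))

# Grid coordinates, used by the neighbourhood function.
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij"), axis=-1)

def train(samples, epochs=20, lr0=0.5, sigma0=GRID / 2):
    """Classic Kohonen training on joint (motor, auditory) samples."""
    global weights
    n_steps = epochs * len(samples)
    step = 0
    for _ in range(epochs):
        for x in samples:
            t = step / n_steps
            lr = lr0 * (1 - t)                 # linearly decaying learning rate
            sigma = sigma0 * (1 - t) + 1e-3    # shrinking neighbourhood radius
            # Best-matching unit (BMU): node whose prototype is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood around the BMU on the grid.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            step += 1

def motor_from_auditory(auditory):
    """Find the node whose auditory part best matches; return its motor part."""
    d = np.linalg.norm(weights[..., MOTOR_DIM:] - auditory, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    return weights[bmu][:MOTOR_DIM]

# Toy "babbling" data: random motor vectors paired with a fake auditory consequence.
motor = rng.random((500, MOTOR_DIM))
auditory = np.tanh(motor[:, :AUDITORY_DIM])   # stand-in for an articulatory-acoustic model
train(np.hstack([motor, auditory]))

target = np.tanh(rng.random(AUDITORY_DIM))    # an auditory target to "imitate"
print(motor_from_auditory(target))            # motor parameters retrieved from the map

In the chapter's terms, the joint prototypes play the role of a learned mapping between sensory and motor representations; a real system would replace the toy tanh stand-in with an articulatory-acoustic vocal tract model.
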
Language: English
Title of host publication: Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction
Place of Publication: Berlin
Publisher: Springer-Verlag
Pages: 121-135
Number of pages: 14
Volume: 5042
ISBN (Print): 978-3-540-70871-1
DOI: 10.1007/978-3-540-70872-8_9
Publication status: Published - 2008

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer-Verlag

Keywords

  • computational model
  • neural model
  • speech production
  • articulatory speech synthesis
  • speech acquisition
  • self-organizing maps

Cite this

Kroger, B. J., Lowit, A., & Schnitker, R. (2008). The organization of a neurocomputational control model for articulatory speech synthesis. In N. Avouris, N. Bourbakis, A. Esposito, & I. Hatzilygeroudis (Eds.), Verbal and Nonverbal Features of Human-Human and Human-Machine Interaction (Lecture Notes in Computer Science, Vol. 5042, pp. 121-135). Berlin: Springer-Verlag. https://doi.org/10.1007/978-3-540-70872-8_9