Learning to speak. Sensori-motor control of speech movements
Gérard BAILLY, 1998 - Google Patents
- Document ID: 15095430526825226466
- Author: Gérard BAILLY
- Publication year: 1998
Snippet
This paper shows how an articulatory model, able to produce acoustic signals from articulatory motion, can learn to speak, i.e. coordinate its movements in such a way that it utters meaningful sequences of sounds belonging to a given language. This complex …
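The snippet describes a sensori-motor learning loop: the model explores its own articulations ("babbling"), hears the resulting sounds, and learns an inverse mapping from sounds back to articulatory commands. The paper's own model is not reproduced here; the following is only a toy numpy sketch of that loop. The forward map, the dimensionalities (4 articulatory parameters, 3 acoustic features), and the random-feature least-squares inverse model are all illustrative assumptions, not Bailly's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an articulatory synthesizer (NOT the paper's model):
# maps 4 articulatory parameters (e.g. jaw, tongue positions) to
# 3 acoustic features (e.g. formants), as a fixed nonlinear black box.
W_fwd = rng.normal(size=(3, 4))

def synthesize(articulation: np.ndarray) -> np.ndarray:
    """Forward model: articulation (4,) -> acoustic features (3,)."""
    return np.tanh(W_fwd @ articulation)

# "Babbling" phase: try random articulations and record the sounds
# heard back, building paired (articulatory, acoustic) data.
A = rng.uniform(-1.0, 1.0, size=(5000, 4))   # articulations tried
S = np.tanh(A @ W_fwd.T)                      # sounds produced

# Inverse model (sound -> articulation): here a simple least-squares
# fit on a fixed random nonlinear expansion of the acoustic input,
# standing in for the neural networks used in this literature.
P = rng.normal(size=(3, 64))                  # fixed random projection
H = np.tanh(S @ P)                            # nonlinear acoustic features
W_inv, *_ = np.linalg.lstsq(H, A, rcond=None)

def imitate(target_sound: np.ndarray) -> np.ndarray:
    """Propose an articulation expected to reproduce a heard sound."""
    return np.tanh(target_sound @ P) @ W_inv

# Check: imitate a novel target and compare the resulting acoustics.
a_star = rng.uniform(-1.0, 1.0, size=4)
target = synthesize(a_star)
recovered = imitate(target)
print("acoustic error:", np.linalg.norm(synthesize(recovered) - target))
```

In the work this page indexes, the forward model would be a physical articulatory synthesizer and the inverse model a trained neural network; the least-squares stand-in above merely keeps the sketch short and dependency-free while preserving the babble-then-invert structure.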
Classifications
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L13/00—Speech synthesis; Text to speech systems
        - G10L13/02—Methods for producing synthetic speech; Speech synthesisers
          - G10L13/033—Voice editing, e.g. manipulating the voice of the synthesiser
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
        - G10L21/003—Changing voice quality, e.g. pitch or formants
          - G10L21/007—Changing voice quality, e.g. pitch or formants characterised by the process used
            - G10L21/013—Adapting to target pitch
- G—PHYSICS
  - G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    - G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
      - G09B21/00—Teaching, or communicating with, the blind, deaf or mute
        - G09B21/001—Teaching or communicating with blind persons
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L15/00—Speech recognition
        - G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
          - G10L15/065—Adaptation
            - G10L15/07—Adaptation to the speaker
- G—PHYSICS
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L15/00—Speech recognition
        - G10L15/08—Speech classification or search
Similar Documents
| Publication | Title |
|---|---|
| Bailly | Learning to speak. Sensori-motor control of speech movements |
| Howard et al. | Modeling the development of pronunciation in infant speech acquisition |
| Kröger et al. | Towards a neurocomputational model of speech production and perception |
| Lindblom | Role of articulation in speech perception: Clues from production |
| Guenther | Speech sound acquisition, coarticulation, and rate effects in a neural network model of speech production |
| Lindblom | On the notion of "possible speech sound" |
| Story | Phrase-level speech simulation with an airway modulation model of speech production |
| Van Lieshout | Dynamical systems theory and its application in speech |
| Grimme et al. | Limb versus speech motor control: A conceptual review |
| Fels | Glove-TalkII: mapping hand gestures to speech using neural networks - an approach to building adaptive interfaces |
| Serkhane et al. | Infants' vocalizations analyzed with an articulatory model: A preliminary report |
| Westerman et al. | Modelling the development of mirror neurons for auditory-motor integration |
| Howard et al. | A computational model of infant speech development |
| Kitani et al. | A talking robot and its singing performance by the mimicry of human vocalization |
| Nenov | Perceptually grounded language acquisition: A neural/procedural hybrid model |
| Menn et al. | Connectionist modeling and the microstructure of phonological development: a progress report |
| Cohen et al. | What can visual speech synthesis tell visual speech recognition? |
| Bailly et al. | Learning to speak: Speech production and sensori-motor representations |
| Hornstein et al. | A unified approach to speech production and recognition based on articulatory motor representations |
| Hirayama et al. | Inverse dynamics of speech motor control |
| Hirayama et al. | Physiologically-based speech synthesis using neural networks |
| Bailly | Building sensori-motor prototypes from audiovisual exemplars |
| Liu | Fundamental frequency modelling: An articulatory perspective with target approximation and deep learning |
| Sawada | A talking robot and the expressive speech communication with human |