Chartier et al., 2018 - Google Patents
Encoding of articulatory kinematic trajectories in human speech sensorimotor cortex
- Document ID: 9450063821384955733
- Authors: Chartier J; Anumanchipalli G; Johnson K; Chang E
- Publication year: 2018
- Publication venue: Neuron
Snippet
When speaking, we dynamically coordinate movements of our jaw, tongue, lips, and larynx. To investigate the neural mechanisms underlying articulation, we used direct cortical recordings from human sensorimotor cortex while participants spoke natural sentences that …
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06F—ELECTRICAL DIGITAL DATA PROCESSING
      - G06F19/00—Digital computing or data processing equipment or methods, specially adapted for specific applications
        - G06F19/30—Medical informatics, i.e. computer-based analysis or dissemination of patient or disease data
          - G06F19/34—Computer-assisted medical diagnosis or treatment, e.g. computerised prescription or delivery of medication or diets, computerised local control of medical devices, medical expert systems or telemedicine
  - G10—MUSICAL INSTRUMENTS; ACOUSTICS
    - G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
      - G10L13/00—Speech synthesis; Text to speech systems
        - G10L13/02—Methods for producing synthetic speech; Speech synthesisers
          - G10L13/033—Voice editing, e.g. manipulating the voice of the synthesiser
      - G10L15/00—Speech recognition
        - G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
        - G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
          - G10L15/065—Adaptation
            - G10L15/07—Adaptation to the speaker
        - G10L15/08—Speech classification or search
          - G10L15/18—Speech classification or search using natural language modelling
      - G10L17/00—Speaker identification or verification
        - G10L17/26—Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
      - G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
        - G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use
          - G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination
            - G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
Similar Documents
| Publication | Title |
|---|---|
| Chartier et al. | Encoding of articulatory kinematic trajectories in human speech sensorimotor cortex |
| US10438603B2 (en) | Methods of decoding speech from the brain and systems for practicing the same |
| Latif et al. | Speech technology for healthcare: Opportunities, challenges, and state of the art |
| Hamilton et al. | A spatial map of onset and sustained responses to speech in the human superior temporal gyrus |
| Anumanchipalli et al. | Speech synthesis from neural decoding of spoken sentences |
| Gonzalez et al. | Direct speech reconstruction from articulatory sensor data by machine learning |
| Bouchard et al. | Functional organization of human sensorimotor cortex for speech articulation |
| US20200151519A1 (en) | Intelligent Health Monitoring |
| Herff et al. | Brain-to-text: decoding spoken phrases from phone representations in the brain |
| Bouchard et al. | Control of spoken vowel acoustics and the influence of phonetic context in human speech sensorimotor cortex |
| US20220208173A1 (en) | Methods of Generating Speech Using Articulatory Physiology and Systems for Practicing the Same |
| Bouton et al. | Focal versus distributed temporal cortex activity for speech sound category assignment |
| Bouchard et al. | High-resolution, non-invasive imaging of upper vocal tract articulators compatible with human brain recordings |
| Caponetti et al. | Biologically inspired emotion recognition from speech |
| Al-Ali et al. | The detection of dysarthria severity levels using AI models: A review |
| Bhat et al. | Speech technology for automatic recognition and assessment of dysarthric speech: An overview |
| Narayanan | Speech in affective computing |
| De Silva et al. | Clinical decision support using speech signal analysis: Systematic scoping review of neurological disorders |
| Redford et al. | Acoustic theories of speech perception |
| Feng et al. | Acoustic inspired brain-to-sentence decoder for logosyllabic language |
| Leal et al. | Speech-based depression assessment: A comprehensive survey |
| Wingfield et al. | Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem |
| Kadi et al. | Automated diagnosis and assessment of dysarthric speech using relevant prosodic features |
| Anumanchipalli et al. | Intelligible speech synthesis from neural decoding of spoken sentences |
| Feng et al. | A high-performance brain-to-sentence decoder for logosyllabic language |