

Encoding of articulatory kinematic trajectories in human speech sensorimotor cortex

Chartier et al., 2018

Document ID: 9450063821384955733
Authors: Chartier J, Anumanchipalli G, Johnson K, Chang E
Publication year: 2018
Publication venue: Neuron

Snippet

When speaking, we dynamically coordinate movements of our jaw, tongue, lips, and larynx. To investigate the neural mechanisms underlying articulation, we used direct cortical recordings from human sensorimotor cortex while participants spoke natural sentences that …
Full text at www.cell.com (HTML)

Classifications

    • G10L15/18: Speech classification or search using natural language modelling
    • G10L17/26: Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
    • G10L15/07: Adaptation to the speaker (training of speech recognition systems)
    • G06F19/34: Computer-assisted medical diagnosis or treatment, e.g. computerised prescription or delivery of medication or diets, computerised local control of medical devices, medical expert systems or telemedicine
    • G10L15/02: Feature extraction for speech recognition; Selection of recognition unit
    • G10L25/66: Speech or voice analysis techniques specially adapted for extracting parameters related to health condition
    • G10L13/033: Voice editing, e.g. manipulating the voice of the synthesiser

Similar Documents

Chartier et al. Encoding of articulatory kinematic trajectories in human speech sensorimotor cortex
US10438603B2 (en) Methods of decoding speech from the brain and systems for practicing the same
Latif et al. Speech technology for healthcare: Opportunities, challenges, and state of the art
Hamilton et al. A spatial map of onset and sustained responses to speech in the human superior temporal gyrus
Anumanchipalli et al. Speech synthesis from neural decoding of spoken sentences
Gonzalez et al. Direct speech reconstruction from articulatory sensor data by machine learning
Bouchard et al. Functional organization of human sensorimotor cortex for speech articulation
US20200151519A1 (en) Intelligent Health Monitoring
Herff et al. Brain-to-text: decoding spoken phrases from phone representations in the brain
Bouchard et al. Control of spoken vowel acoustics and the influence of phonetic context in human speech sensorimotor cortex
US20220208173A1 (en) Methods of Generating Speech Using Articulatory Physiology and Systems for Practicing the Same
Bouton et al. Focal versus distributed temporal cortex activity for speech sound category assignment
Bouchard et al. High-resolution, non-invasive imaging of upper vocal tract articulators compatible with human brain recordings
Caponetti et al. Biologically inspired emotion recognition from speech
Al-Ali et al. The detection of dysarthria severity levels using AI models: A review
Bhat et al. Speech technology for automatic recognition and assessment of dysarthric speech: An overview
Narayanan. Speech in affective computing
De Silva et al. Clinical decision support using speech signal analysis: Systematic scoping review of neurological disorders
Redford et al. Acoustic theories of speech perception
Feng et al. Acoustic inspired brain-to-sentence decoder for logosyllabic language
Leal et al. Speech-based depression assessment: A comprehensive survey
Wingfield et al. Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem
Kadi et al. Automated diagnosis and assessment of dysarthric speech using relevant prosodic features
Anumanchipalli et al. Intelligible speech synthesis from neural decoding of spoken sentences
Feng et al. A high-performance brain-to-sentence decoder for logosyllabic language