WO2024058585A1 - Method and analysis device for classifying the severity of a subject's lung disease using voice data and clinical information - Google Patents
Method and analysis device for classifying the severity of a subject's lung disease using voice data and clinical information
- Publication number
- WO2024058585A1 (PCT/KR2023/013863; KR2023013863W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- exercise
- voice data
- subject
- clinical information
- severity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/08—Measuring devices for evaluating the respiratory organs
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4803—Speech analysis specially adapted for diagnostic purposes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4842—Monitoring progression or stage of a disease
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7271—Specific aspects of physiological measurement analysis
- A61B5/7275—Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/66—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for extracting parameters related to health condition
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H10/00—ICT specially adapted for the handling or processing of patient-related medical or healthcare data
- G16H10/20—ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/30—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
Definitions
- the technology described below relates to a technique for predicting the degree of lung disease using the subject's voice.
- COPD (chronic obstructive pulmonary disease) is a representative example of such a lung disease.
- the technology described below seeks to provide a technique for predicting the degree of lung disease such as COPD based on the subject's voice and clinical information.
- a method of classifying the severity of a subject's lung disease using voice data and clinical information includes the steps of: an analysis device receiving the subject's voice data and clinical information; the analysis device preprocessing the voice data and clinical information; the analysis device inputting the preprocessed voice data and clinical information into a pre-trained learning model; and the analysis device classifying the severity of the subject's lung disease based on the output value of the learning model.
- the analysis device that classifies the severity of the subject's lung disease includes an interface device that receives the subject's voice data and clinical information, a storage device that stores a learning model that receives voice data and clinical information and classifies the severity of lung disease, and a computing device that preprocesses the received voice data and clinical information, inputs the preprocessed voice data and clinical information into the learning model, and classifies the severity of the subject's lung disease based on the output value of the learning model.
- the technology described below can predict the degree of lung disease by analyzing the user's voice and clinical information that can be obtained relatively easily.
- the technology described below can diagnose the severity of lung disease through voice recording and self-diagnosis without the patient having to visit a medical institution.
- Figure 1 is an example of a lung disease severity classification system using voice and clinical information.
- Figure 2 is an example of the learning process of a learning model for lung disease severity classification.
- Figure 3 shows the results of verifying the performance of a learning model that classifies lung disease severity.
- Figure 4 is an example of an analysis device that classifies lung disease severity.
- terms such as first, second, A, and B may be used to describe various components, but the components are not limited by these terms; the terms are used only to distinguish one component from another. For example, a first component may be named a second component without departing from the scope of the technology described below, and similarly, the second component may also be named a first component.
- the term and/or includes any of a plurality of related stated items or a combination of a plurality of related stated items.
- the division of components in the description below is merely a division according to the main function each component is responsible for. That is, two or more components described below may be combined into one component, or one component may be divided into two or more components with more detailed functions.
- each of the components described below may additionally perform some or all of the functions handled by other components, and some of the main functions handled by each component may be carried out exclusively by another component.
- each process forming the method may occur in a different order from the specified order unless a specific order is clearly stated in the context. That is, each process may occur in the same order as specified, may be performed substantially simultaneously, or may be performed in the opposite order.
- the technology described below is a technique for predicting or classifying the severity of lung diseases such as COPD based on the subject's voice and clinical information. For convenience of explanation, the following explanation will focus on COPD. However, the technology described below can be used to predict or classify the severity of various lung diseases other than COPD.
- User data used for analysis includes the user's voice and clinical information.
- user data is collected from a specific subject, and can be collected before and after exercise for a subject performing a certain exercise.
- the user's voice is collected before and after exercise, and input variables include features extracted from the voice.
- Some of the clinical information may be collected separately before and after exercise.
- clinical information may include questionnaire information collected from the subject. A detailed description of user data will be provided later.
- the analysis device classifies or predicts the degree of lung disease based on the user's voice and clinical information.
- the analysis device can be implemented as a variety of devices capable of processing data.
- an analysis device can be implemented as a PC, a server on a network, a smart device, a wearable device, or a chipset with a dedicated program embedded therein.
- analysis devices may be built into various devices such as exercise equipment, vehicles, smart speakers, etc.
- the analysis device can classify lung disease using a machine learning model.
- Machine learning models include decision trees, random forests, K-nearest neighbor (KNN), Naive Bayes, support vector machines (SVM), and artificial neural networks (ANN). The following description focuses on a deep neural network (DNN) as the learning model. However, the learning model for lung disease classification can be implemented as various types of models.
- Figure 1 is an example of a lung disease severity classification system 100 using voice and clinical information.
- in Figure 1, the analysis device may be a user terminal 130, a computer terminal 140, or a server 150.
- Subject A performs a certain exercise for a certain amount of time.
- Patients with lung disease may have different vocal characteristics before and after exercise. Accordingly, user data can be collected from subject A before and after exercise, respectively.
- the user data may include the subject's voice data and clinical information.
- Voice data consists of voice data before exercise and voice data after exercise.
- the voice data before exercise and the voice data after exercise are composed of data in which the same subject A uttered the same words or sentences (text) before and after exercise, respectively.
- Clinical information may consist of various items. Some of the items included in clinical information correspond to data collected before and after exercise.
- the database may store the subject's voice data and clinical information.
- the database 110 may be a device such as an Electronic Medical Record (EMR).
- the user terminal 120 may receive user data from subject A.
- the user terminal 120 is illustrated as a device such as a smart device.
- the user terminal 120 corresponds to a device that can collect user voice through a microphone and receive clinical information through a certain interface device.
- the user terminal 120 may be any one of various types of devices, such as a smart device, PC, wearable device, smart speaker, etc.
- the user terminal 130 may receive user data from the database 110. Furthermore, the user terminal 120 and the user terminal 130 may be the same device. In this case, the user terminal 130 may be a device that collects and analyzes user data at the same time.
- the user terminal 130 may perform certain preprocessing on the subject's user data. For example, the user terminal 130 may remove noise from the subject's voice data. Additionally, the user terminal 130 may convert the voice data into a representation such as a chromagram, Mel-frequency cepstral coefficients (MFCC), or a Mel spectrogram. Additionally, the user terminal 130 may perform preprocessing that normalizes information of different categories among the clinical information to a certain range.
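- As a minimal sketch of this voice preprocessing (assuming Python with the librosa library; the file names and parameter values are illustrative, not taken from this description), the conversion of a recording into MFCC, Mel-spectrogram, and chromagram representations could look like the following.

```python
# Sketch of voice-data preprocessing, assuming Python with librosa installed.
# File names, sample rate, and trim threshold are illustrative assumptions.
import librosa

def preprocess_voice(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)            # load and resample the recording
    y, _ = librosa.effects.trim(y, top_db=30)    # trim leading/trailing low-level noise
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)                  # MFCC representation
    mel = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))   # Mel spectrogram (dB)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)                        # chromagram
    return mfcc, mel, chroma

# Example: pre- and post-exercise recordings of one subject (hypothetical file names)
pre_features = preprocess_voice("subject_a_pre_exercise.wav")
post_features = preprocess_voice("subject_a_post_exercise.wav")
```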
- the user terminal 130 may classify the severity of the subject's lung disease by inputting user data into a pre-built learning model. User A can check the degree of the subject's lung disease through the user terminal 130.
- the computer terminal 140 receives user data from the database 110 or the user terminal 120.
- the computer terminal 140 may perform certain preprocessing on the user data.
- the computer terminal 140 may classify the severity of the subject's lung disease by inputting user data into a pre-built learning model.
- User B can check the degree of the subject's lung disease through the computer terminal 140.
- the server 150 receives user data from the database 110 or the user terminal 120.
- the server 150 may perform certain preprocessing on the subject's user data.
- the server 150 may classify the severity of the subject's lung disease by inputting user data into a pre-built learning model.
- User A can access the server 150 through the user terminal to check the degree of the subject's lung disease.
- Figure 2 is an example of a learning process 200 of a learning model for lung disease severity classification.
- a learning model may be one of various types.
- Figure 2 illustrates a deep learning model as an example of the learning model.
- a learning model that classifies lung disease severity can be named a classification model.
- Classification models are built using training data.
- the learning process of the classification model can be performed by a learning device.
- a learning device refers to a computing device that controls digital data processing and the learning process of deep learning models.
- the learning device constructs learning data (210).
- Training data can be collected from various groups depending on the severity of lung disease. For example, learning data may be collected from the normal group, severity 1 group, ..., and severity n group, respectively.
- Lung disease severity can be determined based on FEV1 (forced expiratory volume in one second).
- FEV1 refers to the amount of air expelled from the lungs during the first second of exhalation. If the patient's FEV1 is lower than a threshold (e.g., the average of the entire population), the patient can be classified as a COPD patient. If a patient's FEV1 is above the threshold, the patient can be classified as a patient with low severity.
- subjects can be classified into normal, low-severity lung disease patients, and high-severity lung disease patients.
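- A minimal sketch of such an FEV1-based grouping (assuming Python; the cut-off values below are illustrative assumptions, since the description only mentions a threshold such as a population average):

```python
# Sketch of FEV1-based severity labeling. The cut-offs are illustrative assumptions;
# the description only states that a threshold (e.g., a population average) is used.
def severity_label(fev1_percent_predicted: float) -> int:
    if fev1_percent_predicted >= 80.0:
        return 0   # normal / low-severity group
    elif fev1_percent_predicted >= 50.0:
        return 1   # moderate-severity group
    else:
        return 2   # high-severity group
```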
- Learning data includes clinical information and voice data for each group.
- the training data also includes the label value of each training data.
- Voice data is collected separately before and after performing certain exercises. Voice data can be collected as subjects utter the same sentence.
- Voice data may consist of items as shown in Table 1 below.
- the learning device can extract 32 features as shown in Table 1 below from voice signals.
- voice data may consist of any number of items among the items in Table 1 below.
- the learning device can extract silence sections and conversation sections from the entire file using a voice recognition tool.
- the silent section is defined as a section in which a signal with an amplitude level of -36dBFS (decibel full scale) or less lasts for more than 200ms.
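- A sketch of how such silent sections could be located (assuming Python with librosa; the -36 dBFS level and 200 ms duration come from the definition above, while the frame and hop sizes are assumptions):

```python
# Sketch: locate silent sections (level <= -36 dBFS lasting more than 200 ms).
# Frame and hop sizes are illustrative assumptions.
import numpy as np
import librosa

def silent_sections(y, sr, threshold_dbfs=-36.0, min_dur=0.2, frame=1024, hop=512):
    rms = librosa.feature.rms(y=y, frame_length=frame, hop_length=hop)[0]
    level_db = 20.0 * np.log10(np.maximum(rms, 1e-10))   # RMS level in dBFS (full scale = 1.0)
    silent = level_db <= threshold_dbfs
    sections, start = [], None
    for i, is_silent in enumerate(silent):
        if is_silent and start is None:
            start = i                                     # silence begins
        elif not is_silent and start is not None:
            t0, t1 = start * hop / sr, i * hop / sr
            if t1 - t0 >= min_dur:
                sections.append((t0, t1))                 # keep runs longer than 200 ms
            start = None
    if start is not None and (len(silent) - start) * hop / sr >= min_dur:
        sections.append((start * hop / sr, len(silent) * hop / sr))
    return sections
```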
- Jitter is a value that indicates how constant the period of the vocal-fold vibration is; the more irregular the period, the larger the value.
- Shimmer is a value that indicates how constant the amplitude of the vibration is; the more irregular the amplitude, the larger the value.
- Formant is a resonance that occurs in the vocal tract (the space that extends from the pharynx and oral cavity to the nasal cavity and lips).
- HNR (harmonics-to-noise ratio) is a value that represents the ratio of harmonic (periodic) components to noise components in the voice signal.
- Speech rate refers to the number of words per minute in speech.
- f0 (fundamental frequency) is the frequency of vocal cord vibration and perceptually corresponds to pitch.
- Articulation rate is the number of syllables per second in speech.
- Syllable duration refers to the duration of a syllable.
- the learning device can extract jitter, shimmer, formants, HNR, speech rate, f0, articulation rate, and syllable length using publicly available software for speech analysis.
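- One publicly available option is the Praat engine accessed through the praat-parselmouth Python wrapper; the sketch below, written under that assumption, computes a few of the listed features (f0, jitter, shimmer, HNR). The pitch-range arguments (75-500 Hz) and analysis settings are assumptions, not values from this description.

```python
# Sketch of acoustic feature extraction with Praat via the parselmouth wrapper.
# The choice of library and the analysis parameters are assumptions.
import parselmouth
from parselmouth.praat import call

def praat_features(path):
    snd = parselmouth.Sound(path)
    pitch = snd.to_pitch()
    f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")           # fundamental frequency (f0)
    point_process = call(snd, "To PointProcess (periodic, cc)", 75, 500)
    jitter = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([snd, point_process], "Get shimmer (local)",
                   0, 0, 0.0001, 0.02, 1.3, 1.6)
    harmonicity = snd.to_harmonicity()
    hnr = call(harmonicity, "Get mean", 0, 0)                  # harmonics-to-noise ratio
    return {"f0": f0_mean, "jitter": jitter, "shimmer": shimmer, "hnr": hnr}
```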
- Clinical information can consist of 31 items as shown in Table 2 below.
- the clinical information below includes self-administration variables. Some of the clinical information may be collected through wearable devices, sensor devices, etc. Furthermore, clinical information may consist of any number of items among the items in Table 2 below.
- Table 2 items include BMI (body mass index), resting SpO2 (blood oxygen saturation), SpO2 after exercise, resting heart rate, and heart rate after exercise.
- the learning device may perform certain preprocessing on the initial learning data.
- Preprocessing for voice data may include noise removal, data type conversion, etc.
- Preprocessing of clinical information may include the process of adjusting values into certain categories.
- the learning device can normalize clinical information using preprocessing techniques such as Min-Max Normalization and z-score normalization.
- the learning device can encode the values of clinical information as one-hot vectors.
- the learning device can input encoded clinical information into a learning model.
- the learning device treats 32 voice variables and 31 types of clinical information as individual input variables and can construct a total of 63 input variables as learning data.
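- A minimal sketch of assembling such an input vector (assuming Python with scikit-learn and NumPy; the array shapes and the split between numeric and categorical clinical items are placeholders, not the actual Table 1/Table 2 item lists):

```python
# Sketch: Min-Max normalize numeric clinical items, one-hot encode categorical items,
# and concatenate them with the voice features into one input row per subject.
# Shapes and column splits are placeholders (the description lists 32 voice and 31 clinical items).
import numpy as np
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

voice_features = np.random.rand(100, 32)                        # placeholder voice variables
clinical_numeric = np.random.rand(100, 25)                      # placeholder numeric clinical items
clinical_categorical = np.random.randint(0, 3, size=(100, 6))   # placeholder categorical items

scaler = MinMaxScaler()                                         # z-score normalization is an alternative
encoder = OneHotEncoder(sparse_output=False, handle_unknown="ignore")

X = np.concatenate(
    [voice_features,
     scaler.fit_transform(clinical_numeric),
     encoder.fit_transform(clinical_categorical)],               # one-hot columns expand the categorical items
    axis=1,
)
```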
- the learning device builds a classification model using the learning data (220).
- the learning device extracts one input data from the collected learning data and inputs it into the classification model.
- the classification model outputs a probability value for lung disease severity for the corresponding input data.
- the learning device compares the value output by the classification model with the known correct answer (label value) and updates the weight of the classification model so that the classification model outputs a label corresponding to the correct answer.
- the learning device repeats the learning process using multiple learning data.
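- As a hedged sketch of this repeated weight-update procedure (assuming PyTorch; the network architecture, loss, and optimizer are assumptions, since the description only specifies a generic deep-learning classifier):

```python
# Sketch of the training loop described above, assuming PyTorch.
# Layer sizes, optimizer, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

n_inputs, n_classes = 63, 3                     # 63 input variables; three severity classes as an example
model = nn.Sequential(
    nn.Linear(n_inputs, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, n_classes),                   # logits; softmax gives severity probabilities
)
criterion = nn.CrossEntropyLoss()               # compares model output with the known label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train(loader, epochs=50):
    for _ in range(epochs):                     # repeat the learning process over the training data
        for x, y in loader:                     # x: input variables, y: severity label
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()                     # gradients of the classification error
            optimizer.step()                    # update the classification-model weights
```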
- Figure 3 shows the results of verifying the performance of a learning model that classifies lung disease severity. Looking at Figure 3, the built model showed an average micro AUROC (area under the ROC curve) and an average macro AUROC of 0.87. Therefore, the classification model showed significantly high performance in classifying lung disease severity.
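- Micro- and macro-averaged AUROC values like those reported above can be computed along these lines (a sketch assuming scikit-learn; the label and probability arrays are placeholders):

```python
# Sketch of micro/macro AUROC evaluation, assuming scikit-learn.
# y_true (severity labels) and y_prob (per-class probabilities) are placeholder data.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

y_true = np.array([0, 1, 2, 1, 0])
y_prob = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.1, 0.2, 0.7],
                   [0.3, 0.5, 0.2],
                   [0.6, 0.3, 0.1]])

y_true_bin = label_binarize(y_true, classes=[0, 1, 2])          # one column per severity class
micro_auroc = roc_auc_score(y_true_bin, y_prob, average="micro")
macro_auroc = roc_auc_score(y_true_bin, y_prob, average="macro")
print(micro_auroc, macro_auroc)
```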
- FIG. 4 is an example of an analysis device 300 that classifies the severity of lung disease.
- the analysis device 300 corresponds to the above-described analysis device (130, 140, or 150 in FIG. 1).
- the analysis device 300 may be physically implemented in various forms.
- the analysis device 300 may take the form of a smart device, a computer device such as a PC, a network server, a wearable device, an exercise device, or a chipset dedicated to data processing.
- the analysis device 300 may include a storage device 310, a memory 320, a computing device 330, an interface device 340, a communication device 350, and an output device 360.
- the storage device 310 may store the above-described classification model.
- the classification model is a pre-trained model.
- the classification model is a model that outputs lung disease severity based on input user data (voice data and clinical information).
- the storage device 310 can store user data.
- User data is the user's voice data and clinical information that are subject to analysis.
- Voice data consists of data collected before exercise and data collected after exercise.
- Voice data may consist of the items in Table 1.
- Clinical information may consist of the items in Table 2.
- the memory 320 may store data and information generated when the analysis device classifies the severity of lung disease using the subject's user data.
- the interface device 340 is a device that receives certain commands and data from the outside.
- the interface device 340 may receive the subject's voice data from a physically connected input device or an external storage device.
- the input device may include a device such as a microphone.
- Voice data consists of data measured before and after exercise.
- the interface device 340 may receive the subject's clinical information from a physically connected input device or an external storage device.
- the interface device 340 may analyze the subject's user data and transmit the results of classifying the severity of lung disease to an external object.
- the interface device 340 may receive data or information transmitted through the communication device 350 below.
- the communication device 350 refers to a configuration that receives and transmits certain information through a wired or wireless network.
- the communication device 350 may receive the subject's voice data from an external object (database, user terminal, microphone, etc.).
- the communication device 350 may receive clinical information about a subject from an external object.
- the communication device 350 may analyze the subject's user data and transmit the results of classifying the severity of lung disease to an external object such as a user terminal.
- the output device 360 is a device that outputs certain information.
- the output device 360 can output interfaces, classification results, etc. required for the data processing process.
- the computing device 330 may perform certain preprocessing on the user data. For example, the computing device 330 may convert the voice data into a certain type of data. Additionally, the computing device 330 may normalize each value of the clinical information into a certain category.
- the computing device 330 inputs the preprocessed user data into a pre-trained learning model.
- the computing device 330 may classify the severity of the subject's lung disease based on the probability value output by the learning model.
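- A sketch of this final classification step (assuming PyTorch, consistent with the training sketch earlier; the model and input tensor are placeholders):

```python
# Sketch of inference: softmax over the model output gives a probability per severity
# class, and the most probable class is reported. Model and input are placeholders.
import torch

def classify_severity(model, x):
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=-1)    # probability value for each severity class
    return int(probs.argmax(dim=-1)), probs        # predicted severity and its probabilities
```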
- the computing device 330 may be a device such as a processor that processes data and performs certain operations, an AP, or a chip with an embedded program.
- the method for classifying the severity of a subject's lung disease as described above may be implemented as a program (or application) including an executable algorithm that can be executed on a computer.
- the program may be stored and provided on a transitory or non-transitory computer-readable medium.
- a non-transitory readable medium refers to a medium that stores data semi-permanently and can be read by a device, rather than a medium that stores data for a short period of time, such as registers, caches, and memories.
- the various applications or programs described above may be stored and provided on non-transitory readable media such as a CD, DVD, hard disk, Blu-ray disc, USB drive, memory card, ROM (read-only memory), PROM (programmable read-only memory), EPROM (erasable PROM), or EEPROM (electrically erasable PROM).
- Transitory readable media refer to various types of RAM, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synclink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
Abstract
This method of classifying the severity of a subject's lung disease using voice data and clinical information comprises the steps in which an analysis device: receives voice data and clinical information concerning the subject; preprocesses the voice data and clinical information; inputs the preprocessed voice data and clinical information into a pre-trained learning model; and classifies the severity of the subject's lung disease on the basis of an output value of the learning model.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2022-0117052 | 2022-09-16 | ||
| KR20220117052 | 2022-09-16 | ||
| KR10-2023-0122823 | 2023-09-15 | ||
| KR1020230122823A KR102685274B1 (ko) | 2022-09-16 | 2023-09-15 | Method and analysis device for classifying the severity of a subject's lung disease using voice data and clinical information |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024058585A1 (fr) | 2024-03-21 |
Family
ID=90275303
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2023/013863 Ceased WO2024058585A1 (fr) | 2022-09-16 | 2023-09-15 | Procédé et dispositif d'analyse pour la classification de la gravité d'une maladie pulmonaire d'un sujet à l'aide de données vocales et d'informations cliniques |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR20240110929A (fr) |
| WO (1) | WO2024058585A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20150127380A (ko) * | 2014-05-07 | 2015-11-17 | Korea Institute of Oriental Medicine | Apparatus and method for diagnosing health status using voice analysis |
| JP2018516616A (ja) * | 2015-04-16 | 2018-06-28 | Koninklijke Philips N.V. | Device, system and method for detecting a cardiac and/or respiratory disease of a subject |
| JP2018534026A (ja) * | 2015-10-08 | 2018-11-22 | Cordio Medical Ltd. | Assessment of lung disease by speech analysis |
| US20210076977A1 (en) * | 2017-12-21 | 2021-03-18 | The University Of Queensland | A method for analysis of cough sounds using disease signatures to diagnose respiratory diseases |
| JP2022100317A (ja) * | 2019-03-11 | 2022-07-05 | RevComm Inc. | Information processing device |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3964134A1 (fr) | 2020-09-02 | 2022-03-09 | Hill-Rom Services PTE. LTD. | Détection de la santé pulmonaire par analyse vocale |
-
2023
- 2023-09-15 WO PCT/KR2023/013863 patent/WO2024058585A1/fr not_active Ceased
-
2024
- 2024-07-09 KR KR1020240090558A patent/KR20240110929A/ko active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| KR20240110929A (ko) | 2024-07-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11810670B2 (en) | Intelligent health monitoring | |
| Muhammad et al. | Convergence of artificial intelligence and internet of things in smart healthcare: a case study of voice pathology detection | |
| US20240374187A1 (en) | Multi-modal systems and methods for voice-based mental health assessment with emotion stimulation | |
| Shi et al. | Theory and Application of Audio‐Based Assessment of Cough | |
| US10223934B2 (en) | Systems and methods for expressive language, developmental disorder, and emotion assessment, and contextual feedback | |
| Stasak et al. | Automatic detection of COVID-19 based on short-duration acoustic smartphone speech analysis | |
| CN106073706A (zh) | 一种面向简易精神状态量表的个性化信息和音频数据分析方法及系统 | |
| Vatanparvar et al. | CoughMatch–subject verification using cough for personal passive health monitoring | |
| Romero et al. | Deep learning features for robust detection of acoustic events in sleep-disordered breathing | |
| Dubbioso et al. | Precision medicine in als: Identification of new acoustic markers for dysarthria severity assessment | |
| JP2023531464A (ja) | 身体計測情報と気管呼吸音を使用して覚醒時に閉塞性睡眠時無呼吸をスクリーニングする方法とシステム | |
| Zhao et al. | Dysphagia diagnosis system with integrated speech analysis from throat vibration | |
| Mitra et al. | Pre-trained foundation model representations to uncover breathing patterns in speech | |
| KR20230050208A (ko) | 시계열 기침음, 호흡음, 낭독음, 발성음 측정을 통한 호흡기 질환 예후 예측시스템 및 방법 | |
| Romero et al. | Snorer diarisation based on deep neural network embeddings | |
| WO2024058585A1 (fr) | Method and analysis device for classifying the severity of a subject's lung disease using voice data and clinical information | |
| KR102686011B1 (ko) | AI-based screening system for hearing loss, cognitive impairment, and dementia using voice data | |
| KR102685274B1 (ko) | Method and analysis device for classifying the severity of a subject's lung disease using voice data and clinical information | |
| Ng et al. | A Tutorial on Clinical Speech AI Development: From Data Collection to Model Validation | |
| Serrano et al. | Obstructive Sleep Apnea Identification Based On VGGish Networks. | |
| Härmä et al. | Survey on biomarkers in human vocalizations | |
| Sayadi et al. | Voice as an indicator for laryngeal disorders using data mining approach. | |
| Dkhan et al. | Respiratory diseases detection and classification based on respiratory voice using artificial intelligence methods | |
| Xu et al. | A Review of Disorder Voice Processing toward to Applications | |
| WO2025084507A2 (fr) | Method and analysis device for classifying a subject's lung disease using vibration data according to phonation | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23865881; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 23865881; Country of ref document: EP; Kind code of ref document: A1 |