WO2024080069A1 - Information processing device, information processing method, and program - Google Patents
Information processing device, information processing method, and program
- Publication number
- WO2024080069A1 (PCT/JP2023/033500)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- hearing
- test
- information processing
- information
- hearing loss
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- A61B5/123—Audiometering evaluating hearing capacity subjective methods
- A61B5/125—Audiometering evaluating hearing capacity objective methods
- A61B5/0022—Monitoring a patient using a global network, e.g. telephone networks, internet
- A61B5/4836—Diagnosis combined with treatment in closed-loop systems or methods
- A61B5/6817—Arrangements of detecting, measuring or recording means specially adapted to be attached to or worn on the ear canal
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data involving training the classification device
- A61B5/7275—Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
- A61B5/7435—Displaying user selection data, e.g. icons in a graphical user interface
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
- H04R25/70—Adaptation of deaf aid to hearing loss, e.g. initial electronic fitting
Definitions
- This disclosure relates to an information processing device, an information processing method, and a program.
- A pure tone hearing test is a hearing test performed using a testing device called an audiometer. More specifically, an air conduction hearing test and a bone conduction hearing test are performed using an audiometer, for example by the method described in Non-Patent Document 1 below. The degree of conductive hearing loss can then be grasped by taking the difference between the air conduction hearing level obtained by the air conduction hearing test and the bone conduction hearing level obtained by the bone conduction hearing test. Information on the degree of conductive hearing loss is used to determine whether or not to give priority to treatment other than hearing aids before starting to wear a hearing aid, and is also used to determine the gain of the hearing aid.
- When conducting a bone conduction hearing test, a bone conduction receiver is attached to the subject's head. Because the bone conduction receiver must be attached precisely at a position from which the test sound can be transmitted efficiently, it is difficult for anyone other than an expert to attach it. It is therefore not easy, for example, for an individual subject to perform a bone conduction hearing test on their own and obtain information about the degree of conductive hearing loss.
- Accordingly, this disclosure proposes an information processing device, an information processing method, and a program that can easily predict the degree of conductive hearing loss using air-conducted sound.
- According to this disclosure, an information processing device is provided that includes a prediction unit that predicts the degree of conductive hearing loss based on the test results of first and second hearing tests that use air-conducted sound and include different test contents.
- The present disclosure also provides an information processing method in which an information processing device predicts the degree of conductive hearing loss based on the test results of first and second hearing tests that use air-conducted sound and include different test contents.
- Furthermore, a program according to the present disclosure causes a computer to execute a function of predicting the degree of conductive hearing loss based on the test results of first and second hearing tests that use air-conducted sound and include different test contents.
- FIG. 1 is an explanatory diagram illustrating the difference in the paths of air conduction sound and bone conduction sound.
- FIG. 2 is an explanatory diagram for explaining the corresponding ranges of impaired parts in a pure tone hearing test.
- FIG. 3 is a flowchart illustrating the flow of determining whether or not treatment is necessary before wearing a hearing aid.
- FIG. 4 is an explanatory diagram illustrating an example of hearing aid gain settings for each type of hearing loss.
- FIG. 5 is an explanatory diagram for explaining an example of a conventional pure tone hearing test.
- FIG. 6 is an explanatory diagram illustrating a hearing test according to an embodiment of the present disclosure.
- FIG. 7 is an explanatory diagram for explaining an example of a conventional hearing test.
- FIG. 8 is an explanatory diagram illustrating an example of an observation result obtained by a self-recording audiometer.
- FIG. 9 is an explanatory diagram for explaining the corresponding ranges of affected areas in self-recorded audiometry.
- FIG. 10 is an explanatory diagram illustrating an overview of a first embodiment of the present disclosure.
- FIG. 11 is an explanatory diagram illustrating learning data according to the first embodiment of the present disclosure.
- FIG. 12 is a block diagram of an information processing terminal according to the first embodiment of the present disclosure.
- FIG. 13 is an explanatory diagram illustrating a conductive hearing loss prediction unit according to the first embodiment of the present disclosure.
- FIG. 14 is a flowchart (part 1) illustrating the flow of an information processing method according to the first embodiment of the present disclosure.
- FIG. 15 is a flowchart (part 2) illustrating the flow of the information processing method according to the first embodiment of the present disclosure.
- FIG. 16 is an explanatory diagram illustrating an example of a hearing test service in the first embodiment of the present disclosure.
- FIG. 17 is an explanatory diagram illustrating an overview of a second embodiment of the present disclosure.
- FIG. 18 is a block diagram of an information processing terminal according to the second embodiment of the present disclosure.
- FIG. 19 is a flowchart illustrating the flow of an information processing method according to the second embodiment of the present disclosure.
- FIG. 20 is a flowchart (part 1) illustrating the flow of an information processing method according to a third embodiment of the present disclosure.
- FIG. 21 is a flowchart (part 1) illustrating the flow of an information processing method according to a modified example of the third embodiment of the present disclosure.
- FIG. 22 is a flowchart (part 2) illustrating the flow of the information processing method according to the modified example of the third embodiment of the present disclosure.
- FIG. 23 is a flowchart (part 2) illustrating the flow of the information processing method according to the third embodiment of the present disclosure.
- FIG. 24 is a block diagram of an information processing terminal according to a fourth embodiment of the present disclosure.
- FIG. 25 is a flowchart illustrating the flow of an information processing method according to the fourth embodiment of the present disclosure.
- FIG. 26 is an explanatory diagram illustrating the fourth embodiment of the present disclosure.
- FIG. 27 is an explanatory diagram (part 1) for explaining a display example according to the fourth embodiment of the present disclosure.
- FIG. 28 is an explanatory diagram (part 2) for explaining a display example according to the fourth embodiment of the present disclosure.
- FIG. 29 is an explanatory diagram (part 1) for explaining an application example according to the fourth embodiment of the present disclosure.
- FIG. 30 is an explanatory diagram (part 2) for explaining an application example according to the fourth embodiment of the present disclosure.
- FIG. 31 is a block diagram of an external device according to the fourth embodiment of the present disclosure.
- FIG. 32 is a diagram showing a schematic configuration of a hearing aid system according to an embodiment of the present disclosure.
- FIG. 33 is a functional block diagram of a hearing aid and a charger according to an embodiment of the present disclosure.
- FIG. 34 is a block diagram of an information processing terminal according to an embodiment of the present disclosure.
- FIG. 35 is a block diagram of a server according to an embodiment of the present disclosure.
- FIG. 36 is a diagram illustrating an example of data utilization.
- FIG. 37 is a diagram illustrating an example of data.
- FIG. 38 is a diagram illustrating an example of cooperation with other devices.
- FIG. 39 is a diagram illustrating an example of a use transition.
- Fig. 1 is an explanatory diagram explaining the difference between the paths of air conduction sound and bone conduction sound
- Fig. 2 is an explanatory diagram explaining the range of the impaired part in the pure tone hearing test
- Fig. 3 is a flowchart explaining the flow of determining the necessity of treatment before wearing a hearing aid.
- Fig. 4 is an explanatory diagram explaining an example of the setting of the hearing aid gain for each type of hearing loss
- Fig. 5 is an explanatory diagram explaining an example of a conventional pure tone hearing test
- Fig. 6 is an explanatory diagram explaining a hearing test according to an embodiment of the present disclosure.
- Fig. 7 is an explanatory diagram explaining an example of a conventional hearing test
- Fig. 8 is an explanatory diagram explaining an example of the observation result by a self-recording audiometer
- Fig. 9 is an explanatory diagram explaining the range of the self-recording audiometry for the impaired part.
- The first path 121 indicates the path of air conduction sound.
- Air conduction sound is sound that reaches the inner ear as vibration after passing through the outer ear and the middle ear.
- The vibration is converted into an electric signal in the inner ear, and the electric signal travels through the retrolabyrinth toward the auditory cortex.
- The air conduction hearing test is a test that delivers a test sound to the inner ear using the first path 121.
- In the air conduction hearing test, the air conduction sound reaches the inner ear via the outer ear and the middle ear, so the combined degree of damage to the outer ear, middle ear, inner ear, and retrolabyrinth can be measured.
- Hearing loss caused by damage to the outer ear or middle ear is called conductive hearing loss.
- Hearing loss caused by damage to the inner ear is called cochlear hearing loss.
- Hearing loss caused by damage to the retrolabyrinth is called retrolabyrinthine hearing loss.
- Cochlear hearing loss and retrolabyrinthine hearing loss are collectively called sensorineural hearing loss.
- Hearing loss in which conductive hearing loss and sensorineural hearing loss occur simultaneously is called mixed hearing loss.
- The air conduction hearing test can therefore be said to be a test that measures the combined degree of conductive hearing loss and sensorineural hearing loss.
- The hearing level measured by the air conduction hearing test is called the air conduction hearing level, and it can be expressed by the following formula (1).
- The second path 122 indicates the path of bone conduction sound.
- Bone conduction sound is sound that reaches the inner ear as vibration after passing through the skull. After reaching the inner ear, it behaves in the same way as the air conduction sound described above.
- The bone conduction hearing test is a test that delivers the test sound to the inner ear using the second path 122. In the bone conduction hearing test, the bone conduction sound reaches the inner ear without passing through the outer ear and middle ear, so the combined degree of disorder of the inner ear and the retrolabyrinth can be measured without being affected by disorders of the outer ear and the middle ear.
- The bone conduction hearing test can therefore be said to be a test that measures the degree of sensorineural hearing loss without being affected by conductive hearing loss.
- The hearing level measured by the bone conduction hearing test is called the bone conduction hearing level, and it can be expressed by the following formula (2).
- The air-bone gap indicates the degree of conductive hearing loss and can be expressed by the following formula (3).
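- The formulas referenced as (1) to (3) can be read, from the relationships described above, as the following minimal reconstruction (an assumption based on this section's wording, not the patent's literal notation):

```latex
% Reconstruction assuming: the air conduction hearing level reflects the conductive
% and sensorineural components combined, while the bone conduction hearing level
% reflects only the sensorineural component.
\text{(1)}\quad \text{air conduction hearing level} \approx D_{\mathrm{conductive}} + D_{\mathrm{sensorineural}} \\
\text{(2)}\quad \text{bone conduction hearing level} \approx D_{\mathrm{sensorineural}} \\
\text{(3)}\quad \text{air-bone gap} = \text{air conduction hearing level} - \text{bone conduction hearing level} \approx D_{\mathrm{conductive}}
```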
- Figure 2 shows the ranges covered by the air conduction hearing test and the bone conduction hearing test with respect to the affected area in pure tone hearing tests.
- The air conduction hearing test can be used to obtain indicators of disorders that cause conductive hearing loss, cochlear hearing loss, and retrolabyrinthine hearing loss.
- The bone conduction hearing test can be used to obtain indicators of disorders that cause cochlear hearing loss and retrolabyrinthine hearing loss.
- Because both tests use the same test sounds, the response to each measurement at each affected area is similar. Therefore, the air-bone gap, which is the air conduction hearing level minus the bone conduction hearing level, indicates the degree of conductive hearing loss, as explained using formulas (1) to (3).
- For example, for a purely sensorineural hearing loss of about 40 dBHL, both the air conduction and bone conduction hearing test results will be around 40 dBHL.
- Likewise, for a purely sensorineural hearing loss of about 30 dBHL, both results will be around 30 dBHL.
- For a purely conductive hearing loss of about 35 dBHL, the air conduction hearing test result will be around 35 dBHL, but the bone conduction hearing test result will be around 0 dBHL.
- For a mixed hearing loss of 65 dBHL (conductive hearing loss of 30 dBHL plus sensorineural hearing loss of 35 dBHL), the air conduction hearing test result will be around 65 dBHL and the bone conduction hearing test result will be around 35 dBHL.
- In this way, air conduction hearing tests and bone conduction hearing tests can be said to be tests that can distinguish between conductive hearing loss and sensorineural hearing loss, and that can also measure the degree of each.
- Information about the degree of conductive hearing loss is important for two reasons when considering the use of hearing aids. First, information about the degree of conductive hearing loss is used to determine whether or not to prioritize consideration of treatments other than hearing aids before starting to use hearing aids. Second, information about the degree of conductive hearing loss and the degree of sensorineural hearing loss are used when determining the gain of the hearing aid.
- If conductive hearing loss is suspected, it is recommended that treatments other than hearing aids be considered before wearing a hearing aid. If a disorder that causes conductive hearing loss occurs, there is a high possibility that it can be improved with early treatment; conversely, if too much time passes, there is a risk that it will no longer be treatable. Therefore, if conductive hearing loss is suspected, the subject should first visit a medical institution to determine whether or not priority treatment is required, and if there is a treatment that should be prioritized, that treatment should be performed first. In this way, when considering wearing a hearing aid, it is important to understand the degree of conductive hearing loss. If there is conductive hearing loss but no treatment that should be prioritized, for example if the conductive hearing loss has been treated but some conductive hearing loss remains, wearing a hearing aid is then considered.
- First, an air conduction hearing test and a bone conduction hearing test are performed on the subject (step S101). Then, it is determined whether or not there is a problem with the subject's hearing (step S102). In step S102, for example, it is determined whether or not wearing a hearing aid is appropriate. Specifically, in Japan, a hearing level of 25 dBHL or more and less than 40 dBHL is considered mild hearing loss, and a hearing level of 40 dBHL or more and less than 70 dBHL is considered moderate hearing loss.
- In step S102, it is common to judge that a hearing aid is appropriate when the hearing level is 40 dBHL or more, although the criteria may change depending on the situation. If it is determined that there is no problem with the subject's hearing (step S102: No), the process proceeds to step S103, where it is decided to periodically monitor the subject's hearing as necessary, and the process ends.
- On the other hand, if it is determined that there is a problem with the subject's hearing (step S102: Yes), the process proceeds to step S104, where it is determined whether or not there is conductive hearing loss. Whether or not there is conductive hearing loss can be determined based on the air-bone gap; from the perspective of treatment, a widely used criterion is an air-bone gap of 25 dB or more. If it is determined that there is no conductive hearing loss (step S104: No), the process proceeds to step S105, where the subject is advised to consider a hearing aid, and the process ends.
- If it is determined in step S104 that there is conductive hearing loss (step S104: Yes), the process proceeds to step S106, where it is determined whether or not treatment other than a hearing aid is necessary; this determination is usually made at a medical institution or the like. If it is determined that treatment is not necessary (step S106: No), the process proceeds to step S105. On the other hand, if it is determined that treatment is necessary (step S106: Yes), the process proceeds to step S107, where it is decided to provide treatment to the subject, and the process ends.
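- The decision flow of steps S101 to S107 can be summarized as a simple triage function. The sketch below is a non-normative illustration that assumes the thresholds mentioned above (a hearing aid considered at 40 dBHL or more, conductive hearing loss at an air-bone gap of 25 dB or more); the function and variable names are hypothetical.

```python
def triage(air_dbhl: float, bone_dbhl: float, treatment_needed: bool) -> str:
    """Illustrative sketch of the S101-S107 flow; thresholds follow the text, names are hypothetical."""
    air_bone_gap = air_dbhl - bone_dbhl          # degree of conductive hearing loss (formula (3))
    if air_dbhl < 40:                            # S102: hearing aid generally not yet indicated
        return "monitor hearing periodically"    # S103
    if air_bone_gap < 25:                        # S104: no significant conductive component
        return "consider a hearing aid"          # S105
    if treatment_needed:                         # S106: decided at a medical institution
        return "provide treatment first"         # S107
    return "consider a hearing aid"              # S105
```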
- Figure 4 shows examples of hearing aid gain settings for each type of hearing loss.
- The left side of Figure 4 shows an example of hearing aid gain settings for conductive hearing loss, the center shows an example for mixed hearing loss, and the right side shows an example for sensorineural hearing loss. As noted above, mixed hearing loss is hearing loss in which conductive hearing loss and sensorineural hearing loss occur simultaneously.
- In the case of conductive hearing loss, since there is no damage to the inner ear, it is considered best to amplify the sound picked up by the hearing aid microphone with a constant gain. Amplifying with a constant gain regardless of the input level in this way is generally called linear amplification.
- In the example shown on the left side of Figure 4, amplification with a constant gain regardless of the input level is performed in the first section 751, where the slope of the line is 45 degrees (although the slope is not limited to this). The second section 752 is set as an output limit, which is usually provided to prevent the hearing loss from being aggravated further by loud sounds.
- In the example shown on the right side of Figure 4, for sensorineural hearing loss, amplification with different gains depending on the input level is performed in the sixth section 756, the seventh section 757, and the eighth section 758. For example, the slope of the straight line in the seventh section 757 is closer to horizontal than 45 degrees. The ninth section 759 is set as an output limit.
- The recruitment phenomenon is an abnormality of loudness (the perceived loudness of sound) caused by damage to the outer hair cells in the cochlea of the inner ear. When the recruitment phenomenon is present, people become sensitive to even small changes in volume, and symptoms may occur in which soft sounds are difficult to hear but loud sounds seem extremely loud.
- In the case of mixed hearing loss, shown in the center of Figure 4, the signal is amplified with a gain between that used for conductive hearing loss and that used for sensorineural hearing loss. For example, the slope of the fourth section 754 is between the slope of the first section 751 and the slope of the seventh section 757, and the degree of the slope is determined, for example, by the degree of conductive hearing loss and the degree of sensorineural hearing loss. The fifth section 755 is set as an output limit.
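- As an illustration of the input-output behavior described for Figure 4, the sketch below computes an output level from an input level for linear amplification (conductive loss), compressive amplification (sensorineural loss), and an intermediate setting (mixed loss), each with an output limit. The specific gains, ratios, and limits are hypothetical examples, not values from the patent.

```python
def output_level(input_db: float, gain_db: float, compression_ratio: float,
                 limit_db: float) -> float:
    """Output level for a given input level.

    compression_ratio = 1.0 -> linear amplification (45-degree slope plus constant gain);
    compression_ratio > 1.0 -> slope closer to horizontal (sensorineural case).
    The output is clipped at limit_db (the output-limit section).
    All values are illustrative only.
    """
    out = input_db / compression_ratio + gain_db
    return min(out, limit_db)

# Hypothetical settings for the three cases shown in Figure 4
conductive    = output_level(60.0, gain_db=30.0, compression_ratio=1.0, limit_db=110.0)
sensorineural = output_level(60.0, gain_db=30.0, compression_ratio=2.0, limit_db=100.0)
mixed         = output_level(60.0, gain_db=30.0, compression_ratio=1.5, limit_db=105.0)
```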
- FIG. 5 shows an example of a conventional pure tone hearing test.
- The equipment used to perform a pure tone hearing test is generally called an audiometer.
- The audiometer 913 has an air conduction receiver 911, a bone conduction receiver 912, and a response button 914. Pure tone hearing tests are performed using the audiometer 913 at medical institutions, hearing aid stores, and the like.
- When performing the air conduction hearing test included in a pure tone hearing test, the air conduction receiver 911 is attached to the head of the subject 901, and the subject 901 listens to a test sound. The test sound reaches the inner ear via the first path 121 shown in FIG. 1. When the subject 901 hears the test sound, he or she responds by pressing the response button 914.
- Air conduction receivers 911 come in two types: overhead and inner-ear.
- An overhead-type air conduction receiver has a shape similar to general overhead headphones used for listening to music, and an inner-ear-type air conduction receiver has a shape similar to general inner-ear earphones used for listening to music.
- When performing the bone conduction hearing test included in the pure tone hearing test, the bone conduction receiver 912 is attached to the head of the subject 901 and the subject listens to the test sound. The test sound reaches the inner ear via the second path 122 shown in FIG. 1.
- In the bone conduction hearing test, the ear opposite to the test ear (the ear being tested) is in principle masked with a masking air conduction receiver or the like (not shown). Because the bone conduction receiver 912 vibrates the skull, the test sound it emits is heard by both ears at the same time, so the ears cannot be tested separately without masking. Masking is therefore performed to prevent the subject 901 from hearing the test sound with the opposite ear and responding to it.
- The bone conduction receiver 912 is usually attached to the mastoid (behind the auricle) or the midline of the forehead, and must be attached precisely at a location where the test sound can be transmitted efficiently, so care is required when attaching it. Furthermore, the bone conduction receiver 912 must not touch the auricle, and it must not pinch any hair. In addition, since a masking air conduction receiver (not shown) is attached at the same time, some measure must be taken to prevent the positions of the two from shifting. For these reasons, it is preferable that the bone conduction receiver 912 be attached to the head of the subject 901 by an expert 903.
- Figure 6 shows an example of a hearing test in which a subject 102 purchases a hearing aid via the Internet, etc., without visiting a medical institution, hearing aid store, etc.
- Conventionally, hearing aids have mainly been sold through hearing aid specialists 903 working at medical institutions, hearing aid stores, and the like.
- Reasons for this trend toward online purchase include the price of hearing aids, which keeps away some people who need them, and the uneven regional availability of the services of hearing aid specialists 903.
- In the example of Figure 6, the subject 102 uses an information processing terminal 117 to perform an air conduction hearing test via the Internet 119. To perform the air conduction hearing test, for example, dedicated application software may be downloaded and installed on the information processing terminal 117, or the hearing aid 116 itself may have the function of performing an air conduction hearing test.
- However, since the hearing aid 116 does not have a bone conduction receiver 912, it can perform an air conduction hearing test but cannot perform a bone conduction hearing test. Therefore, when the hearing aid 116 is sold without the intervention of a hearing aid specialist 903 (hearing aids sold in this manner are called OTC (Over-The-Counter) hearing aids), the air conduction hearing level can be measured but the bone conduction hearing level cannot. In other words, in such cases the degree of conductive hearing loss cannot be measured.
- Figure 7 shows an example of a hearing test in a group health check at a school or company.
- The main purpose of a group health check is to efficiently find people with hearing impairments from among many people with normal hearing, that is, to screen them. Bone conduction hearing tests take a lot of time because the bone conduction receiver must be placed in the correct position and condition before the test can be performed, and they require appropriate masking, so the tester needs more specialized knowledge and skill than for air conduction hearing tests, which makes it difficult to secure suitable testers. For these reasons, bone conduction hearing tests are not usually performed in group health checks.
- The inventors therefore sought a method for easily determining the degree of conductive hearing loss using air conduction sound, so that the subject 901 does not miss an opportunity to receive prompt treatment even when the hearing aid 116 is sold without the intervention of a hearing aid specialist 903 or in the case of a group health check, and so that the hearing aid gain can be set appropriately even when the hearing aid 116 is sold without the intervention of a specialist. While engaged in extensive research toward such a method, the inventors were inspired by the following observations and arrived at the embodiments of the present disclosure.
- The self-recording audiometer, also called the Bekesy-type audiometer and invented by Bekesy in 1947, can provide material for determining the presence or absence of conductive hearing loss using only an air conduction receiver, without using a bone conduction receiver 912.
- A feature of the self-recording audiometer is that it uses two types of test sound sources: intermittent sounds and continuous sounds. In a test using a self-recording audiometer, the subject 901 keeps pressing a button while the test sound is audible, and keeps the button released while the test sound is not audible.
- The self-recording audiometer gradually weakens the intensity of the test sound while the subject 901 is pressing the button, and gradually increases it while the button is released.
- As a result, when the intensity of the test sound is recorded, a sawtooth-shaped graph is observed.
- Figure 8 shows an example of the results of observation using a self-recording audiometer.
- As shown in Figure 8, the intensity of the recorded test sound fluctuates around the air conduction hearing threshold of the subject 901.
- The threshold, taken as the median value between the peaks and valleys of the sawtooth wave, is approximately equal to the result of an air conduction hearing test using a conventional audiometer.
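- The tracking rule of the self-recording audiometer (attenuate while the button is held, amplify while it is released, then read the threshold near the middle of the peaks and valleys) can be sketched as follows. The simulated subject, step size, and durations are assumptions for illustration only.

```python
import statistics

def bekesy_track(true_threshold_db: float, start_db: float = 60.0,
                 step_db: float = 2.5, n_steps: int = 200) -> float:
    """Minimal sketch of Bekesy-style tracking against a simulated subject.

    The simulated subject "presses the button" whenever the presented level is at
    or above their true threshold, which makes the level decrease; otherwise the
    level increases. The estimate is the median of the recorded levels after an
    initial settling period. All parameters are illustrative.
    """
    level = start_db
    trace = []
    for _ in range(n_steps):
        button_pressed = level >= true_threshold_db
        level += -step_db if button_pressed else step_db
        trace.append(level)
    return statistics.median(trace[n_steps // 2:])

print(bekesy_track(35.0))  # settles close to the simulated 35 dB threshold
```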
- The observation results from the self-recording audiometer can be classified as follows to predict the type of hearing loss of the subject 901. For example, as shown in the upper right of Figure 8, if the amplitude (the difference between the peaks and valleys) of the sawtooth wave observed when a continuous sound is used is reduced compared to the result when an intermittent sound is used, the result is classified as Jerger Classification Type II. In Jerger Classification Type II, recruitment is positive, and sensorineural hearing loss caused by an inner ear disorder is suspected.
- Jerger Classification Type III is positive for a transient threshold shift, and sensorineural hearing loss caused by damage to the retrolabyrinth is suspected.
- In Jerger Classification Type IV as well, sensorineural hearing loss caused by a disorder of the retrolabyrinth is suspected.
- Jerger Classification Type V suggests psychogenic (functional) hearing loss.
- When there is no reduction in the amplitude of the sawtooth wave, no transient threshold increase, and the results for intermittent and continuous sounds overlap, the result is classified as Jerger Classification Type I, which is presumed to be normal. Furthermore, although not shown in Figure 8, when there is no reduction in the amplitude of the sawtooth wave and the results for intermittent and continuous sounds overlap but the threshold is elevated, conductive hearing loss is suspected.
- In this way, the results obtained can be classified according to the Jerger classification, allowing the hearing loss to be estimated as cochlear hearing loss, neural hearing loss (both forms of sensorineural hearing loss), or conductive hearing loss.
- Specifically, if the result is Jerger Classification Type II, cochlear hearing loss (sensorineural hearing loss) is assumed; if the result is Jerger Classification Type III or Type IV, neural hearing loss (sensorineural hearing loss) is assumed; and if the result is Jerger Classification Type I with an elevated threshold, conductive hearing loss is assumed.
- Figure 9 shows the ranges covered by self-recorded audiometry with respect to the affected area.
- The results of self-recorded audiometry using intermittent sounds can serve as an indicator of disorders that cause conductive hearing loss, cochlear hearing loss, and neural hearing loss.
- However, intermittent sounds alone cannot distinguish between conductive hearing loss (corresponding range 333) and neural hearing loss (corresponding range 335). If there is a reduction in the amplitude of the sawtooth wave, recruitment is positive and cochlear hearing loss (corresponding range 334) can be distinguished, but if there is no reduction in the amplitude, recruitment is not necessarily negative.
- Self-recorded audiometric results using continuous sounds can also be an indicator of disorders that cause conductive, cochlear, or neural hearing loss.
- Similarly, continuous sounds alone cannot distinguish between conductive hearing loss (corresponding range 336) and neural hearing loss (corresponding range 338). If there is a reduction in the amplitude of the sawtooth wave, recruitment is positive and cochlear hearing loss (corresponding range 337) can be distinguished, but if there is no reduction, recruitment is not necessarily negative. Furthermore, the reduction in amplitude is not necessarily the same when intermittent sounds are used (corresponding range 334) and when continuous sounds are used (corresponding range 337).
- However, by comparing the results obtained with intermittent sounds and those obtained with continuous sounds, neural hearing loss can be distinguished.
- In other words, self-recording audiometry is a test suited to classifying a sensorineural hearing loss as cochlear or neural when the hearing loss is already known to be sensorineural.
- However, because self-recording audiometry uses different test sounds, intermittent and continuous, the degree of conductive hearing loss cannot be determined by simple subtraction as in pure-tone audiometry.
- As described above, self-recorded audiometry is a testing method that uses and combines two types of test sounds, intermittent and continuous, to identify the presence and location of a hearing impairment. Furthermore, since self-recorded audiometry can be performed using only an air conduction receiver, it can be performed with the hearing aid 116; in other words, conductive hearing loss can be detected with the hearing aid 116.
- The inventors therefore focused on the measurement approach used in self-recording audiometry, in which the test is conducted by presenting two types of test sounds, intermittent and continuous, through an air conduction receiver, and thereby arrived at the embodiments of the present disclosure.
- However, the Jerger classification, which is applied to the results obtained using a self-recording audiometer with continuous and intermittent sounds, indicates the location of the impairment but says nothing about its severity, and its interpretation is particularly difficult in cases of mixed hearing loss.
- In addition, the purpose of self-recording audiometry is to sub-classify sensorineural hearing loss (that is, as cochlear or retrolabyrinthine), not to grasp the degree of conductive hearing loss. Therefore, simply applying self-recording audiometry does not establish a method for easily grasping the degree of conductive hearing loss using air-conducted sound.
- In the embodiments of the present disclosure, the degree of conductive hearing loss can be predicted using an air conduction receiver, that is, using air conduction sound.
- As a result, the subject 901 can be encouraged to visit a medical institution or the like, and can be prevented from missing an opportunity for early treatment of a disorder that causes hearing loss.
- Furthermore, the gain of the hearing aid 116 can be set appropriately according to the hearing ability of the subject 901. Details of such embodiments of the present disclosure are described in order below.
- Fig. 10 is an explanatory diagram for explaining the overview of this embodiment, and in detail shows the corresponding ranges of impaired parts in the hearing tests of the first and second groups in this embodiment.
- Fig. 11 is an explanatory diagram for explaining the learning data according to this embodiment.
- In this embodiment, hearing tests of two groups are conducted, similarly to self-recorded audiometry, which uses two types of test sounds (intermittent sounds and continuous sounds).
- The first group of hearing tests (first hearing tests) according to this embodiment has the corresponding range 541 and includes hearing tests capable of measuring the air conduction hearing threshold.
- For example, the first group of hearing tests according to this embodiment may include the air conduction hearing test of a pure tone hearing test, a self-recorded audiometry test using intermittent sounds, and the like.
- As the test sound, instead of a pure tone, a warble tone or narrow-band noise (more specifically, sustained noise centered on a specified frequency) may be used.
- Furthermore, the first group of hearing tests according to this embodiment may be a combination of multiple hearing tests.
- The second group of hearing tests (second hearing tests) according to this embodiment may have corresponding ranges 542, 543, and 544, or may have corresponding ranges 545 and 546; it may also have only corresponding range 545 or only corresponding range 546.
- In other words, the hearing test of the second group includes a hearing test of the inner ear and/or the retrolabyrinth.
- For example, when the hearing test of the first group is a self-recorded audiometry using intermittent sounds, the hearing test of the second group can be a self-recorded audiometry using continuous sounds.
- By comparing the results of the self-recorded audiometry using intermittent sounds with the results of the self-recorded audiometry using continuous sounds, retrolabyrinthine hearing loss in the corresponding range 544 of FIG. 10 can be distinguished.
- Examples of inner ear hearing tests that can be included in the second group of hearing tests according to this embodiment include hearing tests that show a positive result in the case of cochlear hearing loss, such as the Short Increment Sensitivity Index (SISI) test, the Alternate Binaural Loudness Balance (ABLB) test, and the Difference Limen (DL) test.
- In the SISI test, test sounds are output at regular intervals, occasionally increased in intensity by a certain amount, and the subject 901 is asked to respond whenever he or she notices that the sound has become louder. The rate of correct responses to the intensity increments is scored, and cochlear hearing loss is estimated based on the score.
- In the ABLB test, a pure tone of the test frequency is presented to the left and right ears, and the level at which it sounds equally loud in both ears is determined. A line connecting the results is then drawn, and cochlear hearing loss is estimated from the slope of the line.
- An example of a test that shows a positive result in the case of retrolabyrinthine (neural) hearing loss is the TD (Tone Decay) test. The TD test measures auditory fatigue: a continuous sound at the audible threshold of a fixed frequency is output to the subject 901, and if the test sound becomes inaudible within a specified time, the sound level is immediately increased by a specified amount and output again. This operation is repeated, and retrolabyrinthine hearing loss is estimated based on the level that can be heard for the specified time or longer.
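- A minimal sketch of the Tone Decay procedure described above, run against a simulated listener, is shown below. The decay model, the 60-second criterion, and the 5 dB step are assumptions for illustration, not values taken from the patent.

```python
def tone_decay_level(initial_threshold_db: float, decay_db_per_min: float,
                     step_db: float = 5.0, criterion_s: float = 60.0) -> float:
    """Return the lowest level the simulated listener can hear continuously for
    `criterion_s` seconds. The listener's effective threshold drifts upward by
    `decay_db_per_min` while a continuous tone is presented (a crude model of
    auditory adaptation); all parameters are illustrative."""
    level = initial_threshold_db
    while True:
        # seconds until the drifting threshold overtakes the presented level
        audible_s = ((level - initial_threshold_db) / decay_db_per_min) * 60.0 if decay_db_per_min > 0 else float("inf")
        if audible_s >= criterion_s:
            return level
        level += step_db   # raise the level and present the tone again

# A rise of 10 dB or more over the initial threshold suggests retrolabyrinthine involvement
print(tone_decay_level(30.0, decay_db_per_min=12.0) - 30.0)
```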
- The second group of hearing tests can also include tests that estimate a tendency toward neural hearing loss by comparison with the average pure-tone hearing level.
- For example, the results of a speech recognition threshold test basically match the results of a pure-tone hearing test, but in cases of neural hearing loss the speech recognition threshold is known to be significantly poorer. Therefore, the second group of hearing tests can include a speech recognition threshold test.
- Similarly, the second group of hearing tests can include a maximum speech intelligibility test.
- Distorted speech audiometry can also be used to detect neural hearing loss, because speech recognition scores are known to be lower for distorted speech than for undistorted speech, so distorted speech audiometry can likewise be included in the second group of hearing tests.
- Examples of retrolabyrinthine hearing tests that can be included in the second group of hearing tests according to this embodiment include a dichotic listening test and a sound localization (sense of direction) test.
- A dichotic listening test is a test in which different auditory stimuli are presented simultaneously to the left and right ears.
- The tests mentioned above are given as examples; in this embodiment, the tests that may be included in the first group and second group hearing tests are not limited to these specific hearing tests, and variations of them may also be used.
- In this embodiment, the degree of conductive hearing loss is predicted by combining the results of the first group and second group hearing tests, which include different test contents. Specifically, the degree of conductive hearing loss is predicted by applying a statistical method or a trained model to these results. In this way, the degree of conductive hearing loss can be easily grasped using air conduction sound.
- In this embodiment, training data consisting of known test results is prepared in order to predict the degree of conductive hearing loss using a statistical method or to generate a trained model.
- FIG. 11 shows an example of learning data in this embodiment.
- As shown in FIG. 11, N sets are prepared, each consisting of first group hearing test information 781 (the test results of a first group hearing test for the same subject 901 at the same time), second group hearing test information 782 (the test results of a second group hearing test), and conductive hearing loss degree information 784.
- The first group hearing test information 781 shown in FIG. 11 includes only one test result, but in this embodiment it is not limited to one and may include multiple results.
- The learning data in this embodiment may also include additional information 783; while FIG. 11 shows only one piece of additional information, there may be multiple pieces.
- The first group hearing test information 781 can be the test results of a hearing threshold test; more specifically, it can be, for example, an air conduction hearing level. In this embodiment, the numerical value of the air conduction hearing level can be used as is, or can be normalized to lie between 0.0 and 1.0.
- The second group hearing test information 782 can be the test results of a test that shows a positive result in the case of sensorineural hearing loss; more specifically, it can be, for example, the test results of a self-recorded audiometry using continuous sound. Note that when the results of a self-recorded audiometry using continuous sound are used as the second group hearing test information 782, it is desirable to compare them with the results of a self-recorded audiometry using intermittent sound, so it is desirable that the first group hearing test information 781 include the results of a self-recorded audiometry using intermittent sound.
- In the self-recorded audiometry using continuous sound, a reduction in the amplitude of the sawtooth wave can be used to determine whether or not the hearing impairment originates in the inner ear. When the amplitude of the sawtooth wave is within 3 dB, an inner ear origin is strongly suspected, and when it is within 2 dB, an inner ear disorder is considered present. Therefore, in this embodiment, for example, an amplitude within 2.5 dB may be treated as positive (+) and an amplitude of more than 2.5 dB as negative (-); furthermore, positive may be defined as 1.0 and negative as 0.0, or the result may be converted into a score that expresses the range between positive and negative as a continuous value.
- A transient threshold increase for continuous sounds can be used to determine retrolabyrinthine hearing impairment. If the transient threshold increase for continuous sounds is 10 dB or more relative to the threshold for intermittent sounds, it can be treated as positive (+), and if it is less than 10 dB, as negative (-). Again, positive can be defined as 1.0 and negative as 0.0, or the result can be converted into a score that expresses the range between positive and negative as a continuous value.
- When the hearing test of the second group targets both cochlear hearing loss and neural hearing loss, it may be a combination of a hearing test that is positive in the case of cochlear hearing loss and a hearing test that is positive in the case of neural hearing loss.
- For example, the hearing test of the second group in this embodiment may include a SISI test as a test that is positive in the case of cochlear hearing loss, and a TD test as a test that is positive in the case of neural hearing loss.
- For the SISI test, a score of 60% or more may be treated as positive (+), a score of 20-55% as suspected positive (+'), and a score of 15% or less as negative (-); positive may be defined as 1.0 and negative as 0.0, or the score may be converted into one that expresses the range between positive and negative as a continuous value.
- For the TD test, if the increase in the minimum level that can be heard for one minute or more is 10 dB or more, it may be treated as positive (+), and if it is less than 10 dB, as negative (-); again, positive may be defined as 1.0 and negative as 0.0, or the result may be converted into a continuous score.
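- The conversions described above (sawtooth amplitude reduction, transient threshold increase, SISI score, and TD increase, each mapped to a positive/negative value or a continuous score) can be gathered into a small feature-extraction helper. The thresholds follow the examples in the text; the function names and the choice of continuous scaling are hypothetical.

```python
def amplitude_score(sawtooth_amplitude_db: float, cutoff_db: float = 2.5) -> float:
    """Continuous-sound sawtooth amplitude: <= cutoff -> positive (1.0), otherwise negative (0.0)."""
    return 1.0 if sawtooth_amplitude_db <= cutoff_db else 0.0

def threshold_shift_score(shift_db: float) -> float:
    """Transient threshold increase of continuous vs. intermittent sound: >= 10 dB -> positive."""
    return 1.0 if shift_db >= 10.0 else 0.0

def sisi_score(percent_correct: float) -> float:
    """SISI: >= 60% positive, <= 15% negative, graded value in between (illustrative scaling)."""
    if percent_correct >= 60.0:
        return 1.0
    if percent_correct <= 15.0:
        return 0.0
    return (percent_correct - 15.0) / (60.0 - 15.0)

def td_score(level_increase_db: float) -> float:
    """Tone decay: an increase of >= 10 dB in the level heard for one minute or more -> positive."""
    return 1.0 if level_increase_db >= 10.0 else 0.0
```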
- The first group hearing test information 781 and the second group hearing test information 782 may also include information obtained by biosensing of the subject 901.
- An example of such biosensing information is brain wave information, for example an auditory evoked response such as the ABR (auditory brainstem response).
- In the ABR, the response classified as wave I originates from the cochlear nerve (retrolabyrinth). Therefore, in this embodiment, for example, a case where the amplitude of ABR wave I is less than 0.1 μV may be treated as positive (+), and a case where it is 0.1 μV or more as negative (-).
- The additional information 783 shown in FIG. 11 can be information that indicates a statistical tendency related to the state of hearing.
- For example, the additional information 783 can be attribute information of the subject 901, such as age, sex, medical history, family history, occupational history, and lifestyle.
- For example, the additional information 783 may be scored as 1.0 when the subject 901 is 75 years old or older and 0.0 when the subject is younger than 75, or age may be expressed as a continuous score from 0.0 to 1.0.
- The conductive hearing loss degree information 784 is a value representing the degree of conductive hearing loss.
- For example, the air-bone gap can be used as the value representing the degree of conductive hearing loss.
- Alternatively, it may be converted into a conductive hearing loss score value z. For example, the conductive hearing loss score value z may be the ratio of the air-bone gap to the air conduction hearing level, as shown in the following formula (4).
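- Read literally from the sentence above, formula (4) would take the following form (a reconstruction based on that description; the patent's exact notation is not reproduced here):

```latex
% Formula (4): conductive hearing loss score as the ratio of the air-bone gap
% to the air conduction hearing level
z = \frac{\text{air-bone gap}}{\text{air conduction hearing level}}
```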
- In this embodiment, the conductive hearing loss degree information 784 is quantified based on the first group hearing test information 781 and the second group hearing test information 782 shown in Fig. 11. Specifically, for example, the air conduction hearing level included in the first group hearing test information 781 is set to y_1, the SISI test score included in the second group hearing test information 782 is set to y_2, the TD test score is set to y_3, and the M-th test score included in the second group hearing test information 782 is set to y_(M+1). These values form a feature vector, and the conductive hearing loss score value z indicating the conductive hearing loss degree information 784 is calculated according to the following formula (5).
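- Given the weighting factors and bias value defined next, formula (5) is presumably the linear weighted sum below (a reconstruction under that assumption):

```latex
% Formula (5): linear combination of the feature values y_1 ... y_{M+1}
z = u_0 + \sum_{i=1}^{M+1} u_i \, y_i
```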
- Here, u_i is a weighting factor and u_0 is a predetermined bias value. Various statistical methods can be used to determine the weighting factors u_i and the bias value u_0.
- For example, a large number of known air-bone gaps and the corresponding first and second group hearing test information 781 and 782 are collected, and training data is prepared in which each feature vector is paired with a desired score value (for example, 0.0 to 1.0). Linearly approximated weights can then be obtained from this training data, determining the weighting factors u_i and the bias value u_0.
- Alternatively, a neural network may be used, or a discrimination method such as Bayesian estimation or vector quantization may be used; the method is not particularly limited.
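- As one concrete, non-normative way to determine the weighting factors u_i and the bias value u_0 from such training data, the sketch below fits the linear model by least squares with numpy. The data shapes and names are hypothetical.

```python
import numpy as np

def fit_linear_weights(features: np.ndarray, scores: np.ndarray):
    """Fit z = u0 + sum_i(u_i * y_i) by least squares.

    features: shape (N, M+1), rows are feature vectors [y_1, ..., y_(M+1)]
    scores:   shape (N,), desired conductive hearing loss score values (e.g. 0.0 to 1.0)
    Returns (u0, u) with u of shape (M+1,). Illustrative only.
    """
    design = np.hstack([np.ones((features.shape[0], 1)), features])  # prepend a bias column
    coeffs, *_ = np.linalg.lstsq(design, scores, rcond=None)
    return coeffs[0], coeffs[1:]

def predict_score(u0: float, u: np.ndarray, feature_vector: np.ndarray) -> float:
    """Predict the conductive hearing loss score z for one feature vector."""
    return float(u0 + u @ feature_vector)
```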
- In the above description, the conductive hearing loss score value z indicating the conductive hearing loss degree information 784 is calculated using a statistical method, but this embodiment is not limited to such a method.
- For example, a trained model may be generated by performing deep machine learning using a large amount of training data prepared in advance, and the trained model may be used to predict the conductive hearing loss degree information 784 (the conductive hearing loss score value z).
- In this case, one training sample refers to an information set that includes the first group hearing test information 781 and the second group hearing test information 782 (input data) and the conductive hearing loss degree information 784 (teacher data) for the same subject 901 at the same time.
- In this embodiment, the degree of conductive hearing loss described above can be predicted using a combination of the information processing terminal 117 and the hearing aid 116 (or the headphone speaker 115).
- The functional configuration of the information processing terminal 117 according to this embodiment is described below with reference to Figures 12 and 13.
- Figure 12 is a block diagram of the information processing terminal 117 according to this embodiment
- Figure 13 is an explanatory diagram explaining the conductive hearing loss prediction unit according to this embodiment.
- The information processing terminal 117 includes a first group hearing test sound source generating unit (sound source generating unit) 161, a second group hearing test sound source generating unit (sound source generating unit) 162, a test control unit 163, a conductive hearing loss degree prediction unit (prediction unit) 164, and a test information storage unit 165. Furthermore, the information processing terminal 117 according to this embodiment includes an additional information storage unit 166, a hearing test sound output means 171, an information output means (output unit) 172, an information input means 173, and a communication means 174. Each functional unit included in the information processing terminal 117 according to this embodiment is explained in order below.
- Alternatively, for example, the functions may be distributed so that the information processing terminal 117 includes the first group hearing test sound source generating unit 161, the second group hearing test sound source generating unit 162, the test control unit 163, the test information storage unit 165, the additional information storage unit 166, and the communication means 174, the hearing aid 116 or the headphone speaker 115 includes the hearing test sound output means 171, and a server on the Internet 119 includes the conductive hearing loss degree prediction unit 164.
- The first group hearing test sound source generator 161 generates the sound source of the test sound for the first group hearing test under the control of the test control unit 163 described later, and outputs the generated sound source to the hearing test sound output means 171. In this embodiment, when the first group hearing test is a combination of multiple tests, generation and output are repeated for the number of types of tests.
- Similarly, the second group hearing test sound source generator 162 generates the sound source of the test sound for the second group hearing test under the control of the test control unit 163, and outputs it to the hearing test sound output means 171. When the hearing test of the second group is a combination of multiple tests, generation and output are likewise repeated for the number of types of tests.
- The test control unit 163 controls the generation of the sound sources by the first group hearing test sound source generating unit 161 and the second group hearing test sound source generating unit 162 in response to input information from the information input means 173 described later, and conducts the hearing tests according to the prescribed rules. To make the hearing tests proceed smoothly, the test control unit 163 may also control the information output means 172 described later to guide the subject 901, for example by informing the subject 901 of the test procedure.
- the conductive hearing loss degree prediction unit 164 can receive the hearing test results of the first and second groups stored in the test information storage unit 165 described later via the test control unit 163. The conductive hearing loss degree prediction unit 164 can predict the degree of conductive hearing loss based on the hearing test results. Furthermore, the conductive hearing loss degree prediction unit 164 can receive additional information 783 stored in the additional information storage unit 166 described later via the test control unit 163 and use it to predict the degree of conductive hearing loss. Furthermore, the conductive hearing loss degree prediction unit 164 can output prediction information of the predicted degree of conductive hearing loss to the information output means 172 via the test control unit 163.
- FIG. 13 shows the input and output of the conductive hearing loss degree prediction unit 164.
- the conductive hearing loss degree prediction unit 164 receives the first group of hearing test information 781 and the second group of hearing test information 782, and outputs conductive hearing loss degree prediction information. Additionally, additional information 783 may be input to the conductive hearing loss degree prediction unit 164.
- the conductive hearing loss degree prediction unit 164 can predict the degree of conductive hearing loss using only air-conducted sound, for example, by using the statistical method described above or a method using a trained model.
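- As a minimal illustration of the kind of statistical prediction described above, the following Python sketch combines first-group and second-group air-conduction test results linearly. The function name, feature layout, coefficients and bias are hypothetical placeholders; they do not reproduce formula (4) or (5), whose actual coefficients would be delivered, for example, via the communication means 174.

```python
import numpy as np

# Hypothetical sketch of a linear predictor: the feature layout, coefficient
# values and bias are illustrative only and do not reproduce formulas (4)/(5).
def predict_conductive_loss(first_group: np.ndarray,
                            second_group: np.ndarray,
                            weights: np.ndarray,
                            bias: float) -> float:
    """Predict the degree of conductive hearing loss (dB) from air-conduction
    test results of the first and second groups."""
    features = np.concatenate([first_group, second_group])
    return float(weights @ features + bias)

# Example: air-conduction thresholds (first group) and, e.g., normalized
# second-group scores; weights and bias would normally be delivered from an
# external device via the communication means 174.
first_group = np.array([30.0, 35.0, 40.0])              # dBHL at three frequencies
second_group = np.array([0.2, 0.1, 0.0])                # normalized second-group scores
weights = np.array([0.3, 0.3, 0.2, -5.0, -5.0, -5.0])   # illustrative
bias = -2.0
print(predict_conductive_loss(first_group, second_group, weights, bias))
```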
- The test information storage unit 165, under the control of the test control unit 163, stores result information of the hearing test of the first group and result information of the hearing test of the second group.
- the additional information storage unit 166 stores additional information 783 input via the information input means 173 (specifically, a keyboard, a touch panel, or an external information processing terminal, etc.) under the control of the test control unit 163.
- The information input means 173 is, specifically, a keyboard, a touch panel, an external information processing terminal, or the like.
- the hearing test sound output means 171 can output the sound sources generated by the first group hearing test sound source generator 161 and the second group hearing test sound source generator 162 to the subject 901 to perform the test.
- the hearing test sound output means 171 can be, for example, a receiver of the hearing aid 116.
- the information output means 172 can display, etc., the result information of the first group hearing test, the result information of the second group hearing test, information such as the predicted degree of conductive hearing loss, or information derived therefrom.
- the information output means 172 can be the screen of the information processing terminal 117 or a display device wirelessly connected to the information processing terminal 117.
- For example, a response from the subject 901 is input to the information input means 173. Specifically, for example, when an air conduction hearing test is performed, the subject 901 keeps pressing the response button 914 while a sound is heard; in this case, the response button 914 serves as the information input means 173.
- the information input means 173 may be such a response button 914, or may be a touch panel superimposed on the screen of the information processing terminal 117 or other input means.
- The response is not limited to pressing the response button 914, and may be a predetermined movement of the head or arm of the subject 901.
- the response may be input by sensing the movement of the subject 901 with an imaging device or an acceleration sensor.
- the response may be the voice of the subject 901, in which case the voice may be picked up by a microphone.
- a response may be input using a biosensor.
- brain waves indicating an auditory brainstem response or the like of the subject 901 may be detected by a biosensor and input as a response.
- the information input means 173 is not particularly limited as long as it is a means capable of inputting response or answer information from the subject 901.
- the communication means 174 can receive additional information 783 such as age and sex from an external device and transmit it to the additional information storage unit 166 via the test control unit 163. Furthermore, the communication means 174 can receive coefficients, bias values, learned models, etc. used in the conductive hearing loss degree prediction unit 164 from an external device and transmit them to the conductive hearing loss degree prediction unit 164. In this embodiment, by doing so, it is possible to constantly update the coefficients, bias values, and learned models to ones with higher accuracy. Furthermore, the communication means 174 can transmit prediction information of the degree of conductive hearing loss predicted by the conductive hearing loss degree prediction unit 164 to an external device.
- the functional configuration of the information processing terminal 117 is not limited to the configuration shown in FIG. 12.
- Fig. 14 and Fig. 15 are flowcharts illustrating the flow of the information processing method according to this embodiment.
- the example shown in FIG. 14 includes multiple steps, step S201 and step S202.
- a hearing test for the first group and a hearing test for the second group are performed (step S201).
- The order in which the tests are performed may be determined depending on the tests being performed. For example, since the SISI test is usually performed at a level about 20 dB above the hearing threshold, it is preferable to have completed the hearing threshold test in advance. Therefore, in this embodiment, the hearing test for the hearing threshold may be performed first, followed by the SISI test.
- the degree of conductive hearing loss is predicted based on the result information of the hearing test of the first group and the result information of the hearing test of the second group (step S202), and the process ends.
- formula (4) or formula (5) can be used to predict conductive hearing loss.
- FIG. 15 includes multiple steps from step S301 to step S304.
- the example shown in FIG. 15 differs from the example shown in FIG. 14 in that if there is no problem with the air conduction hearing level obtained by the hearing test of the first group, the hearing test of the second group is not performed and the process is terminated.
- the air conduction hearing level is the sum of the degree of conductive hearing loss and the degree of sensorineural hearing loss. Therefore, if there is no problem with hearing at the air conduction hearing level, it can be seen at this point that there is no problem with the degree of conductive hearing loss either.
- the time required for the entire test can be shortened by terminating the process without performing the hearing test of the second group. In particular, shortening the test time is important in cases such as mass medical examinations.
- a hearing test is first performed on the first group (step S301).
- Next, it is determined whether there is a problem with the air conduction hearing level obtained in step S301 (step S302). In this embodiment, the criterion for this determination can be whether the air conduction hearing level is 25 dBHL or higher. If there is no problem with the air conduction hearing level (step S302: No), the process ends. On the other hand, if there is a problem with the air conduction hearing level (step S302: Yes), the process proceeds to step S303. Next, a hearing test of the second group is performed (step S303).
- Next, based on the result information of the hearing test of the first group and the result information of the hearing test of the second group, the degree of conductive hearing loss is predicted (step S304), and the process ends.
- formula (5) can be used to predict conductive hearing loss, as explained above.
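- The following is a minimal Python sketch of the FIG. 15 flow (steps S301 to S304), using the 25 dBHL criterion mentioned above. The helper functions for running the tests and predicting the degree of conductive hearing loss are assumed to exist and are passed in as arguments; their names are illustrative.

```python
# A sketch of the FIG. 15 flow (steps S301-S304). run_first_group_test,
# run_second_group_test and predict_conductive_loss are assumed helper
# functions supplied by the caller; the 25 dBHL criterion follows the text.
AIR_CONDUCTION_PROBLEM_THRESHOLD_DBHL = 25.0

def hearing_test_flow(run_first_group_test, run_second_group_test,
                      predict_conductive_loss):
    first_results = run_first_group_test()                     # step S301
    worst_air_level_dbhl = max(first_results.values())         # per-frequency dBHL
    if worst_air_level_dbhl < AIR_CONDUCTION_PROBLEM_THRESHOLD_DBHL:
        return None                                            # step S302: No -> end
    second_results = run_second_group_test()                   # step S303
    return predict_conductive_loss(first_results, second_results)  # step S304

# Minimal usage with stub test functions (values are arbitrary examples):
print(hearing_test_flow(
    lambda: {500: 40.0, 1000: 45.0, 2000: 50.0},
    lambda: {"sisi_score": 0.2},
    lambda first, second: 20.0))
```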
- FIG. 16 is an explanatory diagram illustrating an example of a hearing test service in this embodiment.
- As described above, when considering purchasing a hearing aid 116, it is preferable to first perform a hearing test in order to understand the level of hearing and to determine whether a visit to a medical institution or the like is necessary before wearing the hearing aid 116.
- the subject 901 operates the information processing terminal 117 to access a hearing aid sales site provided by a hearing aid sales company 291 via the Internet 119.
- the hearing aid sales site of the hearing aid sales company 291 is assumed to provide a hearing test service.
- the hearing test service may be provided by the hearing aid sales company 291 itself, or may be provided by a hearing test service provider 292 different from the hearing aid sales company 291.
- In this example, the hearing test service provider 292 provides the hearing test service.
- the results of the hearing test may be provided by the hearing test service provider 292 to the hearing aid sales company 291, or may be provided directly by the hearing test service provider 292 to the subject 901.
- the subject 901 operates the information processing terminal 117 to perform the hearing test.
- the program for performing the hearing test may be one that operates on the Internet 119, or may be one that operates on the information processing terminal 117, and is not particularly limited.
- the headphone speaker (hereinafter also referred to as headphones) 115 connected to the information processing terminal 117 may not be one that is calibrated for the hearing test.
- With uncalibrated headphones 115, if the specifications of the headphones 115 do not meet the sound pressure level and frequency characteristics expected for the hearing test, the hearing test may not be performed accurately. Therefore, in this application example, it is preferable to obtain characteristic information (output characteristic information) such as the sound pressure level and frequency characteristics of the headphones 115 in advance, prior to the test.
- In this way, the headphones 115 can, in effect, be calibrated for the hearing test.
- characteristic information of various commercially available headphones 115 is stored in advance in the information processing terminal 117, and the subject 901 selects or inputs information such as the model number and specifications of the headphones 115 that he or she owns, and the information processing terminal 117 acquires the characteristic information corresponding to the headphones 115.
- the information processing terminal 117 may communicate with the headphones 115, acquire information such as the model number and specifications from the headphones 115, and download the characteristic information of the headphones 115 from the database 299 based on the acquired information.
- the information processing terminal 117 can perform a hearing test after making corrections based on the acquired characteristic information (specific corrections will be described later).
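- A minimal sketch of how characteristic information might be looked up by model number is shown below. The table contents and field names are illustrative assumptions; in practice the information would be pre-stored in the information processing terminal 117 or downloaded from the database 299.

```python
# Hypothetical lookup of output-characteristic information by model number.
# The table contents (sensitivities, responses) are illustrative only; in the
# text they would be pre-stored in the terminal or downloaded from database 299.
HEADPHONE_CHARACTERISTICS = {
    "MODEL-A": {"sensitivity_db_spl_per_mw": 100.0,
                "response_db": {250.0: 0.0, 1000.0: 0.0, 4000.0: -3.0}},
    "MODEL-B": {"sensitivity_db_spl_per_mw": 94.0,
                "response_db": {250.0: -2.0, 1000.0: 0.0, 4000.0: -6.0}},
}

def get_characteristics(model_number: str):
    """Return characteristic information for the selected or entered model,
    or None if it cannot be acquired (in which case the subject is notified)."""
    return HEADPHONE_CHARACTERISTICS.get(model_number)
```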
- the headphones 115 may be either a wired type or a wireless type.
- the headphones 115 are not limited to overhead type headphones, and may be earphones or similar devices.
- the characteristic information of the headphones 115 is not limited to being stored in advance in the information processing terminal 117, and the characteristic information of the headphones 115 may be downloaded from the database 299 to the information processing terminal 117 via the Internet 119 or a line not shown. Furthermore, in this application example, correction of the headphones 115 may be performed on the information processing terminal 117, or may be performed on a program running on the Internet 119. Furthermore, the subject 901 can respond or answer to the hearing test using the information processing terminal 117 or the like, and can receive the test results via, for example, the information processing terminal 117 or the like.
- If the characteristic information of the headphones 115 cannot be acquired, or if the headphones 115 are unsuitable for the test, the information processing terminal 117 or the like may notify the subject 901 of that fact and recommend the use of other headphones 115.
- Furthermore, the information processing terminal 117 or the like may present to the subject 901 the headphones 115 having the characteristic information most suitable for the test. For example, if the subject 901 owns two pairs of headphones, one that can output a high sound pressure level and one that can only output a low sound pressure level, the former is more suitable for the test because it can accommodate more severe hearing loss. Also, for example, if the subject 901 owns two pairs of overhead headphones, a circumaural (over-ear) type and a supra-aural (on-ear) type, the former is more suitable for the test because it can better cope with a noisy environment.
- a person wishing to purchase a hearing aid can know, based on the results of the hearing test, whether they should first proceed with considering purchasing a hearing aid 116, or whether they should visit a medical institution or the like first.
- In this way, a hearing test service can be used even without going through a specialist 903.
- the subject 901 can use the hearing test service of the hearing test service provider 292 using the information processing terminal 117. At this time, the subject 901 can use the headphones 115 that he or she owns.
- Fig. 17 is an explanatory diagram for explaining the overview of this embodiment.
- characteristic information of the headphones (audio output device) 115 to be used can be acquired based on an image of the headphones 115.
- the headphones 115 owned by the subject 901 are photographed with the camera of the information processing terminal 117, and information such as the model number and specifications of the headphones 115 is identified using image recognition technology. Then, in this embodiment, characteristic information of the headphones 115 is obtained based on the identified information such as the model number and specifications of the headphones 115, and can be used for correction during testing. According to this embodiment, it is possible to easily obtain characteristic information of the headphones 115 and perform correction without bothering the subject 901. Note that this embodiment can be executed mainly by a program that operates on the Internet 119 or a program that operates on the information processing terminal 117.
- Fig. 18 is a block diagram of the information processing terminal 117 according to this embodiment.
- the information processing terminal 117 includes a first group hearing test sound source generator 161, a second group hearing test sound source generator 162, a test control unit 163, a conductive hearing loss degree prediction unit 164, and a test information storage unit 165, as in the first embodiment. Also, the information processing terminal 117 according to this embodiment includes an additional information storage unit 166, a hearing test sound output means 171, an information output means 172, an information input means 173, and a communication means 174, as in the first embodiment. Furthermore, in this embodiment, the information processing terminal 117 includes a level and frequency characteristic correction information storage unit 189 and a level and frequency characteristic correction unit 190.
- each functional unit included in the information processing terminal 117 according to this embodiment will be described in order, but here, the functional units common to the first embodiment will be omitted, and only the level and frequency characteristic correction information storage unit 189 and the level and frequency characteristic correction unit 190 will be described.
- the information processing terminal 117 includes a first group hearing test sound source generating unit 161, a second group hearing test sound source generating unit 162, a test control unit 163, a test information storage unit 165, an additional information storage unit 166, a communication means 174, a level and frequency characteristic correction information storage unit 189, and a level and frequency characteristic correction unit 190.
- the hearing aid 116 or the headphone speaker 115 includes a hearing test sound output means 171.
- the server on the Internet 119 includes a conductive hearing loss degree prediction unit 164.
- the level and frequency characteristic correction information storage unit 189 stores images of various types of headphones 115 linked to the model numbers of the headphones 115. The images are used by the level and frequency characteristic correction unit 190 when identifying the model numbers of the headphones 115 using image recognition technology.
- the level and frequency characteristic correction information storage unit 189 also stores characteristic information of various types of headphones 115. The characteristic information is used by the level and frequency characteristic correction unit 190 when performing correction.
- the information stored in the level and frequency characteristic correction information storage unit 189 may be downloaded from a database 299 via the Internet 119 or a line not shown.
- the level and frequency characteristic correction information storage unit 189 is not limited to storing images of the headphones 115, and may store identification signals and the like output from the headphones 115.
- the identification signals and the like stored in the level and frequency characteristic correction information storage unit 189 are collated with the identification signals and the like acquired from the headphones 115, thereby making it possible to identify the model numbers and specification information of the headphones 115.
- The level and frequency characteristic correction unit 190 can correct the sound sources of the test sounds generated by the first group hearing test sound source generation unit 161 and the second group hearing test sound source generation unit 162, based on the characteristic information of the headphones 115 stored in the level and frequency characteristic correction information storage unit 189. Furthermore, the level and frequency characteristic correction unit 190 can output the corrected test sound sources to the hearing test sound output means 171.
- the level and frequency characteristic correction unit 190 may use image recognition technology to identify the model number of the headphones 115 based on an image of the headphones 115.
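- The following Python sketch illustrates one possible level and frequency characteristic correction, under the assumption that the characteristic information gives the deviation (in dB) of the headphones 115 from the level expected by the test at each test frequency. The function and parameter names are illustrative.

```python
import numpy as np

# Sketch of a level/frequency correction of a pure-tone test sound, assuming
# the characteristic information gives the per-frequency deviation (dB) of the
# headphones from the level expected by the test. Names are illustrative.
def corrected_test_tone(freq_hz: float, target_level_db: float,
                        response_db: dict, fs: int = 48000,
                        duration_s: float = 1.0) -> np.ndarray:
    deviation_db = response_db.get(freq_hz, 0.0)      # headphone deviation at freq
    drive_level_db = target_level_db - deviation_db   # compensate the deviation
    amplitude = 10.0 ** (drive_level_db / 20.0)       # relative linear amplitude
    t = np.arange(int(fs * duration_s)) / fs
    return amplitude * np.sin(2.0 * np.pi * freq_hz * t)

# Example: a 1 kHz test tone at a relative level of -40 dB, corrected using an
# illustrative headphone response table.
tone = corrected_test_tone(1000.0, target_level_db=-40.0,
                           response_db={250.0: -2.0, 1000.0: 0.0, 4000.0: -6.0})
```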
- the functional configuration of the information processing terminal 117 is not limited to the configuration shown in FIG. 18.
- Fig. 19 is a flowchart illustrating the flow of the information processing method according to this embodiment.
- the information processing method includes a number of steps from step S401 to step S403.
- level and frequency characteristic correction information is set (step S401).
- image recognition technology is applied to an image of the headphones 115 owned by the subject 901, taken by the camera of the information processing terminal 117, to identify the model number of the headphones 115.
- characteristic information of the headphones 115 is acquired based on the identified model number of the headphones 115, and level and frequency characteristic correction information for correcting the test sound source is set based on the acquired information.
- the sound source of the test sound is corrected based on the set level and frequency characteristic correction information. Note that in this embodiment, if characteristic information of the headphones 115 used by the subject 901 cannot be acquired, the subject 901 is notified of this fact, and the process ends.
- steps S402 and S403 are performed in sequence, but since these steps are similar to steps S201 and S202 of the information processing method according to the first embodiment shown in FIG. 14, a description thereof will be omitted here.
- the headphones 115 owned by the subject 901 are photographed with the camera of the information processing terminal 117, and the model number of the headphones 115 is identified using image recognition technology, making it possible to easily obtain characteristic information of the headphones 115 and perform corrections without bothering the subject 901.
- Fig. 20 is a flowchart illustrating the flow of an information processing method according to this embodiment.
- This embodiment is an application example when determining the need to visit a medical institution before wearing a hearing aid.
- a subject 901 performs the test by himself/herself.
- the information processing method according to this embodiment includes multiple steps from step S501 to step S508.
- a hearing test is performed on the first group and a hearing test is performed on the second group (step S501).
- the degree of conductive hearing loss is predicted based on the result information of the hearing test on the first group and the result information of the hearing test on the second group (step S502).
- Next, it is determined whether there is a hearing problem (step S503).
- In step S503, it is determined whether the hearing level is one for which wearing a hearing aid 116 is appropriate. In this embodiment, for example, whether the air conduction hearing level is 40 dBHL or higher can be used as the criterion. If it is determined that there is no hearing problem (step S503: No), the process proceeds to step S504. Then, the subject 901 is advised to undergo regular follow-up observation as necessary (step S504), and the process ends.
- On the other hand, if it is determined that there is a hearing problem (step S503: Yes), the process proceeds to step S505, where it is determined whether the predicted degree of conductive hearing loss is high.
- a criterion can be used that determines that the predicted degree of conductive hearing loss is high if the predicted degree of conductive hearing loss is 25 dBHL or higher. In such a case, the predicted degree of conductive hearing loss is compared with the reference value (predetermined threshold) of 25 dBHL to make a judgment. If it is determined that the predicted degree of conductive hearing loss is not high (step S505: No), the process proceeds to step S506. Then, the subject 901 is advised to consider wearing a hearing aid (step S506), and the process ends.
- If the predicted degree of conductive hearing loss is judged to be high (step S505: Yes), the process proceeds to step S507.
- The subject 901 is then asked whether he or she has already had a medical professional or the like confirm that there is no problem with wearing a hearing aid (step S507). For example, if the subject 901 has been treated for conductive hearing loss but still has conductive hearing loss, it is highly likely that there is no problem with wearing a hearing aid. If the answer is that the medical professional or the like has already confirmed this (step S507: No), the process proceeds to step S506. On the other hand, if the answer is that the medical professional or the like has not yet confirmed this (step S507: Yes), the process proceeds to step S508, where the subject 901 is encouraged to visit a medical institution or the like (step S508), and the process ends.
- In the above description, the determination of whether there is a hearing problem is made in step S503, but this embodiment is not limited to this.
- the determination of whether there is a hearing problem may be made between the hearing test for the first group and the hearing test for the second group.
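- A minimal Python sketch of the decision logic of FIG. 20 (steps S503 to S508) is shown below, using the 40 dBHL and 25 dBHL criteria mentioned above. The function name and advice strings are illustrative.

```python
# Sketch of the decision logic of FIG. 20 (steps S503-S508), using the 40 dBHL
# and 25 dBHL criteria mentioned in the text; names and messages are illustrative.
HEARING_AID_LEVEL_DBHL = 40.0
CONDUCTIVE_LOSS_THRESHOLD_DBHL = 25.0

def advise(air_conduction_dbhl: float, predicted_conductive_dbhl: float,
           confirmed_by_professional: bool) -> str:
    if air_conduction_dbhl < HEARING_AID_LEVEL_DBHL:                # S503: no problem
        return "regular follow-up observation"                      # S504
    if predicted_conductive_dbhl < CONDUCTIVE_LOSS_THRESHOLD_DBHL:  # S505: not high
        return "consider wearing a hearing aid"                     # S506
    if confirmed_by_professional:                                   # S507: already confirmed
        return "consider wearing a hearing aid"                     # S506
    return "visit a medical institution"                            # S508

print(advise(55.0, 30.0, confirmed_by_professional=False))
```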
- Figs. 21 and 22 are flow charts explaining the flow of an information processing method according to a modified example of this embodiment. As shown in Figs. 21 and 22, the information processing method according to this modified example includes multiple steps from step S601 to step S613.
- a hearing test is performed on the first group (step S601).
- 0 is input to the count i of the hearing test for the second group (step S602).
- 1 is added to the count i of the hearing test for the second group (step S603).
- Next, the i-th hearing test among the multiple hearing tests included in the second group is performed (step S604). Then, based on the result information of the hearing test of the first group and the result information of the hearing tests of the second group that have already been performed, the degree of conductive hearing loss is predicted, and the likelihood of that prediction is estimated (step S605).
- For example, a value indicating how certain the prediction of the degree of conductive hearing loss is may be estimated as the likelihood.
- For example, an estimation model that estimates the likelihood may be generated by performing machine learning, such as deep learning, using the large amount of known training data used when generating the trained model described above, and the likelihood of a newly predicted degree of conductive hearing loss may be estimated using this estimation model.
- Next, it is determined whether the count i of the hearing tests of the second group has not yet reached a predetermined value M (step S606).
- The predetermined value M may be, for example, the same as the number of hearing tests included in the second group, or may be set appropriately by the user. If the count i has not reached the predetermined value M (step S606: Yes), the process proceeds to step S607; if the count i has reached the predetermined value M (step S606: No), the process proceeds to step S608 shown in FIG. 22.
- In step S607, it is determined whether the degree of conductive hearing loss predicted in step S605 has been predicted with sufficient certainty. For example, this can be determined by comparing the likelihood estimated in step S605 with a predetermined threshold value. If the certainty of the prediction of the degree of conductive hearing loss is sufficient (step S607: Yes), the process proceeds to step S608 shown in FIG. 22. On the other hand, if the certainty is not sufficient (step S607: No), the process returns to step S603.
- When the process returns to step S603, the next ((i+1)-th) hearing test among the multiple hearing tests of the second group is performed, and the degree of conductive hearing loss is predicted using more second-group result information than before. Therefore, the accuracy of the prediction of the degree of conductive hearing loss is more likely to improve.
- steps S608 to S613 shown in FIG. 22 are carried out, but because these steps are similar to steps S502 to S508 of the information processing method according to this embodiment described with reference to FIG. 20, a description thereof will be omitted here.
- In this modified example, the number of hearing tests of the second group is increased until the degree of conductive hearing loss can be predicted with a certain degree of certainty. Therefore, according to this modified example, the hearing tests of the second group are terminated once a sufficiently certain prediction of the degree of conductive hearing loss is obtained; that is, not all of the second-group hearing tests are necessarily performed every time, and the time required for the entire test can therefore be shortened.
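- The modified flow of FIGS. 21 and 22 can be sketched in Python as below: second-group tests are added one at a time, and the loop stops when the estimated likelihood is sufficient or all M tests have been performed. The predictor and likelihood estimator are assumed to be supplied externally (for example, as trained models); all names are illustrative.

```python
# Sketch of the modified flow of FIGS. 21-22 (steps S603-S607): second-group
# tests are added one at a time until the prediction likelihood is sufficient
# or all M tests have been run. The predictor and likelihood estimator are
# passed in as functions and are assumed to exist (e.g. trained models).
def incremental_second_group(first_results, second_group_tests,
                             predict, estimate_likelihood,
                             likelihood_threshold: float):
    second_results = []
    prediction = None
    for i, run_test in enumerate(second_group_tests, start=1):    # S603/S604
        second_results.append(run_test())
        prediction = predict(first_results, second_results)       # S605
        likelihood = estimate_likelihood(first_results, second_results,
                                         prediction)
        if i == len(second_group_tests):                           # S606: i reached M
            break
        if likelihood >= likelihood_threshold:                     # S607: certain enough
            break
    return prediction
```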
- FIG. 23 is a flowchart illustrating the flow of an information processing method according to this embodiment. Note that this embodiment may be configured so that a purchase button is available instead of transitioning to the hearing aid purchase screen.
- This embodiment is configured so that a hearing test is always taken before the screen transitions to the hearing aid purchase screen. This prevents a user from purchasing a hearing aid 116 without realizing that a hearing test service exists, or from purchasing a hearing aid 116 despite being recommended in the hearing test service to visit a medical institution.
- the information processing method according to this embodiment includes multiple steps from step S701 to step S708. Note that steps other than step S706 are similar to steps S501 to S505, and steps S507 and S508 of the information processing method according to this embodiment described with reference to FIG. 20, and therefore will not be described here.
- the screen transitions to a hearing aid purchase screen (step S706), and the process ends.
- the hearing aid purchase screen allows the user to carry out the purchase procedure.
- In the above description, the determination of whether there is a hearing problem is made in step S703, but this embodiment is not limited to this.
- the determination of whether there is a hearing problem may be made between the hearing test for the first group and the hearing test for the second group.
- In this embodiment, step S706 may be reached and the screen may transition to the hearing aid purchase screen, but access to the website may then be interrupted for some reason. If the hearing test must be taken again when the subject 901 attempts to resume the purchase procedure afterwards, the subject 901 may be significantly discouraged from purchasing the hearing aid 116. Therefore, in this embodiment, this is avoided by, for example, configuring the system so that the process can skip directly to step S706 when necessary. For example, once step S706 is reached, that fact is stored; the next time the site is accessed, the stored information is checked and the procedure is resumed from step S706. In such a case, since hearing ability changes over time, an elapsed time (more specifically, an elapsed period from the start of the previous test) within which immediate resumption from step S706 is allowed may be set.
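- The resume behaviour described above can be sketched as follows, assuming a simple key-value store and an illustrative allowed period; the storage mechanism and the length of the period are assumptions, not part of the original description.

```python
import time

# Sketch of the resume logic: when step S706 (the purchase screen) is reached,
# the time is stored; on the next access the procedure may resume from S706
# only if the stored test is still recent enough.
ALLOWED_RESUME_PERIOD_S = 30 * 24 * 3600   # e.g. 30 days (illustrative)

def record_purchase_screen_reached(store: dict) -> None:
    store["s706_reached_at"] = time.time()

def may_resume_from_purchase_screen(store: dict) -> bool:
    reached_at = store.get("s706_reached_at")
    if reached_at is None:
        return False
    return (time.time() - reached_at) <= ALLOWED_RESUME_PERIOD_S
```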
- the subject 901 can perform the test himself/herself and predict the degree of conductive hearing loss, so that even in a place such as the subject's home, the subject 901 can know whether he/she should consider purchasing a hearing aid 116 or whether he/she should first visit a medical institution. As a result, this embodiment can prevent the subject 901 from missing an opportunity for early treatment of a disorder that causes hearing loss. Furthermore, this embodiment can predict the degree of conductive hearing loss of the prospective purchaser even in cases where the hearing aid 116 is purchased via the Internet 119, so that it can prevent a prospective purchaser who is not suitable for wearing a hearing aid from purchasing the hearing aid 116.
- This embodiment is an embodiment that allows for setting hearing aid parameters for a hearing aid 116.
- the functional configuration of an information processing terminal 117 according to this embodiment will be described with reference to Fig. 24.
- Fig. 24 is a block diagram of the information processing terminal 117 according to this embodiment.
- the information processing terminal 117 includes a first group hearing test sound source generator 161, a second group hearing test sound source generator 162, a test control unit 163, a conductive hearing loss degree predictor 164, and a test information storage unit 165, as in the first embodiment.
- the information processing terminal 117 according to this embodiment also includes an additional information storage unit 166, a hearing test sound output unit 171, an information output unit 172, an information input unit 173, and a communication unit 174, as in the first embodiment.
- the information processing terminal 117 includes a hearing aid parameter determiner 197 and a hearing aid parameter storage unit 198.
- each functional unit included in the information processing terminal 117 according to this embodiment will be described in order, but here, the functional units common to the first embodiment will be omitted, and only the hearing aid parameter determiner 197 and the hearing aid parameter storage unit 198 will be described.
- the information processing terminal 117 includes a first group hearing test sound source generator 161, a second group hearing test sound source generator 162, a test control unit 163, a test information storage unit 165, an additional information storage unit 166, and a communication means 174.
- the hearing aid 116 or the headphone speaker 115 includes a hearing test sound output means 171 and a hearing aid parameter storage unit 198.
- the server on the Internet 119 includes a conductive hearing loss degree prediction unit 164 and a hearing aid parameter determination unit 197.
- the hearing aid parameter determination unit 197 can obtain the degree of conductive hearing loss predicted by the conductive hearing loss degree prediction unit 164 via the test control unit 163, and determine the hearing aid parameters, which are the setting parameters of the hearing aid 116, based on the predicted degree of conductive hearing loss.
- the hearing aid parameters can be parameters used for the gain setting control and noise suppression setting control (noise suppression intensity, frequency characteristics, etc.) of the hearing aid 116. More specifically, the hearing aid parameters can be parameters for gain setting such as the fourth section 754 shown in FIG. 4.
- the widely used methods NAL-NL2 (National Acoustic Laboratories Nonlinear 2) and DSLv5 (Desired Sensation Level version 5) can be used to determine the hearing aid parameters.
- the bone conduction hearing level can be calculated from the above-mentioned formulas (1) and (2) if the air conduction hearing level (the result information of the first group hearing test) and predicted information on the degree of conductive hearing loss are available.
- the hearing aid parameter determination unit 197 may determine the strength and frequency characteristics of noise suppression using predicted information on the degree of conductive hearing loss and predicted information on the degree of sensorineural hearing loss. More specifically, for example, the strength of noise suppression is increased when the degree of sensorineural hearing loss is large. Furthermore, the hearing aid parameter determination unit 197 outputs the determined hearing aid parameters to the hearing aid parameter storage unit 198.
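- The following sketch illustrates the kind of parameter determination described above, under the assumption (based on the relation that the air conduction hearing level is the sum of the conductive and sensorineural components) that the bone conduction level can be approximated as the air conduction level minus the predicted degree of conductive hearing loss; formulas (1) and (2) themselves are not reproduced. The gain rule is a simple placeholder, not NAL-NL2 or DSLv5.

```python
# Sketch under an assumption: the bone-conduction level is approximated here
# as air conduction minus the predicted conductive loss. The gain rule below
# is a simple half-gain placeholder, not NAL-NL2 or DSLv5.
def estimate_bone_conduction(air_dbhl: dict, conductive_dbhl: float) -> dict:
    return {f: max(level - conductive_dbhl, 0.0) for f, level in air_dbhl.items()}

def simple_gain_parameters(air_dbhl: dict, conductive_dbhl: float) -> dict:
    bone = estimate_bone_conduction(air_dbhl, conductive_dbhl)
    # Placeholder rule: half-gain of the sensorineural component plus full
    # compensation of the conductive component.
    return {f: 0.5 * bone[f] + conductive_dbhl for f in air_dbhl}

print(simple_gain_parameters({500: 50.0, 1000: 55.0, 2000: 60.0}, 20.0))
```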
- The hearing aid parameter storage unit 198 stores the hearing aid parameters determined by the hearing aid parameter determination unit 197.
- the functional configuration of the information processing terminal 117 is not limited to the configuration shown in FIG. 24.
- Fig. 25 is a flowchart for explaining the flow of the information processing method according to this embodiment
- Fig. 26 is an explanatory diagram for explaining this embodiment.
- the information processing method according to this embodiment includes a plurality of steps from step S801 to step S803.
- a hearing test is performed on the first group and a hearing test is performed on the second group (step S801).
- the degree of conductive hearing loss is predicted based on the result information of the hearing test on the first group and the result information of the hearing test on the second group.
- the bone conduction hearing level is also predicted (step S802). If there is an air conduction hearing level (hearing test information of the first group) and a predicted value of the degree of conductive hearing loss, the predicted value of the bone conduction hearing level can also be calculated from the above-mentioned formulas (1) and (2).
- the hearing aid parameters, which are the setting parameters of the hearing aid 116, are determined (step S803).
- the hearing aid parameters may be determined using a unique method.
- the information processing terminal 117 performs a hearing test using the hearing aid 116 and obtains information predicting the degree of conductive hearing loss.
- the information processing terminal 117 registers the hearing test result information, additional information 783, and the information predicting the degree of conductive hearing loss in a database 299 on the Internet 119. That is, the database 299 on the Internet 119 stores various types of information. The information stored in the database 299 can be used to improve the accuracy of predicting the degree of conductive hearing loss.
- For example, by comparing the information registered in the database 299 with the training data, the difference from the training data can be understood, which can lead to improvements in the training data.
- the information sent from the information processing terminal 117 to the database 299 may include log data regarding fine-tuning of the hearing aid 116. By comparing this with log data of fine-tuning when a separately prepared bone conduction hearing test is performed, the difference from the log data in the case where the degree of conductive hearing loss is predicted can be understood, which can lead to improvements in the training data.
- Fig. 27 and Fig. 28 are explanatory diagrams for explaining a display example according to this embodiment.
- FIG. 27 is an example of a display in which the predicted value of the bone conduction hearing level is displayed on an audiogram.
- the predicted value of the bone conduction hearing level can be calculated using the air conduction hearing level (result information of the hearing test for the first group) and the predicted degree of conductive hearing loss, using the above-mentioned formulas (1) and (2).
- the audiogram is the most widely used display method for displaying hearing ability.
- the predicted value of the bone conduction hearing level can be displayed in the form of an audiogram.
- In this embodiment, the predicted bone conduction hearing level is displayed on the audiogram in such a way that it is clear that it is a predicted value.
- the bone conduction hearing level is also indicated on the margin of the audiogram as a predicted value. In this way, a person looking at the audiogram can recognize that the bone conduction hearing level shown on the audiogram is a predicted value.
- the display format is not limited to the format shown in Figure 27.
- the predicted value may be expressed as data within the device that predicted the predicted value, or as data during data communication between different devices, or as data within different devices, and is not particularly limited.
- a "prediction flag" indicating that the bone conduction hearing level is a predicted value is included, and specifically, for example, a predicted value is set to 1, and an actual value is set to 0. In this way, even if a device other than the device that predicted the bone conduction hearing level is used, it is possible to distinguish whether the value is a predicted value or an actual value.
- FIG. 28 is a display example of a predicted value of bone conduction hearing level expressed as data.
- In the above description, a prediction flag is added to the bone conduction hearing level, but in this embodiment a prediction flag may also be added to the air-bone gap, the degree of conductive hearing loss, or the degree of sensorineural hearing loss so that it is clear that the value is a predicted value. In this way, for example, a device that receives data to which a prediction flag is added can change the content of its processing depending on whether the value is an actual measurement or a prediction.
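- A minimal sketch of such a data representation with a prediction flag is shown below; the field names and the JSON serialization are illustrative assumptions.

```python
import json
from dataclasses import dataclass, asdict

# Sketch of the data representation described above: each hearing value carries
# a prediction flag (1 = predicted value, 0 = actual measurement) so that a
# receiving device can change its processing accordingly. Field names are
# illustrative.
@dataclass
class HearingValue:
    frequency_hz: int
    level_dbhl: float
    prediction_flag: int   # 1: predicted, 0: measured

record = [
    HearingValue(1000, 45.0, prediction_flag=1),   # predicted bone conduction
    HearingValue(1000, 60.0, prediction_flag=0),   # measured air conduction
]
print(json.dumps([asdict(v) for v in record]))
```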
- the subject 901 can perform the test himself/herself and predict the level of conductive hearing loss, so the hearing aid 116 can be appropriately set up even in a place such as the subject's 901 home.
- Fig. 29 and Fig. 30 are explanatory diagrams for explaining an application example according to this embodiment
- Fig. 31 is a block diagram of an external device according to this embodiment.
- the information processing terminal 117 acquires the predicted degree of conductive hearing loss and the air conduction hearing level of the subject 901, and then transmits this information to the external device 120.
- the information processing terminal 117 may determine parameters for controlling the external device 120 after acquiring the predicted degree of conductive hearing loss and the air conduction hearing level of the subject 901, and transmit the determined parameters to the external device 120.
- the external device 120 is a television device (sound device). Note that in this embodiment, the external device 120 is not limited to a television device (sound device).
- the external device 120 can receive information from the information processing terminal 117 and change the content of the processing in the external device 120. Two examples of processing performed by the external device 120 are given below.
- For example, for a subject 901 who has a severe degree of sensorineural hearing loss, the external device 120 can weaken the signal level of the channels other than the audio (speech) channel in the multi-channel signal of the video content relative to the audio channel.
- Objects may be used instead of channels.
- In a person with sensorineural hearing loss, frequency discrimination ability tends to decrease, so when sounds other than speech overlap with the speech, such a person often has more difficulty hearing the speech than a person with normal hearing.
- Some hearing aids 116 have a function of audio enhancement processing, but it is more reliable to improve hearing by preventing sounds that may interfere with hearing from being emitted from the television device.
- the external device 120 performs a process to suppress the intensity (volume) of signals other than audio output from the external device 120 based on hearing information such as predicted information of the degree of conductive hearing loss of the subject 901, air conduction hearing level, and parameters transmitted from the information processing terminal 117, etc. Also, in this application example, the external device 120 may change the content of the processing depending on whether the information received is actual measurement information or predicted information. For example, if the information is predicted, the effect of the processing may be reduced compared to when the information is actually measured.
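- The first processing example can be sketched as below: channels other than the dialogue channel are attenuated relative to the dialogue channel, and the effect is reduced when the hearing information is predicted rather than measured. The channel layout, attenuation amount and reduction factor are illustrative assumptions.

```python
import numpy as np

# Sketch of the first processing example: channels other than the dialogue
# (speech) channel of a multi-channel content signal are attenuated relative
# to the dialogue channel; the effect is reduced for predicted (rather than
# measured) hearing information. All parameter values are illustrative.
def attenuate_non_dialogue(channels: np.ndarray, dialogue_index: int,
                           attenuation_db: float, is_predicted: bool) -> np.ndarray:
    if is_predicted:
        attenuation_db *= 0.5            # weaker effect for predicted information
    gain = 10.0 ** (-attenuation_db / 20.0)
    out = channels * gain
    out[dialogue_index] = channels[dialogue_index]   # keep dialogue unchanged
    return out

mix = np.random.randn(6, 48000)          # 6 channels, 1 s at 48 kHz (dummy data)
processed = attenuate_non_dialogue(mix, dialogue_index=2,
                                   attenuation_db=6.0, is_predicted=True)
```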
- the external device 120 can suppress sudden sounds such as the sound of a door slamming or a gunshot for a subject 901 with severe sensorineural hearing loss.
- compression processing is normally performed by the hearing aid 116 due to the recruitment phenomenon, but in order to maintain a natural intonation of the voice, the attack time (more specifically, the time from the onset of the sound until the amount of compression reaches a specified value) cannot be made too short. Therefore, in order to suppress sudden sounds without shortening the attack time, it is effective to read the acoustic signal ahead.
- Figure 30 shows an example of the effect of suppressing sudden sounds and looking ahead.
- the top row of Figure 30 shows an example where a weak sound suddenly becomes a strong sound. More specifically, in this example, the sound suddenly becomes strong from the time of 20 msec.
- the second row from the top of Figure 30 shows an example where sudden sounds are suppressed by compression without look-ahead (no processing delay). Around the time of 40 msec, there is sufficient suppression, but around 20 msec, there is not much difference from the example in the top row of Figure 30.
- the bottom row of Figure 30 shows an example where 4 msec look-ahead (i.e., there is a 4 msec processing delay) is allowed. As shown in the bottom row of Figure 30, it can be seen that around the time of 24 msec, there is also suppression. Note that if the length of the look-ahead is made even longer, the amount of suppression can be increased even further.
- Look-ahead causes a processing delay, so it cannot be used aggressively in the hearing aid 116. If a processing delay occurs, the sound entering through the gap between the hearing aid 116 and the ear canal mixes with the delayed sound amplified by the hearing aid 116, causing a sense of incongruity, and the time difference between vision and hearing also causes a sense of incongruity.
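- The look-ahead suppression illustrated in FIG. 30 can be sketched as follows: the audio is delayed by the look-ahead time while the suppression gain is computed from the undelayed signal, so that suppression is already active when the sudden sound reaches the output. The 4 msec look-ahead follows the figure; the threshold and attack time constant are illustrative, and the matching video delay would equal the look-ahead delay.

```python
import numpy as np

# Sketch of look-ahead suppression of sudden sounds (cf. FIG. 30). The audio is
# delayed by the look-ahead time while the gain is computed from the undelayed
# signal, so the gain is already reduced when the loud part reaches the output.
def lookahead_suppress(x: np.ndarray, fs: int, lookahead_ms: float = 4.0,
                       threshold: float = 0.25, attack_ms: float = 8.0) -> np.ndarray:
    d = int(fs * lookahead_ms / 1000.0)                 # look-ahead delay in samples
    alpha = np.exp(-1.0 / (fs * attack_ms / 1000.0))    # attack smoothing coefficient
    env = 0.0
    gains = np.ones_like(x)
    for n in range(len(x)):
        env = alpha * env + (1.0 - alpha) * abs(x[n])   # smoothed level estimate
        gains[n] = min(1.0, threshold / env) if env > 1e-9 else 1.0
    delayed = np.concatenate([np.zeros(d), x])[:len(x)]  # apply look-ahead delay
    return delayed * gains                               # video would be delayed by d/fs

fs = 48000
t = np.arange(int(0.06 * fs)) / fs
x = np.where(t < 0.02, 0.05, 0.8) * np.sin(2 * np.pi * 1000 * t)  # jump at 20 msec
y = lookahead_suppress(x, fs)
```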
- FIG. 31 shows a block diagram of an external device 120 having a configuration for such audio-video control.
- the external device 120 mainly has an audio-video control unit 595, a hearing information storage unit 596, an audio processing unit 597, and a video delay adjustment unit 598.
- the hearing information storage unit 596 stores hearing information such as predicted information on the degree of conductive hearing loss of the subject 901, air conduction hearing level, parameters, etc., transmitted from the information processing terminal 117, etc.
- the audio/video control unit 595 controls the audio processing unit 597 based on the hearing information stored in the hearing information storage unit 596.
- the audio processing unit 597 processes, for example, an audio signal as shown in the top row of FIG. 30 to become an audio signal as shown in the bottom row of FIG. 30, and outputs it.
- the audio signal processed by the audio processing unit 597 has a delay. Therefore, the audio/video control unit 595 controls the video delay adjustment unit 598.
- the video delay adjustment unit 598 receives a video signal, adjusts the delay amount, and outputs it. The delay amount added by the video delay adjustment unit 598 is determined depending on the delay amount in the audio processing unit 597.
- The audio processing performed by the audio processing unit 597 may be processing that compensates for the hearing level of the subject 901 based on the hearing information stored in the hearing information storage unit 596, or may be processing that achieves an effect other than hearing-level compensation. For example, when a user of the hearing aid 116 listens to the sound output from the speaker of a television device through the hearing aid 116, processing that involves a delay, such as the suppression of sudden sounds, may be performed on the television device side, while processing that compensates for the hearing level may be left to the normal processing on the hearing aid 116 side. In this way, even when the hearing aid user and his or her family are watching television together, the discomfort that sudden sounds cause the hearing aid user can be suppressed while the family can still enjoy natural sound. As described above, by linking with the external device 120, the weak points of the hearing aid 116 can be compensated for.
- the functional configuration of the external device 120 is not limited to the configuration shown in FIG. 31.
- the degree of conductive hearing loss can be predicted using an air conduction receiver, i.e., air conduction sound.
- the subject 901 can be encouraged to visit a medical institution, etc., and the subject 901 can be prevented from missing an opportunity for early treatment of a disorder that causes hearing loss.
- the gain of the hearing aid 116 of the subject 901 can be appropriately set according to the hearing ability of the subject 901.
- Fig. 32 is a diagram showing a schematic configuration of the hearing aid system 1 according to an embodiment of the present disclosure
- Fig. 33 is a functional block diagram of the hearing aid 2 and the charger 3 according to an embodiment of the present disclosure
- Fig. 34 is a block diagram of the information processing terminal 40 according to an embodiment of the present disclosure.
- Fig. 35 is a block diagram of the server 90 according to an embodiment of the present disclosure.
- the hearing aid system 1 includes a pair of hearing aids 2 (left and right), a charger 3 (charging case) that stores and charges the hearing aids 2, and an information processing terminal 40 such as a smartphone that can communicate with at least one of the hearing aids 2 and the charger 3. Furthermore, the hearing aid system 1 according to the embodiment of the present disclosure includes a server 90 managed by a hearing aid sales company 291 or a hearing test service provider 292. Each device included in the hearing aid system 1 according to the embodiment of the present disclosure will be described below in order.
- the hearing aid 2 is described as a pair for both ears, but the embodiment of the present disclosure is not limited to this and may be a single-ear type worn on either the left or right ear.
- the hearing aid 2 mainly has a sound collection unit 20 (20b, 20f), a signal processing unit 21, an output unit 22, a battery 25, a connection unit 26, communication units 27, 30, a memory unit 28, and a control unit 29.
- the sound collection unit 20 includes an outer (feedforward) sound collection unit 20f that collects sounds from the outer region of the ear canal, and an inner (feedback) sound collection unit 20b that collects sounds from the inner region of the ear canal.
- an outer sound collection unit 20f that collects sounds from the outer region of the ear canal is provided.
- Each sound collection unit 20 has a microphone (hereinafter also simply referred to as a mic) 201 and an A/D (analog/digital) conversion unit 202.
- the microphone 201 collects sound, generates an analog audio signal (acoustic signal), and outputs it to the A/D conversion unit 202.
- the A/D conversion unit 202 performs digital conversion processing on the analog audio signal input from the microphone 201, and outputs the digitized audio signal to the signal processing unit 21.
- the signal processing unit 21 performs predetermined signal processing on the digital audio signal input from the sound collection unit 20 and outputs the result to the output unit 22.
- Examples of the predetermined signal processing include filtering processing that separates the audio signal into predetermined frequency bands, amplification processing that amplifies each frequency band by a predetermined amount after the filtering processing, noise reduction processing, and howling cancellation processing.
- the signal processing unit 21 can be configured, for example, with a memory and a processor having hardware such as a DSP (Digital Signal Processor).
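- As a minimal sketch of the band-splitting and amplification part of this signal processing (noise reduction and howling cancellation are omitted), the input can be separated into frequency bands and each band amplified by a band-specific gain, as below. Band edges, gains and filter order are illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Minimal sketch of the band-split/amplify part of the signal processing unit
# 21: the input is separated into a few frequency bands and each band is
# amplified by a band-specific gain before being summed again. Band edges,
# gains and filter order are illustrative.
def band_amplify(x: np.ndarray, fs: int, bands_hz, gains_db) -> np.ndarray:
    y = np.zeros_like(x)
    for (lo, hi), gain_db in zip(bands_hz, gains_db):
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        y += sosfilt(sos, x) * 10.0 ** (gain_db / 20.0)
    return y

fs = 16000
x = np.random.randn(fs)                      # 1 s of dummy input
bands = [(250, 1000), (1000, 3000), (3000, 6000)]
gains = [5.0, 15.0, 25.0]                    # more gain at higher frequencies
out = band_amplify(x, fs, bands, gains)
```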
- the output unit 22 has a D/A (digital/analog) conversion unit 221 and a receiver 222.
- the D/A conversion unit 221 performs analog conversion processing on the digital audio signal input from the signal processing unit 21 and outputs the signal to the receiver 222.
- the receiver 222 outputs an output sound (audio) corresponding to the analog audio signal input from the D/A conversion unit 221.
- the receiver 222 can be configured using, for example, a speaker, etc.
- the battery 25 supplies power to each component of the hearing aid 2.
- the battery 25 can be composed of a rechargeable secondary battery such as a lithium ion battery. Furthermore, the battery 25 can be charged by power supplied from the charger 3 via the connection part 26.
- The connection unit 26 connects to the connection section 331 of the charger 3, and can receive power and various information from the charger 3 and output various information to the charger 3.
- the connection unit 26 can be configured using multiple pins, for example.
- the communication unit 27 can communicate with the charger 3 or the information processing terminal 40 via the Internet 484 in accordance with a predetermined communication standard.
- Examples of the predetermined communication standard include Wi-Fi (registered trademark) and Bluetooth (registered trademark).
- the communication unit 27 can be configured using, for example, a communication module.
- the communication unit 30 can communicate with the other hearing aid 2 by short-range communication such as NFMI (Near Field Magnetic Induction).
- the memory unit 28 stores various information related to the hearing aid 2.
- the memory unit 28 can be configured using, for example, a RAM (Random Access Memory), a ROM (Read Only Memory), a memory card, etc.
- the memory unit 28 can store a program 281 executed by the hearing aid 2 and various data 282 used by the hearing aid 2.
- the data 282 can include the user's age, whether the user has used the hearing aid 2 before, the user's gender, etc.
- the data can include the usage time of the user's hearing aid 2, which is timed by a timekeeping unit (not shown).
- the timekeeping unit is provided inside the hearing aid 2, and can time the date and time and output the timekeeping result to the control unit 29, etc.
- the timekeeping unit can be configured using, for example, a timing generator, a timer with a timekeeping function, etc.
- the control unit 29 controls each component of the hearing aid 2.
- the control unit 29 can be configured, for example, using a memory and a processor having hardware such as a CPU (Central Processing Unit) or a DSP.
- the control unit 29 reads out the stored program 281 into the working area of the memory and executes it, and controls each component through the execution of the program by the processor.
- the hearing aid 2 may have an operation unit.
- the operation unit can receive an input of a start-up signal (trigger signal) for starting the hearing aid 2, and output the received start-up signal to the control unit 29.
- the operation unit can be configured using, for example, a push-type switch, a button, a touch panel, or the like.
- the charger 3 mainly has a display unit 31, a battery 32, a storage unit 33, a communication unit 34, a memory unit 35, and a control unit 36.
- the display unit 31 displays various states related to the hearing aid 2 under the control of the control unit 36.
- the display unit 31 can display information indicating that the hearing aid 2 is charging, and information indicating that various information is being received from the information processing terminal 40.
- the display unit 31 can be configured using, for example, a light-emitting LED (Light Emitting Diode) or the like.
- the battery 32 supplies power to the hearing aids 2 stored in the storage section 33 and to the components constituting the charger 3, via a connection section 331 provided in the storage section 33.
- the battery 32 can be constructed using a secondary battery such as a lithium ion battery.
- the storage section 33 stores the left and right hearing aids 2 separately.
- the storage section 33 is also provided with a connection section 331 that can be connected to the connection section 26 of the hearing aid 2.
- the connection section 331 connects to the connection section 26 of the hearing aid 2 and transmits power from the battery 32 and various information from the control section 36, and also receives various information from the hearing aid 2 and outputs it to the control section 36.
- the connection section 331 can be configured using multiple pins, for example.
- the communication unit 34 communicates with the information processing terminal 40 via a communication network in accordance with a specific communication standard.
- the communication unit 34 can be configured, for example, using a communication module.
- the storage unit 35 stores various programs 351 executed by the charger 3.
- the storage unit 35 can be configured using, for example, a RAM, a ROM, a flash memory, a memory card, etc.
- the control unit 36 controls each component of the charger 3. For example, when the hearing aid 2 is stored in the storage unit 33, the control unit 36 causes power to be supplied from the battery 32 via the connection unit 331.
- the control unit 36 can be configured, for example, using a memory and a processor having hardware such as a CPU or DSP.
- the control unit 36 reads out the program 351 into the working area of the memory, executes it, and controls each component through the execution of the program by the processor.
- the information processing terminal 40 mainly has an input unit 41, a communication unit 42, an output unit 43, a display unit 44, a storage unit 45, and a control unit 46.
- the input unit 41 receives various operations input from the user and outputs a signal corresponding to the received operation to the control unit 46.
- the input unit 41 can be configured using, for example, a switch, a touch panel, etc.
- the communication unit 42 communicates with the charger 3 or the hearing aid 2 via a communication network under the control of the control unit 46.
- the communication unit 42 can be configured using, for example, a communication module.
- the output unit 43 outputs a volume of a predetermined sound pressure level for each predetermined frequency band under the control of the control unit 46.
- the output unit 43 can be configured using, for example, a speaker.
- the display unit 44 displays various information related to the information processing terminal 40 and information related to the hearing aid 2.
- the display unit 44 can be configured using, for example, a liquid crystal display or an organic electroluminescent display (OLED).
- the storage unit 45 stores various information related to the information processing terminal 40.
- the storage unit 45 stores various programs 451 executed by the information processing terminal 40.
- the storage unit 45 can be configured using a recording medium such as a RAM, a ROM, a flash memory, or a memory card.
- the control unit 46 controls each component of the information processing terminal 40.
- the control unit 46 can be configured, for example, using a memory and a processor having hardware such as a CPU.
- the control unit 46 reads out a program stored in the storage unit 45 into the working area of the memory and executes it, and controls each component through the execution of the program by the processor.
- the server 90 may be configured as shown in FIG. 35. As shown in FIG. 35, the server 90 mainly includes a communication unit 91, a storage unit 95, and a control unit 96.
- the communication unit 91 communicates with the hearing aid 2 and the information processing terminal 40 via the Internet 484.
- the communication unit 91 can be configured, for example, using a communication module.
- the storage unit 95 stores various information related to the hearing aid 2.
- the storage unit 95 also stores various programs 961 executed by the server 90.
- the storage unit 95 can be configured, for example, using a recording medium such as a RAM, a ROM, a flash memory, or a memory card.
- the control unit 96 controls each component of the server 90.
- the control unit 96 can be configured, for example, using a memory and a processor having hardware such as a CPU.
- the control unit 96 reads out a program stored in the storage unit 95 into the working area of the memory and executes it, and controls each component through the execution of the program by the processor.
- the hearing aid system 1 and the functional configuration of each device included therein are not limited to the forms shown in Figures 32 to 35.
- FIG. 36 is a diagram showing an example of data utilization.
- elements in the edge area 1000 include a sound generation device 1100, a peripheral device 1200, and a vehicle 1300.
- Examples of elements in the cloud area 2000 include a server device 2100.
- Examples of elements in the operator area 3000 include an operator 3100 and a server device 3200.
- the sound generation device 1100 in the edge area 1000 is worn by the user or placed near the user so as to emit sound toward the user.
- Specific examples of the sound generation device 1100 include an earphone speaker, a headset (headphone speaker), a hearing aid, etc. More specifically, the sound generation device 1100 can be the headphone speaker 115 in FIG. 16 or the hearing aid 2 in FIG. 32.
- the peripheral device 1200 and the vehicle 1300 in the edge area 1000 are devices used together with the sound generation device 1100, and transmit signals such as content viewing sound and telephone call sound to the sound generation device 1100.
- the sound generation device 1100 outputs sound to the user in response to signals from the peripheral device 1200 or the vehicle 1300.
- a specific example of the peripheral device 1200 is a smartphone, etc.
- FIG. 37 is a diagram showing examples of data.
- Examples of data that can be acquired within the edge area 1000 include device data, usage history data, personalization data, biometric data, emotional data, application data, fitting data, and preference data. Note that "data" may also be read as "information", and may be interpreted as appropriate to the extent that no contradiction arises. Various known methods may be used to acquire the data exemplified here.
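- As an illustrative sketch only (the field names below are hypothetical and not defined in this document), these data categories could be grouped in a single record, for example:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EdgeData:
    """Hypothetical container for the data categories acquired in the edge area 1000."""
    device_data: dict = field(default_factory=dict)      # e.g. {"type": "hearing_aid", "form": "RIC"}
    usage_history: dict = field(default_factory=dict)    # e.g. {"music_exposure_min": 240}
    personalization: dict = field(default_factory=dict)  # e.g. {"hrtf_id": "user01", "earwax_type": "dry"}
    biometric: dict = field(default_factory=dict)        # e.g. {"heart_rate": 72}
    emotion: Optional[str] = None                        # e.g. "pleasant" / "unpleasant"
    application: dict = field(default_factory=dict)      # e.g. {"location": (35.6, 139.7), "age": 62}
    fitting: dict = field(default_factory=dict)          # e.g. {"gain_db_by_band": {1000: 15.0}}
    preference: dict = field(default_factory=dict)       # e.g. {"driving_playlist": "jazz"}

# Example: a record that could be sent from the edge area to the server device 2100.
record = EdgeData(device_data={"type": "hearing_aid", "form": "RIC"},
                  usage_history={"music_exposure_min": 240})
```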
- the device data is data related to the sound generation device 1100, and includes, for example, type data of the sound generation device 1100, specifically, data specifying whether the sound generation device 1100 is an earphone, a headphone, a TWS (True Wireless Stereo) device, or a hearing aid (CIC (Completely-In-The-Canal), ITE (In-The-Ear), RIC (Receiver-In-The-Canal), etc.).
- the usage history data is usage history data of the sound generation device 1100, and includes, for example, data such as the amount of music exposure, the continuous use time of the hearing aid, and the content viewing history (viewing time, etc.).
- the usage history data may also include the usage time and number of uses of functions such as the transmission of the speech flag in the embodiment described above.
- the usage history data can be used for safe listening, for using a TWS device as a hearing aid, for notifying the user when the wax guard (earwax prevention filter) should be replaced, etc.
- the personalization data is data related to the user of the sound generation device 1100, and includes, for example, the user's personal head-related transfer function (HRTF), air conduction hearing level, earwax type, etc. Data such as hearing ability may also be included in the personalization data.
- the biometric data is the biometric data of the user of the sound generation device 1100, and includes, for example, data on sweating, blood pressure, blood flow, heart rate, pulse rate, body temperature, brain waves, breathing, and myoelectric potential.
- Emotional data is data that indicates the emotions of the user of the sound generation device 1100, and includes, for example, data indicating pleasure, discomfort, etc.
- Application data is data used in various applications, and includes, for example, the location of the user of the sound generation device 1100 (which may be the location of the sound generation device 1100 itself), user attribute information such as schedule, age, and gender, as well as data on weather, atmospheric pressure, temperature, etc.
- the location data can be used, for example, to search for a lost sound generation device 1100.
- the fitting data may include, for example, adjustment parameters for the hearing aid 2 or the headphone speaker 115 used by the user, and the hearing aid gain for each frequency band that is set based on the user's hearing test results (audiogram), etc.
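- As a rough illustration of deriving a per-band hearing aid gain from an audiogram, the sketch below applies a simple half-gain rule; the actual prescription used for the hearing aid 2 is not specified here, and the function and values are hypothetical.

```python
# Minimal sketch: derive a per-band gain from an audiogram using a simple
# half-gain rule (gain = hearing level / 2). This is only an illustration;
# real fittings use prescriptive formulas such as NAL-NL2 or DSLv5.
def gain_from_audiogram(audiogram_db_hl: dict) -> dict:
    """audiogram_db_hl maps frequency in Hz to hearing level in dB HL."""
    return {freq: max(0.0, hl / 2.0) for freq, hl in audiogram_db_hl.items()}

# Hypothetical audiogram (dB HL per frequency band).
audiogram = {250: 20, 500: 30, 1000: 40, 2000: 50, 4000: 60}
print(gain_from_audiogram(audiogram))  # {250: 10.0, 500: 15.0, ...}
```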
- Preference data is data related to the user's preferences, including, for example, preferences for music to listen to while driving.
- data on the communication status, data on the charging status of the sound generation device 1100, etc. may also be acquired.
- some of the processing in the edge area 1000 may be executed by the cloud area 2000. By sharing the processing, the processing burden on the edge area 1000 is reduced.
- data such as that described above is acquired within the edge area 1000 and transmitted from the sound generation device 1100, the peripheral device 1200, or the vehicle 1300 to the server device 2100 in the cloud area 2000.
- the server device 2100 stores (saves, accumulates, etc.) the received data.
- the operator 3100 in the operator area 3000 uses the server device 3200 to obtain data from the server device 2100 in the cloud area 2000. The operator 3100 can then utilize the data.
- there may be various operators 3100. Specific examples of operators 3100 include hearing aid stores, hearing aid manufacturers, content production companies, distribution operators that provide music streaming services, etc.; to distinguish between them, they are illustrated as operators 3100-A, 3100-B, and 3100-C.
- the corresponding server devices 3200 are illustrated as server devices 3200-A, 3200-B, and 3200-C.
- Various data is provided to these various operators 3100, promoting data utilization. Data may be provided to the operators 3100, for example, on a subscription or recurring-payment basis.
- Data can also be provided from the cloud area 2000 to the edge area 1000.
- data for feedback, revision, etc. of learning data is prepared by an administrator of the server device 2100 in the cloud area 2000.
- the prepared data is transmitted from the server device 2100 to the sound generation device 1100, the peripheral device 1200, or the vehicle 1300 in the edge area 1000.
- when certain conditions are met, some kind of incentive (a privilege such as a premium service) may be provided to the user.
- an example of such a condition is that at least some of the devices among the sound generation device 1100, the peripheral device 1200, and the vehicle 1300 are devices provided by the same operator.
- when the incentive can be supplied electronically (such as an electronic coupon), the incentive may be transmitted from the server device 2100 to the sound generation device 1100, the peripheral device 1200, or the vehicle 1300.
- the sound generation device 1100 may cooperate with other devices using a peripheral device 1200, such as a smartphone, as a hub. An example will be described with reference to FIG. 38.
- FIG. 38 is a diagram showing an example of collaboration with other devices.
- the edge area 1000, cloud area 2000, and business area 3000 are connected by a network 4000 and a network 5000.
- a smartphone is exemplified as a peripheral device 1200 in the edge area 1000, and other devices 1400 are also exemplified as elements in the edge area 1000.
- the peripheral device 1200 can communicate with both the sound generation device 1100 and the other device 1400.
- the communication method is not particularly limited, but for example, Bluetooth LDAC or the previously mentioned Bluetooth LE Audio may be used.
- the communication between the peripheral device 1200 and the other device 1400 may be multicast communication.
- An example of multicast communication is Auracast (registered trademark), etc.
- the other device 1400 is used in conjunction with the sound generation device 1100 via the peripheral device 1200.
- Specific examples of the other device 1400 include a television (hereinafter referred to as a TV), a personal computer (PC), an HMD (Head Mounted Display), a robot, a smart speaker, a gaming device, etc.
- An incentive may also be provided to the user if the sound generation device 1100, the peripheral device 1200, and the other device 1400 meet certain conditions (e.g., at least some of them are provided by the same operator).
- with the peripheral device 1200 as a hub, the sound generation device 1100 and the other device 1400 can work together.
- this cooperation may be performed using various data stored in the server device 2100 in the cloud area 2000.
- for example, the sound generation device 1100 and the other device 1400 share information such as the user's fitting data, viewing time, and hearing ability, and thereby adjust the volume of each device in cooperation with each other.
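- A minimal sketch of such cooperative volume adjustment is shown below; the device objects, the 0-100 volume scale, and the scaling factor are hypothetical, since the actual protocol between the devices is not specified in the text.

```python
# Minimal sketch: the hub (peripheral device 1200) pushes a common target volume
# to the sound generation device 1100 and the other device 1400, derived from
# shared hearing-ability data. All names and values are illustrative only.
class Device:
    def __init__(self, name: str):
        self.name = name
        self.volume = 50  # 0-100 scale

    def set_volume(self, value: int) -> None:
        self.volume = max(0, min(100, value))
        print(f"{self.name}: volume -> {self.volume}")

def coordinate_volume(devices: list, hearing_loss_db: float, base_volume: int = 50) -> None:
    """Raise the shared volume target in proportion to the user's hearing loss."""
    target = base_volume + int(hearing_loss_db * 0.5)  # illustrative scaling only
    for d in devices:
        d.set_volume(target)

coordinate_volume([Device("sound generation device 1100"), Device("other device 1400")],
                  hearing_loss_db=30.0)
```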
- for example, when the user is using the hearing aid 2 (HA: Hearing Aid) or a PSAP (Personal Sound Amplification Product), the settings of the other devices may be automatically changed from the settings normally intended for people with normal hearing to settings suitable for a hearing aid user.
- whether or not the user is using the hearing aid 2 may be determined by having the hearing aid 2 automatically send information (e.g., wearing detection information) indicating that it is worn to a device such as a television or PC with which it is paired when the user puts it on, or the determination may be triggered when the hearing aid user approaches another device such as the target television or PC.
- alternatively, whether the user is a hearing aid user may be determined by capturing an image of the user's face with a camera or the like provided on another device such as a television or PC, or may be determined by a method other than the above.
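- The trigger-based settings change described above might be handled as in the following sketch; the event names and settings fields are hypothetical.

```python
# Minimal sketch: switch a paired TV/PC to hearing-aid-friendly settings when a
# wearing-detection event (or a proximity event) is received. Event names and
# settings fields are hypothetical placeholders.
def on_edge_event(event: dict, device_settings: dict) -> dict:
    hearing_aid_in_use = (
        event.get("type") == "wearing_detected"
        or (event.get("type") == "proximity" and event.get("user_profile") == "hearing_aid_user")
    )
    if hearing_aid_in_use:
        device_settings.update({
            "speech_enhancement": True,        # clearer dialogue for hearing aid users
            "dynamic_range_compression": True,
            "subtitle_default": True,
        })
    return device_settings

tv_settings = {"speech_enhancement": False, "dynamic_range_compression": False, "subtitle_default": False}
print(on_edge_event({"type": "wearing_detected"}, tv_settings))
```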
- the hearing aid 2, which is the sound generation device 1100, can function as an earphone by linking with the other device 1400.
- conversely, when the other device 1400 has a microphone that collects ambient sound, the earphone that is the sound generation device 1100 can function like the hearing aid 2.
- in this way, the hearing aid function can be used in a style (appearance, etc.) as if the user were simply listening to music.
- Data on the user's listening history may also be shared. Listening for long periods of time can pose a risk of future hearing loss. To prevent listening times from becoming too long, a notification may be sent to the user; for example, such a notification may be sent when the listening time exceeds a predetermined threshold (safe listening). The notification may be sent by any device within the edge area 1000.
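- A minimal sketch of the safe-listening notification logic, assuming a hypothetical daily listening-time counter and threshold:

```python
# Minimal sketch: notify the user when accumulated listening time exceeds a
# threshold (safe listening). The 80-minute threshold and notify() behaviour
# are placeholders, not values given in the document.
SAFE_LISTENING_THRESHOLD_MIN = 80

def check_safe_listening(listening_minutes_today: float, notify=print) -> bool:
    if listening_minutes_today > SAFE_LISTENING_THRESHOLD_MIN:
        notify("You have been listening for a long time today. Consider taking a break.")
        return True
    return False

check_safe_listening(95)   # triggers the notification
check_safe_listening(30)   # no notification
```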
- At least some of the devices used in the edge area 1000 may be provided by different operators.
- Information regarding the device settings of each operator may be transmitted from the server device 3200 in the operator area 3000 to the server device 2100 in the cloud area 2000 and stored in the server device 2100. Using such information, it may be possible for devices provided by different operators to work together.
- FIG. 39 is a diagram showing an example of usage transition.
- When the user has normal hearing, for example while the user is a child and for a while after becoming an adult, the sound generation device 1100 is used as headphones or earphones (headphones/TWS). In addition to the safe listening mentioned above, the sound generation device 1100 adjusts the equalizer, performs processing according to the user's behavioral characteristics, current location, and external environment (for example, switching to the most appropriate noise canceling mode depending on whether the user is in a restaurant or in a vehicle), collects logs of the music played, etc. Communication between devices using Auracast is also used.
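- As an illustration of the environment-dependent processing mentioned above, the following sketch maps a detected context to a noise-canceling mode; the context labels and mode names are hypothetical.

```python
# Minimal sketch: choose a noise-canceling mode from the detected context.
# Context labels and mode names are illustrative only.
NC_MODE_BY_CONTEXT = {
    "restaurant": "ambient_aware",      # let conversation through
    "vehicle": "full_noise_canceling",
    "home": "off",
}

def select_nc_mode(context: str) -> str:
    return NC_MODE_BY_CONTEXT.get(context, "adaptive")

for ctx in ("restaurant", "vehicle", "office"):
    print(ctx, "->", select_nc_mode(ctx))
```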
- subsequently, the hearing aid function of the sound generation device 1100 begins to be used.
- for example, the sound generation device 1100 is used as an OTC hearing aid.
- the sound generation device 1100 may also be used as a hearing aid.
- OTC hearing aids are hearing aids sold at stores without the intervention of a specialist, and are convenient in that they can be purchased without undergoing a hearing test or consulting a specialist such as an audiologist. The user may perform operations specific to hearing aids, such as fitting, by themselves.
- when the sound generation device 1100 is used as an OTC hearing aid or a hearing aid, hearing tests are performed and the hearing aid function is turned on. For example, functions such as sending the speech flag in the embodiment described above may also be used.
- in addition, various information related to hearing is collected, and fitting, sound environment adaptation, remote support, and even transcription are performed.
- the above-described embodiment of the present disclosure may include, for example, an information processing method executed by the information processing device or information processing system described above, a program for causing the information processing device (computer) to function, and a non-transitory tangible medium on which the program is recorded.
- the program may also be distributed via a communication line (including wireless communication) such as the Internet.
- each step in the information processing method of the embodiment of the present disclosure described above does not necessarily have to be processed in the order described.
- each step may be processed in a different order as appropriate.
- each step may be processed partially in parallel or individually instead of being processed in chronological order.
- each step does not necessarily have to be processed in the manner described, and may be processed in another manner by another functional unit, for example.
- each component of each device shown in the figure is a functional concept, and does not necessarily have to be physically configured as shown in the figure.
- the specific form of distribution and integration of each device is not limited to that shown in the figure, and all or part of them can be functionally or physically distributed and integrated in any unit depending on various loads, usage conditions, etc.
- the present technology can also be configured as follows.
- (1) An information processing device comprising a prediction unit that predicts the degree of conductive hearing loss based on the test results of first and second hearing tests, which are hearing tests using air-conducted sound and which include mutually different test contents.
- (2) The information processing device according to (1) above, wherein the test results of the first hearing test include an air conduction hearing threshold, and the test results of the second hearing test include test results for the inner ear and/or the posterior labyrinth.
- (3) The information processing device according to (1) or (2) above, wherein the prediction unit predicts the degree of the conductive hearing loss using a statistical method.
- (4) The information processing device according to (1) or (2) above, wherein the prediction unit predicts the degree of the conductive hearing loss using a trained model.
- (5) The information processing device according to any one of (1) to (4) above, wherein the first hearing test is an air conduction hearing test and/or self-recorded audiometry using intermittent sounds.
- (6) The information processing device according to any one of (1) to (5) above, wherein the second hearing test includes at least one test selected from the group consisting of self-recorded audiometry using continuous sounds, the SISI test, the ABLB test, the DL test, the TD test, a speech recognition threshold test, a maximum speech intelligibility test, a distorted speech hearing test, an interaural separation function test, and a directional hearing test.
- (7) The information processing device according to any one of (1) to (4) above, wherein the first hearing test is self-recorded audiometry using intermittent sounds, and the second hearing test is self-recorded audiometry using continuous sounds.
- (8) The information processing device according to any one of (1) to (5) above, wherein the prediction unit predicts the degree of the conductive hearing loss based on attribute information of a subject.
- (9) The information processing device according to any one of (1) to (8) above, further comprising a sound source generation unit that generates the air-conducted sound used in the first and second hearing tests.
- (10) The information processing device according to any one of (1) to (9) above, further comprising a correction unit that acquires output characteristic information of a sound output device that outputs the generated air-conducted sound, and corrects the generated air-conducted sound based on the acquired output characteristic information.
- (11) The information processing device according to (10) above, wherein the correction unit acquires the output characteristic information based on an image of the sound output device.
- (12) The information processing device according to any one of (1) to (11) above, further comprising an output unit that outputs the test results of the first hearing test, the test results of the second hearing test, and the predicted degree of the conductive hearing loss.
- (13) The information processing device according to any one of (1) to (12) above, further comprising an output unit that compares the predicted degree of conductive hearing loss with a predetermined threshold and, based on the comparison result, outputs information for encouraging a subject to visit a medical institution.
- (14) The information processing device according to any one of (1) to (12) above, further comprising an output unit that compares the predicted degree of conductive hearing loss with a predetermined threshold and, based on the comparison result, outputs information for a subject to purchase a hearing aid.
- (15) The information processing device according to any one of (1) to (14) above, further comprising a parameter determination unit that determines setting parameters of a hearing aid based on the predicted degree of the conductive hearing loss.
- (16) The information processing device according to (15) above, wherein the setting parameters are parameters for controlling gain settings and/or noise suppression settings of the hearing aid.
- (17) The information processing device according to any one of (1) to (14) above, further comprising a parameter determination unit that determines setting parameters of an acoustic device external to the information processing device based on the predicted degree of the conductive hearing loss.
- (18) The information processing device according to (17) above, wherein the setting parameters are parameters for adding a delay to the sound output from the acoustic device or for controlling the volume of the sound output from the acoustic device.
- (19) An information processing method comprising an information processing device predicting the degree of conductive hearing loss based on the test results of first and second hearing tests, which are hearing tests using air-conducted sound and which include mutually different test contents.
- (20) A program that causes a computer to execute a function of predicting the degree of conductive hearing loss based on the test results of first and second hearing tests, which are hearing tests using air-conducted sound and which include mutually different test contents.
- (21) A method for generating a trained model, comprising: inputting, as input data into a learning device, the test results of first and second hearing tests, which are hearing tests using air-conducted sound and which include mutually different test contents, and inputting, as teacher data, the degree of conductive hearing loss corresponding to the test results of the first and second hearing tests; and causing the learning device to generate a trained model for predicting the degree of the conductive hearing loss based on the test results of the first and second hearing tests.
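- As a minimal sketch of the trained-model generation described in (21), the code below fits a linear regressor mapping the two test results to a degree of conductive hearing loss; the use of scikit-learn, the two-feature layout (one summary value per test), and the numbers are assumptions for illustration only.

```python
# Minimal sketch of (21): input data are the first and second hearing test
# results, teacher data are the corresponding degrees of conductive hearing
# loss; the fitted regressor plays the role of the "learning device".
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder training data: [first test result (dB HL), second test result (dB HL)].
X = np.array([[30, 10], [50, 20], [40, 35], [60, 25], [25, 20]], dtype=float)
# Placeholder teacher data: degree of conductive hearing loss (dB).
y = np.array([20, 30, 5, 35, 5], dtype=float)

model = LinearRegression().fit(X, y)        # generate the trained model
prediction = model.predict([[45.0, 15.0]])  # predicted degree for new test results
print(f"predicted conductive hearing loss degree: {prediction[0]:.1f} dB")
```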
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- Pathology (AREA)
- Animal Behavior & Ethology (AREA)
- Molecular Biology (AREA)
- Veterinary Medicine (AREA)
- Surgery (AREA)
- Biophysics (AREA)
- Heart & Thoracic Surgery (AREA)
- Otolaryngology (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Artificial Intelligence (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Physiology (AREA)
- Multimedia (AREA)
- Psychiatry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- Fuzzy Systems (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Neurosurgery (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Testing Or Calibration Of Command Recording Devices (AREA)
Abstract
Description
1. Background leading to the creation of the embodiments of the present disclosure
2. First embodiment
2.1 Overview
2.2 Configuration
2.3 Information processing method
2.4 Application examples
3. Second embodiment
3.1 Overview
3.2 Configuration
3.3 Information processing method
4. Third embodiment
5. Fourth embodiment
5.1 Configuration
5.2 Information processing method
5.3 Display examples
5.4 Application examples
6. Summary
7. Overview of the hearing aid system
8. Examples of data utilization
9. Examples of cooperation with other devices
10. Examples of usage transition
11. Supplementary notes
First, before describing the embodiments of the present disclosure, the background that led the present inventors to create the embodiments of the present disclosure will be described.
<2.1 Overview>
First, an overview of the first embodiment of the present disclosure will be described with reference to FIGS. 10 and 11. FIG. 10 is an explanatory diagram for describing the overview of the present embodiment and, in detail, shows the ranges of the affected sites covered by the first and second groups of hearing tests in the present embodiment. FIG. 11 is an explanatory diagram for describing the learning data according to the present embodiment.
In the present embodiment, the degree of conductive hearing loss as described above can be predicted using a combination of the information processing terminal 117 and the hearing aid 116 (or the headphone speaker 115). The functional configuration of the information processing terminal 117 according to the present embodiment will be described below with reference to FIGS. 12 and 13. FIG. 12 is a block diagram of the information processing terminal 117 according to the present embodiment, and FIG. 13 is an explanatory diagram for describing the conductive hearing loss prediction unit according to the present embodiment.
The first-group hearing test sound source generation unit 161 generates the sound source of the test sounds for the first group of hearing tests under the control of the test control unit 163 described later. The first-group hearing test sound source generation unit 161 then outputs the generated sound source to the hearing test sound output means 171. In the present embodiment, when the first group of hearing tests is a combination of a plurality of tests, the generation and output are repeated, for example, as many times as the number of types of tests.
The second-group hearing test sound source generation unit 162 generates the sound source of the test sounds for the second group of hearing tests under the control of the test control unit 163. The second-group hearing test sound source generation unit 162 then outputs the generated sound source to the hearing test sound output means 171. In the present embodiment, when the second group of hearing tests is a combination of a plurality of tests, the generation and output are repeated, for example, as many times as the number of types of tests.
The test control unit 163 controls the sound source generation of the first-group hearing test sound source generation unit 161 and the second-group hearing test sound source generation unit 162 in accordance with input information from the information input means 173 described later, and performs the hearing tests in accordance with the rules. In order to make the tests proceed smoothly, the test control unit 163 may control the information output means 172 described later to guide the subject 901, for example by conveying the procedure to the subject 901.
The conductive hearing loss degree prediction unit 164 can receive, via the test control unit 163, the results of the first and second groups of hearing tests stored in the test information storage unit 165 described later. The conductive hearing loss degree prediction unit 164 can predict the degree of conductive hearing loss based on the hearing test results. Furthermore, the conductive hearing loss degree prediction unit 164 may receive, via the test control unit 163, the additional information 783 stored in the additional information storage unit 166 described later and use it for predicting the degree of conductive hearing loss. The conductive hearing loss degree prediction unit 164 can also output prediction information on the predicted degree of conductive hearing loss to the information output means 172 via the test control unit 163.
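As a minimal sketch (not the actual implementation), the prediction performed by the conductive hearing loss degree prediction unit 164 could be a linear combination of the two test results and optional attribute information, using coefficients and a bias value such as those mentioned for the communication means 174 below; all numerical values here are hypothetical.

```python
# Minimal sketch of a linear prediction as it might be performed by the
# conductive hearing loss degree prediction unit 164. Coefficient and bias
# values are hypothetical placeholders, not values from the document.
def predict_conductive_loss(first_result_db: float,
                            second_result_db: float,
                            age: float = 0.0,
                            coeffs=(0.6, -0.4, 0.05),
                            bias: float = 2.0) -> float:
    """Return a predicted degree of conductive hearing loss in dB (clamped at 0)."""
    w1, w2, w3 = coeffs
    degree = w1 * first_result_db + w2 * second_result_db + w3 * age + bias
    return max(0.0, degree)

# Example: first-group result 45 dB HL, second-group result 15 dB HL, age 70.
print(predict_conductive_loss(45.0, 15.0, age=70.0))
```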
The test information storage unit 165 stores the result information of the first group of hearing tests and the result information of the second group of hearing tests under the control of the test control unit 163.
The additional information storage unit 166 stores, under the control of the test control unit 163, the additional information 783 input via the information input means 173 (specifically, a keyboard, a touch panel, an external information processing terminal, or the like).
The hearing test sound output means 171 can output the sound sources generated by the first-group hearing test sound source generation unit 161 and the second-group hearing test sound source generation unit 162 toward the subject 901 and execute the tests. The hearing test sound output means 171 can be, for example, the receiver of the hearing aid 116.
The information output means 172 can display, for example, the result information of the first group of hearing tests, the result information of the second group of hearing tests, information such as the predicted degree of conductive hearing loss, or information derived therefrom. For example, the information output means 172 can be the screen of the information processing terminal 117 or a display device wirelessly connected to the information processing terminal 117.
Responses from the subject 901, for example, are input to the information input means 173. Specifically, in the case of an air conduction hearing test, the subject 901 keeps pressing the response button 914 while the sound is audible; in this case, the subject 901 uses the response button 914 as the information input means 173. In the present embodiment, the information input means 173 may be such a response button 914, or may be a touch panel superimposed on the screen of the information processing terminal 117 or other input means.
The communication means 174 can receive additional information 783 such as age and gender from an external device and transmit it to the additional information storage unit 166 via the test control unit 163. Furthermore, the communication means 174 can receive the coefficients, bias values, trained models, and the like used by the conductive hearing loss degree prediction unit 164 from an external device and transmit them to the conductive hearing loss degree prediction unit 164. In the present embodiment, this makes it possible to constantly update to more accurate coefficients, bias values, and trained models. Furthermore, the communication means 174 can transmit the prediction information on the degree of conductive hearing loss predicted by the conductive hearing loss degree prediction unit 164 to an external device.
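Updating the coefficients, bias values, or trained model received through the communication means 174 could be handled as in the following sketch; the payload keys are hypothetical.

```python
# Minimal sketch: replace the predictor's parameters when an update is received
# from an external device via the communication means 174. The payload keys
# ("coeffs", "bias") are hypothetical placeholders.
class ConductiveLossPredictor:
    def __init__(self, coeffs=(0.6, -0.4), bias=2.0):
        self.coeffs = coeffs
        self.bias = bias

    def update_parameters(self, payload: dict) -> None:
        """Apply parameters received from an external device."""
        self.coeffs = tuple(payload.get("coeffs", self.coeffs))
        self.bias = float(payload.get("bias", self.bias))

    def predict(self, first_db: float, second_db: float) -> float:
        w1, w2 = self.coeffs
        return max(0.0, w1 * first_db + w2 * second_db + self.bias)

predictor = ConductiveLossPredictor()
predictor.update_parameters({"coeffs": [0.55, -0.35], "bias": 1.5})  # newer, more accurate values
print(predictor.predict(45.0, 15.0))
```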
Next, the flow of the information processing method according to the present embodiment will be described with reference to FIGS. 14 and 15. FIGS. 14 and 15 are flowcharts for describing the flow of the information processing method according to the present embodiment.
Next, an application example of the present embodiment will be described with reference to FIG. 16. FIG. 16 is an explanatory diagram for describing an example of a hearing test service in the present embodiment. As described above, when considering the purchase of the hearing aid 116, it is preferable to first perform a hearing test in order to grasp the degree of hearing ability and, furthermore, to determine whether a medical institution or the like should be visited before wearing the hearing aid 116.
<3.1 Overview>
Next, an overview of the second embodiment of the present disclosure will be described with reference to FIG. 17. FIG. 17 is an explanatory diagram for describing the overview of the present embodiment. In the present embodiment, in the example shown in FIG. 16 described above, the characteristic information of the headphones (sound output device) 115 to be used can be acquired based on an image of the headphones 115.
Next, the functional configuration of the information processing terminal 117 according to the present embodiment will be described with reference to FIG. 18. FIG. 18 is a block diagram of the information processing terminal 117 according to the present embodiment.
The level/frequency characteristic correction information storage unit 189 stores images of various headphones 115 in association with the model numbers of the headphones 115. These images are used by the level/frequency characteristic correction unit 190 to identify the model number of the headphones 115 using image recognition technology. The level/frequency characteristic correction information storage unit 189 also stores characteristic information of various headphones 115. The characteristic information is used by the level/frequency characteristic correction unit 190 when performing correction. The information stored in the level/frequency characteristic correction information storage unit 189 may be downloaded from the database 299 via the Internet 119 or a line not shown. In the present embodiment, the level/frequency characteristic correction information storage unit 189 is not limited to storing images of the headphones 115, and may store identification signals or the like output from the headphones 115. In this case, by comparing the identification signal or the like stored in the level/frequency characteristic correction information storage unit 189 with the identification signal or the like acquired from the headphones 115, the model number and specification information of the headphones 115 can be identified.
The level/frequency characteristic correction unit 190 can correct the sound sources of the test sounds generated by the first-group hearing test sound source generation unit 161 and the second-group hearing test sound source generation unit 162 based on the characteristic information of the headphones 115 stored in the level/frequency characteristic correction information storage unit 189. Furthermore, the level/frequency characteristic correction unit 190 can output the corrected test sound source to the hearing test sound output means 171.
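The correction performed by the level/frequency characteristic correction unit 190 could, for example, subtract the headphone's deviation from a flat response at each test frequency, as in the sketch below; the model name and response values are hypothetical placeholders.

```python
# Minimal sketch: correct the requested output level of a test tone using the
# stored frequency response of the identified headphones 115. The response
# values (deviation from flat, in dB) are hypothetical.
HEADPHONE_RESPONSE_DB = {"MODEL-X": {250: +1.0, 500: 0.0, 1000: -0.5, 2000: -2.0, 4000: -4.0}}

def corrected_level(model: str, freq_hz: int, target_level_db: float) -> float:
    """Compensate the drive level so the ear receives the intended test level."""
    deviation = HEADPHONE_RESPONSE_DB.get(model, {}).get(freq_hz, 0.0)
    return target_level_db - deviation  # boost where the headphone is weak

print(corrected_level("MODEL-X", 4000, 40.0))  # 44.0 dB drive for a 40 dB target
```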
Next, the flow of the information processing method according to the present embodiment will be described with reference to FIG. 19. FIG. 19 is a flowchart for describing the flow of the information processing method according to the present embodiment.
Next, an overview of the third embodiment of the present disclosure will be described with reference to FIG. 20. FIG. 20 is a flowchart for describing the flow of the information processing method according to the present embodiment. The present embodiment is an application example for determining the necessity of visiting a medical institution before wearing a hearing aid. In the present embodiment, it is mainly assumed that the subject 901 performs the test by himself or herself.
<5.1 Configuration>
Next, an overview of the fourth embodiment of the present disclosure will be described. The present embodiment is an embodiment in which hearing aid parameters for the hearing aid 116 can be set. First, the functional configuration of the information processing terminal 117 according to the present embodiment will be described with reference to FIG. 24. FIG. 24 is a block diagram of the information processing terminal 117 according to the present embodiment.
The hearing aid parameter determination unit 197 can acquire, via the test control unit 163, the prediction information on the degree of conductive hearing loss predicted by the conductive hearing loss degree prediction unit 164, and can determine hearing aid parameters, which are setting parameters of the hearing aid 116, based on the predicted degree of conductive hearing loss. The hearing aid parameters can be parameters used for gain setting control and noise suppression setting control (noise suppression strength, frequency characteristics, etc.) of the hearing aid 116. More specifically, the hearing aid parameters may be gain setting parameters such as those in the fourth section 754 shown in FIG. 4. For determining the hearing aid parameters, widely used methods such as NAL-NL2 (National Acoustic Laboratories Nonlinear 2) or DSLv5 (Desired Sensation Level version 5) can be used. These methods use the air conduction hearing level and the bone conduction hearing level as input data. The bone conduction hearing level can be calculated from Equations (1) and (2) described above if the air conduction hearing level (the result information of the first group of hearing tests) and the prediction information on the degree of conductive hearing loss are available.
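For context, a simplified reading of the relationship described above is that the air-bone gap corresponds to the conductive component, so a bone conduction hearing level can be estimated by subtracting the predicted degree of conductive hearing loss from the air conduction hearing level. The sketch below shows this simplified calculation; it is an illustration of the idea, not a reproduction of Equations (1) and (2), which are defined earlier in the document.

```python
# Simplified sketch: estimate the bone conduction hearing level per frequency
# from the air conduction level and the predicted degree of conductive hearing
# loss (treated here as the air-bone gap).
def estimate_bone_conduction(air_conduction_db: dict, predicted_conductive_loss_db: dict) -> dict:
    return {f: max(0.0, ac - predicted_conductive_loss_db.get(f, 0.0))
            for f, ac in air_conduction_db.items()}

air = {500: 50, 1000: 55, 2000: 60}         # result of the first group of hearing tests (dB HL)
conductive = {500: 20, 1000: 15, 2000: 10}  # predicted degree of conductive hearing loss (dB)
print(estimate_bone_conduction(air, conductive))  # inputs for NAL-NL2 / DSLv5
```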
The hearing aid parameter storage unit 198 stores the hearing aid parameters determined by the hearing aid parameter determination unit 197.
Next, the flow of the information processing method according to the present embodiment will be described with reference to FIGS. 25 and 26. FIG. 25 is a flowchart for describing the flow of the information processing method according to the present embodiment, and FIG. 26 is an explanatory diagram for describing the present embodiment. As shown in FIG. 25, the information processing method according to the present embodiment includes a plurality of steps from step S801 to step S803.
Next, display examples in the present embodiment will be described with reference to FIGS. 27 and 28. FIGS. 27 and 28 are explanatory diagrams for describing the display examples according to the present embodiment.
Application examples in which hearing information is used in external devices other than the hearing aid 116 will be described with reference to FIGS. 29 to 31. FIGS. 29 and 30 are explanatory diagrams for describing the application examples according to the present embodiment, and FIG. 31 is a block diagram of an external device according to the present embodiment.
As described above, in the embodiments of the present disclosure, the degree of conductive hearing loss can be predicted using an air conduction receiver, that is, air-conducted sound. As a result, according to the embodiments of the present disclosure, the subject 901 can be encouraged to visit a medical institution or the like, and the subject 901 can be prevented from missing an opportunity for early treatment of a disorder that causes hearing loss. Furthermore, according to the embodiments of the present disclosure, the gain of the hearing aid 116 of the subject 901 can be set appropriately according to the hearing ability of the subject 901.
An overview of the hearing aid system 1 according to an embodiment of the present disclosure will be described with reference to FIGS. 32 to 35. FIG. 32 is a diagram showing a schematic configuration of the hearing aid system 1 according to the embodiment of the present disclosure, FIG. 33 is a functional block diagram of the hearing aid 2 and the charger 3 according to the embodiment of the present disclosure, and FIG. 34 is a block diagram of the information processing terminal 40 according to the embodiment of the present disclosure. FIG. 35 is a block diagram of the server 90 according to the embodiment of the present disclosure.
Data obtained in connection with the use of the hearing aid device may be utilized in various ways. An example will be described with reference to FIG. 36.
Within the edge area 1000, the sound generation device 1100 may cooperate with other devices using a peripheral device 1200 such as a smartphone as a hub. An example will be described with reference to FIG. 38.
The usage of the sound generation device 1100 may transition in accordance with various circumstances, including the user's fitting data, viewing time, hearing ability, and the like as described above. An example will be described with reference to FIG. 39.
The embodiments of the present disclosure described above may include, for example, an information processing method executed by the information processing device or information processing system described above, a program for causing the information processing device (computer) to function, and a non-transitory tangible medium on which the program is recorded. The program may also be distributed via a communication line (including wireless communication) such as the Internet.
2, 116 Hearing aid
3 Charger
20b, 20f Sound collection unit
21 Signal processing unit
22 Output unit
25, 32 Battery
26, 331 Connection section
27, 30, 34, 42, 91 Communication unit
28, 35, 45, 95 Storage unit
29, 36, 46, 96 Control unit
31, 44 Display unit
33 Storage section
40, 117 Information processing terminal
41 Input unit
43 Output unit
90 Server
102, 901 Subject
115 Headphone speaker
119, 484 Internet
120 External device
121 First path
122 Second path
161 First-group hearing test sound source generation unit
162 Second-group hearing test sound source generation unit
163 Test control unit
164 Conductive hearing loss degree prediction unit
165 Test information storage unit
166 Additional information storage unit
171 Hearing test sound output means
172 Information output means
173 Information input means
174 Communication means
189 Level/frequency characteristic correction information storage unit
190 Level/frequency characteristic correction unit
197 Hearing aid parameter determination unit
198 Hearing aid parameter storage unit
201b, 201f Microphone
202b, 202f A/D conversion unit
221 D/A conversion unit
222 Receiver
281, 351 Program
282 Data
291 Hearing aid sales company
292 Hearing test service provider
299 Database
333, 334, 335, 336, 337, 338, 541, 542, 543, 544, 545, 546 Coverage range
595 Audio/video control unit
596 Hearing information storage unit
597 Audio processing unit
598 Video delay amount adjustment unit
751 First section
752 Second section
753 Third section
754 Fourth section
755 Fifth section
756 Sixth section
757 Seventh section
758 Eighth section
759 Ninth section
781 First-group hearing test information
782 Second-group hearing test information
783 Additional information
784 Conductive hearing loss degree information
903 Specialist
911 Air conduction receiver
912 Bone conduction receiver
913 Audiometer
914 Response button
Claims (20)
- An information processing device comprising a prediction unit that predicts the degree of conductive hearing loss based on the test results of first and second hearing tests, which are hearing tests using air-conducted sound and which include mutually different test contents.
- The information processing device according to claim 1, wherein the test results of the first hearing test include an air conduction hearing threshold, and the test results of the second hearing test include test results for the inner ear and/or the posterior labyrinth.
- The information processing device according to claim 1, wherein the prediction unit predicts the degree of the conductive hearing loss using a statistical method.
- The information processing device according to claim 1, wherein the prediction unit predicts the degree of the conductive hearing loss using a trained model.
- The information processing device according to claim 1, wherein the first hearing test is an air conduction hearing test and/or self-recorded audiometry using intermittent sounds.
- The information processing device according to claim 1, wherein the second hearing test includes at least one test selected from the group consisting of self-recorded audiometry using continuous sounds, the SISI test, the ABLB test, the DL test, the TD test, a speech recognition threshold test, a maximum speech intelligibility test, a distorted speech hearing test, an interaural separation function test, and a directional hearing test.
- The information processing device according to claim 1, wherein the first hearing test is self-recorded audiometry using intermittent sounds, and the second hearing test is self-recorded audiometry using continuous sounds.
- The information processing device according to claim 1, wherein the prediction unit predicts the degree of the conductive hearing loss based on attribute information of a subject.
- The information processing device according to claim 1, further comprising a sound source generation unit that generates the air-conducted sound used in the first and second hearing tests.
- The information processing device according to claim 1, further comprising a correction unit that acquires output characteristic information of a sound output device that outputs the generated air-conducted sound, and corrects the generated air-conducted sound based on the acquired output characteristic information.
- The information processing device according to claim 10, wherein the correction unit acquires the output characteristic information based on an image of the sound output device.
- The information processing device according to claim 1, further comprising an output unit that outputs the test results of the first hearing test, the test results of the second hearing test, and the predicted degree of the conductive hearing loss.
- The information processing device according to claim 1, further comprising an output unit that compares the predicted degree of conductive hearing loss with a predetermined threshold and, based on the comparison result, outputs information for encouraging a subject to visit a medical institution.
- The information processing device according to claim 1, further comprising an output unit that compares the predicted degree of conductive hearing loss with a predetermined threshold and, based on the comparison result, outputs information for a subject to purchase a hearing aid.
- The information processing device according to claim 1, further comprising a parameter determination unit that determines setting parameters of a hearing aid based on the predicted degree of the conductive hearing loss.
- The information processing device according to claim 15, wherein the setting parameters are parameters for controlling gain settings and/or noise suppression settings of the hearing aid.
- The information processing device according to claim 1, further comprising a parameter determination unit that determines setting parameters of an acoustic device external to the information processing device based on the predicted degree of the conductive hearing loss.
- The information processing device according to claim 17, wherein the setting parameters are parameters for adding a delay to the sound output from the acoustic device or for controlling the volume of the sound output from the acoustic device.
- An information processing method comprising an information processing device predicting the degree of conductive hearing loss based on the test results of first and second hearing tests, which are hearing tests using air-conducted sound and which include mutually different test contents.
- A program that causes a computer to execute a function of predicting the degree of conductive hearing loss based on the test results of first and second hearing tests, which are hearing tests using air-conducted sound and which include mutually different test contents.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202380065372.XA CN119855546A (zh) | 2022-10-11 | 2023-09-14 | 信息处理装置、信息处理方法和程序 |
| JP2024551338A JPWO2024080069A1 (ja) | 2022-10-11 | 2023-09-14 | |
| EP23877075.4A EP4603015A4 (en) | 2022-10-11 | 2023-09-14 | Information processing device, information processing method, and program |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2022163328 | 2022-10-11 | ||
| JP2022-163328 | 2022-10-11 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024080069A1 true WO2024080069A1 (ja) | 2024-04-18 |
Family
ID=90669049
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2023/033500 Ceased WO2024080069A1 (ja) | 2022-10-11 | 2023-09-14 | 情報処理装置、情報処理方法及びプログラム |
Country Status (4)
| Country | Link |
|---|---|
| EP (1) | EP4603015A4 (ja) |
| JP (1) | JPWO2024080069A1 (ja) |
| CN (1) | CN119855546A (ja) |
| WO (1) | WO2024080069A1 (ja) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH07143976A (ja) * | 1993-11-24 | 1995-06-06 | Rion Co Ltd | オージオメータ |
| US20070258609A1 (en) * | 2006-05-04 | 2007-11-08 | Siemens Audiologische Technik Gmbh | Method and apparatus for determining a target amplification curve for a hearing device |
| JP2010518884A (ja) * | 2006-09-14 | 2010-06-03 | ユーメディカル シーオー.,エルティーディー. | 自動遮蔽が可能な純音聴力検査装置 |
| JP2013075066A (ja) * | 2011-09-30 | 2013-04-25 | Toshiba Corp | 電子機器、音響信号の補正方法およびプログラム |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2021214556A1 (en) * | 2020-04-20 | 2021-10-28 | Hearx Ip (Pty) Ltd | Method and system for predicting or detecting conductive hearing loss risk in a person |
- 2023
- 2023-09-14 JP JP2024551338A patent/JPWO2024080069A1/ja active Pending
- 2023-09-14 CN CN202380065372.XA patent/CN119855546A/zh active Pending
- 2023-09-14 EP EP23877075.4A patent/EP4603015A4/en active Pending
- 2023-09-14 WO PCT/JP2023/033500 patent/WO2024080069A1/ja not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH07143976A (ja) * | 1993-11-24 | 1995-06-06 | Rion Co Ltd | オージオメータ |
| US20070258609A1 (en) * | 2006-05-04 | 2007-11-08 | Siemens Audiologische Technik Gmbh | Method and apparatus for determining a target amplification curve for a hearing device |
| JP2010518884A (ja) * | 2006-09-14 | 2010-06-03 | ユーメディカル シーオー.,エルティーディー. | 自動遮蔽が可能な純音聴力検査装置 |
| JP2013075066A (ja) * | 2011-09-30 | 2013-04-25 | Toshiba Corp | 電子機器、音響信号の補正方法およびプログラム |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP4603015A4 * |
Also Published As
| Publication number | Publication date |
|---|---|
| EP4603015A1 (en) | 2025-08-20 |
| EP4603015A4 (en) | 2025-12-10 |
| CN119855546A (zh) | 2025-04-18 |
| JPWO2024080069A1 (ja) | 2024-04-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9426582B2 (en) | Automatic real-time hearing aid fitting based on auditory evoked potentials evoked by natural sound signals | |
| US12058496B2 (en) | Hearing system and a method for personalizing a hearing aid | |
| US11671769B2 (en) | Personalization of algorithm parameters of a hearing device | |
| US20180263562A1 (en) | Hearing system for monitoring a health related parameter | |
| Cord et al. | Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids | |
| US20220272465A1 (en) | Hearing device comprising a stress evaluator | |
| US20130138012A1 (en) | Electroencephalogram recording apparatus, hearing aid, electroencephalogram recording method, and program thereof | |
| US12356149B2 (en) | System comprising a computer program, hearing device, and stress evaluation device | |
| CN101783998A (zh) | 基于用户当前认知负荷的估计运行听力仪器的方法及助听器系统 | |
| JP2019036958A (ja) | 補聴器の作動方法および補聴器 | |
| US12273683B2 (en) | Self-fit hearing instruments with self-reported measures of hearing loss and listening | |
| Hopkins et al. | Benefit from non-linear frequency compression hearing aids in a clinical setting: The effects of duration of experience and severity of high-frequency hearing loss | |
| CN113395647A (zh) | 具有至少一个听力设备的听力系统及运行听力系统的方法 | |
| Jonas Brännström et al. | The acceptable noise level and the pure-tone audiogram | |
| Portelli et al. | Functional outcomes for speech-in-noise intelligibility of NAL-NL2 and DSL v. 5 prescriptive fitting rules in hearing aid users | |
| Hohmann | The future of hearing aid technology: Can technology turn us into superheroes? | |
| Kuk et al. | Measuring the effect of adaptive directionality and split processing on noise acceptance at multiple input levels | |
| Chang et al. | Validation of a Bluetooth self-fitting device for people with mild-to-moderate hearing loss in quiet or noisy environments | |
| Searchfield et al. | The performance of an automatic acoustic-based program classifier compared to hearing aid users’ manual selection of listening programs | |
| US20250016512A1 (en) | Hearing instrument fitting systems | |
| WO2024080069A1 (ja) | 情報処理装置、情報処理方法及びプログラム | |
| Zimmermann et al. | Audiological results with the SAMBA audio processor in comparison to the amade for the Vibrant Soundbridge | |
| US9204226B2 (en) | Method for adjusting a hearing device as well as an arrangement for adjusting a hearing device | |
| Clark et al. | Objective and perceptual comparisons of two bluetooth hearing aid assistive devices | |
| Zanin et al. | Evaluating benefits of remote microphone technology for adults with hearing loss using behavioural and predictive metrics |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23877075; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2024551338; Country of ref document: JP |
| | WWE | Wipo information: entry into national phase | Ref document number: 202380065372.X; Country of ref document: CN |
| | WWP | Wipo information: published in national office | Ref document number: 202380065372.X; Country of ref document: CN |
| | WWE | Wipo information: entry into national phase | Ref document number: 2023877075; Country of ref document: EP |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2023877075; Country of ref document: EP; Effective date: 20250512 |
| | WWP | Wipo information: published in national office | Ref document number: 2023877075; Country of ref document: EP |