WO2020090763A1 - Processing device, system, processing method, and program - Google Patents
- Publication number
- WO2020090763A1 (PCT application PCT/JP2019/042240)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sound data
- section
- sound
- value
- indicating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/08—Measuring devices for evaluating the respiratory organs
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B7/00—Instruments for auscultation
- A61B7/02—Stethoscopes
- A61B7/04—Electric stethoscopes
Definitions
- the present invention relates to a processing device, a system, a processing method, and a program.
- Patent Literature 1 describes a technique for calculating a body sound characteristic using the power ratio between body sound in a first part of a living body and body sound in a second part, the power of the body sound signal in a specific frequency band, and the like.
- Patent Document 2 describes that an exhalation sound is extracted from continuous breath sounds, and a value indicating the sound pressure of the exhalation sound is used to detect a suspected exhalation sound that may be an abnormal exhalation sound. Further, Patent Document 2 describes that a breathing band sensor is wound around a subject's chest, changes in chest expansion and contraction during breathing motion are measured, and an exhalation sound is extracted using the measurement result.
- Patent Document 3 describes that an output signal of a sensor including a piezoelectric element is digitized and is filtered by a high-pass filter to be a respiratory airflow sound signal of a living body. Further, Patent Document 3 describes that the time period in which the inspiratory sound and the expiratory sound are generated is specified based on the magnitude of the amplitude of the respiratory airflow sound signal.
- Patent Document 4 describes that breath sounds are extracted from body sounds using a bandpass filter. Further, it is described that the breathing section is estimated based on the power pattern of the breathing sound. A preset threshold value is used for estimating the breathing section.
- the signal obtained by the sensor includes the effects of biological sounds other than breath sounds, and the effects also vary from person to person.
- An example of the problem to be solved by the present invention is to provide a technique for calculating a breathing volume close to the volume perceived by human hearing from body sound data.
- the invention according to claim 1 is a processing device comprising an acquisition unit that acquires sound data including breath sounds, and a section identifying unit that identifies, using a threshold value for a value indicating the amplitude of the sound data, at least one of a first section presumed to be breathing in the sound data and a second section between a plurality of the first sections,
- the value indicating the amplitude being a value indicating the magnitude of vibration at each time of the sound data,
- the section identifying unit determining the threshold value based on the sound data.
- the invention described in claim 10 is a system comprising the processing device according to claim 1 and a sensor,
- the acquisition unit acquiring the sound data indicating the sound detected by the sensor.
- the invention according to claim 11 is a processing method including an acquisition step of acquiring sound data including breath sounds, and a section identifying step of identifying, using a threshold value for a value indicating the amplitude of the sound data, at least one of a first section presumed to be breathing in the sound data and a second section between a plurality of the first sections,
- the section identifying step determining the threshold value based on the sound data.
- the invention according to claim 12 is a program that causes a computer to execute each step of the processing method according to claim 11.
- FIG. 2 is a flowchart illustrating the processing method according to the first embodiment. FIG. 3 is a diagram displaying an example of sound data as an image. FIG. 4 is a diagram illustrating the configurations of the processing device and the system according to the second embodiment.
- FIG. 5 is a flowchart illustrating the processing method executed by the processing device according to the second embodiment. FIG. 6 is a flowchart illustrating in detail the processing content of the section identifying step S130.
- FIGS. 7(a) to 7(d) and FIG. 8 are diagrams for explaining examples of the processing content of the section identifying step S130 according to the second embodiment.
- FIGS. 9(a) and 9(b) are diagrams for explaining an example of the processing content of the section identifying step S130 according to the second embodiment. A further figure illustrates a computer for implementing the processing device.
- A box-and-whisker plot shows the relationship between the value indicating the volume calculated in a comparative example and the result of a hearing evaluation, and another box-and-whisker plot shows the same relationship for the value indicating the volume calculated in an example.
- A histogram shows the relationship between the result of the hearing evaluation and the value indicating the volume calculated in the comparative example, and another histogram shows the same relationship for the value calculated in the example.
- each component of the processing device 10 indicates a block of a functional unit rather than a configuration of a hardware unit unless otherwise specified.
- each component of the processing device 10 is realized by an arbitrary combination of hardware and software, centered on a CPU of an arbitrary computer, a memory, a program loaded into the memory, a storage medium such as a hard disk storing the program, and a network connection interface. There are various modified examples of the realization method and of the apparatus.
- FIG. 1 is a diagram illustrating a configuration of a processing device 10 according to the first embodiment.
- the processing device 10 includes an acquisition unit 110, a section identification unit 130, and a calculation unit 150.
- the acquisition unit 110 acquires one or more sound data including breath sounds.
- the section identifying unit 130 identifies at least one of the first section and the second section.
- the first section is a section estimated to be breathing, and the second section is a section between the plurality of first sections.
- the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data by using the first portion of the target sound data, determined based on the first section, and the second portion of the target sound data, determined based on the second section.
- FIG. 2 is a flowchart illustrating the processing method according to the first embodiment.
- the method includes an acquisition step S110, a section identification step S130, and a calculation step S150.
- in the acquisition step S110, one or more pieces of sound data including breath sounds are acquired.
- in the section identifying step S130, at least one of the first section and the second section is identified.
- the first section is a section estimated to be breathing
- the second section is a section between the plurality of first sections.
- in the calculation step S150, volume information indicating the breathing volume is calculated by using the first portion of the target sound data determined based on the first section and the second portion of the target sound data determined based on the second section.
- This processing method can be executed by the processing device 10.
- as a method of calculating the breathing volume, there is, for example, a method of performing filtering processing to remove specific frequency components from a biological signal, extracting the breathing sound component, and then obtaining the signal power.
- the frequency band of the respiratory sound component and the frequency band of other body sound components overlap each other in the body sound detected at any part.
- the respiratory sound component appeared in the band from 0 Hz to 1500 Hz
- the pulsation and blood flow sound component appeared in the band from 0 Hz to 200 Hz.
- the respiratory sound component appeared in the band from 0 Hz to 300 Hz
- the heart sound component appeared in the band from 0 Hz to 500 Hz.
- the acquisition unit 110 acquires sound data from, for example, a sensor attached to a living body.
- the section identifying unit 130 identifies the first section and the second section.
- the first section is a section in which it is estimated that the living body is inhaling or exhaling.
- the second section is a section between one first section and the next first section. More specifically, the second section is a section other than the first sections. That is, the second section is a section in which it is estimated that breathing is not being performed, i.e., a section of temporary apnea between breaths.
- the second section does not necessarily have to be located between two first sections.
- the end of the sound data may be specified as the second section.
- FIG. 3 is a diagram showing an image of an example of sound data.
- the time waveform of the sound data is shown in the display area 501
- the spectrogram of the sound data is shown in the display area 502.
- the horizontal axis represents time (time)
- the vertical axis represents frequency
- the intensity of each frequency component is represented by luminance.
- the horizontal axis of the time waveform and the horizontal axis of the spectrogram are aligned.
- the section identified as the first section is indicated by an arrow in the display area 502.
- the section without the arrow is the second section. In this way, the section indicates a time range.
- the sound data includes a plurality of first sections that are separated from each other and a plurality of second sections that are separated from each other.
- the part estimated to be breathing contains a respiratory sound component and other body sound components
- the part estimated to be apnea is considered to contain only other body sound components. Therefore, by comparing the data of the portion estimated to be breathing with the data of the other portion, volume information in which the influence of other body sound components is reduced can be obtained.
- the volume information thus calculated has a high correlation with, for example, the loudness of a breathing sound that a person perceives when listening with his or her own hearing.
- the use of such volume information makes it easier to detect, for example, an abnormality of a living body by data processing, and is useful for assisting diagnosis and monitoring the condition of a patient.
- the calculation unit 150 calculates the volume information indicating the breathing volume of the target sound data by using the first portion of the target sound data, determined based on the first section, and the second portion of the target sound data, determined based on the second section. Therefore, a breathing volume close to the volume perceived by human hearing can be calculated from the body sound data.
- FIG. 4 is a diagram illustrating the configurations of the processing device 10 and the system 20 according to the second embodiment.
- the processing device 10 according to the present embodiment has the configuration of the processing device 10 according to the first embodiment.
- FIG. 5 is a flowchart illustrating a processing method executed by the processing device 10 according to the second embodiment.
- the processing method according to this embodiment has the configuration of the processing method according to the first embodiment.
- the system 20 includes a processing device 10 and a sensor 210. Then, the acquisition unit 110 acquires sound data indicating the sound detected by the sensor 210.
- the sensor 210 detects body sounds including breath sounds.
- the sensor 210 generates an electrical signal indicating the body sound and outputs it as sound data.
- the sensor 210 is, for example, a microphone or a vibration sensor.
- the vibration sensor is, for example, a displacement sensor, a speed sensor, or an acceleration sensor.
- the microphone converts air vibrations caused by body sounds into electric signals.
- the signal level value of this electric signal indicates the sound pressure of the vibration of the air.
- the vibration sensor converts the vibration of the medium (for example, the body surface of the subject) caused by the body sound into an electric signal.
- the signal level value of this electric signal directly or indirectly indicates the vibration displacement of the medium.
- the vibration sensor when the vibration sensor includes a diaphragm, the vibration of the medium is transmitted to the diaphragm and the vibration of the diaphragm is converted into an electric signal.
- the electric signal may be an analog signal or a digital signal.
- the sensor 210 may include a circuit or the like that processes the electric signal. Examples of such circuits include A/D conversion circuits and filter circuits. However, the A/D conversion and the like may instead be performed by the processing device 10.
- the sound data is data indicating an electric signal, and is data indicating a signal level value based on the electric signal obtained by the sensor 210 in time series. That is, the sound data represents the waveform of a sound wave.
- one piece of sound data means sound data that is continuous in time.
- the sensor 210 is, for example, an electronic stethoscope.
- the sensor 210 is pressed or attached to, for example, a part of the subject's living body where the body sound is to be measured by the measurer.
- a case where the acquisition unit 110 acquires only one piece of continuous sound data will be described.
- the acquisition unit 110 acquires, for example, sound data from the sensor 210 in the acquisition step S110.
- the acquisition unit 110 can acquire the sound data detected by the sensor 210 in real time.
- the acquisition unit 110 may read and acquire sound data that is measured by the sensor 210 in advance and is stored in the storage device.
- the storage device may be provided inside the processing device 10 or may be provided outside the processing device 10.
- the storage device provided inside the processing device 10 is, for example, the storage device 1080 of the computer 1000 described later.
- the acquisition unit 110 may acquire sound data output from the sensor 210 and subjected to conversion processing or the like in the processing device 10 or a device other than the processing device 10. Examples of the conversion processing include amplification processing and A / D conversion processing.
- the acquisition unit 110 continuously acquires sound data including body sounds from the sensor 210, for example. It should be noted that each signal level value of the sound data is associated with the recording time. The time may be associated with the sound data in the sensor 210, or when the sound data is acquired from the sensor 210 in real time, the acquisition unit 110 may associate the acquisition time of the sound data with the sound data.
- the sound data for which the volume information is calculated is also referred to as target sound data.
- the acquisition unit 110 acquires only one sound data, and thus the acquired sound data is the target sound data.
- the acquisition unit 110 can continuously acquire the sound data while the subsequent step S120, the section identification step S130, and the calculation step S150 are performed. The following processing is performed on the acquired sound data in order from the beginning.
- the processing device 10 further includes a filter processing unit 120.
- in step S120, the filter processing unit 120 performs the first filter processing on at least the target sound data.
- the filter processing unit 120 performs bandpass filter processing in which the cutoff frequency on the low frequency side is f L1 [Hz] and the cutoff frequency on the high frequency side is f H1 [Hz].
- in the first filter processing, for example, a Fourier transform is applied to the sound data, the band below f L1 [Hz] and the band above f H1 [Hz] are removed in the frequency space, and the time-axis waveform is then restored by an inverse Fourier transform.
- the Fourier transform is, for example, a fast Fourier transform (FFT).
- the first filter process is not limited to the above example, and may be a process using an FIR (Finite Impulse Response) filter or an IIR (Infinite Impulse Response) filter, for example.
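The FFT-based band removal described above can be sketched as follows. This is an illustrative sketch rather than the patented implementation; the sampling rate and the cutoff values standing in for f L1 [Hz] and f H1 [Hz] are assumptions.

```python
import numpy as np

def bandpass_fft(x, fs, f_lo, f_hi):
    # Fourier transform the sound data (rfft, since the data is real-valued).
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # Remove the band below f_lo [Hz] and the band above f_hi [Hz].
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    # Restore the time-axis waveform by the inverse Fourier transform.
    return np.fft.irfft(spectrum, n=len(x))

fs = 8000  # assumed sampling rate [Hz]
t = np.arange(fs) / fs  # 1 second of samples
# A low-frequency component plus a component inside the pass band.
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 400 * t)
y = bandpass_fft(x, fs, f_lo=200, f_hi=600)  # the 50 Hz component is removed
```

An FIR or IIR filter, as mentioned above, would achieve the same band limitation without transforming the whole record at once.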
- the calculation unit 150 calculates the volume information using the first portion and the second portion of the target sound data after the first filter processing.
- Noise included in the sound data can be removed by the first filtering process. Note that it is not necessary to extract only the respiratory sound component by the first filter processing.
- the sound data after the first filter processing may include components other than respiratory sounds.
- the section specifying unit 130 specifies at least one of the first section and the second section.
- the section identifying unit 130 identifies at least one of the first section and the second section based on the sound data acquired by the acquiring section 110.
- the section specifying unit 130 may specify at least one of the first section and the second section without using the sound data. A method of specifying the section by the section specifying unit 130 will be described later in detail.
- the section identifying unit 130 generates, based on the result of identifying the sections, at least one of first time information indicating the time range of the first section and second time information indicating the time range of the second section. For example, when the section identifying unit 130 identifies the first section, it generates the first time information, and when it identifies the second section, it generates the second time information. When the section identifying unit 130 identifies a plurality of discontinuous first sections, a plurality of pieces of first time information are generated; likewise, when it identifies a plurality of discontinuous second sections, a plurality of pieces of second time information are generated.
- the calculation unit 150 calculates the volume information using the data in the first area among the target sound data acquired by the acquisition unit 110.
- the first region is, for example, a region indicating the sound during the most recent time T1 at the point when the calculation unit 150 performs this step.
- T1 is not particularly limited, but is, for example, 2 seconds or more and 30 seconds or less.
- the calculation unit 150 identifies the first portion and the second portion in the first region of the target sound data by using at least one of the first time information and the second time information generated by the section identifying unit 130.
- when both pieces of time information are used, the calculation unit 150 sets the part of the first region of the target sound data in the time range indicated by the first time information as the first portion, and the part in the time range indicated by the second time information as the second portion.
- when only the first time information is used, the calculation unit 150 sets the part of the first region in the time range indicated by the first time information as the first portion, and the remaining part of the first region as the second portion.
- when only the second time information is used, the calculation unit 150 sets the part of the first region in the time range indicated by the second time information as the second portion, and the remaining part of the first region as the first portion.
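For the case where only the first time information is used, the division of the first region into the first and second portions can be sketched as follows. The function name, the sampling rate, and the representation of time ranges as (start, end) pairs in seconds are illustrative assumptions.

```python
import numpy as np

def split_region(region, fs, first_ranges, t0=0.0):
    # Time stamp of each sample in the first region, starting at t0 [s].
    times = t0 + np.arange(len(region)) / fs
    mask = np.zeros(len(region), dtype=bool)
    # Mark the samples inside the time ranges of the first time information.
    for start, end in first_ranges:
        mask |= (times >= start) & (times < end)
    # First portion: inside the ranges; second portion: the remainder.
    return region[mask], region[~mask]
```

For example, `split_region(data, fs, [(2.0, 5.0)])` would take the samples between 2 s and 5 s as the first portion and everything else in the region as the second portion.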
- the calculation unit 150 calculates the first signal strength that is the strength of the first portion of the target sound data and the second signal strength that is the strength of the second portion of the target sound data. Specifically, for example, the calculation unit 150 calculates the RMS (root mean square) of the first portion of the target sound data as the first signal strength. Further, the calculation unit 150 calculates the RMS of the second portion of the target sound data as the second signal strength. Note that the calculation unit 150 may calculate another index such as a peak-to-peak value as the signal strength instead of the RMS. However, the calculation method of the first signal strength and the calculation method of the second signal strength are the same.
- the first signal strength is a value indicating the signal strength when, for example, all the first portions are regarded as one continuous signal.
- the second signal strength is a value indicating the signal strength when, for example, all the second portions are regarded as one continuous signal.
- the calculation unit 150 calculates the volume information of the target sound data using the first signal strength and the second signal strength.
- the volume information does not have to be an absolute volume measured by another device or the like (for example, dB SPL, etc.).
- the volume information may be at least a relative value with which the volume information obtained by the processing device 10 can be compared with each other.
- the calculation unit 150 calculates, as the volume information of the target sound data, at least one of information specifying the ratio of the first signal strength to the second signal strength and information specifying the difference between the first signal strength and the second signal strength. However, it is preferable that the calculation unit 150 calculates at least the information specifying the ratio of the first signal strength to the second signal strength as the volume information. By doing so, the volume information can be expressed in dB like an ordinary volume, and can be brought closer to the volume perceived by human hearing.
- the information specifying the ratio of the first signal strength to the second signal strength is, for example, any one of a value obtained by dividing the first signal strength by the second signal strength, a value obtained by dividing the second signal strength by the first signal strength, and these values expressed in decibels.
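The RMS-based strengths and the decibel expression of their ratio described above can be sketched as follows. Treating each concatenated portion as one continuous signal follows the description above; the function name is an assumption.

```python
import numpy as np

def volume_information(first_portion, second_portion):
    # First signal strength: RMS of the first portion (estimated breathing).
    rms_1 = np.sqrt(np.mean(np.square(first_portion)))
    # Second signal strength: RMS of the second portion (estimated apnea).
    rms_2 = np.sqrt(np.mean(np.square(second_portion)))
    # Volume information: the ratio of the two strengths, expressed in dB.
    return 20.0 * np.log10(rms_1 / rms_2)
```

A first portion with ten times the RMS of the second portion yields 20 dB under this definition.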
- the calculation unit 150 similarly calculates the volume information every time T2.
- T2 is not particularly limited, but is, for example, 1 second or more and 10 seconds or less.
- the volume information calculated by the calculation unit 150 is displayed on a display device, for example.
- the calculation unit 150 may calculate a plurality of volume information of the target sound data in time series, and a graph showing the plurality of volume information in time series may be displayed on the display device.
- the volume information may be displayed numerically.
- the volume information calculated by the calculation unit 150 may be stored in the storage device or may be output to a device other than the processing device 10.
- FIG. 6 is a flowchart showing in detail the processing contents of the section identifying step S130.
- FIGS. 7A to 9B are diagrams for explaining an example of the processing content of the section identifying step S130 according to the present embodiment. In FIGS. 7A to 7D, 9A, and 9B, the horizontal axis represents the elapsed time from a reference time. An example of a method by which the section identifying unit 130 identifies a section will be described in detail below with reference to FIGS. 6 to 9B.
- the section specifying unit 130 specifies at least one of the first section and the second section using a threshold value for the value indicating the amplitude of the first sound data.
- the value indicating the amplitude is a value indicating the magnitude of vibration of the first sound data at each time. Then, the section identifying unit 130 determines the threshold value based on the first sound data.
- the sound data acquired by the acquisition unit 110 includes the first sound data and the target sound data.
- the first sound data is sound data used for specifying a section
- the target sound data is sound data for which volume information is calculated.
- both the first sound data and the target sound data are this one sound data, and are the same at the time of acquisition by the acquisition unit 110.
- the sound data acquired by the acquisition unit 110 is preferably body sound data acquired at the neck. In that case, the sections can be identified more accurately, because the respiratory sound component can be detected at a high rate at the neck and its vicinity.
- the section identifying unit 130 performs the second filtering process on the first sound data acquired by the acquiring unit 110 in step S131.
- in the second filter processing, for example, a Fourier transform is applied to the sound data, the band of f L2 [Hz] or less and the band of f H2 [Hz] or more are removed in the frequency space, and the time-axis waveform is then restored by an inverse Fourier transform.
- the Fourier transform is, for example, a fast Fourier transform (FFT).
- the second filter processing is not limited to the above example, and may be processing by an FIR filter or an IIR filter, for example.
- the section identifying unit 130 obtains a mode value, which will be described later, based on the first sound data that has been subjected to the second filtering process.
- the second filter process is a bandpass filter process in which the cutoff frequency on the low frequency side is f L2 [Hz] and the cutoff frequency on the high frequency side is f H2 [Hz].
- 150 ≤ f L2 ≤ 250 holds
- 550 ≤ f H2 ≤ 650 holds.
- the respiratory sound component included in the first sound data can be mainly extracted. Note that it is not necessary to extract only the respiratory sound component by the second filter processing.
- the first sound data after the second filter processing may include components other than breath sounds.
- FIG. 7A is a diagram exemplifying the waveform of the first sound data at the time of acquisition by the acquisition unit 110
- FIG. 7B is a diagram illustrating the waveform of FIG. 7A after the second filter processing is performed. As indicated by the arrow in FIG. 7B, the respiratory sound component is mainly extracted by the second filter processing.
- the first sound data acquired by the acquisition unit 110 may be subjected to both the first filter processing of step S120 by the filter processing unit 120 and the second filter processing of step S131 by the section identifying unit 130. However, if the pass band of the second filter processing is narrower than the pass band of the first filter processing, the presence or absence of the first filter processing does not affect the section identification result.
- in step S132, the section identifying unit 130 calculates the absolute value of the data obtained in step S131. That is, each signal level value in the time-series data is converted into its absolute value.
- FIG. 7C is a diagram showing the result of calculating the absolute value of the data of FIG. 7B.
- in step S133, the section identifying unit 130 performs downsampling processing on the data obtained in step S132.
- the contour of the data waveform is obtained, as shown in FIG. 7D.
- each data point of the data obtained by the downsampling process corresponds to a value indicating the amplitude at each time of the first sound data.
- the section identifying unit 130 obtains a mode value described later based on the data obtained by performing at least the downsampling process on the first sound data in this way.
- a portion estimated to be breathing is indicated by a downward arrow
- a portion estimated to be not breathing is indicated by an upward arrow.
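Steps S132 and S133 (absolute value followed by downsampling) can be sketched as follows. Downsampling by averaging fixed-size blocks is an assumption, since the text only states that downsampling yields the contour of the waveform.

```python
import numpy as np

def amplitude_envelope(x, factor):
    # Step S132: convert each signal level value into its absolute value.
    rectified = np.abs(x)
    # Step S133: downsample by averaging blocks of `factor` samples,
    # yielding one value indicating the amplitude per block.
    n = len(rectified) // factor * factor  # trim to whole blocks
    return rectified[:n].reshape(-1, factor).mean(axis=1)
```

Each value of the returned array corresponds to the value indicating the amplitude at one time of the first sound data.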
- in step S134, the section identifying unit 130 determines whether to update (determine) the threshold value based on a predetermined update condition. Specifically, for example, when the section identifying unit 130 has never determined the threshold value for the first sound data, it determines that the update condition is satisfied. On the other hand, when the section identifying unit 130 has determined the threshold value for the first sound data at least once, it determines that the update condition is not satisfied.
- when it is determined that the update condition is satisfied (Yes in step S134), the section identifying unit 130 proceeds to step S135 and performs processing for determining a threshold value. On the other hand, when it is determined that the update condition is not satisfied (No in step S134), the section identifying unit 130 performs step S137 using the threshold value that has already been set.
- in step S135, the section identifying unit 130 obtains the mode of the values indicating the amplitude of the first sound data. To do so, the section identifying unit 130 counts the number of appearances of each value indicating the amplitude within a predetermined time range of the first sound data, and then takes the value indicating the amplitude with the largest number of appearances as the mode value.
- the predetermined time range may be, for example, the range from when the acquisition unit 110 starts acquiring the first sound data to when the section identifying unit 130 performs this step, or it may be the most recent time T3 at the point when this step is performed. T3 is not particularly limited, but is, for example, 2 seconds or more and 30 seconds or less. Further, T3 may, for example, be the same as T1. When the first sound data is the target sound data, the region in the predetermined time range may be matched with the first region.
- the acquisition unit 110 may read out and acquire the sound data stored in the storage device.
- the section identifying unit 130 may identify the mode value and the threshold value using the entire sound data.
- FIG. 8 is a histogram illustrating the number of appearances of each amplitude. Such a histogram corresponds to a graph of the first sound data in which the horizontal axis represents the amplitude and the vertical axis represents the number of appearances.
- the value indicating the amplitude is not limited to the value obtained by the above processing, and may be, for example, a peak-to-peak value or a standardized value.
- the section identifying unit 130 sets a threshold value larger than the mode value. Specifically, among the values indicating amplitude at which the histogram has a local minimum, the value closest to the mode value is set as the threshold value. In the histogram, the mode value is the value indicating the smallest amplitude among the plurality of values at which the histogram has a local maximum. In the graph of FIG. 8, the point 505 having the largest number of appearances and a plurality of local minima are circled. In the example of this figure, the value indicating the amplitude at the point 505 is the mode value, and the value indicating the amplitude at the local minimum 506 on the lowest-amplitude side among the local minima is determined as the threshold value.
- the mode value is considered to be a value indicating the amplitude mainly corresponding to the apnea section. Therefore, by determining the threshold value in this manner, a threshold value capable of distinguishing the apnea section from the other sections can be obtained. Further, since this threshold value is obtained using the sound data acquired by the acquisition unit 110, highly accurate section identification is realized regardless of individual differences in body sounds.
- the threshold value is not limited to the above example.
- the threshold may be a value indicating the amplitude at the second or third minimum from the low-amplitude side in the histogram, or a value indicating the amplitude at the next maximum after the point 505.
- the section identifying unit 130 performs at least one of the following first process and second process in step S137.
- the first process is a process of identifying, as a first section, a section in which at least a value indicating the amplitude exceeds a threshold value in the first sound data.
- the second process is a process of identifying, as the second section, a section in which at least the value indicating the amplitude is less than the threshold value in the first sound data.
- a section in which the value indicating the amplitude equals the threshold may be included in either the first section or the second section.
- the section identifying unit 130 applies the threshold value to the first sound data that has undergone the process of step S133; a section in which the value indicating the amplitude continuously exceeds the threshold value is treated as one first section, while a section in which the value is continuously less than the threshold value is treated as one second section.
- the section specifying unit 130 may specify only one of the first section and the second section. In that case, the remaining section becomes the other section.
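The first and second processes above amount to run-length segmentation of the amplitude sequence against the threshold. A minimal sketch follows; the names are illustrative, and samples equal to the threshold are grouped into the first section here, one of the two options the text permits:

```python
def identify_sections(values, threshold):
    """Return (first_sections, second_sections) as half-open sample-index
    ranges: runs at or above the threshold are first sections (presumed
    breathing), runs below it are second sections."""
    first, second = [], []
    start, above = 0, values[0] >= threshold
    for i in range(1, len(values) + 1):
        cur = (values[i] >= threshold) if i < len(values) else None
        if cur != above:
            (first if above else second).append((start, i))
            start, above = i, cur
    return first, second
```

Because the two lists partition the sequence, computing only one of them implicitly determines the other, as the text notes.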
- In FIG. 9A, points below the threshold in the graph of FIG. 7D are circled.
- FIG. 9B is an enlarged view of the small-amplitude side of FIG. 9A, with a straight line indicating the threshold value added.
- the section specifying unit 130 specifies a section for the newly acquired sound data, for example, every time T 4 .
- T 4 is not particularly limited, but is, for example, 1 second or more and 10 seconds or less. Note that T 4 is preferably T 2 or less as described above.
- the section identifying unit 130 may determine the threshold each time a section is identified. The section identifying unit 130 may also identify a section each time the calculation unit 150 calculates volume information. For example, threshold determination, section identification, and volume-information calculation may all be performed on sound data in the same time range.
- the section specifying unit 130 may specify the section by another method. For example, a section may be similarly determined using a predetermined threshold value.
- the section specifying unit 130 may specify the section based on the output of a band sensor attached to the chest or the like of the target person. For example, the band sensor can detect the expansion and movement of the chest during breathing.
- Each functional configuration unit of the processing device 10 may be implemented by hardware that realizes that unit (e.g., a hard-wired electronic circuit), or by a combination of hardware and software (e.g., a combination of an electronic circuit and a program that controls it).
- The case in which each functional component of the processing device 10 is realized by a combination of hardware and software will be further described below.
- FIG. 10 is a diagram exemplifying a computer 1000 for realizing the processing device 10.
- the computer 1000 is an arbitrary computer.
- the computer 1000 is an SoC (System On Chip), a Personal Computer (PC), a server machine, a tablet terminal, a smartphone, or the like.
- the computer 1000 may be a dedicated computer designed to realize the processing device 10 or a general-purpose computer.
- the computer 1000 has a bus 1020, a processor 1040, a memory 1060, a storage device 1080, an input / output interface 1100, and a network interface 1120.
- the bus 1020 is a data transmission path for the processor 1040, the memory 1060, the storage device 1080, the input / output interface 1100, and the network interface 1120 to exchange data with each other.
- the processor 1040 is various processors such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), or an FPGA (Field-Programmable Gate Array).
- the memory 1060 is a main storage device realized by using a RAM (Random Access Memory) or the like.
- the storage device 1080 is an auxiliary storage device realized by using a hard disk, SSD (Solid State Drive), memory card, ROM (Read Only Memory), or the like.
- the input / output interface 1100 is an interface for connecting the computer 1000 and input / output devices.
- the input / output interface 1100 is connected with an input device such as a keyboard and a mouse and an output device such as a display device.
- the input / output interface 1100 may be connected with a touch panel or the like that doubles as a display device and an input device.
- the network interface 1120 is an interface for connecting the computer 1000 to the network.
- This communication network is, for example, LAN (Local Area Network) or WAN (Wide Area Network).
- the method by which the network interface 1120 connects to the network may be a wireless connection or a wired connection.
- the storage device 1080 stores a program module that realizes each functional component of the processing device 10.
- the processor 1040 realizes the function corresponding to each program module by reading each of these program modules into the memory 1060 and executing them.
- the sensor 210 is connected to, for example, the input / output interface 1100 of the computer 1000 or the network interface 1120 via a network.
- the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data using the first portion of the target sound data, which is determined based on the first section, and the second portion of the target sound data, which is determined based on the second section. Therefore, a respiratory sound volume close to the volume perceived by human hearing can be calculated from the body sound data.
- FIG. 11 is a diagram illustrating the configurations of the processing device 10 and the system 20 according to the third embodiment.
- the processing apparatus 10 and the system 20 according to this embodiment are the same as the processing apparatus 10 and the system 20 according to the second embodiment, respectively, except for the points described below.
- the system 20 includes a plurality of sensors 210, and the acquisition unit 110 according to the present embodiment acquires a plurality of sound data indicating sounds detected by the plurality of sensors 210.
- the plurality of sensors 210 detect body sounds at a plurality of parts of the living body of the same person, for example. The acquisition unit 110 can then acquire sound data of body sounds simultaneously detected at a plurality of parts of the living body of the same person. The acquisition unit 110 can also acquire a plurality of pieces of sound data whose recording times at least partially overlap.
- the acquisition unit 110 may further acquire sound data of body sounds of a plurality of persons, but at least the processing by the section identification unit 130 and the calculation unit 150 is performed for each person.
- the acquisition unit 110 may acquire a plurality of pieces of sound data whose recording times do not overlap, but the processing by the section identification unit 130 and the calculation unit 150 is performed at least for each piece of sound data, or for each group of pieces of sound data whose recording times at least partially overlap.
- FIG. 12 is a diagram illustrating the attachment positions of the plurality of sensors 210.
- the sensors 210 are attached at parts A to D.
- the processing device 10 receives the input of information indicating the attachment site of the sensor 210 by the user prior to the acquisition of the sound data.
- a diagram showing the living body is displayed on the display device together with candidates for the attachment sites of the sensors 210, and the user designates the attachment position of each sensor 210 from among the candidates using an input device such as a mouse, keyboard, or touch panel.
- the sound data acquired by each sensor 210 is associated with information indicating a part.
- the section identifying unit 130 identifies at least one of the first section and the second section based on the first sound data, as described in the second embodiment. That is, the target sound data and the first sound data are included in the plurality of sound data acquired by the acquisition unit 110.
- the target sound data is not limited to one.
- The case in which the target sound data includes at least second sound data different from the first sound data will be described below. That is, the volume information of the second sound data is calculated based on the section specified using the first sound data. There may be a plurality of pieces of second sound data.
- the first sound data indicates the sound detected by the first sensor 210 provided at the first position on the surface of or inside the human body.
- the second sound data indicates the sound detected by the second sensor 210 provided at the second position on the surface of or inside the human body.
- It is preferable that the first position is located on the neck, or that the first position is closer to the neck than the second position.
- In this way, the section can be specified more accurately.
- the sound data obtained at the part A is the first sound data.
- the section identifying unit 130 selects which of the plurality of sound data acquired by the acquisition unit 110 is to be the first sound data based on the information indicating the part associated with each sound data.
- the acquisition step S110 and step S120 according to the present embodiment are performed by the acquisition unit 110 and the filter processing unit 120, respectively, similarly to the second embodiment.
- the section specifying unit 130 specifies at least one of the first section and the second section in the first sound data, as in the second embodiment. Then, based on the first sound data, at least one of the first time information indicating the time range of the first section and the second time information indicating the time range of the second section is generated.
- the calculation unit 150 uses at least one of the first time information and the second time information in calculation step S150 to specify the first portion and the second portion of each piece of target sound data. In this way, also in the second sound data included in the target sound data, it is possible to identify the first section, in which breathing is estimated to take place, and the second section, covering the remainder.
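Applying the time information derived from the first sound data to another (second) sound data stream can be sketched as below, assuming a known sampling rate and first-section time ranges given in seconds; all names are illustrative:

```python
def split_portions(num_samples, sample_rate, first_time_ranges):
    """Mark each sample of the target sound data as belonging to the first
    portion (inside a first-section time range) or the second portion."""
    in_first = [False] * num_samples
    for t_start, t_end in first_time_ranges:
        lo = max(int(t_start * sample_rate), 0)
        hi = min(int(t_end * sample_rate), num_samples)
        for i in range(lo, hi):
            in_first[i] = True
    first = [i for i, f in enumerate(in_first) if f]
    second = [i for i, f in enumerate(in_first) if not f]
    return first, second
```

Because the time ranges come from the first sound data, the target stream need only share a synchronized clock; its own amplitudes are not thresholded.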
- the calculation unit 150 calculates volume information for each target sound data, as in the second embodiment. By doing so, volume information for each part can be obtained. Note that the calculation unit 150 may set all the sound data acquired by the acquisition unit 110 as the target sound data, or may set only the sound data corresponding to the part designated by the user as the target sound data.
- FIG. 13 is a diagram showing a display example of volume information of a plurality of parts.
- the volume information of a plurality of parts is displayed in a state in which the correspondence with each part can be understood.
- a numerical value indicating the volume is displayed, based on the volume information, on a map indicating the parts.
- the numerical value indicating the volume is displayed in a graph in time series.
- the horizontal axis of each graph represents the elapsed time from a reference time and is synchronized among the plurality of parts. The scale of the graph axes may be enlarged, reduced, or translated by the user as necessary.
- target sound data may further include the first sound data.
- target sound data may include only the first sound data.
- the volume information of the first sound data can be calculated in the same manner as above.
- the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data using the first portion of the target sound data, which is determined based on the first section, and the second portion of the target sound data, which is determined based on the second section. Therefore, a respiratory sound volume close to the volume perceived by human hearing can be calculated from the body sound data.
- the processing device 10 and the system 20 according to the fourth embodiment are the same as the processing device 10 and the system 20 according to the third embodiment, except for the processing contents of the section identifying unit 130 and the calculation unit 150, respectively.
- the acquisition unit 110 acquires a plurality of pieces of sound data indicating sounds detected by the plurality of sensors 210. The section identifying unit 130 then identifies the first section and the second section in each of two or more of the acquired pieces of sound data. That is, in the present embodiment, the section identifying unit 130 uses two or more pieces of sound data among the plurality acquired by the acquisition unit 110 as first sound data. By specifying the section using two or more pieces of first sound data, the accuracy of section identification can be improved.
- the section specifying unit 130 determines the threshold value for each piece of first sound data used to specify the section.
- the section identifying unit 130 generates third time information indicating the time range that is the first section in all of the first sound data used to specify the section, or the time range that is the second section in all of that first sound data.
- It is preferable that the two or more pieces of first sound data include sound data detected at the neck, or the sound data detected at the position closest to the neck among the plurality of pieces of sound data. In this way, the section can be specified more accurately.
- the calculation unit 150 identifies the first portion and the second portion of each piece of target sound data using the third time information. Specifically, when the third time information indicates the time range that is the first section in all the first sound data, the calculation unit 150 treats the portion of the target sound data within that time range as the first portion and the remaining portions as the second portion. Conversely, when the third time information indicates the time range that is the second section in all the first sound data, the calculation unit 150 treats the portion of the target sound data within that time range as the second portion and the remaining portions as the first portion.
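The third time information, the set of time ranges that are the first section in all of the first sound data, is an interval intersection. A sketch under the assumption that each per-sensor result is a list of (start, end) ranges; the names are illustrative:

```python
def third_time_info(first_sections_per_sensor):
    """Intersect first-section time ranges across two or more first sound
    data: keep only ranges that are a first section in ALL of them."""
    def intersect(a, b):
        out = []
        for s0, e0 in a:
            for s1, e1 in b:
                s, e = max(s0, s1), min(e0, e1)
                if s < e:
                    out.append((s, e))
        return out
    result = first_sections_per_sensor[0]
    for ranges in first_sections_per_sensor[1:]:
        result = intersect(result, ranges)
    return result
```

The same routine applied to second-section ranges yields the other variant of the third time information.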
- the target sound data may or may not include the first sound data. Further, the target sound data may or may not include second sound data different from the first sound data.
- the calculation unit 150 calculates volume information for each target sound data, as in the second embodiment. By doing so, volume information for each part can be obtained.
- the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data using the first portion of the target sound data, which is determined based on the first section, and the second portion of the target sound data, which is determined based on the second section. Therefore, a respiratory sound volume close to the volume perceived by human hearing can be calculated from the body sound data.
- the section identifying unit 130 identifies the first section and the second section in each of the two or more sound data. Therefore, the accuracy of section identification can be improved.
- FIG. 14 is a diagram illustrating the configurations of the processing device 10 and the system 20 according to the fifth embodiment.
- the processing apparatus 10 and the system 20 according to this embodiment are the same as at least one of the second to fourth embodiments except for the points described below.
- the processing device 10 further includes an estimation unit 170.
- the estimation unit 170 estimates the state of the person whose body sound is detected, based on the volume information calculated by the calculation unit 150. Details are described below.
- the calculation unit 150 calculates volume information of a plurality of target sound data in the same manner as at least one of the second to fourth embodiments.
- FIGS. 15 and 16 are diagrams each showing a display example of volume information of a plurality of parts.
- the estimation unit 170 acquires the calculated volume information from the calculation unit 150. Information indicating a part is associated with each piece of volume information. The estimation unit 170 calculates, for example, the rate of decrease of the volume indicated by the volume information of each part. If the rate of decrease exceeds a predetermined reference value, it is estimated that breathing has weakened. In that case, the estimation unit 170 displays or notifies that breathing has weakened, as shown in FIG. 15, for example. Note that the estimation unit 170 may estimate that breathing has weakened when the rate of decrease becomes high at a predetermined number of parts or more, or when the rate of decrease remains high over a predetermined length of time.
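One way to realize the decrease-rate check described above is sketched below; the per-step rate definition, the reference value, and the persistence count are assumptions made for illustration:

```python
def breathing_weakened(volumes, reference_rate, min_consecutive=1):
    """Flag weakened breathing when the relative volume decrease between
    successive measurements exceeds reference_rate for at least
    min_consecutive consecutive steps."""
    run = 0
    for prev, cur in zip(volumes, volumes[1:]):
        rate = (prev - cur) / prev if prev else 0.0
        run = run + 1 if rate > reference_rate else 0
        if run >= min_consecutive:
            return True
    return False
```

Raising `min_consecutive` corresponds to the variant in the text that requires the decrease to persist over a predetermined length of time.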
- the estimation unit 170 calculates, for example, the difference between two pieces of sound data indicating body sounds detected at mutually symmetrical positions on the living body. The estimation unit 170 then estimates that pneumothorax is suspected when the magnitude of the difference exceeds a predetermined reference value. Based on the sign of the difference, it also estimates which lung is suspected of pneumothorax. In this way, the estimation unit 170 may estimate the position of the sound source of an abnormal breath sound based on the calculated plurality of pieces of volume information. The estimation unit 170 then displays or notifies the suspicion of pneumothorax and the estimated position, as shown in FIG. 16, for example. Note that the estimation unit 170 may make this estimation only when the magnitude of the difference exceeds the reference value over a predetermined length of time.
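The left-right comparison can be sketched as follows; which side the sign of the difference points to, and the return convention, are assumptions for illustration rather than details given in the text:

```python
def pneumothorax_suspicion(left_volume, right_volume, reference):
    """Compare breathing volumes from mutually symmetrical positions.
    A difference whose magnitude exceeds the reference value raises a
    suspicion of pneumothorax on the quieter side; otherwise None."""
    diff = left_volume - right_volume
    if abs(diff) <= reference:
        return None
    return "left" if diff < 0 else "right"
```

In practice the comparison would be repeated over time, per the text's note about requiring the difference to persist.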
- the processing device 10 according to the present embodiment can also be realized by using the computer 1000 as shown in FIG.
- the calculation unit 150 calculates volume information indicating the breathing volume of the target sound data using the first portion of the target sound data, which is determined based on the first section, and the second portion of the target sound data, which is determined based on the second section. Therefore, a respiratory sound volume close to the volume perceived by human hearing can be calculated from the body sound data.
- the processing device 10 includes the estimation unit 170, which estimates the state of the person whose body sound is detected based on the volume information calculated by the calculation unit 150. Therefore, the condition of a patient or the like can be monitored.
- Sound data was obtained by measuring body sounds at 4 sites (neck, upper right chest, upper left chest, and lower right chest) of 22 subjects. The obtained sound data was then played back, and an auditory evaluation of whether breathing sounds could be heard was performed. In the auditory evaluation, a score of "0" was given when no breathing sound was heard, "1" when it was barely heard, and "2" when it was heard well.
- the sound data was processed by the methods of the example and the comparative example, and the value indicating the volume was calculated. The calculated values were then compared with the results of the auditory evaluation.
- In the example, the value indicating the volume was calculated as described in the second embodiment. Specifically, a threshold value was set based on each piece of sound data, and the sections were specified. The RMS of each section of the sound data was calculated after the first filter processing, with f L1 set to 100 Hz and f H1 set to 1000 Hz. The value obtained by dividing the RMS of the first section by the RMS of the second section, expressed in decibels, was used as the value indicating the volume. That is, the RMS of the second section was set to 0 dB. Note that the determination of the threshold value, the specification of the sections, and the calculation of the value indicating the volume were performed independently for each piece of sound data.
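The example's volume value, the RMS of the first portion relative to the RMS of the second portion expressed in decibels, can be written as below, assuming the conventional 20·log10 form for an amplitude ratio (the text does not state the exact decibel convention):

```python
import math

def rms(samples):
    """Root mean square of a sample sequence."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def volume_db(first_portion, second_portion):
    """RMS of the first portion divided by RMS of the second portion,
    expressed in decibels (so the second portion sits at 0 dB)."""
    return 20.0 * math.log10(rms(first_portion) / rms(second_portion))
```

A breathing portion ten times louder than the apnea portion thus yields 20 dB, which matches the 0 dB reference for the second section.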
- FIGS. 17 and 18 are box-and-whisker plots showing the relationship between the value indicating the volume, calculated in the comparative example and in the example, and the result of the auditory evaluation.
- FIG. 19 is a diagram showing a histogram of the relationship between the result of the hearing evaluation and the value indicating the volume calculated in the comparative example.
- FIG. 20 is a histogram showing the relationship between the result of the auditory evaluation and the value indicating the volume calculated in the example.
- In the example, the magnitude of the value indicating the volume correlated well with evaluation 0, evaluation 1, and evaluation 2, and the sound data of evaluation 0 and that of evaluation 1 could be clearly distinguished by the value indicating the volume.
- In the example, a value highly correlated with the result of the auditory evaluation could thus be calculated as the volume. It was therefore confirmed that a breathing volume close to the volume perceived by human hearing can be calculated from the sound data by the method of the example.
- A processing device comprising: an acquisition unit that acquires one or more pieces of sound data including breath sounds; a section specifying unit that specifies at least one of a first section presumed to contain breathing and a second section between a plurality of the first sections; and a calculation unit that calculates volume information indicating the breathing volume of target sound data, using a first portion of the target sound data determined based on the first section and a second portion of the target sound data determined based on the second section. 1-2. 1-1.
- The processing device described above, wherein the calculation unit calculates the volume information of the target sound data using a first signal strength and a second signal strength, and calculates information specifying the ratio of the first signal strength to the second signal strength as the volume information of the target sound data. 1-4. 1-1. From 1-3.
- the processing device calculates the volume information using the first portion and the second portion of the target sound data after the filtering,
- the filter processing unit performs bandpass filter processing in which the cutoff frequency on the low frequency side is f L1 [Hz] and the cutoff frequency on the high frequency side is f H1 [Hz].
- 50 ≤ f L1 ≤ 150 holds, and 500 ≤ f H1 ≤ 1500 is satisfied. 1-5. 1-1. To 1-4.
- the acquisition unit acquires a plurality of the sound data indicating a sound detected by a plurality of sensors
- the section identifying unit generates at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section based on the first sound data
- the calculation unit uses at least one of the first time information and the second time information to specify the first portion and the second portion of the target sound data
- the processing device in which the target sound data includes at least second sound data different from the first sound data. 1-6. 1-5.
- the first sound data indicates a sound detected by the first sensor provided at the first position on the surface of or inside the human body
- the second sound data indicates a sound detected by the second sensor provided at a second position on the surface of or inside the human body
- the acquisition unit acquires a plurality of the sound data indicating a sound detected by a plurality of sensors
- the section specifying unit Specifying the first section and the second section in each of two or more of the sound data, Generating third time information indicating a time range defined as the first section in all of the two or more sound data, or a time range defined as the second section in all of the two or more sound data
- a processing device in which the calculation unit specifies the first portion and the second portion of the target sound data using the third time information. 1-8. 1-5. To 1-7.
- the processing device calculates the volume information of a plurality of target sound data
- the processing device further comprising an estimation unit that estimates the position of the sound source of the abnormal respiratory sound based on the calculated plurality of volume information. 1-9. 1-1.
- a system in which the acquisition unit acquires the sound data indicating the sound detected by the sensor. 2-1.
- a calculation step of calculating volume information indicating the breath volume.
- the processing method described above, wherein in the calculating step, information specifying the ratio of the first signal strength to the second signal strength is calculated as the volume information of the target sound data. 2-4. 2-1. From 2-3.
- The processing method described in any one of the above, further comprising a filtering step of performing filter processing on at least the target sound data, wherein in the calculating step the volume information is calculated using the first portion and the second portion of the target sound data after the filter processing, and in the filtering step bandpass filter processing is performed in which the cutoff frequency on the low frequency side is f L1 [Hz] and the cutoff frequency on the high frequency side is f H1 [Hz].
- In the processing method described in any one of the above, a plurality of pieces of sound data indicating sounds detected by a plurality of sensors is acquired,
- the section specifying step at least one of first time information indicating a time range of the first section and second time information indicating a time range of the second section is generated based on the first sound data,
- the calculating step at least one of the first time information and the second time information is used to identify the first portion and the second portion of the target sound data,
- the processing method, wherein the target sound data includes at least second sound data different from the first sound data. 2-6. 2-5.
- the first sound data indicates a sound detected by the first sensor provided at the first position on the surface of or inside the human body
- the second sound data indicates a sound detected by the second sensor provided at a second position on the surface of or inside the human body
- a plurality of the sound data indicating the sound detected by the plurality of sensors is acquired,
- the section specifying step Specifying the first section and the second section in each of two or more of the sound data, Generating third time information indicating a time range defined as the first section in all of the two or more sound data, or a time range defined as the second section in all of the two or more sound data,
- the calculating step a processing method of identifying the first portion and the second portion of the target sound data by using the third time information. 2-8. 2-5. To 2-7.
- the volume information of a plurality of target sound data is calculated, The processing method further comprising an estimation step of estimating the position of the sound source of the abnormal respiratory sound based on the calculated plurality of volume information.
- A processing device comprising: an acquisition unit that acquires sound data including breath sounds; and a section specifying unit that specifies, using a threshold value for the amplitude of the sound data, at least one of a first section of the sound data presumed to contain breathing and a second section between a plurality of the first sections.
- the value indicating the amplitude is a value indicating the magnitude of vibration at each time of the sound data
- a processing device in which the section specifying unit determines the threshold value based on the sound data.
- In the processing device described above, the section specifying unit obtains the mode of the value indicating the amplitude in the sound data and determines the threshold value to be greater than the mode value, and performs at least one of a first process of specifying, as the first section, a section of the sound data in which the value indicating the amplitude exceeds the threshold, and a second process of specifying, as the second section, a section in which the value indicating the amplitude is less than the threshold. 4-3. 4-2.
- a processing device in which the threshold value is, when the sound data is represented as a graph with amplitude on the horizontal axis and number of appearances on the vertical axis, the value indicating the amplitude that is closest to the mode value among values indicating one or more minimum values. 4-4. 4-3.
- the mode value is a value indicating the smallest amplitude among a plurality of values indicating the amplitude having a maximum value. 4-5. 4-2. To 4-4.
- the filter process is a bandpass filter process in which the cutoff frequency on the low frequency side is f L2 [Hz] and the cutoff frequency on the high frequency side is f H2 [Hz].
- 150 ≤ f L2 ≤ 250 holds,
- In the processing device described in any one of the above, the section specifying unit obtains the mode value based on data obtained by performing at least downsampling processing on the sound data. 4-7. 4-1. To 4-6.
- the acquisition unit acquires a plurality of the sound data indicating a sound detected by a plurality of sensors
- the section specifying unit specifies at least one of the first section and the second section in the first sound data included in the plurality of sound data, generates at least one of first time information indicating the time range of the first section and second time information indicating the time range of the second section, and identifies the first section and the second section of second sound data, different from the first sound data and included in the plurality of sound data, based on at least one of the first time information and the second time information. 4-8. 4-7.
- the first sound data indicates a sound detected by the first sensor provided at the first position on the surface of or inside the human body
- the second sound data indicates a sound detected by the second sensor provided at a second position on the surface of or inside the human body
- 4-9. In the processing device according to any one of 4-1. to 4-6., the acquisition unit acquires a plurality of the sound data indicating sounds detected by a plurality of sensors, and the section specifying unit specifies the first section and the second section in each of two or more of the sound data, and performs a process of generating third time information indicating a time range that is specified as the first section in all of the two or more sound data, or a time range that is specified as the second section in all of the two or more sound data.
- A system comprising the processing device according to any one of the above and a sensor, wherein the acquisition unit acquires the sound data indicating the sound detected by the sensor.
- 5-1. A processing method in which the value indicating the amplitude is a value indicating the magnitude of vibration at each time of the sound data, and the threshold value is determined based on the sound data.
- 5-2. In the processing method according to 5-1., the section specifying step obtains the mode value of the value indicating the amplitude in the sound data, determines the threshold value to be greater than the mode value, and performs at least one of a first process of specifying, as the first section, at least a section of the sound data in which the value indicating the amplitude exceeds the threshold value, and a second process of specifying, as the second section, at least a section in which the value indicating the amplitude falls below the threshold value.
- 5-3. In the processing method according to 5-2., the threshold value is, when the sound data is represented in a graph in which the horizontal axis indicates the amplitude and the vertical axis indicates the number of appearances, the value of the amplitude that, among one or more values of the amplitude at which the number of appearances takes a minimal value, is closest to the mode value.
- 5-4. In the processing method according to 5-3., the mode value is the value indicating the smallest amplitude among a plurality of values of the amplitude at which the number of appearances takes a maximal value.
- 5-5. In the processing method according to any one of 5-2. to 5-4., the mode value is obtained based on the sound data after a filter process is performed, the filter process being a bandpass filter process in which the cutoff frequency on the low-frequency side is f L2 [Hz] and the cutoff frequency on the high-frequency side is f H2 [Hz], where 150 ≦ f L2 ≦ 250 and 550 ≦ f H2 ≦ 650 hold.
- 5-6. In the processing method according to any one of 5-2. to 5-5., the mode value is obtained based on data obtained by performing at least downsampling processing on the sound data.
- 5-7. In the processing method according to any one of 5-1. to 5-6., a plurality of the sound data indicating sounds detected by a plurality of sensors is acquired, and the section specifying step specifies at least one of the first section and the second section in first sound data included in the plurality of sound data, generates at least one of first time information indicating the time range of the first section and second time information indicating the time range of the second section, and specifies the first section and the second section of second sound data, which is included in the plurality of sound data and differs from the first sound data, based on at least one of the first time information and the second time information.
- 5-8. In the processing method according to 5-7., the first sound data indicates a sound detected by a first sensor provided at a first position on the surface of or inside the human body, the second sound data indicates a sound detected by a second sensor provided at a second position on the surface of or inside the human body, and the first position is located on the neck, or the first position is closer to the neck than the second position.
- 5-9. In the processing method according to any one of 5-1. to 5-6., a plurality of the sound data indicating sounds detected by a plurality of sensors is acquired, and the section specifying step specifies the first section and the second section in each of two or more of the sound data, and performs a process of generating third time information indicating a time range that is specified as the first section in all of the two or more sound data, or a time range that is specified as the second section in all of the two or more sound data.
- 6-1. A program that causes a computer to execute each step of the processing method according to any one of 5-1. to 5-9.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Acoustics & Sound (AREA)
- Surgery (AREA)
- Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Physiology (AREA)
- Pulmonology (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The present invention relates to a processing device provided with an acquisition unit and a section specifying unit. The acquisition unit acquires sound data including a breathing sound. The section specifying unit specifies a first section and/or a second section by using a threshold value for the values indicating the amplitude of the sound data. The first section is a section in which breathing is estimated to be taking place, while the second section is a section between a plurality of the first sections. The values indicating the amplitude indicate the magnitude of vibration of the sound data at each time. Furthermore, the section specifying unit determines the threshold value based on the sound data.
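As a rough illustration of the section specification described in the abstract, the first sections and the intervening second sections can be derived from the amplitude values and the threshold value as follows. This is a minimal sketch under assumed names; the index-range representation is an assumption, not part of the disclosure.

```python
def specify_sections(amplitude, threshold):
    """Split a sequence of amplitude values into first sections
    (amplitude exceeds the threshold; breathing estimated) and
    second sections (the gaps between consecutive first sections).
    Returns two lists of half-open (start, end) index ranges."""
    first, second = [], []
    start = None
    for i, a in enumerate(amplitude):
        if a > threshold and start is None:
            start = i                      # a first section begins
        elif a <= threshold and start is not None:
            first.append((start, i))       # a first section ends
            start = None
    if start is not None:                  # section still open at the end
        first.append((start, len(amplitude)))
    # Second sections lie between consecutive first sections.
    for (_, end0), (start1, _) in zip(first, first[1:]):
        second.append((end0, start1))
    return first, second
```

For example, `specify_sections([0, 0, 5, 6, 0, 0, 0, 7, 8, 9, 0], 1)` returns `([(2, 4), (7, 10)], [(4, 7)])`.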
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2020553908A JP7089650B2 (ja) | 2018-10-31 | 2019-10-29 | Processing device, system, processing method, and program |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018-205536 | 2018-10-31 | ||
| JP2018205536 | 2018-10-31 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2020090763A1 true WO2020090763A1 (fr) | 2020-05-07 |
Family
ID=70463771
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2019/042240 Ceased WO2020090763A1 (fr) | 2019-10-29 | Processing device, system, processing method, and program |
Country Status (2)
| Country | Link |
|---|---|
| JP (1) | JP7089650B2 (fr) |
| WO (1) | WO2020090763A1 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113899446A (zh) * | 2021-12-09 | 2022-01-07 | Beijing Jingyi Automation Equipment Technology Co., Ltd. | Wafer transfer system detection method and wafer transfer system |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2012060107A1 (fr) * | 2010-11-04 | 2012-05-10 | Panasonic Corporation | Device and method for biometric sound analysis |
| JP2013106906A (ja) * | 2011-11-24 | 2013-06-06 | Omron Healthcare Co Ltd | Sleep evaluation device |
| JP2013202101A (ja) * | 2012-03-27 | 2013-10-07 | Fujitsu Ltd | Apnea state determination device, apnea state determination method, and apnea state determination program |
-
2019
- 2019-10-29 WO PCT/JP2019/042240 patent/WO2020090763A1/fr not_active Ceased
- 2019-10-29 JP JP2020553908A patent/JP7089650B2/ja active Active
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2012060107A1 (fr) * | 2010-11-04 | 2012-05-10 | Panasonic Corporation | Device and method for biometric sound analysis |
| JP2013106906A (ja) * | 2011-11-24 | 2013-06-06 | Omron Healthcare Co Ltd | Sleep evaluation device |
| JP2013202101A (ja) * | 2012-03-27 | 2013-10-07 | Fujitsu Ltd | Apnea state determination device, apnea state determination method, and apnea state determination program |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113899446A (zh) * | 2021-12-09 | 2022-01-07 | Beijing Jingyi Automation Equipment Technology Co., Ltd. | Wafer transfer system detection method and wafer transfer system |
| CN113899446B (zh) * | 2021-12-09 | 2022-03-22 | Beijing Jingyi Automation Equipment Technology Co., Ltd. | Wafer transfer system detection method and wafer transfer system |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2020090763A1 (ja) | 2021-09-24 |
| JP7089650B2 (ja) | 2022-06-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6555692B2 (ja) | Method for measuring respiratory rate and system for measuring respiratory rate | |
| KR101619611B1 (ko) | Apparatus and method for estimating respiration rate using a microphone | |
| EP3334337B1 (fr) | Monitoring of sleep phenomena | |
| US8882683B2 (en) | Physiological sound examination device and physiological sound examination method | |
| JP5873875B2 (ja) | Signal processing device, signal processing system, and signal processing method | |
| EP3471610B1 (fr) | Quantitative seismocardiography | |
| JP2013518607A (ja) | Method and system for classifying the quality of physiological signals for portable monitoring | |
| JP7297190B2 (ja) | Alarm issuing system | |
| JP6522327B2 (ja) | Pulse wave analysis device | |
| CN107106118B (zh) | Method for detecting a dicrotic notch | |
| KR101706197B1 (ko) | Apparatus and method for obstructive sleep apnea screening using a piezoelectric sensor | |
| JP2013544548A5 (fr) | | |
| CN104027109A (zh) | Atrial fibrillation analysis device and program | |
| WO2003005893A2 (fr) | Respiration and heart rate monitor | |
| JP2001190510A (ja) | Periodic biological information measuring device | |
| WO2015178439A2 (fr) | Device and method for supporting diagnosis of central/obstructive sleep apnea, and computer-readable medium containing a program for supporting diagnosis of central/obstructive sleep apnea | |
| JP7089650B2 (ja) | Processing device, system, processing method, and program | |
| JP7122225B2 (ja) | Processing device, system, processing method, and program | |
| Rohman et al. | Analysis of the Effectiveness of Using Digital Filters in Electronic Stethoscopes | |
| JP2009254611A (ja) | Cough detection device | |
| KR102242479B1 (ko) | Digital respiratory auscultation method using skin images | |
| JP7193080B2 (ja) | Information processing device, system, information processing method, and program | |
| Makalov et al. | Inertial Acoustic Electronic Auscultation System for the Diagnosis of Lung Diseases | |
| KR101587989B1 (ko) | Method for quantifying heart rhythm regularity using body sounds, and auscultation equipment | |
| EP4326145A1 (fr) | Multiple sensor and method | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19880656 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2020553908 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 19880656 Country of ref document: EP Kind code of ref document: A1 |