WO2025018587A1 - Mixed reality-based ultrasound image output system and method - Google Patents
Mixed reality-based ultrasound image output system and method
- Publication number
- WO2025018587A1 (PCT application PCT/KR2024/007759, KR2024007759W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- ultrasound image
- content
- lesion
- mixed reality
- lesion area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/46—Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
- A61B8/461—Displaying means of special interest
- A61B8/463—Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient; User input means
- A61B5/742—Details of notification to user or communication with user or patient; User input means using visual displays
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Clinical applications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5223—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5292—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves using additional data, e.g. patient information, image labeling, acquisition parameters
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B2090/364—Correlation of different images or relation of image positions in respect to the body
- A61B2090/365—Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
Definitions
- the present invention relates to a mixed reality-based ultrasound image output system and method, and more particularly, to a mixed reality-based ultrasound image output system and method that aligns and outputs an ultrasound image on an examination area using a device that displays in an AR or VR manner.
- Ultrasound-guided biopsy is a test that obtains tissue samples by inserting a thin needle into a mass or local lesion outside the body under ultrasound guidance. Ultrasound-guided biopsy can help the needle accurately pierce the lesion, which can increase accuracy and reduce false negatives. Accordingly, ultrasound-guided biopsy is widely used in clinical practice.
- Ultrasound-guided radiofrequency ablation is a method of treating tumors by burning them using the heat generated by inserting a needle-shaped electrode into the tumor and then passing an electric current while looking at the ultrasound examination screen.
- Ultrasound-guided RF ablation has the advantage of being able to quickly and accurately insert the electrode inside the cancerous tumor and treat it through real-time guidance compared to CT (Computerized Tomography) and MRI (Magnetic Resonance Imaging) images.
- Ultrasound-guided biopsy can cause high fatigue because the operator must hold the ultrasound probe for a long time and frequently turn the head to check and compare the ultrasound image with the biopsy site.
- Ultrasound-guided biopsy is difficult to perform because the ultrasound monitor and biopsy site cannot be viewed at the same time, and it requires high expertise and skills, so only experienced specialists can perform it.
- the ultrasound equipment is often placed far away due to the patient's bed, numerous pieces of equipment in the operating room, etc., which can hinder ultrasound-guided biopsy.
- As cancerous tumors are detected earlier, the tumors to be treated become progressively smaller and harder to locate with ultrasound imaging, so ultrasound-guided radiofrequency ablation therapy may take longer.
- Ultrasound-guided radiofrequency ablation therapy requires a high level of concentration during the treatment, making it difficult to timely monitor the patient's physical risk signs such as blood pressure, heart rate, and oxygen saturation.
- the purpose of the present invention is to enable a practitioner to simultaneously view a biopsy site and ultrasound through a wearable output device, and to superimpose an ultrasound image and a lesion site on the biopsy site.
- the present invention seeks to synchronize input signals to ensure the reliability of output information.
- the present invention comprises: a data receiving unit which receives an ultrasound image and a bio-signal of a subject; a first content generating unit which generates a first MR (Mixed-reality) content by aligning the ultrasound image with a specific object; a second content generating unit which generates a second MR content by aligning the bio-signal with a specific space; and an output device which displays the MR content generated by the first content generating unit or the second content generating unit; wherein the first content generating unit is characterized in that it recognizes the probe shape of an ultrasound diagnostic device through a camera provided in the output device, and detects movement of the probe shape to change the output position of the first MR content.
- the method may further include a lesion detection unit that detects a lesion area in the ultrasound image using artificial intelligence learned from an ultrasound image data set in which the lesion area is annotated.
- the first content generation unit can change the first MR content so that the lesion area detected by the lesion detection unit is distinguished from other areas.
- the first content generation unit can change the MR content so that the lesion area is not distinguished from other areas when it is confirmed through the ultrasound image that the needle has contacted the lesion area.
- the method may further include a synchronization unit that synchronizes the ultrasound image, the bio-signal, and the lesion area so that the ultrasound image and the bio-signal measured at the same time can be output and the lesion area detected at the same time can be displayed.
- the synchronization unit may include a delay synchronization module that synchronizes based on the most delayed signal among the ultrasound image, the bio-signal, and the lesion area when the lesion area detected by the lesion detection unit is smaller than a preset reference value; and a prediction synchronization module that synchronizes based on the most advanced signal among the ultrasound image, the bio-signal, and the lesion area when the lesion area detected by the lesion detection unit is larger than a preset reference value.
- the prediction synchronization module can synchronize another signal to the most advanced signal by using extrapolation.
- the output device includes a first camera for photographing the forward direction of the wearer and a second camera for photographing the face of the wearer, wherein the first camera can recognize the probe shape of the ultrasonic diagnostic device, and the second camera can track the direction of the wearer's gaze.
- the output device can display the second MR content along the direction of the wearer's gaze tracked by the second camera.
- the output device can be controlled by the movement of the wearer's eye or pupil captured by the second camera.
- the present invention includes a data receiving step of receiving an ultrasound image and a bio-signal of a subject; a first content generating step of generating a first MR (Mixed-reality) content by aligning the ultrasound image with a specific object; a second content generating step of generating a second MR content by aligning the bio-signal with a specific space; and a step of displaying the MR content generated in the first content generating step or the second content generating step on an output device; and another feature of the first content generating step is that the shape of a probe of an ultrasound diagnostic device is recognized through a camera provided in the output device, and the movement of the probe shape is detected to change the output position of the first MR content.
- the present invention has the advantage of reducing operator fatigue by allowing the biopsy site and ultrasound to be viewed simultaneously, and increasing accuracy and stability by allowing the ultrasound image and lesion site to be superimposed on the biopsy site.
- the present invention has an advantage in that it can ensure patient safety by displaying a physical danger signal in the line of sight of the operator through an output device that can be worn by the operator.
- the present invention has the advantage of ensuring the reliability of output information by synchronizing input signals and preventing the needle from being inserted into a different part.
- the present invention can increase the freedom of the operator by providing an environment in which equipment can be controlled without touching.
- Figure 1 shows a configuration diagram of a mixed reality-based ultrasound image output system according to an embodiment of the present invention.
- Figure 2 shows a configuration of a stand-alone mixed reality-based ultrasound image output system according to an embodiment of the present invention.
- FIG. 3 shows a configuration of a mixed reality-based ultrasound image output system using a smartphone tethering method according to an embodiment of the present invention.
- FIG. 4 is a drawing for explaining the relationship between the size of MR content output to an output device and the probe angle according to an embodiment of the present invention.
- Figure 5 shows a configuration diagram of a lesion detection unit according to an embodiment of the present invention.
- Figure 6 shows a process in which a lesion detection unit according to an embodiment of the present invention detects a lesion using artificial intelligence.
- Figure 7 shows a configuration diagram of a synchronization unit according to an embodiment of the present invention.
- FIG. 8 is a diagram illustrating a synchronization method of a delay synchronization module according to an embodiment of the present invention.
- FIG. 9 is a diagram illustrating a synchronization method of a prediction synchronization module according to an embodiment of the present invention.
- Figure 10 shows a screen displayed through an output device according to an embodiment of the present invention.
- a data receiving unit that receives an ultrasound image and a bio-signal of a subject;
- a first content generation unit that generates first MR (Mixed-reality) content by aligning the ultrasound image with a specific object;
- a second content generation unit that generates second MR content by aligning the bio-signal with a certain space; and
- an output device that displays the MR content generated by the first content generation unit or the second content generation unit,
- the mixed reality-based ultrasound image output system being characterized in that the first content generation unit recognizes the probe shape of an ultrasound diagnostic device through a camera equipped in the output device, detects movement of the probe shape, and changes the output position of the first MR content.
- first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
- the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
- FIG. 1 illustrates a configuration diagram of a mixed reality-based ultrasound image output system (100) according to an embodiment of the present invention.
- the mixed reality-based ultrasound image output system (100) may include a data receiving unit (110), a first content generating unit (120), a second content generating unit (130), a lesion detection unit (140), a synchronization unit (150), and an output device (160).
- the mixed reality-based ultrasound image output system (100) has the advantage of ensuring the reliability of the output information by synchronizing the input signals, thereby preventing the needle from being inserted into the wrong site because signals measured at different times were displayed together.
- the mixed reality-based ultrasound image output system (100) can increase the freedom of the operator by providing an environment in which the equipment can be controlled without touching.
- In the stand-alone configuration shown in FIG. 2, the AI analysis server can output AI analysis results, where AI analysis may mean lesion detection. That is, the AI analysis server may correspond to the lesion detection unit (140) of the present invention.
- the AI analysis server may provide the AI analysis results to an MR HMD (Head Mounted Display) or glasses via the Internet.
- MR HMD or glasses can receive patient bio-signal change information and ultrasound images from a medical device data processing server via the Internet, and using the received data, can create first MR (Mixed-reality) content by aligning the ultrasound image with a specific object, and can create second MR content by aligning the bio-signal with a certain space. That is, in the stand-alone method, the first content generation unit (120) and the second content generation unit (130) can be included in the output device (160).
- FIG. 3 shows the configuration of a mixed reality-based ultrasound image output system using a smartphone tethering method according to an embodiment of the present invention.
- a mixed reality-based ultrasound image output system (100) can be connected to and operated by a smartphone.
- In the smartphone tethering method, the computing operations required for implementing mixed reality are performed on the smartphone rather than in the output device (160) itself.
- the smartphone tethering method has the advantage of being lightweight and generating less heat because there is no processor or battery for computing inside the output device (160).
- the smartphone tethering method can transmit patient bio-signal change information, ultrasound images, and AI analysis results to a smartphone via the Internet, and the smartphone can use the transmitted data to align the ultrasound images with a specific object to generate first MR (Mixed-reality) content, and align the bio-signals with a specific space to generate second MR content. That is, the smartphone tethering method may not include the first content generation unit (120) and the second content generation unit (130) within the output device (160).
- the data receiving unit (110) can receive ultrasound images and bio-signals of a subject.
- the data receiving unit (110) can be connected to an ultrasound diagnostic device and a bio-signal measuring device by wire or wirelessly to receive ultrasound images and bio-signals in real time.
- the data receiving unit (110) can transmit the received ultrasound images and bio-signals to a first content generating unit (120), a second content generating unit (130), or a lesion detecting unit (140).
- When the data receiving unit (110) is connected wirelessly to the ultrasound diagnostic device or the bio-signal measuring device, it can use a wide-area mobile communication network such as CDMA (Code Division Multiple Access), WCDMA (Wideband CDMA), LTE (Long Term Evolution), or Wi-Fi.
- When the data receiving unit (110) is connected wirelessly to another component, it can use short-range wireless communication such as Bluetooth.
- the first content generation unit (120) can generate the first MR (Mixed-reality) content by aligning the ultrasound image with a specific object.
- the specific object may mean the body part at which the ultrasound image can be directly superimposed 1:1 on the anatomical structure of the location that the ultrasound image is intended to measure.
- Conventionally, a tracking pattern (marker, QR code, etc.) attached to the probe of the ultrasound diagnostic device was analyzed to calculate the position where the ultrasound image should be aligned.
- However, because the operator continuously moves the probe to obtain the necessary ultrasound image, the tracking pattern attached to the probe may be deformed or hidden, in which case the camera cannot recognize the tracking pattern.
- the first content generation unit (120) can recognize the probe shape of the ultrasonic diagnostic device through a camera equipped in the output device (160) and detect the movement of the probe shape to change the output position of the first MR content.
- Using the recognized probe shape, the first content generation unit (120) can output the first MR content in the direction the front of the probe faces; preferably, it can change the output position of the first MR content so that, starting from the point where the probe contacts the subject's body, the ultrasound image directly overlaps 1:1 with the anatomical structure being measured in front of the probe.
- Since an ultrasound image depicts the body plane lying beneath (perpendicular to) the probe, outputting the first MR content so that the ultrasound image is directly superimposed on the anatomy being measured means that the output device renders the image as if it were physically located under the probe.
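- As an editorial illustration only (not part of the original disclosure), the following Python sketch shows one way the output position of the first MR content could be derived from a recognized probe pose: the ultrasound plane is anchored at the probe's contact point and extended in the direction the front of the probe faces. The function name, probe dimensions, and coordinate conventions are assumptions.

```python
import numpy as np

def ultrasound_plane_corners(probe_pos, probe_dir, probe_width_m, image_depth_m):
    """Place the ultrasound image plane directly beneath the probe contact point.

    probe_pos: (3,) contact point of the probe on the body, in world coordinates.
    probe_dir: (3,) unit vector pointing from the probe into the body (the probe's "front").
    probe_width_m, image_depth_m: assumed physical extent of the scanned plane.
    Returns the four corners of the plane on which the first MR content is rendered.
    """
    probe_dir = probe_dir / np.linalg.norm(probe_dir)
    # Pick any lateral axis perpendicular to the insertion direction.
    ref = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(ref, probe_dir)) > 0.9:
        ref = np.array([0.0, 1.0, 0.0])
    lateral = np.cross(probe_dir, ref)
    lateral /= np.linalg.norm(lateral)

    half_w = 0.5 * probe_width_m
    top_left = probe_pos - half_w * lateral
    top_right = probe_pos + half_w * lateral
    bottom_left = top_left + image_depth_m * probe_dir
    bottom_right = top_right + image_depth_m * probe_dir
    return np.array([top_left, top_right, bottom_right, bottom_left])

# Example: probe touching the skin at the origin, aimed straight down (-z).
corners = ultrasound_plane_corners(
    probe_pos=np.array([0.0, 0.0, 0.0]),
    probe_dir=np.array([0.0, 0.0, -1.0]),
    probe_width_m=0.05,
    image_depth_m=0.08,
)
print(corners)
```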
- FIG. 4 is a diagram for explaining the relationship between the size of MR content output to an output device according to an embodiment of the present invention and the probe angle.
- Referring to FIG. 4, the ultrasound image area output within the practitioner's AR glasses can be seen. When the practitioner wears the AR glasses, the system calculates how the measurement area under the probe appears from the position of the practitioner's eyes and outputs it to the AR glasses. Therefore, if the MR image is output so that it is directly superimposed 1:1 on the anatomical structure being measured, the practitioner can know the exact three-dimensional location of the actual lesion inside the body and accurately insert a needle into that location.
- the size of the ultrasound image output to the AR glasses can be determined by calculating how the measurement area under the probe appears to the practitioner's eyes.
- In one case shown in FIG. 4, depending on the probe angle, the viewing angle for the measurement area under the probe becomes larger than when the probe emits ultrasound perpendicularly to the subject's body, and accordingly the ultrasound image area displayed within the operator's AR glasses also becomes larger.
- In the other case, the viewing angle for the measurement area below the probe becomes smaller than when the probe emits ultrasound perpendicularly to the subject's body, and accordingly the ultrasound image area displayed within the AR glasses also becomes smaller.
- the size of the ultrasound image displayed on the output device may vary depending on the positional relationship between the output device worn by the practitioner and the probe, such as the angle of the probe, the distance between the practitioner and the probe, etc.
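- As a rough geometric sketch under a pinhole-viewing assumption (not taken from the patent), the following Python example estimates how large the measurement area under the probe appears from the practitioner's eye position; the subtended angle grows or shrinks with the probe angle and viewing distance, which is the relationship FIG. 4 describes. All coordinates are illustrative.

```python
import numpy as np

def subtended_angle_deg(eye_pos, point_a, point_b):
    """Angle subtended at the eye by two corners of the ultrasound plane.

    A larger angle means the ultrasound image area occupies more of the AR display;
    the angle changes as the plane tilts or the eye moves closer or farther away.
    """
    va = point_a - eye_pos
    vb = point_b - eye_pos
    cos_t = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

eye = np.array([0.0, -0.4, 0.3])                 # practitioner's eye (illustrative)
top = np.array([0.0, 0.0, 0.0])                  # probe contact point
bottom_vertical = np.array([0.0, 0.0, -0.08])    # plane bottom, probe held vertically
bottom_tilted = np.array([0.0, 0.05, -0.06])     # plane bottom, probe tilted

print(subtended_angle_deg(eye, top, bottom_vertical))  # baseline displayed size
print(subtended_angle_deg(eye, top, bottom_tilted))    # tilting changes the displayed size
```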
- If the size of the displayed ultrasound image is excessively small, the practitioner may have difficulty performing the procedure while viewing it.
- In that case, a separate ultrasound image may be displayed in the output device. Accordingly, the practitioner can safely perform the procedure through the separately displayed ultrasound image even if the superimposed ultrasound image appears excessively small depending on the positional relationship between the output device and the probe.
- the separate ultrasound image may be displayed in a specific area of the AR display according to the practitioner's control.
- the first content generation unit (120) determines the location at which the ultrasound image is augmented not by detecting a tracking pattern but from the shape of the probe, so reliable ultrasound images can be output even while the probe moves during the procedure.
- Because the mixed reality-based ultrasound image output system (100) uses the shape of the probe rather than a tracking pattern, it can be applied to existing ultrasound diagnostic devices with only modularized equipment and without a separate installation.
- the first content generation unit (120) can change the first MR content so that the lesion area detected by the lesion detection unit (140) is distinguished from other areas. Even when an operator can recognize a lesion in the ultrasound image, it is very difficult to precisely separate the lesion area from the surrounding areas, so at present whether a lesion is detected depends on the operator's skill.
- the first content generation unit (120) can emphasize the lesion area detected by the lesion detection unit (140) by highlighting its outline or by changing the color of its interior. If the user's setting indicates that the inside of the lesion should remain visible, the first content generation unit (120) can emphasize the outline of the detected lesion area; if the setting indicates that the size of the lesion should be checked, it can emphasize the lesion area by changing the color of its interior.
- the first content generation unit (120) can change the MR content so that the lesion area is no longer distinguished from other areas when it is confirmed through the ultrasound image that the needle has contacted the lesion area. Even when the operator has identified the lesion area, accurately inserting a needle into it to acquire lesion tissue or treat the lesion is a very difficult technique. Therefore, the first content generation unit (120) can visually notify the operator that the lesion area has been reached by restoring the highlighted lesion area so that it looks the same as the other areas once the needle has been accurately inserted into the lesion area.
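- A minimal sketch, assuming OpenCV is used for the overlay (the patent does not name a library): the detected lesion mask can be emphasized either by drawing its outline or by tinting its interior, and passing mode=None leaves the frame unhighlighted, which corresponds to restoring the lesion area after needle contact.

```python
import cv2
import numpy as np

def highlight_lesion(frame_bgr, lesion_mask, mode="outline"):
    """Overlay the detected lesion area on an ultrasound frame.

    mode="outline" draws the lesion boundary so its interior stays visible;
    mode="fill" tints the interior so its extent is easy to see;
    mode=None returns an unchanged copy (e.g. after needle contact is confirmed).
    """
    out = frame_bgr.copy()
    if mode is None:
        return out
    contours, _ = cv2.findContours(lesion_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if mode == "outline":
        cv2.drawContours(out, contours, -1, (0, 0, 255), thickness=2)
    elif mode == "fill":
        overlay = out.copy()
        cv2.drawContours(overlay, contours, -1, (0, 0, 255), thickness=cv2.FILLED)
        out = cv2.addWeighted(overlay, 0.4, out, 0.6, 0)
    return out
```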
- the second content generation unit (130) can generate second MR content by aligning a biosignal to a predetermined space.
- the predetermined space means a space where the operator can check the output biosignal while focusing on the ultrasound image and performing a biopsy or treatment. Preferably, it can mean the gaze direction of the operator's right or left eye.
- the biosignal can include an electrocardiogram (ECG), an electroencephalogram (EEG), an electromyogram (EMG), a galvanic skin response (GSR), a skin temperature (SKT), a photoplethysmography (PPG), a pulse rate (PR), blood pressure, oxygen saturation, etc.
- the second content generation unit (130) can determine that a dangerous situation has occurred for the subject if the bio-signal exceeds the reference value and can provide a warning or notification to the operator.
- the second content generation unit (130) can change the second MR content so that the screen on which the bio-signal is output blinks or flashes red when an abnormality in the bio-signal occurs.
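- A hedged sketch of the threshold check described above; the reference ranges, signal names, and alert policy below are placeholders, not values from the disclosure.

```python
# Hypothetical reference ranges; real limits would come from clinical settings.
REFERENCE_RANGES = {
    "systolic_bp": (90, 140),    # mmHg
    "heart_rate": (50, 110),     # beats per minute
    "spo2": (94, 100),           # percent
}

def abnormal_signals(sample):
    """Return the names of bio-signals that fall outside their reference range."""
    out_of_range = []
    for name, value in sample.items():
        low, high = REFERENCE_RANGES[name]
        if not (low <= value <= high):
            out_of_range.append(name)
    return out_of_range

sample = {"systolic_bp": 85, "heart_rate": 72, "spo2": 97}
alerts = abnormal_signals(sample)
if alerts:
    # The second MR content could blink or turn red for these entries.
    print("warning:", alerts)
```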
- Fig. 5 shows a configuration diagram of a lesion detection unit (140) according to an embodiment of the present invention.
- the lesion detection unit (140) may include an input unit (141), a preprocessing unit (142), a learning unit (143), a lesion prediction unit (144), and a postprocessing unit (145).
- the lesion detection unit (140) can detect a lesion area in an ultrasound image using artificial intelligence learned from an ultrasound image data set in which the lesion area is annotated. Specifically, the lesion detection unit (140) can detect a lesion area in an ultrasound image using a semantic segmentation model.
- semantic segmentation is the process of dividing a digital image into a plurality of pixel sets; through segmentation, the representation of the image can be simplified into a form that is easier to interpret.
- the lesion detection unit (140) can use a CNN-based model as a semantic segmentation model.
- the input unit (141) can receive ultrasound images of normal and patient groups for learning, testing, or actual lesion detection.
- the input unit (141) can receive ultrasound images from the data receiving unit (110).
- the input unit (141) can transmit the received ultrasound images to the preprocessing unit (142).
- the preprocessing unit (142) can preprocess the ultrasound image received from the input unit (141) to be advantageous for learning or detection.
- the preprocessing unit (142) can perform preprocessing in a manner of normalizing pixel values.
- the preprocessing unit (142) can perform preprocessing to adjust the size and resolution of the ultrasound image received so that it can be input to the artificial neural network.
- the ultrasound image used for learning may include the patient's personal information, which may cause security and personal information leakage issues. Therefore, the preprocessing unit (142) according to the present invention can perform preprocessing to de-identify the patient information in the ultrasound image and metadata.
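- An illustrative preprocessing sketch covering the three operations mentioned (resizing, pixel-value normalization, and de-identification); the target size and the metadata field names (DICOM-style tags) are assumptions, not requirements stated in the patent.

```python
import numpy as np
import cv2

def preprocess_frame(frame_gray, target_size=(256, 256)):
    """Resize and normalize one grayscale ultrasound frame for the segmentation network."""
    resized = cv2.resize(frame_gray, target_size, interpolation=cv2.INTER_AREA)
    normalized = resized.astype(np.float32) / 255.0        # pixel values scaled to [0, 1]
    return normalized

def deidentify_metadata(metadata, fields=("PatientName", "PatientID", "PatientBirthDate")):
    """Blank out patient-identifying fields before the image is used for training."""
    cleaned = dict(metadata)
    for field in fields:
        if field in cleaned:
            cleaned[field] = ""
    return cleaned
```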
- the learning unit (143) can train artificial intelligence using ultrasound images that have been preprocessed and have lesion areas annotated.
- the artificial intelligence to be trained can be a CNN-based or SOTA architecture-based semantic segmentation model.
- a convolutional neural network is a type of deep learning model that is inspired by the structure of the visual cortex of animals and is designed to process data with grid patterns such as images.
- a convolutional neural network may generally include a convolutional layer, a pooling layer, and a fully connected layer.
- the convolutional layer and the pooling layer may exist repeatedly in the neural network, and the input data may be converted to an output through these layers.
- the convolutional layer uses a kernel (or mask) for feature extraction, and the element-wise product between each element of the kernel and the input value is calculated at each location and summed to obtain an output value, which is called a feature map. This procedure may be repeated by applying multiple kernels to form an arbitrary number of feature maps.
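- A small NumPy example of the feature-map computation just described: the kernel is slid over the input, the element-wise products are summed at each location, and the result is a feature map. As in most deep-learning libraries, the kernel is applied without flipping (cross-correlation); the example image and kernel are arbitrary.

```python
import numpy as np

def conv2d_single(image, kernel):
    """Valid 2-D convolution of one channel with one kernel (no padding, stride 1)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Element-wise product of the kernel and the image patch, then summed.
            feature_map[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return feature_map

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)   # simple vertical-edge kernel
print(conv2d_single(image, edge_kernel))          # 3x3 feature map
```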
- the convolutional and pooling layers perform feature extraction, while the fully connected layer maps the extracted features to the final output, such as a classification operation.
- Convolutional neural networks can be trained in a way that minimizes output error. Separately from the forward-propagation process that carries values from the input layer to the output layer, backpropagation computes the error between the network's output for the training data and the corresponding target value, and updates the weights of the nodes in each layer to reduce this error.
- the training process in a convolutional neural network can be summarized as the process of finding the kernel that extracts the output value with the least error based on the given training data.
- the kernel is the only parameter that is automatically learned during the training process of the convolutional layer.
- the kernel size, the number of kernels, and the padding in the convolutional neural network are hyperparameters that must be set before starting the training process, and therefore, different convolutional neural network models can be distinguished depending on the kernel size, the number of kernels, and the number of convolutional layers and pooling layers.
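- For illustration, a deliberately tiny PyTorch training loop for a fully convolutional lesion-segmentation network: the forward pass produces per-pixel logits, the loss measures the error against the annotated mask, and backpropagation updates the kernel weights. The architecture, sizes, and random stand-in data are assumptions, not the model described in the patent.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """A deliberately small fully convolutional network producing a per-pixel lesion mask."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, kernel_size=1)   # one logit per pixel

    def forward(self, x):
        return self.head(self.features(x))

model = TinySegNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: 4 grayscale frames and their annotated lesion masks.
images = torch.rand(4, 1, 64, 64)
masks = (torch.rand(4, 1, 64, 64) > 0.8).float()

for step in range(5):                      # a few steps only, for illustration
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, masks)          # error between prediction and annotation
    loss.backward()                        # backpropagation
    optimizer.step()                       # kernel weights updated to reduce the error
    print(step, loss.item())
```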
- the lesion prediction unit (144) can predict and detect a lesion area from an ultrasound image of a subject using artificial intelligence learned in the learning unit (143).
- the post-processing unit (145) can perform post-processing to distinguish the lesion area detected by the lesion prediction unit (144) from other areas. This is the same function as that performed in the first content generation unit (120) described above.
- Figure 6 shows a process in which a lesion detection unit (140) according to an embodiment of the present invention detects a lesion using artificial intelligence.
- the lesion detection unit (140) can receive an ultrasound image and input it into a CNN (Convolutional Neural Network)-based, high-performance semantic segmentation model to obtain a binary mask in which the predicted lesion area is distinguished from the other areas. Thereafter, the lesion detection unit (140) can select a lesion area from the predicted lesion area.
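- One possible post-processing rule for selecting a lesion area from the predicted lesion area (the patent does not specify the rule) is to threshold the model output and keep the largest connected component, sketched below with SciPy as an assumption.

```python
import numpy as np
from scipy import ndimage

def select_lesion_region(probability_map, threshold=0.5):
    """Turn a per-pixel lesion probability map into a single lesion area.

    Thresholding yields a binary mask of predicted lesion pixels; the largest
    connected component is then kept as the lesion area to display.
    """
    binary_mask = probability_map > threshold
    labels, count = ndimage.label(binary_mask)
    if count == 0:
        return np.zeros_like(binary_mask)
    sizes = ndimage.sum(binary_mask, labels, index=range(1, count + 1))
    largest = int(np.argmax(sizes)) + 1
    return labels == largest
```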
- Fig. 7 shows a configuration diagram of a synchronization unit (150) according to an embodiment of the present invention.
- the synchronization unit (150) may include a delay synchronization module (151) and a prediction synchronization module (152).
- the synchronization unit (150) can synchronize the ultrasound image, the biosignal, and the lesion area so that the ultrasound image and the biosignal measured at the same time can be output, and the lesion area detected at the same time can be displayed.
- the synchronization unit (150) plays an important role in allowing the operator to perform biopsy and treatment based on information at the same time. If the ultrasound image and the lesion area are measured at different times, the operator expects that there will be a lesion in the displayed lesion area, but in reality, there may be no lesion at that location because it is a lesion area at a different time.
- the synchronization unit (150) can synchronize the ultrasound image, bio-signal, or lesion area displayed to the operator through the output device (160) with those measured at the same time, thereby ensuring stability and reliability.
- Fig. 8 is a diagram for explaining a synchronization method of a delay synchronization module according to an embodiment of the present invention.
- the delay synchronization module (151) can synchronize based on the most delayed signal among the ultrasound image, biosignal, and lesion region when the lesion region detected by the lesion detection unit (140) is smaller than a preset reference value.
- Referring to FIG. 8, the lesion area, ultrasound image, and bio-signal can be delivered with delays of T1, T2, and T3, respectively (T1 > T2 > T3).
- the ultrasound image can be synchronized with the lesion area by outputting the image from a time point that is earlier by Tp2, and the bio-signal can be synchronized by outputting the signal from a time point that is earlier by Tp3.
- This synchronization method has the advantage of high reliability and accuracy because the measured values already exist, but has the disadvantage of a long delay time.
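- A simplified sketch of the delay-synchronization idea, assuming known per-stream latencies: every stream is buffered and released with the delay of the slowest stream so that samples measured at the same time are displayed together. Stream names, delays, and the API are illustrative, not part of the disclosure.

```python
from collections import deque

class DelaySynchronizer:
    """Align several streams by delaying all of them to the latency of the slowest one.

    delays: assumed per-stream latency in seconds, e.g. the lesion-area
    detection is the slowest stream in this example.
    """
    def __init__(self, delays):
        self.max_delay = max(delays.values())
        self.buffers = {name: deque() for name in delays}

    def push(self, name, measured_at, value):
        self.buffers[name].append((measured_at, value))

    def output(self, now):
        """Return the newest sample per stream that was measured at least max_delay ago."""
        frame = {}
        for name, buf in self.buffers.items():
            while buf and now - buf[0][0] >= self.max_delay:
                frame[name] = buf.popleft()[1]
        return frame

sync = DelaySynchronizer({"lesion": 0.30, "ultrasound": 0.10, "biosignal": 0.02})
sync.push("ultrasound", measured_at=0.00, value="frame_0")
sync.push("biosignal", measured_at=0.00, value={"hr": 72})
sync.push("lesion", measured_at=0.00, value="mask_0")
print(sync.output(now=0.35))   # all three samples from t=0.00 are released together
```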
- FIG. 9 is a diagram illustrating a synchronization method of a prediction synchronization module (152) according to an embodiment of the present invention.
- the prediction synchronization module (152) can synchronize based on the most advanced signal among the ultrasound image, biosignal, and lesion area when the lesion area detected by the lesion detection unit (140) is larger than a preset reference value.
- the prediction synchronization module (152) can synchronize another signal to the most advanced signal using extrapolation. Extrapolation refers to a method of estimating a value beyond the range covered by the available data.
- the prediction synchronization module (152) can predict data up to the point in time of the most advanced signal using artificial intelligence.
- the prediction synchronization module (152) can synchronize to the bio-signal by predicting and outputting the lesion area at a time point Tf1 ahead, and predicting and outputting the ultrasound image at a time point Tf2 ahead.
- This synchronization method has the advantage of a short delay time because it synchronizes at the shortest delay time, but has the disadvantage of low reliability and accuracy because it predicts and outputs values that have not yet been measured.
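- A minimal sketch of the prediction-synchronization idea using linear extrapolation (one simple choice; the patent only states that extrapolation or AI-based prediction may be used): the delayed signal's recent samples are fit and projected to the timestamp of the most advanced signal. All numbers are illustrative.

```python
import numpy as np

def extrapolate_to(timestamps, values, target_time):
    """Linearly extrapolate a delayed signal to the time of the most advanced signal.

    timestamps, values: the last few measured samples of the delayed signal.
    target_time: timestamp of the most advanced (least delayed) signal.
    """
    slope, intercept = np.polyfit(timestamps, values, deg=1)
    return slope * target_time + intercept

# The bio-signal is already available up to t = 1.00 s; the lesion-area size
# has only been measured up to t = 0.85 s, so its value at 1.00 s is predicted.
lesion_times = np.array([0.55, 0.65, 0.75, 0.85])
lesion_sizes = np.array([120.0, 118.0, 117.0, 115.0])   # e.g. lesion area in pixels
print(extrapolate_to(lesion_times, lesion_sizes, target_time=1.00))
```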
- the mixed reality-based ultrasound image output system (100) can use a delay synchronization module (151) with high accuracy when the detected lesion is small, and can use a prediction synchronization module (152) with short delay time when the detected lesion is large.
- the delay synchronization module (151) and the prediction synchronization module (152) can be selected according to user settings.
- Fig. 10 illustrates a screen displayed through an output device (160) according to an embodiment of the present invention.
- the output device (160) can display MR content generated by the first content generation unit (120) or the second content generation unit (130).
- the output device (160) can be a device that can be worn by a practitioner.
- the output device (160) may be an AR/VR/MR device, glasses, or HMD that can display the first MR content and the second MR content in an augmented manner.
- the output device (160) may include a processor and a battery to perform computing operations required for implementing mixed reality internally.
- the output device (160) may not include a processor and a battery to perform computing operations in order to reduce weight.
- the output device (160) may include a first camera that photographs the front direction of the wearer and a second camera that photographs the face of the wearer.
- the first camera may recognize the probe shape of the ultrasonic diagnostic device, and the second camera may track the direction of the wearer's gaze.
- the first camera may be used to recognize the probe shape in the first content generation unit (120) described above.
- the output device (160) can display the second MR content along the direction of the wearer's gaze tracked by the second camera.
- the second MR content can be composed of bio-signals, and the operator must be able to check their status even while focusing on the ultrasound image. Accordingly, the output device (160) can detect the gaze direction of the wearer's left or right eye using the second camera and display the second MR content near that gaze direction.
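- An illustrative sketch of placing the second MR content along the tracked gaze direction; the offset angle and viewing distance are assumptions chosen so the bio-signal panel stays near the gaze without covering the ultrasound image, and the function name is hypothetical.

```python
import numpy as np

def place_second_content(gaze_origin, gaze_dir, distance_m=0.6, offset_deg=10.0):
    """Anchor the bio-signal panel near, but slightly off, the wearer's gaze direction.

    gaze_origin / gaze_dir come from the eye-tracking (second) camera; the panel is
    placed at a fixed viewing distance and rotated a few degrees to the side so it
    stays visible without covering the ultrasound image the operator is focusing on.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    angle = np.radians(offset_deg)
    # Rotate the gaze direction around the vertical (y) axis by the offset angle.
    rot_y = np.array([[np.cos(angle), 0.0, np.sin(angle)],
                      [0.0, 1.0, 0.0],
                      [-np.sin(angle), 0.0, np.cos(angle)]])
    panel_dir = rot_y @ gaze_dir
    return gaze_origin + distance_m * panel_dir

anchor = place_second_content(gaze_origin=np.zeros(3), gaze_dir=np.array([0.0, 0.0, -1.0]))
print(anchor)   # world-space position at which to render the bio-signal panel
```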
- the output device (160) can be controlled by the movement of the wearer's eyes or pupils captured by the second camera.
- the operator holds an ultrasound probe in one hand for ultrasound diagnosis and a needle in the other hand for biopsy or treatment. Therefore, in order to control the output device (160), the operator must put down the probe or needle that he or she is holding or request an assistant to do so.
- the output device (160) can provide the operator with a high degree of freedom by being controlled by the blinking of the wearer's eyes or the movement of the pupils.
- the control may mean changing the UI, turning on/off the first MR content or the second MR content, etc.
- a mixed reality-based ultrasound image output method which is another embodiment of the present invention, may include a data receiving step, a first content generating step, a second content generating step, a lesion detection step, a synchronization step, and a displaying step.
- the data receiving step can receive ultrasound images and bio-signals of the subject from the ultrasonic diagnostic device and the bio-signal measuring device.
- the data receiving step refers to an operation performed in the data receiving unit (110) described above.
- the first content generation step can generate first MR (Mixed-reality) content by aligning an ultrasound image with a specific object.
- the first content generation step can recognize the probe shape of an ultrasound diagnostic device through a camera equipped in an output device, and detect the movement of the probe shape to change the output position of the first MR content.
- the first content generation step refers to an operation performed in the first content generation unit (120) described above.
- the second content generation step can generate second MR content by aligning biosignals to a certain space.
- the second content generation step refers to an operation performed in the second content generation unit (130) described above.
- the lesion detection step can detect lesion areas in ultrasound images using artificial intelligence trained on an ultrasound image data set in which lesion areas are annotated.
- the lesion detection step refers to an operation performed in the lesion detection unit (140) described above.
- the synchronization step can synchronize the ultrasound image, biosignal, and lesion area so that the ultrasound image and biosignal measured at the same time can be output and the lesion area detected at the same time can be displayed.
- the synchronization step means the operation performed in the synchronization unit (150) described above.
- the displaying step can display the MR content generated in the first content generation step or the second content generation step on an output device.
- the displaying step refers to an operation performed on the aforementioned output device (160).
- 110: Data receiving unit, 120: First content generating unit
- 130: Second content generating unit, 140: Lesion detection unit
- 145: Post-processing unit, 150: Synchronization unit
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Public Health (AREA)
- Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Molecular Biology (AREA)
- Radiology & Medical Imaging (AREA)
- Veterinary Medicine (AREA)
- Animal Behavior & Ethology (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Biophysics (AREA)
- Physics & Mathematics (AREA)
- Epidemiology (AREA)
- Primary Health Care (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Physiology (AREA)
- Ultra Sonic Diagnosis Equipment (AREA)
Abstract
Description
Ultrasound refers to sound waves with a frequency above 20,000 Hz, higher than the human audible range. Ultrasonography is the imaging of differences in the received signal created through the diffusion, reflection, absorption, and scattering of sound waves at the various boundaries inside the human body, such as internal organs, bones, muscle tissue, and blood.
Therefore, there is a need for a mixed reality-based ultrasound image output system and method that overcomes the above limitations of ultrasound-guided biopsy and ultrasound-guided radiofrequency ablation therapy.
In addition, the present invention seeks to display physical danger signals in the direction of the wearer's gaze through an output device that the operator can wear.
또한, 본 발명은 출력되는 정보의 신뢰성을 확보하기 위해 입력되는 신호들을 동기화하고자 한다.In addition, the present invention seeks to synchronize input signals to ensure the reliability of output information.
상기 목적을 달성하기 위하여 본 발명은, 피검자의 초음파 영상 및 생체신호를 수신하는 데이터 수신부; 상기 초음파 영상을 특정 객체에 정합하여 제1 MR(Mixed-reality) 콘텐츠를 생성하는 제1 콘텐츠 생성부; 상기 생체신호를 일정 공간에 정합하여 제2 MR 콘텐츠를 생성하는 제2 콘텐츠 생성부; 및 상기 제1 콘텐츠 생성부 또는 상기 제2 콘텐츠 생성부에서 생성된 MR 콘텐츠를 디스플레이하는 출력기기;를 포함하고, 상기 제1 콘텐츠 생성부는, 상기 출력기기에 구비된 카메라를 통해 초음파 진단기기의 프로브 형상을 인식하고, 상기 프로브 형상의 움직임을 감지하여 상기 제1 MR 콘텐츠의 출력 위치를 변경하는 것을 일 특징으로 한다.In order to achieve the above object, the present invention comprises: a data receiving unit which receives an ultrasound image and a bio-signal of a subject; a first content generating unit which generates a first MR (Mixed-reality) content by aligning the ultrasound image with a specific object; a second content generating unit which generates a second MR content by aligning the bio-signal with a specific space; and an output device which displays the MR content generated by the first content generating unit or the second content generating unit; wherein the first content generating unit is characterized in that it recognizes the probe shape of an ultrasound diagnostic device through a camera provided in the output device, and detects movement of the probe shape to change the output position of the first MR content.
바람직하게는, 병변 영역이 주석 처리된 초음파 영상 데이터 세트로 학습된 인공지능을 이용하여 상기 초음파 영상에서 병변 영역을 검출하는 병변 검출부를 더 포함할 수 있다. Preferably, the method may further include a lesion detection unit that detects a lesion area in the ultrasound image using artificial intelligence learned from an ultrasound image data set in which the lesion area is annotated.
바람직하게는, 상기 제1 콘텐츠 생성부는, 상기 병변 검출부가 검출한 병변 영역이 다른 영역과 구분되도록 상기 제1 MR 콘텐츠를 변경할 수 있다. Preferably, the first content generation unit can change the first MR content so that the lesion area detected by the lesion detection unit is distinguished from other areas.
바람직하게는, 상기 제1 콘텐츠 생성부는, 상기 초음파 영상을 통해 바늘이 상기 병변 영역을 접촉한 것이 확인되면, 상기 병변 영역이 다른 영역과 구분되지 않도록 상기 MR 콘텐츠를 변경할 수 있다. Preferably, the first content generation unit can change the MR content so that the lesion area is not distinguished from other areas when it is confirmed through the ultrasound image that the needle has contacted the lesion area.
바람직하게는, 동일한 시점에 측정된 초음파 영상과 생체신호가 출력되고, 동일한 시점에 검출된 병변 영역이 표출될 수 있도록 상기 초음파 영상, 상기 생체신호 및 상기 병변 영역을 동기화시키는 동기화부를 더 포함할 수 있다. Preferably, the method may further include a synchronization unit that synchronizes the ultrasound image, the bio-signal, and the lesion area so that the ultrasound image and the bio-signal measured at the same time can be output and the lesion area detected at the same time can be displayed.
바람직하게는, 상기 동기화부는, 상기 병변 검출부에서 검출한 병변 영역이 기설정된 기준 값보다 작은 경우 상기 초음파 영상, 상기 생체신호 및 상기 병변 영역 중 가장 지연된 신호를 기준으로 동기화시키는 지연 동기화 모듈; 및 상기 병변 검출부에서 검출한 병변 영역이 기설정된 기준 값보다 큰 경우 상기 초음파 영상, 상기 생체신호 및 상기 병변 영역 중 가장 앞선 신호를 기준으로 동기화시키는 예측 동기화 모듈;을 포함할 수 있다. Preferably, the synchronization unit may include a delay synchronization module that synchronizes based on the most delayed signal among the ultrasound image, the bio-signal, and the lesion area when the lesion area detected by the lesion detection unit is smaller than a preset reference value; and a prediction synchronization module that synchronizes based on the most advanced signal among the ultrasound image, the bio-signal, and the lesion area when the lesion area detected by the lesion detection unit is larger than a preset reference value.
바람직하게는, 상기 예측 동기화 모듈은, 보외법(extrapolation)을 이용하여, 가장 앞선 신호에 다른 신호를 동기화시킬 수 있다. Preferably, the prediction synchronization module can synchronize another signal to the most advanced signal by using extrapolation.
바람직하게는, 상기 출력기기는, 착용자의 전방방향을 촬영하는 제1 카메라와 착용자의 안면을 촬영하는 제2 카메라를 포함하고, 상기 제1 카메라는, 초음파 진단기기의 프로브 형상을 인식하고, 상기 제2 카메라는, 착용자의 시선방향을 추적할 수 있다. Preferably, the output device includes a first camera for photographing the forward direction of the wearer and a second camera for photographing the face of the wearer, wherein the first camera can recognize the probe shape of the ultrasonic diagnostic device, and the second camera can track the direction of the wearer's gaze.
바람직하게는, 상기 출력기기는, 상기 제2 카메라에서 추적한 착용자의 시선방향을 따라 상기 제2 MR 콘텐츠를 디스플레이할 수 있다. Preferably, the output device can display the second MR content along the direction of the wearer's gaze tracked by the second camera.
바람직하게는, 상기 출력기기는, 상기 제2 카메라가 촬영한 착용자의 눈 또는 눈동자의 움직임에 의해 제어될 수 있다. Preferably, the output device can be controlled by the movement of the wearer's eye or pupil captured by the second camera.
또한 본 발명은, 피검자의 초음파 영상 및 생체신호를 수신하는 데이터 수신단계; 상기 초음파 영상을 특정 객체에 정합하여 제1 MR(Mixed-reality) 콘텐츠를 생성하는 제1 콘텐츠 생성단계; 상기 생체신호를 일정 공간에 정합하여 제2 MR 콘텐츠를 생성하는 제2 콘텐츠 생성단계; 그리고 상기 제1 콘텐츠 생성단계 또는 상기 제2 콘텐츠 생성단계에서 생성된 MR 콘텐츠를 출력기기에 디스플레이하는 단계;를 포함하고, 상기 제1 콘텐츠 생성단계는, 상기 출력기기에 구비된 카메라를 통해 초음파 진단기기의 프로브 형상을 인식하고, 상기 프로브 형상의 움직임을 감지하여 상기 제1 MR 콘텐츠의 출력 위치를 변경하는 것을 다른 특징으로 한다.In addition, the present invention includes a data receiving step of receiving an ultrasound image and a bio-signal of a subject; a first content generating step of generating a first MR (Mixed-reality) content by aligning the ultrasound image with a specific object; a second content generating step of generating a second MR content by aligning the bio-signal with a specific space; and a step of displaying the MR content generated in the first content generating step or the second content generating step on an output device; and another feature of the first content generating step is that the shape of a probe of an ultrasound diagnostic device is recognized through a camera provided in the output device, and the movement of the probe shape is detected to change the output position of the first MR content.
본 발명은 생검 부위와 초음파를 동시에 볼 수 있게 하여 시술자의 피로도를 저감시킬 수 있고, 생검 부위에 초음파 영상과 병변 부위를 겹쳐볼 수 있게 하여 정확성과 안정성을 높일 수 있다는 이점이 있다. The present invention has the advantage of reducing operator fatigue by allowing the biopsy site and ultrasound to be viewed simultaneously, and increasing accuracy and stability by allowing the ultrasound image and lesion site to be superimposed on the biopsy site.
또한, 본 발명은, 신체 위험 신호를 시술자가 착용할 수 있는 출력기기를 통해 시선방향에 표시하여 환자의 안전을 확보할 수 있다는 이점이 있다. In addition, the present invention has an advantage in that it can ensure patient safety by displaying a physical danger signal in the line of sight of the operator through an output device that can be worn by the operator.
또한, 본 발명은 입력되는 신호들을 동기화하여 출력되는 정보의 신뢰성을 확보하고, 다른 부위에 바늘이 삽입되는 것을 방지할 수 있다는 이점이 있다. In addition, the present invention has the advantage of ensuring the reliability of output information by synchronizing input signals and preventing the needle from being inserted into a different part.
또한, 본 발명은 터치 없이 장비를 제어할 수 있는 환경을 제공하여 시술자의 자유도를 높일 수 있다.In addition, the present invention can increase the freedom of the operator by providing an environment in which equipment can be controlled without touching.
도 1은 본 발명의 실시예에 따른 혼합현실 기반 초음파 영상 출력 시스템의 구성도를 나타낸다. Figure 1 shows a configuration diagram of a mixed reality-based ultrasound image output system according to an embodiment of the present invention.
도 2는 본 발명의 실시예에 따른 stand-alone 방식의 혼합현실 기반 초음파 영상 출력 시스템 구성을 나타낸다. Figure 2 shows a configuration of a stand-alone mixed reality-based ultrasound image output system according to an embodiment of the present invention.
도 3은 본 발명의 실시예에 따른 스마트폰 테더링 방식의 혼합현실 기반 초음파 영상 출력 시스템 구성을 나타낸다. FIG. 3 shows a configuration of a mixed reality-based ultrasound image output system using a smartphone tethering method according to an embodiment of the present invention.
도 4는 본 발명의 실시예에 따른 출력기기에 출력되는 MR 콘텐츠의 크기와 프로브 각도의 관계를 설명하기 위한 그림을 나타낸다. FIG. 4 is a drawing for explaining the relationship between the size of MR content output to an output device and the probe angle according to an embodiment of the present invention.
도 5는 본 발명의 실시예에 따른 병변 검출부의 구성도를 나타낸다. Figure 5 shows a configuration diagram of a lesion detection unit according to an embodiment of the present invention.
도 6은 본 발명의 실시예에 따른 병변 검출부가 인공지능을 이용하여 병변을 검출하는 프로세스를 나타낸다. Figure 6 shows a process in which a lesion detection unit according to an embodiment of the present invention detects a lesion using artificial intelligence.
도 7은 본 발명의 실시예에 따른 동기화부의 구성도를 나타낸다. Figure 7 shows a configuration diagram of a synchronization unit according to an embodiment of the present invention.
도 8은 본 발명의 실시예에 따른 지연 동기화 모듈의 동기화 방식을 설명하기 위한 그림을 나타낸다. FIG. 8 is a diagram illustrating a synchronization method of a delay synchronization module according to an embodiment of the present invention.
도 9는 본 발명의 실시예에 따른 예측 동기화 모듈의 동기화 방식을 설명하기 위한 그림을 나타낸다. FIG. 9 is a diagram illustrating a synchronization method of a prediction synchronization module according to an embodiment of the present invention.
도 10은 본 발명의 실시예에 따른 출력기기를 통해 디스플레이되는 화면을 나타낸다.Figure 10 shows a screen displayed through an output device according to an embodiment of the present invention.
피검자의 초음파 영상 및 생체신호를 수신하는 데이터 수신부;A data receiving unit that receives ultrasound images and bio-signals of a subject;
상기 초음파 영상을 특정 객체에 정합하여 제1 MR(Mixed-reality) 콘텐츠를 생성하는 제1 콘텐츠 생성부; A first content generation unit that generates first MR (Mixed-reality) content by aligning the ultrasound image with a specific object;
상기 생체신호를 일정 공간에 정합하여 제2 MR 콘텐츠를 생성하는 제2 콘텐츠 생성부; 및 A second content generation unit that generates second MR content by aligning the bio-signal to a certain space; and
상기 제1 콘텐츠 생성부 또는 상기 제2 콘텐츠 생성부에서 생성된 MR 콘텐츠를 디스플레이하는 출력기기;를 포함하고,An output device for displaying MR content generated by the first content generation unit or the second content generation unit;
상기 제1 콘텐츠 생성부는, The first content generation unit,
상기 출력기기에 구비된 카메라를 통해 초음파 진단기기의 프로브 형상을 인식하고, 상기 프로브 형상의 움직임을 감지하여 상기 제1 MR 콘텐츠의 출력 위치를 변경하는 것을 특징으로 하는 혼합현실 기반 초음파 영상 출력 시스템.A mixed reality-based ultrasound image output system characterized by recognizing the probe shape of an ultrasound diagnostic device through a camera equipped in the output device, detecting movement of the probe shape, and changing the output position of the first MR content.
이하, 첨부된 도면들에 기재된 내용들을 참조하여 본 발명을 상세히 설명한다. 다만, 본 발명이 예시적 실시 예들에 의해 제한되거나 한정되는 것은 아니다. 각 도면에 제시된 동일 참조부호는 실질적으로 동일한 기능을 수행하는 부재를 나타낸다.Hereinafter, the present invention will be described in detail with reference to the contents described in the attached drawings. However, the present invention is not limited or restricted by the exemplary embodiments. The same reference numerals presented in each drawing represent components that perform substantially the same function.
본 발명의 목적 및 효과는 하기의 설명에 의해서 자연스럽게 이해되거나 보다 분명해질 수 있으며, 하기의 기재만으로 본 발명의 목적 및 효과가 제한되는 것은 아니다. 또한, 본 발명을 설명함에 있어서 본 발명과 관련된 공지 기술에 대한 구체적인 설명이, 본 발명의 요지를 불필요하게 흐릴 수 있다고 판단되는 경우에는 그 상세한 설명을 생략하기로 한다.The purpose and effect of the present invention can be naturally understood or made clearer by the following description, and the purpose and effect of the present invention are not limited to the following description alone. In addition, when explaining the present invention, if it is judged that a detailed description of a known technology related to the present invention may unnecessarily obscure the gist of the present invention, the detailed description will be omitted.
본 발명에서 사용하는 용어는 단지 특정한 실시예들을 설명하기 위해 사용된 것으로, 본 발명을 한정하려는 의도가 아니다. 단수의 표현은 문맥상 명백하게 다르게 뜻하지 않는 한, 복수의 표현을 포함한다. 본 출원에서, "포함하다" 또는 "가지다" 등의 용어는 발명의 설명에 기재된 특징, 숫자, 단계, 동작, 구성 요소, 부분품 또는 이들을 조합한 것이 존재함을 지정하려는 것이지, 하나 또는 그 이상의 다른 특징들이나 숫자, 단계, 동작, 구성 요소, 부분품 또는 이들을 조합한 것들의 존재 또는 부가 가능성을 미리 배제하지 않는 것으로 이해되어야 한다.The terminology used herein is only used to describe specific embodiments and is not intended to limit the present invention. The singular expression includes the plural expression unless the context clearly indicates otherwise. In this application, it should be understood that the terms "comprises" or "has" and the like are intended to specify the presence of a feature, number, step, operation, component, part or combination thereof described in the description of the invention, but do not exclude in advance the possibility of the presence or addition of one or more other features, numbers, steps, operations, components, parts or combinations thereof.
제1, 제2 등의 용어는 다양한 구성 요소들을 설명하는데 사용될 수 있지만, 상기 구성 요소들은 상기 용어들에 의해 한정되어서는 안된다. 상기 용어들은 하나의 구성 요소를 다른 구성 요소로부터 구별하는 목적으로만 사용된다. 예를 들어, 본 발명의 권리 범위를 벗어나지 않으면서 제1 구성 요소는 제2 구성 요소로 명명될 수 있고, 유사하게 제2 구성 요소도 제1 구성 요소로 명명될 수 있다.The terms first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another. For example, without departing from the scope of the present invention, the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
다르게 정의되지 않는 한, 기술적이거나 과학적인 용어를 포함해서 여기서 사용되는 모든 용어들은 본 발명이 속하는 기술 분야에서 통상의 지식을 가진 자에 의해 일반적으로 이해되는 것과 동일한 의미를 갖는다. 일반적으로 사용되는 사전에 정의되어 있는 것과 같은 용어들은 관련 기술의 문맥상 가지는 의미와 일치하는 의미를 갖는 것으로 해석되어야 하며, 본 발명에서 명백하게 정의하지 않는 한, 이상적이거나 과도하게 형식적인 의미로 해석되지 않는다.Unless otherwise defined, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms defined in commonly used dictionaries, such as those defined in common usage, should be interpreted as having a meaning consistent with the meaning they have in the context of the relevant art, and shall not be interpreted in an idealized or overly formal sense unless expressly defined herein.
구성 요소를 해석함에 있어서, 별도의 명시적 기재가 없더라도 오차 범위를 포함하는 것으로 해석한다. 시간 관계에 대한 설명일 경우, 예를 들어, '~후에', '~에 이어서', '~다음에', '~전에' 등으로 시간적 선후관계가 설명되는 경우, '바로' 또는 '직접'이 사용되지 않는 이상 연속적이지 않은 경우도 포함한다. In interpreting a component, it is construed as including a margin of error even when there is no separate explicit statement to that effect. Where a temporal relationship is described, for example with 'after', 'following', 'next', or 'before', non-consecutive cases are also included unless 'immediately' or 'directly' is used.
이하, 첨부한 도면들을 참조하여 본 발명의 기술적 구성을 상세하게 설명한다.Hereinafter, the technical configuration of the present invention will be described in detail with reference to the attached drawings.
도 1은 본 발명의 실시예에 따른 혼합현실 기반 초음파 영상 출력 시스템(100)의 구성도를 나타낸다. 도 1을 참조하면, 혼합현실 기반 초음파 영상 출력 시스템(100)은 데이터 수신부(110), 제1 콘텐츠 생성부(120), 제2 콘텐츠 생성부(130), 병변 검출부(140), 동기화부(150), 및 출력기기(160)를 포함할 수 있다. FIG. 1 illustrates a configuration diagram of a mixed reality-based ultrasound image output system (100) according to an embodiment of the present invention. Referring to FIG. 1, the mixed reality-based ultrasound image output system (100) may include a data receiving unit (110), a first content generating unit (120), a second content generating unit (130), a lesion detection unit (140), a synchronization unit (150), and an output device (160).
혼합현실 기반 초음파 영상 출력 시스템(100)은 생검 부위와 초음파를 동시에 볼 수 있게 하여 시술자의 피로도를 저감시킬 수 있고, 생검 부위에 초음파 영상과 병변 부위를 겹쳐볼 수 있게 하여 정확성과 안정성을 높일 수 있다. 혼합현실 기반 초음파 영상 출력 시스템(100)은 AR/VR/MR 기기 또는 글래스(glass) 등을 통해 초음파 영상과 함께 피검자의 생체신호를 출력할 수 있다. 혼합현실 기반 초음파 영상 출력 시스템(100)은 생체신호 등을 분석하여 도출된 신체 위험 신호를 시술자가 즉각 인지할 수 있도록 시술자가 착용한 출력기기(AR/VR/MR 기기 또는 글래스 등)를 통해 시술자의 시선방향에 표시하여 환자의 안전을 확보할 수 있다. The mixed reality-based ultrasound image output system (100) can reduce operator fatigue by allowing the biopsy site and ultrasound to be viewed simultaneously, and can increase accuracy and stability by allowing the ultrasound image and lesion site to be superimposed on the biopsy site. The mixed reality-based ultrasound image output system (100) can output the subject's bio-signals together with the ultrasound image through an AR/VR/MR device or glasses, etc. The mixed reality-based ultrasound image output system (100) can analyze bio-signals, etc. and display the derived body risk signals in the operator's line of sight through an output device (AR/VR/MR device or glasses, etc.) worn by the operator so that the operator can immediately recognize them, thereby ensuring patient safety.
혼합현실 기반 초음파 영상 출력 시스템(100)은 입력되는 신호들을 동기화하여 출력되는 정보의 신뢰성을 확보하고, 각각 다른 시점의 신호들이 출력되어 다른 부위에 바늘이 삽입되는 것을 방지할 수 있다는 이점이 있다. 혼합현실 기반 초음파 영상 출력 시스템(100)은 터치 없이 장비를 제어할 수 있는 환경을 제공하여 시술자의 자유도를 높일 수 있다. The mixed reality-based ultrasound image output system (100) has the advantage of ensuring the reliability of the output information by synchronizing the input signals, and of preventing the needle from being inserted into the wrong site as a result of signals from different time points being displayed. The mixed reality-based ultrasound image output system (100) can increase the operator's freedom by providing an environment in which the equipment can be controlled without touch.
혼합현실 기반 초음파 영상 출력 시스템(100)은 스마트폰 테더링 방식과 Stand-alone 방식으로 제공될 수 있다.The mixed reality-based ultrasound image output system (100) can be provided in a smartphone tethering method and a stand-alone method.
도 2는 본 발명의 실시예에 따른 stand-alone 방식의 혼합현실 기반 초음파 영상 출력 시스템 구성을 나타낸다. 도 2를 참조하면, 혼합현실 기반 초음파 영상 출력 시스템(100)은 별도의 사용자 단말(스마트 폰, 노트북 등) 없이 독립적으로 구동될 수 있다. stand-alone 방식은 혼합현실 구현에 필요한 컴퓨팅 연산을 출력기기(160) 자체에서 수행할 수 있다. stand-alone 방식은 사용자 단말을 들고 다닐 필요가 없기 때문에 편리한 사용성에 강점이 있다. Fig. 2 shows a configuration of a stand-alone mixed reality-based ultrasound image output system according to an embodiment of the present invention. Referring to Fig. 2, the mixed reality-based ultrasound image output system (100) can be operated independently without a separate user terminal (smart phone, laptop, etc.). The stand-alone method can perform computing operations required for implementing mixed reality in the output device (160) itself. The stand-alone method has the advantage of convenient usability because there is no need to carry a user terminal.
구체적으로, stand-alone 방식은 초음파 기기 및 생체신호 기기를 이용하여 피검자의 초음파 영상 및 생체신호를 측정한 후 이를 의료기기 데이터 처리 서버로 전송할 수 있다. 여기에서 의료기기 데이터 처리 서버는 본 발명의 데이터 수신부(110)를 의미할 수 있다. 의료기기 데이터 처리 서버는 환자 생체신호 변화정보와 초음파 영상을 AI 분석 서버(모듈)와 출력기기에 제공할 수 있다. Specifically, the stand-alone method can measure the ultrasound image and bio-signal of a subject using an ultrasound device and a bio-signal device and then transmit them to a medical device data processing server. Here, the medical device data processing server may mean the data receiving unit (110) of the present invention. The medical device data processing server can provide patient bio-signal change information and ultrasound images to an AI analysis server (module) and an output device.
AI 분석 서버는 AI 분석 결과를 출력할 수 있으며, 이때 AI 분석은 병변 검출을 의미할 수 있다. 즉, AI 분석 서버는 본 발명의 병변 검출부(140)를 의미할 수 있다. AI 분석 서버는 AI 분석 결과를 인터넷을 통해 MR HMD(Head Mounted Display) 또는 글래스에 제공할 수 있다. The AI analysis server can output AI analysis results, where the AI analysis may mean lesion detection. That is, the AI analysis server may mean the lesion detection unit (140) of the present invention. The AI analysis server may provide the AI analysis results to an MR HMD (Head Mounted Display) or glasses via the Internet.
MR HMD 또는 글래스는 의료기기 데이터 처리 서버로부터 환자 생체신호 변화정보와 초음파 영상을 인터넷을 통해 전송받을 수 있으며, 전송받은 데이터를 이용하여 초음파 영상을 특정 객체에 정합하여 제1 MR(Mixed-reality) 콘텐츠를 생성하고, 생체신호를 일정 공간에 정합하여 제2 MR 콘텐츠를 생성할 수 있다. 즉, stand-alone 방식은 제1 콘텐츠 생성부(120)와 제2 콘텐츠 생성부(130)가 출력기기(160) 내에 포함될 수 있다. MR HMD or glasses can receive patient bio-signal change information and ultrasound images from a medical device data processing server via the Internet, and using the received data, can create first MR (Mixed-reality) content by aligning the ultrasound image with a specific object, and can create second MR content by aligning the bio-signal with a certain space. That is, in the stand-alone method, the first content generation unit (120) and the second content generation unit (130) can be included in the output device (160).
도 3은 본 발명의 실시예에 따른 스마트폰 테더링 방식의 혼합현실 기반 초음파 영상 출력 시스템 구성을 나타낸다. 도 3을 참조하면, 혼합현실 기반 초음파 영상 출력 시스템(100)은 스마트 폰에 연결되어 구동될 수 있다. 스마트폰 테더링 방식은 출력기기(160) 자체에서 혼합현실 구현에 필요한 컴퓨팅 연산을 수행하지 않고 스마트 폰에서 수행될 수 있다. 스마트폰 테더링 방식은 출력기기(160) 내부에 컴퓨팅을 위한 프로세서, 배터리가 없어 가볍고 발열이 적은 강점이 있다. FIG. 3 shows the configuration of a mixed reality-based ultrasound image output system using a smartphone tethering method according to an embodiment of the present invention. Referring to FIG. 3, the mixed reality-based ultrasound image output system (100) can be connected to and driven by a smartphone. In the smartphone tethering method, the computing operations required for implementing mixed reality are performed on the smartphone rather than on the output device (160) itself. The smartphone tethering method has the advantage of being lightweight and generating little heat because there is no processor or battery for computing inside the output device (160).
구체적으로, 스마트폰 테더링 방식은 환자 생체신호 변화정보, 초음파 영상, 및 AI 분석 결과를 인터넷을 통해 스마트폰으로 전송받을 수 있으며, 스마트 폰은 전송받은 데이터를 이용하여 초음파 영상을 특정 객체에 정합하여 제1 MR(Mixed-reality) 콘텐츠를 생성하고, 생체신호를 일정 공간에 정합하여 제2 MR 콘텐츠를 생성할 수 있다. 즉, 스마트폰 테더링 방식은 제1 콘텐츠 생성부(120)와 제2 콘텐츠 생성부(130)가 출력기기(160) 내에 포함되지 않을 수 있다. Specifically, the smartphone tethering method can transmit patient bio-signal change information, ultrasound images, and AI analysis results to a smartphone via the Internet, and the smartphone can use the transmitted data to align the ultrasound images with a specific object to generate first MR (Mixed-reality) content, and align the bio-signals with a specific space to generate second MR content. That is, the smartphone tethering method may not include the first content generation unit (120) and the second content generation unit (130) within the output device (160).
데이터 수신부(110)는 피검자의 초음파 영상 및 생체신호를 수신할 수 있다. 데이터 수신부(110)는 초음파 진단기기 및 생체신호 측정기기와 유선 또는 무선으로 연결되어 실시간으로 초음파 영상 및 생체신호를 수신할 수 있다. 데이터 수신부(110)는 수신된 초음파 영상과 생체신호를 제1 콘텐츠 생성부(120), 제2 콘텐츠 생성부(130), 또는 병변 검출부(140)에 전송할 수 있다. 데이터 수신부(110)는 무선으로 다른 구성과 연결시 CDMA(Code Division Multiple Access), WCDMA(Wideband CDMA), LTE(Long Term Evolution) 또는 wifi 등의 광의의 이동통신망을 이용할 수 있다. 데이터 수신부(110)는 무선으로 다른 구성과 연결시 블루투스 등의 근거리무선통신을 이용할 수 있다. The data receiving unit (110) can receive ultrasound images and bio-signals of a subject. The data receiving unit (110) can be connected to an ultrasound diagnostic device and a bio-signal measuring device by wire or wirelessly to receive ultrasound images and bio-signals in real time. The data receiving unit (110) can transmit the received ultrasound images and bio-signals to a first content generating unit (120), a second content generating unit (130), or a lesion detecting unit (140). When the data receiving unit (110) is connected wirelessly to another component, it can use a broad mobile communication network such as CDMA (Code Division Multiple Access), WCDMA (Wideband CDMA), LTE (Long Term Evolution), or Wi-Fi. When the data receiving unit (110) is connected wirelessly to another component, it can use a short-range wireless communication such as Bluetooth.
제1 콘텐츠 생성부(120)는 초음파 영상을 특정 객체에 정합하여 제1 MR(Mixed-reality) 콘텐츠를 생성할 수 있다. 여기에서, 특정 객체란 초음파 영상이 측정하고자 하는 위치의 해부학적 구조에 1:1로 직접 중첩될 수 있는 신체 부위를 의미할 수 있다. The first content generation unit (120) can generate the first MR (Mixed-reality) content by aligning the ultrasound image with a specific object. Here, the specific object may mean a body part that can be directly superimposed 1:1 on the anatomical structure of the location that the ultrasound image is intended to measure.
제1 콘텐츠 생성부(120)는 초음파 진단기기, 데이터 수신부(110), 또는 출력기기(160)와 상호작용하며 제1 MR 콘텐츠를 보정할 수 있다. 즉, 제1 콘텐츠 생성부(120)는 초음파 진단기기의 프로브의 형상, 시술자의 눈과 출력기기(160) 사이의 거리 등을 통해 초음파 영상을 특정 객체에 정확히 정합될 수 있도록 제1 MR 콘텐츠를 보정할 수 있다. The first content generation unit (120) can correct the first MR content by interacting with the ultrasonic diagnostic device, the data receiving unit (110), or the output device (160). That is, the first content generation unit (120) can correct the first MR content so that the ultrasound image can be accurately aligned with a specific object through the shape of the probe of the ultrasonic diagnostic device, the distance between the operator's eyes and the output device (160), etc.
종래에는 초음파 진단기기의 프로브에 부착된 추적패턴(마커, QR 코드 등)을 분석하여 초음파 영상이 정합될 위치를 계산하였다. 그러나, 실제 초음파 영상을 이용하여 생검 또는 치료를 진행하는데 있어서, 시술자는 필요한 초음파 영상을 정확히 획득하기 위하여 프로브를 지속적으로 움직이게 된다. 이때, 프로브에 부착된 추적패턴은 변형되거나 보이지 않을 수 있다. 기술적으로 추적패턴이 30도만 기울어지거나 회전하여도, 카메라가 추적패턴을 인식하지 못한다. In the past, the tracking pattern (marker, QR code, etc.) attached to the probe of the ultrasound diagnostic device was analyzed to calculate the position where the ultrasound image should be aligned. However, when performing a biopsy or treatment using an actual ultrasound image, the operator continuously moves the probe to accurately obtain the necessary ultrasound image. At this time, the tracking pattern attached to the probe may be deformed or invisible. Technically, even if the tracking pattern is tilted or rotated only 30 degrees, the camera does not recognize the tracking pattern.
이와 같이 추적패턴을 이용하여 정합될 위치를 계산하는 것에는 한계가 있다. 따라서, 본 발명의 실시예에 따른 제1 콘텐츠 생성부(120)는 출력기기(160)에 구비된 카메라를 통해 초음파 진단기기의 프로브 형상을 인식하고, 프로브 형상의 움직임을 감지하여 제1 MR 콘텐츠의 출력 위치를 변경할 수 있다. There is a limit to calculating the position to be aligned using a tracking pattern in this way. Therefore, the first content generation unit (120) according to the embodiment of the present invention can recognize the probe shape of the ultrasonic diagnostic device through a camera equipped in the output device (160) and detect the movement of the probe shape to change the output position of the first MR content.
제1 콘텐츠 생성부(120)는 인식된 프로브 형상을 통해 프로브의 전면이 향하는 방향으로 제1 MR 콘텐츠가 출력되게 할 수 있으며, 바람직하게는 프로브와 피검자의 신체가 닿는 부분을 기준으로 하여 프로브의 전면 방향을 향하여 초음파 영상이 측정하고자 하는 위치의 해부학적 구조에 1:1로 직접 중첩되도록 제1 MR 콘텐츠의 출력 위치를 변경할 수 있다. The first content generation unit (120) can output the first MR content in the direction in which the front of the probe faces through the recognized probe shape, and preferably, can change the output position of the first MR content so that the ultrasound image directly overlaps 1:1 with the anatomical structure of the location to be measured toward the front of the probe based on the part where the probe and the subject's body come into contact.
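By way of illustration only, the following is a minimal Python sketch of the placement rule described above. It assumes the probe's skin-contact point and its forward (beam) axis have already been estimated from the camera image; the function name, coordinate conventions, and scan dimensions are hypothetical rather than values taken from the disclosure.

```python
import numpy as np

def place_first_mr_content(contact_point, probe_axis, scan_depth_m=0.06, scan_width_m=0.04):
    """Place the ultrasound image quad along the probe's forward (beam) direction.

    contact_point: 3D point (metres, camera frame) where the probe touches the skin.
    probe_axis:    vector pointing from the probe handle toward its front face,
                   i.e. the direction in which the ultrasound beam travels.
    Returns the four corners of the quad on which the ultrasound image is textured,
    so the image appears to lie physically under the probe.
    """
    probe_axis = probe_axis / np.linalg.norm(probe_axis)
    # Pick a lateral direction perpendicular to the beam axis for the image width.
    up_hint = np.array([0.0, 1.0, 0.0])
    lateral = np.cross(probe_axis, up_hint)
    if np.linalg.norm(lateral) < 1e-6:  # beam parallel to the hint: use another hint
        lateral = np.cross(probe_axis, np.array([1.0, 0.0, 0.0]))
    lateral = lateral / np.linalg.norm(lateral)

    half_w = 0.5 * scan_width_m
    near = contact_point                           # top edge of the image sits at the skin
    far = contact_point + probe_axis * scan_depth_m
    corners = np.array([near - lateral * half_w,
                        near + lateral * half_w,
                        far + lateral * half_w,
                        far - lateral * half_w])
    return corners
```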
초음파 기기는 프로브와 수직되는 일직선 방향으로 초음파를 출력하여 반사된 신호를 영상화하는 것이므로, 프로브와 수직된 방향의 신체부분이 영상화된 것을 초음파 영상이라 한다. 따라서, 초음파 영상이 측정하고자 하는 위치의 해부학적 구조에 1:1로 직접 중첩되도록 제1 MR 콘텐츠를 출력한다는 것의 의미는 초음파로 만들어내는 영상이 프로브 아래에 실제로 물리적으로 위치하고 있다고 느끼도록 출력기기를 통해 구현하겠다는 것이다. Since the ultrasound device outputs ultrasound in a straight line perpendicular to the probe and visualizes the reflected signal, an ultrasound image is an image of the body part in the direction perpendicular to the probe. Therefore, outputting the first MR content so that the ultrasound image is directly superimposed 1:1 on the anatomical structure of the location to be measured means rendering it through the output device so that the image produced by the ultrasound feels as if it were actually physically located under the probe.
도 4는 본 발명의 실시예에 따른 출력기기에 출력되는 MR 콘텐츠의 크기와 프로브 각도의 관계를 설명하기 위한 그림을 나타낸다. 도 4의 (a)를 참조하면, 프로브가 피검자의 신체에 수직으로 초음파를 출력할 때, 시술자의 AR 안경 내 출력되는 초음파 영상 영역을 확인할 수 있다. 시술자가 AR 안경을 착용하고 있다면, AR 안경을 착용한 시술자의 눈의 위치에서 프로브 아래에 있는 측정 영역이 어떻게 보일지를 계산하여 AR 안경에 출력한다. 따라서, 측정하고자 하는 위치의 해부학적 구조에 1:1로 직접 중첩되도록 MR 영상을 시술자에게 출력하면 신체 내부에 있는 실제 병변의 3차원 위치를 정확하게 알 수 있어 해당 위치에 정확히 바늘을 삽입할 수 있게 된다. 시술자의 눈에 프로브 아래에 있는 측정 영역이 어떻게 보일지에 대한 계산을 통해 AR 안경에 출력되는 초음파 영상의 크기가 정해질 수 있다. FIG. 4 is a drawing for explaining the relationship between the size of MR content output to an output device and the probe angle according to an embodiment of the present invention. Referring to FIG. 4 (a), when the probe outputs ultrasound perpendicularly to the subject's body, the ultrasound image area output within the practitioner's AR glasses can be seen. If the practitioner is wearing AR glasses, how the measurement area under the probe would appear from the position of the eyes of the practitioner wearing the AR glasses is calculated and output to the AR glasses. Therefore, if the MR image is output to the practitioner so as to be directly superimposed 1:1 on the anatomical structure of the location to be measured, the three-dimensional location of the actual lesion inside the body can be accurately known, so that a needle can be accurately inserted into that location. The size of the ultrasound image output to the AR glasses can be determined by calculating how the measurement area under the probe appears to the practitioner's eyes.
도 4의 (b)를 참조하면, 프로브가 피검자의 신체와 둔각을 이룬 상태에서 초음파를 출력할 때, 시술자의 AR 안경 내 디스플레이되는 초음파 영상 영역을 확인할 수 있다. 이 경우 프로브 아래에 있는 측정 영역을 바라보는 시선 각도가 프로브가 피검자의 신체에 수직으로 초음파를 출력할 때보다 커지게 되고, 이에 따라 AR 안경 내 디스플레이되는 초음파 영상 영역의 크기도 커지게 된다. Referring to (b) of Fig. 4, when the probe outputs ultrasound at an obtuse angle with the subject's body, the ultrasound image area displayed within the operator's AR glasses can be confirmed. In this case, the viewing angle for the measurement area under the probe becomes larger than when the probe outputs ultrasound perpendicularly to the subject's body, and accordingly, the size of the ultrasound image area displayed within the AR glasses also becomes larger.
도 4의 (c)를 참조하면, 프로브가 피검자의 신체와 예각을 이룬 상태에서 초음파를 출력할 때, 시술자의 AR 안경 내 디스플레이되는 초음파 영상 영역을 확인할 수 있다. 이 경우 프로브 아래에 있는 측정 영역을 바라보는 시선 각도가 프로브가 피검자의 신체에 수직으로 초음파를 출력할 때보다 작아지게 되고, 이에 따라 AR 안경 내 디스플레이되는 초음파 영상 영역의 크기도 작아지게 된다. Referring to (c) of Fig. 4, when the probe outputs ultrasound at an acute angle to the subject's body, the ultrasound image area displayed within the operator's AR glasses can be confirmed. In this case, the viewing angle for the measurement area below the probe becomes smaller than when the probe outputs ultrasound perpendicularly to the subject's body, and accordingly, the size of the ultrasound image area displayed within the AR glasses also becomes smaller.
도 4의 (a) 내지 (c)를 통해 살펴본 것처럼, 프로브의 각도, 시술자와 프로브 사이의 거리 등 시술자가 착용한 출력기기와 프로브의 위치관계에 의해서 출력 기기에 디스플레이되는 초음파 영상의 크기는 달라질 수 있다. 다만, 디스플레이되는 초음파 영상의 크기가 과도하게 작아질 경우에는 시술자가 초음파 영상을 보며 시술하는데 어려움이 있을 수 있다. 이를 해결하기 위해, 도 4의 (d)와 같이 출력기기 내 디스플레이되는 초음파 영상 영역의 크기가 일정 이하인 경우 출력기기 내에 별도의 초음파 영상을 디스플레이할 수 있다. 따라서, 시술자는 출력기기와 프로브의 위치관계에 따라 초음파 영상이 과도하게 작게 디스플레이되는 경우에도 별도로 디스플레이된 초음파 영상을 통해 안전하게 시술을 진행할 수 있다. 또한, 시술자의 제어에 따라 AR 디스플레이의 특정 영역에 별도의 초음파 영상을 디스플레이할 수도 있다. As examined through (a) to (c) of Figs. 4, the size of the ultrasound image displayed on the output device may vary depending on the positional relationship between the output device worn by the practitioner and the probe, such as the angle of the probe, the distance between the practitioner and the probe, etc. However, if the size of the displayed ultrasound image is excessively small, the practitioner may have difficulty performing the procedure while viewing the ultrasound image. To solve this, as shown in (d) of Fig. 4, if the size of the ultrasound image area displayed in the output device is below a certain level, a separate ultrasound image may be displayed in the output device. Accordingly, the practitioner can safely perform the procedure through the separately displayed ultrasound image even if the ultrasound image is displayed excessively small depending on the positional relationship between the output device and the probe. In addition, a separate ultrasound image may be displayed in a specific area of the AR display according to the control of the practitioner.
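As a hedged sketch of the size relationship and fallback behaviour described for FIG. 4, the following assumes the image-quad corners from the previous sketch and an estimated eye position; the angular threshold used to decide when a separate, fixed-size ultrasound panel should also be shown is purely illustrative.

```python
import numpy as np

def subtended_angle_deg(eye_pos, corners):
    """Largest angle (degrees) subtended at the eye by any pair of quad corners,
    a rough proxy for how large the in-situ ultrasound image appears to the wearer."""
    dirs = [(c - eye_pos) / np.linalg.norm(c - eye_pos) for c in corners]
    best = 0.0
    for i in range(len(dirs)):
        for j in range(i + 1, len(dirs)):
            cosang = float(np.clip(np.dot(dirs[i], dirs[j]), -1.0, 1.0))
            best = max(best, float(np.degrees(np.arccos(cosang))))
    return best

def needs_fallback_panel(eye_pos, corners, min_angle_deg=5.0):
    """If the in-situ image would appear smaller than the threshold, also display
    a separate fixed-size ultrasound panel elsewhere on the AR display."""
    return subtended_angle_deg(eye_pos, corners) < min_angle_deg
```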
제1 콘텐츠 생성부(120)는 추적패턴을 감지하는 방식으로 초음파 영상이 증강될 위치를 결정하는 것이 아니라 프로브의 형상을 통해 결정하는 것이므로, 시술 중 프로브의 움직임에도 신뢰성 있는 초음파 영상을 출력할 수 있다.The first content generation unit (120) determines the location where the ultrasound image will be enhanced not by detecting a tracking pattern, but by determining it through the shape of the probe, so that reliable ultrasound images can be output even when the probe moves during the procedure.
추적패턴을 사용하는 종래 기술의 경우에는 프로브 등에 추적패턴이 부착되어 있어야 MR HMD 또는 글래스를 통해 초음파 영상을 증강시킬 수 있었다. 본 발명의 실시예에 따른 혼합현실 기반 초음파 영상 출력 시스템(100)은 추적패턴을 이용하지 않고 프로브의 형상을 이용하므로, 별도의 설치 없이도 모듈화된 장비만을 구비하면 기존의 초음파 진단기기에도 적용될 수 있다.In the case of conventional technologies using tracking patterns, the tracking pattern had to be attached to a probe, etc., in order to enhance the ultrasound image through MR HMD or glasses. The mixed reality-based ultrasound image output system (100) according to an embodiment of the present invention does not use a tracking pattern but uses the shape of a probe, so it can be applied to existing ultrasound diagnostic devices by only having modularized equipment without separate installation.
제1 콘텐츠 생성부(120)는 병변 검출부(140)가 검출한 병변 영역이 다른 영역과 구분되도록 제1 MR 콘텐츠를 변경할 수 있다. 시술자는 초음파 영상에서 병변을 구분할 수 있으나, 병변 영역과 병변 영역 이외의 다른 영역을 구분하는 것은 매우 어렵다. 따라서, 현재는 시술자의 능력에 따라 병변 검출 여부가 결정된다. 제1 콘텐츠 생성부(120)는 병변 검출부(140)가 검출한 병변 영역의 외각선을 강조하거나, 병변 영역 내부의 색을 달리하는 방식으로 강조할 수 있다. 제1 콘텐츠 생성부(120)는 사용자의 설정에 따라 병변 내부를 확인하고 싶은 경우에는 검출한 병변 영역의 외각선을 강조할 수 있다. 제1 콘텐츠 생성부(120)는 사용자의 설정에 따라 병변의 크기가 확인하고 싶은 경우에는 병변 영역 내부의 색을 달리하는 방식으로 병변 영역을 강조할 수 있다. The first content generation unit (120) can change the first MR content so that the lesion area detected by the lesion detection unit (140) can be distinguished from other areas. Although the operator can distinguish the lesion in the ultrasound image, it is very difficult to distinguish the lesion area from other areas besides the lesion area. Therefore, currently, whether or not to detect the lesion is determined based on the operator's ability. The first content generation unit (120) can emphasize the outer line of the lesion area detected by the lesion detection unit (140) or emphasize it by changing the color of the inside of the lesion area. If the user wants to check the inside of the lesion according to the user's setting, the first content generation unit (120) can emphasize the outer line of the detected lesion area. If the user wants to check the size of the lesion according to the user's setting, the first content generation unit (120) can emphasize the lesion area by changing the color of the inside of the lesion area.
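A minimal OpenCV sketch of the two emphasis modes described above (boundary outline versus interior tint); the colour, line thickness, and blending weight are illustrative choices, not values from the disclosure.

```python
import cv2
import numpy as np

def highlight_lesion(ultrasound_bgr, lesion_mask, mode="outline"):
    """Overlay the detected lesion area on the ultrasound frame.

    lesion_mask: uint8 binary mask (255 inside the lesion, 0 elsewhere).
    mode: "outline" emphasises the boundary so the interior stays visible,
          "fill" tints the interior so the lesion extent stands out.
    """
    out = ultrasound_bgr.copy()
    contours, _ = cv2.findContours(lesion_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if mode == "outline":
        cv2.drawContours(out, contours, -1, color=(0, 0, 255), thickness=2)
    else:  # "fill": blend a colour layer only where the mask is set
        tint = np.zeros_like(out)
        tint[lesion_mask > 0] = (0, 0, 255)
        out = cv2.addWeighted(out, 1.0, tint, 0.4, 0.0)
    return out
```

Once needle contact with the lesion is confirmed, as described in the next paragraph, the frame can simply be rendered without this overlay, restoring the un-highlighted appearance.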
제1 콘텐츠 생성부(120)는 초음파 영상을 통해 바늘이 병변 영역을 접촉한 것이 확인되면, 병변 영역이 다른 영역과 구분되지 않도록 MR 콘텐츠를 변경할 수 있다. 시술자가 병변 영역을 구분하였다고 하여도, 병변 조직을 취득하거나 병변을 치료하기 위해 병변 영역에 바늘을 정확하게 찌르는 것은 매우 어려운 기술이다. 따라서, 제1 콘텐츠 생성부(120)는 병변 영역을 바늘이 정확히 찌른 경우 강조되었던 병변 영역을 다른 영역과 동일하게 복구시킴으로써 시술자에게 시각적으로 병변 영역에 접촉했음을 알릴 수 있다. The first content generation unit (120) can change the MR content so that the lesion area is not distinguished from other areas when it is confirmed through the ultrasound image that the needle has contacted the lesion area. Even if the operator has distinguished the lesion area, it is a very difficult technique to accurately insert a needle into the lesion area to acquire lesion tissue or treat the lesion. Therefore, the first content generation unit (120) can visually notify the operator that the lesion area has been contacted by restoring the highlighted lesion area to be identical to other areas when the needle has accurately inserted the lesion area.
제2 콘텐츠 생성부(130)는 생체신호를 일정 공간에 정합하여 제2 MR 콘텐츠를 생성할 수 있다. 여기에서, 일정 공간은 시술자가 초음파 영상에 집중하며 생검 또는 치료하는 중에도 출력된 생체신호를 확인할 수 있는 공간을 의미한다. 바람직하게는, 시술자의 우측 또는 좌측 눈동자의 시선 방향을 의미할 수 있다. 생체신호는 심전도(electrocardiogram, ECG), 뇌전도(electroencephalogram, EEG), 근전도(electromyogram, EMG), 피부전도도(galvanic skin response, GSR), 피부온도(skin temperature, SKT), 맥파(photoplethysmography, PPG), 맥박(pulse rate, PR), 혈압(blood pressure), 산소포화도 등을 포함할 수 있다. The second content generation unit (130) can generate second MR content by aligning a biosignal to a predetermined space. Here, the predetermined space means a space where the operator can check the output biosignal while focusing on the ultrasound image and performing a biopsy or treatment. Preferably, it can mean the gaze direction of the operator's right or left eye. The biosignal can include an electrocardiogram (ECG), an electroencephalogram (EEG), an electromyogram (EMG), a galvanic skin response (GSR), a skin temperature (SKT), a photoplethysmography (PPG), a pulse rate (PR), blood pressure, oxygen saturation, etc.
제2 콘텐츠 생성부(130)는 생체신호가 기준치를 벗어나는 경우 피검자에게 위험상황이 발생했다고 판단하여 시술자에게 경고 또는 알림을 줄 수 있다. 제2 콘텐츠 생성부(130)는 생체신호 이상 발생시 생체신호가 출력되는 화면을 깜빡이거나 적색 점멸이 발생하도록 제2 MR 콘텐츠를 변경할 수 있다. The second content generation unit (130) can determine that a dangerous situation has occurred for the subject if the bio-signal exceeds the reference value and can provide a warning or notification to the operator. The second content generation unit (130) can change the second MR content so that the screen on which the bio-signal is output blinks or flashes red when an abnormality in the bio-signal occurs.
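A simple sketch of the out-of-range check and blinking-alert behaviour; the normal ranges and blink rate below are purely illustrative assumptions, not clinical thresholds from the disclosure.

```python
import time

# Illustrative normal ranges; actual thresholds would be configured clinically.
NORMAL_RANGES = {"heart_rate": (50, 110), "spo2": (92, 100), "systolic_bp": (90, 160)}

def out_of_range(signal_name, value):
    low, high = NORMAL_RANGES[signal_name]
    return value < low or value > high

def panel_style(signal_name, value, blink_hz=2.0, now=None):
    """Return a display style for the bio-signal panel: normal, or red/blank
    alternating at blink_hz when the value leaves its normal range."""
    if not out_of_range(signal_name, value):
        return {"background": "normal"}
    now = time.time() if now is None else now
    phase_on = int(now * 2 * blink_hz) % 2 == 0
    return {"background": "red" if phase_on else "off"}
```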
도 5는 본 발명의 실시예에 따른 병변 검출부(140)의 구성도를 나타낸다. 도 5를 참조하면, 병변 검출부(140)는 입력부(141), 전처리부(142), 학습부(143), 병변 예측부(144), 및 후처리부(145)를 포함할 수 있다. Fig. 5 shows a configuration diagram of a lesion detection unit (140) according to an embodiment of the present invention. Referring to Fig. 5, the lesion detection unit (140) may include an input unit (141), a preprocessing unit (142), a learning unit (143), a lesion prediction unit (144), and a postprocessing unit (145).
병변 검출부(140)는 병변 영역이 주석 처리된 초음파 영상 데이터 세트로 학습된 인공지능을 이용하여 초음파 영상에서 병변 영역을 검출할 수 있다. 구체적으로, 병변 검출부(140)는 Semantic segmentation 모델을 이용하여 초음파 영상에서 병변 영역을 검출할 수 있다. Semantic segmentation 모델은 디지털 이미지를 여러 개의 픽셀 집합으로 나누는 과정으로, 분할을 통해 이미지의 표현을 해석하기 쉬운 것으로 단순화하여 변환할 수 있다. 병변 검출부(140)는 Semantic segmentation 모델로 CNN 기반 모델을 이용할 수 있다. The lesion detection unit (140) can detect a lesion area in an ultrasound image using artificial intelligence learned from an ultrasound image data set in which the lesion area is annotated. Specifically, the lesion detection unit (140) can detect a lesion area in an ultrasound image using a semantic segmentation model. The semantic segmentation model is a process of dividing a digital image into a plurality of pixel sets, and can simplify and convert the expression of the image into something easy to interpret through segmentation. The lesion detection unit (140) can use a CNN-based model as a semantic segmentation model.
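The disclosure does not fix a particular architecture beyond a CNN-based semantic segmentation model, so the following is only one minimal PyTorch encoder-decoder, included to make the idea concrete; layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class LesionSegNet(nn.Module):
    """Minimal CNN encoder-decoder mapping a grayscale ultrasound frame to a
    per-pixel lesion probability map (1 = lesion, 0 = background)."""
    def __init__(self):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                                 nn.BatchNorm2d(c_out),
                                 nn.ReLU(inplace=True))
        self.enc1 = block(1, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, x):                 # x: (B, 1, H, W), H and W divisible by 2
        f1 = self.enc1(x)
        f2 = self.enc2(self.pool(f1))
        d1 = self.dec1(self.up(f2))
        return torch.sigmoid(self.head(d1))   # (B, 1, H, W) lesion probabilities
```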
입력부(141)는 정상군과 환자군의 초음파 영상을 학습, 테스트, 또는 실제 병변 검출을 위해 입력받을 수 있다. 입력부(141)는 데이터 수신부(110)로부터 초음파 영상을 수신할 수 있다. 입력부(141)는 수신된 초음파 영상을 전처리부(142)로 전송할 수 있다. The input unit (141) can receive ultrasound images of normal and patient groups for learning, testing, or actual lesion detection. The input unit (141) can receive ultrasound images from the data receiving unit (110). The input unit (141) can transmit the received ultrasound images to the preprocessing unit (142).
전처리부(142)는 입력부(141)로부터 수신된 초음파 영상을 학습 또는 검출에 유리하도록 전처리할 수 있다. 전처리부(142)는 픽셀값을 정규화하는 방식의 전처리를 수행할 수 있다. 전처리부(142)는 인공신경망에 입력되도록 수신받은 초음파 영상의 사이즈와 해상도를 조정하는 전처리를 수행할 수 있다. 학습에 사용되는 초음파 영상은 환자의 개인정보를 포함할 수 있어 보안 및 개인정보 유출의 문제가 발생할 수 있다. 따라서, 본 발명에 따른 전처리부(142)는 초음파 영상 및 메타데이터 내의 환자정보를 비식별화하는 전처리를 수행할 수 있다. The preprocessing unit (142) can preprocess the ultrasound image received from the input unit (141) to be advantageous for learning or detection. The preprocessing unit (142) can perform preprocessing in a manner of normalizing pixel values. The preprocessing unit (142) can perform preprocessing to adjust the size and resolution of the ultrasound image received so that it can be input to the artificial neural network. The ultrasound image used for learning may include the patient's personal information, which may cause security and personal information leakage issues. Therefore, the preprocessing unit (142) according to the present invention can perform preprocessing to de-identify the patient information in the ultrasound image and metadata.
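An illustrative preprocessing sketch covering the three steps mentioned (pixel-value normalisation, resizing to the network input size, and de-identification of accompanying metadata); the metadata field names treated as identifying are assumptions.

```python
import numpy as np
import cv2

def preprocess_frame(frame_gray, target_size=(256, 256)):
    """Resize to the network input size (width, height) and normalise pixels to [0, 1]."""
    resized = cv2.resize(frame_gray, target_size, interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32) / 255.0

# Illustrative de-identification: blank out metadata fields that could identify
# the patient before the image is used for training.
IDENTIFYING_FIELDS = {"PatientName", "PatientID", "PatientBirthDate", "InstitutionName"}

def deidentify_metadata(metadata: dict) -> dict:
    return {k: ("" if k in IDENTIFYING_FIELDS else v) for k, v in metadata.items()}
```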
학습부(143)는 전처리되고 병변 영역이 주석 처리된 초음파 영상을 이용하여 인공지능을 학습시킬 수 있다. 여기에서 학습시키는 인공지능은 CNN 기반 또는 SOTA 아키텍처 기반의 Semantic segmentation 모델일 수 있다. The learning unit (143) can train artificial intelligence using ultrasound images that have been preprocessed and have lesion areas annotated. Here, the artificial intelligence to be trained can be a CNN-based or SOTA architecture-based semantic segmentation model.
컨볼루션 뉴럴 네트워크 (convolutional neural network; CNN)는 동물의 시각 피질의 구성에서 영감을 받아, 이미지와 같은 격자 패턴을 갖는 데이터를 처리하기 위한 딥 러닝 모델의 일종이다. 컨볼루션 뉴럴 네트워크는 일반적으로 컨볼루션 레이어, 풀링 레이어 및 풀리 커넥티드 레이어 (fully connected layer)를 포함할 수 있다. 컨볼루션 레이어 및 풀링 레이어는 신경망 내에 반복적으로 존재할 수 있으며, 입력 데이터는 이러한 레이어 계층을 통해 출력으로 변환될 수 있다. 컨볼루션은 레이어는 특징 추출을 위해서 커널 (또는 마스크)를 이용하여, 커널의 각 요소와 입력 값 간의 요소별 곱은 각각의 위치에서 계산되고 합산되어 출력 값을 얻게되며, 이를 특징 맵 (feature map)이라 지칭한다. 이러한 절차는 임의의 수의 특징 맵을 형성하기 위해 여러 커널을 적용하며 반복될 수 있다. 컨볼루션 뉴럴 네트워크에서 컨볼루션 및 풀링 레이어는 특징 추출을 수행하는 반면, 풀리 커넥티드 레이어는 추출된 특징을 분류하는 동작 등의 최종 출력에 매핑한다. A convolutional neural network (CNN) is a type of deep learning model that is inspired by the structure of the visual cortex of animals and is designed to process data with grid patterns such as images. A convolutional neural network may generally include a convolutional layer, a pooling layer, and a fully connected layer. The convolutional layer and the pooling layer may exist repeatedly in the neural network, and the input data may be converted to an output through these layers. The convolutional layer uses a kernel (or mask) for feature extraction, and the element-wise product between each element of the kernel and the input value is calculated at each location and summed to obtain an output value, which is called a feature map. This procedure may be repeated by applying multiple kernels to form an arbitrary number of feature maps. In a convolutional neural network, the convolutional and pooling layers perform feature extraction, while the fully connected layer maps the extracted features to the final output, such as a classification operation.
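A small NumPy sketch of the feature-map computation described above, in which the element-wise products of the kernel and the underlying patch are summed at each position; the edge-detecting kernel is just an example.

```python
import numpy as np

def conv2d_feature_map(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1): at each position the
    element-wise product of the kernel and the underlying image patch is summed,
    producing one value of the feature map."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=np.float32)
# feature_map = conv2d_feature_map(ultrasound_patch, edge_kernel)
```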
컨볼루션 뉴럴 네트워크는 등의 뉴럴 네트워크는 출력 오류를 최소화하는 방향으로 학습될 수 있다. 입력 레이어에서 출력 레이어로부터 값을 추출하는 순전파과정과 별개로, 뉴럴 네트워크 내에서는 입력된 학습데이터 및 이에 대한 뉴럴 네트워크의 출력값 사이의 오류를 계산하고 이러한 오류를 줄이기 위해 각각의 레이어의 노드들에 대한 가중치를 업데이트 하는 역전파 (backpropagation)가 일어나게 된다. 컨볼루션 뉴럴 네트워크에서 학습시키는 과정은 주어진 학습데이터를 기반으로, 가장 오류가 적은 출력값을 추출하는 커널을 찾는 과정으로 요약될 수 있다. 커널은 컨볼루션 레이어의 훈련과정에서 자동으로 학습되는 유일한 매개변수이다. 반면, 컨볼루션 뉴럴 네트워크에서 커널의 크기, 커널의 수, 패딩 등은 훈련 과정을 시작하기 전에 설정해야하는 하이퍼 파라미터이며, 따라서, 커널의 크기, 커널의 수, 컨볼루션 레이어 및 풀링 레이어의 숫자에 따라서 각기 다른 컨볼루션 뉴럴 네트워크 모델로 구분될 수 있다.Convolutional neural networks can be trained in a way that minimizes output errors. Separately from the forward propagation process that extracts values from the input layer to the output layer, backpropagation occurs in the neural network to calculate the error between the input training data and the output value of the neural network for it, and update the weights of the nodes of each layer to reduce this error. The training process in a convolutional neural network can be summarized as the process of finding the kernel that extracts the output value with the least error based on the given training data. The kernel is the only parameter that is automatically learned during the training process of the convolutional layer. On the other hand, the kernel size, the number of kernels, and the padding in the convolutional neural network are hyperparameters that must be set before starting the training process, and therefore, different convolutional neural network models can be distinguished depending on the kernel size, the number of kernels, and the number of convolutional layers and pooling layers.
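A hedged sketch of the training loop implied here: forward pass, pixel-wise loss against the annotated lesion mask, then backpropagation to update the kernels. It assumes the LesionSegNet sketch above and a standard PyTorch data loader; the optimiser and learning rate are hyperparameter choices, not values from the disclosure.

```python
import torch
import torch.nn as nn

def train_one_epoch(model, loader, optimizer, device="cpu"):
    """One pass over annotated ultrasound frames."""
    criterion = nn.BCELoss()               # model already outputs sigmoid probabilities
    model.train()
    total = 0.0
    for images, masks in loader:           # images: (B,1,H,W) floats, masks: (B,1,H,W) in {0,1}
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        pred = model(images)
        loss = criterion(pred, masks)
        loss.backward()                     # backpropagation of the output error
        optimizer.step()
        total += loss.item()
    return total / max(len(loader), 1)

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate is a hyperparameter
```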
병변 예측부(144)는 학습부(143)에서 학습된 인공지능을 이용하여 피검자의 초음파 영상으로부터 병변 영역을 예측하고 검출할 수 있다. The lesion prediction unit (144) can predict and detect a lesion area from an ultrasound image of a subject using artificial intelligence learned in the learning unit (143).
후처리부(145)는 병변 예측부(144)에서 검출된 병변 영역이 다른 영역과 구분되도록 하는 후처리를 수행할 수 있다. 이는 전술한 제1 콘텐츠 생성부(120)에서 수행되는 기능과 동일하다. The post-processing unit (145) can perform post-processing to distinguish the lesion area detected by the lesion prediction unit (144) from other areas. This is the same function as that performed in the first content generation unit (120) described above.
도 6은 본 발명의 실시예에 따른 병변 검출부(140)가 인공지능을 이용하여 병변을 검출하는 프로세스를 나타낸다. 도 6을 참조하면, 병변 검출부(140)는 초음파 영상을 수신한 후 CNN(Convolutional Neural Network)에 입력한 후 고성능 Semantic segmentation 모델에 입력하여 병변 예측 영역과 다른 영역이 구분되는 바이너리 마스크를 획득할 수 있다. 이후 병변 검출부(140)는 병변 예측 영역 중 병변 영역을 선별할 수 있다.Figure 6 shows a process in which a lesion detection unit (140) according to an embodiment of the present invention detects a lesion using artificial intelligence. Referring to Figure 6, the lesion detection unit (140) can receive an ultrasound image, input it into a CNN (Convolutional Neural Network), and then input it into a high-performance Semantic segmentation model to obtain a binary mask in which the lesion prediction area and other areas are distinguished. Thereafter, the lesion detection unit (140) can select a lesion area from the lesion prediction area.
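One way to realise the "binary mask, then select the lesion region" step with OpenCV connected components; the probability threshold and the minimum area are assumed values for illustration.

```python
import cv2
import numpy as np

def select_lesion_region(prob_map, threshold=0.5, min_area_px=50):
    """Threshold the model output into a binary mask, then keep the largest
    connected component above a minimum area as the lesion region."""
    mask = (prob_map >= threshold).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    best_label, best_area = 0, 0
    for lbl in range(1, n):                       # label 0 is the background
        area = int(stats[lbl, cv2.CC_STAT_AREA])
        if area >= min_area_px and area > best_area:
            best_label, best_area = lbl, area
    lesion = np.zeros_like(mask, dtype=np.uint8)
    if best_label:
        lesion[labels == best_label] = 255
    return lesion
```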
도 7은 본 발명의 실시예에 따른 동기화부(150)의 구성도를 나타낸다. 도 7을 참조하면, 동기화부(150)는 지연 동기화 모듈(151)과 예측 동기화 모듈(152)를 포함할 수 있다. Fig. 7 shows a configuration diagram of a synchronization unit (150) according to an embodiment of the present invention. Referring to Fig. 7, the synchronization unit (150) may include a delay synchronization module (151) and a prediction synchronization module (152).
동기화부(150)는 동일한 시점에 측정된 초음파 영상과 생체신호가 출력되고, 동일한 시점에 검출된 병변 영역이 표출될 수 있도록 초음파 영상, 생체신호 및 병변 영역을 동기화시킬 수 있다. 동기화부(150)는 시술자가 동일한 시점의 정보를 기준으로 생검 및 치료를 수행할 수 있도록 하는 중요한 역할을 수행한다. 만약, 초음파 영상과 병변 영역이 각각 다른 시점에 측정된 것이라면, 시술자는 디스플레이된 병변 영역에 병변이 있을 것으로 기대하고 있으나 실제로는 다른 시점의 병변 영역으로 해당 위치에 병변이 없을 수도 있다. The synchronization unit (150) can synchronize the ultrasound image, the biosignal, and the lesion area so that the ultrasound image and the biosignal measured at the same time are output and the lesion area detected at the same time is displayed. The synchronization unit (150) plays an important role in allowing the operator to perform biopsy and treatment based on information from the same point in time. If the ultrasound image and the lesion area were measured at different times, the operator expects a lesion to be in the displayed lesion area, but since the displayed area reflects a different point in time, there may in fact be no lesion at that location.
이와 같이, 각 신호들의 동기화가 필요한 이유는 초음파 영상, 생체신호, 또는 병변 영역이 구성되거나 검출되거나 전송되는데 소요되는 시간이 각각 다르기 때문이다. 동기화부(150)는 출력기기(160)를 통해 시술자에게 디스플레이되는 초음파 영상, 생체신호, 또는 병변 영역을 동일한 시점에 측정된 것으로 동기화시켜 안정성과 신뢰성을 확보할 수 있다. In this way, the reason why synchronization of each signal is necessary is because the time required for the ultrasound image, bio-signal, or lesion area to be composed, detected, or transmitted is different. The synchronization unit (150) can synchronize the ultrasound image, bio-signal, or lesion area displayed to the operator through the output device (160) with those measured at the same time, thereby ensuring stability and reliability.
도 8은 본 발명의 실시예에 따른 지연 동기화 모듈의 동기화 방식을 설명하기 위한 그림을 나타낸다. 도 8을 참조하면, 지연 동기화 모듈(151)은 병변 검출부(140)에서 검출한 병변 영역이 기설정된 기준 값보다 작은 경우 초음파 영상, 생체신호 및 병변 영역 중 가장 지연된 신호를 기준으로 동기화시킬 수 있다. Fig. 8 is a diagram for explaining a synchronization method of a delay synchronization module according to an embodiment of the present invention. Referring to Fig. 8, the delay synchronization module (151) can synchronize based on the most delayed signal among the ultrasound image, biosignal, and lesion region when the lesion region detected by the lesion detection unit (140) is smaller than a preset reference value.
구체적으로 살펴보면, 병변 영역, 초음파 영상, 및 생체신호는 각각 T1, T2, T3만큼 지연되어 신호가 도달될 수 있다(T1>T2>T3). 이때, 가장 지연시간이 긴 병변 영역을 기준으로 동기화를 하게 되면 동기화 기준 시점에 이미 측정된 값들이 존재하므로 동기화에 용이하다. 즉, 초음파 영상은 Tp2 만큼 이전 시점의 신호를 출력하고, 생체신호는 Tp3 만큼 이전 시점의 신호를 출력함으로써 병변 영역과 동기화할 수 있다. 이러한 동기화 방식은 이미 측정된 값들이 존재하므로 신뢰도와 정확도가 높다는 이점이 있으나, 지연시간이 길다는 단점이 있다. Specifically, the lesion area, ultrasound image, and bio-signal can be delayed by T1, T2, and T3, respectively, and the signals can be delivered (T1>T2>T3). At this time, if synchronization is performed based on the lesion area with the longest delay time, it is easy to synchronize because the measured values already exist at the synchronization reference time. That is, the ultrasound image can be synchronized with the lesion area by outputting the signal from a time point earlier than Tp2, and the bio-signal can be synchronized by outputting the signal from a time point earlier than Tp3. This synchronization method has the advantage of high reliability and accuracy because the measured values already exist, but has the disadvantage of a long delay time.
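A minimal sketch of the delayed-synchronisation idea: buffer each stream with acquisition timestamps and emit the set aligned to the latest acquisition time of the most delayed stream. The buffer layout and which stream is the most delayed (here the lesion stream, as in the example above) are assumptions for illustration.

```python
from bisect import bisect_right

def sample_at_or_before(buffer, t):
    """buffer: list of (timestamp, value) in ascending time; return the latest
    sample whose timestamp does not exceed t (None if nothing that old exists)."""
    times = [ts for ts, _ in buffer]
    i = bisect_right(times, t)
    return buffer[i - 1][1] if i else None

def delayed_sync(lesion_buf, image_buf, bio_buf):
    """Align all three streams to the most delayed one: output the newest lesion
    result together with the image and bio-signal samples measured at (or just
    before) that same acquisition time."""
    if not lesion_buf:
        return None
    t_ref, lesion = lesion_buf[-1]
    return {"t": t_ref,
            "lesion": lesion,
            "image": sample_at_or_before(image_buf, t_ref),
            "bio": sample_at_or_before(bio_buf, t_ref)}
```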
도 9는 본 발명의 실시예에 따른 예측 동기화 모듈(152)의 동기화 방식을 설명하기 위한 그림을 나타낸다. 도 9를 참조하면, 예측 동기화 모듈(152)은 병변 검출부(140)에서 검출한 병변 영역이 기설정된 기준 값보다 큰 경우 초음파 영상, 생체신호 및 병변 영역 중 가장 앞선 신호를 기준으로 동기화시킬 수 있다. FIG. 9 is a diagram illustrating a synchronization method of a prediction synchronization module (152) according to an embodiment of the present invention. Referring to FIG. 9, the prediction synchronization module (152) can synchronize based on the most advanced signal among the ultrasound image, biosignal, and lesion area when the lesion area detected by the lesion detection unit (140) is larger than a preset reference value.
구체적으로 살펴보면, 가장 지연시간이 짧은 생체신호를 기준으로 동기화를 하게 되면 동기화 기준 시점에 병변 영역과 초음파 영상은 도달한 데이터가 없다는 문제가 생긴다. 따라서 예측 동기화 모듈(152)은 보외법(extrapolation)을 이용하여, 가장 앞선 신호에 다른 신호를 동기화시킬 수 있다. 보외법이란 얻을 수 있는 자료가 한정되어 있어, 그 이상의 한계를 넘는 값을 얻고자 할 때 쓰는 방법을 의미한다. 다른 실시예로, 예측 동기화 모듈(152)은 인공지능을 이용하여 가장 앞선 신호의 시점까지의 데이터를 예측할 수 있다. Specifically, if synchronization is performed based on the biosignal with the shortest delay time, there is a problem that the lesion area and the ultrasound image do not have data that has reached the synchronization reference point in time. Therefore, the prediction synchronization module (152) can synchronize another signal to the most advanced signal using extrapolation. Extrapolation refers to a method used when the available data is limited and a value that exceeds the limit is desired to be obtained. In another embodiment, the prediction synchronization module (152) can predict data up to the point in time of the most advanced signal using artificial intelligence.
즉, 예측 동기화 모듈(152)은 Tf1 만큼 앞선 시점의 신호를 예측하여 병변 영역을 출력하고, Tf2 만큼 앞선 시점의 신호를 예측하여 초음파 영상을 출력함으로써 생체신호와 동기화할 수 있다. 이러한 동기화 방식은 가장 짧은 지연시간에 동기화하므로 지연시간이 짧다는 장점이 있으나, 아직 측정되지 않은 값들을 예측하여 출력하므로 신뢰도와 정확도가 떨어진다는 단점이 있다.That is, the prediction synchronization module (152) can synchronize with the biosignal by predicting the signal at a time point ahead by Tf1 and outputting the lesion area, and predicting the signal at a time point ahead by Tf2 and outputting the ultrasound image. This synchronization method has the advantage of a short delay time because it synchronizes at the shortest delay time, but has the disadvantage of low reliability and accuracy because it predicts and outputs values that have not yet been measured.
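A corresponding sketch of the predictive synchronisation: align to the least delayed stream and linearly extrapolate the others to that newer timestamp. The values may be NumPy arrays (for example lesion-boundary coordinates), in which case the extrapolation is element-wise; the choice of linear extrapolation is one simple instance of the extrapolation idea, not the only option described.

```python
def extrapolate(buffer, t_target):
    """Linear extrapolation of a stream to a future timestamp from its last two
    samples; falls back to the last value if only one sample is available."""
    if len(buffer) < 2:
        return buffer[-1][1] if buffer else None
    (t0, v0), (t1, v1) = buffer[-2], buffer[-1]
    if t1 == t0:
        return v1
    slope = (v1 - v0) / (t1 - t0)
    return v1 + slope * (t_target - t1)

def predictive_sync(lesion_buf, image_buf, bio_buf):
    """Align to the least delayed stream (assumed to be the bio-signal): the
    lesion and image values at that newer timestamp are predicted."""
    if not bio_buf:
        return None
    t_ref, bio = bio_buf[-1]
    return {"t": t_ref,
            "bio": bio,
            "lesion": extrapolate(lesion_buf, t_ref),
            "image": extrapolate(image_buf, t_ref)}
```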
따라서, 본 발명의 실시예에 따른 혼합현실 기반 초음파 영상 출력 시스템(100)은 검출된 병변이 작은 경우에는 정확도가 높은 지연 동기화 모듈(151)을 사용하고, 검출된 병변이 큰 경우에는 지연 시간이 짧은 예측 동기화 모듈(152)을 사용할 수 있다. 지연 동기화 모듈(151)과 예측 동기화 모듈(152)은 사용자의 설정에 따라 선택될 수도 있다. Therefore, the mixed reality-based ultrasound image output system (100) according to the embodiment of the present invention can use the delay synchronization module (151), which has high accuracy, when the detected lesion is small, and the prediction synchronization module (152), which has a short delay time, when the detected lesion is large. The delay synchronization module (151) and the prediction synchronization module (152) can also be selected according to user settings.
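Putting the two modules together as the paragraph suggests, a hypothetical dispatcher might look as follows; it reuses delayed_sync and predictive_sync from the sketches above, and the area threshold is an assumption that could equally be exposed as a user setting.

```python
def synchronize(lesion_buf, image_buf, bio_buf, lesion_area_px, area_threshold_px=2000):
    """Small lesions demand accuracy, so use the delayed (already-measured) sync;
    large lesions tolerate prediction error, so use the low-latency predictive sync."""
    if lesion_area_px < area_threshold_px:
        return delayed_sync(lesion_buf, image_buf, bio_buf)
    return predictive_sync(lesion_buf, image_buf, bio_buf)
```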
도 10은 본 발명의 실시예에 따른 출력기기(160)를 통해 디스플레이되는 화면을 나타낸다. 도 10을 참조하면, 출력기기(160)는 제1 콘텐츠 생성부(120) 또는 제2 콘텐츠 생성부(130)에서 생성된 MR 콘텐츠를 디스플레이할 수 있다. 출력기기(160)는 시술자가 착용할 수 있는 기기일 수 있다. Fig. 10 illustrates a screen displayed through an output device (160) according to an embodiment of the present invention. Referring to Fig. 10, the output device (160) can display MR content generated by the first content generation unit (120) or the second content generation unit (130). The output device (160) can be a device that can be worn by a practitioner.
출력기기(160)는 제1 MR 콘텐츠와 제2 MR 콘텐츠를 증강시키는 방식으로 디스플레이할 수 있도록 AR/VR/MR 기기, 글래스, 또는 HMD일 수 있다. 출력기기(160)는 내부에 혼합현실 구현에 필요한 컴퓨팅 연산을 할 수 있도록 프로세서 및 배터리를 포함할 수 있다. 출력기기(160)는 경량화를 위해 컴퓨팅 연산을 할 수 있도록 프로세서 및 배터리를 포함하지 않을 수 있다. The output device (160) may be an AR/VR/MR device, glasses, or HMD that can display the first MR content and the second MR content in an augmented manner. The output device (160) may include a processor and a battery to perform computing operations required for implementing mixed reality internally. The output device (160) may not include a processor and a battery to perform computing operations in order to reduce weight.
출력기기(160)는 착용자의 전방방향을 촬영하는 제1 카메라와 착용자의 안면을 촬영하는 제2 카메라를 포함할 수 있다. 제1 카메라는 초음파 진단기기의 프로브 형상을 인식하고, 제2 카메라는 착용자의 시선방향을 추적할 수 있다. 제1 카메라는 전술한 제1 콘텐츠 생성부(120)에서 프로브 형상을 인식하는데 사용될 수 있다. The output device (160) may include a first camera that photographs the front direction of the wearer and a second camera that photographs the face of the wearer. The first camera may recognize the probe shape of the ultrasonic diagnostic device, and the second camera may track the direction of the wearer's gaze. The first camera may be used to recognize the probe shape in the first content generation unit (120) described above.
출력기기(160)는 제2 카메라에서 추적한 착용자의 시선방향을 따라 제2 MR 콘텐츠를 디스플레이할 수 있다. 전술한 바와 같이, 제2 MR 콘텐츠는 생체신호로 구성될 수 있으며, 생체신호는 시술자가 초음파 영상에 집중하는 상황에서도 상태가 파악될 수 있어야 한다. 따라서, 출력기기(160)는 제2 카메라를 이용하여 착용자의 좌측 또는 우측 눈동자의 시선방향을 감지하여, 해당 시선방향 근처에 제2 MR 콘텐츠를 디스플레이할 수 있다. The output device (160) can display the second MR content along the direction of the wearer's gaze tracked by the second camera. As described above, the second MR content may consist of bio-signals, and their status must remain recognizable to the operator even while the operator is concentrating on the ultrasound image. Accordingly, the output device (160) can detect the gaze direction of the wearer's left or right eye using the second camera and display the second MR content near that gaze direction.
출력기기(160)는 제2 카메라가 촬영한 착용자의 눈 또는 눈동자의 움직임에 의해 제어할 수 있다. 시술자는 초음파 진단을 위해 한손에는 초음파 프로브를 파지하고 다른 한손에는 생검 또는 치료를 위해 바늘을 파지하게 된다. 따라서, 시술자는 출력기기(160)를 제어하기 위해서는 파지한 프로브 또는 바늘을 내려놓거나, 보조자에게 요청해야만 하는데, 시술자가 출력기기(160)를 빈번하게 제어하거나 응급상황이 발생하여 출력기기(160)를 신속하게 제어해야 하는 경우 제어하는데 한계점이 존재한다. 따라서, 출력기기(160)는 착용자의 눈의 깜빡임, 눈동자의 움직임에 의해 제어됨으로써, 시술자에게 높은 자유도를 제공할 수 있다. 여기에서, 제어는 UI 변경, 제1 MR 콘텐츠 또는 제2 MR 콘텐츠의 온/오프 등을 의미할 수 있다. The output device (160) can be controlled by the movement of the wearer's eyes or pupils captured by the second camera. The operator holds an ultrasound probe in one hand for ultrasound diagnosis and a needle in the other hand for biopsy or treatment. Therefore, in order to control the output device (160), the operator must put down the probe or needle that he or she is holding or request an assistant to do so. However, there is a limitation in controlling the output device (160) if the operator frequently controls the output device (160) or if an emergency situation occurs and the output device (160) must be quickly controlled. Therefore, the output device (160) can provide the operator with a high degree of freedom by being controlled by the blinking of the wearer's eyes or the movement of the pupils. Here, the control may mean changing the UI, turning on/off the first MR content or the second MR content, etc.
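A toy sketch of the hands-free control described above: a deliberate double blink, detected via the second camera, toggles MR content so the operator's hands can stay on the probe and the needle. The blink-window length and the mapping of the gesture to a control action are illustrative assumptions.

```python
import time

class BlinkController:
    """Toggle MR content with a deliberate double blink."""
    def __init__(self, double_blink_window_s=0.6):
        self.window = double_blink_window_s
        self.last_blink_t = None
        self.content_on = True

    def on_blink(self, now=None):
        """Call whenever the eye-facing camera reports a blink; returns the new state."""
        now = time.time() if now is None else now
        if self.last_blink_t is not None and (now - self.last_blink_t) <= self.window:
            self.content_on = not self.content_on   # double blink: toggle first/second MR content
            self.last_blink_t = None
        else:
            self.last_blink_t = now
        return self.content_on
```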
본 발명의 다른 실시예인 혼합현실 기반 초음파 영상 출력 방법은 데이터 수신단계, 제1 콘텐츠 생성단계, 제2 콘텐츠 생성단계, 병변 검출단계, 동기화 단계, 및 디스플레이하는 단계를 포함할 수 있다.A mixed reality-based ultrasound image output method, which is another embodiment of the present invention, may include a data receiving step, a first content generating step, a second content generating step, a lesion detection step, a synchronization step, and a displaying step.
데이터 수신단계는 초음파 진단기기와 생체신호 측정기기로부터 피검자의 초음파 영상 및 생체신호를 수신할 수 있다. 데이터 수신단계는 전술한 데이터 수신부(110)에서 수행되는 동작을 의미한다. The data receiving step can receive ultrasound images and bio-signals of the subject from the ultrasonic diagnostic device and the bio-signal measuring device. The data receiving step refers to an operation performed in the data receiving unit (110) described above.
제1 콘텐츠 생성단계는 초음파 영상을 특정 객체에 정합하여 제1 MR(Mixed-reality) 콘텐츠를 생성할 수 있다. 제1 콘텐츠 생성단계는 출력기기에 구비된 카메라를 통해 초음파 진단기기의 프로브 형상을 인식하고, 상기 프로브 형상의 움직임을 감지하여 상기 제1 MR 콘텐츠의 출력 위치를 변경할 수 있다. 제1 콘텐츠 생성단계는 전술한 제1 콘텐츠 생성부(120)에서 수행되는 동작을 의미한다. The first content generation step can generate first MR (Mixed-reality) content by aligning an ultrasound image with a specific object. The first content generation step can recognize the probe shape of an ultrasound diagnostic device through a camera equipped in an output device, and detect the movement of the probe shape to change the output position of the first MR content. The first content generation step refers to an operation performed in the first content generation unit (120) described above.
제2 콘텐츠 생성단계는 생체신호를 일정 공간에 정합하여 제2 MR 콘텐츠를 생성할 수 있다. 제2 콘텐츠 생성단계는 전술한 제2 콘텐츠 생성부(130)에서 수행되는 동작을 의미한다. The second content generation step can generate second MR content by aligning biosignals to a certain space. The second content generation step refers to an operation performed in the second content generation unit (130) described above.
병변 검출단계는 병변 영역이 주석 처리된 초음파 영상 데이터 세트로 학습된 인공지능을 이용하여 초음파 영상에서 병변 영역을 검출할 수 있다. 병변 검출단계는 전술한 병변 검출부(140)에서 수행되는 동작을 의미한다. The lesion detection step can detect lesion areas in ultrasound images using artificial intelligence trained on an ultrasound image data set in which lesion areas are annotated. The lesion detection step refers to an operation performed in the lesion detection unit (140) described above.
동기화 단계는 동일한 시점에 측정된 초음파 영상과 생체신호가 출력되고, 동일한 시점에 검출된 병변 영역이 표출될 수 있도록 초음파 영상, 생체신호 및 병변 영역을 동기화시킬 수 있다. 동기화 단계는 전술한 동기화부(150)에서 수행되는 동작을 의미한다. The synchronization step can synchronize the ultrasound image, biosignal, and lesion area so that the ultrasound image and biosignal measured at the same time can be output and the lesion area detected at the same time can be displayed. The synchronization step means the operation performed in the synchronization unit (150) described above.
디스플레이하는 단계는 제1 콘텐츠 생성단계 또는 상기 제2 콘텐츠 생성단계에서 생성된 MR 콘텐츠를 출력기기에 디스플레이할 수 있다. 디스플레이하는 단계는 전술한 출력기기(160)에서 수행되는 동작을 의미한다. The displaying step can display the MR content generated in the first content generation step or the second content generation step on an output device. The displaying step refers to an operation performed on the aforementioned output device (160).
이상에서 대표적인 실시예를 통하여 본 발명을 상세하게 설명하였으나, 본 발명이 속하는 기술 분야에서 통상의 지식을 가진 자는 상술한 실시예에 대하여 본 발명의 범주에서 벗어나지 않는 한도 내에서 다양한 변형이 가능함을 이해할 것이다. 그러므로 본 발명의 권리 범위는 설명한 실시예에 국한되어 정해져서는 안 되며, 후술하는 특허청구범위뿐만 아니라 특허청구범위와 균등 개념으로부터 도출되는 모든 변경 또는 변형된 형태에 의하여 정해져야 한다.Although the present invention has been described in detail through representative examples above, those skilled in the art will understand that various modifications can be made to the above-described embodiments without departing from the scope of the present invention. Therefore, the scope of the rights of the present invention should not be limited to the described embodiments, but should be determined by all changes or modifications derived from the claims and equivalent concepts as well as the claims.
[부호의 설명][Explanation of symbols]
100: 혼합현실 기반 초음파 영상 출력 시스템100: Mixed reality based ultrasound image output system
110: 데이터 수신부 120: 제1 콘텐츠 생성부110: Data receiving unit 120: First content generating unit
130: 제2 콘텐츠 생성부 140: 병변 검출부130: Second content creation unit 140: Lesion detection unit
141: 입력부 142: 전처리부141: Input section 142: Preprocessing section
143: 학습부 144: 병변 예측부143: Learning section 144: Lesion prediction section
145: 후처리부 150: 동기화부145: Post-processing section 150: Synchronization section
151: 지연 동기화 모듈 152: 예측 동기화 모듈151: Delayed Synchronization Module 152: Predictive Synchronization Module
160: 출력기기160: Output device
Claims (11)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2023-0092427 | 2023-07-17 | ||
| KR1020230092427A KR20250012331A (en) | 2023-07-17 | 2023-07-17 | Mixed reality-based ultrasound image output system and method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025018587A1 true WO2025018587A1 (en) | 2025-01-23 |
Family
ID=94281981
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2024/007759 Pending WO2025018587A1 (en) | 2023-07-17 | 2024-06-07 | Mixed reality-based ultrasound image output system and method |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR20250012331A (en) |
| WO (1) | WO2025018587A1 (en) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5662109A (en) * | 1990-12-14 | 1997-09-02 | Hutson; William H. | Method and system for multi-dimensional imaging and analysis for early detection of diseased tissue |
| KR20080053057A (en) * | 2006-12-08 | 2008-06-12 | 주식회사 메디슨 | Ultrasound Imaging System and Method for Forming and Displaying Mixed Image of Ultrasound Image and External Medical Image |
| KR20150000627A (en) * | 2013-06-25 | 2015-01-05 | 삼성전자주식회사 | Ultrasonic imaging apparatus and control method for thereof |
| KR20150108701A (en) * | 2014-03-18 | 2015-09-30 | 삼성전자주식회사 | System and method for visualizing anatomic elements in a medical image |
| JP2017159027A (en) * | 2016-03-07 | 2017-09-14 | 東芝メディカルシステムズ株式会社 | Ultrasonograph and ultrasonic diagnosis support device |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11287874B2 (en) | 2018-11-17 | 2022-03-29 | Novarad Corporation | Using optical codes with augmented reality displays |
- 2023-07-17: KR application KR1020230092427A filed (published as KR20250012331A, status: Pending)
- 2024-06-07: PCT application PCT/KR2024/007759 filed (published as WO2025018587A1, status: Pending)
Also Published As
| Publication number | Publication date |
|---|---|
| KR20250012331A (en) | 2025-01-24 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24843308; Country of ref document: EP; Kind code of ref document: A1 |