
WO2025018587A1 - Mixed reality-based ultrasound image output system and method - Google Patents

Mixed reality-based ultrasound image output system and method

Info

Publication number
WO2025018587A1
WO2025018587A1 (PCT/KR2024/007759)
Authority
WO
WIPO (PCT)
Prior art keywords
ultrasound image
content
lesion
mixed reality
lesion area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2024/007759
Other languages
English (en)
Korean (ko)
Inventor
김도균
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Life Public Welfare Foundation
Original Assignee
Samsung Life Public Welfare Foundation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Life Public Welfare Foundation filed Critical Samsung Life Public Welfare Foundation
Publication of WO2025018587A1
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0033: Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
    • A61B 5/74: Details of notification to user or communication with user or patient; User input means
    • A61B 5/742: Details of notification to user or communication with user or patient; User input means using visual displays
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08: Clinical applications
    • A61B 8/46: Ultrasonic, sonic or infrasonic diagnostic devices with special arrangements for interfacing with the operator or the patient
    • A61B 8/461: Displaying means of special interest
    • A61B 8/463: Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
    • A61B 8/52: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B 8/5223: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • A61B 8/5292: Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves using additional data, e.g. patient information, image labeling, acquisition parameters
    • A61B 90/00: Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36: Image-producing devices or illumination devices not otherwise provided for
    • A61B 2090/364: Correlation of different images or relation of image positions in respect to the body
    • A61B 2090/365: Correlation of different images or relation of image positions in respect to the body augmented reality, i.e. correlating a live optical image with another image
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40: ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • the present invention relates to a mixed reality-based ultrasound image output system and method, and more particularly, to a mixed reality-based ultrasound image output system and method that aligns and outputs an ultrasound image on an examination area using a device that displays in an AR or VR manner.
  • Ultrasound-guided biopsy is a test that obtains tissue samples by inserting a thin needle into a mass or local lesion outside the body under ultrasound guidance. Ultrasound-guided biopsy can help the needle accurately pierce the lesion, which can increase accuracy and reduce false negatives. Accordingly, ultrasound-guided biopsy is widely used in clinical practice.
  • Ultrasound-guided radiofrequency ablation is a method of treating tumors by burning them using the heat generated by inserting a needle-shaped electrode into the tumor and then passing an electric current while looking at the ultrasound examination screen.
  • Ultrasound-guided RF ablation has the advantage of being able to quickly and accurately insert the electrode inside the cancerous tumor and treat it through real-time guidance compared to CT (Computerized Tomography) and MRI (Magnetic Resonance Imaging) images.
  • Ultrasound-guided biopsy can cause high fatigue because the operator must hold the ultrasound probe for a long time and frequently turn the head to check and compare the ultrasound image with the biopsy site.
  • Ultrasound-guided biopsy is difficult to perform because the ultrasound monitor and biopsy site cannot be viewed at the same time, and it requires high expertise and skills, so only experienced specialists can perform it.
  • the ultrasound equipment is often placed far away due to the patient's bed, numerous pieces of equipment in the operating room, etc., which can hinder ultrasound-guided biopsy.
  • Ultrasound-guided radiofrequency ablation therapy may take longer because, with the earlier detection of cancerous tumors, the tumors to be treated are progressively smaller and their location becomes difficult to identify with ultrasound imaging.
  • Ultrasound-guided radiofrequency ablation therapy requires a high level of concentration during the treatment, making it difficult to timely monitor the patient's physical risk signs such as blood pressure, heart rate, and oxygen saturation.
  • the purpose of the present invention is to enable a practitioner to simultaneously view a biopsy site and ultrasound through a wearable output device, and to superimpose an ultrasound image and a lesion site on the biopsy site.
  • the present invention seeks to synchronize input signals to ensure the reliability of output information.
  • the present invention comprises: a data receiving unit which receives an ultrasound image and a bio-signal of a subject; a first content generating unit which generates a first MR (Mixed-reality) content by aligning the ultrasound image with a specific object; a second content generating unit which generates a second MR content by aligning the bio-signal with a specific space; and an output device which displays the MR content generated by the first content generating unit or the second content generating unit; wherein the first content generating unit is characterized in that it recognizes the probe shape of an ultrasound diagnostic device through a camera provided in the output device, and detects movement of the probe shape to change the output position of the first MR content.
  • the system may further include a lesion detection unit that detects a lesion area in the ultrasound image using artificial intelligence trained on an ultrasound image data set in which the lesion area is annotated.
  • the first content generation unit can change the first MR content so that the lesion area detected by the lesion detection unit is distinguished from other areas.
  • the first content generation unit can change the MR content so that the lesion area is not distinguished from other areas when it is confirmed through the ultrasound image that the needle has contacted the lesion area.
  • the system may further include a synchronization unit that synchronizes the ultrasound image, the bio-signal, and the lesion area so that the ultrasound image and the bio-signal measured at the same time can be output and the lesion area detected at the same time can be displayed.
  • the synchronization unit may include a delay synchronization module that synchronizes based on the most delayed signal among the ultrasound image, the bio-signal, and the lesion area when the lesion area detected by the lesion detection unit is smaller than a preset reference value; and a prediction synchronization module that synchronizes based on the most advanced signal among the ultrasound image, the bio-signal, and the lesion area when the lesion area detected by the lesion detection unit is larger than a preset reference value.
  • the prediction synchronization module can synchronize another signal to the most advanced signal by using extrapolation.
  • the output device includes a first camera for photographing the forward direction of the wearer and a second camera for photographing the face of the wearer, wherein the first camera can recognize the probe shape of the ultrasonic diagnostic device, and the second camera can track the direction of the wearer's gaze.
  • the output device can display the second MR content along the direction of the wearer's gaze tracked by the second camera.
  • the output device can be controlled by the movement of the wearer's eye or pupil captured by the second camera.
  • the present invention includes a data receiving step of receiving an ultrasound image and a bio-signal of a subject; a first content generating step of generating first MR (Mixed-reality) content by aligning the ultrasound image with a specific object; a second content generating step of generating second MR content by aligning the bio-signal with a specific space; and a step of displaying the MR content generated in the first content generating step or the second content generating step on an output device; wherein, in the first content generating step, the shape of a probe of an ultrasound diagnostic device is recognized through a camera provided in the output device, and the movement of the probe shape is detected to change the output position of the first MR content.
  • the present invention has the advantage of reducing operator fatigue by allowing the biopsy site and ultrasound to be viewed simultaneously, and increasing accuracy and stability by allowing the ultrasound image and lesion site to be superimposed on the biopsy site.
  • the present invention has an advantage in that it can ensure patient safety by displaying a physical danger signal in the line of sight of the operator through an output device that can be worn by the operator.
  • the present invention has the advantage of ensuring the reliability of output information by synchronizing input signals and preventing the needle from being inserted into a different part.
  • the present invention can increase the freedom of the operator by providing an environment in which equipment can be controlled without touching.
  • Figure 1 shows a configuration diagram of a mixed reality-based ultrasound image output system according to an embodiment of the present invention.
  • Figure 2 shows a configuration of a stand-alone mixed reality-based ultrasound image output system according to an embodiment of the present invention.
  • FIG. 3 shows a configuration of a mixed reality-based ultrasound image output system using a smartphone tethering method according to an embodiment of the present invention.
  • FIG. 4 is a drawing for explaining the relationship between the size of MR content output to an output device and the probe angle according to an embodiment of the present invention.
  • Figure 5 shows a configuration diagram of a lesion detection unit according to an embodiment of the present invention.
  • Figure 6 shows a process in which a lesion detection unit according to an embodiment of the present invention detects a lesion using artificial intelligence.
  • Figure 7 shows a configuration diagram of a synchronization unit according to an embodiment of the present invention.
  • FIG. 8 is a diagram illustrating a synchronization method of a delay synchronization module according to an embodiment of the present invention.
  • FIG. 9 is a diagram illustrating a synchronization method of a prediction synchronization module according to an embodiment of the present invention.
  • Figure 10 shows a screen displayed through an output device according to an embodiment of the present invention.
  • a data receiving unit that receives ultrasound images and bio-signals of a subject
  • a first content generation unit that generates first MR (Mixed-reality) content by aligning the ultrasound image with a specific object
  • a mixed reality-based ultrasound image output system characterized by recognizing the probe shape of an ultrasound diagnostic device through a camera equipped in the output device, detecting movement of the probe shape, and changing the output position of the first MR content.
  • first, second, etc. may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
  • the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
  • FIG. 1 illustrates a configuration diagram of a mixed reality-based ultrasound image output system (100) according to an embodiment of the present invention.
  • the mixed reality-based ultrasound image output system (100) may include a data receiving unit (110), a first content generating unit (120), a second content generating unit (130), a lesion detection unit (140), a synchronization unit (150), and an output device (160).
  • the mixed reality-based ultrasound image output system (100) has the advantage of ensuring the reliability of the output information by synchronizing the input signals, preventing the needle from being inserted into the wrong location because of signals output at different times.
  • the mixed reality-based ultrasound image output system (100) can increase the freedom of the operator by providing an environment in which the equipment can be controlled without touching.
  • the AI analysis server can output AI analysis results, where the AI analysis may mean lesion detection. That is, the AI analysis server may mean the lesion detection unit (140) of the present invention.
  • the AI analysis server may provide the AI analysis results to an MR HMD (Head Mounted Display) or glasses via the Internet.
  • MR HMD or glasses can receive patient bio-signal change information and ultrasound images from a medical device data processing server via the Internet, and using the received data, can create first MR (Mixed-reality) content by aligning the ultrasound image with a specific object, and can create second MR content by aligning the bio-signal with a certain space. That is, in the stand-alone method, the first content generation unit (120) and the second content generation unit (130) can be included in the output device (160).
  • FIG. 3 shows the configuration of a mixed reality-based ultrasound image output system using a smartphone tethering method according to an embodiment of the present invention.
  • a mixed reality-based ultrasound image output system (100) can be connected to and operated by a smartphone.
  • in the smartphone tethering method, the computing operations required for implementing mixed reality are performed on a smartphone rather than in the output device (160) itself.
  • the smartphone tethering method has the advantage of being lightweight and generating less heat because there is no processor or battery for computing inside the output device (160).
  • the smartphone tethering method can transmit patient bio-signal change information, ultrasound images, and AI analysis results to a smartphone via the Internet, and the smartphone can use the transmitted data to align the ultrasound images with a specific object to generate first MR (Mixed-reality) content, and align the bio-signals with a specific space to generate second MR content. That is, the smartphone tethering method may not include the first content generation unit (120) and the second content generation unit (130) within the output device (160).
  • the data receiving unit (110) can receive ultrasound images and bio-signals of a subject.
  • the data receiving unit (110) can be connected to an ultrasound diagnostic device and a bio-signal measuring device by wire or wirelessly to receive ultrasound images and bio-signals in real time.
  • the data receiving unit (110) can transmit the received ultrasound images and bio-signals to a first content generating unit (120), a second content generating unit (130), or a lesion detecting unit (140).
  • when the data receiving unit (110) is connected wirelessly to another component, it can use a wide-area mobile communication network such as CDMA (Code Division Multiple Access), WCDMA (Wideband CDMA), LTE (Long Term Evolution), or Wi-Fi.
  • when the data receiving unit (110) is connected wirelessly to another component, it can also use short-range wireless communication such as Bluetooth.
  • the first content generation unit (120) can generate the first MR (Mixed-reality) content by aligning the ultrasound image with a specific object.
  • the specific object may mean a body part that can be directly superimposed 1:1 on the anatomical structure of the location that the ultrasound image is intended to measure.
  • conventionally, a tracking pattern (marker, QR code, etc.) attached to the probe of the ultrasound diagnostic device is analyzed to calculate the position where the ultrasound image should be aligned.
  • however, the operator continuously moves the probe to accurately obtain the necessary ultrasound image, so the tracking pattern attached to the probe may become deformed or hidden, in which case the camera cannot recognize the tracking pattern.
  • the first content generation unit (120) can recognize the probe shape of the ultrasonic diagnostic device through a camera equipped in the output device (160) and detect the movement of the probe shape to change the output position of the first MR content.
  • the first content generation unit (120) can output the first MR content in the direction in which the front of the probe faces through the recognized probe shape, and preferably, can change the output position of the first MR content so that the ultrasound image directly overlaps 1:1 with the anatomical structure of the location to be measured toward the front of the probe based on the part where the probe and the subject's body come into contact.
  • an ultrasound image is an image of the body plane perpendicular to the probe. Therefore, outputting the first MR content so that the ultrasound image is directly superimposed on the anatomical structure of the location to be measured means that the output device renders the ultrasound image as if it were physically located beneath the probe.
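  • As a concrete illustration of the probe-shape recognition and overlay repositioning described above, the following is a minimal Python/OpenCV sketch only; the template image, confidence threshold, and blending weights are assumptions for illustration, not details taken from the invention.

```python
# Hypothetical sketch: locate the probe by its shape in the head-mounted camera
# frame and anchor the ultrasound overlay directly beneath the detected probe tip.
# The template image, threshold, and blending weights are illustrative only.
import cv2
import numpy as np

def locate_probe(frame_gray: np.ndarray, probe_template: np.ndarray):
    """Return (x, y, w, h) of the best template match for the probe shape."""
    result = cv2.matchTemplate(frame_gray, probe_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < 0.6:                      # confidence threshold (assumed)
        return None                        # probe not visible in this frame
    h, w = probe_template.shape[:2]
    return (*max_loc, w, h)

def place_ultrasound_overlay(frame: np.ndarray, ultrasound: np.ndarray, probe_box):
    """Alpha-blend the ultrasound image so it hangs below the probe contact point.

    Both images are assumed to be 3-channel uint8 (BGR)."""
    x, y, w, h = probe_box
    top = y + h                            # just below the probe footprint
    uh, uw = ultrasound.shape[:2]
    cx = x + w // 2 - uw // 2              # centered under the probe
    roi = frame[top:top + uh, cx:cx + uw]
    if roi.shape[:2] != (uh, uw):
        return frame                       # overlay would fall outside the frame
    blended = cv2.addWeighted(roi, 0.4, ultrasound, 0.6, 0)
    frame[top:top + uh, cx:cx + uw] = blended
    return frame
```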
  • FIG. 4 is a diagram for explaining the relationship between the size of MR content output to an output device according to an embodiment of the present invention and the probe angle.
  • the ultrasound image area output within the AR glasses of the practitioner can be confirmed. When the practitioner is wearing AR glasses, how the measurement area under the probe appears from the position of the practitioner's eyes is calculated and output to the AR glasses. Therefore, if the MR image is output so as to be directly superimposed 1:1 on the anatomical structure of the location to be measured, the three-dimensional location of the actual lesion inside the body can be known accurately, and a needle can be inserted precisely into that location.
  • the size of the ultrasound image output to the AR glasses can be determined by calculating how the measurement area under the probe appears to the eyes of the practitioner.
  • the ultrasound image area displayed within the operator's AR glasses can be confirmed.
  • the viewing angle for the measurement area under the probe becomes larger than when the probe outputs ultrasound perpendicularly to the subject's body, and accordingly, the size of the ultrasound image area displayed within the AR glasses also becomes larger.
  • the ultrasound image area displayed within the operator's AR glasses can be confirmed.
  • the viewing angle for the measurement area below the probe becomes smaller than when the probe outputs ultrasound perpendicularly to the subject's body, and accordingly, the size of the ultrasound image area displayed within the AR glasses also becomes smaller.
  • the size of the ultrasound image displayed on the output device may vary depending on the positional relationship between the output device worn by the practitioner and the probe, such as the angle of the probe, the distance between the practitioner and the probe, etc.
  • if the size of the displayed ultrasound image is excessively small, the practitioner may have difficulty performing the procedure while viewing the ultrasound image.
  • in this case, a separate ultrasound image may be displayed in the output device. Accordingly, the practitioner can safely perform the procedure through the separately displayed ultrasound image even if the aligned ultrasound image becomes excessively small depending on the positional relationship between the output device and the probe.
  • a separate ultrasound image may be displayed in a specific area of the AR display according to the control of the practitioner.
  • the first content generation unit (120) determines the location where the ultrasound image will be augmented not by detecting a tracking pattern but from the shape of the probe itself, so that reliable ultrasound images can be output even when the probe moves during the procedure.
  • because the mixed reality-based ultrasound image output system (100) uses the shape of the probe rather than a tracking pattern, it can be applied to existing ultrasound diagnostic devices simply by adding modularized equipment, without a separate installation.
  • the first content generation unit (120) can change the first MR content so that the lesion area detected by the lesion detection unit (140) can be distinguished from other areas. The operator must distinguish the lesion in the ultrasound image, but distinguishing the lesion area from the surrounding areas is very difficult, so at present lesion detection depends on the operator's skill.
  • the first content generation unit (120) can emphasize the outer line of the lesion area detected by the lesion detection unit (140) or emphasize it by changing the color of the inside of the lesion area. If the user wants to check the inside of the lesion according to the user's setting, the first content generation unit (120) can emphasize the outer line of the detected lesion area. If the user wants to check the size of the lesion according to the user's setting, the first content generation unit (120) can emphasize the lesion area by changing the color of the inside of the lesion area.
  • the first content generation unit (120) can change the MR content so that the lesion area is no longer distinguished from other areas when it is confirmed through the ultrasound image that the needle has contacted the lesion area. Even when the operator has identified the lesion area, accurately inserting a needle into it to acquire lesion tissue or treat the lesion is a very difficult technique. Therefore, when the needle has accurately entered the lesion area, the first content generation unit (120) can visually notify the operator of the contact by restoring the highlighted lesion area so that it looks the same as the other areas.
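  • A minimal sketch of the two emphasis modes and of clearing the emphasis on needle contact follows; it assumes an OpenCV-style mask overlay, and the colours, blending weights, and mode flag are illustrative rather than taken from the invention.

```python
# Sketch: outline vs. filled-colour highlighting of the detected lesion mask,
# cleared once needle contact with the lesion has been confirmed.
import cv2
import numpy as np

def render_lesion_highlight(image_bgr: np.ndarray, lesion_mask: np.ndarray,
                            mode: str = "outline", needle_in_lesion: bool = False):
    out = image_bgr.copy()
    if needle_in_lesion:
        return out                                  # restore: lesion no longer emphasized
    contours, _ = cv2.findContours(lesion_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if mode == "outline":                           # user wants to inspect the interior
        cv2.drawContours(out, contours, -1, (0, 255, 0), thickness=2)
    else:                                           # "fill": user wants to judge the size
        overlay = out.copy()
        cv2.drawContours(overlay, contours, -1, (0, 0, 255), thickness=cv2.FILLED)
        out = cv2.addWeighted(overlay, 0.35, out, 0.65, 0)
    return out
```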
  • the second content generation unit (130) can generate second MR content by aligning a biosignal to a predetermined space.
  • the predetermined space means a space where the operator can check the output biosignal while focusing on the ultrasound image and performing a biopsy or treatment. Preferably, it can mean the gaze direction of the operator's right or left eye.
  • the biosignal can include an electrocardiogram (ECG), an electroencephalogram (EEG), an electromyogram (EMG), a galvanic skin response (GSR), a skin temperature (SKT), a photoplethysmography (PPG), a pulse rate (PR), blood pressure, oxygen saturation, etc.
  • the second content generation unit (130) can determine that a dangerous situation has occurred for the subject if the bio-signal exceeds the reference value and can provide a warning or notification to the operator.
  • the second content generation unit (130) can change the second MR content so that the screen on which the bio-signal is output blinks or flashes red when an abnormality in the bio-signal occurs.
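  • As an illustration of the danger-signal check described above, a hedged sketch is given below; the reference ranges are placeholder values, not clinical thresholds from the patent.

```python
# Minimal sketch of the bio-signal alarm for the second MR content.
REFERENCE_RANGES = {            # (low, high) per bio-signal, illustrative values
    "heart_rate":        (50, 120),
    "systolic_bp":       (90, 160),
    "oxygen_saturation": (92, 100),
}

def biosignal_alarm(vitals: dict) -> dict:
    """Return per-signal display styles: blink red when outside the reference range."""
    styles = {}
    for name, value in vitals.items():
        low, high = REFERENCE_RANGES.get(name, (float("-inf"), float("inf")))
        out_of_range = not (low <= value <= high)
        styles[name] = {"blink": out_of_range,
                        "color": "red" if out_of_range else "white"}
    return styles

print(biosignal_alarm({"heart_rate": 134, "oxygen_saturation": 97}))
```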
  • Fig. 5 shows a configuration diagram of a lesion detection unit (140) according to an embodiment of the present invention.
  • the lesion detection unit (140) may include an input unit (141), a preprocessing unit (142), a learning unit (143), a lesion prediction unit (144), and a postprocessing unit (145).
  • the lesion detection unit (140) can detect a lesion area in an ultrasound image using artificial intelligence learned from an ultrasound image data set in which the lesion area is annotated. Specifically, the lesion detection unit (140) can detect a lesion area in an ultrasound image using a semantic segmentation model.
  • semantic segmentation is a process of dividing a digital image into a plurality of pixel sets; through segmentation, the representation of the image can be simplified and converted into something easier to interpret.
  • the lesion detection unit (140) can use a CNN-based model as a semantic segmentation model.
  • the input unit (141) can receive ultrasound images of normal and patient groups for learning, testing, or actual lesion detection.
  • the input unit (141) can receive ultrasound images from the data receiving unit (110).
  • the input unit (141) can transmit the received ultrasound images to the preprocessing unit (142).
  • the preprocessing unit (142) can preprocess the ultrasound image received from the input unit (141) to be advantageous for learning or detection.
  • the preprocessing unit (142) can perform preprocessing in a manner of normalizing pixel values.
  • the preprocessing unit (142) can perform preprocessing to adjust the size and resolution of the ultrasound image received so that it can be input to the artificial neural network.
  • the ultrasound image used for learning may include the patient's personal information, which may cause security and personal information leakage issues. Therefore, the preprocessing unit (142) according to the present invention can perform preprocessing to de-identify the patient information in the ultrasound image and metadata.
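  • A minimal preprocessing sketch covering the operations above (resizing, pixel-value normalization, and metadata de-identification) is shown below; the 256x256 input size and the metadata field names are assumptions for illustration.

```python
# Hedged preprocessing sketch for the preprocessing unit (142).
import numpy as np
import cv2

PHI_FIELDS = {"PatientName", "PatientID", "PatientBirthDate"}   # assumed metadata keys

def preprocess(ultrasound: np.ndarray, metadata: dict,
               target_size=(256, 256)) -> tuple[np.ndarray, dict]:
    resized = cv2.resize(ultrasound, target_size, interpolation=cv2.INTER_AREA)
    normalized = resized.astype(np.float32) / 255.0              # pixel-value normalization
    deidentified = {k: ("REMOVED" if k in PHI_FIELDS else v)     # strip personal information
                    for k, v in metadata.items()}
    return normalized, deidentified
```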
  • the learning unit (143) can train artificial intelligence using ultrasound images that have been preprocessed and have lesion areas annotated.
  • the artificial intelligence to be trained can be a CNN-based or SOTA architecture-based semantic segmentation model.
  • a convolutional neural network is a type of deep learning model that is inspired by the structure of the visual cortex of animals and is designed to process data with grid patterns such as images.
  • a convolutional neural network may generally include a convolutional layer, a pooling layer, and a fully connected layer.
  • the convolutional layer and the pooling layer may exist repeatedly in the neural network, and the input data may be converted to an output through these layers.
  • the convolutional layer uses a kernel (or mask) for feature extraction, and the element-wise product between each element of the kernel and the input value is calculated at each location and summed to obtain an output value, which is called a feature map. This procedure may be repeated by applying multiple kernels to form an arbitrary number of feature maps.
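  • The feature-map computation can be made concrete with a small worked example; this naive NumPy loop is for illustration only and is not the implementation used in the system.

```python
# Worked illustration: slide the kernel over the input and sum the element-wise
# products at each position to obtain one feature map.
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    feature_map = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            feature_map[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return feature_map

edge_kernel = np.array([[1, 0, -1],        # a simple vertical-edge kernel
                        [1, 0, -1],
                        [1, 0, -1]])
print(conv2d_valid(np.arange(25).reshape(5, 5), edge_kernel))
```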
  • the convolutional and pooling layers perform feature extraction, while the fully connected layer maps the extracted features to the final output, such as a classification operation.
  • Convolutional neural networks can be trained in a way that minimizes output errors. Separately from the forward propagation that carries values from the input layer to the output layer, backpropagation computes the error between the training labels and the network's output and updates the weights of the nodes of each layer to reduce this error.
  • the training process in a convolutional neural network can be summarized as the process of finding the kernel that extracts the output value with the least error based on the given training data.
  • the kernel is the only parameter that is automatically learned during the training process of the convolutional layer.
  • the kernel size, the number of kernels, and the padding in the convolutional neural network are hyperparameters that must be set before starting the training process, and therefore, different convolutional neural network models can be distinguished depending on the kernel size, the number of kernels, and the number of convolutional layers and pooling layers.
  • the lesion prediction unit (144) can predict and detect a lesion area from an ultrasound image of a subject using artificial intelligence learned in the learning unit (143).
  • the post-processing unit (145) can perform post-processing to distinguish the lesion area detected by the lesion prediction unit (144) from other areas. This is the same function as that performed in the first content generation unit (120) described above.
  • Figure 6 shows a process in which a lesion detection unit (140) according to an embodiment of the present invention detects a lesion using artificial intelligence.
  • the lesion detection unit (140) can receive an ultrasound image, input it into a CNN (Convolutional Neural Network)-based high-performance semantic segmentation model, and obtain a binary mask in which the lesion prediction area and other areas are distinguished. Thereafter, the lesion detection unit (140) can select a lesion area from the lesion prediction area.
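  • A hedged sketch of that flow is given below; `segmentation_model` stands in for whatever trained network the learning unit provides, and its interface, the 0.5 threshold, and the largest-component selection rule are assumptions.

```python
# Sketch: segmentation scores -> binary mask -> select one lesion region.
import numpy as np
import cv2

def detect_lesion(ultrasound: np.ndarray, segmentation_model, threshold: float = 0.5):
    scores = segmentation_model(ultrasound)                 # per-pixel lesion probability
    binary_mask = (scores >= threshold).astype(np.uint8)    # lesion prediction area vs. rest
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary_mask)
    if n <= 1:
        return None                                         # nothing predicted
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # pick the largest component
    return (labels == largest).astype(np.uint8)             # selected lesion area mask
```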
  • Fig. 7 shows a configuration diagram of a synchronization unit (150) according to an embodiment of the present invention.
  • the synchronization unit (150) may include a delay synchronization module (151) and a prediction synchronization module (152).
  • the synchronization unit (150) can synchronize the ultrasound image, the biosignal, and the lesion area so that the ultrasound image and the biosignal measured at the same time can be output, and the lesion area detected at the same time can be displayed.
  • the synchronization unit (150) plays an important role in allowing the operator to perform biopsy and treatment based on information at the same time. If the ultrasound image and the lesion area are measured at different times, the operator expects that there will be a lesion in the displayed lesion area, but in reality, there may be no lesion at that location because it is a lesion area at a different time.
  • the synchronization unit (150) can synchronize the ultrasound image, bio-signal, or lesion area displayed to the operator through the output device (160) with those measured at the same time, thereby ensuring stability and reliability.
  • Fig. 8 is a diagram for explaining a synchronization method of a delay synchronization module according to an embodiment of the present invention.
  • the delay synchronization module (151) can synchronize based on the most delayed signal among the ultrasound image, biosignal, and lesion region when the lesion region detected by the lesion detection unit (140) is smaller than a preset reference value.
  • the lesion area, the ultrasound image, and the bio-signal can be delivered with delays of T1, T2, and T3, respectively (T1 > T2 > T3).
  • the ultrasound image can be synchronized with the lesion area by outputting the signal from a time point Tp2 earlier.
  • the bio-signal can be synchronized by outputting the signal from a time point Tp3 earlier.
  • This synchronization method has the advantage of high reliability and accuracy because the measured values already exist, but has the disadvantage of a long delay time.
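  • One way such delay-based alignment could be realized is sketched below; the buffer structure, the latency value, and the stream names are illustrative assumptions, not the patented implementation.

```python
# Illustrative delay synchronization: every stream is buffered with its timestamp
# and the display clock trails real time by the largest latency, so only samples
# measured at the same moment are shown together.
import time

class DelaySynchronizer:
    def __init__(self, max_latency_s: float):
        self.max_latency = max_latency_s           # e.g. T1, the most delayed stream
        self.buffers = {}                          # name -> list of (timestamp, sample)

    def push(self, name: str, timestamp: float, sample):
        self.buffers.setdefault(name, []).append((timestamp, sample))

    def output(self, now=None) -> dict:
        """Return, for every stream, the newest sample measured before (now - max_latency)."""
        now = time.time() if now is None else now
        target = now - self.max_latency
        frame = {}
        for name, buf in self.buffers.items():
            candidates = [sample for ts, sample in buf if ts <= target]
            if candidates:
                frame[name] = candidates[-1]
        return frame
```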
  • FIG. 9 is a diagram illustrating a synchronization method of a prediction synchronization module (152) according to an embodiment of the present invention.
  • the prediction synchronization module (152) can synchronize based on the most advanced signal among the ultrasound image, biosignal, and lesion area when the lesion area detected by the lesion detection unit (140) is larger than a preset reference value.
  • the prediction synchronization module (152) can synchronize another signal to the most advanced signal using extrapolation. Extrapolation refers to a method used when the available data is limited and a value that exceeds the limit is desired to be obtained.
  • the prediction synchronization module (152) can predict data up to the point in time of the most advanced signal using artificial intelligence.
  • the prediction synchronization module (152) can synchronize with the biosignal by predicting the signal at a time point ahead by Tf1 and outputting the lesion area, and predicting the signal at a time point ahead by Tf2 and outputting the ultrasound image.
  • This synchronization method has the advantage of a short delay time because it synchronizes at the shortest delay time, but has the disadvantage of low reliability and accuracy because it predicts and outputs values that have not yet been measured.
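  • A minimal extrapolation sketch follows; plain linear fitting with NumPy is used here as the simplest stand-in, while the patent also allows AI-based prediction, and the example values are invented.

```python
# Illustrative prediction synchronization: extrapolate a slower stream forward to
# the timestamp of the most advanced stream.
import numpy as np

def extrapolate_to(timestamps, values, target_time: float) -> float:
    """Fit a line to recent (t, value) pairs and evaluate it at target_time."""
    slope, intercept = np.polyfit(np.asarray(timestamps), np.asarray(values), 1)
    return slope * target_time + intercept

# Example: predict a bio-signal slightly ahead so it lines up with the lead stream.
t = [0.0, 0.1, 0.2, 0.3]
hr = [72.0, 73.0, 73.5, 74.0]
print(extrapolate_to(t, hr, 0.45))   # value predicted at the most advanced time point
```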
  • the mixed reality-based ultrasound image output system (100) can use a delay synchronization module (151) with high accuracy when the detected lesion is small, and can use a prediction synchronization module (152) with short delay time when the detected lesion is large.
  • the delay synchronization module (151) and the prediction synchronization module (152) can be selected according to user settings.
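  • A small illustrative selector for that rule is given below; the area threshold and the user-override behaviour are assumptions.

```python
# Sketch: small detected lesions favour the accurate delay synchronization,
# large lesions favour the low-latency prediction synchronization; an explicit
# user setting overrides the rule.
def choose_sync_module(lesion_area_px, reference_px=500, user_choice=None):
    if user_choice in ("delay", "prediction"):
        return user_choice                      # explicit user setting wins
    return "delay" if lesion_area_px < reference_px else "prediction"

print(choose_sync_module(120))    # small lesion -> "delay"
print(choose_sync_module(2400))   # large lesion -> "prediction"
```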
  • Fig. 10 illustrates a screen displayed through an output device (160) according to an embodiment of the present invention.
  • the output device (160) can display MR content generated by the first content generation unit (120) or the second content generation unit (130).
  • the output device (160) can be a device that can be worn by a practitioner.
  • the output device (160) may be an AR/VR/MR device, glasses, or HMD that can display the first MR content and the second MR content in an augmented manner.
  • the output device (160) may include a processor and a battery to perform computing operations required for implementing mixed reality internally.
  • the output device (160) may not include a processor and a battery to perform computing operations in order to reduce weight.
  • the output device (160) may include a first camera that photographs the front direction of the wearer and a second camera that photographs the face of the wearer.
  • the first camera may recognize the probe shape of the ultrasonic diagnostic device, and the second camera may track the direction of the wearer's gaze.
  • the first camera may be used to recognize the probe shape in the first content generation unit (120) described above.
  • the output device (160) can display the second MR content along the direction of the wearer's gaze tracked by the second camera.
  • the second MR content can be composed of biosignals, and the biosignals must be able to identify their status even in a situation where the operator focuses on the ultrasound image. Accordingly, the output device (160) can detect the direction of the wearer's left or right eye gaze using the second camera and display the second MR content near the corresponding direction of gaze.
  • the output device (160) can be controlled by the movement of the wearer's eyes or pupils captured by the second camera.
  • the operator holds an ultrasound probe in one hand for ultrasound diagnosis and a needle in the other hand for biopsy or treatment. Therefore, to control the output device (160) by hand, the operator would have to put down the probe or needle or ask an assistant to operate the equipment.
  • the output device (160) can provide the operator with a high degree of freedom by being controlled by the blinking of the wearer's eyes or the movement of the pupils.
  • the control may mean changing the UI, turning on/off the first MR content or the second MR content, etc.
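  • A hypothetical sketch of such hands-free control follows; it assumes the headset already supplies gaze directions and blink events, and the panel offset and blink-to-action mapping are illustrative rather than specified by the invention.

```python
# Sketch: anchor the bio-signal panel near the gaze point and toggle MR layers
# with deliberate blinks.
from dataclasses import dataclass

@dataclass
class UIState:
    show_first_mr: bool = True      # ultrasound overlay
    show_second_mr: bool = True     # bio-signal panel

def panel_anchor(gaze_xy, offset=(0.15, -0.10)):
    """Place the bio-signal panel slightly offset from the current gaze point
    (normalized display coordinates, assumed)."""
    return (gaze_xy[0] + offset[0], gaze_xy[1] + offset[1])

def on_blink(ui: UIState, long_blink: bool) -> None:
    """Toggle layers: a long blink toggles the ultrasound overlay, a short blink
    toggles the bio-signal panel (mapping is illustrative)."""
    if long_blink:
        ui.show_first_mr = not ui.show_first_mr
    else:
        ui.show_second_mr = not ui.show_second_mr
```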
  • a mixed reality-based ultrasound image output method which is another embodiment of the present invention, may include a data receiving step, a first content generating step, a second content generating step, a lesion detection step, a synchronization step, and a displaying step.
  • the data receiving step can receive ultrasound images and bio-signals of the subject from the ultrasonic diagnostic device and the bio-signal measuring device.
  • the data receiving step refers to an operation performed in the data receiving unit (110) described above.
  • the first content generation step can generate first MR (Mixed-reality) content by aligning an ultrasound image with a specific object.
  • the first content generation step can recognize the probe shape of an ultrasound diagnostic device through a camera equipped in an output device, and detect the movement of the probe shape to change the output position of the first MR content.
  • the first content generation step refers to an operation performed in the first content generation unit (120) described above.
  • the second content generation step can generate second MR content by aligning biosignals to a certain space.
  • the second content generation step refers to an operation performed in the second content generation unit (130) described above.
  • the lesion detection step can detect lesion areas in ultrasound images using artificial intelligence trained on an ultrasound image data set in which lesion areas are annotated.
  • the lesion detection step refers to an operation performed in the lesion detection unit (140) described above.
  • the synchronization step can synchronize the ultrasound image, biosignal, and lesion area so that the ultrasound image and biosignal measured at the same time can be output and the lesion area detected at the same time can be displayed.
  • the synchronization step means the operation performed in the synchronization unit (150) described above.
  • the displaying step can display the MR content generated in the first content generation step or the second content generation step on an output device.
  • the displaying step refers to an operation performed on the aforementioned output device (160).
  • 110: Data receiving unit, 120: First content generating unit
  • 130: Second content generating unit, 140: Lesion detection unit
  • 145: Post-processing unit, 150: Synchronization unit

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physiology (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The present invention comprises: a data receiving unit for receiving an ultrasound image and a bio-signal of a subject; a first content generation unit for generating first mixed-reality (MR) content by aligning the ultrasound image with a specific object; a second content generation unit for generating second MR content by aligning the bio-signal with a predetermined space; and an output device that displays MR content generated by the first content generation unit or the second content generation unit, the first content generation unit recognizing a probe shape of an ultrasound diagnostic device through a camera provided in the output device, detecting movement of the probe shape, and changing an output position of the first MR content.
PCT/KR2024/007759 2023-07-17 2024-06-07 Mixed reality-based ultrasound image output system and method Pending WO2025018587A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020230092427A KR20250012331A (ko) 2023-07-17 2023-07-17 Mixed reality-based ultrasound image output system and method
KR10-2023-0092427 2023-07-17

Publications (1)

Publication Number Publication Date
WO2025018587A1 true WO2025018587A1 (fr) 2025-01-23

Family

ID=94281981

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2024/007759 2023-07-17 2024-06-07 Mixed reality-based ultrasound image output system and method Pending WO2025018587A1 (fr)

Country Status (2)

Country Link
KR (1) KR20250012331A (fr)
WO (1) WO2025018587A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5662109A (en) * 1990-12-14 1997-09-02 Hutson; William H. Method and system for multi-dimensional imaging and analysis for early detection of diseased tissue
KR20080053057A (ko) * 2006-12-08 2008-06-12 Medison Co., Ltd. Ultrasound imaging system and method for forming and displaying a fusion image of an ultrasound image and an external medical image
KR20150000627A (ko) * 2013-06-25 2015-01-05 Samsung Electronics Co., Ltd. Ultrasound imaging apparatus and control method thereof
KR20150108701A (ko) * 2014-03-18 2015-09-30 Samsung Electronics Co., Ltd. System and method for visualizing anatomical elements in a medical image
JP2017159027A (ja) * 2016-03-07 2017-09-14 Toshiba Medical Systems Corporation Ultrasonic diagnostic apparatus and ultrasonic diagnosis support apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11287874B2 (en) 2018-11-17 2022-03-29 Novarad Corporation Using optical codes with augmented reality displays


Also Published As

Publication number Publication date
KR20250012331A (ko) 2025-01-24

Similar Documents

Publication Publication Date Title
US11195340B2 (en) Systems and methods for rendering immersive environments
KR102458587B1 (ko) 진단 검사를 실시간 치료에 통합하기 위한 범용 장치 및 방법
US20210235992A1 (en) Attached sensor activation of additionally-streamed physiological parameters from non-contact monitoring systems and associated devices, systems, and methods
WO2020101283A1 (fr) Dispositif d'assistance chirurgicale mettant en œuvre la réalité augmentée
US20180028088A1 (en) Systems and methods for medical procedure monitoring
CN113288069A (zh) 用于生理监测的系统、方法和计算机程序产品
US10835120B2 (en) Extended medical test system
Ibragimov et al. The use of machine learning in eye tracking studies in medical imaging: a review
CN110660486A (zh) 一种基于穿戴设备的医生健康监测系统
WO2021071336A1 (fr) Dispositif d'affichage à lunettes intelligentes basé sur la détection du regard
WO2020132813A1 (fr) Procédé de surveillance de signe physiologique pour lésion cranio-cérébrale et dispositif de surveillance médicale
Jiang et al. Pupil dilations during target-pointing respect Fitts' law
WO2025018587A1 (fr) Système et procédé de sortie d'image ultrasonore basés sur la réalité mixte
US20210287785A1 (en) Automatic Sensing for Clinical Decision Support
WO2016200224A1 (fr) Procédé de surveillance de la bioactivité d'un utilisateur, système et support d'enregistrement non temporaire lisible par ordinateur
US20220296158A1 (en) System, method, and apparatus for temperature asymmetry measurement of body parts
Sumriddetchkajorn et al. Thermal analyzer enables improved lie detection in criminal-suspect interrogations
Benila et al. Augmented Reality Based Doctor's Assistive System
Sadok et al. Performing the HINTS-exam using a mixed-reality head-mounted display in patients with acute vestibular syndrome: a feasibility study
WO2020138731A1 (fr) Simulateur et procédé de formation médicale en oto-laryngologie et neurochirurgie basés sur un capteur em
US12458223B2 (en) Infrared tele-video-oculography for remote evaluation of eye movements
US20240242817A1 (en) Visualization device
US20250140401A1 (en) Information processing apparatus, method for controlling information processing apparatus, and computer program
GB2611556A (en) Augmented reality ultrasound-control system for enabling remote direction of a user of ultrasound equipment by experienced practitioner
CN117547233A (zh) 一种基于非接触式生命体征实时监测方法及装置

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24843308

Country of ref document: EP

Kind code of ref document: A1