WO2024144742A1 - A method and system that can be used to detect and manage patient movements during anatomical imaging for medical purposes
- Publication number
- WO2024144742A1 (PCT/TR2023/051817)
- Authority
- WO
- WIPO (PCT)
- Legal status
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5223—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
Abstract
The invention relates to a method and system that can be used, during the creation of an anatomical image for medical purposes (for example, an image obtained with ultrasonographic and/or elastographic methods), to detect patient movements, to manage those movements in line with the detected image, to follow them up using classical computer imaging methods, and to confirm the patient examination protocol during imaging.
Description
A METHOD AND SYSTEM THAT CAN BE USED TO DETECT AND MANAGE PATIENT MOVEMENTS DURING ANATOMICAL IMAGING FOR MEDICAL PURPOSES
Technical Field
The invention relates to a method and system that can be used, during the creation of an anatomical image for medical purposes (for example, an image obtained with ultrasonographic and/or elastographic methods), to detect patient movements, to manage those movements in line with the detected image, to follow them up using classical computer imaging methods, and to confirm the patient examination protocol during imaging.
State of the Art
Today, the use of artificial intelligence is expanding across many industries, driven by the cost of labor-intensive processes and the high level of experience required of the personnel who perform them.
The field of medicine stands out among these industries: the practitioner performing a procedure is expected to have considerable experience not only to interpret but even to collect medical data.
Manual analysis, evaluation, and classification of medical images constitute a considerable burden in terms of both cost and time. Especially when such sensitive images are evaluated, an incorrect interpretation or assessment by the analyst can lead to a life-threatening error for the patient.
Deep learning, a sub-branch of artificial intelligence, is frequently preferred for processing and interpreting medical images because it yields very successful results in image processing.
Even though the accuracy of procedures such as disease diagnosis has increased with the development of medical imaging technologies, correct interpretation of these images by experts is costly in terms of time and delays the treatment process.
With some imaging methods, such as ultrasound, it is important for the patient to move during imaging so that the condition can be diagnosed.
Ultrasound detection of pathologies, or suspected pathologies, in the musculoskeletal system is one example of this situation. More specifically, the interaction of tendon movements, or of the shoulder capsule, with the structures surrounding the arm during a 90° rotation can be given as an example of this application.
Ultrasound has several advantages for examining musculoskeletal pathologies: portability, rapid results, and the ability to examine structures such as muscles, tendons, and nerves during patient movements.
However, the aim here is to enable users with limited ultrasound and medical experience to image diagnosed or suspected pathologies and to obtain images of sufficient quality for physicians to make a diagnosis.
Joint regions are generally examined according to medical protocols in order to assess the anatomical structures they contain. The patient is asked to perform certain movements while the joint region is examined through more than one imaging window. The patient movements defined by such a protocol are essential for diagnosing any problem that occurs while the structures are moving.
Since automatic diagnostic systems based on artificial intelligence are designed for users with limited medical experience, there is a risk that users will not comply, or will only partially comply, with these protocols.
If compliance of the movements with the protocols cannot be ensured because of the user's lack of experience, the targeted diagnostic image cannot be acquired.
Even when the user is experienced, human errors during imaging may cause the wrong image to be acquired.
Patent application EP3482346 describes a system and method for automatic detection, localization, and semantic segmentation of at least one anatomical object in a parameter space of an image created by an imaging system.
It is understood that the method is aimed at generating the image through the imaging system and providing an image of the anatomical object and the surrounding tissue to a processor.
It is understood that said method includes developing and training a parameter-space deep learning network, including convolutional neural networks (CNNs), to automatically detect the anatomical object and the surrounding tissue in the parameter space of the image. Said method also includes automatic localization and segmentation of the anatomical object and the surrounding tissue in the parameter space of the image by using additional convolutional neural networks.
However, application EP3482346 does not explain how the imaging system is to be operated by personnel who lack experience, especially in conditions where the patient needs to move during imaging.
Patent application US2012033868A1 describes a method that can be used to automatically detect and measure the effects of patient movement during tomosynthesis scans.
Likewise, application US2012033868A1 does not mention tracking the image and guiding the user on how to acquire it during acquisition, for examinations that must be performed while the patient is moving.
Problems to be Solved by the Invention
The object of the invention is to create a method and system that can be used to detect patient movements and to manage those movements in line with the detected image during the creation of an anatomical image for medical purposes.
With said method, after the movements performed by the patient are monitored and detected, the user can be directed in accordance with the movement protocols specific to the imaged region.
Patient movements defined by these protocols can be used to diagnose any problem that occurs while the imaged structures are moving. Correct and complete application of the movement protocols may therefore increase the chance of a successful examination.
By using the system subject to the invention to continuously monitor and manage patient movements during the imaging process, errors made during the application are prevented; human errors are possible even when the imaging is performed by experienced users.
Imaging results can be optimized by controlling and managing the application with artificial intelligence.
Because the user is guided during the process, the imaging procedures can be applied easily even by users who do not have sufficient experience.
Application costs will decrease when the imaging process can be carried out entirely by automatic diagnostic systems based on artificial intelligence.
Description of the Figures
Figure 1. A schematic view of the method used to detect and manage patient movements during medical imaging
Figure 2. A representation of the detection of patient movements and the relationship between anatomical objects during medical imaging
Description of the Invention
The invention, in its most basic form, relates to a medical anatomical imaging system that provides an image of the anatomical object and the surrounding tissue and can transfer this image to a processor, comprising: at least one application that can take an image of at least one anatomical structure and the surrounding tissue, includes convolutional neural networks to automatically detect the anatomical object and the surrounding tissue, provides automatic localization and segmentation of the anatomical structures that appear on the image and of the surrounding tissue by using convolutional neural networks, and can automatically mark the anatomical structure and the surrounding tissue in the created image and present it to the user; a processor on which said application can run; and a display unit to present the created image to the user.
According to the preferred embodiment of the invention, the imaging system comprises a motion tracking module that uses one or more deep learning, machine learning, or artificial intelligence methods to determine the protocols regarding the movement of the limb/organ during imaging according to the anatomical region to be imaged, to examine the movement during imaging, and to direct the user to have the patient complete these movement protocols if the movement of the anatomical structure is not within the protocol values.
More specifically, said system comprises a motion tracking module that uses one or more deep learning, machine learning, or artificial intelligence methods to determine the protocols regarding the movement of the limb/organ during imaging, to select points on the relevant anatomical region, to examine the movement of those points during imaging, and to direct the user to have these movement protocols completed if the movement of the anatomical structure is not within the protocol values.
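As a non-binding illustration of how such a motion tracking module could be realized, the sketch below tracks hypothetical landmark points across consecutive ultrasound frames with optical flow, derives a joint angle from them, and produces guidance text when the measured movement falls outside an assumed protocol range. The point names, protocol values, and the use of OpenCV's Lucas-Kanade tracker are assumptions made for this example and are not specified in the application.

```python
# Illustrative sketch only: point names, protocol values, and the use of OpenCV
# optical flow are assumptions for this example, not part of the application.
import cv2
import numpy as np

# Assumed protocol range for a shoulder movement, in degrees.
PROTOCOL = {"shoulder_abduction": {"min_deg": 80.0, "max_deg": 100.0}}


def track_points(prev_frame, next_frame, points):
    """Track selected landmark points between two consecutive ultrasound frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    pts = points.reshape(-1, 1, 2).astype(np.float32)
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None, winSize=(21, 21), maxLevel=3)
    return new_pts.reshape(-1, 2), status.ravel().astype(bool)


def joint_angle(p_proximal, p_joint, p_distal):
    """Angle in degrees at p_joint between the two limb segments."""
    v1 = np.asarray(p_proximal) - np.asarray(p_joint)
    v2 = np.asarray(p_distal) - np.asarray(p_joint)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))


def check_protocol(angle_deg, protocol_name="shoulder_abduction"):
    """Compare the measured angle with the protocol range and return guidance text."""
    limits = PROTOCOL[protocol_name]
    if angle_deg < limits["min_deg"]:
        return f"Continue the movement: {angle_deg:.0f} deg is below the protocol minimum."
    if angle_deg > limits["max_deg"]:
        return f"Reduce the movement: {angle_deg:.0f} deg is above the protocol maximum."
    return "Movement is within the protocol range."
```

In a complete system, guidance strings of this kind would be passed to the warning units described below rather than returned to the caller.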
The motion tracking module can work in conjunction with a video warning unit to guide the user, or it can be operated together with a sound module to provide an audible warning.
According to one embodiment of the invention, the existing display unit of the system can also be used as the video warning unit.
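A minimal sketch of these two warning paths is given below, assuming a Python implementation with hypothetical class names: a common interface with one implementation that overlays guidance text on the existing display unit and one that emits an audible signal.

```python
# Illustrative sketch: class names and the use of OpenCV for display are assumptions.
from abc import ABC, abstractmethod
from typing import Optional

import cv2
import numpy as np


class WarningUnit(ABC):
    """Common interface used by the motion tracking module to issue warnings."""

    @abstractmethod
    def warn(self, message: str, frame: Optional[np.ndarray] = None) -> None:
        ...


class DisplayWarningUnit(WarningUnit):
    """Reuses the system's existing display unit as the video warning unit."""

    def warn(self, message, frame=None):
        if frame is not None:
            cv2.putText(frame, message, (20, 40), cv2.FONT_HERSHEY_SIMPLEX,
                        0.8, (0, 0, 255), 2)
            cv2.imshow("ultrasound", frame)
            cv2.waitKey(1)


class AudioWarningUnit(WarningUnit):
    """Stand-in for a sound module; a real system would drive a speaker or beeper."""

    def warn(self, message, frame=None):
        print("\a" + message)  # terminal bell as a placeholder audible warning
```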
According to the preferred embodiment of the invention, the anatomical imaging device for medical purposes is an ultrasound device.
The method of the invention, in its most basic form, comprises the following: providing at least one application that can take an image of at least one anatomical structure and the surrounding tissue, includes convolutional neural networks to automatically detect the anatomical object and the surrounding tissue, provides automatic localization and segmentation of the anatomical structures that appear on the image and of the surrounding tissue by using convolutional neural networks, and automatically marks the anatomical structure and the surrounding tissue in the generated image and presents it to the user, together with at least one processor on which said application can run; developing and training a deep learning network including one or more convolutional neural networks to automatically detect the anatomical object and the surrounding tissue; automatically localizing and segmenting the anatomical object and the surrounding tissue in the image by means of a convolutional neural network; automatically labeling the anatomical object and the surrounding tissue in the image; and displaying the labeled image to a user.
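The following is a minimal sketch of the detection, segmentation, and labeling steps, assuming a small PyTorch encoder-decoder network as a stand-in for the convolutional neural networks described above; the architecture, class list, weights, and color map are illustrative assumptions, not the network of the application.

```python
# Illustrative sketch: the architecture, class list, and color map are assumptions;
# a trained model of this kind would replace the randomly initialized one below.
import cv2
import numpy as np
import torch
import torch.nn as nn

CLASSES = ["background", "anatomical_object", "surrounding_tissue"]  # assumed labels
COLORS = {1: (0, 255, 0), 2: (255, 0, 0)}  # BGR overlay colors per class id


class TinySegNet(nn.Module):
    """A deliberately small encoder-decoder CNN standing in for the real network."""

    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, n_classes, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


@torch.no_grad()
def label_frame(model, frame_gray):
    """Segment one grayscale ultrasound frame and overlay the predicted labels."""
    x = torch.from_numpy(frame_gray).float().div(255.0)[None, None]  # (1, 1, H, W)
    mask = model(x).argmax(dim=1)[0].cpu().numpy()  # (H, W) array of class ids
    overlay = cv2.cvtColor(frame_gray, cv2.COLOR_GRAY2BGR)
    for class_id, color in COLORS.items():
        overlay[mask == class_id] = color
    return overlay, mask
```

The labeled overlay produced in this way is what the display unit would present to the user, and the same per-frame mask could supply the points tracked by the motion tracking module.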
More particularly, said method comprises the following process steps:
• determining the protocols regarding the movement of the anatomical structure to be imaged,
• selecting the points on the relevant anatomical region,
• examining the movement of the points during imaging,
• directing the user to have these movement protocols completed in case the movement of the anatomical structure is not within the protocol values.
One or more deep learning, machine learning, or artificial intelligence methods can be used while performing at least one of these process steps.
Claims
1. An anatomical imaging system for medical purposes that enables the generation of an image of the anatomical object and the surrounding tissue and can transfer this image to said processor, comprising at least one application that can take an image of at least one anatomical structure and the surrounding tissue, includes convolutional neural networks to automatically detect the anatomical object and the surrounding tissue, provides automatic localization and segmentation of the surrounding tissue of the anatomical structures that appear on the image by using convolutional neural networks, and automatically marks the anatomical structure and the surrounding tissue in the generated image and presents it to the user as an image; a processor on which said application can run; and a display unit for presenting the generated image to the user; characterized in that it comprises a motion tracking module that uses one or more deep learning/machine learning/artificial intelligence techniques to determine the protocols for the movement of the limb/organ during imaging according to the anatomical region to be imaged, to examine the movement during imaging, and to direct the user to have the patient complete these movement protocols if the movement of the anatomical structure is not within the protocol values.
2. An anatomical imaging system for medical purposes according to Claim 1, characterized in that it comprises a motion tracking module that uses one or more of the deep learning/machine learning/artificial intelligence methods to determine the protocols for the movement of the limb/organ during imaging according to the anatomical region to be imaged, to select the points on the relevant anatomical region, to examine the movement of the points during imaging, and to direct the user to have these movement protocols completed if the movement of the anatomical structure is not within the protocol values.
3. An anatomical imaging system for medical purposes according to Claim 2, characterized in that it comprises a motion tracking module that can be operated together with a sound module to provide an audible warning, as well as working together with a video warning unit to guide the user.
4. An anatomical imaging system for medical purposes according to Claim 2, characterized in that it comprises a motion tracking module that can provide a visual warning, using the existing display unit of the system, to guide the user.
5. An anatomical imaging system for medical purposes according to Claim 2, characterized in that the anatomical imaging device for medical purposes is an ultrasound machine.
6. An anatomical imaging method for medical purposes that comprises the steps of providing at least one application that can take an image of at least one anatomical structure and the surrounding tissue, includes convolutional neural networks to automatically detect the anatomical object and the surrounding tissue, provides automatic localization and segmentation of the anatomical structures that appear on the image and of the surrounding tissue by using convolutional neural networks, and automatically marks the anatomical structure and the surrounding tissue in the generated image and presents it to the user as an image, together with at least one processor on which said application can run; developing and training a deep learning network including one or more convolutional neural networks to automatically detect the anatomical object and the surrounding tissue; automatically localizing and segmenting the anatomical object and the surrounding tissue in the image by means of a convolutional neural network; automatically labeling the anatomical object and the surrounding tissue in the image; displaying the labeled image to a user, characterized in that it comprises the following steps:
• determining protocols for movement during imaging according to the anatomical region to be imaged,
• selecting the points on the relevant anatomical region,
• examining the movement of the points during imaging,
• directing the user to have the patient complete these movement protocols in case the movement of the anatomical structure is not within the protocol values.
7. An anatomical imaging method for medical purposes according to Claim 6, characterized in that one or more of the deep learning/machine learning/artificial intelligence methods are used while performing at least one of the process steps.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TR2022020864 | 2022-12-28 | | |
| TR2022/020864 TR2022020864A2 (en) | 2022-12-28 | | A PROCEDURE AND SYSTEM THAT CAN BE USED TO DETECT AND MANAGE PATIENT MOVEMENTS DURING ANATOMICAL IMAGE ACQUISITION FOR MEDICAL PURPOSES |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024144742A1 (en) | 2024-07-04 |
Family
ID=91719040
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/TR2023/051817 WO2024144742A1 (en), Ceased | A method and system that can be used to detect and manage patient movements during anatomical imaging for medical purposes | 2022-12-28 | 2023-12-28 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024144742A1 (en) |
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200121294A1 (en) * | 2018-10-17 | 2020-04-23 | General Electric Company | Methods and systems for motion detection and compensation in medical images |
| US20220358773A1 (en) * | 2019-09-12 | 2022-11-10 | Koninklijke Philips N.V. | Interactive endoscopy for intraoperative virtual annotation in vats and minimally invasive surgery |
| US20210097679A1 (en) * | 2019-09-30 | 2021-04-01 | GE Precision Healthcare LLC | Determining degree of motion using machine learning to improve medical image quality |
| US20210100526A1 (en) * | 2019-10-04 | 2021-04-08 | GE Precision Healthcare LLC | System and methods for tracking anatomical features in ultrasound images |
| US20220354585A1 (en) * | 2021-04-21 | 2022-11-10 | The Cleveland Clinic Foundation | Robotic surgery |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23913337; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |