WO2025199655A1 - Ultrasound-based systems and methods for treating abnormalities of the vitreous body - Google Patents
- Publication number
- WO2025199655A1 (application no. PCT/CA2025/050451)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- ultrasound
- ultrasound transducer
- vitreous
- eye
- imaging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/08—Clinical applications
- A61B8/0833—Clinical applications involving detecting or locating foreign bodies or organic structures
- A61B8/085—Clinical applications involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/10—Eye inspection
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5215—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
- A61B8/5223—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F9/00—Methods or devices for treatment of the eyes; Devices for putting in contact-lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F9/00—Methods or devices for treatment of the eyes; Devices for putting in contact-lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
- A61F9/007—Methods or devices for eye surgery
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N7/00—Ultrasound therapy
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/40—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Definitions
- the present disclosure relates to the use of therapeutic ultrasound for ophthalmological interventions. More particularly, the present disclosure relates to the use of therapeutic ultrasound for treatment of abnormalities of the vitreous.
- vitreous hemorrhage occurs when blood vessels within the vitreous leak blood, often due to trauma, diabetes, or age-related changes. This can result in blurred vision, floaters, and, in severe cases, vision loss. Patients suffering from vitreous hemorrhages that do not naturally resolve are treated surgically, most often with vitrectomy.
- Floaters are typically managed conservatively, with most cases resolving on their own over time. However, in cases where floaters significantly impair vision or are associated with other eye conditions, surgical interventions such as vitrectomy may be considered.
- control circuitry is further configured such that the deep learning algorithm is further configured to: classify a vitreous abnormality present within the abnormal region; and determine, according to a classification of the vitreous abnormality, ultrasound parameters suitable for achieving disruption, via cavitation, of the vitreous abnormality present within the abnormal region; and wherein the ultrasound device is controlled to deliver the focused ultrasound energy to the abnormal region according to the ultrasound parameters.
- the imaging ultrasound transducer array and the therapeutic ultrasound transducer are spatially aligned to insonify a common planar region during ultrasound imaging and ultrasound therapy, and wherein the imaging ultrasound transducer array is controllable to generate a two-dimensional image slice characterizing the common planar region; wherein the imaging ultrasound transducer array and the therapeutic ultrasound transducer are supported by a support structure, and wherein the imaging ultrasound transducer array, the therapeutic ultrasound transducer array, and the support structure together form an ultrasound transducer assembly, the ultrasound transducer assembly being movable to vary a location of the common planar region within the eye; wherein the control circuitry is configured to: after the ultrasound transducer assembly is moved, such that the common planar region is moved to a different location within the eye, repeat steps a) to c), thereby facilitating treatment of vitreous abnormalities associated with the different location.
- the ultrasound device may include an ultrasound transducer assembly positioning mechanism for moving the ultrasound transducer assembly, and wherein the control circuitry is configured to control the ultrasound transducer assembly positioning mechanism to move the ultrasound transducer assembly such that the common planar region is moved to the different location.
- the control circuitry may be configured to control the ultrasound transducer assembly positioning mechanism to move the ultrasound transducer assembly such that the common planar region is moved to the different location after receiving input selecting the different location.
- the control circuitry may be configured to control the ultrasound transducer assembly positioning mechanism to autonomously move the ultrasound transducer assembly such that the different location of the common planar region is adjacent to a current location of the common planar region.
- the control circuitry may be configured to provide an alert when, as a consequence of motion of the ultrasound transducer assembly, the common planar region is moved to a previous location that has been previously treated with focused ultrasound energy.
- a system for treating a vitreous disorder via focused ultrasound therapy comprising: a coupling device capable of contacting an eye of a subject such that a chamber suitable for receiving an acoustic coupling medium is formed above the eye; an ultrasound device capable of docking with the coupling device to acoustically couple a distal ultrasound energy emitting surface of the ultrasound device with the eye, the ultrasound device comprising: an imaging ultrasound transducer array, the imaging ultrasound transducer array being controllable, when the ultrasound device is docked with the coupling device and acoustically coupled to the eye, to image at least a portion of the eye; a therapeutic ultrasound transducer array, the therapeutic ultrasound transducer array being controllable, when the ultrasound device is docked with the coupling device and acoustically coupled to the eye, to direct focused ultrasound energy within the eye; the imaging ultrasound transducer array and the therapeutic ultrasound transducer array being spatially aligned to insonify a common planar region
- a method of treating a vitreous disorder via focused ultrasound therapy comprising: providing a coupling device capable of contacting an eye of a subject such that a chamber suitable for receiving an acoustic coupling medium is formed above the eye; docking an ultrasound device with the coupling device to acoustically couple a distal ultrasound energy emitting surface of the ultrasound device with the eye, the ultrasound device comprising: an imaging ultrasound transducer array, the imaging ultrasound transducer array being controllable, when the ultrasound device is docked with the coupling device and acoustically coupled to the eye, to image at least a portion of the eye; and a therapeutic ultrasound transducer array, the therapeutic ultrasound transducer array being controllable, when the ultrasound device is docked with the coupling device and acoustically coupled to the eye, to direct focused ultrasound energy within the eye; and the method further comprising: a) controlling the imaging ultrasound transducer array to obtain ultrasound image data characterizing at least a portion of the eye.
- FIG. 1 is a schematic of an example system for treating vitreous abnormalities using focused ultrasound.
- FIGS. 2A, 2B and 2C show example configurations of a combined imaging and therapeutic ultrasound device for treating vitreous abnormalities.
- FIGS. 3A, 3B and 3C show example variations of the ultrasound devices of FIGS. 2A, 2B and 2C, respectively, in which an ophthalmoscope and associated light source are included.
- FIG. 4 schematically illustrates an example control and processing circuitry for controlling an ultrasound system for treating vitreous abnormalities using focused ultrasound.
- the terms “comprises” and “comprising” are to be construed as being inclusive and open ended, and not exclusive. Specifically, when used in the specification and claims, the terms “comprises” and “comprising” and variations thereof mean the specified features, steps or components are included. These terms are not to be interpreted to exclude the presence of other features, steps or components.
- exemplary means “serving as an example, instance, or illustration,” and should not be construed as preferred or advantageous over other configurations disclosed herein.
- the terms “about” and “approximately” are meant to cover variations that may exist in the upper and lower limits of the ranges of values, such as variations in properties, parameters, and dimensions. Unless otherwise specified, the terms “about” and “approximately” mean plus or minus 25 percent or less.
- any specified range or group is as a shorthand way of referring to each and every member of a range or group individually, as well as each and every possible sub-range or sub-group encompassed therein and similarly with respect to any sub-ranges or sub-groups therein. Unless otherwise specified, the present disclosure relates to and explicitly incorporates each and every specific member and combination of sub-ranges or subgroups.
- the term “on the order of”, when used in conjunction with a quantity or parameter, refers to a range spanning approximately one tenth to ten times the stated quantity or parameter.
- the example system includes an ultrasound device 100 that includes an imaging ultrasound transducer 110 and a therapeutic ultrasound transducer 120.
- the imaging ultrasound transducer 110 and therapeutic ultrasound transducer 120 are mechanically supported by a common mechanical support structure. Together, they define what is referred to herein as an ultrasound transducer assembly.
- the ultrasound device 100 may also include a housing, within which the ultrasound transducer assembly is supported.
- a lower portion of the coupling device 130 may include a thin layer, such as a thin layer of plastic or a silicone membrane, to facilitate the coupling of acoustic energy from the acoustic medium into the eye.
- a layer of coupling medium such as an ultrasound gel, may be applied between a membrane of the coupling device 130 and the eye of the patient to ensure proper acoustic coupling. Accordingly, the ultrasound device 100 and its docking with the coupling device 130 ensures proper contact, alignment and acoustic coupling between the transducers and the eye of the patient, thus facilitating the delivery of focused ultrasound to a target area while minimizing energy loss.
- a degasser 140 may be provided in fluid communication with the coupling medium residing within the coupling device 130 to remove air bubbles or dissolved gases from the coupling medium, thereby ensuring the efficiency and safety of the ultrasound transmission. By eliminating bubbles or gas pockets, acoustic coupling between the transducer and the eye of the patient is improved, allowing for more effective transmission of focused ultrasound energy.
- the imaging ultrasound transducer and the therapeutic ultrasound transducer may be supported (e.g. rigidly supported by a support/frame) such that an imaging axis of the imaging ultrasound transducer 110 is colinear with a therapeutic beam delivery axis of the therapeutic ultrasound transducer 120 for treatment control and monitoring.
- an imaging axis may be, for example, a beam propagation axis associated with a single-element imaging transducer that is mechanically scanned to collect image data, or may be, for example, an axis associated with an array of imaging ultrasound transducers, such as a central axis perpendicular to a 1D or 2D imaging ultrasound transducer array.
- a therapeutic beam delivery axis may be, for example, a beam propagation axis associated with a single-element therapeutic transducer that is mechanically scanned to deliver ultrasound therapy to different regions of the eye, or may be, for example, an axis associated with an array of therapeutic ultrasound transducers, such as a central axis perpendicular to a 1D or 2D therapeutic ultrasound transducer array.
- FIGS. 2A, 2B and 2C illustrate various example configurations of the ultrasound transducer device.
- FIG. 2A illustrates the case in which the therapeutic ultrasound transducer is a single-element ultrasound transducer 120A, which is movable, e.g. via movement of the ultrasound transducer assembly, to vary a position of a focused ultrasound beam within the eye.
- FIG. 2B illustrates an example case in which the therapeutic ultrasound transducer is provided in the form of an ultrasound transducer array capable of operation as a phased array. Although not shown, such an ultrasound transducer array may be configured as a 1D or 2D transducer array.
- the therapeutic ultrasound transducer 120C may alternatively be provided as an array of single-element ultrasound transducers controlled with relative time delays for achieving suitable focusing.
- the central imaging ultrasound transducer 110, shown in the present non-limiting example as being co-linearly aligned with the therapeutic ultrasound transducer 120A-120C, may be a linear imaging ultrasound transducer array configured for B-mode imaging (i.e. configured to generate a 2D image, e.g. an image slice), or may be a 2D imaging array capable of generating a volumetric image.
- the ultrasound transducer assembly may also include an integrated optical ophthalmoscope 190 and associated light source 195.
- the ultrasound transducer assembly is manually movable, for example, with up to 6 degrees of freedom (3 translations and 3 rotations).
- the ultrasound transducer assembly is autonomously movable, for example, via a motorized-arm system, e.g. with up to 6 degrees of freedom (3 translations and 3 rotations).
- FIG. 1 shows a motorized arm 150 (e.g. a set of mechanically coupled arms that can be actuated to pivot via a set of motors), and an associated motor controller 160.
- the example three-axis positioning system allows precise control of the position and orientation of the imaging ultrasound transducer and the therapy ultrasound transducer.
- the autonomous control of the position and/or orientation of the ultrasound transducer assembly enables specific areas within the eye to be targeted for treatment, ensuring that the focused ultrasound is directed to the intended tissue volume while also maintaining spatial overlap between the imaging field of view and the therapeutic focal region as the ultrasound transducer assembly is moved relative to the eye.
- the example system includes therapeutic ultrasound transducer drive electronics that include, but are not limited to, a function generator 170, a power amplifier 172, and matching circuitry 174 (shown as a matching circuit box).
- the function generator 170, or more generally, waveform generating circuitry, produces electrical signals that define the ultrasound waveform; example inputs of the function generator include the waveform type (e.g. sinusoidal), frequency, voltage amplitude, pulse period (or interval), and number of cycles.
- a function generator can serve as a trigger for another function generator in order to achieve precise control of the treatment duration.
- the power amplifier 172 receives, as input, the electrical signal from the function generator and amplifies it to a level suitable for driving the ultrasound transducer. It ensures that the ultrasound transducer receives sufficient power to create the desired focused ultrasound field for treatment.
- the output power of the power amplifier may be approximately 800-1500 W, 800-2000 W, or, for example, 800-3000 W for the treatment of vitreous hemorrhage or floaters in vitreous humor.
- the matching circuitry 174 (matching box) is provided to match the impedance of the power amplifier to the impedance of the ultrasound transducer. It is connected in between the power amplifier and the therapeutic transducer. By matching impedances, the matching box helps to efficiently transfer the electrical energy from the power amplifier to the transducer.
- the matching circuitry 174 is beneficial for generating the desired focused ultrasound field with sufficient power and accuracy. It also provides a level of protection for both the power amplifier and the ultrasound transducer by minimizing reflections that could otherwise damage the components.
- the impedance of the matching box may be optimized for 50 ohms, and its phase optimized for 0°.
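As an illustration of the matching principle, the sketch below computes a lossless L-network (series inductor, shunt capacitor) that transforms a purely resistive transducer load up to a 50 ohm amplifier impedance. The 8 ohm load and 2 MHz frequency are hypothetical example values, not taken from the disclosure; a practical matching box must also compensate the reactive part of a real transducer's impedance.

```python
import math

def l_match(r_load, r_source=50.0, f_hz=2.0e6):
    """Lossless L-network that steps a resistive load r_load up to r_source.
    Assumes r_load < r_source and a purely resistive load at resonance."""
    q = math.sqrt(r_source / r_load - 1.0)   # network quality factor
    x_series = q * r_load                    # series inductive reactance (ohms)
    x_shunt = r_source / q                   # shunt capacitive reactance (ohms)
    l_henry = x_series / (2 * math.pi * f_hz)
    c_farad = 1.0 / (2 * math.pi * f_hz * x_shunt)
    return x_series, x_shunt, l_henry, c_farad

def input_impedance(r_load, x_series, x_shunt):
    """Impedance seen by the amplifier: shunt capacitor in parallel with
    (load in series with inductor)."""
    z_branch = complex(r_load, x_series)
    z_cap = complex(0.0, -x_shunt)
    return z_branch * z_cap / (z_branch + z_cap)
```

For the hypothetical 8 ohm load, the resulting input impedance is 50 ohms resistive with 0° phase, consistent with the optimization targets stated above.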
- the parameters of the focused ultrasound treatment may be controlled by control and processing circuitry 500 (for example, a PC).
- control and processing circuitry 500 may be operably connected to the therapeutic ultrasound transducer driving circuitry.
- the control and processing circuitry 500 may also be connected to the motor controller 160 for autonomous (or operator directed) control of the position and/or orientation of the ultrasound transducer assembly, for example, to synchronize ultrasound imaging and ultrasound therapy with changes in position and/or orientation of the ultrasound transducer assembly.
- the control and processing circuitry may render, on a display device, a user interface that facilitates the planning, monitoring, and/or adjustment of the ultrasound treatment, for example, intraoperatively, in real-time.
- the user interface may provide feedback on treatment progress, ensure safety limits are not exceeded, and allow for customization of the therapy based on individual patient needs.
- the pulse (or burst) period refers to the time interval between the start of one pulse and the start of the next pulse in a train of pulses (burst). In some example implementations, this value may be 0-1 ms, 0-5 ms, 0-10 ms, or 0-100 ms. Decreasing the pulse (or burst) period leads to more precise control over the cavitation area (treatment zone).
- the pulse repetition frequency (PRF) is the inverse of the pulse (or burst) period, which may be, for example, 1,000-10,000 Hz, 200-10,000 Hz, or 1-10,000 Hz. In some example implementations, the duty cycle may be set to 0-1% to avoid producing significant thermal effects.
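The timing relationships above (burst period, PRF, and duty cycle) can be sketched as a small helper; the example numbers in the usage note are illustrative, not prescribed by the disclosure.

```python
def pulse_timing(prf_hz, cycles_per_burst, f0_hz):
    """Derive burst period, burst on-time, and duty cycle from the pulse
    repetition frequency, cycles per burst, and drive frequency."""
    burst_period_s = 1.0 / prf_hz            # PRF is the inverse of the burst period
    on_time_s = cycles_per_burst / f0_hz     # duration of one burst
    duty_cycle = on_time_s / burst_period_s  # fraction of time the transducer is driven
    return burst_period_s, on_time_s, duty_cycle
```

For example, a 1,000 Hz PRF with 10-cycle bursts at a 2 MHz drive frequency gives a 1 ms burst period, a 5 µs on-time, and a 0.5% duty cycle, within the 0-1% range noted above.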
- the therapeutic transducer can be a concavely shaped single-element transducer, a phased-array system incorporating multiple single-element transducers, or a fully populated phased-array system with electronic steering capabilities. Its operating frequency may be, for example, 2-3 MHz, 1-3 MHz, 0.5-3 MHz, or 0.25-3 MHz.
- the imaging ultrasound transducer can be a single-element transducer configured for imaging via mechanical sweeping/scanning, or it can be provided as a phased array.
- Example operating frequencies for the imaging ultrasound transducer include 10-15 MHz, 10-20 MHz, or 10-25 MHz.
- the amplitude of negative pressure may be, for example, within the range of 10-30 MPa, 10-40 MPa, 10-50 MPa, or 10-60 MPa.
- a suitable driving parameter (e.g. a driving voltage) that is sufficient for treatment can be determined as follows.
- a plurality of test pulses (e.g. between 2 and 10 test pulses) may be delivered to a selected location of vitreous opacity (e.g. a hemorrhage location corresponding to a blood clot, or a floater location).
- the test pulses are provided with an increasingly negative pressure within the lowest range (10 MPa), which can be adjusted by the input voltage of the ultrasound wave, since the pressure amplitude is positively correlated with the input voltage.
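The test-pulse procedure above can be sketched as a simple calibration loop. Here `fire_test_pulse` and `detect_cavitation` are hypothetical callbacks standing in for hardware I/O and cavitation monitoring, and the voltage start/step values are illustrative; the sketch assumes, as stated above, that pressure amplitude rises monotonically with drive voltage.

```python
def calibrate_drive_voltage(fire_test_pulse, detect_cavitation,
                            v_start=10.0, v_step=5.0, max_pulses=10):
    """Ramp the drive voltage over a small number of test pulses until
    cavitation is detected at the target location; returns the first
    voltage that produced cavitation, or None if the ramp is exhausted."""
    v = v_start
    for _ in range(max_pulses):
        fire_test_pulse(v)        # deliver one low-duty test pulse at voltage v
        if detect_cavitation():   # e.g. passive cavitation detection (hypothetical)
            return v
        v += v_step               # increase voltage, hence negative pressure
    return None
```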
- the input voltage is (e.g.
- one or more deep learning algorithms are employed to process the ultrasound image data obtained by the imaging ultrasound transducer to perform object detection and/or segmentation of vitreous opacities to facilitate targeted and image-guided ultrasound treatment.
- the therapeutic ultrasound transducer, when employed as a phased array, can be controlled to position the focus of the therapeutic ultrasound beam within a detected region of vitreous abnormality.
- U-Net is a convolutional neural network architecture widely used for medical image segmentation tasks, including ophthalmic ultrasound segmentation. Its symmetric encoder-decoder architecture is effective for capturing detailed structures in ultrasound images, facilitating accurate segmentation of eye tissues and abnormalities.
- DeepLab, with its atrous convolution mechanism, is well-suited for semantic segmentation tasks in medical imaging. It can efficiently process ophthalmic ultrasound images to identify and segment different eye structures and abnormalities with high accuracy.
- RetinaNet is a single-stage object detection model designed for detecting objects in images with high precision and speed. It can be adapted for eye abnormality detection in ophthalmic ultrasound images, providing rapid processing and accurate localization of abnormalities.
- Mask R-CNN extends the Faster R-CNN framework to perform instance segmentation in addition to object detection. It can accurately segment different eye structures and abnormalities in ophthalmic ultrasound images, facilitating precise localization and characterization of abnormalities.
- U-Net and DeepLab are primarily used for semantic segmentation, while RetinaNet, YOLOv4, Faster R-CNN, and Mask R-CNN are utilized for object detection and instance segmentation.
- EfficientNet is a versatile architecture that can be adapted for various tasks, including both semantic and instance segmentation. The choice of algorithm depends on factors such as the specific requirements of the task, available computational resources, and desired trade-offs between speed and accuracy.
- Comparing deep learning models in terms of the speed of their inference mode involves measuring the time it takes for each model to process input data and generate predictions.
- Small (shallow) models, which typically have fewer than 1 million parameters (fewer than 10 layers), generally offer faster inference times and real-time prediction response.
- Medium-sized models, generally ranging from 1 million to 100 million parameters (10 to 100 layers), strike a balance between model complexity and inference speed.
- Real-time prediction response with medium-sized models can be acceptable for many applications, providing a good compromise between speed and accuracy. Large, deep models (more than 100 million parameters, more than 100 layers) can have slower inference times but can achieve higher predictive performance.
- while some models can offer sufficiently accurate performance for detecting vitreous hemorrhages or floaters, others can be better suited for real-time intraoperative use.
- the choice of model depends on the specific requirements of the application, including the desired level of accuracy, speed, and computational resources available. Additionally, fine-tuning and optimization may be beneficial to tailor the models to the specific use case and ensure optimal performance.
- Real-time performance is crucial for intraoperative use cases where quick decision-making is essential.
- Deep learning models that perform object detection, as opposed to semantic segmentation, may be employed to provide fast, real-time inference while still localizing vitreous abnormalities sufficiently to facilitate focal ultrasound therapy, enabling rapid processing of images during surgery.
- the type of deep learning model employed for pathology-specific object detection and/or segmentation may be dependent on the type of pathology (e.g. vitreous hemorrhage vs. floaters).
- models like RetinaNet, YOLOv4, and Faster R-CNN are suitable due to their ability to detect small and overlapping abnormalities with high precision. These models provide precise bounding box predictions, which can aid in accurately identifying vitreous hemorrhages in images. However, the accuracy of detection may still vary depending on factors such as image quality and the complexity of the scene.
- Instance segmentation enables the identification and segmentation of individual instances within an image, providing precise delineation of each floater. This level of granularity may be beneficial for accurate detection and characterization of floaters.
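As a non-limiting illustration of the bounding-box post-processing common to detectors such as RetinaNet, YOLOv4, and Faster R-CNN, a minimal greedy non-maximum suppression sketch is shown below; the function names and the IoU threshold are illustrative and are not taken from this disclosure.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box
    and drop remaining boxes that overlap it above the IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```

In practice the boxes and scores would come from the trained detector; merging overlapping predictions of the same abnormality helps ensure each region is targeted once during focal ultrasound therapy.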
- a first deep learning algorithm is employed to perform segmentation of the vitreous to obtain segmented image data associated with the vitreous region
- a second deep learning model is employed to process the segmented vitreous image data to perform object detection and/or segmentation of abnormal regions within the vitreous.
- an intermediate deep learning algorithm can be employed to perform pathology classification on the segmented vitreous image data, prior to performing object detection and/or segmentation with a pathology-specific second deep learning algorithm.
- the initial vitreous segmentation can be performed using a method that does not employ deep learning.
- the vitreous region can be segmented using a model that first involves receiving, from a user, a selection of points on the vitreous, and subsequently fitting a circular or elliptical shape to the vitreous using border detection methods.
- the model may be improved by receiving, from the user, points defining the lens, fitting a circular or elliptical shape to the lens, and subtracting the lens region from the initially segmented circular or elliptical vitreous region.
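A minimal sketch of this point-based segmentation path is shown below, using axis-aligned ellipses for simplicity; a full implementation would fit a general (rotated) ellipse via border detection, and all function names and parameters here are illustrative rather than taken from this disclosure.

```python
import numpy as np

def ellipse_mask(points, shape):
    """Fit an axis-aligned ellipse to user-selected (row, col) boundary
    points and rasterize it as a boolean mask (a simple approximation)."""
    pts = np.asarray(points, dtype=float)
    cy, cx = pts[:, 0].mean(), pts[:, 1].mean()
    ry = np.abs(pts[:, 0] - cy).max()
    rx = np.abs(pts[:, 1] - cx).max()
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0

def segment_vitreous(vitreous_pts, lens_pts, shape):
    """Vitreous region: the fitted vitreous ellipse minus the lens ellipse."""
    return ellipse_mask(vitreous_pts, shape) & ~ellipse_mask(lens_pts, shape)
```

The subtraction of the lens mask mirrors the refinement described above, where the lens region is removed from the initially segmented vitreous ellipse.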
- the deep learning model employed for object detection and/or segmentation may be dependent on the type of therapeutic ultrasound transducer employed for therapy.
- the choice between a fixed-focus transducer and a transducer array may influence the selection of a suitable deep learning model.
- a transducer array might provide higher-resolution focusing suitable for more complex deep learning models, while a fixed-focus transducer might be sufficient for a simpler deep learning model.
- the selection of the suitable deep learning model may also be dependent on the properties of the imaging ultrasound transducer.
- the properties of the imaging ultrasound transducer may influence the selection of deep learning models based on factors such as image resolution, depth penetration, and the level of detail required for analysis.
- the therapeutic ultrasound transducer is autonomously controlled to deliver the therapeutic ultrasound energy to a region of vitreous opacity after having identified the region using a deep learning algorithm.
- an image is displayed on a user interface, the image identifying the abnormal region within the vitreous, and delivery of the focused ultrasound to the abnormal region is only initiated after receiving input from an operator authorizing treatment.
- the common planar region can be moved among a set of mutually adjacent locations to scan different regions of the eye.
- the movement of the ultrasound transducer assembly may be manual, semi-automated (e.g. motorized and controlled by an operator), or fully automated.
- an alert is generated when, as a consequence of motion of the ultrasound transducer assembly, the common planar region is moved to a previous location that has been previously treated with focused ultrasound energy.
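The alert logic can be sketched as a simple record of previously treated locations; this is a hypothetical illustration, as the disclosure does not specify the tracking mechanism, and all names below are illustrative.

```python
class TreatedLocationTracker:
    """Records treated locations of the common planar region and flags
    revisits so that an alert can be raised (illustrative sketch)."""

    def __init__(self):
        self._treated = set()

    def mark_treated(self, location):
        """Record that this location has received focused ultrasound."""
        self._treated.add(location)

    def revisit_alert(self, location):
        """Return True if the assembly has moved back to a treated location."""
        return location in self._treated
```

In a full system, `location` might be a slice index or encoder position reported by the motorized transducer assembly.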
- regions having vitreous abnormalities may initially be determined by performing ultrasound imaging at the plurality of locations of the common planar region, and subsequently moving the ultrasound transducer assembly to selected locations having identified vitreous abnormalities to deliver the focused ultrasound.
- an operator can select, on a user interface, a set of locations for subsequent therapeutic ultrasound treatment.
- the ultrasound transducer assembly and the generation of therapeutic ultrasound can then be autonomously controlled to deliver focused ultrasound therapy to the selected regions of vitreous abnormality.
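The operator-in-the-loop gating described above can be sketched as follows; the class and method names are hypothetical, and sonication is represented here by a log entry rather than actual hardware control.

```python
class TreatmentController:
    """Gates delivery of focused ultrasound on explicit operator input:
    an autonomously identified region is only sonicated after the
    operator authorizes it (illustrative sketch)."""

    def __init__(self):
        self.pending = {}  # region_id -> region description
        self.log = []      # record of delivered sonications

    def propose(self, region_id, region):
        """Register a region identified by the deep learning algorithm."""
        self.pending[region_id] = region

    def authorize_and_treat(self, region_id, confirmed):
        """Sonicate only if the operator confirmed this specific region."""
        if not confirmed or region_id not in self.pending:
            return False
        self.log.append(("sonicate", region_id, self.pending.pop(region_id)))
        return True
```

Once a region is treated it is removed from the pending set, so a second authorization for the same region has no effect unless the region is proposed again.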
- a non-limiting example workflow for employing the system according to FIG. 1, or variations thereof, for the treatment of vitreous disorders via ultrasound therapy causing cavitation is described below.
- An operator (e.g. a surgeon)
- the operator may prefer to initiate treatment when the image slice is aligned with the optic nerve.
- the user interface identifies this slice as the central slice and enables the user to lock onto this position.
- the eye (already immobilized) and the ultrasound transducer are docked and held constant.
- vitreous opacities will be directly visualized as hyperreflective areas within the vitreous.
- the deep learning algorithm is employed to perform object location and/or segmentation of these vitreous opacities.
- the vitreous body will also be segmented. This will provide a safety zone, as all treatment will fall well within this region. If a treatment is planned that is outside of this safe zone, the treatment will not commence.
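One simple way to enforce such a safety zone is to require that the planned focal point, together with a surrounding margin, lie entirely inside the segmented vitreous mask; the margin value and function name below are illustrative assumptions, not details taken from this disclosure.

```python
import numpy as np

def within_safe_zone(target_yx, vitreous_mask, margin=5):
    """Return True only if a (2*margin+1)-pixel square around the planned
    focal point lies entirely inside the segmented vitreous region."""
    y, x = target_yx
    h, w = vitreous_mask.shape
    y0, y1 = y - margin, y + margin + 1
    x0, x1 = x - margin, x + margin + 1
    if y0 < 0 or x0 < 0 or y1 > h or x1 > w:
        return False  # the margin extends past the image border
    return bool(vitreous_mask[y0:y1, x0:x1].all())
```

A treatment plan whose focal point fails this check would not commence, consistent with the safe-zone behavior described above.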
- the operator confirms which of the autonomously identified regions within the vitreous are to be treated. Once a single region is confirmed (e.g. via pressing a button or providing another form of input), the delivery of focused ultrasound (sonication) commences.
- the presence of sonication may be indicated or communicated according to one or more example modalities in order to provide one or more levels of feedback to the operator.
- feedback indicative of sonication could include auditory feedback (e.g. a “buzzing” sound).
- visual feedback can be provided, for example via display of ultrasound image data acquired during therapeutic sonication.
- an ultrasound image may be segmented to indicate the ultrasound cavitation signal, for example, using a deep learning segmentation algorithm.
- the segmented cavitation image (e.g. bubble) can be overlaid on the vitreous opacities that were initially imaged and identified, for example, to provide a clear indication of the area that is receiving active treatment relative to the abnormal regions that were initially identified.
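The overlay can be illustrated as a simple label image combining the two masks; the label encoding below is an assumption made for display purposes and is not specified in this disclosure.

```python
import numpy as np

def overlay_masks(opacity_mask, cavitation_mask):
    """Combine the initially identified opacity mask with the live
    cavitation mask into a label image for display:
    0 = background, 1 = opacity only, 2 = cavitation only,
    3 = cavitation over an identified opacity (actively treated)."""
    return opacity_mask.astype(np.uint8) + 2 * cavitation_mask.astype(np.uint8)
```

A user interface could then render each label with a distinct color so the operator sees at a glance which identified opacities are currently receiving treatment.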
- the operator, viewing the ultrasound image on the user interface, can confirm that the opacity has been treated based on a change in the ultrasound signature.
- the deep learning algorithm can be employed to confirm a sufficiency of treatment, and/or to identify one or more regions that require further treatment. If any further areas that require treatment are identified by either the operator or autonomously, the desired region can again be highlighted, and treatment can be repeated.
- the operator can then reposition the ultrasound transducer assembly to an adjacent slice (manually or semi-autonomously), or the system can autonomously reposition the ultrasound transducer assembly to the adjacent slice.
- the operator can decide how many slices to assess and potentially treat.
- the axial slices may be stored and registered so that the system can identify slices that have been previously treated.
- the system can autonomously scan the ultrasound transducer assembly through the treated slices, and identify if there are any further vitreous opacities that can be treated via focused ultrasound, thereby providing decision support to the operator. Finally, after the procedure is completed, the ultrasound device is removed from the coupling device, and the coupling device removed from the patient.
- FIG. 4 illustrates an example implementation of control and processing circuitry/hardware 500 for controlling the ultrasound device for the treatment of vitreous disorders via focused ultrasound.
- the example control and processing hardware 500 may include a processor 510, a memory 515, a system bus 505, one or more input/output devices 520, and a plurality of optional additional devices such as communications interface 525, external storage 530, and a data acquisition interface 535.
- a display (not shown) may be employed to provide a user interface for facilitating input to control the operation of the system 500.
- the display may be directly integrated into a control and processing device (for example, as an embedded display), or may be provided as an external device (for example, an external monitor).
- the control and processing system 500 may include or be connectable to a console that provides an interface for facilitating an operator to control the ultrasound device.
- the console may include, for example, one or more input devices, such as, but not limited to, a keypad, mouse, joystick, or touchscreen, and may optionally include a display device.
- control of the motor controller, deep learning processing, imaging beamforming, and ultrasound therapy can be implemented by control and processing circuitry 500, via executable instructions represented as motor control module 550, deep learning algorithm module 560, imaging beamforming module 570, and ultrasound therapy control module 580, respectively.
- executable instructions may be stored, for example, in the memory 515 and/or other internal storage.
- Some aspects of the present disclosure can be embodied, at least in part, in software, which, when executed on a computing system, transforms an otherwise generic computing system into a special-purpose computing system that is capable of performing the methods disclosed herein, or variations thereof. That is, the techniques can be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache, magnetic and optical disks, or a remote storage device. Further, the instructions can be downloaded into a computing device over a data network in the form of a compiled and linked version.
- Batch size: the batch size is set to 8. Batch size determines the number of data samples used in each forward and backward pass during training. A batch size of 8 balances computational efficiency and model stability. Smaller batch sizes can lead to noisy updates, while larger batch sizes may require more memory.
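The mini-batching described above can be sketched in a framework-agnostic way; with a batch size of 8, each training step consumes 8 samples, and the final batch may be smaller when the dataset size is not a multiple of 8.

```python
def batches(samples, batch_size=8):
    """Yield successive mini-batches of the given size; the last batch
    may contain fewer samples if the dataset size is not a multiple."""
    for i in range(0, len(samples), batch_size):
        yield samples[i:i + batch_size]
```

In a deep learning framework, the same behavior is typically provided by the framework's data-loading utility configured with a batch size of 8.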
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Physics & Mathematics (AREA)
- Surgery (AREA)
- Heart & Thoracic Surgery (AREA)
- Pathology (AREA)
- Ophthalmology & Optometry (AREA)
- Molecular Biology (AREA)
- Vascular Medicine (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Databases & Information Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Physiology (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Urology & Nephrology (AREA)
- Quality & Reliability (AREA)
- Surgical Instruments (AREA)
Abstract
Systems, methods, and devices are provided for treating vitreous disorders using focused ultrasound. An ultrasound device, configured both to form ultrasound images of the eye and to deliver focused ultrasound energy into the eye, is employed to obtain ultrasound imaging data characterizing at least the vitreous body. This image data is autonomously processed, by a deep learning algorithm, to perform object detection and/or segmentation in order to identify an abnormal region associated with opacity. The therapeutic ultrasound transducer is employed to deliver therapeutic ultrasound energy to the abnormal regions of the vitreous body in order to generate cavitation sufficient to dissolve the opacities. The imaging ultrasound transducer and the therapeutic ultrasound transducer can be moved together to enable coordinated imaging, deep-learning-based detection of vitreous abnormalities, and targeted ultrasound therapy.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463571269P | 2024-03-28 | 2024-03-28 | |
| US63/571,269 | 2024-03-28 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025199655A1 true WO2025199655A1 (fr) | 2025-10-02 |
Family
ID=97219981
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CA2025/050451 Pending WO2025199655A1 (fr) | 2025-03-28 | Systems and methods for ultrasound-based treatment of vitreous abnormalities |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025199655A1 (fr) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017062673A1 (fr) * | 2015-10-06 | 2017-04-13 | Aleyegn, Inc. | Ultrasound-directed cavitation methods and system for ocular treatments |
| US20210224997A1 (en) * | 2018-10-10 | 2021-07-22 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method and computer-readable medium |
| CN113509209A (zh) * | 2021-08-11 | 2021-10-19 | Beijing Tongren Hospital, Capital Medical University | Ophthalmic ultrasound imaging method and device |
| US20220104884A1 (en) * | 2019-02-08 | 2022-04-07 | The Board Of Trustees Of The University Of Illinois | Image-Guided Surgery System |
| US20230142825A1 (en) * | 2021-11-08 | 2023-05-11 | Arcscan, Inc. | Therapeutic method for the eye using ultrasound |
| US20230301727A1 (en) * | 2021-05-03 | 2023-09-28 | Microsurgical Guidance Solutions, Llc | Digital guidance and training platform for microsurgery of the retina and vitreous |
2025
- 2025-03-28 WO PCT/CA2025/050451 patent/WO2025199655A1/fr active Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2017062673A1 (fr) * | 2015-10-06 | 2017-04-13 | Aleyegn, Inc. | Ultrasound-directed cavitation methods and system for ocular treatments |
| US20210224997A1 (en) * | 2018-10-10 | 2021-07-22 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method and computer-readable medium |
| US20220104884A1 (en) * | 2019-02-08 | 2022-04-07 | The Board Of Trustees Of The University Of Illinois | Image-Guided Surgery System |
| US20230301727A1 (en) * | 2021-05-03 | 2023-09-28 | Microsurgical Guidance Solutions, Llc | Digital guidance and training platform for microsurgery of the retina and vitreous |
| CN113509209A (zh) * | 2021-08-11 | 2021-10-19 | Beijing Tongren Hospital, Capital Medical University | Ophthalmic ultrasound imaging method and device |
| US20230142825A1 (en) * | 2021-11-08 | 2023-05-11 | Arcscan, Inc. | Therapeutic method for the eye using ultrasound |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP3920858B1 (fr) | Image-guided surgical system | |
| CN111432731B (zh) | Intelligent ultrasound system for detecting image artifacts | |
| US8496588B2 (en) | Procedures for an ultrasonic arc scanning apparatus | |
| Chatelain et al. | Optimization of ultrasound image quality via visual servoing | |
| US12329572B2 (en) | Composite ultrasound images | |
| IL264350A (en) | Transformation, display and visualization of the simulation using augmented virtual reality glasses | |
| CN112654304B (zh) | Fat layer identification using ultrasound imaging | |
| CN120418822A (zh) | Image acquisition method | |
| JP2024542684A (ja) | Systems and methods for detecting floaters | |
| US20200178936A1 (en) | Implant assessment using ultrasound and optical imaging | |
| US20100036246A1 (en) | Automatic fat thickness measurements | |
| CN114727806A (zh) | Point-of-care ultrasound (POCUS) scanning assistance and related devices, systems, and methods | |
| JP6125256B2 (ja) | Ultrasonic diagnostic apparatus, image processing apparatus, and image processing program | |
| JP2021083782A (ja) | Ultrasonic diagnostic apparatus, medical imaging apparatus, learning apparatus, ultrasound image display method, and program | |
| US20240008811A1 (en) | Using artificial intelligence to detect and monitor glaucoma | |
| CN112842381B (zh) | Ultrasonic diagnostic apparatus and display method | |
| US20230142825A1 (en) | Therapeutic method for the eye using ultrasound | |
| CN113040814B (zh) | Imaging processing method and imaging processing system for ultrasonic microbubble cavitation equipment | |
| WO2025199655A1 (fr) | Systems and methods for ultrasound-based treatment of vitreous abnormalities | |
| KR20230099728A (ko) | Transducer performing electrical steering of a focus | |
| Brown et al. | Comparison of pulse sequences used for super-resolution ultrasound imaging with deep learning | |
| US20250262461A1 (en) | Ultrasound neuromodulation guided by artificial intelligence | |
| JP2016027842A (ja) | Ultrasound therapy apparatus | |
| JP2024539083A (ja) | Automatic depth selection for ultrasound imaging | |
| WO2023092051A1 (fr) | Automatic detection and quantification of anatomical structures in an ultrasound image using a personalized shape prior | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 25779196 Country of ref document: EP Kind code of ref document: A1 |