WO2024150235A1 - Dynamic anatomy deformation tracking for in-vivo navigation - Google Patents
- Publication number
- WO2024150235A1 (PCT/IL2024/050048)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- anatomy
- deformation
- data
- breathing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/267—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the respiratory tract, e.g. laryngoscopes, bronchoscopes
- A61B1/2676—Bronchoscopes
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/40—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/63—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/70—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00681—Aspects not otherwise provided for
- A61B2017/00694—Aspects not otherwise provided for with means correcting for movement of or for synchronisation with the body
- A61B2017/00699—Aspects not otherwise provided for with means correcting for movement of or for synchronisation with the body correcting for movement caused by respiration, e.g. by triggering
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/101—Computer-aided simulation of surgical operations
- A61B2034/105—Modelling of the patient, e.g. for ligaments or bones
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2051—Electromagnetic tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2061—Tracking techniques using shape-sensors, e.g. fiber shape sensors with Bragg gratings
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/39—Markers, e.g. radio-opaque or breast lesions markers
- A61B2090/3966—Radiopaque markers visible in an X-ray image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
- A61B5/0037—Performing a preliminary scan, e.g. a prescan for identifying a region of interest
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/06—Devices, other than using radiation, for detecting or locating foreign bodies ; Determining position of diagnostic devices within or on the body of the patient
- A61B5/061—Determining position of a probe within the body employing means separate from the probe, e.g. sensing internal probe position employing impedance electrodes on the surface of the body
- A61B5/062—Determining position of a probe within the body employing means separate from the probe, e.g. sensing internal probe position employing impedance electrodes on the surface of the body using magnetic field
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/06—Devices, other than using radiation, for detecting or locating foreign bodies ; Determining position of diagnostic devices within or on the body of the patient
- A61B5/065—Determining position of the probe employing exclusively positioning means located on or in the probe, e.g. using position sensors arranged on the probe
- A61B5/066—Superposing sensor position on an image of the patient, e.g. obtained by ultrasound or x-ray imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique
- A61B5/1128—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique using image analysis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
- A61B5/113—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb occurring during breathing
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7271—Specific aspects of physiological measurement analysis
- A61B5/7285—Specific aspects of physiological measurement analysis for synchronizing or triggering a physiological measurement or image acquisition with a physiological event or waveform, e.g. an ECG signal
- A61B5/7289—Retrospective gating, i.e. associating measured signals or images with a physiological event after the actual measurement or image acquisition, e.g. by simultaneously recording an additional physiological signal during the measurement or image acquisition
Definitions
- the present invention in some embodiments thereof, relates to a deformation model and, more particularly, but not exclusively, to a deformation model for an elongated endoscopic device.
- Some known systems and methods of endoscopy try to follow or compensate for movement of the anatomy during the procedure, for example in order to monitor the endoscopic procedure on a display.
- Such systems include: 1. Non-robotic navigation guided by single-sensor electromagnetic sensing, optionally with combined fluoroscopy; 2. Robotic navigation guided by fiber-optic shape sensing; and 3. Robotic navigation guided by single-sensor electromagnetic sensing.
- Other systems known in the art include fluoroscopy-based systems, CBCT (cone-beam CT) guided systems, and video-registration based bronchoscopy.
- Some methods for monitoring the probe within the lung include providing a deformation model of the lung.
- Some deformation models are hierarchical, e.g. each node of the lung is dependent on the orientation of its adjacent nodes, for example, while preserving the length of the branches.
- U.S. Patent No. US10674982B2 discloses a system and method for constructing fluoroscopic-based three-dimensional volumetric data from two-dimensional fluoroscopic images, including a computing device configured to facilitate navigation of a medical device to a target area within a patient and a fluoroscopic imaging device configured to acquire a fluoroscopic video of the target area about a plurality of angles relative to the target area.
- the computing device is configured to determine a pose of the fluoroscopic imaging device for each frame of the fluoroscopic video and to construct fluoroscopic-based three dimensional volumetric data of the target area in which soft tissue objects are visible using a fast iterative three dimensional construction algorithm.
- U.S. Patent No. US11341692B2 discloses a system for facilitating identification and marking of a target in a fluoroscopic image of a body region of a patient, the system comprising one or more storage devices having stored thereon instructions for: receiving a CT scan and a fluoroscopic 3D reconstruction of the body region of the patient, wherein the CT scan includes a marking of the target; and generating at least one virtual fluoroscopy image based on the CT scan of the patient, wherein the virtual fluoroscopy image includes the target and the marking of the target; at least one hardware processor configured to execute these instructions; and a display configured to display to a user the virtual fluoroscopy image and the fluoroscopic 3D reconstruction.
- U.S. Patent No. US10653485B2 discloses a method for implementing a dynamic three-dimensional lung map view for navigating a probe inside a patient's lungs, which includes loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images, inserting the probe into a patient's airways, registering a sensed location of the probe with the planned pathway, selecting a target in the navigation plan, presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe, navigating the probe through the airways of the patient's lungs toward the target, iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe, and updating the presented view by removing at least a part of an object forming part of the 3D model.
- U.S. Patent No. US11547377B2 discloses a system and method for navigating to a target using fluoroscopic-based three-dimensional volumetric data generated from two-dimensional fluoroscopic images, including a catheter guide assembly including a sensor, an electromagnetic field generator, a fluoroscopic imaging device to acquire a fluoroscopic video of a target area about a plurality of angles relative to the target area, and a computing device.
- the computing device is configured to receive previously acquired CT data, determine the location of the sensor based on the electromagnetic field generated by the electromagnetic field generator, generate a three dimensional rendering of the target area based on the acquired fluoroscopic video, receive a selection of the catheter guide assembly in the generated three dimensional rendering, and register the generated three dimensional rendering of the target area with the previously acquired CT data to correct the position of the catheter guide assembly.
- Example 2 The method according to example 1, further comprising acquiring an external image.
- Example 3 The method according to example 1 or example 2, wherein said analyzing comprises analyzing from information from said external image.
- Example 4 The method according to any one of examples 1-3, further comprising generating a breathing model of said anatomy.
- Example 5 The method according to any one of examples 1-4, wherein said generating a breathing model comprises acquiring data of said anatomy in two or more breathing states.
- Example 6 The method according to any one of examples 1-5, further comprising interpolating a plurality of breathing states between said two or more breathing states based on said two or more breathing states.
- Example 7 The method according to any one of examples 1-6, wherein said analyzing comprises analyzing from information from said breathing model.
- Example 8 The method according to any one of examples 1-7, further comprising acquiring said image of said anatomy.
- Example 9 The method according to any one of examples 1-8, wherein said model nodes and model branches correspond to nodes and branches in said anatomy.
- Example 10 The method according to any one of examples 1-9, wherein said assigning nodes and branches to said anatomy is automatic.
- Example 11 The method according to any one of examples 1-10, wherein said initial registration comprises surveying said anatomy with said endoscope.
- Example 12 The method according to any one of examples 1-11, wherein said initial registration comprises generating a representation of said endoscope upon a deformed model of said anatomy showing a correct position of said endoscope within said anatomy.
- Example 13 The method according to any one of examples 1-12, wherein said deformed model comprises positions and/or shapes of said model nodes and said model branches.
- Example 15 The method according to any one of examples 1-14, further comprising displaying an image of said initial registration upon a deformed model.
- Example 17 The method according to any one of examples 1-16, further comprising recovering position data from said shape data.
- Example 18 The method according to any one of examples 1-17, wherein said external image comprises one or more of a dynamic in-vivo imaging data and a dynamic ex- vivo imaging data.
- Example 19 The method according to any one of examples 1-18, wherein said dynamic in-vivo imaging is acquired by one or more of a camera, an ultrasound, a fluoroscope, a CBCT (Conebeam CT), a CT, and an MRI.
- Example 20 The method according to any one of examples 1-19, wherein said analyzing from information from said external image comprises updating said deformed model according to data from said external image.
- Example 21 The method according to any one of examples 1-20, wherein said updating said deformed model comprises identifying a target in said external image and updating a target position in said deformed representation of said anatomy elastically using said deformation model.
- Example 22 The method according to any one of examples 1-21, wherein said updating said deformed model comprises correcting a location of a target in transmitter coordinates using said deformation model by manually adding a constraint to said target.
- Example 23 The method according to any one of examples 1-22, wherein said at least one energy function comprises a first energy cost function and a second energy cost function.
- Example 24 The method according to any one of examples 1-23, wherein said first energy cost function allocates a suitable energy cost to a corresponding type of movement of a part of said anatomy.
- Example 25 The method according to any one of examples 1-24, wherein said second energy cost function allocates a suitable energy cost to each type of deviation of a model representation of said endoscope from a part of said anatomy.
- Example 26 The method according to any one of examples 1-25, wherein said amending each of said nodes independently from each other comprises independently amending one or more of a position, a rotation and a stretch.
- Example 27 The method according to any one of examples 1-26, wherein one of said one or more anatomical constraints are actual anatomical physical constraints of said anatomy.
- Example 29 The method according to any one of examples 1-28, wherein said generating a pre-deformation model is performed preoperatively.
- Example 30 The method according to any one of examples 1-29, wherein said generating an initial registration is performed preoperatively and/or intraoperatively.
- Example 31 The method according to any one of examples 1-30, wherein said acquiring position and/or orientation data is performed intraoperatively.
- Example 32 The method according to any one of examples 1-31, wherein said generating a breathing model is performed preoperatively.
- Example 33 The method according to any one of examples 1-32, further comprising attaching one or more sensors to said patient.
- Example 34 The method according to any one of examples 1-33, further comprising monitoring a breathing of a patient using said one or more sensors.
- Example 35 The method according to any one of examples 1-34, further comprising monitoring a movement of said patient.
- Example 36 The method according to any one of examples 1-35, wherein said analyzing comprises analyzing from information from said monitoring said movement of said patient.
- Example 37 The method according to any one of examples 1-36, further comprising displaying said generated deformed representation of said anatomy.
- Example 39 A system for endoscopy monitoring, comprising: a. an endoscope comprising an elongated body and a plurality of sensors positioned along said elongated body; b. a processor connected to a memory and comprising instructions for: i. accessing one or more information; said information comprising:
- Example 40 The system according to example 39, further comprising a display.
- Example 41 The system according to example 39 or example 40, further comprising an imaging module connected to an external imaging device.
- Example 42 The system according to any one of examples 39-41, wherein said plurality of sensors are configured to provide one or more of a position, an orientation, a shape and a curve of said endoscope.
- Example 43 The system according to any one of examples 39-42, wherein said position, orientation, shape and curve of said endoscope are represented in transmitter coordinates.
- Example 44 A method for endoscopy monitoring, comprising: a. storing information comprising at least a model of an anatomy and position data received from a plurality of sensors or a fiber-optic shape sensor located on an interventional flexible elongated device configured to be inserted into the anatomy; b. extracting constraints from the stored information; c. applying the constraints on a movement model of the anatomy; d. applying a first and a second cost function on the anatomy model, wherein the first cost function allocates a suitable energy cost to a corresponding type of movement of a part of the anatomy, and the second cost function allocates a suitable energy cost to each type of deviation of a model representation of the elongated device from a part of the anatomy; and e. calculating a deformation of the anatomy model by optimizing movement of the anatomy model based on the cost functions, so that the energy cost is minimized (see the illustrative sketch following these examples).
- Example 46 The method according to example 44 or example 45, wherein said movement model includes adjustments to the shape and position of the anatomy model according to received imaging data.
- Example 47 The method according to any one of examples 44-46, wherein said movement model incorporates a motion fading effect, describing changes in time in the movement due to a certain movement cause.
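- The two-cost-function optimization of Example 44 can be read as an energy-minimization problem. The following is a minimal, non-authoritative Python sketch assuming a toy node model; the specific energy forms, weights and names (anatomy_cost, device_cost) are illustrative assumptions and are not taken from the patent.

```python
# Hedged sketch of the two-cost-function deformation solve described in Example 44.
# The energy forms, weights and variable names are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rest_nodes = np.array([[0.0, 0.0, 0.0],    # rest positions of a toy 4-node branch
                       [0.0, 0.0, 1.0],
                       [0.0, 0.5, 2.0],
                       [0.0, 1.0, 3.0]])
edges = [(0, 1), (1, 2), (2, 3)]            # branch connectivity
catheter = np.array([[0.1, 0.1, 0.5],       # tracked device curve (transmitter coords)
                     [0.1, 0.4, 1.5],
                     [0.1, 0.9, 2.5]])

def anatomy_cost(nodes):
    """First cost: energy of each type of movement of the anatomy model
    (here: stretching of branches and drift from the rest positions)."""
    stretch = sum((np.linalg.norm(nodes[i] - nodes[j])
                   - np.linalg.norm(rest_nodes[i] - rest_nodes[j])) ** 2
                  for i, j in edges)
    drift = np.sum((nodes - rest_nodes) ** 2)
    return 10.0 * stretch + 0.1 * drift

def device_cost(nodes):
    """Second cost: deviation of the device representation from the anatomy model
    (here: squared distance of each catheter point to its nearest node)."""
    d = np.linalg.norm(catheter[:, None, :] - nodes[None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** 2)

def total_energy(flat_nodes):
    nodes = flat_nodes.reshape(-1, 3)
    return anatomy_cost(nodes) + device_cost(nodes)

result = minimize(total_energy, rest_nodes.ravel(), method="L-BFGS-B")
deformed_nodes = result.x.reshape(-1, 3)   # deformation that best explains the curve
print(deformed_nodes)
```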
- some embodiments of the present invention may be embodied as a system, method or computer program product. Accordingly, some embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, some embodiments of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Implementation of the method and/or system of some embodiments of the invention can involve performing and/or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of some embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware and/or by a combination thereof, e.g., using an operating system.
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof.
- a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium and/or data used thereby may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for some embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- Some of the methods described herein are generally designed only for use by a computer, and may not be feasible or practical for performing purely manually. A human performing similar tasks might be expected to use completely different methods, e.g., making use of expert knowledge and/or the pattern recognition capabilities of the human brain, which would be vastly more efficient than manually going through the steps of the methods described herein.
- FIG. 1A is a schematic representation of an exemplary system for endoscopy monitoring, according to some embodiments of invention.
- FIG. 1B is a schematic representation of the modules of the exemplary system, according to some embodiments of the invention.
- FIG. 2 is a flowchart of how data from the sensors are converted into data about shape and position of the elongated device, according to some embodiments of the invention.
- FIG. 3 is a flowchart of an exemplary general overview of the method for generating a deformed, optionally visual, representation, according to some embodiments of the invention.
- FIG. 4 is a flowchart of an exemplary part of the method for generating a deformed visual representation, according to some embodiments of the invention.
- FIG. 5 is a flowchart of an exemplary method for updating a deformed visual representation, according to some embodiments of the invention.
- FIG. 6 is a flowchart of an exemplary method for tracking dynamic anatomy deformation, according to some embodiments of the invention.
- the present invention in some embodiments thereof, relates to a deformation model and, more particularly, but not exclusively, to a deformation model for an elongated endoscopic device.
- an aspect of some embodiments of the invention relates to monitoring a navigation of an elongated device reaching a peripheral target endoscopically.
- the elongated device is a steerable catheter that is inserted into a lumen structure (such as the airways in the lung) and guided to a desired location (for example a lesion), either manually or by a guidance system.
- the device is tracked electromagnetically, such that its tip position is known in 3D transmitter coordinates and/or the entire or partial curve of the device is tracked accurately in 3D transmitter coordinates.
- the shape tracked device allows visualizing turns along the curve which contributes more information about the location of the tip of the device in the anatomy.
- the deformation and the changes, which occur constantly during the procedure due to the breathing of the patient, in the pathway that the device needs to take to reach the desired location are taken under consideration when planning and executing the navigation.
- a potential advantage of doing this is that it potentially avoids large inaccuracy in lung navigation procedures, where the system can "think" that the catheter tip has reached the desired location, while in reality it may be as far as 2-3 cm from the desired location.
- real-time deformations are monitored and compensated for. For example, if the patient moves during the intervention, the system is configured to adapt the model accordingly by dynamically and flexibly incorporating the real-time data into the model.
- An aspect of some embodiments of the invention relates to a deformation tracking algorithm configured to maintain a deformable registration between the preoperative CT scan and the breathing and deforming organ.
- the deformation tracking is based on a pre-trained deformation model - where a study of how certain organs, such as the lung, deform under certain applied forces serves as pre-trained information and is modeled as costs (energies) - and also on a real-time solver which minimizes the costs based on all available information (such as one or more endoscope positions and shapes, reference sensor locations, external imaging, etc.).
- the deformation model uses a skeletal/graph structure (multiple nodes), similar to a finite-element physical simulation, which is pre-trained for the lung (or for any other target organ).
- by solving the deformation of the organ in real time, the system is able to “understand” how the preoperative CT is deformed during the procedure and thus display the accurately curve-tracked device in its true deformed location (inside a lumen/airway).
- in some known methods, the deformation model strongly relies on a hierarchical structure (a tree/skeleton structure) of the nodes and allows only a limited number of parameters per node. Such a model is also hierarchical in the sense that if a certain node is rotated then all or some children nodes are also rotated. This may be true for 3D-printed plastic lung models, which may float freely in space, but in reality the inventors have found that this is not a natural behavior.
- the deformation model of the present invention is more general, where each node has its own independent position/rotation/stretch, and the nodes are tied by structural energies (not necessarily just neighboring nodes), such that a tree-like behavior can be achieved using one tuning, and a more realistic behavior (which is not strictly hierarchical, where the pleura for example constrains the peripheral airway endpoints from moving while other nodes can move more freely) can be achieved by another tuning.
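- A minimal Python sketch of this non-hierarchical node parametrization is given below: each node carries its own position, rotation and stretch, and nodes are coupled only through structural energy terms (plus an optional pleura/ribcage term). The dataclass fields, the axis-angle rotation parametrization and the weights are illustrative assumptions, not the patent's definitions.

```python
# Illustrative sketch: nodes with independent position / rotation / stretch,
# tied together only by structural energies. All parametrizations are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class Node:
    position: np.ndarray            # 3-vector, e.g. transmitter coordinates
    rotation: np.ndarray            # 3-vector, assumed axis-angle parametrization
    stretch: float = 1.0            # local scale along the branch
    rest_position: np.ndarray = None

def structural_energy(node_a: Node, node_b: Node, rest_length: float,
                      w_len: float = 1.0, w_rot: float = 0.1) -> float:
    """Energy tying two (not necessarily neighboring) nodes: penalizes change of
    inter-node distance (relative to stretch) and relative rotation."""
    length = np.linalg.norm(node_a.position - node_b.position)
    e_len = w_len * (length - rest_length * node_a.stretch) ** 2
    e_rot = w_rot * np.sum((node_a.rotation - node_b.rotation) ** 2)
    return e_len + e_rot

def pleura_energy(node: Node, w_pleura: float = 5.0) -> float:
    """Optional constraint energy preferring a peripheral endpoint node to stay
    near its original location relative to the pleura/ribcage."""
    return w_pleura * np.sum((node.position - node.rest_position) ** 2)

# Example: two nodes coupled by a structural energy; the peripheral node b also
# pays a pleura energy if it drifts from its original location.
a = Node(np.array([0.0, 0.0, 0.0]), np.zeros(3), 1.0, np.array([0.0, 0.0, 0.0]))
b = Node(np.array([0.0, 0.0, 1.2]), np.zeros(3), 1.0, np.array([0.0, 0.0, 1.0]))
print(structural_energy(a, b, rest_length=1.0), pleura_energy(b))
```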
- the deformation model of the present invention can include a constraint, which is described as an energy function, to prefer to keep all or part of the peripheral endpoint nodes in their original location relative to the pleura or ribcage.
- the deformation model herein also enables more free breathing (as studied during the initial registration survey) as the segments can stretch and contract.
- the deformation model herein also incorporates a breathing model to enable more free breathing which is constrained to realistic breathing according to the breathing model.
- the deformation model herein also enables inputting external constraints, such as information from external imaging: external imaging may be used to determine/confirm the position of the endoscope relative to a target lesion.
- the offset can be inputted to the deformation model engine and the engine corrects itself “deformably” such that the anatomy is then deformed to account for that offset, and the lesion appears “deformably” in its true position relative to the endoscope.
- in contrast, in other systems the correction is almost always rigid, such that other anatomical features go “out of sync”. For example, in other EM-based or fiber-optic shape-sensing-based navigation systems which are corrected using external imaging, a correction offset is applied to the navigation system as a rigid correction transform.
- This correction transform puts the target at its true location relative to the endoscope, as observed by the external imaging, but it does so rigidly.
- the surrounding anatomical features such as airways or other lumen structure can appear at wrong locations, which may impact the performance of renavigation to the target, to the extent that even if the user is finally able to reach the corrected target, the endoscope location in the anatomy may still not be at the target which may require additional repetitive scans and correction by external imaging.
- in the deformation model herein, the external information is natively inputted into the engine in a deformable way, so that everything corrects itself seamlessly and the entire neighborhood of the target is adjusted to its true deformation state.
- the monitoring comprises monitoring the endoluminal device itself, for example, the shape and location of the endoluminal device within the lumens of the patient. In some embodiments, this is performed by utilizing a plurality of sensors positioned along the elongated body of the endoluminal device, which are tracked and referenced to an external, optionally fixed, frame of reference (for example, to an EM transmitter coordinate system).
- the endoluminal device shape and/or position is localized in transmitter coordinate system and is registered to the anatomy using a registration algorithm. In some embodiments, the anatomy is registered to transmitter coordinate system using a registration algorithm.
- the monitoring comprises monitoring the overall anatomy of the patient, for example by using an x-ray imaging device, a CBCT imaging device, a fluoroscopy imaging device, a radial ultrasound probe (REBUS) and/or any other suitable type of ex-vivo or in-vivo imaging device, which are used during the procedure to monitor the advancement of the endoluminal device within the lumens of the patient.
- a clear disadvantage of X-ray based navigation is that a user cannot constantly irradiate the patient, so the user is forced to take a “roadmap” and then use it for navigation. This provides navigation on a frozen map that does not take deformation into consideration and/or does not perform deformation tracking.
- Another disadvantage is that with standard fluoroscopy the user is deprived of visualizing the airways, so the user cannot actually know where the roadmap is. In these cases, contrast materials could be used to enhance the airways in the image, but this is not trivial and usually it is not performed.
- the solution disclosed herein provides the best of both techniques, a high-quality deformation tracking that is complemented with information from an external imaging device, for confirmation and/or correction of the deformation tracking, based on the X-ray imaging.
- some methods for monitoring the endoluminal device within the lumens of the patient include providing a hierarchical deformation model of the lung.
- These hierarchical deformation models have each node of the modeled anatomy dependent on the position and/or orientation of its adjacent nodes. Some of these methods keep the length of the branches of the modeled anatomy constant, which limits the movement of the nodes to three degrees of freedom, or even to two degrees, depending on how the “roll” degree of freedom is handled. Additionally, in some of these methods, deformation of a node may be calculated as dependent on the deformation of many parent nodes. These assumptions may cause a slow and inaccurate (non-realistic) deformation calculation.
- Additionally, in some of these methods, deformation is computed locally - that is, by essentially bringing the tracked catheter into the nearest branch, which does not necessarily reflect the true state of the anatomy relative to the catheter. Additionally, in some of these methods, the anatomy is not deformed according to realistic constraints, such as pleura or ribcage constraints, which might impact the accuracy of the tracking. Additionally, in some of these methods, the anatomy does not “breathe” (does not deform according to breathing) in accordance with true anatomical breathing, as recorded in the patient, which might impact the accuracy of the tracking.
- the present invention provides a solution that overcomes the deficiencies of the abovementioned monitoring methods by providing a monitoring system with advanced deformation models (referred hereinafter as the deformation model) and deformation tracking methods (referred hereinafter as the deformation tracking).
- the invention comprises performing actions preoperatively and performing actions intraoperatively.
- a preoperative CT or MRI (or any other suitable preoperative scan) is performed to identify the desired location that requires treatment, and using that same CT, a map of the path that needs to be taken by the endoluminal device is prepared, along with a segmented map of the entire or partial lumen tree (such as airway tree, in the case of lung).
- some of the CT processed outputs (such as the entire or partial segmented lumen tree, a segmented lesion etc.) are used by the deformation model and tracking, or breathing model, or general, optionally parametrized, movement model, as described below.
- segmented/classified tissue information can be used by the deformation model/tracking to increase the accuracy of the deformation tracking (as described below).
- a breathing model of the patient is prepared, which is then utilized in the movement model 140 (for example breathing model and/or movements performed by the patient during the procedure - see below), as explained herein.
- exemplary intraoperative actions comprise one or more of monitoring the breathing (for example by using one or more sensors configured for monitoring the rhythm of the breathing and/or the movements of the chest), monitoring changes in the anatomy of the patient (for example by using one or more imaging devices and/or sensors positioned on the patient, for example, on the patient’s chest), monitoring the position and/or shape of the device (for example relative to a transmitter - “transmitter coordinates”), and monitoring the changes in the position and/or deformations of the patient.
- monitoring changes in the anatomy of the patient also includes the use of movement model 140, which is configured to amend the deformation model, for example, in view of sensed movement (for example, breathing) of the patient.
- the system 100 comprises a processor 102, a memory 104 (comprising a plurality of modules) and a display 106 (optionally showing a model of a target location 146).
- the system 100 comprises an imaging module 108, which may include an external imaging device, for example a three-dimensional imaging device, an x-ray imaging device, a CBCT imaging device, a fluoroscopy imaging device, a radial ultrasound probe (REBUS) and/or any other suitable type of ex-vivo or in-vivo imaging device.
- a potential advantage of the system 100 is that it allows using an external imaging device (connected to imaging module 108) to input more constraints into the general deformation model.
- a lesion can be “seen” by a CBCT (cone-beam CT) scan at a position which may be different compared to the location of the endoluminal device, from the point of view of the system (based on navigation and deformable registration only).
- the system 100 is configured to allow inputting this information into the deformation tracking engine as a constraint, such that the entire deformation model elastically adjusts to reflect this information (and such that nearby features are also deformed accordingly, not just the feature which was seen in the external imaging). Therefore, rather than correcting the location of important anatomical features rigidly, as other systems which rely on external imaging may do, the system is configured to correct the location of important features in a flexible manner using the deformation model.
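- As a rough illustration only, such an externally observed target offset could enter the deformation solve as one additional energy term, so the model deforms elastically to honor it rather than receiving a rigid correction. The quadratic penalty, the weight and the function name below are assumptions, not the patent's formulation.

```python
# Hedged sketch: an external-imaging observation (target lesion seen at a 3D offset
# from the catheter tip) added as one more energy term of the deformation solver,
# instead of a rigid correction transform. Form and weight are assumptions.
import numpy as np

def target_constraint_energy(model_target_position, catheter_tip, observed_offset,
                             weight=20.0):
    """Penalize disagreement between the model's target position and the position
    implied by external imaging (catheter tip + observed offset)."""
    implied_target = catheter_tip + observed_offset
    return weight * np.sum((model_target_position - implied_target) ** 2)

# Example: imaging shows the lesion 8 mm lateral to the tip; adding this term to
# the total energy lets nearby airways deform consistently with the target.
tip = np.array([12.0, 30.0, 55.0])
offset = np.array([8.0, 0.0, 0.0])
print(target_constraint_energy(np.array([18.0, 31.0, 54.0]), tip, offset))
```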
- an interventional flexible elongated device 110 may be integrated into system 100 and/or provided separately.
- the elongated device 110 is inserted into a patient, for example the lungs 112 (referred hereinafter as “anatomy 112” or “the anatomy 112”, which is schematically shown in Figure la).
- the flexible elongated device 110 comprises an elongated body, for example an elongated flexible body, for example a catheter or another interventional device which may be inserted, for example, via a catheter during an interventional procedure.
- the flexible elongated device 110 comprises a plurality of shape and/or position sensors 114 (optionally also temperature sensors), configured for sensing positions and/or orientations along the flexible elongated device 110.
- shape and position refer hereinafter as tracking of one or more of a position, an orientation, a shape and a curve, of the elongated device.
- the sensed positions are transmitted to the processor 102 or to another processor.
- the processor 102 is configured for receiving the sensed positions and calculating a shape of the flexible elongated device 110, while positioned inside the anatomy 112.
- the shape of the flexible elongated device 110 is calculated by an external processor and processor 102 receives the calculated shape of the flexible elongated device 110, for example, in EM transmitter coordinates, while positioned inside anatomy 112.
- the flexible elongated device 110 includes a shape sensor, such as an optical fiber shape sensor, in which case the shape of the flexible elongated device 110 is sensed directly.
- the shape/position is first computed in transmitter coordinates.
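- As a rough, non-authoritative illustration of how a curve of the device might be reconstructed in transmitter coordinates from a few discrete position sensors, the sketch below fits a smooth spline through the sensed points; the number of sensors, the chord-length parametrization and the cubic-spline choice are assumptions, not the patent's method.

```python
# Hedged sketch: reconstructing a smooth device curve in transmitter coordinates
# from a few discrete position sensors placed along the elongated body.
import numpy as np
from scipy.interpolate import CubicSpline

sensor_positions = np.array([[0.0, 0.0, 0.0],    # example sensed 3D positions
                             [0.5, 0.1, 4.0],    # (transmitter coordinates, mm)
                             [1.5, 0.3, 8.0],
                             [3.0, 0.8, 11.5]])

# Parametrize the device by cumulative chord length along the sensed points.
seg = np.linalg.norm(np.diff(sensor_positions, axis=0), axis=1)
s = np.concatenate([[0.0], np.cumsum(seg)])

spline = CubicSpline(s, sensor_positions, axis=0)   # one spline per coordinate
dense_s = np.linspace(0.0, s[-1], 100)
device_curve = spline(dense_s)                      # 100 x 3 curve for display/tracking
print(device_curve.shape)
```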
- the user is enabled to localize the catheter inside the anatomy, and optionally, localize the deforming anatomy in transmitter coordinates. In some embodiments, this is based on the deformation model in addition to the tracking, which is a deformable breathing registration between the transmitter coordinates and the anatomy (which may initially be represented in preoperative CT coordinates).
- a potential advantage of using position sensors is that it potentially allows sensing the shape of the device relative to a certain coordinate system (such as an electromagnetic transmitter’s coordinate system), which allows localizing a curve of the flexible elongated device 110 in its true absolute position in space. In contrast, using a fiber-optic shape sensor may only provide the relative shape of the device but might not provide the shape’s position in space.
- a potential advantage of knowing the position of the flexible elongated device 110 in space is that it potentially increases the accuracy of applied deformation models since this information can be incorporated in the minimized deformation energy functions described below. In some embodiments, the added accuracy is achieved due to having the information of position of the device and not just shape.
- Known solutions that utilize fiber optic solutions try to recover the position (for example, by integrating the sensed shape from a known reference point or origin, which may be considered as shape “transmitter coordinates”, or shape “reference coordinates”), but the unknown position always degrades their deformation tracking performance.
- the shape sensor can be thought of as localized in “reference coordinates” or “transmitter coordinates”, similarly to the EM transmitter coordinates described herein, but with greater inaccuracy (due to shape to position recovery integration error).
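- The shape-to-position recovery mentioned above can be pictured as integrating sensed tangent directions from a known reference point; the toy Python sketch below (a straight fiber with small direction noise; the noise level and step size are assumptions) also shows why per-sample direction errors accumulate into a growing position error at the tip.

```python
# Hedged sketch: recovering positions from shape-only data (e.g. a fiber-optic
# shape sensor) by integrating tangent directions from a known reference point.
# Direction noise accumulates, illustrating the integration error discussed above.
import numpy as np

rng = np.random.default_rng(0)
n, step = 200, 0.5                                  # samples along fiber, mm per step
true_tangents = np.tile([0.0, 0.0, 1.0], (n, 1))    # toy straight fiber along z

noisy_tangents = true_tangents + 0.01 * rng.standard_normal((n, 3))
noisy_tangents /= np.linalg.norm(noisy_tangents, axis=1, keepdims=True)

reference_point = np.zeros(3)                       # known origin ("reference coords")
positions = reference_point + step * np.cumsum(noisy_tangents, axis=0)

tip_error = np.linalg.norm(positions[-1] - np.array([0.0, 0.0, n * step]))
print(f"tip position error after integration: {tip_error:.2f} mm")
```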
- the memory 104 is configured to store instructions 116, which instruct the processor 102 to perform operations of the methods described herein.
- a dynamic deformation 120 is calculated by the processor 102, and optionally a deformed visual representation 122 is generated, from information/data 118 received from components of the system 100 and/or from external systems (not shown), as described in more detail herein (see below more information on exemplary sources of information/data 118).
- exemplary sources of the information/data 118 are one or more of:
- a pre-deformation model 124 of anatomy 112 (for example, a preoperative, optionally processed, CT scan or MRI scan or any other suitable scan);
- An initial registration 128 of the device 110 (optionally pre-navigational, and optionally a deformed registration), based on an initial, optionally unsupervised, survey of the device inside the anatomy and on a first fitting of the deformation model onto the plurality of tracked curves of the device from the initial survey (optionally in combination with a breathing model);
- Position and/or orientation data 126 from sensors 114 which are collected during the procedure (intraoperative, optionally in transmitter coordinates);
- the catheter is “seen” located at a certain 3D offset relative to the target lesion, which may be a different location than the location at which the system “thinks” the device is positioned inside the anatomy (or at which the lesion is positioned in transmitter coordinates).
- Another example can be a movement of the anatomy in relation to a previous image;
- A movement model 140, based on preoperative breathing and/or intraoperative breathing and movement of the patient.
- the information/data 118 comprises data from a pre-deformation model 124 of anatomy 112, for example, as mentioned above, a model of anatomy 112 in a static state (for example in a full inhale state or in a full exhale state), for example before the initialization of the interventional procedure and/or in a predetermined state and/or with a predetermined set of parameters.
- the pre-deformation model 124 is generated, for example by the processor 102, based on a static preoperative image, generated by any suitable type of diagnostic imaging scan method such as, for example a CT or MRI image.
- the system performs an initial registration 128 of the device 110 to be used as reference for the following monitoring of the device.
- the initial registration 128 is performed after the device 110 is inserted into the patient at the beginning of the procedure and before proceeding with the navigation towards the desired location within the anatomy 112.
- the initial registration 128 optionally comprises a visual device representation 130 of the device 110 upon a deformed model 132 of the anatomy 112, showing a correct position of the device 110 within the anatomy 112.
- the initial registration 128 is generated, for example, by the processor 102, by receiving position and/or shape data 126 from the sensors 114 and/or memory 104, for example while the device 110 is inserted into and/or moves within various parts of anatomy 112 (for example lung airways). In some embodiments, accordingly, adjustments are performed to the virtual position (position and/or orientation) of the device 110.
- the predeformation model 124 and/or its various parts are adjusted/updated according to position data 126 and/or according to the shape of the device 110 calculated based on position data 126.
- explanations of how the adjustments of the virtual position of the pre-deformation model 124 are performed are provided in more detail herein.
- the deformed model 132 comprises positions and/or shapes of model nodes 138 and model branches 140, which correspond to nodes 142 and branches 144 of the anatomy 112, respectively.
- the display 106 is configured for displaying an image of the initial registration 128, for example showing the visual device representation 130 of the device 110 upon the deformed model 132.
- the deformed model 132 is a dynamically deformed model, meaning the model is updated periodically.
- Exemplary position and/or orientation data 126 from sensors 114 collected during the procedure
- the information/data 118 comprises position and/or orientation data 126 from sensors 114, collected during the procedure. In some embodiments, this data is represented in transmitter coordinates and is used by the system to modify/update the deformation model.
- the information/data 118 comprises shape data 126 from a shape sensor 114, collected during the procedure.
- this data lacks position information (only represents shape) and position data is recovered from the shape data, for example, using shape integration methods relative to a known reference point.
- the integrated position and shape tracked sensor can be thought of as being localized in shape “reference coordinates” or shape “transmitter coordinates” and is then used by the system to modify/update the deformation model.
- the sensors 114 may refer to any of EM sensors, fiber-optic shape sensors, or any other position/shape sensors.
- updates are performed from data received from the imaging module 108, for example, based on movement models 140 of anatomy 112, as described in more detail herein.
- the information/data 118 includes dynamic in-vivo imaging data 134 and/or dynamic ex-vivo imaging data 136.
- the dynamic imaging data 134 includes one or more of an image, ultrasound or other in-vivo imaging data obtained for example from a camera 138, which may be installed on the device 110 and/or capture in-vivo image or ultrasound data, for example while the device 110 is inserted to and/or moves within various parts of anatomy 112.
- the dynamic ex-vivo imaging data 136 includes image data obtained from imaging module 108.
- the information/data 118 includes various movement models 140, which may be calculated, generated and/or adjusted for anatomy 112.
- the movement models 140 enable the calculation (for example by processor 102), optionally dynamically and/or in real time, of movements and/or updated position of the anatomy 112 in various scenarios.
- flexible movement models 140 include models of movement of the anatomy 112 and/or its various parts, due to voluntary or non-voluntary movements of the body of the patient, various types of movement due to breathing, movement of device 110 within anatomy 112, and/or any other suitable kind of movement.
- the movement models 140 comprise at least a first cost function and a second cost function, as described in more detail herein.
- the movement models 140 are parametrized. For example, a breathing model can be parametrized using a single parameter, the breathing phase, which determines a stretching and compressing of a lung model based on that single parameter, as described in more detail below.
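- As an illustration of such a single-parameter breathing model, the sketch below linearly interpolates node positions between a full-exhale and a full-inhale state as a function of a breathing-phase parameter; the linear interpolation, the sinusoidal phase and the example coordinates are simplifying assumptions rather than the patent's model.

```python
# Hedged sketch of a single-parameter breathing model: node positions interpolated
# between a full-exhale and a full-inhale state of a toy lung model.
import numpy as np

exhale_nodes = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 10.0], [0.0, 2.0, 18.0]])
inhale_nodes = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 12.0], [0.0, 3.0, 22.0]])

def breathing_model(phase: float) -> np.ndarray:
    """phase in [0, 1]: 0 = full exhale, 1 = full inhale."""
    return (1.0 - phase) * exhale_nodes + phase * inhale_nodes

# Example: interpolate intermediate breathing states over one breathing cycle,
# which also yields states between the two acquired extremes.
for t in np.linspace(0.0, 1.0, 5):
    phase = 0.5 * (1.0 - np.cos(2.0 * np.pi * t))   # smooth inhale-exhale cycle
    print(round(t, 2), breathing_model(phase)[-1])  # peripheral node position
```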
- an exemplary movement model 140 is a breathing model which describes movement of the anatomy 112 and/or parts of anatomy 112, due to breathing.
- the anatomy 112 includes the lungs anatomy
- the shape in space and/or the positions of the airways and nodes of the lungs may be affected by exhalation and inhalation and/or by the diaphragm (i.e. midriff) movement and/or by movement of other body parts due to breathing.
- a movement model 140 or a plurality of movement models 140 are generated to describe the various types of movements due to breathing and/or their effect on the shape in space and/or the positions of the airways and nodes of the lungs.
- the movement model 140 is configured for and/or comprises instructions for receiving as input in-vivo imaging data 134 and/or ex-vivo imaging data 136, and calculating adjustments to the shape and/or the positions of the model branches (e.g. airways) and nodes, according to the received image data.
- the movement models 140 may incorporate a retreating effect and/or a fading effect, describing changes in time of movements of and/or in the anatomy 112 due to a certain movement trigger and/or cause.
- the movement models 140 are configured for and/or comprise instructions for describing movement of various parts of the lungs (for example as translated into nodes and branches) during inhalation and/or exhalation, for example due to air flow and/or diaphragm motion, including a retreating effect and/or a fading effect of the motion of the various parts in time, for example towards the end of an inhalation or an exhalation.
- the processor 102 is configured for and/or comprises instructions for dynamically calculating a deformation 120 of the anatomy 112. In some embodiments, the calculation of the dynamic deformation 120 is based on the movement models 140. In some embodiments, the processor 102 is configured for and/or comprises instructions for applying various constraints on the movement models 140 to calculate the dynamic deformation 120. In some embodiments, the constraints are extracted from the plurality of information/data 118, such as, for example, pre-deformation model 124, position data 126, the initial registration 128, in-vivo imaging data 134, ex-vivo imaging data 136, and/or any other suitable information.
- the deformed model 132 comprises positions and/or shapes of model nodes 138 and model branches 140, which correspond to nodes 142 and branches 144 of the anatomy 112, respectively.
- a model node 138 may move and rotate in 6 degrees of freedom.
- a model node 138 may be subject to at least one cost function, which allocates a suitable energy cost to each type and magnitude of the movement.
- moving or rotating a model node 138 with respect to its immediate neighbors may cost a suitable amount of energy, for example according to the magnitude and/or direction of the movement, and/or according to the type and/or location of the node.
- moving or rotating a model node 138 with respect to its original position may cost a suitable amount of energy, for example according to the magnitude and/or direction of the movement, and/or according to the type and/or location of the node.
- moving or rotating a model node 138 with respect to a specific anatomy part or body organ may cost a suitable amount of energy, for example according to the magnitude and/or direction of the movement, and/or according to the type and/or location of the node.
- the visual representation 130 of the device 110 comprises a position and/or a shape, for example a full-length or partial shape calculated by the processor 102 based on position data or raw data from the sensors 114, representing a current position and/or a current shape of the device 110.
- a model node 138 of the deformed model 132 is subjected to at least a second cost function, which allocates a suitable energy cost to a deviation of the device representation 130 of the device 110 from a corresponding model node 138 and/or from a corresponding model branch 140 of the model 132, for example according to a type of the deviation and/or according to the magnitude and/or direction of the deviation.
- the device representation 130 of the device 110 optionally comprises representations of multiple simultaneously tracked devices, or, additionally or alternatively, also comprises past representations of the device 110 (for example, history locations of the device within a substantially same procedure).
- a potential advantage of this is that it potentially allows combining past information into the real-time solved deformation model.
- a plurality of curves are used by the “curve deviation” energy function to find the deformation which best fits all those time curves, according to the deformation model 120.
- this provides more information to the deformation tracking algorithms - instead of trying to find a deformation state which only fits the current position and shape of one or more tracked catheters, the deformation tracking algorithms find a deformation which fits the catheter state at the current moment and/or past moments (for example, 3 seconds of history).
- a potential advantage of this is that it potentially provides more information for the deformation tracking and may thus avoid overfitting of the deformation model onto the position and/or shape of the tracked catheter, under the assumption that the organ does not dynamically deform too much over a short time period (for example, 3 seconds).
- past curves can be weighted differently compared to the current curve. For example, decreasing weights can be assigned to past curves such that the older the curve, the smaller the weight. This accounts for the fact that "old" curves are less reliable compared to present curves, in that they may belong to an older deformed state of the organ, which is not necessarily the current deformed state of the organ.
- time-based data is used to fit a time-based deformation model onto that time-based data.
- a deformation model may contain a time-based energy function which assigns a cost for certain motion (over time) of the deformation model.
- it is known that the organ cannot deform/move very rapidly, so this can be formulated using a time-based temporal structure energy function.
- a time-based deformation model can be fit such that it fits all the time-based data over time, but also tries to minimize the structural motion (deformation) of the organ over time (as encoded by a time-based energy function). This can potentially improve the accuracy of the deformation tracking.
- the deformation engine is an optimizer which attempts to reach a minimum of the total energy after each frame.
- the engine may run very fast (for example, 60 frames per second), so the change between subsequent frames must be small and smooth in order to provide a realistic state.
- a simple way to quantify this, for example, is a time-based energy such as $E_{\text{time}}(t) = \sum_i \lVert T_i(t) - T_i(t - \Delta t) \rVert^2$ or $E_{\text{time}}(t) = \frac{1}{\Delta t} \sum_i \lVert T_i(t) - T_i(t - \Delta t) \rVert^2$, where $t$ is the current time, $\Delta t$ is the time duration between this frame and the last, and $T_i(t)$ is the transformation of node $i$ at time $t$.
- the calculated deformation 120 is applied on one or more of: the pre-deformation model 124, the initial registration 128 and a previous deformed visual model 122, to calculate a new deformed model 122' (not shown in Figures 1a-b), e.g., location, shape and/or orientation, of model 132 and/or its various parts, relative to device representation 130.
- the calculated deformation 120 is translated into the visual representation 130 of device 110 upon the deformed model 132 of the anatomy 112, which is displayed by the display 106, showing a correct shape and/or position of the anatomy 112 as represented by the deformed model 132 together with a correct shape and/or position of the device 110 relative to the anatomy 112, as represented by the device representation 130.
- the calculation of the dynamic deformation 120 comprises:
- a target, such as a lesion or one or more other anatomical features
- combining multiple 2D fluoroscopic images enables reconstructing a local fluoroscopic 3D volume, in which a target can be identified in 3D Fluoro coordinates (rather than 2D Fluoro image coordinates).
- identifying a target in fluoroscopy, or in an external image, shall therefore also include the case of identifying a target in a 3D reconstructed tomosynthesis volume, or in one or more processed external images.
- a constraint is added (manually or, optionally, automatically) to the deformation tracking, to force the target to transmitter coordinates (X,Y,Z), with some weight (compared to the other elastic constraints, which are described using energy cost functions). This "pulls" the target elastically to its true position in transmitter coordinates, as seen by the imaging device.
- the tip of the device can be identified in Fluoro coordinates. Since the position of the tip is known in transmitter coordinates (as tracked by the position/shape sensors), it is possible to compute a registration between Fluoro and transmitter coordinates and repeat the above. This can be done, for example, by constructing a translation-based Fluoro-to-transmitter registration, by bringing the tip location from its position in Fluoro coordinates to transmitter coordinates.
- the Fluoro is aligned with the EM transmitter in some orientation, therefore allowing the orientation of the Fluoro-to-transmitter registration to be known.
- a potential advantage of correcting the discrepancy using a deformable flexible model is that it potentially allows improving the overall accuracy of the deformed anatomical model compared to the anatomy, instead of just providing accuracy at the single point of correction (for example, at a target).
- the modeled anatomy is deformed such that it still respects its energy cost functions (which can encode, for example, the original structure of the anatomy among other constraints) and yet provides accuracy at the marked point of interest (for example, at a target). In some embodiments, this potentially improves the overall accuracy of the guidance system, especially in the case where further navigation is needed in order to reach the target with the device. In some embodiments, improving the overall accuracy increases the probability for successful re-navigation to the target after correction.
- since the deformation tracking is constrained to bring the target to its observed location in transmitter coordinates (with a certain constraint weight), but does not force it to stay at the location observed by the external imaging, it may account for deformation which may occur during the re-navigation, such that the catheter will be able to reach the target, which may still be breathing and deforming during re-navigation, and thus may not necessarily end up after re-navigation at its initially observed location by the external imaging.
- a breathing movement model includes alternating stretching and compressing of a lung model, for example by moving each model node, to simulate breathing. It is not known if this is possible in previous deformation models, in which model branches keep their lengths, and in which the positions of the model nodes are determined, in some cases, by the rotations of many other “parent” model nodes.
- the breathing model is achieved by using two or more preoperative CT/MRI scans. For example, one scan can be performed in full inhalation and a second scan can be performed in exhalation.
- the anatomy can then be modeled according to the two scans.
- the deformation model can use the two scans to model the breathing of the anatomy in real-time based on the real-time tracked breathing state of the patient.
- the deformation model can use a breathing model that provides ready-to- be-used models of the lungs during the breathing.
- the two CT scans can be processed, and a tree skeletal model may be generated from the two scans.
- the two processed trees can be registered such that each node and branch or a partial set of nodes and branches of the tree is identified and matched in the two scans.
- the deformation model can then use the two scans to interpolate between the two models based on the breathing state of the patient, which provides ready-to-be-used models of the lungs during the breathing.
- the breathing state of the patient can be computed using any suitable method.
- the breathing state of the patient (inhale/exhale or a scale in between) is computed by attaching one or more position/reference sensors to the patient and monitoring the movement of the sensor while the patient breathes.
- applying a high-pass filter on an up/down or right/left motion of the one or more reference sensors can provide a motion which is indicative of the breathing motion of the patient, from which the breathing state can be computed, as also mentioned below.
- position/reference sensors attached to the patient monitor the movement of the patient during procedure.
- the registration can be updated by monitoring the movement of the patient, by applying 3D translations and rotations in accordance with the movement of the patient.
- the external deformation of the patient can be monitored.
- the deformation of the body of the patient can be monitored by attaching 3 or more reference sensors to the body of the patient.
- deformation in the relative shape of the sensors can account for deformation of the body of the patient, which can be extrapolated to model deformation of the lung. For example, when the patient coughs, this can be detected by monitoring the shape and movement of the reference sensors and a certain deformation can be applied to the lung, using the deformation model.
- the model can be fitted during the initial registration, by matching between structural changes of the external reference sensors and curve states inside the anatomy, as done in the case of breathing. For example, it may be observed that whenever a certain structural change occurs on the external reference sensors, certain motion or transformation is applied to the lung. This, for example, can model the coughing of the patient such that the system will compensate for coughing by parametrizing it using the external reference sensors and applying it on the deformed lung, as may be learned during initial registration according to the deformation model.
- a preoperative scan of the anatomy is performed in full inhalation (IRV) while the guidance procedure may be performed in standard tidal volume breathing.
- knowing the ratio between the inflation state of the scan (from which the anatomical model may be generated) and the intraoperative breathing can assist the deformation model in modeling the effect of the breathing.
- the deformation model can assume that the preoperative scan was performed in 100% inhalation state, while the intraoperative breathing alternates between 20% inflation (at exhale state) to 80% inflation (at inhale state).
- the deformation model can then interpolate between two available scans, or between two breathing states of the anatomy (for example, by stretching and compressing of a lung model, as discussed above).
- the inflation state information is given to the deformation model by the system or by the user.
- the inflation state of the preoperative scan as well as the intraoperative breathing pattern are automatically computed by the deformation model.
- the breathing pattern can be modeled using a small number of parameters such as: preoperative scan inflation (can be assumed to be 100%), intraoperative exhale and inhale inflation ranges.
- these parameters can then participate in the energy minimization process (see below regarding “energy minimization process”) of the breathing deformation model such that the overall energy is minimized. In some embodiments, this works under the assumption that the deformation cost energy is minimal for the true breathing and deformation parameters.
- the breathing phase of the patient is tracked in real-time and provided to the deformation model by using an external measurement of the breathing.
- one or more position sensors are attached to the chest of the patient to track the breathing motion pattern of the patient.
- periodic height change of the chest of the patient is indicative of the periodic breathing of the patient.
- the deformation model is configured for and/or comprises instructions for using the tracked breathing phase to interpolate between inhale and exhale states of the deformation model.
- the initial inhale and exhale states of the deformation model can be learned in an initial deformable registration setup stage of the procedure. While this is technically "not intraoperative", since it is meant to be performed just before beginning the procedure, it has been incorporated into the "intraoperative" actions.
- multiple positions and shapes of the elongated tracked device can be accumulated during a supervised or unsupervised survey of the physician inside the anatomy.
- the multiple positions and shapes can be used simultaneously in the deformation energy minimization process (see below) to find the initial deformation state of the anatomy.
- the deformation model may then fit two or more different models, for example: inhale and exhale model, corresponding to two or more breathing groups of the samples.
- the deformation model is configured for interpolating between these models during procedure.
- the breathing inflation parameters (as described above) can be solved in the process of the initial deformable registration.
- a general method for generating a deformed, optionally visual, representation comprises one or more of:
- a part of the method for generating a deformed visual representation comprises one or more of:
- a method for updating a deformed visual representation comprises one or more of:
- a target for example a lesion
- a target lesion can be identified in Fluoro/CBCT, for example, a target lesion can be seen in a fluoroscopic image by using tomosynthesis methods as mentioned above; then the target is transformed to EM transmitter coordinates (for example by a Fluoro-to-EM registration, for example using radiopaque markers on the transmitter), and then it is compared to where the system "thinks" the target is (in transmitter coordinates) based on its deformation tracking and registration algorithms.
- a method for tracking dynamic anatomy deformation comprises one or more of:
- applying of the extracted constraints on the models 140 causes movement of the model 132 and/or parts of model 132, representing parts of the anatomy 112, and/or change in position of representation 130 of the device 110 (optionally also a visual representation) relative to the model 132 and/or parts of model 132.
- applying of the extracted constraints causes the position of the visual representation 130 of the device 110 to appear outside of the anatomy 112 as represented in model 132.
- the first cost function allocates a suitable energy cost to each type and magnitude of movement of each model node 138.
- the second cost function allocates a suitable energy cost to each type and magnitude of deviation of visual representation 130 of the device 110 from a corresponding model node 138 and/or model branch 140 of model 132.
- the aforementioned methods are performed by the processor 102 in response to received instructions 116. In some embodiments, the methods are performed periodically and/or upon a change in the information/data 118, and/or upon an event that may potentially cause change in the information/data 118.
- the device may appear to be located in a fork between two possible branches.
- a guidance system needs to determine the true position of the device (i.e., in this case, choose between the two optional branch locations).
- a deviation cost function may encode the deviation cost between the device and the first branch, but may also possibly address the second branch while computing the deviation.
- determining the true assignment of the branch is nontrivial.
- multiple hypothetical assignments are made between the device and multiple possible branches, for example, in the proximity of the device (but not necessarily considering only the nearest branch).
- the device can be assumed to be located inside one of K nearest branches.
- a deviation cost energy can be computed, and the overall deformation model cost can be computed and minimized (under the assumption that the device is located inside one of the K nearest branches).
- the overall deformation energy includes a structural cost of the anatomy as well as a cost for divergence from the anatomy
- the branch assignment hypothesis which leads to the minimal final deformation energy is taken to be the true assignment
- the deformation model is set to the state of this minimal energy assumption.
- this mechanism assists in avoiding local minima in the live dynamic deformation tracking of the model.
- multiple asynchronous worker threads can run in parallel and possess a different assignment of the tracked device to any of K possible anatomical branches (for example, K nearest branches).
- the worker threads can minimize the energy and test multiple assignment hypotheses in parallel.
- the system may then choose the deformation state from the thread that achieved the minimum deformation cost as its current chosen state of the deformed anatomy.
- the system operates under the assumption that the true state of deformation is the one which minimizes the total cost of the deformation model (which as described, may include cost for structural change of the anatomy as well as a cost for the divergence of the device from the anatomical model).
- in order to avoid local minima in the dynamic deformation model, the system is configured to find the global minimum of the deformation model in a multi-step solution by using different weighting techniques along the elongated tracked device.
- the system may have high confidence in finding the deformation of the proximal anatomy based on the proximal portion of the elongated tracked device.
- the system then decreases the weights of the distal portion of the device and minimizes the deformation energy based just on the “proximal device”.
- the proximal anatomy would then deform based on the proximal device.
- the peripheral anatomy is assumed to be deformed more correctly than initially.
- the system increases the weights for the distal portion of the elongated tracked device and repeats the energy minimization process, now penalizing divergence relative to the distal device more strongly. In some embodiments, this causes the distal (peripheral) anatomy to deform more strongly than before, but starting at the deformation state achieved in the previous step, thus reducing the risk of converging to a local minimum of the deformation model.
- this mechanism of gradual convergence can consist of two or more steps in which the relative weights along the elongated tracked device are increased towards the distal end of the device, or changed in any other suitable manner.
- the weights can grow gradually towards the distal end of the device between each convergence step but can then be decreased again towards the proximal portion of the device, then grow again towards the distal part.
- the relative weights along the tracked device can be changed randomly between different convergence attempts under the assumption that the true deformation state of the anatomy is a state which is highly indifferent to different relative weighting along the device. In some embodiments, by changing the different relative weights along the elongated tracked device the probability of convergence to the true global minimum of the deformation cost energy is increased.
- one or more assumed true position candidates of the device inside the anatomy are optionally obtained by using artificial intelligence (AI) tools.
- the divergence cost energy would then compute the divergence between the elongated device and each of its true/hypothetical assumed positions inside the anatomy during the overall deformation energy minimization process, as obtained by the AI.
- AI architectures may be configured to estimate one or more true/hypothetical positions of device 110 and/or a shape or a curve of device 110 within the anatomy 112, based on system measurements such as, for example, distances between various sensors or other components and/or relative positions and/or orientations of various sensors or other components of device 110.
- the estimation is based on comparison of these device measurements with the local anatomy near where the device is assumed to be located inside the anatomy.
- local anatomy can be processed by the AI in the form of a local preoperative CT scan, airway segmentation or in any other suitable format which represents the anatomical structures near the device.
- the AI can then use the local anatomical information to find one or more true position candidates for the device inside the anatomy.
- the AI may be architected as a U-Net which receives the full curve of the device in the form of an input 3D volume into the network, as well as local anatomy information in the form of a corresponding block of a CT scan or airway segmentation binary volume, and the U-Net may then output a 3D volume representing the most probable location of the device inside the given local anatomy, according to the AI.
- the U-Net may output a 3D volume equivalent to the input local anatomy volume where a pathway inside the anatomy is highlighted, where the elongated device is found to be located with the best probability, according to the AI.
- any kind of AI or other suitable method can be used to choose K-best anatomical location candidates (for example, branch candidates) which represent where the elongated device may be located inside the anatomy.
- such an estimated curve may be used as a constraint in one or more of the movement models 140.
- the first cost function and the second cost function may be applied, and/or deformation 120 may be calculated, as described in detail herein above.
- the K-best candidates may be used as assumptions for the device-anatomy deviation cost function as part of the overall deformation energy minimization as described above.
- an assignment candidate (hypothesis) yielding the best overall deformation energy cost after minimization is then assumed to represent the true location of the device inside the anatomy, thus representing the true deformation of the anatomy (which explains the tracked curve of the device).
- compositions, methods or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
- a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
- range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as “from 1 to 6” should be considered to have specifically disclosed subranges such as “from 1 to 3”, “from 1 to 4”, “from 1 to 5”, “from 2 to 4”, “from 2 to 6”, “from 3 to 6”, etc.; as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
- the returning energy can be written, for example, as a sum over nodes, $E_{\text{ret}} = \sum_{i=1}^{N} w_i\, e_i$, where $N$ is the number of nodes and $w_i$ is the weight, which may be different for each node, for example, based on its radius. This is a one-node energy, since it contains a single sum over the nodes, with no interaction between them.
- the important per-node part $e_i$ can be written as, for example, a squared deviation from the node's initial state, $e_i = \lVert p_i - p_i^{0} \rVert^2 + \lVert r_i - r_i^{0} \rVert^2$, where $p_i, r_i$ denote the node's current position and rotation and the superscript 0 its initial values, such that it is always non-negative, and it is exactly zero only when the node is at its initial position and rotation.
- the optimizer uses the true degrees of freedom of each node $i$ - either its position and rotation as Euler angles, $(x_i, y_i, z_i, \alpha_i, \beta_i, \gamma_i)$, or its position and rotation as a quaternion, $(x_i, y_i, z_i, q_i)$, respectively.
- the returning energy can be alternatively defined, for example, as $E_{\text{ret}} = \sum_i w_i \left( \lVert p_i - p_i^{0} \rVert^2 + \lVert (\alpha_i, \beta_i, \gamma_i) - (\alpha_i^{0}, \beta_i^{0}, \gamma_i^{0}) \rVert^2 \right)$ or $E_{\text{ret}} = \sum_i w_i \left( \lVert p_i - p_i^{0} \rVert^2 + \lVert q_i - q_i^{0} \rVert^2 \right)$, depending on the chosen rotation parametrization.
- the main one is the catheter energy.
- the idea is writing a function which depends on the node transformations and on the tracked transformations along the catheter (which are the external data and NOT part of the degrees of freedom in the optimization).
- the deformation engine receives a list of some number $N_{\text{cat}}$ of tracked positions and orientations $T_{\text{cat},j}$ along the catheter(s). For example, if there are three catheters present, it can use all of them at once to get better results than using only one catheter. In that case, $N_{\text{cat}1}$ may not necessarily be equal to $N_{\text{cat}2}$ or to $N_{\text{cat}3}$.
- as these $T_{\text{cat},j}$ are given to the deformation engine, the simplest guess as to where inside the airways a catheter point lies is the closest position in the lung. Since it is known that all catheter points should reside inside an airway, the energy is desired to be at a minimum (or zero) when all $T_{\text{cat},j}$ are inside airways, and to increase the further out of an airway they are. It can be imagined as a string connecting each $T_{\text{cat},j}$ to its closest lung point, with the energy being some positive monotonic function of the length of the strings. For example: $E_{\text{cat},j} = w_{\text{cat}}\, \lVert T_{\text{cat},j} - p_i \rVert^2$,
- where node $i$ is the closest to the position of the $j$th catheter point, and the final catheter energy is a sum over all such energies, $E_{\text{cat}} = \sum_j E_{\text{cat},j}$.
- if the engine receives a catheter point which is outside the lungs, the energy will rise from zero. This will push the lung nodes away from their rest state in an attempt to decrease the catheter energy $E_{\text{cat}}$. This change will, in turn, increase the previous $E_{\text{ret}}$ until an equilibrium is reached, at the minimum total energy. If there were only $E_{\text{cat}}$, the catheter would pull the lungs towards it with no regard to their initial state. $E_{\text{ret}}$ is added to allow the lungs to partially oppose changes.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Biomedical Technology (AREA)
- Pathology (AREA)
- Data Mining & Analysis (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Surgery (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Pulmonology (AREA)
- Heart & Thoracic Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Molecular Biology (AREA)
- Optics & Photonics (AREA)
- Veterinary Medicine (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Biophysics (AREA)
- Physics & Mathematics (AREA)
- Urology & Nephrology (AREA)
- Physiology (AREA)
- Otolaryngology (AREA)
- Endoscopes (AREA)
Abstract
The invention discloses systems and methods for monitoring an endoscope within an anatomy of a patient, the method including: generating a pre-deformation model based on an acquired image of the anatomy including assigning model nodes and model branches; generating an initial registration of the endoscope in the anatomy; acquiring position and/or orientation data in transmitter coordinates from sensors positioned in the endoscope while navigating within the anatomy; providing one or more anatomical constraints on each of the nodes and branches based on at least one energy function; analyzing the acquired data in view of the pre-deformation model, the initial registration and the at least one energy function; generating a deformed representation of the anatomy based on the analysis by using a deformation model; where the analyzing comprises deforming the model by amending each of the nodes independently from each other in view of the constraints.
Description
DYNAMIC ANATOMY DEFORMATION TRACKING FOR IN-VIVO NAVIGATION
RELATED APPLICATION/S
This application claims the benefit of priority of U.S. Provisional Patent Application No. 63/438,536 filed on 12 January 2023, the contents of which are incorporated herein by reference in their entirety.
FIELD AND BACKGROUND OF THE INVENTION
The present invention, in some embodiments thereof, relates to a deformation model and, more particularly, but not exclusively, to a deformation model for an elongated endoscopic device.
Some known systems and methods of endoscopy, for example in the lung (bronchoscopy), try to follow or compensate for the anatomy movement during the procedure, for example in order to monitor the endoscopy on a display.
Systems for navigating and/or monitoring probes moving within the lung have been proposed and/or marketed, including: 1. Non-robotic navigation guided by single-sensor electromagnetic sensing, optionally, with combined fluoroscopy; 2. Robotic navigation guided by fiber-optics shape sensing; and 3. Robotic navigation guided by single-sensor electromagnetic sensing. Other systems known in the art include fluoroscopy-based systems, CBCT (cone-beam CT) guided systems, and video-registration based bronchoscopy.
Some methods for monitoring the probe within the lung include providing a deformation model of the lung. Some deformation models are hierarchical, e.g. each node of the lung is dependent on the orientation of its adjacent nodes, for example, while preserving the length of the branches.
Additional background art includes U.S. Patent No. US10674982B2 disclosing a system and method for constructing fluoroscopic-based three dimensional volumetric data from two dimensional fluoroscopic images including a computing device configured to facilitate navigation of a medical device to a target area within a patient and a fluoroscopic imaging device configured to acquire a fluoroscopic video of the target area about a plurality of angles relative to the target area. The computing device is configured to determine a pose of the fluoroscopic imaging device for each frame of the fluoroscopic video and to construct fluoroscopic-based three dimensional volumetric data of the target area in which soft tissue objects are visible using a fast iterative three dimensional construction algorithm.
U.S. Patent No. US11341692B2 disclosing a system for facilitating identification and marking of a target in a fluoroscopic image of a body region of a patient, the system comprising one or more storage devices having stored thereon instructions for: receiving a CT scan and a fluoroscopic 3D reconstruction of the body region of the patient, wherein the CT scan includes a marking of the target; and generating at least one virtual fluoroscopy image based on the CT scan of the patient, wherein the virtual fluoroscopy image includes the target and the marking of the target, at least one hardware processor configured to execute these instructions, and a display configured to display to a user the virtual fluoroscopy image and the fluoroscopic 3D reconstruction.
U.S. Patent No. US10653485B2 disclosing a method for implementing a dynamic three- dimensional lung map view for navigating a probe inside a patient's lungs includes loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images, inserting the probe into a patient's airways, registering a sensed location of the probe with the planned pathway, selecting a target in the navigation plan, presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe, navigating the probe through the airways of the patient's lungs toward the target, iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe, and updating the presented view by removing at least a part of an object forming part of the 3D model.
U.S. Patent No. US11547377B2 disclosing a system and method for navigating to a target using fluoroscopic-based three dimensional volumetric data generated from two dimensional fluoroscopic images, including a catheter guide assembly including a sensor, an electromagnetic field generator, a fluoroscopic imaging device to acquire a fluoroscopic video of a target area about a plurality of angles relative to the target area, and a computing device. The computing device is configured to receive previously acquired CT data, determine the location of the sensor based on the electromagnetic field generated by the electromagnetic field generator, generate a three dimensional rendering of the target area based on the acquired fluoroscopic video, receive a selection of the catheter guide assembly in the generated three dimensional rendering, and register the generated three dimensional rendering of the target area with the previously acquired CT data to correct the position of the catheter guide assembly.
SUMMARY OF THE INVENTION
Following is a non-exclusive list including some examples of embodiments of the invention. The invention also includes embodiments which include fewer than all the features in
an example and embodiments using features from multiple examples, also if not expressly listed below.
Example 1. A method for monitoring an endoscope within an anatomy of a patient, comprising: a. generating a pre-deformation model based on an acquired image of said anatomy; said generation comprising assigning model nodes and model branches to said pre-deformation model anatomy; b. generating an initial registration of said endoscope in said anatomy; c. acquiring position and/or orientation data in transmitter coordinates from sensors positioned in said endoscope while navigating within said anatomy; d. providing one or more anatomical constraints on each of said nodes and branches based on at least one energy function; e. analyzing said acquired data in view of said pre-deformation model, said initial registration and said one or more anatomical constraints based on said at least one energy function; f. generating a deformed representation of said anatomy based on said analysis by using a deformation model; wherein said analyzing comprises deforming said model by amending each of said nodes independently from each other in view of said constraints.
Example 2. The method according to example 1, further comprising acquiring an external image.
Example 3. The method according to example 1 or example 2, wherein said analyzing comprises analyzing from information from said external image.
Example 4. The method according to any one of examples 1-3, further comprising generating a breathing model of said anatomy.
Example 5. The method according to any one of examples 1-4, wherein said generating a breathing model comprises acquiring data of said anatomy in two or more breathing states.
Example 6. The method according to any one of examples 1-5, further comprising interpolating a plurality of breathing states between said two or more breathing states based on said two or more breathing states.
Example 7. The method according to any one of examples 1-6, wherein said analyzing comprises analyzing from information from said breathing model.
Example 8. The method according to any one of examples 1-7, further comprising acquiring said image of said anatomy.
Example 9. The method according to any one of examples 1-8, wherein said model nodes and model branches correspond to nodes and branches in said anatomy.
Example 10. The method according to any one of examples 1-9, wherein said assigning nodes and branches to said anatomy is automatic.
Example 11. The method according to any one of examples 1-10, wherein said initial registration comprises surveying said anatomy with said endoscope.
Example 12. The method according to any one of examples 1-11, wherein said initial registration comprises generating a representation of said endoscope upon a deformed model of said anatomy showing a correct position of said endoscope within said anatomy.
Example 13. The method according to any one of examples 1-12, wherein said deformed model comprises positions and/or shapes of said model nodes and said model branches.
Example 14. The method according to any one of examples 1-13, wherein said initial registration comprises receiving one or more position and/or shape data from sensors located in said endoscope while said endoscope is inserted within various parts of said anatomy.
Example 15. The method according to any one of examples 1-14, further comprising displaying an image of said initial registration upon a deformed model.
Example 16. The method according to any one of examples 1-15, wherein said acquiring position and/or orientation data comprises acquiring shape data without position data.
Example 17. The method according to any one of examples 1-16, further comprising recovering position data from said shape data.
Example 18. The method according to any one of examples 1-17, wherein said external image comprises one or more of a dynamic in-vivo imaging data and a dynamic ex- vivo imaging data.
Example 19. The method according to any one of examples 1-18, wherein said dynamic in-vivo imaging is acquired by one or more of a camera, an ultrasound, a fluoroscope, a CBCT (Conebeam CT), a CT, and an MRI.
Example 20. The method according to any one of examples 1-19, wherein said analyzing from information from said external image comprises updating said deformed model according to data from said external image.
Example 21. The method according to any one of examples 1-20, wherein said updating said deformed model comprises identifying a target in said external image and updating a target position in said deformed representation of said anatomy elastically using said deformation model.
Example 22. The method according to any one of examples 1-21, wherein said updating said deformed model comprises correcting a location of a target in transmitter coordinates using said deformation model by manually adding a constraint to said target.
Example 23. The method according to any one of examples 1-22, wherein said at least one energy function comprises a first energy cost function and a second energy cost function.
Example 24. The method according to any one of examples 1-23, wherein said first energy cost function allocates a suitable energy cost to a corresponding type of movement of a part of said anatomy.
Example 25. The method according to any one of examples 1-24, wherein said second energy cost function allocates a suitable energy cost to each type of deviation of a model representation of said endoscope from a part of said anatomy.
Example 26. The method according to any one of examples 1-25, wherein said amending each of said nodes independently from each other comprises independently amending one or more of a position, a rotation and a stretch.
Example 27. The method according to any one of examples 1-26, wherein one of said one or more anatomical constraints are actual anatomical physical constraints of said anatomy.
Example 28. The method according to example 27, wherein said actual anatomical physical constraints include keeping all or part of peripheral endpoint nodes in their original location relative to the pleura or ribcage.
Example 29. The method according to any one of examples 1-28, wherein said generating a predeformation model is performed preoperatively.
Example 30. The method according to any one of examples 1-29, wherein said generating an initial registration is performed preoperatively and/or intraoperative.
Example 31. The method according to any one of examples 1-30, wherein said acquiring position and/or orientation data is performed intraoperative.
Example 32. The method according to any one of examples 1-31, wherein said generating a breathing model is performed preoperatively.
Example 33. The method according to any one of examples 1-32, further comprising attaching one or more sensors to said patient.
Example 34. The method according to any one of examples 1-33, further comprising monitoring a breathing of a patient using said one or more sensors.
Example 35. The method according to any one of examples 1-34, further comprising monitoring a movement of said patient.
Example 36. The method according to any one of examples 1-35, wherein said analyzing comprises analyzing from information from said monitoring said movement of said patient.
Example 37. The method according to any one of examples 1-36, further comprising displaying said generated deformed representation of said anatomy.
Example 38. The method according to any one of examples 1-37, wherein said providing one or more anatomical constraints comprises manually providing a constraint on said deformed representation of said anatomy.
Example 39. A system for endoscopy monitoring, comprising: a. an endoscope comprising an elongated body and a plurality of sensors positioned along said elongated body; b. a processor connected to a memory and comprising instructions for: i. accessing one or more information; said information comprising:
A. a pre-deformation model based on an acquired image of an anatomy; said pre-deformation model comprising assigned model nodes and model branches;
B. an initial registration of said endoscope in said anatomy; ii. acquiring position and/or orientation data in transmitter coordinates from said plurality of sensors while navigating within said anatomy; iii. analyzing said acquired data in view of said pre-deformation model, said initial registration and one or more provided anatomical constraints on each of said nodes and branches based on at least one energy function; iv. generating a deformed representation of said anatomy based on said analysis; wherein said analyzing comprises deforming said model by amending each of said nodes independently from each other in view of said constraints.
Example 40. The system according to example 39, further comprising a display.
Example 41. The system according to example 39 or example 40, further comprising an imaging module connected to an external imaging device.
Example 42. The system according to any one of examples 39-41, wherein said plurality of sensors are configured to provide one or more of a position, an orientation, a shape and a curve of said endoscope.
Example 43. The system according to any one of examples 39-42, wherein said a position, an orientation, a shape and a curve of said endoscope are represented in transmitter coordinates.
Example 44. A method for endoscopy monitoring, comprising: a. storing information comprising at least a model of an anatomy and position data received from a plurality of sensors or a fiber-optic shape sensor located on an interventional flexible elongated device configured to be inserted into the anatomy; b. extracting constraints from the stored information; c. applying the constraints on a movement model of the anatomy;
d. applying a first and a second cost function on the anatomy model, wherein the first cost function allocates a suitable energy cost to a corresponding type of movement of a part of the anatomy, and the second cost function allocates a suitable energy cost to each type of deviation of a model representation of the elongated device from a part of the anatomy; and e. calculating a deformation of the anatomy model by optimizing movement of the anatomy model based on the cost functions, so that the energy cost is minimized.
Example 45. The method according to example 44, wherein said movement model includes a breathing model which describes parametrized movement of the anatomy due to breathing and by movement of other body parts due to breathing.
Example 46. The method according to example 44 or example 45, wherein said movement model includes adjustments to the shape and position of the anatomy model according to received imaging data.
Example 47. The method according to any one of examples 44-46, wherein said movement model incorporates a motion fading effect, describing changes in time in the movement due to a certain movement cause.
Example 48. The method according to any one of examples 44-47, further comprising receiving imaging data, identifying a discrepancy in the imaging data comparing to the anatomy model, and adjusting the anatomy model to correct the discrepancy, by extracting constraints based on the identified discrepancy and applying the constraints on the movement model.
Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.
As will be appreciated by one skilled in the art, some embodiments of the present invention may be embodied as a system, method or computer program product. Accordingly, some embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, some embodiments of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Implementation of the
method and/or system of some embodiments of the invention can involve performing and/or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of some embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware and/or by a combination thereof, e.g., using an operating system.
For example, hardware for performing selected tasks according to some embodiments of the invention could be implemented as a chip or a circuit. As software, selected tasks according to some embodiments of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In an exemplary embodiment of the invention, one or more tasks according to some exemplary embodiments of method and/or system as described herein are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes a volatile memory for storing instructions and/or data and/or a non-volatile storage, for example, a magnetic hard-disk and/or removable media, for storing instructions and/or data. Optionally, a network connection is provided as well. A display and/or a user input device such as a keyboard or mouse are optionally provided as well.
Any combination of one or more computer readable medium(s) may be utilized for some embodiments of the invention. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can
communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium and/or data used thereby may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for some embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Some embodiments of the present invention may be described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a
computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Some of the methods described herein are generally designed only for use by a computer, and may not be feasible or practical for performing purely manually. A person performing similar tasks manually might be expected to use completely different methods, e.g., making use of expert knowledge and/or the pattern recognition capabilities of the human brain, which would be vastly more efficient than manually going through the steps of the methods described herein.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.
In the drawings:
FIG. 1A is a schematic representation of an exemplary system for endoscopy monitoring, according to some embodiments of the invention;
FIG. 1B is a schematic representation of the modules of the exemplary system, according to some embodiments of the invention;
FIG. 2 is a flowchart of how data from the sensors are converted into data about shape and position of the elongated device, according to some embodiments of the invention;
FIG. 3 is a flowchart of an exemplary general overview of the method for generating a deformed, optionally visual, representation, according to some embodiments of the invention;
FIG. 4 is a flowchart of an exemplary part of the method for generating a deformed visual representation, according to some embodiments of the invention;
FIG. 5 is a flowchart of an exemplary method for updating a deformed visual representation, according to some embodiments of the invention; and
FIG. 6 is a flowchart of an exemplary method for tracking dynamic anatomy deformation, according to some embodiments of the invention.
DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION
The present invention, in some embodiments thereof, relates to a deformation model and, more particularly, but not exclusively, to a deformation model for an elongated endoscopic device.
Overview
An aspect of some embodiments of the invention relates to monitoring a navigation of an elongated device reaching a peripheral target endoscopically. In some embodiments, the elongated device is a steerable catheter that is inserted into a lumen structure (such as the airways in the lung) and guided to a desired location (for example a lesion), either manually or by a guidance system. In some embodiments, optionally, the device is tracked electromagnetically, such that its tip position is known in 3D transmitter coordinates and/or the entire or partial curve of the device is tracked accurately in 3D transmitter coordinates. In some embodiments, the shape tracked device allows visualizing turns along the curve which contributes more information about the location of the tip of the device in the anatomy. In some embodiments, tracking of the shape and the position is performed on the elongated device. In some embodiments, the registration between a preoperative CT or MRI scan (or any other suitable 3D scan) of the patient and the current state of the organ as it lies in space is not a rigid registration; it is a flexible registration that takes into consideration the flexibility and the deformations, optionally also over time (for example, in real time), of the organs, between their state in the preoperative CT scan and the real-time procedure. In some embodiments, changes and/or deformations caused by breathing are taken into consideration when monitoring the device. For example, the deformation and the changes, which occur constantly during the procedure due to the breathing of the patient, in the pathway that the device needs to take to reach the desired location, are taken into consideration when planning and executing the navigation. In some embodiments, a potential advantage of doing this is that it potentially avoids large inaccuracy in lung navigation procedures, where the system can "think" that the catheter tip has reached the desired location, while in reality it may be as far as 2-3 cm from the desired location. In some embodiments, real-time deformations are monitored and compensated for. For example, if the patient moves during the intervention, the system is configured to adapt the model accordingly by dynamically and flexibly incorporating the real-time data into the model.
An aspect of some embodiments of the invention relates to a deformation tracking algorithm configured to maintain a deformable registration between the preoperative CT scan and the breathing and deforming organ. In some embodiments, the deformation tracking is based on a pre-trained deformation model, where a study of how certain organs, such as the lung, deform under certain applied forces serves as pre-trained information and is modeled as costs (energies), and also on a real-time solver which minimizes the costs based on all available information (such as the position and shape of one or more endoscopes, reference sensor locations, external imaging, etc.). In some embodiments, the deformation model uses a skeletal/graph structure (multiple nodes), similar to a finite-element physical simulation, which is pre-trained for the lung (or for any other target organ). In some embodiments, by solving the deformation of the organ in real-time, the system is able to “understand” how the preoperative CT is deformed during the procedure and thus display the accurately curve-tracked device in its true deformed location (inside a lumen/airway).
Known deformation models strongly rely on a hierarchical structure (a tree/skeleton structure) of the nodes and allow only a limited number of parameters per node. They are also hierarchical in the sense that if a certain node is rotated then all or some children nodes are also rotated. This may be true for 3D-printed plastic lung models, which may float freely in space, but in reality the inventors have found that this is not a natural behavior. In some embodiments, the deformation model of the present invention is more general, where each node has its own independent position/rotation/stretch, and the nodes are tied by structural energies (not necessarily just neighboring nodes), such that a tree-like behavior can be achieved using one tuning, and a more realistic behavior (which is not strictly hierarchical, where, for example, the pleura constrains the peripheral airway endpoints from moving while other nodes can move more freely) can be achieved by another tuning. For example, the deformation model of the present invention can include a constraint, described as an energy function, that prefers to keep all or part of the peripheral endpoint nodes in their original location relative to the pleura or ribcage. In some embodiments, the deformation model herein also enables more free breathing (as studied during the initial registration survey), as the segments can stretch and contract. In some embodiments, the deformation model herein also incorporates a breathing model to enable more free breathing which is constrained to realistic breathing according to the breathing model. In some embodiments, the deformation model herein also enables inputting external constraints, such as information from external imaging: external imaging may be used to determine/confirm the position of the endoscope relative to a target lesion. In some embodiments, when the position of the target lesion relative to the endoscope tip appears to be different from what the system “thinks” (based on all its elaborate deformation tracking algorithms), the offset can be inputted to the deformation model engine and the engine corrects itself “deformably” such that the anatomy is then deformed to account for that offset, and the lesion appears “deformably” in its true position relative to the endoscope. Additionally, with other known “external imaging correction” systems, the correction is almost always rigid, such that other anatomical features go “out of sync”. For example, in other EM-based or fiber-optic shape-sensing-based navigation systems which are corrected using external imaging, a correction offset is applied to the navigation system as a rigid correction transform. This correction transform puts the target at its true location relative to the endoscope, as observed by the external imaging, but it does so rigidly. When the user re-navigates to the target, the surrounding anatomical features, such as airways or other lumen structures, can appear at wrong locations, which may impact the performance of re-navigation to the target, to the extent that even if the user is finally able to reach the corrected target, the endoscope location in the anatomy may still not be at the target, which may require additional repeated scans and corrections by external imaging. With the deformation model herein, the external information is natively inputted into the engine in a deformable way, so that everything corrects itself seamlessly and the entire neighborhood of the target is adjusted to its true deformation state.
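By way of non-limiting illustration only, the following Python sketch shows one possible way to express per-node structural energies and a pleura/ribcage constraint energy of the general kind described above; the class and function names (Node, structural_energy, pleura_energy), the quadratic penalties, the weights and the use of numpy are illustrative assumptions rather than a definitive implementation of the deformation model.

```python
import numpy as np

class Node:
    """A single deformation-model node with its own independent pose and stretch."""
    def __init__(self, rest_position, rest_rotation=None, stretch=1.0):
        self.rest_position = np.asarray(rest_position, dtype=float)
        self.position = self.rest_position.copy()            # current 3D position
        self.rotation = np.eye(3) if rest_rotation is None else rest_rotation
        self.stretch = stretch                                # per-node stretch factor

def structural_energy(nodes, ties, k_struct=1.0):
    """Energy tying pairs of nodes (not necessarily neighbors) to their rest offsets."""
    e = 0.0
    for i, j in ties:
        rest_offset = nodes[j].rest_position - nodes[i].rest_position
        cur_offset = nodes[j].position - nodes[i].position
        e += k_struct * np.sum((cur_offset - rest_offset) ** 2)
    return e

def pleura_energy(nodes, peripheral_ids, k_pleura=5.0):
    """Energy preferring that peripheral endpoint nodes stay near their original
    location relative to the pleura/ribcage (approximated here by their rest position)."""
    e = 0.0
    for i in peripheral_ids:
        e += k_pleura * np.sum((nodes[i].position - nodes[i].rest_position) ** 2)
    return e

def total_energy(nodes, ties, peripheral_ids):
    return structural_energy(nodes, ties) + pleura_energy(nodes, peripheral_ids)
```

In such a sketch, a more tree-like behavior or a more pleura-constrained behavior can be obtained simply by tuning the relative weights (k_struct, k_pleura), in line with the "one tuning / another tuning" behavior described above.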
Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.
Introduction
During endoscopy procedures, it is important to perform one or more monitoring actions in order to facilitate reaching the desired location within the patient.
In some embodiments, the monitoring comprises monitoring the endoluminal device itself, for example, the shape and location of the endoluminal device within the lumens of the patient. In some embodiments, this is performed by utilizing a plurality of sensors positioned along the elongated body of the endoluminal device, which are tracked and referenced to an external, optionally fixed, frame of reference (for example, to an EM transmitter coordinate system). In some embodiments, the endoluminal device shape and/or position is localized in the transmitter coordinate system and is registered to the anatomy using a registration algorithm. In some embodiments, the anatomy is registered to the transmitter coordinate system using a registration algorithm.
In some embodiments, the monitoring comprises monitoring the overall anatomy of the patient, for example by using an x-ray imaging device, a CBCT imaging device, a fluoroscopy imaging device, a radial ultrasound probe (REBUS) and/or any other suitable type of ex-vivo or in-vivo imaging device, which are used during the procedure to monitor the advancement of the endoluminal device within the lumens of the patient.
While both monitoring methods mentioned above are useful, neither of them, individually or in combination, provides a satisfactory solution that accounts, in a real-time, continuous and seamlessly flexible manner, for the natural deformations that occur during the procedure due to the natural movement of the patient (for example due to breathing) and/or for the unnatural deformations caused by the movements of the endoluminal device while moving through the lumens of the patient toward the desired location. For example, CBCT, fluoroscopy, REBUS and X-ray allow identifying the deformation of the organ, because the current state of the organ is being visualized; therefore, X-ray-based methods allow, in practice, to “sense” (see) the deformation. A clear disadvantage of X-ray-based navigation is that a user cannot constantly irradiate the patient, so the user is forced to just take a “roadmap” and then use it for navigation. This provides navigation on a frozen map that does not take deformation into consideration and/or does not perform deformation tracking. Another disadvantage is that with standard fluoroscopy the user is deprived of visualizing the airways, so the user cannot actually know where the roadmap is. In these cases, contrast materials could be used to enhance the airways in the image, but this is not trivial and usually it is not performed. Therefore, this leaves the user with image registration-based methods, for example, registering the fluoro image to a virtual fluoro image created from the CT, but then the CT is frozen and does not have deformation tracking, thereby again providing an inaccurate guiding system/method. Another possible solution is to do a full CBCT scan and then navigate on the scan. The CBCT scan does record the deformation, but then the navigation is again performed on a frozen (non-deforming) map.
Therefore, according to some embodiments of the invention, the solution disclosed herein provides the best of both techniques: high-quality deformation tracking that is complemented with information from an external imaging device, for confirmation and/or correction of the deformation tracking, based on the X-ray imaging.
As mentioned above, some methods for monitoring the endoluminal device within the lumens of the patient (for example the lumens of the lung) include providing a hierarchical deformation model of the lung. These hierarchical deformation models have each node of the modeled anatomy dependent on the position and/or orientation of its adjacent nodes. Some of these methods keep the length of the branches of the modeled anatomy constant, which limits the movement of the nodes to three degrees of freedom, or even to two degrees, depending on how the “roll” degree of freedom is handled. Additionally, in some of these methods, deformation of a node may be calculated as dependent on deformation of many parent nodes. These assumptions may cause a slow and inaccurate (non-realistic) deformation calculation.
Additionally, in some of these methods, deformation is computed locally - that is, by essentially bringing the tracked catheter into the nearest branch, which does not necessarily reflect the true state of the anatomy relative to the catheter. Additionally, in some of these methods, the anatomy is not deformed according to realistic constraints, such as pleura or ribcage constraints, which might impact the accuracy of the tracking. Additionally, in some of these methods, the
anatomy does not “breathe” (does not deform according to breathing) in accordance with true anatomical breathing, as recorded in the patient, which might impact the accuracy of the tracking.
The present invention provides a solution that overcomes the deficiencies of the abovementioned monitoring methods by providing a monitoring system with advanced deformation models (referred to hereinafter as the deformation model) and deformation tracking methods (referred to hereinafter as the deformation tracking).
Exemplary temporal division of actions
In some embodiments, the invention comprises performing actions preoperatively and performing actions intraoperatively.
Exemplary preoperative actions
In general, a preoperative CT or MRI scan (or any other suitable preoperative scan) is performed to identify the desired location that requires treatment, and using that same CT, a map of the path that needs to be taken by the endoluminal device is prepared, along with a segmented map of the entire or partial lumen tree (such as the airway tree, in the case of the lung). In some embodiments, some of the CT processed outputs (such as the entire or partial segmented lumen tree, a segmented lesion, etc.) are used by the deformation model and tracking, or the breathing model, or a general, optionally parametrized, movement model, as described below. For example, segmented/classified tissue information can be used by the deformation model/tracking to increase the accuracy of the deformation tracking (as described below).
In some embodiments, as part of the preoperative actions, a breathing model of the patient is prepared, which is then utilized in the movement model 140 (for example breathing model and/or movements performed by the patient during the procedure - see below), as explained herein.
Exemplary intraoperative actions
In some embodiments, exemplary intraoperative actions comprise one or more of monitoring the breathing (for example by using one or more sensors configured for monitoring the rhythm of the breathing and/or the movements of the chest), monitoring changes in the anatomy of the patient (for example by using one or more imaging devices and/or sensors positioned on the patient, for example, on the patient’s chest), monitoring the position and/or shape of the device (for example relative to a transmitter - “transmitter coordinates”), and monitoring the changes in the position and/or deformations of the patient. In some embodiments, monitoring changes in the anatomy of the patient also includes the use of movement model 140, which is configured to amend
the deformation model, for example, in view of sensed movement (for example, breathing) of the patient.
Exemplary system
In order to provide context to the deformation model described herein, a schematic representation of an exemplary system for endoscopy monitoring is shown in Figure 1a and Figure 1b.
Referring now to Figure 1a, showing a schematic representation of an exemplary system 100 for endoscopy monitoring, according to some embodiments of the invention, and also to Figure 1b, showing a schematic representation of the modules of the exemplary system, according to some embodiments of the invention. In some embodiments, the system 100 comprises a processor 102, a memory 104 (comprising a plurality of modules) and a display 106 (optionally showing a model of a target location 146). In some embodiments, optionally, the system 100 comprises an imaging module 108, which may include an external imaging device, for example a three-dimensional imaging device, an x-ray imaging device, a CBCT imaging device, a fluoroscopy imaging device, a radial ultrasound probe (REBUS) and/or any other suitable type of ex-vivo or in-vivo imaging device.
In some embodiments, a potential advantage of the system 100 is that it allows using an external imaging device (connected to imaging module 108) to input more constraints into the general deformation model. For example, a lesion can be “seen” by a CBCT (cone-beam CT) scan at a position which may be different compared to the location of the endoluminal device, from the point of view of the system (based on navigation and deformable registration only). In some embodiments, the system 100 is configured to allow inputting this information into the deformation tracking engine as a constraint, such that the entire deformation model elastically adjusts to reflect this information (and such that nearby features are also deformed accordingly, not just the feature which was seen in the external imaging). Therefore, rather than correcting the location of important anatomical features rigidly, as other systems which rely on external imaging may do, the system is configured to correct the location of important features in a flexible manner using the deformation model.
In some embodiments, an interventional flexible elongated device 110 (also referred to simply as “the device 110” or “the elongated device 110”) may be integrated into system 100 and/or provided separately. In some embodiments, during an interventional procedure, the elongated device 110 is inserted into a patient, for example into the lungs 112 (referred to hereinafter as “anatomy 112” or “the anatomy 112”, which is schematically shown in Figure 1a).
In some embodiments, the flexible elongated device 110 comprises an elongated body, for example an elongated flexible body, for example a catheter or another interventional device which may be inserted, for example, via a catheter during an interventional procedure. In some embodiments, the flexible elongated device 110 comprises a plurality of shape and/or position sensors 114 (optionally also temperature sensors), configured for sensing positions and/or orientations along the flexible elongated device 110.
Referring now to Figure 2, showing a flowchart of how data from the sensors are converted into data about the shape and position of the elongated device, according to some embodiments of the invention. The term “shape and position”, or any other similar term, refers hereinafter to tracking of one or more of a position, an orientation, a shape and a curve of the elongated device. In some embodiments, the sensed positions are transmitted to the processor 102 or to another processor. In some embodiments, the processor 102 is configured for receiving the sensed positions and calculating a shape of the flexible elongated device 110, while positioned inside the anatomy 112. In some embodiments, additionally or alternatively, the shape of the flexible elongated device 110 is calculated by an external processor and the processor 102 receives the calculated shape of the flexible elongated device 110, for example, in EM transmitter coordinates, while positioned inside the anatomy 112. In some embodiments, additionally or alternatively, the flexible elongated device 110 includes a shape sensor, such as an optical fiber shape sensor, in which case the shape of the flexible elongated device 110 is sensed directly. In some embodiments, the shape/position is first computed in transmitter coordinates. In some embodiments, after applying the deformation model, the user is enabled to localize the catheter inside the anatomy, and optionally, to localize the deforming anatomy in transmitter coordinates. In some embodiments, this is based on the deformation model in combination with the tracking, which is a deformable breathing registration between the transmitter coordinates and the anatomy (which may initially be represented in preoperative CT coordinates).
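As a non-limiting illustration of converting sensed positions into a device curve in transmitter coordinates, the following Python sketch densifies an ordered set of sensor positions by linear interpolation; the function name and the interpolation scheme are illustrative assumptions, and an actual system may use splines or a dedicated shape-fitting method.

```python
import numpy as np

def device_curve_from_sensors(sensor_positions, samples_per_segment=10):
    """Convert a sparse set of sensed 3D positions along the device (in transmitter
    coordinates, ordered tip-to-base) into a densely sampled curve approximating
    the device shape."""
    pts = np.asarray(sensor_positions, dtype=float)        # shape (N, 3)
    curve = []
    for a, b in zip(pts[:-1], pts[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            curve.append((1.0 - t) * a + t * b)            # linear interpolation between sensors
    curve.append(pts[-1])
    return np.asarray(curve)                               # shape (M, 3), transmitter coordinates
```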
In some embodiments, a potential advantage of using position sensors is that it potentially allows sensing the shape of the device relative to a certain coordinate system (such as an electromagnetic transmitter's coordinate system), which allows localizing a curve of the flexible elongated device 110 in its true absolute position in space. In contrast, using a fiber-optic shape sensor may only provide the relative shape of the device but might not provide the shape's position in space. In some embodiments, a potential advantage of knowing the position of the flexible elongated device 110 in space is that it potentially increases the accuracy of the applied deformation models, since this information can be incorporated in the minimized deformation energy functions described below. In some embodiments, the added accuracy is achieved due to having the information of the position of the device and not just the shape. Known solutions that utilize fiber-optic shape sensors try to recover the position (for example, by integrating the sensed shape from a known reference point or origin, which may be considered as shape “transmitter coordinates”, or shape “reference coordinates”), but the unknown position always degrades their deformation tracking performance. By recovering the position from a shape sensor, the shape sensor can be thought of as localized in “reference coordinates” or “transmitter coordinates”, similarly to the EM transmitter coordinates described herein, but with greater inaccuracy (due to shape-to-position recovery integration error).
Referring back to Figure la and also to Figure lb, in some embodiments, the memory 104 is configured to store instructions 116, which instruct the processor 102 to perform operations of the methods described herein. In some embodiments, a dynamic deformation 120 is calculated by the processor 102, and optionally a deformed visual representation 122 is generated, from information/data 118 received from components of the system 100 and/or from external systems (not shown), as described in more detail herein (see below more information on exemplary sources of information/data 118).
Exemplary sources of information/data 118
In some embodiments, exemplary sources of the information/data 118 are one or more of the following (an illustrative data-container sketch is provided after the list):
1. A pre-deformation model 124 of anatomy 112 (for example, a preoperative, optionally processed, CT scan or MRI scan or any other suitable scan);
2. An initial registration 128 of the device 110 (optionally pre-navigational, and optionally a deformed registration), based on an initial, optionally unsupervised, survey of the device inside the anatomy and on a first fitting of the deformation model onto the plurality of tracked curves of the device from the initial survey (optionally in combination with a breathing model);
3. Position and/or orientation data 126 from sensors 114 which are collected during the procedure (intraoperative, optionally in transmitter coordinates);
4. Information from the imaging module 108 (intraoperative); for example, the catheter is “seen” located at a certain 3D offset relative to the target lesion, which may be a different location than the location at which the system “thinks” the device is positioned inside the anatomy (or at which the lesion is positioned in transmitter coordinates). Another example can be a movement of the anatomy in relation to a previous image; and
5. A movement model 140 (preoperative breathing and/or intraoperative breathing and movement of patient).
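The following Python sketch, referenced above, merely illustrates how the exemplary sources of information/data 118 listed above might be grouped into a single container passed to the deformation-tracking solver; the field names are explanatory assumptions only and do not limit the embodiments.

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class SolverInputs:
    """Illustrative container for the information/data 118 consumed by the
    deformation-tracking solver; field names are explanatory only."""
    pre_deformation_model: Any                      # processed preoperative CT/MRI model (124)
    initial_registration: Any                       # initial deformable registration (128)
    sensor_poses: list = field(default_factory=list)   # positions/orientations 126, transmitter coords
    external_imaging: Optional[Any] = None          # intraoperative imaging constraints (module 108)
    movement_model: Optional[Any] = None            # breathing / patient movement model (140)
```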
Exemplary pre-deformation model 124 of anatomy 112
In some embodiments, the information/data 118 comprises data from a pre-deformation model 124 of anatomy 112, for example, as mentioned above, a model of the anatomy 112 in a static state (for example in a full inhale state or in a full exhale state), for example before the initialization of the interventional procedure and/or in a predetermined state and/or with a predetermined set of parameters. In some embodiments, the pre-deformation model 124 is generated, for example by the processor 102, based on a static preoperative image, generated by any suitable type of diagnostic imaging scan method, such as, for example, a CT or MRI scan.
Exemplary initial registration 128 of the device 110
In some embodiments, at the beginning of the procedure, the system performs an initial registration 128 of the device 110 to be used as reference for the following monitoring of the device. In some embodiments, the initial registration 128 is performed after the device 110 is inserted into the patient at the beginning of the procedure and before proceeding with the navigation towards the desired location within the anatomy 112.
In some embodiments, the initial registration 128 optionally comprises a visual device representation 130 of the device 110 upon a deformed model 132 of the anatomy 112, showing a correct position of the device 110 within the anatomy 112. In some embodiments, the initial registration 128 is generated, for example, by the processor 102, by receiving position and/or shape data 126 from the sensors 114 and/or the memory 104, for example while the device 110 is inserted into and/or moves within various parts of the anatomy 112 (for example lung airways). In some embodiments, accordingly, adjustments are performed to the virtual position (position and/or orientation) of the device 110. In some embodiments, based on the received data, the pre-deformation model 124 and/or its various parts are adjusted/updated according to the position data 126 and/or according to the shape of the device 110 calculated based on the position data 126. In some embodiments, explanations regarding how the adjustments of the virtual position of the pre-deformation model 124 are performed are described in more detail herein.
In some embodiments, the deformed model 132 comprises positions and/or shapes of model nodes 138 and model branches 140, which correspond to nodes 142 and branches 144 of the anatomy 112, respectively.
In some embodiments, the display 106 is configured for displaying an image of the initial registration 128, for example showing the visual device representation 130 of the device 110 upon the deformed model 132. In some embodiments, the deformed model 132 is a dynamically deformed model, meaning the model is updated periodically.
Exemplary position and/or orientation data 126 from sensors 114 collected during the procedure
In some embodiments, the information/data 118 comprises position and/or orientation data 126 from the sensors 114, collected during the procedure. In some embodiments, this data is represented in transmitter coordinates and is used by the system to modify/update the deformation model.
In some embodiments, the information/data 118 comprises shape data 126 from a shape sensor 114, collected during the procedure. In some embodiments, this data lacks position information (it only represents shape) and position data is recovered from the shape data, for example, using shape integration methods relative to a known reference point. In this case, the integrated position- and shape-tracked sensor can be thought of as being localized in shape “reference coordinates” or shape “transmitter coordinates” and is then used by the system to modify/update the deformation model.
In some embodiments, a shape sensor, such as a fiber-optic shape sensor, can thus be used in any of the methods described herein, and the sensors 114 may refer to any of EM sensors, fiber-optic shape sensors or any other position/shape sensors.
Exemplary information data received from the imaging module 108
In some embodiments, updates are performed based on data received from the imaging module 108, for example, based on movement models 140 of the anatomy 112, as described in more detail herein.
In some embodiments, the information/data 118 includes dynamic in-vivo imaging data 134 and/or dynamic ex-vivo imaging data 136. In some embodiments, the dynamic imaging data 134 includes one or more of an image, ultrasound or other in-vivo imaging data obtained, for example, from a camera 138, which may be installed on the device 110 and/or capture in-vivo image or ultrasound data, for example while the device 110 is inserted into and/or moves within various parts of the anatomy 112. In some embodiments, the dynamic ex-vivo imaging data 136 includes image data obtained from the imaging module 108.
Exemplary movement models 140
In some embodiments, the information/data 118 includes various movement models 140, which may be calculated, generated and/or adjusted for the anatomy 112. In some embodiments, the movement models 140 enable the calculation (for example by the processor 102), optionally dynamically and/or in real time, of movements and/or updated positions of the anatomy 112 in various scenarios. In some embodiments, flexible movement models 140 include models of
movement of the anatomy 112 and/or its various parts, due to voluntary or non-voluntary movements of the body of the patient, various types of movement due to breathing, movement of device 110 within anatomy 112, and/or any other suitable kind of movement. In some embodiments, the movement models 140 comprise at least a first cost function and a second cost function, as described in more detail herein. In some embodiments, the movement models 140 are parametrized. For example, a breathing model can be parametrized using a single parameter, the breathing phase, which determines a stretching and compressing of a lung model based on that single parameter, as described in more detail below.
In some embodiments, an exemplary movement model 140 is a breathing model which describes movement of the anatomy 112 and/or parts of the anatomy 112 due to breathing. For example, in case the anatomy 112 includes the lung anatomy, the shape in space and/or the positions of the airways and nodes of the lungs may be affected by exhalation and inhalation and/or by the diaphragm (i.e. midriff) movement and/or by movement of other body parts due to breathing. In some embodiments, a movement model 140 or a plurality of movement models 140 are generated to describe the various types of movements due to breathing and/or their effect on the shape in space and/or the positions of the airways and nodes of the lungs. In some embodiments, the movement model 140 is configured for and/or comprises instructions for receiving as input in-vivo imaging data 134 and/or ex-vivo imaging data 136, and calculating adjustments to the shape and/or the positions of the model branches (e.g. airways) and nodes, according to the received image data.
In some embodiments, the movement models 140 may incorporate a retreating effect and/or a fading effect, describing changes in time of movements of and/or in the anatomy 112 due to a certain movement trigger and/or cause. For example, the movement models 140 are configured for and/or comprise instructions for describing movement of various parts of the lungs (for example as translated into nodes and branches) during inhalation and/or exhalation, for example due to air flow and/or diaphragm motion, including a retreating effect and/or a fading effect of the motion of the various parts in time, for example towards the end of an inhalation or an exhalation.
Exemplary dynamic deformation 120
In some embodiments, the processor 102 is configured for and/or comprises instructions for dynamically calculating a deformation 120 of the anatomy 112. In some embodiments, the calculation of the dynamic deformation 120 is based on the movement models 140. In some embodiments, the processor 102 is configured for and/or comprises instructions for applying various constraints on the movement models 140 to calculate the dynamic deformation 120.
In some embodiments, the constraints are extracted from the plurality of information/data 118, such as, for example, pre-deformation model 124, position data 126, the initial registration 128, in-vivo imaging data 134, ex-vivo imaging data 136, and/or any other suitable information.
In some embodiments, as explained above, the deformed model 132 comprises positions and/or shapes of model nodes 138 and model branches 140, which correspond to nodes 142 and branches 144 of the anatomy 112, respectively. In some embodiments, a model node 138 may move and rotate in 6 degrees of freedom. In some embodiments, a model node 138 may be subject to at least one cost function, which allocates a suitable energy cost to each type and magnitude of the movement. In some embodiments, moving or rotating a model node 138 with respect to its immediate neighbors, may cost a suitable amount of energy, for example according to the magnitude and/or direction of the movement, and/or according to the type and/or location of the node. In some embodiments, moving or rotating a model node 138 with respect to its original position, may cost a suitable amount of energy, for example according to the magnitude and/or direction of the movement, and/or according to the type and/or location of the node. In some embodiments, moving or rotating a model node 138 with respect to a specific anatomy part or body organ, may cost a suitable amount of energy, for example according to the magnitude and/or direction of the movement, and/or according to the type and/or location of the node.
In some embodiments, as mentioned above, the visual representation 130 of the device 110 comprises a position and/or a shape, for example a full-length or partial shape calculated by the processor 102 based on position data or raw data from the sensors 114, representing a current position and/or a current shape of the device 110. In some embodiments, a model node 138 of the deformed model 132 is subjected to at least a second cost function, which allocates a suitable energy cost to a deviation of the device representation 130 of the device 110 from a corresponding model node 138 and/or from a corresponding model branch 140 of the model 132, for example according to a type of the deviation and/or according to the magnitude and/or direction of the deviation.
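By way of non-limiting illustration, the following Python sketch shows one simple form of the second cost function, penalizing the squared distance of each sampled point of the device representation from the closest point of an assigned model branch; the sampling, the nearest-point measure and the weight k_dev are illustrative assumptions rather than a required formulation.

```python
import numpy as np

def curve_deviation_energy(device_curve, branch_polyline, k_dev=1.0):
    """Illustrative second cost function: penalize the squared distance of each
    sampled device point from the closest point on the assigned model branch."""
    device_curve = np.asarray(device_curve, dtype=float)       # (M, 3) device samples
    branch = np.asarray(branch_polyline, dtype=float)          # (B, 3) branch samples
    e = 0.0
    for p in device_curve:
        d2 = np.min(np.sum((branch - p) ** 2, axis=1))         # nearest branch point
        e += k_dev * d2
    return e
```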
In some embodiments, the device representation 130 of the device 110 optionally comprises representations of multiple simultaneously tracked devices, or, additionally or alternatively, also comprises past representations of the device 110 (for example, historical locations of the device within substantially the same procedure). In some embodiments, a potential advantage of this is that it potentially allows combining past information into the real-time solved deformation model.
For example, in some embodiments, a plurality of curves (present and past curves) are used by the “curve deviation” energy function to find the deformation which best fits all those time curves, according to the deformation model 120. In some embodiments, this provides more
information to the deformation tracking algorithms: instead of trying to find a deformation state which only fits the current position and shape of one or more tracked catheters, the deformation tracking algorithms find a deformation which fits the catheter state at the current moment and/or past moments (for example 3 seconds of history). In some embodiments, a potential advantage of this is that it potentially provides more information for the deformation tracking and may thus avoid overfitting of the deformation model onto the position and/or shape of the tracked catheter, under the assumption that the organ does not dynamically deform too much over a short time period (for example, 3 seconds). In some embodiments, past curves can be weighted differently compared to the current curve. For example, decreasing weights can be assigned to past curves such that the older the curve, the smaller the weight. This accounts for the fact that “old” curves are less reliable compared to present curves, in that they may belong to an older deformed state of the organ, which is not necessarily the current deformed state of the organ.
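The following Python sketch illustrates, under illustrative assumptions (an exponential decay with time constant tau and a simple nearest-point deviation measure), how present and past device curves could be combined with decreasing weights for older curves; the function names and the decay law are not limiting.

```python
import numpy as np

def _deviation(curve, branch):
    """Squared distance of each device point to the nearest point on the branch."""
    curve, branch = np.asarray(curve, float), np.asarray(branch, float)
    return sum(np.min(np.sum((branch - p) ** 2, axis=1)) for p in curve)

def weighted_history_deviation(curve_history, timestamps, branch_polyline, now, tau=3.0):
    """Deviation energy over present and past device curves; weights decay
    exponentially with curve age (time constant tau, e.g. about 3 seconds)."""
    total = 0.0
    for curve, t in zip(curve_history, timestamps):
        weight = np.exp(-(now - t) / tau)          # older curve -> smaller weight
        total += weight * _deviation(curve, branch_polyline)
    return total
```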
In some embodiments, additionally or alternatively, time-based data is used to fit a time-based deformation model onto the time-based data. For example, a deformation model may contain a time-based energy function which assigns a cost for certain motion (over time) of the deformation model. For example, it is known that the organ cannot deform/move very rapidly, so this can be formulated using a time-based temporal structure energy function. In some embodiments, by considering time-curves of catheter data (not just the present curve), a time-based deformation model can be fit such that it fits all the time-based data over time, but also tries to minimize the structural motion (deformation) of the organ over time (as encoded by a time-based energy function). This can potentially improve the accuracy of the deformation tracking.
In addition to the lung tending to keep its structure on the one hand, and the catheter being required to lie fully inside an airway on the other hand, another strong constraint which can be placed on the deformation of each frame is that each frame cannot be too different from the previous one. In some embodiments, the deformation engine is an optimizer and tries its best to reach a minimum of the total energy after each frame. However, it is known that realistically, the lungs cannot deform too much between two frames. The engine may be very fast (for example 60 frames per second), so that the change between subsequent frames must be small and smooth, in order to provide a realistic state. A simple way to quantify this, for example, is a time-based energy:
$$E_{\text{time}}(t) = \sum_{i} \left\lVert T_i(t) - T_i(t - \Delta t) \right\rVert^2$$

or

$$E_{\text{time}}(t) = \frac{1}{\Delta t} \sum_{i} \left\lVert T_i(t) - T_i(t - \Delta t) \right\rVert^2$$

where t is the current time, Δt is the time duration between this frame and the last, and T_i(t) is the transformation of node i at time t.
In some embodiments, the calculated deformation 120 is applied on one or more of: the pre-deformation model 124, the initial registration 128 and a previous deformed visual model 122, to calculate a new deformed model 122' (not shown in Figures 1a-b), e.g., a location, shape and/or orientation of the model 132 and/or its various parts, relative to the device representation 130. In some embodiments, the calculated deformation 120 is translated into the visual representation 130 of the device 110 upon the deformed model 132 of the anatomy 112, which is displayed by the display 106, showing a correct shape and/or position of the anatomy 112, as represented by the deformed model 132, together with a correct shape and/or position of the device 110 relative to the anatomy 112, as represented by the device representation 130.
In some embodiments, the calculation of the dynamic deformation 120, comprises:
1. Receiving in-vivo imaging data 134 and/or ex-vivo imaging data 136.
2. Identifying a discrepancy in the image data, of the relative locations, shapes and/or orientations of the anatomy 112 and the device 110. In some embodiments, identifying a discrepancy is performed based on a target (such as a lesion or other one or more anatomical features) and/or device identified in the image data and the relative location of the corresponding model of the target and/or device location 146.
3. Comparing the relative locations, shapes and/or orientations of the deformed model 132 and the visual representation 130 of device 110.
4. Adjusting the relative locations, shapes and/or orientations of the model 132 to correct the discrepancy, e.g. to reach a state with no such discrepancy. In some embodiments, the adjustment is performed by applying the constraints extracted from the identified discrepancy on the models 140.
The following example is provided to allow a person having skill in the art to understand the invention.
For example, in the imaging, a target (for example a lesion or other anatomical feature) is identified at location (X,Y,Z) in Fluoro/CBCT coordinates. In the case of fluoroscopy, a target lesion may not be seen in standard fluoroscopic imaging, because it may only comprise a small portion of tissue relative to other more significant anatomical features (such as bones, diaphragm, heart, blood vessels, etc.). In this case, fluoroscopic 3D reconstruction methods, such as tomosynthesis, can be used to enhance the fluoroscopic image such that the lesion is seen, by algorithmically combining multiple fluoroscopic images from multiple known angles. In some embodiments, combining multiple fluoroscopic 2D images enables reconstructing a local fluoroscopic 3D volume, in which a target can be identified in 3D Fluoro coordinates (rather than 2D Fluoro image coordinates). In some embodiments, identifying a target in fluoroscopy, or in an external image, as referred to herein, shall therefore also include the case of identifying a target in a 3D reconstructed tomosynthesis volume, or in one or more processed external images. After identifying a target location in Fluoro/CBCT coordinates (for example by marking multiple fluoroscopic 2D images of different known angles, or in a 3D reconstructed tomosynthesis volume), that target location is then registered to transmitter coordinates, based on a Fluoro-to-transmitter registration (for example, using radiopaque markers on the transmitter). At this point, the target location in transmitter coordinates is obtained, but the system, according to its deformable CT-to-transmitter registration, “thinks” that the location of the target is at (X',Y',Z') in transmitter coordinates, which is different from (X,Y,Z). At this point, a constraint is added (manually or, optionally, automatically) to the deformation tracking, to force the target to transmitter coordinates (X,Y,Z), with some weight (compared to the other elastic constraints, which are described using energy cost functions). This “pulls” the target elastically to its true position in transmitter coordinates, as seen by the imaging device.
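As a non-limiting illustration of the above, the following Python sketch expresses the added target constraint as a weighted quadratic energy that elastically “pulls” the modeled target toward its observed transmitter-coordinate location; the function name and the default weight are illustrative assumptions and not a prescribed formulation.

```python
import numpy as np

def target_constraint_energy(target_model_position, target_observed_position, weight=10.0):
    """Weighted quadratic constraint: the further the modeled target is from its
    observed location (X, Y, Z) in transmitter coordinates, as identified in external
    imaging, the higher the cost added to the total deformation energy."""
    delta = (np.asarray(target_model_position, dtype=float)
             - np.asarray(target_observed_position, dtype=float))
    return weight * float(np.dot(delta, delta))
```

The weight in such a sketch plays the role of the constraint weight described above, balancing the external-imaging observation against the other elastic energy terms.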
In some embodiments, similarly, in the case that radiopaque markers are not found on the transmitter, and therefore there is not necessarily a Fluoro-to-transmitter registration, the tip of the device can be identified in Fluoro coordinates. Since the position of the tip is known in transmitter coordinates (as tracked by the position/shape sensors), it is possible to compute a registration between Fluoro and transmitter coordinates and repeat the above. This can be done, for example, by constructing a translation-based Fluoro-to-transmitter registration, by bringing the tip location from its position in Fluoro coordinates to transmitter coordinates.
In some embodiments, it may be known that the Fluoro is aligned with the EM transmitter in some orientation, thereby allowing the orientation of the Fluoro-to-transmitter registration to be known.
In some embodiments, a potential advantage of correcting the discrepancy using a deformable flexible model is that it potentially allows improving the overall accuracy of the deformed anatomical model compared to the anatomy, instead of just providing accuracy at the single point of correction (for example, at a target). In some embodiments, the modeled anatomy is deformed such that it still respects its energy cost functions (which can encode, for example, the original structure of the anatomy, among other constraints) and yet provides accuracy at the marked point of interest (for example, at a target). In some embodiments, this potentially improves the overall accuracy of the guidance system, especially in the case where further navigation is needed in order to reach the target with the device. In some embodiments, improving the overall accuracy increases the probability of successful re-navigation to the target after correction. In this case, since the deformation tracking is constrained to bring the target to its observed location in transmitter coordinates (with a certain constraint weight), but does not force it to stay at the location that was observed by the external imaging, it may account for deformation which may occur during the re-navigation, such that the catheter will be able to reach the target, which may still be breathing and deforming during re-navigation, and thus may not necessarily end up, after re-navigation, at the location initially observed by the external imaging.
Exemplary generation of breathing model as a preoperative action used in the movement model 140
In some embodiments, a breathing movement model includes alternating stretching and compressing of a lung model, for example by moving each model node, to simulate breathing. It is not known if this is possible in previous deformation models, in which model branches keep their lengths, and in which the positions of the model nodes are determined, in some cases, by the rotations of many other “parent” model nodes.
In some embodiments, the breathing model is achieved by using two or more preoperative CT/MRI scans. For example, one scan can be performed in full inhalation and a second scan can be performed in exhalation. In some embodiments, the anatomy can then be modeled according to the two scans. In some embodiments, the deformation model can use the two scans to model the breathing of the anatomy in real-time based on the real-time tracked breathing state of the patient. In some embodiments, the deformation model can use a breathing model that provides ready-to-be-used models of the lungs during the breathing.
In some embodiments, the two CT scans can be processed, and a tree skeletal model may be generated from the two scans. In some embodiments, the two processed trees can be registered such that each node and branch or a partial set of nodes and branches of the tree is identified and matched in the two scans. In some embodiments, the deformation model can then use the two scans to interpolate between the two models based on the breathing state of the patient, which provides ready-to-be-used models of the lungs during the breathing.
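The following Python sketch illustrates one possible interpolation between two matched node sets (for example, derived from the registered inhale and exhale scans) as a function of the breathing state; linear interpolation and the 0-to-1 phase convention are illustrative assumptions, not a required implementation.

```python
import numpy as np

def interpolate_breathing_state(inhale_nodes, exhale_nodes, breathing_phase):
    """Interpolate matched node positions between a full-inhale model and an
    exhale model; breathing_phase is 0.0 at exhale and 1.0 at inhale."""
    inhale = np.asarray(inhale_nodes, dtype=float)   # (N, 3), matched node order
    exhale = np.asarray(exhale_nodes, dtype=float)   # (N, 3), same node order
    phase = float(np.clip(breathing_phase, 0.0, 1.0))
    return (1.0 - phase) * exhale + phase * inhale
```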
In some embodiments, the breathing state of the patient can be computed using any suitable method. For example, in some embodiments, the breathing state of the patient (inhale/exhale or a scale in between) is computed by attaching one or more position/reference sensors to the patient and monitoring the movement of the sensor while the patient breathes. For example, applying a high-pass filter on an up/down or right/left motion of the one or more reference sensors can provide a motion which is indicative of the breathing motion of the patient, from which the breathing state can be computed, as also mentioned below.
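By way of non-limiting illustration, the following Python sketch estimates a breathing state from the up/down motion of a chest reference sensor by removing slow drift with a simple moving-average high-pass filter and normalizing the periodic residual; the sampling rate, filter window and normalization are illustrative assumptions, and any other suitable filtering method may be used.

```python
import numpy as np

def breathing_phase_from_reference(height_samples, fs_hz=20.0, cutoff_s=10.0):
    """Estimate a breathing phase in [0, 1] from the up/down motion of a reference
    sensor attached to the chest of the patient."""
    h = np.asarray(height_samples, dtype=float)
    win = max(1, min(len(h), int(cutoff_s * fs_hz)))
    drift = np.convolve(h, np.ones(win) / win, mode="same")   # slow (non-breathing) component
    resid = h - drift                                          # high-pass residual ~ breathing motion
    lo, hi = resid.min(), resid.max()
    if hi - lo < 1e-9:
        return 0.5                                             # no detectable breathing motion
    return float((resid[-1] - lo) / (hi - lo))                 # ~0 at exhale, ~1 at inhale
```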
In some embodiments, position/reference sensors attached to the patient monitor the movement of the patient during procedure. For example, the registration can be updated by monitoring the movement of the patient, by applying 3D translations and rotations in accordance with the movement of the patient. In the case of a plurality of patient reference sensors, for example, 3 or more reference sensors, the external deformation of the patient can be monitored. For example, in some embodiments the deformation of the body of the patient can be monitored by attaching 3 or more reference sensors to the body of the patient. In some embodiments, by monitoring the positions of the reference sensors, deformation in the relative shape of the sensor can account for deformation of the body of the patient, which can be extrapolated to model deformation of the lung. For example, when the patient coughs, this can be detected by monitoring the shape and movement of the reference sensors and certain deformation can be applied to the lung, using the deformation model. In some embodiments, by attaching 3 or more sensors to the body of the patient and by registering those sensors to anatomical positions on the body of the patient, for example, using the initial registration, those anatomical positions can be tracked during procedure by monitoring the position of the reference sensors. In some embodiments, when those sensors change in position or in relative shape (in structure), it is then known that the anatomical positions changed accordingly in position or in relative shape, and this can be inputted to the lung deformation model as information. For example, a model can be learned which maps between structural change of the body or chest of the patient, as tracked by the anatomical sensors, and deformation of the lung. In some embodiments, the model can be fitted during the initial registration, by matching between structural changes of the external reference sensors and curve states inside the anatomy, as done in the case of breathing. For example, it may be observed that whenever a certain structural change occurs on the external reference sensors, certain motion or transformation is applied to the lung. This, for example, can model the coughing of the patient such that the system will compensate for coughing by parametrizing it using the external reference sensors and applying it on the deformed lung, as may be learned during initial registration according to the deformation model.
In some embodiments, a preoperative scan of the anatomy is performed in full inhalation (IRV) while the guidance procedure may be performed in standard tidal volume breathing. In some embodiments, knowing the ratio between the inflation state of the scan (from which the anatomical model may be generated) and the intraoperative breathing can assist the deformation model in modeling the effect of the breathing. For example, the deformation model can assume that the preoperative scan was performed in 100% inhalation state, while the intraoperative breathing alternates between 20% inflation (at exhale state) to 80% inflation (at inhale state). In some embodiments, the deformation model can then interpolate between two available scans, or between
two breathing states of the anatomy (for example, by stretching and compressing of a lung model, as discussed above).
In some embodiments, the inflation state information is given to the deformation model by the system or by the user.
In some embodiments, as will be further explained below, the inflation state of the preoperative scan as well as the intraoperative breathing pattern are automatically computed by the deformation model. For example, the breathing pattern can be modeled using a small number of parameters such as: preoperative scan inflation (can be assumed to be 100%), intraoperative exhale and inhale inflation ranges. In some embodiments, these parameters can then participate in the energy minimization process (see below regarding “energy minimization process”) of the breathing deformation model such that the overall energy is minimized. In some embodiments, this works under the assumption that the deformation cost energy is minimal for the true breathing and deformation parameters.
Exemplary breathing monitoring as intraoperative actions
In some embodiments, the breathing phase of the patient is tracked in real-time and provided to the deformation model by using an external measurement of the breathing. For example, one or more position sensors are attached to the chest of the patient to track the breathing motion pattern of the patient. For example, periodic height change of the chest of the patient is indicative of the periodic breathing of the patient. In some embodiments, the deformation model is configured for and/or comprises instructions for using the tracked breathing phase to interpolate between inhale and exhale states of the deformation model.
In some embodiments, the initial inhale and exhale states of the deformation model can be learned in an initial deformable registration setup stage of the procedure. While this is technically “not intraoperative”, since it is meant to be performed just before beginning the procedure, it has been incorporated into the “intraoperative” actions.
In some embodiments, in this initial deformable registration setup stage, multiple positions and shapes of the elongated tracked device can be accumulated during a supervised or unsupervised survey performed by the physician inside the anatomy. In some embodiments, the multiple positions and shapes can be used simultaneously in the deformation energy minimization process (see below) to find the initial deformation state of the anatomy. In some embodiments, combined with real-time tracked breathing phase information for each of the sampled positions and shapes of the elongated endoluminal device, the deformation model may then fit two or more different models, for example an inhale model and an exhale model, corresponding to two or more breathing groups of the samples.
In some embodiments, the deformation model is configured for interpolating between these models during procedure.
In some embodiments, additionally, the breathing inflation parameters (as described above) can be solved in the process of the initial deformable registration.
Exemplary methods
Referring now to Figure 3, showing a flowchart of an exemplary general overview of the method for generating a deformed, optionally visual, representation, according to some embodiments of the invention. In some embodiments, a general method for generating a deformed, optionally visual, representation comprises one or more of:
1. Generating a plurality of information/data from one or more of an external source, an internal source and optionally an additional source (302).
2. Providing the plurality of information/data to the processor (304).
3. Analyzing the information/data (306).
4. Generating a deformed, optionally also visual, representation based on said analyzed information/data (308).
Referring now to Figure 4, showing a flowchart of an exemplary part of the method for generating a deformed visual representation, according to some embodiments of the invention. In some embodiments, a part of the method for generating a deformed visual representation comprises one or more of:
1. Acquiring a pre-deformed model of the anatomy (402).
2. Acquiring an initial registration of the device (404).
3. Acquiring an updated registration of the device (406).
4. Applying the updated registration of the device on the pre-deformed model of anatomy (408).
5. Generating a deformed model of the anatomy (410).
Referring now to Figure 5, showing a flowchart of an exemplary method for updating a deformed visual representation, according to some embodiments of the invention. In some embodiments, a method for updating a deformed visual representation comprises one or more of:
1. Receiving in-vivo imaging data 134 and/or ex-vivo imaging data 136 (502).
2. Identifying a discrepancy in the image data, of the relative locations, shapes and/or orientations of the anatomy 112 and optionally the device 110 (504). In some embodiments, it is
not mandatory to have the device inside the patient during the actuation of the external imaging. For example, as mentioned above, a target (for example a lesion) can be identified in Fluoro/CBCT; for example, a target lesion can be seen in a fluoroscopic image by using tomosynthesis methods as mentioned above. The target is then transformed to EM transmitter coordinates (for example by a Fluoro-to-EM registration, for example using radiopaque markers on the transmitter), and then it is compared to where the system “thinks” the target is (in transmitter coordinates) based on its deformation tracking and registration algorithms.
3. Comparing the relative locations, shapes and/or orientations of the deformed model 132 and the representation 130 of device 110 (506).
4. Adjusting the relative locations, shapes and/or orientations of deformed model 132 to correct the discrepancy (508).
Referring now to Figure 6, showing a flowchart of an exemplary method for tracking dynamic anatomy deformation, according to some embodiments of the invention. In some embodiments, a method for tracking dynamic anatomy deformation comprises one or more of:
1. Extracting constraints from the plurality of information (602).
2. Applying the extracted constraints on the movement models (604). In some embodiments, applying the extracted constraints on the models 140 causes movement of the model 132 and/or parts of the model 132, representing parts of the anatomy 112, and/or a change in the position of the representation 130 of the device 110 (optionally also a visual representation) relative to the model 132 and/or parts of the model 132. In some embodiments, applying the extracted constraints causes the position of the visual representation 130 of the device 110 to appear outside of the anatomy 112 as represented in the model 132.
3. Applying a first cost function (606) on model 132. In some embodiments, the first cost function allocates a suitable energy cost to each type and magnitude of movement of each model node 138.
4. Applying a second cost function (608) on model 132. In some embodiments, the second cost function allocates a suitable energy cost to each type and magnitude of deviation of visual representation 130 of the device 110 from a corresponding model node 138 and/or model branch 140 of model 132.
5. Optimizing movement of the model 132 (610) and/or of parts of the model 132 to a new position, based on the cost functions, so that the energy cost is minimized (see the illustrative sketch following this list).
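The following Python sketch, referenced above, illustrates one possible per-frame optimization step in which all cost functions (for example, structural, device-deviation, breathing, temporal and external-imaging constraint energies) are summed and minimized over the stacked node parameters; the use of scipy.optimize.minimize with L-BFGS-B and the flattening of node parameters into a single vector are illustrative assumptions rather than a required implementation.

```python
import numpy as np
from scipy.optimize import minimize  # assumption: scipy is available in this sketch

def solve_frame(x0, energy_terms):
    """One optimization frame: minimize the sum of all energy terms over the
    stacked node parameters x (positions/rotations/stretches flattened into a
    single vector), starting from the previous frame's state x0."""
    def total_energy(x):
        return sum(term(x) for term in energy_terms)     # each term maps x -> scalar cost
    result = minimize(total_energy, np.asarray(x0, dtype=float), method="L-BFGS-B")
    return result.x                                      # deformation state for this frame
```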
In some embodiments, the aforementioned methods are performed by the processor 102 in response to received instructions 116. In some embodiments, the methods are performed periodically and/or upon a change in the information/data 118, and/or upon an event that may potentially cause change in the information/data 118.
Exemplary cost functions
In some embodiments, it may not be trivial to determine the true position of the device inside the anatomy (or viewed alternatively, as the true registration and deformation of the anatomy in EM transmitter coordinates and relative to one or more devices which are tracked in EM transmitter coordinates). In some embodiments, due to deformation of the anatomy, the device may appear to be located in a fork between two possible branches. In some embodiments, a guidance system needs to determine the true position of the device (i.e., in this case, choose between the two optional branch locations). In some embodiments, a deviation cost function may encode the deviation cost between the device and the first branch, but may also possibly address the second branch while computing the deviation. In some embodiments, determining the true assignment of the branch (the device being in the first or in the second branch) is nontrivial. In some embodiments, multiple hypothetical assignments are made between the device and multiple possible branches, for example, in the proximity of the device (but not necessarily considering only the nearest branch). In some embodiments, the device can be assumed to be located inside one of K nearest branches. In some embodiments, under each hypothesis, a deviation cost energy can be computed, and the overall deformation model cost can be computed and minimized (under the assumption that the device is located inside one of the K nearest branches). In some embodiments, after being minimized, since the overall deformation energy includes structural cost of the anatomy as well as divergence from the anatomy cost, it can be assumed that the branch assignment assumption which leads to the minimal final deformation energy is the true assumption, and the deformation model is set to the state of this minimal energy assumption. In some embodiments, this mechanism assists in avoiding local minima in the live dynamic deformation tracking of the model. In some embodiments, multiple asynchronous worker threads can run in parallel and possess a different assignment of the tracked device to any of K possible anatomical branches (for example, K nearest branches). In some embodiments, the worker threads can minimize the energy and test multiple assignment hypotheses in parallel. In some embodiments, the system may then choose the deformation state from the thread that achieved the minimum deformation cost as its current chosen state of the deformed anatomy. In some embodiments, the system operates under the assumption that the true state of deformation is the one which minimizes the total cost of the deformation model
(which as described, may include cost for structural change of the anatomy as well as a cost for the divergence of the device from the anatomical model).
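As an illustration only, the following toy sketch shows the hypothesis-testing pattern described above: each candidate branch is evaluated in its own worker, and the hypothesis with the lowest final cost is kept. The branch geometry, the deviation-only cost standing in for the full deformation energy, and the thread-pool pattern are assumptions for the sketch, not the actual implementation.

```python
# Toy, self-contained sketch of multi-hypothesis branch assignment:
# evaluate a cost under each "device is in branch k" assumption and keep the cheapest.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

device_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.2, 0.0], [2.0, 0.4, 0.0]])  # tracked device curve
branches = {                                                                # candidate centerlines (hypothetical)
    "branch_a": np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]]),
    "branch_b": np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 2.0, 0.0]]),
}

def deviation_cost(points, centerline):
    # sum of squared distances from each device point to the nearest centerline sample
    d = np.linalg.norm(points[:, None, :] - centerline[None, :, :], axis=2)
    return float(np.sum(d.min(axis=1) ** 2))

def evaluate_hypothesis(item):
    name, centerline = item
    # in the real engine this would be a full deformation-energy minimization; here we only
    # evaluate a deviation cost so the sketch stays short and runnable
    return deviation_cost(device_pts, centerline), name

with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(evaluate_hypothesis, branches.items()))
best_cost, best_branch = min(results)
print("chosen branch:", best_branch, "cost:", best_cost)
```

Running each hypothesis independently is what allows the assignments to be tested in parallel and the lowest-energy state to be adopted as the current deformation state.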
In some embodiments, in order to avoid local minima in the dynamic deformation model, the system is configured to find the global minimum of the deformation model in a multi-step solution by using different weightings along the elongated tracked device. In some embodiments, the system may have high confidence in finding the deformation of the proximal anatomy based on the proximal portion of the elongated tracked device. In some embodiments, the system then decreases the weights of the distal portion of the device and minimizes the deformation energy based just on the “proximal device”. In some embodiments, the proximal anatomy would then deform based on the proximal device. However, deformation of the distal (peripheral) anatomy would also be achieved to a certain extent, since the deformation model also possesses structural energy (i.e., the proximal anatomy cannot deform without somewhat deforming the peripheral anatomy). In some embodiments, after completing this step, the peripheral anatomy is assumed to be deformed more correctly than initially. In some embodiments, at this stage, the system increases the weights for the distal portion of the elongated tracked device and repeats the energy minimization process, now penalizing divergence relative to the distal device more strongly. In some embodiments, this causes the distal (peripheral) anatomy to deform more strongly than before, but starting from the deformation state achieved in the previous step, thus reducing the risk of converging to a local minimum of the deformation model. In some embodiments, this mechanism of gradual convergence can consist of two or more steps in which the relative weights along the elongated tracked device are increased towards the distal end of the device, or changed in any other suitable manner. In some embodiments, the weights can grow gradually towards the distal end of the device between convergence steps, but can then be decreased again towards the proximal portion of the device, and then grow again towards the distal part. In some embodiments, the relative weights along the tracked device can be changed randomly between different convergence attempts, under the assumption that the true deformation state of the anatomy is a state which is largely indifferent to different relative weightings along the device. In some embodiments, by changing the relative weights along the elongated tracked device, the probability of convergence to the true global minimum of the deformation cost energy is increased.
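A minimal sketch of such a proximal-to-distal weighting schedule is given below, assuming a simple step-wise ramp; the number of steps, the ramp shape, and the helper name are illustrative assumptions rather than the system's actual schedule.

```python
# Sketch of a coarse-to-fine weighting schedule along the tracked device:
# proximal points keep full weight, distal weights are raised between convergence steps.
import numpy as np

def weight_schedule(n_points, n_steps=3):
    """Yield one weight profile per convergence step; index 0 = proximal end, n_points-1 = distal tip."""
    s = np.linspace(0.0, 1.0, n_points)          # normalized arc-length along the device
    for step in range(1, n_steps + 1):
        frac = step / n_steps
        # points up to the current fraction get full weight; the rest are down-weighted
        yield np.where(s <= frac, 1.0, 0.1)

for step, w in enumerate(weight_schedule(10), start=1):
    print(f"step {step}: {np.round(w, 2)}")
    # each step would re-run the deformation-energy minimization,
    # warm-started from the deformation state reached in the previous step
```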
In some embodiments, one or more assumed true position candidates of the device inside the anatomy are optionally obtained by using artificial intelligence (AI) tools. In some embodiments, the divergence cost energy would then compute the divergence between the elongated device and each of its true/hypothetical assumed positions inside the anatomy, as obtained by the AI, during the overall deformation energy minimization process. In some embodiments, AI
architectures may be configured to estimate one or more true/hypothetical positions of device 110 and/or a shape or a curve of device 110 within the anatomy 112, based on system measurements such as, for example, distances between various sensors or other components and/or relative positions and/or orientations of various sensors or other components of device 110. In some embodiments, the estimation is based on comparison of these device measurements with the local anatomy near where the device is assumed to be located inside the anatomy. In some embodiments, the local anatomy can be processed by the AI in the form of a local preoperative CT scan, an airway segmentation, or any other suitable format which represents the anatomical structures near the device. In some embodiments, the AI can then use the local anatomical information to find one or more true position candidates for the device inside the anatomy. In some embodiments, the AI may be structured as a U-Net which receives the full curve of the device in the form of an input 3D volume into the network, as well as local anatomy information in the form of a corresponding block of a CT scan or an airway segmentation binary volume, and the U-Net may then output a 3D volume representing the most probable location of the device inside the given local anatomy, according to the AI. In some embodiments, the U-Net may output a 3D volume equivalent to the input local anatomy volume in which a pathway inside the anatomy is highlighted, where the elongated device is found to be located with the best probability, according to the AI. Alternatively, any kind of AI or other suitable method can be used to choose K-best anatomical location candidates (for example, branch candidates) which represent where the elongated device may be located inside the anatomy. In some embodiments, such an estimated curve may be used as a constraint in one or more of motion models 140. In some embodiments, the first cost function and the second cost function may then be applied, and/or deformation 120 may be calculated, as described in detail herein above. In some embodiments, the K-best candidates may be used as assumptions for the device-anatomy deviation cost function as part of the overall deformation energy minimization as described above. In some embodiments, an assignment candidate (hypothesis) yielding the best overall deformation energy cost after minimization is then assumed to represent the true location of the device inside the anatomy, thus representing the true deformation of the anatomy (which explains the tracked curve of the device).
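The sketch below illustrates, under stated assumptions, how a tracked device curve and a local airway-segmentation block might be rasterized into a two-channel 3D volume of the kind a U-Net-style network could consume; the grid size, voxel spacing, and all names are hypothetical, and the network itself is omitted.

```python
# Illustrative input preparation for a U-Net-style candidate estimator (network omitted).
import numpy as np

GRID = 32          # voxels per side of the local block (assumed)
SPACING = 1.0      # mm per voxel (assumed)

def rasterize_curve(points_mm, origin_mm):
    """Mark the voxels traversed by a polyline of 3D points (in mm) inside a GRID^3 block."""
    vol = np.zeros((GRID, GRID, GRID), dtype=np.float32)
    idx = np.round((points_mm - origin_mm) / SPACING).astype(int)
    idx = idx[np.all((idx >= 0) & (idx < GRID), axis=1)]   # keep only in-block voxels
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vol

origin = np.zeros(3)
device_curve = np.stack([np.linspace(5, 25, 50)] * 3, axis=1)   # toy diagonal device curve (mm)
airway_mask = np.zeros((GRID, GRID, GRID), dtype=np.float32)
airway_mask[5:27, 5:27, 5:27] = 1.0                             # toy local airway segmentation

net_input = np.stack([rasterize_curve(device_curve, origin), airway_mask])  # shape (2, 32, 32, 32)
# a U-Net would map net_input to a probability volume; thresholding that output would give the
# highlighted pathway / K-best branch candidates fed into the deviation cost function
print(net_input.shape)
```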
As used herein with reference to quantity or value, the term “about” means “within ± 10 % of”.
The terms “comprises”, “comprising”, “includes”, “including”, “has”, “having” and their conjugates mean “including but not limited to”.
The term “consisting of” means “including and limited to”.
The term “consisting essentially of” means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
As used herein, the singular forms “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. For example, the term “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof.
Throughout this application, embodiments of this invention may be presented with reference to a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as “from 1 to 6” should be considered to have specifically disclosed subranges such as “from 1 to 3”, “from 1 to 4”, “from 1 to 5”, “from 2 to 4”, “from 2 to 6”, “from 3 to 6”, etc.; as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
Whenever a numerical range is indicated herein (for example “10-15”, “10 to 15”, or any pair of numbers linked by these or another such range indication), it is meant to include any number (fractional or integral) within the indicated range limits, including the range limits, unless the context clearly dictates otherwise. The phrases “range/ranging/ranges between” a first indicated number and a second indicated number and “range/ranging/ranges from” a first indicated number “to”, “up to”, “until” or “through” (or another such range-indicating term) a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numbers therebetween.
Unless otherwise indicated, numbers used herein and any number ranges based thereon are approximations within the accuracy of reasonable measurement and rounding errors as understood by persons skilled in the art.
It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.
Various embodiments and aspects of the present invention as delineated hereinabove and as claimed in the claims section below find experimental and/or calculated support in the following examples.
EXAMPLES
Reference is now made to the following examples of how the cost functions are used and how they affect the deformation methods, which, together with the above descriptions, illustrate some embodiments of the invention in a non-limiting fashion.
For simplicity, start by assuming no breathing. The preoperative registration is therefore purely rigid, meaning the same translation and rotation acting on each node in the lung. In this state, before inserting a catheter (no external data for deformation), all nodes are considered at rest and the total energy is zero. The initial positions and rotations of the nodes are written as 4 × 4 transformation matrices $T_i^0$, and their deformed transformations are

$$T_i = \Delta T_i \, T_i^0,$$

where $\Delta T_i$ are “small” rigid transformations around the initial ones. The purpose of the deformation engine is to find $\{\Delta T_i\}$. It does this by minimizing some energy which is a sum of many sub-energies,

$$E_{total} = \sum_k E_k,$$

each a function of one or more $\Delta T_i$.
One simple form of energy, which we call the “returning” energy, acts on each node separately and opposes any change. It can be written as

$$E_{ret} = \sum_{i=1}^{N} W_{ret,i} \, f(\Delta T_i),$$

where $N$ is the number of nodes and $W_{ret,i}$ is the weight, which may be different for each node, for example, based on its radius. You can see that this is a one-node energy, since it contains a single sum over the nodes, with no interaction between them - no $\Delta T_i \Delta T_j$ cross-terms. The important part, $f(\Delta T_i)$, can be written as, for example,

$$f(\Delta T_i) = \lVert \Delta T_i - I_{4 \times 4} \rVert_F^2,$$

such that it is always non-negative, and it is exactly zero only when the node is at its initial position and rotation.
This form, however, is not very helpful, since the optimization engine does not actually act on the matrices - they are used mainly for display. The optimizer uses the true degrees of freedom of each node $i$ - either its position and rotation as Euler angles, or its position and rotation as a quaternion:

$$v_i = (x_i, y_i, z_i, \alpha_i, \beta_i, \gamma_i) \quad \text{or} \quad v_i = (x_i, y_i, z_i, q_{0,i}, q_{1,i}, q_{2,i}, q_{3,i}),$$

respectively.

The identity rotation $I_{3 \times 3}$ is equivalent to $\{\alpha = \beta = \gamma = 0\}$ in Euler-angle form and to $\{q_0 = 1, q_1 = q_2 = q_3 = 0\}$ in quaternion form. Again, only in the initial, non-deformed state is this energy equal to zero.
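A minimal numerical sketch of such a per-node returning term, assuming the quaternion parameterization above (and ignoring the q ↔ −q ambiguity, which is immaterial for small deviations), could look as follows; the function name and array layout are illustrative only.

```python
# Sketch of a "returning" energy over per-node degrees of freedom (position offset + quaternion).
import numpy as np

def returning_energy(dofs, weights):
    """dofs: (N, 7) array of (x, y, z, q0, q1, q2, q3) per node, relative to the rest state."""
    trans = dofs[:, :3]                            # displacement from the initial position
    quat = dofs[:, 3:]                             # rotation relative to the initial orientation
    identity_q = np.array([1.0, 0.0, 0.0, 0.0])
    # zero exactly when each node sits at its initial position and rotation
    rot_dev = np.sum((quat / np.linalg.norm(quat, axis=1, keepdims=True) - identity_q) ** 2, axis=1)
    return np.sum(weights * (np.sum(trans ** 2, axis=1) + rot_dev))

dofs_rest = np.tile([0, 0, 0, 1, 0, 0, 0], (4, 1)).astype(float)   # 4 nodes at rest
print(returning_energy(dofs_rest, weights=np.ones(4)))              # 0.0 at rest
```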
If this were the only energy, the lung would forever stay in its non-deformed state, since the energy is already at a minimum. An energy is therefore added which is not at a minimum in the initial (resting) state and tries to push the $\Delta T_i$ away from the identity. The optimizer will then try to find the balance between the two energies such that their sum is at a minimum.
The main one is the catheter energy. The idea is writing a function which depends on $\{\Delta T_i\}$ and on the transformations along the catheter (which are the external data and NOT part of the degrees of freedom in the optimization). At each tracking frame, the deformation engine receives a list of some number $N_{cat}$ of tracked positions and orientations along the catheter(s), $T_{cat,j}$ for $j = 1, \ldots, N_{cat}$. For example, if there are three catheters present, it can use all of them at once to get better results than using only one catheter. In that case,

$$N_{cat} = N_{cat1} + N_{cat2} + N_{cat3},$$

and $N_{cat1}$ is not necessarily equal to $N_{cat2}$ or to $N_{cat3}$.
As these $T_{cat,j}$ are given to the deformation engine, the simplest guess as to where inside the airways a catheter point is, is the closest position in the lung. Since it is known that all catheter points should reside inside an airway, the energy should be at a minimum (or zero) when all $T_{cat,j}$ are inside airways, and should increase the further out of an airway they are. It can be imagined as a string connecting each $T_{cat,j}$ to its closest lung point, with the energy being some positive monotonic function of the lengths of the strings. For example:

$$E_{cat} = \sum_{j=1}^{N_{cat}} W_{cat,j} \, \lVert p_{cat,j} - p_{i(j)} \rVert^2$$

(only taking into account the positions along the catheter), or

$$E_{cat}' = \sum_{j=1}^{N_{cat}} W_{cat,j} \left( \lVert p_{cat,j} - p_{i(j)} \rVert^2 + \lVert R_{cat,j} - R_{i(j)} \rVert_F^2 \right)$$

(also making sure that the nodes around the catheter are aligned with the catheter; this, again, can be written using the true degrees of freedom, similar to $E_{cat}$).
Here node $i(j)$ is the node closest to the position of the $j$th catheter point, and the final catheter energy is a sum over all such per-point energies. Now, if the engine receives a catheter point which is outside the lungs, the energy will rise from zero. This will push the lung nodes away from their rest state in an attempt to decrease the catheter energy $E_{cat}$. This change will, in turn, increase the previous $E_{ret}$, until an equilibrium is reached at the minimum total energy. If there were only $E_{cat}$, the catheter would pull the lungs towards it with no regard to their initial state. $E_{ret}$ is added to allow the lungs to partially oppose changes.
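Purely as an illustration of this $E_{ret}$ / $E_{cat}$ balance, the sketch below minimizes a positions-only version of the total energy with an off-the-shelf optimizer; the node coordinates, weights, and closest-node assignment are toy assumptions, not the system's actual solver.

```python
# Toy positions-only balance between a returning energy and a catheter energy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
nodes0 = rng.uniform(0, 100, size=(50, 3))               # resting node positions (mm)
catheter_pts = nodes0[:5] + np.array([2.0, 0.0, 0.0])    # tracked catheter points, offset from the airway
w_ret, w_cat = 1.0, 10.0                                 # relative weights of the two energies

def total_energy(flat_offsets):
    d = flat_offsets.reshape(nodes0.shape)               # per-node displacement (the "small" transforms)
    nodes = nodes0 + d
    e_ret = w_ret * np.sum(d ** 2)                       # returning energy: opposes any change
    # catheter energy: each catheter point pulls its closest node towards it
    # (the closest-node assignment is recomputed at every evaluation, as a simplification)
    dists = np.linalg.norm(nodes[None, :, :] - catheter_pts[:, None, :], axis=2)
    closest = dists.argmin(axis=1)
    e_cat = w_cat * np.sum(np.linalg.norm(nodes[closest] - catheter_pts, axis=1) ** 2)
    return e_ret + e_cat

res = minimize(total_energy, np.zeros(nodes0.size), method="L-BFGS-B")
print("equilibrium energy:", res.fun)
```

At the minimum, the pulled nodes settle partway between their rest positions and the catheter points, exactly the equilibrium between the two energies described above.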
Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
It is the intent of the applicant(s) that all publications, patents and patent applications referred to in this specification are to be incorporated in their entirety by reference into the specification, as if each individual publication, patent or patent application was specifically and individually noted when referenced that it is to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting. In addition, any priority document(s) of this application is/are hereby incorporated herein by reference in its/their entirety.
Claims
1. A method for monitoring an endoscope within an anatomy of a patient, comprising: a. generating a pre-deformation model based on an acquired image of said anatomy; said generation comprising assigning model nodes and model branches to said pre-deformation model anatomy; b. generating an initial registration of said endoscope in said anatomy; c. acquiring position and/or orientation data in transmitter coordinates from sensors positioned in said endoscope while navigating within said anatomy; d. providing one or more anatomical constraints on each of said nodes and branches based on at least one energy function; e. analyzing said acquired data in view of said pre-deformation model, said initial registration and said one or more anatomical constraints based on said at least one energy function; f. generating a deformed representation of said anatomy based on said analysis by using a deformation model; wherein said analyzing comprises deforming said model by amending each of said nodes independently from each other in view of said constraints.
2. The method according to claim 1, further comprising acquiring an external image.
3. The method according to claim 2, wherein said analyzing comprises analyzing from information from said external image.
4. The method according to claim 1, further comprising generating a breathing model of said anatomy.
5. The method according to claim 4, wherein said generating a breathing model comprises acquiring data of said anatomy in two or more breathing states.
6. The method according to claim 5, further comprising interpolating a plurality of breathing states between said two or more breathing states based on said two or more breathing states.
7. The method according to claim 4, wherein said analyzing comprises analyzing from information from said breathing model.
8. The method according to claim 1, further comprising acquiring said image of said anatomy.
9. The method according to claim 1, wherein said model nodes and model branches correspond to nodes and branches in said anatomy.
10. The method according to claim 1, wherein said assigning nodes and branches to said anatomy is automatic.
11. The method according to claim 1, wherein said initial registration comprises surveying said anatomy with said endoscope.
12. The method according to claim 1, wherein said initial registration comprises generating a representation of said endoscope upon a deformed model of said anatomy showing a correct position of said endoscope within said anatomy.
13. The method according to claim 12, wherein said deformed model comprises positions and/or shapes of said model nodes and said model branches.
14. The method according to claim 1, wherein said initial registration comprises receiving one or more position and/or shape data from sensors located in said endoscope while said endoscope is inserted within various parts of said anatomy.
15. The method according to claim 1, further comprising displaying an image of said initial registration upon a deformed model.
16. The method according to claim 1, wherein said acquiring position and/or orientation data comprises acquiring shape data without position data.
17. The method according to claim 16, further comprising recovering position data from said shape data.
18. The method according to claim 2, wherein said external image comprises one or more of a dynamic in-vivo imaging data and a dynamic ex- vivo imaging data.
19. The method according to claim 18, wherein said dynamic in-vivo imaging is acquired by one or more of a camera, an ultrasound, a fluoroscope, a CBCT (Cone-beam CT), a CT, and an MRI.
20. The method according to claim 3, wherein said analyzing from information from said external image comprises updating said deformed model according to data from said external image.
21. The method according to claim 20, wherein said updating said deformed model comprises identifying a target in said external image and updating a target position in said deformed representation of said anatomy elastically using said deformation model.
22. The method according to claim 20, wherein said updating said deformed model comprises correcting a location of a target in transmitter coordinates using said deformation model by adding a constraint to said target.
23. The method according to claim 1, wherein said at least one energy function comprises a first energy cost function and a second energy cost function.
24. The method according to claim 23, wherein said first energy cost function allocates a suitable energy cost to a corresponding type of movement of a part of said anatomy.
25. The method according to claim 23, wherein said second energy cost function allocates a suitable energy cost to each type of deviation of a model representation of said endoscope from a part of said anatomy.
26. The method according to claim 1, wherein said amending each of said nodes independently from each other comprises independently amending one or more of a position, a rotation and a stretch.
27. The method according to claim 1, wherein one of said one or more anatomical constraints are actual anatomical physical constraints of said anatomy.
28. The method according to claim 27, wherein said actual anatomical physical constraints include keeping all or part of peripheral endpoint nodes in their original location relative to the pleura or ribcage.
29. The method according to claim 1, wherein said generating a pre-deformation model is performed preoperatively.
30. The method according to claim 1, wherein said generating an initial registration is performed preoperatively and/or intraoperatively.
31. The method according to claim 1, wherein said acquiring position and/or orientation data is performed intraoperatively.
32. The method according to claim 4, wherein said generating a breathing model is performed preoperatively.
33. The method according to claim 1, further comprising attaching one or more sensors to said patient.
34. The method according to claim 33, further comprising monitoring a breathing of a patient using said one or more sensors.
35. The method according to claim 1, further comprising monitoring a movement of said patient.
36. The method according to claim 35, wherein said analyzing comprises analyzing from information from said monitoring said movement of said patient.
37. The method according to claim 1, further comprising displaying said generated deformed representation of said anatomy.
38. The method according to claim 1, wherein said providing one or more anatomical constraints comprises providing a constraint on said deformed representation of said anatomy.
39. A system for endoscopy monitoring, comprising: a. an endoscope comprising an elongated body and a plurality of sensors positioned along said elongated body; b. a processor connected to a memory and comprising instructions for: i. accessing one or more information; said information comprising:
A. a pre-deformation model based on an acquired image of an anatomy; said pre-deformation model comprising assigned model nodes and model branches;
B. an initial registration of said endoscope in said anatomy; ii. acquiring position and/or orientation data in transmitter coordinates from said plurality of sensors while navigating within said anatomy; iii. analyzing said acquired data in view of said pre-deformation model, said initial registration and one or more provided anatomical constraints on each of said nodes and branches based on at least one energy function; iv. generating a deformed representation of said anatomy based on said analysis; wherein said analyzing comprises deforming said model by amending each of said nodes independently from each other in view of said constraints.
40. The system according to claim 39, further comprising a display.
41. The system according to claim 39, further comprising an imaging module connected to an external imaging device.
42. The system according to claim 39, wherein said plurality of sensors are configured to provide one or more of a position, an orientation, a shape and a curve of said endoscope.
43. The system according to claim 42, wherein said a position, an orientation, a shape and a curve of said endoscope are represented in transmitter coordinates.
44. A method for endoscopy monitoring, comprising: a. storing information comprising at least a model of an anatomy and position data received from a plurality of sensors or a fiber-optic shape sensor located on an interventional flexible elongated device configured to be inserted into the anatomy; b. extracting constraints from the stored information;
c. applying the constraints on a movement model of the anatomy; d. applying a first and a second cost function on the anatomy model, wherein the first cost function allocates a suitable energy cost to a corresponding type of movement of a part of the anatomy, and the second cost function allocates a suitable energy cost to each type of deviation of a model representation of the elongated device from a part of the anatomy; and e. calculating a deformation of the anatomy model by optimizing movement of the anatomy model based on the cost functions, so that the energy cost is minimized.
45. The method according to claim 44, wherein said movement model includes a breathing model which describes parametrized movement of the anatomy due to breathing and by movement of other body parts due to breathing.
46. The method according to claim 44, wherein said movement model includes adjustments to the shape and position of the anatomy model according to received imaging data.
47. The method according to claim 44, wherein said movement model incorporates a motion fading effect, describing changes in time in the movement due to a certain movement cause.
48. The method according to claim 44, further comprising receiving imaging data, identifying a discrepancy in the imaging data compared to the anatomy model, and adjusting the anatomy model to correct the discrepancy, by extracting constraints based on the identified discrepancy and applying the constraints on the movement model.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP24741466.7A EP4648666A1 (en) | 2023-01-12 | 2024-01-12 | Dynamic anatomy deformation tracking for in-vivo navigation |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363438536P | 2023-01-12 | 2023-01-12 | |
| US63/438,536 | 2023-01-12 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024150235A1 true WO2024150235A1 (en) | 2024-07-18 |
Family
ID=91896522
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IL2024/050048 Ceased WO2024150235A1 (en) | 2023-01-12 | 2024-01-12 | Dynamic anatomy deformation tracking for in-vivo navigation |
Country Status (2)
| Country | Link |
|---|---|
| EP (1) | EP4648666A1 (en) |
| WO (1) | WO2024150235A1 (en) |
Patent Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160073928A1 (en) * | 2003-12-12 | 2016-03-17 | University Of Washington | Catheterscope 3d guidance and interface system |
| US10085671B2 (en) * | 2012-05-14 | 2018-10-02 | Intuitive Surgical Operations, Inc. | Systems and methods for deformation compensation using shape sensing |
| US20200205904A1 (en) * | 2013-12-09 | 2020-07-02 | Intuitive Surgical Operations, Inc. | Systems and methods for device-aware flexible tool registration |
| US10314656B2 (en) * | 2014-02-04 | 2019-06-11 | Intuitive Surgical Operations, Inc. | Systems and methods for non-rigid deformation of tissue for virtual navigation of interventional tools |
| US10653485B2 (en) * | 2014-07-02 | 2020-05-19 | Covidien Lp | System and method of intraluminal navigation using a 3D model |
| US20200000526A1 (en) * | 2017-02-01 | 2020-01-02 | Intuitive Surgical Operations, Inc. | Systems and methods of registration for image-guided procedures |
| US20200179060A1 (en) * | 2018-12-06 | 2020-06-11 | Covidien Lp | Deformable registration of computer-generated airway models to airway trees |
| US20220156925A1 (en) * | 2019-03-14 | 2022-05-19 | Koninklijke Philips N.V. | Dynamic interventional three-dimensional model deformation |
| US20220175468A1 (en) * | 2019-09-09 | 2022-06-09 | Magnisity Ltd. | Magnetic flexible catheter tracking system and method using digital magnetometers |
Also Published As
| Publication number | Publication date |
|---|---|
| EP4648666A1 (en) | 2025-11-19 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24741466; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | WWP | Wipo information: published in national office | Ref document number: 2024741466; Country of ref document: EP |