
WO2024150235A1 - Dynamic anatomical deformation tracking for in vivo navigation - Google Patents


Info

Publication number
WO2024150235A1
WO2024150235A1 (PCT/IL2024/050048)
Authority
WO
WIPO (PCT)
Prior art keywords
model
anatomy
deformation
data
breathing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/IL2024/050048
Other languages
English (en)
Inventor
Ron Barak
Amit Cohen
Benjamin GREENBURG
Eyal KLEIN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Magnisity Ltd
Original Assignee
Magnisity Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Magnisity Ltd filed Critical Magnisity Ltd
Priority to EP24741466.7A (published as EP4648666A1)
Publication of WO2024150235A1

Classifications

    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/267Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the respiratory tract, e.g. laryngoscopes, bronchoscopes
    • A61B1/2676Bronchoscopes
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/20ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00Surgical instruments, devices or methods
    • A61B2017/00681Aspects not otherwise provided for
    • A61B2017/00694Aspects not otherwise provided for with means correcting for movement of or for synchronisation with the body
    • A61B2017/00699Aspects not otherwise provided for with means correcting for movement of or for synchronisation with the body correcting for movement caused by respiration, e.g. by triggering
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101Computer-aided simulation of surgical operations
    • A61B2034/105Modelling of the patient, e.g. for ligaments or bones
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2051Electromagnetic tracking systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B2034/2046Tracking techniques
    • A61B2034/2061Tracking techniques using shape-sensors, e.g. fiber shape sensors with Bragg gratings
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B90/00Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B90/39Markers, e.g. radio-opaque or breast lesions markers
    • A61B2090/3966Radiopaque markers visible in an X-ray image
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
    • A61B5/0037Performing a preliminary scan, e.g. a prescan for identifying a region of interest
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/06Devices, other than using radiation, for detecting or locating foreign bodies ; Determining position of diagnostic devices within or on the body of the patient
    • A61B5/061Determining position of a probe within the body employing means separate from the probe, e.g. sensing internal probe position employing impedance electrodes on the surface of the body
    • A61B5/062Determining position of a probe within the body employing means separate from the probe, e.g. sensing internal probe position employing impedance electrodes on the surface of the body using magnetic field
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/06Devices, other than using radiation, for detecting or locating foreign bodies ; Determining position of diagnostic devices within or on the body of the patient
    • A61B5/065Determining position of the probe employing exclusively positioning means located on or in the probe, e.g. using position sensors arranged on the probe
    • A61B5/066Superposing sensor position on an image of the patient, e.g. obtained by ultrasound or x-ray imaging
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • A61B5/1126Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique
    • A61B5/1128Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb using a particular sensing technique using image analysis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Measuring devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
    • A61B5/113Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb occurring during breathing
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271Specific aspects of physiological measurement analysis
    • A61B5/7285Specific aspects of physiological measurement analysis for synchronizing or triggering a physiological measurement or image acquisition with a physiological event or waveform, e.g. an ECG signal
    • A61B5/7289Retrospective gating, i.e. associating measured signals or images with a physiological event after the actual measurement or image acquisition, e.g. by simultaneously recording an additional physiological signal during the measurement or image acquisition

Definitions

  • the present invention in some embodiments thereof, relates to a deformation model and, more particularly, but not exclusively, to a deformation model for an elongated endoscopic device.
  • Some known systems and methods of endoscopy try to follow or compensate for the anatomy movement during the procedure, for example in order to monitor the endoscopy on a display.
  • Known approaches include: 1. non-robotic navigation guided by single-sensor electromagnetic sensing, optionally with combined fluoroscopy; 2. robotic navigation guided by fiber-optic shape sensing; and 3. robotic navigation guided by single-sensor electromagnetic sensing.
  • Other systems known in the art include fluoroscopy-based systems, CBCT (cone-beam CT) guided systems, and video-registration based bronchoscopy.
  • Some methods for monitoring the probe within the lung include providing a deformation model of the lung.
  • Some deformation models are hierarchical, e.g., each node of the lung depends on the orientation of its adjacent nodes, for example while preserving the lengths of the branches.
  • U.S. Patent No. US10674982B2 discloses a system and method for constructing fluoroscopic-based three-dimensional volumetric data from two-dimensional fluoroscopic images, including a computing device configured to facilitate navigation of a medical device to a target area within a patient and a fluoroscopic imaging device configured to acquire a fluoroscopic video of the target area about a plurality of angles relative to the target area.
  • the computing device is configured to determine a pose of the fluoroscopic imaging device for each frame of the fluoroscopic video and to construct fluoroscopic-based three-dimensional volumetric data of the target area, in which soft tissue objects are visible, using a fast iterative three-dimensional construction algorithm.
  • U.S. Patent No. US11341692B2 discloses a system for facilitating identification and marking of a target in a fluoroscopic image of a body region of a patient, the system comprising: one or more storage devices having stored thereon instructions for receiving a CT scan and a fluoroscopic 3D reconstruction of the body region of the patient, wherein the CT scan includes a marking of the target, and generating at least one virtual fluoroscopy image based on the CT scan of the patient, wherein the virtual fluoroscopy image includes the target and the marking of the target; at least one hardware processor configured to execute these instructions; and a display configured to display to a user the virtual fluoroscopy image and the fluoroscopic 3D reconstruction.
  • U.S. Patent No. US10653485B2 discloses a method for implementing a dynamic three-dimensional lung map view for navigating a probe inside a patient's lungs, which includes loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images; inserting the probe into a patient's airways; registering a sensed location of the probe with the planned pathway; selecting a target in the navigation plan; presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe; navigating the probe through the airways of the patient's lungs toward the target; iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe; and updating the presented view by removing at least a part of an object forming part of the 3D model.
  • U.S. Patent No. US11547377B2 discloses a system and method for navigating to a target using fluoroscopic-based three-dimensional volumetric data generated from two-dimensional fluoroscopic images, including a catheter guide assembly including a sensor, an electromagnetic field generator, a fluoroscopic imaging device to acquire a fluoroscopic video of a target area about a plurality of angles relative to the target area, and a computing device.
  • the computing device is configured to receive previously acquired CT data, determine the location of the sensor based on the electromagnetic field generated by the electromagnetic field generator, generate a three-dimensional rendering of the target area based on the acquired fluoroscopic video, receive a selection of the catheter guide assembly in the generated three-dimensional rendering, and register the generated three-dimensional rendering of the target area with the previously acquired CT data to correct the position of the catheter guide assembly.
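The hierarchical, branch-length-preserving deformation model mentioned above (each node depending on the orientation of its adjacent nodes) can be sketched as a forward-kinematics update over a toy 2-D airway tree. This is an illustrative reading, not the patent's algorithm; the 2-D simplification and all names are assumptions.

```python
import math

# Illustrative sketch only (not the patent's algorithm): a 2-D airway
# tree stored as per-branch (length, angle). Node positions are
# recovered by forward kinematics, so any angular deformation
# automatically preserves branch lengths, mirroring a hierarchical
# model where each node follows its adjacent (parent) node.

def node_positions(parents, lengths, angles):
    """parents[i] is the parent of node i (node 0 is the root at the
    origin, and parents precede children); lengths[i] and angles[i]
    describe the branch entering node i, with world-frame angles."""
    pos = {0: (0.0, 0.0)}
    for i in range(1, len(parents)):
        px, py = pos[parents[i]]
        pos[i] = (px + lengths[i] * math.cos(angles[i]),
                  py + lengths[i] * math.sin(angles[i]))
    return pos

def rotate_subtree(parents, angles, node, delta):
    """Deform the tree by rotating the branch entering `node` (and
    every descendant branch) by `delta` radians; lengths are untouched,
    so the deformation is length-preserving by construction."""
    new_angles = list(angles)
    affected = {node}
    for i in range(node, len(parents)):
        if i == node or parents[i] in affected:
            affected.add(i)
            new_angles[i] = angles[i] + delta
    return new_angles
```

Rotating an interior node re-orients its whole subtree while every parent-child distance stays equal to its branch length, which is the hierarchical behavior described above.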
  • Example 2 The method according to example 1, further comprising acquiring an external image.
  • Example 3 The method according to example 1 or example 2, wherein said analyzing comprises analyzing from information from said external image.
  • Example 4 The method according to any one of examples 1-3, further comprising generating a breathing model of said anatomy.
  • Example 5 The method according to any one of examples 1-4, wherein said generating a breathing model comprises acquiring data of said anatomy in two or more breathing states.
  • Example 6 The method according to any one of examples 1-5, further comprising interpolating a plurality of breathing states between said two or more breathing states based on said two or more breathing states.
  • Example 7 The method according to any one of examples 1-6, wherein said analyzing comprises analyzing from information from said breathing model.
  • Example 8 The method according to any one of examples 1-7, further comprising acquiring said image of said anatomy.
  • Example 9 The method according to any one of examples 1-8, wherein said model nodes and model branches correspond to nodes and branches in said anatomy.
  • Example 10 The method according to any one of examples 1-9, wherein said assigning nodes and branches to said anatomy is automatic.
  • Example 11 The method according to any one of examples 1-10, wherein said initial registration comprises surveying said anatomy with said endoscope.
  • Example 12 The method according to any one of examples 1-11, wherein said initial registration comprises generating a representation of said endoscope upon a deformed model of said anatomy showing a correct position of said endoscope within said anatomy.
  • Example 13 The method according to any one of examples 1-12, wherein said deformed model comprises positions and/or shapes of said model nodes and said model branches.
  • Example 15 The method according to any one of examples 1-14, further comprising displaying an image of said initial registration upon a deformed model.
  • Example 17 The method according to any one of examples 1-16, further comprising recovering position data from said shape data.
  • Example 18 The method according to any one of examples 1-17, wherein said external image comprises one or more of dynamic in-vivo imaging data and dynamic ex-vivo imaging data.
  • Example 19 The method according to any one of examples 1-18, wherein said dynamic in-vivo imaging is acquired by one or more of a camera, an ultrasound, a fluoroscope, a CBCT (cone-beam CT), a CT, and an MRI.
  • Example 20 The method according to any one of examples 1-19, wherein said analyzing from information from said external image comprises updating said deformed model according to data from said external image.
  • Example 21 The method according to any one of examples 1-20, wherein said updating said deformed model comprises identifying a target in said external image and updating a target position in said deformed representation of said anatomy elastically using said deformation model.
  • Example 22 The method according to any one of examples 1-21, wherein said updating said deformed model comprises correcting a location of a target in transmitter coordinates using said deformation model by manually adding a constraint to said target.
  • Example 23 The method according to any one of examples 1-22, wherein said at least one energy function comprises a first energy cost function and a second energy cost function.
  • Example 24 The method according to any one of examples 1-23, wherein said first energy cost function allocates a suitable energy cost to a corresponding type of movement of a part of said anatomy.
  • Example 25 The method according to any one of examples 1-24, wherein said second energy cost function allocates a suitable energy cost to each type of deviation of a model representation of said endoscope from a part of said anatomy.
  • Example 26 The method according to any one of examples 1-25, wherein said amending each of said nodes independently from each other comprises independently amending one or more of a position, a rotation and a stretch.
  • Example 27 The method according to any one of examples 1-26, wherein one of said one or more anatomical constraints are actual anatomical physical constraints of said anatomy.
  • Example 29 The method according to any one of examples 1-28, wherein said generating a predeformation model is performed preoperatively.
  • Example 30 The method according to any one of examples 1-29, wherein said generating an initial registration is performed preoperatively and/or intraoperatively.
  • Example 31 The method according to any one of examples 1-30, wherein said acquiring position and/or orientation data is performed intraoperatively.
  • Example 32 The method according to any one of examples 1-31, wherein said generating a breathing model is performed preoperatively.
  • Example 33 The method according to any one of examples 1-32, further comprising attaching one or more sensors to said patient.
  • Example 34 The method according to any one of examples 1-33, further comprising monitoring a breathing of a patient using said one or more sensors.
  • Example 35 The method according to any one of examples 1-34, further comprising monitoring a movement of said patient.
  • Example 36 The method according to any one of examples 1-35, wherein said analyzing comprises analyzing from information from said monitoring said movement of said patient.
  • Example 37 The method according to any one of examples 1-36, further comprising displaying said generated deformed representation of said anatomy.
  • Example 39 A system for endoscopy monitoring, comprising: a. an endoscope comprising an elongated body and a plurality of sensors positioned along said elongated body; b. a processor connected to a memory and comprising instructions for: i. accessing one or more information; said information comprising:
  • Example 40 The system according to example 39, further comprising a display.
  • Example 41 The system according to example 39 or example 40, further comprising an imaging module connected to an external imaging device.
  • Example 42 The system according to any one of examples 39-41, wherein said plurality of sensors are configured to provide one or more of a position, an orientation, a shape and a curve of said endoscope.
  • Example 43 The system according to any one of examples 39-42, wherein said position, orientation, shape and curve of said endoscope are represented in transmitter coordinates.
  • Example 44 A method for endoscopy monitoring, comprising: a. storing information comprising at least a model of an anatomy and position data received from a plurality of sensors or a fiber-optic shape sensor located on an interventional flexible elongated device configured to be inserted into the anatomy; b. extracting constraints from the stored information; c. applying the constraints on a movement model of the anatomy; d. applying a first and a second cost function on the anatomy model, wherein the first cost function allocates a suitable energy cost to a corresponding type of movement of a part of the anatomy, and the second cost function allocates a suitable energy cost to each type of deviation of a model representation of the elongated device from a part of the anatomy; and e. calculating a deformation of the anatomy model by optimizing movement of the anatomy model based on the cost functions, so that the energy cost is minimized.
  • Example 46 The method according to example 44 or example 45, wherein said movement model includes adjustments to the shape and position of the anatomy model according to received imaging data.
  • Example 47 The method according to any one of examples 44-46, wherein said movement model incorporates a motion fading effect, describing changes over time in the movement due to a certain movement cause.
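The two-cost-function optimization of Example 44 can be illustrated with a minimal 1-D sketch: a first cost penalizes each type of anatomy movement away from rest, a second penalizes deviation of the device model from measured sensor positions, and gradient descent minimizes their sum. The quadratic costs, per-node weights, and step size are invented for clarity and are not taken from the patent.

```python
# Minimal 1-D sketch of the two-cost optimization in Example 44; the
# quadratic costs, weights, and step size are assumptions for clarity.

def total_cost(x, rest, stiffness, measured, match_weight):
    """First cost: energy of moving each anatomy node away from its
    rest position, weighted per node by how costly that type of
    movement is. Second cost: deviation of the model representation of
    the elongated device from the measured sensor positions."""
    move = sum(k * (xi - r) ** 2 for xi, r, k in zip(x, rest, stiffness))
    dev = sum(match_weight * (xi - m) ** 2 for xi, m in zip(x, measured))
    return move + dev

def solve_deformation(rest, stiffness, measured, match_weight,
                      steps=500, lr=0.05):
    """Gradient descent on the summed costs so that, as in step (e) of
    Example 44, the total energy cost is minimized."""
    x = list(rest)
    for _ in range(steps):
        x = [xi - lr * (2.0 * k * (xi - r) + 2.0 * match_weight * (xi - m))
             for xi, r, k, m in zip(x, rest, stiffness, measured)]
    return x
```

For these quadratic costs the minimizer of each coordinate is the weighted average (k·rest + w·measured)/(k + w): stiff parts of the anatomy stay near their rest positions while compliant parts follow the device measurements.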
  • some embodiments of the present invention may be embodied as a system, method or computer program product. Accordingly, some embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, some embodiments of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon. Implementation of the method and/or system of some embodiments of the invention can involve performing and/or completing selected tasks manually, automatically, or a combination thereof. Moreover, according to actual instrumentation and equipment of some embodiments of the method and/or system of the invention, several selected tasks could be implemented by hardware, by software or by firmware and/or by a combination thereof, e.g., using an operating system.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium and/or data used thereby may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for some embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • Some of the methods described herein are generally designed only for use by a computer, and may not be feasible or practical for performing purely manually. A human performing similar tasks manually might be expected to use completely different methods, e.g., making use of expert knowledge and/or the pattern recognition capabilities of the human brain, which would be vastly more efficient than manually going through the steps of the methods described herein.
  • FIG. 1A is a schematic representation of an exemplary system for endoscopy monitoring, according to some embodiments of invention.
  • FIG. 1B is a schematic representation of the modules of the exemplary system, according to some embodiments of the invention.
  • FIG. 2 is a flowchart of how data from the sensors are converted into data about the shape and position of the elongated device, according to some embodiments of the invention.
  • FIG. 3 is a flowchart of an exemplary general overview of the method for generating a deformed, optionally visual, representation, according to some embodiments of the invention.
  • FIG. 4 is a flowchart of an exemplary part of the method for generating a deformed visual representation, according to some embodiments of the invention.
  • FIG. 5 is a flowchart of an exemplary method for updating a deformed visual representation, according to some embodiments of the invention.
  • FIG. 6 is a flowchart of an exemplary method for tracking dynamic anatomy deformation, according to some embodiments of the invention.
  • the present invention, in some embodiments thereof, relates to a deformation model and, more particularly, but not exclusively, to a deformation model for an elongated endoscopic device.
  • an aspect of some embodiments of the invention relates to monitoring a navigation of an elongated device reaching a peripheral target endoscopically.
  • the elongated device is a steerable catheter that is inserted into a lumen structure (such as the airways in the lung) and guided to a desired location (for example a lesion), either manually or by a guidance system.
  • the device is tracked electromagnetically, such that its tip position is known in 3D transmitter coordinates and/or the entire or partial curve of the device is tracked accurately in 3D transmitter coordinates.
  • the shape tracked device allows visualizing turns along the curve which contributes more information about the location of the tip of the device in the anatomy.
  • the deformation and the changes, which occur constantly during the procedure due to the breathing of the patient, in the pathway that the device needs to take to reach the desired location are taken under consideration when planning and executing the navigation.
  • a potential advantage of doing this is that it potentially avoids large inaccuracies in lung navigation procedures, where the system can otherwise "think" that the catheter tip has reached the desired location, while in reality it may be as far as 2-3 cm from the desired location.
  • real-time deformations are monitored and compensated for. For example, if the patient moves during the intervention, the system is configured to adapt the model accordingly by dynamically and flexibly incorporating the real-time data into the model.
  • An aspect of some embodiments of the invention relates to a deformation tracking algorithm configured to maintain a deformable registration between the preoperative CT scan and the breathing and deforming organ.
  • the deformation tracking is based on a pre-trained deformation model - where a study of how certain organs, such as the lung, deform under certain applied forces serves as pre-trained information and is modeled as costs (energies) - and also on a real-time solver which minimizes the costs based on all available information (such as one or more endoscope positions and shapes, reference sensor locations, external imaging, etc.).
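As a toy illustration of such a cost-minimizing solver (the state layout, the two example energy terms, the weights, and the plain gradient-descent loop below are all illustrative assumptions, not the patented model), a deformation state can be driven toward the minimum of a weighted sum of a structural prior and a measurement term:

```python
import numpy as np

# Toy state: two nodes' 3D translations flattened into one vector.
rest = np.zeros(6)                      # rest (preoperative CT) configuration
sensed_tip = np.array([1.0, 0.0, 0.0])  # tracked tip position (transmitter coords)
w_struct, w_data = 1.0, 10.0            # relative weights of the energy terms

def energy_gradient(state):
    # gradient of: w_struct*||state - rest||^2 + w_data*||state[:3] - sensed_tip||^2
    g = 2.0 * w_struct * (state - rest)
    g[:3] += 2.0 * w_data * (state[:3] - sensed_tip)
    return g

# A few gradient-descent steps stand in for the real-time solver.
state = rest.copy()
for _ in range(2000):
    state -= 0.01 * energy_gradient(state)
# node 0 settles between its rest position and the sensed tip,
# biased toward the measurement by the larger data weight
```

A heavier data weight pulls the tracked node closer to the measurement, while the structural term keeps the rest of the model near its prior configuration.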
  • the deformation model uses a skeletal / graph structure (multiple nodes), similar to finite elements physical simulation which is pre-trained for the lung (or for any other target organ).
  • by solving the deformation of the organ in real-time, the system is able to “understand” how the preoperative CT is deformed during the procedure and thus display the accurately curve-tracked device in its true deformed location (inside a lumen/airway).
  • in some known approaches, the deformation model strongly relies on a hierarchical structure (a tree/skeleton structure) of the nodes and allows only a limited number of parameters per node. Such a model is also hierarchical in the sense that if a certain node is rotated then all or some of its child nodes are also rotated. This may be true for 3D-printed plastic lung models, which may float freely in space, but in reality the inventors have found that this is not a natural behavior.
  • the deformation model of the present invention is more general, where each node has its own independent position/rotation/stretch, and the nodes are tied by structural energies (not necessarily just neighboring nodes), such that a tree-like behavior can be achieved using one tuning, and a more realistic behavior (which is not strictly hierarchical, where the pleura for example constrains the peripheral airway endpoints from moving while other nodes can move more freely) can be achieved by another tuning.
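As a sketch of this per-node parameterization (the field names and representation choices here are illustrative assumptions, not the patent's data layout), each node can carry its own independent state, with structural energies defined over arbitrary pairs or groups of nodes rather than a strict parent-child hierarchy:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class NodeState:
    """Independent state of one deformation-model node (illustrative only)."""
    position: np.ndarray        # 3D translation of the node
    rotation: np.ndarray        # local rotation, e.g. axis-angle (3 parameters)
    stretch: float = 1.0        # axial stretch factor of the incident segment

# Each node moves independently; structural energies (defined elsewhere)
# tie nodes together, not a parent-to-child transform chain.
node = NodeState(position=np.zeros(3), rotation=np.zeros(3))
```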
  • the deformation model of the present invention can include a constraint, which is described as an energy function, to prefer to keep all or part of the peripheral endpoint nodes in their original location relative to the pleura or ribcage.
  • the deformation model herein also enables more free breathing (as studied during the initial registration survey) as the segments can stretch and contract.
  • the deformation model herein also incorporates a breathing model, so that free breathing is constrained to realistic breathing according to the breathing model.
  • the deformation model herein also enables inputting external constraints, such as information from external imaging: external imaging may be used to determine/confirm the position of the endoscope relative to a target lesion.
  • the offset can be inputted to the deformation model engine and the engine corrects itself “deformably” such that the anatomy is then deformed to account for that offset, and the lesion appears “deformably” in its true position relative to the endoscope.
  • in such systems, the correction is almost always rigid, such that other anatomical features go “out of sync”. For example, in other EM-based or fiber-optics shape-sensing based navigation systems which are corrected using external imaging, a correction offset is applied to the navigation system as a rigid correction transform.
  • This correction transform puts the target at its true location relative to the endoscope, as observed by the external imaging, but it does so rigidly.
  • the surrounding anatomical features such as airways or other lumen structure can appear at wrong locations, which may impact the performance of renavigation to the target, to the extent that even if the user is finally able to reach the corrected target, the endoscope location in the anatomy may still not be at the target which may require additional repetitive scans and correction by external imaging.
  • in the deformation model herein, the external information is natively input into the engine in a deformable way, so that everything corrects itself seamlessly and the entire neighborhood of the target is adjusted to its true deformation state.
  • the monitoring comprises monitoring the endoluminal device itself, for example, the shape and location of the endoluminal device within the lumens of the patient. In some embodiments, this is performed by utilizing a plurality of sensors positioned along the elongated body of the endoluminal device, which are tracked and referenced to an external, optionally fixed, frame of reference (for example, to an EM transmitter coordinate system).
  • the endoluminal device shape and/or position is localized in transmitter coordinate system and is registered to the anatomy using a registration algorithm. In some embodiments, the anatomy is registered to transmitter coordinate system using a registration algorithm.
  • the monitoring comprises monitoring the overall anatomy of the patient, for example by using an x-ray imaging device, a CBCT imaging device, a fluoroscopy imaging device, a radial ultrasound probe (REBUS) and/or any other suitable type of ex-vivo or in-vivo imaging device, which are used during the procedure to monitor the advancement of the endoluminal device within the lumens of the patient.
  • a clear disadvantage of X-ray based navigation is that a user cannot constantly radiate the patient, so the user is forced to just take a “roadmap” and then use it for navigation. This provides navigation on a frozen map that does not take deformation into consideration and/or does not perform deformation tracking.
  • Another disadvantage is that with standard fluoroscopy the user is deprived of visualizing the airways, so the user cannot actually know where the roadmap is. In these cases, contrast materials could be used to enhance the airways in the image, but this is not trivial and usually it is not performed.
  • the solution disclosed herein provides the best of both techniques, a high-quality deformation tracking that is complemented with information from an external imaging device, for confirmation and/or correction of the deformation tracking, based on the X-ray imaging.
  • some methods for monitoring the endoluminal device within the lumens of the patient include providing a hierarchical deformation model of the lung.
  • These hierarchical deformation models make each node of the modeled anatomy dependent on the position and/or orientation of its adjacent nodes. Some of these methods keep the length of the branches of the modeled anatomy constant, which limits the movement of the nodes to three degrees of freedom, and optionally even to two degrees, depending on how the “roll” degree of freedom is handled. Additionally, in some of these methods, deformation of a node may be calculated as dependent on the deformation of many parent nodes. These assumptions may cause a slow and inaccurate (non-realistic) deformation calculation.
  • in some of these methods, deformation is computed locally - that is, by essentially bringing the tracked catheter into the nearest branch, which does not necessarily reflect the true state of the anatomy relative to the catheter. Additionally, in some of these methods, the anatomy is not deformed according to realistic constraints, such as pleura or ribcage constraints, which might impact the accuracy of the tracking. Additionally, in some of these methods, the anatomy does not “breathe” (does not deform according to breathing) in accordance with true anatomical breathing, as recorded in the patient, which might impact the accuracy of the tracking.
  • the present invention provides a solution that overcomes the deficiencies of the abovementioned monitoring methods by providing a monitoring system with advanced deformation models (referred hereinafter as the deformation model) and deformation tracking methods (referred hereinafter as the deformation tracking).
  • the invention comprises performing actions preoperatively and performing actions intraoperatively.
  • a preoperative CT or MRI (or any other suitable preoperative scan) is performed to identify the desired location that requires treatment, and using that same CT, a map of the path that needs to be taken by the endoluminal device is prepared, along with a segmented map of the entire or partial lumen tree (such as airway tree, in the case of lung).
  • some of the CT processed outputs (such as the entire or partial segmented lumen tree, a segmented lesion etc.) are used by the deformation model and tracking, or breathing model, or general, optionally parametrized, movement model, as described below.
  • segmented/classified tissue information can be used by the deformation model/tracking to increase the accuracy of the deformation tracking (as described below).
  • a breathing model of the patient is prepared, which is then utilized in the movement model 140 (for example breathing model and/or movements performed by the patient during the procedure - see below), as explained herein.
  • exemplary intraoperative actions comprise one or more of monitoring the breathing (for example by using one or more sensors configured for monitoring the rhythm of the breathing and/or the movements of the chest), monitoring changes in the anatomy of the patient (for example by using one or more imaging devices and/or sensors positioned on the patient, for example, on the patient’s chest), monitoring the position and/or shape of the device (for example relative to a transmitter - “transmitter coordinates”), and monitoring the changes in the position and/or deformations of the patient.
  • monitoring changes in the anatomy of the patient also includes the use of movement model 140, which is configured to amend the deformation model, for example, in view of sensed movement (for example, breathing) of the patient.
  • the system 100 comprises a processor 102, a memory 104 (comprising a plurality of modules) and a display 106 (optionally showing a model of a target location 146).
  • the system 100 comprises an imaging module 108, which may include an external imaging device, for example a three-dimensional imaging device, an x-ray imaging device, a CBCT imaging device, a fluoroscopy imaging device, a radial ultrasound probe (REBUS) and/or any other suitable type of ex-vivo or in-vivo imaging device.
  • a potential advantage of the system 100 is that it allows using an external imaging device (connected to imaging module 108) to input more constraints into the general deformation model.
  • a lesion can be “seen” by a CBCT (cone-beam CT) scan at a position which may be different compared to the location of the endoluminal device, from the point of view of the system (based on navigation and deformable registration only).
  • the system 100 is configured to allow inputting this information into the deformation tracking engine as a constraint, such that the entire deformation model elastically adjusts to reflect this information (and such that nearby features are also deformed accordingly, not just the feature which was seen in the external imaging). Therefore, rather than correcting the location of important anatomical features rigidly, as other systems which rely on external imaging may do, the system is configured to correct the location of important features in a flexible manner using the deformation model.
  • an interventional flexible elongated device 110 may be integrated into system 100 and/or provided separately.
  • the elongated device 110 is inserted into a patient, for example the lungs 112 (referred hereinafter as “anatomy 112” or “the anatomy 112”, which is schematically shown in Figure 1A).
  • the flexible elongated device 110 comprises an elongated body, for example an elongated flexible body, for example a catheter or another interventional device which may be inserted, for example, via a catheter during an interventional procedure.
  • the flexible elongated device 110 comprises a plurality of shape and/or position sensors 114 (optionally also temperature sensors), configured for sensing positions and/or orientations along the flexible elongated device 110.
  • “shape and position” refers hereinafter to tracking of one or more of a position, an orientation, a shape and a curve of the elongated device.
  • the sensed positions are transmitted to the processor 102 or to another processor.
  • the processor 102 is configured for receiving the sensed positions and calculating a shape of the flexible elongated device 110, while positioned inside the anatomy 112.
  • the shape of the flexible elongated device 110 is calculated by an external processor and processor 102 receives the calculated shape of the flexible elongated device 110, for example, in EM transmitter coordinates, while positioned inside anatomy 112.
  • the flexible elongated device 110 includes a shape sensor, such as an optical fiber shape sensor, in which case the shape of the flexible elongated device 110 is sensed directly.
  • the shape/position is first computed in transmitter coordinates.
  • the user is enabled to localize the catheter inside the anatomy, and optionally, localize the deforming anatomy in transmitter coordinates. In some embodiments, this is based on the deformation model in addition to the tracking, which is a deformable breathing registration between the transmitter coordinates and the anatomy (which may initially be represented in preoperative CT coordinates).
  • a potential advantage of using position sensors is that it potentially allows sensing the shape of the device relative to a certain coordinate system (such as an electromagnetic transmitter’s coordinate system), which allows localizing a curve of the flexible elongated device 110 in its true absolute position in space. In contrast, using a fiber-optic shape sensor may only provide the relative shape of the device but might not provide the shape’s position in space.
  • a potential advantage of knowing the position of the flexible elongated device 110 in space is that it potentially increases the accuracy of applied deformation models since this information can be incorporated in the minimized deformation energy functions described below. In some embodiments, the added accuracy is achieved due to having the information of position of the device and not just shape.
  • Known solutions that utilize fiber-optic sensors try to recover the position (for example, by integrating the sensed shape from a known reference point or origin, which may be considered as shape “transmitter coordinates”, or shape “reference coordinates”), but the unknown position always degrades their deformation tracking performance.
  • the shape sensor can be thought of as localized in “reference coordinates” or “transmitter coordinates”, similarly to the EM transmitter coordinates described herein, but with greater inaccuracy (due to shape to position recovery integration error).
  • the memory 104 is configured to store instructions 116, which instruct the processor 102 to perform operations of the methods described herein.
  • a dynamic deformation 120 is calculated by the processor 102, and optionally a deformed visual representation 122 is generated, from information/data 118 received from components of the system 100 and/or from external systems (not shown), as described in more detail herein (see below more information on exemplary sources of information/data 118).
  • exemplary sources of the information/data 118 are one or more of:
  • a pre-deformation model 124 of anatomy 112 (for example, a preoperative, optionally processed, CT scan or MRI scan or any other suitable scan);
  • An initial registration 128 of the device 110 (optionally pre-navigational, and optionally a deformed registration), based on an initial, optionally unsupervised, survey of the device inside the anatomy and on a first fitting of the deformation model onto the plurality of tracked curves of the device from the initial survey (optionally in combination with a breathing model);
  • Position and/or orientation data 126 from sensors 114 which are collected during the procedure (intraoperative, optionally in transmitter coordinates);
  • For example, the catheter is “seen” located at a certain 3D offset relative to the target lesion, which may be a different location than where the system “thinks” the device is positioned inside the anatomy (or than where the lesion is positioned in transmitter coordinates).
  • Another example can be a movement of the anatomy in relation to a previous image;
  • A movement model 140, for example based on preoperative breathing and/or intraoperative breathing and movement of the patient.
  • the information/data 118 comprises data from a pre-deformation model 124 of anatomy 112, for example as mentioned above, a model of the anatomy 112 in a static state (for example in full inhale state or in full exhale state), for example before the initialization of the interventional procedure and/or in a predetermined state and/or with a predetermined set of parameters.
  • the pre-deformation model 124 is generated, for example by the processor 102, based on a static preoperative image, generated by any suitable type of diagnostic imaging scan method such as, for example a CT or MRI image.
  • the system performs an initial registration 128 of the device 110 to be used as reference for the following monitoring of the device.
  • the initial registration 128 is performed after the device 110 is inserted into the patient at the beginning of the procedure and before proceeding with the navigation towards the desired location within the anatomy 112.
  • the initial registration 128 optionally comprises a visual device representation 130 of the device 110 upon a deformed model 132 of the anatomy 112, showing a correct position of the device 110 within the anatomy 112.
  • the initial registration 128 is generated, for example, by the processor 102, by receiving position and/or shape data 126 from the sensors 114 and/or memory 104, for example while the device 110 is inserted to and/or moves within various parts of anatomy 112 (for example lung airways). In some embodiments, accordingly, adjustments are performed to the virtual position (position and/or orientation) of the device 110.
  • the predeformation model 124 and/or its various parts are adjusted/updated according to position data 126 and/or according to the shape of the device 110 calculated based on position data 126.
  • explanations regarding how the virtual position of the pre-deformation model 124 is adjusted are provided in more detail herein.
  • the deformed model 132 comprises positions and/or shapes of model nodes 138 and model branches 140, which correspond to nodes 142 and branches 144 of the anatomy 112, respectively.
  • the display 106 is configured for displaying an image of the initial registration 128, for example showing the visual device representation 130 of the device 110 upon the deformed model 132.
  • the deformed model 132 is a dynamically deformed model, meaning the model is updated periodically.
  • Exemplary position and/or orientation data 126 from sensors 114 collected during the procedure:
  • the information/data 118 comprises position and/or orientation data 126 from sensors 114, collected during the procedure. In some embodiments, this data is represented in transmitter coordinates and is used by the system to modify/update the deformation model.
  • the information/data 118 comprises shape data 126 from a shape sensor 114 (such as a fiber-optic shape sensor), collected during the procedure.
  • this data lacks position information (only represents shape) and position data is recovered from the shape data, for example, using shape integration methods relative to a known reference point.
  • the integrated position and shape tracked sensor can be thought of as being localized in shape “reference coordinates” or shape “transmitter coordinates” and is then used by the system to modify/update the deformation model.
  • the sensors 114 may refer to any of EM sensors, fiber-optic shape sensors or any other position/shape sensor.
  • updates are performed from data received from the imaging module 108, for example, based on movement models 140 of anatomy 112, as described in more detail herein.
  • the information/data 118 includes dynamic in-vivo imaging data 134 and/or dynamic ex-vivo imaging data 136.
  • the dynamic imaging data 134 includes one or more of an image, ultrasound or other in-vivo imaging data obtained for example from a camera 138, which may be installed on the device 110 and/or capture in-vivo image or ultrasound data, for example while the device 110 is inserted to and/or moves within various parts of anatomy 112.
  • the dynamic ex-vivo imaging data 136 includes image data obtained from imaging module 108.
  • the information/data 118 includes various movement models 140, which may be calculated, generated and/or adjusted for anatomy 112.
  • the movement models 140 enable the calculation (for example by processor 102), optionally dynamically and/or in real time, of movements and/or updated position of the anatomy 112 in various scenarios.
  • flexible movement models 140 include models of movement of the anatomy 112 and/or its various parts, due to voluntary or non-voluntary movements of the body of the patient, various types of movement due to breathing, movement of device 110 within anatomy 112, and/or any other suitable kind of movement.
  • the movement models 140 comprise at least a first cost function and a second cost function, as described in more detail herein.
  • the movement models 140 are parametrized. For example, a breathing model can be parametrized using a single parameter, the breathing phase, which determines a stretching and compressing of a lung model based on that single parameter, as described in more detail below.
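For instance, a single-parameter breathing model might be sketched as a simple interpolation between two registered lung states. The linear form below is an assumption chosen purely for illustration; the actual model may be considerably more elaborate:

```python
import numpy as np

def apply_breathing_phase(nodes_exhale, nodes_inhale, phase):
    """Node positions at a given breathing phase (0 = full exhale, 1 = full inhale)."""
    return (1.0 - phase) * nodes_exhale + phase * nodes_inhale

# Toy example: one fixed node and one airway endpoint that stretches on inhale.
exhale = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 10.0]])
inhale = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 12.0]])
mid = apply_breathing_phase(exhale, inhale, 0.5)
# at mid-breath, the stretching endpoint sits halfway between the two states
```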
  • an exemplary movement model 140 is a breathing model which describes movement of the anatomy 112 and/or parts of anatomy 112, due to breathing.
  • the anatomy 112 includes the lungs anatomy
  • the shape in space and/or the positions of the airways and nodes of the lungs may be affected by exhalation and inhalation and/or by the diaphragm (i.e. midriff) movement and/or by movement of other body parts due to breathing.
  • a movement model 140 or a plurality of movement models 140 are generated to describe the various types of movements due to breathing and/or their effect on the shape in space and/or the positions of the airways and nodes of the lungs.
  • the movement model 140 is configured for and/or comprises instructions for receiving as input in-vivo imaging data 134 and/or ex-vivo imaging data 136, and calculating adjustments to the shape and/or the positions of the model branches (e.g. airways) and nodes, according to the received image data.
  • the movement models 140 may incorporate a retreating effect and/or a fading effect, describing changes in time of movements of and/or in the anatomy 112 due to a certain movement trigger and/or cause.
  • the movement models 140 are configured for and/or comprises instructions for describing movement of various parts of the lungs (for example as translated into nodes and branches) during inhalation and/or exhalation, for example due to air flow and/or diaphragm motion, including a retreating effect and/or a fading effect of the motion of the various parts in time, for example towards the end of an inhalation or an exhalation.
  • the processor 102 is configured for and/or comprises instructions for dynamically calculating a deformation 120 of the anatomy 112. In some embodiments, the calculation of the dynamic deformation 120 is based on the movement models 140. In some embodiments, the processor 102 is configured for and/or comprises instructions for applying various constraints on the movement models 140 to calculate the dynamic deformation 120. In some embodiments, the constraints are extracted from the plurality of information/data 118, such as, for example, pre-deformation model 124, position data 126, the initial registration 128, in-vivo imaging data 134, ex-vivo imaging data 136, and/or any other suitable information.
  • a model node 138 may move and rotate in 6 degrees of freedom.
  • a model node 138 may be subject to at least one cost function, which allocates a suitable energy cost to each type and magnitude of the movement.
  • moving or rotating a model node 138 with respect to its immediate neighbors may cost a suitable amount of energy, for example according to the magnitude and/or direction of the movement, and/or according to the type and/or location of the node.
  • moving or rotating a model node 138 with respect to its original position may cost a suitable amount of energy, for example according to the magnitude and/or direction of the movement, and/or according to the type and/or location of the node.
  • moving or rotating a model node 138 with respect to a specific anatomy part or body organ may cost a suitable amount of energy, for example according to the magnitude and/or direction of the movement, and/or according to the type and/or location of the node.
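These per-node movement costs might be written, purely illustratively, as quadratic energy functions (the function names, the quadratic form, and the stiffness parameter `k` are assumptions for this sketch, not the patent's formulation):

```python
import numpy as np

def neighbor_energy(pos, neighbor_pos, rest_len, k=1.0):
    # cost of stretching/compressing the segment to an immediate neighbor
    return k * (np.linalg.norm(pos - neighbor_pos) - rest_len) ** 2

def anchor_energy(pos, original_pos, k=1.0):
    # cost of moving a node away from its original (preoperative) position;
    # k can be made large for nodes constrained, e.g., by the pleura or ribcage
    return k * float(np.sum((pos - original_pos) ** 2))
```

Tuning `k` per node type and location is one way the same energy form can yield either tree-like or pleura-constrained behavior.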
  • the visual representation 130 of the device 110 comprises a position and/or a shape, for example a full-length or partial shape calculated by the processor 102 based on position data or raw data from the sensors 114, representing a current position and/or a current shape of the device 110.
  • a model node 138 of the deformed model 132 is subjected to at least a second cost function, which allocates a suitable energy cost to a deviation of the device representation 130 of the device 110 from a corresponding model node 138 and/or from a corresponding model branch 140 of the model 132, for example according to a type of the deviation and/or according to the magnitude and/or direction of the deviation.
  • the device representation 130 of the device 110 optionally comprises representations of multiple simultaneously tracked devices, or, additionally or alternatively, also comprises past representations of the device 110 (for example, history locations of the device within a substantially same procedure).
  • a potential advantage of this is that it potentially allows combining past information into the real-time solved deformation model.
  • a plurality of curves are used by the “curve deviation” energy function to find the deformation which best fits all those time curves, according to the deformation model 120.
  • this provides more information to the deformation tracking algorithms - instead of trying to find a deformation state which only fits the current position and shape of one or more tracked catheters, the deformation tracking algorithms find a deformation which fits the catheter state at the current moment and/or at past moments (for example, a 3-second history).
  • a potential advantage of this is that it potentially provides more information for the deformation tracking and may so avoid overfitting of the deformation model onto the position and/or shape of the tracked catheter, under the assumption that the organ does not dynamically deform too much over a short time period (for example, 3 seconds).
  • past curves can be weighted differently compared to the current curve. For example, decreasing weights can be assigned to past curves such that the older the curve, the smaller the weight. This accounts for the fact that “old” curves are less reliable compared to present curves in that they may belong to a differently old deformed state of the organ, which is not necessarily the current deformed state of the organ.
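One simple way to realize such decreasing weights is an exponential decay in curve age (the half-life value and the exponential form are assumptions chosen only to illustrate the idea):

```python
import numpy as np

def curve_weights(ages_seconds, half_life=1.5):
    """Weight for each past curve: halves for every `half_life` seconds of age."""
    return 0.5 ** (np.asarray(ages_seconds, dtype=float) / half_life)

# current curve, a 1.5-second-old curve, and a 3-second-old curve
w = curve_weights([0.0, 1.5, 3.0])
```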
  • time-based data is used to fit a time-based deformation model onto the time-based data.
  • a deformation model may contain a time-based energy function which assigns a cost for certain motion (over time) of the deformation model.
  • it is known that the organ cannot deform/move very rapidly, so this can be formulated using a time-based temporal structure energy function.
  • a time-based deformation model can be fit such that it fits all the time-based data over time, but also tries to minimize the structural motion (deformation) of the organ over time (as encoded by a time-based energy function). This can potentially improve the accuracy of the deformation tracking.
  • the deformation engine is an optimizer and tries its best to reach a minimum of the total energy after each frame.
  • the engine may be very fast (for example 60 frames per second), so that the change between subsequent frames must be small and smooth, in order to provide a realistic state.
  • a simple way to quantify this, for example, is a time-based energy: E_time = ||T(t) - T(t - Δt)||², or E_time = ||T(t) - T(t - Δt)||² / Δt, where t is the current time, Δt is the time duration between this frame and the last, and T(t) is the transformation at time t.
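The time-based energy described above can be expressed in code. The sketch below assumes the Δt-normalized variant and represents the transformation simply as an array of node parameters:

```python
import numpy as np

def time_energy(T_now, T_prev, dt):
    """Temporal-smoothness cost: penalize rapid deformation between frames."""
    return float(np.sum((T_now - T_prev) ** 2)) / dt
```

Dividing by the frame duration makes the penalty a rate-of-change cost, so the same physical motion costs more when it happens over a shorter interval.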
  • the calculated deformation 120 is applied on one or more of: the pre-deformation model 124, the initial registration 128 and a previous deformed visual model 122, to calculate a new deformed model 122’ (not shown in Figures 1a-b), e.g., location, shape and/or orientation, of model 132 and/or its various parts, relative to device representation 130.
  • the calculated deformation 120 is translated into the visual representation 130 of device 110 upon the deformed model 132 of the anatomy 112, which is displayed by the display 106, showing a correct shape and/or position of the anatomy 112 as represented by the deformed model 132 together with a correct shape and/or position of the device 110 relative to the anatomy 112, as represented by the device representation 130.
  • the calculation of the dynamic deformation 120 comprises:
  • a target such as a lesion or one or more other anatomical features
  • combining multiple fluoroscopic 2D images enables reconstructing a local fluoroscopic 3D volume, in which a target can be identified in 3D Fluoro coordinates (rather than 2D Fluoro image coordinates).
  • identifying a target in fluoroscopy, or in an external image, shall therefore also include the case of identifying a target in a 3D reconstructed tomosynthesis volume, or in one or more processed external images.
  • a constraint is added (manually, optionally automatically) to the deformation tracking, to force the target to transmitter coordinates (X,Y,Z), with some weight (compared to the other elastic constraints, which are described using energy cost functions). This “pulls” the target elastically to its true position in transmitter coordinates, as seen by the imaging device.
  • the tip of the device can be identified in Fluoro coordinates. Since the position of the tip is known in transmitter coordinates (as tracked by the position/shape sensors), it is possible to compute a registration between Fluoro and transmitter coordinates and repeat the above. This can be done, for example, by constructing a translation-based Fluoro-to-transmitter registration, bringing the tip location from its position in Fluoro coordinates to transmitter coordinates.
  • the Fluoro is aligned with the EM transmitter in some orientation, therefore allowing the orientation of the Fluoro-to-transmitter registration to be known.
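A minimal sketch of the translation-based Fluoro-to-transmitter registration described above; the function names and toy coordinates are illustrative assumptions.

```python
import numpy as np

def translation_registration(tip_fluoro, tip_transmitter):
    """Translation-only registration: the offset bringing the tracked tip
    from its Fluoro position to its sensor-tracked transmitter position.
    Orientation is assumed already aligned, per the text."""
    return np.asarray(tip_transmitter, float) - np.asarray(tip_fluoro, float)

def fluoro_to_transmitter(point_fluoro, offset):
    """Map any Fluoro-coordinate point (e.g. an identified target)
    to transmitter coordinates using the tip-derived offset."""
    return np.asarray(point_fluoro, float) + offset

# Toy values: tip seen at (10, 5, 0) in Fluoro, tracked at (2, 1, 3) in
# transmitter coordinates; a target 2 units away in Fluoro x is mapped over.
offset = translation_registration([10.0, 5.0, 0.0], [2.0, 1.0, 3.0])
target_tx = fluoro_to_transmitter([12.0, 5.0, 0.0], offset)
```

With orientation alignment known (as the text notes), a full rigid registration would add a rotation; the translation-only version above is the simplest case.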
  • a potential advantage of correcting the discrepancy using a deformable flexible model is that it potentially allows improving the overall accuracy of the deformed anatomical model compared to the anatomy, instead of just providing accuracy at the single point of correction (for example, at a target).
  • the modeled anatomy is deformed such that it still respects its energy cost functions (which can encode, for example, the original structure of the anatomy, among other constraints) and yet provides accuracy at the marked point of interest (for example, at a target). In some embodiments, this potentially improves the overall accuracy of the guidance system, especially in the case where further navigation is needed in order to reach the target with the device. In some embodiments, improving the overall accuracy increases the probability for successful re-navigation to target after correction.
  • since the deformation tracking is constrained to bring the target to its observed location in transmitter coordinates (with a certain constraint weight), but does not force it to stay at the location observed by the external imaging, it may account for deformation which may occur during the re-navigation. The catheter will thus be able to reach the target, which may still be breathing and deforming during re-navigation, and therefore may not necessarily end up, after re-navigation, at the location initially observed by the external imaging.
  • a breathing movement model includes alternating stretching and compressing of a lung model, for example by moving each model node, to simulate breathing. It is not known if this is possible in previous deformation models, in which model branches keep their lengths, and in which the positions of the model nodes are determined, in some cases, by the rotations of many other “parent” model nodes.
  • the breathing model is achieved by using two or more preoperative CT/MRI scans. For example, one scan can be performed in full inhalation and a second scan can be performed in exhalation.
  • the anatomy can then be modeled according to the two scans.
  • the deformation model can use the two scans to model the breathing of the anatomy in real-time based on the real-time tracked breathing state of the patient.
  • the deformation model can use a breathing model that provides ready-to- be-used models of the lungs during the breathing.
  • the two CT scans can be processed, and a tree skeletal model may be generated from the two scans.
  • the two processed trees can be registered such that each node and branch or a partial set of nodes and branches of the tree is identified and matched in the two scans.
  • the deformation model can then use the two scans to interpolate between the two models based on the breathing state of the patient, which provides ready-to-be-used models of the lungs during the breathing.
  • the breathing state of the patient can be computed using any suitable method.
  • the breathing state of the patient (inhale/exhale or a scale in between) is computed by attaching one or more position/reference sensors to the patient and monitoring the movement of the sensor while the patient breathes.
  • applying a high-pass filter on an up/down or right/left motion of the one or more reference sensors can provide a motion which is indicative of the breathing motion of the patient, from which the breathing state can be computed, as also mentioned below.
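The high-pass idea can be sketched as subtracting a moving average (the slow drift) from the sensor's up/down motion; the window size and the synthetic signal are assumptions for illustration.

```python
import numpy as np

def breathing_component(height, window=201):
    """Crude high-pass filter: subtract a moving average of the reference
    sensor's height signal, removing slow drift and leaving the periodic
    breathing motion. `window` (in samples) is a tuning assumption."""
    kernel = np.ones(window) / window
    drift = np.convolve(height, kernel, mode="same")
    return height - drift

# Synthetic 30 s recording at 50 Hz: slow drift plus ~15 breaths/min.
t = np.linspace(0.0, 30.0, 1500)
height = 0.02 * t + 0.5 * np.sin(2 * np.pi * 0.25 * t)
breath = breathing_component(height)
```

The phase of `breath` (for example, its position between the last minimum and maximum) can then serve as the real-time breathing state fed to the deformation model.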
  • position/reference sensors attached to the patient monitor the movement of the patient during procedure.
  • the registration can be updated by monitoring the movement of the patient, by applying 3D translations and rotations in accordance with the movement of the patient.
  • the external deformation of the patient can be monitored.
  • the deformation of the body of the patient can be monitored by attaching 3 or more reference sensors to the body of the patient.
  • deformation in the relative shape of the sensors can account for deformation of the body of the patient, which can be extrapolated to model deformation of the lung. For example, when the patient coughs, this can be detected by monitoring the shape and movement of the reference sensors and a certain deformation can be applied to the lung, using the deformation model.
  • the model can be fitted during the initial registration, by matching between structural changes of the external reference sensors and curve states inside the anatomy, as done in the case of breathing. For example, it may be observed that whenever a certain structural change occurs on the external reference sensors, certain motion or transformation is applied to the lung. This, for example, can model the coughing of the patient such that the system will compensate for coughing by parametrizing it using the external reference sensors and applying it on the deformed lung, as may be learned during initial registration according to the deformation model.
  • a preoperative scan of the anatomy is performed in full inhalation (IRV) while the guidance procedure may be performed in standard tidal volume breathing.
  • knowing the ratio between the inflation state of the scan (from which the anatomical model may be generated) and the intraoperative breathing can assist the deformation model in modeling the effect of the breathing.
  • the deformation model can assume that the preoperative scan was performed in 100% inhalation state, while the intraoperative breathing alternates between 20% inflation (at exhale state) to 80% inflation (at inhale state).
  • the deformation model can then interpolate between two available scans, or between two breathing states of the anatomy (for example, by stretching and compressing of a lung model, as discussed above).
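A sketch of the interpolation between two breathing states of the model; the 20%/80% inflation levels come from the example above, and the linear blend of node positions is an assumption.

```python
import numpy as np

def interpolate_lung(exhale_nodes, inhale_nodes, inflation,
                     exhale_level=0.2, inhale_level=0.8):
    """Blend node positions between the exhale model (inflation == exhale_level)
    and the inhale model (inflation == inhale_level); inflation values
    outside the range are clamped to the endpoints."""
    a = np.clip((inflation - exhale_level) / (inhale_level - exhale_level), 0.0, 1.0)
    return (1.0 - a) * np.asarray(exhale_nodes, float) + a * np.asarray(inhale_nodes, float)

# Toy two-node lung: the inhale model is stretched relative to the exhale model.
exhale = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
inhale = np.array([[0.0, 0.0, 0.2], [1.2, 0.0, 0.3]])
mid = interpolate_lung(exhale, inhale, inflation=0.5)
```

Fed with the real-time tracked breathing state, this yields the "ready-to-be-used" intermediate lung models mentioned above.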
  • the inflation state information is given to the deformation model by the system or by the user.
  • the inflation state of the preoperative scan as well as the intraoperative breathing pattern are automatically computed by the deformation model.
  • the breathing pattern can be modeled using a small number of parameters such as: preoperative scan inflation (can be assumed to be 100%), intraoperative exhale and inhale inflation ranges.
  • these parameters can then participate in the energy minimization process (see below regarding “energy minimization process”) of the breathing deformation model such that the overall energy is minimized. In some embodiments, this works under the assumption that the deformation cost energy is minimal for the true breathing and deformation parameters.
  • the breathing phase of the patient is tracked in real-time and provided to the deformation model by using an external measurement of the breathing.
  • one or more position sensors are attached to the chest of the patient to track the breathing motion pattern of the patient.
  • periodic height change of the chest of the patient is indicative of the periodic breathing of the patient.
  • the deformation model is configured for and/or comprises instructions for using the tracked breathing phase to interpolate between inhale and exhale states of the deformation model.
  • the initial inhale and exhale states of the deformation model can be learned in an initial deformable registration setup stage of the procedure. While this is technically “not intraoperative”, since it is meant to be performed just before beginning the procedure, it has been incorporated into the “intraoperative” actions.
  • multiple positions and shapes of the elongated tracked device can be accumulated during a supervised or unsupervised survey of the physician inside the anatomy.
  • the multiple positions and shapes can be used simultaneously in the deformation energy minimization process (see below) to find the initial deformation state of the anatomy.
  • the deformation model may then fit two or more different models, for example: inhale and exhale model, corresponding to two or more breathing groups of the samples.
  • the deformation model is configured for interpolating between these models during procedure.
  • the breathing inflation parameters (as described above) can be solved in the process of the initial deformable registration.
  • a general method for generating a deformed, optionally visual, representation comprises one or more of:
  • a part of the method for generating a deformed visual representation comprises one or more of:
  • a method for updating a deformed visual representation comprises one or more of:
  • a target for example a lesion
  • a target lesion can be identified in Fluoro/CBCT; for example, a target lesion can be seen in a fluoroscopic image by using tomosynthesis methods as mentioned above. The target is then transformed to EM transmitter coordinates (for example by a Fluoro-to-EM registration, for example using radiopaque markers on the transmitter), and then compared to where the system “thinks” the target is (in transmitter coordinates) based on its deformation tracking and registration algorithms.
  • a method for tracking dynamic anatomy deformation comprises one or more of:
  • applying of the extracted constraints on the models 140 causes movement of the model 132 and/or parts of model 132, representing parts of the anatomy 112, and/or change in position of representation 130 of the device 110 (optionally also a visual representation) relative to the model 132 and/or parts of model 132.
  • applying of the extracted constraints causes the position of the visual representation 130 of the device 110 to appear outside of the anatomy 112 as represented in model 132.
  • the first cost function allocates a suitable energy cost to each type and magnitude of movement of each model node 138.
  • the second cost function allocates a suitable energy cost to each type and magnitude of deviation of visual representation 130 of the device 110 from a corresponding model node 138 and/or model branch 140 of model 132.
  • the aforementioned methods are performed by the processor 102 in response to received instructions 116. In some embodiments, the methods are performed periodically and/or upon a change in the information/data 118, and/or upon an event that may potentially cause change in the information/data 118.
  • the device may appear to be located in a fork between two possible branches.
  • a guidance system needs to determine the true position of the device (i.e., in this case, choose between the two optional branch locations).
  • a deviation cost function may encode the deviation cost between the device and the first branch, but may also possibly address the second branch while computing the deviation.
  • determining the true assignment of the branch is nontrivial.
  • multiple hypothetical assignments are made between the device and multiple possible branches, for example, in the proximity of the device (but not necessarily considering only the nearest branch).
  • the device can be assumed to be located inside one of K nearest branches.
  • a deviation cost energy can be computed, and the overall deformation model cost can be computed and minimized (under the assumption that the device is located inside one of the K nearest branches).
  • the overall deformation energy includes a structural cost of the anatomy as well as a cost for divergence from the anatomy.
  • the branch assignment assumption which leads to the minimal final deformation energy is taken to be the true assumption.
  • the deformation model is set to the state of this minimal energy assumption.
  • this mechanism assists in avoiding local minima in the live dynamic deformation tracking of the model.
  • multiple asynchronous worker threads can run in parallel and possess a different assignment of the tracked device to any of K possible anatomical branches (for example, K nearest branches).
  • the worker threads can minimize the energy and test multiple assignment hypotheses in parallel.
  • the system may then choose the deformation state from the thread that achieved the minimum deformation cost as its current chosen state of the deformed anatomy.
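The parallel hypothesis testing can be sketched with a thread pool; `minimize_for` stands in for the per-hypothesis energy minimization and is an assumed callable returning (final energy, deformed state).

```python
from concurrent.futures import ThreadPoolExecutor

def choose_deformation_state(branch_candidates, minimize_for):
    """Run one energy minimization per branch-assignment hypothesis in
    parallel worker threads and keep the deformation state with the
    minimal final deformation energy."""
    with ThreadPoolExecutor(max_workers=max(1, len(branch_candidates))) as pool:
        results = list(pool.map(minimize_for, branch_candidates))
    best = min(range(len(results)), key=lambda k: results[k][0])
    return branch_candidates[best], results[best][1]

# Toy stand-in: pretend the minimized energy per branch is already known.
energies = {"branch-A": 3.2, "branch-B": 1.1, "branch-C": 2.7}
branch, state = choose_deformation_state(
    list(energies), lambda b: (energies[b], f"state-of-{b}"))
```

In a real engine each worker would warm-start from the current deformation state, minimize with its own device-to-branch assignment, and report its converged energy.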
  • the system operates under the assumption that the true state of deformation is the one which minimizes the total cost of the deformation model (which as described, may include cost for structural change of the anatomy as well as a cost for the divergence of the device from the anatomical model).
  • in order to avoid local minima in the dynamic deformation model, the system is configured to find the global minimum of the deformation model in a multi-step solution by using different weighting techniques along the elongated tracked device.
  • the system may have high confidence in finding the deformation of the proximal anatomy based on the proximal portion of the elongated tracked device.
  • the system then decreases the weights of the distal portion of the device and minimizes the deformation energy based just on the “proximal device”.
  • the proximal anatomy would then deform based on the proximal device.
  • the peripheral anatomy is assumed to be deformed more correctly than initially.
  • the system increases the weights for the distal portion of the elongated tracked device and repeats the energy minimization process, now penalizing divergence relative to the distal device more strongly. In some embodiments, this causes the distal (peripheral) anatomy to deform more strongly than before, but starting at the deformation state achieved in the previous step, thus reducing the risk of converging to a local minimum of the deformation model.
  • this mechanism of gradual convergence can consist of two or more steps in which the relative weights along the elongated tracked device are increased towards the distal end of the device, or changed in any other suitable manner.
  • the weights can grow gradually towards the distal end of the device between each convergence step but can then be decreased again towards the proximal portion of the device, then grow again towards the distal part.
  • the relative weights along the tracked device can be changed randomly between different convergence attempts under the assumption that the true deformation state of the anatomy is a state which is highly indifferent to different relative weighting along the device. In some embodiments, by changing the different relative weights along the elongated tracked device the probability of convergence to the true global minimum of the deformation cost energy is increased.
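One way to sketch the gradual proximal-to-distal weighting schedule described above (the linear ramp is an illustrative assumption):

```python
import numpy as np

def weight_schedule(n_points, n_steps=3):
    """Per-point weights along the tracked device for each convergence step.
    Step 0 trusts mainly the proximal portion (index 0); each later step
    raises the distal weights, until the final step weighs the whole
    device equally."""
    s = np.linspace(0.0, 1.0, n_points)      # 0 = proximal end, 1 = distal tip
    return [(1.0 - s) + s * (step / (n_steps - 1)) for step in range(n_steps)]

steps = weight_schedule(n_points=5, n_steps=3)
```

Each list entry would scale the device-to-anatomy deviation costs during one round of energy minimization, with every round warm-started from the previous deformation state.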
  • one or more assumed true position candidates of the device inside the anatomy are optionally obtained by using artificial intelligence (AI) tools.
  • the divergence cost energy would then compute the divergence between the elongated device and each of its true/hypothetical assumed positions inside the anatomy during the overall deformation energy minimization process, as obtained by the AI.
  • AI architectures may be configured to estimate one or more true/hypothetical positions of device 110 and/or a shape or a curve of device 110 within the anatomy 112, based on system measurements such as, for example, distances between various sensors or other components and/or relative positions and/or orientations of various sensors or other components of device 110.
  • the estimation is based on comparison of these device measurements with the local anatomy near where the device is assumed to be located inside the anatomy.
  • local anatomy can be processed by the AI in the form of a local preoperative CT scan, airway segmentation or any other suitable format which represents the anatomical structures near the device.
  • the AI can then use the local anatomical information to find one or more true position candidates for the device inside the anatomy.
  • the AI may be architected as a U-Net which receives the full curve of the device in the form of an input 3D volume into the network, as well as local anatomy information in the form of a corresponding block of a CT scan or airway segmentation binary volume; the U-Net may then output a 3D volume representing the most probable location of the device inside the given local anatomy, according to the AI.
  • the U-Net may output a 3D volume equivalent to the input local anatomy volume where a pathway inside the anatomy is highlighted, where the elongated device is found to be located with the best probability, according to the AI.
  • any kind of AI or other suitable method can be used to choose K-best anatomical location candidates (for example, branch candidates) which represent where the elongated device may be located inside the anatomy.
  • K-best anatomical location candidates for example, branch candidates
  • such estimated curve may be used as a constraint in one or more of motion models 140.
  • the first cost function and the second cost function may be applied, and/or deformation 120 may be calculated, as described in detail herein above.
  • the K-best candidates may be used as assumptions for the device-anatomy deviation cost function as part of the overall deformation energy minimization as described above.
  • an assignment candidate (hypothesis) yielding the best overall deformation energy cost after minimization is then assumed to represent the true location of the device inside the anatomy, thus representing the true deformation of the anatomy (which explains the tracked curve of the device).
  • compositions, methods or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure.
  • a compound or “at least one compound” may include a plurality of compounds, including mixtures thereof.
  • range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as “from 1 to 6” should be considered to have specifically disclosed subranges such as “from 1 to 3”, “from 1 to 4”, “from 1 to 5”, “from 2 to 4”, “from 2 to 6”, “from 3 to 6”, etc.; as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.
  • the returning energy can be written, for example, as E_ret = Σ_{i=1..N} W_ret,i · d(T_i, T_i⁰), where N is the number of nodes and W_ret,i is the weight, which may be different for each node, for example, based on its radius. One can see that this is a one-node energy, since it contains a single sum over the nodes, with no interaction between them.
  • the important part, d(T_i, T_i⁰), can be written as, for example, ‖p_i − p_i⁰‖² + ‖q_i − q_i⁰‖², with p_i, q_i the current position and rotation of node i and p_i⁰, q_i⁰ its initial ones, such that it is always non-negative, and it is exactly zero only when the node is at its initial position and rotation.
  • the optimizer uses the true degrees of freedom of each node i: either its position and rotation as Euler angles, (x_i, y_i, z_i, α_i, β_i, γ_i), or its position and rotation as a quaternion, (x_i, y_i, z_i, q_i).
  • the returning energy can be alternatively defined as, for example, E_ret = Σ_i W_ret,i · ‖p_i − p_i⁰‖ or E_ret = Σ_i W_ret,i · ‖p_i − p_i⁰‖².
  • the main one is the catheter energy.
  • the idea is to write a function which depends on the node transformations and on the tracked transformations along the catheter (which are external data and NOT part of the degrees of freedom in the optimization).
  • the deformation engine receives a list of N_cat tracked positions and orientations T_cat,j along the catheter(s). For example, if there are three catheters present, it can use all of them at once to get better results than using only one catheter. In that case, N_cat1 may not necessarily be equal to N_cat2 or to N_cat3.
  • as these T_cat,j are given to the deformation engine, the simplest guess as to where inside the airways a catheter point is, is the closest position in the lung. Since it is known that all catheter points should reside inside an airway, the energy should be at a minimum (or zero) when all T_cat,j are inside airways, and increase the further out of an airway they are. This can be imagined as a string connecting each T_cat,j to its closest lung point, the energy being some positive monotonic function of the lengths of the strings. For example: E_cat,j = W_cat · ‖p_cat,j − p_i‖²,
  • where node i is the closest to the position of the jth catheter point, and the final catheter energy is a sum over all such energies, E_cat = Σ_j E_cat,j.
  • when the engine receives a catheter point which is outside the lungs, the energy rises from zero. This pushes the lung nodes away from their rest state in an attempt to decrease the catheter energy E_cat. This change, in turn, increases the previous E_ret until an equilibrium is reached at the minimum total energy. If there were only E_cat, the catheter would pull the lungs towards it with no regard to their initial state. E_ret is added to allow the lungs to partially oppose changes.
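The interplay of E_ret and E_cat can be sketched with position-only energies (rotations and per-node weights omitted; the function names and toy geometry are assumptions):

```python
import numpy as np

def returning_energy(nodes, rest_nodes, w_ret=1.0):
    # E_ret: pulls every node back toward its rest (initial) position;
    # exactly zero when the model is undeformed.
    d = np.asarray(nodes, float) - np.asarray(rest_nodes, float)
    return w_ret * float(np.sum(d ** 2))

def catheter_energy(nodes, catheter_points, w_cat=1.0):
    # E_cat: for each tracked catheter point, the squared length of the
    # "string" to its closest lung node; zero when every point lies on a node.
    nodes = np.asarray(nodes, float)
    total = 0.0
    for p in np.asarray(catheter_points, float):
        total += w_cat * float(np.min(np.sum((nodes - p) ** 2, axis=1)))
    return total

def total_energy(nodes, rest_nodes, catheter_points):
    # the equilibrium the engine seeks minimizes this sum
    return returning_energy(nodes, rest_nodes) + catheter_energy(nodes, catheter_points)

# Toy three-node airway along the x axis, at its rest state.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
```

An optimizer moving `nodes` to reduce `total_energy` reproduces the tug-of-war described above: E_cat pulls the model toward the catheter while E_ret makes it partially resist.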

Abstract

Systems and methods for controlling an endoscope in a patient's anatomy are provided, the method comprising: generating a pre-deformation model based on an acquired image of the anatomy, including assigning model nodes and branches; generating an initial registration of the endoscope in the anatomy; acquiring position and/or orientation data in transmitter coordinates from sensors disposed in the endoscope while navigating the anatomy; providing one or more anatomical constraints on each of the nodes and branches based on at least one energy function; analyzing the acquired data as a function of the pre-deformation model, the initial registration and the at least one energy function; and generating a deformed representation of the anatomy based on the analysis using a deformation model; the analysis comprising deforming the model by modifying each of the nodes independently of one another according to the constraints.
PCT/IL2024/050048 2023-01-12 2024-01-12 Dynamic anatomical deformation tracking for in vivo navigation Ceased WO2024150235A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP24741466.7A EP4648666A1 (fr) Dynamic anatomical deformation tracking for in vivo navigation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363438536P 2023-01-12 2023-01-12
US63/438,536 2023-01-12

Publications (1)

Publication Number Publication Date
WO2024150235A1 true WO2024150235A1 (fr) 2024-07-18

Family

ID=91896522

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2024/050048 Ceased WO2024150235A1 (fr) Dynamic anatomical deformation tracking for in vivo navigation

Country Status (2)

Country Link
EP (1) EP4648666A1 (fr)
WO (1) WO2024150235A1 (fr)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160073928A1 (en) * 2003-12-12 2016-03-17 University Of Washington Catheterscope 3d guidance and interface system
US10085671B2 (en) * 2012-05-14 2018-10-02 Intuitive Surgical Operations, Inc. Systems and methods for deformation compensation using shape sensing
US10314656B2 (en) * 2014-02-04 2019-06-11 Intuitive Surgical Operations, Inc. Systems and methods for non-rigid deformation of tissue for virtual navigation of interventional tools
US20200000526A1 (en) * 2017-02-01 2020-01-02 Intuitive Surgical Operations, Inc. Systems and methods of registration for image-guided procedures
US10653485B2 (en) * 2014-07-02 2020-05-19 Covidien Lp System and method of intraluminal navigation using a 3D model
US20200179060A1 (en) * 2018-12-06 2020-06-11 Covidien Lp Deformable registration of computer-generated airway models to airway trees
US20200205904A1 (en) * 2013-12-09 2020-07-02 Intuitive Surgical Operations, Inc. Systems and methods for device-aware flexible tool registration
US20220156925A1 (en) * 2019-03-14 2022-05-19 Koninklijke Philips N.V. Dynamic interventional three-dimensional model deformation
US20220175468A1 (en) * 2019-09-09 2022-06-09 Magnisity Ltd. Magnetic flexible catheter tracking system and method using digital magnetometers

Also Published As

Publication number Publication date
EP4648666A1 (fr) 2025-11-19

Similar Documents

Publication Publication Date Title
US11690527B2 (en) Apparatus and method for four dimensional soft tissue navigation in endoscopic applications
US12310677B2 (en) Deformable registration of computer-generated airway models to airway trees
US11024026B2 (en) Adaptive navigation technique for navigating a catheter through a body channel or cavity
US12408991B2 (en) Dynamic deformation tracking for navigational bronchoscopy
US11164324B2 (en) GPU-based system for performing 2D-3D deformable registration of a body organ using multiple 2D fluoroscopic views
US9265468B2 (en) Fluoroscopy-based surgical device tracking method
CN103458764B (zh) Shape-sensing-assisted medical procedure
JP2010510815A (ja) Adaptive navigation technique for navigating a catheter through a body passage or cavity
WO2018165478A1 (fr) Shell-constrained localization of vasculature
CN114748141A (zh) Method and apparatus for real-time reconstruction of the three-dimensional pose of a puncture needle based on X-ray images
CN111403017B (zh) Medical assistance device, system and method for determining a deformation of an object
CN101479769A (zh) Model-based determination of the contraction state of a periodically contracting object
WO2024150235A1 (fr) Dynamic anatomical deformation tracking for in vivo navigation
CN119338860B (zh) Breathing compensation method, apparatus, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24741466

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWP Wipo information: published in national office

Ref document number: 2024741466

Country of ref document: EP