WO2025223977A1 - Navigation system for assisting a user navigating a surgical tool during a medical intervention
- Publication number
- WO2025223977A1 (PCT application PCT/EP2025/060598)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- current
- localization
- navigation system
- patient
- images
- Prior art date
- Legal status: Pending
Classifications
- A61B34/20 — Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B34/30 — Surgical robots
- A61B2034/101 — Computer-aided simulation of surgical operations
- A61B2034/102 — Modelling of surgical devices, implants or prosthesis
- A61B2034/105 — Modelling of the patient, e.g. for ligaments or bones
- A61B2034/2046 — Tracking techniques
- A61B2034/2051 — Electromagnetic tracking systems
- A61B2034/2055 — Optical tracking systems
- A61B2034/2063 — Acoustic tracking systems, e.g. using ultrasound
- A61B90/20 — Surgical microscopes characterised by non-optical aspects
- A61B90/361 — Image-producing devices, e.g. surgical cameras
- A61B2090/364 — Correlation of different images or relation of image positions in respect to the body
- A61B2090/365 — Correlation of different images or relation of image positions in respect to the body: augmented reality, i.e. correlating a live optical image with another image
- A61B2090/373 — Surgical systems with images on a monitor during operation using light, e.g. by using optical scanners
- A61B2090/3735 — Optical coherence tomography [OCT]
- A61B2090/376 — Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
- A61B2090/3762 — Surgical systems with images on a monitor during operation using computed tomography systems [CT]
- A61B2090/378 — Surgical systems with images on a monitor during operation using ultrasound
- A61B2090/3937 — Visible markers
- A61B2090/3941 — Photoluminescent markers
- A61B2090/3966 — Radiopaque markers visible in an X-ray image
- A61B2090/397 — Markers electromagnetic other than visible, e.g. microwave
- A61B2090/3975 — Markers electromagnetic, active
- A61B2090/3979 — Markers electromagnetic, active infrared
- A61B90/98 — Identification means for patients or instruments, e.g. tags, using electromagnetic means, e.g. transponders

(All codes belong to A61B — Diagnosis; Surgery; Identification, within A61 — Medical or Veterinary Science; Hygiene, section A — Human Necessities.)
Description
- The present invention relates to the general field of surgical assistance, and particularly to a navigation system and associated method for assisting a user in navigating a surgical tool during a medical intervention carried out on a patient.
- Surgeons analyze the medical file of the patient and the previously acquired medical images to mentally prepare themselves and visualize a best path with the different steps to carry out.
- Since the medical images can come from different apparatuses, such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT) scans, X-ray images or ultrasound images, the surgeons have to study them separately and mentally aggregate all the information to have a global representation of the situation and foresee what they will see during the surgery.
- A 3D model of the body part of the patient that requires a medical intervention may be generated. Based on the 3D model and their experience, surgeons plan the surgery by selecting a path into the body to reach the zone that needs to be treated.
- Despite all the attention and preoperative preparation of surgeons, the reality is generally quite different during the surgery, particularly during a surgery on soft organs that can move and warp in the body, or on tumors that may have evolved since the acquisition of the preoperative medical images. All these changes may modify the surgery plan intraoperatively. Consequently, there is a need for a navigation system that concatenates preoperative and intraoperative information in real-time, so that surgeons can react and adapt the surgical plan according to the real situation and the real position of the organ to be operated on.
- The present disclosure relates to a navigation system for assisting a user in navigating at least one surgical tool during a medical intervention carried out on a patient, said patient being previously equipped with at least one fiducial located on a specific body part of said patient, said specific body part comprising at least one neurovascular bundle, said navigation system comprising: at least one imaging device configured to obtain (i.e. generate) an intraoperative flux of 2D images, wherein each 2D image comprises a representation of a predefined area around a tip of said at least one surgical tool, and to acquire specific information related to said at least one neurovascular bundle; at least one tracking device configured to obtain, in real-time, localization information related to a current fiducial localization of said at least one fiducial, a current tool localization of the tip of said at least one surgical tool, and a current imaging device localization of said at least one imaging device; and at least one processor configured to: receive said intraoperative flux of 2D images and said localization information; generate, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of said patient based on said intraoperative flux of 2D images, said specific information related to said at least one neurovascular bundle, said current fiducial localization and said current imaging device localization; and compute a corresponding current virtual tool localization in said virtual environment using said current tool localization.
- the navigation system offers several key advantages, including: dynamic integration between 3D modeling, tracking, and imaging; real-time, adaptive, and continuous updates of the patient specific 3D model; and an interactive, closed-loop architecture that ensures seamless and responsive navigation performance.
- the present disclosure thus relates to a system for assisting a user in navigating at least one surgical tool during a medical intervention carried out on a patient, the patient being previously equipped with at least one fiducial located on a specific body part of the patient, the specific body part comprising at least one neurovascular bundle.
- the navigation system comprises at least one imaging device, at least one tracking device, at least one processor, and at least one output.
- The navigation system may optionally also comprise the at least one surgical tool.
- the at least one imaging device is configured to: generate an intraoperative flux of 2D images, wherein each 2D image comprises a representation of a predefined area around a tip of the at least one surgical tool; and acquire specific information related to the at least one neurovascular bundle.
- the at least one tracking device is configured to obtain, in real-time, localization information related to: a current fiducial localization of the at least one fiducial; a current tool localization of the tip of the at least one surgical tool; a current imaging device localization of the at least one imaging device.
- the at least one processor is configured to: receive the intraoperative flux of 2D images and the localization information; generate, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of the patient based on the 2D images, the current fiducial localization and the current imaging device localization; compute a corresponding current virtual tool localization in the virtual environment using the current tool localization.
- the navigation system further comprises a deep learning segmentation module configured to detect in real-time at least said at least one neurovascular bundle from said intraoperative flux of 2D images.
- the at least one output is configured to output in real-time the virtual environment comprising the current augmented 3D model and the current virtual tool localization to assist the user during the medical intervention.
- the user (such as a surgeon for example) is provided with a high level of accuracy and precision, reducing the risk of complications and improving patient outcomes during and after the medical intervention.
- This navigation system is particularly useful in complex or difficult surgical cases where traditional navigation methods may not be sufficient.
- The navigation system comprises one or more of the features described in the following embodiments, taken alone or in any possible combination:
- the at least one processor is further configured to receive preoperative medical images of the patient, the preoperative medical images comprising images of the specific body part and the at least one fiducial, and to generate, before the medical intervention, the patient specific 3D model based on the received preoperative medical images;
- the at least one processor is further configured to apply a blockchain tokenization to the generated current augmented 3D model, so as to obtain a tokenized 3D model;
- the tokenized 3D model is obtained using a blockchain protocol, to ensure traceability, data integrity and interoperability of patient-specific surgical data;
- the tokenized 3D model is stored in a decentralized blockchain-based database compliant with clinical data confidentiality standards;
- the navigation system may include a blockchain tokenization module configured to:
- the preoperative medical images of the patient being previously acquired by at least two different medical imaging systems among: a Magnetic Resonance Imaging - MRI - system or a Computed Tomography - CT - scan, and an ultrasound apparatus;
- the at least one processor is further configured to compute a path for guiding the user during the medical intervention so as to avoid damaging the at least one neurovascular bundle;
- the at least one processor is further configured to update and optimize, in real-time, the path for guiding the user according to the 2D images and the localization information;
- the at least one output is further configured to output the path on the current augmented 3D model
- the at least one output is further configured to output visual guidance information related to the path on the intraoperative flux of 2D images
- the at least one processor is further configured to automatically detect at least one critical area according to the current augmented 3D model and the intraoperative flux of 2D images, the at least one critical area comprising at least one of: the at least one neurovascular bundle, a predetermined organ, a predetermined blood vessel, a predetermined nerve, a predetermined tissue;
- the at least one processor is further configured to detect and segment critical anatomical structures (e.g., neurovascular bundles, blood vessels, nerves) using a trained deep learning model such as a CNN (Convolutional Neural Network).
- the at least one output is further configured to output critical area information related to the at least one critical area in the current augmented 3D model
- the at least one output is further configured to display a real-time augmented interface comprising the current augmented 3D model, the surgical tool path, and context-aware alerts;
- in an embodiment, the at least one output is further configured to display:
- an augmented interface comprising the surgical tool’s position, path, critical structure alerts, and real-time feedback;
- the at least one fiducial is at least one of: radio-opaque fiducials, visual fiducials, electromagnetic fiducials, implantable markers, radiofrequency identification - RFID - tags, biopsy clips, fluorescent markers;
- the at least one imaging device is further configured to visualize a particular substance, the particular substance being previously injected into the patient before or during the medical intervention;
- the at least one imaging device comprises a near-infrared (NIR) camera
- the particular substance is indocyanine green
- generating, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of said patient further comprises superimposing on said patient specific 3D model an image obtained with the at least one imaging device, said obtained image comprising a representation of said at least one neurovascular bundle;
- the at least one surgical tool and the at least one imaging device are firmly attached to each other;
- the navigation system further comprises a memory configured to save and record preselected information, the preselected information comprising at least one of: the patient specific 3D model, the current augmented 3D model, the intraoperative flux of 2D images, the current fiducial localization, the current tool localization, the current imaging device localization, and/or, when available, the received preoperative medical images, the computed path, and/or the updated and optimized path and/or the tokenized 3D model; and/or
- the specific body part comprises at least one soft organ or soft tissue among: prostate, brain, heart, lung, liver, stomach, intestine, kidney, bladder, uterus.
- the navigation system is operatively coupled to a robotic surgical platform for automated or semi-automated execution of the planned surgical path;
- the navigation system is optionally operatively coupled to a robotic surgical system (e.g., Da Vinci system), and provides real-time navigation guidance and dynamic path correction based on the updated patient specific 3D model.
- the present disclosure relates to a computer- implemented method for assisting a user navigating at least one surgical tool during a medical intervention carried out on a patient, the patient being previously equipped with at least one fiducial located on a specific body part of the patient, the specific body part comprising at least one neurovascular bundle.
- The method comprises: obtaining (i.e. acquiring) an intraoperative flux of 2D images from at least one imaging device, wherein each 2D image comprises a representation of a predefined area around a tip of the at least one surgical tool; acquiring specific information related to the at least one neurovascular bundle; obtaining, in real-time, localization information related to: a current fiducial localization of the at least one fiducial; a current tool localization of the tip of the at least one surgical tool; a current imaging device localization of the at least one imaging device; generating, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of the patient based on the 2D images, the current fiducial localization and the current imaging device localization; computing a corresponding current virtual tool localization in the virtual environment using the current tool localization; and outputting in real-time the virtual environment comprising the current augmented 3D model and the current virtual tool localization to assist the user during the medical intervention.
- The disclosure relates to a computer program comprising software code adapted to perform a method for navigating in a patient body compliant with any of the above execution modes when the program is executed by a processor.
- the present disclosure further pertains to a non-transitory program storage device, readable by a computer, tangibly embodying a program of instructions executable by the computer to perform a method for navigating in a patient body, compliant with the present disclosure.
- Such a non-transitory program storage device can be, without limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor device, or any suitable combination of the foregoing. It is to be appreciated that the following, while providing more specific examples, is merely an illustrative and not exhaustive listing as readily appreciated by one of ordinary skill in the art: a portable computer diskette, a hard disk, a ROM, an EPROM (Erasable Programmable ROM) or a Flash memory, a portable CD-ROM (Compact-Disc ROM).
- “Adapted” and “configured” are used in the present disclosure as broadly encompassing initial configuration, later adaptation or complementation of the present device, or any combination thereof alike, whether effected through material or software means (including firmware).
- The term “processor” should not be construed to be restricted to hardware capable of executing software, and refers in a general way to a processing device, which can for example include a computer, a microprocessor, an integrated circuit, or a programmable logic device (PLD).
- the processor may also encompass one or more Graphics Processing Units (GPU), whether exploited for computer graphics and image processing or other functions.
- the instructions and/or data enabling to perform associated and/or resulting functionalities may be stored on any processor- readable medium such as, e.g., an integrated circuit, a hard disk, a CD (Compact Disc), an optical disc such as a DVD (Digital Versatile Disc), a RAM (Random-Access Memory) or a ROM (Read-Only Memory). Instructions may be notably stored in hardware, software, firmware or in any combination thereof.
- Machine learning (ML) traditionally designates computer algorithms that improve automatically through experience, on the basis of training data enabling the parameters of computer models to be adjusted by reducing the gap between expected outputs extracted from the training data and evaluated outputs computed by the computer models.
- Datasets are collections of data used to build an ML mathematical model, so as to make data-driven predictions or decisions.
- ML notably includes supervised learning, i.e. inferring functions from known input-output examples in the form of labelled training data.
- three types of ML datasets are typically dedicated to three respective kinds of tasks: “training”, i.e. fitting the parameters, “validation”, i.e. tuning ML hyperparameters (which are parameters used to control the learning process), and “testing”, i.e. checking independently of a training dataset exploited for building a mathematical model that the latter model provides satisfying results.
- a “neural network (NN)” designates a category of ML comprising nodes (called “neurons”), and connections between neurons modeled by “weights”. For each neuron, an output is given in function of an input or a set of inputs by an “activation function”. Neurons are generally organized into multiple “layers”, so that neurons of one layer connect only to neurons of the immediately preceding and immediately following layers.
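- As a toy illustration of these definitions (made-up layer sizes and weights, plain NumPy), the following computes the output of a small two-layer network:

```python
import numpy as np

def relu(x):
    # Activation function: the output of each neuron given its input.
    return np.maximum(0.0, x)

# Made-up weights for a 2-layer network (3 inputs -> 4 hidden neurons -> 1 output).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    # Neurons of one layer connect only to the immediately preceding layer.
    h = relu(W1 @ x + b1)   # hidden layer
    return W2 @ h + b2      # output layer

print(forward(np.array([0.5, -1.0, 2.0])))
```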
- Figure 1 is a flow chart showing an embodiment of a method for assisting a user navigating at least one surgical tool during a medical intervention;
- Figure 2 is an illustration of a navigation system that may implement the method represented in figure 1.
- the functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software.
- the functions may be provided by a single dedicated processor, a single shared processor, or a plurality of individual processors, some of which may be shared.
- the present disclosure relates to a system, a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out a computer-implemented method, and a computer-implemented method for assisting a user navigating at least one surgical tool during a medical intervention carried out on a patient.
- The navigation can be direct, or indirect if the user goes through a surgical robot, such as Intuitive Surgical robots (cf. the Da Vinci robotic surgical system).
- The patient 10 has been previously equipped with at least one fiducial 20 located on a specific body part 30 of the patient 10.
- this specific body part 30 comprises at least one neurovascular bundle.
- the specific body part 30 may comprise at least one soft organ or soft tissue, such as: prostate, brain, heart, lung, liver, stomach, intestine, kidney, bladder, uterus.
- Neurovascular bundles - NVB - are key structures in the human body comprising a combination of blood vessels (vascular) and nerves (neuro) that travel together within certain regions. These bundles are critical for providing soft tissues or organs with the necessary blood supply and neural connections.
- a neurovascular bundle in the brain is associated with the circle of Willis, a circulatory anastomosis that supplies blood to the brain.
- the circle of Willis is located at the base of the brain and involves the convergence of several arteries, creating a network that ensures a continuous blood supply to various parts of the brain.
- Sparing the neurovascular bundles as much as possible during the medical intervention preserves erectile function and improves the outcome for the patient. That is why providing an accurate navigation system that assists the surgeon in securely navigating and performing surgery around such neurovascular bundles is important.
- the at least one fiducial 20 used can be chosen among: radio-opaque fiducials, visual fiducials, electromagnetic fiducials, implantable markers, radiofrequency identification - RFID - tags, biopsy clips, fluorescent markers.
- Fiducials serve as reference markers or landmarks that help the surgical team precisely locate and track specific points in the patient's anatomy.
- the use of fiducial elements enhances the accuracy and reliability of surgical procedures, especially in minimally invasive and image-guided interventions. For example, they can be placed around a tumor or around a neurovascular bundle to delimit their perimeter and locate them precisely.
- The selection of the fiducial markers depends on several factors, including the specific requirements of the surgical technique, the imaging modalities used, and the anatomical location of the markers. Surgeons typically consider the following criteria when choosing fiducial markers:
- visibility: fiducial markers should be easily detectable in the imaging modalities used during the surgical procedure;
- size and shape: fiducial markers should be small enough to be placed within or near the target anatomical structures of the specific body part without causing interference or obstruction, and the shape may vary depending on the anatomical location and the imaging modality used (examples of common shapes include spheres, cylinders, or crosses);
- stability: the markers should be resistant to displacement or migration caused by patient movements, tissue manipulation, or fluid dynamics within the body;
- ease of placement: surgeons prefer fiducial markers that are easy to place or implant using minimally invasive techniques, such as a needle or catheter;
- compatibility with the navigation system 1000: fiducial markers should be compatible with the navigation system 1000 used in the medical intervention;
- clinical considerations: the choice of fiducial markers may also depend on the specific clinical requirements of the procedure, such as the need for tumor localization, target delineation, or organ motion tracking.
- The navigation method may comprise the following steps (a toy sketch of how these steps may chain together is given after the list):
- Step S10: obtaining an intraoperative flux of 2D images from at least one imaging device, wherein each 2D image comprises a representation of a predefined area around a tip of said at least one surgical tool;
- Step S20: acquiring specific information related to said at least one neurovascular bundle;
- Step S30: obtaining, in real-time, localization information related to a current fiducial localization of said at least one fiducial, a current tool localization of the tip of said at least one surgical tool, and a current imaging device localization of said at least one imaging device;
- Step S40: generating, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of said patient (NB: hereinafter referred to as the “patient specific 3D model”) based on said intraoperative flux of 2D images, said specific information related to said at least one neurovascular bundle, said current fiducial localization and said current imaging device localization;
- Step S50: computing a corresponding current virtual tool localization in said virtual environment using said current tool localization;
- Step S60: outputting in real-time said virtual environment comprising the current augmented 3D model and the current virtual tool localization to assist the user during the medical intervention.
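- Purely as an illustrative sketch of how steps S10 to S60 might chain together in software (every function and class below is a hypothetical stub, not part of the disclosure):

```python
from dataclasses import dataclass

# Toy stand-ins for the system components; every name here is hypothetical.
@dataclass
class Localizations:
    fiducial: tuple
    tool_tip: tuple
    imaging_device: tuple

def next_2d_image():          # step S10: one intraoperative 2D frame (stub)
    return [[0.0]]

def nvb_fluorescence():       # step S20: NVB-specific information (stub)
    return {"nvb_mask": None}

def read_localizations():     # step S30: real-time tracking data (stub)
    return Localizations((0, 0, 0), (1, 2, 3), (0, 0, 1))

def update_model(model, frame, nvb, fiducial, camera):   # step S40
    model["updates"] += 1     # stands in for the real-time 3D model update
    return model

def to_virtual(model, tool_tip):                         # step S50
    return tool_tip           # identity mapping in this toy sketch

model = {"updates": 0}        # stands in for the patient specific 3D model
for _ in range(3):            # real-time loop
    frame = next_2d_image()
    nvb = nvb_fluorescence()
    loc = read_localizations()
    model = update_model(model, frame, nvb, loc.fiducial, loc.imaging_device)
    virtual_tool = to_virtual(model, loc.tool_tip)
    print("step S60, render:", model["updates"], virtual_tool)
```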
- Examples of imaging devices that can be used to acquire and send the intraoperative flux of 2D images received in step S10 are: fluoroscopy systems, which provide real-time X-ray imaging, allowing for continuous visualization of anatomical structures and medical devices such as the fiducials or surgical tools; mini C-arm systems, which are compact fluoroscopy devices that provide high-resolution 2D imaging; intraoperative CT scanners, which may provide 2D and 3D imaging capabilities, offering high-resolution imaging to guide a surgical procedure; mobile X-ray machines equipped with fluoroscopic capabilities, which are often used in operating rooms for intraoperative imaging; surgical endoscopes equipped with 2D imaging sensors, which allow for visualization of internal organs and anatomical structures during minimally invasive procedures, such as laparoscopic surgeries, arthroscopic procedures, and endoscopic interventions; intraoperative ultrasound systems, which provide real-time 2D imaging of soft tissues, organs, and blood vessels, offering dynamic visualization and guidance for surgical navigation; Digital Radiography (DR) systems; surgical microscopes with integrated cameras, which provide magnified, high-resolution 2D imaging of the surgical field during procedures; intraoperative Optical Coherence Tomography (OCT) systems, which utilize light-based imaging to provide high-resolution, cross-sectional 2D images of tissues and structures; or intraoperative imaging catheters, such as intravascular ultrasound (IVUS) catheters and optical coherence tomography (OCT) catheters for example.
- The at least one imaging device further comprises a near-infrared (NIR) camera configured to acquire Indocyanine Green Near-Infrared Fluorescence (ICG-NIRF) images.
- NIR near-infrared
- The specific information related to said at least one neurovascular bundle according to this embodiment may be a fluorescence image. In this case, the patient had been previously injected with a fluorescent dye.
- the at least one imaging device is further configured to allow visualization of a particular substance (such as, for example: indocyanine green which is visible in near-infrared fluorescence, called “ICG-NIRF” or a contrast medium), the particular substance being previously injected into the patient before or during the medical intervention.
- This makes it possible to better identify the neurovascular bundles - NVB - by visualizing vascularization and hemostasis.
- visualizing a particular substance comprises obtaining intraoperative indocyanine green (ICG) fluorescence images.
- the intraoperative identification of neurovascular bundles may rely on the injection of a fluorescent agent, such as indocyanine green (ICG), that allows real-time intraoperative visualization of neurovascular bundles.
- the combination of a fluorescent agent (e.g. ICG) and NIR imaging enhances anatomical visualization, improves the safety and the precision of the surgical procedure and reduces the risk of nerve damage.
- the NIR-ICG imaging (combination of ICG and NIR imaging) may be fully and seamlessly embedded within the navigation system 1000. This allows real-time visualization of vascular structures and tissue perfusion during surgery, directly within the navigation system. Advantageously, the surgeon can therefore make informed decisions based on both anatomical and functional data without switching systems or interrupting the workflow.
- The at least one surgical tool and the at least one imaging device are firmly attached to each other. They are joined together so that the at least one imaging device follows the movements of the at least one surgical tool and acquires a coordinated 2D view of the predefined area around the tip of the at least one surgical tool.
- the at least one imaging device may be embedded in the at least one surgical tool or vice versa.
- the at least one surgical tool may be configured to be manipulated directly or indirectly by the user. Indeed, attaching the at least one surgical tool and the at least one imaging device ensures synchronized movement between the surgical tool and the imaging device.
- The navigation method further comprises calculating, from the specific information related to said at least one neurovascular bundle, supplementary information such as, for example: the name (e.g. NVB located near the prostate, autonomic nerve, particularly sympathetic or parasympathetic nerve, sensory nerve, artery such as the inferior vesical artery or middle rectal artery, vein such as the prostatic venous plexus); the position (e.g. the position in the body of the patient or in the specific body part considered, or the relative position in relation to the at least one surgical tool); and the critical organ or tissue (i.e. critical element or structure) associated with or located near the neurovascular bundle(s).
- calculating the supplementary information related to said at least one neurovascular bundle may be achieved using a convolutional neural network (CNN) such as U-Net, DeepLabV3 or Mask R-CNN.
- Medical images (e.g., MRI or CT scans) may first be annotated by experts to delineate the neurovascular bundles and surrounding critical structures.
- These annotated images may be used to train the CNN, with U-Net using its encoder-decoder architecture to extract features and segment structures like the NVB, and Mask R-CNN performing both object detection and pixelwise segmentation of these critical elements, structures, or areas.
- the CNN may be trained using techniques like data augmentation and optimized with loss functions such as dice coefficient loss to ensure accurate segmentation, allowing the system to automatically identify and/or delineate neurovascular bundles and surrounding tissues in real-time during surgery.
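- As a loose illustration of this kind of training, the sketch below (PyTorch; a tiny encoder-decoder standing in for U-Net, without its skip connections, and random tensors standing in for expert-annotated images) minimizes a soft Dice loss:

```python
import torch
import torch.nn as nn

# Minimal encoder-decoder in the spirit of U-Net (real U-Nets add skip
# connections and more depth); shapes and training data are made up.
class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.dec = nn.Sequential(nn.ConvTranspose2d(8, 8, 2, stride=2), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))

    def forward(self, x):
        return self.dec(self.enc(x))

def dice_loss(logits, target, eps=1e-6):
    # Soft Dice: overlap between the predicted mask and the annotation.
    p = torch.sigmoid(logits)
    inter = (p * target).sum()
    return 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)

net = TinySegNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
img = torch.rand(4, 1, 64, 64)                    # stand-in annotated frames
mask = (torch.rand(4, 1, 64, 64) > 0.9).float()   # stand-in NVB masks
for _ in range(5):                                # toy training loop
    opt.zero_grad()
    loss = dice_loss(net(img), mask)
    loss.backward()
    opt.step()
```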
- The localization information related to the current fiducial localization, the current tool localization and the current imaging device localization may be relative coordinates between each element (i.e. the at least one fiducial, the at least one surgical tool and/or the at least one imaging device), or absolute coordinates in a chosen coordinate system (e.g.: the coordinate system associated with a tracking device).
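- For example, assuming a tracker that reports absolute poses in its own coordinate system, converting to relative coordinates is a composition of homogeneous transforms (generic NumPy sketch; all coordinate values are invented):

```python
import numpy as np

def pose(R, t):
    # 4x4 homogeneous transform from rotation matrix R and translation t.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Absolute poses in the tracking device's coordinate system (made-up values).
T_tracker_tool = pose(np.eye(3), np.array([10.0, 5.0, 2.0]))
T_tracker_cam = pose(np.eye(3), np.array([8.0, 5.0, 0.0]))

# Relative coordinates of the tool tip expressed in the imaging device frame.
T_cam_tool = np.linalg.inv(T_tracker_cam) @ T_tracker_tool
print(T_cam_tool[:3, 3])   # -> [2. 0. 2.]
```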
- the generation of the current augmented 3D model in step S40 may be realized by using a machine learning model which would update, in real time, a patient specific 3D model with information extracted from the intraoperative flux of 2D images received, said specific information related to said at least one neurovascular bundle, the current fiducial localization and the current imaging device localization.
- This approach provides a more comprehensive and accurate representation of the body part that is operated and its structures.
- the current augmented 3D model may be dynamically updated using a machine learning model trained on the preoperative and intraoperative (i.e. intraoperative flux of 2D images) medical images.
- the generation of the current augmented 3D model may be multimodal based on, for example, MRI and/or CT and/or ultrasound images which enables a detailed and global view of the patient’s anatomical structures, more particularly in the specific body part of the patient.
- The patient specific 3D model may then be dynamically updated via rigid and elastic registration, allowing for continuous adaptation in response to tissue deformations during surgery: for instance, a rigid registration algorithm may be used at the beginning of the procedure to align the patient specific 3D model with the patient's anatomy, based on fiducial elements implanted before the procedure and using the ICP (Iterative Closest Point) algorithm and the least squares method; non-rigid registration may also be used to adjust the patient specific 3D model in real-time to tissue deformations induced by surgery and align it with perioperative images (e.g. the intraoperative flux of 2D images), based on the Diffeomorphic Demons algorithm, ICP, FFD (Free Form Deformation), and TPS (Thin Plate Splines) algorithms.
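- The least-squares core of such a rigid registration, i.e. the alignment step that ICP repeats after re-pairing closest points, can be sketched with the standard SVD-based (Kabsch) solution; the fiducial coordinates below are invented:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q
    (the inner step that ICP iterates after re-matching closest points)."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

# Invented fiducials: model-space points and their tracked patient-space poses.
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
Q = P @ Rz.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_fit(P, Q)
print(np.allclose(P @ R.T + t, Q))   # True: model aligned to the patient
```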
- a real-time image fusion may be used to ensure an evolving and precise mapping of the patient’s anatomy, integrating both imaging data (e.g. information extracted from the intraoperative flux of 2D images received) and contextual information (e.g. current fiducial localization, current imaging device localization) to provide the surgeon with an accurate, up-to-date representation of the patient anatomy throughout the surgery.
- Real-time image fusion may also include a multimodal fusion of preoperative and perioperative images using Mutual Information (MI), Deep Learning (CNN) algorithms, SSIM (Structural Similarity Index Measure) and feature matching (ORB: Oriented FAST and Rotated BRIEF, SURF: Speeded-Up Robust Features) to superimpose the intraoperative flux of 2D images received and the patient specific 3D model and maximize their correspondence.
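- A histogram-based estimate of the Mutual Information that such a fusion seeks to maximize can be sketched as follows (NumPy; the two images are synthetic stand-ins for preoperative and perioperative data):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # Joint intensity histogram -> MI = sum p(x,y) log(p(x,y) / (p(x) p(y))).
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

# Synthetic example: an "MRI-like" slice and a nonlinearly remapped copy
# (mimicking a second modality); MI stays high despite intensity differences.
rng = np.random.default_rng(1)
mri = rng.random((128, 128))
other = np.sqrt(mri) + 0.05 * rng.random((128, 128))
print(mutual_information(mri, other))
```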
- Generating, in a virtual environment, a current augmented 3D model by updating a patient-specific 3D model of the specific body part further comprises superimposing on the patient-specific 3D model an image obtained with a Near Infra-Red (NIR) camera, the obtained image comprising a representation of at least one neurovascular bundle, allowing for a dynamic overlay of the representation on the patient-specific 3D model in real-time.
- the at least one imaging device may be an NIR camera configured to capture in real-time the fluorescence emitted by ICG, enabling the identification of neurovascular bundles and their real-time superimposition onto the current augmented 3D model; this may also allow accounting for vascular and perfusion changes as well as tissue oxygenation monitoring.
- the at least one processor is further configured to apply a blockchain tokenization to the generated current augmented 3D model, so as to obtain a tokenized 3D model.
- The tokenized 3D model may be obtained using a blockchain protocol, to ensure traceability, data integrity and interoperability of patient-specific surgical data.
- blockchain tokenization ensures the authenticity of the augmented 3D model by associating it with a timestamp and a record of modifications, safeguarding data integrity, eliminating unauthorized changes, and providing secure access to authorized medical entities.
- the tokenized 3D model may be stored in a decentralized blockchain-based database compliant with clinical data confidentiality standards.
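- The disclosure does not detail a specific protocol; purely as a toy illustration of the tokenization idea, a snapshot of the model can be bound to a timestamp and to the previous record by content hashing (this sketch omits consensus, signatures and decentralized storage, so it is not a real blockchain client):

```python
import hashlib
import json
import time

# Toy append-only chain: each record binds a model hash to a timestamp
# and to the previous record's hash, illustrating tamper evidence.
chain = []

def tokenize_model(model_bytes: bytes) -> dict:
    prev = chain[-1]["block_hash"] if chain else "0" * 64
    record = {
        "model_hash": hashlib.sha256(model_bytes).hexdigest(),
        "timestamp": time.time(),
        "prev_hash": prev,
    }
    record["block_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return record

token = tokenize_model(b"serialized augmented 3D model snapshot")
print(token["block_hash"])
```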
- the patient specific 3D model can be received or loaded from a memory before the medical intervention, or it can also be generated by the navigation method based on preoperative medical images (see the optional steps S80 and S90 described below).
- Examples of technologies that may be used to render the virtual environment and the current augmented 3D model include:
- Virtual Reality (VR) platforms, which use head-mounted displays (HMDs) and motion-tracking technology to immerse users in virtual worlds;
- Augmented Reality (AR) Applications which overlay digital information, including 3D models, onto the user's view of the real-world using smartphones, tablets, or AR glasses (e.g.: Microsoft HoloLens, Google ARCore, Apple ARKit...);
- 3D Modeling Software with Visualization Capabilities which generally include built-in tools for visualizing 3D models in virtual environments (e.g.: Autodesk Maya, Blender, SketchUp...);
- Immersive Visualization Systems such as CAVE (Cave Automatic Virtual Environment) systems and PowerWalls, which create large-scale immersive environments for visualizing 3D models (e.g.: CAVE systems, PowerWalls, 3D Visualization Labs...);
- Computing the corresponding current virtual tool localization (i.e. the current virtual tool localization that corresponds to the current real tool localization of the tip of the at least one surgical tool) in said virtual environment in step S50 is intended to enable the user to locate, in real time, the at least one surgical tool in space (particularly in relation to the at least one neurovascular bundle - NVB) and see its orientation in the current augmented 3D model.
- The current augmented 3D model and the current virtual tool localization outputted in real time in step S60 may be displayed to the user, for example on a screen or a virtual reality headset (in order to have an immersive view), so as to assist the user during the medical intervention by enabling him or her to visualize in real time the position of the surgical tool in relation to the at least one NVB, which should be preserved as much as possible during the medical intervention.
- the intraoperative flux of 2D images is also outputted according to the current tool localization, so as to follow in real time the position of the at least one surgical tool and have a direct and local view of the predefined area around the tip of the at least one surgical tool.
- the at least one output may be further configured to display a real-time augmented interface comprising the current augmented 3D model, surgical tool path, and context-aware alerts.
- the at least one output may render an augmented interface including the current augmented 3D model, the surgical path, and context-aware warnings or alerts.
- the at least one output may be further configured to display: - an augmented interface comprising the surgical tool’s position, path, critical structure alerts, and real-time feedback; - multimodal data overlays including fluorescence signals, updated trajectories, and segmentation-based risk zones.
- The navigation method further comprises optional steps, such as receiving preoperative images (step S80), generating a patient specific 3D model (step S90), computing a path within the patient specific 3D model (step S100), and updating and optimizing the path (step S110).
- the patient specific 3D model of the specific body part of the patient which is used at step S40, is generated before the medical intervention based on preoperative medical images.
- these preoperative medical images of the patient are generally acquired upstream from the medical intervention.
- the patient specific 3D model of the specific body part of the patient is generated (step S90).
- machine learning can be used to generate the patient specific 3D model using the preoperative medical images (such as DICOM images for example).
- learning algorithms can be trained to identify and segment the anatomical structures of interest in the medical images.
- For instance, critical anatomical structures (e.g., neurovascular bundles, nerves, blood vessels) may be detected and segmented using a trained deep learning model such as a CNN (Convolutional Neural Network).
- Such a patient specific 3D model of the specific body part makes it possible to plan the medical intervention after diagnosis (e.g.: to determine the best path to access the organ or anatomical element to operate on/treat), and corresponds specifically to the patient to be operated on, with all his/her specific features, rather than to a general anatomical representation of the corresponding body part.
- these preoperative medical images may be previously acquired by at least two different medical imaging systems among: a Magnetic Resonance Imaging - MRI - system or a Computed Tomography - CT - scan, and an ultrasound apparatus.
- Each medical imaging system gives access to different levels of detail (each medical imaging system having a different resolution) and different types of tissues (e.g.: MRI makes it possible to visualize soft tissue details based on their water content and molecular properties, ultrasound makes it possible to visualize moving structures, such as the beating heart or flowing blood in vessels, CT scans make it possible to visualize bone injuries or detect complex fractures, and X-rays are more specific to bone imaging).
- The patient specific 3D model of the specific body part of the patient may incorporate different types of images, which is more precise than a 3D model that would be based on only one type of medical images (e.g.: MRI or CT scans only).
- A path for guiding the user during the medical intervention may be computed (step S100), based for example on the patient specific 3D model of the specific body part of the patient, and so as, for example, to avoid damaging said at least one neurovascular bundle (NVB).
- This path may be computed according to different options, for example:
- Patient-specific factors: individual patient characteristics, such as age, overall health, comorbidities, and anatomical variations, influence the choice of guiding path;
- such a path may be computed by a machine learning model, which could have been trained on previous medical interventions information for example.
- the path may be computed based on a RNN (Recurrent Neural Network) and reinforcement learning (DQN: Deep Q-Network, PPO: Proximal Policy Optimization).
- the RNN may for example be trained on historical surgical data (e.g. recorded paths from previous surgeries) to predict the next best surgical tool movement based on previous actions, patient-specific anatomy, and/or deformations.
- The DQN may learn the optimal actions by assigning rewards based on safe surgical tool movements. Real-time data from the imaging device (i.e. the intraoperative flux of 2D images) may be taken into account when computing the path.
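- Purely as a toy stand-in for this reinforcement-learning component (tabular Q-learning on an invented one-dimensional corridor rather than a DQN on clinical data), a reward scheme penalizing entry into a critical cell might look like:

```python
import numpy as np

# Toy 1D corridor: the tool starts at cell 0, the target is cell 3 and a
# critical structure occupies cell 5. Tabular Q-learning (a stand-in for
# the DQN) learns to reach the target without entering the critical cell.
n_states, actions = 8, (-1, +1)
Q = np.zeros((n_states, 2))
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(2)

def reward(s):
    if s == 5:
        return -10.0   # heavy penalty: critical structure
    if s == 3:
        return 5.0     # target reached
    return -0.1        # small step cost

for _ in range(300):                       # training episodes
    s = 0
    for _ in range(20):
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(max(s + actions[a], 0), n_states - 1)
        Q[s, a] += alpha * (reward(s2) + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == 3:
            break
print(Q.argmax(axis=1))   # greedy action per state (1 = move right)
```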
- The path may be updated and optimized (step S110) throughout the medical intervention, in real time, according to the 2D images received and the localization information.
- the path may be updated and optimized using for example an averaging shortest path algorithm, potential fields for obstacle avoidance, and/or a machine learning model trained on prior interventions.
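- A minimal sketch of the obstacle-avoidance idea, here in its elastic-band variant (invented geometry and gains, not the disclosed algorithm): interior waypoints are smoothed toward their neighbors and repelled from a critical structure inside a safety margin:

```python
import numpy as np

def update_path(path, obstacle, k_smooth=0.5, k_rep=0.1, margin=1.0):
    """One elastic-band iteration: each interior waypoint is pulled toward
    the midpoint of its neighbors (short, smooth path) and pushed away from
    the obstacle whenever it lies inside the safety margin."""
    new = path.copy()
    for i in range(1, len(path) - 1):
        p = path[i]
        f = k_smooth * (0.5 * (path[i - 1] + path[i + 1]) - p)
        d = np.linalg.norm(p - obstacle)
        if d < margin:                       # repulsive potential gradient
            f += k_rep * (1.0 / d - 1.0 / margin) * (p - obstacle) / d**2
        step = np.linalg.norm(f)
        if step > 0.2:
            f *= 0.2 / step                  # clip step size for stability
        new[i] = p + f
    return new

# Invented geometry: straight path passing right under an "NVB" at (2.5, 0.3).
path = np.linspace([0.0, 0.0], [5.0, 0.0], 11)
nvb = np.array([2.5, 0.3])
for _ in range(200):
    path = update_path(path, nvb)
print(path[5])   # mid waypoint has been pushed away from the critical structure
```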
- The path may be updated and optimized using interactive assistance features such as haptic feedback to the surgeon (e.g. an alert when the surgical tool deviates beyond a predefined safety threshold) to prevent path errors (e.g. the surgical tool approaching a critical structure) and/or overlaying the augmented 3D model onto a surgical interface (e.g., a robotic console), ultimately reducing the risk of damage to a critical structure.
- The system may identify a new position of the NVB and integrate this data with a dynamic path planning algorithm which recalculates the surgical tool’s path to avoid the NVB, thereby ensuring better safety and efficiency.
- this real-time optimization of the path allows continuous real-time adaptation of the path of the surgical tool 100.
- the calculated surgical tool’s path may be updated and optimized using haptic feedback to the surgeon based on a real-time risk detection system that integrates several algorithms aiming to enhance surgical precision and safety: this system may combine Bayesian models, Kalman Filters, and Monte Carlo simulations to continuously assess the surgical environment, predict potential risks, and adjust the calculated surgical tool's path accordingly.
- Bayesian models may be used to update and refine the probabilities of potential risks, while the Kalman Filter may help to estimate the real-time localization of the surgical tool (i.e. current tool localization) and critical structures.
- Monte Carlo simulations may be employed to predict the likelihood of collisions or adverse interactions with critical structures by simulating different scenarios in a probabilistic manner.
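- For the Kalman Filter component, a minimal constant-velocity filter smoothing a noisy one-dimensional tool-tip coordinate might look like this (matrices and noise levels are invented):

```python
import numpy as np

# Constant-velocity Kalman filter on a 1D tool-tip coordinate.
dt = 0.05                                  # tracking period (s), invented
F = np.array([[1, dt], [0, 1]])            # state transition (pos, vel)
H = np.array([[1.0, 0.0]])                 # only the position is measured
Qn = 1e-4 * np.eye(2)                      # process noise covariance
Rn = np.array([[0.01]])                    # measurement noise covariance
x, P = np.zeros(2), np.eye(2)

rng = np.random.default_rng(3)
true_pos = 0.0
for _ in range(100):
    true_pos += 0.2 * dt                   # tool moving at 0.2 units/s
    z = true_pos + rng.normal(0, 0.1)      # noisy tracker measurement
    x, P = F @ x, F @ P @ F.T + Qn         # predict
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + Rn
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P            # update
print(x)   # filtered position and velocity estimate
```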
- object tracking algorithms may be utilized to track the position of the surgical tool (i.e. current tool localization of the tip of the at least one surgical tool) and critical anatomical structures in the virtual environment, providing continuous feedback about the proximity of the surgical tool to critical structures.
- the tracking data may be then fed into a machine learning model, such as Random Forests or Support Vector Machines (SVMs), which may be trained on historical surgical data (e.g. recorded paths from previous surgeries) to classify situations where the surgical tool is likely to approach critical structures.
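- A sketch of such a classifier with scikit-learn; the two features (distance to the nearest critical structure, tool speed) and the labeling rule are synthetic stand-ins for historical surgical data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

# Synthetic stand-in for historical surgical data: features are distance to
# the nearest critical structure (mm) and tool speed (mm/s); label 1 means
# the tool went on to approach the structure dangerously.
X = np.column_stack([rng.uniform(0, 20, 1000), rng.uniform(0, 10, 1000)])
y = ((X[:, 0] < 5) & (X[:, 1] > 3)).astype(int)   # invented labeling rule

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# At runtime: probability that the current state is risky -> haptic alert.
current = np.array([[4.0, 6.0]])   # 4 mm away, moving fast
if clf.predict_proba(current)[0, 1] > 0.8:
    print("trigger haptic alert")
```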
- Haptic feedback may be provided to the surgeon as a tactile alert when the surgical tool deviates beyond a predefined safety threshold (e.g. a distance limit between the surgical tool and a critical structure), allowing the surgeon to make real-time adjustments.
- a visual alert may be displayed on a surgical interface (e.g. robotic console, augmented reality overlay), highlighting an updated path and warning of a potential risk of damaging a critical structure.
- this combination of machine learning, real-time data processing, and interactive feedback ensures the surgical path remains safer, more effective, and more optimized throughout the surgical procedure.
- the surgical tool’s path computed before the medical intervention is outputted to be displayed on the current augmented 3D model (i.e. the updated 3D model), so as to better assist the user during the medical intervention by showing to the user the best path to avoid damaging the NVB among other things.
- visual guidance information related to the path may be outputted and displayed on the intraoperative flux of 2D images to better assist the user during the medical intervention.
- The visual guidance information can be: arrows to indicate the direction to follow, a color change if the user gets too close to a critical element, structure, or area (such as a vital organ or the at least one NVB for example), the name of some anatomical elements, etc.
- The navigation method further detects at least one critical element, structure or area according to the current augmented 3D model and said intraoperative flux of 2D images.
- the at least one critical element, structure or area may comprise: the at least one neurovascular bundle, a predetermined organ, a predetermined blood vessel, a predetermined nerve, a predetermined tissue, for example.
- critical area information related to said at least one critical element, structure or area may advantageously be outputted and displayed on the current augmented 3D model and/or on the intraoperative flux of 2D images (as described above).
NAVIGATION SYSTEM
- The navigation method for assisting a user navigating at least one surgical tool during a medical intervention carried out on a patient, as previously described in the different embodiments, can be implemented in a navigation system 1000 as illustrated in figure 2.
- the navigation system 1000 comprises: optionally, the at least one surgical tool 100; at least one imaging device 200; at least one tracking device 300; at least one processor 400; and at least one output 500.
- Examples of surgical tools 100 that may be used are: surgical scalpels, electrocautery devices, retractors, laparoscopic instruments, robotic surgical instruments, endoscopic equipment, morcellators, suturing instruments, specific catheters, hemostatic agents, surgical staplers, and specimen retrieval devices. Furthermore, the at least one surgical tool may be configured to be manipulated directly or indirectly by the user.
- Examples of imaging devices are: fluoroscopy systems, mini C-arm systems, intraoperative CT scanners, mobile X-ray machines, surgical endoscopes, intraoperative ultrasound systems, Digital Radiography (DR) systems, surgical microscopes, intraoperative Optical Coherence Tomography (OCT) systems, or intraoperative imaging catheters.
- the at least one imaging device may be embedded or partially embedded in the at least one surgical tool, or mechanically connected to the at least one surgical tool.
- the at least one imaging device is configured to generate an intraoperative flux of 2D images, wherein each 2D image comprises a representation of a predefined area around a tip of said at least one surgical tool, and to carry out step S20 as previously described.
- this helps prevent accidental injury to critical elements, structures, or areas, by providing real-time monitoring of the proximity of the tip of the surgical tool to a critical element, structure, or area.
- it aids in adapting to tissue deformation, ensuring the surgical tool stays within a safe limit even as the anatomy changes, while enhancing the surgeon's control over the procedure, improving safety and accuracy of the surgery.
- the at least one tracking device is configured to carry out step S30 as previously described.
- depending on the type of fiducial used, examples of tracking devices are:
- Optical Tracking Systems such as infrared cameras to track the position of fiducial markers equipped with reflective or passive optical markers (e.g.: Polaris Spectra Optical Tracking System, NDI Aurora Optotrak Optical Tracking System);
- Electromagnetic Tracking Systems using electromagnetic fields to track the position and orientation of fiducial markers equipped with electromagnetic sensors (e.g.: Ascension 3D Guidance TrakSTAR Electromagnetic Tracking System, NDI Aurora Electromagnetic Tracking System);
- Hybrid Tracking Systems which combine multiple tracking technologies, such as optical, electromagnetic, and ultrasound, to provide robust and accurate tracking of fiducial markers (e.g.: Brainlab Hybrid Navigation System, Northern Digital Inc. (NDI) Polaris Viera Hybrid Tracking System);
- Robot-Assisted Tracking Systems integrating fiducial tracking capabilities into robotic surgical platforms, allowing for precise navigation and instrument guidance during minimally invasive procedures (e.g.: da Vinci Surgical System with Integrated Tracking, Mazor Robotics Renaissance Guidance System);
- Intraoperative Imaging Systems such as fluoroscopy, CT, and MRI.
- the at least one tracking device may be embedded or partially embedded in the at least one surgical tool.
- the at least one processor is configured to carry out steps S10, S40 and S50 as previously described.
- the at least one processor may also carry out the following optional steps S80, S90, S100 and/or S110 as previously described.
- the navigation system may include a blockchain tokenization module configured to: tokenize the dynamically updated 3D model to ensure data traceability and integrity; and store the model securely on a decentralized ledger, compliant with healthcare confidentiality regulations.
- the navigation system may be operatively coupled to a robotic surgical platform (e.g., the Da Vinci system) for automated or semi-automated execution of a portion of the planned surgical path, and may provide real-time navigation guidance and dynamic path correction based on the updated patient specific 3D model.
- all the elements of the navigation system contribute to a single, dynamic, patient-specific, multi-channel and secure system for assisting a user in navigating at least one surgical tool 100 during a medical intervention carried out on a patient.
- the present disclosure relates to a computer program product comprising software code configured to perform the method for navigating in a patient body according to any one of the embodiments previously described.
- the present disclosure relates to a non-transitory program storage device, readable by a computer tangibly embodying a program of instructions executable by the computer to perform the method for navigating in a patient body according to any one of the embodiments previously described.
Abstract
The present invention relates to a navigation system (1000) and associated method for assisting a user navigating a surgical tool during a medical intervention carried out on a patient (10) previously equipped with at least one fiducial (20) located on a specific body part (30). The navigation system and method combine preoperative and intraoperative information in real-time so as to generate a current augmented 3D model of the specific body part and a current virtual tool localization to assist the user during the medical intervention. The invention provides a GPS-like surgical navigation system (1000), dynamically updating a patient-specific 3D model (i.e. current augmented 3D model) in real time during soft-tissue procedures, ensuring preservation of critical neurovascular bundles.
Description
NAVIGATION SYSTEM FOR ASSISTING A USER NAVIGATING A SURGICAL TOOL DURING A MEDICAL INTERVENTION
FIELD OF INVENTION
[0001] The present invention relates to the general field of surgical assistance, particularly, to a navigation system and associated method for assisting a user in navigating a surgical tool during a medical intervention carried out on a patient.
BACKGROUND OF INVENTION
[0002] Modern surgical procedures increasingly demand a high level of precision and adaptability to complex anatomical structures. The evolution of minimally invasive techniques has necessitated the development of advanced tools to assist surgeons in navigating intricate pathways within the human body. Conventional surgical methods often rely on visual cues and tactile feedback alone, leaving room for error and limiting the surgeon's ability to explore challenging anatomical regions.
[0003] For example, before each surgery, surgeons analyze the medical file of the patient and the medical images previously acquired to mentally prepare themselves and visualize a best path with the different steps to carry out. As the medical images can come from different apparatuses such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT) scans, X-ray images or ultrasound images, for example, the surgeons have to study them separately and mentally aggregate all the information to have a global representation of the situation and foresee what they will see during the surgery.
[0004] Advantageously, when the medical images are 3D, a 3D model of the body part of the patient that requires a medical intervention may be generated. Based on the 3D model and their experience, surgeons plan the surgery by selecting a path into the body to reach the zone that needs to be treated.
[0005] Despite all the attention and preoperative preparation of surgeons, the reality is generally quite different during the surgery, particularly during a surgery on soft organs that can move and warp in the body, or on tumors that may have evolved since the acquisition of the preoperative medical images. All these changes may modify the surgery plan intraoperatively. Consequently, there is a need for a navigation system that concatenates preoperative and intraoperative information in real-time so that surgeons can react and adapt the surgical plan according to the real situation and the real position of the organ to be operated on.
SUMMARY
[0006] The present disclosure relates to a navigation system for assisting a user in navigating at least one surgical tool during a medical intervention carried out on a patient, said patient being previously equipped with at least one fiducial located on a specific body part of said patient, said specific body part comprising at least one neurovascular bundle, said navigation system comprising: at least one imaging device configured to: o obtain (i.e. acquire) an intraoperative flux of 2D images, wherein each 2D image comprises a representation of a predefined area around a tip of said at least one surgical tool; and o acquire specific information related to said at least one neurovascular bundle; at least one tracking device configured to obtain, in real-time, localization information related to: o a current fiducial localization of said at least one fiducial; o a current tool localization of the tip of said at least one surgical tool; o a current imaging device localization of said at least one imaging device; at least one processor configured to: o receive said intraoperative flux of 2D images and said localization information;
o generate, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of said patient based on said intraoperative flux of 2D images, said specific information related to said at least one neurovascular bundle, said current fiducial localization and said current imaging device localization; o compute a corresponding current virtual tool localization in said virtual environment using said current tool localization; at least one output configured to output in real-time said virtual environment comprising the current augmented 3D model and the current virtual tool localization to assist the user during the medical intervention.
[0007] Advantageously, the navigation system offers several key advantages, including: dynamic integration between 3D modeling, tracking, and imaging; real-time, adaptive, and continuous updates of the patient specific 3D model; and an interactive, closed-loop architecture that ensures seamless and responsive navigation performance.
[0008] The present disclosure thus relates to a system for assisting a user in navigating at least one surgical tool during a medical intervention carried out on a patient, the patient being previously equipped with at least one fiducial located on a specific body part of the patient, the specific body part comprising at least one neurovascular bundle.
[0009] The navigation system comprises at least one imaging device, at least one tracking device, at least one processor, and at least one output. The navigation system may optionally comprise also the at least one surgical tool.
[0010] The at least one imaging device is configured to: generate an intraoperative flux of 2D images, wherein each 2D image comprises a representation of a predefined area around a tip of the at least one surgical tool; and acquire specific information related to the at least one neurovascular bundle.
[0011] Advantageously, this allows a dynamic integration of peroperative data (i.e. intraoperative flux of 2D images) in updating in real-time the patient specific 3D model.
[0012] The at least one tracking device is configured to obtain, in real-time, localization information related to: a current fiducial localization of the at least one fiducial; a current tool localization of the tip of the at least one surgical tool; a current imaging device localization of the at least one imaging device.
[0013] The at least one processor is configured to: receive the intraoperative flux of 2D images and the localization information; generate, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of the patient based on the 2D images, the current fiducial localization and the current imaging device localization; compute a corresponding current virtual tool localization in the virtual environment using the current tool localization.
[0014] According to an embodiment, the navigation system further comprises a deep learning segmentation module configured to detect in real-time at least said at least one neurovascular bundle from said intraoperative flux of 2D images.
[0015] The at least one output is configured to output in real-time the virtual environment comprising the current augmented 3D model and the current virtual tool localization to assist the user during the medical intervention.
[0016] With such a navigation system that uses intraoperative information in real-time, the user (such as a surgeon for example) is provided with a high level of accuracy and precision, reducing the risk of complications and improving patient outcomes during and after the medical intervention. This navigation system is particularly useful in complex or difficult surgical cases where traditional navigation methods may not be sufficient.
[0017] According to other advantageous aspects of the invention, the navigation system comprises one or more of the features described in the following embodiments, taken alone or in any possible combination:
[0018] the at least one processor is further configured to receive preoperative medical images of the patient, the preoperative medical images comprising images of the specific body part and the at least one fiducial, and to generate, before the medical intervention, the patient specific 3D model based on the received preoperative medical images;
[0019] in an embodiment, the current augmented 3D model is dynamically updated using a machine learning model trained on the preoperative and intraoperative (i.e. intraoperative flux of 2D images) medical images;
[0020] the at least one processor is further configured to apply a blockchain tokenization to the generated current augmented 3D model, so as to obtain a tokenized 3D model;
[0021] in an embodiment, the tokenized 3D model is obtained using a blockchain protocol, to ensure traceability, data integrity and interoperability of patient-specific surgical data;
[0022] in an embodiment, the tokenized 3D model is stored in a decentralized blockchain-based database compliant with clinical data confidentiality standards;
[0023] in an embodiment, the navigation system may include a blockchain tokenization module configured to:
- tokenize the dynamically updated 3D model to ensure data traceability and integrity;
- store the model securely on a decentralized ledger, compliant with healthcare confidentiality regulations;
[0024] the preoperative medical images of the patient being previously acquired by at least two different medical imaging systems among: a Magnetic Resonance Imaging - MRI - system or a Computed Tomography - CT - scan, and an ultrasound apparatus;
[0025] the at least one output is further configured to output the intraoperative flux of 2D images according to the current tool localization;
[0026] the at least one processor is further configured to compute a path for guiding the user during the medical intervention so as to avoid damaging the at least one neurovascular bundle;
[0027] the at least one processor is further configured to update and optimize, in real-time, the path for guiding the user according to the 2D images and the localization information;
[0028] advantageously, this allows a real-time computation of an optimized path based on soft tissue deformations and dynamic vascularization visualized for instance by indocyanine green (ICG);
[0029] the at least one output is further configured to output the path on the current augmented 3D model;
[0030] the at least one output is further configured to output visual guidance information related to the path on the intraoperative flux of 2D images;
[0031] the at least one processor is further configured to automatically detect at least one critical area according to the current augmented 3D model and the intraoperative flux of 2D images, the at least one critical area comprising at least one of: the at least one neurovascular bundle, a predetermined organ, a predetermined blood vessel, a predetermined nerve, a predetermined tissue;
[0032] the at least one processor is further configured to detect and segment critical anatomical structures (e.g., neurovascular bundles, blood vessels, nerves) using a trained deep learning model such as a CNN (Convolutional Neural Network).
[0033] advantageously, this allows automatic detection of critical area(s), along with a contextualized display, and the ability of the navigation system to adapt to tissue deformations;
[0034] the at least one output is further configured to output critical area information related to the at least one critical area in the current augmented 3D model;
[0035] in an embodiment, the at least one output is further configured to display a real-time augmented interface comprising the current augmented 3D model, surgical tool path, and context-aware alerts;
[0036] in an embodiment, the at least one output is further configured to display:
- an augmented interface comprising the surgical tool’s position, path, critical structure alerts, and real-time feedback;
- multimodal data overlays including fluorescence signals, updated trajectories, and segmentation-based risk zones;
[0037] the at least one fiducial is at least one of: radio-opaque fiducials, visual fiducials, electromagnetic fiducials, implantable markers, radiofrequency identification - RFID - tags, biopsy clips, fluorescent markers;
[0038] the at least one imaging device is further configured to visualize a particular substance, the particular substance being previously injected into the patient before or during the medical intervention;
[0039] the at least one imaging device comprises a near-infrared (NIR) camera;
[0040] the particular substance is indocyanine green;
[0041] wherein generating, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of said patient further comprises superimposing on said patient specific 3D model an image obtained with the at least one imaging device, said obtained image comprising a representation of said at least one neurovascular bundle;
[0042] the at least one surgical tool and the at least one imaging device are firmly attached to each other;
[0043] the navigation system further comprises a memory configured to save and record preselected information, the preselected information comprising at least one of: the patient specific 3D model, the current augmented 3D model, the intraoperative flux of 2D images, the current fiducial localization, the current tool localization, the current imaging device localization, and/or, when available, the received preoperative medical images, the path computed, and/or the path updated and optimized and/or the tokenized 3D model; and/or
[0044] the specific body part comprises at least one soft organ or soft tissue among: prostate, brain, heart, lung, liver, stomach, intestine, kidney, bladder, uterus.
[0045] in an embodiment, the navigation system is operatively coupled to a robotic surgical platform for automated or semi-automated execution of the planned surgical path;
[0046] in an embodiment, the navigation system is optionally operatively coupled to a robotic surgical system (e.g., Da Vinci system), and provides real-time navigation guidance and dynamic path correction based on the updated patient specific 3D model.
[0047] According to another aspect, the present disclosure relates to a computer-implemented method for assisting a user navigating at least one surgical tool during a medical intervention carried out on a patient, the patient being previously equipped with at least one fiducial located on a specific body part of the patient, the specific body part comprising at least one neurovascular bundle.
[0048] The method comprises: obtaining (i.e. acquiring) an intraoperative flux of 2D images from at least one imaging device, wherein each 2D image comprises a representation of a predefined area around a tip of the at least one surgical tool; acquiring specific information related to the at least one neurovascular bundle; obtaining, in real-time, localization information related to: a current fiducial localization of the at least one fiducial; a current tool localization of the tip of the at least one surgical tool; a current imaging device localization of the at least one imaging device; generating, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of the patient based on the 2D images, the current fiducial localization and the current imaging device localization; computing a corresponding current virtual tool localization in the virtual environment using the current tool localization; and
outputting in real-time the virtual environment comprising the current augmented 3D model and the current virtual tool localization to assist the user during the medical intervention.
[0049] The preferential and advantageous characteristics linked to the system previously described are also applicable to the process described below.
[0050] In addition, the disclosure relates to a computer program comprising software code adapted to perform a method for navigating in a patient body compliant with any of the above execution modes when the program is executed by a processor.
[0051] In other words, the disclosure relates to a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out a computer-implemented method for assisting a user navigating at least one surgical tool during a medical intervention carried out on a patient, said patient being previously equipped with at least one fiducial located on a specific body part of said patient, said specific body part comprising at least one neurovascular bundle, said method comprising: obtaining an intraoperative flux of 2D images from at least one imaging device, wherein each 2D image comprises a representation of a predefined area around a tip of said at least one surgical tool; acquiring specific information related to said at least one neurovascular bundle; obtaining, in real-time, localization information related to: o a current fiducial localization of said at least one fiducial; o a current tool localization of the tip of said at least one surgical tool; o a current imaging device localization of said at least one imaging device; generating, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of said patient based on said 2D images, said current fiducial localization and said current imaging device localization; computing a corresponding current virtual tool localization in said virtual environment using said current tool localization; and
outputting in real-time said virtual environment comprising the current augmented 3D model and the current virtual tool localization to assist the user during the medical intervention.
[0052] The present disclosure further pertains to a non-transitory program storage device, readable by a computer, tangibly embodying a program of instructions executable by the computer to perform a method for navigating in a patient body, compliant with the present disclosure.
[0053] Such a non-transitory program storage device can be, without limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor device, or any suitable combination of the foregoing. It is to be appreciated that the following, while providing more specific examples, is merely an illustrative and not exhaustive listing as readily appreciated by one of ordinary skill in the art: a portable computer diskette, a hard disk, a ROM, an EPROM (Erasable Programmable ROM) or a Flash memory, a portable CD-ROM (Compact-Disc ROM).
DEFINITIONS
[0054] In the present invention, the following terms have the following meanings:
[0055] The terms “adapted” and “configured” are used in the present disclosure as broadly encompassing initial configuration, later adaptation or complementation of the present device, or any combination thereof alike, whether effected through material or software means (including firmware).
[0056] The term “processor” should not be construed to be restricted to hardware capable of executing software, and refers in a general way to a processing device, which can for example include a computer, a microprocessor, an integrated circuit, or a programmable logic device (PLD). The processor may also encompass one or more Graphics Processing Units (GPU), whether exploited for computer graphics and image processing or other functions. Additionally, the instructions and/or data enabling to perform associated and/or resulting functionalities may be stored on any processor-readable medium such as, e.g., an integrated circuit, a hard disk, a CD (Compact Disc), an optical disc such as a DVD (Digital Versatile Disc), a RAM (Random-Access Memory) or a ROM (Read-Only Memory). Instructions may be notably stored in hardware, software, firmware or in any combination thereof.
[0057] “Machine learning (ML)” designates in a traditional way computer algorithms improving automatically through experience, on the ground of training data enabling to adjust parameters of computer models through gap reductions between expected outputs extracted from the training data and evaluated outputs computed by the computer models.
[0058] A “hyper-parameter” presently means a parameter used to carry out an upstream control of a model construction, such as a remembering-forgetting balance in sample selection or a width of a time window, by contrast with a parameter of a model itself, which depends on specific situations. In ML applications, hyper-parameters are used to control the learning process.
[0059] “Datasets” are collections of data used to build an ML mathematical model, so as to make data-driven predictions or decisions. In “supervised learning” (i.e. inferring functions from known input-output examples in the form of labelled training data), three types of ML datasets (also designated as ML sets) are typically dedicated to three respective kinds of tasks: “training”, i.e. fitting the parameters, “validation”, i.e. tuning ML hyperparameters (which are parameters used to control the learning process), and “testing”, i.e. checking independently of a training dataset exploited for building a mathematical model that the latter model provides satisfying results.
[0060] A “neural network (NN)” designates a category of ML comprising nodes (called “neurons”), and connections between neurons modeled by “weights”. For each neuron, an output is given in function of an input or a set of inputs by an “activation function”. Neurons are generally organized into multiple “layers”, so that neurons of one layer connect only to neurons of the immediately preceding and immediately following layers.
[0061] The above ML definitions are compliant with their usual meaning, and can be completed with numerous associated features and properties, and definitions of related numerical objects, well known to a person skilled in the ML field. Additional terms will be defined, specified or commented wherever useful throughout the following description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0062] The present disclosure will be better understood, and other specific features and advantages will emerge upon reading the following description of particular and non-restrictive illustrative embodiments, the description making reference to the annexed drawings wherein:
[0063] Figure 1 is a flow chart showing an embodiment of a method for assisting a user navigating at least one surgical tool during a medical intervention.
[0064] Figure 2 is an illustration of a navigation system that may implement the method represented in figure 1.
DETAILED DESCRIPTION
[0065] The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its scope.
[0066] All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
[0067] Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in
the future, i.e., any elements developed that perform the same function, regardless of structure.
[0068] Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein may represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
[0069] The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, a single shared processor, or a plurality of individual processors, some of which may be shared.
[0070] It should be understood that the elements shown in the figures may be implemented in various forms of hardware, software or combinations thereof. Preferably, these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces.
[0071] The present disclosure relates to a system, a computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out a computer-implemented method, and a computer-implemented method for assisting a user navigating at least one surgical tool during a medical intervention carried out on a patient.
[0072] The navigation can be direct or indirect if the user goes through a surgical robot, like Intuitive robots (cf. the Da Vinci robotic surgical system).
[0073] As illustrated in Figure 2, the patient 10 has been previously equipped with at least one fiducial 20 located on a specific body part 30 of the patient 10. Particularly, this specific body part 30 comprises at least one neurovascular bundle. For example, the specific body part 30 may comprise at least one soft organ or soft tissue, such as: prostate, brain, heart, lung, liver, stomach, intestine, kidney, bladder, uterus.
[0074] Neurovascular bundles - NVB - are key structures in the human body comprising a combination of blood vessels (vascular) and nerves (neuro) that travel together within certain regions. These bundles are critical for providing soft tissues or organs with the necessary blood supply and neural connections. For example, one prominent neurovascular bundle in the brain is associated with the circle of Willis, a circulatory anastomosis that supplies blood to the brain. The circle of Willis is located at the base of the brain and involves the convergence of several arteries, creating a network that ensures a continuous blood supply to various parts of the brain. In the case of radical prostatectomy, sparing the neurovascular bundles as much as possible during the medical intervention preserves erectile function and improves the outcome for the patient. That is why providing an accurate navigation system that assists the surgeon to securely navigate and perform surgery around such neurovascular bundles is important.
[0075] Regarding the at least one fiducial 20 used, it can be chosen among: radio-opaque fiducials, visual fiducials, electromagnetic fiducials, implantable markers, radiofrequency identification - RFID - tags, biopsy clips, fluorescent markers.
[0076] Fiducials serve as reference markers or landmarks that help the surgical team precisely locate and track specific points in the patient's anatomy. The use of fiducial elements enhances the accuracy and reliability of surgical procedures, especially in minimally invasive and image-guided interventions. For example, they can be placed around a tumor or around a neurovascular bundle to delimit their perimeter and locate them precisely.
[0077] The selection of the fiducial markers depends on several factors, including the specific requirements of the surgical technique, the imaging modalities used, and the anatomical location of the markers. Surgeons typically consider the following criteria when choosing fiducial markers:
- visibility in imaging modalities: fiducial markers should be easily detectable in the imaging modalities used during the surgical procedure;
- size and shape: fiducial markers should be small enough to be placed within or near the target anatomical structures of the specific body part without causing interference or obstruction, and the shape may vary depending on the anatomical location and the imaging modality used (examples of common shapes include spheres, cylinders, or crosses);
- biocompatibility: fiducial markers should be made of materials that are biocompatible and non-reactive within the body to minimize the risk of adverse reactions or tissue inflammation (examples of materials commonly used for fiducial markers include titanium, stainless steel, gold, or biocompatible plastics);
- stability and durability: fiducial markers should remain stable and securely in place throughout the duration of the surgical procedure and any subsequent imaging scans. Furthermore, the markers should be resistant to displacement or migration caused by patient movements, tissue manipulation, or fluid dynamics within the body;
- ease of placement: surgeons prefer fiducial markers that are easy to place or implant using minimally invasive techniques, such as a needle or catheter;
- compatibility with the navigation system 1000: fiducial markers should be compatible with the navigation system 1000 used in the medical intervention;
- clinical considerations: the choice of fiducial markers may also depend on the specific clinical requirements of the procedure, such as the need for tumor localization, target delineation, or organ motion tracking.
[0078] NAVIGATION METHOD
[0079] As illustrated on Figure 1, the navigation method may comprise the following steps:
Step S10: obtaining an intraoperative flux of 2D images from at least one imaging device, wherein each 2D image comprises a representation of a predefined area around a tip of said at least one surgical tool;
Step S20: acquiring specific information related to said at least one neurovascular bundle;
Step S30: obtaining, in real-time, localization information related to a current fiducial localization of said at least one fiducial, a current tool localization of the tip of said at least one surgical tool, and a current imaging device localization of said at least one imaging device;
Step S40: generating, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of said patient (NB: hereinafter referred to as the “patient specific 3D model”) based on said intraoperative flux of 2D images, said specific information related to said at least one neurovascular bundle, said current fiducial localization and said current imaging device localization;
Step S50: computing a corresponding current virtual tool localization in said virtual environment using said current tool localization; and
Step S60: outputting in real-time said virtual environment comprising the current augmented 3D model and the current virtual tool localization to assist the user during the medical intervention.
[0080] Examples of imaging devices that can be used to acquire and send the intraoperative flux of 2D images received in step S10 are:
- fluoroscopy systems which provide real-time X-ray imaging, allowing for continuous visualization of anatomical structures and medical devices such as the fiducials or surgical tools;
- mini C-Arm systems which are compact fluoroscopy devices that provide high-resolution 2D imaging;
- intraoperative CT Scanners which may provide 2D and 3D imaging capabilities, offering high-resolution imaging to guide a surgical procedure;
- mobile X-ray Machines equipped with fluoroscopic capabilities, which are often used in operating rooms for intraoperative imaging;
- surgical endoscopes equipped with 2D imaging sensors which allow for visualization of internal organs and anatomical structures during minimally invasive procedures, such as laparoscopic surgeries, arthroscopic procedures, and endoscopic interventions;
- intraoperative ultrasound systems which provide real-time 2D imaging of soft tissues, organs, and blood vessels, offering dynamic visualization and guidance for surgical navigation;
- Digital Radiography (DR) systems which offer high-quality 2D imaging with enhanced image processing capabilities;
- Surgical Microscopes with Integrated Cameras which provide magnified, high-resolution 2D imaging of the surgical field during procedures;
- intraoperative Optical Coherence Tomography (OCT) Systems which utilize light-based imaging to provide high-resolution, cross-sectional 2D images of tissues and structures; or
- intraoperative imaging catheters such as intravascular ultrasound (IVUS) catheters and optical coherence tomography (OCT) catheters for example.
[0081] According to one embodiment, the at least one imaging device further comprises a near-infrared (NIR) camera configured to acquire Indocyanine Green - Near-Infrared Fluorescence. The specific information related to said at least one neurovascular bundle according to this embodiment may be a fluorescence image. In this case, the patient has been previously injected with a fluorescent dye.
[0082] In other words, the at least one imaging device is further configured to allow visualization of a particular substance (such as, for example: indocyanine green, which is visible in near-infrared fluorescence, called “ICG-NIRF”, or a contrast medium), the particular substance being previously injected into the patient before or during the medical intervention. Advantageously, this enables better identification of the neurovascular bundles - NVB - by visualizing vascularization and hemostasis. In an example, visualizing a particular substance comprises obtaining intraoperative indocyanine green (ICG) fluorescence images.
[0083] Indeed, the intraoperative identification of neurovascular bundles may rely on the injection of a fluorescent agent, such as indocyanine green (ICG), that allows real-time intraoperative visualization of neurovascular bundles. After intravenous administration of the fluorescent agent, the fluorescent agent binds to plasma proteins and becomes detectable using for example a near-infrared (NIR) camera. Advantageously, the
combination of a fluorescent agent (e.g. ICG) and NIR imaging enhances anatomical visualization, improves the safety and the precision of the surgical procedure and reduces the risk of nerve damage.
[0084] The NIR-ICG imaging (combination of ICG and NIR imaging) may be fully and seamlessly embedded within the navigation system 1000. This allows real-time visualization of vascular structures and tissue perfusion during surgery, directly within the navigation system. Advantageously, the surgeon can therefore make informed decisions based on both anatomical and functional data without switching systems or interrupting the workflow.
[0085] In some embodiments, the at least one surgical tool and the at least one imaging device are firmly attached to each other. They are joined together so that the at least one imaging device follows the movements of the at least one surgical tool and acquires a coordinated 2D view of the predefined area around the tip of the at least one surgical tool. For example, the at least one imaging device may be embedded in the at least one surgical tool or vice versa. Furthermore, the at least one surgical tool may be configured to be manipulated directly or indirectly by the user. Indeed, attaching the at least one surgical tool and the at least one imaging device ensures synchronized movement between the surgical tool and the imaging device. This configuration may enhance precision in tracking the surgical tool’s path and monitoring surrounding structures while enabling the surgeon to receive up-to-date information, improving the accuracy of the procedure and preventing unintended damage to nearby tissues. According to one embodiment, the navigation method further comprises calculating, from the specific information related to said at least one neurovascular bundle, supplementary information such as, for example: the name (e.g. in the case of NVB located near the prostate: autonomic nerve, particularly sympathetic or parasympathetic nerve, sensory nerve, artery like inferior vesical artery or middle rectal artery, vein like prostatic venous plexus...) and/or the position (e.g.: the position in the body of the patient or in the specific body part considered, or the relative position in relation to the at least one surgical tool) of the nervous and vascular elements forming the neurovascular bundles, and the critical organ or tissue (i.e. critical element, critical structure) associated with or located near the neurovascular bundle(s).
[0086] For example, calculating the supplementary information related to said at least one neurovascular bundle may be achieved using a convolutional neural network (CNN) such as U-Net, DeepLabV3 or Mask R-CNN. Medical images (e.g., MRI or CT scans) may be collected and preprocessed, for instance with manual annotations to identify critical elements or structures or areas. These annotated images may be used to train the CNN, with U-Net using its encoder-decoder architecture to extract features and segment structures like the NVB, and Mask R-CNN performing both object detection and pixelwise segmentation of these critical elements, structures, or areas. The CNN may be trained using techniques like data augmentation and optimized with loss functions such as dice coefficient loss to ensure accurate segmentation, allowing the system to automatically identify and/or delineate neurovascular bundles and surrounding tissues in real-time during surgery.
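To make the segmentation step above concrete, the following is a minimal, illustrative sketch of training a 2D segmentation network with a dice-coefficient loss, in the spirit of the U-Net / Mask R-CNN approaches named above. The tiny convolutional network and the synthetic image/mask tensors are placeholders standing in for a real encoder-decoder architecture and for annotated MRI/CT slices; none of this is mandated by the present disclosure.

```python
# Illustrative only: a toy network and synthetic data stand in for a real
# U-Net-style model trained on annotated preoperative/intraoperative images.
import torch
import torch.nn as nn

def dice_loss(pred, target, eps=1e-6):
    # pred, target: (N, 1, H, W); pred holds probabilities, target binary masks
    inter = (pred * target).sum(dim=(2, 3))
    denom = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

model = nn.Sequential(                       # toy stand-in for an encoder-decoder
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1), nn.Sigmoid(),       # per-pixel NVB probability
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(4, 1, 128, 128)                  # synthetic 2D frames
masks = (torch.rand(4, 1, 128, 128) > 0.9).float()   # synthetic NVB masks

for _ in range(10):            # one would iterate over a real annotated dataset
    optimizer.zero_grad()
    loss = dice_loss(model(images), masks)
    loss.backward()
    optimizer.step()
```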
[0087] In step S30, the localization information related to the current fiducial localization, the current tool localization and the current imaging device localization may be relative coordinates between each element (i.e. the at least one fiducial, the at least one surgical tool and/or the at least one imaging device), or absolute coordinates in a chosen coordinate system (e.g.: the coordinate system associated with a tracking device).
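As a toy illustration of how such localization information may be brought into a common frame, the sketch below applies a homogeneous transform (assumed here to have been estimated during fiducial registration) to express a tracked tool-tip position in the coordinate system of the 3D model; all numeric values are made up.

```python
# Sketch: expressing the tracked tool tip in the 3D-model frame via a
# homogeneous transform (the transform itself would come from registration).
import numpy as np

def to_homogeneous(R, t):
    T = np.eye(4)
    T[:3, :3] = R      # 3x3 rotation
    T[:3, 3] = t       # translation
    return T

R = np.eye(3)                               # rotation from registration (assumed)
t = np.array([12.0, -4.5, 30.0])            # translation in mm (illustrative)
T_model_from_tracker = to_homogeneous(R, t)

tip_tracker = np.array([101.2, 55.7, -12.3, 1.0])  # tool tip in tracker coordinates
tip_model = T_model_from_tracker @ tip_tracker
print(tip_model[:3])                        # current virtual tool localization
```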
[0088] The generation of the current augmented 3D model in step S40 may be realized by using a machine learning model which would update, in real time, a patient specific 3D model with information extracted from the intraoperative flux of 2D images received, said specific information related to said at least one neurovascular bundle, the current fiducial localization and the current imaging device localization. This approach provides a more comprehensive and accurate representation of the body part that is operated and its structures. The current augmented 3D model may be dynamically updated using a machine learning model trained on the preoperative and intraoperative (i.e. intraoperative flux of 2D images) medical images.
[0089] For example, the generation of the current augmented 3D model may be multimodal based on, for example, MRI and/or CT and/or ultrasound images which enables a detailed and global view of the patient’s anatomical structures, more particularly in the specific body part of the patient. The patient specific 3D model may
then be dynamically updated via rigid and elastic registration, allowing for continuous adaptation in response to tissue deformations during surgery: for instance, a rigid registration algorithm may be used at the beginning of the procedure to align the patient specific 3D model with the patient's anatomy based on fiducial elements implanted before the procedure and using the ICP (Iterative Closest Point) algorithm and least squares method; non-rigid registration may also be used to adjust the patient specific 3D model in real-time to tissue deformations induced by surgery and align it with perioperative images (e.g. intraoperative flux of 2D images) based on the Diffeomorphic Demons Algorithm, ICP, FFD (Free Form Deformation), and TPS (Thin Plate Splines) algorithms.
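The least-squares core of one ICP iteration can be sketched as follows: given tentatively matched fiducial positions in the model frame and in the patient frame, the optimal rotation and translation follow from the Kabsch (SVD) solution; full ICP alternates this fit with closest-point re-matching. The fiducial coordinates below are synthetic.

```python
# Kabsch/least-squares rigid fit, i.e. the inner step of ICP (illustrative data).
import numpy as np

def rigid_fit(src, dst):
    """Least-squares R, t such that R @ src_i + t ≈ dst_i; src, dst of shape (N, 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

model_fids = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
theta = np.deg2rad(15.0)                           # synthetic ground-truth pose
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
patient_fids = model_fids @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = rigid_fit(model_fids, patient_fids)         # recovers R_true and the shift
# Full ICP would re-match closest points and repeat this fit until convergence.
```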
[0090] Additionally, a real-time image fusion may be used to ensure an evolving and precise mapping of the patient’s anatomy, integrating both imaging data (e.g. information extracted from the intraoperative flux of 2D images received) and contextual information (e.g. current fiducial localization, current imaging device localization) to provide the surgeon with an accurate, up-to-date representation of the patient anatomy throughout the surgery. Real-time image fusion may also include a multimodal fusion of preoperative and perioperative images using Mutual Information (MI), Deep Learning (CNN) algorithms, SSIM (Structural similarity Index measure) and Feature matching (ORB: Oriented FAST and Rotated BRIEF, SURF: Speeded-Up Robust Features) to superimpose the intraoperative flux of 2D images received and the patient specific 3D model and maximize their correspondence.
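By way of illustration, the mutual-information measure mentioned above can be computed from the joint intensity histogram of two images, the registration then searching for the transform that maximizes it. The sketch below uses random synthetic images rather than real preoperative/perioperative data.

```python
# Mutual information from a joint intensity histogram (higher = better aligned).
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint probability
    px = pxy.sum(axis=1, keepdims=True)             # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                    # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

a = np.random.rand(128, 128)                        # synthetic frame
b = 0.7 * a + 0.3 * np.random.rand(128, 128)        # partially correlated frame
print(mutual_information(a, b))                     # > MI of two unrelated images
```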
[0091] In an embodiment, generating, in a virtual environment, a current augmented 3D model by updating a patient-specific 3D model of the specific body part further comprises superimposing on the patient-specific 3D model an image obtained with a Near Infra-Red (NIR) camera, the obtained image comprising a representation of at least one neurovascular bundle, allowing for a dynamic overlay of the representation on the patient-specific 3D model in real-time. In an example, the at least one imaging device may be an NIR camera configured to capture in real-time the fluorescence emitted by ICG, enabling the identification of neurovascular bundles and their real-time superimposition onto the current augmented 3D model; this may also allow accounting for vascular and perfusion changes as well as tissue oxygenation monitoring.
[0092] In an embodiment, the at least one processor is further configured to apply a blockchain tokenization to the generated current augmented 3D model, so as to obtain a tokenized 3D model. In other words, the tokenized 3D model may be obtained using a blockchain protocol, to ensure traceability, data integrity and interoperability of patient-specific surgical data. Advantageously, blockchain tokenization ensures the authenticity of the augmented 3D model by associating it with a timestamp and a record of modifications, safeguarding data integrity, eliminating unauthorized changes, and providing secure access to authorized medical entities. In an embodiment, the tokenized 3D model may be stored in a decentralized blockchain-based database compliant with clinical data confidentiality standards.
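A minimal sketch of the tokenization idea follows, under the assumption that each model update is hashed together with a timestamp and the previous record's token to form a tamper-evident chain. An actual deployment would anchor such records on a real blockchain ledger and add access control, neither of which is shown here.

```python
# Tamper-evident record chain for successive 3D-model states (illustrative).
import hashlib
import json
import time

def tokenize(model_bytes, prev_token):
    record = {
        "model_hash": hashlib.sha256(model_bytes).hexdigest(),  # model fingerprint
        "timestamp": time.time(),
        "prev": prev_token,                                     # links the chain
    }
    record["token"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

genesis = tokenize(b"initial patient-specific 3D model", prev_token="0" * 64)
update = tokenize(b"augmented 3D model after frame 42", prev_token=genesis["token"])
# Modifying any earlier record changes its token and breaks every later link.
```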
[0093] The patient specific 3D model can be received or loaded from a memory before the medical intervention, or it can also be generated by the navigation method based on preoperative medical images (see the optional steps S80 and S90 described below).
[0094] Examples of virtual environments that can be usable in the present method are:
- Virtual Reality (VR) Environments, which utilize head-mounted displays (HMDs) and motion-tracking technology to immerse users in virtual worlds (e.g.: Oculus Rift, HTC Vive, PlayStation VR...);
- Augmented Reality (AR) Applications which overlay digital information, including 3D models, onto the user's view of the real world using smartphones, tablets, or AR glasses (e.g.: Microsoft HoloLens, Google ARCore, Apple ARKit...);
- 3D Modeling Software with Visualization Capabilities which generally include built-in tools for visualizing 3D models in virtual environments (e.g.: Autodesk Maya, Blender, SketchUp...);
- Immersive Visualization Systems, such as CAVE (Cave Automatic Virtual Environment) systems and PowerWalls, which create large-scale immersive environments for visualizing 3D models (e.g.: CAVE systems, PowerWalls, 3D Visualization Labs...);
- Web-Based Virtual Environments which leverage web technologies to deliver interactive 3D experiences through web browsers, and are accessible across various devices and platforms, making them ideal for collaborative design and remote visualization (e.g.: Mozilla Hubs, Sketchfab, Unity WebGL...);
- and Game Engines with Virtual Environment Capabilities, which provide tools for creating interactive virtual environments, including real-time rendering of 3D models (e.g.: Unity, Unreal Engine, CryEngine...).
[0095] Computing the corresponding current virtual tool localization (i.e. current virtual tool localization that corresponds to the current real tool localization of the tip of the at least one surgical tool) in said virtual environment in step S50, is intended to enable the user to locate, in real time, the at least one surgical tool in space (particularly in relation to the at least one neurovascular bundle - NVB) and see its orientation in the current augmented 3D model.
[0096] The current augmented 3D model and the current virtual tool localization outputted in real time in step S60 may for example be displayed to the user, for example on a screen or a virtual reality headset (in order to have an immersive view), so as to assist the user during the medical intervention by enabling them, for example, to visualize in real time the position of the surgical tool in relation to the at least one NVB, which should be preserved as much as possible during the medical intervention.
[0097] In an embodiment, the intraoperative flux of 2D images is also outputted according to the current tool localization, so as to follow in real time the position of the at least one surgical tool and have a direct and local view of the predefined area around the tip of the at least one surgical tool.
[0098] In an embodiment, the at least one output may be further configured to display a real-time augmented interface comprising the current augmented 3D model, surgical tool path, and context-aware alerts. In other words, the at least one output may render an augmented interface including the current augmented 3D model, the surgical path, and context-aware warnings or alerts.
[0099] In an embodiment, the at least one output may be further configured to display: - an augmented interface comprising the surgical tool’s position, path, critical structure alerts, and real-time feedback;
- multimodal data overlays including fluorescence signals, updated trajectories, and segmentation-based risk zones.
[0100] Advantageously and as illustrated in figure 1, the navigation method further comprises optional steps, such as receiving preoperative images (S80), generating a patient specific 3D model (S90), computing a path within the patient specific 3D model (S100), and updating and optimizing the path (S110).
[0101] According to an embodiment, the patient specific 3D model of the specific body part of the patient, which is used at step S40, is generated before the medical intervention based on preoperative medical images. To do so, these preoperative medical images of the patient are generally acquired upstream from the medical intervention. After receiving the preoperative medical images that comprise images of the specific body part to be operated and of the at least one fiducial (step S80), the patient specific 3D model of the specific body part of the patient is generated (step S90). According to an embodiment, machine learning can be used to generate the patient specific 3D model using the preoperative medical images (such as DICOM images for example). Particularly, learning algorithms can be trained to identify and segment the anatomical structures of interest in the medical images. In an embodiment, critical anatomical structures (e.g., neurovascular bundles, nerves, blood vessels) are detected and segmented using a trained deep learning model such as a CNN (Convolutional Neural Network).
[0102] Such a patient specific 3D model of the specific body part enables planning the medical intervention (e.g.: determining the best path to access the organ or anatomical element to operate/treat) after diagnosis, and corresponds specifically to the patient to be operated on, with all his/her specific features, and not to a general anatomical representation of the corresponding body part.
[0103] Advantageously, these preoperative medical images may be previously acquired by at least two different medical imaging systems among: a Magnetic Resonance Imaging - MRI - system or a Computed Tomography - CT - scan, and an ultrasound apparatus. Each medical imaging system enables access to different levels of detail (each medical imaging system having a different resolution) and different types of tissues (e.g.: MRI enables visualizing soft tissue details based on their water content and molecular properties, ultrasounds enable visualizing moving structures, such as the beating heart or flowing blood in vessels, CT scans enable visualizing bone injuries or detecting complex fractures, X-rays are more specific for bone imaging...). Thus, the patient specific 3D model of the specific body part of the patient may incorporate different types of images, which is more precise than a 3D model which would be based only on one type of medical images (e.g.: MRI or CT scans only).
[0104] Advantageously and as previously stated, a path for guiding the user during the medical intervention may be computed (step S100), based for example on the patient specific 3D model of the specific body part of the patient, for example so as to avoid damaging said at least one neurovascular bundle (NVB). This path may be computed according to different options, for example:
- Anatomical Considerations, the surgeon evaluating the patient's anatomy to determine the optimal path for accessing the target area while avoiding vital structures, such as NVBs, nerves, blood vessels, and organs;
- Path of Least Trauma, the guiding path being chosen to minimize trauma to surrounding tissues and organs;
- Accessibility and Exposure of the surgical site and of the target area, by taking into account factors such as tissue retraction, instrument placement, and the angle of approach to optimize visibility and maneuverability during the procedure;
- Functional Outcome, so as to preserve functional integrity and optimize postoperative outcomes;
- Patient-Specific Factors, individual patient characteristics, such as age, overall health, comorbidities, and anatomical variations, influencing the choice of guiding path;
- Technological Support, such as surgical navigation systems, imaging modalities, and intraoperative tools that may aid in planning and executing the guiding path;
- Preoperative imaging studies, such as CT scans, MRI scans, or angiograms, that may help the surgeon visualize the anatomy and plan the optimal path for surgery;
- Safety and Risk Mitigation, by evaluating potential risks, such as injury to adjacent structures, bleeding, or nerve damage; and/or
- Surgeon Expertise and Experience, by taking into account the surgeon's proficiency with different surgical techniques and approaches.
[0105] In an embodiment, such a path may be computed by a machine learning model, which could have been trained on previous medical interventions information for example.
[0106] For example, the path may be computed based on a RNN (Recurrent Neural Network) and reinforcement learning (DQN: Deep Q-Network, PPO: Proximal Policy Optimization). The RNN may for example be trained on historical surgical data (e.g. recorded paths from previous surgeries) to predict the next best surgical tool movement based on previous actions, patient-specific anatomy, and/or deformations. The DQN may learn the optimal actions by assigning rewards based on safe surgical tool movements (e.g. smooth suturing of wounds, controlled incision avoiding unnecessary tissue damage, minimal tool rotation to avoid tissue strain, gentle retraction to maintain tissue visibility without excessive force) and/or avoiding critical structures, elements, or areas, while the PPO may refine the path through stable, incremental updates to ensure safety. Real-time data from the imaging device (i.e. intraoperative flux of 2D images) may also allow adjusting the patient specific 3D model, enabling dynamic, adaptive path planning throughout the surgery.
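A heavily simplified sketch of the reward shaping underlying such an approach is given below: candidate tool motions are scored by progress toward the target minus a penalty for approaching critical structures. A DQN/PPO agent would learn a policy from such rewards rather than act greedily as here, and all geometry, gains, and the 5 mm safety threshold are illustrative assumptions.

```python
# Toy reward function for candidate tool moves (not a trained DQN/PPO agent).
import numpy as np

target = np.array([50.0, 20.0, 10.0])                  # planned target (mm)
critical = np.array([[40.0, 18.0, 9.0],
                     [55.0, 25.0, 12.0]])              # e.g. sampled NVB points

def reward(pos, nxt):
    # Positive for moving closer to the target...
    progress = np.linalg.norm(pos - target) - np.linalg.norm(nxt - target)
    # ...penalized when the move enters an assumed 5 mm safety margin.
    nearest = np.min(np.linalg.norm(critical - nxt, axis=1))
    penalty = 10.0 if nearest < 5.0 else 0.0
    return progress - penalty

pos = np.array([0.0, 0.0, 0.0])
candidates = [pos + d for d in np.eye(3)] + [pos - d for d in np.eye(3)]
best = max(candidates, key=lambda c: reward(pos, c))   # greedy stand-in for a policy
print(best)
```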
[0107] Advantageously, the path may be updated and optimized (step S110) throughout the medical intervention, in real time, according to the received 2D images and the localization information.
[0108] The path may be updated and optimized using, for example, a shortest-path algorithm, potential fields for obstacle avoidance, and/or a machine learning model trained on prior interventions.
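As a concrete illustration of the potential-fields option of paragraph [0108], the sketch below combines a linear attractive pull toward the target with a repulsive push away from nearby obstacles and follows the resulting force field; the gains, influence radius, step size, and positions are hypothetical.

```python
# Minimal potential-fields sketch for the obstacle-avoidance update of
# paragraph [0108]. Gains, influence radius and positions (in mm) are
# hypothetical assumptions.
import numpy as np

K_ATT, K_REP, RHO0, STEP = 1.0, 0.5, 15.0, 0.5   # illustrative parameters

def force(p, target, obstacles):
    """Attractive force toward the target plus repulsion from near obstacles."""
    f = K_ATT * (target - p)                      # linear attractive field
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if 1e-6 < d < RHO0:                       # obstacle within influence
            f += K_REP * (1.0 / d - 1.0 / RHO0) / d**2 * (p - obs) / d
    return f

def replan(start, target, obstacles, max_iter=500):
    """Follow the force field => waypoint list for the tool tip."""
    p, path = start.astype(float), [start.copy()]
    for _ in range(max_iter):
        f = force(p, target, obstacles)
        p = p + STEP * f / (np.linalg.norm(f) + 1e-9)
        path.append(p.copy())
        if np.linalg.norm(p - target) < 1.0:      # within 1 mm of the target
            break
    return np.array(path)

# Hypothetical NVB position detected in the latest 2D images:
path = replan(np.array([0.0, 0.0]), np.array([50.0, 40.0]),
              obstacles=[np.array([25.0, 20.0])])
print(path.shape)
```

Each time the segmentation reports a new position for a critical structure, replan() can be re-run with the updated obstacle list, which is the real-time adaptation described next.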
[0109] The path may be updated and optimized using interactive assistance features such as haptic feedback to the surgeon (e.g. an alert when the surgical tool deviates beyond a predefined safety threshold) to prevent path errors (e.g. the surgical tool approaching a critical structure), and/or by overlaying the augmented 3D model onto a surgical interface (e.g. a robotic console), ultimately reducing the risk of damaging a critical structure. For example, using the received 2D images and a deep learning-based tissue segmentation model, the system may identify a new position of the NVB and feed this data into a dynamic path planning algorithm which recalculates the surgical tool's path to avoid the NVB, thereby improving safety and efficiency. Advantageously, this enables continuous real-time adaptation of the path of the surgical tool 100.
[0110] In an example, the calculated surgical tool's path may also be updated and optimized using the intraoperative ICG fluorescence images. Advantageously, ICG fluorescence images may be used to adjust the calculated surgical tool's path in real time, improving the preservation of critical structures (e.g. an NVB) during surgery.
[0111] The calculated surgical tool's path may be updated and optimized using haptic feedback to the surgeon based on a real-time risk detection system that integrates several algorithms to enhance surgical precision and safety: this system may combine Bayesian models, Kalman filters, and Monte Carlo simulations to continuously assess the surgical environment, predict potential risks, and adjust the calculated surgical tool's path accordingly. Bayesian models may be used to update and refine the probabilities of potential risks, while the Kalman filter may help estimate the real-time localization of the surgical tool (i.e. the current tool localization) and of critical structures. Monte Carlo simulations may be employed to predict the likelihood of collisions or adverse interactions with critical structures by simulating different scenarios in a probabilistic manner.
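The following is a minimal sketch of how the Kalman filtering and Monte Carlo pieces of paragraph [0111] could fit together: a constant-velocity filter estimates the tool-tip state from tracker measurements, and sampled rollouts of that state estimate the probability of entering a danger zone around a critical structure. The motion model, noise levels, danger radius, and horizon are all hypothetical.

```python
# Minimal sketch of the risk-detection pipeline of paragraph [0111]:
# a constant-velocity Kalman filter plus Monte Carlo collision-risk
# estimation. All matrices and noise levels are assumptions.
import numpy as np

rng = np.random.default_rng(0)
dt = 0.05                                        # assumed 20 Hz tracking rate
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]])       # constant-velocity model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])       # only position is measured
Qn = 1e-3 * np.eye(4)                            # process noise (assumed)
R = 0.25 * np.eye(2)                             # tracker noise, mm^2 (assumed)

x, P = np.zeros(4), np.eye(4)                    # state: [px, py, vx, vy]

def kalman_step(x, P, z):
    """One predict/update cycle given a tracker measurement z (mm)."""
    x, P = F @ x, F @ P @ F.T + Qn               # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)                      # update with measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

def collision_probability(x, P, structure, radius=3.0, horizon=20, n=2000):
    """Monte Carlo: fraction of sampled trajectories entering the danger
    sphere of `radius` mm around the critical structure within `horizon`
    future steps."""
    hits = 0
    for s in rng.multivariate_normal(x, P, size=n):
        for _ in range(horizon):
            s = F @ s + rng.multivariate_normal(np.zeros(4), Qn)
            if np.linalg.norm(s[:2] - structure) < radius:
                hits += 1
                break
    return hits / n

x, P = kalman_step(x, P, z=np.array([10.0, 8.0]))
print(collision_probability(x, P, structure=np.array([12.0, 9.0])))
```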
[0112] In addition, object tracking algorithms may be utilized to track the position of the surgical tool (i.e. the current tool localization of the tip of the at least one surgical tool) and of critical anatomical structures in the virtual environment, providing continuous feedback about the proximity of the surgical tool to critical structures. The tracking data may then be fed into a machine learning model, such as Random Forests or Support Vector Machines (SVMs), which may be trained on historical surgical data (e.g. recorded paths from previous surgeries) to classify situations where the surgical tool is likely to approach critical structures. Based on the output of the machine learning model, haptic feedback may be provided to the surgeon as a tactile alert when the surgical tool deviates beyond a predefined safety threshold (e.g. a distance limit between the surgical tool and a critical structure), allowing the surgeon to make real-time adjustments. Additionally, a visual alert may be displayed on a surgical interface (e.g. a robotic console or an augmented reality overlay), highlighting an updated path and warning of a potential risk of damaging a critical structure. Advantageously, this combination of machine learning, real-time data processing, and interactive feedback keeps the surgical path safer, more effective, and better optimized throughout the surgical procedure. In an embodiment, the surgical tool's path computed before the medical intervention (or the updated and optimized path when available) is outputted to be displayed on the current augmented 3D model (i.e. the updated 3D model), so as to better assist the user during the medical intervention by showing the user the best path to avoid damaging the NVB, among other things.
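A minimal sketch of the Random Forest classification and alert threshold of paragraph [0112] follows; the two features (distance to the nearest critical structure and approach speed), the synthetic training data, and the 0.5 alert threshold are hypothetical stand-ins, since the recorded surgical paths the text refers to are not available here.

```python
# Minimal sketch of the proximity-risk classifier of paragraph [0112].
# In the described system, the model would be trained on recorded paths
# from previous surgeries; synthetic stand-in data is used here instead.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Features per tracking sample: [distance_to_structure_mm, approach_speed_mm_s]
n = 5000
X = np.column_stack([rng.uniform(0, 40, n), rng.uniform(-5, 15, n)])
# Synthetic label: "risky" when close AND moving toward the structure
y = ((X[:, 0] < 8.0) & (X[:, 1] > 2.0)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

ALERT_THRESHOLD = 0.5                            # hypothetical safety threshold

def should_alert(distance_mm, approach_speed_mm_s):
    """Return True if a haptic/visual alert should be raised for this state."""
    p_risk = clf.predict_proba([[distance_mm, approach_speed_mm_s]])[0, 1]
    return p_risk > ALERT_THRESHOLD

print(should_alert(5.0, 6.0))    # close and approaching fast -> likely True
print(should_alert(30.0, 1.0))   # far and slow -> likely False
```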
[0113] Advantageously, visual guidance information related to the path may be outputted and displayed on the intraoperative flux of 2D images to better assist the user during the medical intervention. For example, the visual guidance information can be: arrows indicating the direction to follow, a color change if the user gets too close to a critical element, structure, or area (such as a vital organ or the at least one NVB), the name of certain anatomical elements, etc.
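A minimal sketch of the overlay logic of paragraph [0113] is given below, drawing a direction arrow and switching color as the tool nears a critical structure; the pixel coordinates, distance thresholds, and frame source are hypothetical.

```python
# Minimal sketch of the visual guidance overlay of paragraph [0113]:
# a direction arrow toward the next waypoint, a proximity-dependent
# color, and a structure label. Thresholds and coordinates are assumed.
import cv2
import numpy as np

WARN_MM, CRIT_MM = 10.0, 5.0                  # hypothetical distance thresholds

def draw_guidance(frame, tip_px, waypoint_px, dist_to_critical_mm, label):
    """Overlay a direction arrow, a label and a color-coded tip marker."""
    if dist_to_critical_mm < CRIT_MM:
        color = (0, 0, 255)                   # red: too close (BGR)
    elif dist_to_critical_mm < WARN_MM:
        color = (0, 165, 255)                 # orange: warning
    else:
        color = (0, 255, 0)                   # green: safe
    cv2.arrowedLine(frame, tip_px, waypoint_px, color, 2, tipLength=0.3)
    cv2.putText(frame, label, (waypoint_px[0] + 5, waypoint_px[1] - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1, cv2.LINE_AA)
    cv2.circle(frame, tip_px, 6, color, -1)   # tool-tip marker
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a 2D image
out = draw_guidance(frame, (320, 400), (360, 250), 7.5, "NVB")
```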
[0114] In an embodiment, the navigating method further detects at least one critical element, structure, or area according to the current augmented 3D model and said intraoperative flux of 2D images. As described above, the at least one critical element, structure, or area may comprise: the at least one neurovascular bundle, a predetermined organ, a predetermined blood vessel, a predetermined nerve, or a predetermined tissue, for example.
[0115] In this case, critical area information related to said at least one critical element, structure or area may advantageously be outputted and displayed on the current augmented 3D model and/or on the intraoperative flux of 2D images (as described above).
[0116] NAVIGATION SYSTEM
[0117] The navigation method for assisting a user navigating at least one surgical tool during a medical intervention carried out on a patient, as previously described in the different embodiments, can be implemented in a navigation system 1000 as illustrated in figure 2.
[0118] The navigation system 1000 comprises: optionally, the at least one surgical tool 100; at least one imaging device 200; at least one tracking device 300; at least one processor 400; and at least one output 500.
[0119] The navigation system 1000 may be a surgical robot or may be embedded into a surgical robot such as Intuitive’s robot “Da Vinci”, the navigation being indirect in this case.
[0120] Examples of surgical tools 100 that may be used are: surgical scalpels, electrocautery devices, retractors, laparoscopic instruments, robotic surgical instruments, endoscopic equipment, morcellators, suturing instruments, specific catheters, hemostatic agents, surgical staplers, and specimen retrieval devices. Furthermore, the at least one surgical tool may be configured to be manipulated directly or indirectly by the user.
[0121] As previously described, examples of imaging devices are: fluoroscopy systems, mini C-arm systems, intraoperative CT scanners, mobile X-ray machines, surgical endoscopes, intraoperative ultrasound systems, digital radiography (DR) systems, surgical microscopes, intraoperative optical coherence tomography (OCT) systems, or intraoperative imaging catheters. Furthermore, the at least one imaging device may be embedded or partially embedded in the at least one surgical tool, or mechanically connected to the at least one surgical tool.
[0122] The at least one imaging device is configured to generate an intraoperative flux of 2D images, wherein each 2D image comprises a representation of a predefined area around a tip of said at least one surgical tool, and to carry out step S20 as previously described. Advantageously, this helps prevent accidental injury to critical elements,
structures, or areas, by providing real-time monitoring of the proximity of the tip of the surgical tool to a critical element, structure, or area. Furthermore, it aids in adapting to tissue deformation, ensuring the surgical tool stays within a safe limit even as the anatomy changes, while enhancing the surgeon's control over the procedure, improving safety and accuracy of the surgery.
[0123] The at least one tracking device is configured to carry out step S30 as previously described.
[0124] Depending on the type of fiducial used, examples of tracking devices are:
- Optical Tracking Systems, using infrared cameras to track the position of fiducial markers equipped with reflective or passive optical markers (e.g. Polaris Spectra Optical Tracking System, NDI Optotrak Optical Tracking System);
- Electromagnetic Tracking Systems, using electromagnetic fields to track the position and orientation of fiducial markers equipped with electromagnetic sensors (e.g. Ascension 3D Guidance trakSTAR Electromagnetic Tracking System, NDI Aurora Electromagnetic Tracking System);
- Ultrasound Tracking Systems, utilizing ultrasound waves to track the position of fiducial markers within the body (e.g. Sonowand Navigation System, Medtronic StealthStation Surgical Navigation System);
- Hybrid Tracking Systems, which combine multiple tracking technologies, such as optical, electromagnetic, and ultrasound, to provide robust and accurate tracking of fiducial markers (e.g. Brainlab Hybrid Navigation System, Northern Digital Inc. (NDI) Polaris Vicra Hybrid Tracking System);
- Robot-Assisted Tracking Systems, integrating fiducial tracking capabilities into robotic surgical platforms, allowing for precise navigation and instrument guidance during minimally invasive procedures (e.g. da Vinci Surgical System with Integrated Tracking, Mazor Robotics Renaissance Guidance System);
- Intraoperative Imaging Systems, such as fluoroscopy, CT, and MRI.
[0125] In the same manner as for the at least one imaging device, the at least one tracking device may be embedded or partially embedded in the at least one surgical tool.
[0126] The at least one processor is configured to carry out steps S10, S40 and S50 as previously described. Advantageously, the at least one processor may also carry out the optional steps S80, S90, S100 and/or S110 as previously described.
[0127] In an embodiment, the system may further comprise a deep learning segmentation module configured to detect in real-time the neurovascular bundles from the intraoperative flux of 2D images.
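A minimal inference-side sketch of such a segmentation module follows, assuming a hypothetical, already-trained TorchScript network; the model file name, input normalization, and output shape are assumptions.

```python
# Minimal sketch of the deep learning segmentation module of paragraph
# [0127]: run a (hypothetical, pre-trained) segmentation network on each
# 2D frame and extract the NVB mask and its centroid. The model file
# name and the 0.5 probability threshold are assumptions.
import numpy as np
import torch

model = torch.jit.load("nvb_segmenter.pt")    # hypothetical TorchScript model
model.eval()

def segment_nvb(frame_gray: np.ndarray):
    """Return (binary NVB mask, centroid in pixels or None) for one frame."""
    x = torch.from_numpy(frame_gray).float()[None, None] / 255.0  # [1,1,H,W]
    with torch.no_grad():
        logits = model(x)                     # assumed output shape: [1,1,H,W]
    mask = (torch.sigmoid(logits)[0, 0] > 0.5).numpy()
    ys, xs = np.nonzero(mask)
    centroid = (float(xs.mean()), float(ys.mean())) if xs.size else None
    return mask, centroid
```

The centroid of the detected bundle is exactly the kind of updated obstacle position that the path re-planning of paragraphs [0108]–[0109] would consume.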
[0128] Advantageously, the navigation system may further comprise a memory configured to save and record information. The information may comprise at least one of: the patient specific 3D model, the current augmented 3D model, the intraoperative flux of 2D images, the current fiducial localization, the current tool localization, the current imaging device localization, and/or, when available, the received preoperative medical images, the computed surgical tool path, and/or the updated and/or optimized surgical tool path.
[0129] According to an embodiment, the navigation system may include a blockchain tokenization module configured to:
- tokenize the dynamically updated 3D model to ensure data traceability and integrity;
- store the model securely on a decentralized ledger, compliant with healthcare confidentiality regulations.
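A minimal hash-chain sketch of the tokenization idea of paragraph [0129] is given below, using an in-memory list as a stand-in for a decentralized ledger; the record fields are assumptions, and only a pseudonymized case identifier (no patient-identifying data) is stored.

```python
# Minimal sketch of the tokenization module of paragraph [0129]: each
# 3D-model snapshot is hashed and chained to the previous record,
# giving a tamper-evident audit trail. A real deployment would anchor
# these records on an actual decentralized ledger.
import hashlib
import json
import time

ledger = []                                   # in-memory stand-in for a ledger

def tokenize_model(model_bytes: bytes, case_id: str) -> dict:
    """Append a hash-chained record for one 3D-model snapshot."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {
        "case_id": case_id,                   # pseudonymized identifier only
        "timestamp": time.time(),
        "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return record

def verify_ledger() -> bool:
    """Re-compute the chain to detect any tampering with stored records."""
    prev = "0" * 64
    for rec in ledger:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev_hash"] != prev or rec["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

tokenize_model(b"...serialized 3D model...", case_id="case-0042")
print(verify_ledger())    # True until any stored record is modified
```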
[0130] According to an embodiment, the navigation system may be operatively coupled to a robotic surgical platform for automated or semi-automated execution of the planned surgical path. In other words, the navigation system may be coupled with a robotic surgical platform configured to execute a portion of the planned surgical path autonomously or semi-autonomously. For example, the navigation system may be optionally operatively coupled to a robotic surgical system (e.g., Da Vinci system), and may provide real-time navigation guidance and dynamic path correction based on the updated patient specific 3D model.
[0131] Advantageously, all the elements of the navigation system contribute to a single, dynamic, patient-specific, multi-channel and secure system for assisting a user in
navigating at least one surgical tool 100 during a medical intervention carried out on a patient.
[0132] COMPUTER PROGRAM
[0133] According to another aspect, the present disclosure relates to a computer program product comprising software code configured to perform the method for navigating in a patient body according to any one of the embodiments previously described.
[0134] NON-TRANSITORY PROGRAM STORAGE DEVICE
[0135] According to another aspect, the present disclosure relates to a non-transitory program storage device, readable by a computer, tangibly embodying a program of instructions executable by the computer to perform the method for navigating in a patient body according to any one of the embodiments previously described.
Claims
1. A navigation system (1000) for assisting a user in navigating at least one surgical tool (100) during a medical intervention carried out on a patient (10), said patient being previously equipped with at least one fiducial (20) located on a specific body part (30) of said patient (10), said specific body part comprising at least one neurovascular bundle, said navigation system comprising:
- at least one imaging device (200) configured to:
o obtain an intraoperative flux of 2D images, wherein each 2D image comprises a representation of a predefined area around a tip of said at least one surgical tool (100); and
o acquire specific information related to said at least one neurovascular bundle;
- at least one tracking device (300) configured to obtain (S30), in real-time, localization information related to:
o a current fiducial localization of said at least one fiducial;
o a current tool localization of the tip of said at least one surgical tool (100);
o a current imaging device localization of said at least one imaging device;
- at least one processor (400) configured to:
o receive said intraoperative flux of 2D images and said localization information;
o generate, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of said patient based on said intraoperative flux of 2D images, said specific information related to said at least one neurovascular bundle, said current fiducial localization and said current imaging device localization;
o compute a corresponding current virtual tool localization in said virtual environment using said current tool localization; and
- at least one output (500) configured to output in real-time said virtual environment comprising the current augmented 3D model and the current virtual tool localization to assist the user during the medical intervention (S60).
2. The navigation system according to claim 1, wherein said at least one processor is further configured to: receive preoperative medical images of said patient, said preoperative medical images comprising images of said specific body part and said at least one fiducial (S80); generate, before said medical intervention, said patient specific 3D model based on the received preoperative medical images (S90).
3. The navigation system according to claim 2, wherein said preoperative medical images of said patient have been previously acquired by at least two different medical imaging systems among: a Magnetic Resonance Imaging - MRI - system or a Computed Tomography - CT - scan, and an ultrasound apparatus.
4. The navigation system according to any one of claims 2 to 3, wherein said at least one output is further configured to output said intraoperative flux of 2D images according to said current tool localization.
5. The navigation system according to any one of claims 1 to 4, wherein said at least one processor is further configured to compute a path for guiding said user during said medical intervention so as to avoid damaging said at least one neurovascular bundle (S100).
6. The navigation system according to claim 5, wherein said at least one processor is further configured to update and optimize, in real-time, the path for guiding said user according to said 2D images and said localization information (S110).
7. The navigation system according to claim 5 or 6, wherein said at least one output is further configured to output said path on said current augmented 3D model.
8. The navigation system according to any one of claims 5 to 7 and according to claim 3, wherein said at least one output is further configured to output visual guidance information related to said path on the intraoperative flux of 2D images.
9. The navigation system according to any one of claims 1 to 8, wherein said at least one processor is further configured to automatically detect at least one critical area according to the current augmented 3D model and said intraoperative flux of 2D images, said at least one critical area comprising at least one of: said at least one neurovascular bundle, a predetermined organ, a predetermined blood vessel, a predetermined nerve, a predetermined tissue.
10. The navigation system according to claim 9, wherein said at least one output is further configured to output critical area information related to said at least one critical area in the current augmented 3D model.
11. The navigation system according to any one of claims 1 to 10, wherein the at least one fiducial is at least one of: radio-opaque fiducials, visual fiducials, electromagnetic fiducials, implantable markers, radiofrequency identification - RFID - tags, biopsy clips, fluorescent markers.
12. The navigation system according to any one of claims 1 to 11, wherein said at least one imaging device is further configured to visualize a particular substance, said particular substance being previously injected into said patient before or during said medical intervention.
13. The navigation system according to any one of claims 1 to 12, wherein said at least one surgical tool (100) and said at least one imaging device are firmly attached to each other.
14. The navigation system according to any one of claims 1 to 13, further comprising a memory configured to save and record preselected information, said preselected information comprising at least one of: said patient specific 3D model, said current augmented 3D model, said intraoperative flux of 2D images, said current fiducial localization, said current tool localization, said current imaging device localization,
and/or, when available, the received preoperative medical images, the path computed, and/or the path updated and optimized.
15. The navigation system according to any one of claims 1 to 14, wherein said specific body part comprises at least one soft organ or soft tissue among: prostate, brain, heart, lung, liver, stomach, intestine, kidney, bladder, uterus.
16. The navigation system according to any one of claims 1 to 15, wherein said at least one imaging device comprises a near-infrared (NIR) camera configured to acquire specific information being Indocyanine Green - Near-Infrared Fluorescence.
17. The navigation system according to any one of claims 1 to 16, further comprising a deep learning segmentation module configured to detect in real-time at least said at least one neurovascular bundle from said intraoperative flux of 2D images.
18. The navigation system according to any one of claims 1 to 17, said navigation system being operatively coupled to a robotic surgical platform for automated or semi-automated execution of the planned surgical path.
19. A computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out a computer-implemented method for assisting a user navigating at least one surgical tool during a medical intervention carried out on a patient, said patient being previously equipped with at least one fiducial located on a specific body part of said patient, said specific body part comprising at least one neurovascular bundle, said method comprising:
- obtaining an intraoperative flux of 2D images from at least one imaging device, wherein each 2D image comprises a representation of a predefined area around a tip of said at least one surgical tool (S10);
- acquiring specific information related to said at least one neurovascular bundle (S20);
- obtaining, in real-time, localization information (S30) related to:
o a current fiducial localization of said at least one fiducial;
o a current tool localization of the tip of said at least one surgical tool (100);
o a current imaging device localization of said at least one imaging device;
- generating, in a virtual environment, a current augmented 3D model by updating in real-time a patient specific 3D model of the specific body part of said patient based on said intraoperative flux of 2D images, said specific information related to said at least one neurovascular bundle, said current fiducial localization and said current imaging device localization (S40);
- computing a corresponding current virtual tool localization in said virtual environment using said current tool localization (S50); and
- outputting in real-time said virtual environment comprising the current augmented 3D model and the current virtual tool localization to assist the user during the medical intervention (S60).
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP24305632 | 2024-04-24 | ||
| EP24305632.2 | 2024-04-24 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025223977A1 (en) | 2025-10-30 |
Family
ID=91073194
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2025/060598 Pending WO2025223977A1 (en) | 2024-04-24 | 2025-04-16 | Navigation system for assisting a user navigating a surgical tool during a medical intervention |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025223977A1 (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080269588A1 (en) * | 2007-04-24 | 2008-10-30 | Medtronic, Inc. | Intraoperative Image Registration |
| US20130317351A1 (en) * | 2012-05-22 | 2013-11-28 | Vivant Medical, Inc. | Surgical Navigation System |
| US20200345426A1 (en) * | 2019-05-03 | 2020-11-05 | Neil Glossop | Systems, methods, and devices for registering and tracking organs during interventional procedures |
| US20230081244A1 (en) * | 2021-04-19 | 2023-03-16 | Globus Medical, Inc. | Computer assisted surgical navigation system for spine procedures |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Attanasio et al. | Autonomy in surgical robotics | |
| US11980505B2 (en) | Visualization of depth and position of blood vessels and robot guided visualization of blood vessel cross section | |
| US11622815B2 (en) | Systems and methods for providing proximity awareness to pleural boundaries, vascular structures, and other critical intra-thoracic structures during electromagnetic navigation bronchoscopy | |
| US11304686B2 (en) | System and method for guided injection during endoscopic surgery | |
| US11737827B2 (en) | Pathway planning for use with a navigation planning and procedure system | |
| JP2016517288A | Planning, Guidance and Simulation System and Method for Minimally Invasive Treatment | |
| EP2866638A1 (en) | Enhanced visualization of blood vessels using a robotically steered endoscope | |
| US10588702B2 (en) | System and methods for updating patient registration during surface trace acquisition | |
| CA2976573C (en) | Methods for improving patient registration | |
| KR20190080706A (en) | Program and method for displaying surgical assist image | |
| JP2023519331A (en) | Modeling and Feedback Loops of Holographic Treatment Areas for Surgical Procedures | |
| CN115965667A (en) | Method and device for tissue structure model data fusion of interventional operation | |
| CN116313028A (en) | Medical assistance device, method, and computer-readable storage medium | |
| KR101864411B1 (en) | Program and method for displaying surgical assist image | |
| EP3946125B1 (en) | Determining a surgical port for a trocar or laparoscope | |
| Mageed | Navigating the landscape of next-generation surgery through a new fractal scalpel | |
| WO2025223977A1 (en) | Navigation system for assisting a user navigating a surgical tool during a medical intervention | |
| CN118302127A (en) | Medical instrument guidance system including a guidance system for percutaneous nephrolithotomy procedures, and associated devices and methods | |
| JP7495216B2 (en) | Endoscopic surgery support device, endoscopic surgery support method, and program | |
| Quintero-Peña et al. | The Fusion of Robotics and Imaging: A Vision of the Future | |
| Kunz et al. | Multimodal risk-based path planning for neurosurgical interventions | |
| Marescaux et al. | Augmented reality for surgery and interventional therapy | |
| Antico | 4D ultrasound image guidance for autonomous knee arthroscopy | |
| Mageed | The Dawn of a New Fractal Scalpel: Navigating the Landscape of Next-Generation Surgery | |
| Liu et al. | Augmented Reality in Image-Guided Robotic Surgery |