WO2025231398A1 - Gui to display relative airway size - Google Patents
Info
- Publication number
- WO2025231398A1 (application PCT/US2025/027553)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- catheter
- interference
- luminal network
- patient
- user interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/000094—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/267—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the respiratory tract, e.g. laryngoscopes, bronchoscopes
- A61B1/2676—Bronchoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/25—User interfaces for surgical systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00147—Holding or positioning arrangements
- A61B1/0016—Holding or positioning arrangements using motor drive units
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/012—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor characterised by internal passages or accessories therefor
- A61B1/015—Control of fluid supply or evacuation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B17/00—Surgical instruments, devices or methods
- A61B2017/00743—Type of operation; Specification of treatment sites
- A61B2017/00809—Lung operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/107—Visualisation of planned trajectories or target regions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2051—Electromagnetic tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B2034/301—Surgical robots for introducing or steering flexible instruments inserted into the body, e.g. catheters or endoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/32—Surgical robots operating autonomously
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
Definitions
- the present disclosure relates to the field of navigating medical devices within a patient, and in particular, planning a pathway through a luminal network of a patient and navigating medical devices to a target.
- MRI magnetic resonance imaging
- CT computed tomography
- CBCT cone-beam computed tomography
- fluoroscopy including 3D fluoroscopy
- pre-operative scans may be utilized for target identification and intraoperative guidance.
- real-time imaging may be required to obtain a more accurate and current image of the target area.
- real-time image data displaying the current location of a medical device with respect to the target and its surroundings may be needed to navigate the medical device to the target in a safe and accurate manner (e.g., without causing damage to other organs or tissue).
- an endoscopic approach has proven useful in navigating to areas of interest within a patient.
- endoscopic navigation systems have been developed that use previously acquired MRI data or CT image data to generate a three-dimensional (3D) rendering, model, or volume of the particular body part such as the lungs.
- the acquired MRI data or CT image data may be acquired during the procedure (perioperatively).
- the resulting volume generated from the MRI scan or CT scan is then utilized to create a navigation plan to facilitate the advancement of the endoscope (or other suitable medical device) within the patient anatomy to an area of interest.
- the volume generated may be used to update a previously created navigation plan.
- a locating or tracking system such as an electromagnetic (EM) tracking system, or fiberoptic shape sensing system may be utilized in conjunction with, for example, CT data, to facilitate guidance of the endoscope to the area of interest.
- EM electromagnetic
- larger access devices may have difficulty navigating to the area of interest if the area of interest is located adjacent to small or narrow airways.
- the difficulty of navigating larger access devices, such as those having a camera, within narrow airways can lead to increased surgical times to navigate to the proper position relative to the area of interest which can lead to navigational inaccuracies or the use of fluoroscopy that leads to additional set-up time and radiation exposure.
- a surgical system includes a catheter and a workstation operably coupled to the catheter, the workstation including processing means configured to generate a 3D model of a luminal network of a patient’s lungs, identify target tissue in the generated 3D model, receive catheter information, display, on a user interface, a pathway through the luminal network to the identified target tissue, determine an amount of interference between the catheter and airways of the luminal network along the pathway through the luminal network to the identified target tissue, display, on the user interface, a visual indicator of the determined amount of interference, determine a position of the catheter within the luminal network of the patient as the catheter is navigated along the pathway to the identified target tissue, and update, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter within the luminal network.
- the processing means may be configured to receive patient information and to modify the determined amount of interference based on the received catheter information and the received patient information.
- the processing means may be configured to display, on the user interface, the visual indicator of the determined amount of interference in the form of a color gradient superimposed on the pathway through the luminal network.
- the processing means may be configured to display, on the user interface, the visual indicator of the determined amount of interference in the form of a background color, wherein the color of the background color is updated on the user interface based upon the determined amount of interference at the determined position of the catheter within the luminal network of the patient.
- the processing means may be configured to display, on the user interface, a visual indicator of the determined amount of interference in the form of a bar graph.
- the processing means may be configured to define a cut-off value, wherein the cut-off value is based on a pre-determined amount of interference between the catheter and the airways of the luminal network.
- the processing means may be configured to identify a position along the pathway through the luminal network wherein the determined amount of interference is greater than the defined cut-off value.
- the processing means may be configured to display, on the user interface, a visual indicator of the position of the defined cut-off value on the displayed pathway to the identified target tissue.
- the processing means may be configured to change, on the user interface, a form of the visual indicator of the defined cut-off when the determined position of the catheter corresponds with the determined location of the defined cut-off on the pathway to the identified target tissue.
- the processing means may be configured to determine an amount of interference between the catheter and the airways of the luminal network based on a minimum bend radius of the catheter.
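The bend-radius criterion above can be illustrated with a short sketch. As a hypothetical approach (not the patented implementation), the local bend radius along the planned pathway is estimated as the circumradius of each triple of consecutive pathway points and compared against the catheter's minimum bend radius; `pathway_pts`, `min_bend_radius_mm`, and `flag_tight_bends` are illustrative names.

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three 3D points; math.inf if collinear."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Heron's formula for the triangle area spanned by the three points
    s = (a + b + c) / 2
    area_sq = s * (s - a) * (s - b) * (s - c)
    if area_sq <= 0:
        return math.inf  # collinear points: no bend at this location
    return (a * b * c) / (4 * math.sqrt(area_sq))

def flag_tight_bends(pathway_pts, min_bend_radius_mm):
    """Indices along the pathway where the local bend radius falls below
    the catheter's minimum bend radius."""
    flagged = []
    for i in range(1, len(pathway_pts) - 1):
        r = circumradius(pathway_pts[i - 1], pathway_pts[i], pathway_pts[i + 1])
        if r < min_bend_radius_mm:
            flagged.append(i)
    return flagged
```

Positions flagged this way could feed the interference determination described in the claims, since a bend tighter than the catheter tolerates implies interference with the airway wall.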
- a surgical system includes a catheter and a workstation operably coupled to the catheter, the workstation including processing means configured to generate a 3D model of a luminal network of a patient’s lungs, display, on a user interface, the generated 3D model of the luminal network of the patient’s lungs, receive catheter information, define a cut-off value, wherein the cut-off value is determined based on a pre-determined amount of interference between the catheter and the airways of the luminal network of the patient’s lungs, identify portions of the airways of the luminal network of the patient’s lungs where the determined amount of interference is greater than the defined cut-off value, and display, on the user interface, a visual indicator of the identified portions of the airways of the luminal network where the determined amount of interference is greater than the defined cut-off value.
- the processing means may be configured to change, on the user interface, a form of the visual indicator of the identified portions of airways based upon a determined position of the catheter within the luminal network of the patient’s lungs.
- the processing means may be configured to receive patient information and to modify the defined cut-off value based on the received catheter information and the received patient information.
- the processing means may be configured to define the cut-off value based on a minimum bend radius of the catheter.
- the processing means may be configured to define the cut-off value based on a determined minimum threshold value for a length of the catheter that is required to be stiffened to treat target tissue identified within the patient’s lungs.
- a method of operating a surgical system includes generating a 3D model of a luminal network of a patient’s lungs, identifying target tissue in the generated 3D model, receiving catheter information, displaying, on a user interface, a pathway through the luminal network to the identified target tissue, determining an amount of interference between the catheter and airways of the luminal network along the pathway through the luminal network to the identified target tissue, displaying, on the user interface, a visual indicator of the determined amount of interference, determining a position of the catheter within the luminal network of the patient as the catheter is navigated along the pathway to the identified target tissue, and updating, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter within the luminal network.
- the method may include receiving patient information and modifying the determined amount of interference based on the received catheter information and the received patient information.
- displaying, on the user interface, the visual indicator of the determined amount of interference may include displaying, on the user interface, a color gradient superimposed on the pathway through the luminal network corresponding to the determined amount of interference.
- displaying, on the user interface, the visual indicator of the determined amount of interference may include displaying, on the user interface, a background color, wherein the color of the background color is updated on the user interface based upon the determined amount of interference at the determined position of the catheter within the luminal network of the patient.
- displaying, on the user interface, the visual indicator of the determined amount of interference may include displaying, on the user interface, a bar graph corresponding to the determined amount of interference.
- FIG. 1 is a schematic view of a surgical system provided in accordance with the disclosure
- FIG. 2 is a perspective view of a distal portion of a catheter of the surgical system of FIG. 1;
- FIG. 3 is a schematic view of a workstation of the surgical system of FIG. 1;
- FIG. 4 is a depiction of a graphical user interface of the surgical system of FIG. 1 illustrating a 3D representation of a patient’s airways and a generated pathway to an area of interest within the patient’s lungs;
- FIG. 5 is an enlarged view of the area of detail indicated in FIG. 4;
- FIG. 6 is a schematic view of the medical device of FIG. 1 illustrating a bend radius of the medical device
- FIG. 7 is a schematic view of the medical device of FIG. 1 illustrating a length of the medical device that is stiffened within the patient’s airways;
- FIG. 8 is a depiction of the graphical user interface of the surgical system of FIG. 1 illustrating a 3D representation of the patient’s airways depicting a visual indicator based upon potential interference between a medical device of the surgical system of FIG. 1 and the patient’s airways in accordance with the disclosure;
- FIG. 9 is a depiction of the graphical user interface of the surgical system of FIG. 1 illustrating a 3D representation of the patient’s airways and another embodiment of a visual indicator based upon potential interference between the medical device of the surgical system of FIG. 1 and the patient’s airways in accordance with the disclosure;
- FIG. 10 is a depiction of the graphical user interface of the surgical system of FIG. 1 illustrating a 3D representation of the patient’s airways and yet another embodiment of a visual indicator based upon potential interference between the medical device of the surgical system of FIG. 1 and the patient’s airways in accordance with the disclosure;
- FIG. 11A is a flow diagram of a method of indicating potential interference between a medical device and airways of the patient’s luminal network in accordance with the disclosure;
- FIG. 11B is a continuation of the flow diagram of FIG. 11A;
- FIG. 12 is a perspective view of a robotic surgical system of the surgical system of FIG. 1;
- FIG. 13 is an exploded view of a drive mechanism of an extended working channel of the surgical system of FIG. 1.
- the disclosure is directed to a surgical system configured to enable navigation of a medical device through a luminal network of a patient, such as for example the lungs.
- the surgical system generates a 3-dimensional (3D) representation of the airways of the patient using pre-procedure images, such as for example, CT, CBCT, or MRI images, and identifies anatomical landmarks or target tissue (e.g., bifurcations or lesions) within the 3D representation.
- the system generates a plurality of pathways to the target tissue through the luminal network of the patient’s lungs.
- the surgical system determines an amount of interference between the medical device and the airways of the luminal network along the pathway to the target tissue.
- the amount of interference may be based upon a size of the medical device, a minimum bend radius of the medical device, a determined length of the medical device that is required to be stiffened to treat the target tissue, and combinations thereof.
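As a minimal sketch of how such an interference amount might be quantified (an assumption for illustration, not the disclosed algorithm), the catheter's outer diameter can be compared with the local airway diameter to yield a normalized score:

```python
def interference_amount(catheter_od_mm, airway_diameter_mm):
    """Normalized 0..1 interference score: 0 for an airway much larger than
    the catheter, 1.0 for line-to-line contact or an airway smaller than the
    device. Function name and normalization are illustrative assumptions."""
    if airway_diameter_mm <= 0:
        return 1.0  # degenerate or unsegmented airway: treat as maximal
    return max(0.0, min(1.0, catheter_od_mm / airway_diameter_mm))
```

A per-location score like this could then be modified by the received catheter information (e.g., minimum bend radius, stiffened length) and patient information, as the disclosure describes.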
- the surgical system displays the 3D representation of the airways of the patient’s lungs, as well as a selected pathway through the luminal network to the identified target tissue.
- the surgical system displays a visual indicator on the user interface to indicate an amount of interference between the medical device and the airways of the luminal network at locations along the pathway to the target tissue.
- the visual indicator may be a color gradient that is superimposed on the pathway to the target tissue.
- the surgical system may assign colors to different zones or amounts of interference along the pathway to the target tissue, such as green to indicate little to no interference, yellow to indicate some interference or interference between the catheter and the airways in a line-to-line orientation, and red to indicate significant interference or airways that are significantly smaller than the medical device.
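The zone-to-color assignment could be sketched as below; the numeric thresholds are illustrative assumptions, since the disclosure does not specify them:

```python
def interference_color(score, some_threshold=0.5, significant_threshold=0.9):
    """Map a normalized interference score (0..1) to the color zones
    described above. Threshold values are assumptions for illustration."""
    if score < some_threshold:
        return "green"   # little to no interference
    if score < significant_threshold:
        return "yellow"  # some interference, up to line-to-line contact
    return "red"         # significant interference: airway smaller than device
```

Evaluating this at each sample along the planned pathway would produce the superimposed color gradient; evaluating it at the catheter's current (or immediately distal) position would drive the background color.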
- the surgical system may also change a background color of the tree view of the user interface based upon the determined interference at the current location of the medical device within the luminal network or the determined interference immediately distal to the location of the medical device. It is envisioned that the background may flash, change form, or otherwise quickly change color depending upon the determined amount of interference at the current position of the medical device within the luminal network. Additionally, the surgical system may display a bar graph or other similar indicator on the user interface to indicate the amount of interference between the catheter and the airways of the luminal network.
- the surgical system may define a cut-off value corresponding to a maximum allowed amount of interference between the catheter and the airways of the luminal network.
- the surgical system may display a visual indicator on the displayed 3D representation of the luminal network, such as for example, a line, colors, hash-marks, and other patterns at a determined location where the amount of interference exceeds the cut-off value.
- the surgical system may cause the visual indicators to change form, change colors, or flash when the determined position of the catheter approaches or coincides with the determined location within the luminal network.
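Putting the cut-off behavior together, a hypothetical sketch (the function name, return values, and approach window are assumptions) might locate the first pathway position whose interference exceeds the cut-off and select an indicator state from the catheter's current position:

```python
def cutoff_indicator_state(scores, cutoff, catheter_idx, approach_window=3):
    """Locate the first pathway position whose interference score exceeds
    the cut-off and report how the visual indicator should render given the
    catheter's current pathway index."""
    cutoff_idx = next((i for i, s in enumerate(scores) if s > cutoff), None)
    if cutoff_idx is None:
        return None, "none"
    if catheter_idx >= cutoff_idx:
        return cutoff_idx, "at_cutoff"    # e.g., flash or change form
    if cutoff_idx - catheter_idx <= approach_window:
        return cutoff_idx, "approaching"  # e.g., change color
    return cutoff_idx, "normal"           # static marker on the pathway
```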
- these and other aspects of the present disclosure enable a clinician to make an informed decision on whether to continue advancing the medical device in a distal direction, whether to navigate into an airway located at a bifurcation, or whether to treat target tissue at a particular location within the luminal network of the patient. Inclusion of these other metrics allows for optimization of the user experience, patient safety, and procedural time efficiency as compared to relying on proximity of the medical device relative to target tissue alone.
- FIG. 1 illustrates a system 10 in accordance with the disclosure facilitating navigation of a medical device through a luminal network and to an area of interest.
- the surgical system 10 is generally configured to identify target tissue, automatically register real-time images captured by a surgical instrument to a generated 3-dimensional (3D) model, and navigate the surgical instrument to the target tissue.
- the system 10 includes a catheter guide assembly 12 including an extended working channel (EWC) 14, which may be a smart extended working channel (sEWC) including an electromagnetic (EM) sensor.
- EWC extended working channel
- sEWC smart extended working channel
- EM electromagnetic
- the sEWC 14 is inserted into a bronchoscope 16 for access to a luminal network of the patient P.
- the sEWC 14 may be inserted into a working channel of the bronchoscope 16 for navigation through a patient P’s luminal network, such as for example, the lungs.
- the sEWC 14 may itself include imaging capabilities via an integrated camera or optics component (not shown) and therefore, a separate bronchoscope 16 is not strictly required.
- the sEWC 14 may be selectively locked to the bronchoscope 16 using a bronchoscope adapter 16a.
- the bronchoscope adapter 16a is configured to permit motion of the sEWC 14 relative to the bronchoscope 16 (which may be referred to as an unlocked state of the bronchoscope adapter 16a) or inhibit motion of the sEWC 14 relative to the bronchoscope 16 (which may be referred to as a locked state of the bronchoscope adapter 16a).
- Bronchoscope adapters 16a are currently marketed and sold by Medtronic PLC under the brand names EDGE® Bronchoscope Adapter or the ILLUMISITE® Bronchoscope Adapter and are contemplated as being usable with the disclosure.
- the sEWC 14 may include one or more EM sensors 14a disposed in or on the sEWC 14 at a predetermined distance from the distal end 14b of the sEWC 14. It is contemplated that the EM sensor 14a may be a five degree-of-freedom sensor or a six degree-of-freedom sensor. As can be appreciated, the position and orientation of the EM sensor 14a of the sEWC relative to a reference coordinate system, and thus a distal portion of the sEWC 14 within an electromagnetic field can be derived.
- Catheter guide assemblies 12 are currently marketed and sold by Medtronic PLC under the brand names SUPERDIMENSION® Procedure Kits, ILLUMISITE™ Endobronchial Procedure Kit, ILLUMISITE™ Navigation Catheters, or EDGE® Procedure Kits, and are contemplated as being usable with the disclosure.
- a catheter 70 including one or more EM sensors 72, is inserted into the sEWC 14 and selectively locked into position relative to the sEWC 14 such that the sensor 72 extends a predetermined distance beyond a distal tip of the sEWC 14.
- the EM sensor 72 disposed on the catheter 70 is separate from the EM sensor 14a disposed on the sEWC.
- the EM sensor 72 is disposed on or in the catheter 70 a predetermined distance from a distal end portion 76 of the catheter 70. In this manner, the system 10 is able to determine a position of a distal portion of the catheter 70 within the luminal network of the patient P.
- the catheter 70 may be selectively locked relative to the sEWC 14 at any time, regardless of the position of the distal end portion 76 of the catheter 70 relative to the sEWC 14. It is contemplated that the catheter 70 may be selectively locked to a handle 12a of the catheter guide assembly 12 using any suitable means, such as for example, a snap fit, a press fit, a friction fit, a cam, one or more detents, threadable engagement, or a chuck clamp. It is envisioned that the EM sensor 72 may be a five degree-of-freedom sensor or a six degree-of-freedom sensor. As will be described in further detail hereinbelow, the position and orientation of the EM sensor 72 of the catheter 70 relative to a reference coordinate system, and thus a distal portion of the catheter 70, within an electromagnetic field can be derived.
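Deriving the distal tip location from a sensor mounted a known distance from the tip can be sketched as follows; the +z device-axis convention and the names are assumptions for illustration:

```python
import numpy as np

def distal_tip_position(sensor_pos, sensor_rotation, tip_offset_mm):
    """Given the EM sensor's tracked position (3-vector, reference frame)
    and orientation (3x3 rotation, sensor frame -> reference frame), derive
    the distal tip position, the sensor being mounted a known distance
    proximal of the tip along the device axis (+z by assumption)."""
    axis_in_ref = sensor_rotation @ np.array([0.0, 0.0, 1.0])
    return np.asarray(sensor_pos, dtype=float) + tip_offset_mm * axis_in_ref
```

This is why a full position-and-orientation (five or six degree-of-freedom) sensor suffices to localize the distal portion of the catheter 70 even though the sensor itself sits proximal of the tip.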
- At least one camera 74 is disposed on or adjacent a distal end surface 76a of the catheter 70 and is configured to capture, for example, still images, real-time images, or real-time video. Although generally described as being disposed on the distal end surface 76a of the catheter 70, it is envisioned that the camera 74 may be disposed at any suitable location on the catheter 70, such as for example, a sidewall.
- the catheter 70 may include one or more light sources 80 disposed on or adjacent to the distal end surface 76a of the catheter 70 or any other suitable location (e.g., for example, a side surface or a protuberance).
- the light source 80 may be or may include, for example, a light emitting diode (LED), an optical fiber connected to a light source that is located external to the patient P, or combinations thereof, and may emit one or more of white, IR, or near infrared (NIR) light.
- the camera 74 may be, for example, a white light camera, IR camera, or NIR camera, a camera that is capable of capturing white light and NIR light, or combinations thereof.
- the camera 74 is a white light mini complementary metal-oxide-semiconductor (CMOS) camera, although it is contemplated that the camera 74 may be any suitable camera, such as for example, a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS), an N-type metal-oxide-semiconductor (NMOS), and in embodiments, may be an infrared (IR) camera, depending upon the design needs of the system 10.
- CMOS complementary metal-oxide-semiconductor
- NMOS N-type metal-oxide-semiconductor
- IR infrared
- the camera 74 captures images of the patient P’s anatomy from a perspective of looking out from the distal end portion 76 of the catheter 70.
- the camera 74 may be a dual lens camera or a Red Blue Green and Depth (RGB-D) camera configured to identify a distance between the camera 74 and anatomical features within the patient P’s anatomy without departing from the scope of the disclosure. As described hereinabove, it is envisioned that the camera 74 may be disposed on the catheter 70, the sEWC 14, or the bronchoscope 16.
- RGB-D Red Blue Green and Depth
- the sEWC 14 may be deployed through the working channel of a bronchoscope 16.
- the sEWC 14 may receive a catheter 70 including a sensor 72 and camera 74, or alternatively imaging capabilities (e.g., camera 74) may be directly built into the sEWC 14 thus obviating the need for the removable catheter 70.
- the sEWC 14 with or without imaging capabilities may also be utilized with or without the bronchoscope.
- the catheter 70 may be deployed either through the sEWC 14, or through the bronchoscope 16 without departing from the scope of the disclosure.
- the catheter 70 may include a working channel 82 defined through a proximal portion (not shown) and the distal end surface 76a, although in embodiments, it is contemplated that the working channel 82 may extend through a sidewall of the catheter 70 depending upon the design needs of the catheter 70. As can be appreciated, the working channel 82 is configured to receive a locatable guide (not shown) or a surgical tool 90, such as for example, a biopsy tool.
- the catheter 70 includes an inertial measurement unit (IMU) 84 disposed within or adjacent to the distal end portion 76.
- IMU inertial measurement unit
- the IMU 84 detects an orientation of the distal end portion 76 of the catheter 70 relative to a reference coordinate frame and detects movement and speed of the distal end portion 76 of the catheter 70 as the catheter 70 is navigated within the patient P’s luminal network. Using the data received from the IMU 84, the system 10 is able to determine alignment and trajectory information of the distal end portion 76 of the catheter 70. In embodiments, the system 10 may utilize the data received from the IMU 84 to determine a gravity vector, which may be used to determine the orientation of the distal end portion 76 of the catheter 70 within the airways of the patient P.
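The gravity-vector use of the IMU described above can be sketched with standard techniques (a low-pass filter over accelerometer samples, then an angle against the device axis); the smoothing factor and names are illustrative assumptions, not the disclosed method.

```python
import math

def estimate_gravity(accel_samples, alpha=0.9):
    """Low-pass filter raw accelerometer samples (m/s^2, sensor frame) to
    estimate the gravity vector; alpha is an illustrative smoothing factor."""
    g = list(accel_samples[0])
    for sample in accel_samples[1:]:
        g = [alpha * gi + (1 - alpha) * ai for gi, ai in zip(g, sample)]
    return g

def tilt_from_gravity(gravity, device_axis=(0.0, 0.0, 1.0)):
    """Angle in degrees between the device axis and the estimated gravity
    vector, usable to orient the rendered distal end portion."""
    dot = sum(a * b for a, b in zip(gravity, device_axis))
    norm_g = math.sqrt(sum(a * a for a in gravity))
    norm_d = math.sqrt(sum(a * a for a in device_axis))
    cos_angle = max(-1.0, min(1.0, dot / (norm_g * norm_d)))
    return math.degrees(math.acos(cos_angle))
```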
- the instant disclosure is not so limited and may be used in conjunction with flexible sensors, such as for example, fiber-bragg grating sensors, ultrasonic sensors, without sensors, or combinations thereof.
- the devices and systems described herein may be used in conjunction with robotic systems such that robotic actuators drive the sEWC 14 or bronchoscope 16 proximate the target.
- the system 10 generally includes an operating table 52 configured to support a patient P and monitoring equipment 24 coupled to the sEWC 14, the bronchoscope 16, or the catheter 70 (e.g., a video display for displaying the video images received from the video imaging system of the bronchoscope 16 or the camera 74 of the catheter 70), a locating or tracking system 46 including a tracking module 48, a plurality of reference sensors 50 and a transmitter mat 54 including a plurality of incorporated markers, and a workstation 20 having a computing device 22 including software and/or hardware used to facilitate identification of a target, pathway planning to the target, navigation of a medical device to the target, and/or confirmation and/or determination of placement of, for example, the sEWC 14, the bronchoscope 16, the catheter 70, or a surgical tool (e.g., the surgical tool 90), relative to the target.
- the tracking system 46 is, for example, a six degrees-of-freedom electromagnetic locating or tracking system, or other suitable system for determining the position and orientation of, for example, a distal portion of the sEWC 14, the bronchoscope 16, the catheter 70, or a surgical tool, for performing registration of a detected position of one or more of the EM sensors 14a or 72 and a three-dimensional (3D) model generated from a CT, CBCT, or MRI image scan.
- the tracking system 46 is configured for use with the sEWC 14 and the catheter 70, and particularly with the EM sensors 14a and 72.
- the transmitter mat 54 is positioned beneath the patient P.
- the transmitter mat 54 generates an electromagnetic field around at least a portion of the patient P within which the position of the plurality of reference sensors 50 and the EM sensors 14a and 72 can be determined with the use of the tracking module 48.
- the transmitter mat 54 generates three or more electromagnetic fields.
- One or more of the reference sensors 50 are attached to the chest of the patient P.
- coordinates of the reference sensors 50 within the electromagnetic field generated by the transmitter mat 54 are sent to the computing device 22 where they are used to calculate a patient P coordinate frame of reference (e.g., a reference coordinate frame).
- registration is generally performed using coordinate locations of the 3D model and 2D images from the planning phase, together with the patient P’s airways as observed through the bronchoscope 16 or catheter 70, and allows the navigation phase to be undertaken with knowledge of the location of the EM sensors 14a and 72.
- any one of the EM sensors 14a and 72 may be a single coil sensor that enables the system 10 to identify the position of the sEWC 14 or the catheter 70 within the EM field generated by the transmitter mat 54, although it is contemplated that the EM sensors 14a and 72 may be any suitable sensor and may be a sensor capable of enabling the system 10 to identify the position, orientation, and/or pose of the sEWC 14 or the catheter 70 within the EM field.
- the instant disclosure is not so limited and may be used in conjunction with flexible sensors, such as, for example, fiber Bragg grating sensors, inertial measurement units (IMUs), ultrasonic sensors, optical sensors, pose sensors (e.g., ultra-wide band, global positioning system, fiber Bragg, radio-opaque markers), without sensors, or combinations thereof. It is contemplated that the devices and systems described herein may be used in conjunction with robotic systems such that robotic actuators drive the sEWC 14 or bronchoscope 16 proximate the target.
- An imaging device 56 (e.g., a CT imaging device, such as for example, a cone-beam computed tomography (CBCT) device, including but not limited to Medtronic plc’s O-arm™ system)
- the images, sequence of images, or video captured by the imaging device 56 may be stored within the imaging device 56 or transmitted to the computing device 22 for storage, processing, and display.
- the imaging device 56 may move relative to the patient P so that images may be acquired from different angles or perspectives relative to the patient P to create a sequence of images, such as for example, a fluoroscopic video.
- the pose of the imaging device 56 relative to the patient P while capturing the images may be estimated via markers incorporated with the transmitter mat 54.
- the markers are positioned under the patient P, between the patient P and the operating table 52, and between the patient P and a radiation source or a sensing unit of the imaging device 56.
- the markers incorporated with the transmitter mat 54 may be two separate elements which may be coupled in a fixed manner or alternatively may be manufactured as a single unit. It is contemplated that the imaging device 56 may include a single imaging device or more than one imaging device.
- the workstation 20 includes a computer 22 and a display 24 that is configured to display one or more user interfaces 26 and/or 28.
- the workstation 20 may be a desktop computer or a tower configuration with the display 24 or may be a laptop computer or other computing device.
- the workstation 20 includes a processor 30 which executes software stored in a memory 32.
- the memory 32 may store video or other imaging data captured by the bronchoscope 16 or catheter 70 or preprocedure images from, for example, a computed tomography (CT) scan, Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), Cone-beam CT, amongst others.
- the display 24 may be incorporated into a head-mounted display, such as an augmented reality (AR) headset, for example the HoloLens offered by Microsoft Corp.
- a network interface 36 enables the workstation 20 to communicate with a variety of other devices and systems via the Internet.
- the network interface 36 may connect the workstation 20 to the Internet via a wired or wireless connection. Additionally, or alternatively, the communication may be via an ad-hoc Bluetooth® or wireless network enabling communication with a wide-area network (WAN) and/or a local area network (LAN).
- the network interface 36 may connect to the Internet via one or more gateways, routers, and network address translation (NAT) devices.
- the network interface 36 may communicate with a cloud storage system 38, in which further image data and videos may be stored.
- the cloud storage system 38 may be remote from or on the premises of the hospital, such as in a control or hospital information technology room.
- An input module 40 receives input from an input device such as a keyboard, a mouse, voice commands, amongst others.
- An output module 42 connects the processor 30 and the memory 32 to a variety of output devices such as the display 24.
- the workstation 20 may include its own display 44, which may be a touchscreen display.
- the software application utilizes preprocedure CT image data, either stored in the memory 32 or retrieved via the network interface 36, for generating and viewing a 3D model of the patient P’s anatomy, enabling the identification of target tissue TT on the 3D model (automatically, semi-automatically, or manually), and in embodiments, allowing for the selection of a pathway PW through the patient P’s anatomy to the target tissue, as will be described in further detail hereinbelow.
- Examples of such an application are the ILOGIC® planning and navigation suites and the ILLUMISITE® planning and navigation suites currently marketed by Medtronic plc.
- the 3D model may be displayed on the display 24 or another suitable display associated with the workstation 20, such as for example, the display 44, or in any other suitable fashion.
- various views of the 3D model may be provided and/or the 3D model may be manipulated to facilitate identification of target tissue TT on the 3D model and/or selection of a suitable pathway PW to the target tissue.
- the 3D model may be generated by segmenting and reconstructing the airways of the patient P’s lungs to generate a 3D airway tree 100.
- the reconstructed 3D airway tree 100 includes various branches and bifurcations which, in embodiments, may be labeled using, for example, well-accepted nomenclature such as RB1 (right branch 1), LB1 (left branch 1), or B1 (bifurcation 1).
- the segmentation and labeling of the airways of the patient P’s lungs is performed to a resolution that includes terminal bronchioles having a diameter of less than approximately 1 mm.
- segmenting the airways of the patient P’s lungs to terminal bronchioles improves the accuracy of registration between the position of the sEWC 14 and catheter 70 and the 3D model, improves the accuracy of the pathway to the target, and improves the ability of the software application to identify the location of the sEWC 14 and catheter 70 within the airways and navigate the sEWC 14 and catheter 70 to the target tissue.
- Those of skill in the art will recognize that a variety of different algorithms may be employed to segment the CT image data set, including, for example, connected component, region growing, thresholding, clustering, watershed segmentation, or edge detection. It is envisioned that the entire reconstructed 3D airway tree may be labeled, or only branches or branch points within the reconstructed 3D airway tree that are located adjacent to the pathway to the target tissue.
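As a non-limiting illustration (not part of the original disclosure), the region-growing algorithm named above can be sketched as follows; the function name, the toy 2D slice, and the -900 HU air threshold are assumptions of this sketch:

```python
from collections import deque

def region_grow(slice_hu, seed, air_threshold=-900):
    """Collect the 4-connected set of voxels, starting from a seed placed
    inside an airway, whose intensity stays at or below an air-like
    Hounsfield threshold.  Frontier voxels are expanded breadth-first."""
    rows, cols = len(slice_hu), len(slice_hu[0])
    grown, frontier = set(), deque([seed])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) in grown or not (0 <= r < rows and 0 <= c < cols):
            continue
        if slice_hu[r][c] > air_threshold:
            continue  # tissue, not air: stop growing here
        grown.add((r, c))
        frontier.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return grown
```

On a toy slice where -1000 marks an air-filled branch and 0 marks tissue, the grown region traces only the connected airway voxels reachable from the seed.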
- the software stored in the memory 32 may identify and segment out a targeted critical structure (e.g., blood vessels, lymphatic vessels, lesions, and/or other intrathoracic structures) within the 3D model. It is envisioned that the segmentation process may be performed automatically, manually, or a combination of both. The segmentation process isolates the targeted critical structure from the surrounding tissue in the 3D model and identifies its position within the 3D model. In embodiments, the software application segments the CT images to terminal bronchioles that are less than 1 mm in diameter such that branches and/or bifurcations are identified and labeled deep into the patient P’s luminal network. It is envisioned that this position can be updated depending upon the view selected on the display 24 such that the view of the segmented targeted critical structure may approximate a view captured by the camera 74 of the catheter 70.
- using the 3D model tree 100, or in embodiments, the 3D model or combinations thereof, the software stored in the memory 32 generates a plurality of proposed pathways PW through the luminal network of the patient P to the target tissue TT.
- the software stored in the memory 32 identifies the location of the target tissue TT within the patient P’s lungs and identifies a proposed pathway PW starting with airways that are the smallest and nearest to the target tissue TT.
- the software stored in the memory 32 propagates the pathway PW contiguously through subsequently larger airways until a proposed pathway PW reaches the trachea of the patient P. This process is repeated until a predetermined number of pathways PW have been generated, or all possible pathways PW have been identified.
- Registration of the patient P’s location on the transmitter mat 54 may be performed by moving the EM sensors 14a and/or 72 through the airways of the patient P.
- the software stored on the memory 32 periodically determines the location of the EM sensors 14a or 72 within the coordinate system as the sEWC 14 of the catheter 70 is moving through the airways using the transmitter mat 54, the reference sensors 50, and the tracking system 46.
- the location data may be represented on the user interface 26 as a marker or other suitable visual indicator, a plurality of which develop a point cloud having a shape that may approximate the interior geometry of the 3D model.
- the shape resulting from this location data is compared to an interior geometry of passages of a 3D model, and a location correlation between the shape and the 3D model based on the comparison is determined.
- the software identifies non-tissue space (e.g., air-filled cavities) in the 3D model.
- the software aligns, or registers, an image representing a location of the EM sensors 14a or 72 with the 3D model and/or 2D images generated from the 3D model, which are based on the recorded location data and an assumption that the sEWC 14 or the catheter 70 remains located in non-tissue space in a patient P’s airways.
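The shape-to-model correlation described above can be illustrated with a deliberately minimal sketch that aligns only the centroids of the sensor point cloud and the model points; a full registration would also solve for rotation (e.g., via ICP), and the function name and point representation are assumptions of this sketch:

```python
def register_translation(sensor_points, model_points):
    """Estimate the rigid translation that aligns the sensor point cloud
    with the model's interior geometry by matching centroids.  Points
    are (x, y, z) tuples."""
    def centroid(points):
        n = float(len(points))
        return tuple(sum(p[i] for p in points) / n for i in range(3))
    cs = centroid(sensor_points)
    cm = centroid(model_points)
    return tuple(cm[i] - cs[i] for i in range(3))
```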
- a manual registration technique may be employed by navigating the sEWC 14 or catheter 70 with the EM sensors 14a and 72 to prespecified locations in the lungs of the patient P, and manually correlating the images from the bronchoscope 16 or the catheter 70 to the model data of the 3D model.
- registration can be completed utilizing any number of location data points, and in one non-limiting embodiment, may utilize only a single location data point.
- the medical device may have an outer dimension of between about 3.5 mm and about 4.2 mm. It is contemplated that the software stored on the memory 32 may automatically identify the medical device being used or the outer dimension of the medical device, or the type and dimensions of the medical device may be manually entered.
- the software stored on the memory 32 may assign a predetermined threshold value to the inner dimensions of the airways, such as for example, a maximum inner dimension through which the medical device is able to be navigated.
- the predetermined threshold value may be an inner dimension that is about equal to or less than the outer dimension of the medical device.
- the software stored on the memory 32 may apply an offset to the identified maximum inner dimension, such as for example, an inner dimension that is a percentage of the maximum inner dimension, which may be a percentage increase of the maximum inner dimension. In this manner, the software stored on the memory 32 may increase the maximum inner dimension of the airways by a predetermined amount to ensure that airways adjacent to navigable airways are segmented and rendered.
- the software stored on the memory 32 may reduce the threshold inner dimension of the airways from about equal to the 3.5 mm outer dimension of the medical device to an inner dimension of about 2 mm.
- the software stored on the memory 32 may automatically or manually identify a range of motion or minimum bend radius R of the medical device (FIG. 6).
- the range of motion or minimum bend radius R of the medical device limits or otherwise inhibits the medical device from traversing pathways through the airways of the patient P requiring bends or curves tighter or otherwise smaller than the minimum bend radius R.
- the software stored on the memory 32 may assign a predetermined threshold value to the bends or curves within the airways of the patient P, such as a minimum bend radius R achievable by the medical device.
- the software stored on the memory 32 may apply an offset to the predetermined threshold value of the bend radius R, such as for example, a bend radius R that is a percentage of the minimum bend radius R, which may be a percentage increase of the minimum bend radius R. In this manner, the software stored on the memory 32 may increase the minimum bend radius R by a predetermined amount to ensure that the medical device is capable of being navigated through airways of the patient P to the target tissue TT.
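The dimension and bend-radius comparisons of the preceding paragraphs can be sketched together; this is an illustrative reading in which the percentage offset is applied as a margin to both limits, and the function name, parameters, and 10% default are assumptions, not values from the disclosure:

```python
def device_fits_segment(inner_dim_mm, bend_radius_mm,
                        device_outer_mm, device_min_bend_mm,
                        offset_pct=10.0):
    """Check one airway segment against the device's mechanical limits:
    its inner dimension against the device's outer dimension, and the
    tightest bend along the segment against the device's minimum bend
    radius R, each inflated by a percentage offset."""
    margin = 1.0 + offset_pct / 100.0
    dimension_ok = inner_dim_mm >= device_outer_mm * margin
    bend_ok = bend_radius_mm >= device_min_bend_mm * margin
    return dimension_ok and bend_ok
```

For a 3.5 mm device with a 10 mm minimum bend radius, a 4.2 mm airway segment with a 15 mm bend passes, while a 3.6 mm segment fails the inflated dimension check.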
- one or more portions along the length of the medical device may require stiffening to accomplish the desired diagnostic or therapeutic task. Stiffening of one or more portions along the length of the medical device requires space within the airways of the patient P to accommodate the stiffening of the medical device (e.g., a linear length). In this manner, the software stored on the memory 32 may require a portion of the medical device adjacent to the distal end portion of the medical device to stiffen, which may abut or otherwise contact walls of the airways of the patient P.
- the software stored on the memory 32 may determine a minimum threshold value for the length Ls of the medical device that is required to be stiffened (FIG. 7), which may take into account an outer dimension of the medical device, an inner dimension of the airways of the patient P, and bends or bifurcations within the airways of the patient P.
- the software stored on the memory 32 assigns or otherwise correlates the mechanical properties (e.g., an outer dimension or a minimum bend radius) of the medical device to the properties of portions and/or segments of the airways of the patient’s luminal network along the selected pathway PW to the target tissue TT.
- the software stored on the memory 32 may consider locations within the patient P’s airways where stiffening one or more portions of the medical device is required to treat the target tissue TT. It is envisioned that these values and other relevant data may be stored in a look-up table or automatically assigned to patient P’s anatomy along the pathway PW to the target tissue.
- the software stored on the memory 32 color codes, overlays, or otherwise superimposes on the 3D model tree 100 or other portion of the user interface 26 information or another indicator of the airway geometry in comparison to the above-described medical device properties.
- the software stored on the memory 32 displays a color gradient or assigns a color to various zones along the pathway PW to the target tissue, such as for example, green for indicating no or a low likelihood of airway interference with the medical device, yellow for indicating potential airway interference with the medical device or line-to-line interference, and red for indicating likely airway interference with the medical device or airways having an inner dimension that is smaller or significantly smaller than an outer dimension of the medical device.
- the threshold values for each of these zones can be automatically assigned based upon airway geometry and medical device properties, manually entered, or manually adjusted by the user.
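The green/yellow/red zoning above can be sketched as a simple classification; the 0.3 mm clearance margin and the function name are illustrative assumptions (the disclosure leaves the zone thresholds to automatic assignment or user entry):

```python
def interference_zone(inner_dim_mm, device_outer_mm, clearance_mm=0.3):
    """Map one airway segment along the pathway PW to a display color:
    'green' for comfortable clearance, 'yellow' for a line-to-line
    (near-equal) fit, 'red' when the airway is smaller than the device."""
    margin = inner_dim_mm - device_outer_mm
    if margin >= clearance_mm:
        return "green"
    if margin >= 0.0:
        return "yellow"
    return "red"
```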
- a bar graph 110 (FIG. 9), gauges, or other indicators may be displayed on the user interface 26 to indicate the likelihood of airway interference with the medical device.
- a portion or all of the background of the tree view of the user interface 26 may change color or flash as the medical device approaches airways that have a high likelihood of interference with the medical device.
- the software stored on the memory 32 utilizes various factors, which in embodiments may be the factors described hereinabove with respect to the properties of the medical device, the airways of the patient P’s luminal network, and the pathway through the luminal network of the patient P to the target tissue TT to determine a cut-off value or limit as to an amount of interference between the medical device and the airways of the patient P. In this manner, a position within the luminal network of the patient P where distal advancement of the medical device should be terminated or otherwise stopped is determined. An amount of interference between the medical device and the airways that is greater than the cut-off value is indicated as unnavigable by the medical device.
- the cut-off may be automatically determined by the software stored on the memory 32, manually entered, or manually adjusted by the user.
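Locating the cut-off position along the pathway, as described above, reduces to finding the first segment whose modeled interference exceeds the cut-off value; the following sketch (with a hypothetical function name and a per-segment interference list as input) illustrates this:

```python
def cutoff_position(interference_along_path, cutoff_value):
    """Return the index of the first pathway segment whose modeled
    interference exceeds the cut-off value (the point past which distal
    advancement should stop), or None if the whole pathway is navigable."""
    for index, amount in enumerate(interference_along_path):
        if amount > cutoff_value:
            return index
    return None
```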
- the software stored on the memory 32 displays an indicator or other visual marker 112 on the 3D model 100 displayed on the user interface 26.
- the software stored on the memory 32 may cause the indicator 112 to flash, change color, increase in size, or combinations thereof when the medical device approaches the location within the airways of the patient P’s luminal network identified by the indicator 112.
- the background of the user interface 26 displaying the tree view may flash or otherwise change color, and in embodiments, a warning or other message (not shown) may be displayed on the user interface 26 to indicate to the user that the medical device is approaching or is located at the cut-off.
- airways of the luminal network where the determined amount of interference is greater than the cut-off value may be indicated using colors or other infills, such as for example, hash-marks and other patterns.
- a method of indicating potential interference between a medical device and airways of the patient’s luminal network is described and generally identified by reference numeral 200.
- the patient P is imaged and the captured images are stored in the memory 32.
- the software stored in the memory 32 generates a 3D representation of the patient P’s airways.
- target tissue TT is identified in the generated 3D representation of the patient P’s airways.
- patient P information such as for example, the type of procedure being performed, patient P history, and the volume of the target tissue is received.
- medical device information such as the type of medical device used to navigate to the target tissue TT, the type of surgical tool used to treat or sample the target tissue TT, and the size of the medical device used to navigate to the target tissue TT is received.
- the software stored in the memory 32 generates proposed pathways to the target tissue TT through the luminal network of the patient P.
- the software stored in the memory 32 analyzes the generated proposed pathways PW to calculate an amount of, or a likelihood of, interference between the airways of the patient P and the surgical tool used to treat or sample the target tissue TT along the pathway PW to the target tissue TT.
- the software stored in the memory 32 modifies the calculated amount of, or likelihood of, interference based on the medical device information received at step 210.
- the software stored in the memory 32 may modify the calculated amount of, or likelihood of, interference based on the patient P information received at step 208 in addition to, or in lieu of, using the medical device information.
- a pathway PW to the target tissue TT is selected and an indicator of the amount of, or likelihood, of interference between the surgical tool and the airways of the luminal network of the patient P along the selected pathway PW is displayed on the user interface 26.
- a cut-off past which the surgical tool should not be navigated on the selected pathway PW to the target tissue TT may be defined, and at step 224, the defined cut-off may be displayed on the user interface 26.
- the surgical tool is navigated to the target tissue TT along the selected pathway PW.
- the displayed indicator of interference between the airways of the patient P and the surgical tool is updated based upon the position of the surgical tool within the airways of the patient P.
- the method returns to steps 226 and 228. If it is determined that the surgical tool is located adjacent to the target tissue TT or the surgical tool has reached the defined cut-off, the method ends at step 232.
- the above-described method may be repeated as many times as necessary depending upon the needs of the user or the procedure being performed.
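The navigate-and-update loop of the method described above (steps 226 through 232) can be sketched as follows; the function name, the per-segment inner-dimension list, and the status strings standing in for the display update are all assumptions of this illustration:

```python
def navigate_with_indicator(segment_inner_dims_mm, device_outer_mm,
                            cutoff_segment=None):
    """Advance segment by segment along the selected pathway, updating
    an interference status at each position, and stop at the end of the
    pathway (target reached) or at the defined cut-off.  Display updates
    are stood in for by a list of status strings."""
    statuses = []
    for i, inner_dim_mm in enumerate(segment_inner_dims_mm):
        if cutoff_segment is not None and i >= cutoff_segment:
            statuses.append("segment %d: cut-off reached, stop" % i)
            break
        state = "interference" if inner_dim_mm < device_outer_mm else "clear"
        statuses.append("segment %d: %s" % (i, state))
    return statuses
```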
- the system 10 may include a robotic surgical system 600 having a drive mechanism 602 including a robotic arm 604 operably coupled to a base or cart 606, which may, in embodiments, be the workstation 20.
- the robotic arm 604 includes a cradle 608 that is configured to receive a portion of the sEWC 14.
- the sEWC 14 is coupled to the cradle 608 using any suitable means (e.g., straps, mechanical fasteners, and/or couplings).
- the robotic surgical system 600 may communicate with the sEWC 14 via electrical connection (e.g., contacts and/or plugs) or may be in wireless communication with the sEWC 14 to control or otherwise effectuate movement of one or more motors (FIG. 16) disposed within the sEWC 14 and in embodiments, may receive images captured by a camera (not shown) associated with the sEWC 14.
- the robotic surgical system 600 may include a wireless communication system 610 operably coupled thereto such that the sEWC 14 may wirelessly communicate with the robotic surgical system 600 and/or the workstation 20 via, for example, Wi-Fi or Bluetooth®.
- the robotic surgical system 600 may omit the electrical contacts altogether and may communicate with the sEWC 14 wirelessly or may utilize both electrical contacts and wireless communication.
- the wireless communication system 610 is substantially similar to the network interface 36 (FIG. 3) described hereinabove, and therefore, will not be described in detail herein in the interest of brevity.
- the robotic surgical system 600 and the workstation 20 may be one and the same, or in embodiments, may be widely distributed over multiple locations within the operating room. It is contemplated that the workstation 20 may be disposed in a separate location and the display 44 (FIGS. 1 and 3) may be an overhead monitor disposed within the operating room.
- the sEWC 14 may be manually actuated via cables or push wires, or for example, may be electronically operated via one or more buttons, joysticks, toggles, or actuators (not shown) operably coupled to a drive mechanism 614 disposed within an interior portion of the sEWC 14 that is operably coupled to a proximal portion of the sEWC 14, although it is envisioned that the drive mechanism 614 may be operably coupled to any portion of the sEWC 14.
- the drive mechanism 614 effectuates manipulation or articulation of the distal end of the sEWC 14 in four degrees of freedom or two planes of articulation (e.g., left, right, up, or down), which is controlled by two push-pull wires, although it is contemplated that the drive mechanism 614 may include any suitable number of wires to effectuate movement or articulation of the distal end of the sEWC 14 in greater or fewer degrees of freedom without departing from the scope of the disclosure.
- the distal end of the sEWC 14 may be manipulated in more than two planes of articulation, such as for example, in polar coordinates, or may maintain an angle of the distal end relative to the longitudinal axis of the sEWC 14 while altering the azimuth of the distal end of the sEWC 14 or vice versa.
- the system 10 may define a vector or trajectory of the distal end of the sEWC 14 in relation to the two planes of articulation.
- the drive mechanism 614 may be cable actuated using artificial tendons or pull wires 616 (e.g., metallic, non-metallic, and/or composite) or may be a nitinol wire mechanism.
- the drive mechanism 614 may include motors 618 or other suitable devices capable of effectuating movement of the pull wires 616. In this manner, the motors 618 are disposed within the sEWC 14 such that rotation of an output shaft of the motors 618 effectuates a corresponding articulation of the distal end of the sEWC 14.
- the sEWC 14 may not include motors 618 disposed therein. Rather, the drive mechanism 614 disposed within the sEWC 14 may interface with motors 622 disposed within the cradle 608 of the robotic surgical system 600.
- the sEWC 14 may include a motor or motors 618 for controlling articulation of the distal end 14b of the sEWC 14 in one plane (e.g., left/null or right/null) and the drive mechanism 624 of the robotic surgical system 600 may include at least one motor 622 to effectuate the second axis of rotation and for axial motion.
- the motor 618 of the sEWC 14 and the motors 622 of the robotic surgical system 600 cooperate to effectuate four-way articulation of the distal end of the sEWC 14 and effectuate rotation of the sEWC 14.
- the sEWC 14 becomes less expensive to manufacture and may be a disposable unit.
- the sEWC 14 may be integrated into the robotic surgical system 600 (e.g., one piece) and may not be a separate component.
- a camera 74 may be inserted into the sEWC 14 or catheter 70 allowing for the camera’s removal during a procedure.
- the camera 74 may be a permanent component of the sEWC 14 or catheter 70.
- images of the luminal network are captured during the navigation of the sEWC 14 or catheter 70 through the luminal network. These images can be presented, for example, on the display 24 (e.g., in UI 26 and/or 28), either separately from or alongside the 3D model tree 100.
- the indicator of interference (e.g., bar graph 110) can be actively updated as the sEWC 14 or catheter 70 is advanced through the luminal network, and simultaneously the clinician is able to observe live video images captured by the camera 74 from within the luminal network.
- the 3D model tree 100 is generated based on pre-procedural or intraprocedural images (e.g., CT, CBCT, or MR images). Particularly with regard to pre-procedural images, they are typically captured at full breath hold. As a result, the 3D model tree 100 depicts the airways at substantially their maximum normal size under normal operating (breathing) pressures.
- [0077] There are two challenges with regard to the 3D model tree 100 based on the captured images. First, as the 3D model tree 100 approaches the periphery of the airways, there may be airways through which the sEWC 14 or catheter 70 must be navigated to reach the target tissue, but which are not represented in the 3D model tree 100.
- the second challenge is that the 3D model tree 100 is not truly reflective of the actual resilience/compliance of the tissues of the patient.
- every patient is different; thus, while an average value for resilience/compliance can be applied to the 3D model tree 100 when modeling the ability of an airway to expand, this is not necessarily accurate for a particular patient’s airways, and there may be differences in-situ for the sEWC 14 or catheter 70 to navigate through, particularly distal airways of the luminal network (e.g., generating the UI 26 in FIG. 8), compared to the modeled interference.
- aspects of the disclosure are directed to addressing the challenges noted above.
- these images can be analyzed via an image processing application stored in the memory of the computing device 22.
- the images captured by the camera 74 may be continually analyzed during the navigation to confirm that no buckling, kinking, or change of shape of the airways, as a result of force being transferred from the sEWC 14 or catheter 70 to the airway wall, is observed or detected.
- the interference indicator (e.g., bar graph 110) can be updated in the UI 26 to display the image detected interference, which may not necessarily correspond with the 3D model tree 100 based interference (e.g., as depicted in FIG. 8).
- the modeled interferences serve as a guide to the clinician but can be confirmed or corrected using the image analysis during the navigation with the sEWC 14 or catheter 70 in the luminal network.
- this process may run continually in the background during navigation, or at certain parts of the navigation (e.g., where interference is expected) and one or more indicators of the detected interference or detected lack of interference can be presented to the clinician. Further, with the images displayed the clinician is free to make their own assessments regarding changes in shape of the airway and the interference.
- the image processing application may make a determination of the actual size of the airway being navigated. This may be done by comparing changes in the field of view of the camera 74 as the sEWC 14 or catheter 70 is being navigated into the luminal network. Comparing these changes can provide a basis of comparison to the actual diameter of the sEWC 14 or catheter 70. In some instances the diameter of the airway or lumen may be displayed on the UI 26, 28 along with or as an alternative to the interference indicator.
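One way the image-based size estimate above could be sketched is by scaling the airway's apparent width in the frame against a visible feature of known physical size (for instance, the catheter itself of known diameter). The disclosure describes comparing changes in the field of view; this same-plane scaling is a simplifying assumption of this sketch, as are the function and parameter names:

```python
def estimate_airway_diameter(airway_span_px, reference_span_px,
                             reference_size_mm):
    """Scale the airway's apparent (pixel) width using a reference
    feature of known physical size at a comparable depth in the same
    frame, assuming equal image scale for both features."""
    if reference_span_px <= 0:
        raise ValueError("reference span must be positive")
    return reference_size_mm * airway_span_px / reference_span_px
```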
- the sEWC 14 or catheter 70 may include a working channel 82, which may be operably connected to a fluid source (e.g., an air-filled or liquid-filled syringe).
- a dose of a fluid such as oxygen, saline, etc., can be injected into the luminal network. This injection of fluid may be at a pressure in excess of normal breathing pressures within the lungs.
- the application can analyze the images before, during, and after the injection of the fluid to detect and quantify the magnitude of the change in diameter of the airway. If no change in airway diameter is detected, then the interference indicator (e.g., bar graph 110) may not be altered, and the clinician has confirmed the 3D model tree 100 based interference determinations. Alternatively, an indication of no change can be displayed in the UI 26, 28 to provide this confirmation. However, where a change in diameter is detected the interference indicator (e.g., bar graph 110) may be updated to alert the clinician to the change in expected interference being altered based on the in-situ determination.
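The before/after comparison described above can be sketched as a simple quantification step; the function name, the 0.1 mm detectability floor, and the status strings standing in for the UI update are assumptions of this illustration:

```python
def assess_injection_response(pre_diameter_mm, post_diameter_mm,
                              min_detectable_mm=0.1):
    """Compare the estimated airway diameter before and after the fluid
    injection.  Below the detectability floor, the model-based
    interference determination stands; otherwise the indicator should
    be updated with the in-situ measurement."""
    delta = post_diameter_mm - pre_diameter_mm
    if abs(delta) < min_detectable_mm:
        return 0.0, "no change: model-based interference confirmed"
    return delta, "change detected: update interference indicator"
```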
- the clinician has confirmed that further navigation along the desired pathway can continue without fear of tearing or damaging the airway tissues.
- the ability to increase the diameter of the airway may be a result of the actual resilience and compliance of the lungs differing from that employed during the modeling and when assessing interferences in the 3D model tree 100.
- the ability to increase the diameter may be a result of the over pressurization beyond normal breathing pressures.
- the tissues in the periphery of the lungs are quite flexible; thus, some over pressurization can enlarge the airways beyond the size detected during full breath hold in the original imaging that yielded the 3D model tree 100.
- the injection of fluid may be repeated at multiple instances if necessary to navigate to a desired location prior to acquisition of a biopsy or application of a therapy to the target tissue.
- the 3D model tree 100 may not include all the airways of the lungs, and in particular some of the smaller airways in the periphery closer to the target tissue. Nonetheless, it may be necessary or desirable to navigate the sEWC 14 or catheter 70 through these airways to reach the target tissue. Since these airways are not part of the 3D model tree 100, no interference indication may have been generated for these airways, and thus no interference indicator (e.g., bar graph 110) is available for these sections of the luminal network.
- the interference determinations may be made in real-time during the navigation.
- the navigation images are captured by the camera 74 and based on perceived changes in the images an assessment can be made regarding the diameter of the airway.
- These real-time assessments of the images captured by the camera 74 can then be presented on the UI 26.
- the inflating fluid may be employed where the clinician requires further assurances that the sEWC 14 or catheter 70 can be navigated within an airway.
- much of the navigation described above may be autonomous or semi-autonomous navigation where the robotic surgical system 600 is employed to advance the sEWC 14 or catheter 70 through the airways of the patient.
- navigation can be autonomous until arriving at a location where the expected interference exceeds a threshold.
- the application analyzing the images from the camera 74 may assess actual interference and where appropriate inject fluid to over pressurize the airway and assess the in-situ interference of the airways.
- the interference may be displayed on the user interface (e.g., bar graph 110). If, however, it is determined that the measured in-situ interference in the captured images is below the threshold, such that further advancement of the sEWC 14 or catheter 70 can be undertaken, the interference indicator (e.g., bar graph 110) may be updated and navigation continued.
- further navigation may require confirmation be received via the UI 26 from the clinician that the observed interference in the images is below the threshold and that further navigation is desired. Still further, the further navigation may
- the clinician is provided with interference information regarding the forces applied to the tissue by the sEWC 14 or catheter 70. This information informs the clinician on how, or whether, they wish to proceed with navigation or conduct a biopsy or therapy procedure from the navigated-to location. Further, the modeled interferences can be updated and augmented based on in-situ acquired images from within the luminal network. Utilizing these tools, the clinician can be provided with accurate and actionable information to determine whether further navigation is possible without fear of damaging patient tissues.
- computer-readable media can be any available media that can be accessed by the processor 30. That is, computer-readable storage media may include non-transitory, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as, for example, computer-readable instructions, data structures, program modules, or other data.
- computer-readable storage media may include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD, Blu-Ray or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information, and which may be accessed by the workstation 20.
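The camera-based airway sizing described above (comparing changes in the field of view of the camera 74 against a known advance of the sEWC 14 or catheter 70) can be sketched under a simple pinhole-camera assumption. The function name, the two-frame formulation, and the pixel/focal-length parameterization below are illustrative assumptions, not the disclosed method:

```python
def estimate_airway_diameter_mm(s1_px, s2_px, advance_mm, focal_px):
    """Estimate airway diameter from two frames captured by the camera.

    Pinhole model: a feature of true size D at distance z appears with
    pixel size s = focal_px * D / z. After advancing the catheter by
    advance_mm toward the feature, solving the two equations gives
    D = advance_mm * s1 * s2 / (focal_px * (s2 - s1)).
    """
    if s2_px <= s1_px:
        raise ValueError("feature should appear larger after advancing")
    return advance_mm * s1_px * s2_px / (focal_px * (s2_px - s1_px))
```

For example, under this model a feature growing from 100 px to 150 px over a 10 mm advance with a 500 px focal length implies a 6 mm airway.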
- a surgical system comprising: a catheter; a workstation operably coupled to the catheter, the workstation including processing means configured to: generate a 3D model of a luminal network of a patient’s lungs; identify target tissue in the generated 3D model; receive catheter information; display, on a user interface, a pathway through the luminal network to the identified target tissue; determine an amount of interference between the catheter and airways of the luminal network along the pathway through the luminal network to the identified target tissue; display, on the user interface, a visual indicator of the determined amount of interference; determine a position of the catheter within the luminal network of the patient as the catheter is navigated along the pathway to the identified target tissue; and update, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter within the luminal network.
- the processing means is configured to receive patient information and to modify the determined amount of interference based on the received catheter information and the received patient information.
- processing means is configured to display, on the user interface, the visual indicator of the determined amount of interference in the form of a color gradient superimposed on the pathway through the luminal network.
- processing means is configured to display, on the user interface, the visual indicator of the determined amount of interference in the form of a background color, wherein the color of the background color is updated on the user interface based upon the determined amount of interference at the determined position of the catheter within the luminal network of the patient.
- processing means is configured to display, on the user interface, a visual indicator of the determined amount of interference in the form of a bar graph.
- the processing means is configured to define a cut-off value, wherein the cut-off value is based on a pre-determined amount of interference between the catheter and the airways of the luminal network.
- processing means is configured to identify a position along the pathway through the luminal network where the determined amount of interference is greater than the defined cut-off value.
- processing means is configured to display, on the user interface, a visual indicator of the position of the defined cutoff on the displayed pathway to the identified target tissue.
- processing means is configured to change, on the user interface, a form of the visual indicator of the defined cut-off when the determined position of the catheter corresponds with the determined location of the defined cut-off on the pathway to the identified target tissue.
- processing means is configured to determine an amount of interference between the catheter and the airways of the luminal network based on a minimum bend radius of the catheter.
- a surgical system comprising: a catheter; and a workstation operably coupled to the catheter, the workstation including processing means configured to: generate a 3D model of a luminal network of a patient’s lungs; display, on a user interface, the generated 3D model of the luminal network of the patient’s lungs; receive catheter information; define a cut-off value, wherein the cut-off value is determined based on a pre-determined amount of interference between the catheter and the airways of the luminal network of the patient’s lungs; determine an amount of interference between the catheter and airways of the luminal network of the patient’s lungs; identify portions of airways of the luminal network of the patient’s lungs where the determined amount of interference is greater than the defined cut-off value; and display, on the user interface, a visual indicator of the identified portions of airways of the luminal network where the determined amount of interference is greater than the defined cutoff value.
- processing means is configured to change, on the user interface, a form of the visual indicator of the identified portions of airways based upon a determined position of the catheter within the luminal network of the patient’s lungs.
- processing means is configured to receive patient information and to modify the defined cut-off value based on the received catheter information and the received patient information.
- processing means is configured to define the cut-off value based on a minimum bend radius of the catheter.
- processing means is configured to define the cut-off value based on a determined minimum threshold value for a length Ls of the catheter that is required to be stiffened to treat target tissue identified within the patient’s lungs.
- a method of operating a surgical system comprising: generating a 3D model of a luminal network of a patient’s lungs; identifying target tissue in the generated 3D model; receiving catheter information; displaying, on a user interface, a pathway through the luminal network to the identified target tissue; determining an amount of interference between the catheter and airways of the luminal network along the pathway through the luminal network to the identified target tissue; displaying, on the user interface, a visual indicator of the determined amount of interference; determining a position of the catheter within the luminal network of the patient as the catheter is navigated along the pathway to the identified target tissue; and updating, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter within the luminal network.
- displaying, on the user interface, the visual indicator of the determined amount of interference includes displaying, on the user interface, a color gradient superimposed on the pathway through the luminal network corresponding to the determined amount of interference.
- displaying, on the user interface, the visual indicator of the determined amount of interference includes displaying, on the user interface, a background color, wherein the color of the background color is updated on the user interface based upon the determined amount of interference at the determined position of the catheter within the luminal network of the patient.
- displaying, on the user interface, the visual indicator of the determined amount of interference includes displaying, on the user interface, a bar graph corresponding to the determined amount of interference.
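The indicator-update loop recited in the claims above (determine catheter position, look up the interference along the pathway, refresh the displayed indicator) might be sketched as follows. The sampled interference profile and the color bands are illustrative assumptions, not part of the claims:

```python
# Assumed display bands: percent-of-contact thresholds mapped to colors.
THRESHOLDS = [(25.0, "green"), (75.0, "yellow"), (100.0, "red")]

def update_indicator(catheter_pos_mm, profile):
    """Return (interference_pct, color) for the indicator at the catheter's
    current position along the planned pathway.

    profile is a list of (distance_along_pathway_mm, interference_pct)
    samples, sorted by distance; the nearest sample at or before the
    catheter position is used.
    """
    pct = profile[0][1]
    for distance_mm, sample_pct in profile:
        if distance_mm > catheter_pos_mm:
            break
        pct = sample_pct
    for limit, color in THRESHOLDS:
        if pct <= limit:
            return pct, color
    return pct, "red"  # beyond the last band
```

In this sketch, the workstation would call `update_indicator` each time a new catheter position is determined, and redraw the bar graph or background color from the returned value.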
Abstract
A surgical system includes a catheter and a workstation operably coupled to the catheter, the workstation including processing means configured to generate a 3D model of a luminal network of a patient's lungs, identify target tissue in the generated 3D model, receive catheter information, display, on a user interface, a pathway through the luminal network to the identified target tissue, determine an amount of interference between the catheter and airways of the luminal network along the pathway, display, on the user interface, a visual indicator of the determined amount of interference, determine a position of the catheter within the luminal network of the patient as the catheter is navigated to the identified target tissue, and update, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter.
Description
GUI TO DISPLAY RELATIVE AIRWAY SIZE
BACKGROUND
Technical Field
[0001] The present disclosure relates to the field of navigating medical devices within a patient and, in particular, to planning a pathway through a luminal network of a patient and navigating medical devices to a target.
Description of Related Art
[0002] There are several commonly applied medical methods, such as endoscopic procedures or minimally invasive procedures, for treating various maladies affecting organs including the liver, brain, heart, lungs, gall bladder, kidneys, and bones. Often, one or more imaging modalities, such as magnetic resonance imaging (MRI), ultrasound imaging, computed tomography (CT), cone-beam computed tomography (CBCT), or fluoroscopy (including 3D fluoroscopy) are employed by clinicians to identify and navigate to areas of interest within a patient and ultimately a target for biopsy or treatment. In some procedures, pre-operative scans may be utilized for target identification and intraoperative guidance. However, real-time imaging may be required to obtain a more accurate and current image of the target area. Furthermore, real-time image data displaying the current location of a medical device with respect to the target and its surroundings may be needed to navigate the medical device to the target in a safe and accurate manner (e.g., without causing damage to other organs or tissue).
[0003] For example, an endoscopic approach has proven useful in navigating to areas of interest within a patient. To enable the endoscopic approach, endoscopic navigation systems have been developed that use previously acquired MRI data or CT image data to generate a three-dimensional (3D) rendering, model, or volume of the particular body part, such as the lungs.
[0004] In some applications, the MRI data or CT image data may be acquired during the procedure (perioperatively). The resulting volume generated from the MRI scan or CT scan is then utilized to create a navigation plan to facilitate the advancement of the endoscope (or other suitable medical device) within the patient anatomy to an area of interest. In some cases, the volume generated may be used to update a previously created navigation plan. A locating or tracking system, such as an electromagnetic (EM) tracking system or a fiberoptic shape-sensing system, may be utilized in conjunction with, for example, CT data, to facilitate guidance of the endoscope to the area of interest.
[0005] However, larger access devices may have difficulty navigating to the area of interest if the area of interest is located adjacent to small or narrow airways. As can be appreciated, the difficulty of navigating larger access devices, such as those having a camera, within narrow airways can increase the surgical time required to navigate to the proper position relative to the area of interest. This can lead to navigational inaccuracies or to the use of fluoroscopy, which entails additional set-up time and radiation exposure.
SUMMARY
[0006] A surgical system includes a catheter and a workstation operably coupled to the catheter, the workstation including processing means configured to generate a 3D model of a luminal network of a patient’s lungs, identify target tissue in the generated 3D model, receive catheter information, display, on a user interface, a pathway through the luminal network to the identified target tissue, determine an amount of interference between the catheter and airways of the luminal network along the pathway through the luminal network to the identified target tissue, display, on the user interface, a visual indicator of the determined amount of interference, determine a position of the catheter within the luminal network of the patient as the catheter is navigated along the pathway to the identified target tissue, and update, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter within the luminal network.
[0007] In aspects, the processing means may be configured to receive patient information and to modify the determined amount of interference based on the received catheter information and the received patient information.
[0008] In other aspects, the processing means may be configured to display, on the user interface, the visual indicator of the determined amount of interference in the form of a color gradient superimposed on the pathway through the luminal network.
[0009] In certain aspects, the processing means may be configured to display, on the user interface, the visual indicator of the determined amount of interference in the form of a background color, wherein the color of the background color is updated on the user interface based upon the determined amount of interference at the determined position of the catheter within the luminal network of the patient.
[0010] In other aspects, the processing means may be configured to display, on the user interface, a visual indicator of the determined amount of interference in the form of a bar graph.
[0011] In aspects, the processing means may be configured to define a cut-off value, wherein the cut-off value is based on a pre-determined amount of interference between the catheter and the airways of the luminal network.
[0012] In certain aspects, the processing means may be configured to identify a position along the pathway through the luminal network wherein the determined amount of interference is greater than the defined cut-off value.
[0013] In other aspects, the processing means may be configured to display, on the user interface, a visual indicator of the position of the defined cut-off value on the displayed pathway to the identified target tissue.
[0014] In aspects, the processing means may be configured to change, on the user interface, a form of the visual indicator of the defined cut-off when the determined position of the catheter corresponds with the determined location of the defined cut-off on the pathway to the identified target tissue.
[0015] In certain aspects, the processing means may be configured to determine an amount of interference between the catheter and the airways of the luminal network based on a minimum bend radius of the catheter.
[0016] In accordance with another aspect of the disclosure, a surgical system includes a catheter and a workstation operably coupled to the catheter, the workstation including processing means configured to generate a 3D model of a luminal network of a patient’s lungs, display, on a user interface, the generated 3D model of the luminal network of the patient’s lungs, receive catheter information, define a cut-off value, wherein the cut-off value is determined based on a pre-determined amount of interference between the catheter and the airways of the luminal network of the patient’s lungs, determine an amount of interference between the catheter and airways of the luminal network of the patient’s lungs, identify portions of airways of the luminal network of the patient’s lungs where the determined amount of interference is greater than the defined cut-off value, and display, on the user interface, a visual indicator of the identified portions of airways of the luminal network where the determined amount of interference is greater than the defined cut-off value.
[0017] In certain aspects, the processing means may be configured to change, on the user interface, a form of the visual indicator of the identified portions of airways based upon a determined position of the catheter within the luminal network of the patient’s lungs.
[0018] In aspects, the processing means may be configured to receive patient information and to modify the defined cut-off value based on the received catheter information and the received patient information.
[0019] In other aspects, the processing means may be configured to define the cut-off value based on a minimum bend radius of the catheter.
[0020] In aspects, the processing means may be configured to define the cut-off value based on a determined minimum threshold value for a length of the catheter that is required to be stiffened to treat target tissue identified within the patient’s lungs.
[0021] In accordance with another aspect of the disclosure, a method of operating a surgical system includes generating a 3D model of a luminal network of a patient’s lungs, identifying target tissue in the generated 3D model, receiving catheter information, displaying, on a user interface, a pathway through the luminal network to the identified target tissue, determining an amount of interference between the catheter and airways of the luminal network along the pathway through the luminal network to the identified target tissue, displaying, on the user interface, a visual indicator of the determined amount of interference, determining a position of the catheter within the luminal network of the patient as the catheter is navigated along the pathway to the identified target tissue, and updating, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter within the luminal network.
[0022] In aspects, the method may include receiving patient information and modifying the determined amount of interference based on the received catheter information and the received patient information.
[0023] In other aspects, displaying, on the user interface, the visual indicator of the determined amount of interference may include displaying, on the user interface, a color gradient superimposed on the pathway through the luminal network corresponding to the determined amount of interference.
[0024] In certain aspects, displaying, on the user interface, the visual indicator of the determined amount of interference may include displaying, on the user interface, a background color, wherein the color of the background color is updated on the user interface based upon the determined amount of interference at the determined position of the catheter within the luminal network of the patient.
[0025] In aspects, displaying, on the user interface, the visual indicator of the determined amount of interference may include displaying, on the user interface, a bar graph corresponding to the determined amount of interference.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] Various aspects and embodiments of the disclosure are described hereinbelow with references to the drawings, wherein:
[0027] FIG. 1 is a schematic view of a surgical system provided in accordance with the disclosure;
[0028] FIG. 2 is a perspective view of a distal portion of a catheter of the surgical system of FIG. 1;
[0029] FIG. 3 is a schematic view of a workstation of the surgical system of FIG. 1;
[0030] FIG. 4 is a depiction of a graphical user interface of the surgical system of FIG. 1 illustrating a 3D representation of a patient’s airways and a generated pathway to an area of interest within the patient’s lungs;
[0031] FIG. 5 is an enlarged view of the area of detail indicated in FIG. 4;
[0032] FIG. 6 is a schematic view of the medical device of FIG. 1 illustrating a bend radius of the medical device;
[0033] FIG. 7 is a schematic view of the medical device of FIG. 1 illustrating a length of the medical device that is stiffened within the patient’s airways;
[0034] FIG. 8 is a depiction of the graphical user interface of the surgical system of FIG. 1 illustrating a 3D representation of the patient’s airways depicting a visual indicator based upon potential interference between a medical device of the surgical system of FIG. 1 and the patient’s airways in accordance with the disclosure;
[0035] FIG. 9 is a depiction of the graphical user interface of the surgical system of FIG. 1 illustrating a 3D representation of the patient’s airways and another embodiment of a visual indicator based upon potential interference between the medical device of the surgical system of FIG. 1 and the patient’s airways in accordance with the disclosure;
[0036] FIG. 10 is a depiction of the graphical user interface of the surgical system of FIG. 1 illustrating a 3D representation of the patient’s airways and yet another embodiment of a visual indicator based upon potential interference between the medical device of the surgical system of FIG. 1 and the patient’s airways in accordance with the disclosure;
[0037] FIG. 11A is a flow diagram of a method of indicating potential interference between a medical device and airways of the patient’s luminal network in accordance with the disclosure;
[0038] FIG. 11B is a continuation of the flow diagram of FIG. 11A;
[0039] FIG. 12 is a perspective view of a robotic surgical system of the surgical system of FIG. 1; and
[0040] FIG. 13 is an exploded view of a drive mechanism of an extended working channel of the surgical system of FIG. 1.
DETAILED DESCRIPTION
[0041] The disclosure is directed to a surgical system configured to enable navigation of a medical device through a luminal network of a patient, such as, for example, the lungs. The surgical system generates a 3-dimensional (3D) representation of the airways of the patient using pre-procedure images, such as, for example, CT, CBCT, or MRI images, and identifies anatomical landmarks or target tissue (e.g., bifurcations or lesions) within the 3D representation. The system generates a plurality of pathways to the target tissue through the luminal network of the patient’s lungs. Using properties of the medical device being used to navigate the airways of the luminal network of the patient’s lungs and the properties of the airways themselves, the surgical system determines an amount of interference between the medical device and the airways of the luminal network along the pathway to the target tissue. The amount of interference may be based upon a size of the medical device, a minimum bend radius of the medical device, a determined length of the medical device that is required to be stiffened to treat the target tissue, and combinations thereof.
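As a rough illustration of how such an interference amount might be computed, the sketch below scores a single airway segment by scaling the catheter's outer diameter against the modeled airway inner diameter and applying a minimum-bend-radius check. The names, units, and the scoring rule itself are assumptions for illustration, not the disclosed method:

```python
def segment_interference(catheter_od_mm, min_bend_radius_mm,
                         airway_id_mm, airway_bend_radius_mm):
    """Score one airway segment: 0-100 scales the catheter's outer
    diameter against the modeled airway inner diameter (100 means
    line-to-line contact, above 100 the airway is smaller than the
    catheter); a turn tighter than the catheter's minimum bend radius
    forces the score to at least 100.
    """
    score = 100.0 * catheter_od_mm / airway_id_mm
    if airway_bend_radius_mm < min_bend_radius_mm:
        score = max(score, 100.0)  # catheter cannot follow the turn
    return score
```

Under these assumptions, a 2 mm catheter in a 4 mm airway with a gentle turn scores 50, while the same fit through a turn tighter than the catheter's minimum bend radius scores 100.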
[0042] The surgical system displays the 3D representation of the airways of the patient’s lungs, as well as a selected pathway through the luminal network to the identified target tissue. To aid navigation of the medical device within the luminal network and provide additional information to the clinician performing the procedure, the surgical system displays a visual indicator on the user interface to indicate an amount of interference between the medical device and the airways of the luminal network at locations along the pathway to the target tissue. In embodiments, the visual indicator may be a color gradient that is superimposed on the pathway to the target tissue. The surgical system may assign colors to different zones or amounts of interference along the pathway to the target tissue, such as green to indicate little to no interference, yellow to indicate some interference or interference between the catheter and the airways in a line-to-line orientation, and red to indicate significant interference or airways that are significantly smaller than the medical device. The surgical system may also change a background color of the tree view of the user interface based upon the determined interference at the current location of the medical device within the luminal network or the determined interference immediately distal to the location of the medical device. It is envisioned that the
background may flash, change form, or otherwise quickly change color depending upon the determined amount of interference at the current position of the medical device within the luminal network. Additionally, the surgical system may display a bar graph or other similar indicator on the user interface to indicate the amount of interference between the catheter and the airways of the luminal network.
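The green/yellow/red zoning described above could, as one hypothetical realization, be driven by a simple diameter comparison with a line-to-line tolerance band (the `tol_mm` parameter and function name are assumptions):

```python
def interference_zone(catheter_od_mm, airway_id_mm, tol_mm=0.2):
    """Map a catheter/airway diameter comparison to the display colors
    described above: green for little or no interference, yellow for a
    line-to-line fit, red for an airway significantly smaller than the
    catheter."""
    if airway_id_mm > catheter_od_mm + tol_mm:
        return "green"   # airway comfortably larger than the catheter
    if airway_id_mm >= catheter_od_mm - tol_mm:
        return "yellow"  # line-to-line orientation
    return "red"         # airway significantly smaller than the catheter
```

The same classification could drive the superimposed pathway gradient, the background color, or the bar graph, differing only in how the result is rendered.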
[0043] In embodiments, the surgical system may define a cut-off value corresponding to a maximum allowed amount of interference between the catheter and the airways of the luminal network. The surgical system may display a visual indicator on the displayed 3D representation of the luminal network, such as for example, a line, colors, hash-marks, and other patterns at a determined location where the amount of interference exceeds the cut-off value. The surgical system may cause the visual indicators to change form, change colors, or flash when the determined position of the catheter approaches or coincides with the determined location within the luminal network.
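Locating where the pathway first exceeds the cut-off value might, in a simplified form, reduce to a scan over a sampled interference profile; the profile representation here is an assumption for illustration:

```python
def first_cutoff_index(profile, cutoff_pct):
    """Return the index of the first pathway sample whose interference
    exceeds the cut-off value, or None if the whole pathway stays below
    it. profile is a list of (distance_along_pathway_mm,
    interference_pct) samples ordered from trachea to target."""
    for i, (_, pct) in enumerate(profile):
        if pct > cutoff_pct:
            return i
    return None
```

The returned index would mark where to draw the line, hash-marks, or color change on the displayed 3D representation.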
[0044] As can be appreciated, these and other aspects of the present disclosure enable a clinician to make an informed decision on whether to continue advancing the medical device in a distal direction, whether to navigate into an airway located at a bifurcation, or whether to treat target tissue at a particular location within the luminal network of the patient. Inclusion of these other metrics allows for optimization of the user experience, patient safety, and procedural time efficiency as compared to relying on proximity of the medical device relative to target tissue alone. These and other aspects of the disclosure will be described in further detail hereinbelow. Although generally described with reference to the lung, it is contemplated that the systems and methods described herein may be used with any structure within the patient’s body, such as the liver, kidney, prostate, or gynecological anatomy, amongst others.
[0045] Turning now to the drawings, FIG. 1 illustrates a system 10 in accordance with the disclosure facilitating navigation of a medical device through a luminal network and to an area of interest. As will be described in further detail hereinbelow, the surgical system 10 is generally configured to identify target tissue, automatically register real-time images captured by a surgical instrument to a generated 3-dimensional (3D) model, and navigate the surgical instrument to the target tissue.
[0046] The system 10 includes a catheter guide assembly 12 including an extended working channel (EWC) 14, which may be a smart extended working channel (sEWC) including an electromagnetic (EM) sensor. In one embodiment, the sEWC 14 is inserted into a bronchoscope 16 for access to a luminal network of the patient P. In this manner, the sEWC
14 may be inserted into a working channel of the bronchoscope 16 for navigation through a patient P’s luminal network, such as for example, the lungs. It is envisioned that the sEWC 14 may itself include imaging capabilities via an integrated camera or optics component (not shown) and therefore, a separate bronchoscope 16 is not strictly required. In embodiments, the sEWC 14 may be selectively locked to the bronchoscope 16 using a bronchoscope adapter 16a. In this manner, the bronchoscope adapter 16a is configured to permit motion of the sEWC 14 relative to the bronchoscope 16 (which may be referred to as an unlocked state of the bronchoscope adapter 16a) or inhibit motion of the sEWC 14 relative to the bronchoscope 16 (which may be referred to as a locked state of the bronchoscope adapter 16a). Bronchoscope adapters 16a are currently marketed and sold by Medtronic PLC under the brand names EDGE® Bronchoscope Adapter or the ILLUMISITE® Bronchoscope Adapter and are contemplated as being usable with the disclosure.
[0047] As compared to an EWC, the sEWC 14 may include one or more EM sensors 14a disposed in or on the sEWC 14 at a predetermined distance from the distal end 14b of the sEWC 14. It is contemplated that the EM sensor 14a may be a five degree-of-freedom sensor or a six degree-of-freedom sensor. As can be appreciated, the position and orientation of the EM sensor 14a of the sEWC 14 relative to a reference coordinate system, and thus of a distal portion of the sEWC 14 within an electromagnetic field, can be derived. Catheter guide assemblies 12 are currently marketed and sold by Medtronic PLC under the brand names SUPERDIMENSION® Procedure Kits, ILLUMISITE™ Endobronchial Procedure Kit, ILLUMISITE™ Navigation Catheters, or EDGE® Procedure Kits, and are contemplated as being usable with the disclosure.
[0048] With reference to FIG. 2, a catheter 70, including one or more EM sensors 72, is inserted into the sEWC 14 and selectively locked into position relative to the sEWC 14 such that the sensor 72 extends a predetermined distance beyond a distal tip of the sEWC 14. As can be appreciated, the EM sensor 72 disposed on the catheter 70 is separate from the EM sensor 14a disposed on the sEWC. The EM sensor 72 is disposed on or in the catheter 70 a predetermined distance from a distal end portion 76 of the catheter 70. In this manner, the system 10 is able to determine a position of a distal portion of the catheter 70 within the luminal network of the patient P. It is envisioned that the catheter 70 may be selectively locked relative to the sEWC 14 at any time, regardless of the position of the distal end portion 76 of the catheter 70 relative to the sEWC 14. It is contemplated that the catheter 70 may be selectively locked to a handle 12a of the catheter guide assembly 12 using any suitable means, such as for
example, a snap fit, a press fit, a friction fit, a cam, one or more detents, threadable engagement, or a chuck clamp. It is envisioned that the EM sensor 72 may be a five degree-of-freedom sensor or a six degree-of-freedom sensor. As will be described in further detail hereinbelow, the position and orientation of the EM sensor 72 of the catheter 70 relative to a reference coordinate system, and thus a distal portion of the catheter 70, within an electromagnetic field can be derived.
[0049] At least one camera 74 is disposed on or adjacent a distal end surface 76a of the catheter 70 and is configured to capture, for example, still images, real-time images, or real-time video. Although generally described as being disposed on the distal end surface 76a of the catheter 70, it is envisioned that the camera 74 may be disposed on any suitable location on the catheter 70, such as for example, a sidewall. In embodiments, the catheter 70 may include one or more light sources 80 disposed on or adjacent to the distal end surface 76a of the catheter 70 or any other suitable location (e.g., for example, a side surface or a protuberance). The light source 80 may be or may include, for example, a light emitting diode (LED), an optical fiber connected to a light source that is located external to the patient P, or combinations thereof, and may emit one or more of white, IR, or near infrared (NIR) light. In this manner, the camera 74 may be, for example, a white light camera, IR camera, or NIR camera, a camera that is capable of capturing white light and NIR light, or combinations thereof. In one non-limiting embodiment, the camera 74 is a white light mini complementary metal-oxide semiconductor (CMOS) camera, although it is contemplated that the camera 74 may be any suitable camera, such as for example, a charge-coupled device (CCD), a complementary metal-oxide-semiconductor (CMOS), an N-type metal-oxide-semiconductor (NMOS), and in embodiments, may be an infrared (IR) camera, depending upon the design needs of the system 10. As can be appreciated, the camera 74 captures images of the patient P’s anatomy from a perspective of looking out from the distal end portion 76 of the catheter 70. In embodiments, the camera 74 may be a dual lens camera or a Red Green Blue and Depth (RGB-D) camera configured to identify a distance between the camera 74 and anatomical features within the patient P’s anatomy without departing from the scope of the disclosure.
As described hereinabove, it is envisioned that the camera 74 may be disposed on the catheter 70, the sEWC 14, or the bronchoscope 16.
[0050] In accordance with the disclosure, the sEWC 14 may be deployed through the working channel of a bronchoscope 16. The sEWC 14 may receive a catheter 70 including a sensor 72 and camera 74, or alternatively imaging capabilities (e.g., camera 74) may be directly
built into the sEWC 14 thus obviating the need for the removable catheter 70. The sEWC 14 with or without imaging capabilities may also be utilized with or without the bronchoscope. Thus, in some instances just the sEWC 14 is employed. Still further, the catheter 70 may be deployed either through the sEWC 14, or through the bronchoscope 16 without departing from the scope of the disclosure.
[0051] Continuing with FIG. 2, in embodiments, the catheter 70 may include a working channel 82 defined through a proximal portion (not shown) and the distal end surface 76a, although in embodiments, it is contemplated that the working channel 82 may extend through a sidewall of the catheter 70 depending upon the design needs of the catheter 70. As can be appreciated, the working channel 82 is configured to receive a locatable guide (not shown) or a surgical tool 90, such as for example, a biopsy tool. The catheter 70 includes an inertial measurement unit (IMU) 84 disposed within or adjacent to the distal end portion 76. As can be appreciated, the IMU 84 detects an orientation of the distal end portion 76 of the catheter 70 relative to a reference coordinate frame and detects movement and speed of the distal end portion 76 of the catheter 70 as the catheter 70 is navigated within the patient P’s luminal network. Using the data received from the IMU 84, the system 10 is able to determine alignment and trajectory information of the distal end portion 76 of the catheter 70. In embodiments, the system 10 may utilize the data received from the IMU 84 to determine a gravity vector, which may be used to determine the orientation of the distal end portion 76 of the catheter 70 within the airways of the patient P. Although generally described as using the IMU 84 to detect an orientation and/or movement of the distal end portion 76 of the catheter 70, the instant disclosure is not so limited and may be used in conjunction with flexible sensors, such as for example, fiber-Bragg grating sensors, ultrasonic sensors, without sensors, or combinations thereof. As will be described in further detail hereinbelow, it is contemplated that the devices and systems described herein may be used in conjunction with robotic systems such that robotic actuators drive the sEWC 14 or bronchoscope 16 proximate the target.
[0052] Referring again to FIG. 1, the system 10 generally includes an operating table 52 configured to support a patient P and monitoring equipment 24 coupled to the sEWC 14, the bronchoscope 16, or the catheter 70 (e.g., for example, a video display for displaying the video images received from the video imaging system of the bronchoscope 16 or the camera 74 of the catheter 70), a locating or tracking system 46 including a tracking module 48, a plurality of reference sensors 50 and a transmitter mat 54 including a plurality of incorporated markers, and a workstation 20 having a computing device 22 including software and/or hardware used
to facilitate identification of a target, pathway planning to the target, navigation of a medical device to the target, and/or confirmation and or determination of placement of, for example, the sEWC 14, the bronchoscope 16, the catheter 70, or a surgical tool (e.g., for example, the surgical tool 90), relative to the target.
[0053] The tracking system 46 is, for example, a six degrees-of-freedom electromagnetic locating or tracking system, or other suitable system for determining the position and orientation of, for example, a distal portion of the sEWC 14, the bronchoscope 16, the catheter 70, or a surgical tool, and for performing registration between a detected position of one or more of the EM sensors 14a or 72 and a three-dimensional (3D) model generated from a CT, CBCT, or MRI image scan. The tracking system 46 is configured for use with the sEWC 14 and the catheter 70, and particularly with the EM sensors 14a and 72.
[0054] Continuing with FIG. 1, the transmitter mat 54 is positioned beneath the patient P. The transmitter mat 54 generates an electromagnetic field around at least a portion of the patient P within which the position of the plurality of reference sensors 50 and the EM sensors 14a and 72 can be determined with the use of the tracking module 48. In one non-limiting embodiment, the transmitter mat 54 generates three or more electromagnetic fields. One or more of the reference sensors 50 are attached to the chest of the patient P. In embodiments, coordinates of the reference sensors 50 within the electromagnetic field generated by the transmitter mat 54 are sent to the computing device 22 where they are used to calculate a patient P coordinate frame of reference (e.g., for example, a reference coordinate frame). As will be described in further detail hereinbelow, registration is generally performed using coordinate locations of the 3D model and 2D images from the planning phase, together with the patient P’s airways as observed through the bronchoscope 16 or the catheter 70, allowing the navigation phase to be undertaken with knowledge of the location of the EM sensors 14a and 72. It is envisioned that any one of the EM sensors 14a and 72 may be a single coil sensor that enables the system 10 to identify the position of the sEWC 14 or the catheter 70 within the EM field generated by the transmitter mat 54, although it is contemplated that the EM sensors 14a and 72 may be any suitable sensor and may be a sensor capable of enabling the system 10 to identify the position, orientation, and/or pose of the sEWC 14 or the catheter 70 within the EM field.
[0055] Although generally described with respect to EMN systems using EM sensors, the instant disclosure is not so limited and may be used in conjunction with flexible sensors, such as for example, fiber-Bragg grating sensors, inertial measurement units (IMU), ultrasonic sensors, optical sensors, pose sensors (e.g., for example, ultra-wide band, global positioning system, fiber-Bragg, radio-opaque markers), without sensors, or combinations thereof. It is contemplated that the devices and systems described herein may be used in conjunction with robotic systems such that robotic actuators drive the sEWC 14 or bronchoscope 16 proximate the target.
[0056] In accordance with aspects of the disclosure, the visualization of intra-body navigation of a medical device (e.g., for example, a biopsy tool or a therapy tool) towards a target (e.g., for example, a lesion) may be a portion of a larger workflow of a navigation system. An imaging device 56 (e.g., for example, a CT imaging device, such as for example, a cone-beam computed tomography (CBCT) device, including but not limited to Medtronic PLC’s O-arm™ system) capable of acquiring 2D and 3D images or video of the patient P is also included in this particular aspect of the system 10. The images, sequence of images, or video captured by the imaging device 56 may be stored within the imaging device 56 or transmitted to the computing device 22 for storage, processing, and display. In embodiments, the imaging device 56 may move relative to the patient P so that images may be acquired from different angles or perspectives relative to the patient P to create a sequence of images, such as for example, a fluoroscopic video. The pose of the imaging device 56 relative to the patient P while capturing the images may be estimated via the markers incorporated with the transmitter mat 54. The markers are positioned under the patient P, between the patient P and the operating table 52, and between the patient P and a radiation source or a sensing unit of the imaging device 56. The markers incorporated with the transmitter mat 54 may be two separate elements which may be coupled in a fixed manner or alternatively may be manufactured as a single unit. It is contemplated that the imaging device 56 may include a single imaging device or more than one imaging device.
[0057] Continuing with FIG. 1 and with additional reference to FIG. 3, the workstation 20 includes a computing device 22 and a display 24 that is configured to display one or more user interfaces 26 and/or 28. The workstation 20 may be a desktop computer or a tower configuration with the display 24 or may be a laptop computer or other computing device. The workstation 20 includes a processor 30 which executes software stored in a memory 32. The memory 32 may store video or other imaging data captured by the bronchoscope 16 or catheter 70 or preprocedure images from, for example, a computed tomography (CT) scan, Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), Cone-beam CT, amongst others. In addition, the memory 32 may store one or more software applications 34 to be executed on the processor 30. Though not explicitly illustrated, the display 24 may be incorporated into a head
mounted display such as an augmented reality (AR) headset such as the HoloLens offered by Microsoft Corp.
[0058] A network interface 36 enables the workstation 20 to communicate with a variety of other devices and systems via the Internet. The network interface 36 may connect the workstation 20 to the Internet via a wired or wireless connection. Additionally, or alternatively, the communication may be via an ad-hoc Bluetooth® or wireless network enabling communication with a wide-area network (WAN) and/or a local area network (LAN). The network interface 36 may connect to the Internet via one or more gateways, routers, and network address translation (NAT) devices. The network interface 36 may communicate with a cloud storage system 38, in which further image data and videos may be stored. The cloud storage system 38 may be remote from or on the premises of the hospital, such as in a control or hospital information technology room. An input module 40 receives input from an input device such as a keyboard, a mouse, voice commands, amongst others. An output module 42 connects the processor 30 and the memory 32 to a variety of output devices such as the display 24. In embodiments, the workstation 20 may include its own display 44, which may be a touchscreen display.
[0059] In a planning or pre-procedure phase, the software application utilizes preprocedure CT image data, either stored in the memory 32 or retrieved via the network interface 36, for generating and viewing a 3D model of the patient P’s anatomy, enabling the identification of target tissue TT on the 3D model (automatically, semi-automatically, or manually), and in embodiments, allowing for the selection of a pathway PW through the patient P’s anatomy to the target tissue, as will be described in further detail hereinbelow. Examples of such an application are the ILOGIC® planning and navigation suites and the ILLUMISITE® planning and navigation suites currently marketed by Medtronic PLC. The 3D model may be displayed on the display 24 or another suitable display associated with the workstation 20, such as for example, the display 44, or in any other suitable fashion. Using the workstation 20, various views of the 3D model may be provided and/or the 3D model may be manipulated to facilitate identification of target tissue TT on the 3D model and/or selection of a suitable pathway PW to the target tissue.
[0060] It is envisioned that the 3D model may be generated by segmenting and reconstructing the airways of the patient P’s lungs to generate a 3D airway tree 100. The reconstructed 3D airway tree 100 includes various branches and bifurcations which, in embodiments, may be labeled using, for example, well accepted nomenclature such as RB1 (right branch 1), LB1 (left branch 1), or B1 (bifurcation 1). In embodiments, the segmentation and labeling of the airways of the patient P’s lungs is performed to a resolution that includes terminal bronchioles having a diameter of less than approximately 1 mm. As can be appreciated, segmenting the airways of the patient P’s lungs to terminal bronchioles improves the accuracy of the registration between the position of the sEWC 14 and catheter 70 and the 3D model, improves the accuracy of the pathway to the target, and improves the ability of the software application to identify the location of the sEWC 14 and catheter 70 within the airways and navigate the sEWC 14 and catheter 70 to the target tissue. Those of skill in the art will recognize that a variety of different algorithms may be employed to segment the CT image data set, including, for example, connected component, region growing, thresholding, clustering, watershed segmentation, or edge detection. It is envisioned that the entire reconstructed 3D airway tree may be labeled, or only branches or branch points within the reconstructed 3D airway tree that are located adjacent to the pathway to the target tissue.
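By way of non-limiting illustration, the region-growing segmentation mentioned above may be sketched in simplified form as follows. This Python sketch operates on a toy voxel volume represented as nested lists; the function name, connectivity, and intensity threshold are illustrative assumptions only and do not form part of the system 10:

```python
from collections import deque

def region_grow(volume, seed, threshold):
    """Flood-fill a 3D voxel volume of CT-like intensity values, keeping
    6-connected voxels whose intensity is below `threshold` (air-filled
    airway lumens appear dark on CT)."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    grown, frontier = set(), deque([seed])
    while frontier:
        z, y, x = frontier.popleft()
        if (z, y, x) in grown or not (0 <= z < nz and 0 <= y < ny and 0 <= x < nx):
            continue
        if volume[z][y][x] >= threshold:
            continue  # tissue voxel: not part of the airway lumen
        grown.add((z, y, x))
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            frontier.append((z + dz, y + dy, x + dx))
    return grown
```

In practice, a seed would be placed in the trachea and the grown region skeletonized into the branches and bifurcations of the 3D airway tree 100.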
[0061] In embodiments, the software stored in the memory 32 may identify and segment out a targeted critical structure (e.g., for example, blood vessels, lymphatic vessels, lesions, and/or other intrathoracic structures) within the 3D model. It is envisioned that the segmentation process may be performed automatically, manually, or a combination of both. The segmentation process isolates the targeted critical structure from the surrounding tissue in the 3D model and identifies its position within the 3D model. In embodiments, the software application segments the CT images to terminal bronchioles that are less than 1 mm in diameter such that branches and/or bifurcations are identified and labeled deep into the patient P’s luminal network. It is envisioned that this position can be updated depending upon the view selected on the display 24 such that the view of the segmented targeted critical structure may approximate a view captured by the camera 74 of the catheter 70.
[0062] With reference to FIGS. 4-7, using the 3D model tree 100, or in embodiments, the 3D model or combinations thereof, the software stored in the memory 32 generates a plurality of proposed pathways PW through the luminal network of the patient P to the target tissue TT. In this manner, the software stored in the memory 32 identifies the location of the target tissue TT within the patient P’s lungs and identifies a proposed pathway PW starting with airways that are the smallest and nearest to the target tissue TT. The software stored in the memory 32 propagates the pathway PW contiguously through subsequently larger airways until a proposed pathway PW reaches the trachea of the patient P. This process is repeated until a predetermined
number of pathways PW have been generated, or all possible pathways PW have been identified.
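The pathway propagation described above, starting at the small airways nearest the target tissue TT and walking contiguously through successively larger parent airways until the trachea is reached, may be sketched as follows. This is an illustrative Python example assuming the reconstructed airway tree is stored as a child-to-parent mapping; the names and the default pathway limit are assumptions:

```python
def propose_pathways(parent_of, candidate_airways, trachea="trachea", max_pathways=5):
    """For each small airway nearest the target, propagate contiguously
    through successively larger (parent) airways until the trachea is
    reached; stop after a predetermined number of pathways."""
    pathways = []
    for airway in candidate_airways:
        path, node = [airway], airway
        while node != trachea:
            node = parent_of[node]
            path.append(node)
        pathways.append(path)  # ordered distal airway -> trachea
        if len(pathways) == max_pathways:
            break
    return pathways
```

Reversing each returned list yields the trachea-to-target ordering in which the pathway PW would be traversed during navigation.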
[0063] Registration of the patient P’s location on the transmitter mat 54 may be performed by moving the EM sensors 14a and/or 72 through the airways of the patient P. In this manner, the software stored on the memory 32 periodically determines the location of the EM sensors 14a or 72 within the coordinate system as the sEWC 14 or the catheter 70 is moving through the airways using the transmitter mat 54, the reference sensors 50, and the tracking system 46. The location data may be represented on the user interface 26 as a marker or other suitable visual indicator, a plurality of which develop a point cloud having a shape that may approximate the interior geometry of the 3D model. The shape resulting from this location data is compared to an interior geometry of passages of the 3D model, and a location correlation between the shape and the 3D model based on the comparison is determined. In addition, the software identifies non-tissue space (e.g., for example, air filled cavities) in the 3D model. The software aligns, or registers, an image representing a location of the EM sensors 14a or 72 with the 3D model and/or 2D images generated from the 3D model, which are based on the recorded location data and an assumption that the sEWC 14 or the catheter 70 remains located in non-tissue space in a patient P’s airways. In embodiments, a manual registration technique may be employed by navigating the sEWC 14 or catheter 70 with the EM sensors 14a and 72 to prespecified locations in the lungs of the patient P, and manually correlating the images from the bronchoscope 16 or the catheter 70 to the model data of the 3D model. Although generally described herein as utilizing a point cloud (e.g., for example, a plurality of location data points), it is envisioned that registration can be completed utilizing any number of location data points, and in one non-limiting embodiment, may utilize only a single location data point.
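The correlation between the EM-sensor point cloud and the non-tissue space of the 3D model may be illustrated with a deliberately simplified sketch: candidate alignments of the point cloud are scored by how many sensed points fall within the model's airway voxels, and the best-scoring alignment is retained. An actual registration would search over full rigid transforms; this example considers only candidate translations, and all names are illustrative:

```python
def register_point_cloud(sensor_points, airway_voxels, candidate_offsets):
    """Score candidate translations of the EM-sensor point cloud and keep
    the one that places the most points inside the model's non-tissue
    (airway) voxel set."""
    def score(offset):
        ox, oy, oz = offset
        return sum(
            1 for (x, y, z) in sensor_points
            if (round(x + ox), round(y + oy), round(z + oz)) in airway_voxels
        )
    return max(candidate_offsets, key=score)
```

This reflects the stated assumption that the sEWC 14 or catheter 70 remains in non-tissue space: alignments that push sensed locations into tissue score poorly.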
[0064] As can be appreciated, access devices or other medical devices navigated within the luminal network of the patient become unable to navigate within small airways approximating the outer dimension of the medical device. In this manner, the inner dimensions of unnavigable airways vary depending upon the size of the medical device being used. In one non-limiting embodiment, the medical device may have an outer dimension of between about 3.5 mm and 4.2 mm. It is contemplated that the software stored on the memory 32 may automatically identify the medical device being used or the outer dimension of the medical device, or the type and dimensions of the medical device may be manually entered. The software stored on the memory 32 may assign a predetermined threshold value to the inner dimensions of the airways, such as for example, a maximum inner dimension through which the medical device is able to
be navigated. The predetermined threshold value may be an inner dimension that is about equal to or less than the outer dimension of the medical device. In embodiments, the software stored on the memory 32 may apply an offset to the identified maximum inner dimension, such as for example, an inner dimension that is a percentage of the maximum inner dimension, which may be a percentage increase of the maximum inner dimension. In this manner, the software stored on the memory 32 may increase the maximum inner dimension of the airways by a predetermined amount to ensure that airways adjacent to navigable airways are segmented and rendered. In one non-limiting embodiment, the software stored on the memory 32 may reduce the threshold inner dimension of the airways from about 3.5 mm (about equal to the outer dimension of the medical device) to an inner dimension of about 2 mm.
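The diameter comparison with a percentage offset described above may be sketched as follows. The function name, the zone labels, and the default 15% offset are illustrative assumptions; the disclosure contemplates automatic, manual, or user-adjusted thresholds:

```python
def classify_airway(inner_dim_mm, device_od_mm, offset_pct=15.0):
    """Compare an airway's inner dimension against the medical device's
    outer dimension. A percentage offset widens the threshold so that
    borderline airways adjacent to navigable ones are still flagged for
    segmentation and rendering."""
    threshold = device_od_mm * (1.0 + offset_pct / 100.0)
    if inner_dim_mm <= device_od_mm:
        return "unnavigable"   # airway at or below the device's outer dimension
    if inner_dim_mm <= threshold:
        return "borderline"    # within the offset band above the device size
    return "navigable"
```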
[0065] It is envisioned that the software stored on the memory 32 may automatically or manually identify a range of motion or minimum bend radius R of the medical device (FIG. 6). As can be appreciated, the range of motion or minimum bend radius R of the medical device limits or otherwise inhibits the medical device from traversing pathways through the airways of the patient P requiring bends or curves tighter or otherwise smaller than the minimum bend radius R. The software stored on the memory 32 may assign a predetermined threshold value to the bends or curves within the airways of the patient P, such as a minimum bend radius R achievable by the medical device. In embodiments, the software stored on the memory 32 may apply an offset to the predetermined threshold value of the bend radius R, such as for example, a bend radius R that is a percentage of the minimum bend radius R, which may be a percentage increase of the minimum bend radius R. In this manner, the software stored on the memory 32 may increase the minimum bend radius R by a predetermined amount to ensure that the medical device is capable of being navigated through airways of the patient P to the target tissue TT.
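The bend-radius comparison described above may be illustrated by estimating the tightest bend along a pathway whose centerline has been sampled as 3D points: the circumradius of each consecutive point triple approximates the local radius of curvature. This is a geometric sketch under assumed names; the 10% offset mirrors the percentage increase described above:

```python
import math

def min_bend_radius(path_points):
    """Return the tightest bend along a sampled centerline, estimated as
    the minimum circumradius over consecutive point triples."""
    def circumradius(p, q, r):
        a, b, c = math.dist(p, q), math.dist(q, r), math.dist(p, r)
        s = (a + b + c) / 2.0
        area_sq = s * (s - a) * (s - b) * (s - c)  # Heron's formula
        if area_sq <= 1e-12:
            return math.inf  # collinear points: effectively straight
        return (a * b * c) / (4.0 * math.sqrt(area_sq))
    return min(
        circumradius(path_points[i - 1], path_points[i], path_points[i + 1])
        for i in range(1, len(path_points) - 1)
    )

def pathway_traversable(path_points, device_min_radius, offset_pct=10.0):
    """Compare the tightest bend against the device's minimum bend radius R,
    inflated by a percentage offset as described above."""
    return min_bend_radius(path_points) >= device_min_radius * (1.0 + offset_pct / 100.0)
```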
[0066] As can be appreciated, during navigation of the medical device through the patient P’s airways to the target tissue TT and treatment of the target tissue TT, one or more portions along the length of the medical device may require stiffening to accomplish the desired diagnostic or therapeutic task. Stiffening of one or more portions along the length of the medical device requires space within the airways of the patient P to accommodate the stiffening of the medical device (e.g., for example, a linear length). In this manner, the software stored on the memory 32 may require a portion of the medical device adjacent to the distal end portion of the medical device to stiffen, which may abut or otherwise contact walls of the airways of the patient P. It is envisioned that the software stored on the memory 32 may determine a minimum threshold value for the length Ls of the medical device that is required to be stiffened
(FIG. 7), which may take into account an outer dimension of the medical device, an inner dimension of the airways of the patient P, and bends or bifurcations within the airways of the patient P.
[0067] With additional reference to FIG. 8, the software stored on the memory 32 assigns or otherwise correlates the mechanical properties (e.g., for example, an outer dimension or a minimum bend radius) of the medical device to the properties of portions and/or segments of the airways of the patient’s luminal network along the selected pathway PW to the target tissue TT. In embodiments, the software stored on the memory 32 may consider locations within the patient P’s airways where stiffening one or more portions of the medical device is required to treat the target tissue TT. It is envisioned that these values and other relevant data may be stored in a look-up table or automatically assigned to patient P’s anatomy along the pathway PW to the target tissue.
[0068] As can be appreciated, during navigation of the medical device to the target tissue TT, it is often difficult to visualize or otherwise determine whether the medical device is navigable within certain airways of the patient P’s luminal network. In accordance with the disclosure, the software stored on the memory 32 color codes, overlays, or otherwise superimposes on the 3D model tree 100 or other portion of the user interface 26 information or another indicator of the airway geometry in comparison to the above-described medical device properties. In one non-limiting embodiment, the software stored on the memory 32 displays a color gradient or assigns a color to various zones along the pathway PW to the target tissue, such as for example, green for indicating no or a low likelihood of airway interference with the medical device, yellow for indicating potential airway interference with the medical device or line-to-line interference, and red for indicating likely airway interference with the medical device or airways having an inner dimension that is smaller or significantly smaller than an outer dimension of the medical device. The threshold values for each of these zones can be automatically assigned based upon airway geometry and medical device properties, manually entered, or manually adjusted by the user. Although generally described as displaying a color-coded gradient on the pathway PW to the target tissue TT, the disclosure is not so limited. It is envisioned that a bar graph 110 (FIG. 9), gauges, or other indicators may be displayed on the user interface 26 to indicate the likelihood of airway interference with the medical device. In embodiments, a portion or all of the background of the tree view of the user interface 26 may change color or flash as the medical device approaches airways that have a high likelihood of interference with the medical device.
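The three-zone color coding described above may be sketched as a mapping from relative airway/device geometry to a display color. The specific ratio cut points used here are illustrative assumptions only; as stated above, the zone thresholds may be automatically assigned, manually entered, or adjusted by the user:

```python
def interference_color(inner_dim_mm, device_od_mm):
    """Map relative airway/device size to the three display zones:
    green (no or low likelihood of interference), yellow (potential or
    line-to-line interference), red (likely interference)."""
    ratio = inner_dim_mm / device_od_mm
    if ratio > 1.15:    # assumed cut point: airway clearly larger than device
        return "green"
    if ratio >= 1.0:    # line-to-line fit
        return "yellow"
    return "red"        # airway smaller than the device's outer dimension
```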
[0069] With additional reference to FIG. 10, the software stored on the memory 32 utilizes various factors, which in embodiments may be the factors described hereinabove with respect to the properties of the medical device, the airways of the patient P’s luminal network, and the pathway through the luminal network of the patient P to the target tissue TT, to determine a cut-off value or limit as to an amount of interference between the medical device and the airways of the patient P. In this manner, a position within the luminal network of the patient P where distal advancement of the medical device should be terminated or otherwise stopped is determined. An amount of interference between the medical device and the airways that is greater than the cut-off value is indicated as unnavigable by the medical device. It is envisioned that the cut-off may be automatically determined by the software stored on the memory 32, manually entered, or manually adjusted by the user. The software stored on the memory 32 displays an indicator or other visual marker 112 on the 3D model 100 displayed on the user interface 26. In embodiments, the software stored on the memory 32 may cause the indicator 112 to flash, change color, increase in size, or combinations thereof when the medical device approaches the location within the airways of the patient P’s luminal network identified by the indicator 112. It is envisioned that the background of the user interface 26 displaying the tree view may flash or otherwise change color, and in embodiments, a warning or other message (not shown) may be displayed on the user interface 26 to indicate to the user that the medical device is approaching or is located at the cut-off. In embodiments, airways of the luminal network where the determined amount of interference is greater than the cut-off value may be indicated using colors or other infills, such as for example, hash-marks and other patterns.
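The placement of the cut-off marker along the pathway may be sketched as finding the first position at which the computed interference exceeds the cut-off value. This is an illustrative sketch assuming the interference has already been computed per position along the selected pathway PW:

```python
def find_cutoff(interference_along_path, cutoff_value):
    """Return the index of the first position along the selected pathway
    where the computed interference exceeds the cut-off, i.e. where distal
    advancement should be stopped; None if the whole pathway is navigable."""
    for i, amount in enumerate(interference_along_path):
        if amount > cutoff_value:
            return i
    return None
```

The returned index would correspond to the location of the visual marker 112 on the 3D model displayed on the user interface 26.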
[0070] With reference to FIGS. 11A and 11B, a method of indicating potential interference between a medical device and airways of the patient’s luminal network is described and generally identified by reference numeral 200. Initially, at step 202, the patient P is imaged and the captured images are stored in the memory 32. At step 204, the software stored in the memory 32 generates a 3D representation of the patient P’s airways. At step 206, target tissue TT is identified in the generated 3D representation of the patient P’s airways. Optionally, at step 208, patient P information, such as for example, the type of procedure being performed, patient P history, and the volume of the target tissue, is received. Optionally, in parallel, at step 210, medical device information, such as the type of medical device used to navigate to the target tissue TT, the type of surgical tool used to treat or sample the target tissue TT, and the size of the medical device used to navigate to the target tissue TT, is received. At step 212, the software stored in the memory 32 generates proposed pathways to the target tissue TT through
the luminal network of the patient P. At step 214, the software stored in the memory 32 analyzes the generated proposed pathways PW to calculate an amount of, or a likelihood of, interference between the airways of the patient P and the surgical tool used to treat or sample the target tissue TT along the pathway PW to the target tissue TT. Optionally, at step 216, the software stored in the memory 32 modifies the calculated amount of, or likelihood of, interference based on the medical device information received at step 210. Optionally, at step 218, the software stored in the memory 32 may modify the calculated amount of, or likelihood of, interference based on the patient P information received at step 208 in addition to, or in lieu of, using the medical device information. At step 220, a pathway PW to the target tissue TT is selected and an indicator of the amount of, or likelihood of, interference between the surgical tool and the airways of the luminal network of the patient P along the selected pathway PW is displayed on the user interface 26. Optionally, at step 222, a cut-off past which the surgical tool should not be navigated on the selected pathway PW to the target tissue TT may be defined, and at step 224, the defined cut-off may be displayed on the user interface 26. At step 226, the surgical tool is navigated to the target tissue TT along the selected pathway PW. In parallel, at step 228, the displayed indicator of interference between the airways of the patient P and the surgical tool is updated based upon the position of the surgical tool within the airways of the patient P. At step 230, it is determined whether the surgical tool is located adjacent to the target tissue TT or has reached the defined cut-off along the pathway PW to the target tissue TT. If it is determined that the surgical tool is not located adjacent to the target tissue TT or the surgical tool has not reached the defined cut-off, the method returns to steps 226 and 228.
If it is determined that the surgical tool is located adjacent to the target tissue TT or the surgical tool has reached the defined cut-off, the method ends at step 232. As can be appreciated, the above-described method may be repeated as many times as necessary depending upon the needs of the user or the procedure being performed.
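The loop of steps 212 through 232 can be summarized in a brief sketch. This is a minimal illustration only: the function names, the occupancy-ratio model of interference (tool diameter divided by airway diameter), and the 0.9 cut-off threshold are assumptions for demonstration, not the patented implementation.

```python
# Illustrative sketch of steps 212-232 of method 200. The interference
# model (tool diameter / airway diameter) and the threshold are assumed
# for demonstration; they are not taken from the disclosure.

def interference_along(airway_diameters_mm, tool_diameter_mm):
    """Step 214: per-segment interference as the fraction of airway occupied."""
    return [min(tool_diameter_mm / d, 1.0) for d in airway_diameters_mm]

def select_cutoff(interference, threshold=0.9):
    """Step 222: index of the first segment whose interference exceeds the threshold."""
    for i, f in enumerate(interference):
        if f > threshold:
            return i
    return len(interference)  # no cut-off needed; the tool can reach the target

def navigate(airway_diameters_mm, tool_diameter_mm, threshold=0.9):
    """Steps 226-232: advance segment by segment, updating the indicator."""
    interference = interference_along(airway_diameters_mm, tool_diameter_mm)
    cutoff = select_cutoff(interference, threshold)
    indicator_log = []
    for pos in range(len(airway_diameters_mm)):
        if pos >= cutoff:                           # step 230: cut-off reached
            break
        indicator_log.append((pos, interference[pos]))  # step 228: indicator update
    return cutoff, indicator_log

# Airways narrow toward the periphery; a 2 mm tool exceeds the threshold
# once the airway narrows to 1.8 mm.
cutoff, log = navigate([8.0, 5.0, 3.0, 1.8, 1.5], 2.0)
```

A real system would replace the diameter list with positions tracked along the selected pathway PW and drive the UI indicator from the logged values.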
[0071] Turning to FIGS. 12 and 13, it is envisioned that the system 10 may include a robotic surgical system 600 having a drive mechanism 602 including a robotic arm 604 operably coupled to a base or cart 606, which may, in embodiments, be the workstation 20. The robotic arm 604 includes a cradle 608 that is configured to receive a portion of the sEWC 14. The sEWC 14 is coupled to the cradle 608 using any suitable means (e.g., straps, mechanical fasteners, and/or couplings). It is envisioned that the robotic surgical system 600 may communicate with the sEWC 14 via electrical connection (e.g., contacts and/or plugs) or may be in wireless communication with the sEWC 14 to control or otherwise
effectuate movement of one or more motors (FIG. 16) disposed within the sEWC 14 and, in embodiments, may receive images captured by a camera (not shown) associated with the sEWC 14. In this manner, it is contemplated that the robotic surgical system 600 may include a wireless communication system 610 operably coupled thereto such that the sEWC 14 may wirelessly communicate with the robotic surgical system 600 and/or the workstation 20 via Wi-Fi or Bluetooth®, for example. As can be appreciated, the robotic surgical system 600 may omit the electrical contacts altogether and may communicate with the sEWC 14 wirelessly or may utilize both electrical contacts and wireless communication. The wireless communication system 610 is substantially similar to the network interface 36 (FIG. 3) described hereinabove, and therefore, will not be described in detail herein in the interest of brevity. As indicated hereinabove, the robotic surgical system 600 and the workstation 20 may be one and the same, or, in embodiments, may be widely distributed over multiple locations within the operating room. It is contemplated that the workstation 20 may be disposed in a separate location and the display 44 (FIGS. 1 and 3) may be an overhead monitor disposed within the operating room. [0072] As indicated hereinabove, it is envisioned that the sEWC 14 may be manually actuated via cables or push wires, or, for example, may be electronically operated via one or more buttons, joysticks, toggles, or actuators (not shown) operably coupled to a drive mechanism 614 disposed within an interior portion of the sEWC 14 that is operably coupled to a proximal portion of the sEWC 14, although it is envisioned that the drive mechanism 614 may be operably coupled to any portion of the sEWC 14.
The drive mechanism 614 effectuates manipulation or articulation of the distal end of the sEWC 14 in four degrees of freedom or two planes of articulation (e.g., left, right, up, or down), which is controlled by two push-pull wires, although it is contemplated that the drive mechanism 614 may include any suitable number of wires to effectuate movement or articulation of the distal end of the sEWC 14 in greater or fewer degrees of freedom without departing from the scope of the disclosure. It is contemplated that the distal end of the sEWC 14 may be manipulated in more than two planes of articulation, such as, for example, in polar coordinates, or may maintain an angle of the distal end relative to the longitudinal axis of the sEWC 14 while altering the azimuth of the distal end of the sEWC 14, or vice versa. In one non-limiting embodiment, the system 10 may define a vector or trajectory of the distal end of the sEWC 14 in relation to the two planes of articulation.
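The combination of two planes of articulation into a single tip trajectory, and its expression in polar form (tilt from the catheter axis plus azimuth), can be sketched as follows. The small-angle composition used here is an illustrative kinematic model, not the disclosed drive-mechanism kinematics, and all names are assumptions.

```python
import math

def tip_vector(left_right_deg, up_down_deg):
    """Combine two planes of articulation into one tip-direction vector.

    left_right_deg and up_down_deg are the bend angles produced by the two
    push-pull wire pairs (an assumed, simplified kinematic model).
    """
    lr = math.radians(left_right_deg)
    ud = math.radians(up_down_deg)
    # Project each plane's bend onto x/y; keep z along the catheter axis.
    x, y = math.sin(lr), math.sin(ud)
    z = math.cos(lr) * math.cos(ud)
    return x, y, z

def to_polar(x, y, z):
    """Express the tip direction as (tilt from the longitudinal axis, azimuth)."""
    norm = math.sqrt(x * x + y * y + z * z)
    tilt = math.degrees(math.acos(z / norm))
    azimuth = math.degrees(math.atan2(y, x))
    return tilt, azimuth
```

Holding the tilt constant while varying only the azimuth corresponds to the described mode in which the distal-end angle relative to the longitudinal axis is maintained while the azimuth is altered.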
[0073] It is envisioned that the drive mechanism 614 may be cable actuated using artificial tendons or pull wires 616 (e.g., metallic, non-metallic, and/or composite) or may
be a nitinol wire mechanism. In embodiments, the drive mechanism 614 may include motors 618 or other suitable devices capable of effectuating movement of the pull wires 616. In this manner, the motors 618 are disposed within the sEWC 14 such that rotation of an output shaft of the motors 618 effectuates a corresponding articulation of the distal end of the sEWC 14.
[0074] Although generally described as having the motors 618 disposed within the sEWC 14, it is contemplated that the sEWC 14 may not include motors 618 disposed therein. Rather, the drive mechanism 614 disposed within the sEWC 14 may interface with motors 622 disposed within the cradle 608 of the robotic surgical system 600. In embodiments, the sEWC 14 may include a motor or motors 618 for controlling articulation of the distal end 14b of the sEWC 14 in one plane (e.g., left/null or right/null), and the drive mechanism 624 of the robotic surgical system 600 may include at least one motor 622 to effectuate the second axis of rotation and for axial motion. In this manner, the motor 618 of the sEWC 14 and the motors 622 of the robotic surgical system 600 cooperate to effectuate four-way articulation of the distal end of the sEWC 14 and effectuate rotation of the sEWC 14. As can be appreciated, by removing the motors 618 from the sEWC 14, the sEWC 14 becomes less expensive to manufacture and may be a disposable unit. In embodiments, the sEWC 14 may be integrated into the robotic surgical system 600 (e.g., one piece) and may not be a separate component.
[0075] As noted hereinabove, a camera 74 may be inserted into the sEWC 14 or catheter 70, allowing for the camera’s removal during a procedure. Alternatively, the camera 74 may be a permanent component of the sEWC 14 or catheter 70. With the camera 74, images of the luminal network are captured during the navigation of the sEWC 14 or catheter 70 through the luminal network. These images can be presented, for example, on the display 24 (e.g., in UI 26 and/or 28), either separately from or alongside the 3D model tree 100. As a result, the indicator of interference (e.g., bar graph 110) can be actively updated as the sEWC 14 or catheter 70 is advanced through the luminal network, and simultaneously the clinician is able to observe live video images captured by the camera 74 from within the luminal network.
[0076] As will be appreciated, the 3D model tree 100 is generated based on pre-procedural or intraprocedural images (e.g., CT, CBCT, or MR images). Particularly with regard to pre-procedural images, they are typically captured at full breath hold. As a result, the 3D model tree 100 depicts the airways at substantially their maximum normal size under normal operating (breathing) pressures.
[0077] There are two challenges with regard to the 3D model tree 100 based on the captured images. First, as the 3D model tree 100 approaches the periphery of the airways, there may be airways through which the sEWC 14 or catheter 70 must be navigated to reach the target tissue, but which are not represented in the 3D model tree 100. The second challenge is that the 3D model tree 100 is not truly reflective of the actual resilience/compliance of the tissues of the patient. As will be appreciated, every patient is different; thus, while an average value for resilience/compliance can be applied to the 3D model tree 100 when modeling the ability of an airway to expand, this is not necessarily accurate for a particular patient’s airways, and the in-situ interference encountered by the sEWC 14 or catheter 70 when navigating particularly distal airways of the luminal network may differ from the modeled interference (e.g., that used to generate the UI 26 in FIG. 8).
[0078] Aspects of the disclosure are directed to addressing the challenges noted above. With the camera 74 capturing images (e.g., video) as the sEWC 14 or catheter 70 is navigated through the luminal network, these images can be analyzed via an image processing application stored in the memory of the computing device 22. Where the airway currently being navigated was assessed, based on the 3D model tree 100, to accommodate the sEWC 14 or catheter 70 (see, e.g., FIG. 8), the images captured by the camera 74 may be continually analyzed during the navigation to confirm that no buckling, kinking, or change of shape of the airways, as a result of force being transferred from the sEWC 14 or catheter 70 to the airway wall, is observed or detected. Where a change in shape of the airway wall is detected by the application in the images acquired by the camera 74, the interference indicator (e.g., bar graph 110) can be updated in the UI 26 to display the image-detected interference, which may not necessarily correspond with the 3D model tree 100 based interference (e.g., as depicted in FIG. 8). In this manner, the modeled interferences serve as a guide to the clinician but can be confirmed or corrected using the image analysis during the navigation with the sEWC 14 or catheter 70 in the luminal network. As will be appreciated, this process may run continually in the background during navigation, or at certain parts of the navigation (e.g., where interference is expected), and one or more indicators of the detected interference or detected lack of interference can be presented to the clinician. Further, with the images displayed, the clinician is free to make their own assessments regarding changes in shape of the airway and the interference.
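A simple form of such shape-change detection compares the visible lumen between a baseline frame and the current frame. The sketch below is an illustrative stand-in for the image processing application: it operates on pre-segmented binary lumen masks, and the 20% shrink threshold is an assumption; a real application would segment the lumen from live video and use a richer shape measure.

```python
def lumen_area(mask):
    """Count lumen pixels in a binary frame mask (1 = open airway lumen)."""
    return sum(sum(row) for row in mask)

def detect_shape_change(baseline_mask, current_mask, shrink_threshold=0.2):
    """Flag possible buckling or kinking when the visible lumen shrinks.

    Illustrative stand-in for the image-processing application described
    in paragraph [0078]; thresholds and mask representation are assumed.
    """
    base = lumen_area(baseline_mask)
    cur = lumen_area(current_mask)
    if base == 0:
        return True  # no visible lumen at baseline: treat as interference
    shrink = 1.0 - cur / base
    return shrink > shrink_threshold

open_frame = [[1] * 4 for _ in range(4)]      # 16 lumen pixels visible
pinched_frame = [[1, 1, 0, 0] for _ in range(4)]  # lumen half-occluded
```

When `detect_shape_change` returns true, the UI-side interference indicator (e.g., bar graph 110) would be updated with the image-detected interference.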
[0079] In a further aspect of the disclosure, in addition to detecting kinking or other changes in shape of the airway, the image processing application may make a determination of
the actual size of the airway being navigated. This may be done by comparing changes in the field of view of the camera 74 as the sEWC 14 or catheter 70 is being navigated into the luminal network. Comparing these changes against the known diameter of the sEWC 14 or catheter 70 provides a basis for estimating the actual diameter of the airway. In some instances, the diameter of the airway or lumen may be displayed on the UI 26, 28 along with, or as an alternative to, the interference indicator.
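One way to realize this comparison is to use the catheter's known diameter as an in-image scale reference. The sketch below is a deliberate simplification of the field-of-view comparison described above: it assumes the catheter body is visible in the frame at roughly the same depth as the airway wall, and all names are illustrative.

```python
def airway_diameter_mm(lumen_width_px, catheter_width_px, catheter_diameter_mm):
    """Scale the apparent lumen width by the known catheter diameter.

    Assumes the catheter is visible in the frame and lies roughly in the
    same image plane as the airway wall -- an illustrative simplification
    of the field-of-view comparison in paragraph [0079].
    """
    if catheter_width_px <= 0:
        raise ValueError("catheter must be visible to serve as a scale reference")
    mm_per_px = catheter_diameter_mm / catheter_width_px
    return lumen_width_px * mm_per_px
```

For example, a lumen spanning 120 pixels alongside a 2 mm catheter spanning 40 pixels implies a roughly 6 mm airway, a value that could be shown on the UI 26, 28 alongside the interference indicator.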
[0080] In accordance with a further aspect of the disclosure, the sEWC 14 or catheter 70 may include a working channel 82, which may be operably connected to a fluid source (e.g., an air-filled or liquid-filled syringe). Upon reaching a location within the luminal network at which the UI 26 indicates an interference between the sEWC 14 or catheter 70 and the walls of the luminal network, if the clinician desires in-situ confirmation, a dose of a fluid, such as oxygen or saline, can be injected into the luminal network using the fluid source operably connected to the working channel. This injection of fluid may be at a pressure in excess of normal breathing pressures within the lungs. This increase in pressure can temporarily and reversibly expand the airway. Using the captured images from the camera 74, the application can analyze the images before, during, and after the injection of the fluid to detect and quantify the magnitude of the change in diameter of the airway. If no change in airway diameter is detected, then the interference indicator (e.g., bar graph 110) may not be altered, and the clinician has confirmed the 3D model tree 100 based interference determinations. Alternatively, an indication of no change can be displayed in the UI 26, 28 to provide this confirmation. However, where a change in diameter is detected, the interference indicator (e.g., bar graph 110) may be updated to alert the clinician that the expected interference has changed based on the in-situ determination. With this information, the clinician has confirmed that further navigation along the desired pathway can continue without fear of tearing or damaging the airway tissues. As noted above, the ability to increase the diameter of the airway may be a result of the actual resilience and compliance of the lungs differing from that employed during the modeling and when assessing interferences in the 3D model tree 100.
Alternatively, the ability to increase the diameter may be a result of the over-pressurization beyond normal breathing pressures. As is known, the tissues in the periphery of the lungs are quite flexible; thus, some over-pressurization can enlarge the airways beyond the size detected during full breath hold and the original imaging that yielded the 3D model tree 100. As will be appreciated, the injection of fluid may be repeated at multiple instances if necessary to navigate to a desired location prior to acquisition of a biopsy or application of a therapy to the target tissue.
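The before/during/after analysis of the fluid injection in paragraph [0080] amounts to a small classification over three diameter measurements. The sketch below is an assumed formulation: the category names and the 0.1 mm change threshold are illustrative, not taken from the disclosure.

```python
def compliance_check(d_before_mm, d_during_mm, d_after_mm, min_change_mm=0.1):
    """Classify the airway response to a fluid bolus (sketch of paragraph [0080]).

    Returns 'no_change' when the diameters match the 3D-model-based
    expectation (indicator left as-is), 'compliant' when the airway expanded
    under injection and recovered afterward (indicator updated; navigation
    may continue), and 'persistent' when the change did not reverse.
    """
    expanded = (d_during_mm - d_before_mm) > min_change_mm
    recovered = abs(d_after_mm - d_before_mm) <= min_change_mm
    if not expanded:
        return "no_change"
    return "compliant" if recovered else "persistent"
```

A 1.8 mm airway that opens to 2.4 mm during injection and relaxes back to about 1.9 mm would be classified as compliant, suggesting the modeled interference overstates the in-situ interference.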
[0081] In accordance with a further aspect of the disclosure, in some instances the 3D model tree 100 may not include all the airways of the lungs, and in particular some of the smaller airways in the periphery closer to the target tissue. Nonetheless, it may be necessary or desirable to navigate the sEWC 14 or catheter 70 through these airways to reach the target tissue. Since these airways are not part of the 3D model tree 100, no interference indication may have been generated for these airways, and thus no interference indicator (e.g., bar graph 110) is available for these sections of the luminal network. Utilizing the camera 74 and the image processing applications, the interference determinations may be made in real-time during the navigation. During the navigation, images are captured by the camera 74, and based on perceived changes in the images an assessment can be made regarding the diameter of the airway. These real-time assessments of the images captured by the camera 74 can then be presented on the UI 26. In addition, the inflating fluid may be employed where the clinician requires further assurances that the sEWC 14 or catheter 70 can be navigated within an airway. [0082] As will be appreciated, much of the navigation described above may be autonomous or semi-autonomous navigation where the robotic surgical system 600 is employed to advance the sEWC 14 or catheter 70 through the airways of the patient. In one aspect of the disclosure, navigation can be autonomous until arriving at a location where the expected interference exceeds a threshold. Upon arriving at the location at which the threshold is reached, the application analyzing the images from the camera 74 may assess actual interference and, where appropriate, inject fluid to over-pressurize the airway and assess the in-situ interference of the airways. Where continued interference above the threshold is detected, the interference may be displayed on the user interface (e.g., bar graph 110).
If, however, it is determined that the measured in-situ interference in the captured images is below the threshold such that further advancement of the sEWC 14 or catheter 70 can be undertaken, the interference indicator (e.g., bar graph 110) may be updated and navigation continued. In some instances, however, further navigation may require that confirmation be received via the UI 26 from the clinician that the observed interference in the images is below the threshold and that further navigation is desired. Still further, the further navigation may
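The autonomous-navigation gating of paragraph [0082] can be summarized as a small decision function. This is a hedged sketch: the return labels are assumptions, and `measure_in_situ` and `confirm` are hypothetical callables standing in for the camera-based reassessment and the clinician's UI confirmation, respectively.

```python
def autonomous_step(modeled_interference, threshold, measure_in_situ, confirm):
    """Gate autonomous advancement on modeled vs. in-situ interference.

    measure_in_situ() stands in for the image-based reassessment (optionally
    after fluid injection); confirm() stands in for clinician confirmation
    via the UI 26. Both are illustrative, assumed interfaces.
    """
    if modeled_interference <= threshold:
        return "advance"                 # autonomous navigation continues
    in_situ = measure_in_situ()          # camera-based reassessment at the threshold
    if in_situ <= threshold:
        return "advance_updated"         # indicator updated; continue autonomously
    return "advance_confirmed" if confirm() else "hold"
```

The robotic surgical system 600 would call such a function before each advancement step, pausing whenever the result is to hold.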
[0083] Utilizing the methods and tools described herein, particularly during motorized or robotic catheter navigation where traditional haptic feedback is minimized, the clinician is provided with interference information regarding the forces applied to the tissue by the sEWC 14 or catheter 70. This information informs the clinician on how, or whether, they wish to proceed with navigation or conduct a biopsy or therapy procedure from the navigated-to location. Further, the modeled interferences can be updated and augmented based on in-situ acquired images from within the luminal network. Utilizing these tools, the clinician can be provided with accurate and actionable information to determine whether further navigation is possible without fear of damaging patient tissues.
[0084] From the foregoing and with reference to the various figures, those skilled in the art will appreciate that certain modifications can be made to the disclosure without departing from the scope of the disclosure.
[0085] Although the description of computer-readable media contained herein refers to solid-state storage, it should be appreciated by those skilled in the art that computer-readable storage media can be any available media that can be accessed by the processor 30. That is, computer readable storage media may include non-transitory, volatile, and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as, for example, computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media may include RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD, Blu-Ray or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information, and which may be accessed by the workstation 20.
[0086] The invention may be further described by reference to the following numbered paragraphs:
1. A surgical system, comprising: a catheter; a workstation operably coupled to the catheter, the workstation including processing means configured to: generate a 3D model of a luminal network of a patient’s lungs; identify target tissue in the generated 3D model; receive catheter information; display, on a user interface, a pathway through the luminal network to the identified target tissue; determine an amount of interference between the catheter and airways of the luminal network along the pathway through the luminal network to the identified target tissue; display, on the user interface, a visual indicator of the determined amount of interference; determine a position of the catheter within the luminal network of the patient as the catheter is navigated along the pathway to the identified target tissue; and update, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter within the luminal network.
2. The system according to paragraph 1, wherein the processing means is configured to receive patient information and to modify the determined amount of interference based on the received catheter information and the received patient information.
3. The system according to paragraph 1, wherein the processing means is configured to display, on the user interface, the visual indicator of the determined amount of interference in the form of a color gradient superimposed on the pathway through the luminal network.
4. The system according to paragraph 1, wherein the processing means is configured to display, on the user interface, the visual indicator of the determined amount of interference in the form of a background color, wherein the color of the background color is updated on the user interface based upon the determined amount of interference at the determined position of the catheter within the luminal network of the patient.
5. The system according to paragraph 1, wherein the processing means is configured to display, on the user interface, a visual indicator of the determined amount of interference in the form of a bar graph.
6. The system according to paragraph 1, wherein the processing means is configured to define a cut-off value, wherein the cut-off value is based on a pre-determined amount of interference between the catheter and the airways of the luminal network.
7. The system according to paragraph 6, wherein the processing means is configured to identify a position along the pathway through the luminal network where the determined amount of interference is greater than the defined cut-off value.
8. The system according to paragraph 7, wherein the processing means is configured to display, on the user interface, a visual indicator of the position of the defined cutoff on the displayed pathway to the identified target tissue.
9. The system according to paragraph 8, wherein the processing means is configured to change, on the user interface, a form of the visual indicator of the defined cut-off
when the determined position of the catheter corresponds with the determined location of the defined cut-off on the pathway to the identified target tissue.
10. The system according to paragraph 1, wherein the processing means is configured to determine an amount of interference between the catheter and the airways of the luminal network based on a minimum bend radius of the catheter.
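The minimum-bend-radius criterion of paragraph 10 can be illustrated with a short sketch: a turn tighter than the catheter's minimum bend radius cannot be followed without loading the airway wall. The 0-to-1 score formulation below is an assumption for illustration, not the claimed computation.

```python
def bend_interference(airway_bend_radius_mm, catheter_min_bend_radius_mm):
    """Interference contribution from pathway curvature.

    Returns 0.0 when the airway's bend is gentler than the catheter's
    minimum bend radius, rising toward 1.0 as the required bend tightens
    (an assumed, illustrative scoring of paragraph 10's criterion).
    """
    if airway_bend_radius_mm >= catheter_min_bend_radius_mm:
        return 0.0
    return 1.0 - airway_bend_radius_mm / catheter_min_bend_radius_mm
```

Such a score could be combined with a diameter-based occupancy score before being rendered on the visual indicator.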
11. A surgical system, comprising: a catheter; and a workstation operably coupled to the catheter, the workstation including processing means configured to: generate a 3D model of a luminal network of a patient’s lungs; display, on a user interface, the generated 3D model of the luminal network of the patient’s lungs; receive catheter information; define a cut-off value, wherein the cut-off value is determined based on a pre-determined amount of interference between the catheter and the airways of the luminal network of the patient’s lungs; determine an amount of interference between the catheter and airways of the luminal network of the patient’s lungs; identify portions of airways of the luminal network of the patient’s lungs where the determined amount of interference is greater than the defined cut-off value; and display, on the user interface, a visual indicator of the identified portions of airways of the luminal network where the determined amount of interference is greater than the defined cutoff value.
12. The system according to paragraph 11, wherein the processing means is configured to change, on the user interface, a form of the visual indicator of the identified portions of airways based upon a determined position of the catheter within the luminal network of the patient’s lungs.
13. The system according to paragraph 11, wherein the processing means is configured to receive patient information and to modify the defined cut-off value based on the received catheter information and the received patient information.
14. The system according to paragraph 11, wherein the processing means is configured to define the cut-off value based on a minimum bend radius of the catheter.
15. The system according to paragraph 11, wherein the processing means is configured to define the cut-off value based on a determined minimum threshold value for a length Ls of the catheter that is required to be stiffened to treat target tissue identified within the patient’s lungs.
16. A method of operating a surgical system, comprising: generating a 3D model of a luminal network of a patient’s lungs; identifying target tissue in the generated 3D model; receiving catheter information; displaying, on a user interface, a pathway through the luminal network to the identified target tissue; determining an amount of interference between the catheter and airways of the luminal network along the pathway through the luminal network to the identified target tissue; displaying, on the user interface, a visual indicator of the determined amount of interference; determining a position of the catheter within the luminal network of the patient as the catheter is navigated along the pathway to the identified target tissue; and updating, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter within the luminal network.
17. The method according to paragraph 16, further comprising receiving patient information and modifying the determined amount of interference based on the received catheter information and the received patient information.
18. The method according to paragraph 16, wherein displaying, on the user interface, the visual indicator of the determined amount of interference includes displaying, on the user interface, a color gradient superimposed on the pathway through the luminal network corresponding to the determined amount of interference.
19. The method according to paragraph 16, wherein displaying, on the user interface, the visual indicator of the determined amount of interference includes displaying, on the user interface, a background color, wherein the color of the background color is updated on the user interface based upon the determined amount of interference at the determined position of the catheter within the luminal network of the patient.
20. The method according to paragraph 16, wherein displaying, on the user interface, the visual indicator of the determined amount of interference includes displaying, on the user interface, a bar graph corresponding to the determined amount of interference.
Claims
1. A surgical system, comprising: a catheter; a workstation operably coupled to the catheter, the workstation including processing means configured to: generate a 3D model of a luminal network of a patient’s lungs; identify target tissue in the generated 3D model; receive catheter information; display, on a user interface, a pathway through the luminal network to the identified target tissue; determine an amount of interference between the catheter and airways of the luminal network along the pathway through the luminal network to the identified target tissue; display, on the user interface, a visual indicator of the determined amount of interference; determine a position of the catheter within the luminal network of the patient as the catheter is navigated along the pathway to the identified target tissue; and update, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter within the luminal network.
2. The system according to claim 1, wherein the processing means is configured to receive patient information and to modify the determined amount of interference based on the received catheter information and the received patient information.
3. The system according to claim 1, wherein the processing means is configured to display, on the user interface, the visual indicator of the determined amount of interference in the form of a color gradient superimposed on the pathway through the luminal network.
4. The system according to claim 1, wherein the processing means is configured to display, on the user interface, the visual indicator of the determined amount of interference
in the form of a background color, wherein the color of the background color is updated on the user interface based upon the determined amount of interference at the determined position of the catheter within the luminal network of the patient.
5. The system according to claim 1, wherein the processing means is configured to display, on the user interface, a visual indicator of the determined amount of interference in the form of a bar graph.
6. The system according to claim 1, wherein the processing means is configured to define a cut-off value, wherein the cut-off value is based on a pre-determined amount of interference between the catheter and the airways of the luminal network.
7. The system according to claim 6, wherein the processing means is configured to identify a position along the pathway through the luminal network where the determined amount of interference is greater than the defined cut-off value.
8. The system according to claim 7, wherein the processing means is configured to display, on the user interface, a visual indicator of the position of the defined cut-off on the displayed pathway to the identified target tissue.
9. The system according to claim 8, wherein the processing means is configured to change, on the user interface, a form of the visual indicator of the defined cut-off when the determined position of the catheter corresponds with the determined location of the defined cut-off on the pathway to the identified target tissue.
10. The system according to claim 1, wherein the processing means is configured to determine an amount of interference between the catheter and the airways of the luminal network based on a minimum bend radius of the catheter.
11. A surgical system, comprising: a catheter; and a workstation operably coupled to the catheter, the workstation including processing means configured to:
generate a 3D model of a luminal network of a patient’s lungs; display, on a user interface, the generated 3D model of the luminal network of the patient’s lungs; receive catheter information; define a cut-off value, wherein the cut-off value is determined based on a predetermined amount of interference between the catheter and the airways of the luminal network of the patient’s lungs; determine an amount of interference between the catheter and airways of the luminal network of the patient’s lungs; identify portions of airways of the luminal network of the patient’s lungs where the determined amount of interference is greater than the defined cut-off value; and display, on the user interface, a visual indicator of the identified portions of airways of the luminal network where the determined amount of interference is greater than the defined cut-off value.
12. The system according to claim 11, wherein the processing means is configured to change, on the user interface, a form of the visual indicator of the identified portions of airways based upon a determined position of the catheter within the luminal network of the patient’s lungs.
13. The system according to claim 11, wherein the processing means is configured to receive patient information and to modify the defined cut-off value based on the received catheter information and the received patient information.
14. The system according to claim 11, wherein the processing means is configured to define the cut-off value based on a minimum bend radius of the catheter.
15. A method of operating a surgical system, comprising: generating a 3D model of a luminal network of a patient’s lungs; identifying target tissue in the generated 3D model; receiving catheter information;
displaying, on a user interface, a pathway through the luminal network to the identified target tissue; determining an amount of interference between the catheter and airways of the luminal network along the pathway through the luminal network to the identified target tissue; displaying, on the user interface, a visual indicator of the determined amount of interference; determining a position of the catheter within the luminal network of the patient as the catheter is navigated along the pathway to the identified target tissue; and updating, on the user interface, the visual indicator of the determined amount of interference based upon the determined position of the catheter within the luminal network.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463642367P | 2024-05-03 | 2024-05-03 | |
| US63/642,367 | 2024-05-03 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025231398A1 true WO2025231398A1 (en) | 2025-11-06 |
Family
ID=95899492
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/027553 Pending WO2025231398A1 (en) | 2024-05-03 | 2025-05-02 | Gui to display relative airway size |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025231398A1 (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100121316A1 (en) * | 2007-04-26 | 2010-05-13 | Koninklijke Philips Electronics N.V. | Risk indication for surgical procedures |
| US20180055582A1 (en) * | 2016-08-31 | 2018-03-01 | Covidien Lp | Pathway planning for use with a navigation planning and procedure system |
| US20200054399A1 (en) * | 2017-04-18 | 2020-02-20 | Intuitive Surgical Operations, Inc. | Graphical user interface for monitoring an image-guided procedure |
| WO2022251715A2 (en) * | 2021-05-27 | 2022-12-01 | Covidien Lp | Improved systems and methods of navigating a medical device in a body lumen using fuzzy logic combined with device parameters, direct user inputs, and distributed anonymized data |
Legal Events
- 2025-05-02: PCT application PCT/US2025/027553 filed (published as WO2025231398A1); status: Pending
Similar Documents
| Publication | Title |
|---|---|
| JP7503603B2 | Systems and methods for using registered fluoroscopic images in image-guided surgery |
| US20220346886A1 | Systems and methods of pose estimation and calibration of perspective imaging system in image guided surgery |
| US11622815B2 | Systems and methods for providing proximity awareness to pleural boundaries, vascular structures, and other critical intra-thoracic structures during electromagnetic navigation bronchoscopy |
| EP3164050B1 | Dynamic 3D lung map view for tool navigation inside the lung |
| US20210100627A1 | Systems and methods related to elongate devices |
| CN110087576A | System and method for registering an elongated device to a three-dimensional image in an image-guided procedure |
| JP2020124501A | Systems and methods for visualizing navigation of medical devices relative to targets |
| EP3500159B1 | System for the use of soft-point features to predict respiratory cycles and improve end registration |
| EP3607906A1 | Identification and notification of tool displacement during medical procedure |
| US20230360212A1 | Systems and methods for updating a graphical user interface based upon intraoperative imaging |
| WO2022146918A1 | Systems for dynamic image-based localization |
| EP4271310A1 | Systems for integrating intraoperative image data with minimally invasive medical techniques |
| WO2025231398A1 | Gui to display relative airway size |
| US20250072978A1 | Electromagnetic and camera-guided navigation |
| US20250040995A1 | Updating ENB to CT registration using intra-op camera |
| US20250098937A1 | Autonomous lumen centering of endobronchial access devices |
| WO2025046407A1 | Electromagnetic and camera-guided navigation |
| WO2025114807A1 | Systems and methods for solving camera pose relative to working channel tip |
| WO2025032436A1 | Updating electromagnetic navigation bronchoscopy to computed tomography registration using intra-operative camera |
| US20240358444A1 | Autonomous navigation of an endoluminal robot |
| WO2025175171A1 | Improved path planning and alignment for lung navigation |
| WO2025068848A1 | Autonomous lumen centering of endobronchial access devices |
| WO2025235930A1 | Smart biopsy using magnetic proximity sensing |