
WO2025019377A1 - Autonomous planning and navigation of a continuum robot with voice input - Google Patents


Info

Publication number
WO2025019377A1
Authority
WO
WIPO (PCT)
Prior art keywords
instructions
autonomous navigation
target
target path
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/037930
Other languages
English (en)
Inventor
Franklin King
Nobuhiko Hata
Takahisa Kato
Fumitaro Masaki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Brigham and Womens Hospital Inc
Canon USA Inc
Original Assignee
Brigham and Womens Hospital Inc
Canon USA Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Brigham and Womens Hospital Inc, Canon USA Inc
Publication of WO2025019377A1


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1615 Programme controls characterised by special kind of manipulator, e.g. planar, scara, gantry, cantilever, space, closed chain, passive/active joints and tendon driven manipulators
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00002 Operational features of endoscopes
    • A61B 1/00004 Operational features of endoscopes characterised by electronic signal processing
    • A61B 1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/000094 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00147 Holding or positioning arrangements
    • A61B 1/0016 Holding or positioning arrangements using motor drive units
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/005 Flexible endoscopes
    • A61B 1/0051 Flexible endoscopes with controlled bending of insertion part
    • A61B 1/0052 Constructional details of control elements, e.g. handles
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/267 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the respiratory tract, e.g. laryngoscopes, bronchoscopes
    • A61B 1/2676 Bronchoscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/25 User interfaces for surgical systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A61B 34/32 Surgical robots operating autonomously
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/30 Surgical robots
    • A61B 34/37 Leader-follower robots
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 23/00 Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
    • G02B 23/24 Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
    • G02B 23/2476 Non-optical details, e.g. housings, mountings, supports
    • G02B 23/2484 Arrangements in relation to a camera or imaging device
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 18/00 Surgical instruments, devices or methods for transferring non-mechanical forms of energy to or from the body
    • A61B 2018/00571 Surgical instruments, devices or methods for transferring non-mechanical forms of energy to or from the body for achieving a particular surgical effect
    • A61B 2018/00577 Ablation
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/10 Computer-aided planning, simulation or modelling of surgical operations
    • A61B 2034/107 Visualisation of planned trajectories or target regions
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2051 Electromagnetic tracking systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B 34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • A61B 2034/2046 Tracking techniques
    • A61B 2034/2065 Tracking using image or pattern recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • A61B 2090/374 NMR or MRI
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • A61B 2090/376 Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
    • A61B 2090/3762 Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy using computed tomography systems [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • A61B 2090/378 Surgical systems with images on a monitor during operation using ultrasound
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/361 Image-producing devices, e.g. surgical cameras
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B 1/00 - A61B 50/00, e.g. for luxation treatment or for protecting wound edges
    • A61B 90/36 Image-producing devices or illumination devices not otherwise provided for
    • A61B 90/37 Surgical systems with images on a monitor during operation
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 Program-control systems
    • G05B 2219/30 Nc systems
    • G05B 2219/40 Robotics, robotics mapping to robotics vision
    • G05B 2219/40234 Snake arm, flexi-digit robotic manipulator, a hand at each end

Definitions

  • the present disclosure generally relates to a continuum robot system and more particularly to a continuum robot that is a steerable catheter that can be navigated autonomously, as well as methods and mediums for autonomous navigation.
  • Endoscopy, bronchoscopy, catheterization, and other medical procedures facilitate the ability to look inside a body.
  • a flexible medical tool may be inserted into a patient’s body, and an instrument may be passed through the tool to examine or treat an area inside the body.
  • a bronchoscope is an endoscopic instrument to view inside the airways of a patient.
  • Catheters and other medical tools may be inserted through a tool channel in the bronchoscope to provide a pathway to a target area in the patient for diagnosis, planning, medical procedure(s), treatment, etc.
  • Robotic bronchoscopes, robotic endoscopes, or other robotic imaging devices may be equipped with a tool channel or a camera and biopsy tools, and such devices (or users of such devices) may insert/ retract the camera and biopsy tools to exchange such components.
  • the robotic bronchoscopes, endoscopes, or other imaging devices may be used in association with a display system and a control system.
  • An imaging device may be placed in the bronchoscope, the endoscope, or other imaging device/system to capture images inside the patient and to help control and move the bronchoscope, the endoscope, or the other type of imaging device, and a display or monitor may be used to view the captured images.
  • An endoscopic camera that may be used for control may be positioned at a distal part of a catheter or probe (e.g., at a tip section).
  • the display system may display, on the monitor, an image or images captured by the camera, and the display system may have a display coordinate used for displaying the captured image or images.
  • the control system may control a moving direction of the tool channel or the camera. For example, the tool channel or the camera may be bent according to a control by the control system.
  • the control system may have an operational controller (such as, but not limited to, a joystick, a gamepad, a controller, an input device, etc.), and physicians may rotate or otherwise move the camera, probe, catheter, etc. to control same.
  • Such control methods or systems are limited in effectiveness.
  • While information obtained from an endoscopic camera at a distal end or tip section may help decide which way to move the distal end or tip section, such information does not provide details on how the other bending sections or portions of the bronchoscope, endoscope, or other type of imaging device may move to best assist the navigation.
  • At least one application of looking inside the body relates to lung cancer, which is the most common cause of cancer-related deaths in the United States. It is also a commonly diagnosed malignancy, second only to breast cancer in women and prostate cancer in men. Early diagnosis of lung cancer is shown to improve patient outcomes, particularly for peripheral pulmonary nodules (PPNs). During a procedure, such as a transbronchial biopsy, targeting lung lesions or nodules may be challenging.
  • Electromagnetically Navigated Bronchoscopy (ENB) is increasingly applied in the transbronchial biopsy of PPNs due to its excellent safety profile, with fewer pneumothoraxes, chest tubes, significant hemorrhage episodes, and respiratory failure episodes than a CT-guided biopsy strategy (see e.g., as discussed in C. R. Dalek, et al., J Bronchology Interv Pulmonol, vol. 19, no. 4, pp. 294-303, Oct. 2012, doi: 10.1097/LBR.0B013E318272157D, which is incorporated by reference herein in its entirety).
  • ENB has lower diagnostic accuracy or value due to dynamic deformation of the tracheobronchial tree by bronchoscope maneuvers (see e.g., as discussed in T. Whelan, et al., International Journal of Robotics Research, vol. 35, no. 14, pp. 1697-1716, Dec. 2016, doi: 10.1177/0278364916669237, which is incorporated by reference herein in its entirety) and nodule motion due to the breathing motion of the lung (see e.g., as discussed in A. Chen, et al., Chest, vol. 147, no. 5, pp. 1275-1281, May 2015, doi: 10.1378/CHEST.14-1425, which is incorporated by reference herein in its entirety).
  • Vision-based navigated bronchoscopy (VNB) has been proposed to address the aforementioned issue of CT-to-body divergence (see e.g., as discussed in D. J. Mirota, et al., Annu Rev Biomed Eng, vol. 13, pp. 297-319, Jul. 2011, doi: 10.1146/ANNUREV-BIOENG-071910-124757, which is incorporated by reference herein in its entirety).
  • Vision-based tracking in VNB does not require an electromagnetic tracking sensor to localize the bronchoscope in CT; rather, VNB directly localizes the bronchoscope using the camera view, conceptually removing the chance of CT-to-body divergence.
  • Jaeger, et al. (as discussed in H. A. Jaeger et al., IEEE Trans Biomed Eng, vol. 64, no. 8, pp. 1972-1979, Aug. 2017, doi: 10.1109/TBME.2016.2623383, which is incorporated by reference herein in its entirety) proposed such a method where Jaeger, et al. incorporated a custom tendon-driven catheter design with Electro-magnetic (EM) sensors controlled with an electromechanical drive train.
  • U.S. Pat. Pub 2022/0160433 provides an automatic tool presence and workflow recognition, and states that the UI data can include a broad range of inputs made by the user and captured by the input devices, including buttons, menus, gestures, and voice commands. However, this simply provides voice input and does not enhance the autonomous workflow or overcome the issue of inappropriate user input.
  • At least one imaging, optical, or control device, system, method, and storage medium is provided for controlling one or more endoscopic or imaging devices or systems, for example, by implementing automatic (e.g., robotic) or manual control of each portion or section of the at least one imaging, optical, or control device, system, method, and storage medium to keep track of and to match the state or state(s) of a first portion or section in a case where each portion or section reaches or approaches a same or similar, or approximately same or similar, state or state(s), and to provide a more appropriate navigation of a device (such as, but not limited to, a bronchoscopic catheter being navigated to reach a nodule).
  • an autonomous navigation robot including 1) a perception step, 2) a planning step, and 3) a control step is described.
  • a method and system is provided with a user interface for the user to input commands, which are then reflected in a plan. Since the level of control afforded to the user, as compared to the more automatic navigation of the continuum robot, is not always clear, it can be counterintuitive for users to instruct the system on the intended autonomously navigated route or to make any changes or modifications effectively within the autonomous navigation.
  • a system and method that determines the target paths among the various paths based on user instruction, pre-operative instructions, and the autonomous system.
  • an autonomous navigation robot system having a continuum robot, a camera at the distal end of the continuum robot, one or more actuators to steer and move the continuum robot, (or alternatively actuators for bending motions and one or more motors for linear motion), and a controller.
  • the controller is configured to perform three steps: 1) perception step, 2) planning step, and 3) control step, where each of these three steps may be performed using the images from the camera and without the need for registering the continuum robot with an external image.
  • the continuum robot may be a steerable catheter, such as a bronchoscope.
  • an autonomous navigation robot system comprising: a continuum robot; a camera at the distal end of the continuum robot; one or more actuators to steer and/or move the continuum robot; a user input device; a display; and a controller.
  • the controller is configured to: detect one or more lumens in an image from the camera, and select a target path in the one or more lumens by receiving instructions as to the target path from the user input device.
  • the user input device comprises a voice input device, where the instructions to select a target path are limited to a library of instructions.
  • This library of instructions consists of less than 20, less than 16, less than 12, less than 10, less than 9, less than 8, less than 7, or less than 6 instructions.
  • These instructions preferably include instructions for left, right, up and down or other similar direction-based instructions.
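As a concrete illustration of such a constrained, direction-based command set, the following is a minimal sketch; the vocabulary, function name, and image-coordinate convention are illustrative assumptions rather than the patent's implementation, and the transcript is assumed to come from any off-the-shelf speech recognizer.

```python
# Hypothetical sketch: a small, closed library of voice instructions mapped to
# image-space directions. Utterances outside the library are ignored, which is
# one way to keep spoken input unambiguous during autonomous navigation.

# Unit vectors in image coordinates (x to the right, y downward).
COMMAND_LIBRARY = {
    "left":  (-1.0, 0.0),
    "right": (+1.0, 0.0),
    "up":    (0.0, -1.0),
    "down":  (0.0, +1.0),
    "stop":  None,          # pause autonomous motion
    "go":    None,          # resume autonomous motion
}

def parse_command(transcript: str):
    """Return (command, direction) if the transcript matches the library, else None."""
    word = transcript.strip().lower()
    if word in COMMAND_LIBRARY:
        return word, COMMAND_LIBRARY[word]
    return None  # out-of-library utterances are discarded

if __name__ == "__main__":
    print(parse_command("Left"))    # ('left', (-1.0, 0.0))
    print(parse_command("faster"))  # None (not in the 6-word library)
```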
  • the user input device comprises an input device, such as one with 4-10 buttons.
  • the controller is further configured to show information on the display, such as the detected one or more lumens or an indicator thereof, information of the selected target path, and/or information of the most recent instructions inputted as to the target path.
  • This information may comprise: differentiating between a symbol for the target path and one or more symbols for other available paths based on user instructions; displaying an arrow, triangle, or bar at the edge of the display corresponding to the direction of the direction information; displaying one or more symbols for the paths and concurrently differentiating the symbol for the target path among the paths based on user instructions; and/or displaying an indicator of the direction information.
  • the controller is further configured to command the one or more actuators to move or bend the continuum robot along the selected target path.
  • selecting a target path comprises: receiving direction information from the user input device about a direction of travel for the continuum robot; defining a closest lumen as the lumen from the detected one or more lumens that is closest to the edge of the display that corresponds to the direction of travel; and creating a vector between a current location of the continuum robot and a center of the closest lumen.
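One plausible reading of this selection logic is sketched below, under the assumption that the catheter tip projects to the image center and that lumen centers are given in pixel coordinates; names and conventions are hypothetical.

```python
def select_target_path(lumen_centers, image_size, direction):
    """Pick the detected lumen closest to the display edge indicated by the
    commanded direction, and return a steering vector toward its center.

    lumen_centers: list of (x, y) pixel coordinates of detected lumens
    image_size:    (width, height) of the endoscope image
    direction:     unit vector in image coordinates, e.g. (-1, 0) for "left"
    """
    if not lumen_centers:
        return None

    w, h = image_size
    cx, cy = w / 2.0, h / 2.0  # assume the tip projects to the image center

    # Distance of a lumen center to the edge selected by the direction command.
    def distance_to_edge(pt):
        x, y = pt
        dx, dy = direction
        if dx < 0:
            return x          # left edge
        if dx > 0:
            return w - x      # right edge
        if dy < 0:
            return y          # top edge
        return h - y          # bottom edge

    target = min(lumen_centers, key=distance_to_edge)
    steering_vector = (target[0] - cx, target[1] - cy)
    return target, steering_vector
```

For example, with a "left" command the lumen nearest the left image edge is chosen, and the returned vector points from the image center toward that lumen's center.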
  • the autonomous navigation robot system may also perform a perception step, such as wherein the controller further uses a depth map produced by processing one or more images from the camera; applies thresholding using an automated method; fits a geometric shape (e.g., a circle or an oval) in or on one or more detected objects; and defines one or more targets.
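As one concrete illustration of this perception pipeline (depth map, automatic thresholding, geometric-shape fitting), the sketch below uses OpenCV primitives; the depth source, area cutoff, and circle fitting are assumptions made for illustration and not necessarily the disclosed method.

```python
import cv2
import numpy as np

def perceive_airways(depth_map):
    """Hypothetical perception step: threshold a depth map and fit circles to
    the deep (far-away) regions, which are taken as candidate airway openings.

    depth_map: float32 array, larger values = farther from the camera
               (e.g., the output of a monocular depth-estimation network).
    Returns a list of (center_xy, radius_px) candidate targets.
    """
    # Normalize to 8-bit and apply automatic (Otsu) thresholding.
    depth_u8 = cv2.normalize(depth_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(depth_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Each connected deep region becomes a candidate lumen; fit a circle to it.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for c in contours:
        if cv2.contourArea(c) < 50:          # reject small noise blobs
            continue
        (x, y), r = cv2.minEnclosingCircle(c)
        targets.append(((x, y), r))
    return targets
```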
  • an autonomous navigation robot system comprising: a continuum robot; a camera at the distal end of the continuum robot; one or more actuators to steer and move the continuum robot; a voice input device; a display; and a controller.
  • the controller is configured to: detect one or more lumens, receive a direction information from the voice input device, define a target lumen as the lumen from the detected one or more lumens that is closest to an edge of the display that is defined by the direction information, create a target path between a current location of the continuum robot and a center of the target lumen, and command the one or more actuators to move or bend the continuum robot along the target path.
  • the additional embodiments as disclosed herein above may also be applicable to the autonomous navigation robot system as described in this paragraph.
  • the limited library of instructions may consist of less than 20, less than 16, less than 12, less than 10, less than 9, less than 8, less than 7, or less than 6 instructions. These instructions preferably include instructions for left, right, up, and down or other similar direction-based instructions.
  • the user input device comprises an input device, such as one with 4 - 10 buttons.
  • the method further includes showing information on the display, such as the detected one or more lumen or an indicator thereof; information of the selected target path, and/or information of the most recent instructions inputted as to the target path.
  • This method may additionally comprise: differentiating between a symbol for the target path and one or more symbols for other available paths based on user instructions; displaying an arrow, triangle, or bar at the edge of the display corresponding to the direction of the direction information; displaying one or more symbols for the paths and concurrently differentiating the symbol for the target path among the paths based on user instructions; and/or displaying an indicator of the direction information.
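A rough sketch of such a display overlay is shown below using OpenCV drawing calls; the colors, layout, and the idea of echoing the last command as text are illustrative choices, not the patent's UI.

```python
import cv2

def draw_navigation_overlay(frame, paths, target_idx, direction, last_command):
    """Hypothetical display overlay: draw all detected paths, highlight the
    user-selected target path, show a direction indicator toward the image
    edge, and echo the most recent instruction.

    frame:        BGR endoscope image (numpy array)
    paths:        list of ((x, y), radius) detected lumens
    target_idx:   index of the current target path in `paths`
    direction:    image-space unit vector from the last direction command
    last_command: string such as "left" or "up"
    """
    h, w = frame.shape[:2]
    for i, ((x, y), r) in enumerate(paths):
        color = (0, 255, 0) if i == target_idx else (200, 200, 200)
        cv2.circle(frame, (int(x), int(y)), int(r), color, 2)

    # Arrow pointing toward the image edge corresponding to the commanded direction.
    cx, cy = w // 2, h // 2
    tip = (int(cx + direction[0] * (w // 2 - 10)), int(cy + direction[1] * (h // 2 - 10)))
    cv2.arrowedLine(frame, (cx, cy), tip, (0, 0, 255), 2)

    # Echo the most recent voice/button instruction.
    cv2.putText(frame, f"cmd: {last_command}", (10, 25),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
    return frame
```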
  • the method for autonomous navigation further comprises sending instructions to one or more actuators to steer and/or move the steerable catheter along the target path.
  • an information processing apparatus to control a continuum robot; at least one memory storing instructions; and at least one processor that executes the instructions stored in the memory to cause the automated navigation as discussed herein is provided. It is a further object of the present disclosure to provide a non-transitory computer-readable storage medium storing at least one program for causing a computer to execute a method for controlling a continuum robot to perform automated navigation as discussed herein.
  • FIG. 1 illustrates at least one embodiment of an imaging, continuum robot, or endoscopic apparatus or system in accordance with one or more aspects of the present disclosure.
  • FIG. 2 is a schematic diagram showing at least one embodiment of an imaging, steerable catheter, or continuum robot apparatus or system in accordance with one or more aspects of the present disclosure
  • FIG. 3(a) illustrates at least one embodiment example of a continuum robot and/or medical device that may be used with one or more technique(s), including autonomous navigation technique(s), in accordance with one or more aspects of the present disclosure.
  • Detail A illustrates one guide ring of the steerable catheter.
  • FIGS. 3(b) - 3(c) illustrate one or more principles of catheter or continuum robot tip manipulation by actuating one or more bending segments of a continuum robot or steerable catheter 104 of FIG. 3(a) in accordance with one or more aspects of the present disclosure.
  • FIG. 4 is a schematic diagram showing at least one embodiment of an imaging, continuum robot, steerable catheter, or endoscopic apparatus or system in accordance with one or more aspects of the present disclosure.
  • FIG. 5 is a schematic diagram showing at least one embodiment of a console or computer that may be used with one or more autonomous navigation technique(s) in accordance with one or more aspects of the present disclosure.
  • FIG. 6 is a flowchart of at least one embodiment of a method for planning an operation of at least one embodiment of a continuum robot or steerable catheter apparatus or system in accordance with one or more aspects of the present disclosure.
  • FIG. 7 is a flowchart of at least one embodiment of a method for performing autonomous navigation, movement detection, and/or control for a continuum robot or steerable catheter in accordance with one or more aspects of the present disclosure.
  • FIG. 8(a) shows images of at least one embodiment of an application example of autonomous navigation technique(s) and movement detection for a camera view (left), a depth map (center), and a thresholded image (right) in accordance with one or more aspects of the present disclosure.
  • FIG. 8(b) shows images of one embodiment showing a camera view (left), a semi-transparent color coded depth map overlaid onto a camera view (center) and a thresholded image (right).
  • FIG. 9 is an exemplary image having two airways and indicators of circle fit and target path in accordance with one or more aspects of the present disclosure.
  • FIG. 10 is an exemplary image having two airways and indicators of circle fit and target path in accordance with one or more aspects of the present disclosure.
  • FIG. 11 is a diagram showing two lumens and the threshold.
  • FIG. 12 is a flowchart of at least one embodiment of a method for controlling the steerable catheter in accordance with one or more aspects of the present disclosure.
  • FIG. 13 is a flowchart of at least one embodiment of a method for controlling the steerable catheter, including speed setting, in accordance with one or more aspects of the present disclosure.
  • FIG. 14 is a diagram indicating bending speed and moving speed in accordance with one or more aspects of the present disclosure.
  • FIG. 15 is a flowchart of at least one embodiment of a method for controlling the steerable catheter including setting the threshold, in accordance with one or more aspects of the present disclosure.
  • FIG. 16 is a diagram of an airway with an indication of two thresholds, in accordance with one or more aspects of the present disclosure.
  • FIG. 17 is a flowchart of at least one embodiment of a method for controlling the steerable catheter, including blood detection, in accordance with one or more aspects of the present disclosure.
  • FIG. 18 shows at least one embodiment of control software or a User Interface that may be used with one or more robots, robotic catheters, robotic bronchoscopes, methods, and/or other features in accordance with one or more aspects of the present disclosure.
  • FIGS. 19(a) - 19(b) illustrate at least one embodiment of a bronchoscopic image with detected airways and an estimated depth map (or depth estimation) with or using detected airways, respectively, in one or more bronchoscopic images in accordance with one or more aspects of the present disclosure.
  • FIGS. 20(a) - 20(b) illustrate at least one embodiment of a pipeline that may be used for a bronchoscope, apparatus, device, or system (or used with one or more methods or storage mediums), and a related camera view employing voice recognition, respectively, of the present disclosure in accordance with one or more aspects of the present disclosure.
  • FIGS. 21(a) - 21(c) illustrate a navigation screen for a clinical target location in or at a lesion reached by autonomous driving, a robotic bronchoscope in a phantom having reached the location corresponding to the location of the lesion in an ex vivo setup, and breathing cycle information (FIG. 21(c)) using EM sensors, respectively, in accordance with one or more aspects of the present disclosure.
  • FIGS. 22(a) - 22(c) illustrate views of at least one embodiment of a navigation algorithm performing at various branching points in a phantom, where FIG. 22(a) shows a path on which the target location (dot) was not reached (e.g., the algorithm may not have traversed the last bifurcation where an airway on the right was not detected), where FIG. 22(b) shows a path on which the target location (dot) was successfully reached, and where FIG. 22(c) shows a path on which the target location was also successfully reached in accordance with one or more aspects of the present disclosure.
  • FIGS. 23(a) - 23(b) illustrate graphs showing success at branching point(s) with respect to Local Curvature (LC) and Plane Rotation (PR), respectively, for all data combined in one or more embodiments in accordance with one or more aspects of the present disclosure.
  • FIGS. 24(a) - 24(c) illustrate one or more impacts of breathing motion on a performance of the one or more navigation algorithm(s), where FIG. 24(a) shows a path on which the target location (ex vivo #1 LLL) was reached with and without breathing motion (BM), where FIG. 24(b) shows a path on which the target location (ex vivo #1 RLL) was not reached without BM but was reached with BM, and where FIG. 24(c) shows a path on which the target location (ex vivo #1 RML) was reached without BM but was not reached with BM in accordance with one or more aspects of the present disclosure.
  • FIGS. 25(a) - 25(b) illustrate the box plots for time for the operator or the autonomous navigation to bend the robotic catheter in one or more embodiments and for the maximum force for the operator or the autonomous navigation at each bifurcation point in one or more embodiments in accordance with one or more aspects of the present disclosure.
  • FIGS. 26(a) - 26(d) illustrate one or more examples of depth estimation failure and artifact robustness that may be observed in one or more embodiments in accordance with one or more aspects of the present disclosure.
  • FIGS. 27(a) - 27(b) illustrate graphs for the dependency of the time for a bending command and the force at each bifurcation point, respectively, on the airway generation of a lung in accordance with one or more aspects of the present disclosure.
  • FIG. 1 illustrates a simplified representation of a medical environment, such as an operating room, where a robotic catheter system 100 can be used.
  • FIG. 2 illustrates a functional block diagram of the robotic catheter system 100.
  • FIGS. 3(a) - 3(c) represents the catheter and bending.
  • FIGS. 4 - 5 illustrate logical block diagrams of the robotic catheter system 100.
  • the system 100 includes a system console 102 (computer cart) operatively connected to a steerable catheter 104 via a robotic platform 106.
  • the robotic platform 106 includes one or more than one robotic arm 108 and a linear translation stage 110.
  • a user 112 controls the robotic catheter system 100 via a user interface unit (operation unit) to perform an intraluminal procedure on a patient 114 positioned on an operating table 116.
  • the user interface may include at least one of a main display 118 (a first user interface unit), a secondary display 120 (a second user interface unit), and a handheld controller 124 (a third user interface unit).
  • the main display 118 may include, for example, a large display screen attached to the system console 102 or mounted on a wall of the operating room and may be, for example, designed as part of the robotic catheter system 100 or be part of the operating room equipment.
  • a secondary display 120 is a compact (portable) display device configured to be removably attached to the robotic platform 106. Examples of the secondary display 120 include a portable tablet computer or a mobile communication device (a cellphone).
  • the steerable catheter 104 is actuated via an actuator unit 122.
  • the actuator unit 122 is removably attached to the linear translation stage 110 of the robotic platform 106.
  • the handheld controller 124 may include a gamepad-like controller with a joystick having shift levers and/or push buttons. It may be a one-handed controller or a two-handed controller.
  • the actuator unit 122 is enclosed in a housing having a shape of a catheter handle.
  • One or more access ports 126 are provided in or around the catheter handle. The access port 126 is used for inserting and/or withdrawing end effector tools and/or fluids when performing an interventional procedure of the patient 114.
  • the system console 102 includes a system controller 128 and a display controller 130.
  • the main display 118 may include a conventional display device such as a liquid crystal display (LCD), an OLED display, a QLED display or the like.
  • the main display 118 provides a graphical user interface (GUI) configured to display one or more views. These views include a live view image 132, an intraoperative image 134, a preoperative image 136, and other procedural information 138. Other views that may be displayed include a model view, a navigational information view, and/or a composite view.
  • the live image view 132 may be an image from a camera at the tip of the catheter. This view may also include, for example, information about the perception and navigation of the catheter 104.
  • the preoperative image 136 may include pre-acquired 3D or 2D medical images of the patient acquired by conventional imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), or ultrasound imaging.
  • the intraoperative image 134 may include images used for an image-guided procedure; such images may be acquired by fluoroscopy or CT imaging modalities.
  • Intraoperative image 134 may be augmented, combined, or correlated with information obtained from a sensor, camera image, or catheter data.
  • the sensor may be located at the distal end of the catheter.
  • the catheter tip tracking sensor 140 may be, for example, an electromagnetic (EM) sensor.
  • a catheter tip position detector 142 is included in the robotic catheter system 100; this catheter tip position detector would include an EM field generator operatively connected to the system controller 128.
  • Suitable electromagnetic sensors for use with a steerable catheter are well-known and described, for example, in U.S. Pat. No. 6,201,387 and international publication WO2020194212A1.
  • the diagram of FIG. 2 illustrates that the robotic catheter system 100 includes the system controller 128 operatively connected to the display controller 130, which is connected to the display unit 118, and to the handheld controller 124.
  • the system controller 128 is also connected to the actuator unit 122 via the robotic platform 106, which includes the linear translation stage 110.
  • the actuator unit 122 includes a plurality of motors 144 that control the plurality of drive wires 160. These drive wires travel through the steerable catheter 104.
  • One or more access ports 126 may be located on the catheter.
  • the catheter includes a proximal section 148 located between the actuator and the proximal bending section 152; some of the drive wires 160 terminate at the proximal bending section 152, where they actuate the proximal bending section.
  • Three of the six drive wires 160 continue through the distal bending section 156 where they actuate this section and allow for a range of movement. This figure is shown with two bendable sections (152 and 156). Other embodiments as described herein can have three bendable sections (see FIG. 3). In some embodiments, a single bending section may be provided, or alternatively, four or more bendable sections may be present in the catheter.
  • FIG. 3(a) shows an exemplary embodiment of a steerable catheter 104.
  • the steerable catheter 104 includes a non-steerable proximal section 148, a steerable distal section 150, and a catheter tip 158.
  • the proximal section 148 and the distal bendable section 150 (including 152, 154 and 156) are joined to each other by a plurality of drive wires 160 arranged along the wall of the catheter.
  • the proximal section 148 is configured with thru-holes or grooves or conduits to pass drive wires 160 from the distal section 150 to the actuator unit 122.
  • the distal section 150 is comprised of a plurality of bending segments including at least a distal segment 156, a middle segment 154, and a proximal segment 152. Each bending segment is bent by actuation of at least some of the plurality of drive wires 160 (driving members).
  • the posture of the catheter may be supported by non-illustrated supporting wires (support members) also arranged along the wall of the catheter (see U.S. Pat. Pub. US2021/0308423).
  • the proximal ends of drive wires 160 are connected to individual actuators or motors 144 of the actuator unit 122, while the distal ends of the drive wires 160 are selectively anchored to anchor members in the different bending segments of the distal bendable section 150.
  • Each bending segment is formed by a plurality of ring-shaped components (rings) with thru-holes, grooves, or conduits along the wall of the rings.
  • the ring-shaped components are defined as wire-guiding members 162 or anchor members 164 depending on their function within the catheter.
  • Anchor members 164 are ring-shaped components onto which the distal end of one or more drive wires 160 are attached.
  • Wire-guiding members 162 are ring-shaped components through which some drive wires 160 slide (without being attached thereto).
  • FIG. 3(a) illustrates an exemplary embodiment of a ring-shaped component (a wire-guiding member 162 or an anchor member 164).
  • Each ring-shaped component includes a central opening which forms the tool channel 168, and plural conduits 166 (grooves, sub-channels, or thru-holes) arranged lengthwise equidistant from the central opening along the annular wall of each ring-shaped component.
  • an inner cover such as is described in U.S. Pat. Pub US2021/0369085 and US2022/0126060, may be included to provide a smooth inner channel and provide protection.
  • the non-steerable proximal section 148 is a flexible tubular shaft and can be made of extruded polymer material.
  • the tubular shaft of the proximal section 148 also has a central opening or tool channel 168 and plural conduits 166 along the wall of the shaft surrounding the tool channel 168.
  • An outer sheath may cover the tubular shaft and the steerable section 150. In this manner, at least one tool channel 168 formed inside the steerable catheter 104 provides passage for an imaging device and/or end effector tools from the insertion port 126 to the distal end of the steerable catheter 104.
  • the actuator unit 122 includes one or more servo motors or piezoelectric actuators.
  • the actuator unit 122 bends one or more of the bending segments of the catheter by applying a pushing and/or pulling force to the drive wires 160.
  • each of the three bendable segments of the steerable catheter 104 has a plurality of drive wires 160. If each bendable segment is actuated by three drive wires 160, the steerable catheter 104 has nine drive wires arranged along the wall of the catheter. Each bendable segment of the catheter is bent by the actuator unit 122 by pushing or pulling at least one of these nine drive wires 160. Force is applied to each individual drive wire in order to manipulate/steer the catheter to a desired pose.
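For context on how pushing and pulling three wires produces a commanded bend, the following is the standard constant-curvature tendon-displacement approximation often used for this kind of segment; it is included only as background and is not asserted to be the control law of the disclosed system.

```python
import math

def tendon_displacements(bend_angle, bend_plane, segment_length, wire_radius,
                         wire_angles=(0.0, 2 * math.pi / 3, 4 * math.pi / 3)):
    """Constant-curvature approximation of the push/pull stroke for the three
    drive wires of one bending segment (sign convention assumed: positive = pull).

    bend_angle:     total bending angle of the segment [rad]
    bend_plane:     direction of the bending plane [rad]
    segment_length: arc length of the segment [mm]
    wire_radius:    radial offset of the drive wires from the centerline [mm]
    """
    kappa = bend_angle / segment_length          # curvature of the segment
    return [wire_radius * kappa * segment_length * math.cos(a - bend_plane)
            for a in wire_angles]

# Example: bend one segment by 30 degrees in the +x plane.
strokes = tendon_displacements(math.radians(30), 0.0, 20.0, 1.5)
```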
  • the actuator unit 122 assembled with steerable catheter 104 is mounted on the linear translation stage 110.
  • Linear translation stage 110 includes a slider and a linear motor.
  • the linear translation stage 110 is motorized, and can be controlled by the system controller 128 to insert and remove the steerable catheter 104 to/from the patient’s bodily lumen.
  • An imaging device 170 that can be inserted through the tool channel 168 includes an endoscope camera (videoscope) along with illumination optics (e.g., optical fibers or LEDs).
  • the illumination optics provides light to irradiate the lumen and/or a lesion target which is a region of interest within the patient.
  • End effector tools refer to endoscopic surgical tools including clamps, graspers, scissors, staplers, ablation or biopsy needles, and other similar tools, which serve to manipulate body parts (organs or tumorous tissue) during examination or surgery.
  • the imaging device 170 may be what is commonly known as a chip-on-tip camera and may be color or black-and-white.
  • a tracking sensor 140 (e.g., an EM tracking sensor) is attached to the catheter tip 158.
  • steerable catheter 104 and the tracking sensor 140 can be tracked by the tip position detector 142.
  • the tip position detector 142 detects a position of the tracking sensor 140, and outputs the detected positional information to the system controller 128.
  • the system controller 128 receives the positional information from the tip position detector 142, and continuously records and displays the position of the steerable catheter 104 with respect to the patient’s coordinate system.
  • the system controller 128 controls the actuator unit 122 and the linear translation stage 110 in accordance with the manipulation commands input by the user 112 via one or more of the user interface units (the handheld controller 124, a GUI at the main display 118 or touchscreen buttons at the secondary display 120).
  • FIG. 3(b) and FIG. 3(c) show exemplary catheter tip manipulations by actuating one or more bending segments of the steerable catheter 104.
  • manipulating only the most distal segment 156 of the steerable section changes the position and orientation of the catheter tip 158.
  • manipulating one or more bending segments (152 or 154) other than the most distal segment affects only the position of catheter tip 158, but does not affect the orientation of the catheter tip.
  • In FIG. 3(b), actuation of the distal segment 156 changes the catheter tip from a position P1 having orientation O1, to a position P2 having orientation O2, to a position P3 having orientation O3, to a position P4 having orientation O4, etc.
  • actuation of the middle segment 154 changes the position of the catheter tip 158 from a position P1 having orientation O1 to a position P2 and a position P3 having the same orientation O1.
  • exemplary catheter tip manipulations shown in FIG. 3(b) and FIG. 3(c) can be performed during catheter navigation (i.e., while inserting the catheter through tortuous anatomies).
  • the exemplary catheter tip manipulations shown in FIG. 3(b) and FIG. 3(c) apply namely to the targeting mode applied after the catheter tip has been navigated to a predetermined distance (a targeting distance) from the target.
  • FIG. 4 illustrates that the system controller 128 executes software programs and controls the display controller 130 to display a navigation screen (e.g., a live view image 132) on the main display 118 and/or the secondary display 120.
  • the display controller 130 may include a graphics processing unit (GPU) or a video display controller (VDC).
  • FIG. 5 illustrates components of the system controller 128 and/or the display controller 130.
  • the system controller 128 and the display controller 130 can be configured separately.
  • the system controller 128 and the display controller 130 can be configured as one device.
  • the system controller 128 and the display controller 130 comprise substantially the same components.
  • the system controller 128 may be a computer, where the computer or other system may also include a database and/or another type of memory as well as one or more input devices (e.g., a mouse, a keyboard, a speaker, etc.) that may be connected through an operations interface and/or output devices.
  • the system controller 128 may comprise a processor.
  • the system controller 128 and display controller 130 may include a central processing unit (CPU 182) comprised of one or more processors (microprocessors), a random access memory (RAM 184) module, an input/output (I/O 186) interface, a read only memory (ROM 180), and data storage memory (e.g., a hard disk drive (HDD 188) or a solid state drive (SSD)).
  • the ROM 180 and/or HDD 188 store the operating system (OS) software, and software programs necessary for executing the functions of the robotic catheter system 100 as a whole.
  • the RAM 184 is used as a workspace memory.
  • the CPU 182 executes the software programs developed in the RAM 184.
  • the I/O 186 inputs, for example, positional information to the display controller 130, and outputs information for displaying the navigation screen to the one or more displays (main display 118 and/or secondary display 120).
  • the navigation screen is a graphical user interface (GUI) generated by a software program but, it may also be generated by firmware, or a combination of software and firmware.
  • the system controller 128 may control the steerable catheter 104 based on any known kinematic algorithms applicable to continuum or snake-like catheter robots.
  • the system controller controls the steerable catheter 104 based on an algorithm known as the follow-the-leader (FTL) algorithm.
  • the most distal segment 156 of the steerable section 150 is actively controlled with forward kinematic values, while the middle segment 154 and the proximal segment 152 (following sections) of the steerable catheter 104 move at a first position in the same way as the distal section moved at the first position or a second position near the first position.
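A minimal sketch of one way such follow-the-leader behavior can be bookkept is shown below: the distal command is recorded against insertion depth, and each following segment replays the command recorded at its own current depth. The class and its interface are hypothetical, not the disclosed kinematic algorithm.

```python
import bisect

class FollowTheLeaderBuffer:
    """Hypothetical follow-the-leader (FTL) helper: record the distal segment's
    bending command at each insertion depth, and have the following segments
    replay the command recorded at their own current insertion position.
    Assumes insertion depth is recorded in increasing order.
    """
    def __init__(self):
        self._depths = []    # insertion depths at which commands were recorded
        self._commands = []  # distal bending command (e.g., (angle, plane)) per depth

    def record(self, insertion_depth, distal_command):
        self._depths.append(insertion_depth)
        self._commands.append(distal_command)

    def command_for_offset(self, insertion_depth, segment_offset):
        """Bending command for a following segment located `segment_offset`
        behind the tip, at the current insertion depth."""
        lookup = insertion_depth - segment_offset
        if not self._depths or lookup < self._depths[0]:
            return None  # this segment has not yet reached recorded territory
        i = bisect.bisect_right(self._depths, lookup) - 1
        return self._commands[i]
```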
  • the display controller 130 acquires position information of the steerable catheter 104 from system controller 102. Alternatively, the display controller 130 may acquire the position information directly from the tip position detector 142.
  • the steerable catheter 104 may be a single-use or limited-use catheter device. In other words, the steerable catheter 104 can be attachable to, and detachable from, the actuator unit 122 to be disposable.
  • the display controller 130 can generate and output a live-view image or other view(s) or a navigation screen to the main display 118 and/or the secondary display 120.
  • This view can optionally be registered with a 3D model of a patient’s anatomy (a branching structure) and the position information of at least a portion of the catheter (e.g., position of the catheter tip 158) by executing pre-programmed software routines.
  • one or more end effector tools can be inserted through the access port 126 at the proximal end of the catheter, and such tools can be guided through the tool channel 168 of the catheter body to perform an intraluminal procedure from the distal end of the catheter.
  • the tool may be a medical tool such as an endoscope camera, forceps, a needle or other biopsy or ablation tools.
  • the tool may be described as an operation tool or working tool.
  • the working tool is inserted or removed through the working tool access port 126.
  • an embodiment of using a steerable catheter to guide a tool to a target is explained.
  • the tool may include an endoscope camera or an end effector tool, which can be guided through a steerable catheter under the same principles. In a procedure there is usually a planning procedure, a registration procedure, a targeting procedure, and an operation procedure.
  • the system controller 128 includes an autonomous navigation mode. During the autonomous navigation mode, the user does not need to control the bending and translational insertion position of the steerable catheter 104.
  • the autonomous navigation mode comprises 1) a perception step, 2) a planning step, and 3) a control step.
  • the system controller 128 receives an endoscope view and analyzes the endoscope view to find addressable airways from the current position/orientation of the steerable catheter 104. At the end of this analysis, the system controller 128 perceives these addressable airways as paths in the endoscope view.
  • the autonomous navigation mode can use novel supervised-autonomous driving approach(es) that integrate novel depth-based airway tracking method(s) and a robotic bronchoscope.
  • the present disclosure provides extensively developed and validated autonomous navigation approaches for both advancing and centering continuum robots, such as, but not limited to, for robotic bronchoscopy.
  • the inventors represent, to the best of the inventors’ knowledge, that the feature(s) of the present disclosure provide the initial autonomous navigation technique(s) applicable in continuum robots, bronchoscopy, etc. that require no retraining and have undergone full validation in vitro, ex vivo, and in vivo.
  • one or more features of the present disclosure incorporate unsupervised depth estimation from an image (e.g., a bronchoscopic image), coupled with a continuum robot (e.g., a robotic bronchoscope), and functions without any a priori knowledge of the patient’s anatomy, which is a significant advancement.
  • one or more methods of the present disclosure constitute and provide one or more foundational perception algorithms guiding the movements of the robot, continuum robot, or robotic bronchoscope. By simultaneously handling the tasks of advancing and centering the robot, probe, catheter, robotic bronchoscope, etc., the method(s) of the present disclosure may assist physicians in concentrating on the clinical decision-making to reach the target, which achieves or provides enhancements to the efficacy of such imaging, bronchoscopy, etc.
  • One or more devices, systems, methods, and storage mediums for performing control or navigation, including of a multi-section continuum robot, and/or for viewing, imaging, and/or characterizing tissue and/or lesions, or an object or sample, using one or more imaging techniques (e.g., robotic bronchoscope imaging, bronchoscope imaging, etc.) or modalities (such as, but not limited to, computed tomography (CT), Magnetic Resonance Imaging (MRI), or any other techniques or modalities used in imaging (e.g., Optical Coherence Tomography (OCT), Near infrared fluorescence (NIRF), etc.)) are disclosed herein.
  • the planning step is a step to determine a target path, which is the destination for the steerable catheter 104. While there are a couple of different approaches to select one of the paths as the target path, this invention uniquely includes means to reflect user instructions concurrently in the decision of the target path among the perceived paths. Once the system determines the target path with these concurrent user instructions, the target path is sent to the next step, a control step.
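One possible arbitration scheme for reflecting concurrent user instructions in the planning step is sketched below: a sufficiently recent voice/button command overrides the autonomous choice, otherwise the autonomous choice stands. The timeout value and scoring rule are assumptions made for illustration.

```python
import time

def plan_target_path(paths, autonomous_choice, user_command=None,
                     command_time=None, timeout_s=3.0):
    """Hypothetical planning-step arbitration: a sufficiently recent user
    instruction selects the target path; otherwise the autonomous choice stands.

    paths:             list of candidate paths, e.g. ((x, y), radius) tuples
    autonomous_choice: index chosen by the autonomous planner
    user_command:      direction vector from voice/button input, or None
    command_time:      timestamp of that command (time.time())
    """
    recent = (user_command is not None and command_time is not None
              and (time.time() - command_time) < timeout_s)
    if not recent or not paths:
        return autonomous_choice

    # Pick the candidate whose center lies farthest along the commanded direction.
    def score(item):
        (x, y), _ = item[1]
        return x * user_command[0] + y * user_command[1]

    return max(enumerate(paths), key=score)[0]
```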
  • the control step is a step to control the steerable catheter 104 and linear translation stage 110 to navigate the steerable catheter 104 to the target path. This step is also an automatic step.
  • the system controller 128 uses information relating to the real-time endoscope view, the target path, and internal design and status information of the robotic catheter system 100.
  • the robotic catheter system 100 can navigate steerable catheter 104 autonomously by reflecting the user’s intention efficiently.
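As an illustration of the kind of computation the control step could perform, the sketch below maps the pixel offset of the target path to bending and insertion commands with a simple proportional rule; the gains, normalization, and the "advance only when roughly centered" policy are illustrative assumptions, not the disclosed controller.

```python
def control_step(target_px, image_size, k_bend=0.002, center_tol=0.15):
    """Map the pixel offset of the target path to bend-rate and insertion-rate
    commands (a simple proportional law used here purely for illustration).

    target_px:  (x, y) pixel location of the target path center
    image_size: (width, height) of the endoscope image
    Returns (bend_rate_x, bend_rate_y, insertion_rate) in normalized units.
    """
    w, h = image_size
    ex = (target_px[0] - w / 2.0) / w     # horizontal error, roughly -0.5 .. 0.5
    ey = (target_px[1] - h / 2.0) / h     # vertical error

    bend_x = k_bend * ex * w              # bend toward the target
    bend_y = k_bend * ey * h
    centered = abs(ex) < center_tol and abs(ey) < center_tol
    insertion = 1.0 if centered else 0.0  # advance only when roughly centered
    return bend_x, bend_y, insertion
```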
  • FIG. 8(a) signifies one of the design examples of this invention.
  • the real-time endoscope view 800 is displayed in the main display 118 (as a user output device) in the system console 102. The user can see the airways in the real-time endoscope view 1 through the main display 118.
  • This real-time endoscope view 800 is also sent to system controller 128.
  • the system controller 128 processes the real-time endoscope view 800 and identifies path candidates by using image processing algorithms. Among these path candidates, the system controller 128 selects the paths 2 with the designed computation processes, and then displays the paths 2 with a circle on the real-time endoscope view 800.
  • the system controller 128 provides for interaction from the user, such as a cursor, so that the user can indicate the target path by moving the cursor with the joystick 124.
  • the system controller 128 recognizes the path with the cursor as the target path (FIG. 8(a)).
  • the system controller 128 can pause the motion of the actuator unit 122 and the linear translation stage 110 while the user is moving the cursor 3, so that the user can select the target path with minimal change of the real-time endoscope view 1 and the paths 2 since the system does not move.
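The cursor interaction described above might be organized as in the following sketch, where joystick motion moves the cursor and pauses autonomous motion, and confirming picks the detected path nearest the cursor; the class and method names are hypothetical.

```python
import math

class CursorSelector:
    """Hypothetical joystick-cursor selection: pause autonomous motion while
    the cursor is being moved, and pick the detected path nearest the cursor
    when the user confirms."""

    def __init__(self, image_size):
        w, h = image_size
        self.cursor = [w // 2, h // 2]
        self.motion_paused = False

    def on_joystick(self, dx, dy):
        self.cursor[0] += dx
        self.cursor[1] += dy
        self.motion_paused = True      # freeze actuators/stage while selecting

    def on_confirm(self, paths):
        """paths: list of ((x, y), radius); returns index of the chosen target."""
        self.motion_paused = False     # resume autonomous motion
        if not paths:
            return None
        return min(range(len(paths)),
                   key=lambda i: math.dist(paths[i][0], self.cursor))
```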
  • FIG. 6 is a flowchart showing steps of at least one planning procedure of an operation of the continuum robot/catheter device 104.
  • One or more of the processors discussed herein may execute the steps shown in FIG. 6, and these steps may be performed by executing a software program read from a storage medium, including, but not limited to, the ROM 110 or HDD 150, by the CPU 120 or by any other processor discussed herein.
  • One or more methods of planning using the continuum robot/catheter device 104 may include one or more of the following steps: (i) In step S601, one or more images, such as CT or MRI images, may be acquired; (ii) In step S602, a three dimensional model of a branching structure (for example, an airway model of lungs or a model of an object, specimen or other portion of a body) may be generated based on the acquired one or more images; (iii) In step S603, a target on the branching structure may be determined (e.g., based on a user instruction, based on preset or stored information, etc.); (iv) In step S604, a route of the continuum robot/catheter device 104 to reach the target (e.g., on the branching structure) may be determined (e.g., based on a user instruction, based on preset or stored information, based on a combination of user instruction and stored or preset information, etc.); (v) In step S605, the generated model and the decided route on the model may be stored.
  • the system controller 102 may operate to perform an autonomous navigation mode.
  • the autonomous navigation mode may include or comprise: (1) a perception step, (2) a planning step, and (3) a control step.
  • the system controller 102 may receive an endoscope view (or imaging data) and may analyze the endoscope view (or imaging data) to find addressable airways from the current position/orientation of the steerable catheter 104. At an end of this analysis, the system controller 102 identifies or perceives these addressable airways as paths in the endoscope view (or imaging data).
  • the planning step is a step to determine a target path, which is the destination for the steerable catheter 104. While there are a couple of different approaches to select one of the paths as the target path, the present disclosure uniquely includes means to reflect user instructions concurrently for the decision of a target path among the identified or perceived paths. Once the system 1000 determines the target paths while considering concurrent user instructions, the target path is sent to the next step, i.e., the control step.
  • the control step is a step to control the steerable catheter 104 and the linear translation stage 122 (or any other portion of the robotic platform 108) to navigate the steerable catheter 104 to the target path, pose, state, etc. This step may also be performed as an automatic step.
  • the system controller 102 operates to use information relating to the real-time endoscope view (e.g., the view 134), the target path, and internal design & status information on the robotic catheter system 1000.
  • the robotic catheter system 1000 may navigate the steerable catheter 104 autonomously, which achieves reflecting the user’s intention efficiently.
  • the real-time endoscope view 134 may be displayed in a main display 101-1 (as a user input/output device) in the system 1000.
  • the user may see the airways in the real-time endoscope view 134 through the main display 101-1.
  • This real-time endoscope view 134 may also be sent to the system controller 102.
  • the system controller 102 may process the real-time endoscope view 134 and may identify path candidates by using image processing algorithms. Among these path candidates, the system controller 102 may select the paths with the designed computation processes, and then may display the paths with a circle, octagon, or other geometric shape with the real-time endoscope view 134 as discussed further below for FIGS. 7-8.
  • the system controller 102 may provide a cursor so that the user may indicate the target path by moving the cursor with the joystick 105.
  • the system controller 102 operates to recognize the path with the cursor as the target path.
  • the system controller 102 may pause the motion of the actuator unit 103 and the linear translation stage 122 while the user is moving the cursor so that the user may select the target path with a minimal change of the real-time endoscope view 134 and paths since the system 1000 would not move in such a scenario.
  • the features of the present disclosure may be performed using artificial intelligence, including the autonomous driving mode.
  • deep learning may be used for performing autonomous driving, e.g., using deep learning for localization. Any features of the present disclosure may be used with artificial intelligence features discussed in J. Sganga, D. Eng, C. Graetzel, and D. B. Camarillo, “Autonomous Driving in the Lung using Deep Learning for Localization,” Jul. 2019, Available: https://arxiv.org/abs/1907.08136v1, the disclosure of which is incorporated by reference herein in its entirety.
  • the system controller 102 may operate to perform a depth map mode.
  • a depth map may be generated or obtained from one or more images (e.g., bronchoscopic images, CT images, images of another imaging modality, etc.).
  • a depth of each image may be identified or evaluated to generate the depth map or maps.
  • the generated depth map or maps may be used to perform autonomous navigation, movement detection, and/or control of a continuum robot, a steerable catheter, an imaging device or system, etc. as discussed herein.
  • the depth map may be generated as described in PCT/US2024/025546, herein incorporated by reference in its entirety.
  • thresholding may be applied to the generated depth map or maps, or to the depth map mode, to evaluate accuracy for navigation purposes.
  • a threshold may be set for an acceptable distance between the ground truth (and/or a target camera location, a predetermined camera location, an actual camera location, etc.) and an estimated camera location for a catheter or continuum robot (e.g., the catheter or continuum robot 104).
  • the threshold may be defined such that the distance between the ground truth (and/or a target camera location, a predetermined camera location, an actual camera location, etc.) and an estimated camera location is equal to or less than, or less than, a set or predetermined distance of one or more of the following: 5 mm, 10 mm, about 5 mm, about 10 mm, or any other distance set by a user of the device (depending on a particular application).
  • the predetermined distance may be less than 5 mm or less than about 5 mm. Any other type of thresholding may be applied to the depth mapping to improve and/or confirm the accuracy of the depth map(s).
  • thresholding may be applied to segment the one or more images to help identify or find one or more objects and to ultimately help define one or more targets used for the autonomous navigation, movement detection, and/or control features of the present disclosure.
  • a depth map or maps may be created or generated using one or more images (e.g., CT images, bronchoscopic images, images of another imaging modality, vessel images, etc.), and then, by applying a threshold to the depth map, the objects in the one or more images may be segmented (e.g., a lung may be segmented, one or more airways may be segmented, etc.).
  • the segmented portions of the one or more images may define one or more navigation targets for a next automatic robotic movement, navigation, and/or control. Examples of segmented airways are discussed further below with respect to FIGS. 8(a) and 8(b).
  • one or more of the automated methods that may be used to apply thresholding may include one or more of the following: a watershed method (such as, but not limited to, watershed method(s) discussed in L. J. Belaid and W. Mourou, 2011, vol. 28, no. 2, p.
  • a k-means method such as, but not limited to, k-means method(s) discussed in T. Kanungo et al., IEEE Trans Pattern Anal Mach Intell, vol. 24, no. 7, pp. 881-892, 2002, doi: 10.1109/TPAMI.2002.1017616, which is incorporated by reference herein in its entirety
  • an automatic threshold method such as, but not limited to, automatic threshold method(s) discussed in N. Otsu, IEEE Trans Syst Man Cybern, vol. 9, no. 1, pp.
  • peak detection may include any of the techniques discussed herein, including, but not limited to, the techniques discussed in at least “8 Peak detection,” Data Handling in Science and Technology, vol. 21, no. C, pp. 183-190, Jan. 1998, doi: 10.1016/S0922-3487(98)80027-0, which is incorporated by reference herein in its entirety.
  • the depth map(s) may be obtained, and/or the quality of the obtained depth map(s) may be evaluated, using artificial intelligence structure, such as, but not limited to, convolutional neural networks, generative adversarial networks (GANs), neural networks, any other AI structure or feature(s) discussed herein, any other AI network structure(s) known to those skilled in the art, etc.
  • GANs generative adversarial networks
  • a generator of a generative adversarial network may operate to generate an image(s) that is/ are so similar to ground truth image(s) that a discriminator of the generative adversarial network is not able to distinguish between the generated image(s) and the ground truth image(s).
  • the generative adversarial network may include one or more generators and one or more discriminators.
  • Each generator of the generative adversarial network may operate to estimate depth of each image (e.g., a CT image, a bronchoscopic image, etc.), and each discriminator of the generative adversarial network may operate to determine whether the estimated depth of each image (e.g., a CT image, a bronchoscopic image, etc.) is estimated (or fake) or ground truth (or real).
  • an Al network such as, but not limited to, a GAN or a consistent GAN (cGAN), may receive an image or images as an input and may obtain or create a depth map for each image or images.
  • an Al network may evaluate obtained one or more images (e.g., a CT image, a bronchoscopic image, etc.), one or more virtual images, and one or more ground truth depth maps to generate depth map(s) for the one or more images and/or evaluate the generated depth map(s).
  • a Three Cycle-Consistent Generative Adversarial Network (3CGAN) may be used to obtain the depth map(s) and/or evaluate the quality of the depth map(s), and an unsupervised learning method (designed and trained in an unsupervised procedure) may be employed on the depth map(s) and the one or more images (e.g., a CT image or images, a bronchoscopic image or images, any other obtained image or images, etc.).
  • any feature or features of obtaining a depth map or performing a depth map mode of the present disclosure may be used with any of the depth map or depth estimation features as discussed in A. Banach, F. King, F. Masaki, H. Tsukada, and N. Hata, “Visually Navigated Bronchoscopy using three cycle-Consistent generative adversarial network for depth estimation,” Med Image Anal, vol. 73, p. 102164, Oct. 2021, doi: 10.1016/J.MEDIA.2021.102164, the disclosure of which is incorporated by reference herein in its entirety.
  • the system controller 102 may operate to perform a computation of one or more lumen (e.g., a lumen computation mode) and/or one or more of the following: a one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles fit/blob process, a peak detection, and/or a deepest point analysis.
  • the computation of one or more lumen may include a one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles fit/blob process, a peak detection, and/or a deepest point analysis.
  • the problem of fitting a circle to a binary object is equivalent to the problem of fitting a circle to a set of points.
  • the set of points is the boundary points of the binary object.
  • a circle/blob fit is not limited thereto (as discussed herein, any one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles (or other shape(s)) may be used). Indeed, there are several other variations that can be applied as described in D. Umbach and K. N. Jones, "A few methods for fitting circles to data," in IEEE Transactions on Instrumentation and Measurement, vol. 52, no. 6, pp. 1881-1885, Dec. 2003, doi: 10.1109/TIM.2003.820472. Blob fitting can be achieved on the binary objects by calculating their circularity as 4π·Area/(perimeter)² and then defining the circle radius (a minimal sketch of this circularity computation is given below).
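  • The following is a minimal, illustrative sketch (not taken verbatim from the disclosure) of the circularity-based blob/circle fit described above; the function name is hypothetical, and the 4π·Area/perimeter² circularity and equal-area radius follow the standard definitions:

```python
import numpy as np

def fit_circle_by_circularity(area: float, perimeter: float):
    """Illustrative blob/circle fit: compute the circularity of a binary object
    and the radius of the equal-area circle.

    circularity = 4 * pi * area / perimeter**2  (1.0 for a perfect circle)
    radius      = sqrt(area / pi)               (radius of the equal-area circle)
    """
    if perimeter <= 0:
        raise ValueError("perimeter must be positive")
    circularity = 4.0 * np.pi * area / (perimeter ** 2)
    radius = np.sqrt(area / np.pi)
    return circularity, radius

# Example: a blob with area ~314 px and perimeter ~63 px is nearly circular
# (circularity ~0.99) and has an equivalent circle radius of ~10 px.
circ, r = fit_circle_by_circularity(area=314.0, perimeter=63.0)
```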
  • Peak detection is performed in a 1-D signal and is defined as the extreme value of the signal.
  • 2-D image peak detection is defined as the highest value of the 2-D matrix.
  • the depth map is the 2-D matrix, and its peak is the highest value of the depth map, which actually corresponds to the deepest point.
  • the depth map produces an image which predicts the depth of the airways; therefore, for each airway there is a concentration of non-zero pixels around a deepest point that the GANs predicted (a minimal sketch of such a 2-D peak search is given below).
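  • A minimal sketch of the 2-D peak (deepest point) detection described above, assuming the depth map is a NumPy array in which larger values mean deeper; the function name is hypothetical:

```python
import numpy as np

def deepest_point(depth_map: np.ndarray):
    """Return the (row, col) of the peak of a 2-D depth map, i.e. the pixel
    with the highest predicted depth (the 'deepest point')."""
    flat_index = np.argmax(depth_map)            # index into the flattened array
    return np.unravel_index(flat_index, depth_map.shape)

# Toy 3x3 depth map: the peak (0.9) is at row 1, column 2.
dm = np.array([[0.1, 0.2, 0.3],
               [0.2, 0.5, 0.9],
               [0.1, 0.4, 0.6]])
row, col = deepest_point(dm)                     # -> (1, 2)
```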
  • FIG. 7 is a flowchart showing steps of at least one procedure for performing autonomous navigation, movement detection, and/or control technique(s) for a continuum robot/catheter device (e.g., such as continuum robot/catheter device 104).
  • processors discussed herein, one or more Al networks discussed herein, and/or a combination thereof may execute the steps shown in FIG. 7, and these steps may be performed by executing a software program read from a storage medium, including, but not limited to, the ROM 110 or HDD 150, by CPU 120 or by any other processor discussed herein.
  • one or more images e.g., one or more camera images, one or more CT images (or images of another imaging modality), one or more bronchoscopic images, etc.
  • in step S703, based on the target detection method (td) value, the method continues to perform the selected target detection method and proceeds to step S704 for the peak detection method or mode, to step S706 for the thresholding method or mode, or to step S711 for the deepest point method or mode;
  • the steps S701 through S712 of FIG. 7 may be performed again for an obtained or received next image or images to evaluate the next movement, pose, position, orientation, or state for the autonomous navigation, movement detection, and/or control of the continuum robot or steerable catheter (or imaging device or system) 104.
  • the method may estimate (automatically or manually) the depth map or maps (e.g., a 2D or 3D depth map or maps) of one or more images.
  • the one or more depth maps may be estimated or determined using any technique discussed herein, including, but not limited to, artificial intelligence.
  • any Al network including, but not limited to a neural network, a convolutional neural network, a generative adversarial network, any other Al network or structure discussed herein or known to those skilled in the art, etc., may be used to estimate or determine the depth map or maps (e.g., automatically).
  • the autonomous navigation, movement detection, and/or control technique(s) of the present disclosure are not limited thereto.
  • a counter may not be used and/or a target detection method value may not be used such that at least one embodiment may iteratively perform a target detection method of a plurality of target detection methods and move on and use the next target detection method of the plurality of the target detection methods until a target or targets is/ are found.
  • one or more embodiments may continue to use one or more of the other target detection methods (or any combination of the plurality of target detection methods or modes) to confirm and/or evaluate the accuracy and/or results of the target detection method or mode used to find the already-identified one or more targets.
  • the identified one or more targets may be double checked, triple checked, etc.
  • one or more steps of FIG. 7, such as, but not limited to step S707 for binarization, may be omitted in one or more embodiments.
  • step S707 for binarization may be omitted in one or more embodiments. For example, if segmentation is done using three categories, such as airways, background and edges of the image, then instead of a binary image, the image has three colors.
  • FIG. 8(a) shows images of at least one embodiment of an application example of autonomous navigation and/or control technique(s) and movement detection for a camera view 800 (left), a depth map 801 (center), and a thresholded image 802 (right) in accordance with one or more aspects of the present disclosure.
  • a depth map may be created using the bronchoscopic images and then, by applying a threshold to the depth map, the airways may be segmented.
  • the segmented airways shown in thresholded image 802 may define the navigation targets (shown in the octagons of image 802) of the next automatic robotic movement.
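  • As a hedged illustration of the thresholding-and-segmentation step above (threshold the depth map, segment the airways, and use the segmented blobs as navigation targets), the sketch below uses OpenCV connected components; the deepest-fraction cutoff and the minimum blob area used here are assumptions, not values mandated by the disclosure:

```python
import cv2
import numpy as np

def airway_targets_from_depth(depth_map: np.ndarray, deepest_fraction: float = 0.2):
    """Threshold the deepest fraction of a depth map and return the centroid of
    each resulting blob as a candidate navigation target (cf. image 802)."""
    lo, hi = float(depth_map.min()), float(depth_map.max())
    cutoff = hi - deepest_fraction * (hi - lo)           # keep only the deepest pixels
    binary = (depth_map >= cutoff).astype(np.uint8)

    # Label connected blobs; their centroids serve as per-airway targets.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    targets = [tuple(centroids[i]) for i in range(1, n)  # label 0 is the background
               if stats[i, cv2.CC_STAT_AREA] > 20]       # ignore tiny speckles
    return targets                                       # list of (x, y) centroids
```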
  • FIG. 8(b) shows images of a camera view (left), a semi-transparent depth map (that may be color coded) overlaid onto a camera view (center), and a thresholded image (right).
  • the continuum robot or steerable catheter 104 may follow the target(s) (which a user may change by dragging and dropping the target(s) (e.g., a user may drag and drop an identifier for the target, the user may drag and drop a cross or an x element representing the location for the target, etc.) in one or more embodiments), and the continuum robot or steerable catheter 104 may move forward and rotate on its own while targeting a predetermined location (e.g., a center) of the target(s) of the airway.
  • a predetermined location e.g., a center
  • the depth map (see e.g., in image 801) may be processed with any combination of blob/circle fit, peak detection, and/or deepest point methods or modes to detect the airways that are segmented.
  • the detected airways may define the navigation targets of the next automatic robotic movement.
  • the continuum robot or steerable catheter 104 may move in a direction of the airway with its center closer to the cross or identifier.
  • the continuum robot or steerable catheter 104 may move forward and may rotate in an autonomous fashion targeting the center of the airway (or any other designated or set point or area of the airway) in one or more embodiments.
  • a circle fit algorithm is discussed herein for one or more embodiments.
  • the circle shape provides an advantage in that it has a low computational burden, and the lumen within a lung may be substantially circular.
  • other geometric shapes may be used or preferred in a number of embodiments.
  • in some embodiments, the lumens are more oval than circular, so an oval geometric shape may be used or preferred.
  • the apparatuses, systems, methods, and/or other features of the present disclosure may be optimized to other geometries as well, depending on the particular application(s) embodied or desired.
  • one or more airways may be deformed due to one or more reasons or conditions (e.g., environmental changes, patient diagnosis, structural specifics for one or more lungs or other objects or targets, etc.).
  • while the circle fit may be used for the planning shown in FIG. 8(a), this figure shows an octagon defining the fitting of the lumen in the images. Such a difference may help with clarifying the different information being provided in the display.
  • an indicator of the geometric fit e.g., a circle fit
  • it may have the same geometry as used in the fitting algorithm, or it may have a different geometry, such as the octagon shown in FIG. 8(a).
  • continuum robots are flexible systems used in transbronchial biopsy, offering enhanced precision and dexterity. Training these robots is challenging due to their nonlinear behavior, necessitating advanced control algorithms and extensive data collection. Autonomous advancements are crucial for improving their maneuverability.
  • Sganga, et al. introduced deep learning approaches for localizing a bronchoscope using real-time bronchoscopic video as discussed in J. Sganga, D. Eng, C. Graetzel, and D. Camarillo, “Offsetnet: Deep learning for localization in the lung using rendered images,” in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 5046-5052, the disclosure of which is incorporated by reference herein in its entirety.
  • Zou, et al. proposed a method for accurately detecting the lumen center in bronchoscopy images as discussed in Y. Zou, B. Guan, J. Zhao, S. Wang, X. Sun, and J.
  • This study of the present disclosure aimed to develop and validate the autonomous advancement of a robotic bronchoscope using depth map perception.
  • the approach involves generating depth maps and employing automated lumen detection to enhance the robot’s accuracy and efficiency.
  • an early feasibility study evaluated the performance of autonomous advancement in lung phantoms derived from CT scans of lung cancer subjects.
  • Bronchoscopic operations were conducted using a snake robot developed in the researchers’ lab (some of the features of which are discussed in F. Masaki, F. King, T. Kato, H. Tsukada, Y. Colson, and N. Hata, “Technical validation of multi-section robotic bronchoscope with first person view control for transbronchial biopsies of peripheral lung,” IEEE Transactions on Biomedical Engineering, vol. 68, no. 12, pp. 3534-3542, 2021, which is incorporated by reference herein in its entirety), equipped with a bronchoscopic camera (OVM6946 OmniVision, CA).
  • the captured bronchoscopic images were transmitted to a control workstation, where depth maps were created using a method involving a Three Cycle-Consistent Generative Adversarial Network (3CGAN) (see e.g., a 3cGAN as discussed in A. Banach, F. King, F. Masaki, H. Tsukada, and N. Hata, “Visually navigated bronchoscopy using three cycle-consistent generative adversarial network for depth estimation,” Medical Image Analysis, vol. 73, p. 102164, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1361841521002103, the disclosure of which is incorporated by reference herein in its entirety).
  • a combination of thresholding and blob detection algorithms, methods, or modes was used to detect the airway path, along with peak detection for missed airways.
  • a control vector was computed from the chosen point of advancement (identified centroid or deepest point) to the center of the depth map image. This control vector represents the direction of movement on the 2D plane of original RGB and depth map images.
  • a software-emulated joystick/gamepad was used in place of the physical interface to control the snake robot (also referred to herein as a continuum robot, steerable catheter, imaging device or system, etc.). The magnitude of the control vector was calculated, and if the magnitude fell below a threshold, the robot advanced. If the magnitude exceeded the threshold, the joystick was tilted to initiate bending. This process was repeated using a new image from the Snake Robot interface (a minimal sketch of this decision logic is given below).
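  • The sketch below illustrates, under assumed names and sign conventions, the control-vector logic just described: the vector between the chosen point of advancement and the image center is computed, its magnitude is compared with a threshold, and the result is either an advance command or an emulated joystick tilt toward the target:

```python
import numpy as np

def control_from_target(target_xy, image_shape, magnitude_threshold: float):
    """Compute the control vector on the image plane and decide whether to
    advance or to bend (tilt the emulated joystick) toward the target."""
    h, w = image_shape[:2]
    center = np.array([w / 2.0, h / 2.0])
    vec = np.asarray(target_xy, dtype=float) - center    # direction toward the target
    magnitude = float(np.linalg.norm(vec))
    if magnitude < magnitude_threshold:
        return {"action": "advance"}                     # target is nearly centered
    tilt = vec / magnitude                               # unit tilt for the joystick
    return {"action": "bend", "joystick_tilt": (tilt[0], tilt[1])}
```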
  • the robotic bronchoscope was initially positioned in front of the carina within the trachea.
  • the program was initiated by the operator or user, who possessed familiarity with the software, thereby initiating the robot’s movement.
  • the operator’s or user’s sole task was to drag and drop a green cross within the bronchoscopic image to indicate the desired direction.
  • Visual assessment was used to determine whether autonomous advancement to the intended airway was successfully achieved at each branching point. Summary statistics were generated to evaluate the success rate based on the order of branching generations and lobe segments.
  • FIGS. 9 to 11 illustrate features of at least one embodiment of a continuum robot apparatus 10 configuration to implement automatic correction of a direction to which a tool channel or a camera moves or is bent in a case where a displayed image is rotated.
  • the continuum robot apparatus 10 enables to keep a correspondence between a direction on a monitor (top, bottom, right or left of the monitor) and a direction the tool channel or the camera moves on the monitor according to a particular directional command (up, down, turn right or turn left) even if the displayed image is rotated.
  • the continuum robot apparatus 10 also may be used with any of the autonomous navigation, movement detection, and/or control features of the present disclosure.
  • FIGS. 9 and 10 signify a design example of the planning step.
  • the system controller 128 highlights the two paths found in the perception step as path 1 (902) and path 2 (904).
  • the display also includes a cursor 906 shown as a crosshair.
  • Other indicators that highlight the path or paths found in the perception step may alternatively be provided, such as a ring or circle or any other indication of path.
  • the cursor 906 has been moved into path 2 (904), which has a white circular feature.
  • This can be done, for example, by user instruction so that this path is selected as the target path.
  • the user can use the handheld controller (e.g., a joystick) 124 to select or change the target path by clicking on or inside a lumen or on or inside an indication of the lumen /path candidate.
  • a touchscreen or a voice input is used.
  • the system controller may show, on the display, an indication of which lumen was selected by the user.
  • a color change (black versus white circles), or any other color scheme or other indicator, such as red and green, black and green, bolded, highlighted, or dashed
  • FIG. 9 is a selection of the right-most lumen.
  • FIG. 10 is a selection of the left-most lumen. In FIG. 10, the selection is indicated by a bolder circle for the target path (e.g., path 904) and a dashed circle for the unselected path 902.
  • Concurrent user instruction for the target path among the paths allows reflecting user’s intention to the robotic catheter system during autonomous navigation effectively.
  • Another optional feature is also shown in FIG. 10, where the cursor 906 has changed from a crosshair to a circle. In the workflow leading up to the image shown in FIG. 10, the user moved the cursor 906, which is optimized for the user to view both the cursor and the underlying image, until it was touching or inside the selected path 904, at which time the cursor 906 is changed to a less obvious indicator, shown here as a small circle.
  • the user may select or change the target path at any point in the planning step; as the autonomous system moves into the control step, the user can optionally adjust the target path as described herein, or the prior target path can be used until the predetermined insertion depth is reached.
  • the need for additional selection of the target path can depend on the branching of the lumen network, where, for example, the user provides target path information as each branch within the airway becomes visible in the camera image.
  • while color is used in this example to indicate the selection of the target path, other colors or other indicators may be used as well, such as a bolder indicator, a flashing indicator, removal of the indicator around the non-selected path lumen, etc.
  • the dedicated user instruction GUI allows the user to select the target path immediately even when the paths are more than two.
  • the user can form an intention intuitively and accurately with the visual information in one place.
  • differentiating the symbol for the target path achieves the user instruction with minimal symbols, without the dedicated user instruction GUI, and allows the user to learn/understand how to read the symbols with minimal effort.
  • circles are used as the symbol of the paths on the 2D interface, and the cursor provides a symbol for input of user instruction to the GUI.
  • These GUIs present minimal obstruction of the endoscope view in the user output device. Also, since selecting an object with the cursor is a very familiar maneuver from common computer operation, the user can easily learn how to use it.
  • the user input device is a voice input device. Since the autonomous system is driving the steerable catheter, full directional control of the catheter is unnecessary for autonomous driving. It can also be unwanted.
  • a limited library of commands that the user can provide gives full control to the user to select which lumen is the correct one for the next navigation step, but prevents the user from, for example, trying to keep the steerable catheter in the center of the lumen as they would do with a manual catheter since this can be accomplished through the autonomous function.
  • the limited library also simplifies the system.
  • Effective voice commands can be in the form of a limited library in the system and, for the one system as provided herewith, are: (1) start, (2) stop, (3) center, (4) up, (5) down, (6) right, (7) left, and (8) back. Other systems may have more or fewer commands. Some systems will have a different selection of commands, depending on the use for the steerable catheter.
  • voice commands are classified as one of the effective commands and acted on as such. Verbalizations that are not recognized as one of the effective commands are ignored.
  • the user or users train the system to recognize their particular enunciation of the effective voice commands.
  • the system is pre-set with the range of enunciations that are effective commands.
  • the instructions are limited to the eight commands listed above or variants thereof. In other embodiments, the instructions are limited to less than or equal to 4, 6, 8, 10, 12, 14, 16, 18, 20, or 24 commands. The commands may all be limited to instructions for selecting a target path in a lumen.
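  • A minimal sketch of classifying recognized speech against such a limited command library; the command set mirrors the eight commands listed above, while the function name and the exact matching rule are assumptions:

```python
# Hypothetical command library mirroring the eight effective commands above.
EFFECTIVE_COMMANDS = {"start", "stop", "center", "up", "down", "right", "left", "back"}

def classify_voice_command(transcript: str):
    """Map a recognized utterance onto the limited command library; anything
    that is not an effective command is ignored (returns None)."""
    token = transcript.strip().lower()
    return token if token in EFFECTIVE_COMMANDS else None

# classify_voice_command("Left")                  -> "left"
# classify_voice_command("keep it in the middle") -> None (ignored)
```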
  • Some embodiments provide autonomous navigation with voice command.
  • start S1010
  • “Autonomous navigation mode” is displayed on the main display 118.
  • the autonomous navigation system detects airways in the camera view (S1020). The centers of detected airways are displayed as diamond marks (560) in the detected airways in the camera view.
  • in order for the user to select an airway for the steerable catheter to move in, the user sends one of the voice commands from the options of “center”, “up”, “down”, “right”, and “left”.
  • when the voice command is accepted by the system, the color of the “x” mark on the selected location is changed from black to red and a triangle 570 is displayed on the selected mark.
  • the selected location stays at the same location until a different location is accepted to the system.
  • the “x” mark on the center is set when the autonomous navigation mode is started.
  • the system sets the closest airway from the selected x mark as the airway to be aimed based on the distance between the selected x mark and each diamond mark in the detected airways (S1030).
  • the autonomous system always has at least one intended airway as long as there is at least one airway candidate. This feature avoids a situation without an intended airway and makes the system behavior robust (a minimal sketch of this closest-airway selection is given below).
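  • A hedged sketch of the closest-airway selection (S1030) described above: the detected airway whose center (diamond mark) lies closest to the user-selected “x” mark becomes the airway to be aimed; names and data layout are illustrative:

```python
import numpy as np

def select_aimed_airway(selected_mark_xy, airway_centers_xy):
    """Return the index of the detected airway whose center is closest to the
    selected 'x' mark, or None if no airway candidate was detected."""
    if len(airway_centers_xy) == 0:
        return None                                   # no candidates in this frame
    centers = np.asarray(airway_centers_xy, dtype=float)
    mark = np.asarray(selected_mark_xy, dtype=float)
    distances = np.linalg.norm(centers - mark, axis=1)
    return int(np.argmin(distances))                  # index of the airway to be aimed
```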
  • the user can stop the autonomous navigation any time by sending a voice command, “stop”, and can start the manual navigation using a handheld controller 124 to control the steerable catheter and the linear translational stage.
  • “Manual navigation mode” is displayed on the main display 118.
  • the user can restart the autonomous navigation when needed by sending a voice command, “start”.
  • the user can stop the autonomous navigation by, for example, an interaction with the handheld controller.
  • the user can send the commands using other input devices including a number pad or a general computer keyboard.
  • the commands can be assigned as numbers on the number pad or other letters on the keyboard.
  • the other input devices may be used along with or instead of voice commands.
  • the input device has a limited number of keys/buttons that can be pressed by the user. For example, a numerical keypad is used where 2, 4, 6, and 8 are the four directions and 5 is center. The additional numbers on the keypad may be without function, or they may provide an angled movement.
  • the continuum robot and its navigation can be exemplified by a steerable catheter and particularly by a bronchoscopy procedure to diagnose tumorous tissue in a region of interest. This is described below.
  • FIG. 11 shows the typical camera image during the autonomous navigation and
  • FIG. 12 shows the flowchart of this exemplary autonomous navigation.
  • the autonomous navigation system receives an image from the camera and defines a position point in the image.
  • the position point can be used as a proxy to the position of the distal tip of the continuum robot.
  • the position point is the center of the camera view.
  • the position point is offset from the center of the camera view based on, for example, a known offset of the camera within the continuum robot.
  • the autonomous navigation system compares the distance between the position point (e.g., the center of the camera view) and the target point with the threshold (S1050). If the distance between the position point and the target point is longer than the threshold, the robotic platform bends the steerable catheter toward the target point (S1060) until the distance between the position point and the target point is smaller than the threshold. If the distance between the position point and the target point is smaller than the threshold, the robotic platform moves the linear translational stage forward (S1080).
  • the system detects the airways in the camera view (S1020).
  • This method and system are particularly described above, and include using a depth map produced by processing one or more images obtained from the camera and fitting the lumen or lumens (e.g., airways) using an algorithm such as a circle fit, a peak detection algorithm, or similar, where, in instances where the algorithm cannot find one or more lumens from the camera image, the deepest point is used.
  • the system sets the airway to be aimed (S1030). This provides a target path. This can be done by user interaction as discussed hereinabove or through an automated process.
  • the route of the continuum robot to reach the target is determined and (in step S605), the generated model and the decided route on the model may be stored.
  • a plurality of target points are used to navigate the continuum robot along the target path.
  • the target points in the lumen e.g., airway
  • the target point may be, for example, the center of the circle that was used to fit to the airway.
  • the target point is shown as a “+” mark 540 and an arrow 580 connecting the “+” mark and the center of the camera view (S1040).
  • the autonomous system must aim the distal end of the steerable catheter towards the target point and move the steerable catheter forward, towards the target point (S1050).
  • a threshold is set. If the target point is inside of the threshold, the steerable catheter is advanced, increasing the insertion depth (S1070). If the target point is not inside of the threshold, then the distal end of the steerable catheter is bent towards the target point (S1060). After the steerable catheter is bent further, the controller must re-assess whether the target point is inside of the threshold (S1050). If the steerable catheter has not yet reached the region of interest or the predetermined insertion depth, the steerable catheter will move forward (S1080) (a minimal sketch of this loop is given below).
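  • The sketch below is an illustrative, non-authoritative rendering of one iteration of this bend/advance loop (S1050-S1080); the units (pixels for the image-plane distance, millimeters for insertion depth) and the function name are assumptions:

```python
import numpy as np

def autonomous_step(position_xy, target_xy, threshold_px,
                    insertion_depth_mm, predetermined_depth_mm):
    """One iteration of the loop in FIG. 12: stop at the predetermined insertion
    depth, bend while the target lies outside the threshold, otherwise advance."""
    if insertion_depth_mm >= predetermined_depth_mm:
        return "stop"                                        # S1070 -> end of workflow
    distance = np.linalg.norm(np.asarray(target_xy, float)
                              - np.asarray(position_xy, float))
    if distance > threshold_px:
        return "bend_toward_target"                          # S1060
    return "advance_linear_stage"                            # S1080
```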
  • various parameters of the robotic platform 106 including the frame rate of the camera, the bending speed of the steerable catheter, the speed of linear translational stage 110, and the predetermined insertion depth of the linear translational stage are set. They each independently may be preset (e.g., a standard base value) or set by the user for the procedure. If set by the user, it may be based on the user’s preference, or based on specifics of the patient. The parameters may be calculated or be obtained from a look-up table.
  • the predetermined insertion depth may be estimated as the distance from the carina to the region of interest based on, for example, one or more preprocedural CT images. These parameters may be set before the start of the automated motion portion of the procedure, such as when the steerable catheter 104 reaches the carina.
  • a threshold is used during autonomous function to decide whether to bend the steerable catheter to optimize the direction and/or angle of the tip or to move forward using the linear translational stage.
  • the threshold 510 may be a constant and can be defined based on the dimensions of the camera view.
  • the threshold relates to the distance from the center of the camera view (e.g., the center of the dotted circle 510 in a camera view 520, which is the center of the distal end of the steerable catheter) to the center of the airway 530 that has been selected as the target path (the target point 540).
  • the threshold value is visualized in FIG. 11 as a dotted circle 510, however, it can alternatively be configured as a vector.
  • the vector represents the direction of movement on the image plane of the image from the camera and depth map images, where the magnitude of the vector is the threshold value.
  • the distance to be set as the threshold may be decided based on, for example, data from a lung phantom model. Alternatively, the threshold may be based on a library of threshold data. In some embodiments, the threshold is set to 10%, 15%, 20%, 25%, 30%, 35%, or 40%, or a value therebetween, of the camera view dimension (i.e., the distance of a diagonal line across the camera view). In some embodiments, the threshold is set to between 25% and 35%, or around 30%.
  • an indicator of the navigation mode being used such as displaying “Autonomous navigation mode” on the main display 118.
  • the autonomous navigation system detects airways in the camera view (S1020).
  • the user places a mouse pointer 550 on the airway to be aimed 530 in the camera view for the autonomous navigation system to set the airway to be aimed (S1030).
  • the autonomous navigation system detects the target point 540 in the detected airway as described above (S1040), then the autonomous navigation system compares the distance between the center of the camera view and the target point with the threshold (S1050).
  • if the distance between the center of the camera view and the target point is longer than the threshold, the robotic platform bends the steerable catheter toward the target point (S1060) until the distance between the center of the camera view and the target point is smaller than the threshold. If the distance between the center of the camera view and the target point is smaller than the threshold, the robotic platform moves the linear translational stage forward (S1080).
  • the user has the ability to stop the autonomous navigation any time by pushing a button on the handheld controller 124 and can start the manual navigation to control the steerable catheter and the linear translational stage by the handheld controller.
  • an indicator of this control is provided, such as a display of “Manual navigation mode” on the main display 118.
  • the user can restart the autonomous navigation when needed by, for example, pushing a button on the handheld controller 124.
  • the user has the option to switch the navigation mode to the manual navigation and, for example, deploy a biopsy tool toward the region of interest to take a sample through the working tool access port.
  • the region of interest e.g., tumorous tissue
  • a Return to the Carina function may be included with the autonomous driving catheter and system. While the robotic platform is bending the steerable catheter and moving the linear translational stage forward, all input signals to the robotic platform may be recorded in the data storage memory HDD 188 regardless of navigation modes. When the user indicates a start of the Return to Carina function (e.g., hitting the appropriate button on the handheld controller 124), the robotic platform inversely applies the recorded input signal taken during insertion to the steerable catheter and the linear translational stage (a minimal sketch of this inverse replay is given below).
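  • The following is a minimal sketch, under an assumed per-frame command format, of the inverse-replay idea behind the Return to the Carina function: the commands recorded during insertion are replayed in reverse order with inverted sign:

```python
def return_to_carina(recorded_inputs):
    """Yield inverse commands (reversed order, inverted sign) so the catheter
    retraces its insertion path back toward the carina. Each recorded input is
    assumed to be a dict with 'bend' (dx, dy) and 'translation' entries."""
    for cmd in reversed(recorded_inputs):
        yield {
            "bend": tuple(-v for v in cmd.get("bend", (0.0, 0.0))),
            "translation": -cmd.get("translation", 0.0),
        }
```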
  • an insertion depth may be set before driving the steerable catheter.
  • the autonomous navigation system can be instructed to stop bending the steerable catheter and moving the linear translational stage forward (S1070).
  • the robotic platform then switches the mode from the autonomous navigation to the manual navigation. This allows the user to start interacting with the region of interest (e.g., take a biopsy) or to provide additional adjustments to the location or orientation of the steerable catheter.
  • the frame rate is set for safe movement.
  • the steps from S1020 to S1080 in FIG. 12 can be conducted at every single frame of camera image.
  • the maximum frame rate of the camera image that can be handled for the autonomous navigation is decided.
  • the speed of bending the steerable catheter and the speed of moving the linear translational stage are decided. Since the movement of the steerable catheter is based on the images, it is important that the steerable catheter not move faster than can be ‘seen’ by the images from the camera.
  • the speed of the linear translational stage may be set to less than 5 [mm/sec].
  • Other frame rates and risk factors will suggest different speeds.
  • FIG. 13 shows an exemplary flowchart to set the speed of bending the steerable catheter and the speed of moving the linear translational stage based on the target point in the detected airway to be aimed.
  • FIG. 14 shows an exemplary display at the parameter settings.
  • the user can set the bending speed of the steerable catheter at two points in the camera view, Bending speed 1 and Bending speed 2.
  • the autonomous navigation system sets the bending speed of the steerable catheter by linearly interpolating Bending speed 1 and Bending speed 2 based on the target point in the detected airway to be aimed (S1055).
  • Bending speed 2 is slower than Bending speed 1 so that the steerable catheter does not overbend.
  • the user can set the bending speed of the linear translational stage at two points in the camera view, Moving speed 1 and Moving speed 2.
  • the autonomous navigation system sets the moving speed of the linear translational stage by linearly interpolating Moving speed 1 and Moving speed 2 based on the target point in the detected airway to be aimed (S1075).
  • Moving speed 1 is faster than Moving speed 2 because the closer the steerable catheter is to the center of the airway, the lower the risk that the steerable catheter collides with the airway wall.
  • the steerable catheter can reach the target point faster with less risk of the steerable catheter colliding with the airway wall (a minimal sketch of this speed interpolation is given below).
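  • A hedged sketch of the linear interpolation used in S1055/S1075: the speed is interpolated between two user-set values (e.g., Bending speed 1 and Bending speed 2, or Moving speed 1 and Moving speed 2) according to where the target point lies in the camera view; using the distance from the image center as the interpolation variable is an assumption:

```python
import numpy as np

def interpolate_speed(dist_to_center, dist1, dist2, speed1, speed2):
    """Linearly interpolate between speed1 (defined at dist1) and speed2
    (defined at dist2) based on the target point's distance from the center."""
    if dist2 == dist1:
        return speed1                          # degenerate case: a single set point
    t = float(np.clip((dist_to_center - dist1) / (dist2 - dist1), 0.0, 1.0))
    return (1.0 - t) * speed1 + t * speed2
```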
  • FIG. 15 shows an exemplary flowchart to adjust the threshold based on the location of the steerable catheter in the lung.
  • the system detects the airway or airways in the camera view (S1020). The system then sets an airway to be aimed towards (S1030). Information as to which airway will be aimed may come from user input or from the controller.
  • the system detects the target point in the airway (S1040). A threshold is set (S1045) and it is determined whether the target point is inside of the threshold (S1050). In this case, as the position point is set as the center of the image, so the target point is inside of the threshold (S1050) if the target point is sufficiently close to the current position of the steerable catheter tip.
  • the tip of the steerable catheter in the lung is bent towards the target point (S1060). This bend may be a set amount, may be to the approximate location of the target point, etc.
  • the system detects the airway in the camera view again (S1020) and the workflow repeats. If, in step S1050, the target point is inside of the threshold, it is determined whether the predetermined insertion depth is reached (S1070). If this depth is not yet reached, the steerable catheter is moved forward in the airway (S1080).
  • the system detects the airways in the camera view (S1020) and the workflow repeats. If, in step S1070, the predetermined insertion depth is reached, the workflow is ended (S1090).
  • FIG. 16 shows an exemplary display at the parameter settings, illustrating where two different thresholds are used.
  • the user can set two thresholds to decide for the autonomous function to bend the steerable catheter or to move forward the linear translational stage based on the insertion depth at the parameter settings.
  • Threshold 1 and Threshold 2 are located at the carina and at the region of interest in the planning view created from the preoperative CT image.
  • the autonomous driving can start using Threshold 1, as shown by the large dashed circle, when the airway is large and the need for tight control and steering is not as great.
  • Threshold 2 is smaller than Threshold 1, as shown by the smaller circle in FIG. 16. Since the airway is smaller the further into the lung the catheter moves, the robotic catheter may need to be bent more accurately towards the center of the airways before it moves forward. Thus, a smaller threshold may be required.
  • the autonomous navigation system can set the threshold at each frame of the camera view, as described as S1045 in the above workflow.
  • the threshold can be a linear interpolation of Threshold 1 and Threshold 2 based on the insertion depth (S1045).
  • there are two (or more) different thresholds where the threshold changes from one to another when a pre-defined insertion depth or other indication of depth into the lumen is reached.
  • the thresholds are changed based on the lumen diameter at the location of the steerable catheter.
  • the steerable catheter can move faster and spend less time bending around the carina, leading to less time for bronchoscopy, while maintaining accurate and precise navigation further into the periphery where a deviation from the center of the airway would increase risk to the patient.
  • FIG. 17 shows a flowchart with an exemplary method to abort the autonomous navigation when the blood is detected in the camera view.
  • the criterion to abort bronchoscopy may be defined as the ratio of the number of pixels indicating the blood divided by the total number of pixels in a camera image.
  • an imaging processing library e.g. OpenCV
  • if the ratio of the number of pixels indicating the blood divided by the total number of pixels in a camera image exceeds the predetermined ratio, the autonomous navigation is aborted (a minimal sketch of such a pixel-ratio check is given below).
  • the mucous in the airway can be detected using an imaging processing library.
  • the number of yellow pixels in a RGB camera view can be used.
  • the steerable catheter can be automatically stopped during bronchoscopy when an emergency situation is detected.
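  • A minimal, illustrative OpenCV sketch of the abort criterion in FIG. 17 (ratio of blood-colored pixels to total pixels); the HSV bounds and the example ratio are assumptions rather than values from the disclosure:

```python
import cv2
import numpy as np

def should_abort_for_blood(bgr_frame: np.ndarray, max_blood_ratio: float = 0.3):
    """Estimate the fraction of red ('blood') pixels in a camera frame and
    return True when it exceeds the predetermined ratio."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Red hues wrap around 0 in OpenCV's 0-179 hue range, so combine two bands.
    mask = cv2.inRange(hsv, (0, 80, 50), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 80, 50), (179, 255, 255))
    blood_ratio = float(np.count_nonzero(mask)) / mask.size
    return blood_ratio > max_blood_ratio
```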
  • one or more methods of the present disclosure were validated on one clinically derived phantom and two ex-vivo pig lung specimens with and without simulated breathing motion, resulting in 261 advancement paths in total, and in an in vivo animal.
  • the achieved target reachability in phantoms was 73.3%
  • in ex-vivo specimens without breathing motion was 77% and 78%
  • in ex-vivo specimens with breathing motion was 69% and 76%.
  • the proposed supervised-autonomous navigation/driving approach(es) in the lung is/are proven to be clinically feasible.
  • this system or systems have the potential to redefine the standard of care for lung cancer patients, leading to more accurate diagnoses and streamlined healthcare workflows.
  • the present disclosure provides features that integrate the healthcare sector with robotic-assisted surgery (RAS) and transforms same into Minimally Invasive Surgery (MIS). Not only does RAS align well with MIS outcomes (see e.g., J. Kang, et al., Annals of surgery, vol. 257, no. 1, pp. 95-101 (2013), which is incorporated by reference herein in its entirety), but RAS also promises enhanced dexterity and precision compared to traditional MIS techniques (see e.g., D. Hu, etal., The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 14, no. 1, p. 01872 (2018), which is incorporated by reference herein in its entirety).
  • At least one objective of the studies discussed in the present disclosure is to develop and clinically validate a supervised- autonomous navigation/driving approach in robotic bronchoscopy.
  • one or more methodologies of the present disclosure utilize unsupervised depth estimation from the bronchoscopic image (see e.g., Y. Zou, et al., IEEE Transactions on Medical Robotics and Bionics, vol. 4, no. 3, pp. 588-598 (2022), which is incorporated by reference herein in its entirety), coupled with the robotic bronchoscope (see e.g., J. Zhang, et al., Nature Communications, vol. 15, no. 1, p. 241 (Jan.
  • the inventors of the present disclosure introduce one or more advanced airway tracking method(s). These methods, rooted in the detection of airways within the estimated bronchoscopic depth map, may form the foundational perception algorithm that orchestrates the robotic bronchoscope’s movements in one or more embodiments.
  • the snake robot may be a robotic bronchoscope composed of, or including at least, the following parts in one or more embodiments: i) the robotic catheter, ii) the actuator unit, iii) the robotic arm, and iv) the software in one or more embodiments (see e.g., FIG. 1, FIG. 9, FIG. 12(c), etc. discussed herein) or the robotic catheter described in one or more of: U.S. Pat. 11,096,552; U.S. Pat. 11,559,490; U.S. Pat. 11,622,828; U.S. Pat. 11,730,551; U.S. Pat. 11,926,062; US2021/0121162; US2021/0369085; US2022/0016394; US2022/0202277;
  • the steering structure of the catheter may include two distal bending sections: the tip and middle sections, and one proximal bending section without an intermediate passive section/segment.
  • Each of the sections may have its own degree of freedom (DOF) (see e.g., A. Banach, et al., Medical Image Analysis, vol. 73, p. 102164 (2021)).
  • DOF degree of freedom
  • the catheter may be actuated through the actuator unit attached to the robotic arm and may include nine motors that control the nine catheter wires. Each motor may operate to bend one wire of the catheter by applying pushing or pulling force to the drive wire.
  • Both the robotic catheter and actuator may be attached to a robotic arm, including a rail that allows for a linear translation of the catheter.
  • the movement of the catheter over or along the rail may be achieved through a linear stage actuator, which pushes or pulls the actuator and the attached catheter.
  • the catheter, actuator unit, and robotic arm may be coupled into a system controller, which allows their communication with the software. While not limited thereto, the robot’s movement may be achieved using a handheld controller (gamepad) or, like in the studies discussed herein, through autonomous driving software.
  • the validation design of the robotic bronchoscope was performed by replicating real surgical scenarios, where the bronchoscope entered the trachea and navigated in the airways toward a predefined target (see e.g., L.
  • apparatuses and systems, and methods and storage mediums for performing navigation, movement, and/or control, and/or for performing depth map-driven autonomous advancement of a multi-section continuum robot may operate to characterize biological objects, such as, but not limited to, blood, mucus, lesions, tissue, etc.
  • the autonomous driving method feature(s) of the present disclosure relies/rely on the 2D image from the monocular bronchoscopic camera without tracking hardware or prior CT segmentation in one or more embodiments.
  • a 200x200 pixel grayscale bronchoscopic image serves as input for a deep learning model (3cGAN (see e.g., A. Banach, F. King, F. Masaki, H. Tsukada, N. Hata, Medical Image Analysis, vol. 73, p. 102164 (2021), the disclosure of which is incorporated by reference herein in its entirety)) that generates a bronchoscopic depth map.
  • 3cGAN deep learning model
  • the 3cGAN’s adversarial loss accumulates losses across six levels:
  • $L_{GAN} = L_{GAN}^{lev_1} + L_{GAN}^{lev_2} + \cdots + L_{GAN}^{lev_6} = \sum_{i=1}^{6} L_{GAN}^{lev_i}$  (1)
  • the cycle consistency loss combines the cycle consistency losses from all three level pairs, where A stands for the bronchoscopic image, B stands for the depth map, C stands for the virtual bronchoscopic image, X̂ represents the estimation of X, and the lower index i stands for the network level.
  • the total loss function of the 3cGAN is:
  • the 3CGAN model underwent unsupervised training using bronchoscopic images from phantoms derived from segmented airways. Bronchoscopic operations to acquire the training data were performed using an aScope 4 bronchoscope (Ambu Inc, Columbia, MD), while virtual bronchoscopic images and ground truth depth maps were generated in Unity (Unity Technologies, San Francisco, CA).
  • the training ex-vivo dataset contained 2458 images.
  • the network was trained in PyTorch using an Adam optimizer for 50 epochs with a learning rate of 2 × 10⁻⁴ and a batch size of one. Training time was approximately 30 hours, and inference of one depth map took less than 0.02 s on a GTX 1080 Ti GPU.
  • the depth map was generated from the 3cGAN models by inputting the 2D image from the bronchoscopic camera.
  • the bronchoscopic image and/or the depth map was then processed for airway detection using a combination of blob detection, thresholding, and peak detection (see e.g., FIG. 11(a) discussed below).
  • Blob detection was performed on a depth map where 20% of the deepest area was thresholded, and the centroids of the resulting shapes were treated as potential points of advancement for the robot to bend and advance towards.
  • Peak detection was performed as a secondary detection method to detect airways that may have been missed by the blob detection. Any peaks detected inside an existing detected blob were disregarded.
  • Direction vector control command may be performed using the directed airways to decide to employ bending and/or insertion, and/or such information may be passed or transmitted to software to control the robot and to perform autonomous advancement.
  • one or more embodiments of the present disclosure may be a robotic bronchoscope using a robotic catheter and actuator unit, a robotic arm, and/or a control software or a User Interface.
  • one or more robotic bronchoscopes may use any of the subject features individually or in combination.
  • depth estimation may be performed from bronchoscopic images and with airway detection (see e.g., FIGS. 19(a) - 19(b)).
  • a bronchoscope and/or a processor or computer in use therewith may use a bronchoscopic image with detected airways and an estimated depth map (or depth estimation) with or using detected airways.
  • a pixel of a set or predetermined color (e.g., red or any other desired color or other indicator) 1002 represents a center of the detected airway.
  • the depth map was generated from the 3cGAN models by inputting the 2D image from the bronchoscopic camera.
  • the depth map was then processed for airway detection using a combination of blob detection (see e.g., T. Kato, F. King, K. Takagi, N. Hata, IEEE/ASME Transactions on Mechatronics, pp. 1-1 (2020), the disclosure of which is incorporated by reference herein in its entirety), thresholding, and peak detection (see e.g., F. Masaki, F. King, T. Kato, H. Tsukada, Y. Colson, and N. Hata, IEEE Transactions on Biomedical Engineering, vol. 68, no. 12, pp.
  • Blob detection was performed on a depth map where 20% of the deepest area was thresholded, and the centroids of the resulting shapes were treated as potential points of advancement for the robot to bend and advance towards. Peak detection (see e.g., F. Masaki, 2021) was performed as a secondary detection method to detect airways that may have been missed by the blob detection. Any peaks detected inside an existing detected blob were disregarded.
  • the integrated control using first-person view grants physicians the capability to guide the distal section's motion via visual feedback from the robotic bronchoscope.
  • users may determine only the lateral and vertical movements of the third (e.g., most distal) section, along with the general advancement or retraction of the robotic bronchoscope.
  • the user's control of the third section may be performed using the computer mouse to drag and drop a cross or plus sign 1003 in the desired direction as shown in FIG. 20(a) and/or FIG. 20(b).
  • a voice control may also be implemented additionally or alternatively to the mouse-operated cross or plus sign 1003.
  • an operator or user may select an airway for the robotic bronchoscope to aim at using a voice recognition algorithm (VoiceBot, Fortress, Ontario, Canada) via a headset (J100 Pro, Jeeco, Shenzhen, China).
  • the options acceptable as input commands to control the robotic bronchoscope were the four cardinal directions (up, down, left, right) plus center, and start/stop.
  • a cross 1003 was shown on top of the endoscopic camera view.
  • the system automatically selected the airway closest to the mark out of the airways detected by the trained 3cGAN model, and sent commands to the robotic catheter to bend the catheter toward the airway (see FIG. 20(b), which shows an example of the camera view in a case where the voice recognition algorithm accepted "up"; the cross 1003 indicated which direction was being selected, and the line or segment 1004 showed the expected trajectory of the robotic catheter).
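  • A minimal sketch of mapping the recognized voice commands listed above to the position of the cross 1003 on the camera view follows; the step size and function name are illustrative assumptions, and start/stop handling is omitted.
```python
# Hedged sketch: move the on-screen cross according to a recognized voice command,
# clamping it to the camera view; "step" is an illustrative pixel offset.
def move_marker(command, marker, view_w, view_h, step=40):
    if command == "center":
        return (view_w // 2, view_h // 2)
    offsets = {"up": (0, -step), "down": (0, step),
               "left": (-step, 0), "right": (step, 0)}
    if command not in offsets:          # e.g., "start"/"stop" are handled elsewhere
        return marker
    dx, dy = offsets[command]
    return (min(max(marker[0] + dx, 0), view_w - 1),
            min(max(marker[1] + dy, 0), view_h - 1))
```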
  • any feature of the present disclosure may be used with features, including, but not limited to, training feature(s), autonomous navigation feature(s), artificial intelligence feature(s), etc., as discussed and referenced herein.
  • the target airway is identified based on the proximity of its center to the user-set marker, visible as the cross or cross/plus sign 1003 in one or more embodiments as shown in FIGS. 19(a) - 19(b) (the cross may be any set or predetermined color, e.g., green or another chosen color).
  • a direction vector may be computed from the center of the depth map to the center of this target detected airway. The vector may inform a virtual gamepad controller (or other type of controller) and/or one or more processors, instigating or being responsible for the bending of the bronchoscopic tip.
  • the robot may advance in a straight line if this direction vector’s magnitude is less than 30% of the camera view’s width, which is called linear stage engagement (LSE).
  • LSE linear stage engagement
  • the process may repeat for each image frame received from the bronchoscopic camera without influence from previous frames.
  • the bronchoscopic robot may maintain a set or predetermined/calculated linear speed (e.g., of 2 mm/s) and a set or predetermined/calculated bending speed (e.g., of 15 deg/s).
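  • A minimal sketch of the per-frame decision described above (a direction vector from the view center to the selected airway, with linear stage engagement when its magnitude is below 30% of the view width) follows; the command dictionary format and the bending-rate scaling are illustrative assumptions.
```python
# Hedged sketch of the per-frame control decision; not the actual robot control software.
import math

def control_step(airway_center, view_w, view_h,
                 lse_fraction=0.30, linear_speed=2.0, bend_speed=15.0):
    cx, cy = view_w / 2.0, view_h / 2.0
    dx, dy = airway_center[0] - cx, airway_center[1] - cy
    magnitude = math.hypot(dx, dy)

    if magnitude < lse_fraction * view_w:
        # Linear stage engagement: airway is near the image center, so insert straight.
        return {"insert_mm_s": linear_speed, "bend_deg_s": (0.0, 0.0)}

    # Otherwise bend the distal section toward the airway at the set bending speed.
    scale = bend_speed / magnitude
    return {"insert_mm_s": 0.0, "bend_deg_s": (dx * scale, dy * scale)}
```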
  • the movements of the initial two sections may be managed by the FTL motion algorithm, based on the movement history of the third section.
  • the reverse FTL motion algorithm may control all three sections, leveraging the combined movement history of all sections recorded during the advancement phase, allowing users to retract the robotic bronchoscope whenever necessary.
  • a most distal segment may be actively controlled with forward kinematic values, while a middle segment and another middle or proximal segment (e.g., one or more following sections) of a steerable catheter or continuum robot move at a first position in the same way as the distal section moved at the first position or a second position near the first position.
  • the FTL algorithm may be used in addition to the robotic control features of the present disclosure.
  • the middle section and the proximal section (e.g., following sections) of a continuum robot may move at a first position (or other state) in the same or similar way as the distal section moved at the first position (or other state) or a second position (or state) near the first position (or state) (e.g., during insertion of the continuum robot/catheter, by using the navigation, movement, and/or control feature(s) of the present disclosure, etc.).
  • the middle section and the distal section of the continuum robot may move at a first position or state in the same, similar, or approximately similar way as the proximal section moved at the first position or state or a second position or state near the first position (e.g., during removal of the continuum robot/catheter).
  • the continuum robot/catheter may be removed by automatically and/or manually moving along the same or similar, or approximately same or similar, path that the continuum robot/catheter used to enter a target (e.g., a body of a patient, an object, a specimen (e.g., tissue), etc.) using the FTL algorithm, including, but not limited to, using FTL with the one or more control, depth map-driven autonomous advancement, or other technique(s) discussed herein.
  • Other FTL features may be used with the one or more features of the present disclosure.
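  • A minimal sketch of the follow-the-leader (FTL) idea described above follows, assuming a simple history of (insertion depth, distal bend) pairs indexed by insertion distance; the class and data layout are illustrative, not the actual FTL implementation.
```python
# Hedged sketch: following sections replay the distal section's recorded bends at the
# insertion depth where those bends occurred.
from collections import deque

class FollowTheLeader:
    def __init__(self):
        self.history = deque()           # (insertion_depth_mm, distal_bend) pairs

    def record(self, insertion_depth, distal_bend):
        self.history.append((insertion_depth, distal_bend))

    def bend_for_section(self, insertion_depth, section_offset_mm):
        # A section located section_offset_mm behind the tip adopts the bend the tip
        # had when it was at (insertion_depth - section_offset_mm).
        target = insertion_depth - section_offset_mm
        best = None
        for depth, bend in self.history:
            if depth <= target:
                best = bend
            else:
                break
        return best                      # None until the section reaches the recorded path
```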
  • one or more embodiments may receive/obtain one or more bronchoscopic images (which may be input into a 3cGAN or any other Al-related architecture/structure for processing) such that a network (e.g., a neural network, a 3cGAN, a GAN, a convolutional neural network, any other Al architecture/structure, etc.) and/or one or more processors may estimate a depth map from the one or more bronchoscopic images.
  • An airway detection algorithm or process may identify the one or more airways in the bronchoscopic image(s) and/or in the depth map (e.g., such as, but not limited to, using thresholding, blob detection, peak detection, and/or any other process for identifying one or more airways as discussed herein and/or as may be set by a user and/or one or more processors, etc.).
  • the pixel 1002 may represent a center of a detected airway and the cross or plus sign 1003 may represent the desired direction determined by the user (e.g., moved using a drag and drop feature, using a touch screen feature, entering a manual command, etc.) and/or by one or more processors (see e.g., any of the processors discussed herein).
  • the line or segment 1004 (which may also be of any set or predetermined color, such as, but not limited to, blue) may be the direction vector between the center of the image/depth map and the center of the detected blob closer or closest in proximity to the cross or plus sign 1003.
  • the direction vector control command may decide between bending and insertion.
  • the direction vector may then be sent to the robot’s control software by a virtual gamepad (or other controller or processor) which may initiate the autonomous advancement.
  • In FIG. 20(a), at least one embodiment may have a network estimate a depth map from a bronchoscopic image, and the airway detection algorithm(s) may identify the airways.
  • the pixel 1002, the cross or plus sign 1003, and the line or segment 1004 may be employed in the same or similar fashion such that discussion of the subject features shown in FIGS. 19(a) - 19(b) and FIG. 20(a) will not be repeated. Characteristics of models and scans for at least one study performed are shown in Table 1 below:
  • FIG. 21(b) shows a robotic bronchoscope in the phantom having reached the location corresponding to the location of the lesion in the patient’s lung, using the proposed supervised-autonomous navigation.
  • the 62-year-old male patient presented with a nodule measuring 21x21x16 [mm] in the right upper lobe (RUL).
  • FIGS. 21(a) - 21(b) illustrate a navigation screen for a clinical target location 125 in or at a lesion reached by autonomous driving and a robotic bronchoscope in a phantom having reached the location corresponding to the location of the lesion using one or more navigation features, respectively.
  • Various procedures were performed at the lesion’s location, including bronchoalveolar lavage, transbronchial needle aspiration, brushing, and transbronchial lung biopsy. The procedure progressed without immediate complications.
  • the inventors, via the experiment, aimed to ascertain whether the proposed autonomous driving method(s) would reach the same clinical target (which the experiment confirmed).
  • one target in the phantom replicated the lesion’s location in the patient’s lung.
  • Airway segmentation of the chest CT scan mentioned above was performed using ‘Thresholding’ and ‘Grow from Seeds’ techniques within 3D Slicer software.
  • a physical/tangible mold replica of the walls of the segmented airways was created using 3D printing in ABS plastic. The printed mold was later filled to produce the Patient Device Phantom using a silicone rubber compound, which was left to cure before being removed from the mold.
  • the inventors, via the experiment, also validated the method features on two ex-vivo porcine lungs with and without breathing motion simulation.
  • Human breathing motion was simulated using an AMBU bag with a 2-second interval between inspiration phases.
  • the target locations were determined as the airways with a diameter constraint imposed to limit movement of the robotic bronchoscope.
  • the phantom contained 75 targets, ex-vivo lung #1 had 52 targets, and ex-vivo lung #2 had 41 targets. The targets were positioned across all airways. This resulted in generating a total number of 168 advancement paths and 1163 branching points without breathing simulation (phantom plus ex-vivo scenarios), and 93 advancement paths and 675 branching points with breathing motion simulation (BM) (ex-vivo) (see Table 1).
  • BM breathing motion simulation
  • Each of the phantoms and specimens contained target locations in all the lobes.
  • LC Local Curvature
  • PR Plane Rotation
  • the Menger curvature was determined using the point itself, the fifteen preceding points, and the fifteen subsequent points, encompassing approximately 5 mm along the centerline.
  • LC is expressed in [mm⁻¹].
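  • A minimal sketch of the LC computation described above (Menger curvature from the point itself and the points fifteen samples before and after it) follows; centerline points are assumed to be 3D NumPy arrays, and the sampling index k is an illustrative parameter.
```python
# Hedged sketch of a Menger-curvature-based local curvature; not the actual Centerline Module.
import numpy as np

def menger_curvature(p_prev, p, p_next):
    a = np.linalg.norm(p - p_prev)
    b = np.linalg.norm(p_next - p)
    c = np.linalg.norm(p_next - p_prev)
    # |cross| equals twice the triangle area; Menger curvature = 4 * Area / (a * b * c).
    area2 = np.linalg.norm(np.cross(p - p_prev, p_next - p_prev))
    return 0.0 if a * b * c == 0 else 2.0 * area2 / (a * b * c)

def local_curvature(centerline, i, k=15):
    # Uses the point itself plus the points k samples before and after it.
    return menger_curvature(centerline[i - k], centerline[i], centerline[i + k])
```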
  • PR measures the angle of rotation of the airway branch on a plane, independent of its angle relative to the trachea. This metric is based on the concept that maneuvering the bronchoscope outside the current plane of motion increases the difficulty of advancement. To assess this, the given vector was compared to the current plane of motion of the bronchoscope.
  • the plane of motion was initially determined by two vectors in the trachea, establishing a plane that intersects the trachea laterally (on the left-right plane of the human body). If the centerline surpassed a threshold of 0.75 [rad] (42 [deg]) for more than a hundred consecutive points, a new plane was defined. This approach allowed for multiple changes in the plane of motion along one centerline if the path indicated it.
  • the PR is represented in [rad]. Both LC and PR have been proven significant in the success rate of advancement with user-controlled robotic bronchoscopes. In this study, the metrics of LC and PR have been selected as maximum values of the generated LC and PR outputs from the ‘Centerline Module’ at each branching point along the path towards the target location, and the maximum values were recorded for further analysis.
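  • A minimal sketch of the PR metric follows, assuming the plane of motion is represented by its unit normal and the branch by a direction vector; this is an interpretation of the description above, not the exact implementation.
```python
# Hedged sketch: angle between a branch direction vector and the current plane of motion.
import numpy as np

def plane_rotation(branch_vec, plane_normal):
    v = branch_vec / np.linalg.norm(branch_vec)
    n = plane_normal / np.linalg.norm(plane_normal)
    # Angle between a vector and a plane = arcsin(|v . n|) for unit vectors.
    return abs(np.arcsin(np.clip(np.dot(v, n), -1.0, 1.0)))
```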
  • FIG. 21(c) shows six consecutive breathing cycles measured by the EM tracking sensors as an example of the breathing motion.
  • the operator placed the cross or marker (see FIGS. 19(a) - 19(b)) in the desired direction.
  • the local camera coordinate frame was calibrated with the robot's coordinate system, and the robotic software was designed to advance toward the detected airway closest to the green cross placed by the operator. One advancement per target was performed and recorded. If the driving algorithm failed, the recording was stopped at the point of failure.
  • the primary metric collected in this study was target reachability, defining the success in reaching the target location in each advancement.
  • the secondary metric was success at each branching point determined as a binary measurement based on visual assessment of the robot entering the user-defined airway.
  • the other metrics included target generation, target lobe, local curvature (LC) and plane rotation (PR) at each branching point, type of branching point, the total time and total path length to reach the target location (if successfully reached), and time to failure location together with airway generation of failure (if not successfully reached).
  • Path length was determined as the linear distance advanced by the robot from the starting point to the target or failure location.
  • a navigator sending voice commands to the autonomous navigation randomly selected the airway at each bifurcation point for the robotic catheter to move into and ended the autonomous navigation when mucus blocked the endoscopic camera view.
  • the navigator was not allowed to change the selected airway before the robotic catheter moved into the selected airway, and was not allowed to retract the robotic catheter in the middle of one attempt.
  • a) Time for bending command: Input commands to control the robotic catheter, including moving forward, retraction, and bending, were recorded at 100 Hz. The time for bending command was collected as the summation of the time for the operator or autonomous navigation software to send input commands to bend the robotic catheter at a bifurcation point.
  • the inventors analyzed the data using multiple regression models with time and force as responses, and with generation number, operator type (human or autonomous), and their interaction as predictors.
  • the inventors treated generation as a continuous variable, so that the main effect of operator type is the difference in intercepts between lines fit for each type, and the interaction term is the corresponding difference in slopes.
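  • A minimal sketch of such a regression fit follows using statsmodels; the data frame and its column names (time, generation, operator_type) are hypothetical stand-ins for the recorded measurements.
```python
# Hedged sketch of a multiple regression with an interaction term, in the spirit of the
# analysis described above; the data values below are placeholders, not study results.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "time": [2.1, 3.0, 4.2, 1.5, 1.8, 2.0],
    "generation": [1, 2, 3, 1, 2, 3],
    "operator_type": ["human", "human", "human",
                      "autonomous", "autonomous", "autonomous"],
})

# "generation * operator_type" expands to both main effects plus their interaction,
# so the interaction coefficient is the difference in slopes between operator types.
model = smf.ols("time ~ generation * operator_type", data=df).fit()
print(model.summary())
```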
  • the target reachability achieved in phantom was 73.3%. 481 branching points were tried in the phantom for autonomous robotic advancements. The overall success rate at branching points achieved was 95.8%. The branching points comprised 399 bifurcations and 82 trifurcations. The success rates at bifurcations and trifurcations were 97% and 92%, respectively.
  • the average LC and PR at successful branching points were respectively 287.5 ± 125.5 [mm⁻¹] and 0.4 ± 0.2 [rad].
  • the average LC and PR at failed branching points were respectively 429.5 ± 133.7 [mm⁻¹] and 0.9 ± 0.3 [rad].
  • the paired Wilcoxon signed-rank test showed statistical significance of LC (p < 0.001) and PR (p < 0.001). Boxplots showing the significance of LC and PR on success at branching points are presented in FIGS. 23(a) - 23(b) together with ex-vivo data.
  • FIGS. 22(a) - 22(c) illustrate views of at least one embodiment of a navigation algorithm performing at various branching points in a phantom
  • FIG. 22(a) shows a path on which the target location (dot) was not reached (e.g., the algorithm may not have traversed the last bifurcation where an airway on the right was not detected)
  • FIG. 22(b) shows a path on which the target location (dot) was successfully reached
  • FIG. 22(c) shows a path on which the target location was also successfully reached.
  • the highlighted squares represent estimated depth maps with detected airways at each visible branching point on paths toward target locations.
  • the black frame (or a frame of another set/first color) represents success at a branching point, and a frame of a set or predetermined color (e.g., red or other different/second color; e.g., frame 1006 may be the frame of a red or different/second color as shown in the bottom right frame of FIG. 22(a)) represents a failure at a branching point. All three targets were in RLL.
  • Red pixel(s) represent the center of a detected airway, the green cross (e.g., the cross or plus sign 1003) marks the user-selected direction, and the blue segment (e.g., the segment 1004) is the direction vector between the center of the image/depth map and the center of the detected blob in closer or closest proximity to the green cross (e.g., the cross or plus sign 1003).
  • FIGS. 23(a) - 23(b) illustrate graphs showing success at branching point(s) with respect to Local Curvature (LC) and Plane Rotation (PR), respectively, for all data combined in one or more embodiments.
  • FIGS. 23(a) - 23(b) show the statistically significant difference between successful performance at branching points with respect to LC (see FIG. 23(a)) and PR (see FIG. 23(b)).
  • LC is expressed in [mm⁻¹] and PR in [rad].
  • the target reachability achieved in ex-vivo #1 was 77% and in ex-vivo #2 78% without breathing motion.
  • the target reachability achieved in ex-vivo #1 was 69% and in ex-vivo #2 76% with breathing motion.
  • branching points were tried in ex-vivo #1 and 583 in ex-vivo #2 for autonomous robotic advancements.
  • the overall success rate at branching points achieved was 97% in ex-vivo #1 and 97% in ex-vivo #2 without BM, and 96% in ex-vivo #1 and 97% in ex-vivo #2 with BM.
  • the branching points comprised 327 bifurcations and 62 trifurcations in ex-vivo #1 and 255 bifurcations and 38 trifurcations in ex-vivo #2 without BM.
  • the branching points comprised 326 bifurcations and 59 trifurcations in ex-vivo #1 and 252 bifurcations and 38 trifurcations in ex-vivo #2 with BM.
  • the success rates without BM at bifurcations and trifurcations were respectively 98% and 92% in ex-vivo #1, and 97% and 95% in ex-vivo #2.
  • the success rates with BM at bifurcations and trifurcations were respectively 96% and 93% in ex-vivo #1, and 96% and 97% in ex-vivo #2.
  • the average LC and PR at successful branching points were respectively 211.9 ± 112.6 [mm⁻¹] and 0.4 ± 0.2 [rad] for ex-vivo #1, and 184.5 ± 110.4 [mm⁻¹] and 0.6 ± 0.2 [rad] for ex-vivo #2.
  • the average LC and PR at failed branching points were respectively 393.7 ± 153.5 [mm⁻¹] and 0.6 ± 0.3 [rad] for ex-vivo #1, and 369.5 ± 200.6 [mm⁻¹] and 0.7 ± 0.4 [rad] for ex-vivo #2.
  • FIGS. 23(a) - 23(b) represent the comparison of LC and PR for successful and failed branching points, for all data (phantom, ex-vivos, ex-vivos with breathing motion) combined.
  • results of Local Curvature (LC) and Plane Rotation (PR) were displayed on three advancement paths towards different target locations with highlighted, color-coded values of LC and PR along the paths.
  • the views illustrated impact(s) of Local Curvature (LC) and Plane Rotation (PR) on one or more performances of one or more embodiments of a navigation algorithm where one view illustrated a path toward a target location in RML of ex vivo #1, which was reached successfully, where another view illustrated a path toward a target location in LLL of ex vivo #1, which was reached successfully, and where yet another view illustrated a path toward a target location in RLL of the phantom, which failed at a location marked with a square (e.g., a red square).
  • FIGS. 24(a) - 24(c) illustrate three advancement paths towards different target locations (see blue dots) using one or more embodiments of navigation feature(s) with and without BM.
  • FIGS. 24(a) - 24(c) illustrate one or more impacts of breathing motion on a performance of the one or more navigation algorithm(s), where FIG. 24(a) shows a path on which the target location (ex vivo #1 LLL) was reached both with and without breathing motion (BM), where FIG. 24(b) shows a path on which the target location (ex vivo #1 RLL) was not reached without BM but was reached with BM (such a result illustrates that at times BM may help the algorithm(s) with detecting and entering the right airway for one or more embodiments of the present disclosure), and where FIG. 24(c) shows a path on which the target location (ex vivo #1 RML) was reached without BM but was not reached with BM (such a result illustrates that at times BM may affect performance of an algorithm in one or more situations; that said, the algorithms of the present disclosure are still highly effective under such a condition).
  • the highlighted squares represent estimated depth maps with detected airways at each visible branching point on paths toward target locations.
  • the black frame represents success at a branching point and the red frame represents a failure at a branching point.
  • FIG. 25(a) illustrates the box plots for time for the operator or the autonomous navigation to bend the robotic catheter
  • FIG. 25(b) illustrates the box plots for the maximum force for the operator or the autonomous navigation at each bifurcation point.
  • FIGS. 27(a) and 27(b) show the dependency on the airway generation of the lung: the dependency of the time and the force on the airway generation of the lung is shown in FIGS. 27(a) and 27(b) with regression lines and 95% confidence intervals. For both metrics, the differences between the regression lines for each operator type become larger as the airway generation increases.
  • FIGS. 27(a) and 27(b) show scatter plots for the time to bend the robotic catheter (FIG. 27(a)) and the maximum force for a human operator and/or the autonomous navigation software (FIG. 27(b)), respectively. Solid lines show the linear regression lines with 95% confidence intervals. While not required, jittering was applied on the horizontal axis for visualization.
  • the inventors have implemented the autonomous advancement of the bronchoscopic robot into a practical clinical tool, providing physicians with the capability to manually outline the robot's desired path. This is achieved by simply placing a marker on the screen in the intended direction using the computer mouse (or other input device). While motion planning remains under physician control, both airway detection and motion execution are fully autonomous features. This amalgamation of manual control and autonomy is a technological advance; to the inventors' knowledge, the methods of the present disclosure represent the pioneering clinical instrument facilitating airway tracking for supervised-autonomous driving within a target (e.g., the lung). To validate its effectiveness, the inventors assessed the performance of the driving algorithm(s), emphasizing target reachability and success at branching points.
  • the autonomous driving was compared with two human operators using a gamepad controller in a living swine model under breathing motion.
  • The inventors' blinded comparison study revealed that the autonomous driving took less time to bend the robotic catheter and applied less force to the anatomy than navigation by a human operator using a gamepad controller, suggesting the autonomous driving successfully identified the center of the airway in the camera view even with breathing motion and accurately moved the robotic catheter into the identified airway.
  • One or more embodiments of the present disclosure are in accordance with two studies that recently introduced the approach for autonomous driving in the lung (see e.g., J. Sganga, et al., RAL, pp. 1-10 (2019), which is incorporated by reference herein in its entirety, and Y. Zou, et al., IEEE Transactions on Medical Robotics and Bionics, vol. 4, no. 3, pp. 588-598 (2022), which is incorporated by reference herein in its entirety).
  • the first study reports 95% target reachability with the robot reaching the target in 19 out of 20 trials, but it is limited to 4 targets (J. Sganga, et al., RAL, pp.
  • the subject study does not report any details on the number of targets, the location of the targets within lung anatomy, the origin of the human lung phantom, and the statistical analysis to identify the reasons for failure.
  • the only metric used is the time to target.
  • Both of these Sganga, et al. and Zou, et al. studies differ from the present disclosure in numerous ways, including, but not limited to, in the design of the method(s) of the present disclosure and the comprehensiveness of clinical validation.
  • the methods of those two studies are based on airway detection from supervised learning algorithms.
  • one or more methods of the present disclosure first estimate the bronchoscopic depth map using an unsupervised generative learning technique (A. Banach, F. King, F. Masaki, H. Tsukada, N.
  • One or more embodiments of the presented method of the present disclosure may be dependent on the quality of bronchoscopic depth estimation by 3cGAN (see e.g., A. Banach, F. King, F. Masaki, H. Tsukada, N. Hata, “Medical image analysis, vol. 73, p.
  • FIGS. 26(a) - 26(d) illustrate one or more examples of depth estimation failure and artifact robustness that may be observed in one or more embodiments.
  • FIG. 26(a) shows a scenario where the depth map (right side of FIG. 26(a)) was not estimated accurately and therefore the airway detection algorithm did not detect the airway partially visible on the right side of the bronchoscopic image (left side of FIG. 26(a)).
  • FIG. 26(b) shows a scenario where the depth map estimated the airways accurately despite presence of debris.
  • FIG. 26(c) shows a scenario opposite to the one presented in FIG. 26(a), where the airway on the right side of the bronchoscopic image (left side of FIG. 26(c)) is more visible and the airway detection algorithm detects it successfully.
  • FIG. 26(d) shows a scenario where a visual artifact is ignored by the depth estimation algorithm and both visible airways are detected in the depth map.
  • Another possible scenario may be related to the fact that the control algorithm should guide the robot along the centerline.
  • Dynamic LSE operates to solve that issue and to guide the robot towards the centerline when not at a branching point.
  • the inventors also identified the failure at branching points as a result of lacking short-term memory, and that using short-term memory may increase success rate(s) at branching points.
  • the algorithm may detect some of the visible airways only for a short moment, not leaving enough time for the control algorithm to react.
  • a potential solution would involve such short-term memory that ‘remembers’ the detected airways and forces the control algorithm to make the bronchoscopic camera ‘look around’ and make sure that no airways were missed.
  • Such a ‘look around’ mode implemented between certain time or distance intervals may also prevent from missing airways that were not visible in the bronchoscopic image in one or more embodiments of the present disclosure.
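  • A minimal sketch of such a short-term airway memory follows; the frame horizon and merge radius are illustrative assumptions, and the 'look around' scanning motion itself is not implemented here.
```python
# Hedged sketch: remember airways detected in recent frames so a briefly visible
# airway is not lost before the control algorithm can react.
from collections import deque

class AirwayMemory:
    def __init__(self, horizon=30):           # keep roughly the last 30 frames
        self.frames = deque(maxlen=horizon)

    def update(self, detected_centers):
        self.frames.append(list(detected_centers))

    def remembered_airways(self, merge_radius=15):
        # Merge detections across recent frames that fall within merge_radius pixels.
        merged = []
        for frame in self.frames:
            for c in frame:
                if all((c[0] - m[0]) ** 2 + (c[1] - m[1]) ** 2 > merge_radius ** 2
                       for m in merged):
                    merged.append(c)
        return merged
```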
  • the present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with continuum robot devices, systems, methods, and/or storage mediums and/or with endoscope devices, systems, methods, and/or storage mediums.
  • continuum robot devices, systems, methods, and/or storage mediums are disclosed in at least: U.S. Pat. No. 11,882,365, filed on February 6, 2022, the disclosure of which is incorporated by reference herein in its entirety.
  • Such endoscope devices, systems, methods, and/or storage mediums are disclosed in at least: U.S. Pat. Pub.
  • artificial intelligence structure(s), such as, but not limited to, residual networks, neural networks, convolutional neural networks, GANs, cGANs, etc., may be used; other types of AI structure(s) and/or network(s) may also be used (the below discussed network/structure examples are illustrative only, and any of the features of the present disclosure may be used with any AI structure or network, including AI networks that are less complex than the network structures discussed below).
  • One or more processors or computers 128 may be part of a system in which the one or more processors or computers 128 (or any other processor discussed herein) communicate with other devices (e.g., a database, a memory, an input device, an output device, etc.).
  • one or more models may have been trained previously and stored in one or more locations, such as, but not limited to, the memory, the database, etc.
  • one or more models and/or data discussed herein (e.g., training data, testing data, validation data, imaging data, etc.) may be received or obtained via a device, such as the input device.
  • a user may employ an input device (which may be a separate computer or processor, a voice detector (e.g., a microphone), a keyboard, a touchscreen, or any other input device known to those skilled in the art).
  • an input device may not be used (e.g., where user interaction is eliminated by one or more artificial intelligence features discussed herein).
  • the output device may receive one or more outputs discussed herein to perform coregistration, autonomous navigation, movement detection, control, and/or any other process discussed herein.
  • the database and/or the memory may have outputted information (e.g., trained model(s), detected marker information, image data, test data, validation data, training data, coregistration result(s), segmentation model information, object detection/regression model information, combination model information, etc.) stored therein.
  • one or more embodiments may include several types of data stores, memory, storage media, etc. as discussed above, and such storage media, memory, data stores, etc. may be stored locally or remotely.
  • the input may be the entire image frame or frames
  • the output may be the centroid coordinates of a target, an octagon, circle (e.g., using circle fit) or other geometric shape used, one or more airways, and/or coordinates of a portion of a catheter or probe.
  • Any of a variety of architectures of a regression model may be used.
  • the regression model may use a combination of one or more convolution layers, one or more max-pooling layers, and one or more fully connected dense layers.
  • the Kernel size, Width/Number of filters (output size), and Stride sizes of each layer may be varied dependent on the input image or data as well as the preferred output.
  • hyperparameter search with, for example, a fixed optimizer and with a different width may be performed.
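  • A minimal sketch of such a regression model follows in PyTorch; the kernel sizes, widths, strides, and the assumed 200x200 single-channel input are illustrative choices, not a prescribed architecture.
```python
# Hedged sketch of a regression model combining convolution, max-pooling, and fully
# connected layers; layer sizes are illustrative and would be tuned by hyperparameter search.
import torch.nn as nn

class CoordinateRegressor(nn.Module):
    def __init__(self, out_dim=2):            # e.g., (x, y) centroid of a target
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                   # 200x200 -> 100x100
            nn.Conv2d(16, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                   # 100x100 -> 50x50
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 50 * 50, 128), nn.ReLU(),   # assumes 200x200 grayscale input
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.head(self.features(x))
```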
  • One or more embodiments may use one or more features for a regression model as discussed in “Deep Residual Learning for Image Recognition” to Kaiming He, et al., Microsoft Research, December 10, 2015, which is incorporated by reference herein in its entirety.
  • Other embodiments will use the features for a regression model as discussed in J. Sganga, et al., “Autonomous Driving in the Lung using Deep Learning for Localization,” Jul. 2019, arxiv.org/abs/1907.08136v1, the disclosure of which is incorporated by reference herein in its entirety.
  • the output from a segmentation model is a “probability” of each pixel that may be categorized as a target or as an estimate (incorrect) or actual (correct) match; post-processing after prediction via the trained segmentation model may be developed to better define, determine, or locate the final coordinate of the catheter location and/or determine the autonomous navigation, movement detection, and/or control status of the catheter or continuum robot.
  • One or more embodiments of a semantic segmentation model may be performed using the One-Hundred Layers Tiramisu method discussed in “The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation” to Simon Jegou, et al., Montreal Institute for Learning Algorithms, published October 31, 2017
  • a batch size (of images in a batch) may be one or more of the following: 1, 2, 4, 8, 16, and, from the one or more experiments performed, a bigger batch size typically performs better (e.g., with greater accuracy).
  • the optimization of all of these hyper-parameters depends on the size of the available data set as well as the available computer/computing resources; thus, once more data is available, different hyperparameter values may be chosen. Additionally, in one or more embodiments, steps/ epoch may be 25, 50, 100, and the epochs may be greater than (>) 1000. In one or more embodiments, a convolutional autoencoder (CAE) may be used.
  • CAE convolutional autoencoder
  • the present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with continuum robotic systems and catheters, such as, but not limited to, those described in U.S. Patent Publication Nos. 2019/0105468; 2021/0369085; 2020/0375682; 2021/0121162; 2021/0121051; and 2022/0040450, each of which patents and/or patent publications is incorporated by reference herein in its entirety.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Robotics (AREA)
  • Optics & Photonics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Pulmonology (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Otolaryngology (AREA)
  • Astronomy & Astrophysics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Endoscopes (AREA)
  • Manipulator (AREA)

Abstract

Examples of autonomous navigation, movement detection, and/or control of the invention include, but are not limited to, autonomous navigation of one or more portions of a continuum robot toward a particular target, motion detection of the continuum robot, smoothing by mimicry, and/or state change(s) of a continuum robot. Example applications include imaging, evaluation, and diagnosis of biological objects, in particular for gastrointestinal, cardiac, bronchial, and/or ophthalmic applications, and are achieved by means of one or more optical instruments, including optical probes, catheters, endoscopes, and bronchoscopes. The techniques provided herein also improve treatment and imaging efficiency while achieving more precise images, and also provide devices, systems, methods, and storage mediums that reduce mental and physical burden and improve ease of use.
PCT/US2024/037930 2023-07-14 2024-07-12 Planification et navigation autonomes de robot à continuum avec entrée vocale Pending WO2025019377A1 (fr)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US202363513794P 2023-07-14 2023-07-14
US202363513803P 2023-07-14 2023-07-14
US63/513,803 2023-07-14
US63/513,794 2023-07-14
US202363587637P 2023-10-03 2023-10-03
US63/587,637 2023-10-03
US202363603523P 2023-11-28 2023-11-28
US63/603,523 2023-11-28

Publications (1)

Publication Number Publication Date
WO2025019377A1 true WO2025019377A1 (fr) 2025-01-23

Family

ID=94282649

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/US2024/037930 Pending WO2025019377A1 (fr) 2023-07-14 2024-07-12 Planification et navigation autonomes de robot à continuum avec entrée vocale
PCT/US2024/037935 Pending WO2025019378A1 (fr) 2023-07-14 2024-07-12 Détection de mouvement de dispositif et planification de navigation et/ou navigation autonome pour un robot continuum ou un dispositif ou système endoscopique
PCT/US2024/037924 Pending WO2025019373A1 (fr) 2023-07-14 2024-07-12 Navigation autonome d'un robot continuum

Family Applications After (2)

Application Number Title Priority Date Filing Date
PCT/US2024/037935 Pending WO2025019378A1 (fr) 2023-07-14 2024-07-12 Détection de mouvement de dispositif et planification de navigation et/ou navigation autonome pour un robot continuum ou un dispositif ou système endoscopique
PCT/US2024/037924 Pending WO2025019373A1 (fr) 2023-07-14 2024-07-12 Navigation autonome d'un robot continuum

Country Status (1)

Country Link
WO (3) WO2025019377A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120116216B (zh) * 2025-03-17 2025-11-28 复旦大学 一种三段式柔性机器人避障位姿控制方法
CN120635865B (zh) * 2025-08-13 2025-12-09 中国电建集团西北勘测设计研究院有限公司 一种自动驾驶障碍物意图预测与避让方法及系统

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160183841A1 (en) * 2013-08-15 2016-06-30 Intuitive Surgical Operations Inc. Graphical User Interface For Catheter Positioning And Insertion
US20170209028A1 (en) * 2010-09-15 2017-07-27 Koninklijke Philips N.V. Robotic control of an endoscope from blood vessel tree images
US20210045827A1 (en) * 2019-08-15 2021-02-18 Verb Surgical Inc. Admittance compensation for surgical tool
US20210259521A1 (en) * 2020-02-21 2021-08-26 Canon U.S.A., Inc. Controller for selectively controlling manual or robotic operation of endoscope probe
US20210343397A1 (en) * 2020-04-30 2021-11-04 Clearpoint Neuro, Inc. Surgical planning systems that automatically assess different potential trajectory paths and identify candidate trajectories for surgical systems
US20230060639A1 (en) * 2020-02-12 2023-03-02 Board Of Regents Of The University Of Texas System Microrobotic systems and methods for endovascular interventions
WO2023037367A1 (fr) * 2021-09-09 2023-03-16 Magnisity Ltd. Dispositif endoluminal autodirecteur utilisant une carte luminale déformable dynamique

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7324661B2 (en) * 2004-04-30 2008-01-29 Colgate-Palmolive Company Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization
US20140287393A1 (en) * 2010-11-04 2014-09-25 The Johns Hopkins University System and method for the evaluation of or improvement of minimally invasive surgery skills
KR102420386B1 (ko) * 2016-06-30 2022-07-13 인튜어티브 서지컬 오퍼레이션즈 인코포레이티드 영상 안내식 시술 중에 복수의 모드에서 안내 정보를 디스플레이하기 위한 그래픽 사용자 인터페이스
US10022192B1 (en) * 2017-06-23 2018-07-17 Auris Health, Inc. Automatically-initialized robotic systems for navigation of luminal networks
US20190105468A1 (en) * 2017-10-05 2019-04-11 Canon U.S.A., Inc. Medical continuum robot with multiple bendable sections
JP2021506549A (ja) * 2017-12-18 2021-02-22 キャプソ・ヴィジョン・インコーポレーテッド カプセルカメラを使用した胃の検査方法および装置
AU2021232090A1 (en) * 2020-03-06 2022-10-27 Histosonics, Inc. Minimally invasive histotripsy systems and methods
US11786106B2 (en) * 2020-05-26 2023-10-17 Canon U.S.A., Inc. Robotic endoscope probe having orientation reference markers
CN115990042A (zh) * 2021-10-20 2023-04-21 奥林巴斯株式会社 内窥镜系统以及使用内窥镜系统进行导引和成像的方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170209028A1 (en) * 2010-09-15 2017-07-27 Koninklijke Philips N.V. Robotic control of an endoscope from blood vessel tree images
US20160183841A1 (en) * 2013-08-15 2016-06-30 Intuitive Surgical Operations Inc. Graphical User Interface For Catheter Positioning And Insertion
US20210045827A1 (en) * 2019-08-15 2021-02-18 Verb Surgical Inc. Admittance compensation for surgical tool
US20230060639A1 (en) * 2020-02-12 2023-03-02 Board Of Regents Of The University Of Texas System Microrobotic systems and methods for endovascular interventions
US20210259521A1 (en) * 2020-02-21 2021-08-26 Canon U.S.A., Inc. Controller for selectively controlling manual or robotic operation of endoscope probe
US20210343397A1 (en) * 2020-04-30 2021-11-04 Clearpoint Neuro, Inc. Surgical planning systems that automatically assess different potential trajectory paths and identify candidate trajectories for surgical systems
WO2023037367A1 (fr) * 2021-09-09 2023-03-16 Magnisity Ltd. Dispositif endoluminal autodirecteur utilisant une carte luminale déformable dynamique

Also Published As

Publication number Publication date
WO2025019378A1 (fr) 2025-01-23
WO2025019373A1 (fr) 2025-01-23

Similar Documents

Publication Publication Date Title
US20230088056A1 (en) Systems and methods for navigation in image-guided medical procedures
US12251175B2 (en) Medical instrument driving
US12251177B2 (en) Control scheme calibration for medical instruments
US12156704B2 (en) Intraluminal navigation using ghost instrument information
KR20230040311A (ko) 하이브리드 영상화 및 조종용 시스템 및 방법
WO2025019377A1 (fr) Planification et navigation autonomes de robot à continuum avec entrée vocale
CN120659587A (zh) 用于生成用于医疗过程的3d导航界面的系统和方法
JP2022546419A (ja) 器具画像信頼性システム及び方法
US20220202273A1 (en) Intraluminal navigation using virtual satellite targets
KR20250018156A (ko) 로봇 내시경의 자체 정렬 및 조정을 위한 시스템 및 방법
WO2025117336A1 (fr) Cathéters orientables et différences de force de fil
US20250143812A1 (en) Robotic catheter system and method of replaying targeting trajectory
WO2024134467A1 (fr) Segmentation lobulaire du poumon et mesure de la distance nodule à limite de lobe
US20240164853A1 (en) User interface for connecting model structures and associated systems and methods
WO2025184464A1 (fr) Navigation autonome d'un cathéter orientable
CN118302127A (zh) 包括用于经皮肾镜取石术程序的引导系统的医疗器械引导系统以及相关联的装置和方法
US20250170363A1 (en) Robotic catheter tip and methods and storage mediums for controlling and/or manufacturing a catheter having a tip
US20250295290A1 (en) Vision-based anatomical feature localization
EP4454571A1 (fr) Navigation autonome d'un robot endoluminal
US20230225802A1 (en) Phase segmentation of a percutaneous medical procedure
CN121002532A (zh) 从术中图像中提取伸长装置
WO2025059207A1 (fr) Appareil médical doté d'une structure de support et son procédé d'utilisation
WO2024081745A2 (fr) Localisation et ciblage de petites lésions pulmonaires
WO2025188850A1 (fr) Dispositif de lithotripsie autonome et procédés d'affichage d'actions correctives associées
JP2025506137A (ja) ナビゲーションが改良された気管支鏡グラフィカルユーザーインターフェイス

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24843783

Country of ref document: EP

Kind code of ref document: A1