WO2025019378A1 - Device movement detection and navigation planning and/or autonomous navigation for a continuum robot or an endoscopic device or system - Google Patents
Device movement detection and navigation planning and/or autonomous navigation for a continuum robot or an endoscopic device or system
- Publication number
- WO2025019378A1 (PCT/US2024/037935)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- targets
- continuum robot
- display
- images
- navigation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1615—Programme controls characterised by special kind of manipulator, e.g. planar, scara, gantry, cantilever, space, closed chain, passive/active joints and tendon driven manipulators
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/000094—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00147—Holding or positioning arrangements
- A61B1/0016—Holding or positioning arrangements using motor drive units
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/005—Flexible endoscopes
- A61B1/0051—Flexible endoscopes with controlled bending of insertion part
- A61B1/0052—Constructional details of control elements, e.g. handles
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/267—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the respiratory tract, e.g. laryngoscopes, bronchoscopes
- A61B1/2676—Bronchoscopes
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/25—User interfaces for surgical systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/32—Surgical robots operating autonomously
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/30—Surgical robots
- A61B34/37—Leader-follower robots
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B23/00—Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
- G02B23/24—Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
- G02B23/2476—Non-optical details, e.g. housings, mountings, supports
- G02B23/2484—Arrangements in relation to a camera or imaging device
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B18/00—Surgical instruments, devices or methods for transferring non-mechanical forms of energy to or from the body
- A61B2018/00571—Surgical instruments, devices or methods for transferring non-mechanical forms of energy to or from the body for achieving a particular surgical effect
- A61B2018/00577—Ablation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B2034/107—Visualisation of planned trajectories or target regions
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2051—Electromagnetic tracking systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/20—Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
- A61B2034/2046—Tracking techniques
- A61B2034/2065—Tracking using image or pattern recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/374—NMR or MRI
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/376—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
- A61B2090/3762—Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy using computed tomography systems [CT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
- A61B2090/378—Surgical systems with images on a monitor during operation using ultrasound
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/361—Image-producing devices, e.g. surgical cameras
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- A61B90/36—Image-producing devices or illumination devices not otherwise provided for
- A61B90/37—Surgical systems with images on a monitor during operation
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40234—Snake arm, flexi-digit robotic manipulator, a hand at each end
Definitions
- the present disclosure generally relates to imaging and, more particularly, to bronchoscope(s), robotic bronchoscope(s), robot apparatus(es), method(s), and storage medium(s) that operate to image a target, object, or specimen (such as, but not limited to, a lung, a biological object or sample, tissue, etc.) and/or to a continuum robot apparatus, method, and storage medium to implement robotic control for all sections of a catheter or imaging device/apparatus or system to perform navigation planning and/or autonomous navigation and/or to match a state or states when each section reaches or approaches a same or similar, or approximately a same or similar, state or states of a first section of the catheter or imaging device, apparatus, or system.
- One or more bronchoscopic, endoscopic, medical, camera, catheter, or imaging devices, systems, and methods and/or storage mediums for use with same, are discussed herein.
- One or more devices, methods, or storage mediums may be used for medical applications and, more particularly, to steerable, flexible medical devices that may be used for or with guide tools and devices in medical procedures, including, but not limited to, endoscopes, cameras, and catheters.
- BACKGROUND [0003] Endoscopy, bronchoscopy, catheterization, and other medical procedures facilitate the ability to look inside a body.
- a flexible medical tool may be inserted into a patient’s body, and an instrument may be passed through the tool to examine or treat an area inside the body.
- a bronchoscope is an endoscopic instrument to view inside the airways of a patient. Catheters and other medical tools may be inserted through a tool channel in the bronchoscope to provide a pathway to a target area in the patient for diagnosis, planning, medical procedure(s), treatment, etc.
- Robotic bronchoscopes, robotic endoscopes, or other robotic imaging devices may be equipped with a tool channel or a camera and biopsy tools, and such devices (or users of such devices) may insert/retract the camera and biopsy tools to exchange such components.
- the robotic bronchoscopes, endoscopes, or other imaging devices may be used in association with a display system and a control system.
- An imaging device may be placed in the bronchoscope, the endoscope, or other imaging device/system to capture images inside the patient and to help control and move the bronchoscope, the endoscope, or the other type of imaging device, and a display or monitor may be used to view the captured images.
- An endoscopic camera that may be used for control may be positioned at a distal part of a catheter or probe (e.g., at a tip section).
- the display system may display, on the monitor, an image or images captured by the camera, and the display system may have a display coordinate used for displaying the captured image or images.
- the control system may control a moving direction of the tool channel or the camera.
- the tool channel or the camera may be bent according to a control by the control system.
- the control system may have an operational controller (such as, but not limited to, a joystick, a gamepad, a controller, an input device, etc.), and physicians may rotate or otherwise move the camera, probe, catheter, etc. to control same.
- Such control methods or systems are limited in effectiveness. Indeed, while information obtained from an endoscopic camera at a distal end or tip section may help decide which way to move the distal end or tip section, such information does not provide details on how the other bending sections or portions of the bronchoscope, endoscope, or other type of imaging device may move to best assist the navigation.
- At least one application of looking inside the body relates to lung cancer, which is the most common cause of cancer-related deaths in the United States. It is also a commonly diagnosed malignancy, second only to breast cancer in women and prostate cancer in men. Early diagnosis of lung cancer is shown to improve patient outcomes, particularly in peripheral pulmonary nodules (PPNs). During a procedure, such as a transbronchial biopsy, targeting lung lesions or nodules may be challenging.
- Electromagnetically Navigated Bronchoscopy (ENB) is increasingly applied in the transbronchial biopsy of PPNs due to its excellent safety profile, with fewer pneumothoraxes, chest tubes, significant hemorrhage episodes, and respiratory failure episodes than a CT-guided biopsy strategy (see e.g., as discussed in C. R. Dale, D. K. Madtes, V. S. Fan, J. A. Gorden, and D. L. Veenstra, “Navigational bronchoscopy with biopsy versus computed tomography-guided biopsy for the diagnosis of a solitary pulmonary nodule: a cost-consequences analysis,” J Bronchology Interv Pulmonol, vol. 19, no. 4, pp.
- ENB has lower diagnostic accuracy or value due to dynamic deformation of the tracheobronchial tree by bronchoscope maneuvers (see e.g., as discussed in T. Whelan, R. F. Salas-Moreno, B. Glocker, A. J. Davison, and S. Leutenegger, “ElasticFusion,” International Journal of Robotics Research, vol. 35, no. 14, pp. 1697–1716, Dec.
- Robotic-assisted biopsy has emerged as a minimally invasive and precise approach for obtaining tissue samples from suspicious pulmonary lesions in lung cancer diagnosis.
- Vision-based tracking in VNB does not require an electromagnetic tracking sensor to localize the bronchoscope in CT; rather, VNB directly localizes the bronchoscope using the camera view, conceptually removing the chance of CT-to-body divergence.
- Jaeger, et al. (as discussed in H. A. Jaeger et al., “Automated Catheter Navigation With Electromagnetic Image Guidance,” IEEE Trans Biomed Eng, vol. 64, no. 8, pp. 1972–1979, Aug. 2017, doi: 10.1109/TBME.2016.2623383, which is incorporated by reference herein in its entirety) proposed such a method where Jaeger, et al. incorporated a custom tendon-driven catheter design with Electro-magnetic (EM) sensors controlled with an electromechanical drive train.
- Zou, et al. (Y. Zou, et al., IEEE Transactions on Medical Robotics and Bionics, vol. 4, no. 3, pp. 588-598 (2022), which is incorporated by reference herein in its entirety) proposed a related vision-based approach; however, this approach was tailored for computer-aided manual bronchoscopes rather than specifically for robotic bronchoscopes.
- At least one imaging, optical, or control device, system, method, and storage medium for controlling one or more endoscopic or imaging devices or systems, for example, by implementing automatic (e.g., robotic) or manual control of each portion or section of the at least one imaging, optical, or control device, system, method, and storage medium to keep track of and to match the state or state(s) of a first portion or section in a case where each portion or section reaches or approaches a same or similar, or approximately same or similar, state or state(s) and to provide a more appropriate navigation of a device (such as, but not limited to, a bronchoscopic catheter being navigated to reach a nodule).
- One or more embodiments of the present disclosure provide devices, systems, methods, and storage mediums for using a navigation and/or control method or methods (manual or automatic) in one or more apparatuses or systems (e.g., an imaging apparatus or system, an endoscopic imaging device or system, etc.).
- the present disclosure provides novel supervised-autonomous driving approach(es) that integrate a novel depth-based airway tracking method(s) and a robotic bronchoscope.
- the present disclosure provides extensively developed and validated navigation planning and/or autonomous navigation approaches for both advancing and centering continuum robots, such as, but not limited to, for robotic bronchoscopy.
- the inventors represent, to the best of the inventors’ knowledge, that the feature(s) of the present disclosure provide the initial autonomous navigation and/or planning technique(s) applicable in continuum robots, bronchoscopy, etc. that require no retraining and have undergone full validation in vitro, ex vivo, and in vivo.
- one or more features of the present disclosure incorporate unsupervised depth estimation from an image (e.g., a bronchoscopic image), coupled with a continuum robot (e.g., a robotic bronchoscope), and functions without any a priori knowledge of the patient’s anatomy, which is a significant advancement.
- One or more methods of the present disclosure constitute and provide one or more foundational perception algorithms guiding the movements of the robot, continuum robot, or robotic bronchoscope. By simultaneously handling the tasks of advancing and centering the robot, probe, catheter, robotic bronchoscope, etc., the method(s) of the present disclosure may assist physicians in concentrating on the clinical decision-making to reach the target, which achieves or provides enhancements to the efficacy of such imaging, bronchoscopy, etc.
- One or more devices, systems, methods, and storage mediums for navigation planning and/or performing control or navigation including of a multi-section continuum robot and/or for viewing, imaging, and/or characterizing tissue and/or lesions, or an object or sample, using one or more imaging techniques (e.g., robotic bronchoscope imaging, bronchoscope imaging, etc.) or modalities (such as, but not limited to, computed tomography (CT), Magnetic Resonance Imaging (MRI), any other techniques or modalities used in imaging (e.g., Optical Coherence Tomography (OCT), Near infrared fluorescence (NIRF), Near infrared auto-fluorescence (NIRAF), Spectrally Encoded Endoscopes (SEE)), etc.) are disclosed herein.
- movement and/or planned movement/navigation of a robot may be automatically calculated and autonomous navigation or control to a target, sample, or object (e.g., a nodule, a lung, an airway, a predetermined location in a sample, a predetermined location in a patient, etc.) may be achieved and/or planned.
- the planning, advancement, movement, and/or control of the robot may be secured in one or more embodiments (e.g., the robot will not fall into a loop).
- automatic calculation of the navigation plan and/or movement of the robot may be provided (e.g., targeting an airway during a bronchoscopy or other lung-related procedure/imaging may be performed automatically so that any next move or control is automatic), and planning and/or autonomous navigation to a predetermined target, sample, or object (e.g., a nodule, a lung, a location in a sample, a location in a patient, etc.) is feasible and may be achieved (e.g., such that a CT path does not need to be extracted, any other pre-processing may be avoided or may not be needed, etc.).
- the planning, autonomous navigation, movement detection, and/or control may be employed so that an apparatus or system may operate to: use a depth map produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter (e.g., such as, but not limited to, a bronchoscopy robotic device or system); apply thresholding using an automated method; fit one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on one or more detected objects; set a center or portion of the one or more set or predetermined geometric shapes or of the one or more circles, rectangles, squares, ovals, octagons, and/or triangles as a target for a next movement of the continuum robot or steerable catheter; in a case where one or more targets are not detected, then apply peak detection to the depth map and use one or more detected peaks as the one or more targets; in a case where one or more peaks are not detected, then use a deepest point of the depth map as the one or more targets.
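- By way of illustration only, a minimal sketch of such a pipeline (thresholding a depth map, fitting a circle to each detected object, and using the circle center as a candidate target) is shown below, assuming OpenCV (cv2, version 4 or later) and NumPy are available; the function name, the minimum area, and the convention that larger depth values mean farther from the camera are assumptions for the example and are not part of the disclosure.

```python
# Illustrative sketch only: candidate airway targets from a depth map by
# automated thresholding plus circle fitting; each circle center becomes a
# candidate target for the next movement. Names and parameters are hypothetical.
import cv2
import numpy as np

def detect_targets_from_depth(depth_map: np.ndarray) -> list:
    """Return (x, y) pixel targets found by thresholding the depth map."""
    # Normalize depth to 8 bits so a standard automated threshold can be applied.
    depth_8u = cv2.normalize(depth_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Automated (Otsu) threshold keeps the deeper regions, assuming larger = farther.
    _, mask = cv2.threshold(depth_8u, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV >= 4: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for contour in contours:
        if cv2.contourArea(contour) < 50:        # ignore tiny blobs (tunable)
            continue
        # Fit a circle to the detected object; its center is the candidate target.
        (cx, cy), _radius = cv2.minEnclosingCircle(contour)
        targets.append((int(cx), int(cy)))
    return targets
```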
- automatic targeting for the planning, movement, navigation, and/or control may be provided (e.g., in a lung application, airway targeting which plans the next move may be automatic).
- the one or more processors may apply thresholding to define an area of a target, sample, or object (e.g., to define an area of an airway/vessel or other target).
- a navigation plan may include (and may not be limited to) one or more of the following: a next movement of the continuum robot, one or more next movements of the continuum robot, one or more targets, all of the next movements of the continuum robot, all of the determined next movements of the continuum robot, one or more next movements of the continuum robot to reach the one or more targets, etc.
- the navigation plan may be updated or data may be added to the navigation plan, where the data may include any additionally determined next movement of the continuum robot.
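- For illustration, one possible (hypothetical) in-memory representation of such a navigation plan, holding detected targets and appending additionally determined next movements, is sketched below; the class names, field names, and units are assumptions, not part of the disclosure.

```python
# Illustrative sketch only: a simple container for a navigation plan that stores
# detected targets and the ordered next movements, and can be updated with any
# additionally determined movement. Field names and units are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Movement:
    bend_x: float      # bending command along the image x direction (arbitrary units)
    bend_y: float      # bending command along the image y direction
    advance_mm: float  # forward translation of the motorized linear stage

@dataclass
class NavigationPlan:
    targets: list = field(default_factory=list)    # (x, y) pixel targets
    movements: list = field(default_factory=list)  # ordered next movements

    def add_movement(self, movement: Movement) -> None:
        """Update the plan with an additionally determined next movement."""
        self.movements.append(movement)
```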
- the planning, autonomous navigation, movement detection, and/or control may be employed so that an apparatus or system may include one or more processors that may operate to: use one or more geometry metrics produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter; apply thresholding using an automated method to the geometry metrics; define one or more targets for a next movement of the continuum robot or steerable catheter; and advance the continuum robot or steerable catheter to the one or more targets or choose automatically, semi-automatically, or manually one of the detected one or more targets.
- the one or more processors may further operate to define the one or more targets by setting a center or portion(s) of the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles as one or more targets for a next movement of the continuum robot or steerable catheter.
- the one or more processors may further operate to: use or process a depth map or maps as the geometry metrics produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter (e.g., such as, but not limited to, a bronchoscopy robotic device or system, obtained from a memory or storage, etc.); apply thresholding using an automated method and detect one or more objects; fit one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on the one or more detected objects; in a case where the one or more targets are not detected, then apply peak detection to the depth map or maps and use one or more detected peaks as the one or more targets; and/or in a case where one or more peaks are not detected, then use a deepest point of the depth map or maps as the one or more targets.
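- A minimal sketch of the fallback order described above (thresholded targets first, then detected peaks, then the deepest point of the depth map) might look like the following; the helper functions passed in are hypothetical placeholders, and the deepest point is taken as the maximum value on the assumption that larger values mean deeper.

```python
# Illustrative sketch only: fallback cascade for target selection.
import numpy as np

def choose_targets(depth_map, detect_targets_from_depth, detect_peaks):
    # 1) Thresholding + shape fitting.
    targets = detect_targets_from_depth(depth_map)
    if targets:
        return targets
    # 2) If no target was detected, apply peak detection to the depth map.
    peaks = detect_peaks(depth_map)
    if peaks:
        return peaks
    # 3) If no peak was detected, use the deepest point of the depth map.
    y, x = np.unravel_index(np.argmax(depth_map), depth_map.shape)
    return [(int(x), int(y))]
```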
- the continuum robot or steerable catheter may be automatically advanced.
- automatic targeting for the planning, movement, navigation, and/or control may be provided (e.g., in a lung application, airway targeting which plans the next move may be automatic).
- fitting the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in the one or more detected objects may include blob detection and/or peak detection to identify the one or more targets and/or to confirm the identified or detected one or more targets.
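- As an illustration of how blob detection could be used to identify or confirm such targets, a sketch using OpenCV's SimpleBlobDetector is given below; the parameter values and the assumption that deep regions appear bright in the 8-bit depth image are example choices only.

```python
# Illustrative sketch only: confirm candidate targets with blob detection on an
# 8-bit depth image (deep regions assumed bright). Parameters are hypothetical.
import cv2
import numpy as np

def confirm_targets_with_blobs(depth_8u: np.ndarray) -> list:
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255            # look for bright (deep) blobs; flip if convention differs
    params.filterByArea = True
    params.minArea = 50.0             # ignore tiny blobs (tunable)
    params.filterByCircularity = True
    params.minCircularity = 0.5       # prefer roughly circular airway openings
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(depth_8u)
    return [(int(kp.pt[0]), int(kp.pt[1])) for kp in keypoints]
```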
- the one or more processors may further operate to: take a still image or images, use or process a depth map for the taken still image or images, apply thresholding to the taken still image or images and detect one or more objects, fit one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles for the one or more objects of the taken still image or images, define one or more targets for a next movement of the continuum robot or steerable catheter based on the taken still image or images; and advance the continuum robot or steerable catheter to the one or more targets or choose automatically, semi-automatically, or manually one of the detected one or more targets.
- the one or more processors further operate to repeat any of the features (such as, but not limited to, obtaining a depth map, performing thresholding, performing a fit based on one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles, performing peak detection, determining a deepest point, etc.) of the present disclosure for a next or subsequent image or images.
- Such next or subsequent images may be evaluated to distinguish from where to register the continuum robot or steerable catheter with an external image, and/or such next or subsequent images may be evaluated to perform registration or co-registration for changes due to movement, breathing, or any other change that may occur during imaging or a procedure with a continuum robot or steerable catheter.
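- A hedged sketch of repeating these steps for each next or subsequent image might look like the following, where get_next_frame, estimate_depth, detect_targets_from_depth, and move_robot_toward are hypothetical placeholders for the camera interface, the depth estimator, the target detector, and the robot command, respectively.

```python
# Illustrative sketch only: per-frame repetition of the depth-map processing steps.
def navigation_loop(get_next_frame, estimate_depth, detect_targets_from_depth,
                    move_robot_toward):
    while True:
        frame = get_next_frame()
        if frame is None:            # no further images (e.g., procedure finished)
            break
        depth_map = estimate_depth(frame)
        targets = detect_targets_from_depth(depth_map)
        if targets:
            # e.g., automatically choose the first detected target and move toward it.
            move_robot_toward(targets[0])
```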
- One or more navigation planning, autonomous navigation, movement detection, and/or control methods of the present disclosure may include one or more of the following: using one or more geometry metrics produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter; applying thresholding using an automated method to the one or more geometry metrics; and defining one or more targets for a next movement of the continuum robot or steerable catheter based on the one or more geometric metrics to define or determine a navigation plan including one or more next movements of the continuum robot or catheter.
- the method may further include advancing the continuum robot or steerable catheter to the one or more targets or choosing automatically, semi-automatically, or manually one of the detected one or more targets.
- the method(s) may further include one or more of the following: displaying the one or more targets on one of the one or more images on a display or indicating on the display or via an indicator on the display or via a light or indicator on the continuum robot that the navigation plan has been defined or determined, using or processing a depth map or maps as the geometry metrics produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter (e.g., such as, but not limited to, a bronchoscopy robotic device or system); applying thresholding using an automated method and detecting one or more objects; fitting one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on the one or more detected objects; and/or defining one or more targets for a next movement of the continuum robot or steerable catheter.
- the method(s) may further include one or more of the following: using a depth map or maps as the geometry metrics by processing the one or more images; fitting one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on one or more detected objects; defining the one or more targets based on the one or more set or predetermined geometric shapes or based on one or more circles, rectangles, squares, ovals, octagons, and/or triangles of the one or more detected objects; setting a center or portion of the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles as the one or more targets for a next movement of the continuum robot or steerable catheter; in a case where the one or more targets are not detected, then applying peak detection to the depth map or maps and using one or more detected peaks as the one or more targets; and/or in a case where one or more peaks are not detected, then using a deepest point of the depth map or maps as the one or more targets.
- defining the one or more targets may include setting a center or portion(s) of the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles as one or more targets for a next movement of the continuum robot or steerable catheter.
- the method may further include the following steps: in a case where the one or more targets are not detected, then applying peak detection to the depth map or maps and using one or more detected peaks as the one or more targets; and/or in a case where one or more peaks are not detected, then using a deepest point of the depth map or maps as the one or more targets.
- the continuum robot or steerable catheter may be automatically advanced during the advancing step.
- automatic targeting for the movement, navigation, and/or control may be provided (e.g., in a lung application, airway targeting, which plans the next move, may be automatic).
- the continuum robot may be a steerable catheter with a camera at a distal end of the steerable catheter.
- One or more embodiments of the present disclosure may employ use of depth mapping during navigation planning and/or autonomous navigation (e.g., airway(s) of a lung may be detected using a depth map during bronchoscopy of lung airways to achieve, assist, or improve autonomous navigation and/or planning through the lung airways).
- any combination of one or more of the following may be used: camera viewing, one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles or blob detection and fitting, depth mapping, peak detection, thresholding, and/or deepest point detection.
- octagons may be fitted to one or more detected set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles or blob(s), and a target, sample, or object may be shown in one of the octagons (or other predetermined or set geometric shape) (e.g., in a center of the octagon or other shape).
- a depth map may enable the guidance of the continuum robot, steerable catheter, or other imaging device or system (e.g., a bronchoscope in an airway or airways) with minimal human intervention.
- one or more of the automated methods that may be used to apply thresholding may include one or more of the following: a watershed method, a k-means method, an automatic threshold method using a sharp slope method and/or any combination of the subject methods.
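- For illustration, one simple automated threshold of the kind listed above is a two-cluster k-means on the depth values, sketched below with NumPy only; the iteration count and the use of the midpoint between cluster centers as the threshold are example choices, not part of the disclosure.

```python
# Illustrative sketch only: automatic threshold selection by a 1-D, two-cluster
# k-means over the depth values; pixels above the returned threshold form the
# deep (candidate airway) region.
import numpy as np

def kmeans_threshold(depth_map: np.ndarray, iterations: int = 20) -> float:
    values = depth_map.ravel().astype(float)
    c0, c1 = values.min(), values.max()              # initial cluster centers
    for _ in range(iterations):
        assign = np.abs(values - c0) < np.abs(values - c1)
        c0 = values[assign].mean() if assign.any() else c0
        c1 = values[~assign].mean() if (~assign).any() else c1
    return (c0 + c1) / 2.0
```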
- peak detection may include any of the techniques discussed herein, including, but not limited to, the techniques discussed in at least “8 Peak detection,” Data Handling in Science and Technology, vol. 21, no.
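- A minimal example of one possible peak-detection technique (2-D local maxima of the depth map found with a maximum filter) is sketched below, assuming SciPy and NumPy; the window size and minimum depth are illustrative parameters only and not taken from the cited reference.

```python
# Illustrative sketch only: 2-D peak (local-maximum) detection on a depth map.
import numpy as np
from scipy.ndimage import maximum_filter

def detect_peaks(depth_map: np.ndarray, size: int = 25, min_depth: float = 0.0):
    local_max = maximum_filter(depth_map, size=size) == depth_map
    peaks = np.argwhere(local_max & (depth_map > min_depth))
    # Return (x, y) pixel coordinates, deepest peaks first.
    peaks = sorted(peaks.tolist(), key=lambda p: -depth_map[p[0], p[1]])
    return [(int(c), int(r)) for r, c in peaks]
```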
- a non-transitory computer-readable storage medium may store at least one program for causing a computer to execute a method for performing navigation planning and/or autonomous navigation for a continuum robot or steerable catheter, where the method may include one or more of the following: using one or more geometry metrics produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter; applying thresholding using an automated method to the one or more geometry metrics; and defining one or more targets for a next movement of the continuum robot or steerable catheter based on the one or more geometry metrics to define or determine a navigation plan including one or more next movements of the continuum robot.
- the method may further include advancing the continuum robot or steerable catheter to the one or more targets or choosing automatically, semi-automatically, or manually one of the detected one or more targets.
- the method(s) may further include one or more of the following: displaying the one or more targets on one of the one or more images on a display or indicating on the display or via an indicator on the display or via a light or indicator on the continuum robot that the navigation plan has been defined or determined; using or processing a depth map or maps as the geometry metrics produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter (e.g., such as, but not limited to, a bronchoscopy robotic device or system); applying thresholding using an automated method and detecting one or more objects; fitting one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on the one or more detected objects; and/or defining one or more targets for a next movement of the continuum robot or steerable catheter.
- the method(s) may further include one or more of the following: using a depth map or maps as the geometry metrics by processing the one or more images; fitting one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on one or more detected objects; defining the one or more targets based on the one or more set or predetermined geometric shapes or based on the one or more circles, rectangles, squares, ovals, octagons, and/or triangles of the one or more detected objects; setting a center or portion of the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles as the one or more targets for a next movement of the continuum robot or steerable catheter; in a case where the one or more targets are not detected, then applying peak detection to the depth map or maps and using one or more detected peaks as the one or more targets; and/or in a case where one or more peaks are not detected, then using a deepest point of the depth map or maps as the one or more targets.
- defining the one or more targets may include setting a center or portion(s) of the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles as one or more targets for a next movement of the continuum robot or steerable catheter.
- the method may further include the following steps: in a case where the one or more targets are not detected, then applying peak detection to the depth map or maps and using one or more detected peaks as the one or more targets; and/or in a case where one or more peaks are not detected, then using a deepest point of the depth map or maps as the one or more targets.
- the continuum robot or steerable catheter may be automatically advanced during the advancing step.
- automatic targeting for the movement, navigation, and/or control may be provided (e.g., in a lung application, airway targeting, which plans the next move, may be automatic).
- a continuum robot for performing navigation planning, autonomous navigation, movement detection, and/or control may include: one or more processors that operate to: (i) obtain or receive one or more images from or via a continuum robot or steerable catheter; (ii) select a target detection method of a plurality of detection methods, where the plurality of detection methods includes at least a peak detection method or mode, a thresholding method or mode, and a deepest point method or mode; (iii) use one or more geometry metrics produced by processing the obtained or received one or more images; and (iv) perform the selected target detection method, wherein: in a case where the peak detection method or mode is selected, identify one or more peaks and set the one or more peaks as one or more targets for a next movement of a navigation plan of the continuum robot or steerable catheter; in a case where the thresholding method or mode is selected, perform binarization, apply thresholding using an automated method to the geometry metrics, and define one or more targets for a next movement of a navigation plan of the continuum robot or steerable catheter; and in a case where the deepest point method or mode is selected, use a deepest point of the depth map or maps as the one or more targets for a next movement of a navigation plan of the continuum robot or steerable catheter.
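- By way of illustration, dispatching among the three target detection methods or modes named above could be organized as sketched below; the enum, function names, and helper functions are assumptions for the example and not part of the disclosure.

```python
# Illustrative sketch only: selecting and performing one of the target detection
# methods (peak detection, thresholding, deepest point). Helpers are hypothetical.
from enum import Enum, auto
import numpy as np

class DetectionMode(Enum):
    PEAK = auto()
    THRESHOLD = auto()
    DEEPEST_POINT = auto()

def detect_targets(depth_map, mode, detect_peaks, detect_targets_from_depth):
    if mode is DetectionMode.PEAK:
        return detect_peaks(depth_map)
    if mode is DetectionMode.THRESHOLD:
        return detect_targets_from_depth(depth_map)   # binarize + threshold + fit
    # DEEPEST_POINT: use the single deepest pixel as the target.
    y, x = np.unravel_index(np.argmax(depth_map), depth_map.shape)
    return [(int(x), int(y))]
```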
- the one or more processors may further operate to, in a case where one or more targets are identified, autonomously or automatically move the continuum robot or steerable catheter to the one or more targets.
- the one or more processors further operate to perform one or more of the following: in a case where one or more targets are identified, autonomously or automatically move the continuum robot to the one or more targets, or display the one or more targets on one of the one or more images on a display or indicate on the display or via an indicator on the display or via a light or indicator on the continuum robot that the navigation plan has been planned or determined; define or determine that the navigation plan includes one or more next movements of the continuum robot; use a depth map or maps as the one or more geometry metrics by processing the one or more images; fit one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on one or more detected objects; define the one or more targets based on the one or more set or predetermined geometric shapes or based on the one or more circles, rectangles, squares, ovals, octagons, and/or triangles of the one or more detected objects; and/or set a center or portion of the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles as the one or more targets for a next movement of the continuum robot or steerable catheter.
- the one or more processors operate to repeat the obtain or receive attribute, the select a target detection method attribute, the use of a depth map or maps, and the performance of the selected target detection method, and to, in a case where one or more targets are identified, autonomously or automatically move the continuum robot to the one or more targets, or display the one or more targets on one of the one or more images on a display or indicate on the display or via an indicator on the display or via a light or indicator on the continuum robot that the navigation plan has been planned or determined.
- one or more processors, one or more continuum robots, one or more catheters, one or more imaging devices, one or more methods, and/or one or more storage mediums may further operate to employ artificial intelligence for any technique of the present disclosure, including, but not limited to, one or more of the following: (i) estimate or determine the depth map or maps using artificial intelligence (AI) architecture, where the artificial intelligence architecture includes one or more of the following: a neural network, a convolutional neural network, a generative adversarial network (GAN), a consistent generative adversarial network (cGAN), a three cycle-consistent generative adversarial network (3cGAN), recurrent neural networks, and/or any other AI architecture discussed herein; and/or (ii) use a neural network, a convolutional neural network, a generative adversarial network (GAN), a consistent generative adversarial network (cGAN), a three cycle-consistent generative adversarial network (3cGAN), recurrent neural networks, and/or any other AI architecture discussed herein for one or more other features of the present disclosure.
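- For illustration only, a minimal PyTorch encoder-decoder that maps an RGB endoscopic frame to a single-channel depth map is sketched below; it is not the referenced GAN/cGAN/3cGAN architecture and only illustrates the input/output relationship of a learned depth estimator.

```python
# Illustrative sketch only: a tiny convolutional encoder-decoder producing a
# relative depth map from an RGB frame. NOT the architecture of the disclosure.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, 3, H, W) RGB frame
        return self.decoder(self.encoder(x))    # (N, 1, H, W) relative depth

# Example: depth = TinyDepthNet()(torch.rand(1, 3, 200, 200))
```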
- the continuum robot may be a steerable catheter with a camera at a distal end of the steerable catheter.
- a method for performing navigation planning, autonomous navigation, movement detection, and/or control for a continuum robot comprising: (i) obtaining or receiving one or more images from or via a continuum robot or steerable catheter; (ii) selecting a target detection method of a plurality of detection methods, where the plurality of detection methods includes at least a peak detection method or mode, a thresholding method or mode, and a deepest point method or mode; (iii) using one or more geometry metrics produced by processing the obtained or received one or more images; and (iv) performing the selected target detection method, wherein: in a case where the peak detection method or mode is selected, identifying one or more peaks and setting the one or more peaks as one or more targets for a next movement of a navigation plan of the continuum robot or steerable catheter; in a case where the thresholding method or mode is selected, performing binarization, applying thresholding using an automated method to the geometry metrics, and defining one or more targets for a next movement of a navigation plan of the continuum robot or steerable catheter; and in a case where the deepest point method or mode is selected, using a deepest point of the depth map or maps as the one or more targets for a next movement of a navigation plan of the continuum robot or steerable catheter.
- the method may further include: in a case where one or more targets are identified, autonomously or automatically moving the continuum robot or steerable catheter to the one or more targets.
- the method may further include one or more of the following: in a case where one or more targets are identified, autonomously or automatically moving the continuum robot to the one or more targets, or displaying the one or more targets on one of the one or more images on a display or indicating on the display or via an indicator on the display or via a light or indicator on the continuum robot that the navigation plan has been planned or determined; defining or determining that the navigation plan includes one or more next movements of the continuum robot; using a depth map or maps as the one or more geometry metrics by processing the one or more images; fitting one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on one or more detected objects; defining the one or more targets based on the one or more set or predetermined geometric shapes or based on the one or more circles, rectangles, squares, ovals, octagons, and/or triangles of the one or more detected objects; and/or setting a center or portion of the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles as the one or more targets for a next movement of the continuum robot or steerable catheter.
- the method may further include repeating, for one or more next or subsequent images: the obtaining or receiving step, the selecting a target detection method step, the using of a depth map or maps step, and the performing of the selected target detection method step; and, in a case where one or more targets are identified, autonomously or automatically moving the continuum robot to the one or more targets, or displaying the one or more targets on one of the one or more images on a display or indicating on the display or via an indicator on the display or via a light or indicator on the continuum robot that the navigation plan has been planned or determined.
- the method may further include the autonomous or automatic moving of the continuum robot or steerable catheter step.
- a non-transitory computer-readable storage medium storing at least one program for causing a computer to execute a method for performing navigation planning, autonomous navigation, movement detection, and/or control for a continuum robot, the method comprising: (i) obtaining or receiving one or more images from or via a continuum robot or steerable catheter; (ii) selecting a target detection method of a plurality of detection methods, where the plurality of detection methods includes at least a peak detection method or mode, a thresholding method or mode, and a deepest point method or mode; (iii) using one or more geometry metrics produced by processing the obtained or received one or more images; and (iv) performing the selected target detection method, wherein: in a case where the peak detection method or mode is selected, identifying one or more peaks and setting the one or more peaks as one or more targets for a next movement of a navigation plan of the continuum robot or steerable catheter; in a case where the thresholding method or mode is selected, performing binarization, applying thresholding using an automated method to the geometry metrics, and defining one or more targets for a next movement of a navigation plan of the continuum robot or steerable catheter; and in a case where the deepest point method or mode is selected, using a deepest point of the depth map or maps as the one or more targets for a next movement of a navigation plan of the continuum robot or steerable catheter.
- a continuum robot or steerable catheter may include one or more of the following: (i) a distal bending section or portion, wherein the distal bending section or portion is commanded or instructed automatically or based on an input of a user of the continuum robot or steerable catheter; (ii) a plurality of bending sections or portions including a distal or most distal bending portion or section and the rest of the plurality of the bending sections or portions; and/or (iii) the one or more processors further operate to instruct or command the forward motion, or the motion in the set or predetermined direction, of the motorized linear stage and/or of the continuum robot or steerable catheter automatically or autonomously and/or based on an input of a user of the continuum robot.
- a continuum robot or steerable catheter may further include: a base and an actuator that operates to bend the plurality of the bending sections or portions independently; and a motorized linear stage and/or a sensor or camera that operates to move the continuum robot or steerable catheter forward and backward, and/or in the predetermined or set direction or directions, wherein the one or more processors operate to control the actuator and the motorized linear stage and/or the sensor or camera.
- the plurality of bending sections or portions may each include driving wires that operate to bend a respective section or portion of the plurality of sections or portions, wherein the driving wires are connected to an actuator so that the actuator operates to bend one or more of the plurality of bending sections or portions using the driving wires.
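- As an illustration of how a commanded bend of one section may map to push/pull displacements of its drive wires, a standard constant-curvature tendon approximation is sketched below; this textbook model and its parameters are assumptions for the example and not necessarily the kinematics of the disclosure.

```python
# Illustrative sketch only: constant-curvature approximation for a tendon-driven
# section. Negative displacement means the wire must be pulled (shortened).
import math

def wire_displacements(theta_rad: float, phi_rad: float, r_mm: float, n_wires: int = 3):
    """Displacement of each of n_wires evenly spaced at radius r_mm from the neutral
    axis, for a bend of theta_rad toward the bending-plane direction phi_rad."""
    return [-r_mm * theta_rad * math.cos(phi_rad - 2.0 * math.pi * i / n_wires)
            for i in range(n_wires)]

# Example: three wires, 1.2 mm from the axis, 30-degree bend toward phi = 0.
# print(wire_displacements(math.radians(30), 0.0, 1.2))
```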
- One or more embodiments may include a user interface of or disposed on a base, or disposed remotely from a base, the user interface operating to receive an input from a user of the continuum robot or steerable catheter to move one or more of the plurality of bending sections or portions and/or a motorized linear stage and/or a sensor or camera, wherein the one or more processors further operate to receive the input from the user interface, and the one or more processors and/or the user interface operate to use a base coordinate system.
- One or more displays may be provided to display a navigation plan and/or an autonomous navigation path of the continuum robot or steerable catheter.
- the continuum robot may further include an operational controller or joystick that operates to issue or input one or more commands or instructions as an input to one or more processors, the input including an instruction or command to move one or more of a plurality of bending sections or portions and/or a motorized linear stage and/or a sensor or camera;
- the continuum robot may further include a display to display one or more images taken by the continuum robot; and/or
- the continuum robot may further include an operational controller or joystick that operates to issue or input one or more commands or instructions to one or more processors, the input including an instruction or command to move one or more of a plurality of bending sections or portions and/or a motorized linear stage and/or a sensor or camera, and the operational controller or joystick operates to be controlled by a user of the continuum robot.
- the continuum robot or the steerable catheter may include a plurality of bending sections or portions and may include an endoscope camera, wherein one or more processors operate or further operate to receive one or more endoscopic images from the endoscope camera, and wherein the continuum robot further comprises a display that operates to display the one or more endoscopic images.
- the present disclosure provides features that integrate robotic-assisted surgery (RAS) into the healthcare sector and help transform surgery toward Minimally Invasive Surgery (MIS). Not only does RAS align well with MIS outcomes (see e.g., J. Kang, et al., Annals of surgery, vol. 257, no. 1, pp. 95–101 (2013), which is incorporated by reference herein in its entirety), but RAS also promises enhanced dexterity and precision compared to traditional MIS techniques (see e.g., D. Hu, et al., The International Journal of Medical Robotics and Computer Assisted Surgery, vol. 14, no.
- the potential for increased autonomy in RAS is significant and is provided for in one or more features of the present disclosure.
- Enhanced autonomous features of the present disclosure may bolster safety by diminishing human error and streamline surgical procedures, consequently reducing the overall time taken (3, 4).
- a higher degree of autonomy provided by the one or more features of the present disclosure may mitigate excessive interaction forces between surgical instruments and body cavities, which may minimize risks like perforation and embolization.
- As automation in surgical procedures becomes more prevalent, surgeons may transition to more supervisory roles, focusing on strategic decisions rather than hands-on execution (see e.g., A.
- At least one objective of the studies discussed in the present disclosure is to develop and clinically validate a supervised-autonomous navigation/driving and/or navigation planning approach in robotic bronchoscopy.
- one or more methodologies of the present disclosure utilize unsupervised depth estimation from the bronchoscopic image (see e.g., Y. Zou, et al., IEEE Transactions on Medical Robotics and Bionics, vol. 4, no. 3, pp. 588-598 (2022), which is incorporated by reference herein in its entirety), coupled with the robotic bronchoscope (see e.g., J.
- Robotic Bronchoscope features for one or more embodiments and for performed studies
- Bronchoscopic operations were performed using a snake robot developed using the OVM6946 bronchoscopic camera (OmniVision, CA, USA).
- the snake robot may be a robotic bronchoscope composed of, or including at least, the following parts in one or more embodiments: i) the robotic catheter, ii) the actuator unit, iii) the robotic arm, and iv) the software in one or more embodiments (see e.g., FIG.1, FIG.9, FIG.12C, etc. discussed below).
- the robotic catheter may be developed to emulate, and improve upon and outperform, a manual catheter, and, in one or more embodiments, the robotic catheter may include nine drive wires which travel through or traverse the steerable catheter, housed within an outer skin made of polyether block amide (PEBA) of 0.13 mm thickness.
- the catheter may include a central channel which allows for inserting the bronchoscopic camera.
- the outer and inner diameters (OD, ID) of the catheter may be 3 and 1.8 mm, respectively.
- the steering structure of the catheter may include two distal bending sections: the tip and middle sections, and one proximal bending section without an intermediate passive section/segment. Each of the sections may have its own degree of freedom (DOF).
- the catheter may be actuated through the actuator unit attached to the robotic arm and may include nine motors that control the nine catheter wires. Each motor may operate to bend one wire of the catheter by applying pushing or pulling force to the drive wire.
- Both the robotic catheter and actuator may be attached to a robotic arm, including a rail that allows for a linear translation of the catheter. The movement of the catheter over or along the rail may be achieved through a linear stage actuator, which pushes or pulls the actuator and the attached catheter.
- the catheter, actuator unit, and robotic arm may be coupled into a system controller, which allows their communication with the software. While not limited thereto, the robot’s movement may be achieved using a handheld controller (gamepad) or, like in the studies discussed herein, through autonomous driving software.
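- A simplified sketch of such autonomous driving software, in which the detected target's offset from the image center is converted into a bending command while the linear stage advances the catheter, is shown below; all robot-interface calls (bend_tip, advance_stage), the helper functions, and the gains are hypothetical placeholders rather than the actual software used in the studies.

```python
# Illustrative sketch only: a simplified supervised-autonomous driving loop.
def autonomous_drive(robot, capture_frame, estimate_depth, detect_targets_from_depth,
                     steps: int = 100, gain: float = 0.01, step_mm: float = 1.0):
    for _ in range(steps):
        frame = capture_frame()
        depth_map = estimate_depth(frame)
        targets = detect_targets_from_depth(depth_map)
        if not targets:
            break                        # hand control back to the physician
        tx, ty = targets[0]
        h, w = depth_map.shape[:2]
        # Proportional centering: steer so the target approaches the image center.
        robot.bend_tip(dx=gain * (tx - w / 2), dy=gain * (ty - h / 2))
        robot.advance_stage(step_mm)     # advance along the rail by a small increment
```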
- apparatuses and systems, and methods and storage mediums for performing navigation, movement, and/or control, and/or for performing depth map-driven autonomous advancement of a multi-section continuum robot may operate to characterize biological objects, such as, but not limited to, blood, mucus, lesions, tissue, etc.
- Any discussion of a state, pose, position, orientation, navigation, path, or other state type discussed herein is discussed merely as a non-limiting, non-exhaustive embodiment example, and any state or states discussed herein may be used interchangeably/alternatively or additionally with the specifically mentioned type of state.
- Autonomous driving and/or control technique(s) may be employed to adjust, change, or control any state, pose, position, orientation, navigation, path, or other state type that may be used in one or more embodiments for a continuum robot or steerable catheter.
- Physicians or other users of the apparatus or system may have reduced or saved labor and/or mental burden using the apparatus or system due to the navigation planning, autonomous navigation, control, and/or orientation (or pose, or position, etc.) feature(s) of the present disclosure. Additionally, one or more features of the present disclosure may achieve a minimized or reduced interaction with anatomy (e.g., of a patient), object, or target (e.g., tissue) during use, which may reduce the physical and/or mental burden on a patient or target.
- In one or more embodiments, a labor of a user to control and/or navigate (e.g., rotate, translate, etc.) the imaging apparatus or system or a portion thereof (e.g., a catheter, a probe, a camera, one or more sections or portions of a catheter, probe, camera, etc.) may be reduced.
- In one or more embodiments, an imaging device or system, or a portion of the imaging device or system (e.g., a catheter, a probe, etc.), the continuum robot, and/or the steerable catheter may include multiple sections or portions, and the multiple sections or portions may be multiple bending sections or portions.
- the imaging device or system may include manual and/or automatic navigation and/or control features.
- a user of the imaging device or system may control each section or portion, and/or the imaging device or system (or steerable catheter, continuum robot, etc.) may operate to automatically control (e.g., robotically control) each section or portion, such as, but not limited to, via one or more navigation planning, autonomous navigation, movement detection, and/or control techniques of the present disclosure.
- Navigation, control, and/or orientation feature(s) may include, but are not limited to, implementing mapping of a pose (angle value(s), plane value(s), etc.) of a first portion or section (e.g., a tip portion or section, a distal portion or section, a predetermined or set portion or section, a user selected or defined portion or section, etc.) to a stage position/state (or a position/state of another structure being used to map path or path-like information), controlling angular position(s) of one or more of the multiple portions or sections, controlling rotational orientation or position(s) of one or more of the multiple portions or sections, controlling (manually or automatically (e.g., robotically)) one or more other portions or sections of the imaging device or system (e.g., continuum robot, steerable catheter, etc.) to match the navigation/orientation/position/pose of the first portion or section in a case where the one or more other portions or sections reach (e.g., subsequently reach, reach at
- an imaging device or system may enter a target along a path where a first section or portion of the imaging device or system (or portion of the device or system) is used to set the navigation or control path and position(s), and each subsequent section or portion of the imaging device or system (or portion of the device or system) is controlled to follow the first section or portion such that each subsequent section or portion matches the orientation and position of the first section or portion at each location along the path.
- each section or portion of the imaging device or system is controlled to match the prior orientation and position (for each section or portion) for each of the locations along the path.
- an imaging device or system may enter and exit a target, an object, a specimen, a patient (e.g., a lung of a patient, an esophagus of a patient, another portion of a patient, another organ of a patient, a vessel of a patient, etc.), etc. along the same path and using the same orientation for entrance and exit to achieve an optimal navigation, orientation, and/or control path.
- the navigation, control, and/or orientation feature(s) are not limited thereto, and one or more devices or systems of the present disclosure may include any other desired navigation, control, and/or orientation specifications or details as desired for a given application or use.
- the first portion or section may be a distal or tip portion or section of the imaging device or system.
- the first portion or section may be any predetermined or set portion or section of the imaging device or system, and the first portion or section may be predetermined or set manually by a user of the imaging device or system or may be set automatically by the imaging device or system.
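- As a non-limiting illustration of the follow-the-leader behavior described above, the following sketch records the pose of the first (tip) section against the stage (insertion) position and, as the catheter advances, looks up the goal pose each following section should adopt once it reaches the same location; the planner name, the two-value pose representation, and the fixed section offsets are assumptions made only for illustration.

```python
# Minimal follow-the-leader sketch (assumptions: poses are simple bend tuples,
# section offsets along the catheter are fixed and known, and poses are recorded
# at increasing stage positions). Not the disclosed implementation.
from bisect import bisect_right


class FollowTheLeaderPlanner:
    def __init__(self, section_offsets_mm):
        # distance from the tip section back to each following section, e.g. [15.0, 30.0]
        self.section_offsets_mm = section_offsets_mm
        self.history = []  # list of (stage_position_mm, tip_pose), appended in increasing order

    def record_tip_pose(self, stage_position_mm, tip_pose):
        """Store the commanded/achieved tip pose at the current stage position."""
        self.history.append((stage_position_mm, tip_pose))

    def goal_poses(self, stage_position_mm):
        """For each following section, return the tip pose recorded when the tip
        was at the location that section now occupies."""
        positions = [p for p, _ in self.history]
        goals = []
        for offset in self.section_offsets_mm:
            i = bisect_right(positions, stage_position_mm - offset) - 1
            goals.append(self.history[i][1] if i >= 0 else None)
        return goals


planner = FollowTheLeaderPlanner(section_offsets_mm=[15.0, 30.0])
planner.record_tip_pose(0.0, (0.0, 0.0))
planner.record_tip_pose(15.0, (20.0, 5.0))   # tip bent after 15 mm of insertion
print(planner.goal_poses(30.0))              # middle section should now match (20.0, 5.0)
```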
- a “change of orientation” may be defined in terms of direction and magnitude. For example, each interpolated step may have a same direction, and each interpolated step may have a larger magnitude as each step approaches a final orientation.
- any motion along a single direction may be the accumulation of a small motion in that direction.
- the small motion may have a unique or predetermined set of wire position changes to achieve the orientation change.
- Large or larger motion(s) in that direction may use a plurality of the small motions to achieve the large or larger motion(s).
- Dividing a large change into a series of multiple changes of the small or predetermined/set change may be used as one way to perform interpolation.
- Interpolation may be used in one or more embodiments to produce a desired or target motion, and at least one way to produce the desired or target motion may be to interpolate the change of wire positions.
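- As a non-limiting illustration of this interpolation, the following sketch divides a large change of drive-wire positions into a series of small steps that all share the same direction in wire space, with each step optionally growing in magnitude as it approaches the final orientation; the function name and step profile are assumptions made only for illustration.

```python
# Sketch of interpolating a large change of drive-wire positions into small steps.
# The ease-in profile (later steps larger) is one possible choice; equal-size
# linear steps would also work. Illustrative only.
import numpy as np


def interpolate_wire_change(current, target, n_steps=10, ease_in=True):
    current = np.asarray(current, dtype=float)
    target = np.asarray(target, dtype=float)
    if ease_in:
        fractions = (np.arange(1, n_steps + 1) / n_steps) ** 2  # steps grow toward the end
    else:
        fractions = np.arange(1, n_steps + 1) / n_steps          # equal-size steps
    # every intermediate command lies on the straight line from current to target,
    # so each small step has the same direction in wire space
    return [current + f * (target - current) for f in fractions]


steps = interpolate_wire_change([0.0, 0.0, 0.0], [1.2, -0.6, 0.3], n_steps=5)
for s in steps:
    print(np.round(s, 3))   # the last entry equals the target wire positions
```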
- an apparatus or system may include one or more processors that operate to: instruct or command a distal bending section or portion of a catheter or a probe of the continuum robot such that the distal bending section or portion achieves, or is disposed at, a bending pose or position, the catheter or probe of the continuum robot having a plurality of bending sections or portions and a base; store or obtain the bending pose or position of the distal bending section or portion and store or obtain a position or state of a motorized linear stage (or other structure used to map path or path-like information) that operates to move the catheter or probe of the continuum robot in a case where the one or more processors instruct or command forward motion, or a motion in a set or predetermined direction or directions, of the motorized linear stage (or other predetermined or set structure for mapping path or path-like information); generate a goal or target bending pose or position for each corresponding section or portion of the catheter or probe from, or based on, the previous
- an apparatus/device or system may have one or more of the following exist or occur: (i) the distal bending section or portion may be the most distal bending section or portion, and the most distal bending section or portion may be commanded or instructed automatically or based on an input of a user of the continuum robot in a case where the motorized linear stage (or other structure used for mapping path or path-like information) is stable or stationary; (ii) the plurality of bending sections or portions may include the distal or most distal bending portion or section and the rest of the plurality of the bending sections or portions; (iii) the one or more processors may further operate to instruct or command the forward motion, or the motion in the set or predetermined direction, of the motorized linear stage (or other structure used for mapping path or path-like information) automatically or based on an input of a user of the continuum robot; and/or (iv) the plane may be created or defined based on a base coordinate system or based on a system
- an apparatus or system may further include: an actuator that operates to bend the plurality of the bending sections or portions independently and the base; and the motorized linear stage (or other structure used for mapping path or path-like information) that operates to move the continuum robot forward and backward, and/or in the predetermined or set direction or directions, wherein the one or more processors operate to control the actuator and the motorized linear stage (or other structure used for mapping path or path-like information).
- One or more embodiments may include a user interface of or disposed on the base, or disposed remotely from the base, the user interface operating to receive an input from a user of the continuum robot to move one or more of the plurality of bending sections or portions and/or the motorized linear stage (or other structure used for mapping path or path-like information), wherein the one or more processors further operate to receive the input from the user interface, and the one or more processors and/or the user interface operate to use a base coordinate system.
- the plurality of bending sections or portions may each include driving wires that operate to bend a respective section or portion of the plurality of sections or portions, wherein the driving wires are connected to the actuator so that the actuator operates to bend the plurality of bending sections or portions using the driving wires.
- the navigation planning, autonomous navigation, movement detection, and/or control may occur such that any intermediate orientations of one or more of the plurality of bending sections or portions is guided towards respective desired, predetermined, or set orientations (e.g., such that the steerable catheter, continuum robot, or other imaging device or system may reach the one or more targets).
- the catheter or probe of the continuum robot may be a steerable catheter or probe including the plurality of bending sections or portions and including an endoscope camera, wherein the one or more processors further operate to receive one or more endoscopic images from the endoscope camera, and wherein the continuum robot further comprises a display that operates to display the one or more endoscopic images.
- One or more embodiments may include one or more of the following features: (i) the continuum robot may further include an operational controller or joystick that operates to issue or input one or more commands or instructions as an input to the one or more processors, the input including an instruction or command to move one or more of the plurality of bending sections or portions and/or the motorized linear stage (or other structure used for mapping path or path-like information); (ii) the continuum robot may further include a display to display one or more images taken by the continuum robot; and/or (iii) the continuum robot may further include an operational controller or joystick that operates to issue or input one or more commands or instructions to the one or more processors, the input including an instruction or command to move one or more of the plurality of bending sections or portions and/or the motorized linear stage (or other structure used for mapping path or path-like information), and the operational controller or joystick operates to be controlled by a user of the continuum robot.
- an apparatus or system may include one or more processors that operate to: receive or obtain an image or images showing pose or position information of a tip section of a catheter or probe having a plurality of sections including at least the tip section; track a history of the pose or position information of the tip section of the catheter or probe during a period of time; and use the history of the pose or position information of the tip section to determine how to align or transition, move, or adjust (e.g., robotically, manually, automatically, etc.) each section of the plurality of sections of the catheter or probe.
- apparatuses and systems, and methods and storage mediums for performing correction(s) and/or adjustment(s) to a direction or view, and/or for performing navigation planning and/or autonomous navigation may operate to characterize biological objects, such as, but not limited to, blood, mucus, tissue, etc.
- One or more embodiments of the present disclosure may be used in clinical application(s), such as, but not limited to, intervascular imaging, intravascular imaging, bronchoscopy, atherosclerotic plaque assessment, cardiac stent evaluation, intracoronary imaging using blood clearing, balloon sinuplasty, sinus stenting, arthroscopy, ophthalmology, ear research, veterinary use and research, etc.
- one or more technique(s) discussed herein may be employed as or along with features to reduce the cost of at least one of manufacture and maintenance of the one or more apparatuses, devices, systems, and storage mediums by reducing or minimizing a number of optical and/or processing components and by virtue of the efficient techniques to cut down cost (e.g., physical labor, mental burden, fiscal cost, time and complexity, etc.) of use/manufacture of such apparatuses, devices, systems, and storage mediums.
- explanatory embodiments may include several novel features, and a particular feature may not be essential to some embodiments of the devices, systems, and methods that are described herein.
- one or more additional devices, one or more systems, one or more methods, and one or more storage mediums using imaging, imaging adjustment or correction technique(s), autonomous navigation and/or planning technique(s), and/or other technique(s) are discussed herein. Further features of the present disclosure will in part be understandable and will in part be apparent from the following description and with reference to the attached drawings.
- FIG. 1 illustrates at least one embodiment of an imaging, continuum robot, or endoscopic apparatus or system in accordance with one or more aspects of the present disclosure
- FIGS. 3A-3B illustrate at least one embodiment example of a continuum robot and/or medical device that may be used with one or more technique(s), including autonomous navigation and/or planning technique(s), in accordance with one or more aspects of the present disclosure
- FIGS.3C-3D illustrate one or more principles of catheter or continuum robot tip manipulation by actuating one or more bending segments of a continuum robot or steerable catheter 104 of FIGS.3A-3B in accordance with one or more aspects of the present disclosure
- FIG. 4 is a schematic diagram showing at least one embodiment of an imaging, continuum robot, steerable catheter, or endoscopic apparatus or system in accordance with one or more aspects of the present disclosure
- FIG. 5 is a schematic diagram showing at least one embodiment of a console or computer that may be used with one or more autonomous navigation and/or planning technique(s) in accordance with one or more aspects of the present disclosure
- FIG. 6 is a flowchart of at least one embodiment of a method for planning an operation of at least one embodiment of a continuum robot or steerable catheter apparatus or system in accordance with one or more aspects of the present disclosure
- FIG. 7 is a flowchart of at least one embodiment of a method for performing navigation planning, autonomous navigation, movement detection, and/or control for a continuum robot or steerable catheter in accordance with one or more aspects of the present disclosure
- FIG. 8 shows images of at least one embodiment of an application example of navigation planning and/or autonomous navigation technique(s) and movement detection for a camera view (left), a depth map (center), and a thresholded image (right) in accordance with one or more aspects of the present disclosure
- FIG.9 shows at least one embodiment of a control software or a User Interface that may be used with one or more robots, robotic catheters, robotic bronchoscopes, methods, and/or other features in accordance with one or more aspects of the present disclosure
- FIGS.10A-10B illustrate at least one embodiment of a bronchoscopic image with detected airways and an estimated depth map (or depth estimation) with or using detected airways, respectively, in one or more bronchoscopic images in accordance with one or more aspects of the present disclosure;
- FIGS.14A-14B illustrate graphs showing success at branching point(s) with respect to Local Curvature (LC) and Plane Rotation (PR), respectively, for all data combined in one or more embodiments in accordance with one or more aspects of the present disclosure;
- FIGS.16A-16B illustrate the box plots for time for the operator or the autonomous navigation/planning to bend the robotic catheter in one or more embodiments and for the maximum force for the operator or the autonomous navigation/planning at each bifurcation point in one or more embodiments in accordance with one or more aspects of the present disclosure
- FIGS. 18A-18B illustrate graphs for the dependency of the time for a bending command and the force at each bifurcation point, respectively, on the airway generation of a lung in accordance with one or more aspects of the present disclosure
- FIG.19 illustrates a diagram of a continuum robot that may be used with one or more autonomous navigation and/or planning technique(s) or method(s) in accordance with one or more aspects of the present disclosure
- FIG. 20 illustrates a block diagram of at least one embodiment of a continuum robot in accordance with one or more aspects of the present disclosure
- FIG. 21 illustrates a block diagram of at least one embodiment of a controller in accordance with one or more aspects of the present disclosure
- FIG.22 shows a schematic diagram of an embodiment of a computer that may be used with one or more embodiments of an apparatus or system, or one or more methods, discussed herein in accordance with one or more aspects of the present disclosure
- FIG.23 shows a schematic diagram of another embodiment of a computer that may be used with one or more embodiments of an imaging apparatus or system, or methods, discussed herein in accordance with one or more aspects of the present disclosure
- FIG.24 shows a schematic diagram of at least an embodiment of a system using a computer or processor, a memory, a database, and input and output devices in accordance with one or more aspects of the present disclosure
- FIG. 25 shows a created architecture of or for a regression model(s) that may be used for navigation planning, autonomous navigation, movement detection, and/or control techniques and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure
- FIG. 26 shows a convolutional neural network architecture that may be used for navigation planning, autonomous navigation, movement detection, and/or control techniques and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure
- FIG. 27 shows a created architecture of or for a regression model(s) that may be used for navigation planning, autonomous navigation, movement detection, and/or control techniques and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure
- FIG.28 is a schematic diagram of or for a segmentation model(s) that may be used for catheter connection and/or disconnection detection and/or any other technique discussed herein in accordance with one or more aspects of the present disclosure.
- One or more devices, systems, methods and storage mediums for viewing, imaging, and/or characterizing tissue, or an object or sample, using one or more imaging techniques or modalities such as, but not limited to, computed tomography (CT), Magnetic Resonance Imaging (MRI), any other techniques or modalities used in imaging (e.g., Optical Coherence Tomography (OCT), Near infrared fluorescence (NIRF), Near infrared auto-fluorescence (NIRAF), Spectrally Encoded Endoscopes (SEE)), etc.
- Several embodiments of the present disclosure, which may be carried out by the one or more embodiments of an apparatus, system, method, and/or computer-readable storage medium of the present disclosure, are described diagrammatically and visually in FIGS. 1 through 28.
- One or more embodiments of the present disclosure avoid the aforementioned issues by providing a simple and fast method or methods that provide navigation planning, autonomous navigation, movement detection, and/or control technique(s) as discussed herein.
- navigation planning, autonomous navigation, movement detection, and/or control techniques may be performed using artificial intelligence and/or one or more processors as discussed in the present disclosure.
- navigation planning, autonomous navigation, movement detection, and/or control is/are performed to reduce the amount of skill or training needed to perform imaging, medical imaging, one or more procedures (e.g., bronchoscopies), etc., and may reduce the time and cost of imaging or an overall procedure or procedures.
- the navigation planning, autonomous navigation, movement detection, and/or control techniques may be used with a co-registration (e.g., CT co- registration, cone-beam CT (CBCT) co-registration, etc.) to enhance a successful targeting rate for a predetermined sample, target, or object (e.g., a lung, a portion of a lung, a vessel, a nodule, etc.) by minimizing human error.
- CBCT may be used to locate a target, sample, or object (e.g., the lesion(s) or nodule(s) of a lung or airways) along with an imaging device (e.g., a steerable catheter, a continuum robot, etc.) and to co-register the target, sample, or object (e.g., the lesions or nodules) with the device shown in an image to achieve proper guidance.
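- One common way (assumed here for illustration only, and not necessarily the co-registration method of the present disclosure) to compute a rigid transform between corresponding three-dimensional points, such as sensed catheter-tip positions and the same locations identified in CBCT or other external images, is an SVD-based (Kabsch) fit, as in the following sketch.

```python
# Rigid point-set co-registration via SVD (Kabsch). Illustrative assumption only:
# corresponding 3-D points are available in both coordinate systems.
import numpy as np


def rigid_register(source_pts, target_pts):
    """Return rotation R and translation t such that R @ source + t ~= target."""
    src = np.asarray(source_pts, dtype=float)
    tgt = np.asarray(target_pts, dtype=float)
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t


# toy usage: recover a known rotation/translation from four non-coplanar point pairs
src = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
tgt = src @ R_true.T + np.array([5.0, -2.0, 3.0])
R, t = rigid_register(src, tgt)
print(np.allclose(R, R_true), np.round(t, 3))      # True [ 5. -2.  3.]
```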
- storage mediums for using a navigation and/or control method or methods (manual or automatic) in one or more apparatuses or systems (e.g., an imaging apparatus or system, an endoscopic imaging device or system, etc.).
- apparatuses, systems, methods, and storage mediums for using a navigation and/or control method or methods for achieving navigation planning, autonomous navigation, movement detection, and/or control through a target, sample, or object (e.g., lung airway(s) during bronchoscopy) in one or more apparatuses or systems (e.g., an imaging apparatus or system, an endoscopic imaging device or system, etc.).
- movement of a robot may be automatically calculated and navigation planning, autonomous navigation, and/or control to a target, sample, or object (e.g., a nodule, a lung, an airway, a predetermined location in a sample, a predetermined location in a patient, etc.) may be achieved.
- the advancement, movement, and/or control of the robot may be secured in one or more embodiments (e.g., the robot will not fall into a loop).
- automatically calculating the movement of the robot may be provided (e.g., targeting an airway during a bronchoscopy or other lung-related procedure/imaging may be performed automatically so that any next move or control is automatic), and navigation planning and/or autonomous navigation to a predetermined target, sample, or object (e.g., a nodule, a lung, a location in a sample, a location in a patient, etc.) is feasible and may be achieved (e.g., such that a CT path does not need to be extracted, any other pre-processing may be avoided or may not need to be extracted, etc.).
- the navigation planning, autonomous navigation, movement detection, and/or control may be employed so that an apparatus or system may operate to: use a depth map produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter (e.g., such as, but not limited to, a bronchoscopy robotic device or system (such as, but not limited to, a robotic catheter device(s), system(s), and method(s) discussed in PCT/US2023/062508, filed on February 13, 2023, which is incorporated by reference herein in its entirety)); apply thresholding using an automated method; fit one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles/blob(s) in or on one or more detected objects; set a center or portion of the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles as a target for a next movement of the continuum robot or
- automatic targeting of the movement, navigation, and/or control may be provided (e.g., in a lung application, airway targeting, which plans the next move, may be automatic).
- the one or more processors may apply thresholding to define an area of a target, sample, or object (e.g., to define an area of an airway/vessel or other target).
- fitting the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in the one or more detected objects may include blob detection and/or peak detection to identify the one or more targets and/or to confirm the identified or detected one or more targets.
- the one or more processors may further operate to: take a still image or images, use or process a depth map for the taken still image or images, apply thresholding to the taken still image or images and detect one or more objects, fit one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles/blob(s) for the one or more objects of the taken still image or images, define one or more targets for a next movement of the continuum robot or steerable catheter based on the taken still image or images; and advance the continuum robot or steerable catheter to the one or more targets or choose automatically, semi-automatically, or manually one of the detected one or more targets.
- the one or more processors further operate to repeat any of the features (such as, but not limited to, obtaining a depth map, performing thresholding, performing a fit based on one or more set or predetermined geometric shapes or based on one or more circles, rectangles, squares, ovals, octagons, and/or triangles, performing peak detection, determining a deepest point, etc.) of the present disclosure for a next or subsequent image or images.
- Such next or subsequent images may be evaluated to distinguish from where to register the continuum robot or steerable catheter with an external image, and/or such next or subsequent images may be evaluated to perform registration or co-registration for changes due to movement, breathing, or any other change that may occur during imaging or a procedure with a continuum robot or steerable catheter.
- a navigation plan may include (and may not be limited to) one or more of the following: a next movement of the continuum robot, one or more next movements of the continuum robot, one or more targets, all of the next movements of the continuum robot, all of the determined next movements of the continuum robot, one or more next movements of the continuum robot to reach the one or more targets, etc.
- the navigation plan may be updated or data may be added to the navigation plan, where the data may include any additionally determined next movement of the continuum robot.
- the navigation planning, autonomous navigation, movement detection, and/or control may be employed so that an apparatus or system may include one or more processors that may operate to: use or process a depth map produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter (e.g., such as, but not limited to, a bronchoscopy robotic device or system); apply thresholding using an automated method and detect one or more objects; fit one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles/blob(s) in or on the one or more detected objects; define one or more targets for a next movement of the continuum robot or steerable catheter; and advance the continuum robot or steerable catheter to the one or more targets or choose automatically, semi-automatically, or manually one of the detected one or more targets.
- the one or more processors may further operate to define the one or more targets by setting a center or portion(s) of the one or more set or predetermined geometric shapes or the one or more circles, rectangles, squares, ovals, octagons, and/or triangles as one or more targets for a next movement of the continuum robot or steerable catheter.
- the one or more processors may further operate to: in a case where the one or more targets are not detected, then apply peak detection to the depth map and use one or more detected peaks as the one or more targets; and/or in a case where one or more peaks are not detected, then use a deepest point of the depth map as the one or more targets.
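- As a non-limiting illustration of the target-definition flow described above, the following sketch thresholds a depth map with an automated (Otsu) method, fits a circle/blob to each detected deep region, uses the circle centers as targets, and falls back to the deepest point of the depth map when no target is detected; the OpenCV/NumPy tooling, the parameter values, and the omission of the intermediate peak-detection fallback are assumptions made only to keep the example short.

```python
# Illustrative sketch of depth-map-driven target selection, assuming a depth map
# in which larger values are deeper (e.g., farther down an airway). Not a
# definitive implementation of the present disclosure.
import cv2
import numpy as np


def select_targets(depth_map, min_blob_area_px=50):
    # normalize to 8-bit so an automated (Otsu) threshold can be applied
    depth8 = cv2.normalize(depth_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(depth8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # fit a circle/blob to each detected deep region; its center becomes a target
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    targets = []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_blob_area_px:
            continue
        (cx, cy), _radius = cv2.minEnclosingCircle(cnt)
        targets.append((int(cx), int(cy)))

    if targets:
        return targets
    # fallback: if no blob-based target was detected, use the deepest point
    y, x = np.unravel_index(np.argmax(depth_map), depth_map.shape)
    return [(int(x), int(y))]


# toy usage with a synthetic depth map containing one deep circular region
depth = np.zeros((200, 200), dtype=np.float32)
cv2.circle(depth, (140, 60), 25, 1.0, -1)
print(select_targets(depth))   # approximately [(140, 60)]
```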
- the continuum robot or steerable catheter may be automatically advanced during the advancing step.
- automatic targeting of the movement, navigation, and/or control may be provided (e.g., in a lung application, airway targeting, which plans the next move, may be automatic).
- One or more navigation planning, autonomous navigation, movement detection, and/or control methods of the present disclosure may include one or more of the following: using or processing a depth map produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter (e.g., such as, but not limited to, a bronchoscopy robotic device or system (such as, but not limited to, a robotic catheter device(s), system(s), and method(s) discussed in PCT/US2023/062508, filed on February 13, 2023, which is incorporated by reference herein in its entirety)); applying thresholding using an automated method and detecting one or more objects; fitting one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on the one or more detected objects; defining one or more targets for a next movement of the continuum robot or steerable catheter; and advancing the continuum robot or steerable catheter to the one or more targets or choosing automatically, semi-automatically, or manually one
- defining the one or more targets may include setting a center or portion(s) of the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles as one or more targets for a next movement of the continuum robot or steerable catheter.
- the method may further include the following steps: in a case where the one or more targets are not detected, then applying peak detection to the depth map and using one or more detected peaks as the one or more targets; and/or in a case where one or more peaks are not detected, then using a deepest point of the depth map as the one or more targets.
- the continuum robot or steerable catheter may be automatically advanced during the advancing step.
- automatic targeting of the movement, navigation, and/or control may be provided (e.g., in a lung application, airway targeting, which plans the next move, may be automatic).
- One or more embodiments of the present disclosure may employ use of depth mapping during navigation planning and/or autonomous navigation (e.g., airway(s) of a lung may be detected using a depth map during bronchoscopy of lung airways to achieve, assist, or improve navigation planning and/or autonomous navigation through the lung airways).
- any combination of one or more of the following may be used: camera viewing, one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles or blob detection and fitting, depth mapping, peak detection, thresholding, and/or deepest point detection.
- octagons may be fit to the detected one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles or blob(s), and a target, sample, or object may be shown in one of the octagons (or other predetermined or set geometric shape) (e.g., in a center of the octagon or other shape).
- a depth map may enable the guidance of the continuum robot, steerable catheter, or other imaging device or system (e.g., a bronchoscope in an airway or airways) with minimal human intervention.
- one or more of the automated methods that may be used to apply thresholding may include one or more of the following: a watershed method (such as, but not limited to, watershed method(s) discussed in L. J. Belaid and W. Mourou, “IMAGE SEGMENTATION: A WATERSHED TRANSFORMATION ALGORITHM,” 2011, vol.
- a k-means method such as, but not limited to, k-means method(s) discussed in T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, “An efficient k-means clustering algorithm: Analysis and implementation,” IEEE Trans Pattern Anal Mach Intell, vol. 24, no. 7, pp.
- an automatic threshold method such as, but not limited to, automatic threshold method(s) discussed in N. Otsu, “Threshold Selection Method from Gray-Level Histograms,” IEEE Trans Syst Man Cybern, vol.9, no.1, pp.62–66, 1979, which is incorporated by reference herein in its entirety
- a sharp slope method such as, but not limited to, sharp slope method(s) discussed in U.S. Pat. Pub. No. 2023/0115191 A1, published on April 13, 2023, which is incorporated by reference herein in its entirety
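- For illustration only, the following sketch shows how an automatic threshold in the sense of the Otsu method cited above can be computed directly from a gray-level histogram by maximizing the between-class variance; it is a simplified NumPy reimplementation and not the code of any cited reference.

```python
# Sketch of Otsu-style automatic threshold selection from a gray-level histogram
# (choose the level that maximizes the between-class variance). Illustrative only.
import numpy as np


def otsu_threshold(image_u8):
    hist = np.bincount(image_u8.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)

    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()     # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0    # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if between_var > best_var:
            best_t, best_var = t, between_var
    return best_t


img = np.concatenate([np.full(500, 40, np.uint8), np.full(500, 200, np.uint8)])
print(otsu_threshold(img))   # a level separating the two modes (41 here)
```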
- peak detection may include any of the techniques discussed herein, including, but not limited to, the techniques discussed in at least “8 Peak detection,” Data Handling in Science and Technology, vol. 21, no. C, pp. 183–190, Jan. 1998, doi: 10.1016/S0922-3487(98)80027-0, which is incorporated by reference herein in its entirety.
- a non-transitory computer-readable storage medium may store at least one program for causing a computer to execute a method for performing navigation planning and/or autonomous navigation for a continuum robot or steerable catheter, where the method may include one or more of the following: using or processing a depth map produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter (e.g., such as, but not limited to, a bronchoscopy robotic device or system (such as, but not limited to, a robotic catheter device(s), system(s), and method(s) discussed in PCT/US2023/062508, filed on February 13, 2023, which is incorporated by reference herein in its entirety)); applying thresholding using an automated method and detecting one or more objects; fitting one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on the one or more detected objects; defining one or more targets for a next movement
- defining the one or more targets may include setting a center or portion(s) of the one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles as one or more targets for a next movement of the continuum robot or steerable catheter.
- the method may further include the following steps: in a case where the one or more targets are not detected, then applying peak detection to the depth map and using one or more detected peaks as the one or more targets; and/or in a case where one or more peaks are not detected, then using a deepest point of the depth map as the one or more targets.
- the continuum robot or steerable catheter may be automatically advanced during the advancing step.
- automatic targeting of the movement, navigation, and/or control may be provided (e.g., in a lung application, airway targeting, which plans the next move, may be automatic).
- Autonomous driving and/or control technique(s) may be employed to adjust, change, or control any state, pose, position, orientation, navigation, path, or other state type that may be used in one or more embodiments for a continuum robot or steerable catheter.
- Physicians or other users of the apparatus or system may have reduced or saved labor and/or mental burden using the apparatus or system due to the navigation planning, autonomous navigation, control, and/or orientation (or pose, or position, etc.) feature(s) of the present disclosure.
- one or more features of the present disclosure may achieve a minimized or reduced interaction with anatomy (e.g., of a patient), object, or target (e.g., tissue) during use, which may reduce the physical and/or mental burden on a patient or target.
- the continuum robot, and/or the steerable catheter may include multiple sections or portions, and the multiple sections or portions may be multiple bending sections or portions.
- the imaging device or system may include manual and/or automatic navigation and/or control features.
- a user of the imaging device or system may control each section or portion, and/or the imaging device or system (or steerable catheter, continuum robot, etc.) may operate to automatically control (e.g., robotically control) each section or portion, such as, but not limited to, via one or more navigation planning, autonomous navigation, movement detection, and/or control techniques of the present disclosure.
- Navigation, control, and/or orientation feature(s) may include, but are not limited to, implementing mapping of a pose (angle value(s), plane value(s), etc.) of a first portion or section (e.g., a tip portion or section, a distal portion or section, a predetermined or set portion or section, a user selected or defined portion or section, etc.) to a stage position/state (or a position/state of another structure being used to map path or path-like information), controlling angular position(s) of one or more of the multiple portions or sections, controlling rotational orientation or position(s) of one or more of the multiple portions or sections, controlling (manually or automatically (e.g., robotically)) one or more other portions or sections of the imaging device or system (e.g., continuum robot, steerable catheter, etc.) to match the navigation/orientation/position/pose of the first portion or section in a case where the one or more other portions or sections reach (e.g., subsequently reach, reach at
- an imaging device or system may enter a target along a path where a first section or portion of the imaging device or system (or portion of the device or system) is used to set the navigation or control path and position(s), and each subsequent section or portion of the imaging device or system (or portion of the device or system) is controlled to follow the first section or portion such that each subsequent section or portion matches the orientation and position of the first section or portion at each location along the path.
- each section or portion of the imaging device or system is controlled to match the prior orientation and position (for each section or portion) for each of the locations along the path.
- an imaging device or system may enter and exit a target, an object, a specimen, a patient (e.g., a lung of a patient, an esophagus of a patient, another portion of a patient, another organ of a patient, a vessel of a patient, etc.), etc. along the same path and using the same orientation for entrance and exit to achieve an optimal navigation, orientation, and/or control path.
- the navigation, control, and/or orientation feature(s) are not limited thereto, and one or more devices or systems of the present disclosure may include any other desired navigation, control, and/or orientation specifications or details as desired for a given application or use.
- the first portion or section may be a distal or tip portion or section of the imaging device or system.
- the first portion or section may be any predetermined or set portion or section of the imaging device or system, and the first portion or section may be predetermined or set manually by a user of the imaging device or system or may be set automatically by the imaging device or system.
- a “change of orientation” may be defined in terms of direction and magnitude. For example, each interpolated step may have a same direction, and each interpolated step may have a larger magnitude as each step approaches a final orientation.
- any motion along a single direction may be the accumulation of a small motion in that direction.
- the small motion may have a unique or predetermined set of wire position changes to achieve the orientation change.
- Large or larger motion(s) in that direction may use a plurality of the small motions to achieve the large or larger motion(s).
- Dividing a large change into a series of multiple changes of the small or predetermined/set change may be used as one way to perform interpolation.
- Interpolation may be used in one or more embodiments to produce a desired or target motion, and at least one way to produce the desired or target motion may be to interpolate the change of wire positions.
- Using artificial intelligence, for example (but not limited to) deep/machine learning, residual learning, a computer vision task (keypoint or object detection and/or image segmentation), using a unique architecture structure of a model or models, using a unique training process, using input data preparation techniques, using input mapping to the model, using post-processing and interpretation of the output data, etc., one or more embodiments of the present disclosure may achieve a better or maximum success rate of navigation planning, autonomous navigation, movement detection, and/or control without (or with fewer) user interactions, and may reduce processing time to perform navigation planning, autonomous navigation, movement detection, and/or control techniques.
- One or more artificial intelligence structures may be used to perform navigation planning, autonomous navigation, movement detection, and/or control techniques, such as, but not limited to, a neural net or network (e.g., the same neural net or network that operates to determine whether a video or image from an endoscope or other imaging device is working; the same neural net or network that has obtained or determined the depth map, etc.; an additional neural net or network, a convolutional network (e.g., a convolutional neural network (CNN) may operate to use a visual output of a camera (e.g., a bronchoscopic camera, a catheter camera, a detector, etc.) to automatically detect one or more airways, one or more objects, one or more areas of one or more airways or objects, etc.), recurrent network, another network discussed herein, etc.).
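- As a non-limiting illustration of such a convolutional network, the following PyTorch sketch regresses a normalized airway-target location (x, y) from a single camera frame; the layer sizes and overall architecture are illustrative assumptions and do not reproduce the model architectures shown in the accompanying figures.

```python
# Minimal CNN sketch that maps a camera frame to a two-value airway-target
# location. Architecture, sizes, and names are illustrative assumptions only.
import torch
import torch.nn as nn


class AirwayTargetRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.head(self.features(x))   # (batch, 2): normalized target (x, y)


model = AirwayTargetRegressor()
frame = torch.rand(1, 3, 224, 224)            # one RGB camera frame
print(model(frame).shape)                     # torch.Size([1, 2])
```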
- an apparatus for performing navigation planning, autonomous navigation, movement detection, and/or control using artificial intelligence may include: a memory; and one or more processors in communication with the memory, the one or more processors operating to: using or processing a depth map produced by processing one or more images obtained by or coming from a continuum robot or steerable catheter (e.g., such as, but not limited to, a bronchoscopy robotic device or system (such as, but not limited to, a robotic catheter device(s), system(s), and method(s) discussed in PCT/US2023/062508, filed on February 13, 2023, which is incorporated by reference herein in its entirety), obtained by a continuum robot or steerable catheter from the memory, obtained via one or more neural networks, etc.); applying thresholding using an automated method and detecting one or more objects; fitting one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles in or on the one or more detected objects; defining one
- defining the one or more targets may include setting a center or portion(s) of the one or more set or predetermined geometric shapes or the one or more circles, rectangles, squares, ovals, octagons, and/or triangles as one or more targets for a next movement of the continuum robot or steerable catheter.
- the method may further include the following steps: in a case where the one or more targets are not detected, then applying peak detection to the depth map and using one or more detected peaks as the one or more targets; and/or in a case where one or more peaks are not detected, then using a deepest point of the depth map as the one or more targets.
- the continuum robot or steerable catheter may be automatically advanced during the advancing step.
- automatic targeting of the movement, navigation, and/or control may be provided (e.g., in a lung application, airway targeting, which plans the next move, may be automatic).
- One or more of the artificial intelligence features discussed herein that may be used in one or more embodiments of the present disclosure include, but are not limited to, using one or more of deep learning, a computer vision task, keypoint detection, a unique architecture of a model or models, a unique training process or algorithm, a unique optimization process or algorithm, input data preparation techniques, input mapping to the model, pre-processing, post-processing, and/or interpretation of the output data as substantially described herein or as shown in any one of the accompanying drawings.
- Neural networks may include a computer system or systems.
- a neural network may include or may comprise an input layer, one or more hidden layers of neurons or nodes, and an output layer. The input layer may be where the values are passed to the rest of the model.
- the input layer may be the place where the transformed navigation, movement, and/or control data may be passed to a model for evaluation.
- the hidden layer(s) may be a series of layers that contain or include neurons or nodes that establish connections between the neurons or nodes in the other hidden layers. Through training, the values of each of the connections may be altered so that, due to the training, the system/systems will trigger when the expected pattern is detected.
- the output layer provides the result(s) of the model.
- this may be a Boolean (true/false) value for detecting the one or more targets, for detecting the one or more objects, detecting the one or more peaks, detecting the deepest point, or any other calculation, detection, or process/technique discussed herein.
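- For illustration only, the following NumPy sketch traces the layer structure just described (an input layer, a hidden layer of nodes, and an output layer reduced to a Boolean detection value); the weights are random placeholders and the layer sizes are arbitrary assumptions.

```python
# Tiny forward pass through an input layer, one hidden layer, and an output
# layer that is reduced to a Boolean "detected / not detected" value.
import numpy as np

rng = np.random.default_rng(0)
W_hidden, b_hidden = rng.normal(size=(8, 4)), np.zeros(8)   # hidden layer: 4 inputs -> 8 nodes
W_out, b_out = rng.normal(size=(1, 8)), np.zeros(1)         # output layer: 8 nodes -> 1 value


def forward(features):
    h = np.maximum(0.0, W_hidden @ features + b_hidden)     # hidden activations (ReLU)
    score = 1.0 / (1.0 + np.exp(-(W_out @ h + b_out)))      # sigmoid output in (0, 1)
    return bool(score[0] > 0.5)                             # Boolean detection result


print(forward(np.array([0.2, 0.8, 0.1, 0.5])))
```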
- One or more features discussed herein may be determined using a convolutional auto-encoder, Gaussian filters, Haralick features, and/or thickness or shape of the sample(s), target(s), or object(s).
- FIG.1 illustrates a simplified representation of a medical environment, such as an operating room, where a robotic catheter system 1000 may be used.
- FIG. 2 illustrates a functional block diagram that may be used in at least one embodiment of the robotic catheter system 1000.
- FIGS.3A-3D represent at least one embodiment of the catheter 104 (see FIGS.
- FIG.4 illustrates a logical block diagram that may be used for the robotic catheter system 1000.
- the system 1000 may include a computer cart (see e.g., the controller 100, 102 in FIG.1) operatively connected to a steerable catheter or continuum robot 104 via a robotic platform 108.
- the robotic platform 108 includes one or more than one robotic arm 132 and a rail 110 (see e.g., FIGS.1-2) and/or linear translation stage 122 (see e.g., FIG.2).
- one or more embodiments of a system 1000 for performing navigation planning, autonomous navigation, movement detection, and/or control may include one or more of the following: a display controller 100, a display 101-1, a display 101-2, a controller 102, an actuator 103, a continuum device (also referred to herein as a “steerable catheter” or “an imaging device”) 104, an operating portion 105, a camera or tracking sensor 106 (e.g., an electromagnetic (EM) tracking sensor), a catheter tip position/orientation/pose/state detector 107 (which may be optional (e.g., a camera 106 may be used instead of the tracking sensor 106 and the position/state detector 107)), and a rail 110 (which may be attached to or combined with a linear translation stage 122) (for example, as shown in at least FIGS.1-2).
- the system 1000 may include one or more processors, such as, but not limited to, a display controller 100, a controller 102, a CPU 120, a controller 50, a CPU 51, a console or computer 1200 or 1200’, a CPU 1201, any other processor or processors discussed herein, etc., that operate to execute a software program, to control the one or more adjustment, control, and/or smoothing technique(s) discussed herein, and to control display of a navigation screen on one or more displays 101.
- the one or more processors may generate a three dimensional (3D) model of a structure (for example, a branching structure like airway of lungs of a patient, an object to be imaged, tissue to be imaged, etc.) based on images, such as, but not limited to, CT images, MRI images, etc.
- the 3D model may be received by the one or more processors (e.g., the display controller 100, the controller 102, the CPU 120, the controller 50, the CPU 51, the console or computer 1200 or 1200’, the CPU 1201, any other processor or processors discussed herein, etc.) from another device.
- a two-dimensional (2D) model may be used instead of 3D model in one or more embodiments.
- the 2D or 3D model may be generated before a navigation starts.
- the 2D or 3D model may be generated in real-time (in parallel with the navigation).
- examples of generating a model of a branching structure are explained herein; however, the models are not limited to a model of a branching structure.
- a model of a route directly to a target may be used instead of the branching structure.
- a model of a broad space may be used, and the model may be a model of a place or a space where an observation or work is performed using the continuum robot 104 explained below.
- a user U (e.g., a physician, a technician, etc.) may operate the robotic catheter system 1000 via a user interface.
- the user interface may include at least one of a main or first display 101-1 (a first user interface unit), a second display 101-2 (a second user interface unit), and a handheld controller 105 (a third user interface unit).
- the main or first display 101-1 may include, for example, a large display screen attached to the system 1000 and/or the controllers 101, 102 of the system 1000 or mounted on a wall of the operating room and may be, for example, designed as part of the robotic catheter system 1000 or may be part of the operating room equipment.
- a secondary display 101-2 that is a compact (portable) display device configured to be removably attached to the robotic platform 108.
- the second or secondary display 101-2 may include, but is not limited to, a portable tablet computer, a mobile communication device (a cellphone), a tablet, a laptop, etc.
- the steerable catheter 104 may be actuated via an actuator unit 103.
- the actuator unit 103 may be removably attached to the robotic platform 108 or any component thereof (e.g., the robotic arm 132, the rail 110, and/or the linear translation stage 122).
- the handheld controller 105 may include a gamepad-like controller with a joystick having shift levers and/or push buttons, and the controller 105 may be a one-handed controller or a two-handed controller.
- the actuator unit 103 may be enclosed in a housing having a shape of a catheter handle.
- the system 1000 includes at least a system controller 102, a display controller 100, and the main display 101-1.
- the main display 101-1 may include a conventional display device such as a liquid crystal display (LCD), an OLED display, a QLED display, etc.
- the main display 101-1 may provide or display a graphical user interface (GUI) configured to display one or more views. These views may include a live view image 134, an intraoperative image 135, a preoperative image 136, and other procedural information 138.
- the live image view 134 may be an image from a camera at the tip of the catheter 104.
- the live image view 134 may also include, for example, information about the perception and navigation of the catheter 104.
- the preoperative image 136 may include pre-acquired 3D or 2D medical images of the patient P acquired by conventional imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound imaging, or any other desired imaging modality.
- the intraoperative image 135 may include images used for an image-guided procedure; such images may be acquired by fluoroscopy or CT imaging modalities (or another desired imaging modality).
- the intraoperative image 135 may be augmented, combined, or correlated with information obtained from a sensor, camera image, or catheter data.
- the sensor may be located at the distal end of the catheter 104.
- the catheter tip tracking sensor 106 may be, for example, an electromagnetic (EM) sensor. If an EM sensor is used, a catheter tip position detector 107 may be included in the robotic catheter system 1000; the catheter tip position detector 107 may include an EM field generator operatively connected to the system controller 102. Alternatively, a camera 106 may be used instead of the tracking sensor 106 and the position detector 107 to determine and output detected positional/state information to the system controller 102.
- Suitable electromagnetic sensors for use with a steerable catheter may be used with any feature of the present disclosure, including the sensors discussed, for example, in U.S. Pat. No.6,201,387 and in International Pat. Pub. WO 2020/194212 A1, which are incorporated by reference herein in their entireties.
- the display controller 100 may acquire position/orientation/navigation/pose/state (or other state) information of the continuum robot 104 from a controller 102.
- the display controller 100 may acquire the position/orientation/navigation/pose/state (or other state) information directly from a tip position/orientation/navigation/pose/state (or other state) detector 107.
- FIG.2 illustrates the robotic catheter system 1000 including the system controller 102 operatively connected to the display controller 100, which is connected to the first display 101-1 and to the second display 101-2.
- the system controller 102 is also connected to the actuator 103 via the robotic platform 108 or any component thereof (e.g., the robotic arm 132, the rail 110, and/or the linear translation stage 122).
- the actuator unit 103 may include a plurality of motors 144 that operate to control a plurality of drive wires 160 (while not limited to any particular number of drive wires 160, FIG.2 shows that six (6) drive wires 160 are being used in the subject embodiment example).
- the drive wires 160 travel through the steerable catheter or continuum robot 104.
- One or more access ports 126 may be located on the catheter 104 (and may include an insertion/extraction detector 109).
- the proximal section 148 is configured with through-holes (or thru-holes) or grooves or conduits to pass drive wires 160 from the distal section 152, 154, 156 to the actuator unit 103.
- the distal section 152, 154, 156 is comprised of a plurality of bending segments including at least a distal segment 156, a middle segment 154, and a proximal segment 152. Each bending segment is bent by actuation of at least some of the plurality of drive wires 160 (driving members).
- the posture of the catheter 104 may be supported by supporting wires (support members) also arranged along the wall of the catheter 104 (as discussed in U.S. Pat. Pub.
- Each ring-shaped component 162, 164 may include a central opening which may form a tool channel 168 and may include a plurality of conduits 166 (grooves, sub-channels, or through-holes (or thru-holes)) arranged lengthwise (and which may be equidistant from the central opening) along the annular wall of each ring-shaped component 162, 164.
- an inner cover such as is described in U.S. Pat. Pub. US2021/0369085 and US2022/0126060, which are incorporated by reference herein in their entireties, may be included to provide a smooth inner channel and to provide protection.
- the non-steerable proximal section 148 may be a flexible tubular shaft and may be made of extruded polymer material.
- the tubular shaft of the proximal section 148 also may have a central opening or tool channel 168 and plural conduits 166 along the wall of the shaft surrounding the tool channel 168.
- An outer sheath may cover the tubular shaft and the steerable section 152, 154, 156.
- at least one tool channel 168 formed inside the steerable catheter 104 provides passage for an imaging device and/or end effector tools from the insertion port 126 to the distal end of the steerable catheter 104.
- the actuator unit 103 may include, in one or more embodiments, one or more servo motors or piezoelectric actuators.
- the actuator unit 103 may operate to bend one or more of the bending segments of the catheter 104 by applying a pushing and/or pulling force to the drive wires 160.
- each of the three bendable segments of the steerable catheter 104 has a plurality of drive wires 160. If each bendable segment is actuated by three drive wires 160, the steerable catheter 104 has nine driving wires arranged along the wall of the catheter 104. Each bendable segment of the catheter 104 is bent by the actuator unit 103 by pushing or pulling at least one of these nine drive wires 160. Force is applied to each individual drive wire in order to manipulate/steer the catheter 104 to a desired pose.
- the actuator unit 103 assembled with steerable catheter 104 may be mounted on the robotic platform 108 or any component thereof (e.g., the robotic arm 132, the rail 110, and/or the linear translation stage 122).
- the robotic platform 108, the rail 110, and/or the linear translation stage 122 may include a slider and a linear motor.
- the robotic platform 108 or any component thereof is motorized, and may be controlled by the system controller 102 to insert and remove the steerable catheter 104 to/from the target, sample, or object (e.g., the patient, the patient’s bodily lumen, one or more airways, a lung, etc.).
- An imaging device 180 that may be inserted through the tool channel 168 includes an endoscope camera (videoscope) along with illumination optics (e.g., optical fibers or LEDs) (or any other camera or imaging device, tool, etc. discussed herein or known to those skilled in the art).
- the illumination optics provide light to irradiate the lumen and/or a lesion target which is a region of interest within the target, sample, or object (e.g., in a patient).
- End effector tools may refer to endoscopic surgical tools including clamps, graspers, scissors, staplers, ablation or biopsy needles, and other similar tools, which serve to manipulate body parts (organs or tumorous tissue) during imaging, examination, or surgery.
- the imaging device 180 may be what is commonly known as a chip-on-tip camera and may be color (e.g., take one or more color images) or black-and-white (e.g., take one or more black-and-white images). In one or more embodiments, a camera may support color and black-and-white images.
- a tracking sensor (e.g., an EM tracking sensor) or a camera 106 may be attached to the catheter tip 320.
- the steerable catheter 104 and the tracking sensor 106 may be tracked by the tip position detector 107.
- the tip position detector 107 detects a position of the tracking sensor 106, and outputs the detected positional information to the system controller 102.
- a camera 106 may be used instead of the tracking sensor 106 and the position detector 107 to determine and output detected positional/state information to the system controller 102.
- the system controller 102 receives the positional information from the tip position detector 107, and continuously records and displays the position of the steerable catheter 104 with respect to the coordinate system of the target, sample, or object (e.g., a patient, a lung, an airway(s), a vessel, etc.).
- the system controller 102 operates to control the actuator unit 103 and the robotic platform 108 or any component thereof (e.g., the robotic arm 132, the rail 110, and/or the linear translation stage 122) in accordance with the manipulation commands input by the user U via one or more of the input and/or display devices (e.g., the handheld controller 105, a GUI at the main display 101-1, touchscreen buttons at the secondary display 101-2, etc.).
- FIG.3C and FIG.3D show exemplary catheter tip manipulations by actuating one or more bending segments of the steerable catheter 104. As illustrated in FIG. 3C, manipulating only the most distal segment 156 of the steerable section may change the position and orientation of the catheter tip 320.
- manipulating one or more bending segments (152 or 154) other than the most distal segment may affect only the position of catheter tip 320, but may not affect the orientation of the catheter tip 320.
- actuation of distal segment 156 changes the catheter tip from a position P1 having orientation O1, to a position P2 having orientation O2, to position P3 having orientation O3, to position P4 having orientation O4, etc.
- actuation of the proximal segment 152 and/or the middle segment 154 may change the position of the catheter tip 320 from a position P1 having orientation O1 to a position P2 and position P3 having the same orientation O1.
- exemplary catheter tip manipulations shown in FIG.3C and FIG.3D may be performed during catheter navigation (e.g., while inserting the catheter 104 through tortuous anatomies, one or more targets, samples, objects, a patient, etc.).
- the one or more catheter tip manipulations shown in FIG.3C and FIG. 3D may apply specifically to the targeting mode, which is applied after the catheter tip 320 has been navigated to a predetermined distance (a targeting distance) from the target, sample, or object.
- the actuator 103 may proceed or retreat along a rail 110 (e.g., to translate the actuator 103, the continuum robot/catheter 104, etc.), and the actuator 103 and continuum robot 104 may proceed or retreat in and out of the patient’s body or other target, object, or specimen (e.g., tissue).
- the catheter device 104 may include a plurality of driving backbones and may include a plurality of passive sliding backbones.
- the catheter device 104 may include at least nine (9) driving backbones and at least six (6) passive sliding backbones.
- the catheter device 104 may include an atraumatic tip at the end of the distal section of the catheter device 104.
- FIG.4 illustrates that a system 1000 may include the system controller 102 which may operate to execute software programs and control the display controller 100 to display a navigation screen (e.g., a live view image 134) on the main display 101-1 and/or the secondary display 101-2.
- the display controller 100 may include a graphics processing unit (GPU) or a video display controller (VDC) (or any other suitable hardware discussed herein or known to those skilled in the art).
- FIG. 5 illustrates components of the system controller 102 and/or the display controller 100.
- the system controller 102 and the display controller 100 may be configured separately. Alternatively, the system controller 102 and the display controller 100 may be configured as one device.
- the system controller 102 and the display controller 100 may include substantially the same components in one or more embodiments.
- the system controller 102 and the display controller 100 may include a central processing unit (CPU 120) (which may be comprised of one or more processors (microprocessors)), a random access memory (RAM 130) module, an input/output (I/O 140) interface, a read only memory (ROM 110), and data storage memory (e.g., a hard disk drive (HDD 150) or solid state drive (SSD)).
- the ROM 110 and/or HDD 150 store the operating system (OS) software, and software programs necessary for executing the functions of the robotic catheter system 1000 as a whole.
- the RAM 130 is used as a workspace memory.
- the CPU 120 executes the software programs developed in the RAM 130.
- the I/O 140 inputs, for example, positional information to the display controller 100, and outputs information for displaying the navigation screen to the one or more displays (main display 101-1 and/or secondary display 101-2).
- the navigation screen is a graphical user interface (GUI) generated by a software program, but it may also be generated by firmware, or by a combination of software and firmware.
- the system controller 102 may control the steerable catheter 104 based on any known kinematic algorithms applicable to continuum or snake-like catheter robots. For example, the system controller 102 may control the steerable catheter 104 based on an algorithm known as the follow-the-leader (FTL) algorithm.
- the display controller 100 may acquire position information of the steerable catheter 104 from system controller 102. Alternatively, the display controller 100 may acquire the position information directly from the tip position detector 107.
- the steerable catheter 104 may be a single-use or limited-use catheter device. In other words, the steerable catheter 104 may be attachable to, and detachable from, the actuator unit 103 to be disposable.
- the display controller 100 may generate and output a live-view image or a navigation screen to the main display 101-1 and/or the secondary display 101-2 based on the 3D model of a target, sample, or object (e.g., a lung, an airway, a vessel, a patient’s anatomy (a branching structure), etc.) and the position information of at least a portion of the catheter (e.g., position of the catheter tip 320) by executing pre-programmed software routines.
- the navigation screen may indicate a current position of at least the catheter tip 320 on the 3D model. By observing the navigation screen, a user may recognize the current position of the steerable catheter 104 in the branching structure.
- one or more end effector tools may be inserted through the access port 126 at the proximal end of the catheter 104, and such tools may be guided through the tool channel 168 of the catheter body to perform an intraluminal procedure from the distal end of the catheter 104.
- the tool may be a medical tool such as an endoscope camera, forceps, a needle, or other biopsy or ablation tools.
- the tool may be described as an operation tool or working tool.
- the working tool is inserted or removed through the working tool access port 126.
- the tool may include an endoscope camera or an end effector tool, which may be guided through a steerable catheter under the same principles.
- In a procedure, there is usually a planning procedure, a registration procedure, a targeting procedure, and an operation procedure.
- the one or more processors such as, but not limited to, the display controller 100, may generate and output a navigation screen to the one or more displays 101-1, 101-2 based on the 2D/3D model and the position/orientation/navigation/pose/state (or other state) information by executing the software.
- the navigation screen may indicate a current position/orientation/navigation/pose/state (or other state) of the continuum robot 104 on the 2D/3D model.
- the one or more processors, such as, but not limited to, the display controller 100 and/or the controller 102, may include, as shown in FIG. 5, a Read Only Memory (ROM 110), a central processing unit (CPU 120), a Random Access Memory (RAM 130), an input and output (I/O 140) interface, and a Hard Disk Drive (HDD 150).
- a Solid State Drive (SSD) may be used instead of HDD 150 as the data storage 150.
- the one or more processors, and/or the display controller 100 and/or the controller 102 may include structure as shown in FIGS.20-21 and 22-23 as further discussed below.
- the ROM 110 and/or HDD 150 operate to store the software in one or more embodiments.
- the RAM 130 may be used as a work memory.
- the CPU 120 may execute the software program developed in the RAM 130.
- the I/O 140 operates to input the positional (or other state) information to the display controller 100 (and/or any other processor discussed herein) and to output information for displaying the navigation screen to the one or more displays 101-1, 101-2.
- the navigation screen may be generated by the software program. In one or more other embodiments, the navigation screen may be generated by a firmware.
- One or more devices or systems may include a tip position/orientation/navigation/pose/state (or other state) detector 107 that operates to detect a position/orientation/navigation/pose/state (or other state) of the EM tracking sensor 106 and to output the detected positional (and/or other state) information to the controller 100 or 102 (e.g., as shown in FIGS.1-2), or to any other processor(s) discussed herein.
- the controller 102 may operate to receive the positional (or other state) information of the tip of the continuum robot 104 from the tip position/orientation/navigation/pose/state (or any other state discussed herein) detector 107.
- the detector 107 may be optional.
- the tracking sensor 106 may be replaced by a camera 106.
- the controller 100 and/or the controller 102 operates to control the actuator 103 in accordance with the manipulation by a user (e.g., manually), and/or automatically (e.g., by a method or methods run by one or more processors using software, by the one or more processors, using automatic manipulation in combination with one or more manual manipulations or adjustments, etc.) via one or more operation/operating portions or operational controllers 105 (e.g., such as, but not limited to a joystick as shown in FIGS.1-2; see also, diagram of FIG.4).
- the one or more displays 101-1, 101-2 and/or operation portion or operational controllers 105 may be used as a user interface 3000 (also referred to as a receiving device) (e.g., as shown diagrammatically in FIG.4).
- the system(s) 1000 may include, as an operation unit, the display 101-1 (e.g., such as, but not limited to, a large screen user interface with a touch panel, first user interface unit, etc.), the display 101-2 (e.g., such as, but not limited to, a compact user interface with a touch panel, a second user interface unit, etc.) and the operating portion 105 (e.g., such as, but not limited to, a joystick shaped user interface unit having shift lever/ button, a third user interface unit, a gamepad, or other input device, etc.).
- the controller 100 and/or the controller 102 may control the continuum robot 104 based on an algorithm known as the follow-the-leader (FTL) algorithm.
- the FTL algorithm may be used in addition to the navigation planning and/or autonomous navigation features of the present disclosure.
- the middle section and the proximal section (following sections) of the continuum robot 104 may move at a first position (or other state) in the same or similar way as the distal section moved at the first position (or other state) or a second position (or state) near the first position (or state) (e.g., during insertion of the continuum robot/catheter 104, by using the navigation planning, autonomous navigation, movement, and/or control feature(s) of the present disclosure, etc.).
- the middle section and the distal section of the continuum robot 104 may move at a first position or state in the same/similar/approximately similar way as the proximal section moved at the first position or state or a second position or state near the first position (e.g., during removal of the continuum robot/catheter 104).
- the continuum robot/catheter 104 may be removed by automatically and/or manually moving along the same or similar, or approximately same or similar, path that the continuum robot/catheter 104 used to enter a target (e.g., a body of a patient, an object, a specimen (e.g., tissue), etc.) using the FTL algorithm, including, but not limited to, using FTL with the one or more adjustment, correction, state, and/or smoothing technique(s) discussed herein.
- Any of the one or more processors such as, but not limited to, the controller 102 and the display controller 100, may be configured separately.
- the controller 102 may similarly include a CPU 120, a RAM 130, an I/O 140, a ROM 110, and a HDD 150 as shown diagrammatically in FIG. 5.
- any of the one or more processors, such as, but not limited to, the controller 102 and the display controller 100, may be configured as one device (for example, the structural attributes of the controller 100 and the controller 102 may be combined into one controller or processor, such as, but not limited to, the one or more other processors discussed herein (e.g., computer, console, or processor 1200, 1200’, etc.)).
- the system 1000 may include a tool channel 126 for a camera, biopsy tools, or other types of medical tools (as shown in FIGS.1-2).
- the tool may be a medical tool, such as an endoscope, a forceps, a needle or other biopsy tools, etc.
- the tool may be described as an operation tool or working tool.
- the working tool may be inserted or removed through a working tool insertion slot 126 (as shown in FIGS. 1-2).
- Any of the features of the present disclosure may be used in combination with any of the features, including, but not limited to, the tool insertion slot, as discussed in U.S. Prov. Pat. App. No. 63/378,017, filed September 30, 2022, the disclosure of which is incorporated by reference herein in its entirety, and/or any of the features as discussed in U.S. Prov. Pat. App. No.
- FIG. 6 is a flowchart showing steps of at least one planning procedure of an operation of the continuum robot/catheter device 104.
- One or more of the processors discussed herein may execute the steps shown in FIG. 6, and these steps may be performed by executing a software program read from a storage medium, including, but not limited to, the ROM 110 or HDD 150, by the CPU 120 or by any other processor discussed herein.
- One or more methods of planning using the continuum robot/catheter device 104 may include one or more of the following steps: (i) In step s601, one or more images such, as CT or MRI images, may be acquired; (ii) In step s602, a three dimensional model of a branching structure (for example, an airway model of lungs or a model of an object, specimen or other portion of a body) may be generated based on the acquired one or more images; (iii) In step s603, a target on the branching structure may be determined (e.g., based on a user instruction, based on preset or stored information, etc.); (iv) In step s604, a route of the continuum robot/catheter device 104 to reach the target (e.g., on the branching structure) may be determined (e.g., based on a user instruction, based on preset or stored information, based on a combination of user instruction and stored or preset information, etc.); (v)
- a target and a route on the model may be determined and stored before the operation of the continuum robot 104 is started.
- the system controller 102 (or any other controller, processor, computer, etc. discussed herein) may operate to perform a navigation planning mode and/or an autonomous navigation mode.
- the navigation planning and/or autonomous navigation mode may include or comprise: (1) a perception step, (2) a planning step, and (3) a control step.
- the system controller 102 may receive an endoscope view (or imaging data) and may analyze the endoscope view (or imaging data) to find addressable airways from the current position/orientation of the steerable catheter 104. At the end of this analysis, the system controller 102 identifies or perceives these addressable airways as paths in the endoscope view (or imaging data).
- the planning step is a step to determine a target path, which is the destination for the steerable catheter 104. While there are a couple of different approaches to select one of the paths as the target path, the present disclosure uniquely includes means to reflect user instructions concurrently in the decision of a target path among the identified or perceived paths.
- the control step is a step to control the steerable catheter 104 and the linear translation stage 122 (or any other portion of the robotic platform 108) to navigate the steerable catheter 104 to the target path, pose, state, etc. This step may also be performed as an automatic step.
- the system controller 102 operates to use information relating to the real time endoscope view (e.g., the view 134), the target path, and an internal design & status information on the robotic catheter system 1000.
- the robotic catheter system 1000 may navigate the steerable catheter 104 autonomously, which achieves reflecting the user’s intention efficiently.
- the real-time endoscope view 134 may be displayed in a main display 101-1 (as a user input/output device) in the system 1000. The user may see the airways in the real-time endoscope view 134 through the main display 101-1. This real-time endoscope view 134 may also be sent to the system controller 102.
- the system controller 102 may process the real-time endoscope view 134 and may identify path candidates by using image processing algorithms. Among these path candidates, the system controller 102 may select the paths with the designed computation processes, and then may display the paths with a circle, octagon, or other geometric shape (e.g., one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, triangles, any other shape discussed herein or known to those skilled in the art, any closed shape discussed herein or known to those skilled in the art, etc.) with the real-time endoscope view 134 as discussed further below for FIGS.7-8.
- the system controller 102 may provide a cursor so that the user may indicate the target path by moving the cursor with the joystick 105.
- the system controller 102 operates to recognize the path with the cursor as the target path.
- the system controller 102 may pause the motion of the actuator unit 103 and the linear translation stage 122 while the user is moving the cursor so that the user may select the target path with a minimal change of the real-time endoscope view 134 and paths, since the system 1000 would not move in such a scenario.
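- As a minimal sketch of the perception and planning steps described above (assuming the endoscope view has already been reduced to a list of candidate path centers; the helper names and placeholder values below are illustrative, not from this disclosure), the target path may be chosen as the candidate closest to the user's cursor:

```python
import numpy as np

def perceive_paths(endoscope_view: np.ndarray):
    """Perception stand-in: return (row, col) centers of addressable airways found in the view.

    A real implementation would use the image-processing/depth-map methods described herein;
    the fixed return value below is only a placeholder so the sketch runs.
    """
    return [(60, 60), (140, 130)]

def plan_target(paths, cursor_rc):
    """Planning: pick the path whose center lies closest to the user-positioned cursor."""
    return min(paths, key=lambda p: float(np.hypot(p[0] - cursor_rc[0], p[1] - cursor_rc[1])))

# Example: with the cursor near the lower-right airway, that path becomes the target path.
view = np.zeros((200, 200), dtype=np.uint8)      # placeholder endoscope frame
target_path = plan_target(perceive_paths(view), cursor_rc=(150, 120))
```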
- the features of the present disclosure may be performed using artificial intelligence, including the autonomous driving mode.
- deep learning may be used for performing autonomous driving using deep learning for localization.
- Any features of the present disclosure may be used with artificial intelligence features discussed in J. Sganga, D. Eng, C. Graetzel, and D. B. Camarillo, “Autonomous Driving in the Lung using Deep Learning for Localization,” Jul. 2019, Accessed: Jun. 28, 2023. [Online]. Available: https://arxiv.org/abs/1907.08136v1, the disclosure of which is incorporated by reference herein in its entirety.
- the system controller 102 (or any other controller, processor, computer, etc. discussed herein) may operate to perform a depth map mode.
- a depth map may be generated or obtained from one or more images (e.g., bronchoscopic images, CT images, images of another imaging modality, etc.). A depth of each image may be identified or evaluated to generate the depth map or maps.
- the generated depth map or maps may be used to perform navigation planning, autonomous navigation, movement detection, and/or control of a continuum robot, a steerable catheter, an imaging device or system, etc. as discussed herein.
- thresholding may be applied to the generated depth map or maps, or to the depth map mode, to evaluate accuracy for navigation purposes.
- a threshold may be set for an acceptable distance between the ground truth (and/or a target camera location, a predetermined camera location, an actual camera location, etc.) and an estimated camera location for a catheter or continuum robot (e.g., the catheter or continuum robot 104).
- the threshold may be defined such that the distance between the ground truth (and/or a target camera location, a predetermined camera location, an actual camera location, etc.) and an estimated camera location is equal to or less than, or less than, a set or predetermined distance of one or more of the following: 5 mm, 10 mm, about 5 mm, about 10 mm, or any other distance set by a user of the device (depending on a particular application).
- the predetermined distance may be less than 5 mm or less than about 5 mm. Any other type of thresholding may be applied to the depth mapping to improve and/or confirm the accuracy of the depth map(s).
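- A simple illustration of such a distance threshold check follows (the 5 mm default below is just one of the example values mentioned above):

```python
import numpy as np

def within_accuracy_threshold(estimated_xyz, reference_xyz, threshold_mm: float = 5.0) -> bool:
    """Return True when the estimated camera location lies within threshold_mm of the
    reference (e.g., ground-truth, target, or actual) camera location."""
    distance = float(np.linalg.norm(np.asarray(estimated_xyz, dtype=float)
                                    - np.asarray(reference_xyz, dtype=float)))
    return distance <= threshold_mm

# Example: a localization error of about 3.2 mm passes a 5 mm threshold.
print(within_accuracy_threshold((10.0, 4.0, 2.0), (12.0, 4.0, 4.5)))
```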
- thresholding may be applied to segment the one or more images to help identify or find one or more objects and to ultimately help define one or more targets used for the navigation planning, autonomous navigation, movement detection, and/or control features of the present disclosure.
- a depth map or maps may be created or generated using one or more images (e.g., CT images, bronchoscopic images, images of another imaging modality, vessel images, etc.), and then, by applying a threshold to the depth map, the objects in the one or more images may be segmented (e.g., a lung may be segmented, one or more airways may be segmented, etc.).
- the segmented portions of the one or more images may define one or more navigation targets for a next automatic robotic movement, navigation, and/or control. Examples of segmented airways are discussed further below with respect to FIG.8.
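- A short sketch of this thresholding-based segmentation follows (the fraction of the depth range kept as "deepest" and the use of scipy are assumptions for illustration):

```python
import numpy as np
from scipy import ndimage

def segment_airway_targets(depth_map: np.ndarray, fraction: float = 0.2):
    """Threshold the deepest part of a depth map and return blob centroids as navigation targets.

    depth_map: 2-D array where larger values are assumed to mean deeper (farther) points.
    fraction:  portion of the depth range treated as "deepest" (illustrative default).
    """
    lo, hi = float(depth_map.min()), float(depth_map.max())
    mask = depth_map >= hi - fraction * (hi - lo)          # keep only the deepest pixels

    labels, n = ndimage.label(mask)                        # connected components = candidate airways
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    return [(float(r), float(c)) for r, c in centroids]    # (row, col) targets for the next movement
```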
- one or more of the automated methods that may be used to apply thresholding may include one or more of the following: a watershed method (such as, but not limited to, watershed method(s) discussed in L. J. Belaid and W. Mourou, “IMAGE SEGMENTATION: A WATERSHED TRANSFORMATION ALGORITHM,” 2011, vol. 28, no. 2, p.
- a k-means method (such as, but not limited to, k-means method(s) discussed in T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu, “An efficient k-means clustering algorithm: Analysis and implementation,” IEEE Trans Pattern Anal Mach Intell, vol. 24, no. 7, pp.
- an automatic threshold method (such as, but not limited to, automatic threshold method(s) discussed in N. Otsu, “Threshold Selection Method from Gray-Level Histograms,” IEEE Trans Syst Man Cybern, vol.9, no.1, pp.62–66, 1979, which is incorporated by reference herein in its entirety)
- a sharp slope method (such as, but not limited to, sharp slope method(s) discussed in U.S. Pat. Pub. No. 2023/0115191 A1, published on April 13, 2023, which is incorporated by reference herein in its entirety)
- peak detection may include any of the techniques discussed herein, including, but not limited to, the techniques discussed in at least “8 Peak detection,” Data Handling in Science and Technology, vol. 21, no. C, pp. 183–190, Jan. 1998, doi: 10.1016/S0922-3487(98)80027-0, which is incorporated by reference herein in its entirety.
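- For example, the automatic (Otsu) threshold option listed above could be applied to a depth map with a standard image-processing library (scikit-image here; the library choice is an assumption):

```python
import numpy as np
from skimage.filters import threshold_otsu

def otsu_airway_mask(depth_map: np.ndarray) -> np.ndarray:
    """Binarize a depth map with Otsu's automatically selected threshold."""
    t = threshold_otsu(depth_map)
    return depth_map > t          # True where the map is deeper than the Otsu threshold
```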
- the depth map(s) may be obtained, and/or the quality of the obtained depth map(s) may be evaluated, using artificial intelligence structure, such as, but not limited to, convolutional neural networks, generative adversarial networks (GANs), neural networks, any other AI structure or feature(s) discussed herein, any other AI network structure(s) known to those skilled in the art, etc.
- a generator of a generative adversarial network may operate to generate an image(s) that is/are so similar to ground truth image(s) that a discriminator of the generative adversarial network is not able to distinguish between the generated image(s) and the ground truth image(s).
- the generative adversarial network may include one or more generators and one or more discriminators.
- Each generator of the generative adversarial network may operate to estimate depth of each image (e.g., a CT image, a bronchoscopic image, etc.), and each discriminator of the generative adversarial network may operate to determine whether the estimated depth of each image (e.g., a CT image, a bronchoscopic image, etc.) is estimated (or fake) or ground truth (or real).
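- A minimal PyTorch sketch of such a generator/discriminator pair follows (a toy illustration only; it is not the 3cGAN architecture referenced elsewhere in this disclosure):

```python
import torch
import torch.nn as nn

class DepthGenerator(nn.Module):
    """Toy generator: maps a 1-channel endoscopic image to an estimated depth map in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)

class DepthDiscriminator(nn.Module):
    """Toy discriminator: scores whether a depth map looks like ground truth (real) or generated (fake)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, depth_map):
        return self.net(depth_map)

# Example: estimate a depth map for one 200x200 image and score it with the discriminator.
g, d = DepthGenerator(), DepthDiscriminator()
depth = g(torch.rand(1, 1, 200, 200))
realism_score = d(depth)
```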
- an AI network such as, but not limited to, a GAN or a consistent GAN (cGAN), may receive an image or images as an input and may obtain or create a depth map for each image or images.
- an AI network may evaluate obtained one or more images (e.g., a CT image, a bronchoscopic image, etc.), one or more virtual images, and one or more ground truth depth maps to generate depth map(s) for the one or more images and/or evaluate the generated depth map(s).
- a Three Cycle-Consistent Generative Adversarial Network (3cGAN) may be used to obtain the depth map(s) and/or evaluate the quality of the depth map(s), and an unsupervised learning method (designed and trained in an unsupervised procedure) may be employed on the depth map(s) and the one or more images (e.g., a CT image or images, a bronchoscopic image or images, any other obtained image or images, etc.).
- any feature or features of obtaining a depth map or performing a depth map mode of the present disclosure may be used with any of the depth map or depth estimation features as discussed in A. Banach, F. King, F. Masaki, H. Tsukada, and N. Hata, “Visually Navigated Bronchoscopy using three cycle-Consistent generative adversarial network for depth estimation,” Med Image Anal, vol. 73, p. 102164, Oct. 2021, doi: 10.1016/J.MEDIA.2021.102164, the disclosure of which is incorporated by reference herein in its entirety.
- the system controller 102 (or any other controller, processor, computer, etc. discussed herein) may compute one or more lumen; the computation of one or more lumen may include a fit/blob process using one or more set or predetermined geometric shapes (e.g., one or more circles, rectangles, squares, ovals, octagons, and/or triangles), a peak detection, and/or a deepest point analysis.
- fitting one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles or a blob to a binary object may be equivalent to fitting one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles or a blob to a set of points.
- the set of points may be the boundary points of the binary object.
- a circle/blob fit is not limited thereto (as one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles (or other shape(s)) may be used). Indeed, there are several other variations that may be applied as described in D. Umbach and K. N. Jones, "A few methods for fitting circles to data," in IEEE Transactions on Instrumentation and Measurement, vol. 52, no. 6, pp. 1881-1885, Dec. 2003, doi: 10.1109/TIM.2003.820472, the disclosure of which is incorporated by reference herein in its entirety. For example, circle/blob fitting may be achieved on the binary objects by calculating their circularity/blob shape as 4πA/P², where A is the area and P is the perimeter of the binary object.
- one or more embodiments may include same in various ways.
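- One way to score binary blobs with the circularity measure above is sketched here (OpenCV is used for contour extraction; that library choice is an assumption):

```python
import cv2
import numpy as np

def blob_circularities(binary_mask: np.ndarray):
    """Compute circularity = 4*pi*A / P^2 for each blob in a binary mask (1.0 for a perfect circle)."""
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    scores = []
    for contour in contours:
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, True)   # True: the contour is a closed curve
        if perimeter > 0:
            scores.append(4.0 * np.pi * area / perimeter ** 2)
    return scores
```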
- peak detection may be performed in a 1-D signal and may be defined as the extreme value of the signal.
- 2-D image peak detection may be defined as the highest value of the 2-D matrix.
- a depth map or maps is/are the 2-D matrix in one or more embodiments, and a peak is the highest value of the depth map or maps, which may correspond to the deepest point.
- more than one peak may exist.
- the depth map or maps produce an image which predicts the depth of the airways; therefore, for each airway, there may be a concentration of non-zero pixels around a deepest point that the neural network, residual network, GANs, or any other AI structure/network discussed herein or known to those skilled in the art predicted.
- By applying peak detection to all the non-zero concentrations of the 2-D depth map or maps, the peak of each concentration is detected; each peak corresponds to an airway.
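- One way to realize this 2-D peak detection is sketched below (using scikit-image's local-maximum finder; the minimum peak spacing and background cutoff are illustrative assumptions):

```python
import numpy as np
from skimage.feature import peak_local_max

def detect_depth_peaks(depth_map: np.ndarray, min_distance: int = 10):
    """Return (row, col) coordinates of local maxima of a 2-D depth map; each peak is
    treated as the deepest point of one airway concentration."""
    coords = peak_local_max(depth_map, min_distance=min_distance,
                            threshold_abs=float(depth_map.max()) * 0.1)  # ignore near-zero background
    return [tuple(int(v) for v in rc) for rc in coords]
```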
- FIG. 7 is a flowchart showing steps of at least one procedure for performing navigation planning, autonomous navigation, movement detection, and/or control technique(s) for a continuum robot/catheter device (e.g., such as continuum robot/catheter device 104).
- One or more of the processors discussed herein, one or more AI networks discussed herein, and/or a combination thereof may execute the steps shown in FIG.7, and these steps may be performed by executing a software program read from a storage medium, including, but not limited to, the ROM 110 or HDD 150, by CPU 120 or by any other processor discussed herein.
- in one or more embodiments of the method of FIG. 7, one or more images (e.g., one or more camera images, one or more CT images (or images of another imaging modality), one or more bronchoscopic images, etc.) may be obtained or received, and a depth map or maps may be estimated for the one or more images;
- in step S703, based on the td value, the method continues to perform the selected target detection method and proceeds to step S704 for the peak detection method or mode, to step S706 for the thresholding method or mode, or to step S711 for the deepest point method or mode;
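- A sketch of how the selection in step S703 might be dispatched in code follows (the numeric td mapping is an assumption; the helper functions reuse the thresholding and peak-detection sketches above):

```python
import numpy as np

def deepest_point(depth_map: np.ndarray):
    """Return the (row, col) of the single deepest point of the depth map."""
    return tuple(int(v) for v in np.unravel_index(int(np.argmax(depth_map)), depth_map.shape))

def detect_targets(depth_map: np.ndarray, td: int):
    """Dispatch to one of the target detection modes based on the td value."""
    if td == 0:                                   # e.g., step S704: peak detection mode
        return detect_depth_peaks(depth_map)      # from the peak-detection sketch above
    if td == 1:                                   # e.g., step S706: thresholding mode
        return segment_airway_targets(depth_map)  # from the thresholding sketch above
    return [deepest_point(depth_map)]             # e.g., step S711: deepest point mode
```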
- the steps S701 through S712 of FIG. 7 may be performed again for an obtained or received next image or images to evaluate the next movement, pose, position, orientation, or state for the navigation planning, autonomous navigation, movement detection, and/or control of the continuum robot or steerable catheter (or imaging device or system) 104.
- the method may estimate (automatically or manually) the depth map or maps (e.g., a 2D or 3D depth map or maps) of one or more images.
- the one or more depth maps may be estimated or determined using any technique discussed herein, including, but not limited to, artificial intelligence.
- any AI network including, but not limited to a neural network, a convolutional neural network, a generative adversarial network, any other AI network or structure discussed herein or known to those skilled in the art, etc., may be used to estimate or determine the depth map or maps (e.g., automatically).
- the navigation planning, autonomous navigation, movement detection, and/or control technique(s) of the present disclosure are not limited thereto.
- a counter may not be used and/or a target detection method value may not be used such that at least one embodiment may iteratively perform a target detection method of a plurality of target detection methods and move on and use the next target detection method of the plurality of the target detection methods until a target or targets is/are found.
- one or more embodiments may continue to use one or more of the other target detection methods (or any combination of the plurality of target detection methods or modes) to confirm and/or evaluate the accuracy and/or results of the target detection method or mode used to find the already-identified one or more targets.
- the identified one or more targets may be double checked, triple checked, etc.
- one or more steps of FIG.7, such as, but not limited to, step S707 for binarization, may be omitted in one or more embodiments.
- segmentation may be performed using three categories (e.g., airways, background, and edges of the image(s)); in such a case, an image may have three colors.
- binarization may be useful to convert the image to black and white image data to perform processing on same as discussed herein.
- FIG. 8 shows images of at least one embodiment of an application example of navigation planning, autonomous navigation, and/or control technique(s) and movement detection for a camera view 800 (left), a depth map 801 (center), and a thresholded image 802 (right) in accordance with one or more aspects of the present disclosure.
- a depth map may be created using the bronchoscopic images and then, by applying a threshold to the depth map, the airways may be segmented.
- the segmented airways shown in thresholded image 802 may define the navigation targets (shown in the octagons of image 802) of the next automatic robotic movement.
- the continuum robot or steerable catheter 104 may follow the target(s) (which a user may change by dragging and dropping the target(s) (e.g., a user may drag and drop an identifier for the target, the user may drag and drop a cross or an x element representing the location for the target, etc.) in one or more embodiments), and the continuum robot or steerable catheter 104 may move forward and rotate on its own while targeting a predetermined location (e.g., a center) of the target(s) of the airway.
- the depth map (see e.g., image 801) may be processed with any combination of a blob fit or geometric shape fit (e.g., one or more set or predetermined geometric shapes or one or more circles, rectangles, squares, ovals, octagons, and/or triangles), peak detection, and/or deepest point methods or modes to detect the airways that are segmented.
- the detected airways may define the navigation targets of the next automatic robotic movement.
- the continuum robot or steerable catheter 104 may move in a direction of the airway with its center closer to the cross or identifier.
- the apparatuses, systems, methods, and/or other features of the present disclosure may be optimized to other geometries as well, depending on the particular application(s) embodied or desired.
- one or more airways may be deformed due to one or more reasons or conditions (e.g., environmental changes, patient diagnosis, structural specifics for one or more lungs or other objects or targets, etc.).
- While the circle fit may be used for the planning shown in FIG.8, this figure shows an octagon defining the fitting of the lumen in the images. Such a difference may help clarify the different information being provided in the display.
- an indicator of the geometric fit (e.g., a circle fit) may have the same geometry as used in the fitting algorithm, or it may have a different geometry, such as the octagon shown in FIG.8.
- a study was conducted to introduce and evaluate new and non-obvious techniques for achieving autonomous advancement of a multi-section continuum robot within lung airways, driven by depth map perception.
- By harnessing depth maps as a fundamental perception modality, one or more embodiments of the studied system aim to enhance the robot’s ability to navigate and manipulate within the intricate and complex anatomical structure of the lungs (or any other targeted anatomy, object, or sample).
- Bronchoscopic operations were conducted using a snake robot developed in the researchers’ lab (some of the features of which are discussed in F. Masaki, F. King, T. Kato, H. Tsukada, Y. Colson, and N. Hata, “Technical validation of multi-section robotic bronchoscope with first person view control for transbronchial biopsies of peripheral lung,” IEEE Transactions on Biomedical Engineering, vol. 68, no. 12, pp. 3534–3542, 2021, which is incorporated by reference herein in its entirety), equipped with a bronchoscopic camera (OVM6946 OmniVision, CA).
- the captured bronchoscopic images were transmitted to a control workstation, where depth maps were created using a method involving a Three Cycle- Consistent Generative Adversarial Network (3cGAN) (see e.g., a 3cGAN as discussed in A. Banach, F. King, F. Masaki, H. Tsukada, and N. Hata, “Visually navigated bronchoscopy using three cycle-consistent generative adversarial network for depth estimation,” Medical Image Analysis, vol. 73, p. 102164, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1361841521002103, the disclosure of which is incorporated by reference herein in its entirety).
- a combination of thresholding and blob detection algorithms, methods, or modes was used to detect the airway path, along with peak detection for missed airways.
- a control vector was computed from the chosen point of advancement (identified centroid or deepest point) to the center of the depth map image. This control vector represents the direction of movement on the 2D plane of original RGB and depth map images.
- a software-emulated joystick/gamepad was used in place of the physical interface to control the snake robot (also referred to herein as a continuum robot, steerable catheter, imaging device or system, etc.). The magnitude of the control vector was calculated, and if the magnitude fell below a threshold, the robot advanced. If the magnitude exceeded the threshold, the joystick was tilted to initiate bending.
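- A hedged sketch of that control-vector decision follows (the 20-pixel magnitude threshold and the sign convention are illustrative assumptions):

```python
import numpy as np

def control_command(advance_point_rc, image_shape, magnitude_threshold: float = 20.0):
    """Compute the 2-D control vector from the chosen point of advancement to the image center
    and decide between advancing (insertion) and bending (virtual joystick tilt)."""
    center = np.array([image_shape[0] / 2.0, image_shape[1] / 2.0])
    vector = center - np.asarray(advance_point_rc, dtype=float)
    magnitude = float(np.linalg.norm(vector))
    if magnitude < magnitude_threshold:
        return "advance", vector      # target roughly ahead of the tip: advance the robot
    return "bend", vector             # target off-center: tilt the emulated joystick to bend

# Example: a centroid at (140, 130) in a 200x200 depth map is far from center, so the robot bends.
command, direction = control_command((140, 130), (200, 200))
```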
- a device, apparatus, or system may be a continuum robot or a robotic bronchoscope, and one or more embodiments of the present disclosure may employ depth map-driven autonomous advancement of a multi- section continuum robot or robotic bronchoscope in one or more lung airways. Additional non-limiting, non-exhaustive embodiment details for one or more bronchoscope, robotic bronchoscope, apparatus, system, method, storage medium, etc. details, and one or more details for the performed study/studies, are shown in one or more figures, e.g., at least FIGS. 9-18B of the present disclosure.
- the snake robot is a robotic bronchoscope composed of, or including at least, the following parts in one or more embodiments: i) the robotic catheter, ii) the actuator unit, iii) the robotic arm, and iv) the software (see e.g., FIG.9).
- the robotic catheter is developed to mimic, and improve upon and outperform, a manual catheter, and, in one or more embodiments, the robotic catheter includes nine drive wires which travel through the steerable catheter, housed within an outer skin made of polyether block amide (PEBA) of 0.13 mm thickness.
- the catheter includes a central channel which allows for inserting the bronchoscopic camera.
- the outer and inner diameters (OD, ID) of the catheter are 3 and 1.8 mm, respectively (see e.g., J. Zhang, et al., Nature Communications, vol.15, no.1, p.241 (Jan.2024), which is incorporated by reference herein in its entirety).
- the steering structure of the catheter includes two distal bending sections (the tip and middle sections) and one proximal bending section without an intermediate passive section. Each of the sections has its own degree of freedom (DOF) (see e.g., A. Banach, F. King, F. Masaki, H. Tsukada, and N. Hata, Medical Image Analysis, vol. 73, p. 102164 (2021)).
- the catheter is actuated through the actuator unit attached to the robotic arm and includes nine motors that control the nine catheter wires. Each motor operates to bend one wire of the catheter by applying pushing or pulling force to the drive wire.
- Both the robotic catheter and actuator are attached to a robotic arm, including a rail that allows for a linear translation of the catheter. The movement of the catheter over the rail is achieved through a linear stage actuator, which pushes or pulls the actuator and the attached catheter.
- the catheter, actuator unit, and robotic arm are coupled into a system controller, which allows their communication with the software.
- the robot movement may be achieved using a handheld controller (gamepad) or, like in this study, through autonomous driving software.
- the validation design of the robotic bronchoscope was performed by replicating real surgical scenarios, where the bronchoscope entered the trachea and navigated in the airways toward a predefined target (see e.g., L. Dupourque, et al., International Journal of Computer Assisted Radiology and Surgery, vol. 14, no. 11, pp. 2021-2029 (2019), which is incorporated by reference herein in its entirety).
- the autonomous driving method feature(s) of the present disclosure relies/rely on the 2D image from the monocular bronchoscopic camera without tracking hardware or prior CT segmentation in one or more embodiments.
- a 200x200 pixel grayscale bronchoscopic image serves as input for a deep learning model (3cGAN (see e.g., A. Banach, F. King, F. Masaki, H. Tsukada, N. Hata, “Medical image analysis, vol. 73, p. 102164 (2021), the disclosure of which is incorporated by reference herein in its entirety)) that generates a bronchoscopic depth map.
- the total adversarial loss may be written as $L_{gan} = \sum_{i=1}^{6} L_{gan}^{lev_i} = L_{gan}^{lev_1} + L_{gan}^{lev_2} + \cdots + L_{gan}^{lev_6}$ (1), where the adversarial loss of level $i$ is referred to as $L_{gan}^{lev_i}$.
- the 3cGAN model underwent unsupervised training using bronchoscopic images from phantoms derived from segmented airways.
- Bronchoscopic operations to acquire the training data were performed using a Scope 4 bronchoscope (Ambu Inc, Columbia, MD), while virtual bronchoscopic images and ground truth depth maps were generated in Unity (Unity Technologies, San Francisco, CA).
- the training ex-vivo dataset contained 2458 images.
- the network was trained in PyTorch using an Adam optimizer for 50 epochs with a learning rate of 2 × 10⁻⁴ and a batch size of one. Training time was approximately 30 hours, and inference of one depth map took less than 0.02 s on a GTX 1080 Ti GPU.
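- A self-contained sketch of that training configuration follows (the tiny model, random stand-in data, and simple L1 loss are assumptions; the actual study used the 3cGAN with the adversarial losses described above):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: a few random 200x200 grayscale "bronchoscopic images" and depth targets.
images = torch.rand(8, 1, 200, 200)
depths = torch.rand(8, 1, 200, 200)
loader = DataLoader(TensorDataset(images, depths), batch_size=1)      # batch size of one

model = torch.nn.Sequential(                                          # toy depth estimator
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 3, padding=1), torch.nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)             # Adam, learning rate 2 x 10^-4

for epoch in range(50):                                                # 50 epochs
    for image, target in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.l1_loss(model(image), target)      # stand-in for the GAN loss of Eq. (1)
        loss.backward()
        optimizer.step()
```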
- the depth map was generated from the 3cGAN models by inputting the 2D image from the bronchoscopic camera.
- the bronchoscopic image and/or the depth map was then processed for airway detection using a combination of blob detection, thresholding, and peak detection (see e.g., FIG. 11A discussed below).
- Blob detection was performed on a depth map where 20% of the deepest area was thresholded, and the centroids of the resulting shapes were treated as potential points of advancement for the robot to bend and advance towards.
- Peak detection was performed as a secondary detection method to detect airways that may have been missed by the blob detection. Any peaks detected inside an existing detected blob were disregarded.
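- A sketch combining the two passes described above follows (interpreting "20% of the deepest area" as the top 20% of the depth range, which is an assumption), where peaks falling inside an already-detected blob are disregarded:

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max

def airway_advancement_points(depth_map: np.ndarray):
    """Blob detection on the thresholded deepest region, plus secondary peak detection for
    airways missed by the blobs; returns candidate (row, col) points of advancement."""
    lo, hi = float(depth_map.min()), float(depth_map.max())
    mask = depth_map >= hi - 0.2 * (hi - lo)               # threshold the deepest ~20%

    labels, n = ndimage.label(mask)                        # blobs = candidate airways
    points = [tuple(map(float, c))
              for c in ndimage.center_of_mass(mask, labels, range(1, n + 1))]

    for r, c in peak_local_max(depth_map, min_distance=10):  # secondary pass: local maxima
        if labels[r, c] == 0:                                # discard peaks inside detected blobs
            points.append((float(r), float(c)))
    return points
```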
- Direction vector control command may be performed using the directed airways to decide to employ bending and/or insertion, and/or such information may be passed or transmitted to software to control the robot and to perform autonomous advancement.
- one or more embodiments of the present disclosure may be a robotic bronchoscope using a robotic catheter and actuator unit, a robotic arm, and/or a control software or a User Interface. Indeed, one or more robotic bronchoscopes may use any of the subject features individually or in combination.
- depth estimation may be performed from bronchoscopic images and with airway detection (see e.g., FIGS. 10A-10B). Indeed, one or more embodiments of a bronchoscope (and/or a processor or computer in use therewith) may use a bronchoscopic image with detected airways and an estimated depth map (or depth estimation) with or using detected airways.
- a pixel of a set or predetermined color (e.g., red or any other desired color) 1002 represents a center of the detected airway.
- a cross or plus sign (+) 1003 may also be of any set or predetermined color (e.g., green or any other desired color), and the cross 1003 may represent the desired direction determined or set by a user (e.g., using a drag and drop feature, using a touch screen feature, entering a manual command, etc.) and/or by one or more processors (see e.g., any of the processors discussed herein).
- the line or segment 1004 (which may also be of any set or predetermined color, such as, but not limited to, blue) may be the direction vector between the center of the image/depth map and the center of the detected blob in closer proximity to the cross or plus sign 1003.
- the depth map was generated from the 3cGAN models by inputting the 2D image from the bronchoscopic camera. The depth map was then processed for airway detection using a combination of blob detection (see e.g., T. Kato, F. King, K. Takagi, N.
- Blob detection was performed on a depth map where 20% of the deepest area was thresholded, and the centroids of the resulting shapes were treated as potential points of advancement for the robot to bend and advance towards.
- Peak detection (see e.g., F. Masaki, F. King, T. Kato, H. Tsukada, Y. Colson, and N. Hata, IEEE Transactions on Biomedical Engineering, vol.68, no.12, pp.3534-3542 (2021)) was performed as a secondary detection method to detect airways that may have been missed by the blob detection. Any peaks detected inside an existing detected blob were disregarded.
- the integrated control using first-person view grants physicians the capability to guide the distal section’s motion via visual feedback from the robotic bronchoscope.
- users may determine only the lateral and vertical movements of the third (e.g., most distal) section, along with the general advancement or retraction of the robotic bronchoscope.
- the user’s control of the third section may be performed by using the computer mouse to drag and drop a cross or plus sign 1003 in the desired direction as shown in FIG. 11A and/or FIG. 11B.
- a voice control may also be implemented additionally or alternatively to the mouse-operated cross or plus sign 1003.
- an operator or user may select an airway for the robotic bronchoscope to aim using voice recognition algorithm (VoiceBot, Fortress, Ontario, Canada) via a headset (J100 Pro, Jeeco, Shenzhen, China).
- the options acceptable as input commands to control the robotic bronchoscope were the four cardinal directions (up, down, left, and right), center, and start/stop. For example, when the voice recognition algorithm accepted “up”, a cross 1003 was shown at the top of the endoscopic camera view.
- any feature of the present disclosure may be used with features, including, but not limited to, training feature(s), autonomous navigation feature(s), artificial intelligence feature(s), etc., as discussed in U.S. Prov. Pat. App.
- For specifying the robot’s movement direction, the target airway is identified based on the proximity of its center to the user-set marker, visible as the cross or plus sign 1003 in one or more embodiments as shown in FIGS. 10A-10B (the cross may be any set or predetermined color, e.g., green or other chosen color).
- the bronchoscopic robot may maintain a set or predetermined/calculated linear speed (e.g., of 2 mm/s) and a set or predetermined/calculated bending speed (e.g., of 15 deg/s).
- the movements of the initial two sections may be managed by the FTL motion algorithm, based on the movement history of the third section.
- the reverse FTL motion algorithm may control all three sections, leveraging the combined movement history of all sections recorded during the advancement phase, allowing users to retract the robotic bronchoscope whenever necessary.
- the middle section and the distal section of the continuum robot may move at a first position or state in the same/similar/approximately similar way as the proximal section moved at the first position or state or a second position or state near the first position (e.g., during removal of the continuum robot/catheter).
- the continuum robot/catheter may be removed by automatically and/or manually moving along the same or similar, or approximately same or similar, path that the continuum robot/catheter used to enter a target (e.g., a body of a patient, an object, a specimen (e.g., tissue), etc.) using the FTL algorithm, including, but not limited to, using FTL with the one or more control, depth map-driven autonomous advancement, or other technique(s) discussed herein.
- Other FTL features may be used with the one or more features of the present disclosure.
- An airway detection algorithm or process may identify the one or more airways in the bronchoscopic image(s) and/or in the depth map (e.g., such as, but not limited to, using thresholding, blob detection, peak detection, and/or any other process for identifying one or more airways as discussed herein and/or as may be set by a user and/or one or more processors, etc.).
- the pixel 1002 may represent a center of a detected airway and the cross or plus sign 1003 may represent the desired direction determined by the user (e.g., moved using a drag and drop feature, using a touch screen feature, entering a manual command, etc.) and/or by one or more processors (see e.g., any of the processors discussed herein).
- the line or segment 1004 (which may also be of any set or predetermined color, such as, but not limited to, blue) may be the direction vector between the center of the image/depth map and the center of the detected blob closer or closest in proximity to the cross or plus sign 1003.
- the direction vector control command may decide between bending and insertion.
- the direction vector may then be sent to the robot’s control software by a virtual gamepad (or other controller or processor) which may initiate the autonomous advancement.
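- A simplified sketch of this selection-and-command step is shown below; the “insert-when-centered” pixel radius and the dictionary-style return value are assumptions, and the real system forwards the result through the virtual gamepad interface rather than returning a Python dict.

```python
import numpy as np

def direction_command(cross_xy, airway_centers, image_center, insert_radius_px=20):
    """Pick the detected airway closest to the user marker (cross 1003), form the
    direction vector (segment 1004) from the image center to that airway, and
    decide between insertion and bending. The pixel threshold is an assumption."""
    if len(airway_centers) == 0:
        return {"action": "stop", "vector": np.zeros(2)}
    cross = np.asarray(cross_xy, dtype=float)
    centers = np.asarray(airway_centers, dtype=float)
    target = centers[np.argmin(np.linalg.norm(centers - cross, axis=1))]

    vector = target - np.asarray(image_center, dtype=float)
    if np.linalg.norm(vector) < insert_radius_px:
        return {"action": "insert", "vector": vector}  # already centered: advance
    return {"action": "bend", "vector": vector}        # off-center: bend toward it
```

- In such a sketch, the returned vector could be normalized to [-1, 1] and written to virtual gamepad axes, which is one plausible way of handing the command to the robot’s control software.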
- In FIG. 11A, at least one embodiment may have a network estimate a depth map from a bronchoscopic image, and the airway detection algorithm(s) may identify the airways.
- the pixel 1002, the cross or plus sign 1003, and the line or segment 1004 may be employed in the same or similar fashion such that discussion of the subject features shown in FIGS.10A-10B and FIG.11B will not be repeated.
- Characteristics of the models and scans for at least one study performed are shown in Table 1 below.
- Table 1: Characteristics of Phantom and Ex-vivo models and scans

| Model | Target generations | # of targets | CT dimensions [mm] | CT spacing [mm] |
|---|---|---|---|---|
| Phantom | 6 | 75 | | |
| Ex-vivo #1 | 6 | 52 | | |
| Ex-vivo #2 | 6 | 41 | | |

- Materials
- Patient-derived phantoms and ex-vivo specimens/animal model
- Imaging and airway models: The experiments utilized a chest CT scan from a patient who underwent a robotic-assisted bronchoscopic biopsy to develop an airway phantom (see FIG. 12B), under IRB approval #2020P001835.
- FIG. 12B shows a robotic bronchoscope in the phantom having reached the location corresponding to the location of the lesion in the patient’s lung, using the proposed supervised-autonomous navigation and/or navigation planning.
- the 62-year-old male patient presented with a nodule measuring 21x21x16 [mm] in the right upper lobe (RUL).
- the procedure was smoothly conducted using the Ion Endoluminal System (Intuitive Surgical, Inc., Sunnyvale, CA), with successful lesion access (see FIG.12A showing the view of the navigation screen with the lesion reached in the clinical phase).
- FIGS.12A-12B illustrate a navigation screen for a clinical target location 125 in or at a lesion reached by autonomous driving and a robotic bronchoscope in a phantom having reached the location corresponding to the location of the lesion using one or more navigation features, respectively.
- Various procedures were performed at the lesion’s location, including bronchoalveolar lavage, transbronchial needle aspiration, brushing, and transbronchial lung biopsy. The procedure progressed without immediate complications.
- Via the experiment, the inventors aimed to ascertain whether the proposed autonomous driving method(s) would achieve the same clinical target (and the experiment confirmed that such method(s) did achieve the same clinical target).
- one target in the phantom replicated the lesion’s location in the patient’s lung.
- Airway segmentation of the chest CT scan mentioned above was performed using ‘Thresholding’ and ‘Grow from Seeds’ techniques within 3D Slicer software.
- a physical/tangible mold replica of the walls of the segmented airways was created using 3D printing in ABS plastic.
- the printed mold was later filled with a silicone rubber compound to produce the Patient-Derived Phantom; the compound was left to cure before being removed from the mold.
- Via the experiment, the inventors also validated the method features on two ex-vivo porcine lungs, with and without breathing motion simulation.
- Human breathing motion was simulated using an AMBU bag with a 2-second interval between inspiration phases.
- Target and Geometrical Path Analysis: CT scans of the phantom and of both ex-vivo lungs were performed (see Table 1), and airways were segmented using ‘Thresholding’ and ‘Grow from Seeds’ techniques in 3D Slicer. The target locations were determined as the airways with a diameter constraint imposed to limit movement of the robotic bronchoscope. The phantom contained 75 targets, ex-vivo lung #1 had 52 targets, and ex-vivo lung #2 had 41 targets. The targets were positioned across all airways.
- Each of the phantoms and specimens contained target locations in all the lobes.
- Each target location was marked in the segmented model, and the Local Curvature (LC) and Plane Rotation (PR) were generated along the path from the trachea to the target location and were computed according to the methodology described by Naito et al. (M. Naito, F. Masaki, R. Lisk, H. Tsukada, and N.
- LC was computed using the Menger curvature, which defines curvature as the inverse of the radius of the circle passing through three points in n-dimensional Euclidean space. To calculate the local curvature at a given point along the centerline, the Menger curvature was determined using the point itself, the fifteen preceding points, and the fifteen subsequent points, encompassing approximately 5 mm along the centerline. LC is expressed in [mm⁻¹]. PR measures the angle of rotation of the airway branch on a plane, independent of its angle relative to the trachea.
- This metric is based on the concept that maneuvering the bronchoscope outside the current plane of motion increases the difficulty of advancement.
- the given vector was compared to the current plane of motion of the bronchoscope.
- the plane of motion was initially determined by two vectors in the trachea, establishing a plane that intersects the trachea laterally (on the left-right plane of the human body). If the centerline surpassed a threshold of 0.75 [rad] (42 [deg]) for more than a hundred consecutive points, a new plane was defined. This approach allowed for multiple changes in the plane of motion along one centerline if the path indicated it.
- PR is represented in [rad]. Both LC and PR have been shown to be significant factors in the success rate of advancement with user-controlled robotic bronchoscopes.
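- As a concrete illustration of the two geometric metrics above, the following Python sketch computes the Menger curvature (LC) over a window of centerline points and tracks the plane-of-motion changes used for PR; the choice of the three points within the window and the way a new plane is defined here are simplifying assumptions, not the exact implementation used in the study.

```python
import numpy as np

def menger_curvature(a, b, c):
    # Curvature = 1 / R of the circle through points a, b, c
    # (in mm^-1 when the points are given in mm).
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    cross_norm = np.linalg.norm(np.cross(b - a, c - a))  # = 2 * triangle area
    denom = np.linalg.norm(b - a) * np.linalg.norm(c - b) * np.linalg.norm(a - c)
    return 0.0 if denom == 0.0 else 2.0 * cross_norm / denom

def local_curvature(centerline, i, half_window=15):
    # LC at centerline[i] using the point itself plus the 15 preceding and the
    # 15 following points (~5 mm); here only the window endpoints and the
    # center point are fed to the Menger formula (an assumption).
    lo = max(0, i - half_window)
    hi = min(len(centerline) - 1, i + half_window)
    return menger_curvature(centerline[lo], centerline[i], centerline[hi])

def plane_rotation(directions, initial_normal, threshold=0.75, run_length=100):
    # PR (in rad) of each centerline direction out of the current plane of
    # motion; a new plane is adopted once the threshold is exceeded for more
    # than `run_length` consecutive points, mirroring the 0.75 rad / 100-point rule.
    normal = np.asarray(initial_normal, dtype=float)
    normal /= np.linalg.norm(normal)
    angles, run = [], 0
    for d in directions:
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        angle = abs(np.arcsin(np.clip(np.dot(d, normal), -1.0, 1.0)))
        angles.append(angle)
        run = run + 1 if angle > threshold else 0
        if run > run_length:
            new_normal = np.cross(normal, d)  # plane redefined to contain d
            if np.linalg.norm(new_normal) > 1e-9:
                normal = new_normal / np.linalg.norm(new_normal)
            run = 0
    return angles
```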
- the swine model was placed on a patient bed in the supine position and the robotic catheter was inserted diagonally above the swine model. Vital signs and respiratory parameters were monitored periodically to assess for hemodynamic stability and monitor for respiratory distress. After the setting, the magnitude of breathing motion was confirmed using electromagnetic (EM) tracking sensors (AURORA, NDI, Ontario, Canada) embedded into the peripheral area of four different lobes of the swine model.
- FIG. 12C shows six consecutive breathing cycles measured by the EM tracking sensors as an example of the breathing motion.
- the local camera coordinate frame was calibrated with the robot’s coordinate system, and the robotic software was designed to advance toward the detected airway closest to the green cross placed by the operator.
- One advancement per target was performed and recorded. If the driving algorithm failed, the recording was stopped at the point of failure.
- the primary metric collected in this study was target reachability, defining the success in reaching the target location in each advancement.
- the secondary metric was success at each branching point determined as a binary measurement based on visual assessment of the robot entering the user-defined airway.
- the other metrics included target generation, target lobe, local curvature (LC) and plane rotation (PR) at each branching point, type of branching point, the total time and total path length to reach the target location (if successfully reached), and time to failure location together with airway generation of failure (if not successfully reached).
- Path length was determined as the linear distance advanced by the robot from the starting point to the target or failure location.
- the primary analysis performed in this study was the Chi-square test to analyze the significance of the maximum generation reached and target lobe on target reachability. Second, the influence of branching point type, LC and PR, and lobe segment on the success at branching points was investigated using the Chi-square test.
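- For instance, such a Chi-square test on a success/failure contingency table could be run with SciPy as in the sketch below (the counts shown are illustrative only):

```python
from scipy.stats import chi2_contingency

# Illustrative contingency table only: rows = branching-point type
# (bifurcation, trifurcation), columns = (success, failure) counts.
table = [[380, 19],
         [ 76,  6]]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}, dof={dof}")
```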
- a navigator sending voice commands to the autonomous navigation and/or navigation plan randomly selected the airway at each bifurcation point for the robotic catheter to move into, and ended the autonomous navigation and/or navigation plan when mucus blocked the endoscopic camera view.
- the navigator was not allowed to change the selected airway before the robotic catheter moved into the selected airway, and was not allowed to retract the robotic catheter in the middle of one attempt.
- the navigation from the trachea to the point where the navigation was ended was defined as one attempt.
- the starting point of all attempts was set at 10 mm away from the carina in the trachea.
- Time and force defined below were collected as metrics to compare the autonomous navigation with the navigation by the human operators. All data points during retraction were excluded. When the robotic catheter was moved forward and bent at a bifurcation point, one data point was collected as an independent data point.
- a) Time for bending command: Input commands to control the robotic catheter, including moving forward, retraction, and bending, were recorded at 100 Hz.
- the time for bending command was collected as the summation of the time for the operator or autonomous navigation software to send input commands to bend the robotic catheter at a bifurcation point.
- b) Maximum force applied to driving wire: Force applied to each driving wire to bend the tip section of the robotic catheter was recorded at 100 Hz using a strain gauge (KFRB General-purpose Foil Strain Gage, Kyowa Electronic Instruments, Tokyo, Japan) attached to each driving wire. Then the absolute value of the maximum force of the three driving wires at each bifurcation point was extracted to indirectly evaluate the interaction against the airway wall.
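- A minimal sketch of how these two metrics might be extracted from the 100 Hz logs is given below; the command labels and array layout are assumptions, not the recorded data format:

```python
import numpy as np

SAMPLE_PERIOD_S = 0.01  # commands and forces logged at 100 Hz

def time_for_bending_command(command_log):
    # command_log: sequence of command labels sampled at 100 Hz at one
    # bifurcation; the metric is the total time bending commands were sent.
    bending_samples = sum(1 for c in command_log if str(c).startswith("bend"))
    return bending_samples * SAMPLE_PERIOD_S

def max_wire_force(force_log):
    # force_log: array of shape (n_samples, 3), one column per driving wire;
    # the metric is the largest absolute force on any wire at the bifurcation.
    return float(np.max(np.abs(np.asarray(force_log))))
```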
- the overall success rate at branching points achieved was 95.8%.
- the branching points comprised 399 bifurcations and 82 trifurcations.
- the success rates at bifurcations and trifurcations were 97% and 92%, respectively.
- the success at branching points varied across different lobe segments, with rates of 99% for the left lower lobe, 93% for the left upper lobe, 97% for the right lower lobe, 85% for the right middle lobe, and 94% for the right upper lobe.
- the average LC and PR at successful branching points were respectively 287.5 ± 125.5 [mm⁻¹] and 0.4 ± 0.2 [rad].
- the average LC and PR at failed branching points were respectively 429.5 ± 133.7 [mm⁻¹] and 0.9 ± 0.3 [rad].
- the paired Wilcoxon signed-rank test showed statistical significance of LC (p < 0.001) and PR (p < 0.001). Boxplots showing the significance of LC and PR on success at branching points are presented in FIGS. 14A-14B together with ex-vivo data.
- Using the autonomous method features of the present disclosure, the inventors, via the experiment, successfully accessed the targets (as shown in FIG. 12B). These results underscore the promising potential of the method(s) and related features of the present disclosure, which may be used to redefine the standards of robotic bronchoscopy.
- FIGS. 13A-13C illustrate views of at least one embodiment of a navigation algorithm performing at various branching points in a phantom
- FIG.13A shows a path on which the target location (dot) was not reached (e.g., the algorithm may not have traversed the last bifurcation where an airway on the right was not detected)
- FIG.13B shows a path on which the target location (dot) was successfully reached
- FIG. 13C shows a path on which the target location was also successfully reached.
- the highlighted squares represent estimated depth maps with detected airways at each visible branching point on paths toward target locations.
- the black frame (or a frame of another set/first color) represents success at a branching point and the frame of a set or predetermined color (e.g., red or other different/second color) (e.g., frame 1006 may be the frame of a red or different/second color as shown in the bottom right frame of FIG.13A) represents a failure at a branching point. All three targets were in RLL.
- Red pixel(s) (e.g., the pixel 1002) represent the center of a detected airway, the green cross (e.g., the cross or plus sign 1003) represents the desired direction set by the user, and the blue segment (e.g., the segment 1004) represents the direction vector toward the detected airway closest to the cross.
- FIGS.14A-14B illustrate graphs showing success at branching point(s) with respect to Local Curvature (LC) and Plane Rotation (PR), respectively, for all data combined in one or more embodiments.
- FIGS. 14A-14B show the statistically significant difference in LC (see FIG. 14A) and PR (see FIG. 14B) between successful and failed performance at branching points.
- LC is expressed in [mm⁻¹] and PR in [rad].
- Ex-vivo Specimen/Animal Study: The target reachability achieved without breathing motion was 77% in ex-vivo #1 and 78% in ex-vivo #2. The target reachability achieved with breathing motion was 69% in ex-vivo #1 and 76% in ex-vivo #2. In total, 774 branching points were attempted in ex-vivo #1 and 583 in ex-vivo #2 for autonomous robotic advancements.
- the overall success rate at branching points achieved was 97% in ex-vivo #1 and 97% in ex-vivo#2 without BM, and 96% in ex-vivo #1 and 97% in ex-vivo#2 with BM.
- the branching points comprised 327 bifurcations and 62 trifurcations in ex-vivo#1 and 255 bifurcations and 38 trifurcations in ex-vivo#2 without BM.
- the branching points comprised 326 bifurcations and 59 trifurcations in ex-vivo#1 and 252 bifurcations and 38 trifurcations in ex-vivo#2 with BM.
- the success rates without BM at bifurcations and trifurcations were respectively 98% and 92% in ex-vivo#1, and 97% and 95% in ex-vivo#2.
- the success rates with BM at bifurcations and trifurcations were respectively 96% and 93% in ex-vivo#1, and 96% and 97% in ex-vivo#2.
- the Chi-square test demonstrated a statistically significant difference (p < 0.001) in success at branching points between the lobe segments for all ex-vivo data combined.
- the average LC and PR at successful branching points were respectively 211.9 ± 112.6 [mm⁻¹] and 0.4 ± 0.2 [rad] for ex-vivo#1, and 184.5 ± 110.4 [mm⁻¹] and 0.6 ± 0.2 [rad] for ex-vivo#2.
- the average LC and PR at failed branching points were respectively 393.7 ± 153.5 [mm⁻¹] and 0.6 ± 0.3 [rad] for ex-vivo#1, and 369.5 ± 200.6 [mm⁻¹] and 0.7 ± 0.4 [rad] for ex-vivo#2.
- FIGS. 14A-14B represent the comparison of LC and PR for successful and failed branching points, for all data (phantom, ex-vivos, ex-vivos with breathing motion) combined.
- results of Local Curvature (LC) and Plane Rotation (PR) were displayed on three advancement paths towards different target locations with highlighted, color-coded values of LC and PR along the paths.
- the views illustrated impact(s) of Local Curvature (LC) and Plane Rotation (PR) on one or more performances of one or more embodiments of a navigation algorithm where one view illustrated a path toward a target location in RML of ex vivo #1, which was reached successfully, where another view illustrated a path toward a target location in LLL of ex vivo #1, which was reached successfully, and where yet another view illustrated a path toward a target location in RLL of the phantom, which failed at a location marked with a square (e.g., a red square).
- FIGS. 15A-15C illustrate three advancement paths towards different target locations (see blue dots) using one or more embodiments of navigation feature(s) with and without BM.
- FIGS. 15A-15C illustrate one or more impacts of breathing motion on a performance of the one or more navigation algorithm(s).
- FIG. 15A shows a path on which the target location (ex vivo #1 LLL) was reached with and without breathing motion (BM)
- FIG. 15B shows a path on which the target location (ex vivo #1 RLL) was not reached without BM but was reached with BM (such as result illustrates that at times BM may help the algorithm(s) with detecting and entering the right airway for one or more embodiments of the present disclosure)
- FIG. 15C shows a path on which the target location (ex vivo #1 RML) was reached without BM but was not reached with BM (such a result illustrates that at times BM may affect performance of an algorithm in one or more situations; that said, the algorithms of the present disclosure remain highly effective under such a condition).
- the highlighted squares represent estimated depth maps with detected airways at each visible branching point on paths toward target locations.
- the black frame represents success at a branching point and the red frame represents a failure at a branching point.
- Statistical Analysis: The hypothesis that low local curvatures and plane rotations along the path might increase the likelihood of success at branching points was correct. Additionally, the hypothesis that breathing motion simulation would not impose a statistically significant difference in success at branching points, and hence in total target reachability, was also correct.
- In-vivo animal study: In total, 112 and 34 data points were collected from the human operators and the autonomous navigation, respectively.
- FIG. 16A illustrates the box plots for time for the operator or the autonomous navigation to bend the robotic catheter
- FIG.16B illustrates the box plots for the maximum force for the operator or the autonomous navigation at each bifurcation point.
- FIGS. 18A-18B show scatter plots for the time to bend the robotic catheter (FIG. 18A) and the maximum force applied to the driving wires (FIG. 18B).
- the inventors inferred that the properties of the anatomical airway structure quantified by LC and PR statistically significantly influence the success at branching points and hence target reachability.
- the presented method features show that, by using autonomous driving, physicians may safely navigate toward the target by controlling a cursor on the computer screen.
- the autonomous driving was compared with two human operators using a gamepad controller in a living swine model under breathing motion.
- Our blinded comparison study revealed that the autonomous driving took less time to bend the robotic catheter and applied less force to the anatomy than the navigation by human operator using a gamepad controller, suggesting the autonomous driving successfully identified the center of the airway in the camera view even with breathing motion and accurately moved the robotic catheter into the identified airway.
- One or more embodiments of the present disclosure is in accordance with two studies that recently introduced the approach for autonomous driving in the lung (see e.g., J. Sganga, et al., RAL, pp.1–10 (2019), which is incorporated by reference herein in its entirety, and Y. Zou, et al., IEEE Transactions on Medical Robotics and Bionics, vol.4, no.3, pp.588- 598 (2022), which is incorporated by reference herein in its entirety).
- the first study reports 95% target reachability, with the robot reaching the target in 19 out of 20 trials, but it is limited to 4 targets (J. Sganga, et al.).
- the second study (Y. Zou, et al.) does not report any details on the number of targets, the location of the targets within lung anatomy, the origin of the human lung phantom, or the statistical analysis to identify the reasons for failure.
- the only metric used is the time to target.
- Both of these Sganga, et al. and Zou, et al. studies differ from the present disclosure in numerous ways, including, but not limited to, in the design of the method(s) of the present disclosure and the comprehensiveness of clinical validation.
- the methods of those two studies are based on airway detection from supervised learning algorithms.
- one or more methods of the present disclosure first estimate the bronchoscopic depth map using an unsupervised generative learning technique (A. Banach, F. King, F. Masaki, H. Tsukada, N.
- One or more embodiments of the presented method of the present disclosure may be dependent on the quality of bronchoscopic depth estimation by 3cGAN (see e.g., A. Banach, F. King, F. Masaki, H. Tsukada, N.
- the one or more trained models or AI-networks is or uses one or a combination of the following: a neural net model or neural network model, a deep convolutional neural network model, a recurrent neural network model with long short-term memory that can take temporal relationships across images or frames into account, a generative adversarial network (GAN) model, a consistent generative adversarial network (cGAN) model, a three cycle- consistent generative adversarial network (3cGAN) model, a model that can take temporal relationships across images or frames into account, a model that can take temporal relationships into account including tissue location(s) during pullback in a vessel and/or including tissue characterization data during
- FIGS. 17A-17D illustrate one or more examples of depth estimation failure and artifact robustness that may be observed in one or more embodiments.
- FIG. 17A shows a scenario where the depth map (right side of FIG. 17A) was not estimated accurately and therefore the airway detection algorithm did not detect the airway partially visible on the right side of the bronchoscopic image (left side of FIG. 17A).
- FIG. 17B shows a scenario where the depth map estimated the airways accurately despite presence of debris.
- FIG.17C shows a scenario opposite to the one presented in FIG.17A where the airway on the right side of the bronchoscopic image (left side of FIG. 17C) is more visible and the airway detection algorithm detects it successfully.
- FIG. 17D shows a scenario where a visual artifact is ignored by the depth estimation algorithm and both visible airways are detected in the depth map.
- Another possible scenario may be related to the fact that the control algorithm should guide the robot along the centerline. Dynamic LSE operates to solve that issue and to guide the robot towards the centerline when not at a branching point. The inventors also identified the failure at branching points as a result of lacking short-term memory, and that using short-term memory may increase success rate(s) at branching points.
- the algorithm may detect some of the visible airways only for a short moment, not leaving enough time for the control algorithm to react.
- a potential solution would involve such short-term memory that ‘remembers’ the detected airways and forces the control algorithm to make the bronchoscopic camera ‘look around’ and make sure that no airways were missed.
- Such a ‘look around’ mode implemented between certain time or distance intervals may also prevent from missing airways that were not visible in the bronchoscopic image in one or more embodiments of the present disclosure.
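- One possible (hypothetical) form of such a short-term memory is sketched below: detections are merged by proximity and kept alive for a short time-to-live so that briefly visible airways remain available to the control loop; the class name, time constant, and merge radius are assumptions.

```python
import time

class AirwayMemory:
    """Keep recently detected airway centers 'alive' for a short time so the
    control loop can still react to detections that flicker between frames."""

    def __init__(self, ttl_s=2.0, merge_px=25):
        self.ttl_s = ttl_s        # how long a detection is remembered
        self.merge_px = merge_px  # detections closer than this are the same airway
        self._entries = []        # list of (center_xy, last_seen_time)

    def update(self, detections, now=None):
        now = time.monotonic() if now is None else now
        for det in detections:
            for i, (center, _) in enumerate(self._entries):
                dx, dy = center[0] - det[0], center[1] - det[1]
                if dx * dx + dy * dy <= self.merge_px ** 2:
                    self._entries[i] = (det, now)  # refresh an existing airway
                    break
            else:
                self._entries.append((det, now))   # remember a new airway
        # Drop airways not seen for longer than the time-to-live.
        self._entries = [(c, t) for c, t in self._entries if now - t <= self.ttl_s]
        return [c for c, _ in self._entries]
```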
- FIGS. 19-21 illustrate features of at least one embodiment of a continuum robot apparatus 10 configured to implement automatic correction of a direction to which a tool channel or a camera moves or is bent in a case where a displayed image is rotated.
- the continuum robot apparatus 10 makes it possible to keep a correspondence between a direction on a monitor (top, bottom, right, or left of the monitor) and a direction in which the tool channel or the camera moves on the monitor according to a particular directional command (up, down, turn right, or turn left), even if the displayed image is rotated.
- the continuum robot apparatus 10 also may be used with any of the navigation planning, autonomous navigation, movement detection, and/or control features of the present disclosure.
- the continuum robot apparatus 10 may include one or more of a continuum robot 11, an image capture unit 20, an input unit 30, a guide unit 40, a controller 50, and a display 60.
- the image capture unit 20 may be a camera or other image capturing device.
- the continuum robot 11 may include one or more flexible portions 12 connected together and configured so that the one or more flexible portions 12 may be curved or rotated about in different directions.
- the continuum robot 11 may include a drive unit 13, a movement drive unit 14, and a linear drive or guide 15. The movement drive unit 14 operates to cause the drive unit 13 to move along the linear drive or guide 15.
- the input unit 30 has an input element 32 and is configured to allow a user to positionally adjust the flexible portions 12 of the continuum robot 11.
- the input unit 30 may be configured as a mouse, a keyboard, joystick, lever, or another shape to facilitate user interaction.
- the user may provide an operation input through the input element 32, and the continuum robot apparatus 10 may receive information of the input element 32 and one or more input/output devices, which may include, but are not limited to, a receiver, a transmitter, a speaker, a display, an imaging sensor, a user input device, which may include a keyboard, a keypad, a mouse, a position tracked stylus, a position tracked probe, a foot switch, a microphone, etc.
- the guide unit 40 is a device that includes one or more buttons, knobs, switches, etc. 42, 44, that a user may use to adjust various parameters of the continuum robot apparatus 10, such as the speed (e.g., rotational speed, translational speed, etc.), angle or plane, or other parameters.
- FIG.21 illustrates at least one embodiment of a controller 50 according to one or more features of the present disclosure.
- the controller 50 may be configured to control the elements of the continuum robot apparatus 10 and has one or more of a CPU 51, a memory 52, a storage 53, an input and output (I/O) interface 54, and communication interface 55.
- the continuum robot apparatus 10 may be interconnected with medical instruments or a variety of other devices, and may be controlled independently, externally, or remotely by the controller 50.
- one or more features of the continuum robot apparatus 10 and one or more features of the continuum robot or catheter or probe system 1000 may be used in combination or alternatively to each other.
- the memory 52 may be used as a work memory or may include any memory discussed in the present disclosure.
- the storage 53 stores software or computer instructions, and may be any type of storage, data storage 150, or other memory or storage discussed in the present disclosure.
- the CPU 51, which may include one or more processors, circuitry, or a combination thereof, executes the software loaded into the memory 52 (or any other memory discussed herein).
- the I/O interface 54 operates to input information from the continuum robot apparatus 10 to the controller 50 and to output information for displaying to the display 60 (or any other display discussed herein, such as, but not limited to, display 1209 discussed below).
- the communication interface 55 may be configured as a circuit or other device for communicating with components included in the apparatus 10, and with various external apparatuses connected to the apparatus via a network.
- the communication interface 55 may store information to be output in a transfer packet and may output the transfer packet to an external apparatus via the network by communication technology such as Transmission Control Protocol/Internet Protocol (TCP/IP).
- the apparatus may include a plurality of communication circuits according to a desired communication form.
- the controller 50 may be communicatively interconnected or interfaced with one or more external devices including, for example, one or more data storages (e.g., the data storage 150, the SSD or storage drive 1207 discussed below, or any other storage discussed herein), one or more external user input/output devices, or the like.
- the controller 50 may interface with other elements including, for example, one or more of an external storage, a display, a keyboard, a mouse, a sensor, a microphone, a speaker, a projector, a scanner, a display, an illumination device, etc.
- the display 60 may be a display device configured, for example, as a monitor, an LCD (liquid crystal display), an LED display, an OLED (organic LED) display, a plasma display, an organic electroluminescence panel, or any other display discussed herein (including, but not limited to, displays 101-1, 101-2, etc.). Based on the control of the apparatus, a screen may be displayed on the display 60 showing one or more images, such as, but not limited to, one or more images being captured, captured images, captured moving images recorded on the storage unit, etc. The components may be connected together by a bus 56 so that the components may communicate with each other.
- the bus 56 transmits and receives data between these pieces of hardware connected together, or the bus 56 transmits a command from the CPU 51 to the other pieces of hardware.
- the components may be implemented by one or more physical devices that may be coupled to the CPU 51 through a communication channel.
- the controller 50 may be implemented using circuitry in the form of ASIC (application specific integrated circuits) or other similar circuits as discussed herein.
- the controller 50 may be implemented as a combination of hardware and software, where the software is loaded into a processor from a memory or over a network connection.
- Functionality of the controller 50 may be stored on a storage medium, which may include, but is not limited to, RAM (random-access memory), magnetic or optical drive, diskette, cloud storage, etc.
- the units described throughout the present disclosure are exemplary and/or preferable modules for implementing processes described in the present disclosure. However, one or more embodiments of the present disclosure are not limited thereto.
- the term “unit”, as used herein, may generally refer to firmware, software, hardware, or other component, such as circuitry or the like, or any combination thereof, that is used to effectuate a purpose.
- the modules may be hardware units (such as circuitry, firmware, a field programmable gate array, a digital signal processor, an application specific integrated circuit or the like) and/or software modules (such as a computer readable program, instructions stored in a memory or storage medium, etc.).
- the modules for implementing the various steps are not described exhaustively above.
- One or more navigation planning, autonomous navigation, movement detection, and/or control features of the present disclosure may be used with one or more image correction or adjustment features in one or more embodiments.
- One or more adjustments, corrections, or smoothing functions for a catheter or probe device and/or a continuum robot may adjust a path of one or more sections or portions of the catheter or probe device and/or the continuum robot (e.g., the continuum robot 104, the continuum robot device 10, etc.), and one or more embodiments may make a corresponding adjustment or correction to an image view.
- the medical tool may be a bronchoscope as aforementioned.
- a computer such as the console or computer 1200, 1200’, may perform any of the steps, processes, and/or techniques discussed herein for any apparatus, bronchoscope, robot, and/or system being manufactured or used, any of the embodiments shown in FIGS.1-28, any other bronchoscope, robot, apparatus, or system discussed herein or included herewith, etc.
- a bronchoscope or robotic bronchoscope may perform imaging, navigation planning, autonomous navigation, movement detection, and/or control for a continuum robot; may correct or adjust an image or a path/state (or one or more sections or portions) of a continuum robot (or other probe or catheter device or system); may perform any other measurement or process discussed herein; may perform continuum robot and/or bronchoscope method(s) or algorithm(s); and/or may control at least one bronchoscope and/or continuum robot device/apparatus/robot, system, and/or storage medium, digital as well as analog.
- a computer such as the console or computer 1200, 1200’, may be dedicated to control and/or use continuum robot and/or bronchoscope devices, systems, methods, and/or storage mediums for use therewith described herein.
- the one or more detectors, sensors, cameras, or other components of the bronchoscope, robotic bronchoscope, robot, continuum robot, apparatus, system, method, or storage medium embodiments may communicate with, or be controlled by, a processor or a computer such as, but not limited to, an image processor or display controller 100, a controller 102, a CPU 120, a controller 50, a CPU 51, a processor or computer 1200, 1200’ (see e.g., at least FIGS. 1-5, 19-21, and 22-28), a combination thereof, any other processor(s) discussed herein, etc.
- the image processor may be a dedicated image processor or a general purpose processor that is configured to process images.
- the computer 1200, 1200’ may be used in place of, or in addition to, the image processor or display controller 100 and/or the controller 102 (or any other processor or controller discussed herein, such as, but not limited to, the controller 50, the CPU 51, the CPU 120, etc.).
- the image processor may include an ADC and receive analog signals from the one or more detectors or sensors of the bronchoscopes, robots, apparatuses, systems (e.g., system 1000 (or any other system discussed herein)), methods, storage mediums, etc.
- the image processor may include one or more of a CPU, DSP, FPGA, ASIC, or some other processing circuitry.
- the image processor may include memory for storing image, data, and instructions.
- the image processor may generate one or more images based on the information provided by the one or more detectors, sensors, or cameras.
- a computer or processor discussed herein, such as, but not limited to, a processor of the devices, apparatuses, bronchoscopes, or systems of FIGS.1-5 and 19-21, the computer 1200, the computer 1200’, the image processor, etc. may also include one or more components further discussed herein below (see e.g., FIGS.22-28).
- Electrical analog signals obtained from the output of the system 1000 or the components thereof, and/or from the devices, bronchoscopes, apparatuses, or systems of FIGS.1-5 and 19-21, may be converted to digital signals to be analyzed with a computer, such as, but not limited to, the computers or controllers 100, 102 of FIG. 1, the computer 1200, 1200’, etc.
- a computer such as the computer or controllers 100, 102 of FIG.1, the console or computer 1200, 1200’, etc., may be dedicated to the autonomous navigation/planning/control and the monitoring of the bronchoscopes, robotic bronchoscopes, devices, systems, methods, and/or storage mediums and/or of continuum robot devices, systems, methods and/or storage mediums described herein.
- the electric signals used for imaging may be sent to one or more processors, such as, but not limited to, the processors or controllers 100, 102 of FIGS.1-5, a computer 1200 (see e.g., FIG.22), a computer 1200’ (see e.g., FIG.23), etc. as discussed further below, via cable(s) or wire(s), such as, but not limited to, the cable(s) or wire(s) 113 (see FIG.24).
- the computers or processors discussed herein are interchangeable, and may operate to perform any of the feature(s) and method(s) discussed herein.
- a computer system 1200 may include a central processing unit (“CPU”) 1201, a ROM 1202, a RAM 1203, a communication interface 1205, a hard disk (and/or other storage device) 1204, a screen (or monitor interface) 1209, a keyboard (or input interface; may also include a mouse or other input device in addition to the keyboard) 1210, and a BUS (or “Bus”) or other connection lines (e.g., connection line 1213) between one or more of the aforementioned components (e.g., as shown in FIG. 22).
- a computer system 1200 may comprise one or more of the aforementioned components.
- a computer system 1200 may include a CPU 1201, a RAM 1203, an input/output (I/O) interface (such as the communication interface 1205), and a bus (which may include one or more lines 1213 as a communication system between components of the computer system 1200; in one or more embodiments, the computer system 1200 and at least the CPU 1201 thereof may communicate with the one or more aforementioned components of a robot device, apparatus, bronchoscope or robotic bronchoscope, or system using same, and/or a continuum robot device or system using same, such as, but not limited to, the system 1000, the devices/systems of FIGS. 1-5 and 19-21, etc.).
- the CPU 1201 is configured to read and perform computer-executable instructions stored in a storage medium.
- the computer-executable instructions may include those for the performance of the methods and/or calculations described herein.
- the computer system 1200 may include one or more additional processors in addition to CPU 1201, and such processors, including the CPU 1201, may be used for controlling and/or manufacturing a device, system or storage medium for use with same or for use with any continuum robot, bronchoscope, or robotic bronchoscope technique(s), and/or use with imaging, navigation planning, autonomous navigation, movement detection, and/or control technique(s) discussed herein.
- the system 1200 may further include one or more processors connected via a network connection (e.g., via network 1206).
- the CPU 1201 and any additional processor being used by the system 1200 may be located in the same telecom network or in different telecom networks (e.g., performing, manufacturing, controlling, calculation, and/or using technique(s) may be controlled remotely).
- the I/O or communication interface 1205 provides communication interfaces to input and output devices, which may include the one or more of the aforementioned components of any of the bronchoscopes, robotic bronchoscopes, apparatuses, devices, and/or systems discussed herein (e.g., the controller 100, the controller 102, the displays 101-1, 101- 2, the actuator 103, the continuum device 104, the operating portion or controller 105, the EM tracking sensor (or a camera) 106, the position detector 107, the rail 110, etc.), a microphone, a communication cable and a network (either wired or wireless), a keyboard 1210, a mouse (see e.g., the mouse 1211 as shown in FIG.23), a touch screen or screen 1209, a light pen and so on.
- the communication interface of the computer 1200 may connect to other components discussed herein via line 113 (as diagrammatically shown in FIG.22).
- the Monitor interface or screen 1209 provides communication interfaces thereto.
- an EM sensor 106 may be replaced by a camera 106, and the position detector 107 may be optional.
- Any methods and/or data of the present disclosure such as, but not limited to, the methods for using/guiding and/or controlling a bronchoscope, robotic bronchoscope, continuum robot, or catheter device, apparatus, system, or storage medium for use with same and/or method(s) for imaging, performing tissue, lesion, or sample characterization or analysis, performing diagnosis, planning and/or examination, for controlling a bronchoscope or robotic bronchoscope, device/apparatus, or system, for performing navigation planning, autonomous navigation, movement detection, and/or control technique(s), for performing adjustment or smoothing techniques (e.g., to a path of, to a pose or position of, to a state of, or to one or more sections or portions of, a continuum robot, a catheter or a probe), and/or for performing imaging and/or image correction or adjustment technique(s), (or any other technique(s)) as discussed herein, may be stored on a computer-readable storage medium.
- a computer-readable and/or writable storage medium used commonly, such as, but not limited to, one or more of a hard disk (e.g., the hard disk 1204, a magnetic disk, etc.), a flash memory, a CD, an optical disc (e.g., a compact disc (“CD”), a digital versatile disc (“DVD”), a Blu-ray TM disc, etc.), a magneto-optical disk, a random-access memory (“RAM”) (such as the RAM 1203), a DRAM, a read only memory (“ROM”), a storage of distributed computing systems, a memory card, or the like (e.g., other semiconductor memory, such as, but not limited to, a non-volatile memory card, a solid state drive (SSD) (see SSD 1207 in FIG. 23), etc.), may be used in one or more embodiments.
- the computer-readable storage medium may be a non-transitory computer- readable medium, and/or the computer-readable medium may comprise all computer- readable media, with the sole exception being a transitory, propagating signal in one or more embodiments.
- the computer-readable storage medium may include media that store information for predetermined, limited, or short period(s) of time and/or only in the presence of power, such as, but not limited to Random Access Memory (RAM), register memory, processor cache(s), etc.
- Embodiment(s) of the present disclosure may also be realized by a computer and/or neural network (or other AI architecture/structure/models) of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a “non-transitory computer-readable storage medium”) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
- the methods, bronchoscopes or robotic bronchoscopes, devices, systems, and computer-readable storage mediums related to the processors may be achieved utilizing suitable hardware, such as that illustrated in the figures.
- Functionality of one or more aspects of the present disclosure may be achieved utilizing suitable hardware, such as that illustrated in FIG. 22.
- Such hardware may be implemented utilizing any of the known technologies, such as standard digital circuitry, any of the known processors that are operable to execute software and/or firmware programs, any neural networks (and/or any other artificial intelligence (AI) structure/architecture/models/etc. that may be used to perform any of the technique(s) discussed herein), one or more programmable digital devices or systems, such as programmable read only memories (PROMs), programmable array logic devices (PALs), etc.
- the CPU 1201 (as shown in FIG. 22 or FIG. 23, and/or which may be included in the computer, processor, controller, and/or CPU 120 of FIGS. 1-5), the CPU 51, and/or the CPU 120 may also include and/or be made of one or more microprocessors, nanoprocessors, one or more graphics processing units (“GPUs”; also called a visual processing unit (“VPU”)), one or more Field Programmable Gate Arrays (“FPGAs”), or other types of processing components (e.g., application specific integrated circuit(s) (ASIC)).
- the various aspects of the present disclosure may be implemented by way of software and/or firmware program(s) that may be stored on suitable storage medium (e.g., computer-readable storage medium, hard drive, etc.) or media (such as floppy disk(s), memory chip(s), etc.) for transportability and/or distribution.
- the computer may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
- the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
- the computers or processors (e.g., 100, 102, 120, 50, 51, 1200, 1200’, any other computer or processor discussed herein, etc.) may include the aforementioned CPU structure, or may be connected to such CPU structure for communication therewith.
- The hardware structure of an alternative embodiment of a computer or console 1200’ is shown in FIG. 23.
- the computer 1200’ includes a central processing unit (CPU) 1201, a graphical processing unit (GPU) 1215, a random access memory (RAM) 1203, a network interface device 1212, an operation interface 1214 such as a universal serial bus (USB) and a memory such as a hard disk drive or a solid-state drive (SSD) 1207.
- the computer or console 1200’ includes a display 1209 (and/or the displays 101-1, 101-2, any other display(s) discussed herein, etc.).
- the computer 1200’ may connect with one or more components of a system (e.g., the systems/apparatuses/bronchoscopes/robotic bronchoscopes discussed herein; the systems/apparatuses/bronchoscopes/robotic bronchoscopes and/or any other device, apparatus, system, etc. of any of the figures included herewith (e.g., the systems/apparatuses of FIGS.1-5, 19-21, etc.)) via the operation interface 1214 or the network interface 1212.
- the operation interface 1214 is connected with an operation unit such as a mouse device 1211, a keyboard 1210 or a touch panel device.
- the computer 1200’ may include two or more of each component.
- the CPU 1201 or the GPU 1215 may be replaced by the field-programmable gate array (FPGA), the application-specific integrated circuit (ASIC) or other processing unit depending on the design of a computer, such as the computer 1200, the computer 1200’, etc.
- At least one computer program is stored in the SSD 1207 (or any other storage device or drive discussed herein), and the CPU 1201 loads the at least one program onto the RAM 1203, and executes the instructions in the at least one program to perform one or more processes described herein, as well as the basic input, output, calculation, memory writing, and memory reading processes.
- the computer, such as the computer 1200, 1200’, and/or the computers, processors, and/or controllers of FIGS. 1-5 and/or FIGS. 19-21 (and/or of any other figure(s) included herewith), communicates with the one or more components of the apparatuses/systems/bronchoscopes/robotic bronchoscopes/robots of FIGS. 1-5, of FIGS. 19-21, of any other figure(s) included herewith, and/or of any other apparatuses/systems/bronchoscopes/robotic bronchoscopes/robots/etc. discussed herein, to perform imaging, and reconstructs an image from the acquired intensity data.
- the monitor or display 1209 displays the reconstructed image, and the monitor or display 1209 may display other information about the imaging condition or about an object, target, or sample to be imaged.
- the monitor 1209 also provides a graphical user interface for a user to operate a system, for example when performing CT, MRI, or other imaging technique(s), including, but not limited to, controlling continuum robots/bronchoscopes/robotic bronchoscopes/devices/systems, and/or performing imaging, navigation planning, autonomous navigation, movement detection, and/or control technique(s).
- An operation signal is input from the operation unit (e.g., such as, but not limited to, a mouse device 1211, a keyboard 1210, a touch panel device, etc.) into the operation interface 1214 in the computer 1200’, and, corresponding to the operation signal, the computer 1200’ instructs the apparatus/bronchoscope/robotic bronchoscope/system (e.g., the system 1000, the systems/apparatuses of FIGS. 1-5, the systems/apparatuses of FIGS. 19-21, any other system/apparatus discussed herein, any of the apparatus(es)/bronchoscope(s)/robotic bronchoscope(s)/system(s) discussed herein, etc.) to start or end the imaging, and/or to start or end bronchoscope/robotic bronchoscope/device/system/continuum robot control(s) and/or performance of imaging, correction, adjustment, and/or smoothing technique(s).
- the camera or imaging device as aforementioned may have interfaces to communicate with the computers 1200, 1200’ (or any other computer or processor discussed herein) to send and receive the status information and the control signals.
- AI structure(s), such as, but not limited to, residual networks, neural networks, convolutional neural networks, GANs, cGANs, etc., and/or other types of AI structure(s) and/or network(s) may be used.
- the below discussed network/structure examples are illustrative only, and any of the features of the present disclosure may be used with any AI structure or network, including AI networks that are less complex than the network structures discussed below (e.g., including such structure as shown in FIGS. 24-28).
- one or more processors or computers 1200, 1200’ may be part of a system in which the one or more processors or computers 1200, 1200’ (or any other processor discussed herein) communicate with other devices (e.g., a database 1603, a memory 1602 (which may be used with or replaced by any other type of memory discussed herein or known to those skilled in the art), an input device 1600, an output device 1601, etc.).
- one or more models may have been trained previously and stored in one or more locations, such as, but not limited to, the memory 1602, the database 1603, etc.
- one or more models and/or data discussed herein may be input or loaded via a device, such as the input device 1600.
- a user may employ an input device 1600 (which may be a separate computer or processor, a keyboard such as the keyboard 1210, a mouse such as the mouse 1211, a microphone, a screen or display 1209 (e.g., a touch screen or display), or any other input device known to those skilled in the art).
- an input device 1600 may not be used (e.g., where user interaction is eliminated by one or more artificial intelligence features discussed herein).
- the output device 1601 may receive one or more outputs discussed herein to perform coregistration, navigation planning, autonomous navigation, movement detection, control, and/or any other process discussed herein.
- the database 1603 and/or the memory 1602 may have outputted information (e.g., trained model(s), detected marker information, image data, test data, validation data, training data, coregistration result(s), segmentation model information, object detection/regression model information, combination model information, etc.) stored therein. That said, one or more embodiments may include several types of data stores, memory, storage media, etc. as discussed above, and such storage media, memory, data stores, etc. may be stored locally or remotely.
- the input may be the entire image frame or frames, and the output may be the centroid coordinates of a target, an octagon, circle or other geometric shape used or discussed herein, one or more airways, and/or coordinates of a portion of a catheter or probe.
- In FIGS. 25-27, an example of an input image (on the left side of each of FIGS. 25-27) and a corresponding output image (on the right side of each of FIGS. 25-27) are illustrated for regression model(s).
- At least one architecture of a regression model is shown in FIG. 25. In at least the embodiment of FIG. 25, the regression model may use a combination of one or more convolution layers 900, one or more max-pooling layers 901, and one or more fully connected dense layers 902. The model is not limited to the kernel size, width/number of filters (output size), and stride sizes shown for each layer (e.g., in the left convolution layer of FIG. 24, the kernel size is “3x3”, the width/# of filters (output size) is “64”, and the stride size is “2”). In one or more embodiments, another hyper-parameter search with a fixed optimizer and with a different width may be performed, and at least one embodiment example of a model architecture for a convolutional neural network for this scenario is shown in FIG. 26.
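- In the spirit of that description (and only as an illustrative sketch, not the patented architecture), a regression CNN combining 3x3 convolutions with 64 filters and stride 2, max-pooling, and fully connected layers that regress centroid coordinates might look as follows in PyTorch; layer counts and sizes are assumptions.

```python
import torch
import torch.nn as nn

class CentroidRegressor(nn.Module):
    # Sketch: stacked 3x3 convolutions (64 filters, stride 2), max-pooling,
    # then dense layers regressing (x, y) centroid coordinates.
    def __init__(self, in_channels=3, out_coords=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, out_coords),
        )

    def forward(self, x):
        return self.head(self.features(x))

# Example: a batch of two 224x224 RGB frames -> two (x, y) centroid estimates.
coords = CentroidRegressor()(torch.randn(2, 3, 224, 224))
```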
- One or more embodiments may use one or more features for a regression model as discussed in “Deep Residual Learning for Image Recognition” to Kaiming He, et al., Microsoft Research, December 10, 2015 (https://arxiv.org/pdf/1512.03385.pdf), which is incorporated by reference herein in its entirety.
- FIG. 27 shows at least a further embodiment example of a created architecture of or for a regression model(s).
- the output from a segmentation model is a “probability” of each pixel that may be categorized as a target or as an estimate (incorrect) or actual (correct) match
- post-processing after prediction via the trained segmentation model may be developed to better define, determine, or locate the final coordinate of catheter location and/or determine the navigation planning, autonomous navigation, movement detection, and/or control status of the catheter or continuum robot.
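- Such post-processing might, for example, threshold the probability map and return the centroid of the largest connected region, as in the sketch below (the 0.5 threshold and function name are assumptions):

```python
import numpy as np
import cv2

def probability_map_to_coordinate(prob_map, threshold=0.5):
    """Threshold a per-pixel probability map and return the centroid of the
    largest connected region as the final (x, y) coordinate, or None."""
    mask = (prob_map >= threshold).astype(np.uint8)
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n_labels <= 1:
        return None  # nothing above threshold
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return tuple(centroids[largest])  # (x, y) in pixel coordinates
```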
- One or more embodiments of a semantic segmentation model may be performed using the One-Hundred Layers Tiramisu method discussed in “The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation” to Simon Jégou, et al., Montreal Institute for Learning Algorithms, published October 31, 2017 (https://arxiv.org/pdf/1611.09326.pdf), which is incorporated by reference herein in its entirety.
- a segmentation model may be used in one or more embodiments, for example, as shown in FIG. 28. At least one embodiment may utilize an input 600 as shown to obtain an output 605 of at least one embodiment of a segmentation model method.
- one or more features such as, but not limited to, convolution 601, concatenation 603, transition up 605, transition down 604, dense block 602, etc., may be employed by slicing the training data set.
- a slicing size may be one or more of the following: 100 x 100, 224 x 224, 512 x 512.
- a batch size (of images in a batch) may be one or more of the following: 2, 4, 8, 16, and, from the one or more experiments performed, a bigger batch size typically performs better (e.g., with greater accuracy).
- 16 images/batch may be used.
- the optimization of all of these hyper-parameters depends on the size of the available data set as well as the available computer/computing resources; thus, once more data is available, different hyper-parameter values may be chosen.
- steps/epoch may be 100, and the epochs may be greater than (>) 1000.
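- Collected together, the hyper-parameter choices listed above might be expressed as a simple configuration such as the following (the dictionary itself is illustrative only):

```python
# Illustrative training configuration using the values listed above.
SEGMENTATION_TRAINING_CONFIG = {
    "slice_sizes": [(100, 100), (224, 224), (512, 512)],  # candidate tile sizes
    "batch_size": 16,          # larger batches tended to perform better
    "steps_per_epoch": 100,
    "epochs": 1000,            # ">1000" in the study; 1000 used as a floor here
}
```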
- a convolutional autoencoder (CAE) may be used.
- the present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with continuum robot devices, systems, methods, and/or storage mediums and/or with endoscope devices, systems, methods, and/or storage mediums.
- continuum robot devices, systems, methods, and/or storage mediums are disclosed in at least: U.S. Provisional Pat. App. No. 63/150,859, filed on February 18, 2021, the disclosure of which is incorporated by reference herein in its entirety.
- Such endoscope devices, systems, methods, and/or storage mediums are disclosed in at least: U.S. Pat. App. No.
- an imaging apparatus or system such as, but not limited to, a robotic bronchoscope and/or imaging devices or systems, discussed herein may have or include three bendable sections.
- the visualization technique(s) and methods discussed herein may be used with one or more imaging apparatuses, systems, methods, or storage mediums of U.S. Prov. Pat. App. No. 63/377,983, filed on September 30, 2022, the disclosure of which is incorporated by reference herein in its entirety, and/or may be used with one or more imaging apparatuses, systems, methods, or storage mediums of U.S. Prov. Pat. App. No.63/378,017, filed on September 30, 2022, the disclosure of which is incorporated by reference herein in its entirety.
- present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with continuum robotic systems and catheters, such as, but not limited to, those described in U.S. Patent Publication Nos. 2019/0105468; 2021/0369085; 2020/0375682; 2021/0121162; 2021/0121051; and 2022/0040450, each of which patents and/or patent publications are incorporated by reference herein in their entireties.
- present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with autonomous robot devices, systems, methods, and/or storage mediums and/or with endoscope devices, systems, methods, and/or storage mediums.
- Such continuum robot devices, systems, methods, and/or storage mediums are disclosed in at least: PCT/US2024/025546, filed on April 19, 2024, which is incorporated by reference herein in its entirety, and U.S. Prov. Pat. App. 63/497,358, filed on April 20, 2023, which is incorporated by reference herein in its entirety.
- present disclosure and/or one or more components of devices, systems, and storage mediums, and/or methods, thereof also may be used in conjunction with autonomous robot devices, systems, methods, and/or storage mediums and/or with endoscope devices, systems, methods, and/or storage mediums, and/or other features that may be used with same, such as, but not limited to, any of the features disclosed in at least: U.S. Prov. Pat. App. 63/513,794, filed on July 14, 2023, which is incorporated by reference herein in its entirety, and U.S. Prov. Pat. App. 63/603,523, filed on November 28, 2023, which is incorporated by reference herein in its entirety.
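The following is a minimal, illustrative sketch of how a residual-network backbone, such as the one referenced above (He et al.), might be adapted into a regression model that maps an endoscopic camera frame to a target coordinate. PyTorch/torchvision, the class name CoordinateRegressor, and the two-output (x, y) head are assumptions for illustration only and are not the specific architecture of FIG. 27.

```python
import torch
import torch.nn as nn
from torchvision import models


class CoordinateRegressor(nn.Module):
    """Illustrative sketch: ResNet-18 backbone with a small regression head
    that maps one camera frame to a normalized (x, y) target coordinate."""

    def __init__(self, num_outputs: int = 2):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # residual backbone (He et al.)
        in_features = self.backbone.fc.in_features
        # Replace the classification head with a small regression head.
        self.backbone.fc = nn.Sequential(
            nn.Linear(in_features, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, num_outputs),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)


# Example usage with a dummy 224 x 224 RGB frame.
model = CoordinateRegressor()
frame = torch.randn(1, 3, 224, 224)
xy = model(frame)  # shape (1, 2): predicted target coordinate
```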
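As one non-limiting example of the post-processing discussed above, a per-pixel probability map output by a trained segmentation model may be reduced to a single target/catheter coordinate. The NumPy sketch below is an assumption for illustration only (the function name target_coordinate and the 0.5 threshold are not taken from any embodiment): it thresholds the map, takes a probability-weighted centroid, and falls back to the most probable pixel when nothing exceeds the threshold.

```python
import numpy as np


def target_coordinate(prob_map: np.ndarray, threshold: float = 0.5) -> tuple[float, float]:
    """Reduce a per-pixel probability map to one (row, col) coordinate."""
    mask = prob_map >= threshold
    if not mask.any():
        # Fall back to the single most probable pixel.
        row, col = np.unravel_index(np.argmax(prob_map), prob_map.shape)
        return float(row), float(col)
    rows, cols = np.nonzero(mask)
    weights = prob_map[rows, cols]
    return (float(np.average(rows, weights=weights)),
            float(np.average(cols, weights=weights)))


# Example usage with a dummy 224 x 224 probability map.
prob_map = np.random.rand(224, 224)
print(target_coordinate(prob_map))
```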
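The slicing sizes, batch sizes, steps per epoch, and epoch counts listed above may be realized, for example, by tiling each training image (and its aligned label mask) into fixed-size patches before batching. The sketch below is an illustrative assumption only; the constant names and the non-overlapping tiling strategy are not taken from any embodiment.

```python
import numpy as np

PATCH_SIZE = 224        # slicing size (e.g., 100, 224, or 512 per the values above)
BATCH_SIZE = 16         # larger batches tended to perform better in the experiments
STEPS_PER_EPOCH = 100
EPOCHS = 1000           # the embodiments above suggest more than 1000 epochs


def slice_patches(image: np.ndarray, label: np.ndarray, patch: int = PATCH_SIZE):
    """Yield aligned (image, label) tiles of size patch x patch (non-overlapping)."""
    height, width = image.shape[:2]
    for y in range(0, height - patch + 1, patch):
        for x in range(0, width - patch + 1, patch):
            yield (image[y:y + patch, x:x + patch],
                   label[y:y + patch, x:x + patch])


def make_batches(pairs, batch_size: int = BATCH_SIZE):
    """Group (image, label) tiles into lists of batch_size tiles."""
    batch = []
    for pair in pairs:
        batch.append(pair)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch
```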
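A convolutional autoencoder (CAE), as mentioned above, could take a form similar to the following minimal PyTorch sketch; the layer counts, channel widths, and Sigmoid output here are illustrative assumptions rather than the specific CAE of any embodiment.

```python
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Illustrative convolutional autoencoder (CAE) for endoscopic image frames."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 3, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


# Reconstruct a dummy 224 x 224 frame; the output shape matches the input.
cae = ConvAutoencoder()
reconstruction = cae(torch.randn(1, 3, 224, 224))
```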
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Heart & Thoracic Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Robotics (AREA)
- Optics & Photonics (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Radiology & Medical Imaging (AREA)
- General Physics & Mathematics (AREA)
- Pulmonology (AREA)
- Mechanical Engineering (AREA)
- Theoretical Computer Science (AREA)
- Otolaryngology (AREA)
- Astronomy & Astrophysics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Physiology (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Signal Processing (AREA)
- Orthopedic Medicine & Surgery (AREA)
- Endoscopes (AREA)
- Manipulator (AREA)
Abstract
One or more devices, systems, methods, and storage mediums for performing navigation planning, autonomous navigation, movement detection, and/or control for a continuum robot are disclosed. Examples of such planning, autonomous navigation, movement detection, and/or control include, but are not limited to, navigation planning and/or autonomous navigation of at least a portion of a continuum robot to a particular target, movement detection of the continuum robot, Follow-The-Leader smoothing, and/or at least one state change for a continuum robot. Examples of applications include imaging, evaluating, and diagnosing biological objects, such as, but not limited to, gastro-intestinal, cardiovascular, bronchial, and/or ophthalmic applications, and are obtained via at least one optical instrument, such as, but not limited to, optical probes, catheters, endoscopes, and bronchoscopes. The techniques of the present disclosure also improve treatment and imaging efficiency while providing more precise images, and further achieve reduced mental and physical burden and improved ease of use.
Applications Claiming Priority (8)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363513794P | 2023-07-14 | 2023-07-14 | |
| US202363513803P | 2023-07-14 | 2023-07-14 | |
| US63/513,794 | 2023-07-14 | ||
| US63/513,803 | 2023-07-14 | ||
| US202363587637P | 2023-10-03 | 2023-10-03 | |
| US63/587,637 | 2023-10-03 | ||
| US202363603523P | 2023-11-28 | 2023-11-28 | |
| US63/603,523 | 2023-11-28 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025019378A1 true WO2025019378A1 (fr) | 2025-01-23 |
Family
ID=94282649
Family Applications (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/037930 Pending WO2025019377A1 (fr) | 2023-07-14 | 2024-07-12 | Autonomous continuum robot planning and navigation with voice input |
| PCT/US2024/037924 Pending WO2025019373A1 (fr) | 2023-07-14 | 2024-07-12 | Autonomous navigation of a continuum robot |
| PCT/US2024/037935 Pending WO2025019378A1 (fr) | 2023-07-14 | 2024-07-12 | Device motion detection and navigation planning and/or autonomous navigation for a continuum robot or an endoscopic device or system |
Family Applications Before (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/037930 Pending WO2025019377A1 (fr) | 2023-07-14 | 2024-07-12 | Autonomous continuum robot planning and navigation with voice input |
| PCT/US2024/037924 Pending WO2025019373A1 (fr) | 2023-07-14 | 2024-07-12 | Autonomous navigation of a continuum robot |
Country Status (1)
| Country | Link |
|---|---|
| WO (3) | WO2025019377A1 (fr) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN120116216B (zh) * | 2025-03-17 | 2025-11-28 | Fudan University | Three-segment flexible robot obstacle-avoidance pose control method |
| CN120635865B (zh) * | 2025-08-13 | 2025-12-09 | PowerChina Northwest Engineering Corporation Limited | Autonomous-driving obstacle intention prediction and avoidance method and system |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080113317A1 (en) * | 2004-04-30 | 2008-05-15 | Kemp James H | Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization |
| US20140287393A1 (en) * | 2010-11-04 | 2014-09-25 | The Johns Hopkins University | System and method for the evaluation of or improvement of minimally invasive surgery skills |
| US20190105468A1 (en) * | 2017-10-05 | 2019-04-11 | Canon U.S.A., Inc. | Medical continuum robot with multiple bendable sections |
| US20230061534A1 (en) * | 2020-03-06 | 2023-03-02 | Histosonics, Inc. | Minimally invasive histotripsy systems and methods |
Family Cites Families (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5955847B2 (ja) * | 2010-09-15 | 2016-07-20 | Koninklijke Philips N.V. | Robotic control of an endoscope based on blood vessel tree images |
| JP6785656B2 (ja) * | 2013-08-15 | 2020-11-18 | Intuitive Surgical Operations, Inc. | Graphical user interface for catheter positioning and insertion |
| KR102764213B1 (ko) * | 2016-06-30 | 2025-02-07 | Intuitive Surgical Operations, Inc. | Graphical user interface for displaying guidance information in a plurality of modes during an image-guided procedure |
| US10022192B1 (en) * | 2017-06-23 | 2018-07-17 | Auris Health, Inc. | Automatically-initialized robotic systems for navigation of luminal networks |
| WO2019125398A1 (fr) * | 2017-12-18 | 2019-06-27 | CapsoVision, Inc. | Method and apparatus for gastric examination using a capsule camera |
| US11723739B2 (en) * | 2019-08-15 | 2023-08-15 | Verb Surgical Inc. | Admittance compensation for surgical tool |
| WO2021163615A1 (fr) * | 2020-02-12 | 2021-08-19 | The Board Of Regents Of The University Of Texas System | Microrobotic systems and methods for endovascular interventions |
| US12089817B2 (en) * | 2020-02-21 | 2024-09-17 | Canon U.S.A., Inc. | Controller for selectively controlling manual or robotic operation of endoscope probe |
| US12087429B2 (en) * | 2020-04-30 | 2024-09-10 | Clearpoint Neuro, Inc. | Surgical planning systems that automatically assess different potential trajectory paths and identify candidate trajectories for surgical systems |
| US11786106B2 (en) * | 2020-05-26 | 2023-10-17 | Canon U.S.A., Inc. | Robotic endoscope probe having orientation reference markers |
| JP2024534970A (ja) * | 2021-09-09 | 2024-09-26 | Magnisity Ltd. | Self-steering intraluminal device using a dynamically deformable lumen map |
| CN115990042A (zh) * | 2021-10-20 | 2023-04-21 | Olympus Corporation | Endoscope system and method for guidance and imaging using the endoscope system |
-
2024
- 2024-07-12 WO PCT/US2024/037930 patent/WO2025019377A1/fr active Pending
- 2024-07-12 WO PCT/US2024/037924 patent/WO2025019373A1/fr active Pending
- 2024-07-12 WO PCT/US2024/037935 patent/WO2025019378A1/fr active Pending
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20080113317A1 (en) * | 2004-04-30 | 2008-05-15 | Kemp James H | Computer-implemented system and method for automated and highly accurate plaque analysis, reporting, and visualization |
| US20140287393A1 (en) * | 2010-11-04 | 2014-09-25 | The Johns Hopkins University | System and method for the evaluation of or improvement of minimally invasive surgery skills |
| US20190105468A1 (en) * | 2017-10-05 | 2019-04-11 | Canon U.S.A., Inc. | Medical continuum robot with multiple bendable sections |
| US20230061534A1 (en) * | 2020-03-06 | 2023-03-02 | Histosonics, Inc. | Minimally invasive histotripsy systems and methods |
Non-Patent Citations (1)
| Title |
|---|
| PORE AMEYA: "Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning", DISSERTATION, 18 March 2023 (2023-03-18), pages 1 - 215, XP093267743, Retrieved from the Internet <URL:https://iris.univr.it/handle/11562/1099166> * |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2025019373A1 (fr) | 2025-01-23 |
| WO2025019377A1 (fr) | 2025-01-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11791032B2 (en) | Systems and methods for filtering localization data | |
| CN110831486B (zh) | System and method for branch prediction based on a positioning sensor | |
| US20230072879A1 (en) | Systems and methods for hybrid imaging and navigation | |
| US12156704B2 (en) | Intraluminal navigation using ghost instrument information | |
| WO2025019378A1 (fr) | Device motion detection and navigation planning and/or autonomous navigation for a continuum robot or an endoscopic device or system | |
| KR20220058569A (ko) | Systems and methods for weight-based registration of position sensors | |
| JP2022546419A (ja) | Instrument image reliability systems and methods | |
| CN117615724A (zh) | Medical instrument guidance systems and associated methods | |
| CN118302127A (zh) | Medical instrument guidance systems, including guidance systems for percutaneous nephrolithotomy procedures, and associated devices and methods | |
| WO2025117336A1 (fr) | Steerable catheters and wire force differences | |
| US20250143812A1 (en) | Robotic catheter system and method of replaying targeting trajectory | |
| WO2022233201A1 (fr) | Method, equipment and storage medium for steering a tubular member in a channel with multiple bifurcations | |
| WO2024134467A1 (fr) | Lung lobe segmentation and nodule-to-lobe-boundary distance measurement | |
| US20240164853A1 (en) | User interface for connecting model structures and associated systems and methods | |
| US20250170363A1 (en) | Robotic catheter tip and methods and storage mediums for controlling and/or manufacturing a catheter having a tip | |
| Banach et al. | Conditional Autonomy in Robot-Assisted Transbronchial Interventions | |
| EP4454571A1 (fr) | Autonomous navigation of an endoluminal robot | |
| US20230225802A1 (en) | Phase segmentation of a percutaneous medical procedure | |
| WO2025184464A1 (fr) | Autonomous navigation of a steerable catheter | |
| DUAN et al. | Progress and key technology analysis of bronchoscopic robot | |
| WO2024081745A2 (fr) | Localization and targeting of small pulmonary lesions | |
| WO2025059207A1 (fr) | Medical apparatus having a support structure and method of use thereof | |
| WO2025072201A9 (fr) | Robotic control for a continuum robot | |
| WO2024163533A1 (fr) | Elongated device extraction from intraoperative images | |
| WO2025029781A1 (fr) | Systems and methods for segmentation of image data | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24843784 Country of ref document: EP Kind code of ref document: A1 |