
WO2025059143A1 - Image-guided endoscope withdrawal control - Google Patents


Info

Publication number: WO2025059143A1
Authority: WO (WIPO, PCT)
Prior art keywords: endoscope, withdrawal, segment, target, specific
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Application number: PCT/US2024/046148
Other languages: French (fr)
Inventors: Sailesh Conjeti, Dawei Liu, Michael Ryan
Current Assignee: Gyrus ACMI Inc
Original Assignee: Gyrus ACMI Inc
Application filed by Gyrus ACMI Inc

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00: Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/000096: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
    • A61B 1/00006: Operational features of endoscopes characterised by electronic signal processing of control signals
    • A61B 1/000094: Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
    • A61B 1/00055: Operational features of endoscopes provided with output arrangements for alerting the user
    • A61B 1/31: Instruments as above, for the rectum, e.g. proctoscopes, sigmoidoscopes, colonoscopes

Definitions

  • The present document relates generally to endoscopic medical systems, and more particularly to systems and methods for image-guided withdrawal of an endoscope during an endoscopy procedure.
  • Endoscopes have been used in a variety of clinical procedures, including, for example, illuminating, imaging, detecting and diagnosing one or more disease states, providing fluid delivery (e.g., saline or other preparations via a fluid channel) toward an anatomical region, providing passage (e.g., via a working channel) of one or more therapeutic devices or biological matter collection devices for sampling or treating an anatomical region, and providing suction passageways for collecting fluids (e.g., saline or other preparations), among other procedures.
  • Examples of such anatomical regions include the gastrointestinal tract (e.g., esophagus, stomach, duodenum, pancreaticobiliary duct, intestines, colon, and the like), the renal area (e.g., kidney(s), ureter, bladder, urethra), and other internal organs (e.g., reproductive systems, sinus cavities, submucosal regions, respiratory tract).
  • Some endoscopes include a working channel through which an operator can perform suction, placement of diagnostic or therapeutic devices (e.g., a brush, a biopsy needle or forceps, a stent, a basket, or a balloon), or minimally invasive surgeries such as tissue sampling or removal of unwanted tissue (e.g., benign or malignant strictures) or foreign objects (e.g., calculi).
  • Some endoscopes can be used with a laser or plasma system to deliver energy to an anatomical target (e.g., soft or hard tissue or calculi) to achieve desired treatment.
  • Lasers have been used for tissue ablation, coagulation, vaporization, fragmentation, and lithotripsy to break down calculi in the kidney, gallbladder, and ureter, among other stone-forming regions, or to ablate large calculi into smaller fragments.
  • An endoscopy procedure involves an insertion phase where an endoscope is passed through a natural orifice and into a body cavity until reaching a target site, and a subsequent withdrawal phase where the endoscope is carefully withdrawn from the body. Diagnostic or therapeutic operations (e.g., biopsy or removal of certain abnormal or pathological tissue or foreign objects) may occur at the target anatomy or during the withdrawal.
  • A colonoscopy procedure involves passing a colonoscope through the anus and all the way up to the beginning of the colon, called the cecum. During withdrawal, colon segments may be inspected, and polyps of certain sizes that are detected during insertion may be removed (polypectomy). Colonoscopy with diagnosis and treatment of polyps has been used to reduce the incidence and mortality rate of colorectal cancer.
  • Colonoscope withdrawal time during colonoscopy is an important quality measure, especially for negative procedures.
  • Withdrawal times ranging from a minimal standard of more than six minutes to an aspirational standard of more than ten minutes have been reported as colonoscopy quality assurance standards.
  • Longer withdrawal time may increase the adenoma detection rate (ADR).
  • The withdrawal speed limits or withdrawal time thresholds used for assessing the withdrawal in an endoscopy procedure are generally predetermined, such as based on the generally accepted standards (e.g., at least six minutes or at least ten minutes).
  • These “generic” withdrawal standards are highly generalized and are determined without consideration of the unique circumstances that individual procedures present.
  • The target withdrawal time of more than six minutes is a quality indicator primarily for negative-result colonoscopies in average-risk patients with intact colons, i.e., patients without previous surgical resection and in whom no biopsies or polypectomies are performed.
  • Outside these conditions, such generic withdrawal standards may be less effective or sub-optimal. Additionally, the generic withdrawal time may be ineffective if bowel preparation in one or more colon segments is poor, in which case the endoscopist may need to be diligent in withdrawing slowly, cleaning the bowel along the way, and ensuring complete mucosal inspection to avoid any hidden polyps.
  • The generic withdrawal-time standards focus on the global withdrawal time applicable to the entire colonoscope withdrawal phase; they lack specificity of withdrawal times for particular colon segments. Because different colon segments generally have different bowel-preparation qualities and/or different chances of presenting anomalous structures (e.g., polyps or cancer), the desired withdrawal time may vary from segment to segment.
  • A generic, global withdrawal time thus provides little guidance as to how fast or how slow the colonoscope should be withdrawn in different segments of the colon.
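The segment-versus-global distinction above can be made concrete with simple arithmetic. The sketch below converts per-segment lengths and time limits into per-segment speed limits and compares them with the single speed implied by a six-minute global standard. All segment lengths and time limits here are illustrative assumptions, not values taken from this document.

```python
# Illustrative only: segment lengths (cm) and segment-specific withdrawal
# time limits (s) are hypothetical values, not taken from the patent.
SEGMENTS = {
    # segment: (length_cm, time_limit_s)
    "cecum/ascending": (25, 90),
    "transverse":      (50, 120),
    "descending":      (25, 60),
    "sigmoid":         (40, 60),
    "rectum":          (15, 30),
}

def segment_speed_limits(segments):
    """Convert per-segment withdrawal time limits into speed limits (cm/s)."""
    return {name: length / t for name, (length, t) in segments.items()}

def global_speed(segments, total_time_s):
    """Single speed implied by a generic, global withdrawal time standard."""
    total_length = sum(length for length, _ in segments.values())
    return total_length / total_time_s

limits = segment_speed_limits(SEGMENTS)
print(round(limits["transverse"], 3))        # 0.417 cm/s for this segment
print(round(global_speed(SEGMENTS, 360), 3)) # 0.431 cm/s under a 6-min standard
```

Note how the global figure hides the per-segment variation: a segment with poor preparation may warrant a much lower limit than the global average implies.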
  • An exemplary system can receive endoscopic images or video streams obtained during insertion or at any time prior to the endoscope being withdrawn beyond a target segment, analyze the images or video streams, and generate a personalized, segment-specific endoscope withdrawal plan.
  • The segment-specific endoscope withdrawal plan may include target endoscope withdrawal speed limits or withdrawal time limits for each of a plurality of distinct segments of an anatomy.
  • The personalized, segment-specific endoscope withdrawal plan can be presented to the user in a graphical form such as a withdrawal speed map or a withdrawal time map.
  • The personalized, segment-specific endoscope withdrawal plan may serve as guidance for mucosal inspection and necessary treatment (e.g., polypectomy). Actual withdrawal speed can be tracked and measured. Substantially real-time feedback on withdrawal speed or time spent on a segment of anatomy (e.g., a colon segment), and recommendations to adjust endoscope withdrawal speed, can be provided to the endoscopist. Real-time monitoring of endoscope withdrawal with reference to the personalized, segment-specific plan can improve the quality of mucosal inspection and screening accuracy and efficiency.
  • The personalized, segment-specific endoscope withdrawal plan may also be provided to a robotic colonoscopy system to facilitate robotically assisted endoscopy and to ensure that withdrawal speed is within recommended limits.
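As one hedged sketch of how such a plan might be assembled, the function below derates a nominal speed limit per segment based on image-derived features. The feature names (`prep_score`, `anomaly_count`), the baseline speed, and the derating factors are all hypothetical assumptions, not values specified by this document.

```python
# A minimal sketch of segment-specific withdrawal planning. The feature
# names (prep_score, anomaly_count) and the baseline/derating values are
# hypothetical, not specified by the patent.
BASELINE_WSL_CM_S = 0.5  # assumed nominal withdrawal speed limit

def plan_withdrawal(segment_features):
    """Map per-segment image features to recommended speed limits (cm/s)."""
    plan = {}
    for segment, f in segment_features.items():
        wsl = BASELINE_WSL_CM_S
        if f["prep_score"] < 2:      # poor bowel preparation -> slow down
            wsl *= 0.5
        if f["anomaly_count"] > 0:   # detected anomalies -> slow down further
            wsl *= 0.6
        plan[segment] = round(wsl, 3)
    return plan

features = {
    "ascending": {"prep_score": 3, "anomaly_count": 0},
    "sigmoid":   {"prep_score": 1, "anomaly_count": 2},
}
print(plan_withdrawal(features))  # {'ascending': 0.5, 'sigmoid': 0.15}
```

A trained model, as described in the examples below, would replace these hand-set rules; the structure of the output (one target value per segment) stays the same.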
  • Example 1 is an endoscope system that includes: an endoscope, including an imaging system configured to obtain images or video streams of distinct segments of an anatomy of a patient during an endoscopy procedure comprising insertion and subsequent withdrawal of the endoscope in the anatomy; and a controller circuit configured to: analyze the obtained images or video streams to generate endoscopic image or video features for individual ones of the distinct segments of the anatomy; based at least in part on the endoscopic image or video features, generate an endoscope withdrawal plan comprising target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments of the anatomy; and provide the endoscope withdrawal plan to a user or a robotic system to facilitate manual or robotic withdrawal of the endoscope.
  • In Example 2, the subject matter of Example 1 optionally includes wherein the target or recommended segment-specific endoscope withdrawal parameter values include target segment-specific endoscope withdrawal speed limits (WSL) for the individual ones of the distinct segments of the anatomy.
  • In Example 3, the subject matter of any one or more of Examples 1-2 optionally includes wherein the target or recommended segment-specific endoscope withdrawal parameter values include target segment-specific endoscope withdrawal time limits (WTL) for the individual ones of the distinct segments of the anatomy.
  • In Example 4, the subject matter of any one or more of Examples 1-3 optionally includes wherein the endoscope comprises a colonoscope for use in a colonoscopy procedure, and wherein the imaging system is configured to obtain images or video streams from each of distinct colon segments during the colonoscopy procedure.
  • In Example 5, the subject matter of any one or more of Examples 1-4 optionally includes wherein to determine the target or recommended segment-specific endoscope withdrawal parameter values includes to determine, for a first segment of the anatomy, a first target segment-specific endoscope withdrawal parameter using at least the endoscopic image or video features generated from the images or video streams obtained prior to the endoscope being withdrawn beyond the first segment during the manual or robotic withdrawal of the endoscope.
  • In Example 6, the subject matter of Example 5 optionally includes wherein the images or video streams obtained prior to the endoscope reaching the first segment include images or video streams obtained during the insertion of the endoscope in the anatomy.
  • In Example 7, the subject matter of any one or more of Examples 1-6 optionally includes wherein the controller circuit is configured to: perform anomaly detection that includes detecting one or more anomalies in the individual ones of the distinct segments based at least in part on the endoscopic image or video features; and determine the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments based at least in part on a result of the anomaly detection.
  • In Example 8, the subject matter of Example 7 optionally includes wherein the anomaly detection includes recognizing one or more of a presence or absence, a type, a size, a shape, a location, or an amount of pathological tissue or obstructed mucosa.
  • In Example 9, the subject matter of any one or more of Examples 7-8 optionally includes wherein the controller circuit is configured to detect the one or more anomalies using a first trained machine-learning (ML) model, the first ML model trained to establish a correspondence between (i) an endoscopic image or video stream or features extracted therefrom and (ii) one or more anomaly characteristics.
  • In Example 10, the subject matter of any one or more of Examples 7-9 optionally includes wherein the controller circuit is configured to determine the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments by applying the result of the anomaly detection to a second trained ML model, the second ML model trained to establish a correspondence between (i) one or more anomaly characteristics and (ii) a target or recommended segment-specific endoscope withdrawal parameter value.
  • In Example 11, the subject matter of Example 10 optionally includes wherein the controller circuit is configured to: determine an anomaly score based on a type, a size, a shape, a location, or an amount of anomaly; and determine the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments by applying the determined anomaly score to the second trained ML model.
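Example 11's score-then-map pipeline could be sketched as below, with a simple closed-form mapping standing in for the second trained ML model. The anomaly-type weights and the exponential decay are illustrative assumptions, not values from this document.

```python
# Hedged sketch: an anomaly score aggregated from detection results, then a
# stand-in for the "second trained ML model" that maps score -> WSL. The
# weights and the exponential mapping are illustrative assumptions only.
import math

WEIGHTS = {"polyp": 1.0, "flat_lesion": 1.5, "obstructed_mucosa": 0.5}

def anomaly_score(detections):
    """Aggregate detected anomalies (type, size_mm) into a scalar score."""
    return sum(WEIGHTS.get(d["type"], 1.0) * d["size_mm"] for d in detections)

def score_to_wsl(score, base_wsl=0.5, decay=0.05):
    """Higher anomaly score -> lower recommended withdrawal speed limit."""
    return base_wsl * math.exp(-decay * score)

dets = [{"type": "polyp", "size_mm": 6}, {"type": "flat_lesion", "size_mm": 4}]
s = anomaly_score(dets)            # 1.0*6 + 1.5*4 = 12.0
print(round(score_to_wsl(s), 4))   # 0.2744
```

Any monotone-decreasing mapping would serve the same role; the exponential form simply guarantees the recommended limit stays positive however high the score.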
  • In Example 12, the subject matter of any one or more of Examples 1-11 optionally includes wherein the controller circuit is configured to generate the endoscope withdrawal plan further using one or more of: image or video streams and clinical data from previous endoscopy procedures; patient information and medical history; or pre-procedure imaging study data.
  • In Example 13, the subject matter of any one or more of Examples 1-12 optionally includes wherein the controller circuit is configured to display on a user interface a graphical representation of the endoscope withdrawal plan, the graphical representation including depictions of distinct segments of the anatomy each color-coded or grayscale-coded to indicate respective target or recommended segment-specific endoscope withdrawal parameter values.
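The color-coding Example 13 describes could be as simple as bucketing each segment's recommended speed limit; the thresholds below are illustrative assumptions, not values from this document.

```python
# Sketch of color-coding segments by recommended WSL; thresholds are
# illustrative assumptions, not taken from the patent.
def wsl_color(wsl, slow=0.2, fast=0.4):
    """Bucket a segment's speed limit (cm/s) into a display color."""
    if wsl < slow:
        return "red"      # inspect slowly
    if wsl < fast:
        return "yellow"
    return "green"

plan = {"ascending": 0.5, "transverse": 0.3, "sigmoid": 0.15}
print({seg: wsl_color(w) for seg, w in plan.items()})
# {'ascending': 'green', 'transverse': 'yellow', 'sigmoid': 'red'}
```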
  • In Example 14, the subject matter of Example 13 optionally includes wherein the controller circuit is configured to: determine in substantially real time a location of the endoscope in one of the distinct segments of the anatomy based at least in part on the endoscopic image or video features; register the location of the endoscope to a pre-generated template of the anatomy; and display on the user interface the location of the endoscope in the one of the distinct segments as an overlay on the graphical representation of the endoscope withdrawal plan.
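In the simplest case, registering the scope's location to a pre-generated template, as in Example 14, might reduce to a depth lookup against cumulative segment boundaries. The template below (cumulative cm from the anus) is an illustrative assumption; a real system would derive location from the image features themselves.

```python
# Sketch: locate the scope tip in a segment from insertion depth, using a
# pre-generated template of cumulative segment lengths (illustrative cm).
TEMPLATE = [("rectum", 15), ("sigmoid", 55), ("descending", 80),
            ("transverse", 130), ("ascending/cecum", 155)]

def locate(depth_cm):
    """Return the template segment containing the given insertion depth."""
    for name, end in TEMPLATE:
        if depth_cm <= end:
            return name
    return TEMPLATE[-1][0]  # beyond the template: report the deepest segment

print(locate(60))   # descending
print(locate(140))  # ascending/cecum
```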
  • In Example 15, the subject matter of any one or more of Examples 2-14 optionally includes wherein the controller circuit is configured to: measure an endoscope withdrawal speed in one of the distinct segments of the anatomy; and, when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generate an alert to the user and provide a recommendation to adjust the withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
  • In Example 16, the subject matter of any one or more of Examples 2-15 optionally includes wherein the robotic system is configured to robotically withdraw the endoscope, and wherein the controller circuit is configured to: measure an endoscope withdrawal speed in one of the distinct segments of the anatomy; and, when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generate a control signal to the robotic system to automatically adjust the withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
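A minimal sketch of the monitoring behavior in Examples 15-16: compare the measured speed to the segment WSL with a tolerance margin, alerting the user (manual withdrawal) or clamping the commanded speed (robotic withdrawal). The 20% and 10% margins are illustrative assumptions, not values from this document.

```python
# Hedged sketch of WSL monitoring; margins are illustrative assumptions.
def check_speed(measured, wsl, margin=0.2):
    """Return an alert string if measured speed exceeds the WSL by > margin."""
    if measured > wsl * (1 + margin):
        return f"ALERT: slow down to <= {wsl:.2f} cm/s"
    return None  # within tolerance: no alert

def robotic_speed_command(measured, wsl, margin=0.1):
    """For robotic withdrawal: clamp the commanded speed back to the WSL."""
    if measured > wsl * (1 + margin):
        return wsl
    return measured

print(check_speed(0.70, 0.5))            # exceeds 0.60 -> alert string
print(check_speed(0.55, 0.5))            # within margin -> None
print(robotic_speed_command(0.70, 0.5))  # clamped to 0.5
```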
  • In Example 17, the subject matter of any one or more of Examples 2-16 optionally includes wherein the controller circuit is configured to: identify an anomalous segment from the distinct segments based at least in part on the endoscopic image or video features; and, during the manual or robotic withdrawal of the endoscope and prior to the endoscope reaching the identified anomalous segment, display a visual indicator of the anomalous segment on a user interface and generate a warning to the user to withdraw the endoscope at a speed lower than the target segment-specific endoscope WSL for the identified anomalous segment.
  • Example 18 is a method of planning withdrawal of an endoscope from an anatomy of a patient in an endoscopy procedure, the method including steps of: obtaining images or video streams of distinct segments of the anatomy during an insertion phase of the endoscopy procedure using an imaging system associated with the endoscope; analyzing the obtained images or video streams to generate endoscopic image or video features for individual ones of the distinct segments of the anatomy; based at least in part on the endoscopic image or video features, generating an endoscope withdrawal plan comprising target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments of the anatomy; and providing the endoscope withdrawal plan to a user or a robotic system to facilitate manual or robotic withdrawal of the endoscope.
  • In Example 19, the subject matter of Example 18 optionally includes wherein the target or recommended segment-specific endoscope withdrawal parameter values include target segment-specific endoscope withdrawal speed limits (WSL) or target segment-specific endoscope withdrawal time limits (WTL) for the individual ones of the distinct segments of the anatomy.
  • In Example 20, the subject matter of any one or more of Examples 18-19 optionally includes determining a first target segment-specific endoscope withdrawal parameter for a first segment of the anatomy using at least the endoscopic image or video features generated from the images or video streams obtained prior to the endoscope being withdrawn beyond the first segment during the manual or robotic withdrawal of the endoscope.
  • In Example 21, the subject matter of Example 20 optionally includes wherein the images or video streams obtained prior to the endoscope reaching the first segment include images or video streams obtained during the insertion of the endoscope in the anatomy.
  • In Example 22, the subject matter of any one or more of Examples 18-21 optionally includes: performing anomaly detection, including detecting one or more anomalies in the individual ones of the distinct segments based at least in part on the endoscopic image or video features; and determining the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments based at least in part on a result of the anomaly detection.
  • In Example 23, the subject matter of Example 22 optionally includes performing the anomaly detection using a first trained machine-learning (ML) model, the first ML model trained to establish a correspondence between (i) an endoscopic image or video stream or features extracted therefrom and (ii) one or more anomaly characteristics.
  • In Example 24, the subject matter of any one or more of Examples 22-23 optionally includes wherein determining the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments includes applying the result of the anomaly detection to a second trained ML model, the second ML model trained to establish a correspondence between (i) one or more anomaly characteristics and (ii) a target or recommended segment-specific endoscope withdrawal parameter value.
  • In Example 25, the subject matter of any one or more of Examples 18-24 optionally includes generating the endoscope withdrawal plan further using one or more of: image or video streams and clinical data from previous endoscopy procedures; patient information and medical history; or pre-procedure imaging study data.
  • In Example 26, the subject matter of any one or more of Examples 18-25 optionally includes displaying on a user interface a graphical representation of the endoscope withdrawal plan, the graphical representation including depictions of distinct segments of the anatomy each color-coded or grayscale-coded to indicate respective target or recommended segment-specific endoscope withdrawal parameter values.
  • In Example 27, the subject matter of Example 26 optionally includes: determining in substantially real time a location of the endoscope; registering the location of the endoscope to a pre-generated template of the anatomy; and displaying on the user interface the location of the endoscope in one of the distinct segments as an overlay on the graphical representation of the endoscope withdrawal plan.
  • In Example 28, the subject matter of any one or more of Examples 19-27 optionally includes: measuring an endoscope withdrawal speed in one of the distinct segments of the anatomy; and, when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generating an alert to the user and providing a recommendation to adjust the endoscope withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
  • In Example 29, the subject matter of any one or more of Examples 19-28 optionally includes: measuring an endoscope withdrawal speed in one of the distinct segments of the anatomy; and, when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generating a control signal to a robotic system to automatically adjust the withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
  • In Example 30, the subject matter of any one or more of Examples 19-29 optionally includes: identifying an anomalous segment from the distinct segments based at least in part on the endoscopic image or video features; and, during the manual or robotic withdrawal of the endoscope and prior to the endoscope reaching the identified anomalous segment, displaying a visual indicator of the anomalous segment on a user interface and generating a warning to the user to withdraw the endoscope at a speed lower than the target segment-specific endoscope WSL for the identified anomalous segment.
  • The presented techniques are described in terms of controlled withdrawal of an endoscope in colonoscopy, but are not so limited.
  • The systems, devices, and techniques described in accordance with various embodiments in this document may additionally or alternatively be used in other procedures involving different types of endoscopes, including, for example, anoscopy, arthroscopy, bronchoscopy, colonoscopy, colposcopy, cystoscopy, esophagoscopy, gastroscopy, laparoscopy, laryngoscopy, neuroendoscopy, proctoscopy, sigmoidoscopy, thoracoscopy, etc.
  • FIGS. 1-2 are schematic diagrams illustrating an example of an endoscope system for use in endoscopy procedures, such as a colonoscopy procedure.
  • FIG. 3 illustrates an example of an endoscope system configured to create a personalized, segment-specific endoscope withdrawal plan and to use said plan to guide endoscope withdrawal in an endoscopy procedure.
  • FIG. 4 illustrates examples of auxiliary input that may be used in withdrawal parameter estimation and creation of a personalized, segment-specific endoscope withdrawal plan.
  • FIGS. 7A-7B are graphs illustrating examples of guided withdrawal and speed tracking in a colonoscopy procedure using a personalized, segment-specific endoscope withdrawal plan.
  • FIG. 8 is a flow chart illustrating an example method of creating a personalized, segment-specific endoscope withdrawal plan and using said plan to guide endoscope withdrawal in an endoscopy procedure.
  • FIG. 9 is a block diagram illustrating an example machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform.
  • An endoscope system includes an endoscope and a controller circuit.
  • The endoscope includes an imaging system to obtain images or video streams of distinct segments of an anatomy in a patient during an endoscopy procedure.
  • The controller circuit can analyze the obtained images or video streams and generate image or video features for individual ones of the distinct segments of the anatomy. Based on the image or video features, the controller circuit can generate a personalized, segment-specific endoscope withdrawal plan that includes target or recommended values of an endoscope withdrawal parameter (such as a withdrawal speed or withdrawal time) for the individual ones of the distinct segments.
  • The personalized endoscope withdrawal plan may be provided to the user or a robotic system to facilitate manual or robotic withdrawal of the endoscope.
  • The endoscope 14 can be insertable into an anatomical region for imaging or to provide passage of or attachment to (e.g., via tethering) one or more sampling devices for biopsies or therapeutic devices for treatment of a disease state associated with the anatomical region.
  • The endoscope 14 can interface with and connect to the imaging and control system 12.
  • The endoscope 14 can include a colonoscope, though other types of endoscopes can be used with the features and teachings of the present disclosure.
  • The imaging and control system 12 can include a control unit 16, an output unit 18, an input unit 20, a light source unit 22, a fluid source 24, and a suction pump 26.
  • The imaging and control system 12 can include various ports for coupling with the endoscope system 10.
  • The control unit 16 can include a data input/output port for receiving data from and communicating data to the endoscope 14.
  • The light source unit 22 can include an output port for transmitting light to the endoscope 14, such as via a fiber optic link.
  • The fluid source 24 can include a port for transmitting fluid to the endoscope 14.
  • The fluid source 24 can include, for example, a pump and a tank of fluid, or can be connected to an external tank, vessel, or storage unit.
  • The suction pump 26 can include a port to draw a vacuum from the endoscope 14 to generate suction, such as for withdrawing fluid from the anatomical region into which the endoscope 14 is inserted.
  • The output unit 18 and the input unit 20 can be used by an operator of the endoscope system 10 to control functions of the endoscope system 10 and view the output of the endoscope 14.
  • The control unit 16 can also generate signals or other outputs for treating the anatomical region into which the endoscope 14 is inserted.
  • The control unit 16 can generate electrical output, acoustic output, fluid output, and the like for treating the anatomical region with, for example, cauterizing, cutting, freezing, and the like.
  • the endoscope 14 can include an insertion section 28, a functional section 30, and a handle section 32, which can be coupled to a cable section 34 and a coupler section 36.
  • the insertion section 28 can extend distally from the handle section 32, and the cable section 34 can extend proximally from the handle section 32.
  • the insertion section 28 can be elongated and can include a bending section and a distal end to which the functional section 30 can be attached.
  • the bending section can be controllable (e.g., by a control knob 38 on the handle section 32) to maneuver the distal end through tortuous anatomical passageways (e.g., stomach, duodenum, kidney, ureter, etc.).
  • the insertion section 28 can also include one or more working channels (e.g., an internal lumen) that can be elongated and can support the insertion of one or more therapeutic tools of the functional section 30.
  • the working channel can extend between the handle section 32 and the functional section 30. Additional functionalities, such as fluid passages, guide wires, and pull wires, can also be provided by the insertion section 28 (e.g., via suction or irrigation passageways or the like).
  • a coupler section 36 can be connected to the control unit 16 to connect the endoscope 14 to multiple features of the control unit 16, such as the input unit 20, the light source unit 22, the fluid source 24, and the suction pump 26.
  • the handle section 32 can include the knob 38 and the port 40A.
  • the knob 38 can be connected to a pull wire or other actuation mechanisms that can extend through the insertion section 28.
  • the port 40A, as well as other ports, such as a port 40B (FIG. 2), can be configured to couple various electrical cables, guide wires, auxiliary scopes, tissue collection devices, fluid tubes, and the like to the handle section 32, such as for coupling with the insertion section 28.
  • the functional section 30 can include components for treating and diagnosing anatomy of a patient.
  • the functional section 30 can include an imaging device, an illumination device, and an elevator.
  • the functional section 30 can further include optically enhanced biological matter and tissue collection and retrieval devices.
  • the functional section 30 can include one or more electrodes conductively connected to the handle section 32 and functionally connected to the imaging and control system 12 to analyze biological matter in contact with the electrodes based on comparative biological data stored in the imaging and control system 12.
  • the functional section 30 can directly incorporate tissue collectors.
  • FIG. 2 is a schematic diagram of the endoscope system 10 of FIG. 1 including the imaging and control system 12 and the endoscope 14.
  • FIG. 2 schematically illustrates components of the imaging and control system 12 coupled to the endoscope 14, which in the illustrated example includes a colonoscope.
  • the imaging and control system 12 can include the control unit 16, which can include or be coupled to an image processing unit 42, a treatment generator 44, and a drive unit 46, as well as the light source unit 22, the input unit 20, and the output unit 18.
  • the control unit 16 can include, or can be in communication with, an endoscope, a surgical instrument 48, and an endoscope system, which can include a device configured to engage tissue and collect and store a portion of that tissue and through which imaging equipment (e.g., a camera) can view target tissue via inclusion of optically enhanced materials and components.
  • the control unit 16 can be configured to activate a camera to view target tissue distal of the endoscope system.
  • the control unit 16 can be configured to activate the light source unit 22 to shine light on the surgical instrument 48, which can include select components configured to reflect light in a particular manner, such as enhanced tissue cutters with reflective particles.
  • the coupler section 36 can be connected to the control unit 16 to connect the endoscope 14 to multiple features of the control unit 16, such as the image processing unit 42 and the treatment generator 44.
  • the port 40A can be used to insert another surgical instrument 48 or device, such as a daughter scope or auxiliary scope, into the endoscope 14. Such instruments and devices can be independently connected to the control unit 16 via the cable 47.
  • the port 40B can be used to connect coupler section 36 to various inputs and outputs, such as video, air, light, and electric.
  • the image processing unit 42 and light source unit 22 can each interface with the endoscope 14 (e.g., at the functional section 30) by wired or wireless electrical connections.
  • the imaging and control system 12 can accordingly illuminate an anatomical region, collect signals representing the anatomical region, process signals representing the anatomical region, and display images representing the anatomical region on the display unit 18.
  • the imaging and control system 12 can include the light source unit 22 to illuminate the anatomical region using light of desired spectrum (e.g., broadband white light, narrow-band imaging using preferred electromagnetic wavelengths, and the like).
  • the imaging and control system 12 can connect (e.g., via an endoscope connector) to the endoscope 14 for signal transmission (e.g., light output from light source, video signals from imaging system in the distal end, diagnostic and sensor signals from a diagnostic device, and the like).
  • the treatment generator 44 can generate a treatment plan, which can be used by the control unit 16 to control the operation of the endoscope 14, or to provide the operating physician with guidance for maneuvering the endoscope 14, during an endoscopy procedure.
  • the treatment generator 44 can generate an endoscope navigation plan, including estimated values for one or more cannulation or navigation parameters (e.g., an angle, a force, etc.) for maneuvering the steerable elongate instrument, using patient information including an image of the target anatomy.
  • the endoscope navigation plan can help guide the operating physician to cannulate and navigate the endoscope in the patient anatomy.
  • the endoscope navigation plan may additionally or alternatively be used to robotically adjust the position, angle, force, and/or navigation of the endoscope or other instrument.
  • FIG. 3 is a block diagram that illustrates an example of an endoscope system 300 that can create a personalized, segment-specific endoscope withdrawal plan and use that plan as guidance during endoscope withdrawal.
  • the endoscope system 300 may be used in a colonoscopy procedure to provide guided mucosal inspection and necessary treatment (e.g., polypectomy) during withdrawal.
  • the system 300 may be implemented as a part of the control unit 16 in FIG. 1.
  • the system 300 may include one or more of an endoscope 310, an auxiliary input 315, a controller circuit 320, a user interface 330, and a storage device 340.
  • the system 300 may further include or be communicatively coupled to a robotic system 350 in a robotically assisted endoscopy procedure.
  • the endoscope 310 can be an example of the endoscope 14 as described above and shown in FIGS. 1-2.
  • the endoscope 310 may include, among other things, an imaging system 312 and a lighting system 314.
  • the imaging system 312 may include at least one imaging sensor or device (e.g., a camera) configured to obtain endoscopic images or video streams of a target anatomy of the patient during an endoscopy procedure.
  • the imaging sensor or device may be located at a distal portion or a distal end of the endoscope 310.
  • the lighting system 314 may include one or more light sources to provide illumination on the target anatomy through one or more lighting lenses.
  • the imaging system 312 may be controllably adjusted to operate in one of a plurality of distinct imaging modes. These imaging modes may differ in one or more of zoom settings, contrast settings, exposure levels, or viewing angles toward the target anatomy.
  • the lighting system 314 may be controllably adjusted to provide different lighting or illumination conditions.
  • An endoscopy procedure generally includes an insertion phase where an endoscope is passed into a body cavity through a natural orifice, and a subsequent withdrawal phase where the endoscope is withdrawn from the body through the body cavity and the natural orifice.
  • the imaging system 312 may obtain images or video streams during the insertion phase (hereinafter referred to as “insertion images or video streams”) and during the withdrawal phase (hereinafter referred to as “withdrawal images or video streams”).
  • the insertion images or video streams and the withdrawal images or video streams may be used in different applications.
  • the imaging system 312 may obtain images or video streams from each of those distinct segments of the anatomy during the insertion phase (hereinafter referred to as segment-specific insertion images or video streams), or during the withdrawal phase (hereinafter referred to as segment-specific withdrawal images or video streams).
  • segment-specific insertion images or video streams may be used to detect anomalies at any particular segment, and based at least in part on the detected anomalies, a segment-specific endoscope withdrawal plan can be created that defines a withdrawal speed limit or time limit for each of the segments.
  • Segment-specific withdrawal images or video streams may be analyzed to localize the endoscope position in substantially real time, and to track the real-time withdrawal speed. Using the segment-wise withdrawal plan as a reference, timely feedback on endoscope withdrawal and recommendations for adjusting withdrawal may be provided to the endoscopist.
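The real-time tracking and feedback loop described above can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation; the segment names, speed units (cm/s), and limit values are hypothetical examples of a segment-specific withdrawal speed limit (WSL) map.

```python
def withdrawal_feedback(segment, measured_speed_cm_s, wsl_map):
    """Compare the measured withdrawal speed in a segment against that
    segment's speed limit and return a simple recommendation string."""
    limit = wsl_map[segment]
    if measured_speed_cm_s > limit:
        return f"slow down in {segment}: {measured_speed_cm_s:.1f} > {limit:.1f} cm/s"
    return f"speed OK in {segment}"

# Hypothetical per-segment withdrawal speed limits (cm/s)
wsl = {"cecum": 0.3, "ascending": 0.4, "transverse": 0.5}

print(withdrawal_feedback("ascending", 0.6, wsl))   # exceeds the segment limit
print(withdrawal_feedback("transverse", 0.4, wsl))  # within the segment limit
```

In a real system, the measured speed would come from the endoscope localization and tracking circuits, and the recommendation would be surfaced on the user interface in substantially real time.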
  • the controller circuit 320 may include circuit sets comprising one or more other circuits or sub-circuits that may, alone or in combination, perform the functions, methods, or techniques described herein.
  • the controller circuit 320 and the circuit sets therein may be implemented as a part of a microprocessor circuit, which may be a dedicated processor such as a digital signal processor, application specific integrated circuit (ASIC), microprocessor, or other type of processor for processing information including physical activity information.
  • the microprocessor circuit may be a general-purpose processor that may receive and execute a set of instructions of performing the functions, methods, or techniques described herein.
  • hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired).
  • the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation.
  • the instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation.
  • the computer readable medium is communicatively coupled to the other components of the circuit set member when the device is operating.
  • any of the physical components may be used in more than one member of more than one circuit set.
  • execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time.
  • the controller circuit 320 may include one or more of an image processor 321, an anomaly detector circuit 322, an endoscope localization circuit 323, a withdrawal plan generator 324, a real-time endoscope withdrawal tracker circuit 327, and an endoscope withdrawal controller 328.
  • the image processor 321 can analyze the segment-specific images or video streams obtained from the imaging system 312, and generate endoscopic image or video features for individual ones of the distinct segments of the anatomy. Examples of the image or video features include statistical features of pixel values or morphological features, such as corners, edges, blobs, curvatures, speeded up robust features (SURF), or scale-invariant feature transform (SIFT) features, among others.
  • the image processor 321 may pre-process the segment-specific images or video streams, such as filtering, resizing, orienting, or color or grayscale correction, and the endoscopic image or video features may be extracted from the pre-processed images or video streams.
  • the image processor 321 may post-process the image or video features to enhance feature quality, such as edge interpolation or extrapolation to produce continuous and smooth edges.
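As a toy stand-in for the feature extraction performed by the image processor 321 (a real system would compute descriptors such as SIFT or SURF on full-resolution frames, as described above), simple statistical and edge-strength features can be derived from a small grayscale frame represented as a nested list of pixel intensities:

```python
def frame_features(frame):
    """Compute illustrative statistical and edge-like features from a
    grayscale frame given as a list of rows of pixel intensities."""
    pixels = [p for row in frame for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    # Sum of horizontal gradient magnitudes as a crude edge-strength measure
    edge = sum(abs(row[i + 1] - row[i])
               for row in frame for i in range(len(row) - 1))
    return {"mean": mean, "variance": variance, "edge_strength": edge}

# A tiny synthetic 2x3 frame with a bright right edge
frame = [
    [10, 10, 200],
    [10, 10, 200],
]
print(frame_features(frame))
```

Features such as these (or richer morphological descriptors) would then feed the downstream anomaly detection.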
  • the anomaly detector circuit 322 may detect anomalies at any particular segment of the anatomy based at least in part on the segment-specific endoscopic image or video features.
  • the anomaly detection includes detecting and/or recognizing one or more of a presence or absence, a type, a size, a shape, a location, or an amount of pathological tissue or anatomical structures, among other objects in an environment of the anatomy.
  • the anomaly being detected may include pathological tissue segments, such as mucosal abnormalities (polyps, inflammatory bowel diseases, Meckel’s diverticulum, lipoma, bleeding, vascularized mucosa, etc.), or obstructed mucosa (e.g., segments with poor bowel preparation, a distended colon, etc.).
  • the anomaly detector circuit 322 may detect anomalies in a segment of the anatomy using segment-specific images or video streams (or features extracted therefrom) obtained before the endoscope is withdrawn to that specific segment.
  • segment-specific images or video streams generally include the segment-specific insertion images or video streams, or features generated therefrom, during the insertion phase.
  • certain segment-specific withdrawal images or video streams or features generated therefrom may also be used for detecting anomaly.
  • the anomaly detector circuit 322 may detect anomaly using a template matching technique, in which an anomaly may be recognized based on a comparison of the segment-specific endoscopic features (e.g., features characterizing shapes or contours of a structure) to one or more pre-generated templates of known anomalous structure.
  • the anomaly detector circuit 322 may detect the anomaly using artificial intelligence (AI) or machine learning (ML) based techniques. Segment-specific endoscopic images or video streams or features extracted therefrom may be applied to a trained ML model to automatically recognize the presence or absence, type, size, location, and/or other characteristics of the anomaly.
  • the ML model may be trained to establish a correspondence between an endoscopic image or video stream or features extracted therefrom and one or more anomaly characteristics.
  • Examples of the ML model used for recognizing anomaly from endoscopic images or video streams include Convolutional Neural Networks, bidirectional LSTM, Recurrent Neural Networks, Conditional Random Fields, Dictionary Learning, or other machine learning techniques (support vector machine, Bayesian models, decision trees, k-means clustering), among other ML techniques.
  • the trained ML model may be stored in a storage device 340. Examples of training an ML model and using the trained ML model to detect anomaly, to determine segment-specific withdrawal parameter values, and to generate a personalized segment-wise withdrawal plan are discussed below with respect to FIGS. 6A-6B.
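As a minimal illustration of the template-matching idea mentioned above (not the patent's implementation; the feature vectors, template values, and 0.9 threshold are all hypothetical), a contour feature vector can be compared against stored templates of known anomalous structures using cosine similarity:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_anomaly(features, templates, threshold=0.9):
    """Return the name of the best-matching template above the threshold,
    or None if no template matches well enough."""
    best_name, best_score = None, threshold
    for name, template in templates.items():
        score = cosine(features, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical pre-generated templates of known anomalous structures
templates = {"polyp": [1.0, 0.8, 0.1], "diverticulum": [0.1, 0.9, 1.0]}
print(match_anomaly([0.9, 0.85, 0.15], templates))  # closest to "polyp"
```

An ML-based detector would replace this lookup with a trained model, but the interface (features in, anomaly label out) is similar.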
  • the endoscope localization circuit 323 may determine a location of the endoscope in a particular segment of the anatomy in substantially real time. With such real-time endoscope location information, the anomaly detector circuit 322 may associate a detected anomaly with the particular segment of the anatomy.
  • the endoscope location may be determined using electromagnetic tracking, or using anatomical landmark detected from the images or video streams such as obtained during the insertion phase of endoscopy, or image or video stream features extracted therefrom by the image processor 321.
  • the endoscope localization circuit 323 may analyze the images or video streams to recognize colon landmarks such as anus, left ascending colon, splenic flexure, transverse colon, hepatic flexure, right descending colon, cecum, appendix and terminal ileum etc.
  • the endoscope localization circuit 323 may recognize the landmarks using a template matching technique.
  • the endoscope localization circuit 323 may recognize the landmarks using AI or ML techniques, such as a trained ML model that has been trained to establish a correspondence between an endoscopic image or video stream or features extracted therefrom and a landmark recognition.
  • Examples of the ML model used for recognizing an anatomical landmark include Deep Belief Network, ResNet, DenseNet, Autoencoders, capsule networks, generative adversarial networks, Siamese networks, Convolutional Neural Networks (CNN), deep reinforcement learning, support vector machine (SVM), Bayesian models, decision trees, k-means clustering, among other ML models.
  • the trained ML model may be stored in a storage device 340. Once the endoscope location is determined, the endoscope localization circuit 323 may register the substantially real-time endoscope location to a pre-generated template of the anatomy. Information of the endoscope location in the segments of an anatomy may be presented to a user (e.g., the endoscopist) on the user interface 330, as will be discussed further with respect to FIG. 5.
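The registration of a recognized landmark to a pre-generated template of the anatomy can be sketched, under the assumption of a fixed cecum-to-anus landmark ordering, as a simple lookup from the most recently passed landmark to the segment the endoscope tip currently occupies. The landmark and segment names below are illustrative, not a clinical reference:

```python
# Assumed template: (landmark, segment entered after passing it during withdrawal)
COLON_TEMPLATE = [
    ("cecum", "cecum"),
    ("hepatic flexure", "transverse colon"),
    ("splenic flexure", "descending colon"),
    ("sigmoid junction", "sigmoid colon"),
    ("anus", "withdrawn"),
]

def localize(last_landmark):
    """Map the most recently recognized landmark to the current segment."""
    for landmark, segment in COLON_TEMPLATE:
        if landmark == last_landmark:
            return segment
    raise ValueError(f"unknown landmark: {last_landmark}")

print(localize("splenic flexure"))  # endoscope tip now in the descending colon
```

With this registration, each detected anomaly and each measured withdrawal speed can be attributed to a specific segment.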
  • the withdrawal plan generator 324 may generate a personalized, segment-wise withdrawal plan using information about the detected anomaly as produced by the anomaly detector circuit 322, and the substantially real-time location of the endoscope when the anomaly is detected, as produced by the endoscope localization circuit 323.
  • the personalized segment-wise withdrawal plan may include target or recommended values for an endoscope withdrawal parameter (P) for individual ones of the distinct segments of the anatomy.
  • the target or recommended segment-specific endoscope withdrawal parameter values may be estimated using a withdrawal parameter estimator 325.
  • the endoscope withdrawal parameter P includes a target or recommended segment-specific endoscope withdrawal speed limit (WSL).
  • WSL represents an upper speed limit, or an acceptable withdrawal speed range, for any particular segment of the anatomy.
  • the endoscope withdrawal parameter P includes a target or recommended segment-specific endoscope withdrawal time limit (WTL).
  • WTL represents a maximal allowed time, or an acceptable time range, for the endoscope to remain in any particular segment of the anatomy during withdrawal.
  • the segment-specific endoscope withdrawal parameter may have respective target or recommended withdrawal parameter values including, for example, Pc for cecum segment, PA for ascending segment, PT for transverse segment, PD for descending segment, Ps for sigmoid segment, and PR for rectosigmoid segment.
  • Other segments or sub-segments may be defined by the user and the corresponding target or recommended withdrawal parameter values may be determined by the withdrawal parameter estimator 325.
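A personalized segment-wise plan of the kind described above can be represented as a simple mapping from segment to withdrawal parameter value. The sketch below uses withdrawal time limits (WTL, in seconds) keyed by the segments named above (P_C through P_R); the values are hypothetical, since a real plan would be derived from the detected anomalies:

```python
# Hypothetical segment-specific withdrawal time limits (seconds)
plan_wtl = {
    "cecum": 60,         # P_C
    "ascending": 90,     # P_A
    "transverse": 120,   # P_T
    "descending": 90,    # P_D
    "sigmoid": 60,       # P_S
    "rectosigmoid": 45,  # P_R
}

# The segment-wise plan can be checked against a global quality standard,
# e.g., the six-minute minimum total withdrawal time discussed in the summary.
total_s = sum(plan_wtl.values())
print(f"planned minimum inspection time: {total_s / 60:.1f} min")
assert total_s >= 6 * 60
```

A speed-limit (WSL) plan would have the same shape, with per-segment speed values in place of times.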
  • the withdrawal parameter estimator 325 may determine the target or recommended values of a segment-specific endoscope withdrawal parameter (e.g., WSL or WTL) for individual ones of the distinct segments using an Al or ML approach.
  • the result of anomaly detection from the anomaly detector circuit 322 and the landmark detection and real-time endoscope location information from the endoscope localization circuit 323 may be applied to a trained ML model.
  • the ML model may have been trained to establish a correspondence between an anomaly characteristic and a target or recommended value of a segment-specific endoscope withdrawal parameter (e.g., a WSL or WTL).
  • the trained ML model may be stored in a storage device 340. Examples of training an ML model, and using the trained ML model to detect anomaly, to determine segment-specific withdrawal parameter values, and to generate a personalized segment-wise withdrawal plan are discussed below with respect to FIGS. 6A-6B.
  • the withdrawal parameter estimator 325 may compute an anomaly score using one or more anomaly characteristics such as a type, a size, a shape, a location, or an amount of the anomaly.
  • the anomaly score may have numerical values, such as within a range of zero to ten.
  • a composite anomaly score may be generated such as using a linear combination or a non-linear combination of multiple anomaly scores each quantifying an anomaly characteristic.
  • the composite anomaly score can be computed by averaging, or by taking the worst-case anomaly score, within a segment.
  • the withdrawal parameter estimator 325 may map the anomaly score (or the composite anomaly score) to a withdrawal parameter value based on a comparison to one or more threshold values or value ranges.
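The scoring and threshold mapping described in the preceding bullets can be sketched as follows. The combination rules follow the text (average or worst case within a segment), but the threshold values and resulting speed limits are assumptions for illustration only:

```python
def composite_score(scores, method="worst"):
    """Combine per-finding anomaly scores (0-10) within a segment, either
    by taking the worst case or by averaging."""
    if method == "worst":
        return max(scores)
    return sum(scores) / len(scores)  # "average"

def score_to_wsl(score):
    """Map a composite anomaly score to a hypothetical withdrawal speed
    limit in cm/s via threshold comparison."""
    if score >= 7:
        return 0.2   # severe findings: inspect very slowly
    if score >= 4:
        return 0.35  # moderate findings
    return 0.5       # unremarkable segment

segment_scores = [2.0, 6.5, 3.0]  # e.g., three findings in one segment
c = composite_score(segment_scores, method="worst")
print(c, score_to_wsl(c))  # worst case 6.5 falls in the middle speed tier
```

The same mapping could instead produce a withdrawal time limit (WTL) per segment, or an acceptable range rather than a single value.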
  • the auxiliary input 315 may include, by way of example and not limitation, image or video streams and clinical data from previous endoscopy procedures 410 performed on the patient, patient information and medical history 420, or pre-procedure imaging study data 430 (e.g., X-ray or fluoroscopy images, electrical potential map or an electrical impedance map, computer tomography (CT) images, magnetic resonance imaging (MRI) images, among other imaging modalities).
  • the auxiliary input may include previous colonoscopies in surveillance cases, including those polyps that the endoscopist left in situ or areas of the colon which were operated on.
  • the patient information and medical history 420 may include, in a typical endoscopy scenario, clinical demographic information, past and current indications and treatment received, etc.

Abstract

Systems, devices, and methods for creating and using a personalized, segment-specific endoscope withdrawal plan to guide an endoscopy procedure are disclosed. An endoscope system includes an endoscope and a controller circuit. The endoscope includes an imaging system to obtain images or video streams of distinct segments of an anatomy in a patient during an endoscopy procedure. The controller circuit can analyze the images or video streams to generate image or video features for individual ones of the distinct segments of the anatomy. Based on the image or video features, the controller circuit can generate an endoscope withdrawal plan including target or recommended segment-specific endoscope withdrawal parameter values for each of the segments. The endoscope withdrawal plan may be provided to the user or a robotic system to facilitate manual or robotic withdrawal of the endoscope.

Description

IMAGE-GUIDED ENDOSCOPE WITHDRAWAL CONTROL
PRIORITY CLAIM
[0001] This application claims the benefit of priority to U.S. Provisional Patent Application Serial No. 63/582,026, filed September 12, 2023, the contents of which are incorporated herein by reference.
FIELD OF THE DISCLOSURE
[0002] The present document relates generally to endoscopic medical systems, and more particularly to systems and methods for image-guided withdrawal of an endoscope during an endoscopy procedure.
BACKGROUND
[0003] Endoscopes have been used in a variety of clinical procedures, including, for example, illuminating, imaging, detecting and diagnosing one or more disease states, providing fluid delivery (e.g., saline or other preparations via a fluid channel) toward an anatomical region, providing passage (e.g., via a working channel) of one or more therapeutic devices or biological matter collection devices for sampling or treating an anatomical region, and providing suction passageways for collecting fluids (e.g., saline or other preparations), among other procedures. Examples of such anatomical region can include gastrointestinal tract (e.g., esophagus, stomach, duodenum, pancreaticobiliary duct, intestines, colon, and the like), renal area (e.g., kidney(s), ureter, bladder, urethra) and other internal organs (e.g., reproductive systems, sinus cavities, submucosal regions, respiratory tract), and the like.
[0004] Some endoscopes include a working channel through which an operator can perform suction, placement of diagnostic or therapeutic devices (e.g., a brush, a biopsy needle or forceps, a stent, a basket, or a balloon), or minimally invasive surgeries such as tissue sampling or removal of unwanted tissue (e.g., benign or malignant strictures) or foreign objects (e.g., calculi). Some endoscopes can be used with a laser or plasma system to deliver energy to an anatomical target (e.g., soft or hard tissue or calculi) to achieve desired treatment. For example, laser has been used in applications of tissue ablation, coagulation, vaporization, fragmentation, and lithotripsy to break down calculi in kidney, gallbladder, ureter, among other stone-forming regions, or to ablate large calculi into smaller fragments.
[0005] An endoscopy procedure involves an insertion phase where an endoscope is passed through a natural orifice and into a body cavity until reaching a target site, and a subsequent withdrawal phase where the endoscope is carefully withdrawn from the body. Diagnostic or therapeutic operations (e.g., biopsy or removal of certain abnormal or pathological tissue or foreign objects) may occur at the target anatomy or during the withdrawal. For example, a colonoscopy procedure involves passing a colonoscope through the anus and all the way up to the beginning of the colon, called the cecum. During withdrawal, colon segments may be inspected, and polyps of certain sizes that are detected during insertion may be removed (polypectomy). Colonoscopy with diagnosis and treatment of polyps has been used to reduce incidence and mortality rate of colorectal cancer.
SUMMARY
[0006] Colonoscope withdrawal time during colonoscopy is an important quality measure, especially for negative procedures. To ensure sufficient inspection and treatment time, withdrawal time standards ranging from a minimal standard of more than six minutes to an aspirational standard of more than ten minutes have been reported as colonoscopy quality assurance standards. Longer withdrawal time may increase the adenoma detection rate (ADR).
[0007] To ensure sufficient inspection time and to prevent the colonoscope from being withdrawn too quickly, a speedometer or the like has been proposed to display the withdrawal speed. However, the withdrawal speed limits or withdrawal time thresholds used for assessing the withdrawal in an endoscopy procedure are generally predetermined, such as based on the generally accepted standards (e.g., at least six minutes or at least ten minutes). These “generic” withdrawal standards are highly generalized standards that are determined without consideration of the unique circumstances that individual procedures present. For example, the target withdrawal time of more than six minutes is a quality indicator primarily for negative-result colonoscopies in average-risk patients with intact colons. This includes patients without previous surgical resection, and in whom no biopsies or polypectomies are performed. In the case of a patient with colon cancer, or of patients under surveillance (e.g., due to previously identified and/or resected polyps from past colonoscopy procedures for a specific patient), such generic withdrawal standards may be less effective or sub-optimal. Additionally, the generic withdrawal time may be ineffective if bowel preparation in one or more colon segments is poor, in which case the endoscopist may need to be diligent in withdrawing slowly, cleaning the bowel along the way, and ensuring complete mucosal inspection to avoid any hidden polyps.
[0008] In addition to the inter-patient variation in withdrawal time requirements, such as due to patient medical conditions and/or bowel preparation qualities as stated above, the generic withdrawal time standards (e.g., a minimum of six minutes or ten minutes) focus on the global withdrawal time applicable to the entirety of the colonoscope withdrawal phase. They lack specificity as to withdrawal times for any particular colon segment. Because different colon segments generally have different bowel preparation qualities and/or different chances of presenting anomalous structures (e.g., polyps or cancer), the desired withdrawal time may vary from segment to segment. A generic, global withdrawal time provides little guidance as to how fast or how slow the colonoscope should be withdrawn in different segments of the colon.
[0009] The inventors of the present disclosure have identified an unmet need to improve quality control of endoscope withdrawal during a colonoscopy procedure. This document describes systems and methods for image-based, segment-specific endoscope withdrawal in an endoscopy procedure. An exemplary system can receive endoscopic images or video streams obtained during insertion or at any time prior to the endoscope being withdrawn beyond a target segment, analyze the images or video streams, and generate a personalized, segment-specific endoscope withdrawal plan. The segment-specific endoscope withdrawal plan may include target endoscope withdrawal speed limits or withdrawal time limits for each of a plurality of distinct segments of an anatomy. The personalized, segment-specific endoscope withdrawal plan can be presented to the user in a graphical form such as a withdrawal speed map or a withdrawal time map. During withdrawal, the personalized, segment-specific endoscope withdrawal plan may serve as guidance for mucosal inspection and necessary treatment (e.g., polypectomy). Actual withdrawal speed can be tracked and measured. Substantially real-time feedback on withdrawal speed or time spent on a segment of anatomy (e.g., a colon segment), and recommendations to adjust endoscope withdrawal speed, can be provided to the endoscopist. The real-time monitoring of endoscope withdrawal with reference to the personalized, segment-specific endoscope withdrawal plan can improve the quality of mucosal inspection and screening accuracy and efficiency. In some examples, the personalized segment-wise endoscope withdrawal plan may be provided to a robotic colonoscopy system to facilitate robotically assisted endoscopy and to ensure that withdrawal speed is within recommended limits.
[0010] Example 1 is an endoscope system that includes: an endoscope, including an imaging system configured to obtain images or video streams of distinct segments of an anatomy of a patient during an endoscopy procedure comprising insertion and subsequent withdrawal of the endoscope in the anatomy; and a controller circuit configured to: analyze the obtained images or video streams to generate endoscopic image or video features for individual ones of the distinct segments of the anatomy; based at least in part on the endoscopic image or video features, generate an endoscope withdrawal plan comprising target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments of the anatomy; and provide the endoscope withdrawal plan to a user or a robotic system to facilitate manual or robotic withdrawal of the endoscope.
[0011] In Example 2, the subject matter of Example 1 optionally includes the target or recommended segment-specific endoscope withdrawal parameter values that can include target segment-specific endoscope withdrawal speed limits (WSL) for the individual ones of the distinct segments of the anatomy.
[0012] In Example 3, the subject matter of any one or more of Examples 1-2 optionally includes the target or recommended segment-specific endoscope withdrawal parameter values that can include target segment-specific endoscope withdrawal time limits (WTL) for the individual ones of the distinct segments of the anatomy.
[0013] In Example 4, the subject matter of any one or more of Examples 1-3 optionally includes a colonoscope for use in a colonoscopy procedure, wherein the imaging system is configured to obtain images or video streams from each of distinct colon segments during the colonoscopy procedure.
[0014] In Example 5, the subject matter of any one or more of Examples 1-4 optionally includes, wherein to determine the target or recommended segment-specific endoscope withdrawal parameter values includes to determine, for a first segment of the anatomy, a first target segment-specific endoscope withdrawal parameter using at least the endoscopic image or video features generated from the images or video streams obtained prior to the endoscope being withdrawn beyond the first segment during the manual or robotic withdrawal of the endoscope.
[0015] In Example 6, the subject matter of Example 5 optionally includes the images or video streams obtained prior to the endoscope reaching the first segment that can include images or video streams obtained during the insertion of the endoscope in the anatomy.
[0016] In Example 7, the subject matter of any one or more of Examples 1-6 optionally includes the controller circuit that can be configured to: perform anomaly detection that includes detecting one or more anomalies in the individual ones of the distinct segments based at least in part on the endoscopic image or video features; and determine the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments based at least in part on a result of the anomaly detection.
[0017] In Example 8, the subject matter of Example 7 optionally includes the anomaly detection that can include recognizing one or more of a presence or absence, a type, a size, a shape, a location, or an amount of pathological tissue or obstructed mucosa.
[0018] In Example 9, the subject matter of any one or more of Examples 7-8 optionally includes the controller circuit that can be configured to detect the one or more anomalies using a first trained machine-learning (ML) model, the first ML model trained to establish a correspondence between (i) an endoscopic image or video stream or features extracted therefrom and (ii) one or more anomaly characteristics.
[0019] In Example 10, the subject matter of any one or more of Examples 7-9 optionally includes the controller circuit that can be configured to determine the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments by applying the result of anomaly detection to a second trained machine-learning (ML) model, the second ML model trained to establish a correspondence between (i) one or more anomaly characteristics and (ii) a target or recommended segment-specific endoscope withdrawal parameter value.
[0020] In Example 11, the subject matter of Example 10 optionally includes the controller circuit that can be configured to: determine an anomaly score based on a type, a size, a shape, a location, or an amount of anomaly; and determine the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments by applying the determined anomaly score to the second trained ML model.
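A scalar anomaly score of the kind described in Example 11 might be assembled from per-finding characteristics along the following lines; the weight table, the size-saturation threshold, and the finding-dictionary keys are purely hypothetical choices for illustration.

```python
def anomaly_score(findings):
    """Combine anomaly characteristics into a single score in [0, 1].

    `findings` is a list of dicts with illustrative keys ("type",
    "size_mm"); the type weights below are assumed for this sketch,
    not values from the disclosure.
    """
    type_weight = {"polyp": 0.6, "cancer": 1.0, "obstructed_mucosa": 0.3}
    score = 0.0
    for f in findings:
        w = type_weight.get(f["type"], 0.2)
        # Larger anomalies contribute more, saturating at 10 mm.
        size_factor = min(f.get("size_mm", 0) / 10.0, 1.0)
        score += w * (0.5 + 0.5 * size_factor)
    return min(score, 1.0)
```

The resulting score could then be applied to the second trained ML model to obtain a target segment-specific withdrawal parameter value.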
[0021] In Example 12, the subject matter of any one or more of Examples 1-11 optionally includes the controller circuit that can be configured to generate the endoscope withdrawal plan further using one or more of image or video streams and clinical data from previous endoscopy procedures; patient information and medical history; or pre-procedure imaging study data.
[0022] In Example 13, the subject matter of any one or more of Examples 1-12 optionally includes the controller circuit that can be configured to display on a user interface a graphic representation of the endoscope withdrawal plan, the graphical representation including depictions of distinct segments of the anatomy each color-coded or grayscale-coded to indicate respective target or recommended segment-specific endoscope withdrawal parameter values.
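The color- or grayscale-coding described in Example 13 can be derived directly from the per-segment parameter values. A minimal sketch, assuming each segment's withdrawal speed limit (WSL) is mapped linearly onto a grayscale hex color, with darker shades marking segments that call for slower withdrawal; the WSL value range is an assumed example:

```python
def segment_color(wsl, wsl_min=0.1, wsl_max=0.5):
    """Grayscale-code a segment by its WSL (cm/s): darker = slower.

    The [wsl_min, wsl_max] range is an illustrative assumption; values
    outside it are clamped.
    """
    t = (wsl - wsl_min) / (wsl_max - wsl_min)
    t = max(0.0, min(1.0, t))
    level = int(round(255 * t))
    return "#%02x%02x%02x" % (level, level, level)
```

A user interface could paint each depicted segment of the anatomy template with the color returned for its WSL.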
[0023] In Example 14, the subject matter of Example 13 optionally includes the controller circuit that can be configured to: determine in substantially real time a location of the endoscope in one of the distinct segments of the anatomy based at least in part on the endoscopic image or video features; register the location of the endoscope to a pre-generated template of the anatomy; and display on the user interface the location of the endoscope in the one of the distinct segments as overlay on the graphic representation of the endoscope withdrawal plan.
[0024] In Example 15, the subject matter of any one or more of Examples 2-14 optionally includes the controller circuit that can be configured to: measure an endoscope withdrawal speed in one of the distinct segments of the anatomy; and when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generate an alert to the user, and provide a recommendation to adjust the withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
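The deviation check in Example 15 reduces to comparing the measured withdrawal speed against the segment's WSL with a tolerance margin. A minimal sketch, where the fractional `margin` is an assumed parameter:

```python
def check_withdrawal_speed(measured_cm_per_s, wsl_cm_per_s, margin=0.1):
    """Compare measured withdrawal speed against a segment's WSL.

    Returns (alert, recommendation). An alert is raised only when the
    measured speed exceeds the WSL by more than the fractional margin;
    the 10% default margin is an illustrative choice.
    """
    if measured_cm_per_s > wsl_cm_per_s * (1 + margin):
        return True, "Slow down to <= %.2f cm/s" % wsl_cm_per_s
    return False, None
```

In the robotic variant of Example 16, the same comparison would instead drive a control signal to the robotic system rather than a user alert.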
[0025] In Example 16, the subject matter of any one or more of Examples 2-15 optionally includes the robotic system that can be configured to robotically withdraw the endoscope, and wherein the controller circuit is configured to: measure an endoscope withdrawal speed in one of the distinct segments of the anatomy; and when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generate a control signal to the robotic system to automatically adjust the withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
[0026] In Example 17, the subject matter of any one or more of Examples 2-16 optionally includes the controller circuit that can be configured to: identify an anomalous segment from the distinct segments based at least in part on the endoscopic image or video features; and during the manual or robotic withdrawal of the endoscope and prior to the endoscope reaching the identified anomalous segment, display a visual indicator of the anomalous segment on a user interface, and generate a warning to the user to withdraw the endoscope at a speed lower than the target segment-specific endoscope WSL for the identified anomalous segment.
[0027] Example 18 is a method of planning withdrawal of an endoscope from an anatomy of a patient in an endoscopy procedure, the method including steps of: obtaining images or video streams of distinct segments of the anatomy during an insertion phase of the endoscopy procedure using an imaging system associated with the endoscope; analyzing the obtained images or video streams to generate endoscopic image or video features for individual ones of the distinct segments of the anatomy; based at least in part on the endoscopic image or video features, generating an endoscope withdrawal plan comprising target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments of the anatomy; and providing the endoscope withdrawal plan to a user or a robotic system to facilitate manual or robotic withdrawal of the endoscope.
[0028] In Example 19, the subject matter of Example 18 optionally includes the target or recommended segment-specific endoscope withdrawal parameter values that can include target segment-specific endoscope withdrawal speed limits (WSL) or target segment-specific endoscope withdrawal time limits (WTL) for the individual ones of the distinct segments of the anatomy.
[0029] In Example 20, the subject matter of any one or more of Examples 18-19 optionally includes determining a first target segment-specific endoscope withdrawal parameter for a first segment of the anatomy using at least the endoscopic image or video features generated from the images or video streams obtained prior to the endoscope being withdrawn beyond the first segment during the manual or robotic withdrawal of the endoscope.
[0030] In Example 21, the subject matter of Example 20 optionally includes the images or video streams obtained prior to the endoscope reaching the first segment that can include images or video streams obtained during the insertion of the endoscope in the anatomy.
[0031] In Example 22, the subject matter of any one or more of Examples 18-21 optionally includes: performing anomaly detection, including detecting one or more anomalies in the individual ones of the distinct segments based at least in part on the endoscopic image or video features; and determining the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments based at least in part on a result of the anomaly detection.
[0032] In Example 23, the subject matter of Example 22 optionally includes anomaly detection using a first trained machine-learning (ML) model, the first ML model trained to establish a correspondence between (i) an endoscopic image or video stream or features extracted therefrom and (ii) one or more anomaly characteristics.
[0033] In Example 24, the subject matter of any one or more of Examples 22-23 optionally includes determining the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments that can include applying the result of anomaly detection to a second trained machine-learning (ML) model, the second ML model trained to establish a correspondence between (i) one or more anomaly characteristics and (ii) a target or recommended segment-specific endoscope withdrawal parameter value.
[0034] In Example 25, the subject matter of any one or more of Examples 18-24 optionally includes generating the endoscope withdrawal plan further using one or more of image or video streams and clinical data from previous endoscopy procedures; patient information and medical history; or pre-procedure imaging study data.
[0035] In Example 26, the subject matter of any one or more of Examples 18-25 optionally includes displaying on a user interface a graphic representation of the endoscope withdrawal plan, the graphical representation including depictions of distinct segments of the anatomy each color-coded or grayscale-coded to indicate respective target or recommended segment-specific endoscope withdrawal parameter values.
[0036] In Example 27, the subject matter of Example 26 optionally includes determining in substantially real time a location of the endoscope; registering the location of the endoscope to a pre-generated template of the anatomy; and displaying on the user interface the location of the endoscope in the one of the distinct segments as overlay on the graphic representation of the endoscope withdrawal plan.
[0037] In Example 28, the subject matter of any one or more of Examples 19-27 optionally includes measuring an endoscope withdrawal speed in one of the distinct segments of the anatomy; and when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generating an alert to the user, and providing a recommendation to adjust the endoscope withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
[0038] In Example 29, the subject matter of any one or more of Examples 19-28 optionally includes measuring an endoscope withdrawal speed in one of the distinct segments of the anatomy; and when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generating a control signal to a robotic system to automatically adjust the withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
[0039] In Example 30, the subject matter of any one or more of Examples 19-29 optionally includes identifying an anomalous segment from the distinct segments based at least in part on the endoscopic image or video features; and during the manual or robotic withdrawal of the endoscope and prior to the endoscope reaching the identified anomalous segment, displaying a visual indicator of the anomalous segment on a user interface, and generating a warning to the user to withdraw the endoscope at a speed lower than the target segment-specific endoscope WSL for the identified anomalous segment.
[0040] The presented techniques are described in terms of controlled withdrawal of an endoscope in colonoscopy, but are not so limited. The systems, devices, and techniques described in accordance with various embodiments in this document may additionally or alternatively be used in other procedures involving different types of endoscopes, including, for example, anoscopy, arthroscopy, bronchoscopy, colposcopy, cystoscopy, esophagoscopy, gastroscopy, laparoscopy, laryngoscopy, neuroendoscopy, proctoscopy, sigmoidoscopy, thoracoscopy, etc.
[0041] This summary is an overview of some of the teachings of the present application and is not intended to be an exclusive or exhaustive treatment of the present subject matter. Further details about the present subject matter are found in the detailed description and appended claims. Other aspects of the disclosure will be apparent to persons skilled in the art upon reading and understanding the following detailed description and viewing the drawings that form a part thereof, none of which is to be taken in a limiting sense. The scope of the present disclosure is defined by the appended claims and their legal equivalents.
BRIEF DESCRIPTION OF THE DRAWINGS
[0042] FIGS. 1-2 are schematic diagrams illustrating an example of an endoscope system for use in endoscopy procedures, such as a colonoscopy procedure.
[0043] FIG. 3 illustrates an example of an endoscope system configured to create a personalized, segment-specific endoscope withdrawal plan and to use said plan to guide endoscope withdrawal in an endoscopy procedure.
[0044] FIG. 4 illustrates examples of auxiliary input that may be used in withdrawal parameter estimation and creation of a personalized, segmentspecific endoscope withdrawal plan.
[0045] FIG. 5 illustrates by way of example different stages of an image-guided colonoscopy in accordance with embodiments discussed herein.
[0046] FIGS. 6A-6B illustrate examples of training a machine learning (ML) model, and using the trained ML model to determine a target or recommended segment-specific endoscope withdrawal parameter value for any particular segment of the anatomy.
[0047] FIGS. 7A-7B are graphs illustrating examples of guided withdrawal and speed tracking in a colonoscopy procedure using a personalized, segment-specific endoscope withdrawal plan.
[0048] FIG. 8 is a flow chart illustrating an example method of creating a personalized, segment-specific endoscope withdrawal plan and using said plan to guide endoscope withdrawal in an endoscopy procedure.
[0049] FIG. 9 is a block diagram illustrating an example machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may be performed.
DETAILED DESCRIPTION
[0050] This document describes systems, devices, and methods for creating and using a personalized, segment-specific endoscope withdrawal plan to guide withdrawal during an endoscopy procedure. According to one embodiment, an endoscope system includes an endoscope and a controller circuit. The endoscope includes an imaging system to obtain images or video streams of distinct segments of an anatomy of a patient during an endoscopy procedure. The controller circuit can analyze the obtained images or video streams, and generate image or video features for individual ones of the distinct segments of the anatomy. Based on the image or video features, the controller circuit can generate a personalized, segment-specific endoscope withdrawal plan that includes target or recommended values of an endoscope withdrawal parameter (such as a withdrawal speed or withdrawal time) for the individual ones of the distinct segments. The personalized endoscope withdrawal plan may be provided to the user or a robotic system to facilitate manual or robotic withdrawal of the endoscope.
[0051] FIG. 1 is a schematic diagram of an endoscope system 10 for use in an endoscopy procedure, such as colonoscopy. The system 10 can include an imaging and control system 12 and an endoscope 14. The system 10 is an illustrative example of an endoscope system suitable for use with the systems, devices, and methods described herein, such as a colonoscopy system for use in image-guided colonoscopy in accordance with a personalized segment-wise endoscope withdrawal plan as described in this document.
[0052] The endoscope 14 can be insertable into an anatomical region for imaging or to provide passage of or attachment to (e.g., via tethering) one or more sampling devices for biopsies or therapeutic devices for treatment of a disease state associated with the anatomical region. The endoscope 14 can interface with and connect to the imaging and control system 12. The endoscope 14 can also include a colonoscope, though other types of endoscopes can be used with the features and teachings of the present disclosure. The imaging and control system 12 can include a control unit 16, an output unit 18, an input unit 20, a light source unit 22, a fluid source 24, and a suction pump 26.
[0053] The imaging and control system 12 can include various ports for coupling with the endoscope system 10. For example, the control unit 16 can include a data input/output port for receiving data from and communicating data to the endoscope 14. The light source unit 22 can include an output port for transmitting light to the endoscope 14, such as via a fiber optic link. The fluid source 24 can include a port for transmitting fluid to the endoscope 14. The fluid source 24 can include, for example, a pump and a tank of fluid or can be connected to an external tank, vessel, or storage unit. The suction pump 26 can include a port to draw a vacuum from the endoscope 14 to generate suction, such as for withdrawing fluid from the anatomical region into which the endoscope 14 is inserted. The output unit 18 and the input unit 20 can be used by an operator of the endoscope system 10 to control functions of the endoscope system 10 and view the output of the endoscope 14. The control unit 16 can also generate signals or other outputs for treating the anatomical region into which the endoscope 14 is inserted. In some examples, the control unit 16 can generate electrical output, acoustic output, fluid output, and the like for treating the anatomical region with, for example, cauterizing, cutting, freezing, and the like.
[0054] The fluid source 24 can be in communication with the control unit 16 and can include one or more sources of air, saline, or other fluids, as well as associated fluid pathways (e.g., air channels, irrigation channels, suction channels, or the like) and connectors (barb fittings, fluid seals, valves, or the like). The fluid source 24 can be utilized as an activation energy for a biasing device or a pressure-applying device of the present disclosure. The imaging and control system 12 can also include the drive unit 46, which can include a motorized drive for advancing a distal section of the endoscope 14.
[0055] The endoscope 14 can include an insertion section 28, a functional section 30, and a handle section 32, which can be coupled to a cable section 34 and a coupler section 36. The insertion section 28 can extend distally from the handle section 32, and the cable section 34 can extend proximally from the handle section 32. The insertion section 28 can be elongated and can include a bending section and a distal end to which the functional section 30 can be attached. The bending section can be controllable (e.g., by a control knob 38 on the handle section 32) to maneuver the distal end through tortuous anatomical passageways (e.g., stomach, duodenum, kidney, ureter, etc.). The insertion section 28 can also include one or more working channels (e.g., an internal lumen) that can be elongated and can support the insertion of one or more therapeutic tools of the functional section 30. The working channel can extend between the handle section 32 and the functional section 30. Additional functionalities, such as fluid passages, guide wires, and pull wires, can also be provided by the insertion section 28 (e.g., via suction or irrigation passageways or the like).
[0056] A coupler section 36 can be connected to the control unit 16 to connect the endoscope 14 to multiple features of the control unit 16, such as the input unit 20, the light source unit 22, the fluid source 24, and the suction pump 26.
[0057] The handle section 32 can include the knob 38 and the port 40A. The knob 38 can be connected to a pull wire or other actuation mechanisms that can extend through the insertion section 28. The port 40A, as well as other ports, such as a port 40B (FIG. 2), can be configured to couple various electrical cables, guide wires, auxiliary scopes, tissue collection devices, fluid tubes, and the like to the handle section 32, such as for coupling with the insertion section 28.
[0058] According to examples, the imaging and control system 12 can be provided on a mobile platform (e.g., a cart 41) with shelves for housing the light source unit 22, the suction pump 26, an image processing unit 42 (FIG. 2), etc. Alternatively, several components of the imaging and control system 12 (shown in FIGS. 1 and 2) can be provided directly on the endoscope 14 to make the endoscope “self-contained.”
[0059] The functional section 30 can include components for treating and diagnosing anatomy of a patient. The functional section 30 can include an imaging device, an illumination device, and an elevator. The functional section 30 can further include optically enhanced biological matter and tissue collection and retrieval devices. For example, the functional section 30 can include one or more electrodes conductively connected to the handle section 32 and functionally connected to the imaging and control system 12 to analyze biological matter in contact with the electrodes based on comparative biological data stored in the imaging and control system 12. In other examples, the functional section 30 can directly incorporate tissue collectors.
[0060] In some examples, the endoscope 14 can be robotically controlled, such as by a robot arm attached thereto. The robot arm can automatically, or semi-automatically (e.g., with certain user manual control or commands), via an actuator, position and navigate the endoscope 14 (e.g., the functional section 30 and/or the insertion section 28) in the target anatomy, or position a device at a desired location with a desired posture to facilitate an operation at an anatomical target. In accordance with various examples discussed in this document, a controller can generate a control signal to the actuator of the robot arm to facilitate operation of such instruments or tools in accordance with the personalized, segment-specific endoscope withdrawal plan in a robotically assisted endoscopy procedure.
[0061] FIG. 2 is a schematic diagram of the endoscope system 10 of FIG. 1 including the imaging and control system 12 and the endoscope 14. FIG. 2 schematically illustrates components of the imaging and control system 12 coupled to the endoscope 14, which in the illustrated example includes a colonoscope. The imaging and control system 12 can include the control unit 16, which can include or be coupled to an image processing unit 42, a treatment generator 44, and a drive unit 46, as well as the light source unit 22, the input unit 20, and the output unit 18. The control unit 16 can include, or can be in communication with, an endoscope, a surgical instrument 48, and an endoscope system, which can include a device configured to engage tissue and collect and store a portion of that tissue and through which imaging equipment (e.g., a camera) can view target tissue via inclusion of optically enhanced materials and components. The control unit 16 can be configured to activate a camera to view target tissue distal of the endoscope system. Likewise, the control unit 16 can be configured to activate the light source unit 22 to shine light on the surgical instrument 48, which can include select components configured to reflect light in a particular manner, such as enhanced tissue cutters with reflective particles.
[0062] The coupler section 36 can be connected to the control unit 16 to connect the endoscope 14 to multiple features of the control unit 16, such as the image processing unit 42 and the treatment generator 44. In examples, the port 40A can be used to insert another surgical instrument 48 or device, such as a daughter scope or auxiliary scope, into the endoscope 14. Such instruments and devices can be independently connected to the control unit 16 via the cable 47. In examples, the port 40B can be used to connect the coupler section 36 to various inputs and outputs, such as video, air, light, and electric.
[0063] The image processing unit 42 and the light source unit 22 can each interface with the endoscope 14 (e.g., at the functional section 30) by wired or wireless electrical connections. The imaging and control system 12 can accordingly illuminate an anatomical region, collect signals representing the anatomical region, process signals representing the anatomical region, and display images representing the anatomical region on the output unit 18. The imaging and control system 12 can include the light source unit 22 to illuminate the anatomical region using light of a desired spectrum (e.g., broadband white light, narrow-band imaging using preferred electromagnetic wavelengths, and the like). The imaging and control system 12 can connect (e.g., via an endoscope connector) to the endoscope 14 for signal transmission (e.g., light output from the light source, video signals from the imaging system in the distal end, diagnostic and sensor signals from a diagnostic device, and the like).
[0064] The treatment generator 44 can generate a treatment plan, which can be used by the control unit 16 to control the operation of the endoscope 14, or to provide the operating physician with guidance for maneuvering the endoscope 14, during an endoscopy procedure. In an example, the treatment generator 44 can generate an endoscope navigation plan, including estimated values for one or more cannulation or navigation parameters (e.g., an angle, a force, etc.) for maneuvering the steerable elongate instrument, using patient information including an image of the target anatomy. The endoscope navigation plan can help guide the operating physician to cannulate and navigate the endoscope in the patient anatomy. The endoscope navigation plan may additionally or alternatively be used to robotically adjust the position, angle, force, and/or navigation of the endoscope or other instrument.
[0065] FIG. 3 is a block diagram that illustrates an example of an endoscope system 300 that can create a personalized, segment-specific endoscope withdrawal plan, and use said plan as a guidance in endoscope withdrawal. In an example, the endoscope system 300 may be used in a colonoscopy procedure to provide guided mucosal inspection and necessary treatment (e.g., polypectomy) during withdrawal. The system 300 may be implemented as a part of the control unit 16 in FIG. 1.
[0066] The system 300 may include one or more of an endoscope 310, an auxiliary input 315, a controller circuit 320, a user interface 330, and a storage device 340. In some examples, the system 300 may further include or be communicatively coupled to a robotic system 350 in a robotically assisted endoscopy procedure.
[0067] The endoscope 310 can be an example of the endoscope 14 as described above and shown in FIGS. 1-2. The endoscope 310 may include, among other things, an imaging system 312 and a lighting system 314. The imaging system 312 may include at least one imaging sensor or device (e.g., a camera) configured to obtain endoscopic images or video streams of a target anatomy of the patient during an endoscopy procedure. The imaging sensor or device may be located at a distal portion or a distal end of the endoscope 310. The lighting system 314 may include one or more light sources to provide illumination on the target anatomy through one or more lighting lenses. In some examples, the imaging system 312 may be controllably adjusted to operate in one of a plurality of distinct imaging modes. These imaging modes may differ in one or more of zoom settings, contrast settings, exposure levels, or viewing angles toward the target anatomy. In some examples, the lighting system 314 may be controllably adjusted to provide different lighting or illumination conditions.
[0068] An endoscopy procedure generally includes an insertion phase where an endoscope is passed into a body cavity through a natural orifice, and a subsequent withdrawal phase where the endoscope is withdrawn from the body through the body cavity and the natural orifice. The imaging system 312 may obtain images or video streams during the insertion phase (hereinafter referred to as “insertion images or video streams”) and during the withdrawal phase (hereinafter referred to as “withdrawal images or video streams”). The insertion images or video streams and the withdrawal images or video streams may be used in different applications. When a target anatomy comprises a plurality of anatomically distinct or artificially defined segments or portions, the imaging system 312 may obtain images or video streams from each of those distinct segments of the anatomy during the insertion phase (hereinafter referred to as segment-specific insertion images or video streams), or during the withdrawal phase (hereinafter referred to as segment-specific withdrawal images or video streams). As will be discussed further below, during colonoscopy, the segment-specific insertion images or video streams may be used to detect anomalies at any particular segment, and based at least in part on the detected anomalies, a segment-specific endoscope withdrawal plan can be created that defines a withdrawal speed limit or time limit for each of the segments. Segment-specific withdrawal images or video streams may be analyzed to localize the endoscope position in substantially real time, and to track the real-time withdrawal speed. Using the segment-wise withdrawal plan as a reference, timely feedback on endoscope withdrawal and recommendations for adjusting withdrawal may be provided to the endoscopist.
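The real-time localization and speed-tracking loop described in this paragraph could be structured along the following lines; `localize` and `estimate_speed` stand in for the image-based localization and speed-measurement components and are hypothetical callbacks, as is the dict-based plan format.

```python
def track_withdrawal(frames, plan, localize, estimate_speed):
    """Per-frame withdrawal monitoring against a segment-wise plan.

    plan: dict mapping segment name -> target withdrawal speed limit (cm/s).
    localize(frame): returns the segment the endoscope is currently in.
    estimate_speed(frame): returns the current withdrawal speed in cm/s.
    Returns a list of (segment, recommendation) feedback events that
    could be surfaced to the endoscopist in substantially real time.
    """
    feedback = []
    for frame in frames:
        segment = localize(frame)
        speed = estimate_speed(frame)
        wsl = plan.get(segment)
        if wsl is not None and speed > wsl:
            feedback.append((segment, "reduce speed to <= %.2f cm/s" % wsl))
    return feedback
```

In a robotically assisted procedure, each feedback event could instead be translated into a control signal that slows the robotic withdrawal.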
[0069] The controller circuit 320 may include circuit sets comprising one or more other circuits or sub-circuits that may, alone or in combination, perform the functions, methods, or techniques described herein. In an example, the controller circuit 320 and the circuit sets therein may be implemented as a part of a microprocessor circuit, which may be a dedicated processor such as a digital signal processor, application specific integrated circuit (ASIC), microprocessor, or other type of processor for processing information, including endoscopic image information. Alternatively, the microprocessor circuit may be a general-purpose processor that may receive and execute a set of instructions for performing the functions, methods, or techniques described herein. In an example, hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation. Accordingly, the computer readable medium is communicatively coupled to the other components of the circuit set member when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuit set.
For example, under operation, execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time.
[0070] The controller circuit 320 may include one or more of an image processor 321, an anomaly detector circuit 322, an endoscope localization circuit 323, a withdrawal plan generator 324, a real-time endoscope withdrawal tracker circuit 327, and an endoscope withdrawal controller 328. The image processor 321 can analyze the segment-specific images or video streams obtained from the imaging system 312, and generate endoscopic image or video features for individual ones of the distinct segments of the anatomy. Examples of the image or video features include statistical features of pixel values or morphological features, such as corners, edges, blobs, curvatures, speeded up robust features (SURF), or scale-invariant feature transform (SIFT) features, among others. In some examples, the image processor 321 may pre-process the segment-specific images or video streams, such as by filtering, resizing, orienting, or performing color or grayscale correction, and the endoscopic image or video features may be extracted from the pre-processed images or video streams. In some examples, the image processor 321 may post-process the image or video features to enhance feature quality, such as by edge interpolation or extrapolation to produce continuous and smooth edges.
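As an illustrative sketch of the kind of statistical and morphological features mentioned above (the function name, the specific statistics, and the gradient-based edge proxy are assumptions for illustration, not the disclosed implementation), per-frame features might be computed as:

```python
import numpy as np

def segment_frame_features(frame: np.ndarray) -> dict:
    """Extract simple statistical and edge features from a grayscale
    endoscopic frame (2-D array). A minimal stand-in for the richer
    SURF/SIFT descriptors named in the text."""
    f = frame.astype(np.float64)
    # Statistical features of pixel values.
    feats = {
        "mean": f.mean(),
        "std": f.std(),
        "p95": np.percentile(f, 95),
    }
    # Morphological proxy: mean gradient magnitude approximates edge density.
    gy, gx = np.gradient(f)
    feats["edge_strength"] = np.hypot(gx, gy).mean()
    return feats
```

In practice, richer descriptors such as SURF or SIFT would replace the gradient proxy, but the per-segment feature-vector interface would be similar.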
[0071] The anomaly detector circuit 322 may detect anomalies in any particular segment of the anatomy based at least in part on the segment-specific endoscopic image or video features. The anomaly detection includes detecting and/or recognizing one or more of a presence or absence, a type, a size, a shape, a location, or an amount of pathological tissue or anatomical structures, among other objects in an environment of the anatomy. In an example of colonoscopy, the anomaly being detected may include pathological tissue segments, such as mucosal abnormalities (polyps, inflammatory bowel diseases, Meckel’s diverticulum, lipoma, bleeding, vascularized mucosa, etc.), or obstructed mucosa (e.g., segments with poor bowel preparation, a distended colon, etc.). The anomaly detector circuit 322 may detect an anomaly in a segment of the anatomy using segment-specific images or video streams (or features extracted therefrom) obtained prior to the endoscope being withdrawn to that specific segment. Such segment-specific images or video streams generally include the segment-specific insertion images or video streams, or features generated therefrom, obtained during the insertion phase. In some examples, certain segment-specific withdrawal images or video streams, or features generated therefrom, may also be used for detecting anomalies.
[0072] In an example, the anomaly detector circuit 322 may detect an anomaly using a template matching technique, in which an anomaly may be recognized based on a comparison of the segment-specific endoscopic features (e.g., features characterizing shapes or contours of a structure) to one or more pre-generated templates of known anomalous structures. In another example, the anomaly detector circuit 322 may detect the anomaly using artificial intelligence (AI) or machine learning (ML) based techniques. Segment-specific endoscopic images or video streams, or features extracted therefrom, may be applied to a trained ML model to automatically recognize the presence or absence, type, size, location, and/or other characteristics of the anomaly. In an example, the ML model may be trained to establish a correspondence between an endoscopic image or video stream (or features extracted therefrom) and one or more anomaly characteristics. Examples of the ML models used for recognizing anomalies from endoscopic images or video streams include Convolutional Neural Networks, bidirectional LSTM, Recurrent Neural Networks, Conditional Random Fields, Dictionary Learning, or other machine learning techniques (support vector machines, Bayesian models, decision trees, k-means clustering), among other ML techniques. The trained ML model may be stored in a storage device 340. Examples of training an ML model and using the trained ML model to detect anomalies, to determine segment-specific withdrawal parameter values, and to generate a personalized segment-wise withdrawal plan are discussed below with respect to FIGS. 6A-6B.
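The template matching variant described above can be sketched with zero-mean normalized cross-correlation, a common similarity measure for comparing a patch against a pre-generated template (the function names and the 0.8 threshold are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

def ncc(patch: np.ndarray, template: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between an image patch and
    an anomaly template of the same shape; 1.0 indicates a perfect match."""
    a = patch.astype(np.float64) - patch.mean()
    b = template.astype(np.float64) - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def matches_template(patch: np.ndarray, template: np.ndarray,
                     threshold: float = 0.8) -> bool:
    # Flag the patch as a candidate anomaly when correlation is high.
    return ncc(patch, template) >= threshold
```

Because the correlation is zero-mean and normalized, the match is insensitive to global brightness and contrast shifts between the patch and the template, which vary with the lighting system settings.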
[0073] The endoscope localization circuit 323 may determine a location of the endoscope in a particular segment of the anatomy in substantially real time. With such real-time endoscope location information, the anomaly detector circuit 322 may associate a detected anomaly with the particular segment of the anatomy. The endoscope location may be determined using electromagnetic tracking, or using anatomical landmarks detected from the images or video streams, such as those obtained during the insertion phase of endoscopy, or from image or video stream features extracted therefrom by the image processor 321. In an example of colonoscopy, the endoscope localization circuit 323 may analyze the images or video streams to recognize colon landmarks such as the anus, rectum, sigmoid colon, descending colon (on the patient’s left), splenic flexure, transverse colon, hepatic flexure, ascending colon (on the patient’s right), cecum, appendix, and terminal ileum. In an example, the endoscope localization circuit 323 may recognize the landmarks using a template matching technique. In another example, the endoscope localization circuit 323 may recognize the landmarks using AI or ML techniques, such as a trained ML model that has been trained to establish a correspondence between an endoscopic image or video stream (or features extracted therefrom) and a landmark recognition. Examples of the ML models used for recognizing an anatomical landmark include Deep Belief Networks, ResNet, DenseNet, autoencoders, capsule networks, generative adversarial networks, Siamese networks, Convolutional Neural Networks (CNN), deep reinforcement learning, support vector machines (SVM), Bayesian models, decision trees, and k-means clustering, among other ML models. The trained ML model may be stored in a storage device 340. Once the endoscope location is determined, the endoscope localization circuit 323 may register the substantially real-time endoscope location to a pre-generated template of the anatomy.
Information about the endoscope location within the segments of the anatomy may be presented to a user (e.g., the endoscopist) on the user interface 330, as will be discussed further with respect to FIG. 5.
[0074] The withdrawal plan generator 324 may generate a personalized, segment-wise withdrawal plan using information about the detected anomaly as produced by the anomaly detector circuit 322, and the substantially real-time location of the endoscope when the anomaly is detected, as produced by the endoscope localization circuit 323. The personalized segment-wise withdrawal plan may include target or recommended values for an endoscope withdrawal parameter (P) for individual ones of the distinct segments of the anatomy. The target or recommended segment-specific endoscope withdrawal parameter values may be estimated using a withdrawal parameter estimator 325. In one example, the endoscope withdrawal parameter P includes a target or recommended segment-specific endoscope withdrawal speed limit (WSL). The WSL represents an upper speed limit, or an acceptable withdrawal speed range, for any particular segment of the anatomy. In another example, the endoscope withdrawal parameter P includes a target or recommended segment-specific endoscope withdrawal time limit (WTL). The WTL represents a maximal allowed time, or an acceptable time range, for the endoscope to remain in any particular segment of the anatomy during withdrawal. In an example of colonoscopy, the segment-specific endoscope withdrawal parameter may have respective target or recommended withdrawal parameter values including, for example, PC for the cecum segment, PA for the ascending segment, PT for the transverse segment, PD for the descending segment, PS for the sigmoid segment, and PR for the rectosigmoid segment. Other segments or sub-segments may be defined by the user, and the corresponding target or recommended withdrawal parameter values may be determined by the withdrawal parameter estimator 325.
[0075] The withdrawal parameter estimator 325 may determine the target or recommended values of a segment-specific endoscope withdrawal parameter (e.g., WSL or WTL) for individual ones of the distinct segments using an AI or ML approach. In an example, the result of anomaly detection from the anomaly detector circuit 322, and the landmark detection and real-time endoscope location information from the endoscope localization circuit 323, may be applied to a trained ML model. The ML model may have been trained to establish a correspondence between an anomaly characteristic and a target or recommended value of a segment-specific endoscope withdrawal parameter (e.g., a WSL or WTL). The trained ML model may be stored in a storage device 340. Examples of training an ML model, and using the trained ML model to detect anomalies, to determine segment-specific withdrawal parameter values, and to generate a personalized segment-wise withdrawal plan, are discussed below with respect to FIGS. 6A-6B.
[0076] In some examples, the withdrawal parameter estimator 325 may compute an anomaly score using one or more anomaly characteristics, such as a type, a size, a shape, a location, or an amount of the anomaly. The anomaly score may have numerical values, such as within a range of zero to ten. In some examples, a composite anomaly score may be generated, such as using a linear or non-linear combination of multiple anomaly scores each quantifying an anomaly characteristic. In an example, the composite anomaly score can be computed by averaging, or by taking the worst case of, the anomaly scores within a segment. The withdrawal parameter estimator 325 may map the anomaly score (or the composite anomaly score) to a withdrawal parameter value based on a comparison to one or more threshold values or value ranges. A higher anomaly score, which typically indicates a more severe anomalous condition, can be mapped to a lower WSL to encourage slower withdrawal in that segment, or to a longer WTL to encourage a longer withdrawal time in that segment. An established correspondence between anomaly scores, or ranges of anomaly scores, and corresponding target or recommended values of the segment-specific endoscope withdrawal parameter may be determined via a lookup table into a normative database, or via a rule-based system. The established mapping may be stored in the storage device 340. In another example, the withdrawal parameter estimator 325 may apply the anomaly score or the composite anomaly score to the trained ML model to determine the target or recommended values of the segment-specific endoscope withdrawal parameter for individual ones of the distinct segments. In some examples, an anomaly score calculation may be incorporated into the ML model, such as in a layer of the neural network model.
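The threshold-based mapping from an anomaly score to a withdrawal speed limit might look like the following sketch (the thresholds, speed values, and worst-case combination rule are illustrative placeholders, not values from the disclosure):

```python
def composite_score(scores: list[float]) -> float:
    """Worst-case combination of anomaly scores within a segment."""
    return max(scores)

def wsl_from_anomaly_score(score: float) -> float:
    """Map a 0-10 segment anomaly score to a target withdrawal speed
    limit (mm/sec). A higher score (more severe anomaly) maps to a
    lower WSL, i.e. slower withdrawal."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("anomaly score must lie in [0, 10]")
    if score < 3.0:   # low severity  -> "normal" withdrawal
        return 4.0
    if score < 7.0:   # moderate      -> "cautious" withdrawal
        return 3.0
    return 1.5        # high severity -> "slow" withdrawal
```

This is the lookup-table/rule-based path; the ML path described in the text would replace the fixed thresholds with a learned mapping.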
[0077] In some examples, the withdrawal parameter estimator 325 may determine a target or recommended endoscope withdrawal parameter value for individual ones of the distinct segments of the anatomy further using an auxiliary input 315. Referring to FIG. 4, the auxiliary input 315 may include, by way of example and not limitation, image or video streams and clinical data from previous endoscopy procedures 410 performed on the patient, patient information and medical history 420, or pre-procedure imaging study data 430 (e.g., X-ray or fluoroscopy images, an electrical potential map or an electrical impedance map, computed tomography (CT) images, or magnetic resonance imaging (MRI) images, among other imaging modalities). In an example of colonoscopy, the auxiliary input may include previous colonoscopies in surveillance cases, including polyps that the endoscopist left in situ or areas of the colon which were operated on. The patient information and medical history 420 may include, in a typical endoscopy scenario, clinical demographic information, past and current indications, treatment received, etc.
[0078] In some examples, the auxiliary input 315 may additionally or alternatively include a user (e.g., endoscopist) profile 440 including the user’s experience, working environment (e.g., a hospital setting or an ambulatory screening centre), affinity for new technology, preference for a certain endoscopy protocol, etc. The user profile 440 may be provided by the user via the user interface 330. Alternatively, the user profile 440 may be automatically generated or updated through learning from the user’s past choices and training. For example, a higher WSL (corresponding to a higher withdrawal speed) or a smaller WTL (corresponding to a shorter examination time) at a particular segment may be preferred by an experienced endoscopist, while a lower WSL (corresponding to a slower withdrawal pace) or a larger WTL (corresponding to a longer examination time) may be recommended to a training endoscopist.

[0079] In some examples, the auxiliary input 315 may additionally or alternatively include endoscope and equipment information 450. This may include, for example, the specification of the endoscope 310, including the type, size, dimension, shape, and structures of the endoscope or of other steerable instruments such as a cannula, a catheter, or a guidewire; the supported imaging modes and lighting modes; the specification of the size, dimension, shape, and structures of tissue section, sampling, or treatment tools; and the current state of the equipment, including which light mode is on, which endoscope buttons are engaged (e.g., waterjet, insufflation), which light modes are supported, or whether the magnification is turned on and the current magnification selection, etc.
[0080] If there are additional AI or ML algorithms running in the background, the states of these AI or ML algorithms, collectively referred to hereinafter as “AI findings” 460, can be additional elements of the auxiliary input 315 that can be passed to the withdrawal plan generator 324. Examples of the AI findings 460 may include AI algorithms for detecting anomalies, landmarks, or other features or structural elements of interest in the target anatomy.
[0081] The estimated segment-specific target withdrawal parameter values (e.g., segment-specific WSLs or WTLs) may be used to generate an endoscope withdrawal map 326, which is a graphic representation of the endoscope withdrawal plan. The endoscope withdrawal map 326 may be displayed on the user interface 330 and used as visual guidance to assist the endoscopist during an endoscopy procedure. The endoscope withdrawal map 326 may include depictions of distinct segments of the anatomy, each color-coded or grayscale-coded to indicate the respective target or recommended segment-specific endoscope withdrawal parameter values. The endoscope withdrawal map 326 may be displayed on the user interface 330 as a reference to guide the endoscopist in withdrawing the endoscope during an endoscopy procedure. In some examples, the estimated withdrawal parameter values produced by the withdrawal parameter estimator 325 may be displayed on the user interface 330. In some examples, information about anomalies detected previously, such as during the insertion phase of endoscopy or from previous endoscopic procedures, may also be displayed on the endoscope withdrawal map 326 to serve as an additional layer of precaution to prevent excessive withdrawal speed in an anomalous segment. Other information, including the endoscopic images and image features, information about the detected anomalies and landmarks in each segment of the anatomy, or the substantially real-time location of the endoscope during the endoscopy procedure, may be displayed on the user interface 330.
[0082] In some examples, the segment-wise withdrawal plan, including the estimated withdrawal parameter values and the endoscope withdrawal map 326, may be stored in the storage device 340. Information about anomalies detected previously (such as during the insertion phase) may also be stored in the storage device 340. The storage device 340 can be local to the endoscope system 300. Alternatively, the storage device 340 may be a separate remote storage device, such as a part of a cloud comprising one or more storage and computing devices (e.g., servers) that provides secure access to cloud-based services including, for example, data storage, computing services, and provisioning of customer services, among others. In some examples, at least some of the data processing and computation with regard to anomaly detection, landmark recognition, endoscope localization, withdrawal parameter estimation, and endoscope withdrawal map generation may be performed in a cloud. For example, images or video streams, or features extracted therefrom, may be streamed to the cloud and processed there, and the computation results, such as the segment-wise withdrawal plan (e.g., the endoscope withdrawal map 326), may be relayed back to the local endoscope system for use in image-guided endoscope withdrawal.
[0083] The real-time endoscope withdrawal tracker circuit 327 can track and measure, in substantially real time, a withdrawal parameter (e.g., withdrawal speed or total withdrawal time spent) in a particular segment of the anatomy, such as a colon segment. Various techniques may be used to track and measure the endoscope withdrawal speed. In one example, the endoscope withdrawal speed may be estimated using an optical-flow-based method. Optical flow is the pattern of apparent motion of image objects between two consecutive frames caused by the movement of the object or the camera. The endoscope withdrawal speed is positively correlated with the endoscope image frame change rate: a fast frame change rate corresponds to a faster flow, and thus a faster endoscope withdrawal speed. In another example, the endoscope withdrawal speed may be estimated using an electromagnetic (EM) tracking technique. EM coils incorporated along the length of the endoscope’s insertion tube generate a pulsed low-intensity magnetic field, which can be picked up by a receiver device. The EM pulses are used to calculate the precise position and orientation of the insertion tube. The endoscope withdrawal speed can then be estimated based on the rate of change in endoscope positions. In yet another example, the endoscope withdrawal speed may be estimated with respect to a landmark, such as one recognized from the endoscopic images by the endoscope localization circuit 323 as described above. Using the landmark as a reference or fiducial point, the endoscope withdrawal speed can be estimated based on the change in relative position with respect to the recognized landmark.
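The optical-flow-based speed estimate can be approximated in a minimal form. In this sketch a frame-difference rate stands in for dense optical flow (e.g., Farneback flow in OpenCV), and the calibration constant converting image change into mm/sec is an assumed, device-specific factor:

```python
import numpy as np

def frame_change_rate(prev: np.ndarray, curr: np.ndarray) -> float:
    """Mean absolute intensity change between two consecutive frames:
    a crude proxy for optical-flow magnitude."""
    return float(np.abs(curr.astype(np.float64) - prev.astype(np.float64)).mean())

def estimate_withdrawal_speed(frames: list, mm_per_unit_change: float) -> float:
    """Average frame change rate over a window of frames, scaled by an
    assumed calibration constant into an approximate speed in mm/sec."""
    rates = [frame_change_rate(a, b) for a, b in zip(frames, frames[1:])]
    return mm_per_unit_change * (sum(rates) / len(rates))
```

A production system would calibrate the scale factor per device and imaging mode, since the relationship between pixel change and physical motion depends on optics and distance to the mucosa.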
[0084] In some examples, the real-time endoscope withdrawal tracker circuit 327 can determine the beginning of the withdrawal phase and activate withdrawal speed tracking and measurement only during the withdrawal phase. The real-time endoscope withdrawal tracker circuit 327 may additionally determine whether any operations are being performed by the endoscopist. Examples of such operations include, but are not limited to, waterjet irrigation and suction of debris, and inserting instruments via a working channel (forceps, snares, cytology brushes, needles for sclerotherapy or mucosal injection, and aspiration catheters). Once the withdrawal phase is recognized, the withdrawal clock starts, and withdrawal speed tracking is activated as the mucosa is being inspected. If the colonoscopy is in other phases, the withdrawal speed tracking can be made dormant, so that it does not interfere with the endoscopist’s actions.
[0085] During withdrawal, the endoscope localization circuit 323 can determine in substantially real time a location of the endoscope in one of the distinct segments of the anatomy based at least in part on the endoscopic image or video streams or features extracted therefrom. As similarly discussed above with respect to localizing the endoscope position during the insertion phase, the location of the endoscope may be determined based on an anatomical landmark recognized from the endoscopic image or video streams. The substantially real-time location of the endoscope may be registered to a pre-generated template of the anatomy. The substantially real-time location of the endoscope indicates the current segment of the anatomy where the endoscopic image or video streams are obtained. The measured real-time endoscope withdrawal parameter value (e.g., withdrawal speed) produced by the real-time endoscope withdrawal tracker circuit 327 may be associated with the currently recognized segment to produce a measured segment-specific withdrawal parameter value. The endoscope withdrawal controller 328 may compare the measured segment-specific withdrawal parameter value to the target or recommended segment-specific withdrawal parameter value, and determine whether the measured withdrawal parameter value is within a specified margin of the target value. For example, the endoscope withdrawal controller 328 may compare the measured endoscope withdrawal speed in segment “j”, Sj, to the withdrawal speed limit (WSL) of the same segment “j”, WSLj. If the measured withdrawal speed Sj falls within a specified margin δ of WSLj, that is, WSLj − δ < Sj < WSLj + δ, then Sj is deemed appropriate. When Sj deviates from WSLj by more than the specified margin δ, that is, Sj < WSLj − δ or Sj > WSLj + δ, an alert may be generated to warn the user of an inappropriately fast withdrawal speed (if Sj > WSLj + δ) or an inappropriately slow withdrawal speed (if Sj < WSLj − δ) at segment “j”.
A recommendation may be provided to the user to adjust the withdrawal speed to substantially conform to the target speed WSLj (e.g., WSLj − δ < Sj < WSLj + δ).

[0086] In some examples, during the endoscope withdrawal phase, the measured segment-specific endoscope withdrawal speed Sj, and the substantially real-time endoscope position in a segment of the anatomy (as determined by the endoscope localization circuit 323), may be displayed on the user interface 330 along with (e.g., side by side with, or overlaid upon) the endoscope withdrawal map 326. This provides direct visual feedback to the user on endoscope withdrawal, as shown and discussed further below with respect to FIG. 5. In some examples, during withdrawal and prior to the endoscope being withdrawn beyond a previously identified anomalous segment (such as one identified during the insertion phase), the user may be notified about the anomalous segment about to be reached, and pre-emptively warned to withdraw at a speed not exceeding the segment-specific WSL when passing the anomalous segment. The warning to the user may be delivered either through optical means on the diagnostic monitor or via auditory means such as a warning alarm. In an example, highlighting, flash alerts, or audible or haptic feedback may be provided to the user to emphasize the anomalous segment about to be reached. At the completion of withdrawal (e.g., rectum detected), the total net withdrawal time, post-procedure analytics, and a withdrawal summary can be shown to the endoscopist, including, for example, segment-by-segment withdrawal time and/or average speed, total net withdrawal time and average speed, and anomalies detected, among others. The post-procedure analytics may be used for quality assurance and for determining if the colonoscopy procedure was successful.
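The per-segment check of the measured speed Sj against the limit WSLj with margin δ can be sketched as follows (the function name and alert strings are illustrative):

```python
def check_withdrawal_speed(measured: float, wsl: float, margin: float) -> str:
    """Compare a measured segment withdrawal speed Sj against the
    segment's limit WSLj: "ok" when WSLj - margin < Sj < WSLj + margin,
    otherwise a too-fast or too-slow alert."""
    if measured > wsl + margin:
        return "alert: withdrawal too fast"
    if measured < wsl - margin:
        return "alert: withdrawal too slow"
    return "ok"
```

In a complete system the returned alert would drive the visual, audible, or haptic feedback described above, or a speed command to the robotic controller.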
[0087] In some examples, the system 300 may be operated in a closed-loop fashion with a feedback loop for continuous learning of the segment-wise withdrawal plan, such as updating the target or recommended segment-specific endoscope withdrawal parameter values. The continuous learning may be via explicit endoscopist feedback (e.g., satisfaction with recommendations, a “like” button, etc.). Alternatively, the system 300 can run in a shadow mode alongside experienced endoscopists, monitoring their withdrawal actions and correcting the segment-wise withdrawal plan through reinforcement feedback.
[0088] In some examples, image-guided endoscopy, or a portion thereof such as endoscope withdrawal, may be performed using the robotic system 350. The robotic system 350 may include a robot arm detachably attached to the endoscope 310. The robot arm can automatically, or semi-automatically (e.g., with certain user manual control or commands), via an actuator, position and manipulate the endoscope 310 in an anatomical target, or position a device at a desired location with a desired posture to facilitate an operation on the anatomical target. The robotic system 350 can include a robotic controller to control the movement of the robotic arm in accordance with the segment-wise withdrawal plan, including the estimated withdrawal parameter values produced by the withdrawal parameter estimator 325 and/or the endoscope withdrawal map 326.
[0089] FIG. 5 illustrates by way of example different stages of an image-guided endoscopy procedure 500, such as a colonoscopy as shown in this example. The procedure includes an insertion stage, followed by a withdrawal stage. The procedure may be carried out using the system 300. During the insertion stage, endoscopic images or video streams can be obtained respectively from each of a plurality of colon segments, including the rectosigmoid segment, sigmoid segment, descending segment, transverse segment, ascending segment, and cecum, such as using the imaging system 312. Various colon segments may be recognized by the endoscope localization circuit 323. Also during insertion, anomalies may be detected automatically using the anomaly detector circuit 322. At the end of the insertion stage, where the endoscope distal tip has reached the cecum, the segment-specific images or video streams obtained during the insertion stage may be analyzed, and a personalized segment-wise withdrawal plan may be generated, such as using the withdrawal plan generator 324. The segment-wise withdrawal plan may be graphically represented by a colonoscope withdrawal speed map 510 (an example of the endoscope withdrawal map 326). Withdrawal speed limits (WSLs) for each of the plurality of colon segments may be color-coded (or grayscale-coded) and displayed on the colonoscope withdrawal speed map 510. In the example illustrated in FIG. 5, the color-coded or grayscale-coded WSLs may include a “normal” or relatively fast WSL range, a “cautious” or medium WSL range, and a “slow” withdrawal WSL range. By way of example and not limitation, the “normal” WSL range is approximately 3-4 millimeters per second (mm/sec), the “cautious” WSL range is approximately greater than or equal to 40 mm/sec, and the “slow” WSL range is approximately 1-3 mm/sec.
Other information obtained or computed, such as anomalies detected or anomalous segments recognized during the insertion phase or from previous endoscopic procedures, may also be displayed on the colonoscope withdrawal speed map 510.
[0090] In some examples, the images or video streams used for determining the target or recommended endoscope withdrawal parameter values (and for generating the personalized segment-wise withdrawal plan, such as the colonoscope withdrawal speed map 510) may not be limited to endoscopic images or video streams obtained during endoscope insertion, but may include endoscopic images or video streams obtained during the withdrawal phase prior to the endoscope being withdrawn beyond the segment to be inspected and treated. For example, the target or recommended withdrawal speed limit (WSL) for any particular colon segment “j” can be determined at any time prior to the colonoscope being withdrawn beyond that segment “j”.
[0091] The colonoscope withdrawal speed map 510 may be displayed to a user throughout the withdrawal stage to assist the endoscopist in maneuvering the endoscope to inspect the mucosa in colon segments and perform necessary treatment (e.g., polypectomy). Additionally or alternatively, the colonoscope withdrawal speed map 510 may be used for controlling the robotic system 350 in robotically assisted colonoscopy. As illustrated in FIG. 5, during withdrawal, a substantially real-time endoscope location 522, such as determined by the endoscope localization circuit 323, may be displayed as an overlay on the colonoscope withdrawal speed map 510 to produce an overlay plot 520. When the endoscope tip is in, or about to reach, a particular colon segment, a notification, such as a marker 524, may be displayed on the user interface to inform the user about the WSL at the present or about-to-be-reached segment. At each colon segment, an actual endoscope withdrawal speed can be measured, such as using the real-time endoscope withdrawal tracker circuit 327. The measured segment-specific withdrawal speed may be compared to the WSL stored in the personalized segment-wise withdrawal plan to determine whether the measured speed is within a specified margin of the WSL for that segment. As described above, when the measured speed deviates from the WSL by more than the specified margin, an alert may be generated to warn the user (e.g., endoscopist) of an inappropriate withdrawal speed. A recommendation may be provided to the user to adjust the withdrawal speed to substantially conform to the WSL.
[0092] FIGS. 6A-6B illustrate examples of training an ML model, and using the trained ML model to determine a target or recommended segment-specific endoscope withdrawal parameter value (e.g., WSL or WTL) for each of a plurality of distinct segments of the anatomy. The target or recommended segment-specific endoscope withdrawal parameter values may be used to construct the endoscope withdrawal map 326 or the colonoscope withdrawal speed map 510. FIG. 6A illustrates an ML model training (or learning) phase during which an ML model 620 may be trained to determine a target or recommended colonoscope withdrawal speed limit (WSL) at a particular segment “j” (e.g., cecum) based at least in part on endoscopic images of that particular segment. A training dataset may include a group of endoscopic images 610 of the same colon segment “j” obtained from colonoscopy procedures performed on a plurality of patients. In some examples, the training dataset may also include the auxiliary input 315 as described above with respect to FIG. 3. The ML model 620 may have a neural network structure comprising an input layer, one or more hidden layers, and an output layer. The plurality of endoscopic images 610, or features generated therefrom, may be fed into the input layer of the ML model 620, which propagates the input data or data features through one or more hidden layers to the output layer that outputs a WSL for the colon segment (e.g., cecum). The ML model 620 is able to perform tasks, without explicitly being programmed, by making inferences based on patterns found in the analysis of data. The ML model 620 is built from algorithms (e.g., ML algorithms) that may learn from existing data and make predictions about new data. Such algorithms operate by building the ML model 620 from training data in order to make data-driven predictions or decisions expressed as outputs or assessments.
[0093] The ML model 620 may be trained using supervised learning or unsupervised learning. Supervised learning uses prior knowledge (e.g., examples that correlate inputs to outputs or outcomes) to learn the relationships between the inputs and the outputs. The goal of supervised learning is to learn a function that, given some training data, best approximates the relationship between the training inputs and outputs so that the ML model can implement the same relationships when given inputs to generate the corresponding outputs. Unsupervised learning is the training of an ML algorithm using information that is neither classified nor labeled, and allowing the algorithm to act on that information without guidance. Unsupervised learning is useful in exploratory analysis because it can automatically identify structure in data.
[0094] Common tasks for supervised learning are classification problems and regression problems. Classification problems, also referred to as categorization problems, aim at classifying items into one of several category values. Regression algorithms aim at quantifying some items (for example, by providing a score to the value of some input). Some examples of commonly used supervised-ML algorithms are Logistic Regression (LR), Naive-Bayes, Random Forest (RF), neural networks (NN), deep neural networks (DNN), matrix factorization, and Support Vector Machines (SVM). Examples of DNN include a convolutional neural network (CNN), a recurrent neural network (RNN), a deep belief network (DBN), or a hybrid neural network comprising two or more neural network models of different types or different model configurations.
Some common tasks for unsupervised learning include clustering, representation learning, and density estimation. Some examples of commonly used unsupervised learning algorithms are K-means clustering, principal component analysis, and autoencoders.
[0095] Another type of ML is federated learning (also known as collaborative learning), which trains an algorithm across multiple decentralized devices holding local data, without exchanging the data. This approach stands in contrast to traditional centralized machine-learning techniques where all the local datasets are uploaded to one server, as well as to more classical decentralized approaches which often assume that local data samples are identically distributed. Federated learning enables multiple actors to build a common, robust machine learning model without sharing data, thus addressing critical issues such as data privacy, data security, data access rights, and access to heterogeneous data.
[0096] The training of the ML model 620 may be performed continuously or periodically, or in near real time as additional procedure data are made available. The training process involves algorithmically adjusting one or more ML model parameters (e.g., weights or bias at any particular layer of a neural network model), until the ML model being trained satisfies a specified training convergence criterion. By way of example and not limitation, the ML model 620 may be trained with a weighted square loss (for explicit feedback) or with a binary cross-entropy loss (for implicit feedback). Other training techniques, such as deep factorization machines, wide and deep learning, deep structured semantic models, or autoencoder-based recommender systems, may be used. The trained ML model 620 can establish a correspondence between the endoscopic images 610 (or features extracted therefrom) and the target or recommended WSL for the segment “j”.
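A minimal sketch of such an iterative parameter adjustment, using gradient descent on a weighted square loss over synthetic feature/WSL pairs; the data, linear model form, learning rate, and iteration count are illustrative assumptions rather than the disclosed training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic illustration: image features X and expert-annotated WSL targets y.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.05 * rng.normal(size=200)
sample_weight = np.ones(200)  # per-sample weights for the weighted square loss

w = np.zeros(8)
lr = 0.05
for _ in range(500):
    resid = X @ w - y
    # gradient of the weighted mean square loss w.r.t. the parameters
    grad = (sample_weight * resid) @ X / len(y)
    w -= lr * grad

final_loss = float(np.mean(sample_weight * (X @ w - y) ** 2))
```

Here the loop stops after a fixed iteration budget; a convergence criterion on the loss, as described above, could replace it.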
[0097] Similar training processes and techniques as stated above may be used to train other ML models using respective training datasets, each comprising a group of endoscopic images of the same colon segment (e.g., a segment “k” different than segment “j” above) obtained from colonoscopy procedures performed on a plurality of patients. The resulting plurality of trained ML models can each predict a target or recommended WSL for a respective colon segment. For example, a first ML model may predict the WSL in the cecum, a second ML model the WSL in the ascending segment, a third ML model the WSL in the transverse segment, a fourth ML model the WSL in the descending segment, a fifth ML model the WSL in the sigmoid segment, and a sixth ML model the WSL in the rectosigmoid segment of a colon. [0098] FIG. 6B illustrates an inference phase during which a live endoscopic image 630 of a particular segment is applied to the trained ML model 620 to automatically determine a segment-specific WSL in segment “j”, WSLj 640. The live endoscopic image 630 may be obtained during the insertion phase of endoscopy, such that WSLj 640 may be determined at the end of the insertion phase. Alternatively, the live endoscopic image 630 may be obtained during the withdrawal phase prior to the endoscope being withdrawn beyond segment “j”, such that WSLj 640 may be determined at any time prior to the colonoscope being withdrawn beyond segment “j”. The WSLj 640, along with WSLs estimated for other segments by applying images of respective colon segments to respectively trained ML models, may be used to construct the personalized colonoscope withdrawal speed map 510.
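The per-segment inference phase can be sketched as a registry of one trained predictor per colon segment, each applied to that segment's live image features to assemble the speed map. The stub predictors below stand in for trained ML models; the base WSL values are illustrative.

```python
# One trained predictor per colon segment; stubs stand in for real ML models.
SEGMENTS = ["cecum", "ascending", "transverse",
            "descending", "sigmoid", "rectosigmoid"]

def make_stub_predictor(base_wsl):
    # A stand-in for a trained ML model; a real model would use the features.
    return lambda features: base_wsl

trained_models = {
    seg: make_stub_predictor(base)
    for seg, base in zip(SEGMENTS, [2.0, 2.5, 3.0, 2.5, 2.0, 2.0])
}

def build_speed_map(live_features_by_segment):
    """Apply each segment's trained model to that segment's live image features."""
    return {
        seg: trained_models[seg](feats)
        for seg, feats in live_features_by_segment.items()
    }
```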
[0099] During withdrawal, endoscope location can be determined in substantially real time such as using the endoscope localization circuit 323. The endoscope location may be displayed on top of the colonoscope withdrawal speed map 510 to produce an overlay plot 520 that may be displayed to the endoscopist during the procedure. Endoscope withdrawal speed can be tracked and measured in substantially real time, such as using the real-time endoscope withdrawal tracker circuit 327. As the endoscope is withdrawn to segment “j”, the actual withdrawal speed Sj 650 is measured. Sj 650 can be compared to WSLj 640 to determine if Sj 650 is within a specific margin of the WSLj 640. As described above, if the measured speed falls within a recommended margin of WSL, then the withdrawal speed is deemed appropriate. If the measured speed exceeds the recommended margin of WSL, then an alert may be generated to warn the user (e.g., endoscopist) of an inappropriate withdrawal speed. A recommendation may be provided to the user to adjust the withdrawal speed to substantially conform to the WSL.
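A minimal sketch of the margin check and alert just described; the 10% margin and the message format are illustrative assumptions, as the margin value is left unspecified in this description.

```python
def check_withdrawal_speed(measured_speed, wsl, margin_fraction=0.10):
    """Compare a measured segment-specific speed against its WSL.

    Returns (status, recommendation). The 10% margin is an illustrative
    default; the acceptable margin may be configured per segment.
    """
    upper = wsl * (1.0 + margin_fraction)
    if measured_speed <= upper:
        return "ok", None
    return ("alert",
            f"Slow withdrawal to at most {wsl:.1f} (measured {measured_speed:.1f})")
```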
[00100] The ML models described above are trained to estimate target or recommended segment-specific endoscope withdrawal parameter values (e.g., WSL or WTL) for any particular segment of the anatomy. In some examples, a plurality of ML models can be separately trained, validated, and used (in an inference phase) in other applications, such as anomaly detection or anatomical landmark recognition. In an example, an ML model may be trained and used by the anomaly detector circuit 322 to detect an anomaly from an input endoscopic image of a particular colon segment, and another ML model may be trained and used by the endoscope localization circuit 323 to detect an anatomical landmark from an input endoscopic image of a particular colon segment.
[00101] FIGS. 7A-7B illustrate example graphs of guided withdrawal and speed tracking in a colonoscopy procedure using a personalized, segment-specific endoscope withdrawal plan. Actual colonoscopy withdrawal speed (such as tracked and measured by the real-time endoscope withdrawal tracker circuit 327) and target or recommended WSLs for each of a plurality of (e.g., N) segments, can be plotted on the same graph and displayed to the user during withdrawal. FIG. 7A illustrates a “good” endoscopy procedure in which the actual withdrawal speed 710, measured at each of the N segments, remains below the respective segment-specific WSLs (e.g., WSL1 720A for segment 1, WSL2 720B for segment 2, WSL3 720C for segment 3, WSL4 720D for segment 4, WSL5 720E for segment 5, . . . , WSLN 720N for segment N). FIG. 7B illustrates the withdrawal speed tracking in another procedure. The actual withdrawal speeds 730 in segments 1 through 4 are below the respective segment-specific WSLs (i.e., WSL1 through WSL4). However, in segment 5, the actual measured withdrawal speed 732 exceeds the speed limit WSL5 720E. In response, an alert can be triggered, and the endoscopist is warned to slow down the withdrawal in that segment. As the endoscope has inspected a certain portion of segment 5 at the speed 732, to reinspect that portion at a slower speed, the endoscope is reinserted to the beginning of segment 5 (or the end of the preceding segment 4), and from there withdrawn through segment 5 at a slower speed 734 below the WSL5 720E.
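The per-segment compliance check underlying FIG. 7B can be sketched as follows; the segment names and speed values in the usage example are illustrative.

```python
def segments_needing_reinspection(measured_speeds, wsl_map):
    """Flag segments whose measured withdrawal speed exceeded the segment WSL.

    As in FIG. 7B, an over-speed segment would be re-entered from its
    proximal end and re-inspected below the limit; this helper only
    identifies which segments require that.
    """
    return [seg for seg, s in measured_speeds.items() if s > wsl_map[seg]]
```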
[00102] FIG. 8 is a flow chart illustrating an example method 800 of creating a personalized, segment-specific endoscope withdrawal plan and using said plan to guide endoscope withdrawal in an endoscopy procedure, such as a colonoscopy procedure. The personalized, segment-specific endoscope withdrawal plan can be created using images or video streams of distinct segments of the anatomy. The method 800 may be implemented in the endoscope system 300. Although the processes of the method 800 are drawn in one flow chart, they are not required to be performed in a particular order. In various examples, some of the processes can be performed in a different order than that illustrated herein. [00103] At 810, images or video streams of distinct segments of the target anatomy may be obtained using an imaging system associated with the endoscope. In an example of colonoscopy, distinct segments (e.g., one or more of the cecum, ascending segment, transverse segment, descending segment, sigmoid segment, or rectosigmoid segment of a colon) may be imaged separately to produce segment-wise or segment-specific images or video streams. The images or video streams may be acquired when the imaging system is set to one of a plurality of available imaging modes. An imaging mode refers to one or more of zoom settings, contrast settings, exposure levels, viewing angles toward the target anatomy, or lighting or illumination conditions. In an example, the images or video streams may be acquired during an insertion phase of the endoscopy procedure, where segment-wise or segment-specific insertion images or video streams may be acquired from each of the plurality of segments (e.g., colon segments).
[00104] At 820, the obtained segment-wise images or video streams may be analyzed to generate endoscopic image or video features for individual ones of the distinct segments of the anatomy. Examples of the image or video features include statistical features of pixel values or morphological features, such as corners, edges, blobs, curvatures, speeded up robust features (SURF), or scale-invariant feature transform (SIFT) features, among others. In some examples, the segment-specific images or video streams may be pre-processed, such as by filtering, resizing, orienting, or color or grayscale correction, and the endoscopic image features may be extracted from the pre-processed images or video streams. In some examples, the endoscopic image or video features may be post-processed to enhance feature quality, such as by edge interpolation or extrapolation to produce continuous and smooth edges.
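A minimal NumPy sketch of extracting the kinds of statistical and edge features named above from a grayscale frame. The specific four-feature set is an illustrative choice; SURF or SIFT descriptors, as mentioned, could be substituted in practice.

```python
import numpy as np

def frame_features(frame):
    """Extract simple statistical and edge features from a grayscale frame.

    A sketch of the feature categories named above (pixel statistics and
    edge/morphology cues), not the disclosed feature pipeline.
    """
    frame = frame.astype(np.float64)
    gy, gx = np.gradient(frame)          # intensity gradients along rows/cols
    grad_mag = np.hypot(gx, gy)          # per-pixel edge strength
    return np.array([
        frame.mean(),                    # brightness
        frame.std(),                     # contrast
        grad_mag.mean(),                 # average edge strength
        (grad_mag > grad_mag.mean() + grad_mag.std()).mean(),  # edge density
    ])
```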
[00105] At 830, based at least in part on the endoscopic image or video features, a personalized, segment-wise endoscope withdrawal plan may be generated, such as using the withdrawal plan generator 324 as shown in FIG. 3. The segment-wise endoscope withdrawal plan may include target or recommended segment-specific endoscope withdrawal parameter values for individual ones of the distinct segments of the anatomy. By way of example and not limitation, and as discussed above with respect to FIG. 3, the target or recommended segment-specific endoscope withdrawal parameter may include a target or recommended segment-specific endoscope withdrawal speed limit (WSL), or a target or recommended segment-specific endoscope withdrawal time limit (WTL), for individual ones of the distinct segments. The WSL represents an upper speed limit or an acceptable withdrawal speed range for individual ones of the distinct segments of the anatomy. The WTL represents a maximal allowed time or an acceptable time range for the endoscope to remain in individual ones of the distinct segments during the endoscope withdrawal process. In an example of colonoscopy, the segment-specific endoscope withdrawal parameter may have respective target or recommended values, such as WSLs or WTLs, for one or more of the cecum segment, the ascending segment, the transverse segment, the descending segment, the sigmoid segment, or the rectosigmoid segment. Other segments or sub-segments may be defined by the user and the corresponding target or recommended withdrawal parameter values may be similarly determined.
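By way of illustration only, the sketch below assembles a segment-wise plan of WSL and WTL values from segment feature vectors. The heuristic mapping from an edge-density feature to the limits is hypothetical; as described above, the actual target values may be produced by trained ML models rather than hand-coded rules.

```python
def build_withdrawal_plan(segment_features, base_wsl=3.0, base_wtl=60.0):
    """Derive per-segment WSL (speed units) and WTL (seconds) from features.

    Illustrative heuristic: a higher edge-density feature (index 3 here)
    suggests more mucosal detail to inspect, so the speed limit is lowered
    and the time limit raised. Base values are hypothetical.
    """
    plan = {}
    for seg, feats in segment_features.items():
        complexity = min(feats[3], 1.0)   # edge-density feature in [0, 1]
        plan[seg] = {
            "WSL": round(base_wsl * (1.0 - 0.5 * complexity), 2),
            "WTL": round(base_wtl * (1.0 + complexity), 1),
        }
    return plan
```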
[00106] In an example, the target or recommended segment-specific endoscope withdrawal parameter values may be estimated using information about anomalies detected in any particular segments of the anatomy. An anomaly in any particular segment may be detected from at least the segment-specific endoscopic image or video features using a template matching technique, or artificial intelligence (AI) or machine learning (ML) based techniques. The anomaly detection includes detecting and/or recognizing one or more of a presence or absence, a type, a size, a shape, a location, or an amount of pathological tissue or anatomical structures, among other objects in an environment of the anatomy. In an example of colonoscopy, the anomaly being detected may include pathological tissue segments, such as mucosal abnormalities (polyps, inflammatory bowel diseases, Meckel’s diverticulum, lipoma, bleeding, vascularized mucosa, etc.), or obstructed mucosa (e.g., segments with poor bowel preparation, distended colon, etc.). In some examples, the target or recommended segment-specific endoscope withdrawal parameter values may be estimated further using a substantially real-time location of the endoscope when the anomaly is detected. The endoscope location may be determined using electromagnetic tracking, or an anatomical landmark recognized from the images or video streams such as obtained during the insertion phase of endoscopy, or image or video stream features extracted therefrom. In an example of landmark-based endoscope localization, the landmark may be recognized using an AI or ML approach, such as a trained ML model that has been trained to establish a correspondence between an endoscopic image or video stream (or features extracted therefrom) and a landmark recognition, as described above with respect to FIG. 3.
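The template matching technique mentioned above can be sketched with a normalized cross-correlation search; the template, threshold, and exhaustive scan are illustrative choices, and the AI/ML detectors mentioned above would typically replace this in practice.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def detect_anomaly(frame, template, threshold=0.8):
    """Slide an anomaly-like template over the frame; report best match.

    Returns ((row, col), score) when the best score clears the threshold,
    otherwise (None, score). An exhaustive scan like this is for
    illustration; real systems would use faster matchers or ML detectors.
    """
    th, tw = template.shape
    best_score, best_loc = -1.0, None
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            score = ncc(frame[r:r + th, c:c + tw], template)
            if score > best_score:
                best_score, best_loc = score, (r, c)
    return (best_loc, best_score) if best_score >= threshold else (None, best_score)
```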
[00107] In an example, the target or recommended segment-specific endoscope withdrawal parameter values may be estimated further using auxiliary input, including, by way of example and not limitation, images or video streams and clinical data from previous endoscopy procedures performed on the patient, patient information and medical history, pre-procedure imaging study data, a user (e.g., endoscopist) profile (including the user’s experience, working environment, affinity to new technology, preference for a certain endoscopy protocol, etc.), endoscope and equipment information, or “AI findings” including states of the AI or ML algorithms for detecting anomalies, landmarks, or other features or structural elements of interest in the target anatomy, as described above with respect to FIG. 4.
[00108] The target or recommended segment-specific endoscope withdrawal parameter values (e.g., target WSL or WTL) for individual ones of the distinct segments may be estimated using an AI or ML approach. In an example, the result of the anomaly detection, the landmark detection and real-time endoscope location information, and the auxiliary input may be applied to a trained ML model. The ML model may have been trained to establish a correspondence between the composite input and a target or recommended value of a segment-specific endoscope withdrawal parameter (e.g., a WSL or WTL), as described above with respect to FIGS. 6A-6B.
[00109] The target or recommended segment-specific endoscope withdrawal parameter values (e.g., segment-specific WSL or WTL) may be used to generate an endoscope withdrawal map that graphically represents the endoscope withdrawal plan, such as the colonoscope withdrawal speed map 510 as shown in FIG. 5. The colonoscope withdrawal speed map may include depictions of distinct segments of the anatomy each color-coded or grayscale-coded to indicate respective target or recommended segment-specific endoscope withdrawal parameter values. The endoscope withdrawal map may be displayed on the user interface as a reference to guide the endoscopist during an endoscopy procedure. In some examples, information about the anomaly detected previously, such as during the insertion phase of endoscopy or from previous endoscopic procedures, may also be displayed on the endoscope withdrawal map to serve as an additional layer of precaution to prevent excessive withdrawal speed in the anomalous segment during endoscope withdrawal.
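A sketch of such color coding, mapping a segment WSL to an RGB value for the withdrawal speed map; the red-to-green scale and the WSL range are illustrative choices, not values from this disclosure.

```python
def wsl_to_color(wsl, wsl_min=1.0, wsl_max=4.0):
    """Map a segment WSL to an (R, G, B) color for the withdrawal speed map.

    Low limits (inspect slowly) render red; high limits render green.
    Values outside [wsl_min, wsl_max] are clamped.
    """
    frac = (wsl - wsl_min) / (wsl_max - wsl_min)
    frac = min(max(frac, 0.0), 1.0)
    return (int(255 * (1 - frac)), int(255 * frac), 0)
```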
[00110] At 840, during the withdrawal phase, an endoscope withdrawal parameter (e.g., withdrawal speed or total withdrawal time spent) in a particular segment of the anatomy can be tracked and measured. Various techniques may be used to track and measure the endoscope withdrawal speed. In one example, the endoscope withdrawal speed may be estimated using an optical flow-based method, where the endoscope withdrawal speed can be estimated using the endoscope image frame change rate. In another example, the endoscope withdrawal speed may be estimated using an electromagnetic (EM) tracking technique. In yet another example, the endoscope withdrawal speed may be estimated with respect to a landmark such as recognized from the endoscopic images as described above. Using the landmark as a reference or fiducial point, the endoscope withdrawal speed can be estimated based on a change in relative position with respect to the recognized landmark. In some examples, during the endoscope withdrawal phase, the location of the endoscope in one of the distinct segments of the anatomy may be detected in substantially real time based at least in part on the endoscopic images or video streams or features extracted therefrom. The measured real-time endoscope withdrawal parameter value (e.g., withdrawal speed) may be associated with the currently recognized segment to produce a measured segment-specific withdrawal parameter value. [00111] The endoscope withdrawal plan generated at step 830, and the actual endoscope withdrawal parameter value measured at step 840, may be provided to a user or a process to facilitate manual or robotic withdrawal of the endoscope. At 852, the endoscope withdrawal plan, such as the endoscope withdrawal map, may be displayed to the user, as illustrated in FIG. 5.
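The frame-change-rate idea behind the optical flow-based estimate can be sketched crudely as follows. The calibration gain and its units are hypothetical, and a real implementation would compute dense or sparse optical flow rather than a mean intensity difference.

```python
import numpy as np

def estimate_withdrawal_speed(prev_frame, curr_frame, dt, gain=0.05):
    """Estimate withdrawal speed from the inter-frame change rate.

    A crude optical-flow stand-in: the mean absolute intensity change per
    second is scaled by a calibration gain (hypothetical units) to
    approximate tip speed. EM tracking or landmark-based estimation,
    also described above, would be used where available.
    """
    diff = np.abs(curr_frame.astype(np.float64) - prev_frame.astype(np.float64))
    change_rate = diff.mean() / dt
    return gain * change_rate
```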
The measured segment-specific withdrawal parameter value (e.g., withdrawal speed) may be compared to the target or recommended segment-specific withdrawal parameter value to determine if the measured value is within a specified margin of the target value. If the measured withdrawal speed falls within a specified margin of the speed limit for that segment, then the withdrawal speed is deemed appropriate. When the measured withdrawal speed deviates from the speed limit beyond the specified margin, then at 854, an alert may be generated to warn the user (e.g., endoscopist) of an inappropriately fast or inappropriately slow withdrawal speed. A recommendation may be provided to the user to adjust the withdrawal speed to substantially conform to the speed limit for that segment. In some examples, the substantially real-time endoscope position in a segment of the anatomy may be displayed along with (e.g., side by side, or overlaid upon) the endoscope withdrawal map, as illustrated in FIG. 5. This provides direct visual feedback to the user on endoscope withdrawal. Additionally or alternatively, in some examples, at 856, a control signal may be generated to a robotic system that can robotically adjust the endoscope withdrawal speed in accordance with the segment-wise withdrawal plan.
[00112] FIG. 9 illustrates generally a block diagram of an example machine 900 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform. Portions of this description may apply to the computing framework of various portions of the endoscope system 300.
[00113] In alternative embodiments, the machine 900 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 900 may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine 900 may act as a peer machine in peer-to-peer (P2P) (or other distributed) network environment. The machine 900 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), other computer cluster configurations.
[00114] Examples, as described herein, may include, or may operate by, logic or a number of components, or mechanisms. Circuit sets are a collection of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuit set membership may be flexible over time and underlying hardware variability. Circuit sets include members that may, alone or in combination, perform specified operations when operating. In an example, hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation. Accordingly, the computer readable medium is communicatively connected to the other components of the circuit set member when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuit set. For example, under operation, execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time.
[00115] Machine (e.g., computer system) 900 may include a hardware processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 904 and a static memory 906, some or all of which may communicate with each other via an interlink (e.g., bus) 908. The machine 900 may further include a display unit 910 (e.g., a raster display, vector display, holographic display, etc.), an alphanumeric input device 912 (e.g., a keyboard), and a user interface (UI) navigation device 914 (e.g., a mouse). In an example, the display unit 910, input device 912 and UI navigation device 914 may be a touch screen display. The machine 900 may additionally include a storage device (e.g., drive unit) 916, a signal generation device 918 (e.g., a speaker), a network interface device 920, and one or more sensors 921, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensors. The machine 900 may include an output controller 928, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate or control one or more peripheral devices (e.g., a printer, card reader, etc.).
[00116] The storage device 916 may include a machine readable medium 922 on which is stored one or more sets of data structures or instructions 924 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 924 may also reside, completely or at least partially, within the main memory 904, within static memory 906, or within the hardware processor 902 during execution thereof by the machine 900. In an example, one or any combination of the hardware processor 902, the main memory 904, the static memory 906, or the storage device 916 may constitute machine readable media.
[00117] While the machine-readable medium 922 is illustrated as a single medium, the term "machine readable medium" may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 924.
[00118] The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 900 and that cause the machine 900 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine-readable medium examples may include solid-state memories, and optical and magnetic media. In an example, a massed machine-readable medium comprises a machine readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine-readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. [00119] The instructions 924 may further be transmitted or received over a communication network 926 using a transmission medium via the network interface device 920 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as WiFi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. 
In an example, the network interface device 920 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communication network 926. In an example, the network interface device 920 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 900, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
Additional Notes
[00120] The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as “examples.” Such examples can include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein. [00121] In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, composition, formulation, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
[00122] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations. The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims

What is claimed is:
1. An endoscope system, comprising: an endoscope, including an imaging system configured to obtain images or video streams of distinct segments of an anatomy of a patient during an endoscopy procedure comprising insertion and subsequent withdrawal of the endoscope in the anatomy; and a controller circuit configured to: analyze the obtained images or video streams to generate endoscopic image or video features for individual ones of the distinct segments of the anatomy; based at least in part on the endoscopic image or video features, generate an endoscope withdrawal plan comprising target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments of the anatomy; and provide the endoscope withdrawal plan to a user or a robotic system to facilitate manual or robotic withdrawal of the endoscope.
2. The endoscope system of claim 1, wherein the target or recommended segment-specific endoscope withdrawal parameter values include target segment-specific endoscope withdrawal speed limits (WSL) for the individual ones of the distinct segments of the anatomy.
3. The endoscope system of any of claims 1-2, wherein the target or recommended segment-specific endoscope withdrawal parameter values include target segment-specific endoscope withdrawal time limits (WTL) for the individual ones of the distinct segments of the anatomy.
4. The endoscope system of any of claims 1-3, wherein the endoscope is a colonoscope for use in a colonoscopy procedure, wherein the imaging system is configured to obtain images or video streams from each of distinct colon segments during the colonoscopy procedure.
5. The endoscope system of any of claims 1-4, wherein to determine the target or recommended segment-specific endoscope withdrawal parameter values includes to determine, for a first segment of the anatomy, a first target segment-specific endoscope withdrawal parameter using at least the endoscopic image or video features generated from the images or video streams obtained prior to the endoscope being withdrawn beyond the first segment during the manual or robotic withdrawal of the endoscope.
6. The endoscope system of claim 5, wherein the images or video streams obtained prior to the endoscope reaching the first segment include images or video streams obtained during the insertion of the endoscope in the anatomy.
7. The endoscope system of any of claims 1-6, wherein the controller circuit is configured to: perform anomaly detection that includes detecting one or more anomalies in the individual ones of the distinct segments based at least in part on the endoscopic image or video features; and determine the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments based at least in part on a result of the anomaly detection.
8. The endoscope system of claim 7, wherein the anomaly detection includes recognizing one or more of a presence or absence, a type, a size, a shape, a location, or an amount of pathological tissue or obstructed mucosa.
9. The endoscope system of any of claims 7-8, wherein the controller circuit is configured to detect the one or more anomalies using a first trained machine-learning (ML) model, the first ML model trained to establish a correspondence between (i) an endoscopic image or video stream or features extracted therefrom and (ii) one or more anomaly characteristics.
10. The endoscope system of any of claims 7-9, wherein the controller circuit is configured to determine the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments by applying the result of anomaly detection to a second trained machine-learning (ML) model, the second ML model trained to establish a correspondence between (i) one or more anomaly characteristics and (ii) a target or recommended segment-specific endoscope withdrawal parameter value.
11. The endoscope system of claim 10, wherein the controller circuit is configured to: determine an anomaly score based on a type, a size, a shape, a location, or an amount of anomaly; and determine the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments by applying the determined anomaly score to the second trained ML model.
12. The endoscope system of any of claims 1-11, wherein the controller circuit is configured to generate the endoscope withdrawal plan further using one or more of image or video streams and clinical data from previous endoscopy procedures; patient information and medical history; or pre-procedure imaging study data.
13. The endoscope system of any of claims 1-12, wherein the controller circuit is configured to display on a user interface a graphic representation of the endoscope withdrawal plan, the graphical representation including depictions of distinct segments of the anatomy each color-coded or grayscale-coded to indicate respective target or recommended segment-specific endoscope withdrawal parameter values.
14. The endoscope system of claim 13, wherein the controller circuit is configured to: determine in substantially real time a location of the endoscope in one of the distinct segments of the anatomy based at least in part on the endoscopic image or video features; register the location of the endoscope to a pre-generated template of the anatomy; and display on the user interface the location of the endoscope in the one of the distinct segments as an overlay on the graphic representation of the endoscope withdrawal plan.
15. The endoscope system of claim 2, wherein the controller circuit is configured to: measure an endoscope withdrawal speed in one of the distinct segments of the anatomy; and when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generate an alert to the user, and provide a recommendation to adjust the withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
16. The endoscope system of any of claims 2 or 15, comprising the robotic system configured to robotically withdraw the endoscope, and wherein the controller circuit is configured to: measure an endoscope withdrawal speed in one of the distinct segments of the anatomy; and when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generate a control signal to the robotic system to automatically adjust the withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
17. The endoscope system of any of claims 2 and 15-16, wherein the controller circuit is configured to: identify an anomalous segment from the distinct segments based at least in part on the endoscopic image or video features; and during the manual or robotic withdrawal of the endoscope and prior to the endoscope reaching the identified anomalous segment, display a visual indicator of the anomalous segment on a user interface, and generate a warning to the user to withdraw the endoscope at a speed lower than the target segment-specific endoscope WSL for the identified anomalous segment.
18. A method of planning withdrawal of an endoscope from an anatomy of a patient in an endoscopy procedure, the method comprising: obtaining images or video streams of distinct segments of the anatomy during an insertion phase of the endoscopy procedure using an imaging system associated with the endoscope; analyzing the obtained images or video streams to generate endoscopic image or video features for individual ones of the distinct segments of the anatomy; based at least in part on the endoscopic image or video features, generating an endoscope withdrawal plan comprising target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments of the anatomy; and providing the endoscope withdrawal plan to a user or a robotic system to facilitate manual or robotic withdrawal of the endoscope.
19. The method of claim 18, wherein the target or recommended segment-specific endoscope withdrawal parameter values include target segment-specific endoscope withdrawal speed limits (WSL) or target segment-specific endoscope withdrawal time limits (WTL) for the individual ones of the distinct segments of the anatomy.
20. The method of any of claims 18-19, comprising determining a first target segment-specific endoscope withdrawal parameter for a first segment of the anatomy using at least the endoscopic image or video features generated from the images or video streams obtained prior to the endoscope being withdrawn beyond the first segment during the manual or robotic withdrawal of the endoscope.
21. The method of claim 20, wherein the images or video streams obtained prior to the endoscope reaching the first segment include images or video streams obtained during the insertion of the endoscope in the anatomy.
22. The method of any of claims 18-21, comprising: performing anomaly detection, including detecting one or more anomalies in the individual ones of the distinct segments based at least in part on the endoscopic image or video features; and determining the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments based at least in part on a result of the anomaly detection.
23. The method of claim 22, wherein the anomaly detection includes using a first trained machine-learning (ML) model, the first ML model trained to establish a correspondence between (i) an endoscopic image or video stream or features extracted therefrom and (ii) one or more anomaly characteristics.
24. The method of any of claims 22-23, wherein determining the target or recommended segment-specific endoscope withdrawal parameter values for the individual ones of the distinct segments includes applying the result of anomaly detection to a second trained machine-learning (ML) model, the second ML model trained to establish a correspondence between (i) one or more anomaly characteristics and (ii) a target or recommended segment-specific endoscope withdrawal parameter value.
25. The method of any of claims 18-24, wherein generating the endoscope withdrawal plan further includes using one or more of image or video streams and clinical data from previous endoscopy procedures; patient information and medical history; or pre-procedure imaging study data.
26. The method of any of claims 18-25, further comprising displaying on a user interface a graphic representation of the endoscope withdrawal plan, the graphical representation including depictions of distinct segments of the anatomy each color-coded or grayscale-coded to indicate respective target or recommended segment-specific endoscope withdrawal parameter values.
27. The method of claim 26, further comprising: determining in substantially real time a location of the endoscope in one of the distinct segments of the anatomy; registering the location of the endoscope to a pre-generated template of the anatomy; and displaying on the user interface the location of the endoscope in the one of the distinct segments as an overlay on the graphic representation of the endoscope withdrawal plan.
28. The method of claim 19, further comprising: measuring an endoscope withdrawal speed in one of the distinct segments of the anatomy; and when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generating an alert to the user, and providing a recommendation to adjust the endoscope withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
29. The method of any of claims 19 or 28, further comprising: measuring an endoscope withdrawal speed in one of the distinct segments of the anatomy; and when the measured endoscope withdrawal speed deviates from the target segment-specific endoscope WSL for the one of the distinct segments by a specific margin, generating a control signal to a robotic system to automatically adjust the withdrawal speed to substantially conform to the target segment-specific endoscope WSL.
30. The method of any of claims 19 and 28-29, further comprising: identifying an anomalous segment from the distinct segments based at least in part on the endoscopic image or video features; and during the manual or robotic withdrawal of the endoscope and prior to the endoscope reaching the identified anomalous segment, displaying a visual indicator of the anomalous segment on a user interface, and generating a warning to the user to withdraw the endoscope at a speed lower than the target segment-specific endoscope WSL for the identified anomalous segment.
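The speed-monitoring behavior recited in claims 15-16 and 28-29 can be sketched as a simple comparison of a measured withdrawal speed against the target segment-specific WSL, with the response depending on whether withdrawal is manual or robotic. The sketch below is illustrative only and not part of the claimed subject matter; the deviation margin, units, and mode names are hypothetical placeholders.

```python
# Illustrative sketch: comparing a measured withdrawal speed against the
# target segment-specific WSL and deciding whether to alert the user
# (manual mode) or command a robotic speed adjustment (robotic mode).
# The margin fraction and mode names are hypothetical placeholders.

def check_withdrawal_speed(measured_cm_per_s: float,
                           target_wsl_cm_per_s: float,
                           margin_fraction: float = 0.10,
                           mode: str = "manual") -> dict:
    """Return the action to take when the measured speed exceeds the WSL
    by more than the specified margin."""
    limit = target_wsl_cm_per_s * (1.0 + margin_fraction)
    if measured_cm_per_s <= limit:
        return {"action": "none"}           # within tolerance
    if mode == "robotic":
        # Control signal: command the robot back to the target WSL.
        return {"action": "adjust_speed",
                "commanded_speed": target_wsl_cm_per_s}
    # Manual mode: alert the user with a recommendation.
    return {"action": "alert",
            "recommendation": f"slow to <= {target_wsl_cm_per_s} cm/s"}
```

For example, with a 0.5 cm/s target WSL and a 10% margin, a measured speed of 0.52 cm/s triggers no action, while 0.6 cm/s triggers an alert (manual) or a speed-adjustment command (robotic).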

Applications Claiming Priority

US Provisional Application No. 63/582,026, filed 2023-09-12

Publications

WO2025059143A1, published 2025-03-20

Family

Family ID: 92895705


Citations (* cited by examiner, † cited by third party)

US20220369920A1* (Verily Life Sciences LLC, published 2022-11-24): Phase identification of endoscopy procedures
WO2023033859A1* (Smart Medical Systems Ltd., published 2023-03-09): Artificial-intelligence-based control system for mechanically-enhanced internal imaging
CN113706536A* (Wuhan University, published 2021-11-26): Sliding mirror risk early warning method and device and computer readable storage medium



Legal Events

121: EP — The EPO has been informed by WIPO that EP was designated in this application (ref document number 24776784; country of ref document: EP; kind code: A1)
DPE1: Request for preliminary examination filed after expiration of 19th month from priority date (PCT application filed from 20040101)