
WO2025059157A1 - AI-based imaging mode selection in endoscopy - Google Patents

AI-based imaging mode selection in endoscopy

Info

Publication number
WO2025059157A1
WO2025059157A1 (PCT/US2024/046168)
Authority
WO
WIPO (PCT)
Prior art keywords
target
imaging
imaging mode
endoscope
recommended
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2024/046168
Other languages
English (en)
Inventor
Sailesh Conjeti
Dawei Liu
Michael Ryan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gyrus ACMI Inc
Original Assignee
Gyrus ACMI Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gyrus ACMI Inc filed Critical Gyrus ACMI Inc
Publication of WO2025059157A1


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/00002 - Operational features of endoscopes
    • A61B 1/00004 - Operational features of endoscopes characterised by electronic signal processing
    • A61B 1/00009 - Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A61B 1/000096 - Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
    • A61B 1/000094 - Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
    • A61B 1/04 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B 1/045 - Control thereof
    • A61B 1/06 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
    • A61B 1/0655 - Control therefor
    • A61B 1/31 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the rectum, e.g. proctoscopes, sigmoidoscopes, colonoscopes

Definitions

  • the present document relates generally to endoscopic systems, and more particularly to systems and methods of determining or adjusting an imaging mode for use in inspecting tissue or foreign objects during an endoscopy procedure.
  • Endoscopes have been used in a variety of clinical procedures, including, for example, illuminating, imaging, detecting and diagnosing one or more disease states, providing fluid delivery (e.g., saline or other preparations via a fluid channel) toward an anatomical region, providing passage (e.g., via a working channel) of one or more therapeutic devices or biological matter collection devices for sampling or treating an anatomical region, and providing suction passageways for collecting fluids (e.g., saline or other preparations), among other procedures.
  • Examples of such anatomical regions can include the gastrointestinal tract (e.g., esophagus, stomach, duodenum, pancreaticobiliary duct, intestines, colon, and the like), the renal area (e.g., kidney(s), ureter, bladder, urethra), and other internal organs (e.g., reproductive systems, sinus cavities, submucosal regions, respiratory tract), and the like.
  • Some endoscopes include a working channel through which an operator can perform suction, placement of diagnostic or therapeutic devices (e.g., a brush, a biopsy needle or forceps, a stent, a basket, or a balloon), or minimally invasive surgeries such as tissue sampling or removal of unwanted tissue (e.g., benign or malignant strictures) or foreign objects (e.g., calculi).
  • Some endoscopes can be used with a laser or plasma system to deliver energy to an anatomical target (e.g., soft or hard tissue or calculi) to achieve desired treatment.
  • lasers have been used in applications of tissue ablation, coagulation, vaporization, fragmentation, and lithotripsy to break down calculi in the kidney, gallbladder, ureter, and other stone-forming regions, or to ablate large calculi into smaller fragments.
  • one example of endoscopy is colonoscopy, which is used to reduce the incidence and mortality rate of colorectal cancer. Colonoscopy is typically performed by rapidly advancing a colonoscope to the cecum and then performing a thorough inspection to detect anomalies (e.g., polyps) and provide necessary treatment (e.g., polypectomy) during withdrawal.
  • An endoscope generally includes an imaging sensor (e.g., a camera) that can take live images or video streams during an endoscopy procedure.
  • the imaging sensor can operate under a preset imaging mode to obtain the live images or video streams.
  • the imaging mode may include lighting mode, optical magnification, viewing angle of the imaging sensor, among other settings of the imaging sensor and lighting system.
  • Different imaging modes have been used in real-time endoscopic inspection and optical diagnosis, such as high-definition white light imaging (WLI) and chromoendoscopy (CE) techniques, whether dye-based or virtual CE like narrow-band imaging (NBI), texture and color enhancement (TXI) imaging, red dichromatic imaging (RDI), etc.
  • Optical magnification has been recommended for inspecting lesions or pathologies of distinct characteristics. Proper selection of an imaging mode can improve image or video quality and facilitate inspection and diagnosis of anomalous tissue or foreign objects of interest during the procedure.
  • although image-enhanced endoscopy (IEE) provides specialized imaging modality recommendations depending on the type of pathology being investigated, toggling between such imaging modalities during an endoscopy procedure remains highly manual and dependent on a clinician both recognizing the current type of pathology being viewed and recalling which imaging modality is recommended for viewing such pathology.
  • given the high dependency on operator preference and experience in reading these advanced endoscopy modes, their usage may result in variability in diagnosis and treatment outcome.
  • although training on advanced endoscopic imaging can potentially improve overall anomaly detection and allow for more accurate optical diagnosis and effective treatment, the training can be costly.
  • the adoption of IEE techniques in colonoscopy is still operator dependent and requires substantial specialized training.
  • An exemplary endoscope system includes an endoscope and a controller circuit.
  • the endoscope includes an imaging system to obtain images or video streams of a target anatomy during an endoscopy procedure.
  • the controller circuit can generate from the obtained images or video streams endoscopic image or video features characterizing an anomaly in the target anatomy, and determine a personalized, pathology-specific imaging mode to enhance discernability of the anomaly from the images or video streams.
  • the recommended imaging mode may be determined using machine learning (ML) techniques.
  • the recommended imaging mode may be provided to a user or a process to facilitate manual or automatic endoscopic imaging mode adjustment during the endoscopy procedure.
  • the systems, devices, and methods described herein may be used in various endoscopy procedures to improve real-time inspection, detection, and diagnosis of pathologies. They can also help reduce inter-operator variability in experience and/or preference and produce more consistent and predictable diagnosis and treatment, while at the same time reducing the cost associated with intensive training.
  • the systems and techniques described herein also promote adoption of advanced IEE technologies, thereby improving the overall endoscopy procedure success rate and patient outcome.
  • Example 1 is an endoscope system including: an endoscope, including an imaging system configured to obtain images or video streams of a target anatomy in a patient during an endoscopy procedure; and a controller circuit configured to: analyze the obtained images or video streams to generate endoscopic image or video features characterizing an anomaly in the target anatomy; based at least in part on the endoscopic image or video features and one or more auxiliary features different from the endoscopic image or video features, automatically determine a target or recommended imaging mode of the imaging system to enhance discernability of the anomaly from the images or video streams of the target anatomy in subsequent imaging of the target anatomy; and provide the target or recommended imaging mode to a user or a robotic system to facilitate manual or automatic endoscopic imaging mode adjustment during the endoscopy procedure.
  • In Example 2, the subject matter of Example 1 optionally includes the target or recommended imaging mode, which can include one or more of a lighting modality, a zoom setting, or a viewing angle with respect to the anomaly.
  • In Example 3, the subject matter of any one or more of Examples 1-2 optionally includes the target or recommended imaging mode, which can include one of a narrow-band imaging (NBI) mode, a red dichromatic imaging (RDI) mode, a white-light imaging (WLI) mode, or a texture and image enhancement (TXI) mode.
  • In Example 4, the subject matter of any one or more of Examples 1-3 optionally includes a colonoscope, wherein the imaging system is configured to obtain images or video streams of each of distinct colon segments during a colonoscopy procedure, wherein the controller circuit is configured to determine respective target or recommended imaging modes for use in subsequent imaging of the distinct colon segments to enhance anomaly discernability from the images or video streams.
  • In Example 5, the subject matter of any one or more of Examples 1-4 optionally includes the controller circuit that can be configured to: detect the anomaly in the target anatomy using the endoscopic image or video features; and determine the target or recommended imaging mode based at least in part on a result of the anomaly detection.
  • In Example 6, the subject matter of Example 5 optionally includes, wherein to detect the anomaly includes to detect a presence or absence of the anomaly and one or more of a type, a size, a shape, a location, or an amount of pathological tissue or obstructed mucosa in the target anatomy.
  • In Example 7, the subject matter of any one or more of Examples 5-6 optionally includes the controller circuit that can be configured to detect the anomaly using a first trained machine-learning (ML) model.
  • In Example 8, the subject matter of any one or more of Examples 1-7 optionally includes the controller circuit that can be configured to: determine in substantially real time a location of the endoscope in the target anatomy based at least in part on the endoscopic image or video features; register the location of the endoscope to a pre-generated template of the target anatomy; and display the location of the endoscope in the target anatomy on a user interface.
  • In Example 9, the subject matter of Example 8 optionally includes the controller circuit that can be configured to recognize an anatomical landmark using the endoscopic image or video features, and to determine the location of the endoscope based on the recognized anatomical landmark.
  • In Example 10, the subject matter of any one or more of Examples 8-9 optionally includes the controller circuit that can be configured to determine the target or recommended imaging mode further using the determined location of the endoscope in the target anatomy.
  • In Example 11, the subject matter of any one or more of Examples 1-10 optionally includes, wherein to determine the target or recommended imaging mode, the controller circuit is configured to apply the endoscopic image or video features to a second trained machine-learning (ML) model, the second trained ML model being trained to establish a correspondence between the endoscopic image or video features and one of a group of candidate imaging modes.
  • In Example 12, the subject matter of Example 11 optionally includes the second trained ML model that can be trained further using the one or more auxiliary features, including one or more of: a profile of an endoscopist performing the endoscopy procedure; patient information and medical history data of the patient; endoscope system setup information; or pre-procedure imaging study data, wherein to determine the target or recommended imaging mode, the controller circuit is configured to apply an augmented input comprising the endoscopic image or video features and an auxiliary input to the second trained ML model.
  • In Example 13, the subject matter of any one or more of Examples 11-12 optionally includes the second trained ML model that can be trained to predict, for each of a plurality of candidate imaging modes, a probability representing a likelihood of a corresponding imaging mode being selected as the target or recommended imaging mode, wherein the controller circuit is configured to determine the target or recommended imaging mode based at least in part on the probabilities corresponding to the candidate imaging modes.
  • In Example 15, the subject matter of any one or more of Examples 1-14 optionally includes the controller circuit that can be configured to generate a control signal to the imaging system to automatically switch to the target or recommended imaging mode and to re-capture images or video streams of the target anatomy in response to the target or recommended imaging mode being different from an existing imaging mode being used to obtain the images or video streams of the target anatomy.
  • Example 16 is a method of determining or adjusting an endoscopic imaging mode during an endoscopy procedure, the method including steps of: obtaining images or video streams of a target anatomy during an endoscopy procedure using an imaging system associated with an endoscope; analyzing the obtained images or video streams to generate endoscopic image or video features characterizing an anomaly in the target anatomy; based at least in part on the endoscopic image or video features and one or more auxiliary features different from the endoscopic image or video features, automatically determining a target or recommended imaging mode of the imaging system to enhance discernability of the anomaly from the images or video streams of the target anatomy in subsequent imaging of the target anatomy; and providing the target or recommended imaging mode to a user or a robotic system to facilitate manual or automatic endoscopic imaging mode adjustment during the endoscopy procedure.
  • In Example 17, the subject matter of Example 16 optionally includes the target or recommended imaging mode, which can include one or more of a lighting modality, a zoom setting, or a viewing angle with respect to the anomaly.
  • In Example 18, the subject matter of any one or more of Examples 16-17 optionally includes the target or recommended imaging mode, which can include one of a narrow-band imaging (NBI) mode, a red dichromatic imaging (RDI) mode, a white-light imaging (WLI) mode, or a texture and image enhancement (TXI) mode.
  • In Example 19, the subject matter of any one or more of Examples 16-18 optionally includes detecting the anomaly in the target anatomy using the endoscopic image or video features; and determining the target or recommended imaging mode based at least in part on a result of the anomaly detection.
  • In Example 20, the subject matter of any one or more of Examples 16-19 optionally includes recognizing an anatomical landmark using the endoscopic image or video features; determining in substantially real time a location of the endoscope in the target anatomy based at least in part on the recognized anatomical landmark; registering the location of the endoscope to a pre-generated template of the target anatomy; and displaying the location of the endoscope in the target anatomy on a user interface, wherein determining the target or recommended imaging mode is further based on the determined location of the endoscope in the target anatomy.
  • In Example 21, the subject matter of any one or more of Examples 16-20 optionally includes determining the target or recommended imaging mode by applying the endoscopic image or video features to a second trained machine-learning (ML) model, the second trained ML model being trained to establish a correspondence between the endoscopic image or video features and one of a group of candidate imaging modes.
  • In Example 22, the subject matter of Example 21 optionally includes the second trained ML model that can be trained further using the one or more auxiliary features, including one or more of: a profile of an endoscopist performing the endoscopy procedure; patient information and medical history data of the patient; endoscope system setup information; or pre-procedure imaging study data, wherein determining the target or recommended imaging mode includes applying an augmented input comprising the endoscopic image or video features and an auxiliary input to the second trained ML model.
  • In Example 23, the subject matter of Example 22 optionally includes the second trained ML model that can be trained to predict, for each of a plurality of candidate imaging modes, a probability representing a likelihood of a corresponding imaging mode being selected as the target or recommended imaging mode, wherein determining the target or recommended imaging mode includes identifying the one of the plurality of candidate imaging modes with a corresponding probability higher than any other of the candidate imaging modes.
  • In Example 24, the subject matter of any one or more of Examples 16-23 optionally includes, in response to the target or recommended imaging mode being different from an existing imaging mode being used to obtain the images or video streams of the target anatomy, providing a recommendation to the user to switch to the target or recommended imaging mode to re-capture images or video streams of the target anatomy.
  • In Example 25, the subject matter of any one or more of Examples 16-24 optionally includes, in response to the target or recommended imaging mode being different from an existing imaging mode being used to obtain the images or video streams of the target anatomy, generating a control signal to the imaging system to automatically switch to the target or recommended imaging mode and to re-capture images or video streams of the target anatomy.
  • the presented techniques are described in terms of controlled withdrawal of an endoscope in colonoscopy but are not so limited.
  • the systems, devices, and techniques as described in accordance with various embodiments in this document may additionally or alternatively be used in other procedures involving different types of endoscopes, including, for example, anoscopy, arthroscopy, bronchoscopy, colonoscopy, colposcopy, cystoscopy, esophagoscopy, gastroscopy, laparoscopy, laryngoscopy, neuroendoscopy, proctoscopy, sigmoidoscopy, thoracoscopy, etc.
  • FIGS. 1-2 are schematic diagrams illustrating an example of an endoscope system for use in endoscopy procedures, such as a colonoscopy procedure.
  • FIG. 3 illustrates an example of an endoscope system configured to determine a target or recommended imaging mode to be used for inspecting tissue or foreign objects of interest during an endoscopy procedure.
  • FIG. 4 illustrates examples of auxiliary input that may be used in determining a target or recommended imaging mode.
  • FIG. 5 illustrates by way of example different stages of an image-guided colonoscopy and automatic imaging modality switching based on characteristics of the anomaly.
  • FIG. 6 illustrates example images of a portion of colon segment under different imaging modes.
  • FIGS. 7A-7B illustrate examples of training a machine learning (ML) model and using the trained ML model to determine a recommended pathology-specific imaging modality for anomaly inspection and diagnosis.
  • FIG. 8 is a flow chart illustrating an example method of determining or adjusting an endoscopic imaging mode during an endoscopy procedure.
  • FIG. 9 is a block diagram illustrating an example machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform.
  • An exemplary endoscope system includes an endoscope and a controller circuit.
  • the endoscope includes at least one imaging sensor to obtain images or video streams of a target anatomy of a patient during an endoscopy procedure.
  • the controller circuit can perform real-time analysis of endoscopic images or video streams and generate endoscopic image or video features characterizing an anomaly in the target anatomy. Based on the endoscopic image or video features, the controller can determine a target or recommended pathology-specific imaging mode to enhance discernability of the anomaly from the images or video streams of the target anatomy.
  • a machine learning (ML) model may be trained and used for determining the target or recommended imaging mode.
  • the target or recommended imaging mode may be provided to a user or a process to facilitate manual or automatic endoscopic imaging mode adjustment during the endoscopy procedure.
  • FIG. 1 is a schematic diagram of an endoscope system 10 for use in an endoscopy procedure, such as colonoscopy.
  • the system 10 can include an imaging and control system 12 and an endoscope 14.
  • the system 10 is an illustrative example of an endoscope system suitable for use with the systems, devices, and methods described herein, such as a colonoscopy system for use in image-guided colonoscopy with auto-adjusted imaging modality as described in this document.
  • the endoscope 14 can be insertable into an anatomical region for imaging or to provide passage of or attachment to (e.g., via tethering) one or more sampling devices for biopsies or therapeutic devices for treatment of a disease state associated with the anatomical region.
  • the endoscope 14 can interface with and connect to imaging and control system 12.
  • the endoscope 14 may be a colonoscope, though other types of endoscopes can be used with the features and teachings of the present disclosure.
  • the imaging and control system 12 can include a control unit 16, an output unit 18, an input unit 20, a light source unit 22, a fluid source 24, and a suction pump 26.
  • the imaging and control system 12 can include various ports for coupling with the endoscope system 10.
  • the control unit 16 can include a data input/output port for receiving data from and communicating data to the endoscope 14.
  • the light source unit 22 can include an output port for transmitting light to the endoscope 14, such as via a fiber optic link.
  • the fluid source 24 can include a port for transmitting fluid to the endoscope 14.
  • the fluid source 24 can include, for example, a pump and a tank of fluid or can be connected to an external tank, vessel, or storage unit.
  • the suction pump 26 can include a port to draw a vacuum from the endoscope 14 to generate suction, such as for withdrawing fluid from the anatomical region into which the endoscope 14 is inserted.
  • the output unit 18 and the input unit 20 can be used by an operator of the endoscope system 10 to control functions of the endoscope system 10 and view the output of the endoscope 14.
  • the control unit 16 can also generate signals or other outputs for treating the anatomical region into which the endoscope 14 is inserted.
  • the control unit 16 can generate electrical output, acoustic output, fluid output, and the like for treating the anatomical region with, for example, cauterizing, cutting, freezing, and the like.
  • the fluid source 24 can be in communication with control unit 16 and can include one or more sources of air, saline, or other fluids, as well as associated fluid pathways (e.g., air channels, irrigation channels, suction channels, or the like) and connectors (barb fittings, fluid seals, valves, or the like).
  • the fluid source 24 can be utilized as an activation energy for a biasing device or a pressure-applying device of the present disclosure.
  • the imaging and control system 12 can also include the drive unit 46, which can include a motorized drive for advancing a distal section of endoscope 14.
  • the endoscope 14 can include an insertion section 28, a functional section 30, and a handle section 32, which can be coupled to a cable section 34 and a coupler section 36.
  • the insertion section 28 can extend distally from the handle section 32, and the cable section 34 can extend proximally from the handle section 32.
  • the insertion section 28 can be elongated and can include a bending section and a distal end to which the functional section 30 can be attached.
  • the bending section can be controllable (e.g., by a control knob 38 on the handle section 32) to maneuver the distal end through tortuous anatomical passageways (e.g., stomach, duodenum, kidney, ureter, etc.).
  • the insertion section 28 can also include one or more working channels (e.g., an internal lumen) that can be elongated and can support the insertion of one or more therapeutic tools of the functional section 30.
  • the working channel can extend between the handle section 32 and the functional section 30. Additional functionalities, such as fluid passages, guide wires, and pull wires, can also be provided by the insertion section 28 (e.g., via suction or irrigation passageways or the like).
  • a coupler section 36 can be connected to the control unit 16 to connect the endoscope 14 to multiple features of the control unit 16, such as the input unit 20, the light source unit 22, the fluid source 24, and the suction pump 26.
  • the handle section 32 can include the knob 38 and the port 40A.
  • the knob 38 can be connected to a pull wire or other actuation mechanisms that can extend through the insertion section 28.
  • the port 40A, as well as other ports, such as a port 40B (FIG. 2), can be configured to couple various electrical cables, guide wires, auxiliary scopes, tissue collection devices, fluid tubes, and the like to the handle section 32, such as for coupling with the insertion section 28.
  • the imaging and control system 12 can be provided on a mobile platform (e.g., a cart 41) with shelves for housing the light source unit 22, the suction pump 26, an image processing unit 42 (FIG. 2), etc.
  • several components of the imaging and control system 12 (shown in FIGS. 1 and 2) can be incorporated into the endoscope 14 to make the endoscope “self-contained.”
  • the functional section 30 can include components for treating and diagnosing anatomy of a patient.
  • the functional section 30 can include an imaging device, an illumination device, and an elevator.
  • the functional section 30 can further include optically enhanced biological matter and tissue collection and retrieval devices.
  • the functional section 30 can include one or more electrodes conductively connected to the handle section 32 and functionally connected to the imaging and control system 12 to analyze biological matter in contact with the electrodes based on comparative biological data stored in the imaging and control system 12.
  • the functional section 30 can directly incorporate tissue collectors.
  • the endoscope 14 can be robotically controlled, such as by a robot arm attached thereto.
  • the robot arm can automatically, or semi-automatically (e.g., with certain user manual control or commands), via an actuator, position and navigate the endoscope 14 (e.g., the functional section 30 and/or the insertion section 28) in the target anatomy or position a device at a desired location with desired posture to facilitate an operation on an anatomical target.
  • a controller can generate a control signal to the actuator of the robot arm to facilitate anomaly inspection and diagnosis under the target or recommended imaging mode in a robotically assisted endoscopy procedure.
  • FIG. 2 is a schematic diagram of the endoscope system 10 of FIG. 1 including the imaging and control system 12 and the endoscope 14.
  • FIG. 2 schematically illustrates components of the imaging and the control system 12 coupled to the endoscope 14, which in the illustrated example includes a colonoscope.
  • the imaging and control system 12 can include the control unit 16, which can include or be coupled to an image processing unit 42, a treatment generator 44, and a drive unit 46, as well as the light source unit 22, the input unit 20, and the output unit 18.
  • the control unit 16 can include, or can be in communication with, an endoscope, a surgical instrument 48, and an endoscope system, which can include a device configured to engage tissue and collect and store a portion of that tissue and through which imaging equipment (e.g., a camera) can view target tissue via inclusion of optically enhanced materials and components.
  • the control unit 16 can be configured to activate a camera to view target tissue distal of the endoscope system.
  • the control unit 16 can be configured to activate the light source unit 22 to shine light on the surgical instrument 48, which can include select components configured to reflect light in a particular manner, such as enhanced tissue cutters with reflective particles.
  • the coupler section 36 can be connected to the control unit 16 to connect the endoscope 14 to multiple features of the control unit 16, such as the image processing unit 42 and the treatment generator 44.
  • the port 40A can be used to insert another surgical instrument 48 or device, such as a daughter scope or auxiliary scope, into the endoscope 14. Such instruments and devices can be independently connected to the control unit 16 via the cable 47.
  • the port 40B can be used to connect coupler section 36 to various inputs and outputs, such as video, air, light, and electric.
  • the image processing unit 42 and light source unit 22 can each interface with the endoscope 14 (e.g., at the functional section 30) by wired or wireless electrical connections.
  • the imaging and control system 12 can accordingly illuminate an anatomical region, collect signals representing the anatomical region, process signals representing the anatomical region, and display images representing the anatomical region on the output unit 18.
  • the imaging and control system 12 can include the light source unit 22 to illuminate the anatomical region using light of desired spectrum (e.g., broadband white light, narrow-band imaging using preferred electromagnetic wavelengths, and the like).
  • the imaging and control system 12 can connect (e.g., via an endoscope connector) to the endoscope 14 for signal transmission (e.g., light output from light source, video signals from imaging system in the distal end, diagnostic and sensor signals from a diagnostic device, and the like).
  • the treatment generator 44 can generate a treatment plan, which can be used by the control unit 16 to control the operation of the endoscope 14, or to provide the operating physician with guidance for maneuvering the endoscope 14, during an endoscopy procedure.
  • the treatment generator 44 can generate an endoscope navigation plan, including estimated values for one or more cannulation or navigation parameters (e.g., an angle, a force, etc.) for maneuvering the steerable elongate instrument, using patient information including an image of the target anatomy.
  • the endoscope navigation plan can help guide the operating physician to cannulate and navigate the endoscope in the patient anatomy.
  • the endoscope navigation plan may additionally or alternatively be used to robotically adjust the position, angle, force, and/or navigation of the endoscope or other instrument.
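For illustration only, such a navigation plan could be represented as a simple structured record; the field names and values below are assumptions made for this sketch, not the actual output format of the treatment generator 44:

```python
# A minimal sketch of an endoscope navigation plan as structured parameter
# estimates; the field names and values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class NavigationPlan:
    cannulation_angle_deg: float  # estimated cannulation/approach angle
    insertion_force_n: float      # estimated insertion force, in newtons
    notes: str = ""

plan = NavigationPlan(cannulation_angle_deg=30.0, insertion_force_n=2.5,
                      notes="derived from pre-procedure imaging of the target anatomy")
print(plan)
```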
  • FIG. 3 is a block diagram illustrating an example of an endoscope system 300 that can determine a target or recommended imaging mode to be used for inspecting tissue or foreign objects of interest during an endoscopy procedure.
  • the target or recommended imaging mode can help improve real-time inspection, recognition, and optical diagnosis of various pathologies.
  • the system 300 may be used in a colonoscopy procedure to better detect and manage anomalies such as polyps or colorectal cancers.
  • the system 300 may be implemented as a part of the control unit 16 in FIG. 1.
  • the system 300 may include one or more of an endoscope 310, an auxiliary input 315, a controller circuit 320, a user interface 330, and a storage device 340.
  • the system 300 may further include or be communicatively coupled to a robotic system 350 in a robotically assisted endoscopy procedure.
  • the endoscope 310 can be an example of the endoscope 14 as described above and shown in FIGS. 1-2.
  • the endoscope 310 may include, among other things, an imaging system 312 and a lighting system 314.
  • the imaging system 312 can include at least one imaging sensor or device (e.g., a camera) configured to obtain images or video streams of a target anatomy of a patient during an endoscopy procedure.
  • the imaging sensor or device may be located at a distal portion or a distal end of the endoscope 310.
  • the lighting system 314 may include one or more light sources to produce illumination on the target anatomy via one or more lighting lenses.
  • the imaging system 312 may be controllably adjusted to operate on different settings including zoom settings, contrast settings, exposure levels, or viewing angles toward the target anatomy.
  • the lighting system 314 may be controllably adjusted to provide different lighting or illumination conditions.
  • the imaging system 312 and the lighting system 314 together can define an imaging mode or modality for capturing endoscopic images or video streams of the target anatomy.
  • the imaging mode or modality may include one or more of a lighting modality, an optical magnification, or a viewing angle of the imaging device.
  • imaging or lighting modality may include high-definition white light imaging (WLI), chromoendoscopy techniques like dye-based CE, or virtual CE like narrow-band imaging (NBI), texture and color enhancement (TXI) imaging, red dichromatic imaging (RDI), among others.
  • the optical magnification defines a zoom setting (e.g., zoom in or zoom out of a suspicious anomaly of the target anatomy).
  • the viewing angle of the imaging device, also referred to as the field of view, describes the angular extent of a given scene that is imaged by the imaging device.
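As a hedged sketch of the preceding description, an imaging mode combining a lighting modality, an optical magnification, and a viewing angle might be represented as follows; the field names and values are illustrative assumptions, not the system's actual configuration schema:

```python
# A minimal sketch of an imaging mode as a configuration record; the names
# and default values are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class ImagingMode:
    lighting: str             # e.g., "WLI", "NBI", "RDI", or "TXI"
    magnification: float      # optical zoom setting (zoom in/out of a suspicious anomaly)
    viewing_angle_deg: float  # field of view of the imaging device

current_mode = ImagingMode(lighting="WLI", magnification=1.0, viewing_angle_deg=140.0)
print(current_mode)
```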
  • the controller circuit 320 may include circuit sets comprising one or more other circuits or sub-circuits that may, alone or in combination, perform the functions, methods, or techniques described herein.
  • the controller circuit 320 and the circuit sets therein may be implemented as a part of a microprocessor circuit, which may be a dedicated processor such as a digital signal processor, application specific integrated circuit (ASIC), microprocessor, or other type of processor for processing information including physical activity information.
  • the microprocessor circuit may be a general-purpose processor that may receive and execute a set of instructions of performing the functions, methods, or techniques described herein.
  • hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired).
  • the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation.
  • a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation.
  • the instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation.
  • the computer readable medium is communicatively coupled to the other components of the circuit set member when the device is operating.
  • any of the physical components may be used in more than one member of more than one circuit set.
  • execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time.
  • the controller circuit 320 can determine a personalized, pathology-specific imaging modality in substantially real time during an endoscopy procedure using at least the endoscopic images or video streams or features extracted therefrom.
  • imaging modality plays an important role in determining image or video quality during endoscopy and therefore the accuracy and efficiency of optical detection, diagnosis, and treatment of pathologies of various types.
  • determining a desired or “optimal” imaging modality best suited to detect or diagnose certain pathologies is a highly manual process. It generally requires the user (e.g., endoscopist) to manually toggle among the available imaging modalities, recognize a current type of pathology being viewed, and recall which imaging modality was used for viewing such pathology.
  • the system 300 can automate the imaging modality selection and optimization process using artificial intelligence (AI) or machine learning (ML) based techniques, as described in further detail below.
  • the controller circuit 320 may include one or more of an image processor 321, an anomaly detector circuit 322, an endoscope localization circuit 323, and an image mode selector circuit 324.
  • the image processor 321 can analyze the images or video streams obtained from the imaging system 312 and generate endoscopic image or video features.
  • the image or video features include statistical features of pixel values or morphological features, such as corners, edges, blobs, curvatures, speeded up robust features (SURF), or scale-invariant feature transform (SIFT) features, among others.
  • the image processor 321 may pre-process the images or video streams, such as filtering, resizing, orienting, or color or grayscale correction, and the endoscopic features may be extracted from the pre-processed images or video streams.
  • the image processor 321 may post-process the image features to enhance feature quality, such as edge interpolation or extrapolation to produce continuous and smooth edges.
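For illustration only, the feature extraction described above might be sketched as follows, assuming OpenCV (opencv-python) is available; the synthetic frame and the parameter values are stand-ins, not the actual settings of the image processor 321:

```python
# A minimal sketch of endoscopic image feature extraction with OpenCV and
# NumPy; the random frame below stands in for a grayscale endoscopic frame.
import cv2
import numpy as np

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)

# Statistical features of pixel values.
pixel_stats = {"mean": float(frame.mean()), "std": float(frame.std())}

# Morphological features: corners, edges, and SIFT keypoints/descriptors.
corners = cv2.goodFeaturesToTrack(frame, maxCorners=100, qualityLevel=0.01, minDistance=10)
edges = cv2.Canny(frame, 50, 150)
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(frame, None)

print(pixel_stats, len(keypoints), edges.shape)
```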
  • the anomaly detector circuit 322 may detect an anomaly in the target anatomy based at least in part on the endoscopic image or video features.
  • the anomaly detection includes detecting and/or recognizing one or more of a presence or absence, a type, a size, a shape, a location, or an amount of pathological tissue or anatomical structures, among other objects in an environment of the anatomy.
  • the anomaly being detected may include pathological tissue segments, such as mucosal abnormalities (polyps, inflammatory bowel diseases, Meckel’s diverticulum, lipoma, bleeding, vascularized mucosa etc.), or obstructed mucosa (e.g., segments with bad bowel preparation, distended colon etc.).
  • the anomaly detector circuit 322 may detect the anomaly using a template matching technique, in which an anomaly may be recognized based on a comparison of the endoscopic features (e.g., features characterizing shapes or contours of a structure) to one or more pre-generated templates of known anomalous structure.
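A template matching step of this kind might be sketched as follows with OpenCV; the frame, the template, and the match threshold are synthetic stand-ins rather than real pre-generated templates of known anomalous structures:

```python
# A minimal template-matching sketch for anomaly recognition; arrays and
# the threshold value are illustrative stand-ins.
import cv2
import numpy as np

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
template = frame[100:164, 200:264].copy()  # stand-in for a stored anomaly template

# Normalized cross-correlation: scores near 1.0 indicate a strong match.
scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

MATCH_THRESHOLD = 0.8  # assumed value; tuned per template in practice
if best_score >= MATCH_THRESHOLD:
    print(f"candidate anomaly at {best_loc} (score {best_score:.2f})")
```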
  • the anomaly detector circuit 322 may detect the anomaly using AI or ML based techniques, where the endoscopic images or video streams or features extracted therefrom may be applied to a trained ML model to automatically recognize presence or absence, type, size, location, and/or other characteristics of the anomaly.
  • the ML model may be trained to establish a correspondence between an endoscopic image or video stream or features extracted therefrom and one or more anomaly characteristics.
  • Examples of the ML model used for recognizing an anomaly from endoscopic images or video streams include Convolutional Neural Networks, bidirectional LSTM, Recurrent Neural Networks, Conditional Random Fields, Dictionary Learning, or other machine learning techniques (support vector machine, Bayesian models, decision trees, k-means clustering), among other ML techniques.
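One illustrative sketch of a small convolutional network for this task, assuming PyTorch, is shown below; the architecture, input size, and class labels are assumptions made for the sketch, not the patent's trained model:

```python
# A minimal CNN sketch for anomaly recognition from endoscopic frames.
import torch
import torch.nn as nn

class AnomalyNet(nn.Module):
    def __init__(self, num_classes=4):  # e.g., none / polyp / bleeding / obstructed mucosa
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 56 * 56, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

model = AnomalyNet()
frame_batch = torch.randn(1, 3, 224, 224)            # stand-in preprocessed frame
class_probs = torch.softmax(model(frame_batch), dim=-1)
print(class_probs)  # per-class probabilities of anomaly presence/type
```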
  • the trained ML model may be stored in the storage device 340.
  • the endoscope localization circuit 323 may determine in substantially real time a location of the endoscope in a particular segment of the anatomy. With such real-time endoscope location information, the anomaly detector circuit 322 may associate a detected anomaly with the segment of the anatomy.
  • the endoscope location may be determined using electromagnetic tracking, or using anatomical landmark detected from the images or video streams such as obtained during the insertion phase of endoscopy, or image or video stream features extracted therefrom by the image processor 321.
  • the endoscope localization circuit 323 may recognize from the images or video streams colon landmarks such as the anus, the descending colon (on the left), the splenic flexure, the transverse colon, the hepatic flexure, the ascending colon (on the right), the cecum, the appendix, and the terminal ileum, etc.
  • the endoscope localization circuit 323 may recognize the landmarks using a template matching algorithm.
  • the endoscope localization circuit 323 may recognize the landmarks using an AI or ML approach, such as a trained ML model that has been trained to establish a correspondence between an endoscopic image or video stream or features extracted therefrom and a landmark recognition.
  • Examples of the ML model used for recognizing an anatomical landmark include Deep Belief Network, ResNet, DenseNet, Autoencoders, capsule networks, generative adversarial networks, Siamese networks, Convolutional Neural Networks (CNN), deep reinforcement learning, support vector machine (SVM), Bayesian models, decision trees, k-means clustering, among other ML models.
  • the trained ML model may be stored in a storage device 340. Once the endoscope location is determined, the endoscope localization circuit 323 may register the substantially real-time endoscope location to a pre-generated template of the anatomy. Information of the endoscope location in the segments of an anatomy may be presented to the user on the user interface 330.
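For illustration only, registering a recognized landmark to a pre-generated template might look like the sketch below; the landmark list and index mapping are assumptions, not the actual template used by the endoscope localization circuit 323:

```python
# A minimal sketch of landmark-based registration against a pre-generated
# colon template; the ordered landmark list is an illustrative assumption.
COLON_TEMPLATE = [
    "anus", "descending colon", "splenic flexure",
    "transverse colon", "hepatic flexure", "ascending colon", "cecum",
]

def register_location(recognized_landmark: str) -> int:
    """Map a recognized landmark to its template index for display."""
    return COLON_TEMPLATE.index(recognized_landmark)

print(register_location("splenic flexure"))  # -> 2, shown on the user interface
```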
  • the imaging mode selector circuit 324 may determine a personalized, pathology-specific imaging modality using at least the information about the detected anomaly (as produced by the anomaly detector circuit 322) and the substantially real-time location of the endoscope when the anomaly is detected (as produced by the endoscope localization circuit 323).
  • the personalized pathology-specific imaging modality may include a lighting modality, an optical magnification (i.e., a zoom setting of the imaging system 312), or a viewing angle of the imaging system 312.
  • Examples of lighting modality may include high-definition white light imaging (WLI), chromoendoscopy techniques like dye-based, or virtual CE like narrow-band imaging (NBI), texture and color enhancement (TXI) imaging, red dichromatic imaging (RDI), among others.
  • the optical magnification defines a zoom setting (e.g., zoom in or zoom out of a suspicious anomaly of the target anatomy).
  • the viewing angle of the imaging device, also referred to as the field of view, describes the angular extent of a given scene that is imaged by the imaging device.
  • the personalized, pathology-specific imaging modality may be determined using AI or ML based techniques.
  • information about the detected anomaly, the landmark, and the substantially real-time endoscope location may be applied to the trained ML model(s) 360.
  • the ML model(s) 360 may be trained to establish a correspondence between an anomaly characteristic and a target or recommended imaging modality.
  • the ML model(s) 360, once trained, may be stored in a storage device 340. Examples of training an ML model and using the trained ML model to determine a personalized pathology-specific imaging modality are discussed below with respect to FIGS. 7A-7B.
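As a hedged sketch, the mode recommendation can be framed as multi-class prediction over candidate imaging modes, consistent with the probability-based selection described above; the features, labels, and training data below are synthetic stand-ins for the trained ML model(s) 360, and scikit-learn is assumed:

```python
# A minimal sketch of imaging-mode recommendation as multi-class prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

MODES = ["WLI", "NBI", "RDI", "TXI"]  # candidate imaging modes

# Synthetic training set: anomaly/location feature vectors paired with mode labels.
X_train = np.random.rand(200, 16)
y_train = np.random.randint(0, len(MODES), size=200)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predict a probability per candidate mode and pick the most likely one.
features = np.random.rand(1, 16)          # features of the current anomaly
probs = clf.predict_proba(features)[0]
recommended = MODES[int(np.argmax(probs))]
print(dict(zip(MODES, probs.round(2))), "->", recommended)
```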
  • the imaging mode selector circuit 324 may compute an anomaly score based on one or more anomaly characteristics such as a type, a size, a shape, a location, or an amount of the anomaly.
  • the anomaly score may have numerical values, such as within a range of 0 to 10.
  • a composite anomaly score may be generated such as using a linear combination or a non-linear combination of multiple anomaly scores each quantifying an anomaly characteristic.
  • the imaging mode selector circuit 324 may map the anomaly score to an imaging mode (e.g., one or more of a lighting modality, an optical magnification, or a viewing angle of the imaging system 312) based on a comparison to one or more threshold values or value ranges, such that a higher anomaly score, which typically indicates a more severe anomalous condition, can be mapped to a more enhanced lighting modality or a higher magnification.
  • An established correspondence between anomaly scores or ranges of anomaly scores and corresponding imaging modes may be implemented as a lookup table, a mapping to a normative database, or a rule-based system. The established mapping may be stored in the storage device 340.
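A rule-based mapping of this kind might be sketched as follows; the weights, thresholds, and mode choices are assumptions for illustration, not values established by the system described above:

```python
# A minimal rule-based sketch mapping a composite anomaly score (0-10) to an
# imaging mode via score ranges; all numeric values are illustrative.
def composite_score(type_s, size_s, shape_s, weights=(0.5, 0.3, 0.2)):
    """Linear combination of per-characteristic anomaly scores, each 0-10."""
    return weights[0] * type_s + weights[1] * size_s + weights[2] * shape_s

SCORE_TO_MODE = [                  # (upper bound, imaging mode), checked in order
    (3.0, "WLI"),                  # low severity: default white-light imaging
    (7.0, "NBI"),                  # moderate: enhanced lighting modality
    (10.0, "NBI + magnification"), # severe: enhanced lighting plus optical zoom
]

def recommend_mode(score):
    for upper_bound, mode in SCORE_TO_MODE:
        if score <= upper_bound:
            return mode
    return SCORE_TO_MODE[-1][1]

print(recommend_mode(composite_score(8, 6, 7)))  # 7.2 -> "NBI + magnification"
```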
  • the imaging mode selector circuit 324 may apply the anomaly score to the trained ML model(s) 360 to determine the personalized pathology-specific imaging modality.
  • the anomaly score calculation may be incorporated into the ML model, such as in a layer of the neural network model.
  • the imaging mode selector circuit 324 may determine the personalized pathology-specific imaging modality further using an auxiliary input 315.
  • the auxiliary input 315 may include, by way of example and not limitation, image or video streams and clinical data from previous endoscopy procedures 410 performed on the patient, patient information and medical history 420, or pre-procedure imaging study data 430 (e.g., X-ray or fluoroscopy images, electrical potential map or an electrical impedance map, computer tomography (CT) images, magnetic resonance imaging (MRI) images, among other imaging modalities).
  • the auxiliary input may include previous colonoscopies in surveillance cases, including those polyps that the endoscopist left in situ or areas of the colon which were operated on.
  • the patient information and medical history 420 may include, in a typical endoscopy scenario, clinical demographic information, past and current indications and treatment received, etc.
  • the auxiliary input 315 may additionally or alternatively include a user (e.g., endoscopist) profile 440 including the user's experience, working environment (e.g., a hospital setting or an ambulatory screening centre), affinity to new technology, and preference for a certain endoscopy protocol or certain imaging modes.
  • user profile 440 may be provided by the user via the user interface 330. Alternatively, the user profile 440 may be automatically generated or updated through learning from the user’s past choices and training.
  • the auxiliary input 315 may additionally or alternatively include endoscope and equipment information 450.
  • This may include, for example, specifications of the endoscope 310, including the type, size, dimension, shape, and structures of the endoscope or other steerable instruments such as a cannula, a catheter, or a guidewire, and the supported imaging and lighting modes (e.g., NBI, RDI, WLI, TXI, etc.); specifications of the size, dimension, shape, and structures of tissue section, sampling, or treatment tools; and the current state of the equipment, including which light mode is on, which endoscope buttons (e.g., waterjet, insufflation) are engaged, which light modes are supported, and whether magnification is turned on along with the current magnification selection.
  • AI findings 460 can be additional elements of the auxiliary input 315 that can be passed to the imaging mode selector circuit 324.
  • Examples of the AI findings 460 may include outputs of AI algorithms for detecting anomalies, landmarks, or other features or structural elements of interest in the target anatomy.
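For illustration only, the augmented input formed from the endoscopic features and the auxiliary input 315 might be sketched as a simple concatenation; every vector and encoding below is a synthetic stand-in, not the system's actual feature schema:

```python
# A minimal sketch of forming an augmented input by concatenating endoscopic
# image/video features with auxiliary features; all vectors are illustrative.
import numpy as np

image_features = np.random.rand(128)              # e.g., embedding of the current frame
endoscopist_profile = np.array([0.8, 1.0, 0.0])   # e.g., experience, NBI affinity, TXI affinity
patient_history = np.array([1.0, 0.0])            # e.g., prior polypectomy flag, IBD flag
equipment_state = np.array([1.0, 0.0, 0.0, 0.0])  # one-hot current mode: WLI/NBI/RDI/TXI

augmented_input = np.concatenate(
    [image_features, endoscopist_profile, patient_history, equipment_state]
)
print(augmented_input.shape)  # (137,), applied to the second trained ML model
```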
  • Information about the personalized pathology-specific imaging modality as determined by the imaging mode selector circuit 324 may be displayed on the user interface 330 to assist the endoscopist during an endoscopy procedure, such as during image-guided colonoscopy. If the determined imaging modality is not the same as the current imaging modality being used for inspecting and detecting the anomaly, the user (e.g., endoscopist) may be notified or warned of such difference via the user interface 330. The notification may be delivered through optical means on the diagnostic monitor, or via auditory means like a warning alarm. In an example, highlighting, flash alerts, audible or haptic feedback may be provided to the user. Examples of automatically switching the pathology-specific imaging modality are discussed below with reference to FIG. 5.
  • Other information including the endoscopic images and image features, information about the detected anomaly and landmarks in each segment of the anatomy, or substantially real-time location of the endoscope during the endoscopy procedure, may also be displayed on the user interface 330.
  • information about the anomaly detected and treated, and the total net withdrawal time, among other information, may be included in a post-procedure summary.
  • post-procedure analytics may be used for quality assurance and for determining if the colonoscopy procedure was successful.
  • the personalized pathology-specific imaging modality may be stored in the storage device 340.
  • Information about the anomaly detected may also be stored in the storage device 340.
  • the storage device 340 can be local to the endoscope system 300.
  • the storage device 340 can be a remote storage device, such as a part of a cloud comprising one or more storage and computing devices (e.g., servers) that provides secure access to cloud-based services including, for example, data storage, computing services, and provisioning of customer services, among others.
  • at least some of the data processing and computation with regard to anomaly detection, landmark recognition, endoscope localization, and imaging mode selection may be performed in a cloud.
  • the system 300 may be operated in a closed-loop fashion with a feedback loop for continuous learning of the pathology-specific imaging modality, such as updating the target or recommended imaging modality.
  • the continuous learning may be via explicit endoscopist feedback (e.g., satisfaction with recommendations, a “like” button etc.).
  • the system 300 can run in a shadow mode alongside experienced endoscopists, monitor their withdrawal actions, and correct the segment-wise withdrawal plan through reinforcement feedback.
  • image-guided endoscopy may be performed using the robotic system 350.
  • the robotic system 350 can automatically adjust the imaging modality in accordance with the recommended pathology-specific imaging modality.
  • the robotic system 350 may include a robot arm detachably attached to the endoscope 310.
  • the robot arm can automatically, or semi-automatically (e.g., with certain manual user control or commands), via an actuator, position and manipulate the endoscope 310 in an anatomical target, or position a device at a desired location with a desired posture to facilitate an operation on the anatomical target.
  • FIG. 5 is a diagram illustrating, by way of example, different stages of an image-guided endoscopy procedure 500, such as the colonoscopy shown in this example, and automatic imaging modality switching based on characteristics of an anomaly detected during the procedure.
  • the procedure includes an endoscope insertion stage, followed by a withdrawal stage.
  • the image-guided endoscopy procedure may be carried out using the system 300.
  • endoscopic images or video streams can be obtained respectively for each of a plurality of colon segments, including the rectosigmoid segment, sigmoid segment, descending segment, transverse segment, ascending segment, and cecum, such as using the imaging system 312.
  • An anomaly may be detected automatically using the anomaly detector circuit 322 during the insertion phase or, alternatively, as shown in FIG. 5, detected or confirmed during the withdrawal phase.
  • the imaging mode selection may be triggered manually or automatically by automatic cecum detection.
  • a default imaging mode, including a default lighting mode such as white light imaging (WLI) as shown in FIG. 5, may be used.
  • when a first anomaly is detected from endoscopic image 510 of a colon segment, the imaging mode selector circuit 324 recommends, in substantially real time, a pathology-specific imaging modality, including a recommended lighting mode of narrow-band imaging (NBI), such as using the trained ML model 360.
  • NBI mode can enhance discernability of the first anomaly from the background of the image 510. The imaging mode can then be automatically switched from WLI to NBI, and the first anomaly is re-inspected and diagnosed under the NBI mode. Alternatively, the user may be prompted to switch to NBI mode to re-inspect the first anomaly. Image 512 of the first anomaly under NBI mode can be displayed to the user in substantially real time.
  • the imaging mode can then be switched back to the default WLI for inspecting other colon segments.
  • a subsequent inspection reveals a second anomaly detected from endoscopic image 520 of another segment.
  • the imaging mode selector circuit 324 recommends in substantially real time a pathology-specific imaging modality, including a recommended lighting mode of red dichromatic imaging (RDI), such as using the trained ML model 360.
  • RDI mode can enhance discernability of the second anomaly from the background of the image 520. The imaging mode is then automatically switched from WLI to RDI, and the second anomaly is re-inspected and diagnosed under the RDI mode.
  • Image 522 of the second anomaly under RDI mode can be displayed to the user in substantially real time.
  • the imaging mode can be switched back to the default WLI for inspecting other colon segments.
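To make the FIG. 5 flow concrete (default WLI, switch to the recommended mode when an anomaly is found, re-inspect, then revert), here is a minimal Python sketch; the `detector`, `selector`, and `equipment` objects and their methods are assumptions for illustration, not components named in the disclosure.

```python
DEFAULT_MODE = "WLI"  # default lighting mode during withdrawal

def inspect_segment(frame, equipment, detector, selector, auto_switch=True):
    """One pass of the per-segment loop sketched in FIG. 5."""
    anomaly = detector.detect(frame)                  # e.g., anomaly detector circuit 322
    if anomaly is None:
        return None                                   # nothing found; stay in WLI
    recommended = selector.recommend(frame, anomaly)  # e.g., imaging mode selector circuit 324
    if recommended != equipment.current_mode:
        if auto_switch:
            equipment.set_mode(recommended)           # automatic switching
        else:
            print(f"Recommendation: switch to {recommended} to re-inspect")
    diagnosis = detector.reinspect(equipment.capture(), anomaly)
    equipment.set_mode(DEFAULT_MODE)                  # revert for the remaining segments
    return diagnosis
```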
  • FIG. 6 illustrates example images of a portion of a colon segment under different imaging modes.
  • Image 610 was obtained under a “default” WLI mode.
  • a lesion 615 is detected from the endoscopic image 610.
  • a recommendation to switch to narrow-band imaging (NBI) is generated.
  • Imaging mode can be automatically or manually switched to the NBI mode.
  • the lesion 615 is then re-inspected in the image 620 obtained under NBI mode.
  • NBI mode can enhance discernability of the lesion 615 from the background of the image 620, allowing for more accurate and efficient real-time inspection and optical diagnosis of the lesion 615.
  • FIGS. 7A-7B are diagrams illustrating examples of training an ML model and using the trained ML model to determine a recommended pathology-specific imaging modality for anomaly inspection and diagnosis.
  • FIG. 7A illustrates an ML model training (or learning) phase during which an ML model 720 may be trained to determine a target or recommended pathology-specific imaging modality based at least in part on endoscopic images of the anomaly and surrounding environment.
  • a training dataset may include a plurality of endoscopic images 710 of the same type of anomaly (e.g., a polyp or lesion in the lining of a colon segment) obtained from colonoscopy procedures performed on a plurality of patients.
  • the training data may also include auxiliary input 315 as described above with respect to FIG. 3.
  • auxiliary input may include supporting imaging modes and lighting modes 732 (an example or a portion of the endoscope and equipment information 450), endoscopist preference 734 (an example of the user profile 440), clinical information 736 (which may include one or more of the image or video streams and clinical data from previous endoscopy procedures 410, the patient information and medical history 420, or the pre-procedure imaging study data 430), or AI findings 738 (an example of the AI findings 460).
  • Endoscope location information for each of the plurality of endoscopic images 710 may also be included in the training dataset.
  • the ML model 720 may have a neural network structure comprising an input layer, one or more hidden layers, and an output layer.
  • the plurality of endoscopic images 710 or features generated therefrom, along with one or more of the auxiliary inputs 732, 734, 736, or 738, may be fed into the input layer of the ML model 720, which propagates the input data or data features through one or more hidden layers to the output layer that outputs a recommended pathology-specific imaging modality.
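A minimal sketch of such a network follows, written in PyTorch; the layer widths, feature dimensions, and the four-mode output are illustrative assumptions, not values taken from the disclosure.

```python
import torch
import torch.nn as nn

class ImagingModeSelectorNet(nn.Module):
    """Toy version of ML model 720: image features and auxiliary-input
    features enter the input layer, propagate through hidden layers, and
    the output layer scores each candidate imaging mode."""
    def __init__(self, img_dim=512, aux_dim=64, n_modes=4):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Linear(img_dim + aux_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
        )
        self.out = nn.Linear(64, n_modes)  # e.g., WLI, NBI, RDI, TXI

    def forward(self, img_feats, aux_feats):
        x = torch.cat([img_feats, aux_feats], dim=-1)  # fuse the two feature sets
        return self.out(self.hidden(x))                # logits, one per imaging mode

net = ImagingModeSelectorNet()
logits = net(torch.randn(1, 512), torch.randn(1, 64))
probs = logits.softmax(dim=-1)  # probability per candidate mode
```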
  • the ML model 720 is able to perform tasks, without explicitly being programmed, by making inferences based on patterns found in the analysis of data.
  • Machine learning explores the study and construction of algorithms (e.g., ML algorithms) that may learn from existing data and make predictions about new data. Such algorithms operate by building the ML model 720 from training data in order to make data-driven predictions or decisions expressed as outputs or assessments.
  • the ML model 720 may be trained using supervised learning or unsupervised learning.
  • Supervised learning uses prior knowledge (e.g., examples that correlate inputs to outputs or outcomes) to learn the relationships between the inputs and the outputs.
  • the goal of supervised learning is to learn a function that, given some training data, best approximates the relationship between the training inputs and outputs so that the ML model can implement the same relationships when given inputs to generate the corresponding outputs.
  • Unsupervised learning is the training of an ML algorithm using information that is neither classified nor labelled and allowing the algorithm to act on that information without guidance. Unsupervised learning is useful in exploratory analysis because it can automatically identify structure in data.
  • Common tasks for supervised learning are classification problems and regression problems.
  • Classification problems, also referred to as categorization problems, aim at classifying items into one of several category values.
  • Regression algorithms aim at quantifying some items (for example, by providing a score to the value of some input).
  • Some examples of commonly used supervised-ML algorithms are Logistic Regression (LR), Naive-Bayes, Random Forest (RF), neural networks (NN), deep neural networks (DNN), matrix factorization, and Support Vector Machines (SVM).
  • Examples of DNNs include a convolutional neural network (CNN), a recurrent neural network (RNN), a deep belief network (DBN), or a hybrid neural network comprising two or more neural network models of different types or different model configurations.
  • Some common tasks for unsupervised learning include clustering, representation learning, and density estimation.
  • Some examples of commonly used unsupervised learning algorithms are K-means clustering, principal component analysis, and autoencoders.
  • In some examples, the ML model 720 may be trained using federated learning (also known as collaborative learning), an ML technique that trains a model across multiple decentralized devices or servers holding local data samples, without exchanging those samples.
  • This approach stands in contrast to traditional centralized machine-learning techniques where all the local datasets are uploaded to one server, as well as to more classical decentralized approaches which often assume that local data samples are identically distributed.
  • Federated learning enables multiple actors to build a common, robust machine learning model without sharing data, thereby addressing critical issues such as data privacy, data security, data access rights, and access to heterogeneous data.
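As a sketch of what one aggregation round of such training could look like, the snippet below implements a FedAvg-style weighted average of locally trained model weights; the per-site weighting by dataset size is one common choice, and all names and shapes are assumptions.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Average per-layer weights from several sites, weighted by local
    dataset size; only weights leave each site, never raw endoscopy data."""
    total = sum(site_sizes)
    n_layers = len(site_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(site_weights, site_sizes))
        for k in range(n_layers)
    ]

# three hypothetical hospitals, each holding a tiny two-layer model
w_a = [np.ones((4, 4)), np.zeros(4)]
w_b = [2 * np.ones((4, 4)), np.ones(4)]
w_c = [3 * np.ones((4, 4)), np.ones(4)]
global_weights = federated_average([w_a, w_b, w_c], site_sizes=[100, 50, 25])
```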
  • the training of the ML model 720 may be performed continuously or periodically, or in near real time as additional procedure data are made available.
  • the training process involves algorithmically adjusting one or more ML model parameters (e.g., weights or bias at any particular layer of a neural network model), until the ML model being trained satisfies a specified training convergence criterion.
  • the ML model 720 may be trained with weighted square loss (for explicit feedback) or with binary cross-entropy loss (for implicit feedback).
  • Other training techniques such as deep factorization machine, wide and deep learning, deep structured semantic models, or autoencoder based recommender systems, may be used.
  • the trained ML model 720 can establish a correspondence between the endoscopic images 710 (or features extracted therefrom) and the recommended imaging modality and an associated probability indicating the likelihood of the recommended imaging modality being selected.
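For illustration, a single training step under the implicit-feedback case mentioned above might look like the sketch below, using binary cross-entropy with label 1.0 when the endoscopist keeps the recommended mode and 0.0 when they override it; the model shape and labeling scheme are assumptions.

```python
import torch
import torch.nn as nn

# Toy scorer over fused (image + auxiliary) features; dimensions are assumed.
model = nn.Sequential(nn.Linear(576, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy for implicit feedback
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features, accepted):
    """features: fused feature batch; accepted: 1.0 = mode kept, 0.0 = overridden."""
    optimizer.zero_grad()
    logits = model(features).squeeze(-1)
    loss = loss_fn(logits, accepted)
    loss.backward()
    optimizer.step()
    return loss.item()

batch_loss = train_step(torch.randn(8, 576), torch.randint(0, 2, (8,)).float())
```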
  • FIG. 7B illustrates an inference phase during, for example, withdrawal of the colonoscope, to determine a target or recommended imaging modality to inspect and diagnose an anomaly suspected under an existing imaging modality.
  • During the inference phase, a live endoscopic image 750 of a suspected anomaly (e.g., a polyp or lesion), along with auxiliary input 760 (including, for example, information about the endoscopist profile, patient clinical information, AI findings, equipment setup features, and endoscope location features), may be applied to the trained ML model 720 to produce an output 770, including a recommended imaging modality and an associated probability indicating the likelihood of the target or recommended imaging modality being selected.
  • the output 770 may be provided to the endoscopist or a robotic system for manual or robotically assisted adjustment of the imaging modality for enhanced anomaly inspection and diagnosis.
  • the training of the ML model 720 as shown in FIG. 7A may include predicting a probability associated with each of a plurality of supporting imaging modes, such as various lighting modes (e.g., NBI, RDI, WLI, TXI, etc.). Such probability represents a likelihood of a corresponding imaging mode being selected as the target or recommended imaging mode. The probability may be predicted based on the current system state (including current imaging mode being used to produce the endoscopic image being analyzed) conditioned on various inputs including the image features and various auxiliary input features.
  • Let the auxiliary input be represented as a feature vector $x$, comprising, for example, endoscopist profile $v$ and clinical information $w$, and let the features of the endoscopic image be represented as $u$ and the imaging or lighting mode as $l$.
  • the training of ML model 720 involves predicting an “affinity”: given the current video stream $u$ and the feature vector $x$, the likelihood that the endoscopist chooses light source $l$.
  • Such “affinity” can be represented by a conditional probability $r_{u,l\mid x}$ taking a value between 0 and 1.
  • a higher affinity value indicates a higher probability that the imaging or lighting mode $l$ would be selected as the recommended imaging mode.
  • the recommender system $f$ may be learned using the multi-input collaborative learning process as illustrated in FIG. 5.
  • the trained ML model 720 may predict the conditional probabilities $\hat{r}_{u,l\mid x}$ for each of a plurality of candidate imaging or lighting modes, and determine the target or recommended imaging mode based at least in part on the probabilities corresponding to the candidate imaging modes.
  • the target or recommended imaging mode can be selected to be the candidate imaging mode associated with the highest conditional probability $\hat{r}_{u,l\mid x}$.
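In code, the selection rule over the predicted affinities $\hat{r}_{u,l\mid x}$ might be sketched as follows, assuming a model (like the one sketched earlier) that outputs one logit per candidate mode:

```python
import torch

CANDIDATE_MODES = ["WLI", "NBI", "RDI", "TXI"]  # candidate lighting modes l

def recommend_mode(model, u, x):
    """Return the mode with the highest conditional probability r_{u,l|x},
    plus that probability; `model` maps (image features u, auxiliary
    features x) to one logit per candidate mode."""
    with torch.no_grad():
        probs = model(u, x).softmax(dim=-1).squeeze(0)  # estimated r_{u,l|x} per l
    best = int(probs.argmax())
    return CANDIDATE_MODES[best], float(probs[best])

# mode, p = recommend_mode(net, image_feats, aux_feats)
```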
  • FIG. 8 is a flow chart illustrating an example method 800 of determining or adjusting an endoscopic imaging mode during an endoscopy procedure in a patient, such as a colonoscopy procedure.
  • a target or recommended pathology-specific imaging modality can be determined using at least endoscopic images or video streams of the anatomy.
  • the imaging mode can help improve real-time inspection, recognition, and diagnosis of various pathologies.
  • the method 800 may be implemented in the endoscope system 300. Although the processes of the method 800 are drawn in one flow chart, they are not required to be performed in a particular order. In various examples, some of the processes can be performed in a different order than that illustrated herein.
  • images or video streams of distinct segments of the target anatomy may be obtained using an imaging system associated with an endoscope.
  • the images or video streams may be acquired when the imaging system is set to one of a plurality of available imaging modes.
  • An imaging mode or modality refers to one or more of a lighting modality, an optical magnification, or a viewing angle of the imaging device.
  • The lighting modality may include high-definition white light imaging (WLI), dye-based chromoendoscopy (CE), or virtual CE techniques such as narrow-band imaging (NBI), texture and color enhancement (TXI) imaging, and red dichromatic imaging (RDI), among others.
  • the optical magnification defines a zoom setting (e.g., zoom in or zoom out of a suspicious anomaly of the target anatomy).
  • The viewing angle of the imaging device, also referred to as a field of view, describes the angular extent of a given scene that is imaged by the imaging device.
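One plausible software representation of an imaging mode as just defined, with all field names, defaults, and ranges being illustrative assumptions, is a small record type combining lighting modality, magnification, and viewing angle:

```python
from dataclasses import dataclass
from enum import Enum

class Lighting(Enum):
    WLI = "high-definition white light imaging"
    NBI = "narrow-band imaging"
    TXI = "texture and color enhancement imaging"
    RDI = "red dichromatic imaging"
    DYE_CE = "dye-based chromoendoscopy"

@dataclass(frozen=True)
class ImagingMode:
    """An imaging mode: lighting modality, zoom setting, and field of view."""
    lighting: Lighting = Lighting.WLI
    magnification: float = 1.0      # zoom factor; 1.0 means no zoom
    viewing_angle_deg: int = 140    # angular extent of the imaged scene

default_mode = ImagingMode()
zoomed_nbi = ImagingMode(lighting=Lighting.NBI, magnification=2.5)
```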
  • the obtained images or video streams may be analyzed to generate endoscopic image or video features for each of the distinct segments of the anatomy.
  • image or video features include statistical features of pixel values or morphological features, such as corners, edges, blobs, curvatures, speeded up robust features (SURF), or scale-invariant feature transform (SIFT) features, among others.
  • the segment-specific images or video streams may be pre-processed, such as by filtering, resizing, orienting, or color or grayscale correction, and the endoscopic image features may be extracted from the pre-processed images or video streams.
  • the endoscopic image or video features may be post-processed to enhance feature quality, such as edge interpolation or extrapolation to produce continuous and smooth edges.
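A minimal OpenCV sketch of this pre-processing and feature-extraction pipeline follows; the kernel sizes, Canny thresholds, and the choice of SIFT (available in OpenCV 4.4+) over SURF are assumptions for illustration.

```python
import cv2

def extract_image_features(frame_bgr, size=(512, 512)):
    """Pre-process a frame (resize, filter, grayscale) and extract simple
    statistical and morphological features, including SIFT descriptors."""
    frame = cv2.resize(frame_bgr, size)             # resizing
    frame = cv2.GaussianBlur(frame, (5, 5), 0)      # filtering (denoise)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    edges = cv2.Canny(gray, 50, 150)                # edge map (morphological feature)
    return {
        "stats": {"mean": float(gray.mean()), "std": float(gray.std())},
        "n_keypoints": len(keypoints),
        "sift_descriptors": descriptors,
        "edges": edges,
    }
```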
  • a personalized target or recommended pathology-specific imaging modality may be automatically determined, such as using the imaging mode selector circuit 324 as shown in FIG. 3.
  • the target or recommended pathology-specific imaging modality may be estimated using information about an anomaly detected from any particular segment of the anatomy. An anomaly may be detected from the endoscopic image or video features using a template matching technique, or an artificial intelligence (AI) or machine learning (ML) based technique.
  • the anomaly detection includes detecting and/or recognizing one or more of a presence or absence, a type, a size, a shape, a location, or an amount of pathological tissue or anatomical structures, among other objects in an environment of the anatomy.
  • the anomaly being detected may include pathological tissue segments, such as mucosal abnormalities (polyps, inflammatory bowel diseases, Meckel’s diverticulum, lipoma, bleeding, vascularized mucosa, etc.), or obstructed mucosa (e.g., segments with poor bowel preparation, distended colon, etc.).
  • the target or recommended pathology-specific imaging modality may be determined further using substantially real-time location of the endoscope when the anomaly is detected.
  • the endoscope location may be determined using electromagnetic tracking, or anatomical landmark recognized from the images or video streams such as obtained during the insertion phase of endoscopy, or image or video stream features extracted therefrom.
  • the landmark may be recognized using an AI or ML approach, such as a trained ML model that has been trained to establish a correspondence between an endoscopic image or video stream or features extracted therefrom and a landmark recognition, as described above with respect to FIG. 3.
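As a toy illustration of landmark-based localization, the sketch below maps recognized landmarks to colon regions; the landmark names and the mapping are illustrative assumptions only, not clinical guidance.

```python
# Hypothetical landmark-to-region lookup used to coarsely localize the scope.
LANDMARK_TO_REGION = {
    "ileocecal_valve": "cecum",
    "appendiceal_orifice": "cecum",
    "hepatic_flexure": "ascending/transverse boundary",
    "splenic_flexure": "transverse/descending boundary",
    "anal_verge": "rectum",
}

class EndoscopeLocalizer:
    """Keeps a running estimate of the scope's location from landmark hits."""
    def __init__(self):
        self.region = "unknown"

    def update(self, landmark: str) -> str:
        # `landmark` would come from a trained landmark-recognition model.
        self.region = LANDMARK_TO_REGION.get(landmark, self.region)
        return self.region
```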
  • the target or recommended pathology-specific imaging modality may be estimated further using auxiliary input, including, by way of example and not limitation: images or video streams and clinical data from previous endoscopy procedures performed on the patient; patient information and medical history or pre-procedure imaging study data; a user (e.g., endoscopist) profile (including the user’s experience, working environment, affinity for new technology, preference for a certain endoscopy protocol, etc.); endoscope and equipment information; or “AI findings” including states of the AI or ML algorithms for detecting anomalies, landmarks, or other features or structural elements of interest in the target anatomy, as described above with respect to FIG. 4.
  • the target or recommended pathology-specific imaging modality may be estimated using an AI or ML approach.
  • the result of anomaly detection, the landmark detection and real-time endoscope location information, and the auxiliary input may be applied to a trained ML model.
  • the ML model may have been trained to establish a correspondence between the composite input and a target or recommended value of a target or recommended pathology-specific imaging modality, as described above with respect to FIGS. 7A-7B.
  • the training of the ML model may include predicting a probability associated with each of a plurality of supporting imaging modes, such as various lighting modes (e.g., NBI, RDI, WLI, TXI, etc.).
  • Such probability represents a likelihood of a corresponding imaging mode being selected as the target or recommended imaging mode.
  • the probability may be predicted based on the current system state (including current imaging mode being used to produce the endoscopic image being analyzed) conditioned on various inputs including the image features and various auxiliary input features.
  • the training of the ML model involves predicting an “affinity”: given the current video stream $u$ and the composite input feature vector (which may include, for example, endoscopist information, patient clinical information, AI findings, equipment setup features, and endoscope location features), the likelihood that the endoscopist chooses a particular light source.
  • Such “affinity” can be represented by a conditional probability that may be learned from prior implicit or explicit user feedback through a multi-input collaborative learning process as illustrated in FIG. 5.
  • the trained ML model may predict the conditional probabilities for each of a plurality of candidate imaging or lighting modes.
  • the target or recommended imaging mode may be selected to be the candidate imaging mode associated with the highest conditional probability.
  • the target or recommended pathology-specific imaging modality determined at 830 may be provided to a user or a process to facilitate manual or robotic withdrawal of the endoscope.
  • the target or recommended pathology-specific imaging modality may be displayed to the user to assist the endoscopist during an endoscopy procedure, such as during image-guided colonoscopy. If the target or recommended imaging modality is not the same as the current imaging modality being used for inspecting and detecting the anomaly, then at 844, an alert may be generated to notify or warn the endoscopist of the difference.
  • the notification or alert may be delivered through optical means on the diagnostic monitor, or via auditory means like a warning alarm.
  • a recommendation may be provided to the user to switch to the target or recommended imaging modality.
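A minimal sketch of this display-compare-alert flow appears below; the `ui` object and its methods are assumptions standing in for the user interface 330.

```python
def check_mode_and_alert(current_mode, recommended_mode, ui):
    """Display the recommendation; if it differs from the active mode,
    raise an alert (as at 844) and prompt the user to switch."""
    ui.display(f"Recommended imaging mode: {recommended_mode}")
    if recommended_mode != current_mode:
        ui.alert(  # optical and/or auditory notification
            f"Current mode {current_mode} differs from recommended {recommended_mode}",
            audible=True,
        )
        ui.prompt(f"Switch to {recommended_mode}?")  # recommendation to switch
```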
  • FIG. 9 illustrates generally a block diagram of an example machine 900 upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform. Portions of this description may apply to the computing framework of various portions of the endoscope system 300.
  • the machine 900 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 900 may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine 900 may act as a peer machine in peer-to-peer (P2P) (or other distributed) network environment.
  • the machine 900 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • The term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.
  • Examples, as described herein, may include, or may operate by, logic or a number of components, or mechanisms.
  • Circuit sets are a collection of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuit set membership may be flexible over time and underlying hardware variability. Circuit sets include members that may, alone or in combination, perform specified operations when operating. In an example, hardware of the circuit set may be immutably designed to carry out a specific operation (e.g., hardwired).
  • the hardware of the circuit set may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation.
  • the instructions enable embedded hardware (e.g., the execution units or a loading mechanism) to create members of the circuit set in hardware via the variable connections to carry out portions of the specific operation when in operation.
  • the computer readable medium is communicatively connected to the other components of the circuit set member when the device is operating.
  • any of the physical components may be used in more than one member of more than one circuit set.
  • execution units may be used in a first circuit of a first circuit set at one point in time and reused by a second circuit in the first circuit set, or by a third circuit in a second circuit set at a different time.
  • Machine (e.g., computer system) 900 may include a hardware processor 902 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 904 and a static memory 906, some or all of which may communicate with each other via an interlink (e.g., bus) 908.
  • the machine 900 may further include a display unit 910 (e.g., a raster display, vector display, holographic display, etc.), an alphanumeric input device 912 (e.g., a keyboard), and a user interface (UI) navigation device 914 (e.g., a mouse).
  • the display unit 910, input device 912 and UI navigation device 914 may be a touch screen display.
  • the machine 900 may additionally include a storage device (e.g., drive unit) 916, a signal generation device 918 (e.g., a speaker), a network interface device 920, and one or more sensors 921, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensors.
  • the machine 900 may include an output controller 928, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
  • the storage device 916 may include a machine readable medium 922 on which is stored one or more sets of data structures or instructions 924 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein.
  • the instructions 924 may also reside, completely or at least partially, within the main memory 904, within static memory 906, or within the hardware processor 902 during execution thereof by the machine 900.
  • one or any combination of the hardware processor 902, the main memory 904, the static memory 906, or the storage device 916 may constitute machine readable media.
  • While the machine-readable medium 922 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) configured to store the one or more instructions 924.
  • The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 900 and that causes the machine 900 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding, or carrying data structures used by or associated with such instructions.
  • Nonlimiting machine-readable medium examples may include solid-state memories, and optical and magnetic media.
  • a massed machine-readable medium comprises a machine-readable medium with a plurality of particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals.
  • Specific examples include non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as WiFi®, IEEE 802.16 family of standards known as WiMax®), IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others.
  • the network interface device 920 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communication network 926.
  • the network interface device 920 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques.
  • The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 900, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Optics & Photonics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Endoscopes (AREA)

Abstract

Systems, devices, and methods are described for determining or adjusting an imaging mode for use in inspecting tissue or foreign objects during an endoscopy procedure. An example endoscope system comprises an endoscope and a controller circuit. The endoscope includes an imaging system for obtaining images or video streams of a target anatomy during an endoscopy procedure. The controller circuit can generate, from the obtained images or video streams, endoscopic image or video features characterizing an anomaly in the target anatomy, and determine a personalized pathology-specific imaging mode to improve the discernability of the anomaly from the images or video streams. The recommended imaging mode may be determined using a trained machine learning model. The recommended imaging mode may be provided to a user or a process to facilitate manual or automatic adjustment of the endoscopic imaging mode during the endoscopy procedure.
PCT/US2024/046168 2023-09-12 2024-09-11 Sélection de mode d'imagerie à base d'ia en endoscopie Pending WO2025059157A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363582029P 2023-09-12 2023-09-12
US63/582,029 2023-09-12

Publications (1)

Publication Number Publication Date
WO2025059157A1 (fr) 2025-03-20

Family

ID=92895416

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/046168 Pending WO2025059157A1 (fr) 2023-09-12 2024-09-11 Sélection de mode d'imagerie à base d'ia en endoscopie

Country Status (1)

Country Link
WO (1) WO2025059157A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140051986A1 (en) * 2012-08-14 2014-02-20 Intuitive Surgical Operations, Inc. Systems and Methods for Registration of Multiple Vision Systems
US20210145248A1 (en) * 2018-07-10 2021-05-20 Olympus Corporation Endoscope apparatus, operating method of endoscope apparatus, and information storage medium
US20230084030A1 (en) * 2021-09-15 2023-03-16 Intuitive Surgical Operations, Inc. Devices and methods for spatially controllable illumination
US20230118522A1 (en) * 2021-10-20 2023-04-20 Nvidia Corporation Maintaining neighboring contextual awareness with zoom

Similar Documents

Publication Publication Date Title
US12478433B2 (en) Image guidance during cannulation
US20220254017A1 (en) Systems and methods for video-based positioning and navigation in gastroenterological procedures
US20220296081A1 (en) Method for real-time detection of objects, structures or patterns in a video, an associated system and an associated computer readable medium
CN113544743A (zh) 内窥镜用处理器、程序、信息处理方法和信息处理装置
US20230117954A1 (en) Automatic positioning and force adjustment in endoscopy
JP7143504B2 (ja) 医用画像処理装置、プロセッサ装置、内視鏡システム、医用画像処理装置の作動方法及びプログラム
WO2017175282A1 (fr) Procédé d'apprentissage, dispositif de reconnaissance d'image et programme
US11423318B2 (en) System and methods for aggregating features in video frames to improve accuracy of AI detection algorithms
US20240037733A1 (en) Control method, apparatus and program for system for determining lesion obtained via real-time image
WO2022251112A1 (fr) Identification de phase d'interventions d'endoscopie
US20250288186A1 (en) Ai-based endoscopic tissue acquisition planning
US12376733B2 (en) Systems, apparatuses, and methods for endoscopy
JP2023119573A (ja) コンピュータを利用した支援システム及び方法
WO2025059157A1 (fr) Sélection de mode d'imagerie à base d'ia en endoscopie
US20230122179A1 (en) Procedure guidance for safety
WO2025059143A1 (fr) Commande de retrait d'endoscope guidé par l'image
US20240197403A1 (en) Endoscopic ultrasound guided tissue acquisition
US20230119097A1 (en) Endoluminal transhepatic access procedure
US20230363628A1 (en) Wire puncture of stricture for pancreaticobiliary access
US20240197163A1 (en) Endoscopy in reversibly altered anatomy
US20250241716A1 (en) Endoscopy video timeline interest level prediction
WO2024186443A1 (fr) Système de diagnostic assisté par ordinateur
US20250268455A1 (en) Endoscopic device, control method and computer program for the endoscopic device
EP4606292A1 (fr) Dispositif d'endoscope, procede de commande et programme informatique pour le dispositif d'endoscope
Liedlgruber et al. Predicting pathology in medical decision support systems in endoscopy of the gastrointestinal tract

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24776787

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)