WO2024134467A1 - Lobar segmentation of lung and measurement of nodule distance to lobe boundary
- Publication number: WO2024134467A1 (PCT/IB2023/062891)
- Authority: WIPO (PCT)
- Prior art keywords: image, distance, nodule, boundary, lung
- Prior art date
- Legal status: Ceased (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/000094—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope extracting biological structures
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/000096—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00064—Constructional details of the endoscope body
- A61B1/00071—Insertion part of the endoscope body
- A61B1/0008—Insertion part of the endoscope body characterised by distal tip features
- A61B1/00097—Sensors
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00147—Holding or positioning arrangements
- A61B1/00149—Holding or positioning arrangements using articulated arms
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
- G06T2207/30064—Lung nodule
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.
Definitions
- The present disclosure relates to the field of medical procedures.
- Various medical procedures involve the use of one or more machine-generated images, which may be utilized to provide visualizations that confer certain benefits during a medical procedure. Certain operational processes can be guided based at least in part on the machine-generated images.
- Described herein are systems, devices, and methods to facilitate the identification and segmentation of various anatomical features based on images of such features obtained using a scope device or other medical instrument.
- Such feature identification and/or segmentation can facilitate operating on certain anatomical features in connection with a medical procedure, such as bronchoscopy, lung biopsy, lung nodule treatment, or other procedures accessing the respiratory system, for example.
- The techniques described herein relate to a computer-implemented method including receiving an image of an anatomical feature; segmenting the image into a plurality of portions based on a trained image segmentation neural network, wherein each portion of the plurality of portions is assigned a portion label; determining a nodule location associated with a nodule in the anatomical feature; assigning the nodule to a portion of the plurality of portions in the image based on the nodule location; computing at least one distance metric between a first point on a boundary of the portion and a second point away from the boundary of the portion; and generating a distance-coded image based on the distance metric, wherein the distance-coded image indicates distances from a boundary of the portion to points exterior to the boundary based on a color scheme.
- The techniques described herein relate to a method, wherein the determining a nodule location associated with a nodule in the anatomical feature includes receiving an input from a user that identifies the nodule location.
- The techniques described herein relate to a method, wherein the determining a nodule location associated with a nodule in the anatomical feature includes analyzing the image to identify the nodule location.
- The techniques described herein relate to a method, further including generating a portion image based on the portion label assigned to the portion, the portion image excluding other portions of the plurality of portions, wherein the generating a distance-coded image involves assigning colors to the portion image based on the color scheme.
- The techniques described herein relate to a method, wherein the generating a distance-coded image includes assigning colors to pixels in the distance-coded image based on each shortest distance between a first pixel away from the boundary and a second pixel on the boundary of the portion in the portion image.
- The techniques described herein relate to a method, further including providing a distance from a location to the boundary of the portion based on the distance-coded image.
- The techniques described herein relate to a method, wherein the providing a distance from a location to the boundary of the portion based on the distance-coded image includes determining a distance from the location to the boundary of the portion, wherein the distance is based on at least one of Euclidean coordinates or polar coordinates.
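- As one hedged illustration of the distance computation described above (a minimal sketch, not the patent's reference implementation), the shortest Euclidean distance from a query location to the boundary of a binary lobe mask can be computed directly in pixel units; the names `lobe_mask` and `query_point` are illustrative assumptions, and physical distances would additionally require scaling by the image spacing.

```python
# Minimal sketch (assumed helper, not from the patent): shortest Euclidean
# distance, in pixels, from a query location to the boundary of a binary lobe mask.
import numpy as np
from scipy.ndimage import binary_erosion

def distance_to_lobe_boundary(lobe_mask: np.ndarray, query_point: tuple) -> float:
    """Return the shortest Euclidean distance from `query_point` (row, col)
    to the boundary of the binary `lobe_mask`."""
    lobe_mask = lobe_mask.astype(bool)
    # Boundary pixels are lobe pixels removed by a one-pixel erosion.
    boundary = lobe_mask & ~binary_erosion(lobe_mask)
    boundary_coords = np.argwhere(boundary)
    if boundary_coords.size == 0:
        raise ValueError("Mask has no boundary (empty or full image).")
    deltas = boundary_coords - np.asarray(query_point)
    return float(np.linalg.norm(deltas, axis=1).min())
```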
- The techniques described herein relate to a method, wherein the generating a portion image based on a portion label assigned to the portion further includes assigning the points exterior to the boundary of the portion with a single color.
- The techniques described herein relate to a method, further including assigning the portion with a different color in the portion image to generate a binary image, wherein the anatomical feature is a lung, the portion is a lung lobe, and the plurality of portions is a plurality of lung lobes.
- The techniques described herein relate to a method, wherein the determining a nodule location associated with a nodule in the anatomical feature includes determining a nodule centroid; and the assigning the nodule to a portion of the plurality of portions in the image includes determining the portion label associated with the nodule centroid.
- The techniques described herein relate to a method, wherein the assigning the nodule to a portion of the plurality of portions in the image further includes determining a pixel value associated with the nodule centroid.
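- A minimal sketch of the centroid-based lobe assignment described above, assuming the segmentation network produces an integer label map (0 for background, 1-5 for lobes) and that a binary nodule mask is available; the helper name `assign_nodule_to_lobe` is hypothetical:

```python
# Illustrative sketch (names are assumptions): assign a nodule to a lobe by
# reading the segmentation label under the nodule's centroid.
import numpy as np

def assign_nodule_to_lobe(lobe_label_map: np.ndarray, nodule_mask: np.ndarray) -> int:
    """`lobe_label_map` holds one integer label per pixel (0 = background,
    1..5 = lung lobes); `nodule_mask` is a binary mask of the nodule."""
    coords = np.argwhere(nodule_mask)                    # pixel coordinates of the nodule
    centroid = coords.mean(axis=0).round().astype(int)   # nodule centroid (row, col, ...)
    return int(lobe_label_map[tuple(centroid)])          # portion label at the centroid
```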
- The techniques described herein relate to a method, wherein the generating a distance-coded image based on the distance metric includes passing the portion image through a distance map filter.
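- One common way to realize such a distance map filter (an assumption; the patent does not mandate a specific library) is the Euclidean distance transform, which assigns every pixel its distance to the nearest lobe-boundary pixel:

```python
# Sketch of a "distance map filter" using the Euclidean distance transform:
# every pixel receives its distance to the nearest pixel on the lobe boundary.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def boundary_distance_map(lobe_mask: np.ndarray, spacing=(1.0, 1.0)) -> np.ndarray:
    lobe_mask = lobe_mask.astype(bool)
    boundary = lobe_mask & ~binary_erosion(lobe_mask)    # one-pixel-thick lobe boundary
    # distance_transform_edt measures distance to the nearest zero-valued element,
    # so invert the boundary: each pixel gets its distance to the closest boundary pixel.
    return distance_transform_edt(~boundary, sampling=spacing)
```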
- The techniques described herein relate to a method, wherein the color scheme includes grayscale colors.
- The techniques described herein relate to a method, wherein the trained image segmentation neural network identifies one or more pleura or fissures, and the portion labels are assigned to the plurality of portions based on the pleura or fissures.
- The techniques described herein relate to a method, wherein the trained image segmentation neural network is a UNet convolutional neural network.
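- For illustration only, a compact U-Net-style segmentation network can be sketched in PyTorch; the depth, channel counts, and six-class output (background plus five lobes) below are assumptions rather than the patent's configuration:

```python
# Hedged sketch of a small U-Net-style segmentation network in PyTorch.
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 6):
        super().__init__()
        self.enc1 = double_conv(in_channels, 32)
        self.enc2 = double_conv(32, 64)
        self.bottleneck = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = double_conv(128, 64)   # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = double_conv(64, 32)    # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                      # full-resolution features
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        b = self.bottleneck(self.pool(e2))     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                   # per-pixel class logits

# Example: a 256x256 single-channel CT slice -> a (1, 6, 256, 256) logit map.
# logits = SmallUNet()(torch.randn(1, 1, 256, 256))
```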
- The techniques described herein relate to a system including a processor; and a memory storing computer-executable instructions to cause the processor to perform steps including segmenting an anatomical feature image into a plurality of portions based on a neural network trained to label at least a portion of the anatomical feature image; selecting a portion of the plurality of the portions based on coordinates of a nodule location corresponding to a pixel associated with the portion; mapping pixel distances to a range of values; and assigning a value to a first pixel away from a boundary of the portion based on (i) a distance between the first pixel and a second pixel on the boundary of the portion and (ii) the range of values.
- The techniques described herein relate to a system, further including determining the second pixel based on the second pixel having the shortest distance to the first pixel among pixels on the boundary of the portion.
- The techniques described herein relate to a system, wherein the anatomical feature is a lung, the portion is a lung lobe, the plurality of portions is a plurality of lung lobes, and the mapping pixel distances to the range of values includes mapping the pixel distances to a range of grayscale values.
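- A hedged sketch of mapping pixel distances to a range of grayscale values, assuming a linear scaling into 8-bit values (the particular range and scaling are not specified by the claims):

```python
# Linear mapping of a pixel-distance map to 8-bit grayscale (an assumption):
# 0 at the boundary, 255 at the largest distance considered.
import numpy as np

def distances_to_grayscale(distance_map: np.ndarray, max_distance=None) -> np.ndarray:
    """Map distances to grayscale values: 0 at the boundary and 255 at
    `max_distance` (or at the largest distance in the image if not given)."""
    if max_distance is None:
        max_distance = max(float(distance_map.max()), 1e-6)
    scaled = np.clip(distance_map / max_distance, 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)
```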
- The techniques described herein relate to a medical system including an endoscope having a position sensor associated with a distal end thereof; a robotic medical system including a plurality of articulating arms; and control circuitry communicatively coupled to the endoscope and the robotic medical system, the control circuitry configured to receive a distance-coded image associated with a treatment site, wherein colors in the distance-coded image are assigned values based on distances of a pixel to a boundary of a lobe encompassing the treatment site; determine that the distal end is less than a threshold distance from the boundary based on reference to colors of the distance-coded image; and generate a notification that there is a risk of the distal end contacting the boundary.
- The techniques described herein relate to a medical system, wherein the control circuitry is further configured to limit control of the endoscope based on the risk of the distal end contacting the boundary.
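- The proximity check described for the control circuitry can be sketched as follows, assuming the distance-coded value under the distal tip's image location is converted back to millimeters; the conversion factor and the notification hook are illustrative assumptions, not the patent's implementation:

```python
# Hedged sketch of the proximity check: read the distance-coded value at the
# scope tip's image location and warn when it falls below a threshold.
import numpy as np

def check_tip_proximity(distance_coded: np.ndarray,
                        tip_pixel: tuple,
                        threshold_mm: float,
                        mm_per_gray_level: float) -> bool:
    """Return True (and emit a warning) when the distal tip is closer to the
    lobe boundary than `threshold_mm`, based on the distance-coded image."""
    tip_distance_mm = float(distance_coded[tip_pixel]) * mm_per_gray_level
    if tip_distance_mm < threshold_mm:
        print(f"WARNING: distal tip is {tip_distance_mm:.1f} mm from the lobe boundary "
              f"(threshold {threshold_mm} mm); risk of contacting the boundary.")
        return True
    return False
```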
- Figure 1 illustrates an embodiment of a robotic medical system in accordance with one or more embodiments.
- Figure 2 illustrates example devices that may be implemented in the medical system of Figure 1 in accordance with one or more embodiments.
- Figure 3 illustrates a bronchoscope disposed in portions of the respiratory system in accordance with one or more embodiments.
- Figure 4 illustrates lobar segments of the respiratory system of a patient.
- Figure 5, represented in parts 5-1 and 5-2, is a flow diagram illustrating a process for providing a distance-coded image in accordance with one or more embodiments.
- Figure 6, represented in parts 6-1 and 6-2, shows certain images corresponding to various blocks, states, and/or operations associated with the process of Figure 5 in accordance with one or more embodiments.
- Figure 7 is a flow diagram illustrating a process for assigning a nodule to a lobe in accordance with one or more embodiments.
- Figure 8 shows certain images corresponding to various blocks, states, and/or operations associated with the process of Figure 7 in accordance with one or more embodiments.
- Figure 9 illustrates a lobar segmentation framework in accordance with one or more embodiments.
- Figure 10 illustrates an example distance-coded image generation architecture in accordance with one or more embodiments.
- Certain standard anatomical terms of location are used herein to refer to the anatomy of animals, and namely humans, with respect to the preferred embodiments.
- Certain spatially relative terms, such as “outer,” “inner,” “upper,” “lower,” “below,” “above,” “vertical,” “horizontal,” “top,” “bottom,” and similar terms, are used herein to describe a spatial relationship of one device/element or anatomical structure to another device/element or anatomical structure. It is understood that these terms are used herein for ease of description to describe the positional relationship between element(s)/structure(s), as illustrated in the drawings.
- Spatially relative terms are intended to encompass different orientations of the element(s)/structure(s), in use or operation, in addition to the orientations depicted in the drawings.
- For example, an element/structure described as “above” another element/structure may represent a position that is below or beside such other element/structure with respect to alternate orientations of the subject patient or element/structure, and vice-versa.
- The present disclosure relates to systems, devices, and methods for identifying and segmenting target anatomical features within a body cavity/area of a patient to aid in certain medical procedures.
- Although described in the context of endoscopy procedures, such as bronchoscopy procedures, the anatomical feature identification and segmentation concepts disclosed herein are applicable to any suitable medical procedure.
- Similarly, although robotically enabled medical procedures are disclosed herein in the context of lung nodule treatments, the anatomical feature identification and segmentation concepts disclosed herein may be implemented in, or configured for implementation in, any suitable or desirable anatomy.
- A typical lung generally includes fissures and pleura that segment or otherwise divide the lung into lobes.
- The separated lobes can help to provide redundancy of respiratory functions and to prevent the spread of disease under normal circumstances.
- An accidental puncturing of a boundary of a lobe formed by such fissures and pleura can result in pneumothorax, which is often associated with a greater likelihood of complications and slower recovery.
- Accurate, sometimes real-time, measurements of distances from the boundary of the lobe to a target anatomical feature, such as a nodule during a lung nodule treatment procedure, can significantly reduce pneumothorax risk.
- As used herein, “CT” refers to computed tomography and “MPR” refers to 2D slice multiplanar reformation.
- Disclosed herein are anatomical feature identification and segmentation concepts that may automatically perform such measurements and provide a distance-coded image that readily encapsulates the distance information within the image as visual information. All, or substantially all, of a process involved in generation of the distance-coded image may be performed by a computing system that utilizes some form of artificial intelligence framework, such as a deep learning architecture.
- The framework can employ a neural network trained to identify fissures and pleura in received CT lung images and semantically segment the images into lobes based on the fissures and pleura. Then, the architecture can compute distances from a boundary of a lobe to a given pixel in the CT lung images and generate the distance-coded image. Because the distance-coded image visually encapsulates the distance information, clinicians may do away with manual distance measurements and, instead, focus more of their time on operational planning and execution.
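- Tying the framework together, a hedged end-to-end sketch (with an assumed trained lobe-segmentation model and assumed array conventions) could look like the following; it is a sketch of the described flow, not the patent's implementation:

```python
# End-to-end sketch of the described framework under stated assumptions:
# `segmentation_model` stands in for a trained lobe-segmentation network (such
# as a U-Net), and the scaling to 8-bit grayscale is illustrative.
import numpy as np
import torch
from scipy.ndimage import binary_erosion, distance_transform_edt

def distance_coded_image(ct_slice: np.ndarray,
                         nodule_mask: np.ndarray,
                         segmentation_model: torch.nn.Module) -> np.ndarray:
    # 1. Semantic segmentation: one lobe label per pixel (0 = background, 1..5 = lobes).
    with torch.no_grad():
        logits = segmentation_model(torch.from_numpy(ct_slice).float()[None, None])
    lobe_labels = logits.argmax(dim=1)[0].cpu().numpy()

    # 2. Assign the nodule to the lobe whose label lies under the nodule centroid.
    centroid = np.argwhere(nodule_mask).mean(axis=0).round().astype(int)
    lobe_mask = lobe_labels == int(lobe_labels[tuple(centroid)])

    # 3. Portion image -> one-pixel-thick lobe boundary.
    boundary = lobe_mask & ~binary_erosion(lobe_mask)

    # 4. Distance map filter: each pixel's distance to the nearest boundary pixel,
    #    linearly rescaled to an 8-bit grayscale distance-coded image.
    distances = distance_transform_edt(~boundary)
    return (255 * distances / max(float(distances.max()), 1e-6)).astype(np.uint8)
```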
- Medical instrument(s), such as robotically controlled medical instrument(s) (e.g., endoscope(s), access sheath(s), or working instrument(s), such as needle instruments), is/are inserted into a patient’s body.
- The instrument(s) may be positioned within a luminal network or other anatomy of the patient.
- The term “luminal network” refers to any cavity structure within the body, whether comprising a plurality of lumens or branches (e.g., a plurality of branched lumens, as in the lungs or blood vessels) or a single lumen or branch (e.g., within the gastrointestinal tract).
- The instrument(s) may be moved (e.g., advanced, retracted, navigated, guided, driven, etc.) through the luminal network to one or more areas of interest, such as the location of a lesion (e.g., tumor, nodule, etc.).
- FIG. 1 illustrates an example medical system 10 for performing various medical procedures in accordance with aspects of the present disclosure.
- the medical system 10 includes a robotic system 11 configured to engage with and/or control a medical instrument 32 to perform a procedure on a patient 13.
- the medical system 10 also includes a control system 50 configured to interface with the robotic system 11, provide information regarding the procedure, and/or perform a variety of other operations.
- the control system 50 can include a display 42 to present certain information to assist the physician 5.
- the medical system 10 can include a table 15 configured to hold the patient 13.
- the medical system 10 may further include an electromagnetic (EM) field generator (not shown; see Figure 3), which may be held by one or more of the robotic arms 12 of the robotic system 11, or may be a stand-alone device.
- the robotically enabled medical system 10 may be configured in a variety of ways depending on the particular procedure.
- the medical system 10 may utilize the robotic arms 12 to deliver a medical instrument, such as a steerable endoscope 32 (e.g., bronchoscope), through a natural orifice access point (e.g., the mouth 9 of the patient 13, positioned on a table 15 in the present example) to deliver diagnostic and/or therapeutic tools/instruments.
- the robotic system 11 may be positioned proximate to the patient’s upper torso in order to provide access to the access point.
- the robotic arms 12 may be actuated to position the steerable endoscope 32 relative to the access point.
- the robotic arms 12 may insert the steerable endoscope 32 into the patient robotically, manually, or a combination thereof.
- the steerable endoscope 32 may be advanced within an outer access sheath 40, which may be coupled to, and/or controlled by, one or more robotic arms in some implementations.
- the steerable endoscope 32 and sheath 40 may be each coupled to a separate instrument driver/manipulator from the set of instrument drivers/manipulators 28, each instrument driver coupled to the distal end of a respective robotic arm 12.
- The instrument drivers 28 can be configured in a linear arrangement that facilitates coaxial alignment of the steerable endoscope 32 with the sheath 40 along a “virtual rail” 33 that may be repositioned in space by manipulating the one or more robotic arms 12 into different angles and/or positions. Translation of the instrument drivers 28 along the virtual rail 33 can telescope the steerable endoscope 32 relative to the outer sheath 40 or advance or retract the steerable endoscope 32 from the patient.
- the medical system 10 may be used to perform a locally targeted procedure, such as a lung nodule treatment with a bronchoscope.
- a bronchoscope includes an endoscope at its distal end configured to enable visualization of the respiratory tract.
- The terms “scope” and “endoscope” are used herein according to their broad and ordinary meanings, and may refer to any type of elongate medical instrument having image generating, viewing, and/or capturing functionality and configured to be introduced into any type of organ, cavity, lumen, chamber, or space of a body.
- References herein to scopes or endoscopes may refer to a bronchoscope, cystoscope, nephroscope, arthroscope, colonoscope, laparoscope, borescope, or the like.
- Scopes/endoscopes in some instances, may comprise a rigid or flexible tube, and may be dimensioned to be passed within an outer sheath, catheter, introducer, or other lumen-type device, or may be used without such devices.
- the patient 13 is undergoing a bronchoscopy.
- the endoscope 32 may be robotically navigated within the respiratory system of the patient 13.
- the endoscope 32 may be manipulated to telescopically extend the steerable endoscope 32 from the outer access sheath 40 to obtain enhanced articulation and greater bend radius.
- the use of separate instrument drivers 28 allows the steerable endoscope 32 and sheath 40 to be driven independently of each other.
- the respiratory system comprises certain passages, vessels, organs, and muscles that aid the body in the exchange of gases between the air and blood, and between the blood and the cells of the body.
- the respiratory system includes the upper respiratory tract, which comprises the nose/nasal cavity, the pharynx (i.e., throat), and the larynx (i.e., voice box).
- The respiratory system further includes the lower respiratory tract, which comprises the trachea 6, the lungs 4 (4r and 4l), and the various segments of the bronchial tree, including the alveoli and alveolar ducts, which comprise clusters of small air sacs that are responsible for gas exchange between the lungs and the pulmonary blood vessels.
- the bronchial tree is an example luminal network in which robotically controlled instruments may be navigated and utilized in accordance with the inventive solutions presented here.
- luminal networks including a bronchial network of airways (e.g., lumens, branches) of a patient’s lung
- some embodiments of the present disclosure can be implemented in other types of luminal networks, such as renal networks, cardiovascular networks (e.g., arteries and veins), gastrointestinal tracts, urinary tracts, etc.
- the luminal network comprises a three-dimensional structure;
- Figure 1 represents the luminal network as a two-dimensional structure solely for ease of illustration.
- the organs of the lower respiratory tract are located inside the chest cavity, which is surrounded by the sternum (i.e., chest bone) and ribcage on the front and the vertebrae (i.e., backbones) on the back, which collectively protect the lungs and other organs in the chest.
- the trachea 6 may provide the main entryway to the lungs 4.
- The bronchi 7 branch from the trachea 6 into each of the lungs 4, the left 4l and right 4r lung.
- The trachea 6 is located just below the larynx (not shown) and provides the main airway to the lungs 4.
- The left 4l and right 4r lungs are responsible for providing oxygen to capillaries and exhaling carbon dioxide.
- the bronchi 7 branch from the trachea 6 into each lung 4 and create the network of intricate passages that supply the lungs 4 with air.
- the diaphragm is the main respiratory muscle that contracts and relaxes to allow air into the lungs.
- the trachea 6 is a tube that carries the air in and out of the lungs 4.
- Each lung 4 has associated therewith a tube 7 called a bronchus that connects to the trachea.
- the trachea and bronchi form the bronchial tree.
- the bronchial tree includes primary bronchi 71, which branch off into smaller secondary 78 and tertiary 75 bronchi, and terminate in even smaller tubes called bronchioles 77.
- Each bronchiole tube is coupled to a cluster of alveoli (not shown).
- the robotic system 11 can be coupled to any component of the medical system 10, such as the control system 50, the table 15, the EM field generator (not shown; see Figure 3), the steerable endoscope 32, and/or an operational instrument (e.g., needle).
- the robotic system 11 is communicatively coupled to the control system 50.
- the robotic system 11 may be configured to receive a control signal from the control system 50 to perform an operation, such as to position a robotic arm 12 in a particular manner, manipulate the steerable endoscope 32, and so on.
- the robotic system 11 can control a component of the robotic system 11 to perform the operation.
- the robotic system 11 is configured to receive images and/or image data from the steerable endoscope 32 representing internal anatomy of the patient 13, namely the respiratory system with respect to the particular depiction of Figure 1, and/or send images/image data to the control system 50 (which can then be displayed on the display 42 or other output device). Furthermore, in some embodiments, the robotic system 11 is coupled to a component of the medical system 10, such as the control system 50, in such a manner as to allow for fluids, optics, power, or the like to be received therefrom. Additional example details of a robotic system are discussed in further detail below in reference to Figure 2.
- the control system 50 can be configured to provide various functionality to assist in performing a medical procedure.
- the control system 50 can be coupled to the robotic system 11 and operate in cooperation with the robotic system 11 to perform a medical procedure on the patient 13.
- the control system 50 can communicate with the robotic system 11 via a wireless or wired connection (e.g., to control the robotic system 11 and/or the steerable endoscope 32, receive images captured by the steerable endoscope 32, etc.), provide fluids to the robotic system 11 via one or more fluid channels, provide power to the robotic system 11 via one or more electrical connections, provide optics to the robotic system 11 via one or more optical fibers or other components, and so on.
- control system 50 can communicate with a needle and/or endoscope to receive position data therefrom. Moreover, in some embodiments, the control system 50 can communicate with the table 15 to position the table 15 in a particular orientation or otherwise control the table 15. Further, in some embodiments, the control system 50 can communicate with the EM field generator (not shown) to control generation of an EM field in an area around the patient 13.
- the control system 50 can include various I/O devices configured to assist the physician 5 or others in performing a medical procedure.
- the control system 50 can include certain input/output (I/O) components configured to allow for user input to control the steerable endoscope 32, such as to navigate the steerable endoscope 32 within the patient 13.
- Joystick-, button-, and/or other-type user-input controls 312 may be used to control articulation of the robotically-controlled medical instruments, including the steerable endoscope 32, sheath 40, and working channel instrument assembly (e.g., needle assembly; not shown; may be controlled by a robotic driver/end effector associated with the robotic system 11).
- the input(s) received from the user control(s) 312 may be mapped to one or more of the instrument drivers 28 at a given time.
- the physician 5 can provide input to the control system and/or robotic system, wherein in response, control signals can be sent to the robotic system 11 to manipulate the steerable endoscope 32.
- the control system 50 can include the display 42 to provide various information regarding a procedure.
- the display 42 can provide information regarding the steerable endoscope 32.
- The control system 50 can receive real-time images that are captured by the steerable endoscope 32 and display the real-time images via the display 42.
- control system 50 can receive signals (e.g., analog, digital, electrical, acoustic/sonic, pneumatic, tactile, hydraulic, etc.) from a medical monitor and/or a sensor associated with the patient 13, and the display 42 can present information regarding the health or environment of the patient 13.
- Such information can include information that is displayed via a medical monitor including, for example, a heart rate (e.g., ECG, HRV, etc.), blood pressure/rate, muscle bio-signals (e.g., EMG), body temperature, blood oxygen saturation (e.g., SpO2), CO2, brainwaves (e.g., EEG), environmental and/or local or core body temperature, and so on.
- control system 50 can include various components (sometimes referred to as “subsystems”).
- the control system 50 can include control electronics/circuitry, as well as one or more power sources, pneumatic devices, optical sources, actuators, data storage devices, and/or communication interfaces.
- The control system 50 includes control circuitry comprising a computer-based control system that is configured to store executable instructions that, when executed, cause various operations to be implemented.
- the control system 50 is movable, while in other embodiments, the control system 50 is a substantially stationary system.
- Although described as part of the control system 50, any of such functionality and/or components can be integrated into and/or performed by other systems and/or devices, such as the robotic system 11 or the table 15, for example. Components of an example robotic system are discussed in further detail below in reference to Figure 2.
- The medical system 10 can provide a variety of benefits, such as providing guidance to assist a physician in performing a procedure (e.g., instrument tracking, instrument alignment information, etc.), enabling a physician to perform a procedure from an ergonomic position without the need for awkward arm motions and/or positions, enabling a single physician to perform a procedure with one or more medical instruments, avoiding radiation exposure (e.g., associated with fluoroscopy techniques), enabling a procedure to be performed in a single operative setting, providing continuous suction to remove an object more efficiently (e.g., to remove a kidney stone), and so on.
- The medical system 10 can provide guidance information to assist a physician in using various medical instruments to access a target anatomical feature while minimizing bleeding and/or damage to anatomy (e.g., pneumothorax, critical organs, blood vessels, etc.). Further, the medical system 10 can provide non-radiation-based navigational and/or localization techniques to reduce physician and patient exposure to radiation and/or reduce the amount of equipment in the operating room. Moreover, the medical system 10 can provide functionality that is distributed between at least the control system 50 and the robotic system 11, which may be independently movable. Such distribution of functionality and/or mobility can enable the control system 50 and/or the robotic system 11 to be placed at locations that are optimal for a particular medical procedure, which can maximize working area around the patient and/or provide an optimized location for a physician to perform a procedure.
- the various components of the medical system 10 can be communicatively coupled to each other over a network, which can include a wireless and/or wired network.
- Example networks include one or more personal area networks (PANs), local area networks (LANs), wide area networks (WANs), Internet area networks (IANS), cellular networks, the Internet, etc.
- the various components of the medical system 10 can be connected for data communication, fluid/gas exchange, power exchange, and so on via one or more support cables, tubes, or the like.
- FIG. 2 provides a detailed illustration of embodiments of the robotic system 11 (e.g., a cart-based robotically enabled system) and the control system 50 shown in Figure 1.
- The robotic system 11 generally includes an elongated support structure 14 (also referred to as a “column”), a robotic system base 25, and a console 16. The column 14 may include one or more arm supports 17 (also referred to as a “carriage”) for supporting the deployment of one or more robotic arms 12 (three shown in Figure 2).
- the arm support 17 may include individually configurable arm mounts that rotate along a perpendicular axis to adjust the base of the robotic arms 12 for better positioning relative to the patient.
- the arm support 17 also includes a column interface 19 that allows the arm support 17 to vertically translate along the column 14.
- the column interface can be connected to the column 14 through slots, such as slot 20, that are positioned on opposite sides of the column 14 to guide the vertical translation of the arm support 17.
- the slot 20 contains a vertical translation interface to position and hold the arm support 17 at various vertical heights relative to the robotic system base 25.
- Vertical translation of the arm support 17 allows the robotic system 11 to adjust the reach of the robotic arms 12 to meet a variety of table heights, patient sizes, and physician preferences.
- the individually configurable arm mounts on the arm support 17 allow the robotic arm base 21 of robotic arms 12 to be angled in a variety of configurations.
- the robotic arms 12 may generally comprise robotic arm bases 21 and end effectors 22, separated by a series of linkages 23 that are connected by a series of joints 24, each joint comprising one or more independent actuators.
- Each actuator may comprise an independently controllable motor.
- Each independently controllable joint 24 can provide or represent an independent degree of freedom available to the robotic arm.
- each of the arms 12 has seven joints, and thus provides seven degrees of freedom, including “redundant” degrees of freedom. Redundant degrees of freedom allow the robotic arms 12 to position their respective end effectors 22 at a specific position, orientation, and trajectory in space using different linkage positions and joint angles. This allows for the system to position and direct a medical instrument from a desired point in space while allowing the physician to move the arm joints into a clinically advantageous position away from the patient to create greater access, while avoiding arm collisions.
- the robotic system base 25 balances the weight of the column 14, arm support 17, and arms 12 over a surface, e.g., a floor. Accordingly, the robotic system base 25 may house heavier components, such as electronics, motors, power supply, as well as components that selectively enable movement or immobilize the robotic system.
- The robotic system base 25 includes wheel-shaped casters 28 that allow for the robotic system to easily move around the room prior to a procedure. After reaching the appropriate position, the casters 28 may be immobilized using wheel locks to hold the robotic system 11 in place during the procedure.
- the console 16 allows for both a user interface for receiving user input and a display screen (or a dual-purpose device such as, for example, a touchscreen 26) to provide the physician user with both pre-operative and intra-operative data.
- Potential pre-operative data on the touchscreen 26 may include pre-operative plans, navigation and mapping data derived from preoperative computerized tomography (CT) scans, and/or notes from pre-operative patient interviews.
- Intra-operative data on display may include optical information provided from the tool, sensor and coordinate information from sensors, as well as vital patient statistics, such as respiration, heart rate, and/or pulse.
- the console 16 may be positioned and tilted to allow a physician to access the console from the side of the column 14 opposite arm support 17. From this position, the physician may view the console 16, robotic arms 12, and patient while operating the console 16 from behind the robotic system 11. As shown, the console 16 can also include a handle 27 to assist with maneuvering and stabilizing robotic system 11.
- the end effector 22 of each of the robotic arms 12 may comprise an instrument device manipulator (IDM), which may be attached using a mechanism changer interface (MCI).
- the IDM can be removed and replaced with a different type of IDM, for example, a first type of IDM may manipulate an endoscope, while a second type of IDM may manipulate a laparoscope.
- the MCI can include connectors to transfer pneumatic pressure, electrical power, electrical signals, and/or optical signals from the robotic arm 12 to the IDM.
- the IDMs may be configured to manipulate medical instruments (e.g., surgical tools/instruments), such as the steerable endoscope 32 using techniques including, for example, direct drive, harmonic drive, geared drives, belts and pulleys, magnetic drives, and the like.
- the control system 50 shown in Figure 2 may serve as a command console for the example surgical robotic system 11.
- the control system 50 can include a console base 51 and one or more display devices 42.
- the medical system 10 may include certain control circuitry 60 configured to perform certain of the functionality described herein.
- the control circuitry 60 may be part of the robotic system, the control system 50, or both. That is, references herein to control circuitry may refer to circuitry embodied in a robotic system, a control system, or any other component of a medical system, such as the medical system 10 shown in Figure 1.
- The term “control circuitry” is used herein according to its broad and ordinary meaning, and may refer to any collection of processors, processing circuitry, processing modules/units, chips, dies (e.g., semiconductor dies including one or more active and/or passive devices and/or connectivity circuitry), microprocessors, micro-controllers, digital signal processors, microcomputers, central processing units, field programmable gate arrays, programmable logic devices, state machines (e.g., hardware state machines), logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions.
- Control circuitry referenced herein may further include one or more circuit substrates (e.g., printed circuit boards), conductive traces and vias, and/or mounting pads, connectors, and/or components.
- Control circuitry referenced herein may further comprise one or more storage devices, which may be embodied in a single memory device, a plurality of memory devices, and/or embedded circuitry of a device.
- Such data storage may comprise read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, data storage registers, and/or any device that stores digital information.
- In embodiments in which the control circuitry comprises a hardware and/or software state machine, analog circuitry, digital circuitry, and/or logic circuitry, data storage device(s)/register(s) storing any associated operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry.
- control circuitry 60 may comprise a computer-readable medium storing hard-coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the present figures and/or described herein. Such computer-readable medium can be included in an article of manufacture in some instances.
- the control circuitry 60 may be entirely locally maintained/disposed or may be remotely located at least in part (e.g., communicatively coupled indirectly via a local area network and/or a wide area network).
- control circuitry 60 is integrated with the robotic system 11 (e.g., in the base 25, column 14, and/or console 16) or another system communicatively coupled to the robotic system 11.
- control circuitry 60 is integrated with the control system 50 (e.g., in the console base 51 and/or display unit 42). Therefore, any description of functional control circuitry herein may be understood to be embodied in either the robotic system 11, the control system 50, or both, and/or at least in part in one or more other local or remote systems/devices.
- The medical system 10 further includes certain user controls 65, which may comprise any type of user input (and/or output) devices or device interfaces, such as one or more buttons, keys, joysticks, handheld controllers (e.g., video-game-type controllers), computer mice, trackpads, trackballs, control pads, and/or sensors (e.g., motion sensors or cameras) that capture hand gestures and finger gestures, and/or interfaces/connectors therefor.
- the user controls 65 are communicatively and/or physically coupled to at least some of the control circuitry 60.
- The user controls 65 and/or control circuitry 60 are configured to receive user input to allow a user to control a medical instrument, such as an endoscope or other instrument manipulatable at least in part by a robotic system, in a velocity mode or position control mode.
- In velocity mode, the user may be permitted to directly control pitch and yaw motion of a distal end of, for example, an endoscope or other instrument based on direct manual control using the controls 65.
- For example, movement on a joystick may be mapped to yaw and pitch movement in the distal end of the scope/device.
- the user controls 65 are configured to provide haptic feedback to the user.
- a joystick or other control mechanism may vibrate to indicate an invalid or potentially problematic input.
- the control system 50 and/or robotic system 11 can also provide visual feedback (e.g., pop-up messages) and/or audio feedback (e.g., beeping) to indicate issues associated with robotic operation.
- The control circuitry 60 may use a three-dimensional (3D) map of a patient and/or pre-determined computer models of the patient to control a medical instrument (e.g., endoscope).
- the control circuitry 60 can be configured to provide control signals to the robotic arms 12 of the robotic system 11 to manipulate the relevant instrument to position the same at a target location, position, and/or orientation/alignment.
- position control mode may require sufficiently accurate mapping of the anatomy of the patient.
- a user can manually manipulate a robotic arm 12 of the robotic system 11 without using the user controls 65.
- a user may move the robotic arms 12 and/or any other medical instruments to provide desired access to a patient.
- the robotic system 11 may rely on force feedback and inertia control from the user to determine appropriate configuration of the robotic arms 12 and associated instrumentation.
- the display device(s) 42 of the control system 50 may be integrated with the user controls 65, for example, as a tablet device with a touchscreen providing for user input.
- the display device(s) 42 can be configured to provide data and input commands to the robotic system 11 using integrated display touch controls.
- the display device(s) 42 can be configured to display graphical user interfaces showing information about the position and orientation of various instruments operating within the patient and/or system based on information provided by one or more position sensors.
- Position sensors associated with medical instruments (e.g., an endoscope) may be coupled to certain connectivity components of the system.
- Such connectivity components may be configured to transmit the position information to the console base 51 for processing thereof by the control circuitry 60 and for presentation via the display device(s).
- Figure 3 illustrates a bronchoscope 340, which may be referred to as a scope, an endoscope, a medical instrument, or the like depending on context, disposed in portions of the respiratory system of a patient in accordance with one or more embodiments of the present disclosure.
- bronchoscope procedures can be implemented for investigating abnormalities in human lungs and/or treating the same.
- bronchoscope procedures can be implemented to treat and/or remove lesions or nodules.
- Such procedures may be implemented manually at least in part and/or may be performed using robotic technologies at least in part, such as the robotic system 11 shown in Figures 1 and 2.
- the scope 340 includes a working channel 344 for deploying medical instruments (e.g., lithotripters, basketing devices, forceps, etc.), irrigation, and/or aspiration to an operative region at a distal end of the scope.
- the scope 340 can be articulable, such as with respect to at least a distal portion of the scope, so that the scope can be steered within the human anatomy.
- the scope 340 is configured to be articulated with, for example, five degrees of freedom, including XYZ coordinate movement, as well as pitch and yaw.
- Position sensor(s) of the scope 340 may likewise have similar degrees of freedom with respect to the position information they produce/provide.
- Figure 3 illustrates multiple degrees of motion of the scope 340 according to some embodiments. As shown in Figure 3, a tip or a distal end 342 of the scope 340 can be oriented with zero deflection relative to a longitudinal axis 306 thereof (also referred to as a “roll axis”).
- a robotic system may be configured to deflect the tip 342 on a positive yaw axis 302, negative yaw axis 303, positive pitch axis 304, negative pitch axis 305, or roll axis 306.
- the tip 342 or body 345 of the scope 340 may be elongated or translated in the longitudinal axis 306, x-axis 308, or y-axis 309.
- the scope 340 may include a reference structure (not shown) to calibrate the position of the scope. For example, a robotic system may measure deflection of the scope 340 relative to the reference structure.
- the reference structure can be located, for example, on a proximal end of the endoscope 340 and may include a key, slot, or flange.
- the reference structure can be coupled to a first drive mechanism for initial calibration and coupled to a second drive mechanism to perform a surgical procedure.
- robotic arms of a robotic system can be configured/configurable to manipulate the scope 340 using elongate movement members.
- the elongate movement members may include one or more pull wires (e.g., pull or push wires), cables, fibers, and/or flexible shafts.
- the robotic arms may be configured to actuate multiple pull wires (not shown) coupled to the scope 340 to deflect the tip 342 of the scope 340.
- Pull wires may include any suitable or desirable materials, such as metallic and non-metallic materials, including stainless steel, Kevlar, tungsten, carbon fiber, and the like.
- the scope 340 is configured to exhibit nonlinear behavior in response to forces applied by the elongate movement members. The nonlinear behavior may be based on stiffness and compressibility of the scope, as well as variability in slack or stiffness between different elongate movement members.
- the scope (e.g., endoscope/bronchoscope) 340 may comprise a tubular and flexible medical instrument that is configured to be inserted into the anatomy of a patient to capture images of the anatomy.
- the scope 340 can accommodate wires and/or optical fibers to transfer signals to/from an optical assembly and a distal end 342 of the scope 340, which can include an imaging device 348, such as an optical camera.
- the camera/imaging device 348 can be used to capture images of an internal anatomical space, such as a target portion of the bronchi 7 (e.g., primary 71, secondary 78 and tertiary 75 bronchi, and bronchioles 77).
- the scope 340 may further be configured to accommodate optical fibers to carry light from proximately-located light sources, such as light-emitting diodes, to the distal end 342 of the scope.
- the distal end 342 of the scope 340 can include ports for light sources to illuminate an anatomical space when using the camera/imaging device.
- the scope 340 is configured to be controlled by a robotic system similar in one or more respects to the robotic system 11 shown in Figures 1 and 2.
- the imaging device may comprise an optical fiber, fiber array, and/or lens. The optical components move along with the tip of the scope 340 such that movement of the tip of the scope results in changes to the images captured by the imaging device(s) 348.
- the medical instrument (e.g., scope) 340 includes a sensor that is configured to generate and/or send sensor position data to another device.
- the sensor position data can indicate a position and/or orientation of the medical instrument 340 (e.g., the distal end 342 thereof) and/or can be used to determine/infer a position/orientation of the medical instrument.
- a sensor (sometimes referred to as a “position sensor”) can include an electromagnetic (EM) sensor with a coil of conductive material, or other form/embodiment of an antenna.
- Figure 3 shows an EM field generator 315, which is configured to broadcast an EM field 90 that is detected by the EM sensor on the medical instrument.
- the EM field 90 can induce small currents in coils of the EM position sensor, which may be analyzed to determine a distance and/or angle/orientation between the EM sensor and the EM field generator 315.
- the medical instrument/scope 340 can include other types of sensors, such as a shape sensing fiber, accelerometer(s), gyroscope(s), satellite-based positioning sensor(s) (e.g., global positioning system (GPS) sensors), radio-frequency transceiver(s), and so on.
- a sensor on a medical instrument can provide sensor data to a control system, which is then used to determine a position and/or an orientation of the medical instrument.
- In some embodiments, the position sensor is positioned on the distal end 342 of the medical instrument 340, while in other embodiments the sensor is positioned at another location on the medical instrument. The bronchoscope may be driven to a position in proximity to the target portion of the bronchi 7.
- The distal end of the bronchoscope 340 may be advanced, through the trachea 6 and into the bronchi of the lung 4l, 4r, to contact or otherwise reach a target anatomical feature, which may be a nodule 201.
- With the position sensor associated with the distal end of the scope 340 in contact and/or proximity to the target anatomical feature, the position of the distal end of the scope 340 may be recorded as the target access position to which the operational instrument (e.g., needle) may be directed to access the nodule 201 through the bronchi 7.
- the scope 340 may be directed to deliver an injection needle to a target, such as, for example, a nodule 201 within the lungs of the patient.
- the needle may be deployed down a working channel 344 that runs a length of the endoscope 340 to inject a cancer treatment agent/drug directly into the target nodule 201 and/or in the surrounding area.
- the endoscope 340 may endoscopically deliver tools to resect potentially cancerous tissue.
- diagnostic and therapeutic treatments may be delivered in separate procedures.
- the endoscope 340 may also be used to deliver a fiducial to “mark” the location of the target nodule as well. In other instances, diagnostic and therapeutic treatments may be delivered during the same procedure.
- End effectors/manipulators used to provide the robotic control/manipulation described herein may be any type of end effector/manipulator, such as rail-based and/or table-based robotic end effectors/manipulators.
- Certain embodiments of the present disclosure advantageously help to automate and guide physicians through the process for gaining access to and treating target anatomical features.
- electromagnetic positioning and scope images can be used together to guide the insertion of a needle into a patient.
- Such solutions can allow physicians to gain access into the lung 4 and to be able to perform lung nodule treatment.
- Certain embodiments of the present disclosure involve position- sensor-guided access to a target treatment site, such as a location of the nodule 201 in the lung 4.
- Where the scope 340 is fitted with one or more electromagnetic sensors, and the bronchoscope 340 further includes one or more electromagnetic sensors, and such sensors are subjected to the electromagnetic field 90 created by the field generator 315, associated system control circuitry can be configured to detect and track their locations.
- the tip of the bronchoscope 340 acts as a guiding beacon while the user inserts the bronchoscope 340.
- Such solutions can allow the user to reach a target portion from a variety of approaches, thereby obviating the need to rely on fluoroscopic or ultrasound imaging modalities.
- a control system (not shown in Figure 3) associated with the scope 340 is configured to implement localization/positioning techniques to determine and/or track a location/position of a medical instrument, such as the scope 340 and/or drug injection needle (not shown).
- the EM field generator 315 is configured to provide an EM field 90 within the environment of the patient.
- the scope 340 and/or the drug injection needle can include an EM sensor that is configured to detect EM signals and send sensor data regarding the detected EM signals to the control system.
- the control system can analyze the sensor data to determine a position and/or orientation of the scope 340 (e.g., a distance and/or angle/orientation between the EM sensor and the EM field generator 315).
- the control system can use other techniques to determine a position and/or an orientation of the scope 340.
- the scope 340 can include a shape-sensing fiber, an accelerometer, a gyroscope, a satellite-based positioning sensor (e.g., a global positioning system (GPS)), a radio-frequency transceiver, and so on.
- the control system can receive sensor data from the scope 340 and determine a position and/or an orientation thereof.
- the control system can track a position and/or an orientation of the scope 340 in real-time with respect to a coordinate system and/or the anatomy of the patient.
- the scope 340 may be controllable in any suitable or desirable way, either based on user input or automatically.
- the controls 311, 312 provide examples that may be used to receive user input.
- the controls of the scope 340 are located on a proximal handle of the scope, which may be relatively difficult to grasp in some procedural postures/positions as the orientation of the bronchoscope changes.
- the scope 340 is controlled using a two-handed controller 312.
- although the controllers 311, 312 are shown as hand-held controllers, user input may be received using any type of I/O device, such as a touchscreen/pad, a mouse, a keyboard, a microphone, etc.
- Figure 4 illustrates lobar segments of the respiratory system 400 of a patient.
- the lungs 4 are segmented into lobes through a process that can be referred to as lung lobe segmentation.
- Lung lobe segmentation can be particularly important during the process of assessing the location (e.g., a nodule 201 location) and progression of symptoms, complications, or diseases, as well as when choosing their most appropriate treatment.
- emphysema quantification and lung nodule detection are among the clinical applications which may benefit from lung segmentation.
- Correct lung lobes segmentation and determination of the lobe boundaries can prevent pleural damage, such as pneumothorax, during examination and treatment.
- the two human lungs 4 are divided into five lobes.
- the lungs 4 contain lung fissures that are folds (e.g., double folds) of visceral pleura that form borders among sections of the lungs and segment the lungs 4 into lobes.
- both lungs have an oblique fissure 81, 85 separating the upper and lower lobes
- the right lung additionally has a horizontal fissure 84 separating the right middle lobe from the upper lobe.
- an oblique fissure (the left major fissure) 81 separates the left lung 4l into two lobes: the superior (upper) left lobe 82 and the inferior (lower) left lobe 83.
- a horizontal fissure (the right minor fissure) 84 and an oblique fissure (the right major fissure) 85 separate the right lung into three lobes: the superior (upper) right lobe 86, the middle right lobe 87, and the inferior (lower) right lobe 88. That is, each lobe has its own pleural covering formed by the fissures.
- the separated lobes provide various benefits.
- the lobes can serve to restrict the spread of a bronchopulmonary infection to an affected lobe. Accordingly, as referenced above, identification of the fissures that serve as borders is an important part of diagnosis and treatment plan for lung malignancy and lung diseases.
- measuring distances from a nodule to the identified boundaries can assist pre-operational planning of the biopsy and avoidance of the boundaries during the biopsy.
- a warning/error may be generated for notification to a clinician or to stop or otherwise limit control of the surgical tool.
- Figure 4 illustrates the respiratory system 400 and its lung 4 and lobes 82, 83, 86, 87, 88
- the present disclosure may be applied to any anatomical feature instead of the lung 4 and any portions of such anatomical feature instead of the lobes 82, 83, 86, 87, 88.
- the present disclosure may be applied to another anatomical feature of a different system, such as a heart of a cardiovascular system with portions thereof (e.g., atriums and ventricles).
- Figure 5 (represented in parts 5-1 and 5-2) is a flow diagram illustrating a process 500 for providing a distance-coded image in accordance with one or more embodiments.
- Figure 6 (represented in parts 6-1 and 6-2) shows certain images corresponding to various blocks, states, and/or operations associated with the process of Figure 5 in accordance with one or more embodiments.
- the distance-coded images generated by the process 500 may assist diagnosis, pre-operational planning, and treatment of respiratory disorders, such as lung malignancy and lung diseases.
- the process 500 involves receiving an image of a lung (lung image).
- the lung image may be a CT scan, an MRI scan, or other clinical images of the lung, which may be an original image or a modelled image constructed based on one or more original images capturing a patient lung.
- the lung image may be a 2D or 3D image composed of pixels or voxels, a raster or vector image, or any variation thereof.
- the lung image may have yet to be segmented into lobes. That is, the lung image is an unlabeled image without lobe labels associated with its pixels or voxels. It will be understood that a pixel and a voxel are the smallest units of a visual representation that can be assigned a particular value, such as a color value.
- lung images 602a, 602c, and 602d illustrate 2D images, while the lung image 602b illustrates a 3D modelled image.
- the images 602, depending on various image capturing/generation techniques used for the images 602, may include more or fewer anatomical features, such as vessels/bronchi 604 in the lung image 602a or a thorax 606 in the lung image 602d.
- the lung images 602 may include pixels/voxels associated with a nodule 201a-d, which may or may not be readily recognizable by human eyes as indicating the existence of a nodule.
- the process 500 involves performing lobar segmentation.
- the lobar segmentation may be performed using one or more artificial intelligence frameworks.
- a machine learning framework leveraging deep learning, such as deep neural networks, deep belief networks, deep reinforcement learning, recurrent neural networks, convolutional neural networks, or the like, may be trained and configured to perform the lobar segmentation on the lung images 602.
- the lobar segmentation can label some or all pixels or voxels of the lung image 602a-d with lobe labels indicating to which lobe the pixels or voxels belong. More details regarding the machine learning framework will be described in relation to Figures 9 and 10.
- Segmented lung images 612 illustrate the resulting images of the block 504.
- Lobes (A, B, C, D, and E) are reflected in the segmented lung images 612.
- the outermost pixels/voxels associated with a particular lobe label can define a boundary of a lobe that is associated with the particular lobe label, where the boundary is indicative of fissures and pleura.
- the process 500 involves assigning at least one nodule to a lobe.
- a clinician may provide a nodule location to assign a nodule to a lobe.
- the assignment may involve tagging one or more pixels/voxels on a segmented lung image 612a-d, such as through clicking on the image 612a-d or by other methods of providing coordinates in relation to the image 612a-d, to assign the nodule location.
- an artificial intelligence algorithm/machine may automatically tag the nodule location. For example, the algorithm/machine may automatically analyze the lung images 602 associated with the block 502 to identify the nodules 201 and tag their pixels/voxels. More details regarding the assignment of the nodule to a lobe will be described in relation to Figures 7 and 8.
- Assignment of a nodule to a lobe allows identification of a particular lobe that contains the nodule.
- the lobes "A" contain the respective nodules 201a-d.
- the association of the nodules 201a-d with the lobes "A" is only to simplify the description, and it will be understood that a nodule can be assigned to any of the remaining lobes "B", "C", "D", or "E", where applicable.
- while a nodule 201 is described as a target of interest, this block 506 and the concepts described herein can be applied to any anatomical feature, such as a tumor, to assign the anatomical feature to a lobe. Additionally, it will be understood that the blocks 504 and 506 may be performed independently or in a reverse order. That is, it is possible to assign a nodule to a pixel/voxel of the lung images 602 before or concurrently with segmenting the lung images 602 and, later, determine a lobe that contains the pixel/voxel.
- the process 500 may optionally involve generating a lobe image 622a-d.
- the generation of the lobe image 622a-d may involve generation of a new image or conversion of an existing image, such as conversion of the segmented lung image 612a-d of the block 506 into the lobe image 622a-d.
- the lobe image 622a-d can include a lobe label of the assigned lobe (the lobe "A") and exclude lobe labels of the other lobes (the lobes "B", "C", "D", and "E"). That is, the lobe image 622a-d can include pixels/voxels associated with the assigned lobe from the block 506 and exclude pixels/voxels associated with other lobes.
- the inclusion of the assigned lobe can involve assignment of bias (first) color value(s) to pixels/voxels associated with the assigned lobe. That is, the pixels/voxels associated with a lobe label of the assigned lobe can be assigned the bias color value.
- the exclusion of the other lobes can involve filtering out, masking out, or otherwise removal of colors from pixels/voxels associated with the other lobes. That is, the pixels/voxels associated with lobe labels of the other lobes can be assigned neutral (second) color value(s).
- the neutral color value(s) can be a designated value in a coloring scheme that is distinct from the bias color value(s), such as RGB color scheme value #000000 or CMYK color scheme value 0, 0, 0, 100.
- a lobe image may be a binary image that includes pixels/voxels associated with the assigned lobe and excludes all other pixels/voxels.
- the lobe image 622b illustrates such a binary lobe image that excludes all pixels/voxels that are not associated with a lobe label of the assigned lobe at the block 506.
- a lobe image may differentiate the assigned lobe from the other lobes yet substantially leave pixels/voxels associated with other lobes and anatomical features substantially unchanged, such as the lobe images 622a, 622c, and 622d.
- the generated lobe images 622a-d can be passed to the block 510 to be distance-coded.
- this block 508 may be optional in the sense that a lobe label labelling the pixels/voxels of the assigned lobe may be passed to the block 510 along with the segmented images 612a-d without generation of the lobe images 622, thereby leaving the assignment of the bias color value as a step to be performed at the block 510.
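- As a concrete illustration of the lobe-image step above, the following minimal sketch (not part of the disclosed embodiments; the array name lobe_labels and the chosen color values are assumptions for illustration) keeps only the assigned lobe and applies the bias/neutral color values using NumPy:

```python
import numpy as np

def make_lobe_image(lobe_labels: np.ndarray, assigned_lobe: int,
                    bias_value: int = 255, neutral_value: int = 0) -> np.ndarray:
    """Build a lobe-image-style array that differentiates the assigned lobe.

    lobe_labels   : 2D/3D integer array; each pixel/voxel holds a lobe label.
    assigned_lobe : label of the lobe assigned to the nodule.
    bias_value    : first (bias) color value for the assigned lobe.
    neutral_value : second (neutral) color value for everything else.
    """
    lobe_image = np.full(lobe_labels.shape, neutral_value, dtype=np.uint8)
    lobe_image[lobe_labels == assigned_lobe] = bias_value  # bias color for the assigned lobe
    return lobe_image
```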
- the process 500 involves generating a distance-coded image.
- the block 510 receives information of an assigned lobe and lobe segmentation from the previous blocks.
- the received information can include a lobe label of the assigned lobe passed from the previous blocks and the segmented lung images 612.
- the received information can be in a form of a generated/converted image, such as the lobe images 622, which differentiates an assigned lobe from the other lobes.
- the assigned lobe has a boundary surrounding or encompassing the lobe.
- the lobe boundary is defined by pleura and fissures.
- the lobe boundary can include a set of pixels/voxels that are on the edge in 2D or on the surface in 3D of the assigned lobe.
- Various computing algorithms, such as edge detection algorithms and their 3D equivalents, may be utilized to identify the boundary.
- distance metrics can be calculated for the pixels/voxels outside the boundary.
- a distance metric can be a Euclidean distance between a given pixel/voxel external to the boundary and the closest pixel/voxel on the edge/surface of the boundary.
- a distance metric can be a polar distance between a given pixel/voxel external to the boundary and the closest pixel/voxel on the edge/surface of the boundary. It will be understood that other distance metrics may also be used.
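- One conventional way to obtain such Euclidean distance metrics is a distance transform. The sketch below is only one possible implementation (using SciPy's distance_transform_edt, which the disclosure does not require); it computes, for every pixel/voxel outside the assigned lobe, the distance to the closest point on the lobe boundary, optionally in physical units via the voxel spacing:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def exterior_distance_map(lobe_mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Distance from each exterior pixel/voxel to the nearest point of the lobe.

    lobe_mask : boolean array, True inside the assigned lobe.
    spacing   : pixel/voxel size along each axis (e.g., in mm).
    """
    # distance_transform_edt returns, at each True element of its input, the
    # Euclidean distance to the nearest False element; passing ~lobe_mask thus
    # yields exterior distances to the lobe surface and 0 inside the lobe.
    return distance_transform_edt(~lobe_mask, sampling=spacing)
```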
- the calculated distance metrics of the pixels/voxels can be used to code (distance-code) a lung image, such as the segmented lung image 612a-d or the lobe image 622a-d, to generate a distance-coded image 632a-d.
- the distance coding can take various formats.
- the distance-coded image 632a illustrates a grayscale image in which closer distances from the boundary are assigned darker colors (color indices) and further distances from the boundary are assigned lighter colors (color indices).
- An applicable range of color indices may be computed based on the shortest distances (e.g., on or adjacent to the boundary) and the furthest distances away from the boundary in the lung image, and color indices may be mapped to the distances accordingly. As a result, the distance-coded image 632a can show a gradient of color indices when considering pixels/voxels exterior to the boundary as a whole. It will be understood that while a grayscale typically ranging between black and white is described, any range of color indices, including a color spectrum (e.g., red to blue), contrasts (low contrast to high contrast), reversed color indices (e.g., lighter to darker for closer to further), other values, or any combination of the color indices, is contemplated.
- the distance-coded images 632b, 632d illustrate utilizing contour lines (isoline, isopleth, or isarithm) to indicate pixel/voxel distances from the boundary.
- the distance-coded image 632b additionally utilizes various patterns to indicate the distances, such as closer-knit patterns indicating closer distance.
- the distance-coded image 632c utilizes a point cloud with more dense points indicating a closer distance and less dense points indicating a further distance. Many variations in the distance-coding schemes are possible.
- the distance-coded images 632a-d may be generated by passing a lobe image through a process that calculates the distance metrics, determines a distance-coding scheme, and applies the scheme to the lobe image.
- the process may apply a filter (a distance map filter) that converts the calculated distance metrics to a range of values, maps the range of values to color indices, and assigns mapped color indices to pixels/voxels external to the boundary.
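- A minimal sketch of such a distance map filter is shown below (an assumed grayscale mapping for illustration only); it rescales the calculated distances to 8-bit color indices so that pixels/voxels near the boundary are dark and distant ones are light:

```python
import numpy as np

def distance_map_filter(distances: np.ndarray, exterior_mask: np.ndarray) -> np.ndarray:
    """Map exterior distances to grayscale color indices (darker = closer to the boundary)."""
    coded = np.zeros(distances.shape, dtype=np.uint8)
    d = distances[exterior_mask]
    if d.size == 0:
        return coded
    d_min, d_max = float(d.min()), float(d.max())     # shortest/furthest distances in the image
    scale = (d - d_min) / max(d_max - d_min, 1e-9)    # normalize distances to [0, 1]
    coded[exterior_mask] = np.round(scale * 255).astype(np.uint8)
    return coded
```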
- the interior of a lobe may similarly be distance-coded in some embodiments.
- similar distance-coding schemes may be used to assign colors to the interior of the lobe, from a given point in the interior to the closest point on the boundary.
- the interior distance-coding may be applied to a distance-coded image 632a-d as an alternative or in addition to the illustrated exterior distance-coding.
- the process 500 may involve providing the distance-coded images 632.
- the provision of the distance-coded images 632 may involve displaying the images 632 to clinicians or transmitting the images 632 to a medical system (e.g., the medical system 10).
- the clinicians may use the images 632 during their pre-operational planning of a medical procedure to determine how best to conduct an operation with reduced pneumothorax risks.
- the medical system may integrate the images 632 into its intra-operative software to provide distance information in real-time to warn clinicians of such risks.
- Figure 7 is a flow diagram illustrating a process 700 for assigning a nodule to a lobe in accordance with one or more embodiments.
- Figure 8 shows certain images corresponding to various blocks, states, and/or operations associated with the process of Figure 7 in accordance with one or more embodiments.
- the process 700 describes the block 506 of Figure 5 in greater detail.
- the process 700 involves determining a nodule location. It will be understood that, while a nodule and its location are described herein, the process 700 may be applied to any target anatomical feature.
- the determination of the nodule location in connection with block 710 can be performed in any suitable or desirable way, such as using an at least partially manual determination subprocess 711 or an at least partially automated determination subprocess 712, which are described below in connection with blocks 713 and 714, respectively.
- the process 700 involves receiving or acquiring the nodule location from a clinician.
- the clinician may access lung images, such as CT scans or 3D models of a lung constructed from such images, and identify the nodule location.
- the clinician may provide input to notify the relevant computing/medical system of the nodule location in some manner, such as by clicking on a centroid 715 of the nodule in a lung image or entering coordinates of the centroid 715.
- other manual user input methods may also be used, including drag-select or selection gestures around the nodule 201.
- coordinates of the centroid 715 can be computed and determined as the nodule location.
- the process 700 involves determining the nodule location using image data input (e.g., the lung images 602) and an artificial intelligence framework, as described in detail below with respect to Figures 9 and 10.
- the artificial intelligence framework may be a deep learning framework, such as a convolutional neural network framework.
- the artificial intelligence framework can receive the image data input containing the nodule 201. Depending on whether the image data is of 2D or 3D, the framework can analyze pixels/voxels based on various image segmentation techniques to determine a set of pixels/voxels associated with the nodule 201 from surrounding pixels/voxels. In some embodiments, the nodule pixels/voxels may be further analyzed to provide a convenient position metric that represents the nodule position. For example, the nodule position may be represented as a centroid 715 on a coordinate system, such as the illustrated Euclidean coordinate system in XYZ-axes.
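- As a simple illustration of the position metric above, a centroid 715 can be computed as the mean coordinate of the pixels/voxels labeled as nodule; the sketch below (names assumed for illustration) does so with NumPy:

```python
import numpy as np

def nodule_centroid(nodule_mask: np.ndarray) -> np.ndarray:
    """Centroid of the nodule as the mean of its pixel/voxel coordinates."""
    coords = np.argwhere(nodule_mask)   # indices of every pixel/voxel flagged as nodule
    if coords.size == 0:
        raise ValueError("no nodule pixels/voxels found")
    return coords.mean(axis=0)
```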
- the subprocesses 711, 712 may substitute or complement one another. That is, the nodule location may be manually determined, automatically determined, or automatically determined and later manually confirmed. Furthermore, the nodule location determined by each subprocess 711, 712 may be compared and, when there is a discrepancy above a threshold level, the discrepancy may be notified as a warning or an error to the clinician.
- the process 700 involves determining the pixel/voxel corresponding to the nodule location.
- the acquired or computed coordinates may be based on a coordinate system used in relation to pre-operational planning models and may need to be translated to pixel/voxel coordinates of the lung images.
- the acquired or computed coordinates may need to be rounded up or down to sufficiently map to pixel/voxel coordinates of the lung images.
- the process 700 involves determining a lobe associated with the pixel/voxel.
- at this point, the pixel/voxel of the nodule location has been determined.
- the pixel/voxel has an associated lobe label to which it corresponds. Based on the lobe label, the nodule is assigned to a particular lobe 731.
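- Putting these blocks together, the sketch below (illustrative only; it assumes the nodule location has already been translated into the pixel/voxel coordinate frame of the segmented image) rounds the location to the nearest pixel/voxel and reads off its lobe label:

```python
import numpy as np

def assign_lobe(nodule_location: np.ndarray, lobe_labels: np.ndarray) -> int:
    """Return the lobe label of the pixel/voxel nearest to the nodule location."""
    idx = np.rint(nodule_location).astype(int)                  # round to pixel/voxel coordinates
    idx = np.clip(idx, 0, np.array(lobe_labels.shape) - 1)      # keep the index inside the image
    return int(lobe_labels[tuple(idx)])                         # lobe assigned to the nodule
```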
- Figure 9 illustrates a lobar segmentation framework 900 in accordance with one or more embodiments of the present disclosure.
- the lobuar segmentation framework 900 may be embodied in certain control circuitry, including one or more processors, data storage devices, connectivity features, substrates, passive and/or active hardware circuit devices, chips/dies, and/or the like.
- the framework 900 may be embodied in the control circuitry 60 shown in Figure 2 and described above.
- the framework 900 may employ machine learning functionality to perform lobar segmentation and labelling for lung images.
- the framework 900 may be configured to operate on certain image-type data structures, such as image data representing at least a portion of a lung, including a CT scan, an MRI scan, or other clinical images, which may be an original image or a modelled image constructed based on one or more original images.
- Such input data/data structures may be operated on in some manner by certain segmentation/labelling network 920 associated with an image processing portion of the framework 900.
- the segmentation/labelling network 920 is one example of a suitable or desirable artificial intelligence architecture that may perform the lobar segmentation/labelling.
- the framework 900 can involve a training process 901 and an operational process 902.
- the segmentation/labelling network 920 may be trained according to known anatomical images 912 and known segmented images 932 corresponding to the respective images 912 as input/output pairs, wherein the segmentation/labelling network 920 is configured to adjust one or more parameters or weights associated therewith to correlate the known input and output image data.
- the machine learning framework may be configured to execute the learning/training in any suitable or desirable manner.
- the known segmented images 932 may be generated at least in part by manually labeling anatomical features in the known anatomical images 912. For example, manual labels may be determined and/or applied by a relevant medical expert to label or otherwise indicate where each lobar segment is on the known anatomical images 912.
- the known input/output pairs can indicate the parameters of the segmentation/labelling network 920, which may be dynamically updatable in some embodiments.
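- A generic supervised-training sketch of the kind described above is shown below (PyTorch is an assumption; the tensor shapes, loss, and optimizer are illustrative, not the specific training procedure of the disclosure):

```python
import torch
import torch.nn as nn

def train_on_known_pairs(network: nn.Module,
                         known_images: torch.Tensor,   # N x 1 x H x W known anatomical images 912
                         known_labels: torch.Tensor,   # N x H x W integer lobe labels (images 932)
                         epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    """Adjust network parameters/weights to correlate known inputs and outputs."""
    optimizer = torch.optim.Adam(network.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()           # per-pixel classification over lobe labels
    network.train()
    for _ in range(epochs):
        optimizer.zero_grad()
        logits = network(known_images)        # N x num_labels x H x W
        loss = loss_fn(logits, known_labels)
        loss.backward()                       # backpropagation through the network
        optimizer.step()
    return network
```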
- the known segmented images 932 may depict the boundary of the lobes segmented therein.
- the framework 900 may be configured to generate segmented images 935 in a manner as to indicate in a binary manner whether a particular lung image of unlabeled anatomical images 915 includes a lobe (a segment) or not, wherein further processing may be performed on the images that are identified as containing one or more instances of the lobe to further identify the location, boundary, and/or other aspects of the lobe. In some embodiments, further processing may be performed to determine a nodule location in one of the segmented lobes.
- the lobar segmentation framework 900 may further be configured to generate segmented images 935 associated with unlabelled anatomical images 915 using the trained version of the segmentation/labelling network 920.
- For example, during a medical procedure, real-time lung images of the treatment site may be processed using the segmentation/labelling network 920 to generate segmented images 935 identifying the presence and/or position of one or more lobes in the real-time lung images.
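- For the operational process, applying the trained network to unlabelled images can be as simple as the following sketch (names and shapes are assumptions; it pairs with the training sketch above):

```python
import torch

@torch.no_grad()
def segment_unlabelled_images(network: torch.nn.Module,
                              unlabelled_images: torch.Tensor) -> torch.Tensor:
    """Produce segmented images: one lobe label per pixel of each input image."""
    network.eval()
    logits = network(unlabelled_images)   # N x num_labels x H x W
    return logits.argmax(dim=1)           # N x H x W lobe-label map
```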
- the framework 900 may be configured to identify fissures and/or pleura dividing lobes, as well as segment-out the lobes in the image.
- the framework 900 may comprise an artificial neural network (e.g., the segmentation/labelling network 920), such as a convolutional neural network.
- the framework 900 may implement a deep learning architecture that takes in an input image, assigns learnable weights/biases to various aspects/objects in the image to differentiate one from the other.
- the network 920 may include a plurality of neurons (e.g., layers of neurons, as shown in Figure 9) corresponding to overlapping regions of an input image that cover the visual area of the input image.
- the network 920 may further operate to flatten the input image, or portion(s) thereof, in some manner.
- the network 920 may be configured to capture spatial and/or temporal dependencies in the input images 915 through the application of certain filters. Such filters may be executed in various convolution operations to achieve the desired output data and may be hand-engineered or may be learned through machine learning. Such convolution operations may be used to extract features, such as edges, contours, and the like.
- the network 920 may include any number of convolutional layers, wherein more layers may provide for identification of higher-level features.
- the network 920 may further include one or more pooling layers, which may be configured to reduce the spatial size of convolved features, which may be useful for extracting features which are rotational and/or positional invariant, as with certain anatomical features.
- the image data may be processed by a multi-layer perceptron and/or a feed-forward neural network. Furthermore, backpropagation may be applied to each iteration of training.
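- The sketch below shows what such a stack of convolutional and pooling layers can look like in code (a deliberately tiny stand-in for the segmentation/labelling network 920, not the UNet model itself; the number of labels is an assumption):

```python
import torch.nn as nn

class TinySegmentationNet(nn.Module):
    """Minimal per-pixel classifier: convolution, pooling, and 1x1 classification."""

    def __init__(self, num_labels: int = 6):  # e.g., 5 lobes + background (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),  # convolutions extract edges/contours
            nn.MaxPool2d(2),                                        # pooling reduces the spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(32, num_labels, kernel_size=1)  # per-pixel lobe logits
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest") # restore the input resolution

    def forward(self, x):                    # x: N x 1 x H x W (H, W even)
        return self.upsample(self.classifier(self.features(x)))
```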
- the framework may be able to distinguish between dominating and certain low-level features in the input images and classify them using any suitable or desirable technique.
- the neural network architecture comprises any of the following known convolutional neural network architectures: UNet, LeNet, AlexNet, VGGNet, GoogLeNet, ResNet, ZFNet, or any suitable architecture.
- the network 920 can be modelled based on the UNet semantic segmentation deep learning model.
- the model can be developed to perform segmentation of the lung into the lobes described in relation to Figure 4.
- the model can be trained on segmented images that are already associated with lobe labels.
- the known anatomical images 912 are original images that have corresponding known segmented images 932 that already divide the known anatomical images 912 into lobes with lobe labels.
- the known anatomical images 912 and their corresponding known segmented images 932 may be public data, such as available for public use with Slicer Chest Imaging Extension.
- the images 912, 932 can be byproducts of previous medical procedures where clinicians have identified lobes in the images, provided the images do not violate patient privacy.
- the images may be provided as CT images, MRI images, or other medical images in 2D or 3D and can form a training dataset.
- the model can be tuned and validated with cross validation.
- Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. For example, in a 5-fold cross-validation, the training dataset can be randomly reshuffled and split into 5 groups. A group is selected as a validation dataset and the remaining groups are used to train the model and the validation dataset is used to evaluate the trained model. Any evaluation metrics, including error metrics such as the global average dice score, may be used. The training and the evaluation are repeated for each group.
- Cross-validation can effectively use a limited training dataset in order to estimate how the model is expected to perform in general when used to make predictions on data not used during the training of the model.
- transfer learning techniques may be utilized alongside the cross-validation to further fine-tune the model.
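- The 5-fold procedure described above can be sketched as follows (scikit-learn's KFold is an assumption for illustration; train_fn and evaluate_fn stand in for the model training and dice-score evaluation):

```python
import numpy as np
from sklearn.model_selection import KFold

def five_fold_cross_validation(sample_indices, train_fn, evaluate_fn, seed: int = 0):
    """Hold out each of 5 reshuffled groups once; return per-fold and mean scores."""
    kfold = KFold(n_splits=5, shuffle=True, random_state=seed)
    scores = []
    for train_idx, val_idx in kfold.split(sample_indices):
        model = train_fn(train_idx)                  # train on the 4 remaining groups
        scores.append(evaluate_fn(model, val_idx))   # e.g., global average dice score
    return scores, float(np.mean(scores))
```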
- Figure 10 illustrates an example distance-coded image generation architecture 1000 in accordance with one or more embodiments.
- the architecture 1000 may represent a convolutional neural network architecture and may include one or more of the illustrated components, which may represent certain functional components each of which may be embodied in one or more portions or components of control circuitry associated with any of the systems, devices, and/or methods of the present disclosure.
- the architecture 1000 may implement an embodiment of the lobar segmentation framework 900 of Figure 9, or a portion thereof.
- the architecture 1000 may include a trained segmentation network component 1002, such as a UNet neural network, or the like.
- the trained segmentation network component 1002 may be the segmentation/labelling network 920 of Figure 9 and may perform some or all functionalities described in the blocks 502 and 504 of Figure 5-1. That is, the trained segmentation network component 1002 can receive a lung image and perform lobar segmentation to provide a segmented image 1004, such as the segmented lung image 612a-d.
- the segmented image may assign a lobe label to every pixel/voxel in a CT image, in such a way that the pixel/voxel value indicates to which lobe that pixel/voxel belongs.
- the architecture 1000 may further include a nodule classifier component 1006.
- the nodule classifier component 1006 may be configured to determine a nodule location in an image and assign a lobe (segment) that contains pixels/voxels corresponding to the nodule location.
- the nodule classifier component 1006 may perform some or all functionalities described in the block 506 of Figure 5-1 and blocks of the nodule assignment process 700 of Figure 7.
- the nodule classifier component 1006 can be designed to assign a lobe that contains the nodule.
- the assignment of the lobe may involve selection of the lobe to assign to the nodule. For example, a pixel/voxel that corresponds to a nodule centroid may be determined and a lobe that contains the pixel/voxel may be selected as the lobe to assign to the nodule.
- One or more additional components 1016 of the architecture 1000 may further process the image based on the selected (assigned) lobe.
- a lobe image generation component 1010 may be configured to generate a lobe image that includes the selected lobe and excludes other lobes. That is, the lobe image generation component 1010 may be configured to perform some or all functionalities described in the block 508 of Figure 5-2, such as generating a binary image.
- a distance calculation component 1012 may be configured to calculate a distance metric between a given pixel/voxel and the closest pixel/voxel on the boundary of the selected lobe.
- a color assignment component 1014 may be configured to assign a color to a pixel/voxel based on the distance metric. The assigned color may be determined based on a range of colors, including grayscale, that maps a color to a distance.
- the distance calculation component 1012 and the color assignment component 1014 may be configured to perform some or all functionalities described in the block 510.
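- Taken together, the components of the architecture 1000 can be composed roughly as shown below (an illustrative chaining of the earlier sketches in this description, not the disclosed implementation; segment_fn stands in for the trained segmentation network component 1002, and assign_lobe, exterior_distance_map, and distance_map_filter are the hypothetical helpers sketched above):

```python
import numpy as np

def generate_distance_coded_image(ct_volume: np.ndarray, nodule_location: np.ndarray,
                                  segment_fn, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Chain segmentation, nodule-to-lobe assignment, distance calculation, and color assignment."""
    lobe_labels = segment_fn(ct_volume)                        # trained segmentation network 1002
    lobe = assign_lobe(nodule_location, lobe_labels)           # nodule classifier 1006
    lobe_mask = lobe_labels == lobe                            # lobe image generation 1010 (binary)
    distances = exterior_distance_map(lobe_mask, spacing)      # distance calculation component 1012
    return distance_map_filter(distances, ~lobe_mask)          # color assignment component 1014
```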
- Conditional language used herein such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is intended in its ordinary sense and is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
- indefinite articles (“a” and “an”) may indicate “one or more” rather than “one.”
- an operation performed “based on” a condition or event may also be performed based on one or more other conditions or events not explicitly recited.
- the spatially relative terms "outer," "inner," "upper," "lower," "below," "above," "vertical," "horizontal," and similar terms, may be used herein for ease of description to describe the relations between one element or component and another element or component as illustrated in the drawings. It will be understood that the spatially relative terms are intended to encompass different orientations of the device in use or operation, in addition to the orientation depicted in the drawings. For example, in the case where a device shown in the drawing is turned over, the device positioned "below" or "beneath" another device may be placed "above" another device. Accordingly, the illustrative term "below" may include both the lower and upper positions. The device may also be oriented in the other direction, and thus the spatially relative terms may be interpreted differently depending on the orientations.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Surgery (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Animal Behavior & Ethology (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Optics & Photonics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- General Physics & Mathematics (AREA)
- Biophysics (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Radiology & Medical Imaging (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Endoscopes (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
Description
Claims
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP23906216.9A EP4637501A1 (en) | 2022-12-19 | 2023-12-18 | Lobuar segmentation of lung and measurement of nodule distance to lobe boundary |
| CN202380094288.0A CN120693094A (en) | 2022-12-19 | 2023-12-18 | Lung lobule segmentation and distance measurement between nodules and lobe boundaries |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263476147P | 2022-12-19 | 2022-12-19 | |
| US63/476,147 | 2022-12-19 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024134467A1 true WO2024134467A1 (en) | 2024-06-27 |
Family
ID=91587795
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2023/062891 Ceased WO2024134467A1 (en) | 2022-12-19 | 2023-12-18 | Lobuar segmentation of lung and measurement of nodule distance to lobe boundary |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4637501A1 (en) |
| CN (1) | CN120693094A (en) |
| WO (1) | WO2024134467A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119722711A (en) * | 2024-12-23 | 2025-03-28 | 广东工业大学 | A pulmonary nodule segmentation method and device based on ABVM-UNet |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050207630A1 (en) * | 2002-02-15 | 2005-09-22 | The Regents Of The University Of Michigan Technology Management Office | Lung nodule detection and classification |
| US20190244347A1 (en) * | 2015-08-14 | 2019-08-08 | Elucid Bioimaging Inc. | Methods and systems for utilizing quantitative imaging |
| WO2021138096A1 (en) * | 2019-12-30 | 2021-07-08 | Intuitive Surgical Operations, Inc. | Systems and methods for indicating approach to an anatomical boundary |
| US20220160433A1 (en) * | 2020-11-20 | 2022-05-26 | Auris Health, Inc. | AI-Based Automatic Tool Presence And Workflow/Phase/Activity Recognition |
-
2023
- 2023-12-18 CN CN202380094288.0A patent/CN120693094A/en active Pending
- 2023-12-18 EP EP23906216.9A patent/EP4637501A1/en active Pending
- 2023-12-18 WO PCT/IB2023/062891 patent/WO2024134467A1/en not_active Ceased
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20050207630A1 (en) * | 2002-02-15 | 2005-09-22 | The Regents Of The University Of Michigan Technology Management Office | Lung nodule detection and classification |
| US20190244347A1 (en) * | 2015-08-14 | 2019-08-08 | Elucid Bioimaging Inc. | Methods and systems for utilizing quantitative imaging |
| WO2021138096A1 (en) * | 2019-12-30 | 2021-07-08 | Intuitive Surgical Operations, Inc. | Systems and methods for indicating approach to an anatomical boundary |
| US20220160433A1 (en) * | 2020-11-20 | 2022-05-26 | Auris Health, Inc. | AI-Based Automatic Tool Presence And Workflow/Phase/Activity Recognition |
Non-Patent Citations (1)
| Title |
|---|
| SGANGA JAKE; ENG DAVID; GRAETZEL CHAUNCEY; CAMARILLO DAVID: "OffsetNet: Deep Learning for Localization in the Lung using Rendered Images", 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE, 20 May 2019 (2019-05-20), pages 5046 - 5052, XP033593888, DOI: 10.1109/ICRA.2019.8793940 * |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119722711A (en) * | 2024-12-23 | 2025-03-28 | 广东工业大学 | A pulmonary nodule segmentation method and device based on ABVM-UNet |
| CN119722711B (en) * | 2024-12-23 | 2025-10-10 | 广东工业大学 | A pulmonary nodule segmentation method and device based on ABVM-UNet |
Also Published As
| Publication number | Publication date |
|---|---|
| EP4637501A1 (en) | 2025-10-29 |
| CN120693094A (en) | 2025-09-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12414823B2 (en) | Anatomical feature tracking | |
| US12226168B2 (en) | Systems and methods for registration of location sensors | |
| US12285223B2 (en) | Systems and methods of registration for image-guided surgery | |
| US12295672B2 (en) | Robotic systems for determining a roll of a medical device in luminal networks | |
| US12478444B2 (en) | Systems and methods for localization based on machine learning | |
| CN110831486B (en) | System and method for position sensor-based branch prediction | |
| EP3334324B1 (en) | Systems and methods of registration for image-guided surgery | |
| US11207141B2 (en) | Systems and methods for weight-based registration of location sensors | |
| US11944422B2 (en) | Image reliability determination for instrument localization | |
| WO2024134467A1 (en) | Lobuar segmentation of lung and measurement of nodule distance to lobe boundary | |
| US20250308066A1 (en) | Pose estimation using intensity thresholding and point cloud analysis | |
| WO2025229542A1 (en) | Target localization for percutaneous access |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23906216 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2025536029 Country of ref document: JP Kind code of ref document: A |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2025536029 Country of ref document: JP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2023906216 Country of ref document: EP |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 202380094288.0 Country of ref document: CN |
|
| WWP | Wipo information: published in national office |
Ref document number: 202380094288.0 Country of ref document: CN |
|
| WWP | Wipo information: published in national office |
Ref document number: 2023906216 Country of ref document: EP |