WO2024186659A1 - Generation of high-resolution medical images using a machine learning model - Google Patents
Generation of high-resolution medical images using a machine learning model
- Publication number
- WO2024186659A1 (PCT/US2024/018155)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- machine learning
- ground truth
- learning model
- resolution
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
Definitions
- one or more medical images may be used for planning a medical procedure.
- a medical image depicting airways of a lung may be used to determine a pathway (e.g., through the airways of the lung) that may access a location, such as a lesion, within the lung.
- the medical image may have a low resolution such that the medical image may fail to accurately depict the anatomical object.
- the medical image may fail to depict smaller portions of airways extending within the lung. Because the anatomical object might not be accurately depicted within the medical image, the medical procedure may be difficult and/or time consuming to plan, particularly for relatively less-experienced personnel.
- the 3D model might not be accurately generated due to lower quality of some medical images (e.g., lower resolution images and/or images with larger slice thicknesses).
- An illustrative apparatus includes a memory storing instructions and a processor communicatively coupled to the memory and configured to execute the instructions to perform a process comprising: accessing a first image depicting an anatomical object, the first image having a first resolution; generating, based on the first image and using a machine learning model, a second image depicting the anatomical object, the second image having a second resolution greater than the first resolution; and generating, based on the second image, a three-dimensional (3D) model of the anatomical object.
- a process comprising: accessing a first image depicting an anatomical object, the first image having a first resolution; generating, based on the first image and using a machine learning model, a second image depicting the anatomical object, the second image having a second resolution greater than the first resolution; and generating, based on the second image, a three-dimensional (3D) model of the anatomical object.
- An illustrative method includes accessing a first image depicting an anatomical object, the first image having a first resolution; generating, based on the first image and using a machine learning model, a second image depicting the anatomical object, the second image having a second resolution greater than the first resolution; and generating, based on the second image, a three-dimensional (3D) model of the anatomical object.
- An illustrative non-transitory computer-readable medium may store instructions that, when executed, direct a processor of a computing device to perform a process comprising: accessing a first image depicting an anatomical object, the first image having a first resolution; generating, based on the first image and using a machine learning model, a second image depicting the anatomical object, the second image having a second resolution greater than the first resolution; and generating, based on the second image, a three-dimensional (3D) model of the anatomical object.
- An illustrative apparatus for training a machine learning model configured to receive low resolution images having a first resolution and output super resolution images having a second resolution higher than the first resolution may include a memory storing instructions and one or more processors communicatively coupled to the memory and configured to execute the instructions to perform a process comprising: accessing a three-dimensional (3D) ground truth image; generating, based on the 3D ground truth image, a 3D simulated image having a slice thickness greater than a slice thickness of the 3D ground truth image; providing the 3D simulated image as an input to the machine learning model, the machine learning model configured to output, based on the 3D simulated image and one or more parameters of the machine learning model, a sequence of 3D super resolution images; and adjusting, based on providing the 3D simulated image as the input to the machine learning model, the one or more parameters of the machine learning model to reduce a difference between the 3D ground truth image and the sequence of 3D super resolution images output by the machine learning model.
- An illustrative method includes accessing a three-dimensional (3D) ground truth image; generating, based on the 3D ground truth image, a 3D simulated image having a slice thickness greater than a slice thickness of the 3D ground truth image; providing the 3D simulated image as an input to the machine learning model, the machine learning model configured to output, based on the 3D simulated image and one or more parameters of the machine learning model, a sequence of 3D super resolution images; and adjusting, based on providing the 3D simulated image as the input to the machine learning model, the one or more parameters of the machine learning model to reduce a difference between the slice thickness of the 3D ground truth image and a slice thickness of the sequence of 3D super resolution images output by the machine learning model.
- An illustrative non-transitory computer-readable medium may store instructions that, when executed, direct a processor of a computing device to perform a process comprising accessing a three-dimensional (3D) ground truth image; generating, based on the 3D ground truth image, a 3D simulated image having a lower resolution than a resolution of the 3D ground truth image; providing the 3D simulated image as an input to the machine learning model, the machine learning model configured to output, based on the 3D simulated image and one or more parameters of the machine learning model, a sequence of 3D super resolution images; and adjusting, based on providing the 3D simulated image as the input to the machine learning model, the one or more parameters of the machine learning model to reduce a difference between the 3D ground truth image and the sequence of 3D super resolution images output by the machine learning model
- FIG. 1 depicts an illustrative implementation including an image generation system.
- FIG. 2 depicts another illustrative implementation including an image generation system.
- FIG. 3 depicts an illustrative method of operating an image generation system.
- FIG. 4 depicts another illustrative method of operating an image generation system.
- FIG. 5A depicts an illustrative implementation of a 3D model of an anatomical object based on low resolution images.
- FIG. 5B depicts an illustrative implementation of a 3D model of an anatomical object based on high resolution images.
- FIG. 6 depicts an illustrative implementation for training a machine learning model.
- FIG. 7 depicts an illustrative method for training a machine learning model.
- FIG. 8 depicts another illustrative method for training a machine learning model.
- FIG. 9 depicts an illustrative computing system.
- FIG. 10 depicts a simplified diagram of a medical system according to some embodiments.
- FIG. 11 A depicts a simplified diagram of a medical instrument system according to some embodiments.
- FIG. 11 B depicts a simplified diagram of a medical instrument including a medical tool within an elongate device according to some embodiments.
- FIGS. 12A and 12B are simplified diagrams of side views of a patient coordinate space including a medical instrument mounted on an insertion assembly according to some embodiments.
- An illustrative image generation system may include a machine learning module configured to transform low resolution images depicting an anatomical object to high resolution images depicting the anatomical object (e.g., for the planning of a medical procedure).
- the image generation system may be configured to access a first image depicting an anatomical object and generate, based on the first image and using a machine learning model, a second image depicting the anatomical object such that the second image has a second resolution greater than a first resolution of the first image.
- the image generation system may further be configured to generate, based on the second image, a three-dimensional (3D) model of the anatomical object.
- an operation associated with the 3D model may be performed, such as using the 3D model to plan a medical procedure associated with the anatomical object of the 3D model.
- the machine learning model may be trained, such as by generating, based on a 3D ground truth image, a 3D simulated image having a lower resolution than the 3D ground truth image, providing the 3D simulated image as an input to the machine learning model, the machine learning model configured to output, based on the 3D simulated image and one or more parameters of the machine learning model, a sequence of 3D super resolution images, and adjusting, based on providing the 3D simulated image as the input to the machine learning model, the one or more parameters of the machine learning model to reduce a difference between the 3D ground truth image and the sequence of 3D super resolution images output by the machine learning model.
- the principles described herein may result in an improved representation of an anatomical object compared to conventional techniques for transforming low resolution images of the anatomical object to high resolution images, as well as provide other benefits as described herein.
- using a machine learning model to transform low resolution images of the anatomical object to high resolution images may allow the anatomical object to be depicted more accurately (e.g., by showing additional portions of the anatomical object) such that the planning of a medical procedure associated with the anatomical object may be performed more quickly and/or easily.
- high resolution images generated by the machine learning model may depict smaller portions of airways that extend within a lung more accurately than low resolution images, such that determining a pathway through the airways of the lung to access a location in the lung may be performed more easily and/or quickly. Additionally, the high resolution images generated by the machine learning model may, in some instances, eliminate an additional procedure to recapture images of the anatomical object at a high resolution, which may conserve time and/or resources and/or reduce the amount of radiation exposure for the patient.
- FIG. 1 shows an illustrative implementation 100 comprising an image generation system 102 configured to transform low resolution images of an anatomical object to high resolution images of the anatomical object.
- Implementation 100 may include additional or alternative components as may serve a particular implementation.
- implementation 100 or certain components of implementation 100 may be implemented by a computer-assisted medical system.
- the resolution of an image may be indicative of one or more of a number of data points that may be expressed in a common coordinate frame of the image (e.g., two-dimensional (2D) pixels or 3D voxels), a spacing of the pixels or voxels, a slice thickness, a number of slices, an amount of noise, and/or an amount of blurriness of the image.
- a low resolution image as referred to herein may include a relatively low number of pixels or voxels, a relatively large spacing of the pixels or voxels, a relatively large slice thickness (e.g., a slice thickness greater than about 3 millimeters, such as from about 3 millimeters to about 6 millimeters), a relatively low number of slices, a relatively large amount of noise, and/or a large amount of blurriness.
- a high resolution image may include one or more of a relatively large number of pixels or voxels, a relatively small spacing of the pixels or voxels, a relatively small slice thickness (e.g., a slice thickness smaller than about 3 millimeters, such as from about 0.5 millimeters to about 1 millimeter), a relatively large number of slices, a relatively small amount of noise, and/or a relatively small amount of blurriness.
- a high resolution image has a resolution that is higher than a resolution of a low resolution image.
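- The following is a minimal, non-limiting sketch of how the resolution characteristics listed above (voxel counts, voxel spacing, slice thickness, number of slices) might be captured and compared in code. The 3 millimeter slice-thickness cutoff comes from the examples above; the class and function names are illustrative only and not part of the disclosure.

```python
# Hedged sketch: a simple descriptor for the "resolution" characteristics described
# above, with a slice-thickness-based low/high resolution check. Names and the
# 3 mm cutoff are illustrative assumptions drawn from the surrounding text.
from dataclasses import dataclass

@dataclass
class VolumeResolution:
    num_voxels: tuple          # (nx, ny, nz) voxel counts
    spacing_mm: tuple          # (sx, sy, sz) voxel spacing in millimeters
    slice_thickness_mm: float  # thickness of each slice
    num_slices: int            # number of slices in the volume

def is_low_resolution(res: VolumeResolution, thickness_cutoff_mm: float = 3.0) -> bool:
    """Treat a volume as 'low resolution' when its slice thickness exceeds the cutoff."""
    return res.slice_thickness_mm > thickness_cutoff_mm

# Example: a 5 mm-thick CT series would be flagged as low resolution, a 1 mm series would not.
low = VolumeResolution((512, 512, 60), (0.7, 0.7, 5.0), 5.0, 60)
high = VolumeResolution((512, 512, 300), (0.7, 0.7, 1.0), 1.0, 300)
assert is_low_resolution(low) and not is_low_resolution(high)
```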
- Image generation system 102 may be implemented by one or more computing devices and/or computer resources (e.g., processors, memory devices, storage devices, etc.) as may serve a particular implementation.
- image generation system 102 may include, without limitation, a memory 104 and a processor 106 selectively and communicatively coupled to one another.
- Memory 104 and processor 106 may each include or be implemented by computer hardware that is configured to store and/or process computer software.
- Various other components of computer hardware and/or software not explicitly shown in FIG. 1 may also be included within image generation system 102.
- memory 104 and/or processor 106 may be distributed between multiple devices and/or multiple locations as may serve a particular implementation.
- Memory 104 may store and/or otherwise maintain executable data used by processor 106 to perform any of the functionality described herein.
- memory 104 may store instructions 108 that may be executed by processor 106.
- Memory 104 may be implemented by one or more memory or storage devices, including any memory or storage devices described herein, that are configured to store data in a transitory or non-transitory manner.
- Instructions 108 may be executed by processor 106 to cause image generation system 102 to perform any of the functionality described herein.
- Instructions 108 may be implemented by any suitable application, software, code, and/or other executable data instance.
- memory 104 may also maintain any other data accessed, managed, used, and/or transmitted by processor 106 in a particular implementation.
- Processor 106 may be implemented by one or more computer processing devices, including general purpose processors (e.g., central processing units (CPUs), graphics processing units (GPUs), microprocessors, etc.), special purpose processors (e.g., application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), etc.), image signal processors, or the like.
- For example, when processor 106 is directed to perform operations represented by instructions 108 stored in memory 104, image generation system 102 may perform various operations as described herein.
- FIG. 2 shows another illustrative implementation 200 configured to transform low resolution images of an anatomical object to high resolution images of the anatomical object.
- implementation 200 includes an image generation system 202 communicatively coupled (e.g., wired and/or wirelessly) with a user interface system 204.
- Implementation 200 may include additional or alternative components as may serve a particular implementation.
- implementation 200 or certain components of implementation 200 may be implemented by a computer-assisted medical system.
- Image generation system 202 may implement or be similar to image generation system 102 and may be configured to access low resolution images 206.
- Low resolution images 206 may have a first resolution, such as a low resolution.
- Low resolution images 206 may depict an anatomical object, such as an object associated with a subject (e.g., a body of a live animal, a human or animal cadaver, a portion of human or animal anatomy, tissue removed from human or animal anatomies, nontissue work pieces, training models, etc.).
- anatomical object may include tissue of a subject (e.g., an organ, soft tissue, connective tissue, etc.).
- low resolution images 206 may comprise low resolution medical images (e.g., a positron emission tomography and computed tomography (PET/CT) image, a tomosynthesis image, a regular computed tomography (CT) image, a positron emission tomography (PET) image, etc.).
- Low resolution images 206 may include any suitable type of image (e.g., a 2D image along a coronal plane, a sagittal plane, and/or a transverse plane associated with the subject, a 2.5 dimension fusion image, a 2D image with segmentation loss, a 3D image, etc.).
- Image generation system 202 may access low resolution images 206 in any suitable manner.
- image generation system 202 may access data representative of the low resolution images 206 by way of one or more networks (e.g., a local area network, the Internet, etc.), directly from a computing device storing the low resolution images 206, directly from an imaging device configured to generate the low resolution images 206, etc.
- image generation system 202 includes a machine learning model 208 configured to generate, based on low resolution images 206, high resolution images 210 depicting the anatomical object.
- Machine learning model 208 may employ any type of machine learning, deep learning, neural networking, artificial intelligence, and/or other such algorithms as may serve a particular implementation.
- machine learning model 208 may be a deep neural network involving a plurality of convolutional neural network (CNN) layers.
- machine learning model 208 may include an Enhanced Deep Super-Resolution network (EDSR), an Enhanced Super-Resolution Generative Adversarial Network (ESRGAN), etc.
- Machine learning model 208 may include one or more parameters (e.g., how large or granular the CNN layers are set to be, how many CNN layers are used, a patch size, a batch size etc.) for generating high resolution images 210 based on low resolution images 206.
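- As a hedged illustration of machine learning model 208, the sketch below implements an EDSR-style residual convolutional network adapted to single-channel 3D volumes, upsampling only along the slice axis. The disclosure names EDSR and ESRGAN as examples but does not fix an architecture; the layer count, channel width, and upscaling factor here are assumed parameters, not the claimed model.

```python
# Hedged sketch: an EDSR-style residual CNN for 3D volumes, assuming a single-channel
# input and a fixed upscaling factor along the slice (Z) axis. Architecture choices
# (channel width, block count, trilinear upsampling) are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))  # residual connection

class SuperRes3D(nn.Module):
    def __init__(self, channels: int = 32, num_blocks: int = 4, z_scale: int = 4):
        super().__init__()
        self.z_scale = z_scale
        self.head = nn.Conv3d(1, channels, kernel_size=3, padding=1)
        self.body = nn.Sequential(*[ResBlock3D(channels) for _ in range(num_blocks)])
        self.tail = nn.Conv3d(channels, 1, kernel_size=3, padding=1)

    def forward(self, x):  # x: (batch, 1, depth, height, width)
        feat = self.body(self.head(x))
        # Upsample only along the slice axis to emulate a smaller slice thickness.
        feat = F.interpolate(feat, scale_factor=(self.z_scale, 1, 1),
                             mode="trilinear", align_corners=False)
        return self.tail(feat)

# Example: a 16-slice low resolution volume becomes a 64-slice output.
model = SuperRes3D()
low_res = torch.randn(1, 1, 16, 64, 64)
print(model(low_res).shape)  # torch.Size([1, 1, 64, 64, 64])
```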
- High resolution images 210 may have a second resolution greater than the first resolution of low resolution images 206 (e.g., the second image may have a larger number of pixels or voxels, a smaller spacing of the pixels or voxels, a smaller slice thickness, a larger number of slices, a smaller amount of noise, and/or a smaller amount of blurriness than the first image).
- high resolution images 210 may have a high resolution, such as a high resolution that may correspond to high resolution medical images (e.g., a high resolution computed tomography (CT) image).
- the increased resolution of high resolution images 210 may allow high resolution images 210 to depict additional portions and/or features of the anatomical object than low resolution images 206.
- Image generation system 202 of the illustrated implementation further includes a 3D model generator 212 configured to generate, based on high resolution images 210, a 3D model 214 of the anatomical object.
- 3D model generator 212 may be configured to fuse or otherwise combine high resolution images 210 output by machine learning model 208 to generate 3D model 214.
- the fusing may include stitching non-overlapping voxels or pixels together, such as by stitching images together along non-overlapping boundaries of the images.
- high resolution images 210 may depict the anatomical object at different viewpoints such that the fusing may include merging aligned (or overlapping) voxels or pixels, such as by blending intensity and/or depth values for aligned voxels or pixels.
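- A minimal sketch of the fusing described above, assuming the high resolution images arrive as 3D slabs: non-overlapping slabs are stitched by concatenation along the slice axis, and any overlapping slices are blended by averaging their intensities. The slab layout and the averaging rule are illustrative assumptions; the disclosure leaves the exact fusing strategy open.

```python
# Hedged sketch of fusing: stitch non-overlapping slabs along the slice axis and
# blend overlapping (aligned) slices by averaging intensities.
import numpy as np

def fuse_slabs(slab_a: np.ndarray, slab_b: np.ndarray, overlap: int = 0) -> np.ndarray:
    """Combine two (depth, height, width) slabs that may share `overlap` slices."""
    if overlap == 0:
        return np.concatenate([slab_a, slab_b], axis=0)        # stitch along boundary
    blended = 0.5 * (slab_a[-overlap:] + slab_b[:overlap])      # blend aligned voxels
    return np.concatenate([slab_a[:-overlap], blended, slab_b[overlap:]], axis=0)

a = np.random.rand(20, 64, 64)
b = np.random.rand(20, 64, 64)
print(fuse_slabs(a, b).shape)             # (40, 64, 64) with no overlap
print(fuse_slabs(a, b, overlap=4).shape)  # (36, 64, 64) with 4 blended slices
```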
- Image generation system 202 may be configured to output 3D model 214 to user interface system 204.
- User interface system 204 of the illustrated implementation comprises a display device 216 and a user input device 218.
- Display device 216 may be implemented by a monitor or other suitable device configured to display information to a user.
- display device 216 may be configured to display 3D model 214 or other information based on high resolution images 210.
- User input device 218 may be implemented by any suitable device or devices (e.g., a button, joystick, touchscreen, keyboard, handle, microphone, etc.) configured to receive a user input, for example, to interact with the display presented by display device 216.
- user input device 218 may be used to interact with 3D model 214, such as to plan a medical procedure associated with the anatomical object of 3D model 214.
- FIG. 3 shows an illustrative method 300 that may be performed by image generation system 202. While FIG. 3 illustrates example operations according to one embodiment, other embodiments may omit, add to, reorder, and/or modify any of the operations shown in FIG. 3. Moreover, each of the operations depicted in FIG. 3 may be performed in any of the ways described herein.
- image generation system 202 may, at operation 302, access a first image (e.g., one or more of low resolution images 206) depicting an anatomical object and having a first resolution.
- the first image may have a low resolution and may, in some instances, include a low resolution medical image (e.g., a positron emission tomography and computed tomography (PET/CT) image, a tomosynthesis image, a regular computed tomography (CT) image, a positron emission tomography (PET) image, etc.).
- accessing the first image may include processing (e.g., fusing, stitching, merging, etc.) the first image.
- Image generation system 202 may, at operation 304, generate, based on the first image, a second image (e.g., one or more of high resolution images 210) depicting the anatomical object and having a second resolution greater than the first resolution.
- image generation system 202 may use machine learning model 208 to transform the first image to the second image having a greater resolution.
- generating the second image may comprise one or more of sharpening the first image, reducing noise from the first image, decreasing a slice thickness of the first image, and/or increasing a number of pixels and/or voxels of the first image.
- Image generation system 202 may, at operation 306, generate, based on the second image, a 3D model (e.g., 3D model 214) of the anatomical object.
- the second image may be associated with data points expressed in pixels and/or voxels such that image generation system 202 may be configured to derive the 3D model of the anatomical object based on the data points.
- image generation system 202 may be configured to derive vertices of the 3D model arranged in a 3D grid configuration and associate the vertices with 3D locations that correspond to locations of the data points of the second image.
- the 3D model may represent the anatomical object in one or more forms (e.g., solid, wireframe, surface, etc.).
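- One common way to derive a surface-form 3D model from voxel data is isosurface extraction; the sketch below uses scikit-image's marching cubes for illustration, with vertex positions scaled by an assumed voxel spacing. The disclosure does not name a specific meshing algorithm, so this is a stand-in technique, not the claimed method.

```python
# Hedged sketch: extract a surface mesh (vertices, faces) from a thresholded volume
# using marching cubes; vertex locations correspond to voxel positions scaled by the
# voxel spacing. The spacing and level values are illustrative assumptions.
import numpy as np
from skimage import measure

def voxels_to_surface(volume: np.ndarray, spacing=(1.0, 1.0, 1.0), level=0.5):
    """Return (vertices, faces) of an isosurface extracted from `volume`."""
    verts, faces, _normals, _values = measure.marching_cubes(volume, level=level,
                                                             spacing=spacing)
    return verts, faces

# Example: a synthetic sphere stands in for a segmented anatomical object.
z, y, x = np.mgrid[:64, :64, :64]
sphere = ((z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 20 ** 2).astype(float)
verts, faces = voxels_to_surface(sphere, spacing=(1.0, 0.7, 0.7))
print(verts.shape, faces.shape)
```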
- the second image may include a plurality of second images such that generating the 3D model may include fusing or otherwise combining the second images.
- the fusing may include stitching non-overlapping voxels or pixels together, such as by stitching images together along non-overlapping boundaries of the second images.
- the second images may depict the anatomical object at different viewpoints such that the fusing may include merging aligned (or overlapping) voxels or pixels, such as by blending intensity and/or depth values for aligned voxels or pixels.
- generating the 3D model may include identifying the anatomical object depicted in the second image (e.g., by identifying pixels and/or voxels of the second image associated with the anatomical object).
- the anatomical object may be identified by implementing and applying artificial intelligence algorithms, such as machine learning algorithms. Any suitable form of artificial intelligence and/or machine learning may be used, including, for example, deep learning, neural networks, etc.
- a machine learning algorithm may be generated through machine learning procedures and applied to identification operations.
- the machine learning algorithm may be directed to identifying an anatomical object and/or a feature of an anatomical object within the second image.
- the machine learning algorithm may operate as an identification function that is applied to individual and/or fused imagery to classify the anatomical object in the second image.
- at least a portion of identifying the anatomical object may be implemented by machine learning model 208.
- image generation system 202 may be configured to identify the anatomical object within the second image by implementing and applying object recognition algorithms.
- object recognition algorithm may be used to identify objects (e.g., an anatomical object) of predetermined types within the second image, such as by comparing data associated with the second image to model object data of predetermined types of objects.
- model object data may be stored within a model database that may be communicatively coupled with image generation system 202.
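- As a hedged, non-learning illustration of the identification step, the sketch below thresholds a CT volume (airways appear as low-intensity, air-filled voxels) and keeps the largest connected component as the candidate anatomical object. The -500 HU threshold and the connected-component rule are assumptions for illustration, not values from the disclosure.

```python
# Hedged sketch of a simple identification step: threshold the volume and keep the
# largest low-intensity connected component as the candidate object (e.g., airways).
import numpy as np
from scipy import ndimage

def largest_component_mask(volume_hu: np.ndarray, threshold_hu: float = -500.0) -> np.ndarray:
    """Return a boolean mask of the largest low-intensity connected component."""
    candidate = volume_hu < threshold_hu
    labels, num = ndimage.label(candidate)
    if num == 0:
        return np.zeros_like(candidate, dtype=bool)
    sizes = np.bincount(labels.ravel())
    sizes[0] = 0                                   # ignore the background label
    return labels == sizes.argmax()

volume_hu = np.random.uniform(-1000, 400, size=(32, 64, 64))
mask = largest_component_mask(volume_hu)
print(mask.shape, int(mask.sum()))
```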
- FIG. 4 shows another illustrative method 400 that may be performed by image generation system 202. While FIG. 4 illustrates example operations according to one embodiment, other embodiments may omit, add to, reorder, and/or modify any of the operations shown in FIG. 4. Moreover, each of the operations depicted in FIG. 4 may be performed in any of the ways described herein.
- image generation system 202 may, at operation 402, access a first image depicting an anatomical object and having a first resolution (e.g., a low resolution). Image generation system 202 may further, at operation 404, determine whether the first resolution is below a resolution threshold.
- the resolution threshold may be representative of a threshold associated with one or more of a number of pixels or voxels, a spacing of the pixels or voxels, a slice thickness, a number of slices, an amount of noise, and/or an amount of blurriness of the image.
- the first resolution of the first image may be compared to the resolution threshold to determine whether the first resolution is below the resolution threshold, which may indicate that the first image has a low resolution.
- image generation system 202 may, at operation 406, generate, based on the first image, a second image depicting the anatomical object and having a second resolution (e.g., a high resolution) greater than the first resolution.
- image generation system 202 may use the machine learning model to transform the first image into the second image, which may include one or more of sharpening the first image, reducing noise from the first image, decreasing a slice thickness of the first image, and/or increasing a number of pixels and/or voxels of the first image.
- Image generation system may further, at operation 408, generate a 3D model of the anatomical object based on the second image output by the machine learning model.
- image generation system 202 may, in some instances, be configured to generate the 3D model based on the first image.
- image generation system 202 may further, at operation 410, perform an operation associated with the 3D model of the anatomical object.
- the operation may include providing the 3D model for display by a display device (e.g., display device 216).
- the operation may include using the 3D model to plan a medical procedure associated with the anatomical object.
- a user may interact (e.g., by user input device 218) with the 3D model (e.g., displayed on display device 216) to plan the medical procedure, such as accessing a location of the anatomical object.
- the 3D model may depict airways of a lung such that the 3D model may be used to determine a pathway (e.g., through the airways of the lung of the 3D model) to access a location (e.g., a lesion) within the lung.
- the planning of the medical procedure may be performed prior to and/or during (e.g., in real time) the medical procedure.
- one or more of the first or second images depicting the anatomical object may be used in addition to or instead of the 3D model to plan the medical procedure.
- the operation may include registering the 3D model with the first image.
- the first image may, in some instances, depict a lesion at a location within the anatomical object.
- the 3D model may be registered with the first image to align features of the anatomical object depicted by the 3D model with the lesion depicted in the first image. This may allow a pathway to the location of the lesion of the anatomical object to be determined using the 3D model.
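- Registration of the 3D model with the first image can be illustrated with a rigid, landmark-based alignment (the Kabsch solution), assuming paired landmark points are available in both the model and the image. The disclosure does not specify how the registration is computed; this is a sketch of one standard approach only.

```python
# Hedged sketch: rigid registration of model landmarks onto image landmarks via the
# Kabsch algorithm. Paired landmarks are an assumption for illustration.
import numpy as np

def rigid_register(model_pts: np.ndarray, image_pts: np.ndarray):
    """Return rotation R and translation t mapping model_pts onto image_pts."""
    mc, ic = model_pts.mean(axis=0), image_pts.mean(axis=0)
    h = (model_pts - mc).T @ (image_pts - ic)
    u, _s, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))           # avoid reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = ic - r @ mc
    return r, t

# Example: recover a known rotation/translation from matched landmarks.
rng = np.random.default_rng(0)
model_pts = rng.random((10, 3))
q, _ = np.linalg.qr(rng.random((3, 3)))
if np.linalg.det(q) < 0:                             # force a proper rotation
    q[:, 0] = -q[:, 0]
image_pts = model_pts @ q.T + np.array([5.0, -2.0, 1.0])
r, t = rigid_register(model_pts, image_pts)
print(np.allclose(model_pts @ r.T + t, image_pts, atol=1e-6))  # True
```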
- high resolution images generated by the machine learning model may depict additional portions and/or features of the anatomical object compared to low resolution images.
- FIG. 5A shows an implementation 500 of a low resolution 3D model of an anatomical object based on low resolution images (e.g., low resolution images 206) having a low resolution (e.g., prior to being input to machine learning model 208).
- FIG. 5B shows an implementation 502 of a high resolution 3D model of the anatomical object based on high resolution images (e.g., high resolution images 210) having a high resolution (e.g., output by machine learning model 208) greater than the low resolution of the low resolution images.
- low resolution 3D model 500 and high resolution 3D model 502 both depict a plurality of airways 504 (e.g., airways 504-1 to 504-N) of a lung that may be used to plan a medical procedure associated with airways 504.
- a pathway 506 may be determined via airways 504 of low resolution 3D model 500 to access a target location 508 of the lung.
- the target location 508 is positioned beyond airways 504 of the lung depicted in low resolution 3D model 500.
- pathway 506 may be determined through the airways 504 depicted in low resolution 3D model 500 to an airway location 510 positioned in airways 504 and proximate to the target location 508.
- pathway 506 in the illustrated low resolution 3D model 500 may be determined through a first airway 504-1 and a fourth airway 504-4 of low resolution 3D model 500 to the airway location 510.
- pathway 506 and/or airway location 510 may be determined by a user.
- the user may interact with a user input device (e.g., user input device 218) to designate pathway 506 and/or airway location 510 on low resolution 3D model 500.
- pathway 506 and/or airway location 510 may be automatically determined (e.g., using machine learning algorithms), such as by one or more computing devices (e.g., processor 106).
- FIG. 5B shows high resolution 3D model 502 of airways 504 based on high resolution images having a higher resolution than the low resolution images of low resolution 3D model 500.
- the high resolution images may have a smaller slice thickness, and therefore more slices, than the low resolution images.
- the higher resolution of the high resolution images may allow high resolution 3D model 502 to depict additional airways 504 and/or portions of airways 504 of the lung that may not have been shown in low resolution 3D model 500 such that high resolution 3D model 502 may more accurately correspond to physical airways of the lung.
- high resolution 3D model 502 of the illustrated implementation depicts additional portions of airways 504 (e.g., portions of second airway 504-2, third airway 504-3, and fourth airway 504-4 extending farther into the lung from the first airway 504-1) and additional airways (e.g., fifth airway 504-5) relative to low resolution 3D model 500.
- the additional portions and/or number of airways 504 depicted in high resolution 3D model 502 may be used to determine pathway 506 to an airway location 510 of high resolution 3D model 502 positioned proximate to the target location 508.
- pathway 506 of high resolution 3D model 502 may be determined through the first airway 504-1 and the additional fifth airway 504-5 depicted in high resolution 3D model 502.
- pathway 506 of high resolution 3D model 502 may extend farther through the depicted airways 504 than low resolution 3D model 500 such that the airway location 510 of high resolution 3D model 502 may be more closely positioned to the target location 508 than the airway location 510 of low resolution 3D model 500.
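- A minimal sketch of pathway determination, assuming the airway centerlines are available as a graph: pick the airway node closest to the target location (analogous to airway location 510) and search the graph from the trachea to that node (analogous to pathway 506). The toy coordinates, adjacency, and breadth-first search are illustrative assumptions, not the disclosed planning method.

```python
# Hedged sketch: find the airway node nearest a target and a pathway to it by
# breadth-first search over a toy airway graph.
from collections import deque
import math

airway_coords = {            # node -> (x, y, z) centerline point, in millimeters
    "trachea": (0, 0, 0), "A1": (0, 0, 30), "A2": (-15, 0, 45),
    "A3": (15, 0, 45), "A4": (20, 5, 60), "A5": (25, 10, 75),
}
airway_edges = {             # undirected adjacency of the airway tree
    "trachea": ["A1"], "A1": ["trachea", "A2", "A3"],
    "A2": ["A1"], "A3": ["A1", "A4"], "A4": ["A3", "A5"], "A5": ["A4"],
}

def nearest_airway_node(target_xyz):
    return min(airway_coords, key=lambda n: math.dist(airway_coords[n], target_xyz))

def airway_pathway(start, goal):
    """Breadth-first search returning the node sequence from start to goal."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in airway_edges[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

target = (28, 12, 80)                   # e.g., a lesion just beyond the airways
goal = nearest_airway_node(target)      # analogous to airway location 510
print(airway_pathway("trachea", goal))  # ['trachea', 'A1', 'A3', 'A4', 'A5']
```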
- 3D models depicting airways of a lung may include other features and/or anatomical objects in addition to or instead of airways.
- the medical procedure may include additional or alternative medical procedures associated with the anatomical object.
- the machine learning model may be trained to transform low resolution images depicting the anatomical object to high resolution images depicting the anatomical object.
- FIG. 6 shows an illustrative implementation 600 configured to train machine learning model 208.
- implementation 600 includes a simulated image generator 602 and a parameter adjustment module 604 communicatively coupled (e.g., wired and/or wirelessly) with machine learning model 208.
- Implementation 600 may include additional or alternative components as may serve a particular implementation.
- implementation 600 or certain components of implementation 600 may be implemented by a computer-assisted medical system.
- Simulated image generator 602 may be implemented by one or more computing devices and/or computer resources (e.g., processors, memory devices, storage devices, etc.) configured to decrease the resolution of an image.
- simulated image generator 602 may be at least partially implemented by image generation system 102 and/or image generation system 202.
- simulated image generator 602 may be configured to access a 3D ground truth image 606 and generate, based on 3D ground truth image 606, a 3D simulated image 608 having a lower resolution than 3D ground truth image 606.
- 3D ground truth image 606 may have a high resolution and may depict an anatomical object.
- 3D ground truth image 606 may include a high resolution medical image (e.g., a high resolution CT image).
- 3D simulated image 608 generated by simulated image generator 602 may have a low resolution and may depict the anatomical object of 3D ground truth image 606.
- 3D simulated image 608 may have a resolution corresponding to low resolution medical images.
- 3D ground truth image 606 may include a dataset of 3D ground truth images 606 such that the dataset of 3D ground truth images 606 may include a wide variety of 3D ground truth images 606.
- the dataset of 3D ground truth images 606 may represent anatomical objects having various characteristics (e.g., anatomical objects having different types, different properties, different features, captured from different vantage points, etc.).
- simulated image generator 602 may generate, based on the dataset of 3D ground truth images, a dataset of 3D simulated images 608 such that 3D simulated image 608 may include a dataset of 3D simulated images 608. This may allow the dataset of 3D simulated images 608 to represent anatomical objects having the various characteristics.
- 3D simulated image 608 may be provided as an input to machine learning model 208 such that machine learning model 208 may be configured to output, based on 3D simulated image 608, a sequence of 3D super resolution images 610 having a higher resolution than 3D simulated image 608.
- the resolution of the sequence of 3D super resolution images 610 may correspond to the resolution of 3D ground truth image 606.
- machine learning model 208 may have one or more parameters (e.g., how large or granular the CNN layers are set to be, how many CNN layers are used, a patch size, a batch size etc.) for generating the sequence of super resolution images 610 based on 3D simulated image 608.
- the one or more parameters of machine learning model 208 may be adjustable, such as to adjust the resolution of the sequence of super resolution images 610.
- the one or more parameters of machine learning model 208 may be customized for anatomical objects such that machine learning model 208 may be enabled to generate the sequence of 3D super resolution images 610 depicting anatomical objects.
- the one or more parameters of machine learning model 208 may be optimized and/or fine-tuned for anatomical object modeling as may serve a particular implementation.
- Parameter adjustment module 604 may be implemented by one or more computing devices and/or computer resources (e.g., processors, memory devices, storage devices, etc.) and configured to adjust the one or more parameters of machine learning model 208.
- parameter adjustment module 604 may be configured to adjust, based on providing 3D simulated image 608 as the input to machine learning model 208, the one or more parameters of machine learning model 208 to reduce a difference between 3D ground truth image 606 and the sequence of 3D super resolution images 610 output by machine learning model 208.
- the difference between 3D ground truth image 606 and the sequence of 3D super resolution images 610 may include one or more of a difference in resolution, slice thickness, a number of pixels or voxels, a spacing of pixels or voxels, an amount of noise, a number of slices, or an amount of blurriness between 3D ground truth image 606 and the sequence of 3D super resolution images 610.
- parameter adjustment module 604 may adjust the one or more parameters of machine learning model 208 to adjust one or more of a resolution, a slice thickness, a number of pixels or voxels, a spacing of pixels or voxels, an amount of noise, a number of slices, or an amount of blurriness of the sequence of 3D super resolution images 610 to reduce the difference between 3D ground truth image 606 and the sequence of 3D super resolution images 610.
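- A hedged sketch of the parameter adjustment performed by parameter adjustment module 604: an L1 loss between the 3D ground truth volume and the model's super resolution output is backpropagated to update the model parameters. The stand-in model (which could equally be the EDSR-style sketch shown earlier), optimizer, learning rate, and loss choice are assumptions; the disclosure only requires that the difference be reduced.

```python
# Hedged sketch of one parameter-adjustment step: compute a loss between the ground
# truth and the super resolution output, then update the model parameters to reduce it.
import torch
import torch.nn as nn

def training_step(model, optimizer, simulated, ground_truth):
    """One adjustment step; `simulated` and `ground_truth` are 5D (N, C, D, H, W) tensors."""
    optimizer.zero_grad()
    super_res = model(simulated)                              # super resolution output
    loss = torch.nn.functional.l1_loss(super_res, ground_truth)
    loss.backward()                                           # gradients w.r.t. parameters
    optimizer.step()                                          # reduce the difference
    return loss.item()

# A stand-in super-resolution model (any model mapping low-res to high-res volumes works).
model = nn.Sequential(
    nn.Upsample(scale_factor=(4, 1, 1), mode="trilinear", align_corners=False),
    nn.Conv3d(1, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
simulated = torch.randn(1, 1, 16, 64, 64)      # low resolution / thick slices
ground_truth = torch.randn(1, 1, 64, 64, 64)   # thin-slice ground truth
print(training_step(model, optimizer, simulated, ground_truth))
```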
- FIG. 7 shows an illustrative method 700 that may be performed to train machine learning model 208 to receive low resolution images having a first resolution and output super resolution images having a second resolution higher than the first resolution. While FIG. 7 illustrates example operations according to one embodiment, other embodiments may omit, add to, reorder, and/or modify any of the operations shown in FIG. 7. Moreover, each of the operations depicted in FIG. 7 may be performed in any of the ways described herein. Method 700 may be performed by one or more computing devices and/or computer resources (e.g., processors, memory devices, storage devices, etc.).
- method 700 may include, at operation 702, accessing a 3D ground truth image (e.g., 3D ground truth image 606).
- the 3D ground truth image may include a high resolution image, such as a high resolution medical image depicting an anatomical object.
- Method 700 may further include, at operation 704, generating, based on the 3D ground truth image, a 3D simulated image (e.g., 3D simulated image 608).
- the 3D simulated image may have a lower resolution and/or a greater slice thickness than the 3D ground truth image.
- generating the 3D simulated image may include downsampling the 3D ground truth image in a Z-axis direction of an XYZ coordinate system such that the Z-axis is associated with the slice thickness of the 3D ground truth image.
- generating the 3D simulated image may include one or more of downsampling the 3D ground truth image in an X-axis direction of the XYZ coordinate system, downsampling the 3D ground truth image in a Y-axis direction of the XYZ coordinate system, blurring the 3D ground truth image, injecting noise into the 3D ground truth image, or applying a smoothing function in one or more of the X-axis direction, the Y-axis direction, or the Z-axis direction.
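- A minimal sketch of operation 704 under the degradations listed above: smooth the ground truth volume along the Z axis, keep every Nth slice to emulate a larger slice thickness, and inject noise. The smoothing sigma, slice-thickness factor, and noise level are illustrative assumptions, not values taken from the disclosure.

```python
# Hedged sketch: degrade a thin-slice ground truth volume into a simulated thick-slice
# volume (Z smoothing + slice decimation + noise injection).
import numpy as np
from scipy import ndimage

def simulate_thick_slices(ground_truth: np.ndarray, z_factor: int = 4,
                          blur_sigma: float = 1.0, noise_std: float = 0.01,
                          rng=None) -> np.ndarray:
    """ground_truth: (depth, height, width) volume; returns depth // z_factor slices."""
    rng = rng or np.random.default_rng(0)
    smoothed = ndimage.gaussian_filter1d(ground_truth, sigma=blur_sigma, axis=0)  # Z smoothing
    thick = smoothed[::z_factor]                   # larger slice thickness, fewer slices
    return thick + rng.normal(0.0, noise_std, size=thick.shape)  # simulated noise

ground_truth = np.random.rand(64, 128, 128)        # e.g., ~1 mm slices
simulated = simulate_thick_slices(ground_truth)    # e.g., ~4 mm slices
print(ground_truth.shape, simulated.shape)         # (64, 128, 128) (16, 128, 128)
```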
- Method 700 may further include, at operation 706, providing the 3D simulated image as an input to the machine learning model (e.g., machine learning model 208) such that the machine learning model may be configured to output, based on the 3D simulated image and one or more parameters of the machine learning model, a sequence of 3D super resolution images (e.g., 3D super resolution images 610).
- the sequence of 3D super resolution images output by the machine learning model may have a higher resolution and/or a smaller slice thickness than the 3D simulated image such that the resolution and/or slice thickness of the sequence of 3D super resolution images may correspond to the resolution and/or slice thickness of the 3D ground truth image.
- generating the sequence of 3D super resolution images may include one or more of sharpening the 3D simulated image, reducing noise from the 3D simulated image, upsampling the 3D simulated image in any one or more of an X-axis direction, a Y-axis direction, or a Z-axis direction of an XYZ coordinate system of the 3D simulated image, decreasing a slice thickness of the 3D simulated image, or increasing a number of pixels or voxels of the 3D simulated image.
- machine learning model 208 may decrease the slice thickness of the 3D simulated image such that the sequence of 3D super resolution images may have additional slices relative to the 3D simulated image.
- Method 700 may further include, at operation 708, adjusting, based on providing the 3D simulated image as the input to the machine learning model, the one or more parameters of the machine learning model to reduce a difference between the 3D ground truth image and the sequence of 3D super resolution images output by the machine learning model.
- the one or more parameters of the machine learning model may be adjusted to reduce a difference between the airways (e.g., a number of airways, features of the airways, etc.) depicted in the 3D ground truth image and the sequence of 3D super resolution images output by the machine learning model.
- FIG. 8 shows another illustrative method 800 that may be performed to train machine learning model 208 to receive low resolution images having a first resolution and output super resolution images having a second resolution higher than the first resolution. While FIG. 8 illustrates example operations according to one embodiment, other embodiments may omit, add to, reorder, and/or modify any of the operations shown in FIG. 8. Moreover, each of the operations depicted in FIG. 8 may be performed in any of the ways described herein. Method 800 may be performed by one or more computing devices and/or computer resources (e.g., processors, memory devices, storage devices, etc.).
- method 800 may include, at operation 802, accessing a 3D ground truth image.
- Method 800 may further include, at operation 804, determining whether the 3D ground truth image has a resolution above a resolution threshold.
- the resolution threshold may be representative of a select threshold of one or more of a slice thickness, a number of pixels or voxels, a spacing of pixels or voxels, an amount of noise, a number of slices, or an amount of blurriness.
- the resolution of the 3D ground truth image may be compared to the resolution threshold to determine whether the 3D ground truth image is above the resolution threshold, which may indicate that the 3D ground truth image has a sufficiently high resolution (e.g., for training the machine learning model).
- a resolution at or below the resolution threshold may indicate that the 3D ground truth image has a resolution that is too low (e.g., for training the machine learning model).
- method 800 may further include, at operation 806, generating, based on the 3D ground truth image, a 3D simulated image having a lower resolution than the 3D ground truth image.
- the 3D simulated image may not be generated based on the 3D ground truth image. For example, the training of the machine learning model may be discontinued and/or the 3D ground truth image may be discarded such that another 3D ground truth image may be accessed.
- method 800 may further include, at operation 808, providing the 3D simulated image as an input to the machine learning model such that the machine learning model may be configured to output, based on the 3D simulated image and one or more parameters of the machine learning model, a sequence of 3D super resolution images.
- the sequence of 3D super resolution images output by the machine learning model may have a higher resolution and/or a smaller slice thickness than the 3D simulated image.
- Method 800 may further include, at operation 810, determining whether a difference between the 3D ground truth image and the sequence of 3D super resolution images output by the machine learning model is outside of a predetermined range (e.g., a percentage, a ratio, etc.). For example, the 3D ground truth image and the sequence of 3D super resolution images may be compared to determine whether there is a difference in one or more of a resolution, a slice thickness, a number of pixels or voxels, a spacing of the pixels or voxels, an amount of noise, a number of slices, or an amount of blurriness between the 3D ground truth image and the sequence of 3D super resolution images.
- the difference may be determined to exist when the difference between the 3D ground truth image and the sequence of 3D super resolution images is outside of the predetermined range. Alternatively, the difference may be determined to not exist when the difference between the 3D ground truth image and the sequence of 3D super resolution images is within the predetermined range, which may indicate that the sequence of 3D super resolution images sufficiently corresponds to the 3D ground truth image.
- method 800 may further include, at operation 812, adjusting, based on providing the 3D simulated image as the input to the machine learning model, the one or more parameters of the machine learning model to reduce the difference between the 3D ground truth image and the sequence of 3D super resolution images output by the machine learning model.
- the sequence of 3D super resolution images may be regenerated by the machine learning model.
- the one or more parameters of the machine learning model may continue to be adjusted (e.g., until the difference is reduced and/or within the predetermined range).
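- A hedged sketch of the two checks in method 800: operation 804's resolution gate on the 3D ground truth image and operation 810's test of whether the remaining difference is outside a predetermined range. The cutoff values and the mean-absolute-difference measure are assumptions standing in for the unspecified criteria.

```python
# Hedged sketch: the resolution gate (operation 804) and the difference-in-range
# test (operation 810) from method 800, with assumed thresholds.
import numpy as np

def ground_truth_resolution_ok(slice_thickness_mm: float, num_slices: int,
                               max_thickness_mm: float = 1.5, min_slices: int = 100) -> bool:
    """Operation 804: accept only sufficiently thin-slice, many-slice ground truth volumes."""
    return slice_thickness_mm <= max_thickness_mm and num_slices >= min_slices

def difference_outside_range(ground_truth: np.ndarray, super_res: np.ndarray,
                             rel_tolerance: float = 0.05) -> bool:
    """Operation 810: report whether more parameter adjustment is needed."""
    diff = np.mean(np.abs(ground_truth - super_res))
    scale = np.mean(np.abs(ground_truth)) + 1e-8
    return (diff / scale) > rel_tolerance

gt = np.random.rand(64, 128, 128)
sr = gt + np.random.normal(0.0, 0.01, size=gt.shape)   # a close reconstruction
print(ground_truth_resolution_ok(1.0, 300), difference_outside_range(gt, sr))  # True False
```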
- the machine learning model may be implemented (e.g., by image generation system 102 and/or image generation system 202).
- one or more of the processes described herein may be implemented at least in part as instructions embodied in a non-transitory computer- readable medium and executable by one or more computing devices.
- For example, a processor (e.g., a microprocessor) receives instructions from a non-transitory computer-readable medium (e.g., a memory, etc.) and executes those instructions, thereby performing one or more processes, including one or more of the processes described herein.
- Such instructions may be stored and/or transmitted using any of a variety of known computer-readable media.
- a computer-readable medium includes any non-transitory medium that participates in providing data (e.g., instructions) that may be read by a computer (e.g., by a processor of a computer).
- a medium may take many forms, including, but not limited to, non-volatile media, and/or volatile media.
- Non-volatile media may include, for example, optical or magnetic disks and other persistent memory.
- Volatile media may include, for example, dynamic random access memory (“DRAM”), which typically constitutes a main memory.
- Computer-readable media include, for example, a disk, hard disk, magnetic tape, any other magnetic medium, a compact disc read-only memory (“CD-ROM”), a digital video disc (“DVD”), any other optical medium, random access memory (“RAM”), programmable read-only memory (“PROM”), erasable programmable read-only memory (“EPROM”), FLASH-EEPROM, any other memory chip or cartridge, or any other tangible medium from which a computer can read.
- FIG. 9 shows an illustrative computing device 900 that may be specifically configured to perform one or more of the processes described herein. Any of the systems, computing devices, and/or other components described herein may be implemented by computing device 900.
- computing device 900 may include a communication interface 902, a processor 904, a storage device 906, and an input/output (“I/O”) module 908 communicatively connected one to another via a communication infrastructure 910. While an illustrative computing device 900 is shown in FIG. 9, the components illustrated in FIG. 9 are not intended to be limiting. Additional or alternative components may be used in other embodiments. Components of computing device 900 shown in FIG. 9 will now be described in additional detail.
- Communication interface 902 may be configured to communicate with one or more computing devices.
- Examples of communication interface 902 include, without limitation, a wired network interface (such as a network interface card), a wireless network interface (such as a wireless network interface card), a modem, an audio/video connection, and any other suitable interface.
- Processor 904 generally represents any type or form of processing unit capable of processing data and/or interpreting, executing, and/or directing execution of one or more of the instructions, processes, and/or operations described herein.
- Processor 904 may perform operations by executing computer-executable instructions 912 (e.g., an application, software, code, and/or other executable data instance) stored in storage device 906.
- Storage device 906 may include one or more data storage media, devices, or configurations and may employ any type, form, and combination of data storage media and/or device.
- storage device 906 may include, but is not limited to, any combination of the non-volatile media and/or volatile media described herein.
- Electronic data, including data described herein, may be temporarily and/or permanently stored in storage device 906.
- data representative of computer-executable instructions 912 configured to direct processor 904 to perform any of the operations described herein may be stored within storage device 906.
- data may be arranged in one or more databases residing within storage device 906.
- I/O module 908 may include one or more I/O modules configured to receive user input and provide user output.
- I/O module 908 may include any hardware, firmware, software, or combination thereof supportive of input and output capabilities.
- I/O module 908 may include hardware and/or software for capturing user input, including, but not limited to, a keyboard or keypad, a touchscreen component (e.g., touchscreen display), a receiver (e.g., an RF or infrared receiver), motion sensors, and/or one or more input buttons.
- I/O module 908 may include one or more devices for presenting output to a user, including, but not limited to, a graphics engine, a display (e.g., a display screen), one or more output drivers (e.g., display drivers), one or more audio speakers, and one or more audio drivers.
- I/O module 908 is configured to provide graphical data to a display for presentation to a user.
- the graphical data may be representative of one or more graphical user interfaces and/or any other graphical content as may serve a particular implementation.
- FIG. 10 is a simplified diagram of a medical system 1000 according to some embodiments.
- the medical system 1000 may be suitable for use in, for example, surgical, diagnostic (e.g., biopsy), or therapeutic (e.g., ablation, electroporation, etc.) procedures. While some embodiments are provided herein with respect to such procedures, any reference to medical or surgical instruments and medical or surgical methods is non-limiting.
- the systems, instruments, and methods described herein may be used for animals, human cadavers, animal cadavers, portions of human or animal anatomy, non-surgical diagnosis, as well as for industrial systems, general or special purpose robotic systems, general or special purpose teleoperational systems, or robotic medical systems.
- medical system 1000 may include a manipulator assembly 1002 that controls the operation of a medical instrument 1004 in performing various procedures on a patient P.
- Medical instrument 1004 may extend into an internal site within the body of patient P via an opening in the body of patient P.
- the manipulator assembly 1002 may be teleoperated, non-teleoperated, or a hybrid teleoperated and non-teleoperated assembly with one or more degrees of freedom of motion that may be motorized and/or one or more degrees of freedom of motion that may be non-motorized (e.g., manually operated).
- the manipulator assembly 1002 may be mounted to and/or positioned near a patient table T.
- a master assembly 1006 allows an operator O (e.g., a surgeon, a clinician, a physician, or other user) to control the manipulator assembly 1002.
- the master assembly 1006 allows the operator O to view the procedural site or other graphical or informational displays.
- the manipulator assembly 1002 may be excluded from the medical system 1000 and the instrument 1004 may be controlled directly by the operator O.
- the manipulator assembly 1002 may be manually controlled by the operator O. Direct operator control may include various handles and operator interfaces for hand-held operation of the instrument 1004.
- the master assembly 1006 may be located at a surgeon’s console which is in proximity to (e.g., in the same room as) a patient table T on which patient P is located, such as at the side of the patient table T. In some examples, the master assembly 1006 is remote from the patient table T, such as in a different room or a different building from the patient table T.
- the master assembly 1006 may include one or more control devices for controlling the manipulator assembly 1002.
- the control devices may include any number of a variety of input devices, such as joysticks, trackballs, scroll wheels, directional pads, buttons, data gloves, trigger-guns, hand-operated controllers, voice recognition devices, motion or presence sensors, and/or the like.
- the manipulator assembly 1002 supports the medical instrument 1004 and may include a kinematic structure of links that provide a set-up structure.
- the links may include one or more non-servo controlled links (e.g., one or more links that may be manually positioned and locked in place) and/or one or more servo controlled links (e.g., one or more links that may be controlled in response to commands, such as from a control system 1012).
- the manipulator assembly 1002 may include a plurality of actuators (e.g., motors) that drive inputs on the medical instrument 1004 in response to commands, such as from the control system 1012.
- the actuators may include drive systems that move the medical instrument 1004 in various ways when coupled to the medical instrument 1004.
- one or more actuators may advance medical instrument 1004 into a naturally or surgically created anatomic orifice.
- Actuators may control articulation of the medical instrument 1004, such as by moving the distal end (or any other portion) of medical instrument 1004 in multiple degrees of freedom.
- degrees of freedom may include three degrees of linear motion (e.g., linear motion along the X, Y, Z Cartesian axes) and three degrees of rotational motion (e.g., rotation about the X, Y, Z Cartesian axes).
- One or more actuators may control rotation of the medical instrument about a longitudinal axis.
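- As a hedged illustration of the degrees of freedom described above (the class name, field names, and units are assumptions for this sketch, not part of the disclosure), a six-degree-of-freedom motion command could be represented as follows:

```python
from dataclasses import dataclass

@dataclass
class MotionCommand6Dof:
    """Hypothetical 6-DOF command for the distal end of a medical instrument."""
    # Linear motion along the X, Y, Z Cartesian axes (e.g., millimetres).
    dx: float
    dy: float
    dz: float
    # Rotational motion about the X, Y, Z Cartesian axes (e.g., radians).
    roll: float
    pitch: float
    yaw: float

# Example: advance 2 mm along Z while pitching the tip slightly.
cmd = MotionCommand6Dof(dx=0.0, dy=0.0, dz=2.0, roll=0.0, pitch=0.05, yaw=0.0)
```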
- Actuators can also be used to move an articulable end effector of medical instrument 1004, such as for grasping tissue in the jaws of a biopsy device and/or the like, or may be used to move or otherwise control tools (e.g., imaging tools, ablation tools, biopsy tools, electroporation tools, etc.) that are inserted within the medical instrument 1004.
- the medical system 1000 may include a sensor system 1008 with one or more sub-systems for receiving information about the manipulator assembly 1002 and/or the medical instrument 1004.
- Such sub-systems may include a position sensor system (e.g., that uses electromagnetic (EM) sensors or other types of sensors that detect position or location); a shape sensor system for determining the position, orientation, speed, velocity, pose, and/or shape of a distal end and/or of one or more segments along a flexible body of the medical instrument 1004; a visualization system (e.g., using a color imaging device, an infrared imaging device, an ultrasound imaging device, an x-ray imaging device, a fluoroscopic imaging device, a computed tomography (CT) imaging device, a magnetic resonance imaging (MRI) imaging device, or some other type of imaging device) for capturing images, such as from the distal end of medical instrument 1004 or from some other location; and/or actuator position sensors such as resolvers, encoders, potentiometers, and the like that describe the motion of the actuators of the manipulator assembly 1002.
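- As one possible way to picture how such sub-systems could report data (a sketch only; the structure and field names are assumptions, not taken from the disclosure):

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SensorSnapshot:
    """Hypothetical bundle of one time step of readings from a sensor system."""
    em_position_xyz: Optional[Tuple[float, float, float]] = None  # position sensor reading
    fiber_shape_points: List[Tuple[float, float, float]] = field(default_factory=list)  # shape sensor samples along the flexible body
    camera_frame: Optional[bytes] = None  # raw image from the visualization system
    actuator_encoder_counts: List[int] = field(default_factory=list)  # resolver/encoder/potentiometer readings
```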
- the medical system 1000 may include a display system 1010 for displaying an image or representation of the procedural site and the medical instrument 1004.
- Display system 1010 and master assembly 1006 may be oriented so operator O can control medical instrument 1004 and master assembly 1006 with the perception of telepresence.
- the medical instrument 1004 may include a visualization system, which may include an image capture assembly that records a concurrent or real-time image of a procedural site and provides the image to the operator O through one or more displays of display system 1010.
- the image capture assembly may include various types of imaging devices.
- the concurrent image may be, for example, a two-dimensional image or a three-dimensional image captured by an endoscope positioned within the anatomical procedural site.
- the visualization system may include endoscopic components that may be integrally or removably coupled to medical instrument 1004. Additionally or alternatively, a separate endoscope, attached to a separate manipulator assembly, may be used with medical instrument 1004 to image the procedural site.
- the visualization system may be implemented as hardware, firmware, software or a combination thereof which interact with or are otherwise executed by one or more computer processors, such as of the control system 1012.
- Display system 1010 may also display an image of the procedural site and medical instruments, which may be captured by the visualization system.
- the medical system 1000 provides a perception of telepresence to the operator O.
- images captured by an imaging device at a distal portion of the medical instrument 1004 may be presented by the display system 1010 to provide the perception of being at the distal portion of the medical instrument 1004 to the operator O.
- the input to the master assembly 1006 provided by the operator O may move the distal portion of the medical instrument 1004 in a manner that corresponds with the nature of the input (e.g., distal tip turns right when a trackball is rolled to the right) and results in corresponding change to the perspective of the images captured by the imaging device at the distal portion of the medical instrument 1004.
- the perception of telepresence for the operator O is maintained as the medical instrument 1004 is moved using the master assembly 1006.
- the operator O can manipulate the medical instrument 1004 and hand controls of the master assembly 1006 as if viewing the workspace in substantially true presence, simulating the experience of an operator that is physically manipulating the medical instrument 1004 from within the patient anatomy.
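- The mapping from master input to instrument motion might look, in a much simplified and purely illustrative form (the function, gain, and sign conventions are assumptions), like the following:

```python
def trackball_to_tip_command(delta_x: float, delta_y: float, gain: float = 0.01) -> dict:
    """Hypothetical mapping of a trackball input to distal-tip motion.

    Rolling the trackball to the right (positive delta_x) yaws the tip to the
    right; rolling it forward (positive delta_y) pitches the tip down, so the
    camera view changes in the direction of the operator's input.
    """
    return {"yaw": gain * delta_x, "pitch": -gain * delta_y}

# Example: the operator rolls the trackball 30 units to the right.
command = trackball_to_tip_command(30.0, 0.0)  # {'yaw': 0.3, 'pitch': -0.0}
```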
- the display system 1010 may present virtual images of a procedural site that are created using image data recorded pre-operatively (e.g., prior to the procedure performed by the medical instrument system 1100) or intra-operatively (e.g., concurrent with the procedure performed by the medical instrument system 1100), such as image data created using computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), fluoroscopy, thermography, ultrasound, optical coherence tomography (OCT), thermal imaging, impedance imaging, laser imaging, nanotube X-ray imaging, and/or the like.
- the virtual images may include two-dimensional, three-dimensional, or higher-dimensional (e.g., including, for example, time based or velocity-based information) images.
- one or more models are created from pre-operative or intra-operative image data sets and the virtual images are generated using the one or more models.
- display system 1010 may display a virtual image that is generated based on tracking the location of medical instrument 1004.
- the tracked location of the medical instrument 1004 may be registered (e.g., dynamically referenced) with the model generated using the pre-operative or intra-operative images, with different portions of the model corresponding with different locations of the patient anatomy.
- the registration is used to determine portions of the model corresponding with the location and/or perspective of the medical instrument 1004 and virtual images are generated using the determined portions of the model. This may be done to present the operator O with virtual images of the internal procedural site from viewpoints of medical instrument 1004 that correspond with the tracked locations of the medical instrument 1004.
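- A minimal sketch of this lookup, assuming a rigid 4x4 registration transform and a model stored as labeled sample points (all names and the nearest-point strategy are assumptions for illustration):

```python
import numpy as np

def model_region_at_instrument(tracked_tip_xyz, registration_T, model_points, model_labels):
    """Hypothetical lookup of the model portion nearest a tracked instrument tip.

    tracked_tip_xyz : (3,) tip position in the tracking/sensor frame.
    registration_T  : (4, 4) rigid transform from the tracking frame to the
                      model (image) frame, produced by registration.
    model_points    : (N, 3) sample points of the anatomic model.
    model_labels    : length-N labels (e.g., airway branch names).
    """
    p = np.append(np.asarray(tracked_tip_xyz, dtype=float), 1.0)
    tip_in_model = (registration_T @ p)[:3]
    nearest = int(np.argmin(np.linalg.norm(model_points - tip_in_model, axis=1)))
    return model_labels[nearest], tip_in_model  # e.g., used to render the virtual view
```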
- the medical system 1000 may also include the control system 1012, which may include processing circuitry that implements some or all of the methods or functionality discussed herein.
- the control system 1012 may include at least one memory and at least one processor for controlling the operations of the manipulator assembly 1002, the medical instrument 1004, the master assembly 1006, the sensor system 1008, and/or the display system 1010.
- Control system 1012 may include instructions (e.g., a non-transitory machine-readable medium storing the instructions) that, when executed by the at least one processor, configure the at least one processor to implement some or all of the methods or functionality discussed herein. While the control system 1012 is shown as a single block in FIG. 10, the control system 1012 may include two or more separate data processing circuits, with one portion of the processing being performed at the manipulator assembly 1002, another portion of the processing being performed at the master assembly 1006, and/or the like.
- control system 1012 may include other types of processing circuitry, such as application-specific integrated circuits (ASICs) and/or field-programmable gate arrays (FPGAs).
- the control system 1012 may be implemented using hardware, firmware, software, or a combination thereof.
- control system 1012 may receive feedback from the medical instrument 1004, such as force and/or torque feedback. Responsive to the feedback, the control system 1012 may transmit signals to the master assembly 1006. In some examples, the control system 1012 may transmit signals instructing one or more actuators of the manipulator assembly 1002 to move the medical instrument 1004. In some examples, the control system 1012 may transmit informational displays regarding the feedback to the display system 1010 for presentation or perform other types of actions based on the feedback.
- the control system 1012 may include a virtual visualization system to provide navigation assistance to operator O when controlling the medical instrument 1004 during an image-guided medical procedure.
- Virtual navigation using the virtual visualization system may be based upon an acquired pre-operative or intra-operative dataset of anatomic passageways of the patient P.
- the control system 1012 or a separate computing device may convert the recorded images, using programmed instructions alone or in combination with operator inputs, into a model of the patient anatomy.
- the model may include a segmented two-dimensional or three-dimensional composite representation of a partial or an entire anatomic organ or anatomic region.
- An image data set may be associated with the composite representation.
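- As a hedged, deliberately crude illustration of producing a segmented representation from recorded images (the function, threshold, and approach are assumptions; practical pipelines use far more sophisticated methods such as region growing or learned segmentation):

```python
import numpy as np

def segment_airways(ct_volume: np.ndarray, threshold_hu: float = -900.0) -> np.ndarray:
    """Illustrative segmentation: voxels darker than a threshold (air-filled)
    are marked as part of the airway lumen, yielding a binary mask that could
    serve as a composite representation of an anatomic region."""
    return (ct_volume < threshold_hu).astype(np.uint8)

# Example: a synthetic 64^3 "CT" volume in Hounsfield units.
mask = segment_airways(np.random.uniform(-1000, 400, size=(64, 64, 64)))
```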
- the virtual visualization system may obtain sensor data from the sensor system 1008 that is used to compute an (e.g., approximate) location of the medical instrument 1004 with respect to the anatomy of patient P.
- the sensor system 1008 may be used to register and display the medical instrument 1004 together with the pre-operatively or intra- operatively recorded images.
- Further description of registration for image-guided surgery, which may be applicable in some embodiments, is provided in PCT Publication WO 2016/191298 (published December 1, 2016 and titled “Systems and Methods of Registration for Image Guided Surgery”), which is incorporated by reference herein in its entirety.
- the sensor system 1008 may be used to compute the (e.g., approximate) location of the medical instrument 1004 with respect to the anatomy of patient P.
- the location can be used to produce both macro-level (e.g., external) tracking images of the anatomy of patient P and virtual internal images of the anatomy of patient P.
- the system may include one or more electromagnetic (EM) sensors, fiber optic sensors, and/or other sensors to register and display a medical instrument together with pre-operatively recorded medical images.
- Medical system 1000 may further include operations and support systems (not shown) such as illumination systems, steering control systems, irrigation systems, and/or suction systems.
- the medical system 1000 may include more than one manipulator assembly and/or more than one master assembly.
- the exact number of manipulator assemblies may depend on the medical procedure and space constraints within the procedural room, among other factors. Multiple master assemblies may be co-located or they may be positioned in separate locations. Multiple master assemblies may allow more than one operator to control one or more manipulator assemblies in various combinations.
- FIG. 11A is a simplified diagram of a medical instrument system 1100 according to some embodiments.
- the medical instrument system 1100 includes a flexible elongate device 1102 (also referred to as elongate device 1102), a drive unit 1104, and a medical tool 1126 that collectively are an example of a medical instrument 1004 of a medical system 1000.
- the medical system 1000 may be a teleoperated system, a non-teleoperated system, or a hybrid teleoperated and non-teleoperated system, as explained with reference to FIG. 10.
- a visualization system 1131, tracking system 1130, and navigation system 1132 are also shown in FIG. 11A and are example components of the control system 1012 of the medical system 1000.
- the medical instrument system 1100 may be used for non-teleoperational exploratory procedures or in procedures involving traditional manually operated medical instruments, such as endoscopy.
- the medical instrument system 1100 may be used to gather (e.g., measure) a set of data points corresponding to locations within anatomic passageways of a patient, such as patient P.
- Medical instrument system 1100 may include the tracking system 1130 for determining the position, orientation, speed, velocity, pose, and/or shape of the flexible body 1116 at the distal end 1118 and/or of one or more segments 1124 along flexible body 1116, as will be described in further detail below.
- the tracking system 1130 may include one or more sensors and/or imaging devices.
- the flexible body 1116, such as the length between the distal end 1118 and the proximal end 1117, may include multiple segments 1124.
- the tracking system 1130 may be implemented using hardware, firmware, software, or a combination thereof. In some examples, the tracking system 1130 is part of control system 1012 shown in FIG. 10.
- Tracking system 1130 may track the distal end 1118 and/or one or more of the segments 1124 of the flexible body 1116 using a shape sensor 1122.
- the shape sensor 1122 may include an optical fiber aligned with the flexible body 1116 (e.g., provided within an interior channel of the flexible body 1116 or mounted externally along the flexible body 1116).
- the optical fiber may have a diameter of approximately 200 μm. In other examples, the diameter may be larger or smaller.
- the optical fiber of the shape sensor 1122 may form a fiber optic bend sensor for determining the shape of flexible body 1116.
- Optical fibers including Fiber Bragg Gratings (FBGs) may be used to provide strain measurements in structures in one or more dimensions.
- the shape of the flexible body 1116 may be determined using other techniques. For example, a history of the position and/or pose of the distal end 1118 of the flexible body 1116 can be used to reconstruct the shape of flexible body 1116 over an interval of time (e.g., as the flexible body 1116 is advanced or retracted within a patient anatomy).
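- In a highly simplified, planar form (segment lengths, angle source, and names are assumptions; real fiber-optic shape sensing is three-dimensional and considerably more involved), integrating per-segment bend angles into a centerline could look like this:

```python
import math

def integrate_shape(segment_length_mm: float, bend_angles_rad: list) -> list:
    """Very simplified planar shape reconstruction.

    Treats the flexible body as a chain of straight segments whose relative
    bend angles (e.g., derived from FBG strain readings) are accumulated from
    the proximal end toward the distal end.
    """
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for bend in bend_angles_rad:
        heading += bend
        x += segment_length_mm * math.cos(heading)
        y += segment_length_mm * math.sin(heading)
        points.append((x, y))
    return points

# Example: ten 10 mm segments, each bending 0.05 rad, trace a gentle arc.
centerline = integrate_shape(10.0, [0.05] * 10)
```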
- the tracking system 1130 may alternatively and/or additionally track the distal end 1118 of the flexible body 1116 using a position sensor system 1120.
- Position sensor system 1120 may be a component of an EM sensor system with the position sensor system 1120 including one or more position sensors.
- the position sensor system 1120 is shown as being near the distal end 1118 of the flexible body 1116 to track the distal end 1118, the number and location of the position sensors of the position sensor system 1120 may vary to track different regions along the flexible body 1116.
- the position sensors include conductive coils that may be subjected to an externally generated electromagnetic field. Each coil of position sensor system 1120 may produce an induced electrical signal having characteristics that depend on the position and orientation of the coil relative to the externally generated electromagnetic field.
- the position sensor system 1120 may measure one or more position coordinates and/or one or more orientation angles associated with one or more portions of flexible body 1116.
- the position sensor system 1120 may be configured and positioned to measure six degrees of freedom, e.g., three position coordinates X, Y, Z and three orientation angles indicating pitch, yaw, and roll of a base point. In some examples, the position sensor system 1120 may be configured and positioned to measure five degrees of freedom, e.g., three position coordinates X, Y, Z and two orientation angles indicating pitch and yaw of a base point. Further description of a position sensor system, which may be applicable in some embodiments, is provided in U.S. Patent No. 6,380,732 (filed August 11, 1999 and titled “Six-Degree of Freedom Tracking System Having a Passive Transponder on the Object Being Tracked”), which is incorporated by reference herein in its entirety.
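- The distinction between five- and six-degree-of-freedom measurements could be pictured as follows (a sketch with assumed names; not part of the disclosure):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EmMeasurement:
    """Hypothetical EM sensor reading: x, y, z plus pitch and yaw; roll only for 6-DOF sensors."""
    x: float
    y: float
    z: float
    pitch: float
    yaw: float
    roll: Optional[float] = None  # None for a 5-DOF sensor

    @property
    def degrees_of_freedom(self) -> int:
        return 6 if self.roll is not None else 5
```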
- the tracking system 1130 may alternately and/or additionally rely on a collection of pose, position, and/or orientation data stored for a point of an elongate device 1102 and/or medical tool 1126 captured during one or more cycles of alternating motion, such as breathing. This stored data may be used to develop shape information about the flexible body 1116.
- a series of position sensors, such as EM sensors like those in position sensor system 1120 or some other type of position sensors, may be positioned along the flexible body 1116 and used for shape sensing.
- a history of data from one or more of these position sensors taken during a procedure may be used to represent the shape of elongate device 1102, particularly if an anatomic passageway is generally static.
- Medical tool 1126 may be, for example, an image capture probe, a biopsy tool (e.g., a needle, grasper, brush, etc.), an ablation tool (e.g., a laser ablation tool, radio frequency (RF) ablation tool, cryoablation tool, thermal ablation tool, heated liquid ablation tool, etc.), an electroporation tool, and/or another surgical, diagnostic, or therapeutic tool.
- the medical tool 1126 may include an end effector having a single working member such as a scalpel, a blunt blade, an optical fiber, an electrode, and/or the like.
- Other types of end effectors may include, for example, forceps, graspers, scissors, staplers, clip appliers, and/or the like.
- Other end effectors may further include electrically activated end effectors such as electrosurgical electrodes, transducers, sensors, and/or the like.
- the medical tool 1126 may be a biopsy tool used to remove sample tissue or a sampling of cells from a target anatomic location.
- the biopsy tool is a flexible needle.
- the biopsy tool may further include a sheath that can surround the flexible needle to protect the needle and interior surface of the channel 1121 when the biopsy tool is within the channel 1121.
- the medical tool 1126 may be an image capture probe that includes a distal portion with a stereoscopic or monoscopic camera that may be placed at or near the distal end 1118 of flexible body 1116 for capturing images (e.g., still or video images).
- the captured images may be processed by the visualization system 1131 for display and/or provided to the tracking system 1130 to support tracking of the distal end 1118 of the flexible body 1116 and/or one or more of the segments 1124 of the flexible body 1116.
- the image capture probe may include a cable, coupled to an imaging device at the distal portion of the image capture probe, for transmitting the captured image data.
- the image capture probe may include a fiber-optic bundle, such as a fiberscope, that couples to a more proximal imaging device of the visualization system 1131.
- the image capture probe may be single-spectral or multi-spectral, for example, capturing image data in one or more of the visible, near-infrared, infrared, and/or ultraviolet spectrums.
- the image capture probe may also include one or more light emitters that provide illumination to facilitate image capture.
- the image capture probe may use ultrasound, x-ray, fluoroscopy, CT, MRI, or other types of imaging technology.
- the image capture probe is inserted within the flexible body 1116 of the elongate device 1102 to facilitate visual navigation of the elongate device 1102 to a procedural site and then is replaced within the flexible body 1116 with another type of medical tool 1126 that performs the procedure.
- the image capture probe may be within the flexible body 1116 of the elongate device 1102 along with another type of medical tool 1126 to facilitate simultaneous image capture and tissue intervention, such as within the same channel 1121 or in separate channels.
- a medical tool 1126 may be advanced from the opening of the channel 1121 to perform the procedure (or some other functionality) and then retracted back into the channel 1121 when the procedure is complete.
- the medical tool 1126 may be removed from the proximal end 1117 of the flexible body 1116 or from another optional instrument port (not shown) along flexible body 1116.
- the elongate device 1102 may include integrated imaging capability rather than utilize a removable image capture probe.
- the imaging device (or fiber-optic bundle) and the light emitters may be located at the distal end 1118 of the elongate device 1102.
- the flexible body 1116 may include one or more dedicated channels that carry the cable(s) and/or optical fiber(s) between the distal end 1118 and the visualization system 1131.
- the medical instrument system 1100 can perform simultaneous imaging and tool operations.
- the medical tool 1126 is capable of controllable articulation.
- the medical tool 1126 may house cables (which may also be referred to as pull wires), linkages, or other actuation controls (not shown) that extend between its proximal and distal ends to controllably bend the distal end of medical tool 1126, such as discussed herein for the flexible elongate device 1102.
- the medical tool 1126 may be coupled to a drive unit 1104 and the manipulator assembly 1002.
- the elongate device 1102 may be excluded from the medical instrument system 1100 or may be a flexible device that does not have controllable articulation. Steerable instruments or tools, applicable in some embodiments, are further described in detail in U.S. Patent No.
- the flexible body 1116 of the elongate device 1102 may also or alternatively house cables, linkages, or other steering controls (not shown) that extend between the drive unit 1104 and the distal end 1118 to controllably bend the distal end 1118 as shown, for example, by broken dashed line depictions 1119 of the distal end 1118 in FIG. 11 A.
- at least four cables are used to provide independent up-down steering to control a pitch of the distal end 1118 and left-right steering to control a yaw of the distal end 1118.
- the flexible elongate device 1102 may be a steerable catheter.
- steerable catheters are described in detail in PCT Publication WO 2019/018736 (published Jan. 24, 2019 and titled “Flexible Elongate Device Systems and Methods”), which is incorporated by reference herein in its entirety.
- the drive unit 1104 may include drive inputs that removably couple to and receive power from drive elements, such as actuators, of the teleoperational assembly.
- the elongate device 1102 and/or medical tool 1126 may include gripping features, manual actuators, or other components for manually controlling the motion of the elongate device 1102 and/or medical tool 1126.
- the elongate device 1102 may be steerable or, alternatively, the elongate device 1102 may be non-steerable with no integrated mechanism for operator control of the bending of distal end 1118.
- one or more channels 1121 (which may also be referred to as lumens), through which medical tools 1126 can be deployed and used at a target anatomical location, may be defined by the interior walls of the flexible body 1116 of the elongate device 1102.
- the medical instrument system 1100 may include a flexible bronchial instrument, such as a bronchoscope or bronchial catheter, for use in examination, diagnosis, biopsy, and/or treatment of a lung.
- the medical instrument system 1100 may also be suited for navigation and treatment of other tissues, via natural or surgically created connected passageways, in any of a variety of anatomic systems, including the colon, the intestines, the kidneys and kidney calices, the brain, the heart, the circulatory system including vasculature, and/or the like.
- the information from the tracking system 1130 may be sent to the navigation system 1132, where the information may be combined with information from the visualization system 1131 and/or pre-operatively obtained models to provide the physician, clinician, surgeon, or other operator with real-time position information.
- the real-time position information may be displayed on the display system 1010 for use in the control of the medical instrument system 1100.
- the navigation system 1132 may utilize the position information as feedback for positioning medical instrument system 1100.
- Various systems for using fiber optic sensors to register and display a surgical instrument with surgical images are provided in U.S. Patent No. 8,900,131 (filed May 13, 2011 and titled “Medical System Providing Dynamic Registration of a Model of an Anatomic Structure for Image-Guided Surgery”), which is incorporated by reference herein in its entirety.
- FIGS. 12A and 12B are simplified diagrams of side views of a patient coordinate space including a medical instrument mounted on an insertion assembly according to some embodiments.
- a surgical environment 1200 may include a patient P positioned on the patient table T.
- Patient P may be stationary within the surgical environment 1200 in the sense that gross patient movement is limited by sedation, restraint, and/or other means. Cyclic anatomic motion, including respiration and cardiac motion, of patient P may continue.
- a medical instrument 1204 is used to perform a medical procedure which may include, for example, surgery, biopsy, ablation, illumination, irrigation, suction, or electroporation.
- the medical instrument 1204 may also be used to perform other types of procedures, such as a registration procedure to associate the position, orientation, and/or pose data captured by the sensor system 1008 to a desired (e.g., anatomical or system) reference frame.
- the medical instrument 1204 may be, for example, the medical instrument 1004.
- the medical instrument 1204 may include an elongate device 1210 (e.g., a catheter) coupled to an instrument body 1212.
- Elongate device 1210 includes one or more channels sized and shaped to receive a medical tool.
- Elongate device 1210 may also include one or more sensors (e.g., components of the sensor system 1008).
- a shape sensor 1214 may be fixed at a proximal point 1216 on the instrument body 1212.
- the proximal point 1216 of the shape sensor 1214 may be movable with the instrument body 1212, and the location of the proximal point 1216 with respect to a desired reference frame may be known (e.g., via a tracking sensor or other tracking device).
- the shape sensor 1214 may measure a shape from the proximal point 1216 to another point, such as a distal end 1218 of the elongate device 1210.
- the shape sensor 1214 may be aligned with the elongate device 1210 (e.g., provided within an interior channel or mounted externally).
- the shape sensor 1214 may include optical fibers used to generate shape information for the elongate device 1210.
- a series of position sensors (e.g., EM sensors) may be positioned along the flexible elongate device 1210 and used for shape sensing.
- Position sensors may be used alternatively to the shape sensor 1214 or with the shape sensor 1214, such as to improve the accuracy of shape sensing or to verify shape information.
- Elongate device 1210 may house cables, linkages, or other steering controls that extend between the instrument body 1212 and the distal end 1218 to controllably bend the distal end 1218.
- at least four cables are used to provide independent up-down steering to control a pitch of distal end 1218 and left-right steering to control a yaw of distal end 1218.
- the instrument body 1212 may include drive inputs that removably couple to and receive power from drive elements, such as actuators, of a manipulator assembly.
- the instrument body 1212 may be coupled to an instrument carriage 1206.
- the instrument carriage 1206 may be mounted to an insertion stage 1208 that is fixed within the surgical environment 1200.
- the insertion stage 1208 may be movable but have a known location (e.g., via a tracking sensor or other tracking device) within surgical environment 1200.
- Instrument carriage 1206 may be a component of a manipulator assembly (e.g., manipulator assembly 1002) that couples to the medical instrument 1204 to control insertion motion (e.g., motion along an insertion axis A) and/or motion of the distal end 1218 of the elongate device 1210 in multiple directions, such as yaw, pitch, and/or roll.
- the instrument carriage 1206 or insertion stage 1208 may include actuators, such as servomotors, that control motion of instrument carriage 1206 along the insertion stage 1208.
- a sensor device 1220, which may be a component of the sensor system 1008, may provide information about the position of the instrument body 1212 as it moves relative to the insertion stage 1208 along the insertion axis A.
- the sensor device 1220 may include one or more resolvers, encoders, potentiometers, and/or other sensors that measure the rotation and/or orientation of the actuators controlling the motion of the instrument carriage 1206, thus indicating the motion of the instrument body 1212.
- the insertion stage 1208 has a linear track as shown in FIGS. 12A and 12B.
- the insertion stage 1208 may have a curved track or a combination of curved and linear track sections.
- FIG. 12A shows the instrument body 1212 and the instrument carriage 1206 in a retracted position along the insertion stage 1208.
- the proximal point 1216 is at a position L0 on the insertion axis A.
- the location of the proximal point 1216 may be set to a zero value and/or other reference value to provide a base reference (e.g., corresponding to the origin of a desired reference frame) to describe the position of the instrument carriage 1206 along the insertion stage 1208.
- the distal end 1218 of the elongate device 1210 may be positioned just inside an entry orifice of patient P.
- the instrument body 1212 and the instrument carriage 1206 have advanced along the linear track of insertion stage 1208, and the distal end 1218 of the elongate device 1210 has advanced into patient P.
- the proximal point 1216 is at a position L1 on the insertion axis A.
- the rotation and/or orientation of the actuators measured by the sensor device 1220 (indicating movement of the instrument carriage 1206 along the insertion stage 1208), and/or one or more position sensors associated with the instrument carriage 1206 and/or the insertion stage 1208, may be used to determine the position L1 of the proximal point 1216 relative to the position L0.
- the position L1 may further be used as an indicator of the distance or insertion depth to which the distal end 1218 of the elongate device 1210 is inserted into the passageway(s) of the anatomy of patient P.
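- As a hedged numerical illustration of this relationship (the conversion factor and function are assumptions, not from the disclosure), encoder counts from a sensor device could be converted into insertion depth along the insertion axis as follows:

```python
def insertion_depth_mm(encoder_counts: int, counts_per_mm: float = 100.0, reference_counts: int = 0) -> float:
    """Hypothetical conversion of carriage encoder counts into insertion depth.

    With the retracted position (L0) taken as the zero/reference value, the
    returned distance approximates L1 - L0, i.e., how far the distal end has
    been advanced into the patient's anatomy along insertion axis A.
    """
    return (encoder_counts - reference_counts) / counts_per_mm

# Example: 12,500 counts past the retracted reference corresponds to ~125 mm.
depth = insertion_depth_mm(12_500)
```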
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Processing (AREA)
Abstract
An example image generation system may be configured to access a first image depicting an anatomical object, generate, based on the first image and using a machine learning model, a second image depicting the anatomical object and having a higher resolution than a resolution of the first image, and generate, based on the second image, a three-dimensional model of the anatomical object. The machine learning model may be trained to generate the second image using various techniques.
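As a hedged illustration of the kind of machine learning model the abstract describes (a sketch only; the architecture, scale factor, and library are assumptions, not taken from the publication), a small convolutional super-resolution network could upscale a low-resolution image slice like this:

```python
import torch
import torch.nn as nn

class SimpleSuperResolution(nn.Module):
    """Toy CNN that upscales a single-channel image slice by a factor of 2."""
    def __init__(self, channels: int = 1, scale: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a higher-resolution image
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        return self.net(low_res)

# Example: a 1-channel 128x128 slice becomes 256x256.
model = SimpleSuperResolution()
high_res = model(torch.randn(1, 1, 128, 128))  # shape: (1, 1, 256, 256)
```

Such a model would typically be trained against higher-resolution ground truth images; the choice of loss, training data, and upscaling factor would depend on the imaging modality.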
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202480017307.4A CN120826699A (zh) | 2023-03-06 | 2024-03-01 | 使用机器学习模型生成高分辨率医学图像 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US63/450,250 | 2022-09-09 | ||
| US202363450250P | 2023-03-06 | 2023-03-06 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024186659A1 true WO2024186659A1 (fr) | 2024-09-12 |
Family
ID=90718070
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/018155 Pending WO2024186659A1 (fr) | 2023-03-06 | 2024-03-01 | Génération d'images médicales haute résolution à l'aide d'un modèle d'apprentissage automatique |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN120826699A (fr) |
| WO (1) | WO2024186659A1 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119469218A (zh) * | 2024-10-24 | 2025-02-18 | 中国船舶集团有限公司第七一五研究所 | 一种基于人工神经网络的低分辨布里渊频率谱解调方法 |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6380732B1 (en) | 1997-02-13 | 2002-04-30 | Super Dimension Ltd. | Six-degree of freedom tracking system having a passive transponder on the object being tracked |
| US20060013523A1 (en) | 2004-07-16 | 2006-01-19 | Luna Innovations Incorporated | Fiber optic position and shape sensing device and method relating thereto |
| US7316681B2 (en) | 1996-05-20 | 2008-01-08 | Intuitive Surgical, Inc | Articulated surgical instrument for performing minimally invasive surgery with enhanced dexterity and sensitivity |
| US7772541B2 (en) | 2004-07-16 | 2010-08-10 | Luna Innnovations Incorporated | Fiber optic position and/or shape sensing based on rayleigh scatter |
| US8773650B2 (en) | 2009-09-18 | 2014-07-08 | Intuitive Surgical Operations, Inc. | Optical position and/or shape sensing |
| US8900131B2 (en) | 2011-05-13 | 2014-12-02 | Intuitive Surgical Operations, Inc. | Medical system providing dynamic registration of a model of an anatomical structure for image-guided surgery |
| US9259274B2 (en) | 2008-09-30 | 2016-02-16 | Intuitive Surgical Operations, Inc. | Passive preload and capstan drive for surgical instruments |
| WO2016191298A1 (fr) | 2015-05-22 | 2016-12-01 | Intuitive Surgical Operations, Inc. | Systèmes et procédés d'alignement pour chirurgie guidée par image |
| WO2019018736A2 (fr) | 2017-07-21 | 2019-01-24 | Intuitive Surgical Operations, Inc. | Systèmes et procédés de dispositif allongé flexible |
| US20210374911A1 (en) * | 2019-02-28 | 2021-12-02 | Fujifilm Corporation | Learning method, learning system, learned model, program, and super resolution image generating device |
| KR102488676B1 (ko) * | 2021-08-02 | 2023-01-13 | 서울여자대학교 산학협력단 | 딥러닝 기반 ct 영상의 z축 해상도 개선 방법 및 장치 |
2024
- 2024-03-01 CN CN202480017307.4A patent/CN120826699A/zh active Pending
- 2024-03-01 WO PCT/US2024/018155 patent/WO2024186659A1/fr active Pending
Patent Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7316681B2 (en) | 1996-05-20 | 2008-01-08 | Intuitive Surgical, Inc | Articulated surgical instrument for performing minimally invasive surgery with enhanced dexterity and sensitivity |
| US6380732B1 (en) | 1997-02-13 | 2002-04-30 | Super Dimension Ltd. | Six-degree of freedom tracking system having a passive transponder on the object being tracked |
| US20060013523A1 (en) | 2004-07-16 | 2006-01-19 | Luna Innovations Incorporated | Fiber optic position and shape sensing device and method relating thereto |
| US7772541B2 (en) | 2004-07-16 | 2010-08-10 | Luna Innnovations Incorporated | Fiber optic position and/or shape sensing based on rayleigh scatter |
| US9259274B2 (en) | 2008-09-30 | 2016-02-16 | Intuitive Surgical Operations, Inc. | Passive preload and capstan drive for surgical instruments |
| US8773650B2 (en) | 2009-09-18 | 2014-07-08 | Intuitive Surgical Operations, Inc. | Optical position and/or shape sensing |
| US8900131B2 (en) | 2011-05-13 | 2014-12-02 | Intuitive Surgical Operations, Inc. | Medical system providing dynamic registration of a model of an anatomical structure for image-guided surgery |
| WO2016191298A1 (fr) | 2015-05-22 | 2016-12-01 | Intuitive Surgical Operations, Inc. | Systèmes et procédés d'alignement pour chirurgie guidée par image |
| WO2019018736A2 (fr) | 2017-07-21 | 2019-01-24 | Intuitive Surgical Operations, Inc. | Systèmes et procédés de dispositif allongé flexible |
| US20210374911A1 (en) * | 2019-02-28 | 2021-12-02 | Fujifilm Corporation | Learning method, learning system, learned model, program, and super resolution image generating device |
| KR102488676B1 (ko) * | 2021-08-02 | 2023-01-13 | 서울여자대학교 산학협력단 | 딥러닝 기반 ct 영상의 z축 해상도 개선 방법 및 장치 |
Non-Patent Citations (4)
| Title |
|---|
| DUAN JINMING ET AL: "Automatic 3D Bi-Ventricular Segmentation of Cardiac Images by a Shape-Refined Multi- Task Deep Learning Approach", IEEE TRANSACTIONS ON MEDICAL IMAGING, IEEE, USA, vol. 38, no. 9, 1 September 2019 (2019-09-01), pages 2151 - 2164, XP011743189, ISSN: 0278-0062, [retrieved on 20190829], DOI: 10.1109/TMI.2019.2894322 * |
| FEI FENG ET AL: "Three-dimensional self super-resolution for pelvic floor MRI using a convolutional neural network with multi-orientation data training", MEDICAL PHYSICS, AIP, MELVILLE, NY, US, vol. 49, no. 2, 18 January 2022 (2022-01-18), pages 1083 - 1096, XP072505690, ISSN: 0094-2405, DOI: 10.1002/MP.15438 * |
| MÜNZER BERND ET AL: "Content-based processing and analysis of endoscopic images and videos: A survey", MULTIMEDIA TOOLS AND APPLICATIONS, KLUWER ACADEMIC PUBLISHERS, BOSTON, US, vol. 77, no. 1, 11 January 2017 (2017-01-11), pages 1323 - 1362, XP036403706, ISSN: 1380-7501, [retrieved on 20170111], DOI: 10.1007/S11042-016-4219-Z * |
| OKTAY OZAN ET AL: "Multi-input Cardiac Image Super-Resolution Using Convolutional Neural Networks", 2 October 2016, SAT 2015 18TH INTERNATIONAL CONFERENCE, AUSTIN, TX, USA, SEPTEMBER 24-27, 2015; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER, BERLIN, HEIDELBERG, PAGE(S) 246 - 254, ISBN: 978-3-540-74549-5, XP047364481 * |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119469218A (zh) * | 2024-10-24 | 2025-02-18 | 中国船舶集团有限公司第七一五研究所 | 一种基于人工神经网络的低分辨布里渊频率谱解调方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN120826699A (zh) | 2025-10-21 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12245833B2 (en) | Systems and methods of continuous registration for image-guided surgery | |
| US12121204B2 (en) | Systems and methods of registration for image guided surgery | |
| US11080902B2 (en) | Systems and methods for generating anatomical tree structures | |
| US11779396B2 (en) | Systems and methods for registering elongate devices to three dimensional images in image-guided procedures | |
| US20210259783A1 (en) | Systems and Methods Related to Registration for Image Guided Surgery | |
| US20230030727A1 (en) | Systems and methods related to registration for image guided surgery | |
| US20250268453A1 (en) | Systems and methods for detecting an orientation of medical instruments | |
| US20220142714A1 (en) | Systems for enhanced registration of patient anatomy | |
| WO2021092124A1 (fr) | Systèmes et procédés d'enregistrement d'un instrument sur une image à l'aide de données de nuage de points | |
| WO2024186659A1 (fr) | Génération d'images médicales haute résolution à l'aide d'un modèle d'apprentissage automatique | |
| WO2025029781A1 (fr) | Systèmes et procédés de segmentation de données d'image | |
| WO2024163533A1 (fr) | Extraction de dispositif allongé à partir d'images peropératoires | |
| WO2024178047A1 (fr) | Commande de dispositif allongé flexible basée sur un outil |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 24716539 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 202480017307.4 Country of ref document: CN |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWP | Wipo information: published in national office |
Ref document number: 202480017307.4 Country of ref document: CN |