
WO2025226870A1 - Methods and systems for generating 3d models of anatomy - Google Patents

Methods and systems for generating 3d models of anatomy

Info

Publication number
WO2025226870A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
target anatomy
images
generating
tracking data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/026059
Other languages
French (fr)
Inventor
Brett D. JACKSON
Brian T. HOWARD
Varun A. BHATIA
Joshua J. BLAUER
Tarek D. HADDAD
Elliot C. SCHMIDT
Keara A. BERLIN
Vijay Rajendran
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Medtronic Inc
Original Assignee
Medtronic Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Medtronic Inc filed Critical Medtronic Inc
Publication of WO2025226870A1 publication Critical patent/WO2025226870A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/564 Depth or shape recovery from multiple images from contours
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/56 Particle system, point based geometry or rendering

Definitions

  • the present technology relates to methods and systems for generating three- dimensional (3D) models of anatomy.
  • a 3D model (e.g., reconstruction) of patient anatomy can be helpful for a variety of medical applications.
  • 3D models of anatomy can be used to help identify a desired implant location in the anatomy.
  • 3D models of anatomy can be used to help determine a navigational approach for accessing certain anatomy.
  • 3D models can be generated from images capturing an anatomy of interest.
  • Accuracy and completeness of a 3D model of an anatomy are important for its use in successful medical applications.
  • the accuracy and/or completeness of a 3D model may be challenging to achieve, such as due to the nature of the images forming the basis of such 3D models.
  • poor image quality (e.g., due to low image resolution, artifacts, obstructions in the field of view, etc.) and/or motion of the target anatomy of interest, whether due to patient motion and/or naturally-occurring motion (e.g., heart chamber movements during a cardiac cycle), can limit the quality of the resulting 3D model.
  • a 3D model of a patient’s target anatomy may have uncertain accuracy when that anatomy may have changed (e.g., due to progression of disease) following a pre-procedural imaging scan on which the 3D model is based.
  • the subject technology relates to systems and methods for generating 3D models of anatomy, where the 3D models have a greater level of certainty for accuracy and/or completeness of the anatomy.
  • the subject technology can improve the level of certainty in a 3D model of anatomy despite poor image quality, anatomical motion, anatomical changes, and other challenges that can affect the accuracy and/or completeness of a 3D model of the anatomy.
  • a method for generating a three-dimensional (3D) model of a target anatomy comprising: receiving a plurality of two-dimensional (2D) images of the target anatomy, wherein the 2D images are generated by an imaging probe manipulated in multiple poses; receiving electromagnetic (EM) tracking data representing the poses of the imaging probe while generating the plurality of 2D images; generating a 3D model of the target anatomy based on the 2D images and the EM tracking data; determining a confidence level in a first portion of the 3D model of the target anatomy; and displaying the 3D model of the target anatomy and a representation of the confidence level in the first portion of the 3D model.
  • generating the 3D model comprises: generating a segmentation mask for each of the 2D images, wherein each of the segmentation masks segments one or more features of interest in the 2D image; and projecting the segmentation masks in 3D space.
  • generating the 3D model further comprises: projecting the segmentation masks as a point cloud of pixels from the 2D images in 3D space; and converting the point cloud to a mesh model of the target anatomy.
  • determining the confidence level in the first portion of the 3D model comprises evaluating variance of brightness in one or more pixels among the 2D images over a period of time.
  • determining the confidence level in the first portion of the 3D model comprises correlating one or more pixels among the 2D images to a region of the target anatomy known to experience a high amount of movement.
  • displaying comprises displaying the first portion of the 3D model with a first display scheme and a second portion of the 3D model different from the first portion with a second display scheme, wherein the first and second display schemes are different.
  • a system for generating a three-dimensional (3D) model of a target anatomy comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a plurality of two-dimensional (2D) images of the target anatomy, wherein the 2D images are generated by an imaging probe manipulated in multiple poses; receiving electromagnetic (EM) tracking data representing the poses of the imaging probe while generating the plurality of 2D images; generating a 3D model of the target anatomy based on the 2D images and the EM tracking data; determining a confidence level in a first portion of the 3D model of the target anatomy; and displaying, on a display, the 3D model of the target anatomy and a representation of the confidence level in the first portion of the 3D model.
  • the instructions that, when executed by the processor, cause the system to generate the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: generating a segmentation mask for each of the 2D images, wherein each of the segmentation masks segments one or more features of interest in the 2D image; and projecting the segmentation masks in 3D space.
  • the instructions that, when executed by the processor, cause the system to generate the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: projecting the segmentation masks as a point cloud of pixels from the 2D images in 3D space; and converting the point cloud to a mesh model of the target anatomy.
  • a method for generating a three-dimensional (3D) model of a target anatomy comprising: receiving a 3D model of the target anatomy, wherein the 3D model comprises at least one portion associated with a confidence level; receiving EM tracking data representing location of a device placed proximate the target anatomy; detecting a discrepancy between (i) an expected relation between the device and the target anatomy that is based on the 3D model and EM tracking data and (ii) a detected relation between the device and the target anatomy that is based on the 3D model and the EM tracking data; and updating the 3D model based at least in part on the detected discrepancy.
  • updating comprises mapping a surface of the target anatomy by: contacting a plurality of points on the surface of the target anatomy with a tip of the device; and at each point in the plurality of points, registering the position of the tip of the device in 3D space with an EM sensor.
  • updating comprises: receiving a plurality of two-dimensional (2D) images of the target anatomy, wherein the 2D images are collected by an imaging probe manipulated in multiple poses; receiving second EM tracking data representing pose of the imaging probe while collecting the plurality of 2D images; and generating a 3D model of the target anatomy based on the 2D images and the second EM tracking data.
  • a system for generating a three-dimensional (3D) model of a target anatomy comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a 3D model of the target anatomy, wherein the 3D model comprises at least one portion associated with a confidence level; receiving EM tracking data representing location of a device placed proximate the target anatomy; detecting a discrepancy between (i) an expected relation between the device and the target anatomy that is based on the 3D model and the EM tracking data and (ii) a detected relation between the device and the target anatomy that is based on the 3D model and the EM tracking data; and updating the 3D model based at least in part on the detected discrepancy.
  • the indication of the determined location of the medical device comprises a visual indication of an anatomical region in which the medical device is located.
  • the visual indication comprises at least one of a text label or color coding of the anatomical region in which the medical device is located.
  • a method comprising: receiving a model of a target anatomy; generating a display of the model of the target anatomy; receiving EM tracking data representing a location of a medical device placed proximate the target anatomy; determining, based on the EM tracking data, the location of the medical device; modifying the display of the model of the target anatomy to include an indication of the location of the medical device with respect to the target anatomy; and presenting the modified display of the model of the target anatomy.
  • modifying the display of the model of the target anatomy comprises adding a visual indication of an anatomical region in which the medical device is located.
  • a system comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a model of a target anatomy; generating a display of the model of the target anatomy; receiving EM tracking data representing a location of a medical device placed proximate the target anatomy; determining, based on the EM tracking data, the location of the medical device; modifying the display of the model of the target anatomy to include an indication of the location of the medical device with respect to the target anatomy; and presenting the modified display of the model of the target anatomy.
  • FIG. 1 is a schematic of an example system for generating and/or using a 3D model of anatomy in accordance with the present technology.
  • FIG. 2 is a flowchart of an example method for generating a 3D model of target anatomy, in accordance with the present technology.
  • FIG. 3A is a schematic illustration of a process for generating images of a target anatomy from an imaging probe and collecting pose information for the imaging probe while generating the images, in accordance with the present technology.
  • FIG. 3B depicts an example segmented image of a target anatomy, segmented in accordance with the present technology.
  • FIG. 3C depicts an example point cloud of a target anatomy, in accordance with the present technology.
  • FIGS. 3D and 3E depict various views of a mesh model converted from a point cloud of a target anatomy, in accordance with the present technology.
  • FIG. 4 is a flowchart of an example method for generating a 3D model of a target anatomy including determining a confidence level of at least a portion of the 3D model, in accordance with the present technology.
  • FIGS. 5A and 5B are schematics of example display schemes for displaying a representation of a confidence level of at least a portion of a 3D model, in accordance with the present technology.
  • FIG. 6 is a flowchart of an example method for generating a 3D model of a target anatomy including updating the 3D model, in accordance with the present technology.
  • FIGS. 7A and 7B are schematic illustrations of an example process of updating a 3D model, in accordance with the present technology.
  • FIGS. 8A and 8B are schematic illustrations of an example process of updating a 3D model, in accordance with the present technology.
  • FIG. 9 is a flowchart of an example method for modifying a display of a 3D model of a target anatomy, in accordance with the present technology.
  • FIGS. 10A and 10B are schematic illustrations of example processes of displaying an indication of a location of a medical device with respect to a displayed 3D model, in accordance with the present technology.
  • FIGS. 11A and 11B are schematic illustrations of example processes of displaying an indication of a predetermined route for a medical device and a modified indication of the predetermined route for a medical device, respectively, in accordance with the present technology.
  • FIG. 12 is a schematic illustration of an example process of modifying a display of a 3D model of a target anatomy for emphasizing different portions of a target anatomy, in accordance with the present technology.
  • the present technology relates to methods and systems for generating 3D models of anatomy from medical imaging (e.g., ultrasound). Some variations of the present technology, for example, are directed to generating 3D models of target anatomy relating to a medical treatment or other procedure. Specific details of several variations of the technology are described below with reference to FIGS. 1-8B.
  • FIG. 1 is a schematic diagram of an example system 100 for generating a 3D model of anatomy (e.g., 3D reconstruction).
  • the system 100 for generating a 3D model of anatomy can include a model reconstruction system 110.
  • the model reconstruction system 110 can be communicatively coupled to an imaging system 120, one or more user interface devices 130, and/or one or more medical devices 140, such as through a suitable wired and/or wireless connection (e.g., communications network 150) that enables information transfer. Additionally or alternatively, information transfer between one or more of these components of the system 100 can occur in a discrete manner, such as by storing information from one component (e.g., data) on a memory device that is readable by another component.
  • Although the model reconstruction system 110, the imaging system 120, the user interface devices 130, and the medical devices 140 are illustrated schematically in FIG. 1 as separate components, in some variations two or more of these components can be embodied in a single device. For example, any two, any three, or all four of the model reconstruction system 110, the imaging system 120, the user interface device(s) 130, and medical device(s) 140 can be combined in a single device. However, in some variations each of the model reconstruction system 110, the imaging system 120, the user interface device(s) 130, and medical device(s) 140 can be embodied in a respective separate device.
  • the model reconstruction system 110 can include one or more processors 112, and one or more memory devices 114 having instructions stored therein.
  • the one or more memory devices 114 can include any suitable computer-readable medium such as RAMs, ROMs, flash memory, EEPROMs, optical devices (e.g., CD or DVD), hard drives, floppy drives, or any suitable device.
  • the memory device 114 can include instructions (e.g., organized in one or more modules) for performing 3D model generation in accordance with any of the methods described in further detail herein (e.g., generating an initial 3D model, predicting a confidence level in one or more portions of the 3D model, and/or updating the 3D model).
  • the processor 112 may be configured to execute the instructions that are stored in the memory device 114 such that, when it executes the instructions, the processor 112 performs aspects of the methods herein.
  • the instructions may be executed by computer-executable components integrated with a software application, applet, host, server, network, website, communication service, communication interface, hardware, firmware, software elements of a user computer or mobile device, smartphone, or any suitable combination thereof.
  • the one or more processors 112 can be incorporated into a computing device or system such as a cloud-based computer system, a mainframe computer system, a grid-computer system, or other suitable computer system.
  • the model reconstruction system 110 can be configured to receive one or more images of a target anatomy that are generated by the imaging system 120.
  • the imaging system 120 can be configured to obtain images in any suitable imaging modality.
  • the imaging system 120 can be configured to generate two-dimensional (2D) images along various imaging windows or fields of view.
  • the imaging system 120 is an ultrasound imaging system configured to generate a plurality of 2D images, with each 2D image being collected by performing an ultrasound sweep with an imaging probe 122 (e.g., through translation and/or rotation of the imaging probe 122) at one or more imaging windows.
  • the imaging probe 122 can also include at least one electromagnetic (EM) sensor 124 or other tracking sensor configured to track the pose (including position and/or orientation) of the imaging probe 122 as the imaging probe 122 collects the 2D images. Additionally or alternatively, the EM sensor 124 can be included in or coupled to an EM navigation tool that is moved in tandem with the imaging probe 122 to collect representative tracking data for the imaging probe 122. In some variations, the imaging probe 122 can be manipulated external to a patient’s body (e.g., transthoracic ultrasound for imaging a heart).
  • the system 100 can further include one or more user interface device(s) 130, which functions to allow a user to interact with the model reconstruction system 110 and/or the imaging system 120.
  • the user interface device 130 can be configured to receive user input (e.g., for controlling input of information between the model reconstruction system 110 and the imaging system 120) and/or provide information to a user.
  • the user interface device 130 can include a display (e.g., monitor, goggles, glasses, AR/VR device, etc.) for displaying a 3D model to a user.
  • the imaging system, model reconstruction system, and/or user interface device can be further communicatively coupled to one or more medical device(s) 140.
  • the medical device 140 can be any suitable device operable in a medical procedure (e.g., implant delivery, mapping, tissue ablation, etc.).
  • the medical device 140 can be a delivery catheter for placing an implant in the target anatomy (e.g., cardiac pacemaker device in a patient’s heart).
  • the medical device 140 can include at least one EM sensor 144 or other tracking sensor configured to track the pose (including position and/or orientation) of the medical device, such as when the medical device 140 is being navigated in patient anatomy.
  • a graphical representation of the medical device 140 can be generated and displayed concurrently with the generated 3D model of anatomy, with the representation of the medical device 140 located in relation to anatomy (e.g., in real-time) based on tracking information from the EM sensor 144 and/or other tracking sensors.
  • the methods can be used to generate 3D models of any suitable anatomical regions, such as organs (e.g., heart, lung, kidney) and/or other tissue (e.g., vasculature).
  • the 3D model can be constructed from multiple 2D images taken from various viewing angles, as further described below. Although the 2D images are primarily described herein as ultrasound images, it should be understood that in other variations the 2D images can include any suitable imaging modality.
  • the generated and/or updated 3D model of a target anatomy can be used in a variety of applications, including but not limited to aiding navigation of patient anatomy for purposes of a medical treatment and/or other medical procedure (e.g., implant placement, tissue ablation).
  • the 3D model of a target anatomy can be used to guide identification of a target implant location and/or guide navigation for placement of a cardiac device (e.g., cardiac pacemaker), a stent device (e.g., coronary stent, aortic stent, etc.), or other suitable implant, etc.
  • At least a portion of the method for generating a 3D model of a target anatomy can be performed prior to a medical procedure (e.g., an initial 3D model can be generated and/or a 3D model can be updated based on images collected prior to a medical procedure). Additionally or alternatively, at least a portion of the method for generating a 3D model of a target anatomy can be performed intra-procedurally and/or post-procedurally (e.g., an initial 3D model can be generated and/or a 3D model can be updated based on new images and/or other information collected during and/or after a medical procedure).
  • a method 200 for generating a 3D model of a target anatomy of a patient can include receiving a plurality of 2D images of the target anatomy that are generated by an imaging probe manipulated in multiple poses 210, receiving tracking data representing the pose of the imaging probe while generating the plurality of 2D images 220, and generating a 3D model of the target anatomy based on the 2D images and the tracking data 230.
  • These processes can, for example, be performed at least in part by the model reconstruction system 110 described above, and/or other components of the system 100.
  • the memory device 114 can store instructions that, when executed by one or more processors 112, cause the model reconstruction system 110 to perform one or more aspects of the method 200.
  • the plurality of 2D images can be collected by an imaging probe.
  • the imaging probe is an ultrasound probe 322 configured to collect 2D ultrasound images.
  • the ultrasound probe 322 can, in some variations, be an example of imaging probe 122 shown in FIG. 1.
  • the ultrasound probe 322 can be configured to collect 2D ultrasound images generated from one or more ultrasound sweeps with an ultrasound probe from various vantage points.
  • the ultrasound probe 322 can be manipulated in various poses (e.g., translated and/or rotated) at one or more imaging windows.
  • the imaging probe can be manipulated external to the patient to gather images.
  • the imaging probe can be an ultrasound probe configured to perform a transthoracic ultrasound for imaging a heart.
  • the 2D images can also depict one or more medical devices (e.g., navigational catheter 340) relative to patient anatomy, to further help guide placement and/or other operation of the depicted medical device(s).
  • a live stream of the 2D ultrasound images 314 can be provided on a display (e.g., user interface device 130) to a user operating the ultrasound probe 322.
  • Pose tracking data for the imaging probe can be collected concurrently while the imaging probe is collecting the 2D images.
  • the ultrasound probe 322 can include one or more tracking sensors 324 configured to collect position and/or orientation data for the imaging probe as the imaging probe collects 2D images. Accordingly, the position and/or orientation of the imaging probe is known when the imaging probe collects each 2D image.
  • the tracking data for the imaging probe can be transformed into a known spatial orientation of each 2D image, and used for generating a 3D model as further described below.
  • the one or more tracking sensors 324 can include an EM sensor (e.g., EM sensor 124) used in conjunction with a suitable EM tracking system.
  • the method 200 further includes generating a 3D model of target anatomy based on the 2D images and the tracking data 230. As shown in FIG. 2, in some variations, the method 200 can include generating a segmentation mask for each of the 2D images 232. The segmentation masks can be applied to generate segmented pixels from the 2D images corresponding to anatomical features.
  • the segmentation masks can delineate between various features of the heart such as the aortic arch, Bachmann’s bundle, left atrium, atrioventricular (AV) bundle (bundle of His), left ventricle, left bundle branch, Purkinje fibers, right bundle branch, right ventricle, right atrium, posterior internodal, middle internodal, AV node, anterior internodal, sinoatrial (SA) node, ventricular apex, veins, valves, trunks, blood pool, and/or other suitable cardiac anatomy.
  • FIG. 3B depicts an example segmented ultrasound image 332 of a heart, including segmentation of the image into blood pools corresponding to a right atrium, a right ventricle, a left ventricle, and a left atrium of the heart.
  • the segmentation masks can be generated in an automated, semi-automated, or manual manner.
  • the segmentation masks can be predicted using a suitable pre-trained machine learning algorithm that receives 2D images (e.g., 2D ultrasound images) as input.
  • the machine learning algorithm can, for example, be trained with training data including ultrasound images that are annotated for the target anatomy (e.g., heart chambers for cardiac target anatomy).
  • the machine learning algorithm includes a convolutional neural network or vision transformer model, though it may include any suitable machine learning algorithm.
  • the segmentation masks can be generated using one or more other suitable computer vision algorithms, and/or with manual annotation by a user.
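As a rough illustration of the segmentation step described above, the sketch below applies a pre-trained 2D segmentation network to a single ultrasound frame and returns a per-pixel label mask. The network `seg_model`, its input/output shapes, and the class ordering are assumptions for illustration only; the patent does not prescribe a particular architecture or framework (PyTorch is used here merely as an example).

```python
import numpy as np
import torch


def segment_frame(seg_model: torch.nn.Module, frame: np.ndarray) -> np.ndarray:
    """Predict a per-pixel label mask for one 2D ultrasound frame.

    `seg_model` is assumed to be a pre-trained network that maps a
    (1, 1, H, W) float tensor to (1, C, H, W) class logits, where the C
    classes are assumed to correspond to anatomical features of interest
    (e.g., individual heart chambers plus background).
    """
    x = torch.from_numpy(frame.astype(np.float32))[None, None]  # (1, 1, H, W)
    with torch.no_grad():
        logits = seg_model(x)                                   # (1, C, H, W)
    mask = logits.argmax(dim=1)[0]                              # (H, W) label map
    return mask.cpu().numpy()
```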
  • the method 200 can further include projecting the segmentation masks as a point cloud in 3D space 234. Based on the probe tracking data (e.g., position and orientation of the probe), the 2D images and their corresponding segmentation masks can be projected into 3D space. As a result of this projection, the segmented pixels from the images collectively form a point cloud in 3D space.
  • FIG. 3C depicts an example aggregation of segmentation as a point cloud 334.
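A minimal sketch of this projection step, assuming the EM tracking data for each frame has already been converted into a 4x4 homogeneous transform (`probe_pose`) that maps the image plane into the world frame, and that `pixel_spacing` gives the physical pixel size; both names are placeholders rather than terms from the patent.

```python
import numpy as np


def project_mask_to_points(mask: np.ndarray,
                           probe_pose: np.ndarray,
                           pixel_spacing: tuple) -> np.ndarray:
    """Project the segmented pixels of one 2D frame into 3D world coordinates.

    `probe_pose` is assumed to be a 4x4 homogeneous transform, derived from
    the EM tracking data for this frame, that maps image-plane coordinates
    (in mm, with the image plane at z = 0) into the world frame, and
    `pixel_spacing` = (row spacing, column spacing) in mm.
    """
    rows, cols = np.nonzero(mask)                       # segmented pixels only
    dy, dx = pixel_spacing
    ones = np.ones(rows.size)
    pts_plane = np.column_stack([cols * dx, rows * dy, np.zeros(rows.size), ones])
    pts_world = (probe_pose @ pts_plane.T).T[:, :3]     # (N, 3) points in world frame
    return pts_world
```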
  • the method 200 can further include converting the point cloud to a 3D model 236, such as a mesh model (e.g., including triangle surfaces connecting nodes or points in the point cloud).
  • the conversion of the point cloud to a 3D model can, for example, be performed with a suitable algorithm such as the Marching Cubes algorithm, and/or other suitable algorithm.
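The sketch below shows one way the point-cloud-to-mesh conversion could be carried out: the points are binned into an occupancy volume and a surface is extracted with the Marching Cubes implementation in scikit-image. The voxel size and the absence of smoothing are simplifications, not requirements from the patent.

```python
import numpy as np
from skimage import measure  # provides a Marching Cubes implementation


def point_cloud_to_mesh(points: np.ndarray, voxel_size: float = 1.0):
    """Convert a 3D point cloud (N, 3) into a surface mesh (vertices, faces).

    The points are binned into a binary occupancy volume and a surface is
    extracted at the 0.5 iso-level with Marching Cubes.  Smoothing and other
    post-processing steps are omitted.
    """
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    volume = np.zeros(idx.max(axis=0) + 3, dtype=float)    # pad one voxel per side
    volume[idx[:, 0] + 1, idx[:, 1] + 1, idx[:, 2] + 1] = 1.0
    verts, faces, _, _ = measure.marching_cubes(volume, level=0.5,
                                                spacing=(voxel_size,) * 3)
    return verts + origin - voxel_size, faces               # back to world coordinates
```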
  • the 3D model can be suitable for easier processing and/or manipulation.
  • the method 200 can further include performing post-processing of the 3D model 238, such as noise reduction (e.g., smoothing lines and/or surfaces), deleting one or more regions of the modeled anatomy, etc.
  • the 3D model (with or without post-processing) can be suitable for display.
  • the method 200 can further include displaying the 3D model of the target anatomy 240 on a suitable display (e.g., user interface device 130).
  • FIGS. 3D and 3E depict various views of a 3D model 336 of the heart generated with the method 200.
  • the displayed 3D model can be manipulated by a user (e.g., rotated, enlarged, viewed along one or more cross-sectional planes, etc.).
  • FIG. 3D depicts the 3D model 336 in a first view
  • FIG. 3E depicts the 3D model 336 in a second view rotated relative to that shown in the first view.
  • the 3D model can be stored on a suitable memory device.
  • a graphical representation of a medical device can be generated when the medical device is located (e.g., navigated) in or near the target anatomy.
  • the medical device can include one or more tracking sensors (e.g., EM sensors) that provide position and/or orientation information of the medical device.
  • the graphical representation of the medical device can be located relative to the target anatomy based on tracking data from the tracking sensors of the medical device (e.g., an EM sensor 144 or other suitable tracking sensor), and the graphical representation of the medical device can be displayed concurrently with the 3D model on a display (e.g., user interface device 130) accordingly, such that a user can view the location of the medical device relative to the target anatomy on the display.
  • a graphical representation of a current (or present) location of the medical device and/or past locations (e.g., path tracking) of the medical device in the patient can be displayed.
  • the graphical representation of the medical device can be overlaid on the 3D model. For example, as shown in FIGS. 3D and 3E, a graphical representation 350 of a delivery catheter can be overlaid and/or manipulated with the 3D model 336.
  • the display of the graphical representation of the medical device can be toggled on and off (e.g., selected for display or non-display). For example, a user can select through a user control on a user interface device (e.g., user interface device 130) to display the graphical representation of the medical device, or select through the user control to hide the graphical representation of the medical device.
  • the graphical representation of the medical device may be automatically displayed or hidden based on one or more parameters, such as automatically displayed based on proximity to a location of interest (e.g., implant location) in the target anatomy. Additionally or alternatively, a user can select a display scheme (e.g., color, transparency, etc.) of the graphical representation of the medical device on the display.
  • a 3D model of patient anatomy can include uncertainty associated with potential inaccuracy.
  • Various factors can contribute to uncertainty in a 3D model, such as image resolution, viewing angles of the imaging probe, and/or motion of target anatomy (e.g., due to anatomical behavior such as heartbeat, breathing, etc., and/or patient movements).
  • Uncertainty in the model is generally undesirable, as it can result in making a medical procedure more challenging and/or result in adverse events.
  • uncertainty in the model can result in a physician making unwanted contact with the heart wall, or inadvertently implanting the cardiac implant in the wrong location.
  • generating a 3D model can include determining a confidence level in one or more portions of the 3D model.
  • a method 400 can include receiving a 3D model of a target anatomy 410 (e.g., a 3D model that is received in method 400 can be generated as described above with respect to method 200 and FIGS. 2-3E) and determining a confidence level in a portion of the 3D model of the target anatomy 420.
  • the method 400 can further include displaying the 3D model of the target anatomy with a representation of the confidence level 430.
  • Confidence level can be expressed quantitatively (e.g., along a numerical scale, such as between a range of 1 and 10, between 1 and 100, etc.) and/or qualitatively (e.g., “high”, “medium”, “low”), and/or in any suitable manner.
  • These processes can, for example, be performed at least in part by the model reconstruction system 110 described above, and/or other components of the system 100.
  • the memory device 114 can store instructions that, when executed by one or more processors 112, cause the model reconstruction system 110 to perform one or more aspects of the method 400.
  • a confidence level (or conversely, an uncertainty level) in one or more portions of the 3D model can be determined in one or more various manners.
  • determining the confidence level in a portion of the 3D model 420 can include evaluating variance of brightness (e.g., relative brightness and/or absolute brightness) in one or more pixels among the 2D images of the target anatomy (i.e., the 2D images on which the 3D model is based) over time.
  • Variance of brightness of a pixel over multiple image frames can generally be correlated to motion of the target tissue at that location.
  • variance of brightness of a pixel over multiple image frames can be indicative of one or more artifacts existing in one or more of the 2D images.
  • evaluating the variance in pixel brightness over a period of time can provide an indication of the amount of tissue movement that is occurring at that location and/or an indication of the existence of at least one image artifact, and hence reflect an uncertainty in the accuracy of the 3D model at that location for any given point in time.
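A minimal sketch of this brightness-variance heuristic, assuming a stack of co-registered frames from one imaging window; the specific mapping from variance to a confidence value is an illustrative choice rather than a formula from the patent.

```python
import numpy as np


def brightness_variance_confidence(frames: np.ndarray) -> np.ndarray:
    """Map per-pixel brightness variance over time to a confidence map.

    `frames` is assumed to be a (T, H, W) stack of co-registered 2D images
    from one imaging window.  Pixels whose brightness varies strongly over
    the sequence (e.g., due to tissue motion or artifacts) receive lower
    confidence; the normalization used here is an illustrative choice.
    """
    variance = frames.astype(np.float32).var(axis=0)   # (H, W) temporal variance
    scaled = variance / (variance.max() + 1e-9)        # scale into [0, 1]
    return 1.0 - scaled                                # high variance -> low confidence
```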
  • determining the confidence level in a portion of the 3D model 420 can additionally or alternatively include correlating one or more pixels among the 2D images or 3D model to a region of the target anatomy known to experience a high amount of movement.
  • certain heart chambers are known to generally experience a certain amount of movement (e.g., compared to an apex of the heart, which is relatively stable).
  • one or more portions of the 3D model corresponding to certain segmented regions of the heart (or other suitable target anatomy) can reflect an uncertainty in the accuracy of the 3D model at those portions for any given point in time.
  • determining the confidence level in a portion of the 3D model 420 can additionally or alternatively include comparing information about a portion of the target anatomy as depicted from multiple views. For example, there may be a moderate level of confidence (e.g., a first probability) that a first 2D image (taken from a first viewing window) depicts a left ventricle of a heart, but the first 2D image depicts an occluded portion of the left ventricle due to a rib bone partially blocking the field of view.
  • a second 2D image (taken from a second viewing window, such as with the imaging probe rotated to a different angle) may provide a clearer view of the occluded portion of the left ventricle, resulting in a higher level of confidence (e.g., second probability) that the second 2D image depicts the portion of the left ventricle that is occluded in the first 2D image.
  • the confidence level associated with the corresponding left ventricle portion of the 3D model can thus be based at least in part on the first and second confidence levels, such as by taking the average (e.g., mean, median, etc.) of the first and second probabilities, or maximum of the first and second probabilities, etc.
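For illustration, the aggregation described above could be expressed as a small helper that fuses per-view probabilities by mean or maximum; the function name and interface are assumptions.

```python
import numpy as np


def combine_view_confidences(view_probabilities, mode: str = "mean") -> float:
    """Fuse per-view probabilities (e.g., from the first and second imaging
    windows) into a single confidence level for a portion of the 3D model,
    using either the mean or the maximum as described above."""
    probs = np.asarray(view_probabilities, dtype=float)
    return float(probs.mean() if mode == "mean" else probs.max())
```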
  • determining the confidence level in a portion of the 3D model 420 can include applying a trained machine learning algorithm configured to predict where in the 3D model the reconstruction is likely to have low confidence (or more uncertainty).
  • the method 400 can further include displaying a representation of the confidence level 430 in one or more portions of the 3D model.
  • the representation of the confidence level can be displayed as part of the displayed 3D model itself (e.g., similar to displaying the 3D model of the target anatomy 240 described above), such as on a suitable display (e.g., user interface device 130).
  • displaying a representation of the confidence level 430 can include displaying a first portion of the 3D model with a first confidence level in accordance with a first display scheme that is different than a display scheme for the rest of the 3D model (e.g., including a second portion of the 3D model that has a second confidence level different from the first confidence level).
  • Display schemes associated with confidence level can differ in color, transparency level, patterning, sharpness, and/or the like.
  • confidence level in any particular portion of the 3D model can be represented by displaying pixels in that portion of the 3D model with visual properties varying along a color spectrum corresponding to confidence level.
  • a first portion 510 of a 3D model 500 having a first confidence level can be displayed in a first color (e.g., red to indicate lower confidence)
  • a second portion 520 of the 3D model 500 having a second confidence level can be displayed in a second color (e.g., green to indicate higher confidence).
  • colored pixels of a displayed 3D model can be presented against a dark (e.g., black) background, to help improve visual contrast and enable better visualization of the 3D model.
  • confidence level in any particular portion of the 3D model can be represented by displaying pixels in that portion of the 3D model with visual properties varying along a transparency (or opacity) spectrum corresponding to confidence level.
  • the first portion 510 of the 3D model can be displayed in a first transparency (e.g., greater transparency, or lower opacity), while a second portion 520 of the 3D model can be displayed in a second transparency (e.g., lower transparency, or higher opacity) compared to the first portion 510.
  • more opaque regions of the 3D model having higher confidence can generally be more visible than more transparent regions of the 3D model having lower confidence, which may help focus a user’s attention more on those regions of the 3D model that have higher confidence.
  • the transparency of an entire model segment can vary with confidence level.
  • the transparency of the interior of a portion of the 3D model can vary with confidence level while the border of a portion of the 3D model can be visualized in an opaque manner to better help delineate the particular portion of the 3D model.
  • confidence level in any particular portion of the 3D model can be represented by displaying pixels in that portion of the 3D model with visual properties varying along a sharpness (or blurriness) spectrum corresponding to confidence level.
  • portions of the 3D model having higher confidence can generally appear sharper and more visible, while portions of the 3D model having lower confidence can generally appear more out of focus or blurry, which may help focus a user’s attention more on those regions of the 3D model that have higher confidence.
  • the sharpness of an entire model segment can vary with confidence level.
  • the sharpness level of an interior of a portion of the 3D model can vary with confidence level while the border of a portion of the 3D model can be visualized in a sharp or clear manner to better help delineate the particular portion of the 3D model.
  • Various spectrums of the display scheme can be generally continuous, or can be discrete with buckets each corresponding to a range of confidence levels (e.g., a first color shade associated with a lower confidence level of 0-20, a second color shade associated with a higher confidence level of 21-100, on a confidence level scale of 0-100).
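A sketch of how a confidence level could drive a combined color/transparency display scheme, mapping per-vertex confidence on a 0-100 scale to RGBA values; the particular ramp and opacity range are illustrative choices.

```python
import numpy as np


def confidence_to_rgba(confidence: np.ndarray) -> np.ndarray:
    """Map per-vertex confidence (0-100 scale) to RGBA display values.

    Low-confidence regions are shown red and more transparent; high-confidence
    regions are shown green and more opaque.  A bucketed (discrete) scheme
    could quantize `confidence` before applying the same ramp.
    """
    c = np.clip(np.asarray(confidence, dtype=np.float32), 0.0, 100.0) / 100.0
    rgba = np.empty(c.shape + (4,), dtype=np.float32)
    rgba[..., 0] = 1.0 - c          # red: strongest where confidence is low
    rgba[..., 1] = c                # green: strongest where confidence is high
    rgba[..., 2] = 0.0
    rgba[..., 3] = 0.3 + 0.7 * c    # opacity increases with confidence
    return rgba
```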
  • a user can (e.g., through user interface device 130) control the display of the 3D model (including the nature of the visualization of the confidence level) and/or the graphical representation of a medical device.
  • the display of the 3D model can vary between one or more modes.
  • one or more portions of the 3D model having lower confidence levels can be displayed together with portions of the 3D model having higher confidence levels, but distinguished from those portions with higher confidence levels by varying visual parameters (e.g., with color, transparency, patterning, sharpness, etc. as described below).
  • the user can, in some variations, select the visual parameter to be varied in accordance with confidence level.
  • one or more portions of the 3D model having confidence level below a threshold value can be omitted from display, such that only portions of the 3D model having at least a certain confidence level are displayed to a user.
  • the threshold value can be defined by a user (e.g., selectable from a menu of options, entered directly, etc.), and/or can be a default threshold value.
  • the method 400 can include displaying a graphical representation of a medical device 530 (e.g., a catheter, such as a delivery or navigational catheter), similar to that described above with respect to method 200.
  • the graphical representation of the medical device can be located relative to the target anatomy based on tracking data from the tracking sensors of the medical device, and the graphical representation of the medical device can be displayed concurrently with the 3D model on a display accordingly, such that a user can view the location of the medical device relative to the target anatomy on the display.
  • the appearance of the graphical representation of the medical device 530 can vary based on proximity of the medical device 530 to a region of the target anatomy whose corresponding portion of the 3D model is uncertain (e.g., has below a threshold confidence level).
  • a distal end of the graphical representation of a catheter can change appearance in a visual parameter (e.g., color, transparency, patterning, sharpness, etc.) as it approaches or is located in portion(s) of the 3D model that are sufficiently uncertain (e.g., has below a threshold confidence level).
  • a user can (e.g., through user interface device 130) control the display of the graphical representation of a medical device. For example, a user can select or deselect a display mode in which the appearance of the graphical representation of the medical device varies based on proximity of the medical device to an anatomical region that is depicted with low confidence in the 3D model (e.g., having a confidence level below a threshold value).
  • the user can, in some variations, select the visual parameter (e.g., color, transparency, patterning, sharpness, etc.) to be varied in accordance with confidence level.
  • a threshold value of confidence level can be defined by a user (e.g., selectable from a menu of options, entered directly, etc.), and/or can be a default threshold value.
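As an illustration of the proximity-based appearance change described above, the sketch below picks a display color for the catheter representation based on the minimum confidence of model points near the tracked tip; the radius, threshold, and color names are placeholders.

```python
import numpy as np


def device_display_color(tip_position: np.ndarray,
                         model_points: np.ndarray,
                         point_confidence: np.ndarray,
                         radius: float = 10.0,
                         threshold: float = 0.5) -> str:
    """Pick an appearance for the catheter representation based on whether the
    tracked tip is near a low-confidence portion of the 3D model.

    `model_points` are (N, 3) model coordinates with per-point confidence in
    [0, 1]; the radius (mm), threshold, and color names are placeholders.
    """
    distances = np.linalg.norm(model_points - tip_position, axis=1)
    nearby = distances < radius
    if nearby.any() and point_confidence[nearby].min() < threshold:
        return "yellow"   # warn: approaching an uncertain region of the model
    return "white"        # default appearance
```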
  • a 3D model of patient anatomy can include uncertainty associated with potential inaccuracy that is generally undesirable. Accordingly, in some instances it may be desirable to update the 3D model (especially during a medical procedure, such as during an implant placement procedure) so as to reduce the level of uncertainty (increase the confidence level) in one or more portions.
  • generating a 3D model can include updating a 3D model.
  • a method 600 can include receiving a 3D model of a target anatomy 610 (e.g., a 3D model that is received in method 600 can be generated as described above with respect to method 200, and/or as described above with respect to method 400) and receiving tracking data (e.g., EM tracking data) representing location of a medical device or other device placed proximate the target anatomy 620, such as a catheter (e.g., delivery or navigational catheter).
  • the medical device can be an example of the medical device 140 of FIG. 1, and can include a tracking sensor such as EM sensor 144.
  • the method 600 can further include detecting a discrepancy 630 between (i) an expected relation between the medical device and the target anatomy, and (ii) a detected relation between the medical device and the target anatomy.
  • the method 600 can further include updating the 3D model based at least in part on the detected discrepancy 640, such as by adjusting a geometrical feature of the 3D model and/or a confidence level associated with one or more portions of the 3D model.
  • the method can further include displaying the updated 3D model 650 (e.g., on user interface device(s) 130), which can, for example, be similar to other display processes described herein (e.g., displaying process 240 described above with respect to method 200, and/or displaying process 430 described above with respect to method 400). These processes can, for example, be performed at least in part by the model reconstruction system 110 described above, and/or other components of the system 100.
  • the memory device 114 can store instructions that, when executed by one or more processors 112, cause the model reconstruction system 110 to perform one or more aspects of the method 600.
  • detecting the discrepancy 630 is based at least in part on information derived from the 3D model and/or the tracking data for the medical device.
  • An expected relation or detected relation can include any interaction (or lack of interaction) between the medical device and the target anatomy, such as contact. Contact can be detected, for example, through a force sensor, pressure sensor, impedance sensor, optical sensor, and/or other suitable sensor(s), etc. on the medical device.
  • the medical device can be a catheter and an expected relation or detected relation can include contact between a distal tip of the catheter and a tissue wall (or lack of contact between a distal tip of the catheter and a tissue wall). Where the detected relation differs from what is expected based on the 3D model, the 3D model can be updated as further described below.
  • For example, as shown in FIG. 7A, a catheter 730 can be navigated within a target anatomy (e.g., heart) represented by a 3D model 700. If the tracked location of the catheter 730 indicates that it has moved past a modeled tissue wall 712 without contact being detected, then in response to this discrepancy the confidence level of the modeled wall 712 can be reduced. Alternatively, the modeled tissue wall can be expanded outwards to a new modeled tissue wall 714 to include the catheter 730 (FIG. 7B) (and the confidence level of the new modeled tissue wall 714 can be increased).
  • In some instances, based on the tracked location of the medical device in the anatomy relative to the 3D model, it may be expected that the medical device does not contact a tissue wall of the target anatomy. However, if contact is detected (e.g., through a force sensor, pressure sensor, impedance sensor, optical sensor, etc. on the medical device), then in response to this discrepancy, the tissue surface of the 3D model can be adjusted (e.g., moved) in accordance with the tracked location of the medical device at the time of contact. Additionally or alternatively, the confidence level of the portion of the 3D model representing the tissue surface can be adjusted.
  • For example, as shown in the schematics of FIGS. 8A and 8B, a catheter 830 can be navigated within a target anatomy (e.g., heart) represented by a 3D model 800. If a distal portion of the catheter 830 is detected to have contacted a tissue wall before reaching the modeled tissue wall 812, then the modeled tissue wall can be contracted inwards to a new modeled tissue wall 814 at the tracked location of the distal portion of the catheter 830 at the time of contact (and optionally, the confidence level of the new modeled tissue wall 814 can be increased). Alternatively, the confidence level of the modeled tissue wall 812 can be reduced (without adjusting the modeled tissue wall).
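A minimal sketch of one way such a contact-based update could be applied to a mesh, assuming per-vertex confidence values are tracked; pulling nearby vertices directly to the contact point is a crude stand-in for a smoother surface deformation.

```python
import numpy as np


def update_wall_on_contact(mesh_vertices: np.ndarray,
                           vertex_confidence: np.ndarray,
                           tip_position: np.ndarray,
                           radius: float = 5.0):
    """Adjust the modeled tissue wall when contact is detected before the
    catheter tip reaches the modeled wall.

    Vertices within `radius` (mm) of the tracked tip are moved to the contact
    location and their confidence is raised; a practical implementation would
    deform the surface more smoothly.
    """
    distances = np.linalg.norm(mesh_vertices - tip_position, axis=1)
    nearby = distances < radius
    mesh_vertices[nearby] = tip_position                      # contract wall to contact point
    vertex_confidence[nearby] = np.maximum(vertex_confidence[nearby], 0.9)
    return mesh_vertices, vertex_confidence
```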
  • the 3D model can be updated intra-procedurally with the aid of a medical device (e.g., catheter, such as a delivery or navigational catheter).
  • the medical device can, in some variations, be an example of medical device 140 of FIG. 1.
  • a catheter can be navigated such that its distal end portion touches an internal tissue surface (e.g., heart wall), and the location of the distal end portion of the catheter can be registered. This process can be repeated at multiple locations in the target anatomy until a sufficient surface of the target anatomy is mapped.
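A sketch of the contact-based surface mapping step, with `em_tracker` and `contact_sensor` as hypothetical interfaces standing in for the catheter's EM sensor and contact sensing; the patent does not specify these APIs.

```python
import numpy as np


def register_contact_point(em_tracker, contact_sensor, surface_points: list) -> None:
    """Append the current catheter-tip position to a sparse surface map
    whenever wall contact is detected.

    Repeating this at many locations on the tissue surface yields a point map
    of the target anatomy that can be used to update the 3D model.
    """
    if contact_sensor.is_in_contact():                        # e.g., force- or impedance-based
        surface_points.append(np.asarray(em_tracker.tip_position()))  # (x, y, z) in EM frame
```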
  • the method 600 can include receiving additional 2D images (e.g., ultrasound images) intra-procedurally and generating a new partial or full 3D model based on the additional intra-procedural 2D images (e.g., as described above with respect to method 200 and/or method 400).
  • the user interface can aid in the image collecting process by guiding a user (e.g., ultrasound operator) toward particular imaging windows that will provide the most information, based on the current level of uncertainty in the 3D model.
  • a model of patient anatomy (e.g., target anatomy) can be used in conjunction with a medical device (e.g., a catheter, such as a delivery or navigational catheter). The model may be used, for example, intra-procedurally (e.g., during delivery of an implant or other medical device, during a catheter navigational procedure, etc.).
  • the model can, for example, be a 3D model that is generated and/or updated in accordance with any one or more of the methods described herein (e.g., method 200, method 400, method 600, etc.).
  • Alternatively, the model can be any suitable model generated through other techniques.
  • Although the model is primarily described below as a 3D model, it should be understood that in some variations, a 2D model can be used in a similar manner for aiding a user’s understanding of the current location of a medical device.
  • a method for using a 3D model includes leveraging the 3D model to illustrate a location of a medical device.
  • a method 900 can include receiving a model of a target anatomy 910 (e.g., a 3D model that is received in method 900 can be generated as described above with respect to method 200, as described above with respect to method 400, and/or as described above with respect to method 600) and generating a display of the model of the target anatomy 912.
  • the method 900 can further include receiving tracking data (e.g., EM tracking data) representing location of a medical device or other device placed proximate the target anatomy 920, such as a catheter (e.g., delivery or navigational catheter).
  • the medical device can be an example of the medical device 140 of FIG. 1, and can include a tracking sensor such as EM sensor 144.
  • the method 900 can further include determining, based on the tracking data, the location of the medical device 930, and modifying the display of the model of the target anatomy to include an indication of the location of the medical device with respect to the target anatomy 940.
  • the method 900 can further include presenting the modified display of the model of the target anatomy 950 (e.g., on user interface device(s)), which can, for example, be similar to other display processes described herein (e.g., displaying process 240 described above with respect to method 200, displaying process 430 described above with respect to method 400, and/or displaying process 650 as described above with respect to method 600).
  • These processes can, for example, be performed at least in part by the model reconstruction system 110 described above, and/or other components of the system 100.
  • the memory device 114 can store instructions that, when executed by one or more processors 112, cause the model reconstruction system 110 to perform one or more aspects of the method 900.
  • the indication of the location of the medical device can be a visual indication.
  • the location can be an anatomical region that is a generally known or technically defined region.
  • the labeled anatomical region may be a cardiac chamber (left atrium, right atrium, left ventricle, right ventricle).
  • the location can be an anatomical region that is another predefined (e.g., user-defined) region, such as an area of interest based on suspected or known disease state, or a cell in a grid.
  • the visual indication may include a displayed label.
  • FIG. 10A depicts a schematic of a display including a 3D model 1000 of a target anatomy and a visual indication 1040 in the form of a label identifying an anatomical region or other portion of the target anatomy in which the medical device 1030 or a portion thereof (e.g., a distal end or portion of a catheter) is located.
  • the visual indication may include any suitable form of a label, such as a text label (e.g., full name of the anatomical region, abbreviated form of the anatomical region, code representative of the anatomical region, etc.).
  • the label can be displayed overlaid with the 3D model, such as overlaid over the displayed anatomical region (e.g., “RV” displayed over the right ventricle portion of a cardiac 3D model), overlaid over the 3D model adjacent to the displayed anatomical region, and/or overlaid adjacent to the displayed medical device (e.g., distal portion of a catheter) where a user is likely to notice the visual indication (e.g., as the user’s attention may be focused on the displayed representation of the medical device during a procedure).
  • the visual indication of the location of the medical device can additionally or alternatively include a distinctive display scheme of a portion of the 3D model corresponding to the location of the medical device.
  • FIG. 10B depicts a schematic of a display including a 3D model 1050 of a target anatomy and a visual indication 1040 in the form of a display scheme for an anatomical region or other portion of the 3D model that differs from a display scheme for the rest (or other portions) of the 3D model.
  • the visual indication of the location of the medical device can include depicting an anatomical region of the target anatomy in the 3D model with a different color, transparency level, patterning, sharpness, and/or the like.
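For illustration, determining which labeled anatomical region contains the device tip could be done with a simple nearest-region test against the segmented model points; the data layout and function name here are assumptions.

```python
import numpy as np


def locate_device_region(tip_position: np.ndarray, region_points: dict) -> str:
    """Return the label of the anatomical region the tracked device tip is in.

    `region_points` maps a region label (e.g., "right ventricle") to the (N, 3)
    model points belonging to that segmented region; the region whose points
    lie closest to the tip is reported.  A point-in-volume test could be used
    instead for closed regions.
    """
    def min_distance(points):
        return np.linalg.norm(np.asarray(points) - tip_position, axis=1).min()

    return min(region_points, key=lambda label: min_distance(region_points[label]))
```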
  • the method 900 can further include displaying an indication of a predetermined route for the medical device for navigation of the medical device toward a target.
  • the predetermined route can be a proposed path along which a catheter can travel toward a target site within the target anatomy.
  • the predetermined route can be generated, for example, pre-procedurally based on prior imaging and/or intra-procedurally based on information in a 3D model generated intra-procedurally, and can be determined by a user, a suitable machine learning algorithm, etc.
  • the predetermined route can, for example, be represented by a line (e.g., solid line, dashed or dotted line, etc.) displayed overlaid over the 3D model so as to be seen proximate a representation of the medical device as the medical device is navigated within the target anatomy.
  • the method can further include determining, based on the EM tracking data, that the location of the medical device has deviated from the predetermined route, and modifying the indication of the predetermined route in accordance with a second display scheme that is different from the first display scheme so as to indicate the deviation from the predetermined route.
  • FIG. 11A depicts a schematic of a display including a 3D model 1100 of a target anatomy and an indication of a predetermined route 1140.
  • the indication of the predetermined route can, for example, include a line having a first display scheme (e.g., color, patterning, line weight, shading, etc.) and along which a medical device can be guided.
  • FIG. 11B depicts a schematic of a display including the 3D model 1100 of the target anatomy and a modified indication of the predetermined route 1140 having a second display scheme different from the first display scheme.
  • the indication 1140 of the predetermined route can be displayed in green while the medical device’s tracked path is consistent with the predetermined route, and then displayed in red if the medical device’s tracked path deviates from the predetermined route.
  • the second display scheme for the modified indication 1140 can differ from the first display scheme in any suitable manner (e.g., different color, patterning, line weight, and/or shading, etc.).
  • the amount of deviation of the medical device’s traveled path from the predetermined route can be communicated with multiple display schemes.
  • the indication 1140 of the predetermined route can be displayed in green while the medical device’s tracked path is consistent with the predetermined route, displayed in orange if the medical device’s tracked path deviates from the predetermined route by a moderate amount (e.g., above a first threshold value but below a second threshold value), and displayed in red if the medical device’s tracked path deviates from the predetermined route by a significant amount (e.g., above the second threshold value). A minimal code sketch illustrating this threshold-based mapping appears after this list.
  • Two, three, four, five, or more than five levels of deviation may be communicated with a corresponding number of display schemes.
  • the display schemes may differ in any suitable manner (e.g., different color, patterning, line weight, and/or shading, etc.).
  • any of the above indications can be selectively displayed in conjunction with the 3D model of the target anatomy.
  • a user may selectively toggle on or off the display of such indication(s) based on their preferences (e.g., can toggle off the display of one or more indication(s) to reduce visual distractions and/or obscuring of features of the 3D model).
  • the display of such indication(s) can be automatically or semi- automatically toggled on or off based on immediate relevancy.
  • an indication of location of the medical device can be selectively omitted from the display until the medical device gets within a threshold distance of a target site and/or selectively omitted from the display unless the medical device is navigated to a vulnerable area of the target anatomy that may be more susceptible to tissue damage.
  • an indication of a predetermined route for the medical device can be selectively omitted from the display until the medical device’s path sufficiently deviates from the predetermined route.
  • the display of a 3D model can be modified to emphasize or otherwise highlight one or more regions of interest.
  • certain anatomical regions of the 3D model can be color coded and/or include text labels to help a user identify certain anatomy or other regions in the model (e.g., intra-procedurally while a medical device is being navigated in or near the target anatomy).
  • the displayed 3D model can include color coding of one or more heart chambers, a left side and/or right side of the heart, and/or one or more valve planes.
  • For example, FIG. 12 illustrates a schematic of a 3D model of a heart in which a first portion of the heart 1240 is shown with a first display scheme (e.g., color, patterning, transparency, shading, etc.) and a second portion of the heart 1242 is shown with a second display scheme.
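The following is a minimal, non-limiting sketch (in Python) of the threshold-based mapping of route deviation to display schemes described in the list above. The threshold values, millimetre units, and color names are illustrative assumptions and are not specified by the present disclosure; the predetermined route is assumed to be sampled as a set of 3D points.

```python
import numpy as np

def route_deviation_mm(device_position, route_points):
    """Shortest distance (mm) from the tracked device tip to the sampled route points."""
    diffs = np.asarray(route_points, dtype=float) - np.asarray(device_position, dtype=float)
    return float(np.min(np.linalg.norm(diffs, axis=1)))

def route_display_scheme(deviation_mm, first_threshold_mm=3.0, second_threshold_mm=6.0):
    """Map the amount of deviation to a display scheme (here, a color name)."""
    if deviation_mm <= first_threshold_mm:
        return "green"   # tracked path consistent with the predetermined route
    if deviation_mm <= second_threshold_mm:
        return "orange"  # moderate deviation (between the two thresholds)
    return "red"         # significant deviation (above the second threshold)

# Example: device tip about 4.2 mm from the nearest sampled route point -> "orange"
route = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
print(route_display_scheme(route_deviation_mm((10.0, 4.2, 0.0), route)))
```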

Abstract

In some variations, a method for generating a 3D model of a target anatomy includes receiving 2D images of the target anatomy, wherein the 2D images are generated by an imaging probe, receiving tracking data representing the poses of the imaging probe while generating the 2D images, generating a 3D model based on the 2D images and the tracking data, determining a confidence level in a portion of the 3D model, and displaying the 3D model and a representation of the confidence level. In some variations, a method for generating a 3D model of a target anatomy includes detecting a discrepancy between (i) an expected relation between a tracked medical device and the target anatomy, and (ii) a detected relation between the tracked medical device and the target anatomy, and updating the 3D model based at least in part on the detected discrepancy.

Description

METHODS AND SYSTEMS FOR GENERATING 3D MODELS OF ANATOMY
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of priority to U.S. Provisional App. No. 63/638,309, filed April 24, 2024, which is incorporated by reference herein in its entirety.
TECHNICAL FIELD
[0002] The present technology relates to methods and systems for generating three- dimensional (3D) models of anatomy.
BACKGROUND
[0003] A 3D model (e.g., reconstruction) of patient anatomy can be helpful for a variety of medical applications. For example, 3D models of anatomy can be used to help identify a desired implant location in the anatomy. As another example, 3D models of anatomy can be used to help determine a navigational approach for accessing certain anatomy. Generally, 3D models can be generated from images capturing an anatomy of interest.
[0004] Accuracy and completeness of a 3D model of an anatomy are important for their use in successful medical applications. However, the accuracy and/or completeness of a 3D model may be difficult to achieve, such as due to the nature of the images forming the basis of such 3D models. For example, poor image quality (e.g., due to low image resolution, artifacts, obstructions in the field of view, etc.) can limit the quality of the resulting 3D model. As another example, motion of the target anatomy of interest, whether due to patient motion and/or naturally-occurring motion (e.g., heart chamber movements during a cardiac cycle), can limit the quality of the resulting 3D model. Even further, a 3D model of a patient’s target anatomy may have uncertain accuracy when that anatomy may have changed (e.g., due to progression of disease) following a pre-procedural imaging scan on which the 3D model is based.
SUMMARY
[0005] The subject technology relates to systems and methods for generating 3D models of anatomy, where the 3D models have a greater level of certainty for accuracy and/or completeness of the anatomy. For example, the subject technology can improve the level of certainty in a 3D model of anatomy despite poor image quality, anatomical motion, anatomical changes, and other challenges that can affect the accuracy and/or completeness of a 3D model of the anatomy.
[0006] The subject technology is illustrated, for example, according to various aspects described below, including with reference to FIGS. 1-8B. Various examples of aspects of the subject technology are described as numbered clauses (1, 2, 3, etc.) for convenience. These are provided as examples and do not limit the subject technology.
1. A method for generating a three-dimensional (3D) model of a target anatomy, the method comprising: receiving a plurality of two-dimensional (2D) images of the target anatomy, wherein the 2D images are generated by an imaging probe manipulated in multiple poses; receiving electromagnetic (EM) tracking data representing the poses of the imaging probe while generating the plurality of 2D images; generating a 3D model of the target anatomy based on the 2D images and the EM tracking data; determining a confidence level in a first portion of the 3D model of the target anatomy; and displaying the 3D model of the target anatomy and a representation of the confidence level in the first portion of the 3D model.
2. The method of clause 1, wherein the 2D images are ultrasound images.
3. The method of clause 1 or 2, wherein generating the 3D model comprises: generating a segmentation mask for each of the 2D images, wherein each of the segmentation masks segments one or more features of interest in the 2D image; and projecting the segmentation masks in 3D space.
4. The method of clause 3, wherein generating each of the segmentation masks comprises predicting each of the segmentation masks by applying a machine learning algorithm.
5. The method of clause 3 or 4, wherein generating the 3D model further comprises: projecting the segmentation masks as a point cloud of pixels from the 2D images in 3D space; and converting the point cloud to a mesh model of the target anatomy.
6. The method of any one of clauses 1-5, wherein determining the confidence level in the first portion of the 3D model comprises evaluating variance of brightness in one or more pixels among the 2D images over a period of time.
7. The method of any one of clauses 1-6, wherein determining the confidence level in the first portion of the 3D model comprises correlating one or more pixels among the 2D images to a region of the target anatomy known to experience a high amount of movement.
8. The method of any one of clauses 1-7, wherein displaying comprises displaying the first portion of the 3D model with a first display scheme and a second portion of the 3D model different from the first portion with a second display scheme, wherein the first and second display schemes are different.
9. The method of clause 8, wherein the first and second display scheme differ in at least one of transparency level, color, patterning, or sharpness.
10. The method of any one of clauses 1-9, wherein the plurality of 2D images and EM tracking data are received prior to a procedure for placing a device proximate the target anatomy.
11. The method of any one of clauses 1-9, wherein the plurality of 2D images and EM tracking data are collected during a procedure for placing a device proximate the target anatomy.
12. The method of any one of clauses 1-11, further comprising: receiving second EM tracking data representing location of a device placed proximate the target anatomy; based at least in part on the second EM tracking data, detecting a discrepancy between (i) an expected relation between the device and the target anatomy and (ii) a detected relation between the device and the target anatomy; and updating one or both of the 3D model or the confidence level based at least in part on the second EM tracking data.
13. The method of any one of clauses 1-12, wherein the target anatomy comprises a heart.
14. A system for generating a three-dimensional (3D) model of a target anatomy, the system comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a plurality of two-dimensional (2D) images of the target anatomy, wherein the 2D images are generated by an imaging probe manipulated in multiple poses; receiving electromagnetic (EM) tracking data representing the poses of the imaging probe while generating the plurality of 2D images; generating a 3D model of the target anatomy based on the 2D images and the EM tracking data; determining a confidence level in a first portion of the 3D model of the target anatomy; and displaying, on a display, the 3D model of the target anatomy and a representation of the confidence level in the first portion of the 3D model.
15. The system of clause 14, wherein the 2D images are ultrasound images.
16. The system of clause 14 or 15, wherein the instructions that, when executed by the processor, cause the system to generate the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: generating a segmentation mask for each of the 2D images, wherein each of the segmentation masks segments one or more features of interest in the 2D image; and projecting the segmentation masks in 3D space.
17. The system of clause 16, wherein the instructions that, when executed by the processor, cause the system to generate the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: projecting the segmentation masks as a point cloud of pixels from the 2D images in 3D space; and converting the point cloud to a mesh model of the target anatomy.
18. The system of any one of clauses 14-17, wherein the instructions that, when executed by the processor, cause the system to determine the confidence level in the first portion of the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising evaluating variance of brightness in one or more pixels among the 2D images over a period of time.
19. The system of any one of clauses 14-18, wherein the instructions that, when executed by the processor, cause the system to determine the confidence level in the first portion of the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising correlating one or more pixels among the 2D images to a region of the target anatomy known to experience a high amount of movement.
20. The system of any one of clauses 14-19, wherein the instructions that, when executed by the processor, cause the system to display the 3D model of the target anatomy comprise instructions that, when executed by the processor, cause the system to perform operations comprising displaying the first portion of the 3D model with a first display scheme and a second portion of the 3D model different from the first portion with a second display scheme, wherein the first and second display schemes are different.
21. The system of any one of clauses 14-20, wherein the instructions, when executed by the processor, cause the system to perform operations comprising: receiving second EM tracking data representing location of a device placed proximate the target anatomy; based at least in part on the second EM tracking data, detecting a discrepancy between (i) an expected relation between the device and a tissue wall and (ii) a detected relation between the device and a tissue wall; and updating one or both of the 3D model and the confidence level based at least in part on the second EM tracking data.
22. The system of any one of clauses 1-21, further comprising an imaging probe.
23. The system of clause 22, wherein the imaging probe is an ultrasound probe.
24. The system of any one of clauses 1-23, wherein the imaging probe comprises an electromagnetic tracking sensor.
25. The system of any one of clauses 1-24, further comprising a display configured to display the 3D model of the target anatomy.
26. A method for generating a three-dimensional (3D) model of a target anatomy, the method comprising: receiving a 3D model of the target anatomy, wherein the 3D model comprises at least one portion associated with a confidence level; receiving EM tracking data representing location of a device placed proximate the target anatomy; detecting a discrepancy between (i) an expected relation between the device and the target anatomy that is based on the 3D model and EM tracking data and (ii) a detected relation between the device and the target anatomy that is based on the 3D model and the EM tracking data; and updating the 3D model based at least in part on the detected discrepancy.
27. The method of clause 26, wherein at a first tracked device location, the expected relation is non-contact between the device and a tissue wall of the target anatomy, and the detected relation is contact between the device and the tissue wall, wherein updating comprises moving a representation of the tissue wall in the 3D model to correspond to the first tracked device location.
28. The method of clause 26 or 27, wherein at a second tracked device location, the expected relation is contact between the device and a tissue wall of the target anatomy, and the detected relation is non-contact between the device and the tissue wall, wherein updating comprises decreasing the confidence level associated with the representation of the tissue wall in the 3D model.
29. The method of any one of clauses 26-28, wherein updating comprises mapping a surface of the target anatomy by: contacting a plurality of points on the surface of the target anatomy with a tip of the device; and at each point in the plurality of points, registering the position of the tip of the device in 3D space with an EM sensor.
30. The method of any one of clauses 26-29, wherein updating comprises: receiving a plurality of two-dimensional (2D) images of the target anatomy, wherein the 2D images are collected by an imaging probe manipulated in multiple poses; receiving second EM tracking data representing pose of the imaging probe while collecting the plurality of 2D images; and generating a 3D model of the target anatomy based on the 2D images and the second EM tracking data.
31. The method of clause 30, wherein the 2D images are ultrasound images.
32. The method of any one of clauses 26-31, wherein the updating is performed prior to a procedure for placing a medical device proximate the target anatomy.
33. The method of any one of clauses 26-31, wherein the updating is performed during a procedure for placing a medical device proximate the target anatomy.
34. The method of any one of clauses 26-33, further comprising displaying the updated 3D model, displaying a representation of the confidence level, or both.
35. The method of any one of clauses 26-34, wherein the target anatomy is a heart.
36. The method of any one of clauses 26-35, further comprising: determining, based on the EM tracking data, a location of the medical device; and displaying an indication of the determined location of the medical device with respect to the target anatomy concurrently with a display of the 3D model.
37. The method of clause 36, wherein the indication of the determined location of the medical device comprises a visual indication of an anatomical region in which the medical device is located.
38. The method of clause 37, wherein the visual indication comprises at least one of a text label or color coding of the anatomical region in which the medical device is located.
39. The method of any one of clauses 36-38, further comprising displaying, with a first display scheme, an indication of a predetermined route for the medical device for navigation of the medical device toward a target site.
40. The method of clause 39, further comprising: determining, based on the EM tracking data, that the location of the medical device has deviated from the predetermined route; and modifying the indication of the predetermined route in accordance with a second display scheme that is different from the first display scheme.
41. A system for generating a three-dimensional (3D) model of a target anatomy, the system comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a 3D model of the target anatomy, wherein the 3D model comprises at least one portion associated with a confidence level; receiving EM tracking data representing location of a device placed proximate the target anatomy; detecting a discrepancy between (i) an expected relation between the device and the target anatomy that is based on the 3D model and the EM tracking data and (ii) a detected relation between the device and the target anatomy that is based on the 3D model and the EM tracking data; and updating the 3D model based at least in part on the detected discrepancy.
42. The system of clause 41, wherein at a first tracked device location, the expected relation is non-contact between the device and a tissue wall of the target anatomy, and the detected relation is contact between the device and the tissue wall, wherein the instructions that, when executed by the processor, cause the system to update the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising moving a representation of the tissue wall in the 3D model to correspond to the first tracked device location.
43. The system of clause 41 or 42, wherein at a second tracked device location, the expected relation is contact between the device and a tissue wall of the target anatomy, and the detected relation is non-contact between the device and the tissue wall, wherein the instructions that, when executed by the processor, cause the system to update the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising decreasing the confidence level associated with the representation of the tissue wall in the 3D model.
44. The system of any one of clauses 41-43, wherein the instructions that, when executed by the processor, cause the system to update the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising mapping a surface of the target anatomy by: contacting a plurality of points on the surface of the target anatomy with a tip of the device; and at each point in the plurality of points, registering the position of the tip of the device in 3D space with an EM sensor.
45. The system of any one of clauses 41-44, wherein the instructions that, when executed by the processor, cause the system to update the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a plurality of two-dimensional (2D) images of the target anatomy, wherein the 2D images are collected by an imaging probe manipulated in multiple poses; receiving second EM tracking data representing the poses of the imaging probe while collecting the plurality of 2D images; and generating a 3D model of the target anatomy based on the 2D images and the second EM tracking data.
46. The system of clause 45, wherein the 2D images are ultrasound images.
47. The system of any one of clauses 41-46, wherein the operations further comprise displaying the updated 3D model, displaying a representation of the confidence level, or both.
48. The system of any one of clauses 41-47, wherein the target anatomy is a heart.
49. The system of any one of clauses 41-48, further comprising an imaging probe.
50. The system of clause 49, wherein the imaging probe is an ultrasound probe.
51. The system of any one of clauses 41-50, wherein the imaging probe comprises an electromagnetic tracking sensor.
52. The system of any one of clauses 41-51, further comprising a display configured to display the 3D model of the target anatomy.
53. The system of any one of clauses 41-52, wherein the instructions that, when executed by the processor, cause the system to update the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: determining, based on the EM tracking data, a location of the medical device; and displaying an indication of the determined location of the medical device with respect to the target anatomy concurrently with a display of the 3D model.
54. The system of clause 53, wherein the indication of the determined location of the medical device comprises a visual indication of an anatomical region in which the medical device is located.
55. The system of clause 54, wherein the visual indication comprises at least one of a text label or color coding of the anatomical region in which the medical device is located.
56. The system of any one of clauses 53-55, further comprising displaying, with a first display scheme, an indication of a predetermined route for the medical device for navigation of the medical device toward a target site.
57. The system of clause 56, wherein the instructions that, when executed by the processor, cause the system to update the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: determining, based on the EM tracking data, that the location of the medical device has deviated from the predetermined route; and modifying the indication of the predetermined route in accordance with a second display scheme that is different from the first display scheme.
58. A method comprising: receiving a model of a target anatomy; generating a display of the model of the target anatomy; receiving EM tracking data representing a location of a medical device placed proximate the target anatomy; determining, based on the EM tracking data, the location of the medical device; modifying the display of the model of the target anatomy to include an indication of the location of the medical device with respect to the target anatomy; and presenting the modified display of the model of the target anatomy.
59. The method of clause 58, wherein the model is a three-dimensional (3D) model.
60. The method of clause 58 or 59, wherein modifying the display of the model of the target anatomy comprises adding a visual indication of an anatomical region in which the medical device is located.
61. The method of clause 60, wherein the visual indication comprises at least one of a text label or color coding of the anatomical region in which the medical device is located.
62. The method of any one of clauses 58-61, further comprising displaying, with a first display scheme, an indication of a predetermined route for the medical device for navigation of the medical device toward a target site.
63. The method of clause 62, further comprising: determining, based on the EM tracking data, that the location of the medical device has deviated from the predetermined route; and modifying the indication of the predetermined route in accordance with a second display scheme that is different from the first display scheme.
64. A system comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a model of a target anatomy; generating a display of the model of the target anatomy; receiving EM tracking data representing a location of a medical device placed proximate the target anatomy; determining, based on the EM tracking data, the location of the medical device; modifying the display of the model of the target anatomy to include an indication of the location of the medical device with respect to the target anatomy; and presenting the modified display of the model of the target anatomy.
65. The system of clause 64, wherein the model is a three-dimensional (3D) model.
66. The system of clause 64 or 65, wherein the instructions that, when executed by the processor, cause the system to modify the display of the model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: adding a visual indication of an anatomical region in which the medical device is located.
67. The system of clause 66, wherein the visual indication comprises at least one of a text label or color coding of the anatomical region in which the medical device is located.
68. The system of any one of clauses 64-67, wherein the instructions that, when executed by the processor, cause the system to modify the display of the model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: displaying, with a first display scheme, an indication of a predetermined route for the medical device for navigation of the medical device toward a target site.
69. The system of clause 68, wherein the instructions that, when executed by the processor, cause the system to modify the display of the model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: determining, based on the EM tracking data, that the location of the medical device has deviated from the predetermined route; and modifying the indication of the predetermined route in accordance with a second display scheme that is different from the first display scheme.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale. Instead, emphasis is placed on illustrating clearly the principles of the present disclosure.
[0008] FIG. 1 is a schematic of an example system for generating and/or using a 3D model of anatomy in accordance with the present technology.
[0009] FIG. 2 is a flowchart of an example method for generating a 3D model of target anatomy, in accordance with the present technology.
[0010] FIG. 3 A is a schematic illustration of a process for generating images of a target anatomy from an imaging probe and collecting pose information for the imaging probe while generating the images, in accordance with the present technology.
[0011] FIG. 3B depicts an example segmented image of a target anatomy, segmented in accordance with the present technology.
[0012] FIG. 3C depicts an example point cloud of a target anatomy, in accordance with the present technology.
[0013] FIGS. 3D and 3E depict various views of a mesh model converted from a point cloud of a target anatomy, in accordance with the present technology.
[0014] FIG. 4 is a flowchart of an example method for generating a 3D model of a target anatomy including determining a confidence level of at least a portion of the 3D model, in accordance with the present technology.
[0015] FIGS. 5A and 5B are schematics of example display schemes for displaying a representation of a confidence level of at least a portion of a 3D model, in accordance with the present technology.
[0016] FIG. 6 is a flowchart of an example method for generating a 3D model of a target anatomy including updating the 3D model, in accordance with the present technology.
[0017] FIGS. 7 A and 7B are schematic illustrations of an example process of updating a 3D model, in accordance with the present technology.
[0018] FIGS. 8 A and 8B are schematic illustrations of an example process of updating a 3D model, in accordance with the present technology.
[0019] FIG. 9 is a flowchart of an example method for modifying a display of a 3D model of a target anatomy, in accordance with the present technology.
[0020] FIGS. 10A and 10B are schematic illustrations of example processes of displaying an indication of a location of a medical device with respect to a displayed 3D model, in accordance with the present technology.
[0021] FIGS. 11A and 11B are schematic illustrations of example processes of displaying an indication of a predetermined route for a medical device and a modified indication of the predetermined route for a medical device, respectively, in accordance with the present technology.
[0022] FIG. 12 is a schematic illustration of an example process of modifying a display of a 3D model of a target anatomy for emphasizing different portions of a target anatomy, in accordance with the present technology.
DETAILED DESCRIPTION
[0023] The present technology relates to methods and systems for generating 3D models of anatomy from medical imaging (e.g., ultrasound). Some variations of the present technology, for example, are directed to generating 3D models of target anatomy relating to a medical treatment or other procedure. Specific details of several variations of the technology are described below with reference to FIGS. 1-12.
I. Systems for Generating 3D Models of Anatomy
[0024] FIG. 1 is a schematic diagram of an example system 100 for generating a 3D model of anatomy (e.g., 3D reconstruction). As shown in FIG. 1, the system 100 for generating a 3D model of anatomy can include a model reconstruction system 110. The model reconstruction system 110 can be communicatively coupled to an imaging system 120, one or more user interface devices 130, and/or one or more medical devices 140, such as through a suitable wired and/or wireless connection (e.g., communications network 150) that enables information transfer. Additionally or alternatively, information transfer between one or more of these components of the system 100 can occur in a discrete manner, such as by storing information from one component (e.g., data) on a memory device that is readable by another component. Although the model reconstruction system 110, the imaging system 120, the user interface devices 130, and medical devices 140 are illustrated schematically in FIG. 1 as separate components, in some variations two or more of these components can be embodied in a single device. For example, any two, any three, or all four of the model reconstruction system 110, the imaging system 120, the user interface device(s) 130, and medical device(s) 140 can be combined in a single device. However, in some variations each of the model reconstruction system 110, the imaging system 120, the user interface device(s) 130, and medical device(s) 140 can be embodied in a respective separate device.
[0025] The model reconstruction system 110 can include one or more processors 112, and one or more memory devices 114 having instructions stored therein. The one or more memory devices 114 can include any suitable computer-readable medium such as RAMs, ROMs, flash memory, EEPROMs, optical devices (e.g., CD or DVD), hard drives, floppy drives, or any suitable device. The memory device 114 can include instructions (e.g., organized in one or more modules) for performing 3D model generation in accordance with any of the methods described in further detail herein (e.g., generating an initial 3D model, predicting a confidence level in one or more portions of the 3D model, and/or updating the 3D model).
[0026] The processor 112 may be configured to execute the instructions that are stored in the memory device 114 such that, when it executes the instructions, the processor 112 performs aspects of the methods herein. The instructions may be executed by computer-executable components integrated with a software application, applet, host, server, network, website, communication service, communication interface, hardware, firmware, software elements of a user computer or mobile device, smartphone, or any suitable combination thereof. In some variations, the one or more processors 112 can be incorporated into a computing device or system such as a cloud-based computer system, a mainframe computer system, a grid computer system, or other suitable computer system.
[0027] The model reconstruction system 110 can be configured to receive one or more images of a target anatomy that are generated by the imaging system 120. The imaging system 120 can be configured to obtain images in any suitable imaging modality. In some variations, the imaging system 120 can be configured to generate two-dimensional (2D) images along various imaging windows or fields of view. For example, in some variations, the imaging system 120 is an ultrasound imaging system configured to generate a plurality of 2D images, with each 2D image being collected by performing an ultrasound sweep with an imaging probe 122 (e.g., through translation and/or rotation of the imaging probe 122) at one or more imaging windows. The imaging probe 122 can also include at least one electromagnetic (EM) sensor 124 or other tracking sensor configured to track the pose (including position and/or orientation) of the imaging probe 122 as the imaging probe 122 collects the 2D images. Additionally or alternatively, the EM sensor 124 can be included in or coupled to an EM navigation tool that is moved in tandem with the imaging probe 122 to collect representative tracking data for the imaging probe 122. In some variations, the imaging probe 122 can be manipulated external to a patient’s body (e.g., transthoracic ultrasound for imaging a heart).
[0028] In some variations, the system 100 can further include one or more user interface device(s) 130, which functions to allow a user to interact with the model reconstruction system 110 and/or the imaging system 120. For example, the user interface device 130 can be configured to receive user input (e.g., for controlling input of information between the model reconstruction system 110 and the imaging system 120) and/or provide information to a user. For example, in some variations the user interface device 130 can include a display (e.g., monitor, goggles, glasses, AR/VR device, etc.) for displaying a 3D model to a user.
[0029] The imaging system, model reconstruction system, and/or user interface device can be further communicatively coupled to one or more medical device(s) 140. The medical device 140 can be any suitable device operable in a medical procedure (e.g., implant delivery, mapping, tissue ablation, etc.). For example, in some variations, the medical device 140 can be a delivery catheter for placing an implant in the target anatomy (e.g., cardiac pacemaker device in a patient’s heart). The medical device 140 can include at least one EM sensor 144 or other tracking sensor configured to track the pose (including position and/or orientation) of the medical device, such as when the medical device 140 is being navigated in patient anatomy. In some variations, as described in further detail below, a graphical representation of the medical device 140 can be generated and displayed concurrently with the generated 3D model of anatomy, with the representation of the medical device 140 located in relation to anatomy (e.g., in real-time) based on tracking information from the EM sensor 144 and/or other tracking sensors.
[0030] Other aspects of the use of the system 100 are described in further detail below with respect to methods for generating 3D models of target anatomy.
II. Methods for Generating 3D Models of Anatomy
[0031] Described herein are example methods for generating 3D models of anatomy of a patient, including generating an initial 3D model, predicting a confidence level in one or more portions of the 3D model, and/or updating the 3D model. The methods can be used to generate 3D models of any suitable anatomical regions, such as organs (e.g., heart, lung, kidney) and/or other tissue (e.g., vasculature). The 3D model can be constructed from multiple 2D images taken from various viewing angles, as further described below. Although the 2D images are primarily described herein as ultrasound images, it should be understood that in other variations the 2D images can include any suitable imaging modality.
[0032] The generated and/or updated 3D model of a target anatomy can be used in a variety of applications, including but not limited to aiding navigation of patient anatomy for purposes of a medical treatment and/or other medical procedure (e.g., implant placement, tissue ablation). For example, the 3D model of a target anatomy can be used to guide identification of a target implant location and/or guide navigation for placement of a cardiac device (e.g., cardiac pacemaker), a stent device (e.g., coronary stent, aortic stent, etc.), or other suitable implant, etc. In some variations, at least a portion of the method for generating a 3D model of a target anatomy can be performed prior to a medical procedure (e.g., an initial 3D model can be generated and/or a 3D model can be updated based on images collected prior to a medical procedure). Additionally or alternatively, at least a portion of the method for generating a 3D model of a target anatomy can be performed intra-procedurally and/or post-procedurally (e.g., an initial 3D model can be generated and/or a 3D model can be updated based on new images and/or other information collected during and/or after a medical procedure).
A. Generating a 3D model
[0033] As shown in FIG. 2, a method 200 for generating a 3D model of a target anatomy of a patient can include receiving a plurality of 2D images of the target anatomy that are generated by an imaging probe manipulated in multiple poses 210, receiving tracking data representing the pose of the imaging probe while generating the plurality of 2D images 220, and generating a 3D model of the target anatomy based on the 2D images and the tracking data 230. These processes can, for example, be performed at least in part by the model reconstruction system 110 described above, and/or other components of the system 100. For example, the memory device 114 can store instructions that, when executed by one or more processors 112, cause the model reconstruction system 110 to perform one or more aspects of the method 200.
[0034] In some variations, the plurality of 2D images can be collected by an imaging probe. In one illustrative example as shown in FIG. 3A, the imaging probe is an ultrasound probe 322 configured to collect 2D ultrasound images. The ultrasound probe 322 can, in some variations, be an example of imaging probe 122 shown in FIG. 1. In some variations, the ultrasound probe 322 can be configured to collect 2D ultrasound images generated from one or more ultrasound sweeps with an ultrasound probe from various vantage points. For example, the ultrasound probe 322 can be manipulated in various poses (e.g., translated and/or rotated) at one or more imaging windows. The imaging probe can be manipulated external to the patient to gather images. For example, in some variations the imaging probe can be an ultrasound probe configured to perform a transthoracic ultrasound for imaging a heart. In some variations in which the method 200 is performed intra-procedurally, the 2D images can also depict one or more medical devices (e.g., navigational catheter 340) relative to patient anatomy, to further help guide placement and/or other operation of the depicted medical device(s).
[0035] In some variations, as shown in FIG. 3 A, as the 2D images are collected by the ultrasound probe 322, a live stream of the 2D ultrasound images 314 can be provided on a display (e.g., user interface device 130) to a user operating the ultrasound probe 322.
[0036] Pose tracking data for the imaging probe can be collected concurrently while the imaging probe is collecting the 2D images. For example, as shown in FIG. 3A, the ultrasound probe 322 can include one or more tracking sensors 324 configured to collect position and/or orientation data for the imaging probe as the imaging probe collects 2D images. Accordingly, the position and/or orientation of the imaging probe is known when the imaging probe collects each 2D image. The tracking data for the imaging probe can be transformed into a known spatial orientation of each 2D image, and used for generating a 3D model as further described below. In some variations, the one or more tracking sensors 324 can include an EM sensor (e.g., EM sensor 124) used in conjunction with a suitable EM tracking system.
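The following is a minimal sketch (in Python) of how recorded probe pose tracking data can be used to assign a 3D location to each pixel of a 2D image. It assumes the tracker reports the probe pose as a rotation matrix and translation vector, and that the image plane coincides with the probe's local x-y plane with a known pixel spacing; these conventions are assumptions for illustration rather than requirements of the present disclosure.

```python
import numpy as np

def pixel_to_world(row, col, pixel_spacing_mm, probe_rotation, probe_translation):
    """Map a 2D image pixel (row, col) to a 3D point in the EM tracking frame."""
    # Pixel location in the probe's local image plane (z = 0 in probe coordinates).
    local_point = np.array([col * pixel_spacing_mm, row * pixel_spacing_mm, 0.0])
    # Rigid transform into the tracking (world) frame using the recorded probe pose.
    return probe_rotation @ local_point + probe_translation

# Example: identity orientation, probe at the tracking-frame origin, 0.2 mm pixels.
R = np.eye(3)
t = np.zeros(3)
print(pixel_to_world(100, 200, 0.2, R, t))  # -> [40. 20.  0.]
```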
[0037] As described above, the method 200 further includes generating a 3D model of target anatomy based on the 2D images and the tracking data 230. As shown in FIG. 2, in some variations, the method 200 can include generating a segmentation mask for each of the 2D images 232. The segmentation masks can be applied to generate segmented pixels from the 2D images corresponding to anatomical features. In one example in which the target anatomy is a heart, the segmentation masks can delineate between various features of the heart such as the aortic arch, Bachmann’s bundle, left atrium, atrioventricular (AV) bundle (bundle of His), left ventricle, left bundle branch, Purkinje fibers, right bundle branch, right ventricle, right atrium, posterior internodal, middle internodal, AV node, anterior internodal, sinoatrial (SA) node, ventricular apex, veins, valves, trunks, blood pool, and/or other suitable cardiac anatomy. For example, FIG. 3B depicts an example segmented ultrasound image 332 of a heart, including segmentation of the image into blood pools corresponding to a right atrium, a right ventricle, a left ventricle, and a left atrium of the heart.
[0038] The segmentation masks can be generated in an automated, semi-automated, or manual manner. For example, in some variations the segmentation masks can be predicted using a suitable pre-trained machine learning algorithm that receives 2D images (e.g., 2D ultrasound images) as input. The machine learning algorithm can, for example, be trained with training data including ultrasound images that are annotated for the target anatomy (e.g., heart chambers for cardiac target anatomy). In some variations, the machine learning algorithm includes a convolutional neural network or vision transformer model, though it may include any suitable machine learning algorithm. Additionally or alternatively, the segmentation masks can be generated using one or more other suitable computer vision algorithms, and/or with manual annotation by a user.
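The following is a minimal sketch (in Python) of applying a pre-trained segmentation model to a single 2D frame. The model interface, normalization, and number of classes are illustrative assumptions and do not correspond to any particular network described herein.

```python
import numpy as np

def predict_segmentation_mask(image_2d, model):
    """Predict a per-pixel class label map (0 = background) for one 2D frame."""
    x = np.asarray(image_2d, dtype=np.float32)
    x = (x - x.mean()) / (x.std() + 1e-6)      # simple intensity normalization
    logits = model(x[np.newaxis, ...])         # assumed output shape: (num_classes, H, W)
    return np.argmax(logits, axis=0)           # per-pixel argmax -> label map

# Stand-in "model" returning random logits, purely to make the sketch runnable.
rng = np.random.default_rng(0)
fake_model = lambda x: rng.standard_normal((5, x.shape[1], x.shape[2]))
mask = predict_segmentation_mask(np.zeros((64, 64)), fake_model)
print(mask.shape)  # (64, 64)
```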
[0039] As shown in FIG. 2, the method 200 can further include projecting the segmentation masks as a point cloud in 3D space 234. Based on the probe tracking data (e.g., position and orientation of the probe), the 2D images and their corresponding segmentation masks can be projected into 3D space. As a result of this projection, the segmented pixels from each image, when displayed in 3D, collectively form a point cloud in 3D space. For example, FIG. 3C depicts an example aggregation of segmentation as a point cloud 334.
[0040] The method 200 can further include converting the point cloud to a 3D model 236, such as a mesh model (e.g., including triangle surfaces connecting nodes or points in the point cloud). The conversion of the point cloud to a 3D model can, for example, be performed with a suitable algorithm such as the Marching Cubes algorithm, and/or other suitable algorithm. The 3D model can be suitable for easier processing and/or manipulation. For example, the method 200 can further include performing post-processing of the 3D model 238, such as noise reduction (e.g., smoothing lines and/or surfaces), deleting one or more regions of the modeled anatomy, etc.
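The following is a minimal sketch (in Python) of converting a projected point cloud into a mesh by voxelizing it into an occupancy volume and surfacing it with the Marching Cubes implementation in scikit-image. The voxel size and padding are illustrative assumptions, not values taken from the present disclosure.

```python
import numpy as np
from skimage import measure

def point_cloud_to_mesh(points_mm, voxel_size_mm=1.0):
    """Convert an Nx3 point cloud to a triangle mesh (vertices in mm, face indices)."""
    pts = np.asarray(points_mm, dtype=float)
    origin = pts.min(axis=0)
    idx = np.floor((pts - origin) / voxel_size_mm).astype(int)
    shape = idx.max(axis=0) + 3                                # pad so the surface closes
    volume = np.zeros(shape, dtype=float)
    volume[idx[:, 0] + 1, idx[:, 1] + 1, idx[:, 2] + 1] = 1.0  # occupancy grid
    verts, faces, _normals, _values = measure.marching_cubes(volume, level=0.5)
    return (verts - 1.0) * voxel_size_mm + origin, faces

# Example with a small synthetic point cloud.
cloud = np.random.default_rng(0).uniform(0.0, 20.0, size=(500, 3))
vertices, triangles = point_cloud_to_mesh(cloud, voxel_size_mm=2.0)
print(vertices.shape, triangles.shape)
```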
[0041] Furthermore, the 3D model (with or without post-processing) can be suitable for display. Accordingly, in some variations the method 200 can further include displaying the 3D model of the target anatomy 240 on a suitable display (e.g., user interface device 130). For example, FIGS. 3D and 3E depict various views of a 3D model 336 of the heart generated with the method 200. In some variations, the displayed 3D model can be manipulated by a user (e.g., rotated, enlarged, viewed along one or more cross-sectional planes, etc.). For example, FIG. 3D depicts the 3D model 336 in a first view, while FIG. 3E depicts the 3D model 336 in a second view rotated relative to that shown in the first view. Additionally or alternatively, the 3D model can be stored on a suitable memory device.
[0042] Furthermore, in some variations a graphical representation of a medical device (e.g., catheter, such as a delivery or navigational catheter) can be generated when the medical device is located (e.g., navigated) in or near the target anatomy. As described elsewhere herein, the medical device can include one or more tracking sensors (e.g., EM sensors) that provide position and/or orientation information of the medical device. In some variations, the graphical representation of the medical device can be located relative to the target anatomy based on tracking data from the tracking sensors of the medical device (e.g., an EM sensor 144 or other suitable tracking sensor), and the graphical representation of the medical device can be displayed concurrently with the 3D model on a display (e.g., user interface device 130) accordingly, such that a user can view the location of the medical device relative to the target anatomy on the display. A graphical representation of a current (or present) location of the medical device and/or past locations (e.g., path tracking) of the medical device in the patient can be displayed. In some variations, the graphical representation of the medical device can be overlaid on the 3D model. For example, as shown in FIGS. 3D and 3E, a graphical representation 350 of a delivery catheter can be overlaid and/or manipulated with the 3D model 336.
[0043] The display of the graphical representation of the medical device can be toggled on and off (e.g., selected for display or non-display). For example, a user can select through a user control on a user interface device (e.g., user interface device 130) to display the graphical representation of the medical device, or select through the user control to hide the graphical representation of the medical device. As another example, the graphical representation of the medical device may be automatically displayed or hidden based on one or more parameters, such as automatically displayed based on proximity to a location of interest (e.g., implant location) in the target anatomy. Additionally or alternatively, a user can select a display scheme (e.g., color, transparency, etc.) of the graphical representation of the medical device on the display.
B. Determining a confidence level in at least a portion of a 3D model
[0044] A 3D model of patient anatomy can include uncertainty associated with potential inaccuracy. Various factors can contribute to uncertainty in a 3D model, such as image resolution, viewing angles of the imaging probe, and/or motion of target anatomy (e.g., due to anatomical behavior such as heartbeat, breathing, etc., and/or patient movements). Uncertainty in the model is generally undesirable, as it can result in making a medical procedure more challenging and/or result in adverse events. For example, in variations in which the 3D model is used for placement of a cardiac implant in the heart, uncertainty in the model can result in a physician making unwanted contact with the heart wall, or inadvertently implanting the cardiac implant in the wrong location.
[0045] In some variations, generating a 3D model can include determining a confidence level in one or more portions of the 3D model. For example, as shown in FIG. 4, a method 400 can include receiving a 3D model of a target anatomy 410 (e.g., a 3D model that is received in method 400 can be generated as described above with respect to method 200 and FIGS. 2-3E), and determining a confidence level in a portion of the 3D model of the target anatomy 420. The method 400 can further include displaying the 3D model of the target anatomy with a representation of the confidence level 430. Such display of the confidence level (or equivalently, an indication of uncertainty in at least a portion of the 3D model) may enable a physician to make more informed clinical judgments before, during and/or after a medical procedure. Confidence level can be expressed quantitatively (e.g., along a numerical scale, such as between a range of 1 and 10, between 1 and 100, etc.) and/or qualitatively (e.g., “high”, “medium”, “low”), and/or in any suitable manner. These processes can, for example, be performed at least in part by the model reconstruction system 110 described above, and/or other components of the system 100. For example, the memory device 114 can store instructions that, when executed by one or more processors 112, cause the model reconstruction system 110 to perform one or more aspects of the method 400.
[0046] A confidence level (or conversely, an uncertainty level) in one or more portions of the 3D model can be determined in one or more various manners. For example, in some variations, determining the confidence level in a portion of the 3D model 420 can include evaluating variance of brightness (e.g., relative brightness and/or absolute brightness) in one or more pixels among the 2D images of the target anatomy (i.e., the 2D images on which the 3D model is based) over time. Variance of brightness of a pixel over multiple image frames can generally be correlated to motion of the target tissue at that location. Additionally or alternatively, variance of brightness of a pixel over multiple image frames can be indicative of one or more artifacts existing in one or more of the 2D images. Accordingly, evaluating the variance in pixel brightness over a period of time (e.g., over 1 second, over 2 seconds, over 5 seconds, over 10 seconds, etc.) can provide an indication of the amount of tissue movement that is occurring at that location and/or an indication of the existence of at least one image artifact, and hence reflect an uncertainty in the accuracy of the 3D model at that location for any given point in time.
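The following is a minimal sketch (in Python) of this variance-based approach, computing per-pixel brightness variance over a stack of frames and mapping it to a 0-1 confidence value. The frame format and the scaling constant used to normalize the variance are illustrative assumptions.

```python
import numpy as np

def pixelwise_confidence(frames, max_expected_variance=500.0):
    """Return an HxW confidence map: low temporal brightness variance -> high confidence."""
    frames = np.asarray(frames, dtype=float)
    variance = frames.var(axis=0)                      # per-pixel variance over time
    return 1.0 - np.clip(variance / max_expected_variance, 0.0, 1.0)

# Example: 30 frames of 64x64 noisy brightness values.
frames = np.random.default_rng(0).normal(128.0, 20.0, size=(30, 64, 64))
print(pixelwise_confidence(frames).mean())
```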
[0047] As another example, determining the confidence level in a portion of the 3D model 420 can additionally or alternatively include correlating one or more pixels among the 2D images or 3D model to a region of the target anatomy known to experience a high amount of movement. For example, certain heart chambers are known to generally experience a certain amount of movement (e.g., compared to an apex of the heart, which is relatively stable). Accordingly, in some variations one or more portions of the 3D model corresponding to certain segmented regions of the heart (or other suitable target anatomy) can reflect an uncertainty in the accuracy of the 3D model at those portions for any given point in time.
[0048] As another example, determining the confidence level in a portion of the 3D model 420 can additionally or alternatively include comparing information about a portion of the target anatomy as depicted from multiple views. For example, consider that there may be a moderate level of confidence (e.g., first probability) that a first 2D image (taken from a first viewing window) depicts a left ventricle of a heart, but the first 2D image depicts an occluded portion of the left ventricle due to a rib bone partially blocking the field of view. A second 2D image (taken from a second viewing window, such as with the imaging probe rotated to a different angle) may provide a clearer view of the occluded portion of the left ventricle, resulting in a higher level of confidence (e.g., second probability) that the second 2D image depicts the portion of the left ventricle that is occluded in the first 2D image. The confidence level associated with the corresponding left ventricle portion of the 3D model can thus be based at least in part on the first and second probabilities, such as by taking the average (e.g., mean, median, etc.) of the first and second probabilities, or maximum of the first and second probabilities, etc.
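The following is a minimal sketch (in Python) of fusing per-view probabilities into a single confidence level by taking their mean or maximum, as described above; the example probability values are hypothetical.

```python
import statistics

def fuse_view_confidences(per_view_probabilities, mode="mean"):
    """Combine per-view probabilities for one region into a single confidence level."""
    if mode == "max":
        return max(per_view_probabilities)
    return statistics.mean(per_view_probabilities)

# Example: an occluded first view (0.55) and a clearer rotated view (0.90).
print(fuse_view_confidences([0.55, 0.90]))          # 0.725
print(fuse_view_confidences([0.55, 0.90], "max"))   # 0.9
```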
[0049] Additionally or alternatively, determining the confidence level in a portion of the 3D model 420 can include applying a trained machine learning algorithm configured to predict where in the 3D model the reconstruction is likely to have low confidence (or more uncertainty).
[0050] As described above, the method 400 can further include displaying a representation of the confidence level 430 in one or more portions of the 3D model. The representation of the confidence level can be displayed as part of the displayed 3D model itself (e.g., similar to displaying the 3D model of the target anatomy 240 described above), such as on a suitable display (e.g., user interface device 130).
[0051] For example, displaying a representation of the confidence level 430 can include displaying a first portion of the 3D model with a first confidence level in accordance with a first display scheme that is different than a display scheme for the rest of the 3D model (e.g., including a second portion of the 3D model that has a second confidence level different from the first confidence level). Display schemes associated with confidence level can differ in color, transparency level, patterning, sharpness, and/or the like.
[0052] For example, confidence level in any particular portion of the 3D model can be represented by displaying pixels in that portion of the 3D model with visual properties varying along a color spectrum corresponding to confidence level. As shown in the schematic of FIG. 5A, for example, a first portion 510 of a 3D model 500 having a first confidence level can be displayed in a first color (e.g., red to indicate lower confidence), while a second portion 520 of the 3D model 500 having a second confidence level can be displayed in a second color (e.g., green to indicate higher confidence). In some variations, colored pixels of a displayed 3D model can be presented against a dark (e.g., black) background, to help improve visual contrast and enable better visualization of the 3D model.
[0053] As another example, confidence level in any particular portion of the 3D model can be represented by displaying pixels in that portion of the 3D model with visual properties varying along a transparency (or opacity) spectrum corresponding to confidence level. For example, in the schematic of FIG. 5B, the first portion 510 of the 3D model can be displayed in a first transparency (e.g., greater transparency, or lower opacity), while a second portion 520 of the 3D model can be displayed in a second transparency (e.g., lower transparency, or higher opacity) compared to the first portion 510. In this example, more opaque regions of the 3D model having higher confidence can generally be more visible than more transparent regions of the 3D model having lower confidence, which may help focus a user’s attention more on those regions of the 3D model that have higher confidence. In some variations, the transparency of an entire model segment (including border and interior of a portion of the 3D model) can vary with confidence level. Alternatively, in some variations, the transparency of the interior of a portion of the 3D model can vary with confidence level while the border of a portion of the 3D model can be visualized in an opaque manner to better help delineate the particular portion of the 3D model.
[0054] As another example, confidence level in any particular portion of the 3D model can be represented by displaying pixels in that portion of the 3D model with visual properties varying along a sharpness (or blurriness) spectrum corresponding to confidence level. In this example, portions of the 3D model having higher confidence can generally appear sharper and more visible, while portions of the 3D model having lower confidence can generally appear more out of focus or blurry, which may help focus a user’s attention more on those regions of the 3D model that have higher confidence. In some variations, the sharpness of an entire model segment (including border and interior of a portion of the 3D model) can vary with confidence level. Alternatively, in some variations, the sharpness level of an interior of a portion of the 3D model can vary with confidence level while the border of a portion of the 3D model can be visualized in a sharp or clear manner to better help delineate the particular portion of the 3D model.
[0055] Various spectrums of the display scheme can be generally continuous, or can be discrete with buckets each corresponding to a range of confidence level (e.g., a first color shade associated with a lower confidence level of 0-20, a second color shade associated with a higher confidence level of 21-100, on a confidence level scale of 0-100).
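A minimal sketch of such a bucketed mapping, assuming the two-bucket 0-100 scale of the example above, could look like the following (the bucket names are placeholders):

```python
def confidence_bucket(confidence: int) -> str:
    """Map a confidence value on a 0-100 scale to a discrete display bucket."""
    if not 0 <= confidence <= 100:
        raise ValueError("confidence must be on a 0-100 scale")
    return "low_confidence_shade" if confidence <= 20 else "high_confidence_shade"

print(confidence_bucket(15))  # low_confidence_shade
print(confidence_bucket(75))  # high_confidence_shade
```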
[0056] In some variations, a user can (e.g., through user interface device 130) control the display of the 3D model (including the nature of the visualization of the confidence level) and/or the graphical representation of a medical device. For example, the display of the 3D model can vary between one or more modes. In one example mode, one or more portions of the 3D model having lower confidence levels can be displayed together with portions of the 3D model having higher confidence levels, but distinguished from those portions with higher confidence levels by varying visual parameters (e.g., with color, transparency, patterning, sharpness, etc. as described above). The user can, in some variations, select the visual parameter to be varied in accordance with confidence level. In another example mode, one or more portions of the 3D model having confidence level below a threshold value can be omitted from display, such that only portions of the 3D model having at least a certain confidence level are displayed to a user. The threshold value can be defined by a user (e.g., selectable from a menu of options, entered directly, etc.), and/or can be a default threshold value.
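One hypothetical way to realize the two display modes described above is sketched below; the data layout (a list of portion identifiers paired with confidence values) and the default threshold are assumptions of the sketch:

```python
def visible_portions(portions, mode="varied", threshold=0.5):
    """Select which portions of the 3D model to display.

    `portions` is assumed to be a list of (portion_id, confidence) pairs.
    In "varied" mode all portions are shown (their appearance can be varied
    elsewhere); in "thresholded" mode, portions below `threshold` are omitted.
    """
    if mode == "varied":
        return [pid for pid, _ in portions]
    if mode == "thresholded":
        return [pid for pid, conf in portions if conf >= threshold]
    raise ValueError(f"unknown mode: {mode}")

portions = [("septum", 0.9), ("apex", 0.8), ("appendage", 0.3)]
print(visible_portions(portions, mode="thresholded"))  # ['septum', 'apex']
```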
[0057] As shown for example in FIGS. 5A and 5B, in some variations the method 400 can include displaying a graphical representation of a medical device 530 (e.g., a catheter, such as a delivery or navigational catheter), similar to that described above with respect to method 200. For example, the graphical representation of the medical device can be located relative to the target anatomy based on tracking data from the tracking sensors of the medical device, and the graphical representation of the medical device can be displayed concurrently with the 3D model on a display accordingly, such that a user can view the location of the medical device relative to the target anatomy on the display. In some variations, the appearance of the graphical representation of the medical device 530 can vary based on proximity of the medical device 530 to a region of the target anatomy whose corresponding portion of the 3D model is uncertain (e.g., has below a threshold confidence level). For example, a distal end of the graphical representation of a catheter can change appearance in a visual parameter (e.g., color, transparency, patterning, sharpness, etc.) as it approaches or is located in portion(s) of the 3D model that are sufficiently uncertain (e.g., has below a threshold confidence level).
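As a purely illustrative sketch of such proximity-based appearance changes, the following assumes the low-confidence portions of the model are available as a list of 3D points and uses placeholder colors and an assumed proximity threshold:

```python
import math

def tip_display_color(tip_xyz, low_confidence_points, proximity_mm=5.0):
    """Return a display color for the tracked catheter tip.

    If the tip is within `proximity_mm` of any model point whose confidence is
    below a threshold, the tip is highlighted; otherwise a default color is used.
    """
    for point in low_confidence_points:
        if math.dist(tip_xyz, point) <= proximity_mm:
            return "amber"   # assumed highlight color near uncertain regions
    return "white"           # assumed default color

print(tip_display_color((0, 0, 0), [(3, 0, 0), (40, 0, 0)]))  # amber
```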
[0058] In some variations, a user can (e.g., through user interface device 130) control the display of the graphical representation of a medical device. For example, a user can select or deselect a display mode in which the appearance of the graphical representation of the medical device varies based on proximity of the medical device to an anatomical region that is depicted with low confidence in the 3D model (e.g., having a confidence level below a threshold value). The user can, in some variations, select the visual parameter (e.g., color, transparency, patterning, sharpness, etc.) to be varied in accordance with confidence level. Additionally or alternatively, like that described above with respect to display of the 3D model, a threshold value of confidence level can be defined by a user (e.g., selectable from a menu of options, entered directly, etc.), and/or can be a default threshold value.

c. Updating a 3D model
[0059] As described above, a 3D model of patient anatomy can include uncertainty associated with potential inaccuracy that is generally undesirable. Accordingly, in some instances it may be desirable to update the 3D model (especially during a medical procedure, such as during an implant placement procedure) so as to reduce the level of uncertainty (increase the confidence level) in one or more portions.
[0060] Accordingly, in some variations, generating a 3D model can include updating a 3D model. For example, as shown in FIG. 6, a method 600 can include receiving a 3D model of a target anatomy 610 (e.g., a 3D model that is received in method 600 can be generated as described above with respect to method 200, and/or as described above with respect to method 400) and receiving tracking data (e.g., EM tracking data) representing location of a medical device or other device placed proximate the target anatomy 620, such as a catheter (e.g., delivery or navigational catheter). In some variations, the medical device can be an example of the medical device 140 of FIG. 1, and can include a tracking sensor such as EM sensor 144. The method 600 can further include detecting a discrepancy 630 between (i) an expected relation between the medical device and the target anatomy, and (ii) a detected relation between the medical device and the target anatomy. The method 600 can further include updating the 3D model based at least in part on the detected discrepancy 640, such as by adjusting a geometrical feature of the 3D model and/or a confidence level associated with one or more portions of the 3D model. Following the updating of the 3D model, the method can further include displaying the updated 3D model 650 (e.g., on user interface device(s) 130), which can, for example, be similar to other display processes described herein (e.g., displaying process 240 described above with respect to method 200, and/or displaying process 430 described above with respect to method 400). These processes can, for example, be performed at least in part by the model reconstruction system 110 described above, and/or other components of the system 100. For example, the memory device 114 can store instructions that, when executed by one or more processors 112, cause the model reconstruction system 110 to perform one or more aspects of the method 600.
[0061] In some variations, detecting the discrepancy 630 is based at least in part on information derived from the 3D model and/or the tracking data for the medical device. An expected relation or detected relation can include any interaction (or lack of interaction) between the medical device and the target, such as contact. Contact can be detected, for example, through a force sensor, pressure sensor, impedance sensor, optical sensor, and/or other suitable sensor(s), etc. on the medical device. In one illustrative example, the medical device can be a catheter and an expected relation or detected relation can include contact between a distal tip of the catheter and a tissue wall (or lack of contact between a distal tip of the catheter and a tissue wall). Where the detected relation differs from what is expected based on the 3D model, the 3D model can be updated as further described below.
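The examples that follow can be summarized, purely as a hypothetical sketch, by comparing the contact state expected from the 3D model with the contact state reported by the device's sensor(s); the return labels below are placeholders of the sketch:

```python
from typing import Optional

def detect_discrepancy(expected_contact: bool, detected_contact: bool) -> Optional[str]:
    """Compare expected contact (from the 3D model at the tracked tip location)
    with detected contact (from force/pressure/impedance/optical sensing)."""
    if expected_contact and not detected_contact:
        return "expected_contact_missing"   # wall may lie farther out than modeled
    if detected_contact and not expected_contact:
        return "unexpected_contact"         # wall may lie closer in than modeled
    return None                             # model and sensing agree

print(detect_discrepancy(expected_contact=True, detected_contact=False))
```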
[0062] For example, in some instances, based on the tracked location of the medical device in anatomy relative to the 3D model, contact between the medical device and a tissue surface of the target anatomy may be expected. However, if no contact is detected, then in response to this discrepancy, the confidence level of the portion of the 3D model representing the tissue surface can be adjusted. Additionally or alternatively, the tissue surface of the 3D model can be adjusted (e.g., moved) in accordance with the tracked location of the medical device at the time of contact. For example, as shown in the schematic of FIG. 7A, a catheter 730 can be navigated within a target anatomy (e.g., heart) represented by a 3D model 700. If a distal portion of the catheter 730 is detected to have crossed or extended beyond a tissue wall 712 of the 3D model, then the confidence level of the modeled wall 712 can be reduced. Alternatively, the modeled tissue wall can be expanded outwards to a new modeled tissue wall 714 to include the catheter 730 (FIG. 7B) (and the confidence level of the new modeled tissue wall 714 can be increased).
[0063] As another example, in some instances, based on the tracked location of the medical device in anatomy relative to the 3D model, it may be expected that the medical device does not contact a tissue wall of the target anatomy. However, if contact is detected (e.g., through a force sensor, pressure sensor, impedance sensor, optical sensor, etc. on the medical device), then in response to this discrepancy, the tissue surface of the 3D model can be adjusted (e.g., moved) in accordance with the tracked location of the medical device at the time of contact. Additionally or alternatively, the confidence level of the portion of the 3D model representing the tissue surface can be adjusted. For example, as shown in the schematic of FIG. 8A, a catheter 830 can be navigated within a target anatomy (e.g., heart) represented by a 3D model 800. If a distal portion of the catheter 830 is detected to have contacted a tissue wall before reaching the modeled tissue wall 812, then the modeled tissue wall can be contracted inwards to a new modeled tissue wall 814 at the tracked location of the distal portion of the catheter 830 at the time of contact (and optionally, the confidence level of the new modeled tissue wall 814 can be increased). Alternatively, the confidence level of the modeled tissue wall 812 can be reduced (without adjusting the modeled tissue wall).

[0064] In some variations, the 3D model can be updated intra-procedurally with the aid of a medical device (e.g., catheter, such as a delivery or navigational catheter). The medical device can, in some variations, be an example of medical device 140 of FIG. 1. For example, a catheter can be navigated such that its distal end portion touches an internal tissue surface (e.g., heart wall), and the location of the distal end portion of the catheter can be registered. This process can be repeated at multiple locations in the target anatomy until a sufficient surface of the target anatomy is mapped. As another example, the method 600 can include receiving additional 2D images (e.g., ultrasound images) intra-procedurally and generating a new partial or full 3D model based on the additional intra-procedural 2D images (e.g., as described above with respect to method 200 and/or method 400). In some variations, the user interface can aid in the image collecting process by guiding a user (e.g., ultrasound operator) toward particular imaging windows that will provide the most information, based on the current level of uncertainty in the 3D model.
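A minimal, hypothetical sketch of locally adjusting a modeled tissue wall toward the tracked tip location is shown below; the point-cloud representation of the wall, the influence radius, and the linear blending rule are assumptions of the sketch rather than features of any particular embodiment:

```python
import numpy as np

def update_wall(wall_points: np.ndarray, tip: np.ndarray,
                discrepancy: str, influence_mm: float = 10.0) -> np.ndarray:
    """Blend wall vertices near the tracked tip toward the tip location.

    `wall_points` is an (N, 3) array of modeled wall vertices.  Vertices within
    `influence_mm` of the tip are pulled toward it, which expands the wall
    outward if the tip crossed the modeled wall, or contracts it inward if
    contact occurred before the modeled wall was reached.
    """
    if discrepancy not in ("expected_contact_missing", "unexpected_contact"):
        return wall_points
    distances = np.linalg.norm(wall_points - tip, axis=1)
    weights = np.clip(1.0 - distances / influence_mm, 0.0, 1.0)[:, None]
    return wall_points * (1.0 - weights) + tip * weights

wall = np.array([[10.0, 0.0, 0.0], [12.0, 0.0, 0.0], [40.0, 0.0, 0.0]])
print(update_wall(wall, np.array([14.0, 0.0, 0.0]), "unexpected_contact"))
# the two nearby vertices move toward x = 14; the distant vertex is unchanged
```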
III. Methods for using a 3D model
[0065] In some instances, it may be desirable to utilize a model of patient anatomy (e.g., target anatomy) to improve live registration of a medical device, for example to help a user better understand the current location (e.g., position, orientation, etc.) of a medical device (e.g., a catheter, such as a delivery or navigational catheter) relative to the patient anatomy. The model may be used, for example, intra-procedurally (e.g., during delivery of an implant or other medical device, during a catheter navigational procedure, etc.). The model can, for example, be a 3D model that is generated and/or updated in accordance with any one or more of the methods described herein (e.g., method 200, method 400, method 600, etc.). However, it should be understood that in some variations, the model can be any suitable model generated through other techniques. Furthermore, although the model is primarily described below as a 3D model, it should be understood that in some variations, a 2D model can be used in a similar manner for aiding a user’s understanding of the current location of a medical device.
[0066] Accordingly, in some variations, a method for using a 3D model includes leveraging the 3D model to illustrate a location of a medical device. For example, as shown in FIG. 9, a method 900 can include receiving a model of a target anatomy 910 (e.g., a 3D model that is received in method 900 can be generated as described above with respect to method 200, as described above with respect to method 400, and/or as described above with respect to method 600) and generating a display of the model of the target anatomy 912. The method 900 can further include receiving tracking data (e.g., EM tracking data) representing location of a medical device or other device placed proximate the target anatomy 920, such as a catheter (e.g., delivery or navigational catheter). In some variations, the medical device can be an example of the medical device 140 of FIG. 1, and can include a tracking sensor such as EM sensor 144. The method 900 can further include determining, based on the tracking data, the location of the medical device 930, and modifying the display of the model of the target anatomy to include an indication of the location of the medical device with respect to the target anatomy 940. The method 900 can further include presenting the modified display of the model of the target anatomy 950 (e.g., on user interface device(s)), which can, for example, be similar to other display processes described herein (e.g., displaying process 240 described above with respect to method 200, displaying process 430 described above with respect to method 400, and/or displaying process 650 as described above with respect to method 600). These processes can, for example, be performed at least in part by the model reconstruction system 110 described above, and/or other components of the system 100. For example, the memory device 114 can store instructions that, when executed by one or more processors 112, cause the model reconstruction system 110 to perform one or more aspects of the method 900.
[0067] In some variations, the indication of the location of the medical device can be a visual indication. The location can be an anatomical region that is a generally known or technically defined region. For example, in variations in which the 3D model is a heart, the labeled anatomical region may be a cardiac chamber (left atrium, right atrium, left ventricle, right ventricle). Additionally or alternatively, in some instances, the location can be an anatomical region that is another predefined (e.g., user-defined) region, such as an area of interest based on suspected or known disease state, or a cell in a grid.
[0068] For example, the visual indication may include a displayed label. For example, FIG. 10A depicts a schematic of a display including a 3D model 1000 of a target anatomy and a visual indication 1040 in the form of a label identifying an anatomical region or other portion of the target anatomy in which the medical device 1030 or a portion thereof (e.g., a distal end or portion of a catheter) is located. In some variations, the visual indication may include any suitable form of a label, such as a text label (e.g., full name of the anatomical region, abbreviated form of the anatomical region, code representative of the anatomical region, etc.). The label can be displayed overlaid with the 3D model, such as overlaid over the displayed anatomical region (e.g., “RV” displayed over the right ventricle portion of a cardiac 3D model), overlaid over the 3D model adjacent to the displayed anatomical region, and/or overlaid adjacent to the displayed medical device (e.g., distal portion of a catheter) where a user is likely to notice the visual indication (e.g., as the user’s attention may be focused on the displayed representation of the medical device during a procedure).
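As a simplified, hypothetical sketch of deriving such a label from the tracking data, the following tests the tracked tip against axis-aligned bounding boxes; a real system would instead use the segmented geometry of the 3D model, and the region names and coordinates are illustrative only:

```python
def region_label(tip_xyz, region_bounds):
    """Return a short label for the region containing the tracked tip.

    `region_bounds` maps a label (e.g., "RV") to an axis-aligned bounding box
    given as ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    """
    x, y, z = tip_xyz
    for label, ((x0, y0, z0), (x1, y1, z1)) in region_bounds.items():
        if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1:
            return label
    return None

bounds = {"RV": ((0, 0, 0), (30, 30, 30)), "LV": ((30, 0, 0), (60, 30, 30))}
print(region_label((12, 10, 5), bounds))  # RV
```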
[0069] As another example, the visual indication of the location of the medical device can additionally or alternatively include a distinctive display scheme of a portion of the 3D model corresponding to the location of the medical device. For example, FIG. 10B depicts a schematic of a display including a 3D model 1050 of a target anatomy and a visual indication 1040 in the form of a display scheme for an anatomical region or other portion of the 3D model that differs from a display scheme for the rest (or other portions) of the 3D model. For example, the visual indication of the location of the medical device can include depicting an anatomical region of the target anatomy in the 3D model with a different color, transparency level, patterning, sharpness, and/or the like.
[0070] In some variations, the method 900 can further include displaying an indication of a predetermined route for the medical device for navigation of the medical device toward a target. For example, the predetermined route can be a proposed path along which a catheter can travel toward a target site within the target anatomy. The predetermined route can be generated, for example, pre-procedurally based on prior imaging and/or intra-procedurally based on information in a 3D model generated intra-procedurally, and can be determined by a user, a suitable machine learning algorithm, etc. The predetermined route can, for example, be represented by a line (e.g., solid line, dashed or dotted line, etc.) displayed overlaid over the 3D model so as to be seen proximate a representation of the medical device as the medical device is navigated within the target anatomy.
[0071] In some variations, the method can further include determining, based on the EM tracking data, that the location of the medical device has deviated from the predetermined route, and modifying the indication of the predetermined route in accordance with a second display scheme that is different from the first display scheme so as to indicate the deviation from the predetermined route. For example, FIG. 11A depicts a schematic of a display including a 3D model 1100 of a target anatomy and an indication of a predetermined route 1140. The indication of the predetermined route can, for example, include a line having a first display scheme (e.g., color, patterning, line weight, shading, etc.) and along which a medical device can be guided. During a procedure, EM tracking information relating to the location of the medical device can be analyzed to determine the path along which the medical device has traveled, and sufficiently high deviation of the medical device’s traveled path from the predetermined route (e.g., based on a distance offset above a threshold value and/or angular offset above a threshold value, etc.) can be communicated by a modified indication. For example, FIG. 11B depicts a schematic of a display including the 3D model 1100 of the target anatomy and a modified indication of the predetermined route 1140 having a second display scheme different from the first display scheme. In one example implementation, the indication 1140 of the predetermined route can be displayed in green while the medical device’s tracked path is consistent with the predetermined route, and then displayed in red if the medical device’s tracked path deviates from the predetermined route. However, the second display scheme for the modified indication 1140 can differ from the first display scheme in any suitable manner (e.g., different color, patterning, line weight, and/or shading, etc.).
[0072] In some variations, the amount of deviation of the medical device’s traveled path from the predetermined route can be communicated with multiple display schemes. For example, the indication 1140 of the predetermined route can be displayed in green while the medical device’s tracked path is consistent with the predetermined route, displayed in orange if the medical device’s tracked path deviates from the predetermined route by a moderate amount (e.g., above a first threshold value but below a second threshold value), and displayed in red if the medical device’s tracked path deviates from the predetermined route by a significant amount (e.g., above the second threshold value). Two, three, four, five, or more than five levels of deviation may be communicated with a corresponding number of display schemes. Again, the display schemes may differ in any suitable manner (e.g., different color, patterning, line weight, and/or shading, etc.).
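A minimal sketch of classifying deviation from a predetermined route into such display schemes is shown below; approximating the deviation as the distance to the nearest waypoint, and the particular thresholds and colors, are assumptions of the sketch:

```python
import math

def route_deviation_scheme(tip_xyz, route_points,
                           moderate_mm=3.0, significant_mm=6.0):
    """Classify how far the tracked tip has strayed from a predetermined route.

    `route_points` is a polyline of (x, y, z) waypoints; the deviation is
    approximated as the distance from the tip to the nearest waypoint.
    """
    deviation = min(math.dist(tip_xyz, p) for p in route_points)
    if deviation <= moderate_mm:
        return "green"    # consistent with the route
    if deviation <= significant_mm:
        return "orange"   # moderate deviation
    return "red"          # significant deviation

route = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
print(route_deviation_scheme((5.0, 4.0, 0.0), route))  # orange
```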
[0073] Any of the above indications (e.g., indications of location of the medical device, indications of a predetermined route, depiction of a region of interest in the target anatomy in a distinctive display scheme, etc.) can be selectively displayed in conjunction with the 3D model of the target anatomy. For example, a user may selectively toggle on or off the display of such indication(s) based on their preferences (e.g., can toggle off the display of one or more indication(s) to reduce visual distractions and/or obscuring of features of the 3D model). Additionally or alternatively, the display of such indication(s) can be automatically or semi-automatically toggled on or off based on immediate relevancy. For example, an indication of location of the medical device can be selectively omitted from the display until the medical device gets within a threshold distance of a target site and/or selectively omitted from the display unless the medical device is navigated to a vulnerable area of the target anatomy that may be more susceptible to tissue damage. Additionally or alternatively, as another example, an indication of a predetermined route for the medical device can be selectively omitted from the display until the medical device’s path sufficiently deviates from the predetermined route.
[0074] Additionally or alternatively, in some variations, the display of a 3D model can be modified to emphasize or otherwise highlight one or more regions of interest. For example, certain anatomical regions of the 3D model can be color coded and/or include text labels to help a user identify certain anatomy or other regions in the model (e.g., intra-procedurally while a medical device is being navigated within the target anatomy). For example, in variations in which the 3D model is of a heart, the displayed 3D model can include color coding of one or more heart chambers, a left side and/or right side of the heart, and/or one or more valve planes. For example, FIG. 12 illustrates a schematic of a 3D model of a heart in which a first portion of the heart 1240 is shown with a first display scheme (e.g., color, patterning, transparency, shading, etc.) and second portion of the heart 1242 is shown with a second display scheme.
Conclusion
[0075] Although many of the embodiments are described above with respect to systems, devices, and methods for generating a 3D model of a heart, the technology is applicable to other applications and/or other approaches, such as generating a 3D model of other suitable target anatomy. Moreover, other embodiments in addition to those described herein are within the scope of the technology. Additionally, several other embodiments of the technology can have different configurations, components, or procedures than those described herein. A person of ordinary skill in the art, therefore, will accordingly understand that the technology can have other embodiments with additional elements, or the technology can have other embodiments without several of the features shown and described above with reference to FIGS. 1-8B.
[0076] The descriptions of embodiments of the technology are not intended to be exhaustive or to limit the technology to the precise form disclosed above. Where the context permits, singular or plural terms may also include the plural or singular term, respectively. Although specific embodiments of, and examples for, the technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the technology, as those skilled in the relevant art will recognize. For example, while steps are presented in a given order, alternative embodiments may perform steps in a different order. The various embodiments described herein may also be combined to provide further embodiments.

[0077] As used herein, the terms “generally,” “substantially,” “about,” and similar terms are used as terms of approximation and not as terms of degree, and are intended to account for the inherent variations in measured or calculated values that would be recognized by those of ordinary skill in the art.
[0078] Moreover, unless the word “or” is expressly limited to mean only a single item exclusive from the other items in reference to a list of two or more items, then the use of “or” in such a list is to be interpreted as including (a) any single item in the list, (b) all of the items in the list, or (c) any combination of the items in the list. Additionally, the term "comprising" is used throughout to mean including at least the recited feature(s) such that any greater number of the same feature and/or additional types of other features are not precluded. It will also be appreciated that specific embodiments have been described herein for purposes of illustration, but that various modifications may be made without deviating from the technology. Further, while advantages associated with certain embodiments of the technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology. Accordingly, the disclosure and associated technology can encompass other embodiments not expressly shown or described herein.


CLAIMS

I/We claim:
1. A method for generating a three-dimensional (3D) model of a target anatomy, the method comprising: receiving a plurality of two-dimensional (2D) images of the target anatomy, wherein the 2D images are generated by an imaging probe manipulated in multiple poses; receiving electromagnetic (EM) tracking data representing the poses of the imaging probe while generating the plurality of 2D images; generating a 3D model of the target anatomy based on the 2D images and the EM tracking data; determining a confidence level in a first portion of the 3D model of the target anatomy; and displaying the 3D model of the target anatomy and a representation of the confidence level in the first portion of the 3D model.
2. The method of claim 1, wherein generating the 3D model comprises: generating a segmentation mask for each of the 2D images, wherein each of the segmentation masks segments one or more features of interest in the 2D image; and projecting the segmentation masks in 3D space.
3. The method of claim 2, wherein generating each of the segmentation masks comprises predicting each of the segmentation masks by applying a machine learning algorithm.
4. The method of claim 2 or 3, wherein generating the 3D model further comprises: projecting the segmentation masks as a point cloud of pixels from the 2D images in 3D space; and converting the point cloud to a mesh model of the target anatomy.
5. The method of any one of claims 1-4, wherein determining the confidence level in the first portion of the 3D model comprises evaluating variance of brightness in one or more pixels among the 2D images over a period of time.
6. The method of any one of claims 1-5, wherein determining the confidence level in the first portion of the 3D model comprises correlating one or more pixels among the 2D images to a region of the target anatomy known to experience a high amount of movement.
7. The method of any one of claims 1-6, wherein displaying comprises displaying the first portion of the 3D model with a first display scheme and a second portion of the 3D model different from the first portion with a second display scheme, wherein the first and second display schemes are different.
8. The method of any one of claims 1-7, further comprising: receiving second EM tracking data representing location of a device placed proximate the target anatomy; based at least in part on the second EM tracking data, detecting a discrepancy between (i) an expected relation between the device and the target anatomy and (ii) a detected relation between the device and the target anatomy; and updating one or both of the 3D model or the confidence level based at least in part on the second EM tracking data.
9. A system for generating a three-dimensional (3D) model of a target anatomy, the system comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a plurality of two-dimensional (2D) images of the target anatomy, wherein the 2D images are generated by an imaging probe manipulated in multiple poses; receiving electromagnetic (EM) tracking data representing the poses of the imaging probe while generating the plurality of 2D images; generating a 3D model of the target anatomy based on the 2D images and the EM tracking data; determining a confidence level in a first portion of the 3D model of the target anatomy; and displaying, on a display, the 3D model of the target anatomy and a representation of the confidence level in the first portion of the 3D model.
10. A method for generating a three-dimensional (3D) model of a target anatomy, the method comprising: receiving a 3D model of a target of the target anatomy, wherein the 3D model comprises at least one portion associated with a confidence level; receiving EM tracking data representing location of a device placed proximate the target anatomy; detecting a discrepancy between (i) an expected relation between the device and the target anatomy that is based on the 3D model and EM tracking data and (ii) a detected relation between the device and the target anatomy that is based on the 3D model and the EM tracking data; and updating the 3D model based at least in part on the detected discrepancy.
11. The method of claim 10, wherein at a first tracked device location, the expected relation is non-contact between the device and a tissue wall of the target anatomy, and the detected relation is contact between the device and the tissue wall, wherein updating comprises moving a representation of the tissue wall in the 3D model to correspond to the first tracked device location.
12. The method of claim 10 or 11, wherein at a second tracked device location, the expected relation is contact between the device and a tissue wall of the target anatomy, and the detected relation is non-contact between the device and the tissue wall, wherein updating comprises decreasing the confidence level associated with the representation of the tissue wall in the 3D model.
13. The method of any one of claims 10-12, wherein updating comprises mapping a surface of the target anatomy by: contacting a plurality of points on the surface of the target anatomy with a tip of the device; and at each point in the plurality of points, registering the position of the tip of the device in 3D space with an EM sensor.
14. The method of any one of claims 10-13, wherein updating comprises: receiving a plurality of two-dimensional (2D) images of the target anatomy, wherein the 2D images are collected by an imaging probe manipulated in multiple poses; receiving second EM tracking data representing pose of the imaging probe while collecting the plurality of 2D images; and generating a 3D model of the target anatomy based on the 2D images and the second EM tracking data.
15. A system for generating a three-dimensional (3D) model of a target anatomy, the system comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a 3D model of a target of the target anatomy, wherein the 3D model comprises at least one portion associated with a confidence level; receiving EM tracking data representing location of a device placed proximate the target anatomy; detecting a discrepancy between (i) an expected relation between the device and the target anatomy that is based on the 3D model and the EM tracking data and (ii) a detected relation between the device and the target anatomy that is based on the 3D model and the EM tracking data; and updating the 3D model based at least in part on the detected discrepancy.