
WO2025226871A1 - Automatic guidance of image collection for 3d reconstruction of target anatomy - Google Patents

Automatic guidance of image collection for 3d reconstruction of target anatomy

Info

Publication number
WO2025226871A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
imaging probe
imaging
region
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/026060
Other languages
French (fr)
Inventor
Brett D. JACKSON
Tarek D. HADDAD
Elliot C. SCHMIDT
Brian T. HOWARD
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Medtronic Inc
Original Assignee
Medtronic Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Medtronic Inc
Publication of WO2025226871A1


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical

Definitions

  • the present technology relates to automatic guidance of image collection for 3D reconstruction of target anatomy.
  • a 3D model (e.g., reconstruction) of patient anatomy can be helpful for a variety of medical applications.
  • 3D models of anatomy can be used to help identify a desired implant location in the anatomy.
  • 3D models of anatomy can be used to help determine a navigational approach for accessing certain anatomy.
  • 3D models can be generated from images capturing an anatomy of interest.
  • Completeness of a 3D model of an anatomy is important for the 3D model’s use in successful medical applications.
  • the completeness of a 3D model largely depends on the completeness of images from multiple viewing windows and angles.
  • an imaging operator may inadvertently fail to adequately capture the target anatomy in the images forming the basis of the 3D model.
  • the subject technology relates to systems and methods for guiding imaging collection for 3D reconstruction of a target anatomy.
  • Such guidance can, for example, help an imaging operator obtain images that capture the target anatomy from a sufficient number of viewing windows, angles, etc. that can form the basis of a more complete 3D reconstruction of the target anatomy.
  • a method comprising: receiving a three-dimensional (3D) model of a target anatomy; determining a region of missing data in the 3D model; generating a suggested imaging probe pose for imaging a portion of the target anatomy corresponding to the region of missing data; receiving one or more supplemental 2D images collected based on the suggested imaging probe pose; and updating the 3D model based on the supplemental 2D images.
  • determining the region of missing data in the 3D model comprises fitting a mesh surface around an external surface of the 3D model and identifying a deficiency of the fitted mesh surface.
  • determining the region of missing data in the 3D model comprises: comparing the 3D model to a statistical shape model (SSM) of the target anatomy; and identifying a portion of the 3D model that is in low agreement with a corresponding portion of the SSM.
  • the region of missing data is a first region of the 3D model, wherein the method further comprises: displaying the first region of missing data with a first display scheme; and displaying a second region of the 3D model different from the first region with a second display scheme, wherein the first and second display schemes are different.
  • a system comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a three-dimensional (3D) model of a target anatomy; determining a region of missing data in the 3D model; generating a suggested imaging probe pose to collect one or more supplemental 2D images of a portion of the target anatomy corresponding to the region of missing data; receiving the one or more supplemental 2D images; and updating the 3D model based on the supplemental 2D images.
  • the region of missing data is a first region of the 3D model, and the instructions, when executed by the processor, cause the system to perform operations comprising: displaying the first region of missing data with a first display scheme; and displaying a second region of the 3D model different from the first region with a second display scheme, wherein the first and second display schemes are different.
  • a method comprising: receiving a target imaging probe pose associated with a desired imaging window for imaging a subject; receiving electromagnetic (EM) tracking data representing a current pose of an imaging probe while imaging a subject; and guiding the current pose of the imaging probe toward the target imaging probe pose by displaying, on a display, a first graphical representation of the target imaging probe pose and displaying, on the display, a second graphical representation of the current pose of the imaging probe.
  • guiding the current pose of the imaging probe comprises displaying the first graphical representation with a first display scheme, and displaying the second graphical representation with a second display scheme, wherein the first and second display schemes are different.
  • a system comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a target imaging probe pose associated with a desired imaging window for imaging a subject; receiving electromagnetic (EM) tracking data representing a current pose of an imaging probe while imaging a subject; and guiding the current pose of the imaging probe toward the target imaging probe pose by displaying, on a display, a first graphical representation of the target imaging probe pose and displaying, on a display, a second graphical representation of the current pose of the imaging probe.
  • the imaging probe is an ultrasound probe.
  • the imaging probe comprises an electromagnetic tracking sensor.
  • FIG. 1 is an illustrative schematic of an example system for providing automatic guidance of image collection for generating a 3D model of target anatomy, in accordance with the present technology.
  • FIG. 2 is a flowchart of an example method for providing automatic guidance of image collection for generating a 3D model of target anatomy, in accordance with the present technology.
  • FIG. 3 depicts an illustrative schematic of an example process of determining a region of missing data in a 3D model of target anatomy, in accordance with the present technology.
  • FIG. 4 depicts an example 3D model of target anatomy having a region of potential missing data, in accordance with the present technology.
  • FIG. 5 depicts an illustrative schematic of an example process of determining a region of missing data in a 3D model of target anatomy, in accordance with the present technology.
  • FIG. 6 depicts an illustrative schematic of an example process of generating a suggested imaging probe pose for imaging a portion of the target anatomy, in accordance with the present technology.
  • FIG. 7 is an illustrative schematic of an example system for providing automatic guidance of image collection for generating a 3D model of target anatomy, in accordance with the present technology.
  • FIG. 8 depicts a flowchart of an example method for providing automatic guidance of image collection for generating a 3D model of target anatomy, in accordance with the present technology.
  • FIG. 9 depicts an example process for generating images of a target anatomy from an imaging probe and collecting pose information for the imaging probe while generating the images, in accordance with the present technology.
  • FIG. 10 is an illustrative schematic of an example process for guiding a current pose of an imaging probe toward a target imaging probe pose, in accordance with the present technology.
  • FIG. 11 depicts example display screens for displaying graphical representations of a target imaging probe pose and a current imaging probe pose, and for displaying medical imaging obtained by an imaging probe, in accordance with the present technology.
  • the present technology relates to automatic guidance of image collection for 3D reconstruction of target anatomy. Some variations of the present technology, for example, are directed to the guidance of image collection for 3D reconstruction of the heart. Specific details of several variations of the technology are described below with reference to FIGS. 1- 11.
  • 3D reconstructions are also referred to herein as 3D models.
  • an operator may have inadvertently captured images that do not collectively adequately image the entire anatomy to be modeled, which may occur, for example, because of obstructions (e.g., bony structures blocking field of view), poor imaging probe angles and/or sweeps, etc.
  • the methods and systems described herein can automatically determine what portion(s) of the 3D model are missing, and then automatically determine imaging probe poses (including location and/or orientation) that are suitable to obtain the missing image data.
  • the missing image data can be used to update (e.g., complete, or at least further complete) the 3D model of the target anatomy.
  • the imaging probe poses can be communicated to the operator (e.g., via a display) to help guide the operator in manipulating the imaging probe appropriately to obtain the desired additional image data.
  • the collected images can be used to generate a 3D model that can be used in a variety of applications, including but not limited to aiding navigation of patient anatomy for purposes of a medical treatment and/or other medical procedure (e.g., implant placement, tissue ablation).
  • the 3D model of a target anatomy can be used to guide identification of a target implant location and/or guide navigation for placement of a cardiac device (e.g., cardiac pacemaker), a stent device (e.g., coronary stent, aortic stent, etc.), or other suitable implant, etc.
  • the 2D images and/or 3D models can be of any suitable anatomical regions, such as organs (e.g., heart, lung, kidney) and/or other tissue (e.g., vasculature).
  • the 3D model can be constructed from multiple 2D images taken from various viewing angles, as further described below. Although the 2D images are primarily described herein as ultrasound images, it should be understood that in other variations the 2D images can include any suitable imaging modality.
  • automatic guidance of image collection for 3D modeling of a target anatomy can be provided in a training scenario.
  • an operator can be a trainee learning how to manipulate an imaging probe to obtain certain desired images of target anatomy.
  • Target imaging probe poses corresponding to desired imaging windows (or images, fields of view, etc.) can be stored and/or communicated to an operator (e.g., via a display) to help guide the operator in manipulating the imaging probe appropriately to obtain the desired images. Additional aspects of automatic guidance of image collection are described in further detail herein.
  • At least a portion of the methods for providing automatic guidance for image collection can be performed prior to a medical procedure (e.g., an initial 3D model can be generated and/or a 3D model can be updated based on images collected prior to a medical procedure). Additionally or alternatively, at least a portion of the methods can be performed intra-procedurally and/or post-procedurally (e.g., an initial 3D model can be generated and/or a 3D model can be updated based on new images and/or other information collected during and/or after a medical procedure).
  • FIG. 1 is a schematic diagram of an example system 100 for generating a 3D model of anatomy.
  • the system 100 for generating a 3D model of anatomy can include a model reconstruction system 110.
  • the model reconstruction system 110 can be communicatively coupled to an imaging system 120, a catheter mapping system 126, one or more user interface devices 130, and/or an imaging probe guidance system 140, such as through a suitable wired and/or wireless connection (e.g., communications network 150) that enables information transfer.
  • information transfer between one or more of these components of the system 100 can occur in a discrete manner, such as by storing information from one component (e.g., data) on a memory device that is readable by another component.
  • Although the model reconstruction system 110, the imaging system 120, the catheter mapping system 126, the user interface device(s) 130, and the imaging probe guidance system 140 are illustrated schematically in FIG. 1 as separate components, in some variations two or more of these components can be embodied in a single device. For example, any two, any three, any four, or all five of the model reconstruction system 110, the imaging system 120, the catheter mapping system 126, the user interface device(s) 130, and the imaging probe guidance system 140 can be combined in a single device. However, in some variations each of the model reconstruction system 110, the imaging system 120, the catheter mapping system 126, the user interface device(s) 130, and the imaging probe guidance system 140 can be embodied in a respective separate device.
  • the model reconstruction system 110 can function to generate a 3D model based on images from the imaging system 120 and/or mapping data from the catheter mapping system 126, and/or analyze the 3D model (e.g., to identify potentially missing portion(s) of the 3D model).
  • the model reconstruction system 110 can be configured to receive one or more images of a target anatomy that are generated by the imaging system 120 and/or mapping data representative of a target anatomy generated by the catheter mapping system 126.
  • the model reconstruction system 110 can be configured to receive only one or more images generated by the imaging system 120, only mapping data generated by the catheter mapping system 126 (e.g., through manipulation of a catheter 128 having one or more suitable mapping sensors), or both.
  • the imaging system 120 can be configured to obtain images in any suitable imaging modality.
  • the imaging system 120 can be configured to generate two-dimensional (2D) images along various imaging windows or fields of view.
  • the imaging system 120 is an ultrasound imaging system configured to generate a plurality of 2D images, with each 2D image being collected by performing an ultrasound sweep with an imaging probe 122 (e.g., through translation and/or rotation of the imaging probe 122) at one or more imaging windows.
  • the imaging probe 122 can also include at least one electromagnetic (EM) sensor 124 or other tracking sensor configured to track the pose (including position and/or orientation) of the imaging probe 122 as the imaging probe 122 collects the 2D images.
  • the EM sensor 124 can be included or coupled to an EM navigation tool that is moved in tandem with the imaging probe 122 to collect representative tracking data for the imaging probe 122.
  • the imaging probe 122 can be manipulated external to a patient’s body (e.g., transthoracic ultrasound for imaging a heart).
  • the model reconstruction system 110 can include at least one processor 112 and at least one memory device 114.
  • the processor 112 may be configured to execute the instructions that are stored in the memory device 114 such that, when it executes the instructions, the processor 112 performs aspects of the methods described herein.
  • the instructions may be executed by computer-executable components integrated with a software application, applet, host, server, network, website, communication service, communication interface, hardware, firmware, software elements of a user computer or mobile device, smartphone, or any suitable combination thereof.
  • the one or more processors 112 can be incorporated into a computing device or system such as a cloud-based computer system, a mainframe computer system, a grid-computer system, or other suitable computer system.
  • the system 100 can further include one or more user interface device(s) 130, which functions to allow a user to interact with the model reconstruction system 110 and/or the imaging system 120.
  • the user interface device 130 can be configured to receive user input (e.g., for controlling input of information between the model reconstruction system 110 and the imaging system 120) and/or provide information to a user.
  • the user interface device 130 can include a display (e.g., monitor, goggles, glasses, AR/VR device, etc.) for displaying a 3D model to a user.
  • the imaging system, catheter mapping system, model reconstruction system, and/or user interface device can be further communicatively coupled to an imaging probe guidance system 140.
  • the imaging probe guidance system 140 functions to generate, receive, and/or communicate to an operator of the imaging system 120 various information relating to a suggested pose (e.g., location and/or orientation) for an imaging probe for obtaining certain desired images (e.g., with the ultimate goal of obtaining the images themselves, and/or with the ultimate goal of obtaining the images on which at least a portion of a 3D model can be based, such as for completion of a missing portion of the 3D model). Further details of the imaging probe guidance system 140 are described herein.
  • the imaging probe guidance system 140 can include at least one processor 142 and at least one memory device 144, which can be similar to the processor 112 and/or the memory device 114, respectively.
  • the model reconstruction system 110 and the imaging probe guidance system 140 can incorporate the same physical instances of processor(s) and/or memory device(s).
  • FIG. 2 is a flowchart of an example method 200 for providing automatic guidance of image collection for 3D reconstruction of a target anatomy (also referred to herein as image collection for generating a 3D model of a target anatomy).
  • the method 200 can, for example, be performed at least in part by the model reconstruction system 110 and/or the imaging probe guidance system 140.
  • method 200 can include receiving a 3D model of a target anatomy 210, determining a region of missing data in the 3D model 220, and generating a suggested imaging probe pose for imaging a portion of the target anatomy corresponding to the region of missing data 230.
  • the method 200 can further include providing the suggested imaging probe pose to a user 240, receiving one or more supplemental two- dimensional (2D) images collected based on the suggested imaging probe pose 250 (e.g., while an operator manipulates an imaging probe toward the suggested imaging probe pose, or after an operator manipulates an imaging probe toward the suggested imaging probe pose), and updating the 3D model based on the supplemental 2D images 260.
  • the method 200 can omit generating and/or providing a suggested imaging probe pose, and instead rely on an operator’s knowledge and experience to properly manipulate the imaging probe to obtain the desired images corresponding to the region of missing data that can be communicated to the operator.
  • the method 200 can include receiving a 3D model of a target anatomy 210.
  • the 3D model can, for example, be obtained with and/or from an imaging system (e.g., imaging system 120).
  • the received 3D model can be generated at least in part from a plurality of 2D images collected with an imaging probe, such as imaging probe 122.
  • the imaging probe is an ultrasound probe configured to collect 2D ultrasound images.
  • the ultrasound probe can be configured to collect 2D ultrasound images generated from one or more ultrasound sweeps with an ultrasound probe from various vantage points.
  • the ultrasound probe can be manipulated in various poses (e.g., translated and/or rotated) at one or more imaging windows.
  • the imaging probe can be manipulated external to the patient to gather images.
  • the imaging probe can be an ultrasound probe configured to perform a transthoracic ultrasound for imaging a heart.
  • the 2D images can also depict one or more medical devices (e.g., a navigational catheter) relative to patient anatomy, to further help guide placement and/or other operation of the depicted medical device(s).
  • the 2D images can be segmented with one or more segmentation masks, and the segmentation masks can be projected into 3D space to form the 3D model.
  • segmentation masks can be applied to generate segmented pixels from the 2D images corresponding to anatomical features.
  • the segmentation masks can delineate between various features of the heart such as the aortic arch, Bachmann's bundle, left atrium, atrioventricular (AV) bundle (bundle of His), left ventricle, right and left bundle branches, Purkinje fibers, right ventricle, right atrium, posterior internodal, middle internodal, AV node, anterior internodal, sinoatrial (SA) node, apex, veins, valves, trunks, blood pool, and/or other suitable cardiac anatomy.
  • the segmentation masks can be generated in an automated manner, semi-automated, or manual manner.
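  • As a rough illustration of the projection step described above, the following sketch lifts segmented pixels from one 2D image into a shared 3D frame using the probe pose tracked at acquisition time. The function name, the 4x4 homogeneous pose convention, and the pixel-spacing parameter are illustrative assumptions; the patent does not prescribe a particular representation.

```python
import numpy as np

def project_mask_to_3d(mask, probe_pose, pixel_spacing_mm):
    """Lift segmented pixels of one 2D image into 3D world coordinates.

    mask:             (H, W) boolean segmentation mask for one feature.
    probe_pose:       4x4 homogeneous transform from the image plane to
                      the world frame, taken from EM tracking data at
                      acquisition time (assumed convention).
    pixel_spacing_mm: physical size of one pixel, in mm.
    """
    rows, cols = np.nonzero(mask)
    # Segmented pixels lie on the image plane (z = 0 in the probe frame).
    pts_image = np.stack([cols * pixel_spacing_mm,
                          rows * pixel_spacing_mm,
                          np.zeros(rows.size),
                          np.ones(rows.size)], axis=0)
    pts_world = probe_pose @ pts_image       # (4, N) homogeneous points
    return pts_world[:3].T                   # (N, 3) world-frame points
```

Accumulating such point clouds across sweeps yields the 3D projection around which a mesh surface can later be fitted.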
  • the received 3D model can be generated at least in part from mapping data collected by a mapping catheter (e.g., catheter 128 of the catheter mapping system 126).
  • a composite 3D model may be generated by combining imaging data (e.g., from imaging system 120) and mapping data (e.g., from catheter mapping system 126).
  • the received 3D model may be generated only from imaging data or only mapping data.
  • the method 200 can include determining a region of missing data in the 3D model 220, where missing data corresponds to one or more portions of the 3D model that is incomplete (e.g., due to insufficient images).
  • the region of missing data in the 3D model can be determined in various manners.
  • the region of missing data can be determined at least in part by fitting a mesh surface around an external surface of the 3D model, and identifying a deficiency of the fitted mesh surface.
  • the segmentation masks (described above) for the 2D images can be projected into 3D space to form a 3D projection of the target anatomy.
  • a mesh surface (e.g., with triangular cells, nodes, etc.) can then be fitted around the exterior surface (e.g., wall) of the 3D projection.
  • a deficiency in the mesh surface such as a non-smooth surface (e.g., a non-continuity such as a sharp edge, etc.) and/or a recess (e.g., hole) in the mesh surface can be indicative of a portion of the 3D model that is missing.
  • a mesh surface can be fit around a 3D model of target anatomy 300.
  • Sharp corners and/or unnaturally flat (e.g., planar) surfaces at regions 310a and 310b can be identified as deficiencies of the 3D model 300.
  • the deficiencies 310a and 310b can indicate missing portions of the 3D model (shown in FIG. 3 as regions "A" and "B", respectively, of the true geometry of the target anatomy).
  • FIG. 4 depicts an example 3D model 400 of a heart including some or all of at least four chambers of the heart.
  • a region 410 includes an unnaturally flat surface (e.g., a planar region that indicates a portion of a heart chamber that is cut off unexpectedly), which can be identified in a mesh surface fitted around the 3D model.
  • This flat surface 410 can be indicative of missing data corresponding to an incomplete portion of the heart chamber.
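  • One plausible way to flag such unnaturally flat patches automatically is to look for large clusters of mesh faces with nearly parallel normals. The sketch below is an assumption about how this deficiency test could be implemented, not the patent's prescribed method; the thresholds are illustrative and no spatial-adjacency check is performed.

```python
import numpy as np

def find_flat_regions(vertices, faces, normal_tol=0.99, min_faces=50):
    """Flag clusters of mesh faces that are nearly coplanar.

    A large patch of faces sharing one normal direction suggests the
    anatomy was cut off by missing image data rather than bounded by a
    real curved wall. Thresholds are illustrative, and for brevity the
    O(F^2) pairwise comparison skips spatial adjacency.
    """
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    # Unit normal of each triangular face.
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    n /= np.maximum(np.linalg.norm(n, axis=1, keepdims=True), 1e-12)
    # Count, for each face, how many faces point in nearly the same direction.
    cluster_size = (n @ n.T > normal_tol).sum(axis=1)
    return np.nonzero(cluster_size >= min_faces)[0]  # indices of flat faces
```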
  • the region of missing data in the 3D model can be determined at least in part by comparing the 3D model to a statistical shape model (SSM) of the target anatomy, and identifying a portion of the 3D model that is in low agreement with a corresponding portion of the SSM.
  • Comparing the 3D model can include fitting the 3D model to one or more SSMs.
  • SSMs provide a way of characterizing a given shape based on the shape’s population variation and mean. For example, an SSM applied to a collection of like objects (e.g., type of anatomy, such as a heart) provides a statistical evaluation of the general shape of the object and how the objects differ from one another. SSMs can be used for classification, segmentation, and phantom generation (e.g., producing new representations of the object).
  • one or more processors can have access to a plurality of stored SSMs for the target anatomy (e.g., SSMs generated based on training data of many instances of the target anatomy for various subjects) for comparison to the 3D model.
  • at least one SSM 520 can be identified (e.g., using a suitable best fit analysis) that is a close fit to the 3D model of the target anatomy.
  • much of the 3D model 500 may be adequately consistent with the identified SSM 520, which can be an indication that much of the 3D model 500 is an adequate reconstruction of the corresponding regions of target anatomy.
  • However, regions of low agreement (e.g., region 510a) between the 3D model 500 and the SSM 520 can indicate portions of the 3D model with missing data.
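  • A simple proxy for "low agreement" is the distance from each point of the reconstructed surface to the nearest point of the registered SSM instance. The sketch below assumes both surfaces are available as point samples in a common frame; the distance threshold is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def low_agreement_regions(model_pts, ssm_pts, dist_thresh_mm=5.0):
    """Flag model points far from the best-fit SSM surface.

    model_pts: (N, 3) points sampled from the reconstructed 3D model.
    ssm_pts:   (M, 3) points sampled from the fitted SSM instance,
               assumed already registered to the model's frame.
    Points farther than dist_thresh_mm (illustrative threshold) from
    the SSM are candidate regions of missing or unreliable data.
    """
    dists, _ = cKDTree(ssm_pts).query(model_pts)
    return np.nonzero(dists > dist_thresh_mm)[0]
```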
  • a region of missing data in the 3D model can be determined based on receiving an input from a user (e.g., operator of the imaging probe (e.g., imaging probe 122), other user, etc.).
  • the user can review the 3D model (e.g., visualized on a display screen, such as on a user interface device 130) and observe one or more regions that appear to be deficient, such as by looking for non-smooth surfaces, recesses, and/or the like.
  • the user may review a heart 3D model such as that shown in FIG. 4, in which region 410 includes an unnaturally flat surface (e.g., a planar region that indicates a portion of a heart chamber that is cut off unexpectedly).
  • the user can then select the region 410 to identify it as a region of missing data. For example, the user can trace an outline of the region 410 (e.g., on a display, such as with control of a cursor or with a finger or stylus) with a continuous border outline, or select points around a border of the region 410.
  • the 3D model can be fitted to multiple SSMs, in that multiple sets of parameters in multiple SSMs can fit the observed data in the 3D model.
  • a 3D model 600 can be fit to two SSMs, including a first SSM 520a and a second SSM 520b.
  • the first SSM 520a and the second SSM 520b match the 3D model in many aspects, but also differ from the 3D model in different ways.
  • depending on which SSM is selected as the ideal fit, different region(s) of the 3D model may be identified as missing based on the comparison of the 3D model to that ideal SSM.
  • the method can include identifying the regions where the multiple SSMs differ the most from each other (e.g., using a suitable Gaussian process), and indicating to an operator to image these regions.
  • the method 200 can include suggesting one or more imaging probe poses that will enable the acquisition of the degeneracy-breaking images (e.g., using ray tracing techniques and/or at least a portion of method 800 as described in further detail below).
  • the method 200 can include (with or without explicitly suggesting imaging probe pose(s)) obtaining 2D image(s) of at least one region of the target anatomy where a new partial 3D model of the target anatomy would aid in the selection of an ideal or optimum SSM among multiple candidate SSMs.
  • the method 200 can include generating a suggested imaging probe pose for imaging a portion of the target anatomy corresponding to the region of missing data 230.
  • the imaging probe pose can be generated with an imaging probe guidance system, such as the imaging probe guidance system 140 described above with respect to FIG. 1.
  • the suggested imaging probe pose includes a suggested location and/or orientation of the imaging probe (e.g., imaging probe 122) to enable the imaging probe to image the portion of the target anatomy corresponding to the region of missing data in the 3D model.
  • the location and/or orientation is generated using one or more suitable ray tracing techniques.
  • the method can include defining an imaging window that includes the portion of target anatomy corresponding to the region of missing data in the 3D model (e.g., such that the desired portion of target anatomy is centered within the imaging window), and extending one or more ray paths from the imaging window (e.g., angular borders of the imaging window) back to a particular root point.
  • the location of the suggested imaging probe pose can, in some variations, be centered at this root point, and the orientation of the imaging probe pose can be determined based on physical accessibility (e.g., where the imaging probe can be positioned external to a patient, for a transthoracic ultrasound scan).
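  • The following is a highly simplified sketch of this back-projection idea. It collapses the multi-ray construction described above into a single central ray through the imaging window, and it assumes the approach direction and target depth are already known from the physical-accessibility analysis; both parameters, and the function itself, are illustrative assumptions rather than the patent's method.

```python
import numpy as np

def suggest_probe_pose(missing_centroid, approach_dir, target_depth_mm=80.0):
    """Back-project from a desired imaging window to a probe root point.

    missing_centroid: (3,) center of the anatomy portion to be imaged,
                      which should sit centered in the imaging window.
    approach_dir:     (3,) direction from the body surface toward the
                      anatomy (a stand-in for the physical-accessibility
                      analysis; assumed known here).
    target_depth_mm:  depth at which the target should sit in the field
                      of view (illustrative value).
    """
    d = np.asarray(approach_dir, dtype=float)
    d /= np.linalg.norm(d)
    # Walk back along the central ray through the window to the root point.
    position = np.asarray(missing_centroid, dtype=float) - target_depth_mm * d
    return position, d  # suggested probe location and imaging axis
```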
  • the suggested imaging probe pose can be previously associated with the missing portion of the target anatomy corresponding to the region of missing data 230.
  • the imaging probe guidance system can be similar to the probe guidance system 710 shown schematically in FIG. 7.
  • the probe guidance system 710 can be an example of the probe guidance system 140 of FIG. 1.
  • the imaging probe guidance system 710 can be communicatively coupled to a probe information database 730, which can store one or more suggested imaging probe poses and their associated imaging windows (and/or anatomical regions captured in their associated imaging windows), such as for a particular type of target anatomy (e.g., heart, lung, etc.).
  • These stored suggested imaging probe poses, and their associations, can be previously generated such as through prior imaging processes (e.g., bench test data-gathering, medical procedures, etc.) relating to the target anatomy.
  • one or more imaging probe poses can be identified as facilitating the imaging of certain portions of the target anatomy, and this association can be stored in the probe information database 730 such that the probe guidance system 710 can search the database for a certain desired portion of the target anatomy to be imaged and identify one or more suggested imaging probe poses associated in the database with that desired portion of the target anatomy to be imaged.
  • the probe information database 730 can further be updated when the method 200 is performed, as additional information regarding relationships between imaging probe poses and imaging windows for a type of target anatomy are identified and/or existing such relationships can be confirmed by an operator as successfully enabling imaging of the desired portions of target anatomy. Accordingly, in some variations the accuracy and completeness of the probe information database 730 can continue to improve as the method 200 is performed repeatedly.
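  • A hypothetical in-memory stand-in for such a probe information database is sketched below. The key structure, pose format, and window labels are invented for illustration; the patent does not specify a schema.

```python
# A hypothetical stand-in for the probe information database 730.
# Keys pair an anatomy type with a region label; values are previously
# validated poses (positions in mm, orientations as unit vectors) and
# the imaging window they were recorded from. All values are invented.
PROBE_POSE_DB = {
    ("heart", "left_atrium"): [
        {"window": "left parasternal",
         "position": (12.0, -40.0, 85.0),
         "orientation": (0.0, 0.6, -0.8)},
    ],
}

def lookup_suggested_poses(db, anatomy, region):
    """Return stored imaging probe poses associated with a target region,
    or an empty list if no prior relationship has been recorded."""
    return db.get((anatomy, region), [])

def record_confirmed_pose(db, anatomy, region, pose):
    """Append a pose once an operator confirms it successfully imaged
    the desired region (the database-update loop described above)."""
    db.setdefault((anatomy, region), []).append(pose)
```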
  • the method 200 can include providing the suggested probe pose to a user 240 operating an imaging probe (e.g., imaging probe 122), which in some variations can be similar to at least a portion of a method 800 summarized in FIG. 8.
  • FIG. 8 is a flowchart of an example method 800 for providing automatic guidance of image collection for 3D reconstruction of a target anatomy.
  • method 800 can include receiving a suggested imaging probe pose (also referred to herein as a “target” pose) associated with a desired imaging window 810, receiving tracking data representing a current pose of an imaging probe 820, and guiding the current pose of the imaging probe toward the target imaging probe pose 830.
  • the imaging probe can, in some variations, be similar to the imaging probe 122 of FIG. 1.
  • 2D images can be collected by an imaging probe 922, which can be similar to the imaging probe 122 of FIG. 1.
  • a live stream of the 2D images 914 can be provided on a display (e.g., user interface device 130) to a user operating the imaging probe 922.
  • Pose tracking data for the imaging probe can be collected concurrently while the imaging probe is collecting the 2D images.
  • the imaging probe 922 can include one or more tracking sensors 924 configured to collect position and/or orientation data for the imaging probe as the imaging probe collects 2D images.
  • the one or more tracking sensors 924 can include an EM sensor used in conjunction with a suitable EM tracking system.
  • the tracking sensors 924 can include an EM sensor similar to EM sensor 124 of FIG. 1. Accordingly, a current pose (e.g., location and/or orientation) of the imaging probe is known when the imaging probe collects each 2D image. As shown in FIG. 9, a graphical representation of the current pose of the imaging probe can be displayed on the display so the operator can better track where the imaging probe is relative to anatomy.
  • guiding the current pose of the imaging probe toward the target imaging probe pose can include displaying a graphical representation of the current pose of the imaging probe (e.g., similar to that shown in FIG. 9), as well as displaying another graphical representation of the target imaging probe pose.
  • graphical representations can be displayed on a suitable display similar to user interface device 130 of FIG. 1. As the imaging probe is manipulated, the displayed graphical representation of the current pose of the imaging probe also changes position and/or orientation accordingly.
  • an operator of the imaging probe can manipulate the imaging probe until the graphical representation of the current imaging probe pose is proximate the graphical representation of the target imaging probe pose (e.g., such that the graphical representations are substantially coincident or overlie each other, or are sufficiently near each other).
  • the graphical representations of the current imaging probe pose and the target imaging probe pose can be displayed in different display schemes, such that the operator can visually distinguish between the two graphical representations and/or help the operator focus more on one of the graphical representations than the other.
  • the display schemes can vary with one or more visual parameters, such as transparency level, color, patterning, and/or sharpness.
  • For example, as shown in FIG. 10, a first graphical representation 1010 of a target imaging probe pose can be displayed in a first display scheme, and a second graphical representation 1020 of a current imaging probe pose can be displayed in a second display scheme. In the example shown, the first graphical representation 1010 is more transparent and the second graphical representation 1020 is more opaque.
  • the first graphical representation 1010 can be more opaque and the second graphical representation 1020 can be more transparent. Additionally or alternatively, the first graphical representation 1010 can be displayed in a first color (or pattern) and the second graphical representation 1020 can be displayed in a second color or pattern different from the first color or pattern. Additionally or alternatively, the first graphical representation 1010 can be blurrier and the second graphical representation 1020 can be sharper (or vice versa).
  • The display can also depict the current field of view 1022 (e.g., echo view for an ultrasound probe) and the target field of view 1012 associated with the target imaging probe pose 1010.
  • Fidelity to the match or discordance of the current imaging probe pose 1020 and the target imaging probe pose 1010 can additionally or alternatively be visually conveyed with one or more objective (e.g., quantitative) measures such as an indication of angle between an axis of the current imaging probe pose 1020 and an axis of the target imaging probe pose 1010 (e.g., the smaller the angle, the closer the current imaging probe pose is approximating the target imaging probe pose).
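  • As a concrete example of such a quantitative measure, the angle between the two pose axes can be computed directly from tracking data. A minimal sketch, assuming each pose's imaging axis is available as a 3-vector:

```python
import numpy as np

def pose_axis_angle_deg(current_axis, target_axis):
    """Angle (degrees) between the current and target probe axes.

    Each axis is assumed to be a 3-vector along the probe's imaging
    direction, derived from EM tracking data. Smaller angles mean the
    current pose better approximates the target pose.
    """
    a = np.asarray(current_axis, dtype=float)
    b = np.asarray(target_axis, dtype=float)
    cos_angle = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```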
  • the fidelity to the match or discordance of the current imaging probe pose 1020 and the target imaging probe pose 1010 can be visually conveyed with one or more visual indicators, such as by modifying one or more display schemes (e.g., hue, brightness) for displaying the images, background, and/or one or more icons or other suitable indicators (e.g., for display adjacent to or overlaid with an image, etc.).
  • the graphical representations of the target and current imaging probe poses can be displayed to a user concurrently with a live stream of images from the imaging probe (e.g., echo view from an ultrasound probe).
  • FIG. 11 is a schematic depiction of graphical representations of the target and current imaging probe poses being displayed on a first display 1130a and a live echo view from the imaging probe being displayed on a second display 1130b.
  • the first display 1130a is configured to display, for example, the graphical representation of the target imaging probe pose 1110 and the graphical representation of the current imaging probe pose 1120, similar to that described above with respect to FIG. 10.
  • the first display 1130a can be on a separate screen (e.g., monitor) from the second display 1130b, or the first and second displays 1130a and 1130b can be on the same screen (e.g., monitor) but in different windows, display portions, and/or the like.
  • the graphical representations of the target and current imaging probe poses can be displayed overlaid with a live stream of images from the imaging probe.
  • Another example of guiding a user toward a target imaging probe pose can include modulating the appearance of the graphical representation of the current imaging probe pose as the imaging probe approaches the target imaging probe pose.
  • the color of the graphical representation of the current imaging probe pose can be displayed in one color (e.g., green) when the current imaging probe pose reaches within a threshold proximity to the target imaging probe pose, and/or displayed in another color (e.g., red) when the current imaging probe pose is outside the threshold proximity to the target imaging probe pose.
  • the graphical representation of the current imaging probe pose can change to indicate that the probe is currently being moved toward or away from the target imaging probe pose (e.g., change color, blink with a certain frequency, etc.).
  • the appearance of the graphical representation of the current imaging probe pose can be modulated without displaying the graphical representation of the target imaging probe pose.
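  • A minimal sketch of the threshold-based color modulation described above, assuming the proximity test combines a position distance with the axis angle from the earlier sketch (both thresholds are illustrative):

```python
import math

def guidance_color(current_pos, target_pos, axis_angle_deg,
                   pos_thresh_mm=10.0, angle_thresh_deg=10.0):
    """Choose a display color for the current-pose graphic.

    Returns "green" when the probe is within both the position and
    orientation thresholds of the target pose, "red" otherwise. Both
    threshold values are illustrative assumptions.
    """
    close_enough = (math.dist(current_pos, target_pos) <= pos_thresh_mm
                    and axis_angle_deg <= angle_thresh_deg)
    return "green" if close_enough else "red"
```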
  • Another example of guiding a user toward a target imaging probe pose can include providing haptic feedback (e.g., vibration) on the imaging probe itself to indicate when the operator is correctly manipulating the imaging probe toward the target imaging probe pose.
  • the imaging probe can be configured to vibrate to indicate an error (e.g., that the imaging probe is incorrectly being moved away from the target imaging probe pose), and/or indicate success (e.g., that the imaging probe is correctly being moved toward the target imaging probe pose).
  • vibration at one frequency can be activated to indicate error in probe movement, while vibration at a different frequency can be activated to indicate success in probe movement.
  • Although FIGS. 9-11 illustrate a probe-shaped or pseudo-realistic representation of the imaging probe, any suitable graphical representation may be used (e.g., lines, arrows, dots, polygons, etc.) to indicate location and/or orientation of an imaging probe pose.
  • the display of the graphical representations of the target and current imaging probe poses can function to guide a user to manipulate an imaging probe to a suggested probe position to collect desired images depicting anatomy corresponding to the missing portion(s) of the 3D model of the target anatomy.
  • guiding the collection of images can additionally or alternatively include one or more further aspects of guidance as described in further detail below with respect to method 800 being performed as part of a training module (e.g., depicting a target probe manipulation pathway associated with a desired imaging window, transforming graphical representations of the target and/or current imaging probe poses to an operator frame of reference, etc.).
  • the method 200 can include receiving supplemental 2D images of the target anatomy 250 corresponding to the regions of missing data in the 3D model, and updating the 3D model based on the supplemental 2D images 260.
  • the supplemental 2D images can be segmented with one or more segmentation masks, and the segmentation masks can be projected into 3D space to form at least a portion of the 3D model.
  • segmentation masks can be applied to generate segmented pixels from the 2D images corresponding to anatomical features.
  • the segmentation masks of the supplemental 2D images can be combined with the segmentation masks of the initial 2D images, and all the segmentation masks can collectively be projected to form a new 3D model that includes the previously missing data.
  • the initial received 3D model can be combined with a partial new 3D model that is generated based on only the supplemental 2D images.
  • the updated 3D model generated from either of these processes can be a reconstruction of the entire target anatomy.
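  • One way the merge of initial and supplemental data could be handled before re-meshing is to combine the projected point sets and deduplicate them on a coarse voxel grid. The sketch below is an assumption about that merge step; the voxel size is illustrative.

```python
import numpy as np

def update_model_points(initial_pts, supplemental_pts, voxel_mm=1.0):
    """Merge initial and supplemental projected point clouds, keeping one
    point per voxel so overlapping sweeps do not double-count anatomy.
    The voxel size is an illustrative assumption."""
    pts = np.vstack([initial_pts, supplemental_pts])
    _, idx = np.unique(np.floor(pts / voxel_mm).astype(int),
                       axis=0, return_index=True)
    return pts[idx]
```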
  • method 200 may additionally or alternatively include receiving supplemental mapping data (e.g., from catheter mapping system 126) and updating the 3D model based at least in part on the supplemental mapping data.
  • the method 200 can further include displaying the updated 3D model 270 on a suitable display (e.g., user interface device 130).
  • the displayed 3D model can be manipulated by a user (e.g., rotated, enlarged, viewed along one or more cross-sectional planes, etc.).
  • the 3D model can be stored on a suitable memory device.
  • a graphical representation of a medical device can be generated when the medical device is located (e.g., navigated) in or near the target anatomy.
  • the medical device can include one or more tracking sensors (e.g., EM sensors) that provide position and/or orientation information of the medical device.
  • the graphical representation of the medical device can be located relative to the target anatomy based on tracking data from the tracking sensors of the medical device, and the graphical representation of the medical device can be displayed concurrently with the 3D model on a display accordingly, such that a user can view the location of the medical device relative to the target anatomy on the display.
  • a graphical representation of a current (or present) location of the medical device and/or past locations (e.g., path tracking) of the medical device in the patient can be displayed.
  • the graphical representation of the medical device can be overlaid on the 3D model.
  • the display of the graphical representation of the medical device can be toggled on and off (e.g., selected for display or non-display). For example, a user can select through a user control on a user interface device (e.g., user interface device 130) to display the graphical representation of the medical device, or select through the user control to hide the graphical representation of the medical device. As another example, the graphical representation of the medical device may be automatically displayed or hidden based on one or more parameters, such as automatically displayed based on proximity to a location of interest (e.g., implant location) in the target anatomy. Additionally or alternatively, a user can select a display scheme (e.g., color, transparency, etc.) of the graphical representation of the medical device on the display.
  • At least a portion of the method 800 can be performed to generate a more complete 3D model of target anatomy of a live patient (e.g., during a medical procedure). However, in some variations at least a portion of the method 800 can additionally or alternatively be performed as part of a training module for guiding image collection (regardless of whether a 3D model is generated from such images).
  • the imaging probe guidance arrangement 700 shown in FIG. 7 can be used as a training tool for imaging technicians to help them gain experience in operating the imaging system and become more efficient (which may, for example, help reduce patient exposure to any harmful radiation that may occur during imaging).
  • the method 800 can, for example, be used to provide training in combination with an imaging subject.
  • the imaging subject could be a live patient (e.g., human patient), or an inanimate training model.
  • the training model can be accompanied by preset imaging windows with associated imaging probe poses and/or images.
  • a trainee can, for example, be instructed to manipulate an imaging probe to match target imaging probe poses and replicate the preset images associated with the target imaging probe poses.
  • the imaging probe can be similar to the imaging probe 122.
  • an imaging probe guidance system 710 can be communicatively coupled to a probe information database 730 so as to obtain information relating to imaging probe poses and their associated imaging windows.
  • the imaging probe guidance system 710 can be an example of the imaging probe guidance system 140 of FIG. 1.
  • the probe information database 730 can store one or more target imaging probe poses and their associated imaging windows (and/or anatomical regions captured in their associated imaging windows), such as for a particular type of target anatomy (e.g., heart, lung, etc.). These stored imaging probe poses, and their associations, can be previously generated such as through prior imaging processes (e.g., bench test data-gathering, medical procedures, etc.) relating to the target anatomy.
  • one or more imaging probe poses can be identified as facilitating the imaging of certain portions of the target anatomy, and this association can be stored in the probe information database 730 such that the probe guidance system 710 can search the database for a certain desired portion of the target anatomy to be imaged and identify one or more suggested imaging probe poses associated in the database with that desired portion of the target anatomy to be imaged.
  • the probe information database 730 can further be updated with imaging probe poses entered by an operator over time, as additional information regarding relationships between imaging probe poses and imaging windows for a type of target anatomy are identified.
  • the probe information database 730 can store one or more target probe manipulation pathways and associated imaging windows (and/or anatomical regions captured in their associated imaging windows), such as for a particular type of anatomy (e.g., heart, lung, etc.).
  • a target probe manipulation pathway can include information regarding a physical approach used to place an imaging probe to a desired pose (e.g., moving bony structures out of the way, stretching skin or other tissue, etc. to help create access for the imaging probe to the target imaging probe pose).
  • various target probe manipulation pathways can be previously generated such as through prior imaging processes (e.g., bench test data-gathering, medical procedures, etc.) relating to the target anatomy.
  • the imaging probe guidance system 710 can further be configured to receive sensor information from a sensor 10 that provides an indication of the position and/or orientation of a subject P (e.g., a live training subject or an inanimate training model).
  • the sensor 10 can be an EM tracking sensor, and can be coupled to an external surface of the subject P (e.g., skin of a live, human training subject, or external surface of a training model) and/or on a surface on which the subject P lies.
  • the imaging probe guidance system 710 can also be communicatively coupled to a display 720 (which can be similar to, or different from, the user interface device 130).
  • the display 720 can be configured to display guidance information from the imaging probe guidance system 710 (e.g., graphical representations of target and current imaging probe poses, target probe manipulation pathways, etc.), medical images (e.g., a live stream of images collected by the imaging probe), and/or other suitable information.
  • the display 720 can include a tracking sensor 722 (e.g., a position sensor and/or orientation sensor, such as an EM tracking sensor, accelerometer, etc.) configured to provide an indication of the position and/or orientation of the display 720.
  • the probe guidance system 710 can include a tracking module 712 and a visualization module 714.
  • the tracking module 712 can be configured to track the pose (e.g., location and/or orientation) and/or pathway of an imaging probe during an imaging procedure.
  • the tracking module 712 can receive sensor data from a tracking sensor (e.g., EM tracking sensor such as EM sensor 124) that is coupled to the imaging probe or otherwise moves in tandem with the imaging probe in any suitable manner.
  • the tracking module 712 can receive sensor data (e.g., from a sensor 10) providing an indication of the position and/or orientation of an imaging subject or a surface on which the imaging subject lies, so as to track the position and/or orientation of the imaging subject. Additionally or alternatively, in some variations the tracking module 712 can receive sensor data from the display tracking sensor 722 providing an indication of the position and/or orientation of a display on which probe guidance information can be displayed.
  • the tracking module 712 can also sense the position of the imaging probe relative to the anatomy and automatically annotate one or more imaging views and/or images according to the imaged anatomy. For example, when performing a transthoracic ultrasound (e.g., for imaging a heart), the tracking module 712 can sense the position of the imaging probe and automatically label the associated imaging window based on the relevant anatomy present in the imaging window (e.g., subcostal, substernal, L/R parasternal, suprasternal, etc.). Of course, it should be understood that in some variations the exact labels of the imaging windows may vary based on the type of anatomy being imaged.
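  • A minimal sketch of such automatic window labeling, assuming a hypothetical table of reference positions for each named window in the tracking frame; nearest-neighbor assignment is an assumed heuristic, not the patent's stated method:

```python
import math

def label_imaging_window(probe_pos, window_refs):
    """Return the name of the imaging window whose reference position is
    nearest the tracked probe position.

    window_refs is a hypothetical mapping, e.g.:
        {"subcostal": (0.0, -120.0, 40.0),
         "suprasternal": (0.0, 95.0, 30.0),
         "left parasternal": (35.0, 20.0, 60.0)}
    (coordinates in the EM tracking frame, in mm; values are invented).
    """
    return min(window_refs,
               key=lambda name: math.dist(probe_pos, window_refs[name]))
```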
  • the visualization module 714 functions to generate various graphics for providing guidance for image collection using an imaging probe.
  • the visualization module 714 can be configured to generate graphical representations of a target imaging probe pose and a current imaging probe pose. Such graphical representations can, for example, be displayed on a display similar to user interface device 130. As described above, as the imaging probe is manipulated, the displayed graphical representation of the current pose of the imaging probe also changes position and/or orientation accordingly.
  • an operator of the imaging probe can manipulate the imaging probe until the graphical representation of the current imaging probe pose is proximate the graphical representation of the target imaging probe pose (e.g., such that the graphical representations are substantially coincident or overlie each other, or are sufficiently near each other).
  • the target imaging probe pose when displayed, can assist and train the operator to replicate a desired imaging window or view associated with that target imaging probe pose.
  • the graphical representations of the target and current imaging probe poses can be displayed in different display schemes that differ in one or more visual parameters such as transparency level, color, patterning, or sharpness.
  • the visualization module 714 can be configured to generate a representation of a target probe manipulation pathway for placing the imaging probe at a desired target imaging probe pose.
  • the representation of the target probe manipulation pathway can be graphical (e.g., visually highlighting an access path through and/or around anatomy, such as with a colored line or series of arrows, etc.).
  • the representation of the target probe manipulation pathway can be a text-based description (e.g., written description, bullet points, etc.) of sub-processes for placing the imaging probe at a desired target imaging probe pose.
  • the visualization module 714 can be configured to transform the probe guidance information (e.g., imaging probe pose, target probe manipulation pathway) to a frame of reference of the observer prior to display of the probe guidance information.
  • the visualization module 714 can adjust the probe guidance information such that it can be presented with the proper orientation to the imaging probe operator (or other observer).
  • the visualization module 714 can assess tracking data from the display 720 to track position and/or orientation of the display 720. Based on the position and/or orientation of the display 720, the visualization module 714 can transform (e.g., rotate, flip, etc.) information so the information is generally oriented in a manner representative of the location and/or orientation of the display in the room relative to the operator.
  • the visualization module 714 can assess tracking data from the imaging subject P (e.g., from sensor 10) to track position and/or orientation of the imaging subject P. Based on the position and/or orientation of the imaging subject P, the visualization module 714 can transform (e.g., rotate, flip, etc.) information so that information is generally oriented in a manner representative of the location and/or orientation of the imaging subject in the room relative to the operator.
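  • A minimal sketch of this frame-of-reference transform, assuming the tracked display pose is available as a 4x4 world-to-display transform (the homogeneous convention is an assumption):

```python
import numpy as np

def to_observer_frame(guidance_pts, world_to_display):
    """Transform guidance geometry (N, 3 points in the tracking/world
    frame) into the tracked display's frame so graphics appear with the
    proper orientation relative to the operator. The 4x4 homogeneous
    convention for world_to_display is an assumed representation."""
    pts = np.asarray(guidance_pts, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
    return (world_to_display @ pts_h.T).T[:, :3]
```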

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

In some variations, a method for providing automatic guidance for image collection for 3D reconstruction of a target anatomy includes receiving a three-dimensional (3D) model of a target anatomy, wherein the 3D model is generated from two-dimensional (2D) images of the target anatomy and/or catheter mapping data of the target anatomy, determining a region of missing data in the 3D model, generating a suggested imaging probe pose for imaging a portion of the target anatomy corresponding to the region of missing data, receiving one or more supplemental 2D images collected based on the suggested imaging probe pose, and updating the 3D model based on the supplemental 2D images.

Description

AUTOMATIC GUIDANCE OF IMAGE COLLECTION FOR 3D RECONSTRUCTION OF
TARGET ANATOMY
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of priority to U.S. Provisional App. No. 63/638,306, filed April 24, 2024, which is incorporated by reference herein in its entirety.
TECHNICAL FIELD
[0002] The present technology relates to automatic guidance of image collection for 3D reconstruction of target anatomy.
BACKGROUND
[0003] A 3D model (e.g., reconstruction) of patient anatomy can be helpful for a variety of medical applications. For example, 3D models of anatomy can be used to help identify a desired implant location in the anatomy. As another example, 3D models of anatomy can be used to help determine a navigational approach for accessing certain anatomy. Generally, 3D models can be generated from images capturing an anatomy of interest.
[0004] Completeness of a 3D model of an anatomy is important for the 3D model’s use in successful medical applications. The completeness of a 3D model largely depends on the completeness of images from multiple viewing windows and angles. However, due to lack of training and/or experience, an imaging operator may inadvertently fail to adequately capture the target anatomy in the images forming the basis of the 3D model.
SUMMARY
[0005] The subject technology relates to systems and methods for guiding imaging collection for 3D reconstruction of a target anatomy. Such guidance can, for example, help an imaging operator obtain images that capture the target anatomy from a sufficient number of viewing windows, angles, etc. that can form the basis of a more complete 3D reconstruction of the target anatomy.
[0006] The subject technology is illustrated, for example, according to various aspects described below, including with reference to FIGS. 1-11. Various examples of aspects of the subject technology are described as numbered clauses (1, 2, 3, etc.) for convenience. These are provided as examples and do not limit the subject technology.
1. A method, comprising: receiving a three-dimensional (3D) model of a target anatomy; determining a region of missing data in the 3D model; generating a suggested imaging probe pose for imaging a portion of the target anatomy corresponding to the region of missing data; receiving one or more supplemental 2D images collected based on the suggested imaging probe pose; and updating the 3D model based on the supplemental 2D images.
2. The method of clause 1, wherein the 3D model is at least partially generated from two-dimensional (2D) images of the target anatomy.
3. The method of clause 2, wherein the 2D images are ultrasound images.
4. The method of any one of clauses 1-3, wherein the 3D model is at least partially generated from catheter mapping data of the target anatomy.
5. The method of any one of clauses 1-4, wherein determining the region of missing data in the 3D model comprises fitting a mesh surface around an external surface of the 3D model and identifying a deficiency of the fitted mesh surface.
6. The method of clause 5, wherein the deficiency of the fitted mesh surface comprises a non-smooth surface.
7. The method of clause 5 or 6, wherein the deficiency of the fitted mesh surface comprises a recess.
8. The method of any one of clauses 1-7, wherein determining the region of missing data in the 3D model comprises: comparing the 3D model to a statistical shape model (SSM) of the target anatomy; and identifying a portion of the 3D model that is in low agreement with a corresponding portion of the SSM.
9. The method of any one of clauses 1-8, wherein the region of missing data is a first region of the 3D model, wherein the method further comprises: displaying the first region of missing data with a first display scheme; and displaying a second region of the 3D model different from the first region with a second display scheme, wherein the first and second display schemes are different.
10. The method of clause 9, wherein the first display scheme and the second display scheme differ in at least one of transparency level, color, patterning, or sharpness.
11. The method of any one of clauses 1-10, wherein generating a suggested imaging probe pose comprises performing ray tracing from the region of missing data.
12. The method of any one of clauses 1-11, wherein the supplemental 2D images are received prior to a procedure for placing a device proximate the target anatomy.
13. The method of any one of clauses 1-12, wherein the supplemental 2D images are received during a procedure for placing a device proximate the target anatomy.
14. The method of any one of clauses 1-13, wherein the supplemental 2D images are received after a procedure for placing a device proximate the target anatomy.
15. A system comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a three-dimensional (3D) model of a target anatomy; determining a region of missing data in the 3D model; generating a suggested imaging probe pose to collect one or more supplemental 2D images of a portion of the target anatomy corresponding to the region of missing data; receiving the one or more supplemental 2D images; and updating the 3D model based on the supplemental 2D images.
16. The system of clause 15, wherein the 3D model is at least partially generated from two-dimensional (2D) images of the target anatomy.
17. The system of clause 16, wherein the 2D images are ultrasound images.
18. The system of any one of clauses 15-17, wherein the 3D model is at least partially generated from catheter mapping data of the target anatomy.
19. The system of any one of clauses 15-18, wherein the instructions that, when executed by the processor, cause the system to determine the region of missing data in the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising fitting a mesh surface around an external surface of the 3D model and identifying a deficiency of the fitted mesh surface.
20. The system of clause 19, wherein the deficiency of the fitted mesh surface comprises a non-smooth surface or a recess.
21. The system of any one of clauses 15-20, wherein the instructions that, when executed by the processor, cause the system to determine the region of missing data in the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising: comparing the 3D model to a statistical shape model (SSM) of the target anatomy; and identifying a portion of the 3D model that is in low agreement with a corresponding portion of the SSM.
22. The system of any one of clauses 15-21, wherein the region of missing data is a first region of the 3D model, wherein the instructions, when executed by the processor, cause the system to perform operations comprising: displaying the first region of missing data with a first display scheme; and displaying a second region of the 3D model different from the first region with a second display scheme, wherein the first and second display schemes are different.
23. The system of clause 22, wherein the first display scheme and the second display scheme differ in at least one of transparency level, color, patterning, or sharpness.
24. The system of any one of clauses 15-23, wherein the instructions that, when executed by the processor, cause the system to generate a suggested imaging probe pose comprise instructions that, when executed by the processor, cause the system to perform operations comprising performing ray tracing from the region of missing data.
25. The system of any one of clauses 15-24, further comprising an imaging probe.
26. The system of clause 25, wherein the imaging probe is an ultrasound probe.
27. The system of clause 25 or 26, wherein the imaging probe comprises an electromagnetic tracking sensor.
28. The system of any one of clauses 15-27, further comprising a display configured to display the suggested imaging probe pose.
29. A method, comprising: receiving a target imaging probe pose associated with a desired imaging window for imaging a subject; receiving electromagnetic (EM) tracking data representing a current pose of an imaging probe while imaging a subject; and guiding the current pose of the imaging probe toward the target imaging probe pose by displaying, on a display, a first graphical representation of the target imaging probe pose and displaying, on the display, a second graphical representation of the current pose of the imaging probe.
30. The method of clause 29, wherein guiding the current pose of the imaging probe comprises displaying the first graphical representation with a first display scheme, and displaying the second graphical representation with a second display scheme, wherein the first and second display schemes are different.
31. The method of clause 30, wherein the first display scheme and the second display scheme differ in at least one of transparency level, color, patterning, or sharpness.
32. The method of any one of clauses 29-31, wherein guiding the current pose of the imaging probe further comprises displaying, on the display, a target probe manipulation pathway associated with the desired imaging window.
33. The method of any one of clauses 29-32, further comprising storing, to one or more memory devices, a favored pose of the imaging probe as a future target pose associated with a selected imaging window for imaging the subject.
34. The method of clause 33, further comprising: receiving the future target pose associated with the selected imaging window; and guiding the current pose of the imaging probe toward the future target pose by displaying, on the display, a third graphical representation of the future target pose.
35. The method of any one of clauses 29-34, further comprising: receiving display data representing an orientation of the display relative to an operator of the imaging probe; based on the display data, transforming the first graphical representation and the second graphical representation to an operator frame of reference for the operator; and displaying the transformed first graphical representation and the second graphical representation in the operator frame of reference.
36. The method of any one of clauses 29-35, wherein the subject is a patient, the method further comprising: receiving patient data representing an orientation of the patient relative to an operator of the imaging probe; based on the patient data, transforming the first graphical representation and the second graphical representation to an operator frame of reference for the operator; and displaying the transformed first graphical representation and the second graphical representation in the operator frame of reference.
37. The method of any one of clauses 29-36, wherein the imaging probe is an ultrasound probe.
38. The method of any one of clauses 29-37, wherein the subject is a training model.
39. A system, comprising: a processor; a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising: receiving a target imaging probe pose associated with a desired imaging window for imaging a subject; receiving electromagnetic (EM) tracking data representing a current pose of an imaging probe while imaging a subject; and guiding the current pose of the imaging probe toward the target imaging probe pose by displaying, on a display, a first graphical representation of the target imaging probe pose and displaying, on the display, a second graphical representation of the current pose of the imaging probe.
40. The system of clause 39, wherein the instructions that, when executed by the processor, cause the system to guide the current pose of the imaging probe comprise instructions that, when executed by the processor, cause the system to perform operations comprising displaying the first graphical representation with a first display scheme, and displaying the second graphical representation with a second display scheme, wherein the first and second display schemes are different.
41. The system of clause 40, wherein the first display scheme and the second display scheme differ in at least one of transparency level, color, patterning, or sharpness.
42. The system of any one of clauses 39-41, wherein the instructions that, when executed by the processor, cause the system to guide the current pose of the imaging probe further comprise instructions that, when executed by the processor, cause the system to perform operations comprising displaying, on the display, a target probe manipulation pathway associated with the desired imaging window.
43. The system of any one of clauses 39-42, wherein the instructions, when executed by the processor, cause the system to perform operations comprising storing, to one or more memory devices, a favored pose of the imaging probe as a future target pose associated with a selected imaging window for imaging the subject.
44. The system of clause 43, wherein the instructions, when executed by the processor, cause the system to perform operations comprising: receiving the future target pose associated with the selected imaging window; and guiding the current pose of the imaging probe toward the future target pose by displaying, on the display, a third graphical representation of the future target pose.
45. The system of any one of clauses 39-44, wherein the instructions, when executed by the processor, cause the system to perform operations comprising: receiving display data representing an orientation of the display relative to an operator of the imaging probe; based on the display data, transforming the first graphical representation and the second graphical representation to an operator frame of reference for the operator; and displaying the transformed first graphical representation and the second graphical representation in the operator frame of reference.
46. The system of any one of clauses 39-45, wherein the subject is a patient, and wherein the instructions, when executed by the processor, cause the system to perform operations comprising: receiving patient data representing an orientation of the patient relative to an operator of the imaging probe; based on the patient data, transforming the first graphical representation and the second graphical representation to an operator frame of reference for the operator; and displaying the transformed first graphical representation and the second graphical representation in the operator frame of reference.
47. The system of any one of clauses 39-46, wherein the imaging probe is an ultrasound probe.
48. The system of any one of clauses 39-47, wherein the subject is a training model.
49. The system of any one of clauses 39-48, further comprising an imaging probe.
50. The system of clause 49, wherein the imaging probe is an ultrasound probe.
51. The system of clause 49 or 50, wherein the imaging probe comprises an electromagnetic tracking sensor.
52. The system of any one of clauses 39-51, further comprising a display configured to display the target imaging probe pose.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale. Instead, emphasis is placed on illustrating clearly the principles of the present disclosure.
[0008] FIG. 1 is an illustrative schematic of an example system for providing automatic guidance of image collection for generating a 3D model of target anatomy, in accordance with the present technology.
[0009] FIG. 2 is a flowchart of an example method for providing automatic guidance of image collection for generating a 3D model of target anatomy, in accordance with the present technology.
[0010] FIG. 3 depicts an illustrative schematic of an example process of determining a region of missing data in a 3D model of target anatomy, in accordance with the present technology.
[0011] FIG. 4 depicts an example 3D model of target anatomy having a region of potential missing data, in accordance with the present technology.
[0012] FIG. 5 depicts an illustrative schematic of an example process of determining a region of missing data in a 3D model of target anatomy, in accordance with the present technology.
[0013] FIG. 6 depicts an illustrative schematic of an example process of generating a suggested imaging probe pose for imaging a portion of the target anatomy, in accordance with the present technology.
[0014] FIG. 7 is an illustrative schematic of an example system for providing automatic guidance of image collection for generating a 3D model of target anatomy, in accordance with the present technology.
[0015] FIG. 8 depicts a flowchart of an example method for providing automatic guidance of image collection for generating a 3D model of target anatomy, in accordance with the present technology.
[0016] FIG. 9 depicts an example process for generating images of a target anatomy from an imaging probe and collecting pose information for the imaging probe while generating the images, in accordance with the present technology.
[0017] FIG. 10 is an illustrative schematic of an example process for guiding a current pose of an imaging probe toward a target imaging probe pose, in accordance with the present technology.
[0018] FIG. 11 depicts example display screens for displaying graphical representations of a target imaging probe pose and a current imaging probe pose, and for displaying medical imaging obtained by an imaging probe, in accordance with the present technology.
DETAILED DESCRIPTION
[0019] The present technology relates to automatic guidance of image collection for 3D reconstruction of target anatomy. Some variations of the present technology, for example, are directed to the guidance of image collection for 3D reconstruction of the heart. Specific details of several variations of the technology are described below with reference to FIGS. 1-11.
[0020] Described herein are example methods and systems for generating 3D reconstructions (also referred to herein as 3D models) of target anatomy, including guiding an operator to obtain sufficient images (e.g., 2D images) on which the 3D models are based. For example, in some variations an operator may have inadvertently captured images that, collectively, do not adequately image the entire anatomy to be modeled, which may occur, for example, because of obstructions (e.g., bony structures blocking field of view), poor imaging probe angles and/or sweeps, etc. Generally, the methods and systems described herein can automatically determine what portion(s) of the 3D model are missing, and then automatically determine imaging probe poses (including location and/or orientation) that are suitable to obtain the missing image data. After the missing image data is obtained, it can be used to update (e.g., complete, or at least further complete) the 3D model of the target anatomy. In some variations, the imaging probe poses can be communicated to the operator (e.g., via a display) to help guide the operator in manipulating the imaging probe appropriately to obtain the desired additional image data.
[0021] The collected images can be used to generate a 3D model that can be used in a variety of applications, including but not limited to aiding navigation of patient anatomy for purposes of a medical treatment and/or other medical procedure (e.g., implant placement, tissue ablation). For example, the 3D model of a target anatomy can be used to guide identification of a target implant location and/or guide navigation for placement of a cardiac device (e.g., cardiac pacemaker), a stent device (e.g., coronary stent, aortic stent, etc.), or other suitable implant, etc. The 2D images and/or 3D models can be of any suitable anatomical regions, such as organs (e.g., heart, lung, kidney) and/or other tissue (e.g., vasculature). The 3D model can be constructed from multiple 2D images taken from various viewing angles, as further described below. Although the 2D images are primarily described herein as ultrasound images, it should be understood that in other variations the 2D images can include any suitable imaging modality.
[0022] As another example, automatic guidance of image collection for 3D modeling of a target anatomy can be provided in a training scenario. For example, in some variations an operator can be a trainee learning how to manipulate an imaging probe to obtain certain desired images of target anatomy. Target imaging probe poses corresponding to desired imaging windows (or images, fields of view, etc.) can be stored and/or communicated to an operator (e.g., via a display) to help guide the operator in manipulating the imaging probe appropriately to obtain the desired images. Additional aspects of automatic guidance of image collection are described in further detail herein.
[0023] In some variations, at least a portion of the methods for providing automatic guidance for image collection can be performed prior to a medical procedure (e.g., an initial 3D model can be generated and/or a 3D model can be updated based on images collected prior to a medical procedure). Additionally or alternatively, at least a portion of the methods can be performed intra-procedurally and/or post-procedurally (e.g., an initial 3D model can be generated and/or a 3D model can be updated based on new images and/or other information collected during and/or after a medical procedure).
I. Automatic Guidance for Image Collection
[0024] FIG. 1 is a schematic diagram of an example system 100 for generating a 3D model of anatomy. As shown in FIG. 1, the system 100 for generating a 3D model of anatomy can include a model reconstruction system 110. The model reconstruction system 110 can be communicatively coupled to an imaging system 120, a catheter mapping system 126, one or more user interface devices 130, and/or an imaging probe guidance system 140, such as through a suitable wired and/or wireless connection (e.g., communications network 150) that enables information transfer. Additionally or alternatively, information transfer between one or more of these components of the system 100 can occur in a discrete manner, such as by storing information from one component (e.g., data) on a memory device that is readable by another component. Although the model reconstruction system 110, the imaging system 120, the catheter mapping system 126, the user interface device(s) 130, and the imaging probe guidance system 140 are illustrated schematically in FIG. 1 as separate components, in some variations two or more of these components can be embodied in a single device. For example, any two, any three, any four, or all five of the model reconstruction system 110, the imaging system 120, the catheter mapping system 126, the user interface device(s) 130, and the imaging probe guidance system 140 can be combined in a single device. However, in some variations each of the model reconstruction system 110, the imaging system 120, the catheter mapping system 126, the user interface device(s) 130, and the imaging probe guidance system 140 can be embodied in a respective separate device.
[0025] Generally, the model reconstruction system 110 can function to generate a 3D model based on images from the imaging system 120 and/or mapping data from the catheter mapping system 126, and/or analyze the 3D model (e.g., to identify potentially missing portion(s) of the 3D model). The model reconstruction system 110 can be configured to receive one or more images of a target anatomy that are generated by the imaging system 120 and/or mapping data representative of a target anatomy generated by the catheter mapping system 126. For example, the model reconstruction system 110 can be configured to receive only one or more images generated by the imaging system 120, only mapping data generated by the catheter mapping system 126 (e.g., through manipulation of a catheter 128 having one or more suitable mapping sensors), or both. The imaging system 120 can be configured to obtain images in any suitable imaging modality. In some variations, the imaging system 120 can be configured to generate two-dimensional (2D) images along various imaging windows or fields of view. For example, in some variations, the imaging system 120 is an ultrasound imaging system configured to generate a plurality of 2D images, with each 2D image being collected by performing an ultrasound sweep with an imaging probe 122 (e.g., through translation and/or rotation of the imaging probe 122) at one or more imaging windows. The imaging probe 122 can also include at least one electromagnetic (EM) sensor 124 or other tracking sensor configured to track the pose (including position and/or orientation) of the imaging probe 122 as the imaging probe 122 collects the 2D images. Additionally or alternatively, the EM sensor 124 can be included in or coupled to an EM navigation tool that is moved in tandem with the imaging probe 122 to collect representative tracking data for the imaging probe 122. In some variations, the imaging probe 122 can be manipulated external to a patient’s body (e.g., transthoracic ultrasound for imaging a heart).
[0026] The model reconstruction system 110 can include at least one processor 112 and at least one memory device 114. The processor 112 can be configured to execute instructions stored in the memory device 114 such that, when executed, the instructions cause the processor 112 to perform aspects of the methods described herein. The instructions may be executed by computer-executable components integrated with a software application, applet, host, server, network, website, communication service, communication interface, hardware, firmware, software elements of a user computer or mobile device, smartphone, or any suitable combination thereof. In some variations, the one or more processors 112 can be incorporated into a computing device or system such as a cloud-based computer system, a mainframe computer system, a grid-computer system, or other suitable computer system.
[0027] In some variations, the system 100 can further include one or more user interface device(s) 130, which functions to allow a user to interact with the model reconstruction system 110 and/or the imaging system 120. For example, the user interface device 130 can be configured to receive user input (e.g., for controlling input of information between the model reconstruction system 110 and the imaging system 120) and/or provide information to a user. For example, in some variations the user interface device 130 can include a display (e.g., monitor, goggles, glasses, AR/VR device, etc.) for displaying a 3D model to a user.
[0028] The imaging system, catheter mapping system, model reconstruction system, and/or user interface device can be further communicatively coupled to an imaging probe guidance system 140. Generally, the imaging probe guidance system 140 functions to generate, receive, and/or communicate to an operator of the imaging system 120 various information relating to a suggested pose (e.g., location and/or orientation) for an imaging probe for obtaining certain desired images (e.g., with the ultimate goal of obtaining the images themselves, and/or with the ultimate goal of obtaining the images on which at least a portion of a 3D model can be based, such as for completion of a missing portion of the 3D model). Further details of the imaging probe guidance system 140 are described herein. Like the model reconstruction system 110, the imaging probe guidance system 140 can include at least one processor 142 and at least one memory device 144, which can be similar to the processor 112 and/or the memory device 114, respectively. In some variations, the model reconstruction system 110 and the imaging probe guidance system 140 can incorporate the same physical instances of processor(s) and/or memory device(s).
[0029] FIG. 2 is a flowchart of an example method 200 for providing automatic guidance of image collection for 3D reconstruction of a target anatomy (also referred to herein as image collection for generating a 3D model of a target anatomy). The method 200 can, for example, be performed at least in part by the model reconstruction system 110 and/or the imaging probe guidance system 140. As shown in FIG. 2, method 200 can include receiving a 3D model of a target anatomy 210, determining a region of missing data in the 3D model 220, and generating a suggested imaging probe pose for imaging a portion of the target anatomy corresponding to the region of missing data 230. The method 200 can further include providing the suggested imaging probe pose to a user 240, receiving one or more supplemental two-dimensional (2D) images collected based on the suggested imaging probe pose 250 (e.g., while an operator manipulates an imaging probe toward the suggested imaging probe pose, or after an operator manipulates an imaging probe toward the suggested imaging probe pose), and updating the 3D model based on the supplemental 2D images 260. However, in some variations, the method 200 can omit generating and/or providing a suggested imaging probe pose, and instead rely on an operator’s knowledge and experience to properly manipulate the imaging probe to obtain the desired images corresponding to the region of missing data, which can be communicated to the operator.
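As a rough sketch of how the steps of method 200 could compose into a guidance loop, consider the following Python outline; every helper callable here is a hypothetical stand-in for the subsystems of FIG. 1 (none of these names come from the disclosure), stubbed so the sketch runs end to end:

```python
def guide_image_collection(model_3d, find_missing, suggest_pose,
                           show_pose, collect_images, update_model,
                           max_rounds=5):
    """One plausible control loop over steps 220-260 of method 200."""
    for _ in range(max_rounds):
        missing = find_missing(model_3d)           # step 220
        if missing is None:                        # no deficiency found
            break
        pose = suggest_pose(model_3d, missing)     # step 230
        show_pose(pose)                            # step 240
        images = collect_images(pose)              # step 250
        model_3d = update_model(model_3d, images)  # step 260
    return model_3d

# Trivial stubs: one guidance round, after which the model is complete.
final_model = guide_image_collection(
    model_3d={"complete": False},
    find_missing=lambda m: None if m["complete"] else "region-A",
    suggest_pose=lambda m, region: ("pose-for", region),
    show_pose=print,
    collect_images=lambda pose: ["supplemental-image"],
    update_model=lambda m, images: {"complete": True},
)
```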
A. Receiving a 3D model
[0030] As described above, the method 200 can include receiving a 3D model of a target anatomy 210. The 3D model can, for example, be obtained with and/or from an imaging system (e.g., imaging system 120). The received 3D model can be generated at least in part from a plurality of 2D images collected with an imaging probe, such as imaging probe 122. In one illustrative example, the imaging probe is an ultrasound probe configured to collect 2D ultrasound images. In some variations, the ultrasound probe can be configured to collect 2D ultrasound images generated from one or more ultrasound sweeps from various vantage points. For example, the ultrasound probe can be manipulated in various poses (e.g., translated and/or rotated) at one or more imaging windows. The imaging probe can be manipulated external to the patient to gather images. For example, in some variations the imaging probe can be an ultrasound probe configured to perform a transthoracic ultrasound for imaging a heart. In some variations in which the method is performed intra-procedurally, the 2D images can also depict one or more medical devices (e.g., a navigational catheter) relative to patient anatomy, to further help guide placement and/or other operation of the depicted medical device(s).
[0031] In some variations, the 2D images can be segmented with one or more segmentation masks, and the segmentation masks can be projected into 3D space to form the 3D model. For example, segmentation masks can be applied to generate segmented pixels from the 2D images corresponding to anatomical features. In one example in which the target anatomy is a heart, the segmentation masks can delineate between various features of the heart such as the aortic arch, Bachmann’s bundle, left atrium, atrioventricular (AV) bundle (bundle of His), left ventricle, right and left bundle branches, Purkinje fibers, right ventricle, right atrium, posterior internodal, middle internodal, AV node, anterior internodal, sinoatrial (SA) node, apex, veins, valves, trunks, blood pool, and/or other suitable cardiac anatomy. The segmentation masks can be generated in an automated, semi-automated, or manual manner.
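For illustration only, the following is a minimal sketch of projecting segmented 2D pixels into 3D space, assuming a 4x4 homogeneous image-to-world transform per frame (derived, for example, from the EM-tracked probe pose) and a known pixel spacing; these conventions are assumptions, not the disclosed pipeline:

```python
import numpy as np

def project_mask_to_3d(mask: np.ndarray, pose: np.ndarray,
                       pixel_spacing_mm: float) -> np.ndarray:
    """Lift segmented pixels of one 2D frame into the world (tracking)
    frame. mask: (H, W) boolean label image; pose: 4x4 transform from
    the probe's scan plane (z = 0) to the world frame."""
    rows, cols = np.nonzero(mask)
    pts = np.stack([cols * pixel_spacing_mm,   # x in the image plane
                    rows * pixel_spacing_mm,   # y in the image plane
                    np.zeros_like(rows, dtype=float),
                    np.ones_like(rows, dtype=float)])
    return (pose @ pts)[:3].T  # (N, 3) world-frame points

demo_mask = np.zeros((4, 4), dtype=bool)
demo_mask[1, 2] = True
print(project_mask_to_3d(demo_mask, np.eye(4), pixel_spacing_mm=0.5))
```

Accumulating such points over all frames and labels yields the 3D projection from which a surface can be meshed.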
[0032] Additionally or alternatively, the received 3D model can be generated at least in part from mapping data collected by a mapping catheter (e.g., catheter 128 in the catheter mapping system 126). For example, a composite 3D model may be generated by combining imaging data (e.g., from imaging system 120) and mapping data (e.g., from catheter mapping system 126). However, in some variations the received 3D model may be generated only from imaging data or only mapping data.
B. Determining a missing region of the 3D model
[0033] As described above, the method 200 can include determining a region of missing data in the 3D model 220, where missing data corresponds to one or more portions of the 3D model that are incomplete (e.g., due to insufficient images). The region of missing data in the 3D model can be determined in various manners.
[0034] For example, in some variations, the region of missing data can be determined at least in part by fitting a mesh surface around an external surface of the 3D model, and identifying a deficiency of the fitted mesh surface. For example, the segmentation masks (described above) for the 2D images can be projected into 3D space to form a 3D projection of the target anatomy. A mesh surface (e.g., with triangular cells, nodes, etc.) can be fit around the exterior surface (e.g., wall) of the 3D projection. A deficiency in the mesh surface, such as a non-smooth surface (e.g., a non-continuity such as a sharp edge, etc.) and/or a recess (e.g., hole) in the mesh surface, can be indicative of a portion of the 3D model that is missing. For example, as shown in the illustrative schematic of FIG. 3, a mesh surface can be fit around a 3D model of target anatomy 300. Sharp corners and/or unnaturally flat (e.g., planar) surfaces at regions 310a and 310b (indicated by dashed lines on the surface of the 3D model 300) can be identified as deficiencies of the 3D model 300. The deficiencies 310a and 310b can indicate missing portions of the 3D model (shown in FIG. 3 as regions “A” and “B”, respectively, of the true geometry of the target anatomy). An illustrative example of determining a region of missing data in a 3D model of a heart is shown in FIG. 4. In particular, FIG. 4 depicts an example 3D model 400 of a heart including some or all of at least four chambers of the heart. However, a region 410 includes an unnaturally flat surface (e.g., a planar region that indicates a portion of a heart chamber that is cut off unexpectedly), which can be identified in a mesh surface fitted around the 3D model. This flat surface can be indicative of missing data corresponding to an incomplete portion of the heart chamber.
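One plausible way to implement the "recess" check described above is boundary-edge counting: in a watertight triangle mesh every edge is shared by exactly two triangles, so edges used only once bound a hole. A minimal sketch follows (the function name and the faces-array convention are assumptions for illustration):

```python
from collections import Counter

import numpy as np

def find_hole_edges(faces: np.ndarray):
    """Return boundary edges of a triangle mesh; any non-empty result
    flags a candidate region of missing data."""
    edge_counts = Counter()
    for a, b, c in faces:
        for edge in ((a, b), (b, c), (c, a)):
            edge_counts[tuple(sorted(edge))] += 1
    return [edge for edge, n in edge_counts.items() if n == 1]

# A lone triangle is all boundary -- an extreme "hole".
print(find_hole_edges(np.array([[0, 1, 2]])))
```

Non-smooth deficiencies (sharp edges, unnaturally planar patches) could analogously be flagged by comparing adjacent face normals, though that check is not sketched here.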
[0035] As another example, in some variations, the region of missing data in the 3D model can be determined at least in part by comparing the 3D model to a statistical shape model (SSM) of the target anatomy, and identifying a portion of the 3D model that is in low agreement with a corresponding portion of the SSM.
[0036] Comparing the 3D model can include fitting the 3D model to one or more SSMs. SSMs provide a way of characterizing a given shape based on the shape’s population variation and mean. For example, an SSM applied to a collection of like objects (e.g., type of anatomy, such as a heart) provides a statistical evaluation of the general shape of the object and how the objects differ from one another. SSMs can be used for classification, segmentation, and phantom generation (e.g., producing new representations of the object). In some variations, one or more processors (e.g., in the model reconstruction system 110 and/or imaging probe guidance system 140) can have access to a plurality of stored SSMs for the target anatomy (e.g., SSMs generated based on training data of many instances of the target anatomy for various subjects) for comparison to the 3D model. As shown in the illustrative schematic of FIG. 5, at least one SSM 520 can be identified (e.g., using a suitable best fit analysis) that is a close fit to the 3D model of the target anatomy. In other words, much of the 3D model 500 may be adequately consistent with the identified SSM 520, which can be an indication that much of the 3D model 500 is an adequate reconstruction of the corresponding regions of target anatomy. However, there may be regions of low agreement (e.g., region 510a) between the 3D model 500 and the SSM 520, which can be an indication of missing imaging data for corresponding regions of the target anatomy.
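As a hedged illustration of the low-agreement test, the following sketch scores each model vertex by its distance to the corresponding SSM vertex (index correspondence is assumed here purely for simplicity; a real fit would establish correspondences as part of the SSM registration):

```python
import numpy as np

def low_agreement_vertices(model_pts: np.ndarray, ssm_pts: np.ndarray,
                           quantile: float = 0.95) -> np.ndarray:
    """Indices of model vertices whose distance to the fitted SSM falls
    in the top (1 - quantile) tail, i.e., regions of low agreement."""
    distances = np.linalg.norm(model_pts - ssm_pts, axis=1)
    return np.nonzero(distances > np.quantile(distances, quantile))[0]

model = np.array([[0, 0, 0], [1, 0, 0], [5, 0, 0]], dtype=float)
ssm = np.array([[0, 0, 0], [1, 0, 0], [1, 0, 0]], dtype=float)
print(low_agreement_vertices(model, ssm, quantile=0.6))  # -> [2]
```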
[0037] Additionally or alternatively, a region of missing data in the 3D model can be determined based on receiving an input from a user (e.g., operator of the imaging probe (e.g., imaging probe 122), other user, etc.). For example, the user can review the 3D model (e.g., visualized on a display screen, such as on a user interface device 130) and observe one or more regions that appear to be deficient, such as by looking for non-smooth surfaces, recesses, and/or the like. As an illustrative example, the user may review a heart 3D model such as that shown in FIG. 4, and observe that region 410 includes an unnaturally flat surface (e.g., a planar region that indicates a portion of a heart chamber that is cut off unexpectedly). The user can then select the region 410 to identify it as a region of missing data. For example, the user can trace an outline of the region 410 (e.g., on a display, such as with control of a cursor or with a finger or stylus) with a continuous border outline or by selecting points around a border of the region 410.
[0038] In some variations, the 3D model can be fitted to multiple SSMs, in that multiple sets of parameters in multiple SSMs can fit the observed data in the 3D model. In these instances, it may initially be unclear where the true regions of missing data in the 3D model (that is, relative to the true geometry of the target anatomy) exist, as the multiple SSMs may overall be substantially quantitatively similar to the 3D model, but each SSM may be in low agreement with the 3D model in different regions. For example, as shown in the illustrative schematic of FIG. 6, a 3D model 600 can be fit to two SSMs, including a first SSM 520a and a second SSM 520b. The first SSM 520a and the second SSM 520b match the 3D model in many aspects, but also differ from the 3D model in different ways. Depending on which SSM is taken as the ideal SSM, different region(s) of the 3D model may be identified as missing based on the comparison of the 3D model to that ideal SSM. Accordingly, in these variations in which the 3D model can be fitted to multiple SSMs (a degenerate solution to the fitting process), the method can include identifying the regions where the multiple SSMs differ the most from each other (e.g., using a suitable Gaussian process), and indicating to an operator to image these regions. Images of these regions can break the degeneracy, as a new partial 3D model of each region generated from the degeneracy-breaking images can be compared to the multiple SSMs, and the SSM that is closest to the new partial 3D model can be identified as the “ideal” SSM for purposes of moving forward with the method 200. Additionally or alternatively, the method 200 can include suggesting one or more imaging probe poses that will enable the acquisition of the degeneracy-breaking images (e.g., using ray tracing techniques and/or at least a portion of method 800 as described in further detail below). In other words, in some variations the method 200 can include (with or without explicitly suggesting imaging probe pose(s)) obtaining 2D image(s) of at least one region of the target anatomy where a new partial 3D model of the target anatomy would aid in the selection of an ideal or optimum SSM, among multiple candidate SSMs.
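The disclosure mentions a Gaussian process for locating where candidate SSMs disagree most; as a simpler stand-in (named plainly as a substitution, not the disclosed technique), the sketch below ranks vertices by their spread across the fitted SSMs, with the highest-spread vertices marking where degeneracy-breaking images would help most:

```python
import numpy as np

def degeneracy_breaking_order(fitted_ssms) -> np.ndarray:
    """Given several SSM fits, each an (N, 3) vertex array in mutual
    correspondence, return vertex indices ordered from most to least
    disagreement across the fits."""
    stacked = np.stack(fitted_ssms)            # (K, N, 3)
    spread = stacked.std(axis=0).sum(axis=1)   # per-vertex disagreement
    return np.argsort(spread)[::-1]

fit_a = np.zeros((3, 3))
fit_b = np.zeros((3, 3))
fit_b[2] = 1.0  # the two fits disagree only at vertex 2
print(degeneracy_breaking_order([fit_a, fit_b]))  # vertex 2 first
```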
C. Generating a suggested imaging probe pose
[0039] As described above, the method 200 can include generating a suggested imaging probe pose for imaging a portion of the target anatomy corresponding to the region of missing data 230. In some variations, the imaging probe pose can be generated with an imaging probe guidance system, such as the imaging probe guidance system 140 described above with respect to FIG. 1. Generally, the suggested imaging probe pose includes a suggested location and/or orientation of the imaging probe (e.g., imaging probe 122) to enable the imaging probe to image the portion of the target anatomy corresponding to the region of missing data in the 3D model. In some variations, the location and/or orientation is generated using one or more suitable ray tracing techniques. For example, the method can include defining an imaging window that includes the portion of target anatomy corresponding to the region of missing data in the 3D model (e.g., such that the desired portion of target anatomy is centered within the imaging window), and extending one or more ray paths from the imaging window (e.g., angular borders of the imaging window) back to a particular root point. The location of the suggested imaging probe pose can, in some variations, be centered at this root point, and the orientation of the imaging probe pose can be determined based on physical accessibility (e.g., where the imaging probe can be positioned external to a patient, for a transthoracic ultrasound scan).
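A heavily simplified sketch of the ray-tracing idea follows; a real implementation would intersect the border rays and test physical accessibility, whereas here a fixed window-normal convention and a single depth parameter are assumed for illustration:

```python
import numpy as np

def root_point_from_window(window_corners, depth_mm: float) -> np.ndarray:
    """Approximate the common apex (root point) of rays traced back
    from an imaging window. An ultrasound sector shares one origin, so
    stepping back from the window center along the inward normal gives
    a candidate probe-face location for the suggested pose."""
    corners = np.asarray(window_corners, dtype=float)  # (M, 3)
    center = corners.mean(axis=0)
    inward_normal = np.array([0.0, 0.0, 1.0])  # assumed convention
    return center - depth_mm * inward_normal

window = [(-10, -10, 50), (10, -10, 50), (10, 10, 50), (-10, 10, 50)]
print(root_point_from_window(window, depth_mm=50.0))  # ~ [0, 0, 0]
```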
[0040] Additionally or alternatively, the suggested imaging probe pose can have been previously associated with the missing portion of the target anatomy corresponding to the region of missing data 230. For example, the imaging probe guidance system can be similar to the probe guidance system 710 shown schematically in FIG. 7. In some variations, the probe guidance system 710 can be an example of the probe guidance system 140 of FIG. 1. As shown in FIG. 7 depicting an imaging probe guidance arrangement 700, the imaging probe guidance system 710 can be communicatively coupled to a probe information database 730, which can store one or more suggested imaging probe poses and their associated imaging windows (and/or anatomical regions captured in their associated imaging windows), such as for a particular type of target anatomy (e.g., heart, lung, etc.). These stored suggested imaging probe poses, and their associations, can be previously generated such as through prior imaging processes (e.g., bench test data-gathering, medical procedures, etc.) relating to the target anatomy. For example, through prior imaging processes, one or more imaging probe poses can be identified as facilitating the imaging of certain portions of the target anatomy, and this association can be stored in the probe information database 730 such that the probe guidance system 710 can search the database for a certain desired portion of the target anatomy to be imaged and identify one or more suggested imaging probe poses associated in the database with that desired portion of the target anatomy. The probe information database 730 can further be updated when the method 200 is performed, as additional information regarding relationships between imaging probe poses and imaging windows for a type of target anatomy is identified and/or existing such relationships are confirmed by an operator as successfully enabling imaging of the desired portions of target anatomy. Accordingly, in some variations the accuracy and completeness of the probe information database 730 can continue to improve as the method 200 is performed repeatedly.
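A toy sketch of the pose lookup such a database could support (the schema, keys, and values are invented for illustration and imply nothing about the actual probe information database 730):

```python
# Anatomy/region pairs keyed to previously successful probe poses.
PROBE_POSE_DB = {
    ("heart", "left_atrium"): [{
        "window": "parasternal",
        "position_mm": (12.0, -30.5, 4.0),
        "orientation_quat": (1.0, 0.0, 0.0, 0.0),
    }],
}

def suggested_poses(anatomy: str, region: str):
    """Return stored candidate poses for imaging the given region."""
    return PROBE_POSE_DB.get((anatomy, region), [])

print(suggested_poses("heart", "left_atrium"))
```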
D. Providing the suggested imaging probe pose to a user
[0041] As described above, the method 200 can include providing the suggested probe pose to a user 240 operating an imaging probe (e.g., imaging probe 122), which in some variations can be similar to at least a portion of a method 800 summarized in FIG. 8. FIG. 8 is a flowchart of an example method 800 for providing automatic guidance of image collection for 3D reconstruction of a target anatomy. As shown in FIG. 8, method 800 can include receiving a suggested imaging probe pose (also referred to herein as a “target” pose) associated with a desired imaging window 810, receiving tracking data representing a current pose of an imaging probe 820, and guiding the current pose of the imaging probe toward the target imaging probe pose 830. In the method 800, the imaging probe can, in some variations, be similar to the imaging probe 122 of FIG. 1.
[0042] As shown in FIG. 9, 2D images can be collected by an imaging probe 922, which can be similar to the imaging probe 122 of FIG. 1. As the 2D images are collected, a live stream of the 2D images 914 can be provided on a display (e.g., user interface device 130) to a user operating the imaging probe 922. Pose tracking data for the imaging probe can be collected concurrently while the imaging probe is collecting the 2D images. For example, as shown in FIG. 9, the imaging probe 922 can include one or more tracking sensors 924 configured to collect position and/or orientation data for the imaging probe as the imaging probe collects 2D images. In some variations, the one or more tracking sensors 924 can include an EM sensor used in conjunction with a suitable EM tracking system. For example, the tracking sensors 924 can include an EM sensor similar to EM sensor 124 of FIG. 1. Accordingly, a current pose (e.g., location and/or orientation) of the imaging probe is known when the imaging probe collects each 2D image. As shown in FIG. 9, a graphical representation of the current pose of the imaging probe can be displayed on the display so the operator can better track where the imaging probe is relative to anatomy.
[0043] In some variations, guiding the current pose of the imaging probe toward the target imaging probe pose can include displaying a graphical representation of the current pose of the imaging probe (e.g., similar to that shown in FIG. 9), as well as displaying another graphical representation of the target imaging probe pose. In some variations, such graphical representations can be displayed on a suitable display similar to user interface device 130 of FIG. 1. As the imaging probe is manipulated, the displayed graphical representation of the current pose of the imaging probe also changes position and/or orientation accordingly. When viewing the graphical representations of both current and target imaging probe poses, an operator of the imaging probe can manipulate the imaging probe until the graphical representation of the current imaging probe pose is proximate the graphical representation of the target imaging probe pose (e.g., such that the graphical representations are substantially coincident or overlie each other, or are sufficiently near each other).
[0044] In some variations, the graphical representations of the current imaging probe pose and the target imaging probe pose can be displayed in different display schemes, such that the operator can visually distinguish between the two graphical representations and/or to help the operator focus more on one of the graphical representations than the other. The display schemes can vary with one or more visual parameters, such as transparency level, color, patterning, and/or sharpness. For example, as shown in FIG. 10, a first graphical representation 1010 of a target imaging probe pose can be displayed in a first display scheme, and a second graphical representation 1020 of a current imaging probe pose can be displayed in a second display scheme. In the example of FIG. 10, the first graphical representation 1010 is more transparent, while the second graphical representation 1020 is more opaque. Alternatively, the first graphical representation 1010 can be more opaque and the second graphical representation 1020 can be more transparent. Additionally or alternatively, the first graphical representation 1010 can be displayed in a first color (or pattern) and the second graphical representation 1020 can be displayed in a second color or pattern different from the first color or pattern. Additionally or alternatively, the first graphical representation 1010 can be blurrier and the second graphical representation 1020 can be sharper (or vice versa). When the first and second graphical representations are generally overlaid with each other, the current field of view 1022 (e.g., echo view for an ultrasound probe) associated with the current imaging probe pose 1020 is also generally overlaid with the target field of view 1012 associated with the target imaging probe pose 1010. Fidelity to the match or discordance of the current imaging probe pose 1020 and the target imaging probe pose 1010 can additionally or alternatively be visually conveyed with one or more objective (e.g., quantitative) measures such as an indication of angle between an axis of the current imaging probe pose 1020 and an axis of the target imaging probe pose 1010 (e.g., the smaller the angle, the more closely the current imaging probe pose approximates the target imaging probe pose). Additionally or alternatively, the fidelity to the match or discordance of the current imaging probe pose 1020 and the target imaging probe pose 1010 can be visually conveyed with one or more visual indicators, such as by modifying one or more display schemes (e.g., hue, brightness) for displaying the images, background, and/or one or more icons or other suitable indicators (e.g., for display adjacent to or overlaid with an image, etc.).
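The angle-based fidelity measure mentioned above could be computed as follows; this sketch (with an illustrative, arbitrary 10-degree threshold) also shows how the angle might drive a green/red display scheme:

```python
import numpy as np

def pose_alignment_deg(current_axis, target_axis) -> float:
    """Angle between the current and target probe axes; smaller means
    the operator is closer to replicating the target pose."""
    a = np.asarray(current_axis, dtype=float)
    b = np.asarray(target_axis, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    return float(np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0))))

def overlay_color(angle_deg: float, threshold_deg: float = 10.0) -> str:
    return "green" if angle_deg <= threshold_deg else "red"

angle = pose_alignment_deg([0, 0, 1.0], [0, 0.2, 1.0])
print(round(angle, 1), overlay_color(angle))  # ~11.3 red
```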
[0045] In some variations, the graphical representations of the target and current imaging probe poses can be displayed to a user concurrently with a live stream of images from the imaging probe (e.g., echo view from an ultrasound probe). For example, FIG. 11 is a schematic depiction of graphical representations of the target and current imaging probe poses being displayed on a first display 1130a and a live echo view from the imaging probe being displayed on a second display 1130b. The first display 1130a is configured to display, for example, the graphical representation of the target imaging probe pose 1110 and the graphical representation of the current imaging probe pose 1120, similar to that described above with respect to FIG. 10. The first display 1130a can be on a separate screen (e.g., monitor) from the second display 1130b, or the first and second displays 1130a and 1130b can be on the same screen (e.g., monitor) but in different windows, display portions, and/or the like. Alternatively, the graphical representations of the target and current imaging probe poses can be displayed overlaid with a live stream of images from the imaging probe.
[0046] Another example of guiding a user toward a target imaging probe pose can include modulating the appearance of the graphical representation of the current imaging probe pose as the imaging probe approaches the target imaging probe pose. For example, the graphical representation of the current imaging probe pose can be displayed in one color (e.g., green) when the current imaging probe pose reaches within a threshold proximity to the target imaging probe pose, and/or displayed in another color (e.g., red) when the current imaging probe pose is outside the threshold proximity to the target imaging probe pose. Additionally or alternatively, the graphical representation of the current imaging probe pose can change to indicate that the probe is currently being moved toward or away from the target imaging probe pose (e.g., change color, blink with a certain frequency, etc.). In some variations, the appearance of the graphical representation of the current imaging probe pose can be modulated without displaying the graphical representation of the target imaging probe pose.
[0047] Another example of guiding a user toward a target imaging probe pose can include providing haptic feedback (e.g., vibration) on the imaging probe itself to indicate when the operator is correctly manipulating the imaging probe toward the target imaging probe pose. For example, the imaging probe can be configured to vibrate to indicate an error (e.g., that the imaging probe is incorrectly being moved away from the target imaging probe pose), and/or indicate success (e.g., that the imaging probe is correctly being moved toward the target imaging probe pose). For example, vibration at one frequency can be activated to indicate error in probe movement, while vibration at a different frequency can be activated to indicate success in probe movement.
[0048] Although FIGS. 9-11 illustrate a probe-shaped or pseudo-realistic representation of the imaging probe, it should be understood that any suitable graphical representation may be used (e.g., lines, arrows, dots, polygons, etc.) to indicate location and/or orientation of an imaging probe pose.
[0049] As described above, the display of the graphical representations of the target and current imaging probe poses can function to guide a user to manipulate an imaging probe to a suggested probe pose to collect desired images depicting anatomy corresponding to the missing portion(s) of the 3D model of the target anatomy. However, in some variations, guiding the collection of images can additionally or alternatively include one or more further aspects of guidance as described in further detail below with respect to method 800 being performed as part of a training module (e.g., depicting a target probe manipulation pathway associated with a desired imaging window, transforming graphical representations of the target and/or current imaging probe poses to an operator frame of reference, etc.).
E. Updating 3D model
[0050] As discussed above, the method 200 can include receiving supplemental 2D images of the target anatomy 250 corresponding to the regions of missing data in the 3D model, and updating the 3D model based on the supplemental 2D images 260. Similar to that described above with respect to the generation of the initial received 3D model, the supplemental 2D images can be segmented with one or more segmentation masks, and the segmentation masks can be projected into 3D space to form at least a portion of the 3D model. For example, segmentation masks can be applied to generate segmented pixels from the 2D images corresponding to anatomical features.
[0051] In some variations, the segmentation masks of the supplemental 2D images can be combined with the segmentation masks of the initial 2D images, and all the segmentation masks can collectively be projected to form a new 3D model that includes the previously missing data. Alternatively, the initial received 3D model can be combined with a partial new 3D model that is generated based on only the supplemental 2D images. The updated 3D model generated from either of these processes can be a reconstruction of the entire target anatomy.
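A minimal sketch of one way to merge the initial and supplemental data before re-meshing: pool the world-frame segmented points and deduplicate on a voxel grid (the voxel size and function name are assumptions for illustration, not the disclosed combination step):

```python
import numpy as np

def merge_point_sets(initial_pts: np.ndarray, supplemental_pts: np.ndarray,
                     voxel_mm: float = 1.0) -> np.ndarray:
    """Combine two (N, 3) point sets and keep one point per voxel, so
    overlapping sweeps do not over-weight regions imaged twice."""
    merged = np.vstack([initial_pts, supplemental_pts])
    voxel_keys = np.floor(merged / voxel_mm).astype(np.int64)
    _, keep = np.unique(voxel_keys, axis=0, return_index=True)
    return merged[keep]

a = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0]])
b = np.array([[0.1, 0.0, 0.0], [5.0, 0.0, 0.0]])
print(merge_point_sets(a, b))  # two points survive deduplication
```

The merged set can then be surfaced (e.g., re-meshed) to form the updated 3D model.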
[0052] Although method 200 is primarily described with respect to updating the 3D model based on supplemental images, in some variations method 200 may additionally or alternatively include receiving supplemental mapping data (e.g., from catheter mapping system 126) and updating the 3D model based at least in part on the supplemental mapping data.
F. Displaying the updated 3D model
[0053] In some variations, the method 200 can further include displaying the updated 3D model 270 on a suitable display (e.g., user interface device 130). For example, in some variations, the displayed 3D model can be manipulated by a user (e.g., rotated, enlarged, viewed along one or more cross-sectional planes, etc.). Additionally or alternatively, the 3D model can be stored on a suitable memory device.
[0054] Furthermore, in some variations a graphical representation of a medical device (e.g., catheter, such as a delivery or navigational catheter) can be generated when the medical device is located (e.g., navigated) in or near the target anatomy. As described elsewhere herein, the medical device can include one or more tracking sensors (e.g., EM sensors) that provide position and/or orientation information of the medical device. In some variations, the graphical representation of the medical device can be located relative to the target anatomy based on tracking data from the tracking sensors of the medical device, and the graphical representation of the medical device can be displayed concurrently with the 3D model on a display accordingly, such that a user can view the location of the medical device relative to the target anatomy on the display. A graphical representation of a current (or present) location of the medical device and/or past locations (e.g., path tracking) of the medical device in the patient can be displayed. In some variations, the graphical representation of the medical device can be overlaid on the 3D model.
[0055] The display of the graphical representation of the medical device can be toggled on and off (e.g., selected for display or non-display). For example, a user can select through a user control on a user interface device (e.g., user interface device 130) to display the graphical representation of the medical device, or select through the user control to hide the graphical representation of the medical device. As another example, the graphical representation of the medical device may be automatically displayed or hidden based on one or more parameters, such as automatically displayed based on proximity to a location of interest (e.g., implant location) in the target anatomy. Additionally or alternatively, a user can select a display scheme (e.g., color, transparency, etc.) of the graphical representation of the medical device on the display.
II. Training System for Image Collection
[0056] As described above, at least a portion of the method 800 can be performed to generate a more complete 3D model of target anatomy of a live patient (e.g., during a medical procedure). However, in some variations at least a portion of the method 800 can additionally or alternatively be performed as part of a training module for guiding image collection (regardless of whether a 3D model is generated from such images). For example, the imaging probe guidance arrangement 700 shown in FIG. 7 can be used as a training tool for imaging technicians to help them gain experience in operating the imaging system and become more efficient (which may, for example, help reduce patient exposure to any harmful radiation that may occur during imaging).
[0057] The method 800 can, for example, be used to provide training in combination with an imaging subject. The imaging subject could be a live patient (e.g., a human patient) or an inanimate training model. For example, the training model can be accompanied by preset imaging windows with associated imaging probe poses and/or images. A trainee can, for example, be instructed to manipulate an imaging probe to match target imaging probe poses and replicate the preset images associated with the target imaging probe poses. In some variations, the imaging probe can be similar to the imaging probe 122.
[0058] As shown in FIG. 7, an imaging probe guidance system 710 can be communicatively coupled to a probe information database 730 so as to obtain information relating to imaging probe poses and their associated imaging windows. In some variations, the imaging probe guidance system 710 can be an example of the imaging probe guidance system 140 of FIG. 1. The probe information database 730 can store one or more target imaging probe poses and their associated imaging windows (and/or anatomical regions captured in their associated imaging windows), such as for a particular type of target anatomy (e.g., heart, lung, etc.). These stored imaging probe poses, and their associations, can be previously generated, such as through prior imaging processes (e.g., bench test data-gathering, medical procedures, etc.) relating to the target anatomy. For example, through prior imaging processes, one or more imaging probe poses can be identified as facilitating the imaging of certain portions of the target anatomy, and this association can be stored in the probe information database 730 such that the probe guidance system 710 can search the database for a desired portion of the target anatomy to be imaged and identify one or more suggested imaging probe poses associated in the database with that portion. The probe information database 730 can further be updated with imaging probe poses entered by an operator over time, as additional information regarding relationships between imaging probe poses and imaging windows for a type of target anatomy is identified.
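One plausible shape for such a database query, with a hypothetical record schema and made-up pose values, is sketched below:

```python
# Hypothetical schema: each record associates a stored imaging probe pose
# with the imaging window and anatomical region that pose captures.
PROBE_DB = [
    {"anatomy": "heart",
     "region": "left atrial appendage",
     "window": "L parasternal",
     "pose": {"position_mm": (12.0, -34.0, 105.0),
              "quat_wxyz": (0.71, 0.0, 0.71, 0.0)}},
    # ... further records gathered from bench tests or prior procedures
]

def suggested_poses(anatomy, desired_region):
    """Return stored probe poses whose imaging windows cover the
    desired portion of the target anatomy."""
    return [rec["pose"] for rec in PROBE_DB
            if rec["anatomy"] == anatomy and desired_region in rec["region"]]
```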
[0059] Additionally or alternatively, the probe information database 730 can store one or more target probe manipulation pathways and associated imaging windows (and/or anatomical regions captured in their associated imaging windows), such as for a particular type of anatomy (e.g., heart, lung, etc.). A target probe manipulation pathway can include information regarding a physical approach used to place an imaging probe at a desired pose (e.g., moving bony structures out of the way, stretching skin or other tissue, etc. to help create access for the imaging probe to the target imaging probe pose). Like the stored imaging probe poses, various target probe manipulation pathways (and their associations) can be previously generated, such as through prior imaging processes (e.g., bench test data-gathering, medical procedures, etc.) relating to the target anatomy. Furthermore, the probe information database 730 can be updated with target probe manipulation pathways entered by an operator over time, as additional information regarding relationships between imaging probe poses and imaging windows for a type of target anatomy is identified.

[0060] The imaging probe guidance system 710 can further be configured to receive sensor information from a sensor 10 that provides an indication of the position and/or orientation of a subject P (e.g., a live training subject or an inanimate training model). For example, the sensor 10 can be an EM tracking sensor, and can be coupled to an external surface of the subject P (e.g., skin of a live, human training subject, or an external surface of a training model) and/or to a surface on which the subject P lies.
[0061] The imaging probe guidance system 710 can also be communicatively coupled to a display 720 (which can be similar to, or different from, the user interface device 130). The display 720 can be configured to display guidance information from the imaging probe guidance system 710 (e.g., graphical representations of target and current imaging probe poses, target probe manipulation pathways, etc.), medical images (e.g., a live stream of images collected by the imaging probe), and/or other suitable information. In some variations, the display 720 can include a tracking sensor 722 (e.g., a position sensor and/or orientation sensor, such as an EM tracking sensor, accelerometer, etc.) that provides an indication of the position and/or orientation of the display 720.
[0062] As shown in FIG. 7, the probe guidance system 710 can include a tracking module 712 and a visualization module 714. The tracking module 712 can be configured to track the pose (e.g., location and/or orientation) and/or pathway of an imaging probe during an imaging procedure. For example, the tracking module 712 can receive sensor data from a tracking sensor (e.g., EM tracking sensor such as EM sensor 124) that is coupled to the imaging probe or otherwise moves in tandem with the imaging probe in any suitable manner. Additionally or alternatively, in some variations the tracking module 712 can receive sensor data (e.g., from a sensor 10) providing an indication of the position and/or orientation of an imaging subject or a surface on which the imaging subject lies, so as to track the position and/or orientation of the imaging subject. Additionally or alternatively, in some variations the tracking module 712 can receive sensor data from the display tracking sensor 722 providing an indication of the position and/or orientation of a display on which probe guidance information can be displayed.
[0063] Additionally or alternatively, in some variations the tracking module 712 can also sense the position of the imaging probe relative to the anatomy and automatically annotate one or more imaging views and/or images according to the imaged anatomy. For example, when performing a transthoracic ultrasound (e.g., for imaging a heart), the tracking module 712 can sense the position of the imaging probe and automatically label the associated imaging window based on the relevant anatomy present in the imaging window (e.g., subcostal, substernal, L/R parasternal, suprasternal, etc.). Of course, it should be understood that in some variations the exact labels of the imaging windows may vary based on the type of anatomy being imaged.
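Such automatic annotation could, for instance, be implemented as a nearest-window classification against stored reference positions; the window coordinates below are placeholders, not anatomical data:

```python
import numpy as np

# Placeholder reference positions (tracker frame, mm) for standard
# transthoracic imaging windows.
WINDOW_CENTERS = {
    "subcostal":     np.array([0.0, -80.0, 0.0]),
    "L parasternal": np.array([-40.0, 20.0, 0.0]),
    "R parasternal": np.array([40.0, 20.0, 0.0]),
    "suprasternal":  np.array([0.0, 90.0, 0.0]),
}

def label_imaging_window(probe_position):
    """Annotate the current view with the nearest standard window label."""
    return min(WINDOW_CENTERS,
               key=lambda w: np.linalg.norm(WINDOW_CENTERS[w] - probe_position))
```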
[0064] The visualization module 714 functions to generate various graphics for providing guidance for image collection using an imaging probe. For example, the visualization module 714 can be configured to generate graphical representations of a target imaging probe pose and a current imaging probe pose. Such graphical representations can, for example, be displayed on a display similar to the user interface device 130. As described above, as the imaging probe is manipulated, the displayed graphical representation of the current pose of the imaging probe changes position and/or orientation accordingly. When viewing the graphical representations of both the current and target imaging probe poses, an operator of the imaging probe can manipulate the imaging probe until the graphical representation of the current imaging probe pose is proximate the graphical representation of the target imaging probe pose (e.g., such that the graphical representations are substantially coincident or overlie each other, or are sufficiently near each other). Accordingly, the target imaging probe pose, when displayed, can assist and train the operator to replicate a desired imaging window or view associated with that target imaging probe pose. As described above, the graphical representations of the target and current imaging probe poses can be displayed in different display schemes that differ in one or more visual parameters such as transparency level, color, patterning, or sharpness.
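A "sufficiently near" test could compare both position and orientation against tolerances, for example as in the following sketch; the 5 mm and 10-degree tolerances are illustrative stand-ins:

```python
import numpy as np

def poses_match(current, target, pos_tol_mm=5.0, angle_tol_deg=10.0):
    """Check whether the current probe pose is sufficiently near the
    target pose. Each pose is a (position, rotation_matrix) pair."""
    (p_cur, R_cur), (p_tgt, R_tgt) = current, target
    pos_ok = np.linalg.norm(np.asarray(p_cur) - np.asarray(p_tgt)) <= pos_tol_mm
    # Geodesic angle between the two orientations: cos(theta) = (tr(R) - 1) / 2.
    cos_theta = (np.trace(R_tgt.T @ R_cur) - 1.0) / 2.0
    angle = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return pos_ok and angle <= angle_tol_deg
```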
[0065] Additionally or alternatively, the visualization module 714 can be configured to generate a representation of a target probe manipulation pathway for placing the imaging probe at a desired target imaging probe pose. The representation of the target probe manipulation pathway can be graphical (e.g., visually highlighting an access path through and/or around anatomy, such as with a colored line or a series of arrows). Additionally or alternatively, the representation of the target probe manipulation pathway can be a text-based description (e.g., a written description, bullet points, etc.) of sub-processes for placing the imaging probe at the desired target imaging probe pose.
[0066] As another example, the visualization module 714 can be configured to transform the probe guidance information (e.g., imaging probe pose, target probe manipulation pathway) to a frame of reference of the observer prior to display of the probe guidance information. In other words, the visualization module 714 can adjust the probe guidance information such that it can be presented with the proper orientation to the imaging probe operator (or other observer). For example, the visualization module 714 can assess tracking data from the display 720 to track the position and/or orientation of the display 720. Based on the position and/or orientation of the display 720, the visualization module 714 can transform (e.g., rotate, flip, etc.) information so that the information is generally oriented in a manner representative of the location and/or orientation of the display in the room relative to the operator. Similarly, the visualization module 714 can assess tracking data from the imaging subject P (e.g., from the sensor 10) to track the position and/or orientation of the imaging subject P. Based on the position and/or orientation of the imaging subject P, the visualization module 714 can transform (e.g., rotate, flip, etc.) information so that the information is generally oriented in a manner representative of the location and/or orientation of the imaging subject in the room relative to the operator.
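The frame-of-reference transformation amounts to re-expressing world-frame guidance graphics in the display's (or subject's) tracked frame; a minimal sketch, assuming the tracked pose is available as a (4, 4) homogeneous transform:

```python
import numpy as np

def to_display_frame(points_world, display_pose_world):
    """Re-express guidance graphics (world-frame points, shape (N, 3))
    in the display's frame. Inverting the display's tracked pose maps
    world coordinates into display coordinates."""
    world_to_display = np.linalg.inv(display_pose_world)
    pts = np.hstack([np.asarray(points_world),
                     np.ones((len(points_world), 1))])
    return (world_to_display @ pts.T).T[:, :3]
```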
[0067] While certain aspects of the method 800 are described above with reference to use of the probe guidance system for training purposes, it should be understood that any one or more suitable aspects of the method 800 can additionally or alternatively be performed in connection with an actual imaging procedure (e.g., as part of method 200 described above).
Conclusion
[0068] Although many of the embodiments are described above with respect to systems, devices, and methods for providing automatic guidance for image collection for 3D reconstruction of target anatomy, the technology is applicable to other applications and/or other approaches. Moreover, other embodiments in addition to those described herein are within the scope of the technology. Additionally, several other embodiments of the technology can have different configurations, components, or procedures than those described herein. A person of ordinary skill in the art, therefore, will accordingly understand that the technology can have other embodiments with additional elements, or the technology can have other embodiments without several of the features shown and described above with reference to FIGS. 1-11.
[0069] The descriptions of embodiments of the technology are not intended to be exhaustive or to limit the technology to the precise form disclosed above. Where the context permits, singular or plural terms may also include the plural or singular term, respectively. Although specific embodiments of, and examples for, the technology are described above for illustrative purposes, various equivalent modifications are possible within the scope of the technology, as those skilled in the relevant art will recognize. For example, while steps are presented in a given order, alternative embodiments may perform steps in a different order. The various embodiments described herein may also be combined to provide further embodiments.

[0070] As used herein, the terms “generally,” “substantially,” “about,” and similar terms are used as terms of approximation and not as terms of degree, and are intended to account for the inherent variations in measured or calculated values that would be recognized by those of ordinary skill in the art.
[0071] Moreover, unless the word “or” is expressly limited to mean only a single item exclusive from the other items in reference to a list of two or more items, then the use of “or” in such a list is to be interpreted as including (a) any single item in the list, (b) all of the items in the list, or (c) any combination of the items in the list. Additionally, the term “comprising” is used throughout to mean including at least the recited feature(s) such that any greater number of the same feature and/or additional types of other features are not precluded. It will also be appreciated that specific embodiments have been described herein for purposes of illustration, but that various modifications may be made without deviating from the technology. Further, while advantages associated with certain embodiments of the technology have been described in the context of those embodiments, other embodiments may also exhibit such advantages, and not all embodiments need necessarily exhibit such advantages to fall within the scope of the technology. Accordingly, the disclosure and associated technology can encompass other embodiments not expressly shown or described herein.

CLAIMS

I/We claim:
1. A method, comprising:
receiving a three-dimensional (3D) model of a target anatomy;
determining a region of missing data in the 3D model;
generating a suggested imaging probe pose for imaging a portion of the target anatomy corresponding to the region of missing data;
receiving one or more supplemental 2D images collected based on the suggested imaging probe pose; and
updating the 3D model based on the supplemental 2D images.
2. The method of claim 1, wherein the 3D model is at least partially generated from two-dimensional (2D) images of the target anatomy.
3. The method of claim 1 or 2, wherein the 3D model is at least partially generated from catheter mapping data of the target anatomy.
4. The method of any one of claims 1-3, wherein determining the region of missing data in the 3D model comprises fitting a mesh surface around an external surface of the 3D model and identifying a deficiency of the fitted mesh surface.
5. The method of claim 4, wherein the deficiency of the fitted mesh surface comprises a non-smooth surface or a recess.
6. The method of any one of claims 1-5, wherein determining the region of missing data in the 3D model comprises:
comparing the 3D model to a statistical shape model (SSM) of the target anatomy; and
identifying a portion of the 3D model that is in low agreement with a corresponding portion of the SSM.
7. The method of any one of claims 1-6, wherein the region of missing data is a first region of the 3D model, wherein the method further comprises:
displaying the first region of missing data with a first display scheme; and
displaying a second region of the 3D model different from the first region with a second display scheme, wherein the first and second display schemes are different.
8. A system comprising:
a processor;
a memory operably coupled to the processor and storing instructions that, when executed by the processor, cause the system to perform operations comprising:
receiving a three-dimensional (3D) model of a target anatomy;
determining a region of missing data in the 3D model;
generating a suggested imaging probe pose to collect one or more supplemental 2D images of a portion of the target anatomy corresponding to the region of missing data;
receiving the one or more supplemental 2D images; and
updating the 3D model based on the supplemental 2D images.
9. The system of claim 8, wherein the 3D model is at least partially generated from two-dimensional (2D) images of the target anatomy.
10. The system of claim 9, wherein the 2D images are ultrasound images.
11. The system of any one of claims 8-10, wherein the 3D model is at least partially generated from catheter mapping data of the target anatomy.
12. The system of any one of claims 8-11, wherein the instructions that, when executed by the processor, cause the system to determine the region of missing data in the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising fitting a mesh surface around an external surface of the 3D model and identifying a deficiency of the fitted mesh surface.
13. The system of claim 12, wherein the deficiency of the fitted mesh surface comprises a non-smooth surface or a recess.
14. The system of any one of claims 8-13, wherein the instructions that, when executed by the processor, cause the system to determine the region of missing data in the 3D model comprise instructions that, when executed by the processor, cause the system to perform operations comprising:
comparing the 3D model to a statistical shape model (SSM) of the target anatomy; and
identifying a portion of the 3D model that is in low agreement with a corresponding portion of the SSM.
15. The system of any one of claims 8-14, wherein the region of missing data is a first region of the 3D model, wherein the instructions, when executed by the processor, cause the system to perform operations comprising:
displaying the first region of missing data with a first display scheme; and
displaying a second region of the 3D model different from the first region with a second display scheme, wherein the first and second display schemes are different.