WO2025240661A1 - Ultrasound visualization and control in cardiac procedures - Google Patents
Ultrasound visualization and control in cardiac procedures
Info
- Publication number
- WO2025240661A1 (PCT/US2025/029431)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- hmd
- user
- volumetric image
- ultrasound
- clipping surface
- Prior art date
- 2024-05-15
- Legal status (assumed, not a legal conclusion; Google has not performed a legal analysis)
- Pending
Classifications
- A61B34/00—Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
- A61B34/10—Computer-aided planning, simulation or modelling of surgical operations
- A61B90/00—Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00, e.g. for luxation treatment or for protecting wound edges
- G02B27/01—Head-up displays
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Surgery (AREA)
- Heart & Thoracic Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- General Physics & Mathematics (AREA)
- Robotics (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Pathology (AREA)
- Ultra Sonic Diagnosis Equipment (AREA)
Abstract
In an embodiment, a processing system generates a 3D ultrasound image using 3D ultrasound imaging data, wherein the 3D ultrasound image includes a 3D volumetric image. The processing system determines a clipping surface of the 3D volumetric image using sensor data from an HMD of a user. The processing system transmits the clipping surface to the HMD for display to the user. The processing system determines an updated clipping surface of the 3D volumetric image using updated sensor data from the HMD. The processing system transmits the updated clipping surface to the HMD for display to the user.
Description
ULTRASOUND VISUALIZATION AND CONTROL IN CARDIAC PROCEDURES
CROSS REFERENCE TO RELATED APPLICATION
[01] This application claims the benefit of priority to U.S. Provisional Application No. 63/648,026, filed on May 15, 2024, which is incorporated herein by reference in its entirety for all purposes.
TECHNICAL FIELD
[02] This disclosure generally relates to ultrasound visualization for an augmented reality environment.
BACKGROUND
[03] Conventional systems require a high level of bandwidth to send an entire 3-dimensional (3D) image frame (B-mode) from an ultrasound image processor to a head-mounted display (HMD). The required bandwidth can be hundreds of times more data than the required bandwidth for sending a live (e.g., real time) 2D ultrasound video. Image processing capabilities of HMD hardware are limited and may further constrain the capability to render 3D image frame data.
[04] Portions of an opaque 3D volume image that are displayed closer to the user’s point of view will obstruct the visibility of the farther-away portions of the 3D volume image. A workaround in conventional systems is to have a non-sterile person use a mouse or trackpad to specify one or more orthogonal 2D planes within the 3D volume to view. But this introduces two more problems: (1) the benefits of a 3D image visualization are lost and (2) a sterile physician must verbally communicate to the non-sterile person how to section the 3D volume into 2D images. The verbal communication can be slow, cumbersome, imprecise, and prone to misunderstanding or human error.
BRIEF DESCRIPTION OF THE FIGURES
[05] The disclosed embodiments have advantages and features which will be more readily apparent from the detailed description, the appended claims, and the accompanying figures (or drawings). A brief introduction of the figures is below.
[06] Figure (FIG. 1) is a diagram of a system environment for an ultrasound visualization system according to various embodiments.
[07] FIG. 2 illustrates clipping surfaces according to various embodiments.
[08] FIG. 3 illustrates measurements used for calibration of input data sources according to an embodiment.
[09] FIG. 4 illustrates hands-free or automatic control of surfaces (e.g., 2D image rendering, and clipping) within a 3D ultrasound volume image according to various embodiments.
[010] FIG. 5 illustrates transparency of layers for rendering and compositing multiple 2D ultrasound images of multiple depths according to an embodiment.
[011] FIG. 6 illustrates coordinate systems defined for an ultrasound visualization system according to various embodiments.
SUMMARY
[012] In an embodiment, a method includes generating a 3-dimensional (3D) ultrasound image using 3D ultrasound imaging data, wherein the 3D ultrasound image includes a 3D volumetric image; determining a clipping surface of the 3D volumetric image using sensor data from a head-mounted display (HMD) of a user; displaying, by the HMD to the user, the clipping surface; determining an updated clipping surface of the 3D volumetric image using updated sensor data from the HMD; and displaying, by the HMD to the user, the updated clipping surface.
[013] In an embodiment, the method further includes determining a head pose of the user using the sensor data; determining the clipping surface of the 3D volumetric image by determining a 2D plane of the 3D volumetric image at a predetermined distance from the head pose; determining an updated head pose of the user using the updated sensor data; and determining the updated clipping surface of the 3D volumetric image by determining another 2D plane of the 3D volumetric image at the predetermined distance from the updated head pose.
[014] In an embodiment, the method further includes determining a gaze direction of the user using the sensor data, wherein the 2D plane is perpendicular to the gaze direction.
[015] In an embodiment, the method further includes modifying the predetermined distance responsive to an input from the user via the HMD.
[016] In an embodiment, the clipping surface of the 3D volumetric image is determined using a distance function.
[017] In an embodiment, the method further includes determining a distance from a portion of the 3D volumetric image to the clipping surface; and displaying, by the HMD to the user, the portion of the 3D volumetric image at a level of transparency based on the distance. In an embodiment, the portion of the 3D volumetric image is displayed using voxel data and wherein the clipping surface is a 3D surface.
[018] In an embodiment, the method further includes receiving the 3D ultrasound imaging data from an ultrasound source; and receiving catheter position data from a catheter localization source. In an embodiment, the clipping surface of the 3D volumetric image is determined further based on the catheter position data. In an embodiment, the method further includes calibrating the ultrasound source and the catheter localization source.
[019] In another embodiment, a system includes a head-mounted display (HMD) of a user and a non-transitory computer-readable storage medium storing instructions, the instructions when executed by one or more processors cause the one or more processors to: generate a 3-dimensional (3D) ultrasound image using 3D ultrasound imaging data, wherein the 3D ultrasound image includes a 3D volumetric image; determine a clipping surface of the 3D volumetric image using sensor data from the HMD; transmit the clipping surface to the HMD for display to the user; determine an updated clipping surface of the 3D volumetric image using updated sensor data from the HMD; and transmit the updated clipping surface to the HMD for display to the user.
[020] In various embodiments, a non-transitory computer-readable storage medium stores instructions that when executed by one or more processors cause the one or more processors to perform steps of any of the methods described herein.
DETAILED DESCRIPTION
I. SYSTEM OVERVIEW
[021] FIG. 1 is a diagram of a system environment for an ultrasound visualization system according to various embodiments. The ultrasound visualization system includes a processing system 100, communicatively connected with any number of head-mounted displays (HMDs) 110 and any number of ultrasound sources 120 (e.g., providing ultrasound imaging data). The HMDs 110 communicate with the ultrasound source 120 over a network connection 140 (e.g., serial, wireless access point, or internet cloud). In some embodiments, the HMDs 110 communicate with the ultrasound source 120 directly (e.g., over a wired or wireless connection). In other embodiments, the HMDs 110 communicate with the ultrasound source 120 through a processing system 100 (e.g., a single central server or located on an HMD 110 or another type of computing device) to reduce the number of connections to ultrasound source 120. In various embodiments, the ultrasound visualization system also includes a connection to a catheter localization source 130 (e.g., electro-anatomic mapping system).
[022] An HMD 110 may include an electronic display to render visualizations, one or more sensors (e.g., for determining position or orientation of the HMD 110 in physical space), and any number of additional hands-free input modalities, among other components known to one skilled in the art (e.g., processors, memory, non-transitory computer-readable storage medium, etc.). An electronic display consists of one or more displays to provide 2D, stereoscopic 3D, or 3D rendering of images to the user, responsive to ultrasound or HMD sensor data. Example sensors may include accelerometers, gyroscopes, inertial measurement units, and depth-sensing cameras, among others, and may be components of the HMD 110 or may externally sense HMD 110 movement or orientation. In an embodiment, the processing system 100 uses changes in orientation of the HMD to control a cursor in the display on the HMD 110 to interact with user interface elements. In other embodiments, the processing system 100 uses changes in hand position or gesture, eye position, or other user input mechanisms to facilitate interaction with user interface elements.
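To make the orientation-to-cursor control concrete, here is a minimal Python sketch (not part of the disclosure; the function name, sensitivity constant, and screen conventions are illustrative assumptions):

```python
import numpy as np

def cursor_from_orientation(yaw_rad: float, pitch_rad: float,
                            screen_w: int, screen_h: int,
                            sensitivity: float = 800.0) -> tuple[int, int]:
    """Map HMD yaw/pitch deltas (radians, relative to a neutral head pose)
    to a 2D cursor position on the HMD display. Hypothetical helper."""
    x = screen_w / 2 + sensitivity * np.tan(yaw_rad)
    y = screen_h / 2 - sensitivity * np.tan(pitch_rad)  # look up -> cursor up
    # Clamp so the cursor never leaves the display.
    return (int(np.clip(x, 0, screen_w - 1)), int(np.clip(y, 0, screen_h - 1)))
```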
[023] In various embodiments, the processing system 100 operates with ultrasound source 120 inputs to produce 3D image sequences suitable for transmission, interaction, processing, and display on an HMD 110. The ultrasound datasets may include unprocessed or partially processed ultrasound radiofrequency (RF) data, image data, or volumetric or polygonal datasets derived using 3D image data (e.g., polygonal-mesh cardiac maps) from a catheter-mounted ultrasound array, e.g., from the catheter localization source 130. The data sequences may include time as a fourth dimension. In some embodiments, the processing system 100 uses time data to render a single data source in a loop for review. In other embodiments, the processing system 100 uses time data to correlate ultrasound and catheter datasets.
[024] In some embodiments, the processing system 100 receives polygonal or surface data from the ultrasound source 120. In some embodiments, the polygonal or surface data is constructed by the processing system 100 from volumetric B-mode or raw ultrasound RF data using reconstruction and segmentation algorithms (e.g., pixel-, voxel-, or function-based; or deep learning).
[025] In some embodiments, the processing system 100 uses volumetric B-mode imaging data including an intensity or mapped color value per voxel, with a mapping function between intensity, RGB color, and transparency. In some embodiments, this mapping is specific to an HMD 110 configuration to optimally map data to the capabilities of the HMD (e.g., additive light display, partial occlusion, light field display), HMD intrinsic image quality parameters (e.g., contrast, dynamic range), environment (e.g., lighting conditions), or the human visual system. In some embodiments, the processing system 100 renders imaging data as Gaussian splats or by ray-tracing to display a volumetric representation of the data. In other embodiments, polygonal or surface data is displayed using texture-mapping in graphics hardware or CPU algorithms. In various embodiments, the processing system 100 renders surface and volumetric imaging data independently or in combination with each other. The embodiments disclosed herein are also applicable to transesophageal echocardiography (TEE) and use cases involving external (non-invasive) 3D ultrasound transducers.
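As an illustration of such an intensity-to-color-and-transparency mapping, the following Python sketch applies a simple gamma ramp and alpha floor; the parameter values are assumptions, not values from the disclosure, and a real system would tune them per HMD model and lighting environment:

```python
import numpy as np

def bmode_to_rgba(intensity: np.ndarray, gamma: float = 0.7,
                  alpha_floor: float = 0.02) -> np.ndarray:
    """Map per-voxel B-mode intensities in [0, 1] to RGBA values.

    gamma compresses dynamic range (useful on additive-light displays);
    voxels below alpha_floor become fully transparent so dark speckle
    does not wash out the real-world scene behind the hologram.
    """
    i = np.clip(intensity, 0.0, 1.0) ** gamma
    rgb = np.stack([i, i, i], axis=-1)           # grayscale color ramp
    alpha = np.where(i < alpha_floor, 0.0, i)    # cull near-black voxels
    return np.concatenate([rgb, alpha[..., None]], axis=-1)
```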
[026] FIG. 6 illustrates coordinate systems defined for an ultrasound visualization system according to various embodiments. As shown in diagram 600, the processing system 100 receives position or orientation data corresponding to an ultrasound catheter 605 and one or more other catheters 610 from an ultrasound image processor in a coordinate system. In other embodiments, the processing system 100 receives position and orientation data from an additional catheter tracking system (e.g., electroanatomic mapping system) in a coordinate system. As shown in diagram 620, the processing system 100 uses a priori knowledge of catheter construction (e.g., catheter size, electrode size, electrode spacing) to detect and estimate the position of another catheter 630 within the ultrasound imaging volume and define a coordinate system relative to an ultrasound catheter 625. In embodiments where position and orientation information for an instrument is unavailable, the processing system 100 uses the coordinate system of the ultrasound catheter for the instrument. In various embodiments, the processing system 100 uses the coordinate system for the HMD 110 in combination with one or more coordinate systems of the ultrasound catheter or one or more other instruments. In various embodiments, interface elements, annotations, or visualization adjustments are spatially located in the coordinate system of the ultrasound catheter.
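The coordinate-system bookkeeping above amounts to composing rigid transforms. A minimal sketch, assuming both the ultrasound-catheter frame and the HMD frame are known in a shared world frame (frame names and helpers are hypothetical):

```python
import numpy as np

def rigid_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def us_point_in_hmd_frame(p_us: np.ndarray, T_world_us: np.ndarray,
                          T_world_hmd: np.ndarray) -> np.ndarray:
    """Express a point measured in the ultrasound-catheter frame in HMD
    coordinates by going through the shared world frame."""
    p = np.append(p_us, 1.0)  # homogeneous coordinates
    return (np.linalg.inv(T_world_hmd) @ T_world_us @ p)[:3]
```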
II. CLIPPING SURFACE
[027] FIG. 2 illustrates clipping surfaces according to various embodiments. A clipping surface is a surface within a point of view of imaging data displayed to a user of an HMD 110. As shown in diagram 200, the point of view includes a 3D volume of space, e.g., including volumetric imaging data. In some embodiments, the clipping surface is a 2D plane such as a rectangular cross section as shown in diagram 210. In other embodiments, the clipping surface is a 3D surface defined by a collection of polygons as shown in diagram 220. In other embodiments, the clipping surface is defined by a distance function f (or other parametric function) as shown in diagram 230. In various embodiments, the processing system 100 adjusts the clipping surface in three dimensions responsive to hands-free input from a user of the HMD 110. In some embodiments, the processing system 100 receives a manually (e.g., user) specified clipping surface as a correction input to the clipping surface calculated from the image or the tracked catheter. In some embodiments, the visualization may contain any number of clipping surface definitions.
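A distance-function clipping surface of the kind shown in diagram 230 can be sketched as a signed distance test per voxel. This is an illustrative reading of the f in FIG. 2, not the patented implementation; a plane is used here, but any signed distance function could be substituted:

```python
import numpy as np

def plane_sdf(points: np.ndarray, origin: np.ndarray,
              normal: np.ndarray) -> np.ndarray:
    """Signed distance from each point to a plane; positive on the normal side."""
    n = normal / np.linalg.norm(normal)
    return (points - origin) @ n

def clip_voxels(voxel_centers: np.ndarray, values: np.ndarray,
                origin: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Zero out voxel values on the near (viewer) side of the clipping surface."""
    keep = plane_sdf(voxel_centers, origin, normal) >= 0.0
    return np.where(keep, values, 0.0)
```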
[028] In various embodiments, the processing system 100 automatically adjusts the view and clipping surface relative to one or more of (1) the position of the HMD 110; (2) the position of the distal tip of a tracked catheter; (3) the 3D line segment from the distal tip of the tracked catheter along the “forward direction” of the catheter tip; or (4) the 3D line segment from the distal tip of the tracked catheter along the direction of force measured by the catheter tip. In one embodiment, the processing system 100 adjusts the clipping surface responsive to the change in orientation of the HMD 110 relative to a coordinate system of the visualization.
[029] Referring back to FIG. 6, in one embodiment, the processing system 100 determines the position and orientation of the clipping surface 650 based on the HMD 655 orientation relative to ultrasound visualization 645 as shown in diagram 640. When the HMD 675 orientation is modified relative to the ultrasound visualization 665, the processing system 100 updates the clipping surface 670 as shown in diagram 660. In various embodiments, the view and clipping surface are updated responsive to changes in position and orientation of the elements. Examples of the tracked catheter include an ultrasound transducer catheter, cardiac ablation catheter, cardiac mapping catheter, esophageal catheter, and septal puncture catheter, among others. Tracking of the catheter may be performed using impedance, current, deflection, and/or electromagnetic catheter tracking systems.
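Relating this to claims 2 and 3, a clipping plane derived from the head pose can be sketched as follows (a minimal geometric reading; variable names and the sign convention are assumptions):

```python
import numpy as np

def clipping_plane_from_pose(head_position: np.ndarray,
                             gaze_direction: np.ndarray,
                             distance_m: float) -> tuple[np.ndarray, np.ndarray]:
    """Place a clipping plane perpendicular to the gaze direction at a
    predetermined distance in front of the head pose."""
    g = gaze_direction / np.linalg.norm(gaze_direction)
    origin = head_position + distance_m * g
    normal = -g  # plane faces the viewer; volume behind it is retained
    return origin, normal
```

Re-running this function whenever updated sensor data arrives from the HMD yields the updated clipping surface described above.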
[030] FIG. 4 illustrates hands-free or automatic control of surfaces (e.g., 2D image rendering and clipping) within a 3D ultrasound volume image according to various embodiments. In some embodiments, the clipping surface is defined by translations along the axes of the coordinate system or rotations about an arbitrary axis in the coordinate system. As shown in diagram 400, the clipping surface may have a neutral angle intersecting the proximal-distal axis of the ultrasound catheter. In some embodiments, the clipping surface may intersect the image volume at an oblique angle that does not intersect the proximal-distal axis of the ultrasound catheter. In various embodiments, the processing system 100 adjusts a clipping surface responsive to hands-free interaction input by a user such as gaze direction. In the example shown in diagram 405, the processing system 100 increases the angle of rotation of the plane of the clipping surface about the proximal-distal axis of the ultrasound catheter responsive to determining that the user’s gaze direction intersects with a first hands-free user interface element — a button with a “+” symbol. In the example shown in diagram 410, the processing system 100 decreases the angle of rotation of the clipping surface responsive to determining that the user’s gaze direction intersects with a second hands-free user interface element — a button with a “−” symbol.
[031] As shown in diagram 415, the clipping surface is located at a distance (e.g., depth in the field of view) from the user. In the example shown in diagram 420, the processing system 100 decreases the distance of the clipping surface from the user responsive to determining that the user’s gaze direction intersects with a first hands-free user interface element — a button with a “−” symbol. In the example shown in diagram 425, the processing system 100 increases the distance of the clipping surface from the user responsive to determining that the user’s gaze direction intersects with a second hands-free user interface element — a button with a “+” symbol. In another example shown in diagrams 430-435, the processing system 100 updates a depth or angle of a clipping surface responsive to a change in position or orientation of an object (e.g., catheter) within the imaged volume.
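A dwell-style gaze interaction with the “+” and “−” buttons can be sketched as a ray test plus a clamped increment. Everything here (button geometry, step size, limits) is a hypothetical illustration:

```python
import numpy as np

def gaze_hits_button(gaze_origin, gaze_dir, button_center,
                     button_radius) -> bool:
    """Ray-sphere test: does the gaze ray pass within the button's radius?"""
    d = np.asarray(gaze_dir, dtype=float)
    d /= np.linalg.norm(d)
    to_button = np.asarray(button_center, dtype=float) - np.asarray(gaze_origin, dtype=float)
    closest = to_button - np.dot(to_button, d) * d  # rejection of the ray
    return float(np.linalg.norm(closest)) <= button_radius

def update_clip_distance(distance_m, gaze_origin, gaze_dir,
                         minus_btn, plus_btn, step_m=0.005,
                         lo=0.05, hi=2.0) -> float:
    """Nudge the clipping-surface depth while the gaze dwells on a +/- button.
    minus_btn and plus_btn are (center, radius) pairs."""
    if gaze_hits_button(gaze_origin, gaze_dir, *minus_btn):
        distance_m -= step_m
    elif gaze_hits_button(gaze_origin, gaze_dir, *plus_btn):
        distance_m += step_m
    return float(np.clip(distance_m, lo, hi))
```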
[032] In various embodiments, the processing system 100 generates a visualization of the ultrasound volume data such that one or more portions of the ultrasound volume farther from a specified image clipping surface are displayed in combination with a specified ultrasound clipping surface with multiple values of transparency. This allows the user to see contents of the specified image clipping surface, while also gaining spatial context from the content in the more distant portions of the ultrasound volume, without that distant content obscuring the content near the specified image surface. In some embodiments, the volume contains processed 3D surface data in addition to voxel data. In some embodiments, the processing system 100 adjusts the proportion of transparency of a primary image surface and distant volumes in response to user input. The processing system 100 can also adjust the proportion of transparency of the primary image surface and distant volumes responsive to the statistical distribution of values in the volume.
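One plausible realization of distance-dependent transparency (cf. claim 6) is an exponential opacity falloff behind the clipping surface; the falloff constant is an assumed tuning parameter:

```python
import numpy as np

def depth_faded_alpha(base_alpha: np.ndarray, sdf_mm: np.ndarray,
                      falloff_mm: float = 20.0) -> np.ndarray:
    """Attenuate voxel opacity with distance behind the clipping surface.

    sdf_mm: signed distance of each voxel to the clipping surface
    (positive = behind the surface, i.e. the retained volume).
    Voxels at the surface keep full opacity; deeper voxels stay
    visible but faint, providing spatial context without occlusion.
    """
    fade = np.exp(-np.maximum(sdf_mm, 0.0) / falloff_mm)
    return base_alpha * fade
```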
[033] FIG. 5 illustrates transparency of layers for rendering and compositing multiple 2D ultrasound images at multiple depths according to an embodiment. As shown in illustration 500, the primary image surface 500 can be assigned full opacity, with no opacity allocated to further layers. The processing system 100 can adjust the transparency allocation to render further layers with transparency at the correct depth by the HMD 110, as shown in illustrations 510 and 520. The processing system 100 can assign each layer equal transparency at the correct depth to provide additional context to the user.
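Compositing such layers is standard front-to-back alpha blending. A sketch (layer ordering and conventions are assumptions of this illustration):

```python
import numpy as np

def composite_layers(layers: list[np.ndarray]) -> np.ndarray:
    """Front-to-back 'over' compositing of RGBA layers, each (H, W, 4) in [0, 1].

    layers[0] is the primary image surface; later entries are deeper
    slices whose alpha has already been attenuated for context.
    """
    h, w = layers[0].shape[:2]
    out_rgb = np.zeros((h, w, 3))
    out_a = np.zeros((h, w))
    for layer in layers:
        rgb, a = layer[..., :3], layer[..., 3]
        weight = ((1.0 - out_a) * a)[..., None]  # remaining visibility budget
        out_rgb += weight * rgb
        out_a += (1.0 - out_a) * a
    return np.concatenate([out_rgb, out_a[..., None]], axis=-1)
```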
[034] In some embodiments, the processing system 100 uses a specified clipping surface to reduce network bandwidth or latency requirements by determining not to transmit an entire set of 3D ultrasound volume image data to the HMD 110. The specified clipping surface may be predetermined by a user or determined by the processing system 100 from preset options such as a predetermined distance (e.g., one meter) from a head pose of a user of the HMD 110. In some embodiments, the processing system 100 excludes portions of the volume image data from the display ultimately rendered to a user of the HMD 110. In these embodiments, the processing system 100 can limit transmission of ultrasound data to the portions and levels of detail that are visible in the user’s point of view at a current moment in time while wearing the HMD 110. This helps improve transmission efficiency by conserving network resources. This is also advantageous because the curated display helps the user focus on the areas of interest.
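The bandwidth-saving idea can be sketched as culling volume bricks against the clipping surface before transmission; brick size, the context band, and the selection rule are all illustrative assumptions:

```python
import numpy as np

def bricks_to_transmit(brick_centers: np.ndarray, clip_origin: np.ndarray,
                       clip_normal: np.ndarray,
                       context_band_mm: float = 40.0) -> np.ndarray:
    """Boolean mask of volume bricks worth sending to the HMD: those at or
    behind the clipping surface and within a limited context band."""
    n = clip_normal / np.linalg.norm(clip_normal)
    depth = (brick_centers - clip_origin) @ n
    return (depth >= 0.0) & (depth <= context_band_mm)
```

Only bricks where the mask is True would be streamed; the rest of the 3D volume is never sent, cutting bandwidth roughly in proportion to the culled fraction.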
III. ULTRASOUND IMAGING AND CONTROL
[035] In various embodiments, responsive to hands-free user input in the HMD 110, the processing system 100 adjusts spatial parameters of image acquisition, including focal depth; robotic steering of an ultrasound catheter; manipulation of the clip surface; ultrasound data position, scale, or orientation; or the beam steering direction of the ultrasound energy emitted by the catheter-mounted transducer for imaging or high-intensity focused ultrasound (HIFU). In some embodiments, the processing system 100 adjusts certain parameters responsive to changes in other parameters. For example, the processing system 100 measures and monitors aspects of ablation energy and adjusts the parameters of the ultrasound volume to couple the ablation and imaging volumes. In another embodiment, the ultrasound volume is adjusted to stabilize the image of an identified object (e.g., catheter) within the volume. The processing system 100 processes the inputs to determine a region of interest within the ultrasound imaging volume. Responsive to this determination, the processing system 100 can improve ultrasound image quality at the region of interest. In some embodiments, the processing system 100 adjusts image formation or acquisition parameters including: (1) specifying the gain, filtering, and transparency mapping of the display of the 3D ultrasound volume; or (2) triggering the ultrasound scanner to change a transmit waveform such that ultrasound contrast agents are activated, responsive to hands-free user input in the HMD 110. In various embodiments, the processing system 100 receives one or more hands-free inputs from the user such as HMD 110 pose, eye position, eye orientation, and hand or finger gestures, among other types of hands-free inputs in a sterile environment.
[036] In various embodiments, the processing system 100 automatically registers a previously captured ultrasound volume with a later captured ultrasound volume and determines changes between the two ultrasound volumes. The processing system 100 displays the changes via the HMD 110, for example, by highlighting the changes in the user interface (or using other visual indicators) to bring them to the user’s attention. The processing system 100 can highlight the changes by modifying the level of transparency to make certain portions of the ultrasound volume more visible or prominent to the user. The hands-free HMD 110 user interface allows the user to more easily specify or adjust the six-degree-of-freedom relationship between the two ultrasound volumes.
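A coarse sketch of such an automatic 6-DOF rigid registration, using normalized cross-correlation and a generic optimizer (this is one common approach, not necessarily the method of the disclosure; a production system would add multi-resolution search and a robust metric):

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def register_volumes(fixed: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Estimate a 6-DOF rigid transform (rotation vector in radians plus
    translation in voxels) aligning `moving` onto `fixed`."""
    def cost(params: np.ndarray) -> float:
        rot = Rotation.from_rotvec(params[:3]).as_matrix()
        # affine_transform uses the output->input convention; for this
        # sketch it is simply treated as a parameterized warp.
        warped = affine_transform(moving, rot, offset=params[3:], order=1)
        f = fixed.ravel() - fixed.mean()
        w = warped.ravel() - warped.mean()
        return -float(np.dot(f, w) / (np.linalg.norm(f) * np.linalg.norm(w) + 1e-9))
    return minimize(cost, np.zeros(6), method="Powell").x
```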
[037] FIG. 3 illustrates measurements used for calibration of input data sources according to an embodiment. In various embodiments, the processing system 100 uses catheter tracking and ultrasound imaging to correct the calibration agreement between the two measurement systems. In the embodiment illustrated with diagram 310, the processing system 100 uses time intervals (e.g., t₀ and t₁) to measure the distance between two or more points within the ultrasound volume to calculate a correction function between the catheter tracking system and the ultrasound imaging system. In the embodiment illustrated with diagram 300, the processing system 100 uses intrinsic catheter parameters (e.g., catheter size, electrode spacing) as inputs to the correction function or algorithms. In various embodiments, the processing system 100 weights one or more correction functions to create a coherent corrected coordinate system between the ultrasound image and the catheter tracking. In some embodiments, a correction factor of a correction function may be a parameter in image formation (e.g., speed of sound in material) or a parametric scaling function of the output coordinates. In various embodiments, the processing system 100 uses the correction factor to adjust measurements from one or more input data sources (e.g., ultrasound source 120 or catheter localization source 130), or to translate measurements (e.g., distances, areas, volumes) from one input source to any number of other input sources.
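The electrode-spacing correction of diagram 300 reduces to a scale factor between known and measured geometry. A minimal sketch, assuming electrode positions have been localized in the ultrasound image (function and variable names are hypothetical):

```python
import numpy as np

def scale_correction(known_spacing_mm: float,
                     electrode_pts_us: np.ndarray) -> float:
    """Scale-correction factor from intrinsic catheter geometry.

    electrode_pts_us: (N, 3) positions of consecutive catheter electrodes
    as localized in the ultrasound volume. The true inter-electrode
    spacing is known a priori, so the ratio corrects, e.g., an incorrect
    assumed speed of sound in the image-formation chain.
    """
    measured = np.linalg.norm(np.diff(electrode_pts_us, axis=0), axis=1)
    return known_spacing_mm / float(measured.mean())

# Apply as: corrected = scale_correction(k, pts) * measured_distance
```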
IV. ALTERNATIVE CONSIDERATIONS
[038] The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
[039] Some portions of this description describe the embodiments of the invention in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.
[040] Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product including a computer-readable non-transitory medium containing computer program code, which can be executed by a computer processor for performing any or all of the steps, operations, or processes described.
[041] Embodiments of the invention may also relate to a product that is produced by a computing process described herein. Such a product may include information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
[042] Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Claims
1. A method comprising: generating a 3-dimensional (3D) ultrasound image using 3D ultrasound imaging data, wherein the 3D ultrasound image includes a 3D volumetric image; determining a clipping surface of the 3D volumetric image using sensor data from a head-mounted display (HMD) of a user; displaying, by the HMD to the user, the clipping surface; determining an updated clipping surface of the 3D volumetric image using updated sensor data from the HMD; and displaying, by the HMD to the user, the updated clipping surface.
2. The method of claim 1, further comprising: determining a head pose of the user using the sensor data; determining the clipping surface of the 3D volumetric image by determining a 2D plane of the 3D volumetric image at a predetermined distance from the head pose; determining an updated head pose of the user using the updated sensor data; and determining the updated clipping surface of the 3D volumetric image by determining another 2D plane of the 3D volumetric image at the predetermined distance from the updated head pose.
3. The method of claim 2, further comprising: determining a gaze direction of the user using the sensor data, wherein the 2D plane is perpendicular to the gaze direction.
4. The method of claim 2, further comprising: modifying the predetermined distance responsive to an input from the user via the HMD.
5. The method of claim 1, wherein the clipping surface of the 3D volumetric image is determined using a distance function.
6. The method of claim 1, further comprising:
determining a distance from a portion of the 3D volumetric image to the clipping surface; and displaying, by the HMD to the user, the portion of the 3D volumetric image at a level of transparency based on the distance.
7. The method of claim 6, wherein the portion of the 3D volumetric image is displayed using voxel data and wherein the clipping surface is a 3D surface.
8. The method of claim 1, further comprising: receiving the 3D ultrasound imaging data from an ultrasound source; and receiving catheter position data from a catheter localization source.
9. The method of claim 8, wherein the clipping surface of the 3D volumetric image is determined further based on the catheter position data.
10. The method of claim 8, further comprising: calibrating the ultrasound source and the catheter localization source.
11. A non-transitory computer-readable storage medium storing instructions, the instructions when executed by one or more processors cause the one or more processors to: generate a 3-dimensional (3D) ultrasound image using 3D ultrasound imaging data, wherein the 3D ultrasound image includes a 3D volumetric image; determine a clipping surface of the 3D volumetric image using sensor data from a head-mounted display (HMD) of a user; transmit the clipping surface to the HMD for display to the user; determine an updated clipping surface of the 3D volumetric image using updated sensor data from the HMD; and transmit the updated clipping surface to the HMD for display to the user.
12. The non-transitory computer-readable storage medium of claim 11, storing further instructions that when executed by the one or more processors cause the one or more processors to: determine a head pose of the user using the sensor data;
determine the clipping surface of the 3D volumetric image by determining a 2D plane of the 3D volumetric image at a predetermined distance from the head pose; determine an updated head pose of the user using the updated sensor data; and determine the updated clipping surface of the 3D volumetric image by determining another 2D plane of the 3D volumetric image at the predetermined distance from the updated head pose.
13. The non-transitory computer-readable storage medium of claim 12, storing further instructions that when executed by the one or more processors cause the one or more processors to: determine a gaze direction of the user using the sensor data, wherein the 2D plane is perpendicular to the gaze direction.
14. The non-transitory computer-readable storage medium of claim 12, storing further instructions that when executed by the one or more processors cause the one or more processors to: modify the predetermined distance responsive to an input from the user via the HMD.
15. The non-transitory computer-readable storage medium of claim 11, wherein the clipping surface of the 3D volumetric image is determined using a distance function.
16. The non-transitory computer-readable storage medium of claim 11, storing further instructions that when executed by the one or more processors cause the one or more processors to: determine a distance from a portion of the 3D volumetric image to the clipping surface; and transmit to the HMD for display to the user, the portion of the 3D volumetric image at a level of transparency based on the distance.
17. The non-transitory computer-readable storage medium of claim 16, wherein the portion of the 3D volumetric image is displayed using voxel data and wherein the clipping surface is a 3D surface.
18. The non-transitory computer-readable storage medium of claim 11, storing further instructions that when executed by the one or more processors cause the one or more processors to: receive the 3D ultrasound imaging data from an ultrasound source; and receive catheter position data from a catheter localization source.
19. The non-transitory computer-readable storage medium of claim 18, wherein the clipping surface of the 3D volumetric image is determined further based on the catheter position data.
20. A system comprising: a head-mounted display (HMD) of a user; and a non-transitory computer-readable storage medium storing instructions, the instructions when executed by one or more processors cause the one or more processors to: generate a 3-dimensional (3D) ultrasound image using 3D ultrasound imaging data, wherein the 3D ultrasound image includes a 3D volumetric image; determine a clipping surface of the 3D volumetric image using sensor data from the HMD; transmit the clipping surface to the HMD for display to the user; determine an updated clipping surface of the 3D volumetric image using updated sensor data from the HMD; and transmit the updated clipping surface to the HMD for display to the user.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463648026P | 2024-05-15 | 2024-05-15 | |
| US63/648,026 | 2024-05-15 | 2024-05-15 | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025240661A1 (en) | 2025-11-20 |
Family ID: 97720829
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/029431 (WO2025240661A1, pending) | Ultrasound visualization and control in cardiac procedures | 2024-05-15 | 2025-05-14 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025240661A1 (en) |