
US11995773B2 - Computer implemented method and system for navigation and display of 3D image data - Google Patents

Computer implemented method and system for navigation and display of 3D image data

Info

Publication number
US11995773B2
Authority
US
United States
Prior art keywords
image
image dataset
kernel
value
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US17/763,728
Other versions
US20220343605A1 (en)
Inventor
John MUNRO SIMPSON
Kuberan PUSHPARAJAH
Alberto GÓMEZ HERRERO
Julia Anne SCHNABEL
Gavin WHEELER
Shujie DENG
Nicolas TOUSSAINT
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guy's And St Thomas' Nhs Foundation Trust
Guys and St Thomas NHS Foundation Trust
Kings College London
Original Assignee
Guys and St Thomas NHS Foundation Trust
Kings College London
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guys and St Thomas NHS Foundation Trust, Kings College London filed Critical Guys and St Thomas NHS Foundation Trust
Publication of US20220343605A1 publication Critical patent/US20220343605A1/en
Assigned to KING’S COLLEGE LONDON reassignment KING’S COLLEGE LONDON ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PUSHPARAJAH, Kuberan, TOUSSAINT, NICOLAS, DENG, Shujie, SCHNABEL, Julia, WHEELER, Gavin, HERRERO, ALBERTO GOMEZ
Assigned to GUY’S AND ST THOMAS’ NHS FOUNDATION TRUST reassignment GUY’S AND ST THOMAS’ NHS FOUNDATION TRUST ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SIMPSON, JOHN
Application granted granted Critical
Publication of US11995773B2 publication Critical patent/US11995773B2/en
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/41 Medical
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/62 Semi-transparency
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2012 Colour editing, changing, or manipulating; Use of colour codes
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2024 Style variation


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Human Computer Interaction (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)
  • Image Generation (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A computer-implemented method and system for navigation and display of 3D image data is described. In the method, a 3D image dataset to be displayed is retrieved and a highlight position is identified within the 3D image dataset. A scalar opacity map is calculated for the 3D image dataset, the opacity map having a value for each of a plurality of positions in the 3D image dataset, the respective value being dependent on the respective position relative to the highlight position, and on the value of the 3D image at the respective position relative to the value of the 3D image at the highlight position. The opacity map is applied to the 3D image dataset to generate a modified 3D image view.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a U.S. National Stage Application under 35 U.S.C. § 371 of International Patent Application No. PCT/GB2020/052337, filed Sep. 25, 2020, which claims priority to Great Britain Patent Application No. 1913832.0, filed Sep. 25, 2019, the contents of which are each hereby incorporated by reference in their respective entireties.
FIELD OF THE INVENTION
The present invention relates to a system and computer implemented method for navigation and display of three-dimensional imaging and is particularly applicable to three-dimensional imaging of the human anatomy for the purpose of medical diagnosis and treatment planning.
BACKGROUND TO THE INVENTION
Conventional imaging scanners are used for various purposes including imaging human and animal bodies for diagnosis and guidance during medical intervention such as surgery. Other uses for imaging scanners include structural analysis of buildings, pipes and the like.
A conventional medical ultrasound scanner creates two-dimensional B-mode images of tissue in which the brightness of a pixel is based on the intensity of the echo return. Other types of imaging scanners can capture blood flow, motion of tissue over time, the location of blood, the presence of specific molecules, the stiffness of tissue, or the anatomy of a three-dimensional (3D) region.
Traditionally, imaging scanners produce 2D images. 2D images such as 2D ultrasound images cannot represent three-dimensional structures typical of human or animal body organs because they can only capture one 2D slice of a cross-section. However, if a probe such as an ultrasound probe is mechanically or electronically swept over an area of interest, a three-dimensional image volume is generated. Alternatively, some ultrasound probes, for example “matrix” probes, have multiple piezo-electric crystals and can construct “real-time” 3D ultrasound images. This can then be displayed by, for example, 3D holographic technologies, and the anatomy becomes much easier to visualize for both the trained and untrained observer as it is more representative of the true underlying structure/anatomy. Other technologies also allow the capture or building of 3D imagery.
3D imaging (in the form of ultrasound, CT, MR) has become available to clinicians in recent years and has proved extremely valuable due to the ability to convey imaging information in an intuitive format. In the field of cardiology, for example, such data is being used to plan and guide surgical and catheter interventions.
A current limitation of 3D imaging is that although the data is 3D in its nature, conventional 2D displays can only render a flat representation (projection, slice, casting, etc) of the image on a screen.
3D imaging devices are available as indicated above. However, most computing systems (including those in imaging systems) have two dimensional displays and also user interface designs for two dimensional navigation. It is only recently that technology has been made available to display 3D images using computed reality technology such as holograms, virtual reality, mixed reality or augmented reality technology. Such technologies have, however, not been primarily developed for the specific requirements of a clinical setting. 3D systems tend to be expensive and their interfaces alien to users accustomed to working in 2D.
A further issue with three dimensional rendering of data is that the volume of information portrayed to the user increases substantially.
While this can be argued to be positive, it also makes navigation and changing of views in the three dimensional space and assimilating information on a feature of interest more difficult.
Often these issues mean that the user drops back to working on two-dimensional slices of the 3D image data using a 2D display and user interface. While this may be preferred by users, it loses information from the three-dimensional view that may be of relevance or interest to the user (for example from a different orientation). As a result, many of the advantages of the 3D system are lost and the 3D system ends up becoming an expensive 2D system.
STATEMENT OF INVENTION
According to an aspect of the present invention, there is provided a method and apparatus for navigation and display of 3D image data. The method comprises:
    • retrieving a 3D image dataset to be displayed;
    • receiving identification of a highlight position within the 3D image dataset;
    • calculating a scalar opacity map for the 3D image dataset, the opacity map having a value for each of a plurality of positions in the 3D image dataset, the respective value being dependent on the respective position relative to the highlight position, and on the value of the 3D image at the respective position relative to the value of the 3D image at the highlight position; and,
    • applying the opacity to the 3D image dataset to generate a modified 3D image view.
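By way of illustration only, the following minimal sketch (Python with NumPy, using a synthetic volume and a simple Gaussian distance fall-off; the helper name opacity_map and all parameter values are assumptions, not taken from the claims) shows how these steps could fit together. The full kernel, including the intensity term, is set out later in the description.
```python
# Minimal, self-contained sketch (not the claimed implementation): a synthetic
# volume, a user-chosen highlight position, and a simple Gaussian distance
# fall-off standing in for the full masking kernel described later.
import numpy as np

def opacity_map(volume, highlight, theta=10.0):
    """Scalar opacity in [0, 1] per voxel, decreasing with distance from `highlight`."""
    zz, yy, xx = np.indices(volume.shape, dtype=float)
    dist2 = (zz - highlight[0]) ** 2 + (yy - highlight[1]) ** 2 + (xx - highlight[2]) ** 2
    return np.exp(-dist2 / theta ** 2)

volume = np.random.rand(64, 64, 64)            # stand-in for a retrieved 3D image dataset
highlight = (32, 32, 32)                       # identified highlight position (voxel indices)
alpha = opacity_map(volume, highlight)         # scalar opacity map, one value per voxel
modified = np.stack([volume, alpha], axis=-1)  # opacity applied: a 2-channel modified view
```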
In embodiments of the present invention, a 3D image dataset refers to a 3D array of scalar or vector values, and possibly an associated extent, orientation and resolution that allows a correspondence to be established between the 3D image and real or imaginary world objects. The methods and apparatus described here, and the outlined claims, apply to this definition of 3D image data, but also to other definitions including, but not restricted to, non-Cartesian spatial sampling of a 3D scalar or vector field, such as 3D spherical sampling (for example used in some 3D ultrasound systems), 3D unstructured datasets (for example resulting from computational fluid dynamics simulations) and point clouds (for example from particle image velocimetry). In all cases, the value of a point within the 3D image may be a colour value or some other scalar or vector image related value. The value may or may not have been captured by an image sensor—it could be ultrasound, MRI, Doppler or other data but it could also represent velocity of blood that has been detected or other modalities or measurable/computable values that map to the 3D image.
In the case of multi-channel 3D image data, a number of different approaches could be taken (which may be available for selection via a user interface or may have been pre-selected depending on expected data). For example:
    • The system may apply the same opacity mask to multiple ones (not necessarily all) of the channels. The opacity mask may have been calculated for one channel, calculated for the multiple channels and merged or calculated for a flattened version of the channels.
    • The system may selectively (by system or user) calculate and apply an opacity mask to just one channel;
    • e.g. for colour flow Doppler the anatomy channel (B-Mode) could be made to fade out, but the blood flow (CFD channel) does not fade out
    • The system may apply an opacity mask to multiple channels in a weighted manner (determined by the system or specified by the user via a user interface)
    • e.g. for colour flow Doppler the anatomy channel (B-Mode) could be made to fade out more than the blood flow (CFD channel), the degree of fading being compared at the same distance from the highlight point
    • Or the fading-out distance may differ from one channel to the other, e.g. anatomy fades out quite close to the highlight point, colour flow further away from the highlight point (a sketch of this per-channel weighting follows this list)
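As an illustration of the weighted per-channel option above, the sketch below (assuming a two-channel colour-flow Doppler volume with synthetic data; the channel names, weights and fall-off width are purely illustrative) applies one shared distance-based mask with different per-channel exponents so that anatomy fades out more than blood flow at the same distance from the highlight point.
```python
# Illustrative only: synthetic two-channel colour-flow Doppler data, one shared
# distance-based mask, and per-channel exponents so the anatomy (B-mode) channel
# fades out more strongly than the blood-flow channel at the same distance.
import numpy as np

def distance_mask(shape, highlight, theta):
    zz, yy, xx = np.indices(shape, dtype=float)
    dist2 = (zz - highlight[0]) ** 2 + (yy - highlight[1]) ** 2 + (xx - highlight[2]) ** 2
    return np.exp(-dist2 / theta ** 2)

shape = (64, 64, 64)
b_mode = np.random.rand(*shape)        # anatomy channel (synthetic stand-in)
flow = np.random.randn(*shape)         # colour-flow velocity channel (synthetic stand-in)
highlight = (32, 32, 32)

mask = distance_mask(shape, highlight, theta=10.0)
alpha_b_mode = mask ** 1.0             # anatomy follows the mask fully (fades out more)
alpha_flow = mask ** 0.3               # blood flow fades out less at the same distance
```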
The step of calculating may comprise using a masking kernel. As an alternative to a masking kernel, a pre-defined shape or another 3D image may be used as a masking function.
The modified 3D image view may be rendered as a 2D or 3D image (or other rendering) for display to the user.
Preferably, the scalar opacity map is calculated for a region of the 3D image dataset, the region comprising the portion of the 3D image dataset between the highlight position and the edges of the 3D image dataset in a field of view.
One example of such 3D images is 3D ultrasound images of the heart. These include many structures surrounding the heart that are opaque to ultrasound; as a result, these structures will occlude the view of the internal heart structures. An example of this can be seen in FIG. 2 a . Embodiments of the present invention include a method and system that gives the user the ability to peel away obscuring structure and focus on structure(s) of interest. Preferred embodiments have an intuitive user interface that uses the centre of view from a designated highlight position in the 3D imaged volume to define the structure of interest. In this way, the user simply needs to identify where they wish to highlight from and in which direction (in a similar manner to shining a torch on an unlit scene) and the system is able to focus on structures in that field of view. Preferably, a masking kernel is used (or the user may be given the ability to select from one of a number of masking kernels) and the user interface includes user interface features via which parameters of the kernel are tuneable by the user. The parameters are preferably tuneable during use so that the user can change the degree to which surrounding structures can be seen.
One type of kernel that may be used is the Gaussian kernel discussed below. Alternatively, it will be appreciated that other kernels could be used, such as those based on a uniform/rectangular distribution, a radial basis function, a spherical step function or an exponential distribution (in the case of an exponential distribution the user would select a point/area to obscure rather than one to highlight).
Preferred embodiments apply a position-dependent opacity kernel such that the opacity of image features in a rendered 2D view (or 3D view) of a 3D image dataset is changed depending on the position of the highlight point. Preferably, the user interface enables a user to move the highlight point and optionally adjust other parameters used to control the opacity, as described in more detail below. Advantageously, the user is provided with an intuitive user interface to navigate a 3D image using a 2D display. Preferably, the user interface takes inputs from a keyboard and/or mouse and/or other controller that interact with the user interface via the 2D display. In this manner, the user can change perspective/view around the 3D image and view the highlighted structure/area from different perspectives. As a 3D image is defined by voxels or the like, the volume can be navigated and viewed using existing 3D rendering systems (or 2D slices or other renderings of the 3D image).
Although the focus of the following discussion is on 3D imagery, it will be appreciated that embodiments of the present invention are applicable to higher dimension datasets such as 4D (3D imagery+time), for example. In such a case, the user interface may include the capability for the user to set a point (or range) in time to be displayed or it may automatically loop through recorded imagery for the view.
Similarly, the dimensions need not correspond (or correspond entirely) to data from the visible spectrum and could include representations of ultrasound, MRI (Magnetic Resonance Imaging) or other data that is used to form multi or hyperspectral images to be viewed.
For example:
3D Colour Doppler Data
This modality consists of 3-channel, 3D imaging data over time. Each time-frame is a volume of data, and for each voxel in the imaging data there are two values (channels): a background value corresponding to the B-Mode (brightness) anatomical image, normally visualized in grayscale; and a Doppler velocity value, which measures, typically in cm/s, the blood velocity along a specific direction, and is typically visualized in red-to-blue colour scale.
Diffusion MRI Data
This modality consists of N-channel, 3D imaging data (N>0). Each voxel in the imaging data contains N+1 values. The first value is called the B0 signal, and all following values correspond to diffusion-weighted signals at the voxel location. N is typically 6, but can be up to several hundred channels. This type of modality is often utilised for exploring the intrinsic tissue orientation within an organ.
PET-MRI Data
This modality is produced by dedicated MRI scanners that are equipped with a PET imaging device. It consists of 2-channel, 3D imaging data. Each voxel in the imaging data contains 2 values. The first value corresponds to the MR-weighting signal (it can be T1-, T2-weighted, or any other MR modality), and the second one corresponds to the PET signal. This type of imaging modality is often used to highlight the concentrated presence of radio tracers that attach to tumour tissue, superimposed on the structural MRI signal.
MR (or CT)—Ultrasound Fusion
Not a modality per se, but (normally live) 3D or 2D ultrasound data may be fused with MR or CT data. This may provide a structural/functional view or show features in one modality which may not be as clear in the other. This may be used for guidance. The two sets of data could be kept in separate coordinate systems or fused into a single volume where one modality is registered to the other and then resampled.
It will be appreciated that calculation of the masking kernel, opacity channel and 2D or 3D rendered image may be done on the fly or may be cached/recorded—particularly in the case of a looped (in time) display, it may be preferable to generate the rendered image during the first loop and cache those until the position or kernel parameters are moved. It will furthermore be appreciated that embodiments of the present invention are also applicable for use in live image capture situations. The user interface may be used in place of the view a technician uses to guide the probe when scanning a patient or as an alternate view for the clinician that can be controlled independently of the operation of the probe.
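One way the caching described above could be arranged is sketched below: rendered frames are keyed on the loop index together with the current highlight position and kernel parameters, so later passes through the loop hit the cache and any change to the position or parameters produces a new key and a re-render. The render_frame function is a stand-in, not the actual rendering pipeline.
```python
# Illustrative caching of rendered loop frames, keyed on frame index, highlight
# position and kernel parameters; moving the highlight or changing a parameter
# produces a new key, so the frame is re-rendered rather than served from cache.
from functools import lru_cache

def render_frame(frame_index, highlight, kernel_params):
    # Stand-in for the real kernel + rendering pipeline.
    return f"frame {frame_index}, highlight {highlight}, params {kernel_params}"

@lru_cache(maxsize=None)
def cached_frame(frame_index, highlight, kernel_params):
    return render_frame(frame_index, highlight, kernel_params)

# First pass through the loop renders and caches every frame; subsequent passes
# reuse the cached frames until the user interacts.
for frame_index in range(4):
    cached_frame(frame_index, highlight=(32, 32, 32), kernel_params=(0.1, 0.4))
```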
Embodiments of the present invention are able to work in substantially real-time, allowing the user to navigate the imaged volume and change what is and is not being displayed simply by moving the highlight position and kernel parameters.
In contrast to existing systems that involve slicing planes through the volume and then manually cropping images, it will be appreciated that embodiments of the present invention provide significant power and flexibility while at the same time reducing the specialist knowledge and skills needed to operate the imaging system.
Preferred embodiments make use of full 3D interaction to allow the user to pick a location in 3D (for example by hand tracking, or with an interaction tool) and make structures fade out as they get far from this point.
It will be appreciated that user interactions can be recorded for later replay (and the recording needs only record view points and parameters for reproduction as the views themselves can be re-calculated at time of display—particularly if different display devices are to be used to render the 3D image dataset, this approach is particularly advantageous as different clinicians or specialists may have different display technologies available to them).
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying description in which:
FIG. 1 is a schematic diagram of an imaging system according to an embodiment of the present invention;
FIGS. 2 a and 2 b are illustrative line drawings and corresponding images showing a 3D rendered image without (FIG. 2 a ) and with (FIG. 2 b ) processing according to an embodiment of the present invention;
FIGS. 3 a and 3 b are images of an ultrasound scan showing a conventional image (FIG. 3 a ) and an image after an embodiment of the present invention is applied (FIG. 3 b ); and,
FIG. 4 shows images in which the method of the present invention has been applied and in which the trade-off between colour distance λ parameter and Euclidean distance parameter with the steepness of the Gaussian kernel through θ is shown.
DETAILED DESCRIPTION
Embodiments of the present invention are directed to methods and systems for displaying and applying user inputs to manipulate 3D imagery.
There exist many sources of 3D image data including 3D imaging scanners. Embodiments may receive data directly from a 3D image data source or may receive data that has been previously acquired and stored in a data repository or similar.
3D image data is typically encoded in the form of a 3D array of voxels. In 3D imaging, the term “voxel” is used to refer to a scalar or vector value on a regular grid in three-dimensional space. As with pixels in a bitmap, voxels themselves do not typically have their position (their spatial coordinates) explicitly encoded along with their values. Instead, rendering systems infer the position of a voxel based upon its position relative to other voxels (i.e., its position in the data structure that makes up a single volumetric image).
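For orientation, the snippet below shows the usual way a renderer recovers a voxel's world position from its index, assuming an axis-aligned grid described only by an origin and a per-axis spacing (orientation handling is omitted); the values are illustrative, not taken from any embodiment.
```python
# Sketch of recovering a voxel's world-space position from its index, assuming
# an axis-aligned grid described by an origin and per-axis spacing (no rotation).
import numpy as np

def voxel_to_world(index, origin, spacing):
    """Map a (k, j, i) voxel index to world coordinates."""
    return (np.asarray(origin, dtype=float)
            + np.asarray(index, dtype=float) * np.asarray(spacing, dtype=float))

# Illustrative values: first voxel at (-50, -50, 0) mm, 0.5 mm isotropic spacing.
print(voxel_to_world((10, 20, 30), origin=(-50.0, -50.0, 0.0), spacing=(0.5, 0.5, 0.5)))
# -> [-45. -40.  15.]
```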
In embodiments of the present invention, the 3D image data is preferably processed (preferably in real time or near real time) so as to suppress image features that are in the periphery of the field of view. Preferably, the system decides on how/whether to portray image features in the rendered output in dependence on a distance dependent opacity map. In this way, image features at a focus point (designated by a user interface) are shown with full opacity, the image features around it are less visible as opacity decreases and the image features that are further away are increasingly suppressed. In one embodiment, the further features are from the immediate field of view, the more they are suppressed. It is important to note that the 3D image data is processed as the array of voxels (or other representation if voxels aren't used). As such, the existence of structures is not relevant to the system and no additional processing is needed. Opacity changes based on distance from the focus point and also on colour difference (or difference from the other scalar value if not colour). Vessels will likely have similar colours and so voxels of a vessel will have similar opacity depending on distance to viewpoint.
FIG. 1 is a schematic diagram of an imaging system according to an embodiment of the present invention.
The imaging system includes an image data source 10, a processor 20, a display 30, and a user interface. The user interface in this embodiment includes a position control 40 and a user input device 45 although it will be appreciated that different representations and input devices could be used.
The processor 20 receives image data from the image data source 10 and also position data from the position control 40. It generates an opacity channel from the position data and uses this to render the image data for display on the display 30.
In the illustrated embodiment, the position control is decoupled from the display 30. In some embodiments, the position control 40 may be superimposed over the displayed image on the display 30. In other embodiments it may be displayed separately.
A user (which may or may not be the operator of the imaging probe that generates the imaging data provided from the imaging data source 10) interacts with the position control 40 to define a highlighting position (base of the arrow (A)) and orientation (direction of the arrow). In this embodiment, this is the data provided to the processor 20. Positioning could, for example, be done using a mouse, a tablet, or an X/Y/Z position and X/Y/Z highlight direction entered using a keyboard, sliders, etc. In the illustrated example, the position cursor is illustrated by the arrow and is moved from position A to position B.
Once the positioning and kernel parameters have been established and the opacity channel Vo calculated, the resulting 2 channel image is output for visualization through a transfer function, which maps intensity to colours, and the computed opacity channel to opacity.
FIGS. 2 a and 2 b are illustrative line drawings and corresponding images showing a 3D rendered image without (FIG. 2 a ) and with (FIG. 2 b ) processing according to an embodiment of the present invention and FIGS. 3 a and 3 b are images of an ultrasound scan showing changes after an embodiment of the present invention is applied (FIG. 3 a being an illustration of rendering without application of an embodiment of the present invention).
Given an intensity and an opacity channel, application of the transfer function by the processor 20 is straightforward. It will be appreciated that the output could be to a 3D display device, projection of the 3D image onto a 2D display or the output could be communication or storage of the rendered data (or the base 3D image data set and the opacity channel or just the opacity channel).
It will be appreciated that both the target position and the kernel parameters (θ, λ) can be tuned interactively. Preferably, the system includes a user interface in which the user can move a cursor to select the target point and can use slider or other GUI elements to select the kernel parameters.
The amount to which the surrounding regions are obscured can be controlled by trading-off parameters of the kernel (as discussed above, this is preferably provided to the user in the form of a GUI slider or the like). The trade-off in the above embodiment is between colour distance λ and Euclidean distance, and with the steepness of the opacity kernel through θ, as illustrated in FIG. 4 .
FIG. 3 a is an example of conventional rendering of ultrasound image data that is used in medical imaging for assisting the clinician in diagnosis or treatment decisions, for example. FIG. 3 b is an image rendered using an embodiment of the present invention. In embodiments of the present invention, rendering is changed preferably in dependence on user inputs so that organs or other imaged structure that is at or around the focus of the highlight (cross-hair in FIG. 4 ) is rendered but as image features are encountered that are further away from the focus of highlight, these are suppressed relative to distance to the focus of highlight.
Preferably, the system includes a user interface that allows the user to interact in 3D with the rendered 2D environment. The user interface allows the user to pick a location in 3D (for example by hand tracking, or with an interaction tool) and make structures fade out as those structures get far from this point (see FIGS. 2 b and 3 b ).
In this preferred embodiment, the 3D image data in the form of a scalar (1 channel) or vector (multi-channel) image is taken as input. The system computes an opacity channel based on a kernel which acts on the intensities and on the relative position of voxels in the 3D image data with respect to a user-defined location (typically the system will have a default location that can be manipulated by the user via a user interface). It will be appreciated that other formats of image data could also be used as inputs.
An opacity channel is calculated relative to the focus of the highlight, the opacity channel being used to generate the rendered view of FIG. 2 b or 3 b from the input image data. As can be seen from FIGS. 2 b and 3 b , the region of interest is opaque and visible through semi-transparent structures.
A 3D image is visualized using this transfer function, preferably using volume rendering that produces a 2D projection.
As will be appreciated, volume rendering refers to a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field. To render a 2D projection of the 3D image data set, one defines a camera in space relative to the volume, the opacity and also the colour of every voxel. This is usually defined using an RGBA (for red, green, blue, alpha) transfer function that defines the RGBA value for every possible voxel value.
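A minimal sketch of such an RGBA transfer function is shown below, assuming 8-bit scalar voxel values and an arbitrary colour ramp; in the embodiments described here, the alpha component would be driven by the computed opacity channel rather than derived from the voxel value alone.
```python
# Minimal RGBA transfer function for 8-bit scalar voxels: a 256-entry look-up
# table maps every possible voxel value to red, green, blue and alpha. The ramps
# are arbitrary; in the embodiments above, alpha would come from the computed
# opacity channel instead of the voxel value.
import numpy as np

values = np.arange(256, dtype=float) / 255.0
lut = np.stack([
    values,          # red increases with intensity
    0.5 * values,    # green (arbitrary ramp for illustration)
    1.0 - values,    # blue decreases with intensity
    values,          # alpha (substitute the opacity channel here)
], axis=1)           # shape (256, 4)

def apply_transfer_function(volume_u8):
    """Map an 8-bit volume to an RGBA volume of shape (..., 4) via the look-up table."""
    return lut[volume_u8]

volume = np.random.randint(0, 256, size=(32, 32, 32), dtype=np.uint8)
rgba = apply_transfer_function(volume)   # shape (32, 32, 32, 4)
```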
For example, a volume may be viewed by extracting isosurfaces (surfaces of equal values) from the volume and rendering them as polygonal meshes or by rendering the volume directly as a block of data. The marching cubes algorithm is a common technique for extracting an isosurface from volume data. A ray casting algorithm is a common technique for rendering a volume directly.
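For context, the sketch below shows the front-to-back compositing step at the core of a simple ray caster: colour and opacity samples along one ray are accumulated until the ray saturates. It is a generic illustration of ray casting, not the specific renderer used in any embodiment.
```python
# Generic front-to-back compositing along one ray: each sample contributes its
# colour weighted by its opacity and by the transparency remaining in front of
# it, and the loop stops early once the accumulated opacity is close to 1.
import numpy as np

def composite_ray(colors, alphas, early_exit=0.99):
    out_color = np.zeros(3)
    out_alpha = 0.0
    for color, alpha in zip(colors, alphas):
        out_color += (1.0 - out_alpha) * alpha * np.asarray(color, dtype=float)
        out_alpha += (1.0 - out_alpha) * alpha
        if out_alpha >= early_exit:   # ray is effectively opaque; later samples are hidden
            break
    return out_color, out_alpha

# Three samples: a faint blue, a strong red, then a green that is mostly occluded.
print(composite_ray([(0, 0, 1), (1, 0, 0), (0, 1, 0)], [0.2, 0.9, 0.5]))
```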
Preferably the 3D image data set is stored as a D-dimensional scalar map with samples on a uniform grid G. This may be done as a translation step at the point that the 3D image data set is received or alternatively the data set could be stored as received and translated/mapped as an initial step when rendering is to be done.
Defining V(X): R^D → R as the D-dimensional scalar map with samples on a grid G ⊂ R^D, V(G) is a D-dimensional scalar image. Analogously, defining V(X): R^D → R^d as a vector-valued map, V(G) is a D-dimensional vector-valued image. In the following, we denote all images V and assume a scalar image is a vector image where d=1.
To calculate the opacity channel, the user preferably provides:
    • 1) a spatial position (preferably through a movable cursor) P ∈ R^D; and
    • 2) an M-dimensional parameter vector θ for a masking kernel (which can be provided through, for example, sliders or other controls in a GUI).
In one embodiment, the masking kernel k maps the position X and the image V to a scalar opacity value, and is of the form:
$$k_{P,\theta}(\{X, V\}): \mathbb{R}^{D+1} \to [0, 1]$$
For example, the kernel may be an isotropic Gaussian kernel, centred at P0:
$$k_{P,\theta}^{\mathrm{Gauss}}(\{X, V\}) = \exp\left(-\frac{\|X - P\|^2}{\theta^2}\right)$$
where θ is a scalar value representing the width of the Gaussian kernel.
It will be appreciated from the above discussion that the kernel need not be of a Gaussian form. Other examples include a radial (spheroidal) step function and an inverse Gaussian Kernel:
    • i) Radial step function, centred at P0:
$$k_{P,\theta}^{\mathrm{Spheroidal}}(\{X, V\}) = \begin{cases} 1 & \text{if } \|X - P\|^2 < R^2 \\ 0 & \text{elsewhere} \end{cases}$$
    • where R is a scalar value representing the radius of the radial kernel.
    • ii) Inverse Gaussian kernel, centred at P0 (which would obscure the targeted region, allowing other areas to be viewed):
$$k_{P,\theta}^{\mathrm{IGauss}}(\{X, V\}) = 1 - \exp\left(-\frac{\|X - P\|^2}{\theta^2}\right)$$
    • where θ is a scalar value representing the width of the Gaussian.
Generalising the above approach for any kernel, preferred embodiments use a kernel that combines intensity (relative to a reference intensity value) and position (Euclidean distance to a target of interest) to define the opacity channel Vo as follows:
$$V_o(X) = k_{P,\theta}^{\mathrm{proposed}}(\{X, V\}) = \left(k_{P,\theta_1}^{\mathrm{position}}\right)^{(1-\lambda)} \left(k_{V_R,\theta_2}^{\mathrm{intensity}}\right)^{\lambda}$$
    • where λ is a trade-off factor between opacity being governed by intensity (λ=0) or opacity being governed by the position-based kernel (λ=1), k_{P,θ1}^{position} is a position-based kernel, for example any of the kernels described above, and k_{V_R,θ2}^{intensity} is an intensity-based kernel, for example:
$$k_{V_R,\theta_2}^{\mathrm{intensity}} = \exp\left(-\frac{\|v - v_R\|^2}{\theta_2^2}\right)$$
As in the case above, V_R is a reference image value (which can be the intensity at the target of interest, or fixed, typically to V_R = 255 in a scalar ultrasound image, i.e. the intensity of the bright white areas).
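Bringing the position and intensity kernels together, a minimal NumPy sketch of the combined opacity channel V_o is given below; the grid is taken to be in voxel units and all parameter values (θ1, θ2, λ and the reference value) are illustrative assumptions rather than values from the description.
```python
# Minimal sketch of the combined opacity channel V_o: a Gaussian position kernel
# centred on the highlight position P, an intensity kernel around a reference
# value v_ref, and the trade-off exponent lambda. Grid in voxel units; all
# parameter values are illustrative assumptions.
import numpy as np

def opacity_channel(volume, P, v_ref, theta1, theta2, lam):
    zz, yy, xx = np.indices(volume.shape, dtype=float)
    dist2 = (zz - P[0]) ** 2 + (yy - P[1]) ** 2 + (xx - P[2]) ** 2
    k_position = np.exp(-dist2 / theta1 ** 2)                     # isotropic Gaussian in space
    k_intensity = np.exp(-((volume - v_ref) ** 2) / theta2 ** 2)  # similarity to the reference value
    return k_position ** (1.0 - lam) * k_intensity ** lam         # V_o(X) per voxel, in [0, 1]

volume = 255.0 * np.random.rand(64, 64, 64)   # stand-in scalar 3D image (0..255)
P = (32, 32, 32)                              # highlight position chosen with the cursor
v_o = opacity_channel(volume, P, v_ref=volume[P], theta1=12.0, theta2=60.0, lam=0.4)
```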
It will be appreciated that the parameters need not be user provided and could also be system defaults. Additionally, positioning and masking kernel parameters could be provided via an external system that may have recorded previous views of the dataset or has data from other sources (diagnostic, imaging, medical history or other data) and is guided by that data to highlight features that may be of interest. The system may also include machine learning or other systems so as to provide assistance on best choice of parameters for a particular feature that is at the focus of the field of view or highlight location (for example, within the crosshairs etc).
FIG. 4 shows example images of the opacity channel obtained using an embodiment of the present invention. The images show the opacity channel for increasing values of λ (from left to right: 0.1, 0.2, 0.3, 0.4 and 0.5) and of θ (from top to bottom: 0.05, 0.1 and 0.15), when picking a point on the atrium (white cross).
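A sweep of the kind shown in FIG. 4 can be reproduced with the opacity_channel sketch above by looping over λ and the position-kernel width; the values below are illustrative only (and in voxel units rather than the normalised 0.05 to 0.15 range quoted for the figure).
```python
# Reusing opacity_channel, volume and P from the sketch above to produce a grid
# of opacity channels over lambda and the position-kernel width, in the spirit
# of the FIG. 4 sweep (illustrative values, voxel units).
sweeps = {}
for lam in (0.1, 0.2, 0.3, 0.4, 0.5):
    for theta1 in (6.0, 12.0, 18.0):
        sweeps[(lam, theta1)] = opacity_channel(volume, P, v_ref=volume[P],
                                                theta1=theta1, theta2=60.0, lam=lam)
```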
It will be appreciated that the above approach can be implemented in software and/or hardware. A recently exploited technique to accelerate traditional volume rendering algorithms such as ray-casting is the use of modern graphics cards. Starting with the programmable pixel shaders, people recognized the power of parallel operations on multiple pixels and began to perform general-purpose computing on (the) graphics processing units (GPGPU) and other high performance hardware. The pixel shaders are able to read and write randomly from video memory and perform some basic mathematical and logical calculations. These Single Instruction Multiple Data (SIMD) processors were used to perform general calculations such as rendering polygons and signal processing. In recent GPU generations, the pixel shaders now are able to function as Multiple Instruction Multiple Data (MIMD) processors (now able to independently branch) utilizing up to 1 GB of texture memory with floating point formats. With such power, virtually any algorithm with steps that can be performed in parallel, such as volume ray casting or tomographic reconstruction, can be performed with tremendous acceleration. The programmable pixel shaders can be used to simulate variations in the characteristics of lighting, shadow, reflection, emissive colour and so forth. Such simulations can be written using high level shading languages.
The foregoing preferred embodiments have been disclosed for the purpose of illustration. Variations and modifications of the basic concept of the invention will be readily apparent to persons skilled in the art. For example, graphical symbols other than dots or cross-hairs can be used to depict a position in the volume. Nor is the user interface limited to particular software elements: not only could different software GUI elements be used, but hardware interface features could also be used, such as a track-ball, rocker switch, rotary switch and keys. A mouse, a joystick, a lever, a slider or other input device could also be used, as could movement-based detectors, virtual controllers/environments, augmented reality and so on. It will also be appreciated that the rendered images produced could be used with many different display technologies, including 2D, 3D, virtual reality, augmented reality, holographic and other display types. All such variations and modifications are intended to be encompassed by embodiments of the present invention.
It is to be appreciated that certain embodiments of the invention as discussed below may be incorporated as code (e.g., a software algorithm or program) residing in firmware and/or on computer useable medium having control logic for enabling execution on a computer system having a computer processor. Such a computer system typically includes memory storage configured to provide output from execution of the code which configures a processor in accordance with the execution. The code can be arranged as firmware or software, and can be organized as a set of modules such as discrete code modules, function calls, procedure calls or objects in an object-oriented programming environment. If implemented using modules, the code can comprise a single module or a plurality of modules that operate in cooperation with one another.
Optional embodiments of the invention can be understood as including the parts, elements and features referred to or indicated herein, individually or collectively, in any or all combinations of two or more of the parts, elements or features, and wherein specific integers are mentioned herein which have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.
Although illustrated embodiments of the present invention have been described, it should be understood that various changes, substitutions, and alterations can be made by one of ordinary skill in the art without departing from the present invention which is defined by the recitations in the claims and equivalents thereof.
This work is independent research funded by the National Institute for Health Research (Invention for Innovation programme, 3D Heart project, II-LA-0716-20001). The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.
This application claims priority from GB 1913832.0, the content of which and the content of the abstract accompanying this application are hereby incorporated by reference.

Claims (13)

The invention claimed is:
1. A computer-implemented method for navigation and display of 3D image data, comprising:
retrieving a 3D image dataset to be displayed;
receiving identification of a highlight position within the 3D image dataset;
calculating a scalar opacity map for the 3D image dataset, the scalar opacity map having a value for each of a plurality of positions in the 3D image dataset, the respective value being dependent on a respective distance from the highlight position, and on the value of the 3D image at the respective position relative to the value of the 3D image at the highlight position; and
applying the opacity to the 3D image dataset to generate a modified 3D image view.
2. The method of claim 1, wherein the step of calculating uses a masking kernel.
3. The method of claim 2, wherein the masking kernel is selected from a set including a Gaussian kernel, a kernel based on a uniform/rectangular distribution, a radial basis function, a spherical step function or an exponential distribution.
4. The method of claim 3, further comprising receiving, via a user interface, the highlight position in the 3D image dataset, the method further comprising calculating the scalar opacity map in dependence on the highlight position, on the 3D image dataset and on the masking kernel.
5. The method of claim 4, further comprising receiving, via the user interface, parameters for the masking kernel.
6. The method of claim 5, wherein the user interface comprises a 2D representation of the 3D image dataset.
7. The method of claim 4, wherein the user interface comprises a 2D representation of the 3D image dataset.
8. The method of claim 1, wherein the step of calculating uses a predefined shape or a predefined 3D as a masking function.
9. The method of claim 1, further comprising rendering the modified 3D image as a 2D or 3D image for display to the user.
10. The method of claim 1, further comprising calculating the scalar opacity map for a region of the 3D image dataset, the region comprising the portion of the 3D image dataset between the highlight position and the edges of the 3D image dataset in a field of view.
11. A system for navigation and display of 3D image data, comprising:
a data repository storing a 3D image dataset to be displayed;
a user interface configured to receive identification of a highlight position within the 3D image dataset; and
a processor configured to calculate a scalar opacity map for the 3D image dataset, the scalar opacity map having a value for each of a plurality of positions in the 3D image dataset, the respective value being dependent on a respective distance from the highlight position, and on the value of the 3D image at the respective position relative to the value of the 3D image at the highlight position and apply the opacity to the 3D image dataset to generate a modified 3D image view.
12. The system of claim 11, wherein the system is configured to receive, via the user interface, tuneable parameters for the masking kernel, the processor being configured to apply the received parameters when calculating the scalar opacity map.
13. The system of claim 12, wherein the user interface includes a 2D display showing a representation of the 3D image dataset, the system being configured to receive a designation of the highlight position via the 2D displayed representation.
US17/763,728 2019-09-25 2020-09-25 Computer implemented method and system for navigation and display of 3D image data Active 2040-12-26 US11995773B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB1913832 2019-09-25
GB1913832.0 2019-09-25
GB201913832A GB201913832D0 (en) 2019-09-25 2019-09-25 Method and apparatus for navigation and display of 3d image data
PCT/GB2020/052337 WO2021058981A1 (en) 2019-09-25 2020-09-25 Computer implemented method and system for navigation and display of 3d image data

Publications (2)

Publication Number Publication Date
US20220343605A1 US20220343605A1 (en) 2022-10-27
US11995773B2 true US11995773B2 (en) 2024-05-28

Family

ID=68425503

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/763,728 Active 2040-12-26 US11995773B2 (en) 2019-09-25 2020-09-25 Computer implemented method and system for navigation and display of 3D image data

Country Status (6)

Country Link
US (1) US11995773B2 (en)
EP (1) EP4022578A1 (en)
JP (1) JP2022551060A (en)
CN (1) CN115104129A (en)
GB (1) GB201913832D0 (en)
WO (1) WO2021058981A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240054700A1 (en) * 2021-01-15 2024-02-15 Koninklijke Philips N.V. Post-processing for radiological images
US20230068315A1 (en) * 2021-08-24 2023-03-02 Biosense Webster (Israel) Ltd. Anatomically correct reconstruction of an atrium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001340336A (en) * 2000-06-01 2001-12-11 Toshiba Medical System Co Ltd Ultrasonic diagnostic apparatus and ultrasonic diagnostic method
US6692441B1 (en) * 2002-11-12 2004-02-17 Koninklijke Philips Electronics N.V. System for identifying a volume of interest in a volume rendered ultrasound image
JP2006000338A (en) * 2004-06-17 2006-01-05 Fuji Photo Film Co Ltd Image processing method, apparatus, and program
GB2416944A (en) * 2004-07-30 2006-02-08 Voxar Ltd Classifying voxels in a medical image
JP5161991B2 (en) * 2011-03-25 2013-03-13 株式会社東芝 Image processing device
JP5693412B2 (en) * 2011-07-26 2015-04-01 キヤノン株式会社 Image processing apparatus and image processing method
US9612657B2 (en) * 2013-03-14 2017-04-04 Brainlab Ag 3D-volume viewing by controlling sight depth
GB201415534D0 (en) * 2014-09-02 2014-10-15 Bergen Teknologioverforing As Method and apparatus for processing three-dimensional image data
US9659405B2 (en) * 2015-04-01 2017-05-23 Toshiba Medical Systems Corporation Image processing method and apparatus
US10342633B2 (en) * 2016-06-20 2019-07-09 Toshiba Medical Systems Corporation Medical image data processing system and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010055016A1 (en) * 1998-11-25 2001-12-27 Arun Krishnan System and method for volume rendering-based segmentation
US20110262023A1 (en) * 2008-10-08 2011-10-27 Tomtec Imaging Systems Gmbh Method of filtering an image dataset
US20140187948A1 (en) * 2012-12-31 2014-07-03 General Electric Company Systems and methods for ultrasound image rendering
US20190110198A1 (en) * 2017-09-18 2019-04-11 Element Inc. Methods, systems, and media for detecting spoofing in mobile authentication
US20190099159A1 (en) * 2017-09-29 2019-04-04 Siemens Healthcare Gmbh Measurement Point Determination in Medical Diagnostic Imaging
US20200184640A1 (en) * 2018-12-05 2020-06-11 Stryker Corporation Systems and methods for displaying medical imaging data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Search Report in corresponding PCT/GB2020/052337 dated Dec. 3, 2020.

Also Published As

Publication number Publication date
US20220343605A1 (en) 2022-10-27
GB201913832D0 (en) 2019-11-06
WO2021058981A1 (en) 2021-04-01
JP2022551060A (en) 2022-12-07
EP4022578A1 (en) 2022-07-06
CN115104129A (en) 2022-09-23

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

AS Assignment

Owner name: KING'S COLLEGE LONDON, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HERRERO, ALBERTO GOMEZ;WHEELER, GAVIN;SCHNABEL, JULIA;AND OTHERS;SIGNING DATES FROM 20220204 TO 20220221;REEL/FRAME:067172/0437

Owner name: GUY'S AND ST THOMAS' NHS FOUNDATION TRUST, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIMPSON, JOHN;REEL/FRAME:067169/0467

Effective date: 20220303

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE