US11995773B2 - Computer implemented method and system for navigation and display of 3D image data - Google Patents
- Publication number
- US11995773B2 (application US17/763,728)
- Authority
- US
- United States
- Prior art keywords
- image
- image dataset
- kernel
- value
- user interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/003—Navigation within 3D models or images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/41—Medical
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/62—Semi-transparency
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2024—Style variation
Definitions
- the present invention relates to a system and computer implemented method for navigation and display of three-dimensional imaging and is particularly applicable to three-dimensional imaging of the human anatomy for the purpose of medical diagnosis and treatment planning.
- Imaging scanners are used for various purposes including imaging human and animal bodies for diagnosis and guidance during medical intervention such as surgery.
- Other uses for imaging scanners include structural analysis of buildings, pipes and the like.
- a conventional medical ultrasound scanner creates two-dimensional B-mode images of tissue in which the brightness of a pixel is based on the intensity of the echo return.
- Other types of imaging scanners can capture blood flow, motion of tissue over time, the location of blood, the presence of specific molecules, the stiffness of tissue, or the anatomy of a three-dimensional (3D) region.
- 2D images such as 2D ultrasound images cannot represent the three-dimensional structures typical of human or animal body organs because they capture only a single 2D cross-sectional slice.
- When a probe such as an ultrasound probe is mechanically or electronically swept over an area of interest, a three-dimensional image volume is generated.
- some ultrasound probes, for example “matrix” probes, have multiple piezo-electric crystals and can construct “real-time” 3D ultrasound images. These can then be displayed by, for example, 3D holographic technologies, and the anatomy becomes much easier to visualize for both the trained and untrained observer as it is more representative of the true underlying structure/anatomy.
- Other technologies also allow the capture or building of 3D imagery.
- 3D imaging in the form of ultrasound, CT (computed tomography) and MR (magnetic resonance)
- a current limitation of 3D imaging is that although the data is 3D in its nature, conventional 2D displays can only render a flat representation (projection, slice, casting, etc) of the image on a screen.
- 3D imaging devices are available as indicated above.
- Technology has been made available, on computing systems including those in imaging systems, to display 3D images using computed reality technology such as holograms, virtual reality, mixed reality or augmented reality technology.
- Such technologies have, however, not been primarily developed for the specific requirements of a clinical setting. 3D systems tend to be expensive and their interfaces are alien to users accustomed to working in 2D.
- a further issue with three dimensional rendering of data is that the volume of information portrayed to the user increases substantially.
- a method and apparatus for navigation and display of 3D image data comprises:
- a 3D image dataset refers to a 3D array of scalar or vector values, and possibly an associated extent, orientation and resolution that allow a correspondence to be established between the 3D image and real or imaginary world objects.
- the value of a point within the 3D image may be a colour value or some other scalar or vector image related value.
- the value may or may not have been captured by an image sensor: it could be ultrasound, MRI, Doppler or other data, but it could also represent detected blood velocity or other modalities or measurable/computable values that map to the 3D image.
- the step of calculating may comprise using a masking kernel.
- Instead of, or in addition to, a masking kernel, a pre-defined shape or another 3D image may be used as a masking function.
- the modified 3D image view may be rendered as a 2D or 3D image (or other rendering) for display to the user.
- the scalar opacity map is calculated for a region of the 3D image dataset, the region comprising the portion of the 3D image dataset between the highlight position and the edges of the 3D image dataset in a field of view.
- Embodiments of the present invention include a method and system that gives the user the ability to peel away obscuring structure and focus on structure(s) of interest.
- Preferred embodiments have an intuitive user interface that uses the centre of view from a designated highlight position in the 3D imaged volume to define the structure of interest. In this way, the user simply needs to identify where they wish to highlight from and in which direction (in a similar manner to shining a torch on an unlit scene) and the system is able to focus on structures in that field of view.
- a masking kernel is used (or the user may be given the ability to select from one of a number of masking kernels) and the user interface includes features via which parameters of the kernel are tuneable by the user.
- the parameters are preferably tuneable during use so that the user can change the degree to which surrounding structures can be seen.
- One type of kernel that may be used is the Gaussian kernel discussed below.
- Other kernels may also be used, such as those based on a uniform/rectangular distribution, a radial basis function, a spherical step function or an exponential distribution (in the case of an exponential distribution the user would select a point/area to obscure rather than one to highlight).
- Preferred embodiments apply a position-dependent opacity kernel such that the opacity of image features in a rendered 2D view (or 3D view) of a 3D image dataset is changed depending on position of the highlight point.
- the user interface enables a user to move the highlight point and, optionally, to adjust other parameters used to control the opacity, as described in more detail below.
- the user is provided with an intuitive user interface to navigate a 3D image using a 2D display.
- the user interface takes inputs from a keyboard and/or mouse and/or other controller that interact with the user interface via the 2D display. In this manner, the user can change perspective/view around the 3D image and view the highlighted structure/area from different perspectives.
- the volume can be navigated and viewed using existing 3D rendering systems (or 2D slices or other rendering of the 3D image).
- the user interface may include the capability for the user to set a point (or range) in time to be displayed or it may automatically loop through recorded imagery for the view.
- the dimensions need not correspond (or correspond entirely) to data from the visible spectrum and could include representations of ultrasound, MRI (Magnetic Resonance Imaging) or other data that is used to form multi or hyperspectral images to be viewed.
- Colour flow Doppler imaging, for example, consists of 2-channel 3D imaging data over time.
- Each time-frame is a volume of data, and for each voxel in the imaging data there are two values (channels): a background value corresponding to the B-Mode (brightness) anatomical image, normally visualized in grayscale, and a Doppler velocity value, which measures, typically in cm/s, the blood velocity along a specific direction and is typically visualized on a red-to-blue colour scale.
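By way of illustration, such a multi-channel time-resolved acquisition might be laid out in memory as below (a minimal sketch in Python/NumPy; the array shape and variable names are assumptions for illustration only, not part of the patent):

```python
import numpy as np

# Hypothetical layout: T time-frames of a (Z, Y, X) volume with 2 channels per voxel.
T, Z, Y, X = 8, 64, 64, 64
volume = np.zeros((T, Z, Y, X, 2), dtype=np.float32)

b_mode  = volume[..., 0]  # channel 0: B-Mode (brightness) anatomy, shown in grayscale
doppler = volume[..., 1]  # channel 1: Doppler velocity (cm/s), shown red-to-blue
```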
- Diffusion-weighted MRI, as another example, consists of N-channel 3D imaging data (N>0). Each voxel in the imaging data contains N+1 values. The first value is called the B0 signal, and all following values correspond to diffusion-weighted signals at the voxel location. N is typically 6 but can be up to several hundred channels. This type of modality is often utilised for exploring the intrinsic tissue orientation within an organ.
- PET-MR imaging is produced by dedicated MRI scanners that are equipped with a PET imaging device. It consists of 2-channel 3D imaging data: each voxel contains two values, the first corresponding to the MR signal (which can be T1-weighted, T2-weighted, or any other MR modality) and the second corresponding to the PET signal.
- This type of imaging modality is often used to highlight the concentrated presence of radio tracers that attach to tumour tissue, superimposed on the structural MRI signal.
- 3D or 2D ultrasound data may be fused with MR or CT data. This may provide a structural/functional view or show features in one modality which may not be as clear in the other. This may be used for guidance.
- the two sets of data could be kept in separate coordinate systems or fused into a single volume where one modality is registered to the other and then resampled.
- calculation of the masking kernel, opacity channel and 2D or 3D rendered image may be done on the fly or may be cached/recorded. In particular, in the case of a looped (in time) display, it may be preferable to generate the rendered images during the first loop and cache them until the position or kernel parameters are changed. It will furthermore be appreciated that embodiments of the present invention are also applicable for use in live image capture situations.
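A minimal sketch of such a frame cache, assuming a rendering callback and a cache key built from the highlight position and kernel parameters (all names here are illustrative, not from the patent):

```python
import numpy as np

class LoopRenderCache:
    """Caches rendered frames of a time-looped dataset; the cache is
    flushed whenever the highlight position or kernel parameters change."""

    def __init__(self, render_fn):
        self.render_fn = render_fn  # callable(frame, position, params) -> rendered image
        self.key = None             # (position, params) the cached frames belong to
        self.frames = {}

    def get(self, frame, position, params):
        key = (tuple(position), tuple(params))
        if key != self.key:                    # position or kernel parameters moved
            self.key, self.frames = key, {}
        if frame not in self.frames:           # first loop: render and cache
            self.frames[frame] = self.render_fn(frame, position, params)
        return self.frames[frame]

# Example with a dummy renderer that returns a constant image per frame.
cache = LoopRenderCache(lambda t, p, k: np.full((64, 64), t, dtype=np.float32))
image = cache.get(0, position=(16, 16, 16), params=(0.7, 10.0))
```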
- the user interface may be used in place of the view a technician uses to guide the probe when scanning a patient or as an alternate view for the clinician that can be controlled independently of the operation of the probe.
- Preferred embodiments make use of full 3D interaction to allow the user to pick a location in 3D (for example by hand tracking, or with an interaction tool) and make structures fade out as they get far from this point.
- FIG. 1 is a schematic diagram of an imaging system according to an embodiment of the present invention
- FIGS. 2 a and 2 b are illustrative line drawings and corresponding images showing a 3D rendered image without ( FIG. 2 a ) and with ( FIG. 2 b ) processing according to an embodiment of the present invention
- FIGS. 3 a and 3 b are images of an ultrasound scan showing a conventional image ( FIG. 3 a ) and the same image after an embodiment of the present invention is applied ( FIG. 3 b ); and,
- FIG. 4 shows images in which the method of the present invention has been applied, illustrating the trade-off between the colour-distance and Euclidean-distance contributions through the λ parameter and the steepness of the Gaussian kernel through θ.
- Embodiments of the present invention are directed to methods and systems for displaying and applying user inputs to manipulate 3D imagery.
- Embodiments may receive data directly from a 3D image data source or may receive data that has been previously acquired and stored in a data repository or similar.
- 3D image data is typically encoded in the form of a 3D array of voxels.
- the term “voxel” is used to refer to a scalar or vector value on a regular grid in three-dimensional space.
- voxels themselves do not typically have their position (their spatial coordinates) explicitly encoded along with their values. Instead, rendering systems infer the position of a voxel based upon its position relative to other voxels (i.e., its position in the data structure that makes up a single volumetric image).
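For example, a voxel's world position can be inferred from its grid index together with the volume's origin, spacing (resolution) and orientation, as mentioned above; a minimal sketch with assumed parameter names:

```python
import numpy as np

def voxel_to_world(index, origin, spacing, direction=np.eye(3)):
    """Infer a voxel's spatial position from its (i, j, k) grid index.

    origin    - world coordinates of voxel (0, 0, 0)
    spacing   - voxel size along each grid axis (the resolution)
    direction - 3x3 orientation matrix of the grid axes
    """
    return np.asarray(origin) + direction @ (np.asarray(index) * np.asarray(spacing))

# Voxel (10, 20, 5) of a 0.5 mm isotropic volume whose first voxel sits at the world origin.
p = voxel_to_world((10, 20, 5), origin=(0.0, 0.0, 0.0), spacing=(0.5, 0.5, 0.5))
```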
- the 3D image data is preferably processed (preferably in real time or near real time) so as to suppress image features that are in the periphery of the field of view.
- the system decides on how/whether to portray image features in the rendered output in dependence on a distance dependent opacity map.
- image features at a focus point are shown with full opacity, the image features around it are less visible as opacity decreases and the image features that are further away are increasingly suppressed.
- the further features are from the immediate field of view, the more they are suppressed.
- the 3D image data is processed as the array of voxels (or another representation if voxels are not used). As such, the existence of structures is not relevant to the system and no additional processing is needed. Opacity changes based on distance from the focus point and also on colour difference (or difference in another scalar value if not colour). Vessels will likely have similar colours, so the voxels of a vessel will have similar opacity depending on their distance to the focus point.
- FIG. 1 is a schematic diagram of an imaging system according to an embodiment of the present invention.
- the imaging system includes an image data source 10 , a processor 20 , a display 30 , and a user interface.
- the user interface in this embodiment includes a position control 40 and a user input device 45 although it will be appreciated that different representations and input devices could be used.
- the processor 20 receives image data from the image data source 10 and also position data from the position control 40 . It generates an opacity channel from the position data and uses this to render the image data for display on the display 30 .
- the position control is decoupled from the display 30 .
- the position control 40 may be superimposed over the displayed image on the display 30 . In other embodiments it may be displayed separately.
- a user (which may or may not be the operator of the imaging probe that generates the imaging data provided from the imaging data source 10 ) interacts with the position control 40 to define a highlighting position (base of the arrow (A)) and orientation (direction of the arrow). In this embodiment, this is the data provided to the processor 20 .
- Positioning could, for example, be done using a mouse or tablet, or by specifying an X/Y/Z position and an X/Y/Z highlight direction using a keyboard, sliders, etc.
- the position cursor is illustrated by the arrow and is moved from position A to position B.
- the resulting 2-channel image is output for visualization through a transfer function, which maps intensity to colours and the computed opacity channel to opacity.
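A minimal sketch of that visualization step, assuming a simple colour look-up table for the intensity channel and a separately computed opacity channel (array names and shapes are illustrative):

```python
import numpy as np

def apply_transfer_function(intensity, opacity, lut):
    """Map a scalar intensity volume to RGBA: colour comes from a look-up
    table indexed by intensity, alpha from the computed opacity channel.

    intensity - float volume normalised to [0, 1]
    opacity   - opacity channel in [0, 1], same shape as intensity
    lut       - (N, 3) array of RGB colours
    """
    idx = np.clip((intensity * (len(lut) - 1)).astype(int), 0, len(lut) - 1)
    rgba = np.empty(intensity.shape + (4,), dtype=np.float32)
    rgba[..., :3] = lut[idx]   # colour from the transfer function
    rgba[..., 3] = opacity     # alpha from the opacity channel
    return rgba

# Example with a grayscale look-up table.
lut = np.linspace(0.0, 1.0, 256)[:, None].repeat(3, axis=1)
intensity = np.random.rand(16, 16, 16).astype(np.float32)
rgba = apply_transfer_function(intensity, opacity=intensity, lut=lut)
```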
- FIGS. 2 a and 2 b are illustrative line drawings and corresponding images showing a 3D rendered image without ( FIG. 2 a ) and with ( FIG. 2 b ) processing according to an embodiment of the present invention and FIGS. 3 a and 3 b are images of an ultrasound scan showing changes after an embodiment of the present invention is applied ( FIG. 3 a being an illustration of rendering without application of an embodiment of the present invention).
- the output could be to a 3D display device, projection of the 3D image onto a 2D display or the output could be communication or storage of the rendered data (or the base 3D image data set and the opacity channel or just the opacity channel).
- both the target position and the kernel parameters (λ, θ) can be tuned interactively.
- the system includes a user interface in which the user can move a cursor to select the target point and can use sliders or other GUI elements to select the kernel parameters.
- the amount to which the surrounding regions are obscured can be controlled by trading-off parameters of the kernel (as discussed above, this is preferably provided to the user in the form of a GUI slider or the like).
- the trade-off in the above embodiment is between colour distance and Euclidean distance through λ, and the steepness of the opacity kernel is controlled through θ, as illustrated in FIG. 4 .
- FIG. 3 a is an example of conventional rendering of ultrasound image data that is used in medical imaging for assisting the clinician in diagnosis or treatment decisions, for example.
- FIG. 3 b is an image rendered using an embodiment of the present invention.
- rendering is changed, preferably in dependence on user inputs, so that organs or other imaged structure at or around the focus of the highlight (cross-hair in FIG. 4 ) are rendered fully, while image features further away from the focus of the highlight are increasingly suppressed in proportion to their distance from it.
- the system includes a user interface that allows the user to interact in 3D with the rendered 2D environment.
- the user interface allows the user to pick a location in 3D (for example by hand tracking, or with an interaction tool) and make structures fade out as those structures get far from this point (see FIGS. 2 b and 3 b ).
- the 3D image data in the form of a scalar (1 channel) or vector (multi-channel) image is taken as input.
- the system computes an opacity channel based on a kernel which acts on the intensities and on the relative position of voxels in the 3D image data with respect to a user-defined location (typically the system will have a default location that can be manipulated by the user via a user interface). It will be appreciated that other formats of image data could also be used as inputs.
- An opacity channel is calculated relative to the focus of the highlight, the opacity channel being used to generate the rendered view of FIG. 2 b or 3 b from the input image data.
- the region of interest is opaque and visible through semi-transparent structures.
- a 3D image is visualized using this transfer function, preferably using volume rendering that produces a 2D projection.
- volume rendering refers to a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.
- To create a 2D projection of the 3D image data set, one defines a camera in space relative to the volume, and the opacity and colour of every voxel. This is usually defined using an RGBA (red, green, blue, alpha) transfer function that defines the RGBA value for every possible voxel value.
- a volume may be viewed by extracting isosurfaces (surfaces of equal values) from the volume and rendering them as polygonal meshes or by rendering the volume directly as a block of data.
- the marching cubes algorithm is a common technique for extracting an isosurface from volume data.
- a ray casting algorithm is a common technique for rendering a volume directly.
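As a concrete illustration of rendering a volume directly, the sketch below composites an RGBA volume front-to-back along one axis. It is a deliberately simplified, orthographic stand-in for a full ray caster, not the patent's implementation:

```python
import numpy as np

def composite_volume(rgba):
    """Minimal direct volume rendering with orthographic, axis-aligned rays.

    rgba - (Z, Y, X, 4) volume; rays march front-to-back along Z.
    Returns a (Y, X, 3) image via standard alpha compositing.
    """
    image = np.zeros(rgba.shape[1:3] + (3,), dtype=np.float32)
    transmittance = np.ones(rgba.shape[1:3] + (1,), dtype=np.float32)
    for z in range(rgba.shape[0]):
        colour, alpha = rgba[z, ..., :3], rgba[z, ..., 3:4]
        image += transmittance * alpha * colour  # visible contribution of this slab
        transmittance *= 1.0 - alpha             # light blocked so far
        if transmittance.max() < 1e-3:           # early ray termination
            break
    return image

rgba = np.random.rand(64, 32, 32, 4).astype(np.float32)
rgba[..., 3] *= 0.1  # mostly transparent voxels
image = composite_volume(rgba)
```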
- the 3D image data set is stored as a D-dimensional scalar map with samples on a uniform grid G. This may be done as a translation step at the point that the 3D image data set is received or alternatively the data set could be stored as received and translated/mapped as an initial step when rendering is to be done.
- Define V(X): R^D → R as the D-dimensional scalar map with samples on a grid G ⊂ R^D; then V(G) is a D-dimensional scalar image.
- Alternatively, V(X): R^D → R^C may be a vector-valued map with C channels, in which case V(G) is a D-dimensional vector-valued image.
- the user preferably provides:
- the masking kernel K maps the position X and the image V to a scalar opacity value, and is of the form: K_{P,θ}({X,V}): R^(D+1) → [0,1]
- the kernel may be an isotropic Gaussian kernel centred at P_0 , for example K_{P_0,θ}(X) = exp(−‖X − P_0‖² / (2θ²))
- the kernel need not be of a Gaussian form.
- Other examples include a radial (spheroidal) step function and an inverse Gaussian kernel, both defined below.
- pixel shaders are now able to function as Multiple Instruction Multiple Data (MIMD) processors (able to independently branch), utilizing up to 1 GB of texture memory with floating point formats.
- the programmable pixel shaders can be used to simulate variations in the characteristics of lighting, shadow, reflection, emissive colour and so forth. Such simulations can be written using high level shading languages.
- Embodiments may be implemented in code (e.g., a software algorithm or program) or firmware stored on a computer useable medium having control logic for enabling execution on a computer system having a computer processor.
- Such a computer system typically includes memory storage configured to provide output from execution of the code which configures a processor in accordance with the execution.
- the code can be arranged as firmware or software, and can be organized as a set of modules such as discrete code modules, function calls, procedure calls or objects in an object-oriented programming environment. If implemented using modules, the code can comprise a single module or a plurality of modules that operate in cooperation with one another.
Abstract
Description
-
- retrieving a 3D image dataset to be displayed;
- receiving identification of a highlight position within the 3D image dataset;
- calculating a scalar opacity map for the 3D image dataset, the opacity map having a value for each of a plurality of positions in the 3D image dataset, the respective value being dependent on the respective position relative to the highlight position, and on the value of the 3D image at the respective position relative to the value of the 3D image at the highlight position; and,
- applying the opacity map to the 3D image dataset to generate a modified 3D image view.
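A minimal sketch of these four steps for a scalar volume, using Gaussian fall-offs in both position and value as one possible kernel choice (function and parameter names are assumptions, not the patent's):

```python
import numpy as np

def modified_view(volume, highlight, sigma_pos=10.0, sigma_val=0.2):
    """Compute a per-voxel opacity from (a) distance to the highlight
    position and (b) difference between the voxel value and the value at
    the highlight position, then return it alongside the image data."""
    zz, yy, xx = np.indices(volume.shape)
    grid = np.stack([zz, yy, xx], axis=-1).astype(np.float32)

    d_pos = np.linalg.norm(grid - np.asarray(highlight, dtype=np.float32), axis=-1)
    d_val = np.abs(volume - volume[tuple(highlight)])

    opacity = np.exp(-d_pos**2 / (2 * sigma_pos**2)) * np.exp(-d_val**2 / (2 * sigma_val**2))
    return volume, opacity  # the modified 3D image view: data plus opacity channel

volume = np.random.rand(32, 32, 32).astype(np.float32)  # stands in for a retrieved dataset
image, alpha = modified_view(volume, highlight=(16, 16, 16))
```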
-
- The system may apply the same opacity mask to multiple (though not necessarily all) of the channels. The opacity mask may have been calculated for one channel; calculated for the multiple channels and merged; or calculated for a flattened version of the channels.
- The system may selectively (by system or user) calculate and apply an opacity mask to just one channel;
- e.g. for colour flow Doppler the anatomy channel (B-Mode) could be made to fade out, but the blood flow (CFD channel) does not fade out
- The system may apply an opacity mask to multiple channels in a weighted manner (determined by the system or specified by the user via a user interface)
- e.g. for colour flow Doppler the anatomy channel (B-Mode) could be made to fade out more than the blood flow (CFD channel), comparing the fading at the same distance from the highlight point
- Alternatively, the fading-out distance may differ from one channel to the other, e.g. the anatomy fades out quite close to the highlight point while the colour flow fades out further away from it
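A minimal sketch of such weighted, per-channel fading, assuming a Gaussian fade with a channel-specific fade-out distance (the scales and names are illustrative):

```python
import numpy as np

def per_channel_fade(volume, dist, fade_scale):
    """Fade each channel of a multi-channel volume at its own rate.

    volume     - (..., C) multi-channel image (e.g. C=2: B-Mode, CFD)
    dist       - (...,) distance of each voxel from the highlight point
    fade_scale - length-C fade-out distances: a small value fades close
                 to the highlight, a large value stays visible further out
    """
    alpha = np.exp(-dist[..., None] ** 2 / (2 * np.asarray(fade_scale) ** 2))
    return volume * alpha

# Anatomy (B-Mode) fades close to the highlight point; colour flow (CFD) further away.
volume = np.random.rand(32, 32, 32, 2).astype(np.float32)
zz, yy, xx = np.indices(volume.shape[:3])
dist = np.sqrt((zz - 16.0) ** 2 + (yy - 16.0) ** 2 + (xx - 16.0) ** 2)
faded = per_channel_fade(volume, dist, fade_scale=(6.0, 14.0))
```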
-
- 1) a spatial position (preferably through a movable cursor) P ∈ R^D; and
- 2) an M-dimensional parameter vector θ for a masking kernel (which can be provided through, for example, sliders or other controls in a GUI).
K_{P,θ}({X,V}): R^(D+1) → [0,1]
-
- i) Radial step function, centred at P_0 :
K_{P_0,R}(X) = 1 if ‖X − P_0‖ ≤ R, and 0 otherwise
- where R is a scalar value representing the radius of the radial kernel.
- ii) Inverse Gaussian kernel, centred at P_0 (which would obscure the targeted region, allowing other areas to be viewed):
K_{P_0,θ}(X) = 1 − exp(−‖X − P_0‖² / (2θ²))
- where θ is a scalar value representing the width of the Gaussian.
- The proposed kernel combines a position-based kernel and an intensity-based kernel:
V_o(X) = K_{P,θ}^proposed({X,V}) = (K_{P_0,θ_1}^position(X))^λ · (K_{V_R,θ_2}^intensity(V(X)))^(1−λ)
- where λ is a trade-off factor between opacity being governed by intensity (λ = 0) or by the position-based kernel (λ = 1), K_{P_0,θ_1}^position is a position-based kernel, for example any of the kernels described above, and K_{V_R,θ_2}^intensity is an intensity-based kernel, for example a Gaussian on the distance between the voxel value and a reference value V_R :
K_{V_R,θ_2}^intensity(V(X)) = exp(−‖V(X) − V_R‖² / (2θ_2²))
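A sketch of this combined kernel in Python/NumPy, using Gaussian forms for both component kernels and the weighted geometric combination given above (parameter names and default values are assumptions):

```python
import numpy as np

def combined_opacity(volume, p0, v_ref, lam=0.5, theta1=10.0, theta2=0.2):
    """Blend a position-based and an intensity-based kernel via the
    trade-off factor lam (λ): λ=0 gives intensity-only opacity,
    λ=1 gives position-only opacity."""
    zz, yy, xx = np.indices(volume.shape)
    d2 = (zz - p0[0]) ** 2 + (yy - p0[1]) ** 2 + (xx - p0[2]) ** 2

    k_position = np.exp(-d2 / (2 * theta1**2))                      # Gaussian centred at P0
    k_intensity = np.exp(-(volume - v_ref) ** 2 / (2 * theta2**2))  # Gaussian on value distance

    return k_position**lam * k_intensity ** (1.0 - lam)

volume = np.random.rand(32, 32, 32).astype(np.float32)
alpha = combined_opacity(volume, p0=(16, 16, 16), v_ref=volume[16, 16, 16], lam=0.7)
```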
Claims (13)
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1913832 | 2019-09-25 | ||
| GB1913832.0 | 2019-09-25 | ||
| GB201913832A GB201913832D0 (en) | 2019-09-25 | 2019-09-25 | Method and apparatus for navigation and display of 3d image data |
| PCT/GB2020/052337 WO2021058981A1 (en) | 2019-09-25 | 2020-09-25 | Computer implemented method and system for navigation and display of 3d image data |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220343605A1 (en) | 2022-10-27 |
| US11995773B2 (en) | 2024-05-28 |
Family
ID=68425503
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/763,728 Active 2040-12-26 US11995773B2 (en) | 2019-09-25 | 2020-09-25 | Computer implemented method and system for navigation and display of 3D image data |
Country Status (6)
| Country | Link |
|---|---|
| US (1) | US11995773B2 (en) |
| EP (1) | EP4022578A1 (en) |
| JP (1) | JP2022551060A (en) |
| CN (1) | CN115104129A (en) |
| GB (1) | GB201913832D0 (en) |
| WO (1) | WO2021058981A1 (en) |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240054700A1 (en) * | 2021-01-15 | 2024-02-15 | Koninklijke Philips N.V. | Post-processing for radiological images |
| US20230068315A1 (en) * | 2021-08-24 | 2023-03-02 | Biosense Webster (Israel) Ltd. | Anatomically correct reconstruction of an atrium |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20010055016A1 (en) * | 1998-11-25 | 2001-12-27 | Arun Krishnan | System and method for volume rendering-based segmentation |
| US20110262023A1 (en) * | 2008-10-08 | 2011-10-27 | Tomtec Imaging Systems Gmbh | Method of filtering an image dataset |
| US20140187948A1 (en) * | 2012-12-31 | 2014-07-03 | General Electric Company | Systems and methods for ultrasound image rendering |
| US20190099159A1 (en) * | 2017-09-29 | 2019-04-04 | Siemens Healthcare Gmbh | Measurement Point Determination in Medical Diagnostic Imaging |
| US20190110198A1 (en) * | 2017-09-18 | 2019-04-11 | Element Inc. | Methods, systems, and media for detecting spoofing in mobile authentication |
| US20200184640A1 (en) * | 2018-12-05 | 2020-06-11 | Stryker Corporation | Systems and methods for displaying medical imaging data |
Family Cites Families (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2001340336A (en) * | 2000-06-01 | 2001-12-11 | Toshiba Medical System Co Ltd | Ultrasonic diagnostic apparatus and ultrasonic diagnostic method |
| US6692441B1 (en) * | 2002-11-12 | 2004-02-17 | Koninklijke Philips Electronics N.V. | System for identifying a volume of interest in a volume rendered ultrasound image |
| JP2006000338A (en) * | 2004-06-17 | 2006-01-05 | Fuji Photo Film Co Ltd | Image processing method, apparatus, and program |
| GB2416944A (en) * | 2004-07-30 | 2006-02-08 | Voxar Ltd | Classifying voxels in a medical image |
| JP5161991B2 (en) * | 2011-03-25 | 2013-03-13 | 株式会社東芝 | Image processing device |
| JP5693412B2 (en) * | 2011-07-26 | 2015-04-01 | キヤノン株式会社 | Image processing apparatus and image processing method |
| US9612657B2 (en) * | 2013-03-14 | 2017-04-04 | Brainlab Ag | 3D-volume viewing by controlling sight depth |
| GB201415534D0 (en) * | 2014-09-02 | 2014-10-15 | Bergen Teknologioverforing As | Method and apparatus for processing three-dimensional image data |
| US9659405B2 (en) * | 2015-04-01 | 2017-05-23 | Toshiba Medical Systems Corporation | Image processing method and apparatus |
| US10342633B2 (en) * | 2016-06-20 | 2019-07-09 | Toshiba Medical Systems Corporation | Medical image data processing system and method |
-
2019
- 2019-09-25 GB GB201913832A patent/GB201913832D0/en not_active Ceased
-
2020
- 2020-09-25 JP JP2022519138A patent/JP2022551060A/en active Pending
- 2020-09-25 US US17/763,728 patent/US11995773B2/en active Active
- 2020-09-25 EP EP20799791.7A patent/EP4022578A1/en active Pending
- 2020-09-25 CN CN202080067305.8A patent/CN115104129A/en active Pending
- 2020-09-25 WO PCT/GB2020/052337 patent/WO2021058981A1/en not_active Ceased
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20010055016A1 (en) * | 1998-11-25 | 2001-12-27 | Arun Krishnan | System and method for volume rendering-based segmentation |
| US20110262023A1 (en) * | 2008-10-08 | 2011-10-27 | Tomtec Imaging Systems Gmbh | Method of filtering an image dataset |
| US20140187948A1 (en) * | 2012-12-31 | 2014-07-03 | General Electric Company | Systems and methods for ultrasound image rendering |
| US20190110198A1 (en) * | 2017-09-18 | 2019-04-11 | Element Inc. | Methods, systems, and media for detecting spoofing in mobile authentication |
| US20190099159A1 (en) * | 2017-09-29 | 2019-04-04 | Siemens Healthcare Gmbh | Measurement Point Determination in Medical Diagnostic Imaging |
| US20200184640A1 (en) * | 2018-12-05 | 2020-06-11 | Stryker Corporation | Systems and methods for displaying medical imaging data |
Non-Patent Citations (1)
| Title |
|---|
| International Search Report in corresponding PCT/GB2020/052337 dated Dec. 3, 2020. |
Also Published As
| Publication number | Publication date |
|---|---|
| US20220343605A1 (en) | 2022-10-27 |
| GB201913832D0 (en) | 2019-11-06 |
| WO2021058981A1 (en) | 2021-04-01 |
| JP2022551060A (en) | 2022-12-07 |
| EP4022578A1 (en) | 2022-07-06 |
| CN115104129A (en) | 2022-09-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Zhang et al. | Volume visualization: a technical overview with a focus on medical applications | |
| RU2497194C2 (en) | Method and device for 3d visualisation of data sets | |
| EP2486548B1 (en) | Interactive selection of a volume of interest in an image | |
| US20050228250A1 (en) | System and method for visualization and navigation of three-dimensional medical images | |
| US10593099B2 (en) | Transfer function determination in medical imaging | |
| US20070276214A1 (en) | Systems and Methods for Automated Segmentation, Visualization and Analysis of Medical Images | |
| JP2009034521A (en) | System and method for volume rendering data in medical diagnostic imaging, and computer readable storage medium | |
| Grigoryan et al. | Probabilistic surfaces: Point based primitives to show surface uncertainty | |
| US11995773B2 (en) | Computer implemented method and system for navigation and display of 3D image data | |
| JP6560745B2 (en) | Visualizing volumetric images of anatomy | |
| Chen et al. | LD-UNet: A long-distance perceptual model for segmentation of blurred boundaries in medical images | |
| US20050197558A1 (en) | System and method for performing a virtual endoscopy in a branching structure | |
| KR20230159696A (en) | Methods and systems for processing multi-modal and/or multi-source data in a medium | |
| CN108701492A (en) | Medical image navigation system | |
| Spann et al. | Interactive visualisation of the food content of a human stomach in MRI | |
| CN115153621A (en) | Model-based automatic navigation system and method for ultrasound images | |
| EP4524900A1 (en) | Method and system for improved interaction with medical 2d and 3d visualization | |
| US12456273B2 (en) | Ultrasound imaging system and method for generating and displaying a colorized surface rendering | |
| Diepenbrock | Rapid development of applications for the interactive visual analysis of multimodal medical data | |
| Kirmizibayrak | Interactive volume visualization and editing methods for surgical applications | |
| Haidacher | Importance-driven rendering in interventional imaging | |
| Zhang et al. | Medical image volumetric visualization: Algorithms, pipelines, and surgical applications | |
| Kim et al. | Data visualization and display | |
| EP1523733A2 (en) | Planar reformat internal surface viewer | |
| Welsh et al. | Brain miner: a 3D visual interface for the investigation of functional relationships in the brain |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| AS | Assignment |
Owner name: KING'S COLLEGE LONDON, UNITED KINGDOM Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HERRERO, ALBERTO GOMEZ;WHEELER, GAVIN;SCHNABEL, JULIA;AND OTHERS;SIGNING DATES FROM 20220204 TO 20220221;REEL/FRAME:067172/0437 Owner name: GUY'S AND ST THOMAS' NHS FOUNDATION TRUST, UNITED KINGDOM Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIMPSON, JOHN;REEL/FRAME:067169/0467 Effective date: 20220303 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |