US20210264574A1 - Correcting image blur in medical image - Google Patents
Correcting image blur in medical image
- Publication number
- US20210264574A1 (U.S. application Ser. No. 17/318,363)
- Authority
- US
- United States
- Prior art keywords
- image
- medical
- training input
- images
- medical images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS; G06—COMPUTING OR CALCULATING; COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
- G06T5/70—Denoising; Smoothing
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10132—Ultrasound image
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
- G06T2207/20212—Image combination
- G06T2207/20216—Image averaging
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30168—Image quality inspection
Definitions
- the field of the embodiments relate to a device to correct an image blur within a medical image.
- the corrective mechanism may sharpen blurred edge(s) of an object of interest by processing the medical image with a deep learning model.
- a modern mobile device includes components to provide a variety of services such as communication, display, imaging, voice, and/or data capture, among others. The abilities of the modern mobile device grow substantially when it is networked to other resources that provide a previously unimagined range of services associated with medical imaging.
- Ultrasound and other medical imaging devices address noise-related issues during an imaging session by scanning a variety of images of a biological structure of a patient.
- the scanned images are combined with an averaging process that reduces and/or eliminates noise inherent in an imaging session.
- blurring effects are introduced into the resulting medical image as an artifact of the averaging process.
- the blurring effects may diminish the chances of a correct diagnosis that relies on distinguishable edges associated with an object of interest within the medical image.
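The noise-versus-sharpness trade-off described above can be shown with a small numeric sketch. The frame count, ±2-pixel probe motion, and noise level below are assumptions purely for illustration, not values from the application:

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_at(pos, n=100):
    """1-D scan line with a sharp edge at index `pos`."""
    x = np.zeros(n)
    x[pos:] = 1.0
    return x

# Simulate 8 noisy captures with slight (+/-2 px) motion between frames
shifts = [-2, -1, -1, 0, 0, 1, 1, 2]
frames = [edge_at(50 + s) + rng.normal(0.0, 0.05, 100) for s in shifts]
avg = np.mean(frames, axis=0)

# Averaging suppresses noise (measured in a flat region) ...
noise_single = frames[0][:40].std()
noise_avg = avg[:40].std()

# ... but widens the edge transition (pixels strictly between 0.1 and 0.9)
edge_width = int(np.sum((avg > 0.1) & (avg < 0.9)))
print(noise_avg < noise_single, edge_width)
```

The averaged line is far less noisy than any single capture, yet the once-instantaneous edge now ramps over several pixels, which is exactly the blur the described device sets out to correct.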
- the present invention and its embodiments relate to a device to correct an image blur in a medical image.
- the device may be configured to receive the medical image from a medical image provider.
- the medical image provider may include a medical imaging device.
- the medical image may include an ultrasound scan of a biological structure (such as an organ) of a patient.
- the image blur may be detected within the medical image by analyzing the medical image.
- the image blur may result from a process to reduce noise inherent in the medical imaging process by generating an averaged image of a variety of medical ultrasound images (captured during an ultrasound session).
- the medical image may be processed with a deep learning model to correct the image blur.
- the deep learning model may be generated with a training input set of averaged images and an expected output set of de-blurred images corresponding to the averaged images.
- a de-blurred medical image may be generated in response to processing the medical image.
- the de-blurred medical image may be provided for a presentation or a continued analysis.
- a mobile device for correcting an image blur in a medical ultrasound image.
- the mobile device may include a memory configured to store instructions associated with an image analysis application.
- a processor may be coupled to the memory.
- the processor may execute the instructions associated with the image analysis application.
- the image analysis application may include a neural network module.
- the neural network module may be configured to receive the medical ultrasound image from a medical image provider.
- the image blur may be detected within the medical ultrasound image by analyzing the medical ultrasound image.
- the image blur may result from a noise reduced average of ultrasound session images of a biological structure of a patient.
- the medical ultrasound image may subsequently be processed with a deep learning model to correct the image blur.
- a de-blurred medical ultrasound image may be generated.
- the de-blurred medical ultrasound image may be provided for a presentation or a continued analysis.
- a method of correcting an image blur in a medical ultrasound image includes receiving the medical ultrasound image from a medical image provider.
- the image blur may be detected within the medical ultrasound image by analyzing the medical ultrasound image.
- the image blur may result from a noise reduced average of ultrasound session images of a biological structure of a patient.
- the medical ultrasound image may be processed with a deep learning model to correct the image blur.
- a de-blurred medical ultrasound image may be generated.
- the de-blurred medical ultrasound image may be provided for a presentation or a continued analysis.
- FIG. 1 shows a conceptual diagram illustrating examples of correcting an image blur in a medical image, according to an embodiment of the invention.
- FIG. 2 shows a display diagram illustrating components of a neural network mechanism to correct an image blur in a medical image, according to an embodiment of the invention.
- FIG. 3 shows another display diagram illustrating components of a neural network mechanism to correct an image blur in a medical image, according to an embodiment of the invention.
- FIG. 4 is a block diagram of an example computing device, which may be used to correct an image blur in a medical image.
- FIG. 5 is a logic flow diagram illustrating a process for correcting an image blur in a medical image, according to an embodiment of the invention.
- FIG. 1 shows a conceptual diagram illustrating examples of correcting an image blur in a medical image.
- a mobile device 104 may execute (or provide) an image analysis application 106 .
- the mobile device 104 may include a physical computing device hosting and/or providing features associated with a client application (such as the image analysis application 106 ).
- the mobile device 104 may include and/or be part of a smart phone, a tablet based device, and/or a laptop computer, among others.
- the mobile device 104 may also be a node of a network.
- the network may also include other nodes such as the medical image provider 112 , among others.
- the network may connect nodes with wired and wireless infrastructure.
- the mobile device 104 may execute the image analysis application 106 .
- the image analysis application 106 may receive a medical image 108 from a medical image provider 112 .
- An example of the medical image 108 may include an ultrasound image (or scan).
- Other examples of the medical image 108 may include an X-ray image, a magnetic resonance imaging (MRI) scan, a computed tomography (CT) scan, and/or a positron emission tomography (PET) scan, among others.
- the medical image provider 112 may include a medical imaging device/system that captures, manages, and/or presents the medical image 108 to a user 102 .
- the user 102 may include a doctor, a nurse, a technician, a patient, and/or an administrator, among others.
- the user 102 may use the medical image 108 to diagnose an issue, a malignancy (cancer), and/or other illness associated with a patient.
- the medical image 108 and a de-blurred medical image 114 may include an object of interest (OI) 110 .
- the OI 110 may include a biological structure of a patient.
- the OI 110 may include a malignant or a benign tumor.
- the OI 110 may represent another structure associated with an organ and/or other part of the patient.
- the image analysis application 106 may next detect an image blur 111 within the medical image 108 by analyzing the medical image 108 .
- the image blur 111 may result from an averaging process to combine multiple images captured during an imaging session (such as an ultrasound session) of a biological structure of a patient.
- the medical imaging device (conducting the imaging session) may combine the scanned images with an averaging process to generate the medical image 108 .
- the averaging process may reduce noise inherent in the capture process associated with the imaging session.
- the averaging process may blur edge(s) of the biological structure of the patient within the medical image 108 . Sharp edges may be critical to automated and/or manual diagnosis of an illness such as cancer. Blurred edges caused by the averaging process may hinder attempts at automated and/or manual diagnosis.
- the medical image 108 may be processed with a deep learning model to correct the image blur 111 .
- the deep learning model may be generated using a training input set and an expected output set.
- the training input set may include averaged images (associated with medical imaging sessions), and the expected output set may include de-blurred images corresponding to the averaged images.
- a de-blurred medical image 114 may be generated.
- the de-blurred medical image 114 may include the OI 110 with sharpened edges. Subsequently, the de-blurred medical image 114 may be provided for a presentation to the user 102 or a continued analysis by a downstream analysis application/service.
- the image analysis application 106 may perform operations associated with correcting the image blur in the medical image 108 as a desktop application, a workstation application, and/or a server application, among others.
- the image analysis application 106 may also be a client interface of a server based application.
- the user 102 may interact with the image analysis application 106 with a keyboard based input, a mouse based input, a voice based input, a pen based input, and a gesture based input, among others.
- the gesture based input may include one or more touch based actions such as a touch action, a swipe action, and a combination of each, among others.
- While FIG. 1 has been described with specific components, including the mobile device 104 and the image analysis application 106 , embodiments are not limited to these components or system configurations and can be implemented with other system configurations employing fewer or additional components.
- FIG. 2 shows a display diagram illustrating components of a neural network mechanism to correct an image blur 111 in the medical image 108 .
- the image analysis application 106 (executed by the mobile device 104 ) may process the medical image 108 with a neural network module 216 .
- An example of the medical image 108 may be an ultrasound image (or scan).
- the medical image 108 may also include the OI 110 such as a biological structure of the patient.
- the medical imaging device (used to capture the medical image 108 ) may generate the medical image 108 with an image blur 111 .
- the image blur 111 may soften edge(s) of the OI 110 . Sharp edges associated with the OI 110 may be critical to manual or automated diagnosis. As such, the capture process of the medical imaging device may diminish a probability of correct diagnosis associated with the OI 110 .
- the capture process may record several images of the OI 110 and combine the images with an averaging process to generate the medical image 108 .
- the averaging process may remove noise associated with the capture process but soften the edges of the OI 110 .
- the image analysis application 106 may sharpen edges associated with the OI 110 .
- the neural network module 216 of the image analysis application 106 may process the medical image 108 .
- the neural network module 216 may process the medical image 108 with a deep learning model 218 .
- the deep learning model 218 may be generated with a training input set 220 and an expected output set 222 .
- the image analysis application 106 may generate the deep learning model 218 .
- the image analysis application 106 may retrieve the deep learning model 218 from an external service provider.
- the training input set 220 may include averaged images of prior imaging sessions (from a variety of patients). Each of the averaged images may include a noise reduced average of several medical images (such as ultrasound images) captured during an imaging session (such as an ultrasound session). Edge(s) of OI(s) within the averaged images may be blurred as a result of the averaging process to reduce noise.
- the expected output set 222 may include de-blurred images corresponding to the averaged images. Edge(s) of the OI(s) within each of the de-blurred images may be sharpened. The sharpening effect may be applied automatically and/or manually to the edge(s).
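As one illustration of how a sharpening effect could be applied when preparing such expected outputs, the sketch below uses classic unsharp masking; this is a stand-in technique, not necessarily the method used in the application:

```python
import numpy as np

def unsharp_mask(img, amount=1.5):
    """Sharpen by boosting the difference between the image and a
    3x3 box-blurred copy of itself (unsharp masking)."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # 3x3 box blur built from the nine shifted copies of the padded image
    blurred = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# A soft vertical edge: sharpening steepens the transition
soft = np.tile(np.array([0.0, 0.1, 0.5, 0.9, 1.0]), (5, 1))
sharp = unsharp_mask(soft)
```

After sharpening, pixels on the dark side of the transition move toward 0 and pixels on the bright side toward 1, so the edge profile becomes steeper while the mid-point is preserved.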
- the deep learning model 218 may be trained based on an analysis of the training input set 220 and the expected output set 222 . The training process may form the deep learning model 218 based on how the sharpening effect is applied in the expected output set 222 to correct the softened edges of the OIs within the training input set 220 .
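A deep network is the stated tool; purely to make the input/expected-output training structure concrete, the sketch below fits a much simpler linear stand-in (a 5-tap deconvolution kernel) from synthetic (averaged, sharp) 1-D pairs by least squares. The 3-tap box average used as the "session averaging" blur is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5  # size of the linear "deconvolution" kernel to learn

def blur(x):
    # Stand-in for the session averaging that softens edges: a 3-tap box average
    return np.convolve(x, np.ones(3) / 3.0, mode="same")

# Synthetic (training input, expected output) pairs: blurred vs. sharp signals
train_in, train_out = [], []
for _ in range(200):
    sharp = (rng.random(64) > 0.5).astype(float)  # random step-like signal
    train_in.append(blur(sharp))
    train_out.append(sharp)

# Fit the kernel by least squares over sliding windows of the blurred inputs
X = np.concatenate(
    [np.lib.stride_tricks.sliding_window_view(b, K) for b in train_in])
y = np.concatenate([s[K // 2: -(K // 2)] for s in train_out])
kernel, *_ = np.linalg.lstsq(X, y, rcond=None)

# The learned kernel should undo much of the blur on an unseen soft edge
test_sharp = np.zeros(64)
test_sharp[32:] = 1.0
restored = np.convolve(blur(test_sharp), kernel[::-1], mode="same")
mse_restored = float(np.mean((restored[5:-5] - test_sharp[5:-5]) ** 2))
mse_blurred = float(np.mean((blur(test_sharp)[5:-5] - test_sharp[5:-5]) ** 2))
```

The learned kernel recovers a sharper edge than the blurred input; a deep model generalizes the same idea to nonlinear, two-dimensional mappings learned from the averaged/de-blurred image pairs.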
- the neural network module 216 may process the medical image 108 with the deep learning model 218 to remove the image blur 111 .
- the image blur 111 may be removed by sharpening softened edge(s) of the OI 110 .
- the neural network module 216 may generate a de-blurred medical image 114 .
- the de-blurred medical image 114 may include the OI with sharpened edge(s).
- FIG. 3 shows another display diagram illustrating components of a neural network mechanism to correct the image blur in the medical image 108 .
- the image analysis application 106 (executed by the mobile device 104 ) may process the medical image 108 that includes the OI 110 .
- the medical image 108 may include the image blur.
- the image blur may result in the OI 110 having a blurred edge 324 .
- the capture process (to record the medical image 108 ) may introduce the blurred edge 324 to the OI 110 .
- the capture process may combine multiple images recorded during an imaging session to produce an averaged image (the medical image 108 ) that diminishes noise but softens the OI 110 , causing the blurred edge 324 .
- the neural network module 216 may process the medical image 108 with the deep learning model 218 to identify and correct the blurred edge 324 of the OI 110 .
- the neural network module 216 may produce the de-blurred medical image 114 .
- the de-blurred medical image 114 may include the OI 110 with a sharpened edge 326 (among other sharpened edges).
- the sharpened edge 326 of the OI 110 may allow a downstream diagnosis process (or a user) to correctly diagnose a malignancy, an issue, and/or an illness associated with the OI 110 .
- the image analysis application 106 may receive the medical image 108 from a medical imaging device. Alternatively, the image analysis application 106 may receive the medical image 108 from a camera component 325 of the mobile device 104 .
- the camera component 325 may record the medical image 108 as a copy of a scanned image displayed by a display device associated with the medical imaging device.
- the scanned image may represent an imaging session of a biological structure of a patient (recorded by the medical imaging device).
- the medical image 108 may also include a three dimensional image (which may represent components of a biological structure of a patient in three dimensions).
- the neural network module 216 may determine whether the medical image 108 includes metadata.
- the neural network module 216 may analyze the medical image 108 (to detect the image blur) by evaluating the metadata. For example, the neural network module 216 may identify an annotation associated with the medical image 108 within the metadata.
- the annotation may designate an averaging process used to generate the medical image from scanned images of a scanning session of a biological structure of a patient.
- the neural network module 216 may designate the medical image 108 as including the image blur.
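A minimal sketch of such a metadata check follows; the tag names are invented for illustration and do not correspond to an actual DICOM or vendor schema:

```python
def has_averaging_blur(metadata: dict) -> bool:
    """Flag the image as blurred when its metadata annotates an
    averaging step or reports more than one averaged frame."""
    annotation = str(metadata.get("ProcessingAnnotation", "")).lower()
    frames = int(metadata.get("FramesAveraged", 1))
    return "averag" in annotation or frames > 1

print(has_averaging_blur({"ProcessingAnnotation": "8-frame averaging"}))  # True
print(has_averaging_blur({"Modality": "US"}))                             # False
```

When such an annotation is found, the image can be designated as containing the image blur and routed to the deblurring model without pixel-level analysis.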
- the image analysis application 106 may receive a selection of a region of interest (ROI) of the medical image 108 from a user.
- the neural network module 216 may focus an image blur detection process and analyze the ROI to identify the image blur within the ROI.
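One way such a focused blur check could work is an edge-strength score computed only over the user-selected ROI; the discrete-Laplacian metric below is illustrative, not a method specified in the application:

```python
import numpy as np

def roi_blur_score(image, roi):
    """Mean absolute discrete Laplacian inside a (r0, r1, c0, c1) ROI.
    Lower scores mean weaker edges, i.e. more blur."""
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1].astype(float)
    lap = (np.roll(patch, 1, 0) + np.roll(patch, -1, 0)
           + np.roll(patch, 1, 1) + np.roll(patch, -1, 1) - 4.0 * patch)
    return float(np.abs(lap[1:-1, 1:-1]).mean())  # drop the wrap-around border

sharp = np.zeros((16, 16))
sharp[:, 8:] = 1.0                                   # crisp vertical edge
soft = (sharp + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 3.0  # softened

score_sharp = roi_blur_score(sharp, (0, 16, 4, 12))
score_soft = roi_blur_score(soft, (0, 16, 4, 12))
```

The softened edge scores lower than the crisp one, so comparing the ROI score against a threshold (or against a reference region) can flag the selected region as blurred.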
- the medical image 108 may be processed with the deep learning model 218 in real time or offline to remove the image blur and generate the de-blurred medical image 114 (with the OI 110 having the sharpened edge 326 ).
- the neural network module 216 may process the medical image 108 and subsequent image(s) of a time sequence based scanning session (of a biological structure of a patient) with the deep learning model 218 in real time or offline.
- the neural network module 216 may generate the de-blurred medical image 114 and de-blurred subsequent image(s) in response to processing of the medical image 108 and the subsequent image(s). Alternatively, the neural network module 216 may generate a de-blurred video stream or animation by processing the medical image 108 and the subsequent image(s) with the deep learning model 218 .
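The real-time path over a time sequence can be sketched as a per-frame stream; the generator shape and the no-op stand-in model below are illustrative only:

```python
def deblur_stream(frames, deblur):
    """Apply a per-frame deblurring function to a time-sequence scan,
    yielding corrected frames as they become available (real-time path)."""
    for frame in frames:
        yield deblur(frame)

# Stand-in "model": a no-op that merely tags each frame as processed
session = [{"t": t, "deblurred": False} for t in range(3)]
corrected = list(deblur_stream(session, lambda f: {**f, "deblurred": True}))
```

Collecting the yielded frames (as done here with `list`) corresponds to the offline case; consuming them one by one corresponds to a de-blurred live video stream.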
- FIGS. 1 through 3 are shown with specific components, data types, and configurations. Embodiments are not limited to systems according to these example configurations.
- a device to correct an image blur in the medical image 108 may be implemented in configurations employing fewer or additional components in applications and user interfaces.
- the example schema and components shown in FIGS. 1 through 3 and their subcomponents may be implemented in a similar manner with other values using the principles described herein.
- FIG. 4 is a block diagram of an example computing device, which may be used to correct an image blur in a medical image, according to embodiments.
- computing device 400 may be used as a server, desktop computer, portable computer, smart phone, special purpose computer, or similar device.
- the computing device 400 may include one or more processors 404 and a system memory 406 .
- a memory bus 408 may be used for communication between the processor 404 and the system memory 406 .
- the basic configuration 402 may be illustrated in FIG. 4 by those components within the inner dashed line.
- the processor 404 may be of any type, including but not limited to a microprocessor ( ⁇ P), a microcontroller ( ⁇ C), a digital signal processor (DSP), or any combination thereof.
- the processor 404 may include one or more levels of caching, such as a level cache memory 412 , one or more processor cores 414 , and registers 416 .
- the example processor cores 414 may (each) include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP Core), a graphics processing unit (GPU), or any combination thereof.
- An example memory controller 418 may also be used with the processor 404 , or in some implementations, the memory controller 418 may be an internal part of the processor 404 .
- the system memory 406 may be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof.
- the system memory 406 may include an operating system 420 , the image analysis application 106 , and a program data 424 .
- the image analysis application 106 may include components such as the neural network module 216 .
- the neural network module 216 may execute the instructions and processes associated with the image analysis application 106 .
- the neural network module 216 may receive a medical image from a medical image provider. Next, an image blur may be detected within the medical image by analyzing the medical image. The medical image may be processed with a deep learning model to correct the image blur. Subsequently, a de-blurred medical image may be generated. The de-blurred medical image may be provided for a presentation or a continued analysis.
- Input to and output out of the image analysis application 106 may be captured and displayed through a display component that may be integrated into the computing device 400 .
- the display component may include a display screen, and/or a display monitor, among others that may capture an input through a touch/gesture based component such as a digitizer.
- the program data 424 may also include, among other data, the medical image 108 , or the like, as described herein.
- the medical image 108 may be processed with the deep learning model to correct softened edges of an OI introduced during the imaging process, among other things.
- the computing device 400 may have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 402 and any desired devices and interfaces.
- a bus/interface controller 430 may be used to facilitate communications between the basic configuration 402 and one or more data storage devices 432 via a storage interface bus 434 .
- the data storage devices 432 may be one or more removable storage devices 436 , one or more non-removable storage devices 438 , or a combination thereof.
- Examples of the removable storage and the non-removable storage devices may include magnetic disk devices, such as flexible disk drives and hard-disk drives (HDDs), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSDs), and tape drives, to name a few.
- Example computer storage media may include volatile and nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data.
- the system memory 406 , the removable storage devices 436 and the non-removable storage devices 438 are examples of computer storage media.
- Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs), solid state drives, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by the computing device 400 . Any such computer storage media may be part of the computing device 400 .
- the computing device 400 may also include an interface bus 440 for facilitating communication from various interface devices (for example, one or more output devices 442 , one or more peripheral interfaces 444 , and one or more communication devices 466 ) to the basic configuration 402 via the bus/interface controller 430 .
- Some of the example output devices 442 include a graphics processing unit 448 and an audio processing unit 450 , which may be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 452 .
- One or more example peripheral interfaces 444 may include a serial interface controller 454 or a parallel interface controller 456 , which may be configured to communicate with external devices such as input devices (for example, keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (for example, printer, scanner, etc.) via one or more I/O ports 458 .
- An example of the communication device(s) 466 includes a network controller 460 , which may be arranged to facilitate communications with one or more other computing devices 462 over a network communication link via one or more communication ports 464 .
- the one or more other computing devices 462 may include servers, computing devices, and comparable devices.
- the network communication link may be one example of a communication media.
- Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media.
- a “modulated data signal” may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media.
- the term computer readable media as used herein may include both storage media and communication media.
- the computing device 400 may be implemented as a part of a specialized server, mainframe, or similar computer, which includes any of the above functions.
- the computing device 400 may also be implemented as a personal computer including both laptop computer and non-laptop computer configurations. Additionally, the computing device 400 may include specialized hardware such as an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic device (PLD), and/or a free form logic on an integrated circuit (IC), among others.
- Example embodiments may also include methods to correct an image blur in a medical image. These methods can be implemented in any number of ways, including the structures described herein. One such way may be by machine operations, of devices of the type described in the present disclosure. Another optional way may be for one or more of the individual operations of the methods to be performed in conjunction with one or more human operators performing some of the operations while other operations may be performed by machines. These human operators need not be collocated with each other, but each can operate only with a machine that performs a portion of the program. In other embodiments, the human interaction can be automated, such as by pre-selected criteria that may be machine automated.
- FIG. 5 is a logic flow diagram illustrating a process for correcting an image blur in a medical image.
- Process 500 may be implemented on a computing device, such as the computing device 400 or another system.
- Process 500 begins with operation 510 , where an image analysis application may receive a medical image from a medical image provider.
- the medical image may include an ultrasound image of a biological structure of a patient.
- the image blur within the medical image may be detected by analyzing the medical image.
- the medical image may be processed with a deep learning model to correct the image blur.
- a de-blurred medical image may be generated in response to processing the medical image.
- the de-blurred medical image may include sharpened edge(s) of an OI.
- the de-blurred medical image may be provided for a presentation or a continued analysis.
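The operations of process 500 can be sketched end to end as follows; the function and callback names are invented for illustration, with the detection and model steps standing in for the components described above:

```python
def correct_image_blur(medical_image, detect_blur, deblur_model):
    """Sketch of process 500: receive an image, detect blur by analysis,
    correct it with a model, and return the result for presentation."""
    if not detect_blur(medical_image):
        return medical_image            # nothing to correct
    return deblur_model(medical_image)  # process with the deep learning model

# Toy usage with stand-in detection and model callbacks
image = {"pixels": [0.2, 0.5, 0.8], "blurred": True}
result = correct_image_blur(
    image,
    detect_blur=lambda img: img["blurred"],
    deblur_model=lambda img: {**img, "blurred": False},
)
```

The returned image can then be handed to a display component for presentation or to a downstream analysis service, matching the final operation of the flow.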
- process 500 is for illustration purposes. Correcting an image blur in a medical image may be implemented by similar processes with fewer or additional steps, as well as in different order of operations using the principles described herein.
- the operations described herein may be executed by one or more processors operated on one or more computing devices, one or more processor cores, specialized processing devices, and/or special purpose processors, among other examples.
- a method of correcting an image blur within a medical ultrasound image includes receiving the medical ultrasound image from a medical image provider.
- the image blur may be detected within the medical ultrasound image by analyzing the medical ultrasound image.
- the image blur may result from a noise reduced average of several ultrasound session images of a biological structure of a patient.
- the medical ultrasound image may be processed with a deep learning model to correct the image blur.
- a de-blurred medical ultrasound image may be generated.
- the de-blurred medical ultrasound image may be provided for a presentation or a continued analysis.
- the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements.
- the adjective “another,” when used to introduce an element, is intended to mean one or more elements.
- the terms “including” and “having” are intended to be inclusive such that there may be additional elements other than the listed elements.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Processing (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Abstract
Description
- This application is a continuation application relating to and claiming the benefit of commonly-owned, co-pending PCT International Application No. PCT/US2019/061464, filed Nov. 14, 2019, entitled “CORRECTING IMAGE BLUR IN A MEDICAL IMAGE,” which claims priority to and the benefit of commonly owned U.S. Ser. No. 16/190,652, filed Nov. 14, 2018 entitled “CORRECTING IMAGE BLUR IN MEDICAL IMAGE,” which issued as U.S. Pat. No. 10,290,084 on May 14, 2019, the entireties of which are incorporated herein by reference.
- The field of the embodiments relates to a device to correct an image blur within a medical image. The corrective mechanism may sharpen blurred edge(s) of an object of interest by processing the medical image with a deep learning model.
- Information exchanges have changed processes associated with work and personal environments. Automation and process improvements have expanded the scope of capabilities offered for personal and business data consumption. With the development of faster and smaller electronics, a variety of mobile devices have been integrated into daily life. A modern mobile device includes components that provide a variety of services such as communication, display, imaging, voice, and/or data capture, among others. The capabilities of a modern mobile device expand substantially when it is networked to other resources that provide a previously unimagined range of services associated with medical imaging.
- Ultrasound and other medical imaging devices address noise-related issues during an imaging session by capturing a variety of images of a biological structure of a patient. The scanned images are combined with an averaging process that reduces and/or eliminates noise inherent in an imaging session. As an artifact of the averaging process, blurring effects are introduced into the resulting medical image. The blurring effects may diminish the chances of a correct diagnosis that relies on distinguishable edges associated with an object of interest within the medical image.
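The noise-versus-blur trade-off described above can be sketched numerically. The following is a hypothetical 1-D illustration (simulated scans of a synthetic edge, not real ultrasound data):

```python
import numpy as np

# Hypothetical 1-D illustration: average several noisy, slightly shifted
# "scans" of a sharp edge. Averaging suppresses the random noise, but the
# shift between scans smears the edge -- the blurring artifact described.
rng = np.random.default_rng(0)

edge = np.zeros(20)
edge[10:] = 1.0  # ideal sharp edge at index 10

scans = [
    np.roll(edge, shift) + rng.normal(0.0, 0.05, size=20)
    for shift in (-1, 0, 1)  # simulated probe/patient motion between scans
]
averaged = np.mean(scans, axis=0)

# Per-sample noise shrinks by roughly 1/sqrt(3), but the edge transition
# that occupied one sample now ramps across three.
print(averaged[8:13].round(2))
```

The printed window shows the single-step edge replaced by a 0 → 1/3 → 2/3 → 1 ramp, which is exactly the softened-edge artifact the averaging step introduces.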
- The present invention and its embodiments relate to a device to correct an image blur in a medical image. The device may be configured to receive the medical image from a medical image provider. The medical image provider may include a medical imaging device. The medical image may include an ultrasound scan of a biological structure (such as an organ) of a patient. Next, the image blur may be detected within the medical image by analyzing the medical image. The image blur may result from a process to reduce noise inherent in the medical imaging process by generating an averaged image of a variety of medical ultrasound images (captured during an ultrasound session). Furthermore, the medical image may be processed with a deep learning model to correct the image blur. The deep learning model may be generated with a training input set of averaged images and an expected output set of de-blurred images corresponding to the averaged images. A de-blurred medical image may be generated in response to processing the medical image. In addition, the de-blurred medical image may be provided for a presentation or a continued analysis.
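The patent does not specify how the analysis detects the blur; one common classical proxy, shown here purely as an illustrative assumption, is the variance of a discrete Laplacian (low variance suggests few strong edges, i.e. a likely-blurred image):

```python
import numpy as np

def laplacian_variance(image: np.ndarray) -> float:
    """Sharpness score: variance of a 5-point discrete Laplacian.
    Low variance suggests few strong edges, i.e. a likely-blurred image."""
    lap = (-4.0 * image[1:-1, 1:-1]
           + image[:-2, 1:-1] + image[2:, 1:-1]
           + image[1:-1, :-2] + image[1:-1, 2:])
    return float(lap.var())

# Toy inputs: a checkerboard (edge-rich) versus a flat region (no detail).
sharp = np.indices((16, 16)).sum(axis=0) % 2 * 1.0
flat = np.full((16, 16), 0.5)

print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

A detector like this would compare the score against a threshold tuned on known-sharp images; the threshold and the choice of score are assumptions, not part of the claimed method.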
- In another embodiment of the present invention, a mobile device for correcting an image blur in a medical ultrasound image is described. The mobile device may include a memory configured to store instructions associated with an image analysis application. A processor may be coupled to the memory. The processor may execute the instructions associated with the image analysis application. The image analysis application may include a neural network module. The neural network module may be configured to receive the medical ultrasound image from a medical image provider. Next, the image blur may be detected within the medical ultrasound image by analyzing the medical ultrasound image. The image blur may result from a noise reduced average of ultrasound session images of a biological structure of a patient. The medical ultrasound image may subsequently be processed with a deep learning model to correct the image blur. In response to the processing, a de-blurred medical ultrasound image may be generated. In addition, the de-blurred medical ultrasound image may be provided for a presentation or a continued analysis.
- In yet another embodiment of the present invention, a method of correcting an image blur in a medical ultrasound image is described. The method includes receiving the medical ultrasound image from a medical image provider. Next, the image blur may be detected within the medical ultrasound image by analyzing the medical ultrasound image. The image blur may result from a noise reduced average of ultrasound session images of a biological structure of a patient. The medical ultrasound image may be processed with a deep learning model to correct the image blur. In response to the processing, a de-blurred medical ultrasound image may be generated. Furthermore, the de-blurred medical ultrasound image may be provided for a presentation or a continued analysis.
- It is an object of the embodiments of the present invention to correct an image blur in a medical image (such as an ultrasound scan) with a neural network mechanism.
- It is an object of the embodiments of the present invention to process a medical image with a deep learning model to detect the image blur.
- It is an object of the embodiments of the present invention to process the medical image with the deep learning model to correct the image blur.
- It is an object of the embodiments of the present invention to sharpen edges of an object of interest in the medical image to correct the image blur.
- These and other features, aspects and advantages of the present invention will become better understood with reference to the following drawings, description and claims.
- FIG. 1 shows a conceptual diagram illustrating examples of correcting an image blur in a medical image, according to an embodiment of the invention.
- FIG. 2 shows a display diagram illustrating components of a neural network mechanism to correct an image blur in a medical image, according to an embodiment of the invention.
- FIG. 3 shows another display diagram illustrating components of a neural network mechanism to correct an image blur in a medical image, according to an embodiment of the invention.
- FIG. 4 is a block diagram of an example computing device, which may be used to correct an image blur in a medical image.
- FIG. 5 is a logic flow diagram illustrating a process for correcting an image blur in a medical image, according to an embodiment of the invention.
- The preferred embodiments of the present invention will now be described with reference to the drawings. Identical elements in the various figures are identified with the same reference numerals.
- Reference will now be made in detail to each embodiment of the present invention. Such embodiments are provided by way of explanation of the present invention, which is not intended to be limited thereto. In fact, those of ordinary skill in the art may appreciate upon reading the present specification and viewing the present drawings that various modifications and variations may be made thereto.
- FIG. 1 shows a conceptual diagram illustrating examples of correcting an image blur in a medical image. In an example scenario, a mobile device 104 may execute (or provide) an image analysis application 106. The mobile device 104 may include a physical computing device hosting and/or providing features associated with a client application (such as the image analysis application 106). The mobile device 104 may include and/or be part of a smart phone, a tablet based device, and/or a laptop computer, among others. The mobile device 104 may also be a node of a network. The network may also include other nodes, such as the medical image provider 112, among others. The network may connect nodes with wired and wireless infrastructure.
- The mobile device 104 may execute the image analysis application 106. The image analysis application 106 may receive a medical image 108 from a medical image provider 112. An example of the medical image 108 may include an ultrasound image (or scan). Other examples of the medical image 108 may include an x-ray image, a magnetic resonance imaging (MRI) scan, a computed tomography (CT) scan, and/or a positron emission tomography (PET) scan, among others. The medical image provider 112 may include a medical imaging device/system that captures, manages, and/or presents the medical image 108 to a user 102. The user 102 may include a doctor, a nurse, a technician, a patient, and/or an administrator, among others. The user 102 may use the medical image 108 to diagnose an issue, a malignancy (cancer), and/or another illness associated with a patient.
- The medical image 108 and a de-blurred medical image 114 may include an object of interest (OI) 110. The OI 110 may include a biological structure of a patient. For example, the OI 110 may include a malignant or a benign tumor. Alternatively, the OI 110 may represent another structure associated with an organ and/or other part of the patient.
- The image analysis application 106 may next detect an image blur 111 within the medical image 108 by analyzing the medical image 108. The image blur 111 may result from an averaging process used to combine multiple images captured during an imaging session (such as an ultrasound session) of a biological structure of a patient. The medical imaging device (conducting the imaging session) may combine the scanned images with an averaging process to generate the medical image 108. The averaging process may reduce noise inherent in the capture process associated with the imaging session. However, the averaging process may blur edge(s) of the biological structure of the patient within the medical image 108. Sharp edges may be critical to automated and/or manual diagnosis of an illness such as cancer. Blurred edges caused by the averaging process may hinder attempts at automated and/or manual diagnosis. - Next, the
medical image 108 may be processed with a deep learning model to correct the image blur 111. The deep learning model may be generated using a training input set and an expected output set. The training input set may include averaged images (associated with medical imaging sessions), and the expected output set may include de-blurred images corresponding to the averaged images.
- In response to the processing of the medical image 108, a de-blurred medical image 114 may be generated. The de-blurred medical image 114 may include the OI 110 with sharpened edges. Subsequently, the de-blurred medical image 114 may be provided for a presentation to the user 102 or a continued analysis by a downstream analysis application/service.
- The previous example(s) of correcting an image blur in the medical image 108 are not provided in a limiting sense. Alternatively, the image analysis application 106 may perform operations associated with correcting the image blur in the medical image 108 as a desktop application, a workstation application, and/or a server application, among others. The image analysis application 106 may also be a client interface of a server based application.
- The user 102 may interact with the image analysis application 106 through a keyboard based input, a mouse based input, a voice based input, a pen based input, and/or a gesture based input, among others. The gesture based input may include one or more touch based actions such as a touch action, a swipe action, and a combination of each, among others.
- While the example system in FIG. 1 has been described with specific components, including the mobile device 104 and the image analysis application 106, embodiments are not limited to these components or system configurations and can be implemented with other system configurations employing fewer or additional components.
- FIG. 2 shows a display diagram illustrating components of a neural network mechanism to correct an image blur 111 in the medical image 108. In an example scenario, the image analysis application 106 (executed by the mobile device 104) may process the medical image 108 with a neural network module 216. An example of the medical image 108 may be an ultrasound image (or scan). The medical image 108 may also include the OI 110, such as a biological structure of the patient. The medical imaging device (used to capture the medical image 108) may generate the medical image 108 with an image blur 111. The image blur 111 may soften edge(s) of the OI 110. Sharp edges associated with the OI 110 may be critical to manual or automated diagnosis. As such, the capture process of the medical imaging device may diminish the probability of a correct diagnosis associated with the OI 110.
- The capture process may record several images of the OI 110 and combine the images with an averaging process to generate the medical image 108. The averaging process may remove noise associated with the capture process but soften the edges of the OI 110. To enable a correct diagnosis associated with the OI 110, the image analysis application 106 may sharpen the edges associated with the OI 110.
- Next, the neural network module 216 of the image analysis application 106 may process the medical image 108. The neural network module 216 may process the medical image 108 with a deep learning model 218. The deep learning model 218 may be generated with a training input set 220 and an expected output set 222. In an example scenario, the image analysis application 106 may generate the deep learning model 218. Alternatively, the image analysis application 106 may retrieve the deep learning model 218 from an external service provider.
- The training input set 220 may include averaged images of prior imaging sessions (from a variety of patients). Each of the averaged images may include a noise reduced average of several medical images (such as ultrasound images) captured during an imaging session (such as an ultrasound session). Edge(s) of OI(s) within the averaged images may be blurred as a result of the averaging process to reduce noise.
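To make the training-set relationship concrete, the sketch below fits a toy stand-in for the deep learning model: a single 5-tap convolution kernel trained by gradient descent to map synthetic averaged (blurred) rows back to their sharp targets. The data, the kernel size, and the learning rate are all illustrative assumptions; a real implementation would train a deep network on actual averaged/de-blurred image pairs.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv_rows(rows: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Apply a 1-D convolution (same output size) to every row."""
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, rows)

# Expected output set: synthetic sharp rows. Training input set: the same
# rows blurred by a 3-sample moving average (the noise-reducing step).
expected_output = (rng.random((200, 64)) > 0.5).astype(float)
training_input = conv_rows(expected_output, np.full(3, 1.0 / 3.0))

# Toy "model": one learnable 5-tap kernel, fitted by gradient descent on
# the mean squared error between its output and the expected output set.
k = np.zeros(5)
k[2] = 1.0  # start from the identity kernel
for _ in range(200):
    err = conv_rows(training_input, k) - expected_output
    # gradient of the squared error with respect to each kernel tap
    grad = np.array([np.mean(err * np.roll(training_input, i - 2, axis=1))
                     for i in range(5)])
    k -= 0.5 * grad

mse_identity = np.mean((training_input - expected_output) ** 2)
mse_trained = np.mean((conv_rows(training_input, k) - expected_output) ** 2)
print(mse_trained < mse_identity)  # the fitted kernel sharpens the rows
```

The fitted kernel ends up with a positive centre tap and negative neighbours, i.e. a sharpening filter learned purely from input/output pairs, which is the supervision pattern the patent describes at a much smaller scale.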
- The expected output set 222 may include de-blurred images corresponding to the averaged images. Edge(s) of the OI(s) within each of the de-blurred images may be sharpened. The sharpening effect may be applied automatically and/or manually to the edge(s). The deep learning model 218 may be trained based on an analysis of the training input set 220 and the expected output set 222. The training process may form the deep learning model 218 based on how the sharpening effect is applied to the expected output set 222 to correct the softened edges of the OI(s) within the training input set 220.
- In addition, the neural network module 216 may process the medical image 108 with the deep learning model 218 to remove the image blur 111. The image blur 111 may be removed by sharpening softened edge(s) of the OI 110. As a result of the processing of the medical image 108, the neural network module 216 may generate a de-blurred medical image 114. The de-blurred medical image 114 may include the OI 110 with sharpened edge(s).
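The sharpening operation itself is left to the learned model; as a classical point of comparison (an assumption for illustration, not the claimed deep learning model), unsharp masking performs a similar edge steepening:

```python
import numpy as np

def unsharp_mask(image: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Classical unsharp masking: add back the detail removed by a local
    3x3 box blur. A stand-in for learned de-blurring, not the patent's model."""
    padded = np.pad(image, 1, mode="edge")
    # 3x3 box blur built from the nine shifted copies of the padded image
    blurred = sum(padded[i:i + image.shape[0], j:j + image.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return image + amount * (image - blurred)

soft = np.array([[0.0, 0.0, 1 / 3, 2 / 3, 1.0, 1.0]] * 4)  # softened edge rows
sharpened = unsharp_mask(soft)

# Undershoot and overshoot appear at the edge shoulders.
print(sharpened[0].round(2))
```

The undershoot (about -0.11) and overshoot (about 1.11) at the shoulders of the ramp are what make the transition look sharper, which is the visual effect the sharpened edge 326 represents.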
- FIG. 3 shows another display diagram illustrating components of a neural network mechanism to correct the image blur in the medical image 108. The image analysis application 106 (executed by the mobile device 104) may process the medical image 108 that includes the OI 110. The medical image 108 may include the image blur. The image blur may result in the OI 110 having a blurred edge 324. The capture process (to record the medical image 108) may introduce the blurred edge 324 to the OI 110. The capture process may combine multiple images recorded during an imaging session to produce an averaged image (the medical image 108) that diminishes noise but softens the OI 110, causing the blurred edge 324.
- The neural network module 216 may process the medical image 108 with the deep learning model 218 to identify and correct the blurred edge 324 of the OI 110. In response to processing the medical image 108, the neural network module 216 may produce the de-blurred medical image 114. The de-blurred medical image 114 may include the OI 110 with a sharpened edge 326 (among other sharpened edges). The sharpened edge 326 of the OI 110 may allow a downstream diagnosis process (or a user) to correctly diagnose a malignancy, an issue, and/or an illness associated with the OI 110.
- The image analysis application 106 may receive the medical image 108 from a medical imaging device. Alternatively, the image analysis application 106 may receive the medical image 108 from a camera component 325 of the mobile device 104. The camera component 325 may record the medical image 108 as a copy of a scanned image displayed by a display device associated with the medical imaging device. The scanned image may represent an imaging session of a biological structure of a patient (recorded by the medical imaging device).
- The medical image 108 may also include a three dimensional image (which may represent components of a biological structure of a patient in three dimensions). In another example scenario, the neural network module 216 may determine whether the medical image 108 includes metadata. In response to a verification of the metadata, the neural network module 216 may analyze the medical image 108 (to detect the image blur) by evaluating the metadata. For example, the neural network module 216 may identify an annotation associated with the medical image 108 within the metadata. The annotation may designate an averaging process used to generate the medical image from scanned images of a scanning session of a biological structure of a patient. In response to detecting the annotation, the neural network module 216 may designate the medical image 108 as including the image blur.
- In yet another example scenario, the image analysis application 106 may receive a selection of a region of interest (ROI) of the medical image 108 from a user. The neural network module 216 may focus the image blur detection process on the ROI and analyze the ROI to identify the image blur within it. Furthermore, the medical image 108 may be processed with the deep learning model 218 in real time or offline to remove the image blur and generate the de-blurred medical image 114 (with the OI 110 having the sharpened edge 326). Alternatively, the neural network module 216 may process the medical image 108 and subsequent image(s) of a time sequence based scanning session (of a biological structure of a patient) with the deep learning model 218 in real time or offline. The neural network module 216 may generate the de-blurred medical image 114 and de-blurred subsequent image(s) in response to the processing of the medical image 108 and the subsequent image(s). Alternatively, the neural network module 216 may generate a de-blurred video stream or animation by processing the medical image 108 and the subsequent image(s) with the deep learning model 218. - The example scenarios and schemas in
FIGS. 1 through 3 are shown with specific components, data types, and configurations. Embodiments are not limited to systems according to these example configurations. A device to correct an image blur in the medical image 108 may be implemented in configurations employing fewer or additional components in applications and user interfaces. Furthermore, the example schema and components shown in FIGS. 1 through 3 and their subcomponents may be implemented in a similar manner with other values using the principles described herein. -
FIG. 4 is a block diagram of an example computing device, which may be used to correct an image blur in a medical image, according to embodiments. - For example,
computing device 400 may be used as a server, desktop computer, portable computer, smart phone, special purpose computer, or similar device. In a basic configuration 402, thecomputing device 400 may include one ormore processors 404 and asystem memory 406. A memory bus 408 may be used for communication between theprocessor 404 and thesystem memory 406. The basic configuration 402 may be illustrated inFIG. 4 by those components within the inner dashed line. - Depending on the desired configuration, the
processor 404 may be of any type, including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. Theprocessor 404 may include one more levels of caching, such as alevel cache memory 412, one ormore processor cores 414, and registers 416. Theexample processor cores 414 may (each) include an arithmetic logic unit (ALU), a floating-point unit (FPU), a digital signal processing core (DSP Core), a graphics processing unit (GPU), or any combination thereof. Anexample memory controller 418 may also be used with theprocessor 404, or in some implementations, thememory controller 418 may be an internal part of theprocessor 404. - Depending on the desired configuration, the
system memory 406 may be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. Thesystem memory 406 may include anoperating system 420, theimage analysis application 106, and aprogram data 424. Theimage analysis application 106 may include components such as theneural network module 216. Theneural network module 216 may execute the instructions and processes associated with theimage analysis application 106. In an example scenario, theneural network module 216 may receive a medical image from a medical image provider. Next, an image blur may be detected within the medical image by analyzing the medical image. The medical image may be processed with a deep learning model to correct the image blur. Subsequently, a de-blurred medical image may be generated. The de-blurred medical image may be provided for a presentation or a continued analysis. - Input to and output out of the
image analysis application 106 may be captured and displayed through a display component that may be integrated to thecomputing device 400. The display component may include a display screen, and/or a display monitor, among others that may capture an input through a touch/gesture based component such as a digitizer. Theprogram data 424 may also include, among other data, themedical image 108, or the like, as described herein. Themedical image 108 may be processed with the deep learning model to correct softened edges of an OI introduced during the imaging process, among other things. Thecomputing device 400 may have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 402 and any desired devices and interfaces. For example, a bus/interface controller 430 may be used to facilitate communications between the basic configuration 402 and one or moredata storage devices 432 via a storage interface bus 434. Thedata storage devices 432 may be one or moreremovable storage devices 436, one or morenon-removable storage devices 438, or a combination thereof. - Examples of the removable storage and the non-removable storage devices may include magnetic disk devices, such as flexible disk drives and hard-disk drives (HDDs), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSDs), and tape drives, to name a few. Example computer storage media may include volatile and nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data.
- The
system memory 406, theremovable storage devices 436 and thenon-removable storage devices 438 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs), solid state drives, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by thecomputing device 400. Any such computer storage media may be part of thecomputing device 400. - The
computing device 400 may also include an interface bus 440 for facilitating communication from various interface devices (for example, one ormore output devices 442, one or moreperipheral interfaces 444, and one or more communication devices 466) to the basic configuration 402 via the bus/interface controller 430. Some of theexample output devices 442 include agraphics processing unit 448 and an audio processing unit 450, which may be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 452. One or more exampleperipheral interfaces 444 may include aserial interface controller 454 or aparallel interface controller 456, which may be configured to communicate with external devices such as input devices (for example, keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (for example, printer, scanner, etc.) via one or more I/O ports 458. An example of the communication device(s) 466 includes anetwork controller 460, which may be arranged to facilitate communications with one or moreother computing devices 462 over a network communication link via one ormore communication ports 464. The one or moreother computing devices 462 may include servers, computing devices, and comparable devices. - The network communication link may be one example of a communication media.
- Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A “modulated data signal” may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), microwave, infrared (IR) and other wireless media. The term computer readable media as used herein may include both storage media and communication media.
- The
computing device 400 may be implemented as a part of a specialized server, mainframe, or similar computer, which includes any of the above functions. Thecomputing device 400 may also be implemented as a personal computer including both laptop computer and non-laptop computer configurations. Additionally, thecomputing device 400 may include specialized hardware such as an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic device (PLD), and/or a free form logic on an integrated circuit (IC), among others. - Example embodiments may also include methods to correct an image blur in a medical image. These methods can be implemented in any number of ways, including the structures described herein. One such way may be by machine operations, of devices of the type described in the present disclosure. Another optional way may be for one or more of the individual operations of the methods to be performed in conjunction with one or more human operators performing some of the operations while other operations may be performed by machines. These human operators need not be collocated with each other, but each can be only with a machine that performs a portion of the program. In other embodiments, the human interaction can be automated such as by pre-selected criteria that may be machine automated.
-
FIG. 5 is a logic flow diagram illustrating a process for correcting an image blur in a medical image. Process 500 may be implemented on a computing device, such as the computing device 400, or another system.
- Process 500 begins with operation 510, where an image analysis application may receive a medical image from a medical image provider. The medical image may include an ultrasound image of a biological structure of a patient. At operation 520, the image blur within the medical image may be detected by analyzing the medical image. Next, at operation 530, the medical image may be processed with a deep learning model to correct the image blur.
- Furthermore, at operation 540, a de-blurred medical image may be generated in response to processing the medical image. The de-blurred medical image may include sharpened edge(s) of an OI. At operation 550, the de-blurred medical image may be provided for a presentation or a continued analysis.
- The operations included in process 500 are for illustration purposes. Correcting an image blur in a medical image may be implemented by similar processes with fewer or additional steps, as well as in a different order of operations, using the principles described herein. The operations described herein may be executed by one or more processors operated on one or more computing devices, one or more processor cores, specialized processing devices, and/or special purpose processors, among other examples.
- A method of correcting an image blur within a medical ultrasound image is also described. The method includes receiving the medical ultrasound image from a medical image provider. Next, the image blur may be detected within the medical ultrasound image by analyzing the medical ultrasound image. The image blur may result from a noise reduced average of several ultrasound session images of a biological structure of a patient. The medical ultrasound image may be processed with a deep learning model to correct the image blur. In response to the processing, a de-blurred medical ultrasound image may be generated. Furthermore, the de-blurred medical ultrasound image may be provided for a presentation or a continued analysis.
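Operations 510 through 550 can be strung together as a sketch. Every function name and the dictionary-based image stand-in below are hypothetical; operation 520 here uses the metadata-annotation shortcut described earlier rather than pixel analysis:

```python
# Hypothetical sketch of process 500 (operations 510-550). Every name and
# data shape here is illustrative -- the patent does not define an API.
def receive_medical_image(provider):
    return provider()  # operation 510: receive from a medical image provider

def detect_image_blur(image):
    # operation 520: the metadata shortcut -- an annotation naming an
    # averaging process marks the image as blurred without pixel analysis.
    return "averag" in image.get("annotation", "").lower()

def deblur_with_model(image):
    # operations 530-540: stand-in for deep learning inference.
    return dict(image, annotation="de-blurred")

def provide(image):
    # operation 550: hand off for presentation or continued analysis.
    return f"{image['name']}: {image['annotation']}"

image = receive_medical_image(
    lambda: {"name": "scan-01", "annotation": "noise-reduced averaging of 12 scans"})
if detect_image_blur(image):
    image = deblur_with_model(image)
print(provide(image))  # scan-01: de-blurred
```

Images without an averaging annotation skip the de-blurring step entirely, mirroring the conditional nature of operation 520.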
- When introducing elements of the present disclosure or the embodiment(s) thereof, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. Similarly, the adjective “another,” when used to introduce an element, is intended to mean one or more elements. The terms “including” and “having” are intended to be inclusive such that there may be additional elements other than the listed elements.
- Although this invention has been described with a certain degree of particularity, it is to be understood that the present disclosure has been made only by way of illustration and that numerous changes in the details of construction and arrangement of parts may be resorted to without departing from the spirit and the scope of the invention.
Claims (19)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US17/318,363 US20210264574A1 (en) | 2018-11-14 | 2021-05-12 | Correcting image blur in medical image |
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/190,652 US10290084B1 (en) | 2018-11-14 | 2018-11-14 | Correcting image blur in medical image |
| PCT/US2019/061464 WO2020102523A1 (en) | 2018-11-14 | 2019-11-14 | Correcting image blur in a medical image |
| US17/318,363 US20210264574A1 (en) | 2018-11-14 | 2021-05-12 | Correcting image blur in medical image |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2019/061464 Continuation WO2020102523A1 (en) | 2018-11-14 | 2019-11-14 | Correcting image blur in a medical image |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20210264574A1 (en) | 2021-08-26 |
Family
ID=66439674
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/190,652 Active US10290084B1 (en) | 2018-11-14 | 2018-11-14 | Correcting image blur in medical image |
| US17/318,363 Abandoned US20210264574A1 (en) | 2018-11-14 | 2021-05-12 | Correcting image blur in medical image |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/190,652 Active US10290084B1 (en) | 2018-11-14 | 2018-11-14 | Correcting image blur in medical image |
Country Status (2)
| Country | Link |
|---|---|
| US (2) | US10290084B1 (en) |
| WO (1) | WO2020102523A1 (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112102423A (en) * | 2019-06-17 | 2020-12-18 | 通用电气精准医疗有限责任公司 | Medical imaging method and system |
| KR102286189B1 (en) * | 2020-05-18 | 2021-08-06 | 동국대학교 산학협력단 | Deblurring device and method based on deep learning for gaze estimation |
| CN113077605B (en) * | 2021-02-23 | 2024-05-10 | 邹吉涛 | Cloud storage type interval early warning platform and method |
| CN113689355B (en) * | 2021-09-10 | 2022-07-08 | 数坤(北京)网络科技股份有限公司 | Image processing method, image processing device, storage medium and computer equipment |
| US12475564B2 (en) | 2022-02-16 | 2025-11-18 | Proscia Inc. | Digital pathology artificial intelligence quality check |
| CN117036188A (en) * | 2023-07-27 | 2023-11-10 | 中国工商银行股份有限公司 | Image correction method, device, storage medium and electronic equipment |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140066767A1 (en) * | 2012-08-31 | 2014-03-06 | Clearview Diagnostics, Inc. | System and method for noise reduction and signal enhancement of coherent imaging systems |
| US20150065803A1 (en) * | 2013-09-05 | 2015-03-05 | Erik Scott DOUGLAS | Apparatuses and methods for mobile imaging and analysis |
| US20150139515A1 (en) * | 2012-07-03 | 2015-05-21 | The State Of Queensland Acting Through Its Department Of Health | Movement correction for medical imaging |
| US20160044245A1 (en) * | 2014-08-05 | 2016-02-11 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium |
| US20170147892A1 (en) * | 2015-11-20 | 2017-05-25 | Panasonic Intellectual Property Corporation Of America | Method for processing image and computer-readable non-transitory recording medium storing program |
| US20180144214A1 (en) * | 2016-11-23 | 2018-05-24 | General Electric Company | Deep learning medical systems and methods for image reconstruction and quality evaluation |
| US10534998B2 (en) * | 2016-11-02 | 2020-01-14 | Adobe Inc. | Video deblurring using neural networks |
2018
- 2018-11-14 US US16/190,652 patent/US10290084B1/en active Active

2019
- 2019-11-14 WO PCT/US2019/061464 patent/WO2020102523A1/en not_active Ceased

2021
- 2021-05-12 US US17/318,363 patent/US20210264574A1/en not_active Abandoned
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20220414832A1 (en) * | 2021-06-24 | 2022-12-29 | Canon Medical Systems Corporation | X-ray imaging restoration using deep learning algorithms |
| US12182970B2 (en) * | 2021-06-24 | 2024-12-31 | Canon Medical Systems Corporation | X-ray imaging restoration using deep learning algorithms |
Also Published As
| Publication number | Publication date |
|---|---|
| US10290084B1 (en) | 2019-05-14 |
| WO2020102523A1 (en) | 2020-05-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10290084B1 (en) | Correcting image blur in medical image | |
| US11043297B2 (en) | Neural network-based object detection in visual input | |
| CN112770838B (en) | System and method for image enhancement using self-focused deep learning | |
| Liu et al. | Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: a retrospective study with cross-racial external validation | |
| US20210264602A1 (en) | Medical image based distortion correction mechanism | |
| CN109961491B (en) | Multi-mode image truncation compensation method, device, computer equipment and medium | |
| US10453570B1 (en) | Device to enhance and present medical image using corrective mechanism | |
| US10290101B1 (en) | Heat map based medical image diagnostic mechanism | |
| JP2022545440A (en) | System and method for accurate and rapid positron emission tomography using deep learning | |
| Venkadesh et al. | Prior CT improves deep learning for malignancy risk estimation of screening-detected pulmonary nodules | |
| CN111080583B (en) | Medical image detection method, computer equipment and readable storage medium | |
| US20210048941A1 (en) | Method for providing an image base on a reconstructed image group and an apparatus using the same | |
| Song et al. | Artificial intelligence for chest X-ray image enhancement | |
| US11798159B2 (en) | Systems and methods for radiology image classification from noisy images | |
| US10074198B2 (en) | Methods and apparatuses for image processing and display | |
| CN120167065A (en) | Storing medical images | |
| Yang et al. | Explicit and Implicit Representations in AI-based 3D Reconstruction for Radiology: A Systematic Review | |
| CN111028173B (en) | Image enhancement method, device, electronic equipment and readable storage medium | |
| Kendall | Image Processing with Matlab | |
| JP5453450B2 (en) | Diagnosis support system and program | |
| Marcos et al. | Low‐Dose Computed Tomography Image Denoising Vision Transformer Model Optimization Using Space State Method | |
| Kulathilake et al. | Progress of deep learning in digital pathology detection of chest radiographs | |
| Yeh et al. | Deep Deblurring in Teledermatology: Deep Learning Models Restore the Accuracy of Blurry Images’ Classification | |
| CN117475344A (en) | Ultrasound image interception method, device, terminal equipment and storage medium | |
| WO2022080327A1 (en) | Estimator learning device, estimator learning method, and estimator learning program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: RUTGERS, THE STATE UNIVERSITY OF NEW JERSEY, NEW JERSEY. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SONAVISTA, INC.;REEL/FRAME:056216/0853. Effective date: 20191204. Owner name: SONAVISTA, INC., NEW JERSEY. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PODILCHUK, CHRISTINE I.;MAMMONE, RICHARD;REEL/FRAME:056216/0749. Effective date: 20181114 |
| | STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| | STCB | Information on status: application discontinuation | ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |