WO2021068975A1 - Systems and methods for image reconstruction - Google Patents
- Publication number
- WO2021068975A1 (PCT/CN2020/120503; CN 2020120503 W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sample
- image
- intermediate image
- subject
- scan data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/006—Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2211/00—Image generation
- G06T2211/40—Computed tomography
- G06T2211/424—Iterative
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2211/00—Image generation
- G06T2211/40—Computed tomography
- G06T2211/441—AI-based methods, deep learning or artificial neural networks
Definitions
- the disclosure generally relates to image processing, and more particularly relates to systems and methods for image reconstruction.
- Scan data collected by a scanning device (e.g., a positron emission tomography (PET) device, a computed tomography (CT) device, etc.) may be used to reconstruct an image of a subject.
- a system for image reconstruction may include at least one storage device storing a set of instructions, and at least one processor configured to communicate with the at least one storage device.
- the at least one processor may be configured to direct the system to perform one or more of the following operations.
- the operations may include obtaining scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; generating, based on the scan data, a first intermediate image of the subject; and generating, based on the first intermediate image and a target reconstruction model, a target image of the subject, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
- the generating a first intermediate image of the subject based on the scan data may include performing a first iterative reconstruction operation including a plurality of first iterations. At least one first iteration of the plurality of first iterations may include generating a first updated image by updating, based on the scan data, a first image determined in a previous first iteration; determining whether a first termination condition is satisfied; and in response to determining that the first termination condition is satisfied, designating the first updated image as the first intermediate image.
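The first iterative reconstruction operation described above can be sketched in code. The disclosure does not fix a particular update rule, so the MLEM-style multiplicative update below, the `system_matrix` forward model, and the change-based termination threshold are all illustrative assumptions:

```python
import numpy as np

def iterative_reconstruction(scan_data, system_matrix, max_iters=20, tol=1e-10):
    """First iterative reconstruction operation (illustrative MLEM-style update).

    Terminates when a first termination condition is satisfied: either the
    iteration count reaches max_iters, or the difference between the first
    updated image and the first image falls below tol."""
    image = np.ones(system_matrix.shape[1])                      # initial first image
    sensitivity = np.maximum(system_matrix.sum(axis=0), 1e-12)   # A^T * 1
    for i in range(max_iters):
        projection = np.maximum(system_matrix @ image, 1e-12)    # forward projection
        ratio = scan_data / projection
        updated = image * (system_matrix.T @ ratio) / sensitivity
        if np.abs(updated - image).max() < tol:                  # difference-based condition
            return updated, i + 1                                # designate as first intermediate image
        image = updated
    return image, max_iters
```

On a toy system (two detectors, three pixels) with noiseless, consistent data, the loop converges to an image whose forward projection matches the scan data.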
- the first termination condition may relate to at least one of a count of first iterations that have been performed in the first iterative reconstruction operation, a difference between the first updated image and the first image, or one or more image parameters of the first updated image.
- the one or more image parameters may include at least one of a signal-to-noise ratio (SNR) , a mean square error (MSE) , a mean absolute deviation (MAD) , or a peak signal-to-noise ratio (PSNR) .
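The listed image parameters can be computed directly. The disclosure does not give the formulas, so the common textbook definitions below (e.g., PSNR measured against the reference image's peak value) are assumptions:

```python
import numpy as np

def mse(image, reference):
    """Mean square error between an image and a reference image."""
    return float(np.mean((image - reference) ** 2))

def mad(image, reference):
    """Mean absolute deviation between an image and a reference image."""
    return float(np.mean(np.abs(image - reference)))

def psnr(image, reference, peak=None):
    """Peak signal-to-noise ratio in dB; peak defaults to the reference maximum."""
    peak = float(np.max(reference)) if peak is None else peak
    m = mse(image, reference)
    return float("inf") if m == 0.0 else 10.0 * np.log10(peak ** 2 / m)

def snr(image, reference):
    """Signal-to-noise ratio in dB, treating (image - reference) as noise."""
    noise_power = float(np.sum((image - reference) ** 2))
    if noise_power == 0.0:
        return float("inf")
    return 10.0 * np.log10(float(np.sum(reference ** 2)) / noise_power)
```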
- the generating, based on the first intermediate image and a target reconstruction model, a target image of the subject comprises: generating a second intermediate image of the subject by processing the first intermediate image using the target reconstruction model; and generating, based on the second intermediate image, the target image of the subject.
- the generating, based on the second intermediate image, the target image of the subject may include performing a second iterative reconstruction operation including a plurality of second iterations. At least one second iteration of the plurality of second iterations may include generating a second updated image by updating, based on the scan data, a second image, the second image being updated from the second intermediate image determined in a previous second iteration; determining whether a second termination condition is satisfied; and in response to determining that the second termination condition is satisfied, designating the second updated image as the target image of the subject.
- the second termination condition may relate to at least one of a count of second iterations that have been performed in the second iterative reconstruction operation, a difference between the second updated image and the second image, or one or more image parameters of the second updated image.
- the target reconstruction model may include a deep learning model.
- the target reconstruction model may be generated according to a model training process including: obtaining the plurality of training samples; and generating the target reconstruction model by training a preliminary model using the plurality of training samples.
- the obtaining a plurality of training samples may include: for each of the plurality of training samples, obtaining sample scan data of the sample subject corresponding to the training sample; generating the sample first intermediate image by performing a sample first iterative reconstruction operation on the sample scan data until a sample first termination condition is satisfied; and generating the sample second intermediate image by performing a sample second iterative reconstruction operation on the sample first intermediate image until a sample second termination condition is satisfied.
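The training-sample construction above can be sketched as follows. The iterative algorithm (an MLEM-style update), the iteration counts `n1` and `n2`, and the use of count-based termination conditions are assumed stand-ins; the key claimed structure is that the sample second iterative reconstruction operation resumes from the sample first intermediate image:

```python
import numpy as np

def mlem_iterations(image, scan_data, system_matrix, n_iters):
    """Run n_iters iterations of an (assumed) MLEM-style update, starting from `image`."""
    sensitivity = np.maximum(system_matrix.sum(axis=0), 1e-12)
    for _ in range(n_iters):
        projection = np.maximum(system_matrix @ image, 1e-12)
        image = image * (system_matrix.T @ (scan_data / projection)) / sensitivity
    return image

def make_training_sample(sample_scan_data, system_matrix, n1=3, n2=50):
    """Build one training sample: the sample first intermediate image after n1
    iterations, and the sample second intermediate image after n2 further
    iterations that resume from the first intermediate image."""
    first = mlem_iterations(np.ones(system_matrix.shape[1]),
                            sample_scan_data, system_matrix, n1)
    second = mlem_iterations(first, sample_scan_data, system_matrix, n2)
    return first, second
```

Because the two images are reconstructed from the same sample scan data with different total iteration counts, the second image is the higher-quality member of the pair.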
- the at least one processor is further configured to direct the system to perform operations including obtaining, from a reconstruction model library, the target reconstruction model.
- a method for image reconstruction may be implemented on a computing device including at least one processor and at least one storage device.
- the method may include obtaining scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; generating, based on the scan data, a first intermediate image of the subject; and generating, based on the first intermediate image and a target reconstruction model, a target image of the subject, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
- a system for image reconstruction may include an acquisition module configured to obtain scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; an intermediate image generation module configured to generate a first intermediate image of the subject based on the scan data; and a target image generation module configured to generate a target image of the subject based on the first intermediate image and a target reconstruction model, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
- a non-transitory computer readable medium including at least one set of instructions for image reconstruction.
- When executed by one or more processors of a computing device, the at least one set of instructions causes the computing device to perform a method.
- the method may include obtaining scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; generating, based on the scan data, a first intermediate image of the subject; and generating, based on the first intermediate image and a target reconstruction model, a target image of the subject, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
- a system may include at least one storage device storing a set of instructions for generating a target reconstruction model, and at least one processor configured to communicate with the at least one storage device.
- the at least one processor may be configured to direct the system to perform one or more of the following operations.
- the operations may include obtaining a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and determining the target reconstruction model by training a preliminary model using the plurality of training samples.
- the obtaining a plurality of training samples may include: for each of the plurality of training samples, obtaining sample scan data of the sample subject corresponding to the training sample; generating the sample first intermediate image by performing a sample first iterative reconstruction operation on the sample scan data until a sample first termination condition is satisfied; and generating the sample second intermediate image by performing a sample second iterative reconstruction operation on the sample first intermediate image until a sample second termination condition is satisfied.
- the sample first iterative reconstruction operation may include a plurality of sample first iterations
- the generating the sample first intermediate image comprises: for at least one sample first iteration of the plurality of sample first iterations, generating a sample first updated image by updating, based on the sample scan data, a sample first image determined in a previous sample first iteration; determining whether the sample first termination condition is satisfied; and in response to determining that the sample first termination condition is satisfied, designating the sample first updated image as the sample first intermediate image.
- the sample first termination condition may relate to at least one of a count of sample first iterations that have been performed in the sample first iterative reconstruction operation, a difference between the sample first updated image and the sample first image, or one or more image parameters of the sample first updated image.
- the sample second iterative reconstruction operation may include a plurality of sample second iterations
- the generating the sample second intermediate image comprises: for at least one sample second iteration of the plurality of sample second iterations, generating a sample second updated image by updating, based on the sample scan data, a sample second image, the sample second image being updated from the sample first intermediate image determined in a previous sample second iteration; determining whether the sample second termination condition is satisfied; and in response to determining that the sample second termination condition is satisfied, designating the sample second updated image as the sample second intermediate image.
- the sample second termination condition may relate to at least one of a count of sample second iterations that have been performed in the sample second iterative reconstruction operation, a difference between the sample second updated image and the sample second image, or one or more image parameters of the sample second updated image.
- the generating the target reconstruction model by training a preliminary model using the plurality of training samples may include initializing parameter values of the preliminary model; and generating the target reconstruction model by iteratively updating the parameter values of the preliminary model based on the plurality of training samples.
- the iteratively updating the parameter values of the preliminary model may include performing an iterative operation including one or more third iterations. At least one third iteration of the one or more third iterations may include: for each of at least some of the plurality of training samples, generating an estimated second intermediate image by processing the sample first intermediate image of the training sample using an updated preliminary model determined in a previous third iteration; determining a value of a loss function based on the estimated second intermediate image and the sample second intermediate image of each of the at least some of the plurality of training samples; and further updating at least some of the parameter values of the updated preliminary model to be used in a next third iteration based on the value of the loss function.
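The third-iteration training loop can be illustrated with a deliberately tiny stand-in: a single linear layer `W` plays the role of the preliminary model, and a mean-squared-error loss compares the estimated and sample second intermediate images. The claims call for a deep learning model, so everything below is a hedged sketch of the loop structure only:

```python
import numpy as np

def train_reconstruction_model(training_samples, lr=0.1, n_epochs=200):
    """Train a stand-in 'preliminary model' (one linear layer W) so that
    W @ sample_first approximates sample_second, using an assumed MSE loss
    and plain gradient descent over one or more 'third iterations'."""
    n = training_samples[0][0].size
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(n, n))        # initialized parameter values
    for _ in range(n_epochs):                     # the "third iterations"
        for first, second in training_samples:
            estimated = W @ first                 # estimated second intermediate image
            # gradient of the loss 0.5 * ||estimated - second||^2 w.r.t. W
            W -= lr * np.outer(estimated - second, first)
    return W
```

With pairs where the second image is exactly twice the first, the learned `W` converges to twice the identity matrix, confirming the loop learns the first-to-second mapping.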
- a method for generating a target reconstruction model may be implemented on a computing device including at least one processor and at least one storage device.
- the method may include obtaining a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and determining the target reconstruction model by training a preliminary model using the plurality of training samples.
- a system for generating a target reconstruction model may include an acquisition module configured to obtain a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and a model determination module configured to determine the target reconstruction model by training a preliminary model using the plurality of training samples.
- a non-transitory computer readable medium including at least one set of instructions for generating a target reconstruction model.
- When executed by one or more processors of a computing device, the at least one set of instructions causes the computing device to perform a method.
- the method may include obtaining a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and determining the target reconstruction model by training a preliminary model using the plurality of training samples.
- FIG. 1 is a schematic diagram illustrating an exemplary imaging system according to some embodiments of the present disclosure
- FIG. 2 is a schematic diagram illustrating hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure
- FIG. 3 is a schematic diagram illustrating hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure
- FIG. 4A is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure.
- FIG. 4B is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure.
- FIG. 5 is a flowchart illustrating an exemplary process for generating a target image of a subject based on scan data of the subject according to some embodiments of the present disclosure
- FIG. 6 is a flowchart illustrating an exemplary process for generating a target reconstruction model according to some embodiments of the present disclosure
- FIG. 7 is a flowchart illustrating an exemplary process for determining a training sample for a target reconstruction model according to some embodiments of the present disclosure.
- FIG. 8 is a schematic diagram illustrating an exemplary training sample including a sample first intermediate image and a sample second intermediate image reconstructed from sample scan data of the head of a patient by performing different counts of iterations according to some embodiments of the present disclosure.
- The terms "system," "engine," "unit," "module," and/or "block" used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be displaced by another expression if they achieve the same purpose.
- The terms "module," "unit," or "block," as used herein, refer to logic embodied in hardware or firmware, or to a collection of software instructions.
- a module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or other storage devices.
- a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts.
- Software modules/units/blocks configured for execution on computing devices may be provided on a computer-readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution) .
- Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device.
- Software instructions may be embedded in firmware, such as an erasable programmable read-only memory (EPROM) .
- modules/units/blocks may be included in connected logic components, such as gates and flip-flops, and/or can be implemented as programmable units, such as programmable gate arrays or processors.
- the modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks but may be represented in hardware or firmware.
- the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage. The description may be applicable to a system, an engine, or a portion thereof.
- the flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of a flowchart may be implemented out of the illustrated order; for example, the operations may be implemented in inverted order or simultaneously. Moreover, one or more other operations may be added to the flowcharts, and one or more operations may be removed from the flowcharts.
- The term "image" in the present disclosure is used to collectively refer to image data (e.g., scan data, projection data) and/or images of various forms, including a two-dimensional (2D) image, a three-dimensional (3D) image, a four-dimensional (4D) image, etc.
- The terms "pixel" and "voxel" in the present disclosure are used interchangeably to refer to an element of an image.
- The terms "region," "location," and "area" in the present disclosure may refer to a location of an anatomical structure shown in an image or an actual location of the anatomical structure existing in or on a target subject's body, since the image may indicate the actual location of a certain anatomical structure existing in or on the target subject's body.
- the systems may include an imaging system.
- the imaging system may include a single modality system and/or a multi-modality system.
- The term "modality" used herein broadly refers to an imaging or treatment method or technology that gathers, generates, processes, and/or analyzes imaging information of a subject or treats the subject.
- the single modality system may include, for example, an ultrasound imaging system, an X-ray imaging system, a computed tomography (CT) system, a magnetic resonance imaging (MRI) system, an ultrasonography system, a positron emission tomography (PET) system, an optical coherence tomography (OCT) imaging system, an ultrasound (US) imaging system, an intravascular ultrasound (IVUS) imaging system, a near-infrared spectroscopy (NIRS) imaging system, or the like, or any combination thereof.
- the multi-modality system may include, for example, an X-ray imaging-magnetic resonance imaging (X-ray-MRI) system, a positron emission tomography-X-ray imaging (PET-X-ray) system, a single photon emission computed tomography-magnetic resonance imaging (SPECT-MRI) system, a positron emission tomography-computed tomography (PET-CT) system, a C-arm system, a positron emission tomography-magnetic resonance imaging (PET-MR) system, a digital subtraction angiography-magnetic resonance imaging (DSA-MRI) system, etc.
- the medical system may include a treatment system.
- the treatment system may include a treatment plan system (TPS) , image-guided radiotherapy (IGRT) , etc.
- the image-guided radiotherapy (IGRT) may include a treatment device and an imaging device.
- the treatment device may include a linear accelerator, a cyclotron, a synchrotron, etc., configured to perform radiotherapy on a subject.
- the treatment device may include an accelerator of a species of particles including, for example, photons, electrons, protons, or heavy ions.
- the imaging device may include an MRI scanner, a CT scanner (e.g., cone beam computed tomography (CBCT) scanner) , a digital radiology (DR) scanner, an electronic portal imaging device (EPID) , etc.
- the subject may include a biological object and/or a non-biological object.
- the biological subject may be a human being, an animal, a plant, or a specific portion, organ, and/or tissue thereof.
- the subject may include a head, a neck, a thorax, a heart, a stomach, a blood vessel, a soft tissue, a tumor, a nodule, or the like, or any combination thereof.
- the subject may be a man-made composition of organic and/or inorganic matter with or without life.
- The terms "object" and "subject" are used interchangeably in the present disclosure.
- a representation of a subject in an image may be referred to as “subject” for brevity.
- a representation of an organ or tissue (e.g., a heart, a liver, a lung) in an image may be referred to as the organ or tissue for brevity.
- an image including a representation of a subject may be referred to as an image of a subject or an image including a subject for brevity.
- an operation performed on a representation of a subject in an image may be referred to as an operation performed on a subject for brevity.
- a segmentation of a portion of an image including a representation of an organ or tissue from the image may be referred to as a segmentation of an organ or tissue for brevity.
- Iterative reconstruction algorithms have been widely used in generating a medical image (e.g., a CT image, a PET image) .
- an iterative reconstruction operation including a plurality of iterations may be performed to reconstruct an image of a subject from scan data of the subject.
- the image quality (e.g., measured by one or more image parameters as described in connection with FIG. 5) may be associated with the count of iterations performed in the image reconstruction.
- an iterative reconstruction algorithm with more iterations may consume more processing resources (e.g., time, computing space) and reduce the reconstruction efficiency. Therefore, it is desirable to provide systems and methods for image reconstruction that improve the efficiency of the image reconstruction.
- the system may obtain scan data of a subject.
- the scan data may be acquired by an imaging device during a scan of the subject.
- the system may generate a first intermediate image of the subject based on the scan data.
- the system may further generate a target image of the subject based on the first intermediate image and a target reconstruction model.
- a second intermediate image of the subject may be generated by processing the first intermediate image using the target reconstruction model.
- the target image of the subject may be generated based on the second intermediate image.
- the second intermediate image may correspond to a higher iterative count than the first intermediate image.
- the second intermediate image may have a same image quality (e.g., measured by one or more image parameters) as or a similar image quality to an image generated by performing a plurality of reconstruction iterations on the first intermediate image.
- some embodiments of the present disclosure utilize the target reconstruction model to generate the second intermediate image with an improved image quality without performing an actual iterative reconstruction operation. This may improve the reconstruction efficiency by, e.g., reducing the amount of computation resources and the time needed for reconstructing the target image.
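Putting the pieces together, the claimed pipeline (a short first iterative reconstruction, a model-based jump that replaces many iterations, then a short second iterative reconstruction) might look like the sketch below. The `model_fn` callable stands in for the trained target reconstruction model, and the MLEM-style update is an assumed choice:

```python
import numpy as np

def reconstruct_target_image(scan_data, system_matrix, model_fn,
                             n_first=3, n_second=3):
    """End-to-end sketch: first iterative reconstruction, model-based jump,
    then a second iterative reconstruction that re-enforces consistency
    with the scan data."""
    sensitivity = np.maximum(system_matrix.sum(axis=0), 1e-12)

    def mlem(image, n_iters):                            # assumed iterative update
        for _ in range(n_iters):
            projection = np.maximum(system_matrix @ image, 1e-12)
            image = image * (system_matrix.T @ (scan_data / projection)) / sensitivity
        return image

    first_intermediate = mlem(np.ones(system_matrix.shape[1]), n_first)
    second_intermediate = model_fn(first_intermediate)   # target reconstruction model
    return mlem(second_intermediate, n_second)           # second iterative reconstruction
```

Even with an identity model (no learned jump), the two short reconstruction stages reduce the data mismatch relative to the initial image; a trained model would let each stage stay short while reaching higher quality.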
- the target reconstruction model may be generated by training a preliminary model using a plurality of training samples.
- Each of the plurality of training samples may comprise a sample first intermediate image and a sample second intermediate image of a sample subject.
- the sample first intermediate image and the sample second intermediate image may be reconstructed from sample scan data of the sample subject by performing different counts of iterations.
- the preliminary model may be trained according to a machine learning technique to learn an optimal mechanism of iterative reconstruction, and the resulting target reconstruction model may be used to generate an image corresponding to a higher iteration count (i.e., with higher image quality) without performing an actual iterative reconstruction operation.
- FIG. 1 is a schematic diagram illustrating an exemplary imaging system according to some embodiments of the present disclosure.
- the imaging system 100 may include a scanning device 110, a network 120, a terminal device 130, a processing device 140, and a storage device 150.
- the components in the imaging system 100 may be connected in one or more of various ways.
- the scanning device 110 may be connected to the processing device 140 through the network 120.
- the scanning device 110 may be connected to the processing device 140 directly as illustrated in FIG. 1.
- the terminal device 130 may be connected to another component of the imaging system 100 (e.g., the processing device 140) via the network 120.
- the terminal device 130 may be connected to the processing device 140 directly as illustrated by the dotted arrow in FIG. 1.
- the storage device 150 may be connected to another component of the imaging system 100 (e.g., the processing device 140) directly as illustrated in FIG. 1, or through the network 120.
- the scanning device 110 may be configured to acquire scan data relating to at least part of a subject.
- the subject may be biological or non-biological.
- the subject may include a patient, a man-made subject, etc.
- the subject may include a specific portion, organ, and/or tissue of the patient.
- the subject may include the head, the chest, the neck, the thorax, the heart, the stomach, an arm, a palm, a blood vessel, soft tissue, a tumor, nodules, or the like, or any combination thereof.
- the scanning device 110 may include a computed tomography (CT) device, an emission computed tomography (ECT) device, a positron emission tomography (PET) device, a single-photon emission computed tomography (SPECT) device, a magnetic resonance imaging (MRI) device, a magnetic resonance spectroscopy (MRS) device, an ultrasound scanning device, or the like, or any combination thereof.
- the network 120 may include any suitable network that can facilitate the exchange of information and/or data for the imaging system 100.
- one or more components of the imaging system 100 (e.g., the scanning device 110, the processing device 140, the storage device 150, or the terminal device 130) may send information and/or data to another component of the imaging system 100 via the network 120.
- the processing device 140 may obtain scanning data from the scanning device 110 via the network 120.
- the processing device 140 may obtain user instructions from the terminal device 130 via the network 120.
- the network 120 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN)), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network), a cellular network (e.g., a long term evolution (LTE) network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, server computers, or the like, or any combination thereof.
- the network 120 may include a cable network, a wireline network, a fiber-optic network, a telecommunications network, an intranet, a wireless local area network (WLAN) , a metropolitan area network (MAN) , a public telephone switched network (PSTN) , a Bluetooth TM network, a ZigBee TM network, a near field communication (NFC) network, or the like, or any combination thereof.
- the network 120 may include one or more network access points.
- the network 120 may include wired and/or wireless network access points such as base stations and/or internet exchange points through which one or more components of the imaging system 100 may be connected to the network 120 to exchange data and/or information.
- the terminal device 130 may be connected to and/or communicate with the scanning device 110, the processing device 140, and/or the storage device 150.
- the terminal device 130 may enable user interactions between a user and the imaging system 100.
- the user may instruct the scanning device 110 to acquire scan data or instruct the processing device 140 to process images via the terminal device 130.
- the terminal device 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, or the like, or any combination thereof.
- the mobile device 131 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof.
- the smart home device may include a smart lighting device, a smart electrical appliance control device, a smart monitoring device, a smart TV, a smart camera, a walkie-talkie, or the like, or any combination thereof.
- the wearable device may include bracelets, footwear, glasses, helmets, watches, clothes, backpacks, smart accessories, or the like, or any combination thereof.
- the mobile device may include a mobile phone, a personal digital assistant (PDA) , a game device, a navigation device, a POS device, a notebook computer, a tablet computer, a desktop computer, or the like, or any combination thereof.
- the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof.
- the virtual reality device and/or augmented reality device may include Google Glass TM , Oculus Rift TM , HoloLens TM , Gear VR TM , or the like.
- the terminal device 130 may be part of the processing device 140.
- the processing device 140 may process data and/or information obtained from the scanning device 110, the terminal device 130, and/or the storage device 150. For example, the processing device 140 may reconstruct a target image of the subject by applying a target reconstruction model. As another example, the processing device 140 may generate the target reconstruction model by training a preliminary model using a plurality of training samples. In some embodiments, the generation and/or updating of the target reconstruction model may be performed on a processing device, while the application of the target reconstruction model may be performed on a different processing device. In some embodiments, the generation of the target reconstruction model may be performed on a processing device of a system different from the imaging system 100 or a server different from a server including the processing device 140 on which the application of the target reconstruction model is performed.
- the generation of the target reconstruction model may be performed on a first system of a vendor who provides and/or maintains such a target reconstruction model and/or has access to training samples used to generate the target reconstruction model, while image reconstruction based on the provided target reconstruction model may be performed on a second system of a client of the vendor.
- the generation of the target reconstruction model may be performed online in response to a request for image reconstruction. In some embodiments, the generation of the target reconstruction model may be performed offline.
- the target reconstruction model may be generated and/or updated (or maintained) by, e.g., the manufacturer of the scanning device 110 or a vendor.
- the manufacturer or the vendor may load the target reconstruction model into the imaging system 100 or a portion thereof (e.g., the processing device 140) before or during the installation of the scanning device 110 and/or the processing device 140, and maintain or update the target reconstruction model from time to time (periodically or not) .
- the maintenance or update may be achieved by installing a program stored on a storage device (e.g., a compact disc, a USB drive, etc. ) or retrieved from an external source (e.g., a server maintained by the manufacturer or vendor) via the network 120.
- the program may include a new model (e.g., a new image reconstruction model) or a portion of a model that substitutes or supplements a corresponding portion of the model.
- the processing device 140 may be a computer, a user console, a single server or a server group, etc.
- the server group can be centralized or distributed.
- the processing device 140 may be local to or remote from the imaging system 100.
- the processing device 140 may access information and/or data from the scanning device 110, the storage device 150, and/or the terminal device 130 via the network 120.
- the processing device 140 may be directly connected to the scanning device 110, the terminal device 130, and/or the storage device 150 to access information and/or data.
- the processing device 140 may be implemented on a cloud platform.
- the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or a combination thereof.
- the processing device 140 may be implemented by a computing device 200 having one or more components as described in connection with FIG. 2.
- the storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the scanning device 110, the terminal device 130, and/or the processing device 140. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 140 may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage device 150 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof. Exemplary mass storage devices may include a magnetic disk, an optical disk, a solid-state drive, etc.
- Exemplary removable storage devices may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
- Exemplary volatile read-and-write memory may include a random access memory (RAM) .
- Exemplary RAM may include a dynamic RAM (DRAM) , a double data rate synchronous dynamic RAM (DDR SDRAM) , a static RAM (SRAM) , a thyristor RAM (T-RAM) , and a zero-capacitor RAM (Z-RAM) , etc.
- Exemplary ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
- the storage device 150 may be implemented on a cloud platform as described elsewhere in the disclosure.
- the storage device 150 may be connected to the network 120 to communicate with one or more other components (e.g., the processing device 140, the terminal device 130) of the imaging system 100. One or more components of the imaging system 100 may access the data or instructions stored in the storage device 150 via the network 120. In some embodiments, the storage device 150 may be part of the processing device 140, or directly or indirectly connected to the processing device 140.
- the imaging system 100 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
- the assembly and/or function of the imaging system 100 may be varied or changed according to specific implementation scenarios.
- the imaging system 100 may include one or more additional components and/or one or more components of the imaging system 100 described above may be omitted. Additionally or alternatively, two or more components of the imaging system 100 may be integrated into a single component. A component of the imaging system 100 may be implemented on two or more sub-components.
- FIG. 2 is a schematic diagram illustrating hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure.
- a component of the imaging system 100 (e.g., the processing device 140) may be implemented on the computing device 200.
- the computing device 200 may include a processor 210, a storage 220, an input/output (I/O) 230, and a communication port 240.
- the processor 210 may execute computer instructions (e.g., program codes) and perform functions of the processing device 140 in accordance with techniques described herein.
- the computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions described herein.
- the processor 210 may process scanning data obtained from the scanning device 110, the terminal device 130, the storage device 150, and/or any other component of the imaging system 100.
- the processor 210 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a central processing unit (CPU) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a microcontroller unit, a digital signal processor (DSP) , a field-programmable gate array (FPGA) , an advanced RISC machine (ARM) , a programmable logic device (PLD) , any circuit or processor capable of executing one or more functions, or the like, or any combination thereof.
- the computing device 200 in the present disclosure may also include multiple processors, and thus operations and/or method operations that are performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors.
- for example, if in the present disclosure the processor of the computing device 200 executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two or more different processors jointly or separately in the computing device 200 (e.g., a first processor executes operation A and a second processor executes operation B, or the first and second processors jointly execute operations A and B) .
- the storage 220 may store data/information obtained from the scanning device 110, the terminal device 130, the storage device 150, and/or any other component of the imaging system 100.
- the storage 220 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
- the storage 220 may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure.
- the storage 220 may store a program for the processing device 140 to reconstruct a target image of a subject.
- the I/O 230 may input and/or output signals, data, information, etc. In some embodiments, the I/O 230 may enable a user interaction with the processing device 140. In some embodiments, the I/O 230 may include an input device and an output device. Exemplary input devices may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Exemplary output devices may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof.
- Exemplary display devices may include a liquid crystal display (LCD) , a light-emitting diode (LED) -based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT) , a touch screen, or the like, or a combination thereof.
- the communication port 240 may be connected to a network (e.g., the network 120) to facilitate data communications.
- the communication port 240 may establish connections between the processing device 140 and the scanning device 110, the terminal device 130, and/or the storage device 150.
- the connection may be a wired connection, a wireless connection, any other communication connection that can enable data transmission and/or reception, and/or any combination of these connections.
- the wired connection may include, for example, an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof.
- the wireless connection may include, for example, a Bluetooth TM link, a Wi-Fi TM link, a WiMax TM link, a WLAN link, a ZigBee TM link, a mobile network link (e.g., 3G, 4G, 5G) , or the like, or a combination thereof.
- the communication port 240 may be and/or include a standardized communication port, such as RS232, RS485, etc.
- the communication port 240 may be a specially designed communication port.
- the communication port 240 may be designed in accordance with the digital imaging and communications in medicine (DICOM) protocol.
- FIG. 3 is a schematic diagram illustrating exemplary hardware components and/or software components of an exemplary mobile device according to some embodiments of the present disclosure.
- one or more components (e.g., the terminal device 130 and/or the processing device 140) of the imaging system 100 may be implemented on the mobile device 300.
- the mobile device 300 may include a display 310, a communication platform 320, a graphic processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, and a storage 390.
- any other suitable component including but not limited to a system bus or a controller (not shown) , may also be included in the mobile device 300.
- a mobile operating system 370 (e.g., iOS TM , Android TM , Windows Phone TM ) and one or more applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340.
- the applications 380 may include a browser or any other suitable mobile apps for receiving and rendering information relating to image processing or other information from the processing device 140.
- User interactions with the information stream may be achieved via the I/O 350 and provided to the processing device 140 and/or other components of the imaging system 100 via the network 120.
- computer hardware platforms may be used as the hardware platform (s) for one or more of the elements described herein.
- a computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device.
- a computer may also act as a server if appropriately programmed.
- FIGs. 4A and 4B are block diagrams illustrating exemplary processing devices according to some embodiments of the present disclosure.
- the processing devices 140A and 140B may be exemplary processing devices 140 as described in connection with FIG. 1.
- the processing device 140A may be configured to apply a target reconstruction model in reconstructing a target image of a subject.
- the processing device 140B may be configured to generate a target reconstruction model by model training.
- the processing devices 140A and 140B may be respectively implemented on a processing unit (e.g., the processor 210 illustrated in FIG. 2 or the CPU 340 illustrated in FIG. 3) .
- the processing device 140A may be implemented on a CPU 340 of a terminal device, and the processing device 140B may be implemented on a computing device 200.
- the processing devices 140A and 140B may be implemented on a same computing device 200 or a same CPU 340.
- the processing device 140A may include an acquisition module 410, an intermediate image generation module 420, and a target image generation module 430.
- the acquisition module 410 may be configured to obtain scan data of a subject. More descriptions regarding the obtaining of the scan data may be found elsewhere in the present disclosure. See, e.g., operation 510 in FIG. 5 and relevant descriptions thereof.
- the intermediate image generation module 420 may be configured to generate a first intermediate image of the subject based on the scan data.
- An intermediate image refers to an image generated during an image reconstruction process of the target image of the subject.
- the first intermediate image may correspond to a first count of iterations. More descriptions regarding the generation of the first intermediate image of the subject may be found elsewhere in the present disclosure. See, e.g., operation 520 in FIG. 5 and relevant descriptions thereof.
- the target image generation module 430 may be configured to generate a target image of the subject based on the first intermediate image and a target reconstruction model.
- the target image generation module 430 may generate a second intermediate image using the target reconstruction model.
- the second intermediate image may correspond to a second count of iterations, wherein the second count is greater than the first count.
- the target image generation module 430 may further generate the target image based on the second intermediate image. More descriptions regarding the generating of the target image may be found elsewhere in the present disclosure. See, e.g., operation 530 in FIG. 5 and relevant descriptions thereof.
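The two-stage flow above (a few conventional iterations, then a model that predicts the higher-iteration result) can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function names are hypothetical, and simple callables stand in for the iterative reconstructor and the trained target reconstruction model.

```python
import numpy as np

def generate_target_image(scan_data, reconstruct, model, first_count):
    """Hypothetical sketch of generating a target image from scan data."""
    # Step 1: run a small number of conventional iterations to obtain
    # the first intermediate image (corresponding to first_count iterations).
    first_intermediate = reconstruct(scan_data, first_count)
    # Step 2: the target reconstruction model predicts the second
    # intermediate image, i.e. the image a larger iteration count
    # (the second count) would have produced.
    second_intermediate = model(first_intermediate)
    # Step 3: derive the target image from the second intermediate image
    # (taken as-is here; further post-processing could follow).
    return second_intermediate
```

In use, `reconstruct` would wrap an iterative algorithm such as ART or ML-EM, and `model` would be the trained network described in FIG. 6.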
- the processing device 140B may include an acquisition module 440 and a model determination module 450.
- the acquisition module 440 may be configured to obtain a plurality of training samples.
- Each of the plurality of training samples may include a sample first intermediate image and a sample second intermediate image of a sample subject, wherein the sample second intermediate image may correspond to a higher iteration count than the sample first intermediate image.
- the acquisition module 440 may also be configured to obtain a preliminary model.
- the preliminary model may be of any type of machine learning model (e.g., a deep learning model) . More descriptions regarding the obtaining of the training samples and the preliminary model may be found elsewhere in the present disclosure. See, e.g., operation 610 or 620 in FIG. 6 and relevant descriptions thereof.
- the model determination module 450 may be configured to generate the target reconstruction model by training the preliminary model using the plurality of training samples. More descriptions regarding the generation of the target reconstruction model may be found elsewhere in the present disclosure. See, e.g., operation 630 in FIG. 6 and relevant descriptions thereof.
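The training step can be illustrated with a deliberately small sketch. The disclosure allows any machine learning model (e.g., a deep learning model); here a single linear layer trained by gradient descent stands in for the preliminary model, and random arrays stand in for the sample first/second intermediate images. All names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training samples: each pairs a sample first intermediate image
# (few iterations, noisier) with a sample second intermediate image
# (more iterations, cleaner). Real samples would be reconstructed
# from sample scan data of sample subjects.
n_samples, n_pixels = 64, 16
sample_second = rng.random((n_samples, n_pixels))
sample_first = sample_second + 0.1 * rng.standard_normal((n_samples, n_pixels))

# Preliminary model: a single linear map W. Training minimizes the MSE
# between the model output and the sample second intermediate image.
W = np.eye(n_pixels)
initial_mse = np.mean((sample_first @ W - sample_second) ** 2)
lr = 0.05
for _ in range(300):
    pred = sample_first @ W
    grad = sample_first.T @ (pred - sample_second) / n_samples
    W -= lr * grad
final_mse = np.mean((sample_first @ W - sample_second) ** 2)
```

A real target reconstruction model would replace the linear map with a deep network and train on images rather than flat vectors, but the loop structure (predict, compare to the sample second intermediate image, update) is the same.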
- the processing device 140A and/or the processing device 140B may share two or more of the modules, and any one of the modules may be divided into two or more units.
- the processing devices 140A and 140B may share a same acquisition module; that is, the acquisition module 410 and the acquisition module 440 are a same module.
- the processing device 140A and/or the processing device 140B may include one or more additional modules, such as a storage module (not shown) for storing data. In some embodiments, the processing device 140A and the processing device 140B may be integrated into one processing device 140.
- FIG. 5 is a flowchart illustrating an exemplary process for generating a target image of a subject based on scan data of the subject according to some embodiments of the present disclosure.
- a process 500 may be implemented as a set of instructions (e.g., an application) stored in the storage device 150, the storage 220, or the storage 390.
- the processing device 140A e.g., implemented on the processor 210, the CPU 340, and/or one or more modules illustrated in FIG. 4A
- the operations of the illustrated process 500 presented below are intended to be illustrative.
- the process 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 500 illustrated in FIG. 5 and described below is not intended to be limiting. In some embodiments, the process 500 may be executed by a cloud server to reduce the performance requirements and/or the workload of a local processing device.
- the processing device 140A may obtain scan data of a subject.
- a subject may be biological or non-biological.
- the subject may include a patient (or a portion thereof) , a man-made subject (e.g., a phantom) , etc., as described elsewhere in the present disclosure (e.g., FIG. 1 and the descriptions thereof) .
- the scan data of the subject may include CT image data, PET image data, MRI image data, ultrasound image data, X-ray image data, or the like, or any combination thereof.
- the scan data may be acquired by an imaging device (e.g., the scanning device 110) during a scan of the subject.
- the processing device 140A may obtain the scan data from the imaging device.
- the scan of the subject may be performed according to one or more scanning parameters.
- Exemplary scanning parameters may include one or more acquisition parameters relating to the acquisition of the scan data and/or one or more reconstruction parameters relating to the reconstruction of the scan data.
- the scanning parameter (s) may include a tube potential, a tube current, a slice thickness, a scanning time, a width of a collimator, a slice width, a beam filtering parameter, a pitch, or the like, or any combination thereof.
- an imaging device (e.g., the scanning device 110) may transmit acquired scan data to a storage device (e.g., the storage device 150, the storage 220, or any other storage device) .
- the processing device 140A may obtain the scan data from the storage device.
- the processing device 140A may generate a first intermediate image of the subject based on the scan data.
- an intermediate image refers to an image generated during an image reconstruction process of the target image of the subject.
- an intermediate image may be an intermediate product generated before the generation of the target image of the subject.
- the first intermediate image may be generated first and serve as a basis for generating the target image.
- the first intermediate image may satisfy a first termination condition.
- different image reconstruction processes may correspond to different first termination conditions. For example, a first termination condition corresponding to an iterative reconstruction process (i.e., a reconstruction process using an iterative reconstruction algorithm) may be different from a first termination condition corresponding to an analytical reconstruction process (i.e., a reconstruction process using an analytic reconstruction algorithm) .
- the processing device 140A may generate the first intermediate image by performing a first iterative reconstruction (IR) operation on the scan data.
- the processing device 140A may perform the first iterative reconstruction operation using an iterative reconstruction algorithm in image space, an iterative reconstruction algorithm in projection space, or the like, or any combination thereof.
- Exemplary iterative reconstruction algorithms in image space may include an iterative reconstruction in image space (IRIS) algorithm.
- Exemplary iterative reconstruction algorithms in projection space may include an iterative model reconstruction (IMR) algorithm, a model-based iterative reconstruction (MBIR) algorithm, etc.
- Exemplary iterative reconstruction algorithms in both image space and projection space may include an adaptive statistical iterative reconstruction (ASIR) algorithm, a sinogram affirmed iterative reconstruction (SAFIRE) algorithm, an iDose algorithm, an adaptive iterative dose reduction (AIDR) algorithm, etc.
- the processing device 140A may perform the first iterative reconstruction operation using an algebraic reconstruction technique (ART) , a simultaneous algebraic reconstruction technique (SART) , a maximum likelihood-expectation maximization (ML-EM) algorithm, or the like, or any combination thereof.
- the first iterative reconstruction operation may include a plurality of first iterations.
- the processing device 140A may generate a first updated image by updating a first image determined in a previous first iteration based on the scan data. For example, the processing device 140A may determine a difference between the scan data and projection data corresponding to the first image determined in a previous first iteration. The processing device 140A may update the first image corresponding to the previous first iteration to generate the first updated image corresponding to the current first iteration based on the difference between the projection data corresponding to the first image and the scan data.
- the processing device 140A may further determine whether the first termination condition is satisfied. In response to determining that the first termination condition is satisfied, the processing device 140A may designate the first updated image corresponding to the current first iteration as the first intermediate image. In response to determining that the first termination condition is not satisfied, the processing device 140A may proceed to a next first iteration to further update the first updated image. In some embodiments, if the current first iteration is an iteration performed first among the plurality of first iterations, the processing device 140A may generate a first initial image based on the scan data of the subject. The processing device 140A may generate the first updated image of the current first iteration by updating the first initial image. For example, the processing device 140A may generate the first initial image randomly or according to an image reconstruction algorithm other than the iterative reconstruction algorithm used in the first iterative reconstruction operation (e.g., the ART algorithm, the SART algorithm, etc. ) .
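The update loop described above can be sketched with a SIRT-style simultaneous iteration (a member of the ART family named earlier) on a toy forward model. The system matrix `A`, the image size, and the fixed iteration count are all illustrative assumptions, not the disclosed algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: A projects a flattened image to projection data.
n_pixels, n_rays = 25, 40
A = rng.random((n_rays, n_pixels))
true_image = rng.random(n_pixels)
scan_data = A @ true_image            # the measured scan data

# First initial image, then repeated first iterations: each iteration
# compares the scan data with the projection of the current first image
# and back-projects the difference to produce the first updated image.
image = np.zeros(n_pixels)
row_sums = A.sum(axis=1)              # normalization over rays
col_sums = A.sum(axis=0)              # normalization over pixels
for _ in range(50):
    residual = scan_data - A @ image  # difference in projection space
    image = image + (A.T @ (residual / row_sums)) / col_sums
```

In the disclosed process the loop would instead stop when the first termination condition is satisfied, and the final `image` would be designated as the first intermediate image.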
- the first termination condition may be set according to a default setting of the imaging system 100, manually by a user, or determined by the processing device 140A according to an actual need.
- the first termination condition may relate to one or more image parameters of the first updated image corresponding to the current first iteration.
- image parameters of an image may include a signal-to-noise ratio (SNR) , a mean square error (MSE) , a mean absolute deviation (MAD) , a peak signal-to-noise ratio (PSNR) , an image resolution, a contrast ratio, a sharpness value, or the like, or any combination thereof.
- the image parameter (s) of an image may be used to evaluate or measure the image quality of the image. Merely by way of example, if the value of an image parameter of an image exceeds or is below a preset threshold, the image may be deemed as having a desired image quality.
- the SNR may reflect a proportion of image signals and noise signals in an image.
- a larger SNR of the image may indicate that the image has more image signals and a higher image quality.
- the MSE refers to the average squared difference between an estimated value and a corresponding true value of each parameter.
- the MSE may be used to measure the deviation between a generated image (e.g., a reconstructed image) and a corresponding true image.
- a smaller MSE of the image may indicate that the image is closer to a corresponding true image and has a higher image quality.
- the MAD may be used to evaluate the distortion of an image.
- a smaller MAD of the image may indicate that the image is closer to a corresponding true image and has a higher image quality.
- the PSNR may be an index for measuring image distortion based on pixel-wise error. A larger PSNR of the image may indicate that the image is closer to a corresponding true image and has a higher image quality.
- Exemplary first termination conditions relating to the image parameter (s) of the first updated image may include that the SNR of the first updated image exceeds a first parameter threshold, that the MSE of the first updated image is below a second parameter threshold, that the MAD of the first updated image is below a third parameter threshold, or the like, or any combination thereof.
- the first, second, and third parameter thresholds may be set according to a default setting of the imaging system 100, set by a user or operator via the terminal device 130, or determined by the processing device 140A according to an actual need.
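The threshold comparisons above operate on scalar quality metrics. Minimal implementations of three of the named metrics (MSE, MAD, PSNR) might look as follows, assuming images are numpy arrays with a known peak intensity; the function names are illustrative.

```python
import numpy as np

def mse(img, ref):
    # Mean squared error: smaller means closer to the reference image.
    return np.mean((img - ref) ** 2)

def mad(img, ref):
    # Mean absolute deviation: smaller indicates less distortion.
    return np.mean(np.abs(img - ref))

def psnr(img, ref, peak=1.0):
    # Peak signal-to-noise ratio in dB: larger means less pixel-wise error.
    return 10.0 * np.log10(peak ** 2 / mse(img, ref))
```

A termination check would then compare, e.g., `mse(updated, ref)` against the second parameter threshold.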
- the first termination condition may relate to a difference between the first updated image corresponding to the current first iteration and the first image corresponding to the previous first iteration.
- the difference between the first updated image and the first image may indicate whether the current first iteration can improve the image quality of the first image generated in the previous first iteration. If the difference between the first updated image and the first image is below a first difference threshold, it may be speculated that the current first iteration results in a limited or no improvement in the image quality of the first image, and the first iterative reconstruction operation may be terminated.
- the first difference threshold may be set according to a default setting of the imaging system 100, set by a user or operator of the imaging system 100 via the terminal device 130, or determined by the processing device 140A according to an actual need.
- the difference between the first updated image and the first image may be determined based on an image similarity algorithm, including a peak signal to noise ratio (PSNR) algorithm, a structural similarity (SSIM) algorithm, a perceptual hash algorithm, a cosine similarity algorithm, a histogram-based algorithm, a Euclidean distance algorithm, or the like, or any combination thereof.
- the difference between the first updated image and the first image may be measured by a difference (denoted as D) between the SNR of the first updated image and the SNR of the first image.
- the first termination condition may include that the difference D is smaller than or equal to a certain percentage (e.g., 5%, 10%, 15%, 20%, etc. ) of the SNR of the first image.
- the first termination condition may relate to a count of first iterations that have been performed in the first iterative reconstruction operation (referred to as a first iteration count N for brevity) .
- the processing device 140A may determine that the first termination condition is satisfied if the first iteration count N is greater than or equal to a first count threshold, such as 0, 5, 6, 7, 8, 9, 10, etc. It should be noted that the first iteration count N being equal to 0 indicates that the scan data can be designated as the first intermediate image directly.
- the first count threshold may be a default setting of the imaging system 100, or set by a user or operator of the imaging system 100 via the terminal device 130, or determined by the processing device 140A according to an actual need.
- different iterative reconstruction algorithms may correspond to the same first count threshold or different first count thresholds. For example, a first iteration count corresponding to the ART algorithm may be different from a first iteration count corresponding to the SART algorithm.
- the first count threshold may be determined based on the iterative reconstruction algorithm used in the first iterative reconstruction operation. For example, a user may determine the first count threshold corresponding to the iterative reconstruction algorithm according to experience. If, in general, a reconstructed image satisfying the first termination condition can be generated by performing 5 first iterations using a specific iterative reconstruction algorithm, the first count threshold corresponding to the specific iterative reconstruction algorithm may be set to 5. In some embodiments, the first count threshold may be determined based on a target reconstruction model described in operation 530. For example, if during the training of the target reconstruction model, a sample first intermediate image of each training sample is generated by performing 10 iterations on sample scan data of the training sample, the first count threshold may be set to 10.
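The first iterative reconstruction operation and its two termination conditions described above can be sketched as follows. This is a minimal illustration assuming a simple additive (SIRT-style) update rule; the system matrix `A`, the step size, and the threshold values are hypothetical placeholders, not the reconstruction algorithm of the disclosure.

```python
import numpy as np

def first_iterative_reconstruction(scan_data, A, count_threshold=10,
                                   diff_threshold=1e-3):
    """Sketch of the first iterative reconstruction operation.

    Terminates when either the first count threshold N is reached or the
    difference between consecutive first images falls below the first
    difference threshold (limited further improvement).
    """
    image = np.zeros(A.shape[1])          # initial first image
    step = 1.0 / max(A.sum(), 1.0)        # crude step size for stability
    for n in range(count_threshold):      # count-based termination
        residual = scan_data - A @ image  # mismatch with the scan data
        updated = image + step * (A.T @ residual)
        # difference-based termination: the current iteration yields
        # limited or no improvement of the first image
        if np.linalg.norm(updated - image) < diff_threshold:
            return updated, n + 1
        image = updated
    return image, count_threshold
```

With a very small `diff_threshold` the loop behaves like a pure count-threshold termination; with a loose threshold it stops early once updates stall.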
- the processing device 140A may generate the first intermediate image using an analytic reconstruction (AR) algorithm.
- the analytic reconstruction algorithm may include a filtered back-projection (FBP) algorithm, a Feldkamp (FDK) algorithm, or the like, or any combination thereof.
- the processing device 140A may designate the generated image as the first intermediate image.
- the first termination condition may include that an image parameter of the generated image is higher than a predetermined threshold.
- the processing device 140A may generate a target image of the subject based on the first intermediate image and the target reconstruction model.
- a target reconstruction model refers to a trained model (e.g., a machine learning model) for image reconstruction.
- the target reconstruction model may be generated by training a preliminary model using a plurality of training samples.
- Each of the training samples may include a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations. For example, it is assumed that the sample second intermediate image of each training sample is generated by performing more iterations than the sample first intermediate image of the training sample.
- the preliminary model may be trained to update the sample first intermediate image of each training sample to generate an image close to the sample second intermediate image of the training sample.
- the preliminary model may be trained to learn a mechanism of iterative reconstruction, and the resulting target reconstruction model may be used to generate an image corresponding to a higher iteration count without performing an actual iterative reconstruction operation.
- the processing device 140A may input the first intermediate image into the target reconstruction model, and the target reconstruction model may output a second intermediate image.
- the processing device 140A may further generate the target image based on the second intermediate image.
- the processing device 140A may input the first intermediate image into the target reconstruction model, and the target reconstruction model may directly output the target image of the subject.
- the following descriptions are described with reference to embodiments in which the target reconstruction model outputs the second intermediate image, and this is not intended to limit the scope of the present disclosure.
- the second intermediate image may correspond to a higher iteration count than the first intermediate image.
- the iteration count corresponding to the second intermediate image refers to a predicted or simulated count of first iterations that needs to be performed on the scan data to generate an image of a same (or substantially same) image quality as the second intermediate image.
- the first intermediate image may be generated by performing N first iterations on the scan data as described in connection with operation 520.
- the second intermediate image may have a same (or substantially same) image quality as an image generated by further performing M first iterations on the first intermediate image based on the scan data, wherein M may be a positive integer. That is, an iteration count corresponding to the second intermediate image may be equal to (or substantially equal to) a sum of N and M.
- using the target reconstruction model to process the first intermediate image may be equivalent to performing M first iterations on the first intermediate image.
- a greater count of iterations may result in an image with a higher image quality, but cost more processing time and/or increase the computational complexity.
- the target reconstruction model disclosed herein may be used to generate the second intermediate image with an improved image quality without performing an actual iterative reconstruction operation, which may improve the reconstruction efficiency by, e.g., reducing the amount of computation resources and the needed time for reconstructing the target image.
- the target reconstruction model may be of any type of machine learning model.
- the processing device 140A may obtain the target reconstruction model from one or more components of the imaging system 100 (e.g., the storage device 150, the terminal device 130) or an external source via a network (e.g., the network 120) .
- the target reconstruction model may be previously trained by a computing device (e.g., the processing device 140B) , and stored in a storage device (e.g., the storage device 150, the storage 220, and/or the storage 390) of the imaging system 100.
- the processing device 140A may access the storage device and retrieve the target reconstruction model.
- the target reconstruction model may be generated by a computing device (e.g., the processing device 140B) by performing a process (e.g., process 600) for generating a target reconstruction model disclosed herein. More descriptions regarding the generation of the target reconstruction model may be found elsewhere in the present disclosure. See, e.g., FIG. 6 and relevant descriptions thereof.
- the processing device 140A may obtain the target reconstruction model from a reconstruction model library.
- the reconstruction model library may include a plurality of image reconstruction models corresponding to different subjects.
- the reconstruction model library may include a first image reconstruction model corresponding to the head of a human, a second image reconstruction model corresponding to the arms of a human, a third image reconstruction model corresponding to the chest of a human, a fourth image reconstruction model corresponding to the heart of a human, a fifth image reconstruction model corresponding to the stomach of a human, etc.
- the processing device 140A may select an image reconstruction model corresponding to the type of the subject from the reconstruction model library as the target reconstruction model.
- the type of the subject may be determined according to a scanning protocol or by analyzing the scan data. In this way, a particular image reconstruction model suitable for the subject may be used, thereby improving the reconstruction accuracy.
- Different image reconstruction models in the reconstruction model library may be trained based on different training samples. For example, for determining the first image reconstruction model, a plurality of training samples relating to the heads of humans may be used to train a first preliminary model. As another example, for determining the second image reconstruction model, a plurality of training samples relating to the arms of humans may be used to train a second preliminary model. In some embodiments, the first preliminary model and the second preliminary model may be of different types or the same type.
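Selecting a target reconstruction model from the reconstruction model library by subject type can be sketched as a simple lookup. The subject-type keys and model names below are hypothetical; in practice the library would map subject types to trained model objects rather than name strings.

```python
# Hypothetical subject-type keys and model names for illustration only.
RECONSTRUCTION_MODEL_LIBRARY = {
    "head": "first_image_reconstruction_model",
    "arm": "second_image_reconstruction_model",
    "chest": "third_image_reconstruction_model",
    "heart": "fourth_image_reconstruction_model",
    "stomach": "fifth_image_reconstruction_model",
}

def select_target_model(subject_type, library=RECONSTRUCTION_MODEL_LIBRARY):
    """Pick the image reconstruction model matching the subject type,
    e.g., a type determined from the scanning protocol or the scan data."""
    if subject_type not in library:
        raise KeyError(f"no reconstruction model for subject type {subject_type!r}")
    return library[subject_type]
```

Using a model trained on the matching subject type is what the passage above credits with improving reconstruction accuracy.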
- the processing device 140A may generate the target image by performing a second iterative reconstruction operation on the second intermediate image based on the scan data.
- the second iterative reconstruction operation may include a plurality of second iterations.
- the generating of the target image based on the second intermediate image may be performed in a manner similar to that of the first intermediate image based on the scan data.
- the processing device 140A may generate a second updated image corresponding to the current second iteration by updating a second image based on the scan data.
- the second image may be updated from the second intermediate image determined in a previous second iteration.
- in the first second iteration, the second image may be the original second intermediate image.
- the processing device 140A may then determine whether a second termination condition is satisfied. In response to determining that the second termination condition is satisfied, the processing device 140A may designate the second updated image corresponding to the current second iteration as the target image of the subject.
- the second iterative reconstruction operation may be performed according to an iterative reconstruction algorithm as described elsewhere in this disclosure (e.g., operation 520 and the relevant descriptions) .
- the iterative reconstruction algorithm used during the second iterative reconstruction operation may be the same as or different from an iterative reconstruction algorithm used during the first iterative reconstruction operation.
- the second termination condition may be similar to the first termination condition.
- the second termination condition may be set according to a default setting of the imaging system 100, manually by a user, or determined by the processing device 140A according to an actual need.
- the second termination condition may relate to one or more image parameters (e.g., an SNR, an MSE, etc. ) of the second updated image.
- the second termination condition may relate to a difference between the second updated image corresponding to the current second iteration and the second image corresponding to the previous second iteration.
- the processing device 140A may determine that the second termination condition is satisfied when the difference between the second updated image and the second image is below a second difference threshold.
- a same iterative reconstruction operation may be less effective in improving the image quality of the second intermediate image than in improving the image quality of the first intermediate image.
- the second difference threshold may be smaller than the first difference threshold.
- the second termination condition may relate to a second iteration count of second iterations that have been performed in the second iterative reconstruction operation.
- the processing device 140A may determine that the second termination condition is satisfied if the second iteration count is greater than or equal to a second count threshold, such as 1, 3, 5, 6, 7, 8, 9, etc.
- the second count threshold may be a default setting of the imaging system 100, or set by a user or operator of the imaging system 100 via the terminal device 130, or determined by the processing device 140A according to an actual need.
- the second count threshold may be determined based on the first iteration count N and the predicted or simulated count M corresponding to the target reconstruction model.
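The overall flow described above (N first iterations, one pass through the target reconstruction model standing in for M further iterations, then the second iterative reconstruction operation) might be wired together as below. The shared additive update rule and the identity "model" used in the test are illustrative stand-ins, not the disclosure's actual algorithms.

```python
import numpy as np

def run_iterations(image, scan_data, A, n_iter, step):
    # Shared additive update rule used by both reconstruction stages.
    for _ in range(n_iter):
        image = image + step * (A.T @ (scan_data - A @ image))
    return image

def reconstruct_target(scan_data, A, model, n_first=5, n_second=3):
    step = 1.0 / max(A.sum(), 1.0)
    # N first iterations -> first intermediate image
    first = run_iterations(np.zeros(A.shape[1]), scan_data, A, n_first, step)
    # model pass stands in for M further iterations -> second intermediate image
    second = model(first)
    # second iterative reconstruction refines the second intermediate image
    return run_iterations(second, scan_data, A, n_second, step)
```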
- one or more operations may be omitted and/or one or more additional operations may be added.
- operation 510 and operation 520 may be combined into a single operation.
- one or more other optional operations (e.g., a preprocessing operation) may be added to the process 500.
- the preprocessing operation on the scan data may include a denoising operation, an enhancement operation, a filtering operation, or the like, or any combination thereof.
- the process 500 may include an additional operation to transmit the target image to a terminal (e.g., a terminal 130) for display.
- the target reconstruction model may directly output the target image, and the second iterative reconstruction operation may be omitted.
- FIG. 6 is a flowchart illustrating an exemplary process for generating a target reconstruction model according to some embodiments of the present disclosure.
- a process 600 may be implemented as a set of instructions (e.g., an application) stored in the storage device 150, storage 220, and/or storage 390.
- the set of instructions may be executed by the processing device 140B (e.g., implemented on the processor 210, the CPU 340, and/or one or more modules illustrated in FIG. 4B) .
- the operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 600 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of process 600 illustrated in FIG. 6 and described below is not intended to be limiting.
- the target reconstruction model described in connection with operation 530 in FIG. 5 may be obtained according to the process 600.
- the process 600 may be performed by another device or system other than the imaging system 100, e.g., a device or system of a vendor or a manufacturer.
- the implementation of the process 600 by the processing device 140B is described as an example.
- the processing device 140B may obtain a plurality of training samples.
- Each of the plurality of training samples may include a sample first intermediate image and a sample second intermediate image of a sample subject.
- the sample subject of a training sample may be of the same type as or a different type from the subject as described in connection with operation 510 in FIG. 5.
- two subjects are deemed to be of a same type when they belong to a same type of organ or tissue.
- the subject may be the head of a patient, and the sample subject may be the head of another patient or a phantom of a human head.
- the corresponding sample first intermediate image and the sample second intermediate image may be reconstructed from sample scan data of the sample subject by performing different counts of iterations.
- the sample first intermediate image may be generated from the sample scan data by performing a first count of iterations
- the sample second intermediate image may be generated from the sample scan data by performing a second count of iterations
- the second count may be greater than the first count.
- the first count and the second count may be integers greater than 1.
- the sample second intermediate image may have a higher image quality (e.g., measured by one or more image parameters as described in connection with FIG. 5) than the sample first intermediate image.
- the processing device 140B may generate the sample first intermediate image of the training sample by performing a sample first iterative reconstruction operation on the sample scan data of the training sample until a sample first termination condition is satisfied.
- the processing device 140B may generate the sample second intermediate image of the training sample by performing a sample second iterative reconstruction operation on the sample first intermediate image until a sample second termination condition is satisfied. More descriptions regarding the generation of a training sample may be found in FIG. 7 and the descriptions thereof.
- the first counts corresponding to different training samples may be same as or different from each other.
- the second counts corresponding to different training samples may be same as or different from each other.
- the processing device 140B may generate the sample first intermediate image and/or the sample second intermediate image of a training sample using an AR algorithm. For example, the processing device 140B may generate a first sample image and a second sample image having different image qualities using the AR algorithm with different reconstruction parameters.
- the first sample image may satisfy the sample first termination condition (e.g., an SNR of the first sample image exceeding a first threshold) and be designated as the sample first intermediate image.
- the second sample image may satisfy the sample second termination condition (e.g., an SNR of the second sample image exceeding a second threshold) and be designated as the sample second intermediate image.
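Generating a training sample as a pair of intermediate images reconstructed from the same sample scan data with different iteration counts can be sketched as follows. The update rule and iteration counts are illustrative assumptions; the disclosure leaves the specific iterative reconstruction algorithm open.

```python
import numpy as np

def make_training_sample(sample_scan_data, A, first_count=10, second_count=30):
    """Build one training sample: a sample first intermediate image and a
    sample second intermediate image reconstructed from the same sample
    scan data with different iteration counts (second_count > first_count)."""
    step = 1.0 / max(A.sum(), 1.0)

    def reconstruct(n_iter):
        image = np.zeros(A.shape[1])
        for _ in range(n_iter):
            image = image + step * (A.T @ (sample_scan_data - A @ image))
        return image

    return reconstruct(first_count), reconstruct(second_count)
```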
- the processing device 140B may obtain a preliminary model.
- the preliminary model may include a machine learning model, such as a deep learning model, a neural network model, etc.
- the preliminary model may include an Alex-Net model, a VGG Net model, a Google-Net model, a Res-Net model, a Squeeze-Net model, a Seg-Net model, a convolutional neural network (CNN) model, a fully convolutional neural network (FCN) model (e.g., a U-Net model, a V-Net model) , a recurrent neural network (RNN) model, a region CNN (RCNN) model, a fast-RCNN model, a generative adversarial network (GAN) model (e.g., a pix2pix model, a Wasserstein GAN (WGAN) model) , or the like, or any combination thereof.
- the preliminary model may include a plurality of model parameters.
- Exemplary model parameters of the preliminary model may include the size of a kernel of a layer, the total count (or number) of layers, the count (or number) of nodes in each layer, a learning rate, a batch size, an epoch, a connected weight between two connected nodes, a bias vector relating to a node, a loss function, or the like, or any combination thereof.
- the parameter value (s) of one or more of the plurality of model parameters may be altered during the training of the preliminary model using the plurality of training samples.
- the parameter values of the plurality of model parameters may be initialized before the training of the preliminary model.
- the connected weight (s) and/or the bias vector (s) of the preliminary model may be initialized by assigning random values in a range, e.g., the range from -1 to 1.
- all the connected weights of the preliminary model may be assigned a same value in the range from -1 to 1, for example, 0.
- the parameter values of the preliminary model may be initialized based on a Gaussian random algorithm, a Xavier algorithm, etc.
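The initialization schemes mentioned above might be implemented as below. The variance and range constants are conventional choices for illustration, not values specified by the disclosure.

```python
import numpy as np

def init_weights(fan_in, fan_out, method="xavier", seed=0):
    """Initialize a weight matrix with one of the schemes described above."""
    rng = np.random.default_rng(seed)
    shape = (fan_in, fan_out)
    if method == "xavier":      # Xavier/Glorot uniform initialization
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-limit, limit, size=shape)
    if method == "gaussian":    # Gaussian random initialization
        return rng.normal(0.0, 0.01, size=shape)
    if method == "uniform":     # random values in the range [-1, 1]
        return rng.uniform(-1.0, 1.0, size=shape)
    return np.zeros(shape)      # all weights assigned a same value, e.g., 0
```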
- the processing device 140B may generate the target reconstruction model by training the preliminary model using the plurality of training samples.
- the preliminary model may be trained based on the plurality of training samples using a machine learning algorithm.
- the machine learning algorithm may include but is not limited to an artificial neural network algorithm, a deep learning algorithm, a decision tree algorithm, an association rule algorithm, an inductive logic programming algorithm, a support vector machine algorithm, a clustering algorithm, a Bayesian network algorithm, a reinforcement learning algorithm, a representation learning algorithm, a similarity and metric learning algorithm, a sparse dictionary learning algorithm, a genetic algorithm, a rule-based machine learning algorithm, or the like, or any combination thereof.
- the machine learning algorithm used to generate the image reconstruction model may be a supervised learning algorithm, a semi-supervised learning algorithm, an unsupervised learning algorithm, or the like.
- the target reconstruction model may be obtained by performing an iterative operation including one or more third iterations to iteratively update the parameter values of the preliminary model.
- an exemplary current third iteration of the third iteration (s) is described in the following description.
- the current third iteration may be performed based on at least some of the training samples (referred to as target training sample (s) for the convenience of descriptions) .
- a same set or different sets of training samples may be used in different third iterations in training the preliminary model.
- the processing device 140B may input the sample first intermediate image of each target training sample into an updated preliminary model determined in the previous third iteration, and the updated preliminary model may output an estimated sample second intermediate image.
- the updated preliminary model may extract one or more features including a low-level feature (e.g., an edge feature, a texture feature) , a high-level feature (e.g., a semantic feature) , and/or a complicated feature (e.g., a deep hierarchical feature) from the sample first intermediate image of the target training sample. Based on the extracted features, the updated preliminary model may generate the estimated second intermediate image corresponding to the target training sample.
- a low-level feature e.g., an edge feature, a texture feature
- a high-level feature e.g., a semantic feature
- a complicated feature e.g., a deep hierarchical feature
- the processing device 140B may further determine a value of a loss function of the updated preliminary model based on the estimated second intermediate image and the sample second intermediate image of each target training sample.
- the loss function may be used to assess a difference between an estimated value (e.g., the estimated second intermediate image (s) ) outputted by the updated preliminary model and an actual value (e.g., the sample second intermediate image (s) ) .
- the value of the loss function may be used to evaluate the accuracy and reliability of the updated preliminary model, for example, the smaller the loss function is, the more reliable the updated preliminary model is.
- Exemplary loss functions may include an L1 loss function, a focal loss function, a log loss function, a cross-entropy loss function, a Dice loss function, etc.
- the processing device 140B may further update the value (s) of the model parameter (s) of the updated preliminary model to be used in a next third iteration based on the value of the loss function according to, for example, a backpropagation algorithm.
- the one or more third iterations may be terminated if a third termination condition is satisfied in the current third iteration.
- An exemplary third termination condition may be that the value of the loss function obtained in the current third iteration is less than a loss threshold.
- Other exemplary third termination conditions may include that a certain count of third iterations is performed, that the loss function converges such that the differences of the values of the loss function obtained in consecutive third iterations are within a threshold, etc.
- the processing device 140B may designate the updated preliminary model as the target reconstruction model.
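The third-iteration training loop described above (forward pass, loss evaluation, parameter update, and the loss-threshold/iteration-count termination conditions) can be sketched with a deliberately tiny linear model standing in for the preliminary neural network. The L1-style loss and learning rate are illustrative assumptions.

```python
import numpy as np

def train_reconstruction_model(training_samples, lr=0.1, max_third_iters=200,
                               loss_threshold=1e-4):
    """Iteratively update model parameters W so that W @ first approximates
    second for each (first, second) intermediate-image pair."""
    dim = training_samples[0][0].shape[0]
    W = np.zeros((dim, dim))                      # linear stand-in model
    loss = float("inf")
    for _ in range(max_third_iters):              # count-based termination
        loss, grad = 0.0, np.zeros_like(W)
        for first, second in training_samples:
            err = W @ first - second              # estimated vs. actual image
            loss += np.abs(err).mean()            # L1-style loss
            grad += np.sign(err)[:, None] * first[None, :] / err.size
        loss /= len(training_samples)
        if loss < loss_threshold:                 # loss-threshold termination
            break
        W -= lr * grad / len(training_samples)    # gradient-descent update
    return W, loss
```

The gradient step plays the role of the backpropagation update; a real implementation would update the weights of a deep network instead of a single matrix.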
- the target reconstruction model may be transmitted to a storage device (e.g., the storage device 150, the storage 220, the storage 390, etc. ) for storage.
- one or more operations may be added or omitted.
- the processing device 140B may further test the target reconstruction model using a set of testing samples. Additionally or alternatively, the processing device 140B may update the target reconstruction model periodically or irregularly based on one or more newly-generated training samples.
- FIG. 7 is a flowchart illustrating an exemplary process for determining a training sample for a target reconstruction model according to some embodiments of the present disclosure.
- one or more operations of the process 700 may be performed to achieve at least part of operation 610 as described in connection with FIG. 6.
- each of the plurality of training samples (or a portion thereof) obtained in 610 may be determined according to the process 700.
- the process 700 may be performed by another device or system other than the imaging system 100, e.g., a device or system of a vendor or a manufacturer.
- the processing device 140B may obtain sample scan data of a sample subject.
- the sample scan data may include CT image data, PET image data, MRI image data, ultrasound image data, X-ray image data, or the like, or any combination thereof, of the sample subject.
- the sample scan data may be collected by a scanning device during a scan of the sample subject.
- the processing device 140B may obtain the sample scan data from the scanning device 110, the storage device 150, the storage 220, the storage 390, or any other storage device via the network 120.
- the sample scan data may be obtained from a database of an organization, such as a disease detection center, a hospital, a volunteer organization, etc.
- the sample scan data may be obtained from a medical database.
- Exemplary medical databases may include a Github database, an international symposium on biomedical imaging (ISBI) database, a lung image database consortium and image database resource initiative (LIDC-IDRI) database, a digital database for screening mammography (DDSM) -mammographic image analysis society (MIAS) database, a cancer imaging archive database, an OsiriX database, a neuroimaging tools and resources collaboratory (NITRC) database, etc.
- the processing device 140B may generate a sample first intermediate image by performing a sample first iterative reconstruction operation on the sample scan data until a sample first termination condition is satisfied.
- the sample first iterative reconstruction operation may include a plurality of sample first iterations.
- the generating of the sample first intermediate image may be performed in a manner similar to that of the first intermediate image as described in operation 520 in FIG. 5.
- the processing device 140B may generate a sample first updated image corresponding to the current sample first iteration by updating a sample first image determined in a previous sample first iteration based on the sample scan data.
- the processing device 140B may determine whether the sample first termination condition is satisfied. In response to determining that the sample first termination condition is satisfied, the processing device 140B may designate the sample first updated image corresponding to the current sample first iteration as the sample first intermediate image.
- the sample first iterative reconstruction operation may be performed according to an iterative reconstruction algorithm, which may be the same as or different from the iterative reconstruction algorithm used to implement the first iterative reconstruction operation as described in operation 520.
- the sample first iterative reconstruction operation may be performed using the ART algorithm, and the first iterative reconstruction operation may be performed using the SART algorithm.
- both the sample first iterative reconstruction operation and the first iterative reconstruction operation may be performed using the ML-EM algorithm.
- the sample first termination condition may be the same as or similar to the first termination condition described in operation 520.
- the sample first termination condition may relate to a count of sample first iterations (also referred to as a sample first iteration count) that have been performed in the sample first iterative reconstruction operation, a difference between the sample first updated image corresponding to the current sample first iteration and the sample first image corresponding to the previous sample first iteration, one or more image parameters of the sample first updated image corresponding to the current sample first iteration, or the like, or any combination thereof.
- the sample first termination condition may specify that the sample first iteration count exceeds a sample first count threshold.
- the sample first iteration counts of the sample first intermediate images of different training samples may be the same as each other.
- the count of sample first iterations performed on the sample scan data of a training sample may be the same as the count of first iterations performed on the scan data described in operation 520.
- a count of iterations corresponding to the sample first intermediate image during the training process of the target reconstruction model may be the same as a count of iterations corresponding to a first intermediate image during the application of the target reconstruction model.
- the sample first termination condition may specify that 10 sample first iterations need to be performed to generate the sample first intermediate image of each training sample.
- the processing device 140B may perform 10 sample first iterations on the sample scan data of the training sample.
- the first intermediate image as described in connection with FIG. 5 may be generated by performing 10 first iterations on the scan data of the subject.
- sample first intermediate images of different training samples may correspond to different sample first iteration counts. For example, 10 sample first iterations may need to be performed for a training sample A to generate a sample first intermediate image A’ with an SNR exceeding a threshold, while 20 sample first iterations may need to be performed for a training sample B to generate a sample first intermediate image B’ with an SNR exceeding the threshold.
- the sample first count threshold may be determined according to experience (e.g., set according to a default setting of the imaging system 100, or set by a user) .
- the sample scan data of different training samples may have different data qualities, and the sample first intermediate images having different imaging qualities may be generated for different training samples if the sample scan data of the training samples undergoes a same count of sample first iterations. This may have an effect on the performance of the resulting target reconstruction model to some extent.
- the sample first intermediate images of different training samples may have the same image quality or similar image qualities, which is beneficial to the training of the preliminary model, thereby improving the accuracy and/or the generalization ability of the trained target reconstruction model.
- two images may be deemed as having similar image qualities if a difference between two indicators for evaluating image quality of the two images is within a quality threshold. For example, a difference between SNRs of two images being below a threshold may indicate that the two images have similar image qualities.
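The similar-quality criterion above can be sketched as a simple comparison of quality indicators. The mean/std SNR-style indicator and the threshold value below are assumptions for illustration; any image parameter described in connection with FIG. 5 could be substituted.

```python
import numpy as np

def similar_image_quality(image_a, image_b, quality_threshold=0.5):
    """Deem two images to have similar image qualities when the difference
    between their quality indicators is within a quality threshold."""
    def snr(image):
        # crude SNR-style indicator: mean signal over standard deviation
        sd = image.std()
        return image.mean() / sd if sd > 0 else np.inf
    return abs(snr(image_a) - snr(image_b)) <= quality_threshold
```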
- the processing device 140B may generate a sample second intermediate image by performing a sample second iterative reconstruction operation on the sample scan data until a sample second termination condition is satisfied.
- the sample second iterative reconstruction operation may include a plurality of sample second iterations.
- the generation of the sample second intermediate image may be performed in a manner similar to that of the sample first intermediate image as described in operation 720.
- the processing device 140B may generate a sample second updated image corresponding to the current sample second iteration by updating a sample second image based on the sample scan data.
- the sample second image may be updated from the sample first intermediate image determined in a previous sample second iteration.
- the processing device 140B may determine whether the sample second termination condition is satisfied. In response to determining that the sample second termination condition is satisfied, the processing device 140B may designate the sample second updated image corresponding to the current sample second iteration as the sample second intermediate image.
- the sample second iterative reconstruction operation may be performed according to an iterative reconstruction algorithm, which may be the same as or different from the iterative reconstruction algorithm used to implement the sample first iterative reconstruction operation described in operation 720.
- the sample second termination condition may be the same as or different from the sample first termination condition.
- the sample second termination condition may relate to a count of sample second iterations (also referred to as sample second iteration count) that have been performed in the sample second iterative reconstruction operation, a difference between the sample second updated image corresponding to the current sample second iteration and the sample second image corresponding to the previous sample second iteration, one or more image parameters of the sample second updated image corresponding to the current sample second iteration, or the like, or any combination thereof.
- the sample second termination condition may specify that the sample second iteration count exceeds a sample second count threshold.
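For illustration only, an iterative reconstruction operation with count-based and difference-based termination conditions, as described above, may be sketched as follows (`update_fn` stands in for one update step of an iterative reconstruction algorithm, e.g., an MLEM-style update, and is an assumed interface):

```python
import numpy as np

def iterative_reconstruction(scan_data, initial_image, update_fn,
                             count_threshold=100, diff_threshold=1e-6):
    """Start from an initial image (e.g., the sample first intermediate
    image) and repeatedly update it based on the scan data until either
    the iteration count threshold is exceeded or the difference between
    the updated image and the previous image falls below a threshold."""
    image = initial_image
    for _ in range(count_threshold):
        updated = update_fn(image, scan_data)       # one iteration
        if np.abs(updated - image).max() < diff_threshold:
            return updated                          # difference condition satisfied
        image = updated
    return image                                    # count threshold reached
```

The image-parameter condition (e.g., an SNR threshold) could be checked at the same point in the loop as the difference condition.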
- the sample second iteration count corresponding to the sample second intermediate image may be the same as the iteration count corresponding to the second intermediate image described in operation 530.
- the sample first intermediate image of a training sample is generated by performing 10 sample first iterations on the sample scan data
- the sample second intermediate image of the training sample is generated by further performing 100 sample second iterations on the sample first intermediate image.
- the application of the resulting target reconstruction model is equivalent to performing 100 iterations on the first intermediate image.
- the processing device 140B may determine a training sample based on the sample first intermediate image and the sample second intermediate image.
- the sample first intermediate image and the sample second intermediate image may be designated as the training sample.
- a plurality of training samples may be generated.
- the training samples may be divided into a training set and a test set.
- the training set may be used to train the preliminary model to obtain the target reconstruction model.
- the test set may be used to test the target reconstruction model.
- a certain percentage of the plurality of training samples may be used as the training set, and the remainder of the plurality of training samples may be used as the test set.
- the certain percentage may have a random value, be set manually by a user, or be determined by the processing device 140B according to an actual need.
- the ratio of the count of training samples in the training set to the count of training samples in the test set may be 8 to 2, 9 to 1, 9.5 to 0.5, etc.
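For illustration only, dividing the training samples into a training set and a test set at a configurable ratio (e.g., 8 to 2) may be sketched as follows (the function name and the fixed seed are illustrative choices):

```python
import random

def split_samples(training_samples, train_ratio=0.8, seed=0):
    """Shuffle the training samples and split them into a training set
    and a test set according to the given ratio (default 8 to 2)."""
    samples = list(training_samples)
    random.Random(seed).shuffle(samples)   # deterministic shuffle for reproducibility
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]
```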
- one or more operations may be omitted and/or one or more additional operations may be added.
- the operation 740 may be omitted.
- one or more other optional operations (e.g., a preprocessing operation) may be added.
- the preprocessing operation on the scan data may include a denoising operation, an enhancement operation, a filtering operation, or the like, or any combination thereof.
- FIG. 8 is a schematic diagram illustrating an exemplary training sample including a sample first intermediate image and a sample second intermediate image reconstructed from sample scan data of the head of a patient by performing different counts of iterations according to some embodiments of the present disclosure.
- an image 800A is the sample first intermediate image of the head of a patient, which was reconstructed by performing a first count of iterations based on the sample scan data of the head.
- An image 800B is the sample second intermediate image of the head, which was generated by performing a second count of iterations based on the sample scan data of the head (or by further performing an additional count of iterations based on the sample first intermediate image) .
- the second count is greater than the first count, and the image 800B has a higher image quality (e.g., a lower noise level) than the image 800A.
- a plurality of training samples similar to the training sample described in FIG. 8 may be used to generate a target reconstruction model.
- the target reconstruction model may be configured to receive an initial image corresponding to an iteration count and generate an image corresponding to a higher iteration count with an improved image quality.
- the target reconstruction model may obviate the need to perform part of an actual iterative reconstruction operation, which may improve the reconstruction efficiency by, e.g., reducing the amount of computation resources and the time needed for image reconstruction.
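For illustration only, the inference pipeline described above, in which the target reconstruction model replaces the later iterations, may be sketched as follows (all function arguments are assumed interfaces, not part of the disclosure):

```python
def reconstruct(scan_data, first_intermediate_fn, target_model, refine_fn=None):
    """Generate a target image: a small count of iterations produces the
    first intermediate image, the target reconstruction model replaces the
    remaining (e.g., 100) iterations, and an optional second iterative
    reconstruction operation refines the model output into the target image."""
    first_intermediate = first_intermediate_fn(scan_data)    # e.g., 10 iterations
    second_intermediate = target_model(first_intermediate)   # learned shortcut
    if refine_fn is not None:
        return refine_fn(second_intermediate, scan_data)     # optional refinement
    return second_intermediate
```

The design choice here mirrors operation 530: the expensive middle of the iterative reconstruction is delegated to the trained model, while the data-consistent steps before (and optionally after) it remain conventional.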
- aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in an implementation combining software and hardware, which may all generally be referred to herein as a “unit, ” “module, ” or “system. ” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
- a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electromagnetic, optical, or the like, or any suitable combination thereof.
- a computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer-readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran, Perl, COBOL, PHP, ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .
- the numbers expressing quantities, properties, and so forth, used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about, ” “approximate, ” or “substantially. ”
- “about, ” “approximate, ” or “substantially” may indicate a ±20% variation of the value it describes, unless otherwise stated.
- the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment.
- the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable.
Abstract
A system and method for image reconstruction are provided. The method may include obtaining scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject (510); generating, based on the scan data, a first intermediate image of the subject (520); and generating, based on the first intermediate image and a target reconstruction model, a target image of the subject (530), wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority of Chinese Patent Application No. 201910968191.3, filed on October 12, 2019, the contents of which are hereby incorporated by reference.
The disclosure generally relates to image processing, and more particularly relates to systems and methods for image reconstruction.
With the rapid development of image information processing technology, image reconstruction technology has been widely used in biomedical engineering, aerospace technology, communication engineering, etc. Scan data collected by a scanning device (e.g., a positron emission tomography (PET) device, a computed tomography (CT) device, etc.) may be processed using an image reconstruction algorithm to generate a reconstructed image. Therefore, it is desirable to provide systems and methods for image reconstruction, thereby improving the efficiency and accuracy of the image reconstruction.
SUMMARY
According to one aspect of the present disclosure, a system for image reconstruction is provided. The system may include at least one storage device storing a set of instructions, and at least one processor configured to communicate with the at least one storage device. When executing the executable instructions, the at least one processor may be configured to direct the system to perform one or more of the following operations. The operations may include obtaining scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; generating, based on the scan data, a first intermediate image of the subject; and generating, based on the first intermediate image and a target reconstruction model, a target image of the subject, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
In some embodiments, the generating a first intermediate image of the subject based on the scan data may include performing a first iterative reconstruction operation including a plurality of first iterations. At least one first iteration of the plurality of first iterations may include generating a first updated image by updating, based on the scan data, a first image determined in a previous first iteration; determining whether a first termination condition is satisfied; and in response to determining that the first termination condition is satisfied, designating the first updated image as the first intermediate image.
In some embodiments, the first termination condition may relate to at least one of a count of first iterations that have been performed in the first iterative reconstruction operation, a difference between the first updated image and the first image, or one or more image parameters of the first updated image.
In some embodiments, the one or more image parameters may include at least one of a signal-to-noise ratio (SNR) , a mean square error (MSE) , a mean absolute deviation (MAD) , or a peak signal-to-noise ratio (PSNR) .
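For illustration only, the listed image parameters other than SNR may be computed as follows (the peak value of 255 is an assumption for 8-bit images; a reference image is assumed available for comparison):

```python
import numpy as np

def mse(x: np.ndarray, y: np.ndarray) -> float:
    """Mean square error between two images."""
    return float(np.mean((x - y) ** 2))

def mad(x: np.ndarray, y: np.ndarray) -> float:
    """Mean absolute deviation between two images."""
    return float(np.mean(np.abs(x - y)))

def psnr(x: np.ndarray, y: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, relative to the given peak value."""
    return float(10 * np.log10(peak ** 2 / mse(x, y)))
```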
In some embodiments, the generating, based on the first intermediate image and a target reconstruction model, a target image of the subject comprises: generating a second intermediate image of the subject by processing the first intermediate image using the target reconstruction model; and generating, based on the second intermediate image, the target image of the subject.
In some embodiments, the generating, based on the second intermediate image, the target image of the subject may include performing a second iterative reconstruction operation including a plurality of second iterations. At least one second iteration of the plurality of second iterations may include generating a second updated image by updating, based on the scan data, a second image, the second image being updated from the second intermediate image determined in a previous second iteration; determining whether a second termination condition is satisfied; and in response to determining that the second termination condition is satisfied, designating the second updated image as the target image of the subject.
In some embodiments, the second termination condition may relate to at least one of a count of second iterations that have been performed in the second iterative reconstruction operation, a difference between the second updated image and the second image, or one or more image parameters of the second updated image.
In some embodiments, the target reconstruction model may include a deep learning model.
In some embodiments, the target reconstruction model may be generated according to a model training process including: obtaining the plurality of training samples; and generating the target reconstruction model by training a preliminary model using the plurality of training samples.
In some embodiments, the obtaining a plurality of training samples may include: for each of the plurality of training samples, obtaining sample scan data of the sample subject corresponding to the training sample; generating the sample first intermediate image by performing a sample first iterative reconstruction operation on the sample scan data until a sample first termination condition is satisfied; and generating the sample second intermediate image by performing a sample second iterative reconstruction operation on the sample first intermediate image until a sample second termination condition is satisfied.
In some embodiments, the at least one processor is further configured to direct the system to perform operations including obtaining, from a reconstruction model library, the target reconstruction model.
According to another aspect of the present disclosure, a method for image reconstruction is provided. The method may be implemented on a computing device including at least one processor and at least one storage device. The method may include obtaining scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; generating, based on the scan data, a first intermediate image of the subject; and generating, based on the first intermediate image and a target reconstruction model, a target image of the subject, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
According to another aspect of the present disclosure, a system for image reconstruction is provided. The system may include an acquisition module configured to obtain scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; an intermediate image generation module configured to generate a first intermediate image of the subject based on the scan data; and a target image generation module configured to generate a target image of the subject based on the first intermediate image and a target reconstruction model, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
According to another aspect of the present disclosure, a non-transitory computer readable medium including at least one set of instructions for image reconstruction is provided. When executed by one or more processors of a computing device, the at least one set of instructions causes the computing device to perform a method. The method may include obtaining scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; generating, based on the scan data, a first intermediate image of the subject; and generating, based on the first intermediate image and a target reconstruction model, a target image of the subject, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
According to a second aspect of the present disclosure, a system is provided. The system may include at least one storage device storing a set of instructions for generating a target reconstruction model, and at least one processor configured to communicate with the at least one storage device. When executing the executable instructions, the at least one processor may be configured to direct the system to perform one or more of the following operations. The operations may include obtaining a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and determining the target reconstruction model by training a preliminary model using the plurality of training samples.
In some embodiments, the obtaining a plurality of training samples may include: for each of the plurality of training samples, obtaining sample scan data of the sample subject corresponding to the training sample; generating the sample first intermediate image by performing a sample first iterative reconstruction operation on the sample scan data until a sample first termination condition is satisfied; and generating the sample second intermediate image by performing a second iterative reconstruction operation on the sample first intermediate image until a sample second termination condition is satisfied.
In some embodiments, the sample first iterative reconstruction operation may include a plurality of sample first iterations, the generating the sample first intermediate image comprises: for at least one sample first iteration of the plurality of sample first iterations, generating a sample first updated image by updating, based on the sample scan data, a sample first image determined in a previous sample first iteration; determining whether the sample first termination condition is satisfied; and in response to determining that the sample first termination condition is satisfied, designating the sample first updated image as the sample first intermediate image.
In some embodiments, the sample first termination condition may relate to at least one of a count of sample first iterations that have been performed in the sample first iterative reconstruction operation, a difference between the sample first updated image and the sample first image, or one or more image parameters of the sample first updated image.
In some embodiments, the sample second iterative reconstruction operation may include a plurality of sample second iterations, the generating the sample second intermediate image comprises: for at least one sample second iteration of the plurality of sample second iterations, generating a sample second updated image by updating, based on the sample scan data, a sample second image, the sample second image being updated from the sample first intermediate image determined in a previous sample second iteration; determining whether the sample second termination condition is satisfied; and in response to determining that the sample second termination condition is satisfied, designating the sample second updated image as the sample second intermediate image.
In some embodiments, the sample second termination condition may relate to at least one of a count of sample second iterations that have been performed in the sample second iterative reconstruction operation, a difference between the sample second updated image and the sample second image, or one or more image parameters of the sample second updated image.
In some embodiments, the generating the target reconstruction model by training a preliminary model using the plurality of training samples may include initializing parameter values of the preliminary model; and generating the target reconstruction model by iteratively updating the parameter values of the preliminary model based on the plurality of training samples.
In some embodiments, the iteratively updating the parameter values of the preliminary model may include performing an iterative operation including one or more third iterations. At least one third iteration of the one or more third iterations may include: for each of at least some of the plurality of training samples, generating an estimated second intermediate image by processing the sample first intermediate image of the training sample using an updated preliminary model determined in a previous third iteration; determining a value of a loss function based on the estimated second intermediate image and the sample second intermediate image of each of the at least some of the plurality of training samples; and further updating at least some of the parameter values of the updated preliminary model to be used in a next third iteration based on the value of the loss function.
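For illustration only, this training loop may be sketched with the preliminary model reduced to a single scalar parameter and an MSE loss (a deliberate simplification; per the disclosure, a deep learning model would be used in practice):

```python
import numpy as np

def train_reconstruction_model(samples, lr=0.1, epochs=200):
    """Sketch of the iterative training operation: in each "third
    iteration", the current model maps each sample first intermediate
    image to an estimated second intermediate image, an MSE-style loss
    is computed against the sample second intermediate image, and the
    model parameter is updated based on the loss gradient."""
    gain = 1.0                                      # initialized parameter value
    for _ in range(epochs):                         # "third iterations"
        grad = 0.0
        for first, second in samples:
            estimate = gain * first                 # estimated second intermediate image
            grad += np.mean(2 * (estimate - second) * first)  # d(MSE)/d(gain)
        gain -= lr * grad / len(samples)            # update based on the loss value
    return gain
```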
According to another aspect of the present disclosure, a method for generating a target reconstruction model is provided. The method may be implemented on a computing device including at least one processor and at least one storage device. The method may include obtaining a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and determining the target reconstruction model by training a preliminary model using the plurality of training samples.
According to another aspect of the present disclosure, a system for generating a target reconstruction model is provided. The system may include an acquisition module configured to obtain a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and a model determination module configured to determine the target reconstruction model by training a preliminary model using the plurality of training samples.
According to another aspect of the present disclosure, a non-transitory computer readable medium including at least one set of instructions for generating a target reconstruction model is provided. When executed by one or more processors of a computing device, the at least one set of instructions causes the computing device to perform a method. The method may include obtaining a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and determining the target reconstruction model by training a preliminary model using the plurality of training samples.
Additional features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities, and combinations set forth in the detailed examples discussed below.
The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. The drawings are not to scale. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
FIG. 1 is a schematic diagram illustrating an exemplary imaging system according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram illustrating hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure;
FIG. 3 is a schematic diagram illustrating hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure;
FIG. 4A is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure;
FIG. 4B is a block diagram illustrating an exemplary processing device according to some embodiments of the present disclosure;
FIG. 5 is a flowchart illustrating an exemplary process for generating a target image of a subject based on scan data of the subject according to some embodiments of the present disclosure;
FIG. 6 is a flowchart illustrating an exemplary process for generating a target reconstruction model according to some embodiments of the present disclosure;
FIG. 7 is a flowchart illustrating an exemplary process for determining a training sample for a target reconstruction model according to some embodiments of the present disclosure; and
FIG. 8 is a schematic diagram illustrating an exemplary training sample including a sample first intermediate image and a sample second intermediate image reconstructed from sample scan data of the head of a patient by performing different counts of iterations according to some embodiments of the present disclosure.
The following description is presented to enable any person skilled in the art to make and use the present disclosure and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a, ” “an, ” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise, ” “comprises, ” and/or “comprising, ” “include, ” “includes, ” and/or “including” when used in this disclosure, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood that the terms “system, ” “engine, ” “unit, ” “module, ” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, the terms may be displaced by another expression if they achieve the same purpose.
Generally, the word “module, ” “unit, ” or “block, ” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions. A module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or other storage devices. In some embodiments, a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules/units/blocks configured for execution on computing devices may be provided on a computer-readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution) . Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an erasable programmable read-only memory (EPROM) . It will be further appreciated that hardware modules/units/blocks may be included in connected logic components, such as gates and flip-flops, and/or can be comprised of programmable units, such as programmable gate arrays or processors. The modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks but may be represented in hardware or firmware. In general, the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage.
The description may be applicable to a system, an engine, or a portion thereof.
It will be understood that when a unit, engine, module, or block is referred to as being “on,” “connected to,” or “coupled to” another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
These and other features and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form a part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale.
The flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of a flowchart may be implemented out of the illustrated order; the operations may instead be implemented in inverted order or simultaneously. Moreover, one or more other operations may be added to the flowcharts, and one or more operations may be removed from the flowcharts.
The term “image” in the present disclosure is used to collectively refer to image data (e.g., scan data, projection data) and/or images of various forms, including a two-dimensional (2D) image, a three-dimensional (3D) image, a four-dimensional (4D) image, etc. The terms “pixel” and “voxel” in the present disclosure are used interchangeably to refer to an element of an image. The terms “region,” “location,” and “area” in the present disclosure may refer to a location of an anatomical structure shown in an image or an actual location of the anatomical structure existing in or on a target subject’s body, since the image may indicate the actual location of a certain anatomical structure existing in or on the target subject’s body.
Provided herein are systems and methods for non-invasive biomedical imaging/treatment, such as for disease diagnosis, disease therapy, or research purposes. In some embodiments, the systems may include an imaging system. The imaging system may include a single modality system and/or a multi-modality system. The term “modality” used herein broadly refers to an imaging or treatment method or technology that gathers, generates, processes, and/or analyzes imaging information of a subject or treats the subject. The single modality system may include, for example, an ultrasound (US) imaging system, an X-ray imaging system, a computed tomography (CT) system, a magnetic resonance imaging (MRI) system, an ultrasonography system, a positron emission tomography (PET) system, an optical coherence tomography (OCT) imaging system, an intravascular ultrasound (IVUS) imaging system, a near-infrared spectroscopy (NIRS) imaging system, or the like, or any combination thereof. The multi-modality system may include, for example, an X-ray imaging-magnetic resonance imaging (X-ray-MRI) system, a positron emission tomography-X-ray imaging (PET-X-ray) system, a single photon emission computed tomography-magnetic resonance imaging (SPECT-MRI) system, a positron emission tomography-computed tomography (PET-CT) system, a C-arm system, a positron emission tomography-magnetic resonance imaging (PET-MR) system, a digital subtraction angiography-magnetic resonance imaging (DSA-MRI) system, etc. In some embodiments, the medical system may include a treatment system. The treatment system may include a treatment plan system (TPS), an image-guided radiotherapy (IGRT) system, etc. The image-guided radiotherapy (IGRT) system may include a treatment device and an imaging device. The treatment device may include a linear accelerator, a cyclotron, a synchrotron, etc., configured to perform radiotherapy on a subject.
The treatment device may include an accelerator of a species of particles including, for example, photons, electrons, protons, or heavy ions. The imaging device may include an MRI scanner, a CT scanner (e.g., a cone beam computed tomography (CBCT) scanner), a digital radiography (DR) scanner, an electronic portal imaging device (EPID), etc. It should be noted that the medical system described below is merely provided for illustration purposes, and is not intended to limit the scope of the present disclosure.
In the present disclosure, the subject may include a biological object and/or a non-biological object. The biological object may be a human being, an animal, a plant, or a specific portion, organ, and/or tissue thereof. For example, the subject may include a head, a neck, a thorax, a heart, a stomach, a blood vessel, a soft tissue, a tumor, a nodule, or the like, or any combination thereof. In some embodiments, the subject may be a man-made composition of organic and/or inorganic matters, with or without life. The terms “object” and “subject” are used interchangeably in the present disclosure.
In the present disclosure, a representation of a subject (e.g., a patient, another subject, or a portion thereof) in an image may be referred to as a “subject” for brevity. For instance, a representation of an organ or tissue (e.g., a heart, a liver, a lung) in an image may be referred to as an organ or tissue for brevity. Further, an image including a representation of a subject may be referred to as an image of the subject or an image including the subject for brevity. Still further, an operation performed on a representation of a subject in an image may be referred to as an operation performed on the subject for brevity. For instance, a segmentation of a portion of an image including a representation of an organ or tissue from the image may be referred to as a segmentation of the organ or tissue for brevity.
Iterative reconstruction algorithms have been widely used in generating medical images (e.g., a CT image, a PET image). For example, an iterative reconstruction operation including a plurality of iterations may be performed to reconstruct an image of a subject from scan data of the subject. Normally, the image quality (e.g., measured by one or more image parameters as described in connection with FIG. 5) of the reconstructed image is associated with the count of iterations performed in the image reconstruction: the more iterations performed, the higher the image quality of the reconstructed image. However, an iterative reconstruction algorithm with more iterations may cost more processing resources (e.g., time, computing space) and reduce the reconstruction efficiency. Therefore, it is desirable to provide systems and methods for image reconstruction that improve the efficiency of the image reconstruction.
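The quality-versus-iterations trade-off described above can be illustrated with a minimal MLEM-style update (MLEM is one common iterative algorithm for emission tomography); this is an illustrative sketch only, not the reconstruction procedure claimed in this disclosure, and the 2-pixel system matrix and noiseless toy data are assumptions:

```python
import numpy as np

def mlem(A, y, n_iters):
    """MLEM-style iterative reconstruction: each iteration forward-projects the
    current estimate, compares it with the measured data y, and applies a
    multiplicative correction. More iterations -> closer estimate, more cost."""
    x = np.ones(A.shape[1])            # initial image estimate
    sens = A.T @ np.ones(A.shape[0])   # sensitivity (back-projection of ones)
    for _ in range(n_iters):
        proj = A @ x                                        # forward projection
        x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))  # update
    return x

# toy 2-pixel phantom measured by a 3-ray system matrix
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                          # noiseless "scan data"

x_few = mlem(A, y, 2)                   # low iteration count: coarse image
x_many = mlem(A, y, 100)                # high iteration count: near-converged
```

On this toy problem the 100-iteration image is much closer to the true phantom than the 2-iteration image, at 50 times the per-pixel compute, which is the cost the disclosed model-based approach aims to avoid.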
An aspect of the present disclosure relates to systems and methods for image reconstruction. For example, the system may obtain scan data of a subject. The scan data may be acquired by an imaging device during a scan of the subject. The system may generate a first intermediate image of the subject based on the scan data. The system may further generate a target image of the subject based on the first intermediate image and a target reconstruction model. For example, a second intermediate image of the subject may be generated by processing the first intermediate image using the target reconstruction model. The target image of the subject may be generated based on the second intermediate image. The second intermediate image may correspond to a higher iteration count than the first intermediate image. For example, the second intermediate image may have the same image quality (e.g., measured by one or more image parameters) as, or a similar image quality to, an image generated by performing a plurality of reconstruction iterations on the first intermediate image. In other words, some embodiments of the present disclosure utilize the target reconstruction model to generate the second intermediate image with an improved image quality without performing an actual iterative reconstruction operation. This may improve the reconstruction efficiency by, e.g., reducing the amount of computation resources and the time needed for reconstructing the target image.
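The role played by the target reconstruction model can be made concrete with a linear iterative scheme (Landweber iteration), for which N further iterations collapse exactly into a single affine map applied in one pass. The closed-form `model` below is a hand-built stand-in for the trained network of the disclosure, and the system matrix, step size, and iteration counts are assumptions chosen for illustration:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy system matrix
y = A @ np.array([2.0, 3.0])                        # noiseless scan data
tau = 0.3                                           # step size (tau < 2/||A||^2)

def landweber(x, n):
    """One classical linear reconstruction iteration, applied n times."""
    for _ in range(n):
        x = x + tau * A.T @ (y - A @ x)
    return x

# first intermediate image: a few real iterations on the scan data
first = landweber(np.zeros(2), 3)

# "target reconstruction model": for this linear scheme, N further iterations
# equal exactly one affine map x -> M^N x + b, applied in a single pass --
# the role a trained network plays for nonlinear reconstruction schemes
N = 200
M = np.eye(2) - tau * A.T @ A
MN = np.linalg.matrix_power(M, N)
b = np.linalg.solve(np.eye(2) - M, (np.eye(2) - MN) @ (tau * A.T @ y))

def model(x):
    return MN @ x + b                   # single forward pass, no iterating

second = model(first)                   # "second intermediate image"
```

Here `second` matches the result of running 200 further real iterations on `first`, but is computed in one matrix-vector step; the disclosed model generalizes this idea to nonlinear, learned mappings.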
In some embodiments, the target reconstruction model may be generated by training a preliminary model using a plurality of training samples. Each of the plurality of training samples may comprise a sample first intermediate image and a sample second intermediate image of a sample subject. The sample first intermediate image and the sample second intermediate image may be reconstructed from sample scan data of the sample subject by performing different counts of iterations. During the training process, the preliminary model may be trained according to a machine learning technique to learn an optimal mechanism of iterative reconstruction, and the resulting target reconstruction model may be used to generate an image corresponding to a higher iteration count (i.e., with higher image quality) without performing an actual iterative reconstruction operation.
FIG. 1 is a schematic diagram illustrating an exemplary imaging system according to some embodiments of the present disclosure.
As illustrated in FIG. 1, the imaging system 100 may include a scanning device 110, a network 120, a terminal device 130, a processing device 140, and a storage device 150. The components in the imaging system 100 may be connected in one or more of various ways. Merely by way of example, the scanning device 110 may be connected to the processing device 140 through the network 120. As another example, the scanning device 110 may be connected to the processing device 140 directly as illustrated in FIG. 1. As a further example, the terminal device 130 may be connected to another component of the imaging system 100 (e.g., the processing device 140) via the network 120. As still a further example, the terminal device 130 may be connected to the processing device 140 directly as illustrated by the dotted arrow in FIG. 1. As still a further example, the storage device 150 may be connected to another component of the imaging system 100 (e.g., the processing device 140) directly as illustrated in FIG. 1, or through the network 120.
The scanning device 110 may be configured to acquire scan data relating to at least part of a subject. The subject may be biological or non-biological. For example, the subject may include a patient, a man-made subject, etc. As another example, the subject may include a specific portion, organ, and/or tissue of the patient. For example, the subject may include the head, the chest, the neck, the thorax, the heart, the stomach, an arm, a palm, a blood vessel, soft tissue, a tumor, nodules, or the like, or any combination thereof. In some embodiments, the scanning device 110 may include a computed tomography (CT) device, an emission computed tomography (ECT) device, a positron emission tomography (PET) device, a single-photon emission computed tomography (SPECT) device, a magnetic resonance imaging (MRI) device, a magnetic resonance spectroscopy (MRS) device, an ultrasound scanning device, or the like, or any combination thereof.
The network 120 may include any suitable network that can facilitate the exchange of information and/or data for the imaging system 100. In some embodiments, one or more components (e.g., the scanning device 110, the processing device 140, the storage device 150, or the terminal device 130) of the imaging system 100 may communicate information and/or data with one or more other components of the imaging system 100 via the network 120. For example, the processing device 140 may obtain scan data from the scanning device 110 via the network 120. As another example, the processing device 140 may obtain user instructions from the terminal device 130 via the network 120. The network 120 may include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN), etc.), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network, etc.), a cellular network (e.g., a long term evolution (LTE) network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, server computers, and/or any combination thereof. For example, the network 120 may include a cable network, a wireline network, a fiber-optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public telephone switched network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired and/or wireless network access points such as base stations and/or internet exchange points through which one or more components of the imaging system 100 may be connected to the network 120 to exchange data and/or information.
The terminal device 130 may be connected to and/or communicate with the scanning device 110, the processing device 140, and/or the storage device 150. For example, the terminal device 130 may enable user interactions between a user and the imaging system 100. For instance, the user may instruct the scanning device 110 to acquire scan data or instruct the processing device 140 to process images via the terminal device 130. In some embodiments, the terminal device 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, or the like, or any combination thereof. In some embodiments, the mobile device 131 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart lighting device, a smart electrical appliance control device, a smart monitoring device, a smart TV, a smart camera, a walkie-talkie, or the like, or any combination thereof. In some embodiments, the wearable device may include bracelets, footwear, glasses, helmets, watches, clothes, backpacks, smart accessories, or the like, or any combination thereof. In some embodiments, the mobile device may include a mobile phone, a personal digital assistant (PDA), a game device, a navigation device, a POS device, a notebook computer, a tablet computer, a desktop computer, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include Google Glass™, Oculus Rift™, HoloLens™, Gear VR™, or the like. In some embodiments, the terminal device 130 may be part of the processing device 140.
The processing device 140 may process data and/or information obtained from the scanning device 110, the terminal device 130, and/or the storage device 150. For example, the processing device 140 may reconstruct a target image of the subject by applying a target reconstruction model. As another example, the processing device 140 may generate the target reconstruction model by training a preliminary model using a plurality of training samples. In some embodiments, the generation and/or updating of the target reconstruction model may be performed on a processing device, while the application of the target reconstruction model may be performed on a different processing device. In some embodiments, the generation of the target reconstruction model may be performed on a processing device of a system different from the imaging system 100 or a server different from a server including the processing device 140 on which the application of the target reconstruction model is performed. For instance, the generation of the target reconstruction model may be performed on a first system of a vendor who provides and/or maintains such a target reconstruction model and/or has access to training samples used to generate the target reconstruction model, while image reconstruction based on the provided target reconstruction model may be performed on a second system of a client of the vendor. In some embodiments, the generation of the target reconstruction model may be performed online in response to a request for image reconstruction. In some embodiments, the generation of the target reconstruction model may be performed offline.
In some embodiments, the target reconstruction model may be generated and/or updated (or maintained) by, e.g., the manufacturer of the scanning device 110 or a vendor. For instance, the manufacturer or the vendor may load the target reconstruction model into the imaging system 100 or a portion thereof (e.g., the processing device 140) before or during the installation of the scanning device 110 and/or the processing device 140, and maintain or update the target reconstruction model from time to time (periodically or not). The maintenance or update may be achieved by installing a program stored on a storage device (e.g., a compact disc, a USB drive, etc.) or retrieved from an external source (e.g., a server maintained by the manufacturer or vendor) via the network 120. The program may include a new model (e.g., a new image reconstruction model) or a portion of a model that substitutes for or supplements a corresponding portion of the model.
In some embodiments, the processing device 140 may be a computer, a user console, a single server, a server group, etc. The server group may be centralized or distributed. In some embodiments, the processing device 140 may be local to or remote from the imaging system 100. For example, the processing device 140 may access information and/or data from the scanning device 110, the storage device 150, and/or the terminal device 130 via the network 120. As another example, the processing device 140 may be directly connected to the scanning device 110, the terminal device 130, and/or the storage device 150 to access information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or a combination thereof. In some embodiments, the processing device 140 may be implemented by a computing device 200 having one or more components as described in connection with FIG. 2.
The storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the scanning device 110, the terminal device 130, and/or the processing device 140. In some embodiments, the storage device 150 may store data and/or instructions that the processing device 140 may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage device 150 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage devices may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage devices may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random access memory (RAM). Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), a zero-capacitor RAM (Z-RAM), etc. Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), a digital versatile disk ROM, etc. In some embodiments, the storage device 150 may be implemented on a cloud platform as described elsewhere in the disclosure.
In some embodiments, the storage device 150 may be connected to the network 120 to communicate with one or more other components (e.g., the processing device 140, the terminal device 130) of the imaging system 100. One or more components of the imaging system 100 may access the data or instructions stored in the storage device 150 via the network 120. In some embodiments, the storage device 150 may be part of the processing device 140, or directly or indirectly connected to the processing device 140.
It should be noted that the above description of the imaging system 100 is merely provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. For example, the assembly and/or function of the imaging system 100 may be varied or changed according to specific implementation scenarios. In some embodiments, the imaging system 100 may include one or more additional components, and/or one or more components of the imaging system 100 described above may be omitted. Additionally or alternatively, two or more components of the imaging system 100 may be integrated into a single component, or a component of the imaging system 100 may be implemented as two or more sub-components.
FIG. 2 is a schematic diagram illustrating hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure. In some embodiments, a component of the imaging system 100 (e.g., the processing device 140) may be implemented on the computing device 200. As illustrated in FIG. 2, the computing device 200 may include a processor 210, a storage 220, an input/output (I/O) 230, and a communication port 240.
The processor 210 may execute computer instructions (e.g., program codes) and perform functions of the processing device 140 in accordance with techniques described herein. The computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions described herein. For example, the processor 210 may process scan data obtained from the scanning device 110, the terminal device 130, the storage device 150, and/or any other component of the imaging system 100. In some embodiments, the processor 210 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof.
Merely for illustration, only one processor is described in the computing device 200. However, it should be noted that the computing device 200 in the present disclosure may also include multiple processors, and thus operations and/or method operations that are performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors. For example, if in the present disclosure the processor of the computing device 200 executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two or more different processors jointly or separately in the computing device 200 (e.g., a first processor executes operation A and a second processor executes operation B, or the first and second processors jointly execute operations A and B) .
The storage 220 may store data/information obtained from the scanning device 110, the terminal device 130, the storage device 150, and/or any other component of the imaging system 100. In some embodiments, the storage 220 may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof. In some embodiments, the storage 220 may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure. For example, the storage 220 may store a program for the processing device 140 to reconstruct a target image of a subject.
The I/O 230 may input and/or output signals, data, information, etc. In some embodiments, the I/O 230 may enable a user interaction with the processing device 140. In some embodiments, the I/O 230 may include an input device and an output device. Exemplary input devices may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Exemplary output devices may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof. Exemplary display devices may include a liquid crystal display (LCD) , a light-emitting diode (LED) -based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT) , a touch screen, or the like, or a combination thereof.
The communication port 240 may be connected to a network (e.g., the network 120) to facilitate data communications. The communication port 240 may establish connections between the processing device 140 and the scanning device 110, the terminal device 130, and/or the storage device 150. The connection may be a wired connection, a wireless connection, any other communication connection that can enable data transmission and/or reception, and/or any combination of these connections. The wired connection may include, for example, an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof. The wireless connection may include, for example, a Bluetooth™ link, a Wi-Fi™ link, a WiMax™ link, a WLAN link, a ZigBee™ link, a mobile network link (e.g., 3G, 4G, 5G), or the like, or a combination thereof. In some embodiments, the communication port 240 may be and/or include a standardized communication port, such as RS232, RS485, etc. In some embodiments, the communication port 240 may be a specially designed communication port. For example, the communication port 240 may be designed in accordance with the digital imaging and communications in medicine (DICOM) protocol.
FIG. 3 is a schematic diagram illustrating exemplary hardware components and/or software components of an exemplary mobile device according to some embodiments of the present disclosure. In some embodiments, one or more components (e.g., the terminal device 130 and/or the processing device 140) of the imaging system 100 may be implemented on the mobile device 300.
As illustrated in FIG. 3, the mobile device 300 may include a display 310, a communication platform 320, a graphics processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, and a storage 390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in the mobile device 300. In some embodiments, a mobile operating system 370 (e.g., iOS™, Android™, Windows Phone™) and one or more applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340. The applications 380 may include a browser or any other suitable mobile apps for receiving and rendering information relating to image processing or other information from the processing device 140. User interactions with the information stream may be achieved via the I/O 350 and provided to the processing device 140 and/or other components of the imaging system 100 via the network 120.
To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform(s) for one or more of the elements described herein. A computer with user interface elements may be used to implement a personal computer (PC) or any other type of workstation or terminal device. A computer may also act as a server if appropriately programmed.
FIGs. 4A and 4B are block diagrams illustrating exemplary processing devices according to some embodiments of the present disclosure. The processing devices 140A and 140B may be exemplary processing devices 140 as described in connection with FIG. 1. In some embodiments, the processing device 140A may be configured to apply a target reconstruction model in reconstructing a target image of a subject. The processing device 140B may be configured to generate a target reconstruction model by model training. In some embodiments, the processing devices 140A and 140B may be respectively implemented on a processing unit (e.g., the processor 210 illustrated in FIG. 2 or the CPU 340 illustrated in FIG. 3). Merely by way of example, the processing device 140A may be implemented on a CPU 340 of a terminal device, and the processing device 140B may be implemented on a computing device 200. Alternatively, the processing devices 140A and 140B may be implemented on a same computing device 200 or a same CPU 340.
As illustrated in FIG. 4A, the processing device 140A may include an acquisition module 410, an intermediate image generation module 420, and a target image generation module 430.
The acquisition module 410 may be configured to obtain scan data of a subject. More descriptions regarding the obtaining of the scan data may be found elsewhere in the present disclosure. See, e.g., operation 510 in FIG. 5 and relevant descriptions thereof.
The intermediate image generation module 420 may be configured to generate a first intermediate image of the subject based on the scan data. An intermediate image refers to an image generated during the process of reconstructing the target image of the subject. The first intermediate image may correspond to a first count of iterations. More descriptions regarding the generation of the first intermediate image of the subject may be found elsewhere in the present disclosure. See, e.g., operation 520 in FIG. 5 and relevant descriptions thereof.
The target image generation module 430 may be configured to generate a target image of the subject based on the first intermediate image and a target reconstruction model. In some embodiments, the target image generation module 430 may generate a second intermediate image using the target reconstruction model. The second intermediate image may correspond to a second count of iterations, wherein the second count is greater than the first count. The target image generation module 430 may further generate the target image based on the second intermediate image. More descriptions regarding the generating of the target image may be found elsewhere in the present disclosure. See, e.g., operation 530 in FIG. 5 and relevant descriptions thereof.
As illustrated in FIG. 4B, the processing device 140B may include an acquisition module 440 and a model determination module 450.
The acquisition module 440 may be configured to obtain a plurality of training samples. Each of the plurality of training samples may include a sample first intermediate image and a sample second intermediate image of a sample subject, wherein the sample second intermediate image may correspond to a higher iteration count than the sample first intermediate image. In some embodiments, the acquisition module 440 may also be configured to obtain a preliminary model. The preliminary model may be of any type of machine learning model (e.g., a deep learning model) . More descriptions regarding the obtaining of the training samples and the preliminary model may be found elsewhere in the present disclosure. See, e.g., operation 610 or 620 in FIG. 6 and relevant descriptions thereof.
The model determination module 450 may be configured to generate the target reconstruction model by training the preliminary model using the plurality of training samples. More descriptions regarding the generation of the target reconstruction model may be found elsewhere in the present disclosure. See, e.g., operation 630 in FIG. 6 and relevant descriptions thereof.
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the processing device 140A and/or the processing device 140B may share two or more of the modules, and any one of the modules may be divided into two or more units. For instance, the processing devices 140A and 140B may share a same acquisition module; that is, the acquisition module 410 and the acquisition module 440 are a same module. In some embodiments, the processing device 140A and/or the processing device 140B may include one or more additional modules, such as a storage module (not shown) for storing data. In some embodiments, the processing device 140A and the processing device 140B may be integrated into one processing device 140.
FIG. 5 is a flowchart illustrating an exemplary process for generating a target image of a subject based on scan data of the subject according to some embodiments of the present disclosure. In some embodiments, a process 500 may be implemented as a set of instructions (e.g., an application) stored in the storage device 150, the storage 220, or the storage 390. The processing device 140A (e.g., implemented on the processor 210, the CPU 340, and/or one or more modules illustrated in FIG. 4A) may execute the set of instructions, and when executing the instructions, the processing device 140A may be configured to perform the process 500. The operations of the illustrated process 500 presented below are intended to be illustrative. In some embodiments, the process 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 500 illustrated in FIG. 5 and described below is not intended to be limiting. In some embodiments, the process 500 may be executed by a cloud server to reduce the performance requirements and/or the workload of a local processing device.
In 510, the processing device 140A (e.g., the acquisition module 410) may obtain scan data of a subject.
As used herein, a subject may be biological or non-biological. For example, the subject may include a patient (or a portion thereof) , a man-made subject (e.g., a phantom) , etc., as described elsewhere in the present disclosure (e.g., FIG. 1 and the descriptions thereof) . In some embodiments, the scan data of the subject may include CT image data, PET image data, MRI image data, ultrasound image data, X-ray image data, or the like, or any combination thereof.
The scan data may be acquired by an imaging device during a scan of the subject. For example, an imaging device (e.g., the scanning device 110) may be directed to perform a scan on the subject to acquire the scan data. The processing device 140A may obtain the scan data from the imaging device. In some embodiments, the scan of the subject may be performed according to one or more scanning parameters. Exemplary scanning parameters may include one or more acquisition parameters relating to the acquisition of the scan data and/or one or more reconstruction parameters relating to the reconstruction of the scan data. Merely by way of example, for a CT scan, the scanning parameter (s) may include a tube potential, a tube current, a slice thickness, a scanning time, a width of a collimator, a slice width, a beam filtering parameter, a pitch, or the like, or any combination thereof. As another example, an imaging device (e.g., the scanning device 110) may transmit acquired scan data to a storage device (e.g., the storage device 150, the storage 220, or any other storage device) for storage. The processing device 140A may obtain the scan data from the storage device.
In 520, the processing device 140A (e.g., the intermediate image generation module 420) may generate a first intermediate image of the subject based on the scan data.
As used herein, an intermediate image refers to an image generated during an image reconstruction process of the target image of the subject. In other words, an intermediate image may be an intermediate product generated before the generation of the target image of the subject. For example, the first intermediate image may be generated first and serve as a basis for generating the target image. In some embodiments, the first intermediate image may satisfy a first termination condition. In some embodiments, different image reconstruction processes may correspond to different first termination conditions. For example, a first termination condition corresponding to an iterative reconstruction process (i.e., a reconstruction process using an iterative reconstruction algorithm) may be different from a first termination condition corresponding to an analytical reconstruction process (i.e., a reconstruction process using an analytic reconstruction algorithm) .
In some embodiments, the processing device 140A may generate the first intermediate image by performing a first iterative reconstruction (IR) operation on the scan data. For example, the processing device 140A may perform the first iterative reconstruction operation using an iterative reconstruction algorithm in image space, an iterative reconstruction algorithm in projection space, or the like, or any combination thereof. Exemplary iterative reconstruction algorithms in image space may include an iterative reconstruction in image space (IRIS) algorithm. Exemplary iterative reconstruction algorithms in projection space may include an iterative model reconstruction (IMR) algorithm, a model-based iterative reconstruction (MBIR) algorithm, etc. Exemplary iterative reconstruction algorithms in both image space and projection space may include an adaptive statistical iterative reconstruction (ASIR) algorithm, a sinogram affirmed iterative reconstruction (SAFIRE) algorithm, an iDose algorithm, an adaptive iterative dose reduction (AIDR) algorithm, etc. As another example, the processing device 140A may perform the first iterative reconstruction operation using an algebraic reconstruction technique (ART) , a simultaneous algebraic reconstruction technique (SART) , a maximum likelihood-expectation maximization (ML-EM) algorithm, or the like, or any combination thereof.
In some embodiments, the first iterative reconstruction operation may include a plurality of first iterations. In a current first iteration of the plurality of first iterations, the processing device 140A may generate a first updated image by updating a first image determined in a previous first iteration based on the scan data. For example, the processing device 140A may determine a difference between the scan data and projection data corresponding to the first image determined in a previous first iteration. The processing device 140A may update the first image corresponding to the previous first iteration to generate the first updated image corresponding to the current first iteration based on the difference between the projection data corresponding to the first image and the scan data.
The processing device 140A may further determine whether the first termination condition is satisfied. In response to determining that the first termination condition is satisfied, the processing device 140A may designate the first updated image corresponding to the current first iteration as the first intermediate image. In response to determining that the first termination condition is not satisfied, the processing device 140A may proceed to a next first iteration to further update the first updated image. In some embodiments, if the current first iteration is an iteration performed first among the plurality of first iterations, the processing device 140A may generate a first initial image based on the scan data of the subject. The processing device 140A may generate the first updated image of the current first iteration by updating the first initial image. For example, the processing device 140A may generate the first initial image randomly or according to an image reconstruction algorithm other than the iterative reconstruction algorithm used in the first iterative reconstruction operation (e.g., the ART algorithm, the SART algorithm, etc. ) .
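The update-and-check loop described above may be sketched as follows. This is merely an illustrative Landweber-style iteration assuming a simple linear imaging model in which a system matrix `A` forward-projects an image; the matrix, the step size, the count threshold `n_max`, and the residual tolerance `tol` are hypothetical parameters, not taken from the present disclosure.

```python
import numpy as np

def first_iterative_reconstruction(scan_data, A, n_max=10, tol=1e-6, step=0.1):
    """Sketch of the first iterative reconstruction operation: repeatedly
    update an image so that its forward projection approaches the measured
    scan data. `A`, `step`, `n_max`, and `tol` are illustrative assumptions."""
    # First initial image: here simply zeros (the disclosure also allows a
    # random image or one produced by another reconstruction algorithm).
    image = np.zeros(A.shape[1])
    for _ in range(n_max):  # first count threshold on the iteration count N
        projection = A @ image               # forward-project the current first image
        difference = scan_data - projection  # difference driving the update
        image = image + step * (A.T @ difference)  # first updated image
        # First termination condition: here, a small residual norm.
        if np.linalg.norm(difference) < tol:
            break
    return image  # designated as the first intermediate image
```

In practice, the update rule and the termination test would follow the iterative reconstruction algorithm (e.g., ART, SART, ML-EM) and the first termination condition actually selected.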
The first termination condition may be set according to a default setting of the imaging system 100, manually by a user, or determined by the processing device 140A according to an actual need.
For example, the first termination condition may relate to one or more image parameters of the first updated image corresponding to the current first iteration. Exemplary image parameters of an image may include a signal-to-noise ratio (SNR) , a mean square error (MSE) , a mean absolute deviation (MAD) , a peak signal-to-noise ratio (PSNR) , an image resolution, a contrast ratio, a sharpness value, or the like, or any combination thereof. In some embodiments, the image parameter (s) of an image may be used to evaluate or measure the image quality of the image. Merely by way of example, if the value of an image parameter of an image exceeds or is below a preset threshold, the image may be deemed as having a desired image quality. For example, the SNR may reflect a proportion of image signals and noise signals in an image. A larger SNR of the image may indicate that the image has more image signals and a higher image quality. The MSE refers to the average squared difference between an estimated value and a corresponding true value of each parameter. In some embodiments, the MSE may be used to measure the deviation between a generated image (e.g., a reconstructed image) and a corresponding true image. A smaller value of the MSE of the image may indicate that the image has a higher image quality. The MAD may be used to evaluate the distortion of an image. A smaller MAD of the image may indicate that the image is closer to a corresponding true image and has a higher image quality. The PSNR may be an index for measuring an image distortion based on pixel errors. A larger PSNR of the image may indicate that the image is closer to a corresponding true image and has a higher image quality.
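Merely for illustration, three of the image parameters named above may be computed as follows when a corresponding true (reference) image is available; the assumed peak pixel value `max_value` is a hypothetical parameter.

```python
import numpy as np

def image_quality_metrics(image, reference, max_value=255.0):
    """Illustrative computation of MSE, MAD, and PSNR, comparing a
    reconstructed image against a corresponding true image."""
    error = np.asarray(image, dtype=float) - np.asarray(reference, dtype=float)
    mse = np.mean(error ** 2)     # mean square error: smaller indicates higher quality
    mad = np.mean(np.abs(error))  # mean absolute deviation: smaller indicates higher quality
    # PSNR: larger values indicate the image is closer to the true image.
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)
    return {"MSE": mse, "MAD": mad, "PSNR": psnr}
```

The SNR, by contrast, is typically estimated from the image itself (e.g., a region-of-interest mean over the noise standard deviation) and needs no reference image.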
Exemplary first termination conditions relating to the image parameter (s) of the first updated image may include that the SNR of the first updated image exceeds a first parameter threshold, that the MSE of the first updated image is below a second parameter threshold, that the MAD of the first updated image is below a third parameter threshold, or the like, or any combination thereof. In some embodiments, the first, second, and third parameter thresholds may be set according to a default setting of the imaging system 100, set by a user or operator via the terminal device 130, or determined by the processing device 140A according to an actual need.
As another example, the first termination condition may relate to a difference between the first updated image corresponding to the current first iteration and the first image corresponding to the previous first iteration. Merely by way of example, the difference between the first updated image and the first image may indicate whether the current first iteration can improve the image quality of the first image generated in the previous first iteration. If the difference between the first updated image and the first image is below a first difference threshold, it may be speculated that the current first iteration results in a limited or no improvement in the image quality of the first image, and the first iterative reconstruction operation may be terminated. The first difference threshold may be set according to a default setting of the imaging system 100, set by a user or operator of the imaging system 100 via the terminal device 130, or determined by the processing device 140A according to an actual need.
In some embodiments, the difference between the first updated image and the first image may be determined based on an image similarity algorithm, including a peak signal to noise ratio (PSNR) algorithm, a structural similarity (SSIM) algorithm, a perceptual hash algorithm, a cosine similarity algorithm, a histogram-based algorithm, a Euclidean distance algorithm, or the like, or any combination thereof. For example, the difference between the first updated image and the first image may be measured by a difference between the value (s) of the image parameter (s) of the first updated image and the value (s) of the image parameter (s) of the first image. Taking the SNR as an example, the difference between the first updated image and the first image may be measured by a difference (denoted as D) between the SNR of the first updated image and the SNR of the first image. The first termination condition may include that the difference D is smaller than or equal to a certain percentage (e.g., 5%, 10%, 15%, 20%, etc. ) of the SNR of the first image.
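The SNR-based test described above (the difference D not exceeding a certain percentage of the SNR of the first image) may be sketched as follows; the 5% default is merely one of the example percentages given above.

```python
def snr_difference_termination(snr_updated, snr_previous, percentage=0.05):
    """Sketch of the SNR-based first termination condition: terminate when
    the SNR gain of the first updated image over the first image is no more
    than a given percentage of the SNR of the first image."""
    difference = snr_updated - snr_previous  # the difference D
    return difference <= percentage * snr_previous
```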
As yet another example, the first termination condition may relate to a count of first iterations that have been performed in the first iterative reconstruction operation (or referred to as a first iteration count N for brevity) . The processing device 140A may determine that the first termination condition is satisfied if the first iteration count N is greater than or equal to a first count threshold, such as 0, 5, 6, 7, 8, 9, 10, etc. It should be noted that the first iteration count N being equal to 0 indicates that the scan data can be designated as the first intermediate image directly. The first count threshold may be a default setting of the imaging system 100, or set by a user or operator of the imaging system 100 via the terminal device 130, or determined by the processing device 140A according to an actual need. In some embodiments, different iterative reconstruction algorithms may correspond to the same first count threshold or different first count thresholds. For example, a first count threshold corresponding to the ART algorithm may be different from a first count threshold corresponding to the SART algorithm.
In some embodiments, the first count threshold may be determined based on the iterative reconstruction algorithm used in the first iterative reconstruction operation. For example, a user may determine the first count threshold corresponding to the iterative reconstruction algorithm according to experience. If, in general, a reconstructed image satisfying the first termination condition can be generated by performing 5 first iterations using a specific iterative reconstruction algorithm, the first count threshold corresponding to the specific iterative reconstruction algorithm may be set to 5. In some embodiments, the first count threshold may be determined based on a target reconstruction model described in operation 530. For example, if during the training of the target reconstruction model, a sample first intermediate image of each training sample is generated by performing 10 iterations on sample scan data of the training sample, the first count threshold may be set to 10.
In some embodiments, the processing device 140A may generate the first intermediate image using an analytic reconstruction (AR) algorithm. For example, the analytic reconstruction algorithm may include a filtered back-projection (FBP) algorithm, a Feldkamp-Davis-Kress (FDK) algorithm, or the like, or any combination thereof. When an image generated by using the analytic reconstruction algorithm satisfies the first termination condition, the processing device 140A may designate the generated image as the first intermediate image. For example, the first termination condition may include that an image parameter of the generated image is higher than a predetermined threshold.
In 530, the processing device 140A (e.g., the target image generation module 430) may generate a target image of the subject based on the first intermediate image and the target reconstruction model.
A target reconstruction model refers to a trained model (e.g., a machine learning model) for image reconstruction. In some embodiments, the target reconstruction model may be generated by training a preliminary model using a plurality of training samples. Each of the training samples may include a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations. For example, it is assumed that the sample second intermediate image of each training sample is generated by performing more iterations than the sample first intermediate image of the training sample. During the training process, the preliminary model may be trained to update the sample first intermediate image of each training sample to generate an image close to the sample second intermediate image of the training sample. In other words, the preliminary model may be trained to learn a mechanism of iterative reconstruction, and the resulting target reconstruction model may be used to generate an image corresponding to a higher iteration count without performing an actual iterative reconstruction operation.
For example, the processing device 140A may input the first intermediate image into the target reconstruction model, and the target reconstruction model may output a second intermediate image. The processing device 140A may further generate the target image based on the second intermediate image. As another example, the processing device 140A may input the first intermediate image into the target reconstruction model, and the target reconstruction model may directly output the target image of the subject. For illustration purposes, the following descriptions are described with reference to embodiments in which the target reconstruction model outputs the second intermediate image, and this is not intended to limit the scope of the present disclosure.
The second intermediate image may correspond to a higher iteration count than the first intermediate image. The iteration count corresponding to the second intermediate image refers to a predicted or simulated count of first iterations that needs to be performed on the scan data to generate an image of a same (or substantially same) image quality as the second intermediate image. For example, the first intermediate image may be generated by performing N first iterations on the scan data as described in connection with operation 520. The second intermediate image may have a same (or substantially same) image quality as an image generated by further performing M first iterations on the first intermediate image based on the scan data, wherein M may be a positive integer. That is, an iteration count corresponding to the second intermediate image may be equal to (or substantially equal to) a sum of N and M. In other words, using the target reconstruction model to process the first intermediate image may be equivalent to performing M first iterations on the first intermediate image. Normally, a greater count of iterations may result in an image with a higher image quality, but at the cost of more processing time and/or increased computational complexity. The target reconstruction model disclosed herein may be used to generate the second intermediate image with an improved image quality without performing an actual iterative reconstruction operation, which may improve the reconstruction efficiency by, e.g., reducing the amount of computation resources and the needed time for reconstructing the target image.
In some embodiments, the target reconstruction model may be of any type of machine learning model. In some embodiments, the processing device 140A may obtain the target reconstruction model from one or more components of the imaging system 100 (e.g., the storage device 150, the terminal device 130) or an external source via a network (e.g., the network 120) . For example, the target reconstruction model may be previously trained by a computing device (e.g., the processing device 140B) , and stored in a storage device (e.g., the storage device 150, the storage 220, and/or the storage 390) of the imaging system 100. The processing device 140A may access the storage device and retrieve the target reconstruction model. In some embodiments, the target reconstruction model may be generated by a computing device (e.g., the processing device 140B) by performing a process (e.g., process 600) for generating a target reconstruction model disclosed herein. More descriptions regarding the generation of the target reconstruction model may be found elsewhere in the present disclosure. See, e.g., FIG. 6 and relevant descriptions thereof.
In some embodiments, the processing device 140A may obtain the target reconstruction model from a reconstruction model library. The reconstruction model library may include a plurality of image reconstruction models corresponding to different subjects. For example, the reconstruction model library may include a first image reconstruction model corresponding to the head of a human, a second image reconstruction model corresponding to the arms of a human, a third reconstruction model corresponding to the chest of a human, a fourth image reconstruction model corresponding to the heart of a human, a fifth image reconstruction model corresponding to the stomach of a human, etc. The processing device 140A may select an image reconstruction model corresponding to the type of the subject from the reconstruction model library as the target reconstruction model. The type of the subject may be determined according to a scanning protocol or by analyzing the scan data. In this way, a particular image reconstruction model suitable for the subject may be used, thereby improving the reconstruction accuracy.
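Merely for illustration, the selection of a target reconstruction model from the reconstruction model library by subject type may be sketched as a simple lookup; the subject-type keys and the error handling below are assumptions, not taken from the disclosure.

```python
def select_target_model(reconstruction_model_library, subject_type):
    """Sketch of selecting an image reconstruction model from the
    reconstruction model library by subject type (e.g., determined from
    the scanning protocol). The library is modeled as a plain mapping."""
    if subject_type not in reconstruction_model_library:
        raise KeyError(f"no reconstruction model for subject type: {subject_type}")
    return reconstruction_model_library[subject_type]
```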
Different image reconstruction models in the reconstruction model library may be trained based on different training samples. For example, for determining the first image reconstruction model, a plurality of training samples relating to the heads of humans may be used to train a first preliminary model. As another example, for determining the second image reconstruction model, a plurality of training samples relating to the arms of humans may be used to train a second preliminary model. In some embodiments, the first preliminary model and the second preliminary model may be of different types or the same type.
In some embodiments, the processing device 140A may generate the target image by performing a second iterative reconstruction operation on the second intermediate image based on the scan data. The second iterative reconstruction operation may include a plurality of second iterations. The generating of the target image based on the second intermediate image may be performed in a manner similar to the generating of the first intermediate image based on the scan data. For example, in a current second iteration of the plurality of second iterations, the processing device 140A may generate a second updated image corresponding to the current second iteration by updating a second image based on the scan data. The second image may be the image determined in a previous second iteration by updating the second intermediate image. If the current second iteration is performed first among the second iterations, the second image may be the original second intermediate image. The processing device 140A may then determine whether a second termination condition is satisfied. In response to determining that the second termination condition is satisfied, the processing device 140A may designate the second updated image corresponding to the current second iteration as the target image of the subject. In some embodiments, the second iterative reconstruction operation may be performed according to an iterative reconstruction algorithm as described elsewhere in this disclosure (e.g., operation 520 and the relevant descriptions) . The iterative reconstruction algorithm used during the second iterative reconstruction operation may be the same as or different from an iterative reconstruction algorithm used during the first iterative reconstruction operation.
The second termination condition may be similar to the first termination condition. The second termination condition may be set according to a default setting of the imaging system 100, manually by a user, or determined by the processing device 140A according to an actual need. For example, the second termination condition may relate to one or more image parameters (e.g., an SNR, an MSE, etc. ) of the second updated image. As another example, the second termination condition may relate to a difference between the second updated image corresponding to the current second iteration and the second image corresponding to the previous second iteration. Merely by way of example, the processing device 140A may determine that the second termination condition is satisfied when the difference between the second updated image and the second image is below a second difference threshold. In some embodiments, since the second intermediate image has a higher image quality than the first intermediate image, a same iterative reconstruction operation may be less effective in improving the image quality of the second intermediate image than in improving the image quality of the first intermediate image. Accordingly, the second difference threshold may be smaller than the first difference threshold.
As yet another example, the second termination condition may relate to a second iteration count of second iterations that have been performed in the second iterative reconstruction operation. The processing device 140A may determine that the second termination condition is satisfied if the second iteration count is greater than or equal to a second count threshold, such as 1, 3, 5, 6, 7, 8, 9, etc. The second count threshold may be a default setting of the imaging system 100, or set by a user or operator of the imaging system 100 via the terminal device 130, or determined by the processing device 140A according to an actual need. In some embodiments, the second count threshold may be determined based on the first iteration count N and the predicted or simulated count M corresponding to the target reconstruction model. For example, it is assumed that the first intermediate image is generated by performing 20 iterations on the scan data (i.e., the first iteration count N is 20) , and the application of the target reconstruction model is equivalent to performing 170 iterations on the first intermediate image (i.e., M=170) . In such cases, the iteration count corresponding to the second intermediate image may be equal to 20+170=190. If the iteration count corresponding to the target image is equal to 200, the target image may be generated by performing 10 (i.e., 200-190=10) iterations on the second intermediate image based on the scan data.
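The worked example above (N=20, M=170, and a target iteration count of 200) reduces to simple arithmetic; the helper below is an illustrative sketch of that bookkeeping, not part of the disclosure.

```python
def remaining_second_iterations(first_count_n, model_equivalent_m, target_count):
    """The second intermediate image corresponds to N + M iterations, so the
    second iterative reconstruction operation only needs
    target_count - (N + M) further iterations (never fewer than zero)."""
    return max(0, target_count - (first_count_n + model_equivalent_m))
```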
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more operations may be omitted and/or one or more additional operations may be added. For example, operation 510 and operation 520 may be combined into a single operation. As another example, one or more other optional operations (e.g., a preprocessing operation) may be added before operation 520. In some embodiments, the preprocessing operation on the scan data may include a denoising operation, an enhancement operation, a filtering operation, or the like, or any combination thereof. As still another example, the process 500 may include an additional operation to transmit the target image to a terminal (e.g., a terminal 130) for display. In some embodiments, in 530, the target reconstruction model may directly output the target image, and the second iterative reconstruction operation may be omitted.
FIG. 6 is a flowchart illustrating an exemplary process for generating a target reconstruction model according to some embodiments of the present disclosure. In some embodiments, a process 600 may be implemented as a set of instructions (e.g., an application) stored in the storage device 150, storage 220, and/or storage 390. The processing device 140B (e.g., implemented on the processor 210, the CPU 340, and/or one or more modules illustrated in FIG. 4B) may execute the set of instructions, and when executing the instructions, the processing device 140B may be configured to perform the process 600. The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process 600 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order of the operations of process 600 illustrated in FIG. 6 and described below is not intended to be limiting.
In some embodiments, the target reconstruction model described in connection with operation 530 in FIG. 5 may be obtained according to the process 600. In some embodiments, the process 600 may be performed by a device or system other than the imaging system 100, e.g., a device or system of a vendor or a manufacturer. For illustration purposes, the implementation of the process 600 by the processing device 140B is described as an example.
In 610, the processing device 140B (e.g., the acquisition module 440) may obtain a plurality of training samples.
Each of the plurality of training samples may include a sample first intermediate image and a sample second intermediate image of a sample subject. The sample subject of a training sample may be of the same type as or a different type from the subject as described in connection with operation 510 in FIG. 5. As used herein, two subjects are deemed to be of a same type when they belong to a same type of organ or tissue. For example, the subject may be the head of a patient, and the sample subject may be the head of another patient or a phantom of a human head.
In some embodiments, for a training sample, the corresponding sample first intermediate image and the sample second intermediate image may be reconstructed from sample scan data of the sample subject by performing different counts of iterations. For example, the sample first intermediate image may be generated from the sample scan data by performing a first count of iterations, the sample second intermediate image may be generated from the sample scan data by performing a second count of iterations, and the second count may be greater than the first count. The first count and the second count may be integers greater than 1. In such cases, the sample second intermediate image may have a higher image quality (e.g., measured by one or more image parameters as described in connection with FIG. 5) than the sample first intermediate image.
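Merely for illustration, the generation of one training sample (a sample first intermediate image and a sample second intermediate image reconstructed from the same sample scan data with different iteration counts) may be sketched as follows, again assuming a simple linear imaging model with a hypothetical system matrix `A`; the counts and step size are illustrative assumptions.

```python
import numpy as np

def make_training_sample(sample_scan_data, A, first_count=10, second_count=50, step=0.5):
    """Sketch of producing one training sample: reconstruct the sample first
    and sample second intermediate images from the same sample scan data
    with different iteration counts (second_count > first_count)."""
    def iterate(count):
        image = np.zeros(A.shape[1])
        for _ in range(count):
            image = image + step * (A.T @ (sample_scan_data - A @ image))
        return image

    sample_first = iterate(first_count)    # lower iteration count
    sample_second = iterate(second_count)  # higher iteration count, higher image quality
    return sample_first, sample_second
```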
In some embodiments, for a training sample, the processing device 140B may generate the sample first intermediate image of the training sample by performing a sample first iterative reconstruction operation on the sample scan data of the training sample until a sample first termination condition is satisfied. The processing device 140B may generate the sample second intermediate image of the training sample by performing a sample second iterative reconstruction operation on the sample first intermediate image until a sample second termination condition is satisfied. More descriptions regarding the generation of a training sample may be found in FIG. 7 and the descriptions thereof. In some embodiments, the first counts corresponding to different training samples may be the same as or different from each other. The second counts corresponding to different training samples may be the same as or different from each other.
In some embodiments, the processing device 140B may generate the sample first intermediate image and/or the sample second intermediate image of a training sample using an AR algorithm. For example, the processing device 140B may generate a first sample image and a second sample image having different image qualities using the AR algorithm with different reconstruction parameters. The first sample image may satisfy the sample first termination condition (for example, an SNR of the first sample image exceeds a first threshold) and be designated as the sample first intermediate image. The second sample image may satisfy the sample second termination condition (for example, an SNR of the second sample image exceeds a second threshold) and be designated as the sample second intermediate image.
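Merely for illustration, the generation of such an image pair may be sketched as follows, using a toy ML-EM update on a hypothetical 3-ray, 2-pixel system. The system matrix, the noiseless scan data, and the iteration counts of 10 and 100 are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def mlem_reconstruct(system_matrix, scan_data, n_iterations, image=None):
    """A minimal ML-EM loop: repeatedly update the image estimate so its
    forward projection better matches the scan data."""
    if image is None:
        image = np.ones(system_matrix.shape[1])        # uniform initial estimate
    sensitivity = system_matrix.T @ np.ones(system_matrix.shape[0])
    for _ in range(n_iterations):
        projection = np.maximum(system_matrix @ image, 1e-12)
        image = image * (system_matrix.T @ (scan_data / projection)) / sensitivity
    return image

# Hypothetical 3-ray system observing a 2-pixel "sample subject"
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_image = np.array([2.0, 5.0])
y = A @ true_image                                     # noiseless sample scan data

first_count, second_count = 10, 100                    # illustrative iteration counts
sample_first = mlem_reconstruct(A, y, first_count)
# The sample second intermediate image continues the iterations from the first one
sample_second = mlem_reconstruct(A, y, second_count - first_count, image=sample_first)
training_sample = (sample_first, sample_second)
```

In practice, the images would be reconstructed from scan data of a real sample subject using the system model of the actual scanning device; the higher-iteration image serves as the training target.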
In 620, the processing device 140B (e.g., the acquisition module 440) may obtain a preliminary model.
In some embodiments, the preliminary model may include a machine learning model, such as a deep learning model, a neural network model, etc. For example, the preliminary model may include an Alex-Net model, a VGG Net model, a Google-Net model, a Res-Net model, a Squeeze-Net model, a Seg-Net model, a convolutional neural network (CNN) model, a fully convolutional neural network (FCN) model (e.g., a U-Net model, a V-Net model) , a recurrent neural network (RNN) model, a region CNN (RCNN) model, a fast-RCNN model, a generative adversarial network (GAN) model (e.g., a pix2pix model, a Wasserstein GAN (WGAN) model) , or the like, or any combination thereof.
In some embodiments, the preliminary model may include a plurality of model parameters. Exemplary model parameters of the preliminary model may include the size of a kernel of a layer, the total count (or number) of layers, the count (or number) of nodes in each layer, a learning rate, a batch size, an epoch, a connected weight between two connected nodes, a bias vector relating to a node, a loss function, or the like, or any combination thereof. The parameter value (s) of one or more of the plurality of model parameters may be altered during the training of the preliminary model using the plurality of training samples. The parameter values of the plurality of model parameters may be initialized before the training of the preliminary model. For example, the connected weight (s) and/or the bias vector (s) of the preliminary model may be initialized by assigning random values in a range, e.g., the range from -1 to 1. As another example, all the connected weights of the preliminary model may be assigned with a same value in the range from -1 to 1, for example, 0. In some embodiments, the parameter values of the preliminary model may be initialized based on a Gaussian random algorithm, a Xavier algorithm, etc.
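A minimal sketch of the initialization strategies mentioned above (the layer sizes are arbitrary, and the Xavier bound shown follows the common Glorot uniform variant):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def init_uniform(shape, low=-1.0, high=1.0):
    # Random initialization in a fixed range, e.g., [-1, 1]
    return rng.uniform(low, high, size=shape)

def init_xavier(fan_in, fan_out):
    # Xavier (Glorot) uniform initialization: bound scaled by the layer widths
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

weights = init_uniform((64, 64))   # connected weights drawn from [-1, 1]
biases = np.zeros(64)              # bias vector initialized to a constant (0)
xavier_weights = init_xavier(64, 64)
```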
In 630, the processing device 140B (e.g., the model determination module 450) may generate the target reconstruction model by training the preliminary model using the plurality of training samples.
For example, the preliminary model may be trained based on the plurality of training samples using a machine learning algorithm. The machine learning algorithm may include but is not limited to an artificial neural network algorithm, a deep learning algorithm, a decision tree algorithm, an association rule algorithm, an inductive logic programming algorithm, a support vector machine algorithm, a clustering algorithm, a Bayesian network algorithm, a reinforcement learning algorithm, a representation learning algorithm, a similarity and metric learning algorithm, a sparse dictionary learning algorithm, a genetic algorithm, a rule-based machine learning algorithm, or the like, or any combination thereof. The machine learning algorithm used to generate the target reconstruction model may be a supervised learning algorithm, a semi-supervised learning algorithm, an unsupervised learning algorithm, or the like.
In some embodiments, the target reconstruction model may be obtained by performing an iterative operation including one or more third iterations to iteratively update the parameter values of the preliminary model. For illustration purposes, an exemplary current third iteration of the third iteration (s) is described in the following description. The current third iteration may be performed based on at least some of the training samples (referred to as target training sample (s) for the convenience of descriptions) . In some embodiments, a same set or different sets of training samples may be used in different third iterations in training the preliminary model.
In the current third iteration, the processing device 140B may input the sample first intermediate image of each target training sample into an updated preliminary model determined in the previous third iteration, and the updated preliminary model may output an estimated sample second intermediate image. For example, for a target training sample, the updated preliminary model may extract one or more features including a low-level feature (e.g., an edge feature, a texture feature), a high-level feature (e.g., a semantic feature), and/or a complicated feature (e.g., a deep hierarchical feature) from the sample first intermediate image of the target training sample. Based on the extracted features, the updated preliminary model may generate the estimated sample second intermediate image corresponding to the target training sample.
The processing device 140B may further determine a value of a loss function of the updated preliminary model based on the estimated sample second intermediate image and the sample second intermediate image of each target training sample. The loss function may be used to assess a difference between an estimated value (e.g., the estimated sample second intermediate image(s)) outputted by the updated preliminary model and an actual value (e.g., the sample second intermediate image(s)). The value of the loss function may be used to evaluate the accuracy and reliability of the updated preliminary model; for example, the smaller the value of the loss function is, the more reliable the updated preliminary model is. Exemplary loss functions may include an L1 loss function, a focal loss function, a log loss function, a cross-entropy loss function, a Dice loss function, etc. The processing device 140B may further update the value(s) of the model parameter(s) of the updated preliminary model to be used in a next third iteration based on the value of the loss function according to, for example, a backpropagation algorithm.
In some embodiments, the one or more third iterations may be terminated if a third termination condition is satisfied in the current third iteration. An exemplary third termination condition may be that the value of the loss function obtained in the current third iteration is less than a loss threshold. Other exemplary third termination conditions may include that a certain count of third iterations is performed, that the loss function converges such that the differences of the values of the loss function obtained in consecutive third iterations are within a threshold, etc. If the third termination condition is satisfied in the current third iteration, the processing device 140B may designate the updated preliminary model as the target reconstruction model. In some embodiments, the target reconstruction model may be transmitted to a storage device (e.g., the storage device 150, the storage 220, the storage 390, etc. ) for storage.
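The third iterations described above may be sketched with a deliberately simplified stand-in model: a single scalar parameter trained by gradient descent with a mean-squared-error loss. The synthetic images, learning rate, and thresholds are illustrative assumptions; a real implementation would use a neural network and one of the loss functions listed above:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
first_images = rng.random((8, 16))       # sample first intermediate images
second_images = 2.0 * first_images       # sample second intermediate images (targets)

w = 0.5                                  # initialized model parameter
learning_rate, loss_threshold, max_iterations = 0.5, 1e-4, 500
loss_history = []
for _ in range(max_iterations):                       # the "third iterations"
    estimated = w * first_images                      # estimated sample second intermediate images
    loss = float(np.mean((estimated - second_images) ** 2))
    loss_history.append(loss)
    if loss < loss_threshold:                         # third termination condition
        break
    # gradient of the loss w.r.t. w, used to update the parameter ("backpropagation")
    grad = 2.0 * float(np.mean((estimated - second_images) * first_images))
    w -= learning_rate * grad
target_reconstruction_parameter = w                   # designated once training terminates
```

The same structure applies when `w` is replaced by the parameters of a deep network: evaluate the loss on the target training samples, check the termination condition, then update the parameters.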
It should be noted that the above description regarding the process 600 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more operations may be added or omitted. For example, after the target reconstruction model is generated, the processing device 140B may further test the target reconstruction model using a set of testing samples. Additionally or alternatively, the processing device 140B may update the target reconstruction model periodically or irregularly based on one or more newly-generated training samples.
FIG. 7 is a flowchart illustrating an exemplary process for determining a training sample for a target reconstruction model according to some embodiments of the present disclosure. In some embodiments, one or more operations of the process 700 may be performed to achieve at least part of operation 610 as described in connection with FIG. 6. For example, each of the plurality of training samples (or a portion thereof) obtained in 610 may be determined according to the process 700. In some embodiments, the process 700 may be performed by another device or system other than the imaging system 100, e.g., a device or system of a vendor or a manufacturer.
In 710, the processing device 140B (e.g., the acquisition module 440) may obtain sample scan data of a sample subject.
For example, the sample scan data may include CT image data, PET image data, MRI image data, ultrasound image data, X-ray image data, or the like, or any combination thereof, of the sample subject. The sample scan data may be collected by a scanning device during a scan of the sample subject. In some embodiments, the processing device 140B may obtain the sample scan data from the scanning device 110, the storage device 150, the storage 220, the storage 390, or any other storage device via the network 120. For example, the sample scan data may be obtained from a database of an organization, such as a disease detection center, a hospital, a volunteer organization, etc. As another example, the sample scan data may be obtained from a medical database. Exemplary medical databases may include a Github database, an international symposium on biomedical imaging (ISBI) database, a lung image database consortium and image database resource initiative (LIDC-IDRI) database, a digital database for screening mammography (DDSM) -mammographic image analysis society (MIAS) database, a cancer imaging archive database, an OsiriX database, a neuroimaging tools and resources collaboratory (NITRC) database, etc.
In 720, the processing device 140B (e.g., the model determination module 450) may generate a sample first intermediate image by performing a sample first iterative reconstruction operation on the sample scan data until a sample first termination condition is satisfied. The sample first iterative reconstruction operation may include a plurality of sample first iterations.
The generation of the sample first intermediate image may be performed in a manner similar to that of the first intermediate image as described in operation 520 in FIG. 5. For example, in a current sample first iteration of the plurality of sample first iterations, the processing device 140B may generate a sample first updated image corresponding to the current sample first iteration by updating a sample first image determined in a previous sample first iteration based on the sample scan data. The processing device 140B may determine whether the sample first termination condition is satisfied. In response to determining that the sample first termination condition is satisfied, the processing device 140B may designate the sample first updated image corresponding to the current sample first iteration as the sample first intermediate image.
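The update-then-check loop described above may be sketched generically; here `update_step` is a hypothetical stand-in for one ML-EM or ART update based on the sample scan data, and the termination condition is that the maximum pixel-wise change falls below a threshold:

```python
import numpy as np

def iterate_until_stable(initial_image, update_step, diff_threshold=1e-6, max_count=1000):
    """Run sample first iterations: update the image determined in the previous
    iteration, then check the sample first termination condition."""
    image = initial_image
    for count in range(1, max_count + 1):
        updated = update_step(image)
        if np.max(np.abs(updated - image)) < diff_threshold:
            return updated, count        # designate the updated image as the intermediate image
        image = updated
    return image, max_count

# Toy update step that contracts toward a fixed point (illustrative only)
target = np.array([2.0, 5.0])
step = lambda img: img + 0.5 * (target - img)
sample_first_intermediate, iteration_count = iterate_until_stable(np.ones(2), step)
```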
In some embodiments, the sample first iterative reconstruction operation may be performed according to an iterative reconstruction algorithm, which may be the same as or different from the iterative reconstruction algorithm used to implement the first iterative reconstruction operation as described in operation 520. For example, the sample first iterative reconstruction operation may be performed using the ART algorithm, and the first iterative reconstruction operation may be performed using the SART algorithm. As another example, both the sample first iterative reconstruction operation and the first iterative reconstruction operation may be performed using the ML-EM algorithm.
In some embodiments, the sample first termination condition may be the same as or similar to the first termination condition described in operation 520. For example, the sample first termination condition may relate to a count of sample first iterations (also referred to as a sample first iteration count) that have been performed in the sample first iterative reconstruction operation, a difference between the sample first updated image corresponding to the current sample first iteration and the sample first image corresponding to the previous sample first iteration, one or more image parameters of the sample first updated image corresponding to the current sample first iteration, or the like, or any combination thereof.
In some embodiments, the sample first termination condition may specify that the sample first iteration count exceeds a sample first count threshold. In such cases, the sample first iteration counts of the sample first intermediate images of different training samples may be the same as each other. Additionally or alternatively, the sample first iteration count of sample first iterations that needs to be performed on the sample scan data of a training sample may be the same as the first iteration count of first iterations that needs to be performed on the scan data described in operation 520. In other words, a count of iterations corresponding to the sample first intermediate image during the training process of the target reconstruction model may be the same as a count of iterations corresponding to a first intermediate image during the application of the target reconstruction model. For example, the sample first termination condition may specify that 10 sample first iterations need to be performed to generate the sample first intermediate image of each training sample. In the generation of the sample first intermediate image of each training sample, the processing device 140B may perform 10 sample first iterations on the sample scan data of the training sample. During an application process of the resulting target reconstruction model, the first intermediate image as described in connection with FIG. 5 may be generated by performing 10 first iterations on the scan data of the subject.
In some embodiments, when the sample first termination condition is not related to the sample first iteration count (e.g., the sample first termination condition is related to the difference between the sample first updated image corresponding to the current sample first iteration and the sample first image corresponding to the previous sample first iteration, one or more image parameters of the sample first updated image corresponding to the current sample first iteration) , different sample first intermediate images of different training samples may correspond to different sample first iteration counts. For example, 10 sample first iterations may need to be performed for a training sample A to generate a sample first intermediate image A’ with an SNR exceeding a threshold, while 20 sample first iterations may need to be performed for a training sample B to generate a sample first intermediate image B’ with an SNR exceeding the threshold.
In some embodiments, when the sample first termination condition is related to the sample first iteration count, for example, when the sample first termination condition is that the sample first iteration count is greater than or equal to a sample first count threshold, the sample first count threshold may be determined according to experience (e.g., set according to a default setting of the imaging system 100, or set by a user). On some occasions, the sample scan data of different training samples may have different data qualities, and sample first intermediate images having different image qualities may be generated for different training samples if the sample scan data of the training samples undergoes the same count of sample first iterations. This may affect the performance of the resulting target reconstruction model to some extent. By using a sample first termination condition relating to the image parameter(s) of the sample first updated image and/or the difference between the sample first updated image and the sample first image, the sample first intermediate images of different training samples may have the same image quality or similar image qualities, which is beneficial to the training of the preliminary model, thereby improving the accuracy and/or the generalization ability of the trained target reconstruction model. In some embodiments, two images may be deemed as having similar image qualities if a difference between two indicators for evaluating the image quality of the two images is within a quality threshold. For example, a difference between the SNRs of two images below a threshold may indicate that the two images have similar image qualities.
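One possible form of the similar-quality check may be sketched as follows; the mean-over-standard-deviation SNR indicator and the quality threshold used here are illustrative assumptions, not definitions from the disclosure:

```python
import numpy as np

def snr_estimate(image):
    # A rough SNR indicator (hypothetical definition): mean signal over noise spread
    return float(np.mean(image) / np.std(image))

def similar_quality(image_a, image_b, quality_threshold=2.0):
    # Two images are deemed to have similar image qualities when the difference
    # between their quality indicators is within the quality threshold
    return abs(snr_estimate(image_a) - snr_estimate(image_b)) <= quality_threshold

rng = np.random.default_rng(seed=2)
base = np.full((32, 32), 10.0)
image_a = base + rng.normal(0.0, 1.0, base.shape)   # comparable noise levels
image_b = base + rng.normal(0.0, 1.1, base.shape)
image_c = base + rng.normal(0.0, 5.0, base.shape)   # much noisier image
```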
In 730, the processing device 140B (e.g., the model determination module 450) may generate a sample second intermediate image by performing a sample second iterative reconstruction operation on the sample scan data until a sample second termination condition is satisfied. The sample second iterative reconstruction operation may include a plurality of sample second iterations.
The generation of the sample second intermediate image may be performed in a manner similar to that of the sample first intermediate image as described in operation 720. For example, in a current sample second iteration of the plurality of sample second iterations, the processing device 140B may generate a sample second updated image corresponding to the current sample second iteration by updating a sample second image based on the sample scan data. The sample second image may be updated from the sample first intermediate image determined in a previous sample second iteration. The processing device 140B may determine whether the sample second termination condition is satisfied. In response to determining that the sample second termination condition is satisfied, the processing device 140B may designate the sample second updated image corresponding to the current sample second iteration as the sample second intermediate image.
In some embodiments, the sample second iterative reconstruction operation may be performed according to an iterative reconstruction algorithm, which may be the same as or different from the iterative reconstruction algorithm used to implement the sample first iterative reconstruction operation described in operation 720. In some embodiments, the sample second termination condition may be the same as or different from the sample first termination condition. For example, the sample second termination condition may relate to a count of sample second iterations (also referred to as a sample second iteration count) that have been performed in the sample second iterative reconstruction operation, a difference between the sample second updated image corresponding to the current sample second iteration and the sample second image corresponding to the previous sample second iteration, one or more image parameters of the sample second updated image corresponding to the current sample second iteration, or the like, or any combination thereof.
In some embodiments, the sample second termination condition may specify that the sample second iteration count exceeds a sample second count threshold. In such cases, the sample second iteration count corresponding to the sample second intermediate image may be the same as the iteration count corresponding to the second intermediate image described in operation 530. For example, it is assumed that the sample first intermediate image of a training sample is generated by performing 10 sample first iterations on the sample scan data, and the sample second intermediate image of the training sample is generated by further performing 100 sample second iterations on the sample first intermediate image. The application of the resulting target reconstruction model is equivalent to performing 100 iterations on the first intermediate image.
In 740, the processing device 140B (e.g., the model determination module 450) may determine a training sample based on the sample first intermediate image and the sample second intermediate image.
For example, the sample first intermediate image and the sample second intermediate image may be designated as the training sample.
In some embodiments, a plurality of training samples may be generated. The training samples may be divided into a training set and a test set. The training set may be used to train the preliminary model to obtain the target reconstruction model. The test set may be used to test the target reconstruction model. Merely by way of example, a certain percentage of the plurality of training samples may be used as the training set, and the remainder of the plurality of training samples may be used as the test set. The certain percentage may have a random value, be set manually by a user, or be determined by the processing device 140B according to an actual need. For example, the ratio of the count of training samples in the training set to the count of training samples in the test set may be 8 to 2, 9 to 1, 9.5 to 0.5, etc.
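The division at a ratio such as 8 to 2 may be sketched as follows (the shuffling seed and the placeholder sample pairs are arbitrary):

```python
import random

def split_samples(samples, train_ratio=0.8, seed=0):
    """Shuffle the training samples and divide them into a training set and a
    test set at a configurable ratio (e.g., 8 to 2)."""
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Placeholder (first intermediate image, second intermediate image) pairs
samples = [(f"first_{i}", f"second_{i}") for i in range(10)]
train_set, test_set = split_samples(samples, train_ratio=0.8)
```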
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more operations may be omitted and/or one or more additional operations may be added. For example, operation 740 may be omitted. As another example, one or more other optional operations (e.g., a preprocessing operation) may be added before operation 720. In some embodiments, the preprocessing operation on the sample scan data may include a denoising operation, an enhancement operation, a filtering operation, or the like, or any combination thereof.
FIG. 8 is a schematic diagram illustrating an exemplary training sample including a sample first intermediate image and a sample second intermediate image reconstructed from sample scan data of the head of a patient by performing different counts of iterations according to some embodiments of the present disclosure. As shown in FIG. 8, an image 800A is the sample first intermediate image of the head of a patient, which was reconstructed by performing a first count of iterations based on the sample scan data of the head. An image 800B is the sample second intermediate image of the head, which was generated by performing a second count of iterations based on the sample scan data of the head (or by further performing an additional count of iterations based on the sample first intermediate image) . The second count is greater than the first count, and the image 800B has a higher image quality (e.g., a lower noise level) than the image 800A.
A plurality of training samples similar to the training sample described in FIG. 8 may be used to generate a target reconstruction model. The target reconstruction model may be configured to receive an initial image corresponding to an iteration count and generate an image corresponding to a higher iteration count with an improved image quality. In other words, the target reconstruction model may obviate the need of performing an actual iterative reconstruction operation, which may improve the reconstruction efficiency by, e.g., reducing the computation resources and time needed for image reconstruction.
Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure and are within the spirit and scope of the exemplary embodiments of this disclosure.
Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment, ” “an embodiment, ” and/or “some embodiments” mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the present disclosure.
Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented as entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of software and hardware, all of which may generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
A computer-readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electromagnetic, optical, or the like, or any suitable combination thereof. A computer-readable signal medium may be any computer-readable medium that is not a computer-readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran, Perl, COBOL, PHP, ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS).
Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations, therefore, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, e.g., an installation on an existing server or mobile device.
Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof to streamline the disclosure aiding in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment.
In some embodiments, the numbers expressing quantities, properties, and so forth, used to describe and claim certain embodiments of the application are to be understood as being modified in some instances by the term “about,” “approximate,” or “substantially.” For example, “about,” “approximate” or “substantially” may indicate ±20% variation of the value it describes, unless otherwise stated. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable.
Each of the patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein is hereby incorporated herein by this reference in its entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting effect as to the broadest scope of the claims now or later associated with the present document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail.
In closing, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the application. Other modifications that may be employed may be within the scope of the application. Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the application may be utilized in accordance with the teachings herein. Accordingly, embodiments of the present application are not limited to that precisely as shown and described.
Claims (25)
- A system for image reconstruction, comprising: at least one storage device storing a set of instructions; and at least one processor configured to communicate with the at least one storage device, wherein when executing the set of instructions, the at least one processor is configured to direct the system to perform operations including: obtaining scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; generating, based on the scan data, a first intermediate image of the subject; and generating, based on the first intermediate image and a target reconstruction model, a target image of the subject, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
- The system of claim 1, wherein the generating a first intermediate image of the subject based on the scan data includes performing a first iterative reconstruction operation including a plurality of first iterations, at least one first iteration of the plurality of first iterations including: generating a first updated image by updating, based on the scan data, a first image determined in a previous first iteration; determining whether a first termination condition is satisfied; and in response to determining that the first termination condition is satisfied, designating the first updated image as the first intermediate image.
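The first iterative reconstruction operation of claim 2 could be sketched as follows. This is only an illustration, not the claimed method itself: the claim does not prescribe a particular update rule, so an MLEM-style multiplicative update is assumed here, and the hypothetical `system_matrix` argument stands in for whatever operator maps image space to scan-data (projection) space.

```python
import numpy as np

def first_iterative_reconstruction(scan_data, system_matrix, max_iters=10, tol=1e-6):
    """Sketch of a first iterative reconstruction operation with a
    termination condition, assuming an MLEM-style update (illustrative only)."""
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(scan_data, dtype=float)
    image = np.ones(A.shape[1])                    # initial first image
    sensitivity = np.maximum(A.sum(axis=0), 1e-12)  # per-pixel normalization
    for _ in range(max_iters):
        projection = A @ image                      # forward-project current image
        ratio = np.where(projection > 0, y / projection, 0.0)
        updated = image * (A.T @ ratio) / sensitivity
        # first termination condition: difference between updated and current image
        if np.max(np.abs(updated - image)) < tol:
            image = updated
            break
        image = updated
    return image                                    # the first intermediate image
```

With an identity system matrix the update converges to the scan data itself in one step, which makes the termination check easy to exercise.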
- The system of claim 2, wherein the first termination condition relates to at least one of: a count of first iterations that have been performed in the first iterative reconstruction operation, a difference between the first updated image and the first image, or one or more image parameters of the first updated image.
- The system of claim 3, wherein the one or more image parameters include at least one of a signal-to-noise ratio (SNR), a mean square error (MSE), a mean absolute deviation (MAD), or a peak signal-to-noise ratio (PSNR).
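The image parameters named in claim 4 could be computed, for instance, as below. The claims do not fix any particular formulas, so conventional definitions are assumed (PSNR and SNR in decibels, peak taken from the reference image); the helper name and its reference-image argument are hypothetical.

```python
import numpy as np

def image_parameters(image, reference):
    """Compute MSE, MAD, PSNR, and SNR between an updated image and a
    reference image, using conventional definitions (illustrative only)."""
    image = np.asarray(image, dtype=float)
    reference = np.asarray(reference, dtype=float)
    diff = image - reference
    mse = np.mean(diff ** 2)          # mean square error
    mad = np.mean(np.abs(diff))       # mean absolute deviation
    peak = reference.max()            # peak intensity of the reference
    if mse > 0:
        psnr = 10 * np.log10(peak ** 2 / mse)
        snr = 10 * np.log10(np.sum(reference ** 2) / np.sum(diff ** 2))
    else:
        psnr = snr = float("inf")
    return {"MSE": mse, "MAD": mad, "PSNR": psnr, "SNR": snr}
```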
- The system of any one of claims 1 to 4, wherein the generating, based on the first intermediate image and a target reconstruction model, a target image of the subject comprises: generating a second intermediate image of the subject by processing the first intermediate image using the target reconstruction model; and generating, based on the second intermediate image, the target image of the subject.
- The system of claim 5, wherein the generating, based on the second intermediate image, the target image of the subject includes performing a second iterative reconstruction operation including a plurality of second iterations, at least one second iteration of the plurality of second iterations including: generating a second updated image by updating, based on the scan data, a second image, the second image being updated from the second intermediate image determined in a previous second iteration; determining whether a second termination condition is satisfied; and in response to determining that the second termination condition is satisfied, designating the second updated image as the target image of the subject.
- The system of claim 6, wherein the second termination condition relates to at least one of: a count of second iterations that have been performed in the second iterative reconstruction operation, a difference between the second updated image and the second image, or one or more image parameters of the second updated image.
- The system of any one of claims 1 to 7, wherein the target reconstruction model includes a deep learning model.
- The system of any one of claims 1 to 8, wherein the target reconstruction model is generated according to a model training process including: obtaining the plurality of training samples; and generating the target reconstruction model by training a preliminary model using the plurality of training samples.
- The system of claim 9, wherein the obtaining a plurality of training samples includes: for each of the plurality of training samples, obtaining sample scan data of the sample subject corresponding to the training sample; generating the sample first intermediate image by performing a sample first iterative reconstruction operation on the sample scan data until a sample first termination condition is satisfied; and generating the sample second intermediate image by performing a sample second iterative reconstruction operation on the sample first intermediate image until a sample second termination condition is satisfied.
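A training-sample pair as described in claim 10 could be built by stopping one iterative reconstruction of the same sample scan data at two different iteration counts. The sketch below is a minimal illustration under stated assumptions: an MLEM-style update and fixed iteration counts as the termination conditions (one admissible form of the claimed termination conditions); `system_matrix`, `n_first`, and `n_total` are hypothetical names.

```python
import numpy as np

def make_training_sample(sample_scan_data, system_matrix, n_first=2, n_total=20):
    """Build one (sample first intermediate image, sample second intermediate
    image) pair from the same sample scan data by continuing the same
    iterative reconstruction past the first stopping point (illustrative)."""
    A = np.asarray(system_matrix, dtype=float)
    y = np.asarray(sample_scan_data, dtype=float)
    sensitivity = np.maximum(A.sum(axis=0), 1e-12)

    def step(image):
        projection = A @ image
        ratio = np.where(projection > 0, y / projection, 0.0)
        return image * (A.T @ ratio) / sensitivity

    image = np.ones(A.shape[1])
    for _ in range(n_first):                   # sample first reconstruction
        image = step(image)
    first_intermediate = image.copy()
    for _ in range(n_total - n_first):         # continue from the first intermediate
        image = step(image)
    return first_intermediate, image           # (input, label) for model training
```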
- The system of any one of claims 1 to 10, wherein the at least one processor is further configured to direct the system to perform operations including: obtaining, from a reconstruction model library, the target reconstruction model.
- A method for image reconstruction implemented on a computing device including at least one processor and at least one storage device, the method comprising: obtaining scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; generating, based on the scan data, a first intermediate image of the subject; and generating, based on the first intermediate image and a target reconstruction model, a target image of the subject, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
- A system for image reconstruction, comprising: an acquisition module configured to obtain scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; an intermediate image generation module configured to generate a first intermediate image of the subject based on the scan data; and a target image generation module configured to generate a target image of the subject based on the first intermediate image and a target reconstruction model, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
- A non-transitory computer readable medium, comprising at least one set of instructions for image reconstruction, wherein when executed by one or more processors of a computing device, the at least one set of instructions causes the computing device to perform a method, the method comprising: obtaining scan data of a subject, the scan data being acquired by an imaging device during a scan of the subject; generating, based on the scan data, a first intermediate image of the subject; and generating, based on the first intermediate image and a target reconstruction model, a target image of the subject, wherein the target reconstruction model is trained using a plurality of training samples, and each of the plurality of training samples comprises a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations.
- A system, comprising: at least one storage device storing a set of instructions for generating a target reconstruction model; and at least one processor configured to communicate with the at least one storage device, wherein when executing the set of instructions, the at least one processor is configured to direct the system to perform operations including: obtaining a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and determining the target reconstruction model by training a preliminary model using the plurality of training samples.
- The system of claim 15, wherein the obtaining a plurality of training samples includes: for each of the plurality of training samples, obtaining sample scan data of the sample subject corresponding to the training sample; generating the sample first intermediate image by performing a sample first iterative reconstruction operation on the sample scan data until a sample first termination condition is satisfied; and generating the sample second intermediate image by performing a sample second iterative reconstruction operation on the sample first intermediate image until a sample second termination condition is satisfied.
- The system of claim 16, wherein the sample first iterative reconstruction operation includes a plurality of sample first iterations, and the generating the sample first intermediate image comprises: for at least one sample first iteration of the plurality of sample first iterations, generating a sample first updated image by updating, based on the sample scan data, a sample first image determined in a previous sample first iteration; determining whether the sample first termination condition is satisfied; and in response to determining that the sample first termination condition is satisfied, designating the sample first updated image as the sample first intermediate image.
- The system of claim 17, wherein the sample first termination condition relates to at least one of: a count of sample first iterations that have been performed in the sample first iterative reconstruction operation, a difference between the sample first updated image and the sample first image, or one or more image parameters of the sample first updated image.
- The system of any one of claims 16 to 18, wherein the sample second iterative reconstruction operation includes a plurality of sample second iterations, and the generating the sample second intermediate image comprises: for at least one sample second iteration of the plurality of sample second iterations, generating a sample second updated image by updating, based on the sample scan data, a sample second image, the sample second image being updated from the sample first intermediate image determined in a previous sample second iteration; determining whether the sample second termination condition is satisfied; and in response to determining that the sample second termination condition is satisfied, designating the sample second updated image as the sample second intermediate image.
- The system of claim 19, wherein the sample second termination condition relates to at least one of: a count of sample second iterations that have been performed in the sample second iterative reconstruction operation, a difference between the sample second updated image and the sample second image, or one or more image parameters of the sample second updated image.
- The system of any one of claims 15 to 20, wherein the generating the target reconstruction model by training a preliminary model using the plurality of training samples includes: initializing parameter values of the preliminary model; and generating the target reconstruction model by iteratively updating the parameter values of the preliminary model based on the plurality of training samples.
- The system of claim 21, wherein the iteratively updating the parameter values of the preliminary model includes performing an iterative operation including one or more third iterations, at least one third iteration of the one or more third iterations comprising: for each of at least some of the plurality of training samples, generating an estimated second intermediate image by processing the sample first intermediate image of the training sample using an updated preliminary model determined in a previous third iteration; determining a value of a loss function based on the estimated second intermediate image and the sample second intermediate image of each of the at least some of the plurality of training samples; and further updating at least some of the parameter values of the updated preliminary model to be used in a next third iteration based on the value of the loss function.
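The loss-driven parameter updates of claims 21 and 22 could be sketched with a toy model. A real implementation would train a deep learning model (claim 8); the scalar gain/bias linear model, the mean-squared-error loss, and the gradient-descent rule below are all assumptions chosen only to illustrate the estimate / loss / update cycle of the "third iterations."

```python
import numpy as np

def train_reconstruction_model(samples, lr=0.1, n_epochs=200):
    """Illustrative training loop: each sample is a (sample first intermediate
    image, sample second intermediate image) pair of flattened images. A toy
    linear model stands in for the preliminary model (illustrative only)."""
    gain, bias = 1.0, 0.0                     # initialized parameter values
    for _ in range(n_epochs):                 # "third iterations"
        grad_g, grad_b = 0.0, 0.0
        for x, y in samples:
            x, y = np.asarray(x, float), np.asarray(y, float)
            estimated = gain * x + bias       # estimated second intermediate image
            err = estimated - y               # contributes to the loss (MSE)
            grad_g += 2 * np.mean(err * x)
            grad_b += 2 * np.mean(err)
        n = len(samples)
        gain -= lr * grad_g / n               # further update parameter values
        bias -= lr * grad_b / n               # based on the loss gradient
    return gain, bias
```

On pairs generated by `y = 2x` the loop recovers a gain near 2 and a bias near 0, which is the expected fixed point for this toy loss.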
- A method for generating a target reconstruction model implemented on a computing device including at least one processor and at least one storage device, the method comprising: obtaining a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and determining the target reconstruction model by training a preliminary model using the plurality of training samples.
- A system for generating a target reconstruction model, comprising: an acquisition module configured to obtain a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and a model determination module configured to determine the target reconstruction model by training a preliminary model using the plurality of training samples.
- A non-transitory computer-readable storage medium, comprising at least one set of instructions for generating a target reconstruction model, wherein when executed by one or more processors of a computing device, the at least one set of instructions causes the computing device to perform a method, the method comprising: obtaining a plurality of training samples each of which includes a sample first intermediate image and a sample second intermediate image of a sample subject, the sample first intermediate image and the sample second intermediate image being reconstructed from sample scan data of the sample subject by performing different counts of iterations; and determining the target reconstruction model by training a preliminary model using the plurality of training samples.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910968191.3 | 2019-10-12 | ||
| CN201910968191.3A CN110807821A (en) | 2019-10-12 | 2019-10-12 | Image reconstruction method and system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021068975A1 true WO2021068975A1 (en) | 2021-04-15 |
Family
ID=69488207
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2020/120503 (WO2021068975A1, Ceased) | Systems and methods for image reconstruction | 2019-10-12 | 2020-10-12 |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN110807821A (en) |
| WO (1) | WO2021068975A1 (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113706409A (en) * | 2021-08-18 | 2021-11-26 | 苏州雷泰医疗科技有限公司 | CBCT image enhancement method and device based on artificial intelligence and storage medium |
| CN113989207A (en) * | 2021-10-21 | 2022-01-28 | 江苏智库智能科技有限公司 | A material inventory method based on image processing |
| CN114332334A (en) * | 2021-12-31 | 2022-04-12 | 中国电信股份有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
| CN117115387A (en) * | 2023-08-21 | 2023-11-24 | 中国人民解放军总医院第四医学中心 | A complete human skull tissue separation and finite element modeling method based on CT images |
Families Citing this family (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110807821A (en) * | 2019-10-12 | 2020-02-18 | 上海联影医疗科技有限公司 | Image reconstruction method and system |
| CN111489409A (en) * | 2020-04-24 | 2020-08-04 | 东软医疗系统股份有限公司 | CT image processing method and device, CT equipment and CT system |
| CN114998100B (en) * | 2020-06-08 | 2025-04-08 | 广州超视计生物科技有限公司 | Image processing system and method |
| US11423559B2 (en) * | 2020-06-30 | 2022-08-23 | Bnsf Railway Company | Systems and methods for reconstructing objects using transitional images |
| CN113034642B (en) * | 2021-03-30 | 2022-05-27 | 推想医疗科技股份有限公司 | Image reconstruction method and device and training method and device of image reconstruction model |
| CN113128455B (en) * | 2021-04-30 | 2023-04-28 | 上海睿钰生物科技有限公司 | Cell image reconstruction model training method and system |
| CN118945483A (en) * | 2023-05-09 | 2024-11-12 | 北京小米移动软件有限公司 | Image processing method, device, electronic device and storage medium |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20140193055A1 (en) * | 2011-07-08 | 2014-07-10 | Hitachi Medical Corporation | Image reconstruction device and image reconstruction method |
| US20180197317A1 (en) * | 2017-01-06 | 2018-07-12 | General Electric Company | Deep learning based acceleration for iterative tomographic reconstruction |
| CN110060314A (en) * | 2019-04-22 | 2019-07-26 | 深圳安科高技术股份有限公司 | A kind of CT iterative approximation accelerated method and system based on artificial intelligence |
| CN110807821A (en) * | 2019-10-12 | 2020-02-18 | 上海联影医疗科技有限公司 | Image reconstruction method and system |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103810735A (en) * | 2014-02-28 | 2014-05-21 | 南方医科大学 | Statistical iterative reconstructing method for low-dose X-ray CT image |
| EP3195265B1 (en) * | 2014-09-15 | 2018-08-22 | Koninklijke Philips N.V. | Iterative image reconstruction with a sharpness driven regularization parameter |
| CN106780338B (en) * | 2016-12-27 | 2020-06-09 | 南京理工大学 | Rapid super-resolution reconstruction method based on anisotropy |
| US11195310B2 (en) * | 2018-08-06 | 2021-12-07 | General Electric Company | Iterative image reconstruction framework |
| CN110276813B (en) * | 2019-05-06 | 2023-01-24 | 深圳先进技术研究院 | CT image reconstruction method, device, storage medium and computer equipment |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110807821A (en) | 2020-02-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12190502B2 (en) | Systems and methods for image optimization | |
| US11887221B2 (en) | Systems and methods for image correction in positron emission tomography | |
| US11847763B2 (en) | Systems and methods for image reconstruction | |
| WO2021068975A1 (en) | Systems and methods for image reconstruction | |
| US20230109899A1 (en) | Systems and methods for image reconstruction | |
| US11436720B2 (en) | Systems and methods for generating image metric | |
| US11593977B2 (en) | Systems and methods for image reconstruction in positron emission tomography | |
| US11308610B2 (en) | Systems and methods for machine learning based automatic bullseye plot generation | |
| US11200669B2 (en) | Systems and methods for determining plasma input function used in positron emission tomography imaging | |
| US20230360312A1 (en) | Systems and methods for image processing | |
| US20240005508A1 (en) | Systems and methods for image segmentation | |
| US20240428413A1 (en) | Systems and methods for motion correction for medical images | |
| US20240346628A1 (en) | Systems and methods for motion correction for a medical image | |
| US20240296601A1 (en) | Systems and methods for data processing | |
| US20240265501A1 (en) | Systems and methods for image processing |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20873483; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: PCT application non-entry in European phase | Ref document number: 20873483; Country of ref document: EP; Kind code of ref document: A1 |