Disclosure of Invention
The present invention is directed to a method for removing Poisson-Gaussian mixed noise, so as to solve the problems described in the background art.
The invention is realized by the following technical scheme: a Poisson-Gaussian mixture noise removing method comprises the following steps:
constructing a data set containing a Poisson-Gaussian mixed noise image, and dividing the data set into a training set and a test set;
establishing a noise-image denoising model, wherein the model comprises a GAT layer, a CNN layer, a residual layer and an inverse GAT layer, and inputting the data of the training set into this non-blind Poisson-Gaussian mixed denoising model for training, so as to obtain a trained non-blind Poisson-Gaussian mixed denoising model;
and inputting the data of the test set into the non-blind Poisson-Gaussian mixed denoising model to obtain an image denoising result.
Optionally, the GAT layer is configured to transform the image containing the Poisson-Gaussian mixed noise n_{p-g} into an image Y_i containing Gaussian noise n_g; the specific expression for obtaining the image Y_i is as follows:
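The transform expression itself is not reproduced in this text. For reference, the generalized Anscombe transform (GAT) for Poisson-Gaussian data is commonly written in the following standard-literature form (an assumption; the patent's exact parametrization may differ), where z_i is the raw noisy pixel value:

```latex
Y_i = \frac{2}{\lambda} \sqrt{\lambda z_i + \frac{3}{8}\lambda^{2} + \sigma^{2}}
```

Under this transform, the Poisson-Gaussian mixed noise becomes approximately Gaussian with unit variance.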
wherein Y_i is the noisy pixel value, λ is the intensity of the Poisson noise, and σ is the standard deviation of the Gaussian noise.
Optionally, the CNN layer includes a DnCNN deep learning network; the image Y_i is input into the DnCNN deep learning network for training to obtain a residual mapping function R(y) ≈ n_g. In the training process, the mean square error is used as the loss function to train the parameters θ of the DnCNN deep learning network; the expression of the loss function is as follows:
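The loss expression is not reproduced in this text. The standard DnCNN mean-square-error loss over residuals, consistent with the description above, takes the following form (an assumed standard form, not necessarily the patent's exact notation), where y_i is a GAT-transformed noisy image, x_i the corresponding clean image, and R(y_i; θ) the predicted residual:

```latex
\ell(\theta) = \frac{1}{2N} \sum_{i=1}^{N} \left\| R(y_i;\theta) - (y_i - x_i) \right\|_F^{2}
```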
wherein N represents the number of noisy-image/clean-image pairs produced by the GAT module, i.e. the number of samples in each training batch; the parameters θ are then optimized with an Adam optimizer, and the weights are updated as:
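The update equation is not reproduced in this text. A plain gradient-step form consistent with the symbols listed below is sketched here as an assumption (Adam additionally rescales the gradient by running estimates of its first and second moments, omitted for brevity):

```latex
W_l^{\,b+1} = W_l^{\,b} - \alpha \, \frac{\partial \ell(\theta)}{\partial W_l^{\,b}}
```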
wherein W is a convolution kernel, l is the index of the current layer, b is the iteration number, and α is the learning rate;
optionally, a PReLu function is adopted as a nonlinear activation function in the DnCNN deep learning network.
Optionally, the residual layer is used to subtract the fitted Gaussian noise n_g extracted by the CNN layer from the GAT-transformed image Y_i, i.e. X = Y_i - n_g, obtaining a preliminary clean image X.
Optionally, the inverse GAT layer is configured to apply the inverse GAT transform to each pixel X_i of the preliminary clean image X to obtain each pixel I_i of the final clean image; the transformation process is as follows:
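The inverse transform is not reproduced in this text. The closed-form algebraic inverse of the standard GAT (an assumption; exact unbiased inverses in the literature refine this formula) is:

```latex
I_i = \frac{\lambda X_i^{2}}{4} - \frac{3}{8}\lambda - \frac{\sigma^{2}}{\lambda}
```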
where λ is the intensity of the poisson noise and σ is the standard deviation of the gaussian noise.
Optionally, when constructing the data set including the poisson-gaussian mixture noise image, the method includes:
acquiring a clean image, calculating the gray values of its pixels, and sequentially adding Poisson noise of intensity λ and Gaussian noise of standard deviation σ to those gray values to obtain a mixed-noise image;
and preprocessing the mixed-noise image, wherein the preprocessing comprises flipping, translation or rotation, finally obtaining a data set containing Poisson-Gaussian mixed-noise images.
Compared with the prior art, the invention has the following beneficial effects:
the method for removing Poisson-Gaussian mixed noise provided by the invention fully considers the noise interference of a digital image in the actual imaging process, the result of the method can be as close to real noise as possible, and the method has wider applicability compared with a common single Gaussian noise model.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present invention, a detailed structure will be set forth in the following description in order to explain the present invention. Alternative embodiments of the invention are described in detail below, however, the invention may be practiced in other embodiments that depart from these specific details.
Referring to fig. 1-2, a poisson-gaussian mixture noise removing method includes the following steps:
S1, constructing a data set containing a Poisson-Gaussian mixed noise image, and dividing the data set into a training set and a test set;
S2, establishing a noise-image denoising model comprising a GAT layer, a CNN layer, a residual layer and an inverse GAT layer, and inputting the data of the training set into this non-blind Poisson-Gaussian mixed denoising model for training, thereby obtaining a trained non-blind Poisson-Gaussian mixed denoising model;
and S3, inputting the data of the test set into the non-blind Poisson-Gaussian mixture denoising model to obtain an image denoising result.
In step S1, the data set may be constructed by acquiring an image containing poisson-gaussian mixed noise, and the image containing poisson-gaussian mixed noise may be obtained by adding poisson-gaussian mixed noise to the clean image, where the process of adding poisson-gaussian mixed noise to the clean image includes:
acquiring a clean image, calculating the gray values of its pixels, and sequentially adding Poisson noise of intensity λ and Gaussian noise of standard deviation σ to those gray values to obtain a mixed-noise image;
and preprocessing the mixed-noise image, wherein the preprocessing comprises flipping, translation or rotation, finally obtaining a data set containing Poisson-Gaussian mixed-noise images.
It should be noted that Poisson noise generally occurs in circuits with very low signal levels or high electronic amplification, and its mathematical expression is:
where λ is related to the illumination intensity and σ represents the standard deviation of the gaussian noise, the probability density function is as follows:
as an example, the python code in this embodiment adds poisson-gaussian mixed noise specifying λ, σ to the image, which is specifically as follows:
as an example, when the noise level is known, the addition can be performed by the above steps, and for blind denoising, only the noise level estimator needs to be added in the process to obtain the parameters σ and λ of the noise, such as BP-aid. Poisson-Gaussian noise parameter estimation can also be realized through a neural network, such as PGE-Net, and parameters of the input mixed noise image are estimated through a preset network to obtain sigma and lambda.
Optionally, in step S2, the GAT layer is configured to transform the image containing the Poisson-Gaussian mixed noise n_{p-g} into an image Y_i containing Gaussian noise n_g; the specific expression for obtaining the image Y_i is as follows:
wherein Y_i is the noisy pixel value, λ is the intensity of the Poisson noise, and σ is the standard deviation of the Gaussian noise.
Referring to fig. 3, optionally, in step S2, the CNN layer includes a DnCNN deep learning network; the image Y_i is input into the DnCNN deep learning network for training so as to obtain the fitted Gaussian noise n_g. The network combines batch normalization and residual learning, so that a single trained model can perform Gaussian denoising; the network outputs the residual (noise) image. Let x denote the clean image and n_g the Gaussian noise remaining after the GAT-layer transformation of the previous step; for the transformed noisy image y = x + n_g, the residual network is trained to learn a residual mapping function R(y) ≈ n_g.
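The architecture described above can be sketched in PyTorch as follows. This is an illustrative reconstruction, not the patent's exact network: the depth and channel width follow the common DnCNN configuration, with PReLU substituted for ReLU as described in this embodiment.

```python
import torch
import torch.nn as nn

class DnCNN(nn.Module):
    """DnCNN-style residual noise estimator (illustrative sketch).

    First layer: Conv + PReLU; middle layers: Conv + BatchNorm + PReLU;
    last layer: Conv. The network outputs the residual (noise) image,
    so the denoised image is y - model(y).
    """
    def __init__(self, depth=17, channels=64, image_channels=1):
        super().__init__()
        layers = [nn.Conv2d(image_channels, channels, 3, padding=1),
                  nn.PReLU(channels)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       nn.BatchNorm2d(channels),
                       nn.PReLU(channels)]
        layers.append(nn.Conv2d(channels, image_channels, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        return self.net(y)  # R(y) ~ n_g, the fitted Gaussian noise
```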
In the training process, the mean square error is used as the loss function to train the parameters θ of the DnCNN deep learning network; the expression of the loss function is as follows:
wherein N represents the number of noisy-image/clean-image pairs produced by the GAT module, i.e. the number of samples in each training batch; the parameters θ are then optimized with an Adam optimizer, and the weights are updated as:
wherein W is a convolution kernel, l is the index of the current layer, b is the iteration number, and α is the learning rate;
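The loss and update described above can be sketched as a single PyTorch training step. The small stand-in network, tensor shapes, and learning rate here are illustrative assumptions; any noise-predicting CNN fits in place of the stand-in:

```python
import torch
import torch.nn as nn

# Stand-in residual network R(y; theta); a tiny CNN for illustration only.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.PReLU(8),
                      nn.Conv2d(8, 1, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # alpha = learning rate

def train_step(y, x):
    """One step: MSE between the predicted residual and the true noise y - x."""
    opt.zero_grad()
    residual = model(y)                              # R(y) ~ n_g
    loss = 0.5 * ((residual - (y - x)) ** 2).mean()  # mean-square-error loss
    loss.backward()
    opt.step()                                       # Adam weight update
    return loss.item()
```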
as one embodiment of the application, a PReLu function is adopted to replace ReLu activation as a nonlinear activation function in the DnCNN deep learning network.
The ReLU activation function discards the negative half-axis, so part of the information is lost, the output distribution is no longer centered at 0, and the data distribution is changed. The PReLU function overcomes the effect of ReLU's zero interval; its expression is as follows:
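The expression referenced above is not reproduced in this text; in its standard form, PReLU is:

```latex
f(x) =
\begin{cases}
x, & x \ge 0 \\
a x, & x < 0
\end{cases}
```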
wherein a is a constant, such as a value of 0.02.
Further, the DnCNN deep learning network also includes batch normalization processing: in batch normalization, each input x is normalized over a mini-batch so that the output signal in each dimension has mean 0 and variance 1, i.e. a standard normal distribution. This is the main reason the network shows a strong removal capability for Gaussian noise. The batch normalization operation is as follows:
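The batch normalization equations referenced above are not reproduced in this text; in their standard form (μ_B and σ_B² are the mini-batch mean and variance, γ and β learned scale and shift, ε a small constant for numerical stability):

```latex
\mu_B = \frac{1}{m} \sum_{i=1}^{m} x_i, \qquad
\sigma_B^{2} = \frac{1}{m} \sum_{i=1}^{m} (x_i - \mu_B)^{2}, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^{2} + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta
```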
Optionally, in step S2, the residual layer is used to subtract the fitted Gaussian noise n_g extracted by the CNN layer from the GAT-transformed image Y_i, i.e. X = Y_i - n_g, obtaining a preliminary clean image X.
Optionally, in step S2, the inverse GAT layer is used to apply the inverse GAT transform to each pixel X_i of the preliminary clean image X to obtain each pixel I_i of the final clean image; the transformation process is as follows:
where λ is the intensity of the poisson noise and σ is the standard deviation of the gaussian noise.
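The forward and inverse transforms of the GAT and inverse-GAT layers can be sketched together in NumPy. The algebraic forms below are assumptions based on the standard generalized Anscombe transform; exact unbiased inverses in the literature refine the inverse formula:

```python
import numpy as np

def gat(z, lam, sigma):
    """Generalized Anscombe transform (standard form, assumed): maps
    Poisson-Gaussian noise with Poisson intensity lam and Gaussian std
    sigma to approximately unit-variance Gaussian noise."""
    return (2.0 / lam) * np.sqrt(lam * z + 0.375 * lam**2 + sigma**2)

def inverse_gat(d, lam, sigma):
    """Algebraic inverse of gat(): recovers pixel intensities from the
    transformed domain."""
    return (lam * d**2) / 4.0 - 0.375 * lam - sigma**2 / lam
```

A round trip inverse_gat(gat(z, λ, σ), λ, σ) returns z up to floating-point error.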
In summary, the non-blind Poisson-Gaussian mixed denoising model provided by the invention uses the GAT transform to process Poisson-Gaussian mixed noise, fits and removes the resulting Gaussian noise, and finally recovers a clean image through the inverse GAT transform. The noise interference that a digital image suffers in the actual imaging process is fully considered; the Poisson-Gaussian noise model can approximate real noise closely, and the method therefore has wider applicability than the common single-Gaussian noise model.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.