
CN109472818A - An image dehazing method based on deep neural network - Google Patents


Info

Publication number
CN109472818A
CN109472818A (application CN201811208024.0A; granted as CN109472818B)
Authority
CN
China
Prior art keywords: defogging; network; mist; fogless; sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811208024.0A
Other languages
Chinese (zh)
Other versions
CN109472818B (en)
Inventor
李岳楠
刘宇航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201811208024.0A priority Critical patent/CN109472818B/en
Publication of CN109472818A publication Critical patent/CN109472818A/en
Application granted granted Critical
Publication of CN109472818B publication Critical patent/CN109472818B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/514 Depth or shape recovery from specularities
    • G06T 5/00 Image enhancement or restoration
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/048 Activation functions
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image dehazing method based on a deep neural network, comprising: selecting a global atmospheric light and an atmospheric scattering coefficient, and generating hazy images and their transmittance maps from scene depth; forming a training set from the haze-free images, hazy images and transmittance maps; constructing, on an encoder-decoder architecture, a generator network comprising a transmittance-estimation sub-network and a dehazing sub-network, and training the generator with a linear combination of an adversarial loss function, a transmittance L1-norm loss function and a dehazed-image L1-norm loss function; constructing a discriminator network from convolutional layers, a sigmoid activation function and LeakyReLU functions; taking real haze-free images and the dehazed images generated by the dehazing sub-network as positive and negative samples respectively, and training the discriminator with cross entropy as the cost function; performing adversarial training by alternately training the generator and the discriminator; and, after training, feeding a hazy image to be dehazed into the generator, which yields the dehazed image in a single forward pass.

Description

An image dehazing method based on a deep neural network
Technical field
The present invention relates to the fields of image processing and deep learning, and in particular to an image dehazing method based on a deep neural network.
Background art
When air quality is poor, outdoor images are visibly degraded by particles suspended in the air, causing reduced contrast, color distortion and a series of related problems. This is because, during propagation, light is scattered by mist, haze, dust and other particles in the air, so what finally reaches the camera is scattered light. A hazy image is generally formed from directly attenuated light and scattered atmospheric light: the former is the illumination reflected from object surfaces that reaches the camera after attenuation losses, while the latter is the atmospheric light received by the camera after scattering. Owing to its wide application value, image dehazing has become a research hotspot in military, aerospace, traffic, surveillance and other fields.
Early image dehazing methods fall broadly into two categories: methods based on contrast enhancement and methods that estimate the haze-free image from the atmospheric scattering model. Contrast-enhancement methods aim only to improve image contrast and do not consider the degradation mechanism or the atmospheric scattering model. Methods based on the atmospheric scattering model mainly use hand-crafted features to estimate and refine the image transmittance, and then recover a clear haze-free image from the model. For example, He et al. [1] proposed the dark channel prior, estimated the transmittance from it, refined the transmittance by soft matting or guided filtering, and inverted the atmospheric scattering model to recover the haze-free image; Zhu et al. [2] built a linear model relating scene depth to pixel brightness and saturation, estimated the depth, chose the atmospheric light and the atmospheric scattering coefficient, and generated the dehazed image with the atmospheric scattering model.
Recently, researchers have applied deep learning to image dehazing with good results. For example, Cai et al. [3] proposed DehazeNet, which uses a convolutional neural network to learn the relationship between an image and its transmittance, generates the transmittance from a single image, and restores the haze-free image with the atmospheric scattering model. Li et al. [4] derived a coefficient K to replace the atmospheric light and the transmittance in the atmospheric scattering model, reformulated the model accordingly, learned K with a convolutional neural network, and recovered the haze-free image from the reformulated model.
Traditional dehazing methods compute depth or transmittance from hand-crafted features, but such features have inherent limitations and cannot achieve satisfactory dehazing on images of certain scenes. Deep-learning-based dehazing methods alleviate this scene dependence, adapt better to varied scenes, and can achieve good dehazing results.
Summary of the invention
The present invention provides an image dehazing method based on a deep neural network. The invention performs dehazing with a generative adversarial network: the generator learns, through a deep neural network, the mapping from hazy images to haze-free images, and a discriminator is used to improve the generator's dehazing performance. The method requires no prior information and performs dehazing more easily than traditional image dehazing methods; once training is complete, a hazy image is fed into the generator and the dehazed image is obtained in a single forward pass. The method is described in detail below:
An image dehazing method based on a deep neural network, the method comprising:
selecting a global atmospheric light and an atmospheric scattering coefficient, and generating hazy images and their transmittance maps from scene depth; forming a training set from the haze-free images, hazy images and transmittance maps;
constructing, on an encoder-decoder architecture, a generator network comprising a transmittance-estimation sub-network and a dehazing sub-network, and training the generator with a linear combination of an adversarial loss function, a transmittance L1-norm loss function and a dehazed-image L1-norm loss function;
constructing a discriminator network from convolutional layers, a sigmoid activation function and LeakyReLU functions; taking real haze-free images and the dehazed images generated by the dehazing sub-network as positive and negative samples respectively, and training the discriminator with cross entropy as the cost function;
performing adversarial training by alternately training the generator and the discriminator;
after training, feeding a hazy image to be dehazed into the generator, which yields the dehazed image in a single forward pass.
The generator network is structured as follows:
the transmittance-estimation sub-network generates a single-channel transmittance map, and the dehazing sub-network generates a three-channel dehazed image;
the encoder consists of n convolutional layers with LeakyReLU activations and applies batch normalization to the image data;
the decoder consists of n convolutional layers and enlarges the image with transposed convolutions; the last convolutional layer uses a Tanh activation and the other layers use ReLU activations.
Further, skip connections link corresponding layers of the encoder and decoder, passing the encoder's results to the decoder; the encoder and decoder are symmetric in structure.
The encoder's post-convolution feature map is concatenated onto the channels of the decoder feature map of the same size, producing a new feature map.
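The channel-wise skip connection described above can be sketched with numpy arrays in H × W × C layout; the function name `skip_concat` and the toy shapes are illustrative, not part of the patent:

```python
import numpy as np

def skip_concat(encoder_feat, decoder_feat):
    """Concatenate an encoder feature map onto the channels of the
    same-sized decoder feature map (arrays shaped H x W x C)."""
    assert encoder_feat.shape[:2] == decoder_feat.shape[:2]
    return np.concatenate([decoder_feat, encoder_feat], axis=-1)

# A k-channel encoder map joined to a k-channel decoder map yields 2k channels.
enc = np.zeros((64, 64, 32))   # k = 32 channels from an encoder layer
dec = np.zeros((64, 64, 32))   # matching decoder layer of the same spatial size
merged = skip_concat(enc, dec)
```

As in the text, a k-channel feature map concatenated with a k-channel feature map gives a 2k-channel result, which the next decoder layer then convolves.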
Preferably, the adversarial loss function is:

L_A = -(1/N) * Σ_{i=1}^{N} log D(I_i, G_2(I_i))

where I_i is the hazy image generated from the i-th haze-free image, i = 1, 2, …, N, N is the number of haze-free images in the training set, and D(I_i, G_2(I_i)) denotes the discriminator's output for the dehazed image G_2(I_i) generated from the hazy image I_i by the dehazing sub-network G_2.
Preferably, the transmittance L1-norm loss function is:

L_t = (1/N) * Σ_{i=1}^{N} ||G_1(I_i) - t_i||_1

where I_i and t_i are respectively the hazy image and the transmittance map generated from the i-th haze-free image, i = 1, 2, …, N, N is the number of haze-free images in the training set, and G_1(I_i) is the transmittance map generated from the hazy image I_i by the transmittance-estimation sub-network G_1.
Preferably, the dehazed-image L1-norm loss function is:

L_J = (1/N) * Σ_{i=1}^{N} ||G_2(I_i) - J_i||_1

where J_i and I_i are the i-th haze-free image and its corresponding hazy image, i = 1, 2, …, N, N is the number of haze-free images in the training set, and G_2(I_i) is the dehazed image generated from the hazy image I_i by the dehazing sub-network G_2.
The discriminator network is as follows:
the network consists of m convolutional layers; the output of the last layer uses a sigmoid activation function, and the remaining activations use the LeakyReLU function;
the training objective is:
when the input is a real haze-free image J, the discriminator outputs 1; when the input is a dehazed image G_2(I), the discriminator outputs 0.
Further, the loss function used by the discriminator network is:

L_D = -(1/N) * Σ_{i=1}^{N} [log D(I_i, J_i) + log(1 - D(I_i, G_2(I_i)))]

where J_i and I_i are the i-th haze-free image and its corresponding hazy image, i = 1, 2, …, N, N is the number of haze-free images in the training set; D(I_i, J_i) is the discriminator's output for the real haze-free image J_i conditioned on the hazy image I_i; D(I_i, G_2(I_i)) is the discriminator's output, likewise conditioned on I_i, for the dehazed image G_2(I_i) generated from I_i by the dehazing sub-network; and D(I_i, G_2(I_i)) ∈ (0, 1), D(I_i, J_i) ∈ (0, 1).
The present invention has the following beneficial effects:
1. the invention requires no prior information; it learns the mapping from hazy to haze-free images with a deep neural network and generates clear dehazed images with a simple method;
2. the invention avoids the assumptions and the prior-information computations of conventional methods, and its dehazing is efficient and fast;
3. the dehazing method of the invention needs only a single hazy image to produce a haze-free image, and is convenient and easy to implement.
Detailed description of the invention
Fig. 1 is a flowchart of an image dehazing method based on a deep neural network provided by the invention;
Fig. 2 is a schematic structural diagram of the generator network in the generative adversarial network provided by the invention;
Fig. 3 is a schematic structural diagram of the discriminator network in the generative adversarial network provided by the invention;
Fig. 4 shows a real-scene hazy image and its dehazed result produced by the invention;
Fig. 5 shows a real-scene hazy image and its dehazed result produced by the invention;
Fig. 6 shows a real-scene hazy image and its dehazed result produced by the invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, embodiments of the present invention are described in further detail below.
Embodiment 1
To achieve high-quality image dehazing, an embodiment of the present invention proposes an image dehazing method based on a deep neural network, described below with reference to Fig. 1:
101: select a global atmospheric light and an atmospheric scattering coefficient, and generate hazy images and their transmittance maps from scene depth; form a training set from the haze-free images, hazy images and transmittance maps;
102: construct, on an encoder-decoder architecture, a generator network comprising a transmittance-estimation sub-network and a dehazing sub-network; train the generator with a linear combination of the adversarial loss function, the transmittance L1-norm loss function and the dehazed-image L1-norm loss function;
103: construct a discriminator network from convolutional layers, a sigmoid activation function and LeakyReLU functions; take real haze-free images and the dehazed images generated by the dehazing sub-network as positive and negative samples respectively, and train the discriminator with cross entropy as the cost function;
104: perform adversarial training by alternately training the generator and the discriminator;
105: after training, feed a hazy image to be dehazed into the generator to obtain the dehazed image in a single forward pass.
The specific steps for building the sample data set in step 101 are:
1) choose the global atmospheric light A and the atmospheric scattering coefficient β, and, using a scene depth d (the distance from the scene to the camera) that is known or estimated from the original image, generate the hazy image I and its transmittance map t with formulas (1) and (2):
I(x) = J(x)t(x) + A(1 - t(x))    (1)
t(x) = e^(-βd(x))    (2)
where x indexes the pixels of the image;
2) form the training set from the corresponding hazy images I, haze-free images J and transmittance maps t.
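The synthesis in formulas (1) and (2) can be sketched in a few lines of numpy; the function name `synthesize_haze` and the toy image sizes are illustrative assumptions, not part of the patent:

```python
import numpy as np

def synthesize_haze(J, d, A, beta):
    """Apply formulas (1)-(2): t(x) = exp(-beta*d(x)),
    I(x) = J(x)*t(x) + A*(1 - t(x))."""
    t = np.exp(-beta * d)                            # transmittance map, formula (2)
    I = J * t[..., None] + A * (1.0 - t[..., None])  # hazy image, formula (1)
    return I, t

# Toy 2x2 RGB scene: as depth grows, transmittance falls and haze increases.
J = np.full((2, 2, 3), 0.2)              # haze-free image
d = np.array([[0.0, 1.0], [2.0, 4.0]])   # scene depth per pixel
I, t = synthesize_haze(J, d, A=0.9, beta=1.0)
```

Pixels at zero depth keep their haze-free color (t = 1), while deeper pixels are pulled toward the atmospheric light A.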
After step 101 and before step 102, the method further includes an image preprocessing step:
1) fix all image sizes in the data set to N × N;
2) normalize the pixel values of the images, then standardize them to [-1, 1].
The specific steps for constructing the generator network in step 102 are:
1) design the generator as two sub-networks, the transmittance-estimation sub-network G_1 and the dehazing sub-network G_2; both use an encoder-decoder architecture, G_1 generating the transmittance map and G_2 generating the dehazed image; the structures of the two sub-networks are not restricted;
2) the encoder consists mainly of n convolutional layers whose convolutions shrink the image size and increase the number of channels; the activation function is LeakyReLU, and batch normalization is applied to the image data;
3) the decoder likewise consists of n convolutional layers and enlarges the image with transposed convolutions; the last convolutional layer uses a Tanh activation and the other layers use ReLU activations;
4) skip connections link corresponding layers of the encoder and decoder, passing the encoder's results to the decoder; the encoder and decoder are symmetric; the encoder's post-convolution feature map (k channels) is concatenated onto the channels of the decoder feature map of the same size (k channels), producing a new feature map (2k channels);
5) the transmittance-estimation sub-network G_1 and the dehazing sub-network G_2 have identical structures and take the same hazy image I as input; the difference is that G_1 generates a single-channel transmittance map G_1(I), while G_2 outputs a three-channel dehazed image G_2(I);
6) train the generator with the adversarial loss function, the transmittance L1-norm loss function and the dehazed-image L1-norm loss function, as follows.
The adversarial loss function is given by formula (3):

L_A = -(1/N) * Σ_{i=1}^{N} log D(I_i, G_2(I_i))    (3)

where I_i is the hazy image generated from the i-th haze-free image, i = 1, 2, …, N, N is the number of haze-free images in the training set, and D(I_i, G_2(I_i)) denotes the discriminator's output for the dehazed image G_2(I_i) generated from the hazy image I_i by the dehazing sub-network G_2.
The transmittance L1-norm loss function is given by formula (4):

L_t = (1/N) * Σ_{i=1}^{N} ||G_1(I_i) - t_i||_1    (4)

where I_i and t_i are respectively the hazy image and the transmittance map generated from the i-th haze-free image, and G_1(I_i) is the transmittance map generated from I_i by the transmittance-estimation sub-network G_1.
The dehazed-image L1-norm loss function is given by formula (5):

L_J = (1/N) * Σ_{i=1}^{N} ||G_2(I_i) - J_i||_1    (5)

where J_i and I_i are the i-th haze-free image and its corresponding hazy image, and G_2(I_i) is the dehazed image generated from I_i by the dehazing sub-network G_2.
Combining the three loss functions gives the total generator loss, shown in formula (6):

Gen_loss = θL_A + λL_t + αL_J    (6)

where θ, λ and α are the weights of L_A, L_t and L_J respectively.
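The combined loss of formula (6) can be sketched in numpy as below. The patent does not fix the normalization of the L1 terms, so per-pixel means are an assumption here, and `generator_loss` and its argument names are illustrative:

```python
import numpy as np

def generator_loss(d_out_fake, t_pred, t_true, j_pred, j_true,
                   theta=1.0, lam=1.0, alpha=100.0):
    """Formula (6): Gen_loss = theta*L_A + lam*L_t + alpha*L_J.
    d_out_fake: discriminator outputs D(I_i, G2(I_i)) in (0, 1), one per sample;
    t_*: transmittance maps; j_*: dehazed / haze-free images."""
    L_A = -np.mean(np.log(d_out_fake))      # adversarial loss, formula (3)
    L_t = np.mean(np.abs(t_pred - t_true))  # transmittance L1 loss, formula (4)
    L_J = np.mean(np.abs(j_pred - j_true))  # dehazed-image L1 loss, formula (5)
    return theta * L_A + lam * L_t + alpha * L_J

# With perfect t and J estimates, only the adversarial term remains.
d_fake = np.array([np.exp(-1.0)])           # D(I_i, G2(I_i)) for one sample
t_pred = t_true = np.zeros((1, 8, 8))       # perfect transmittance estimate
j_pred = j_true = np.zeros((1, 8, 8, 3))    # perfect dehazed image
loss = generator_loss(d_fake, t_pred, t_true, j_pred, j_true)
```

Here -log(e^-1) = 1, so the total loss reduces to θ·1 = 1.0, matching the adversarial term alone.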
The specific steps for constructing the discriminator network in step 103 are:
1) the discriminator network consists of m convolutional layers; the output of the last layer uses a sigmoid activation and the remaining activations use LeakyReLU; with the hazy image I as the condition, the input is either the real haze-free image J or the dehazed image G_2(I) generated by the dehazing sub-network G_2, and the output is the probability, in the range (0, 1), that the input is the real haze-free image J rather than the dehazed image G_2(I); the structure of the discriminator network is not restricted.
The training objective is: when the input of the discriminator network is a real haze-free image J, the discriminator outputs 1; when the input is a dehazed image G_2(I), it outputs 0. The role of the discriminator is to improve the generator's dehazing performance during adversarial training.
2) the loss function used to train the discriminator is given by formula (7):

L_D = -(1/N) * Σ_{i=1}^{N} [log D(I_i, J_i) + log(1 - D(I_i, G_2(I_i)))]    (7)

where J_i and I_i are the i-th haze-free image and its corresponding hazy image, i = 1, 2, …, N, N is the number of haze-free images in the training set; D(I_i, J_i) is the discriminator's output for the real haze-free image J_i conditioned on the hazy image I_i; D(I_i, G_2(I_i)) is the discriminator's output, likewise conditioned on I_i, for the dehazed image G_2(I_i) generated from I_i by the dehazing sub-network; and D(I_i, G_2(I_i)) ∈ (0, 1), D(I_i, J_i) ∈ (0, 1).
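Formula (7) is the standard conditional-GAN cross-entropy objective over N samples; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Formula (7): L_D = -(1/N) * sum_i [log D(I_i, J_i)
                                          + log(1 - D(I_i, G2(I_i)))].
    d_real = D(I_i, J_i) and d_fake = D(I_i, G2(I_i)), both in (0, 1)."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

# An undecided discriminator (both outputs 0.5) pays 2*log(2);
# a near-perfect one (real -> 1, fake -> 0) pays almost nothing.
loss_undecided = discriminator_loss(np.array([0.5]), np.array([0.5]))
loss_perfect = discriminator_loss(np.array([0.999999]), np.array([1e-6]))
```

Minimizing L_D drives D(I_i, J_i) toward 1 and D(I_i, G_2(I_i)) toward 0, the training objective stated above.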
The specific steps of the adversarial training in step 104 are:
1) train the generator and the discriminator alternately: first fix the parameters of the generator network and train the discriminator network, then fix the parameters of the discriminator network and train the generator network, thereby performing adversarial training;
2) the discriminator network learns to distinguish the real haze-free image J from the dehazed image G_2(I) generated by the dehazing sub-network, while the generator network learns so that the discriminator cannot tell whether its input is the real haze-free image J or the dehazed image G_2(I).
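The alternating scheme can be illustrated on a deliberately tiny toy problem — a scalar "generator" parameter and a logistic "discriminator" — which is an assumption-laden stand-in for the real networks, showing only the freeze/update alternation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

real = 2.0           # stands in for a real haze-free sample
wg = 0.0             # generator parameter: the generated output is simply wg
wd, bd = 1.0, 0.0    # discriminator D(x) = sigmoid(wd*x + bd)
lr = 0.05

for step in range(500):
    # Phase 1: generator frozen, update the discriminator
    # (gradient ascent on log D(real) + log(1 - D(fake)))
    fake = wg
    dr, df = sigmoid(wd * real + bd), sigmoid(wd * fake + bd)
    wd += lr * ((1 - dr) * real - df * fake)
    bd += lr * ((1 - dr) - df)
    # Phase 2: discriminator frozen, update the generator
    # (gradient ascent on log D(fake), so D cannot tell fake from real)
    df = sigmoid(wd * wg + bd)
    wg += lr * (1 - df) * wd
```

As training alternates, the generated value wg drifts toward the real sample while the discriminator's advantage shrinks — the same dynamic the full networks follow on images.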
The specific step of step 105 is: after the generator and the discriminator are trained, at test time a hazy image to be dehazed is fed into the generator and the dehazed image is obtained in a single forward pass.
Embodiment 2
The scheme of Embodiment 1 is described in detail below with reference to the drawings and the calculation formulas:
201: select a global atmospheric light and an atmospheric scattering coefficient, and generate hazy images and their transmittance maps from scene depth; form a training set from the haze-free images, hazy images and transmittance maps;
202: construct, on an encoder-decoder architecture, a generator network comprising a transmittance-estimation sub-network and a dehazing sub-network; train the generator with a linear combination of the adversarial loss function, the transmittance L1-norm loss function and the dehazed-image L1-norm loss function;
203: construct a discriminator network from convolutional layers, a sigmoid activation function and LeakyReLU functions; take real haze-free images and the dehazed images generated by the dehazing sub-network as positive and negative samples respectively, and train the discriminator with cross entropy as the cost function;
204: perform adversarial training by alternately training the generator and the discriminator;
205: after training, feed a hazy image to be dehazed into the generator to obtain the dehazed image in a single forward pass.
The specific steps for building the sample data set in step 201 are:
1) to generate the hazy images and transmittance maps, randomly set the global atmospheric light A of the three RGB channels to a value in [0.7, 1.0] and the atmospheric scattering coefficient β to a value in [0.6, 1.8]; using a scene depth d (the distance from the scene to the camera) that is known or estimated from the original image [5], generate the hazy image I and its transmittance map t according to formulas (1) and (2);
2) to build the training set, 1399 images were selected, and 10 different hazy images I with their transmittance maps t were generated from each haze-free image J by the above procedure, yielding 13990 haze-free images J, hazy images I and transmittance maps t that compose the training set.
After step 201 and before step 202, the method further includes an image preprocessing step:
1) fix all training-set image sizes to 256 × 256;
2) divide the RGB pixel values of the training images by 255 to normalize them from [0, 255] to [0, 1], then multiply by 2 and subtract 1 to map the pixel values to [-1, 1] as the network input.
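Steps 1)-2) amount to the mapping x/255 * 2 - 1; a small numpy sketch (the resize to 256 × 256 is omitted and the function name is illustrative):

```python
import numpy as np

def preprocess(img_uint8):
    """Map 8-bit RGB pixels [0, 255] -> [0, 1] -> [-1, 1], as in steps 1)-2)."""
    x = img_uint8.astype(np.float64) / 255.0   # normalize to [0, 1]
    return x * 2.0 - 1.0                       # standardize to [-1, 1]

img = np.array([[[0, 128, 255]]], dtype=np.uint8)   # one RGB pixel
out = preprocess(img)
```

The [-1, 1] range matches the Tanh activation of the generator's last layer, so network inputs and outputs share the same scale.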
The specific steps for constructing the generator network in step 202 are:
1) design the generator as two sub-networks, the transmittance-estimation sub-network G_1 and the dehazing sub-network G_2; both use an encoder-decoder architecture, G_1 generating the transmittance map and G_2 generating the dehazed image;
2) the encoder consists mainly of 8 convolutional layers whose convolutions shrink the image size and increase the number of channels; the convolution kernels are 4 × 4 with stride 2; the activation function is LeakyReLU with slope 0.2, and batch normalization is applied to the image data;
3) the decoder likewise consists of 8 convolutional layers and enlarges the image with transposed convolutions, using 4 × 4 kernels with stride 2; to guarantee that the output range of the generator network is (-1, 1), the last convolutional layer uses a Tanh activation, while the other layers use ReLU;
4) skip connections link corresponding layers of the encoder and decoder, passing the encoder's results to the decoder; the encoder and decoder are symmetric; the encoder's post-convolution feature map (k channels) is concatenated onto the channels of the decoder feature map of the same size (k channels), producing a new feature map (2k channels);
5) the transmittance-estimation sub-network G_1 and the dehazing sub-network G_2 have identical structures and take the same hazy image I as input; the difference is that G_1 generates a single-channel transmittance map G_1(I), while G_2 outputs a three-channel dehazed image G_2(I);
6) train the generator with the adversarial loss function (formula (3)), the transmittance L1-norm loss function (formula (4)) and the dehazed-image L1-norm loss function (formula (5)); combining the three gives the total generator loss of formula (6) in Embodiment 1, where θ, λ and α are the weights of L_A, L_t and L_J, set to θ = 1.0, λ = 1.0, α = 100.0.
The specific steps for constructing the discriminator network in step 203 are:
1) the discriminator network consists of 5 convolutional layers; the output of the last layer uses a sigmoid activation and the remaining activations use LeakyReLU with slope 0.2; with the hazy image I as the condition, the input is either the real haze-free image J or the dehazed image G_2(I) generated by the dehazing sub-network G_2, and the output is the probability, in the range (0, 1), that the input is the real haze-free image J rather than the dehazed image G_2(I). The training objective is: when the input of the discriminator network is a real haze-free image J, the discriminator outputs 1; when the input is a dehazed image G_2(I), it outputs 0. The role of the discriminator is to improve the generator's dehazing performance during adversarial training.
2) the loss function used to train the discriminator is formula (7) in Embodiment 1.
The specific steps of the adversarial training in step 204 follow Embodiment 1 and are not repeated here.
The specific step of step 205 is: after the generator and the discriminator are trained, at test time a hazy image to be dehazed is fed into the generator and the dehazed image is obtained in a single forward pass.
Embodiment 3
The feasibility of the schemes in Embodiments 1 and 2 is verified below with experimental data:
Three real-scene hazy images were dehazed with the method of the invention; Fig. 4, Fig. 5 and Fig. 6 show the real-scene hazy images and their dehazed results. The results show that the method achieves satisfactory dehazing on real-scene hazy images.
Bibliography
[1] He K, Sun J, Tang X. Single Image Haze Removal Using Dark Channel Prior [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(12): 2341-2353.
[2] Zhu Q, Mai J, Shao L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior [J]. IEEE Transactions on Image Processing, 2015, 24(11): 3522-3533.
[3] Cai B, Xu X, Jia K, et al. DehazeNet: An End-to-End System for Single Image Haze Removal [J]. IEEE Transactions on Image Processing, 2016, 25(11): 5187-5198.
[4] Li B, Peng X, Wang Z, et al. AOD-Net: All-in-One Dehazing Network. 2017 IEEE International Conference on Computer Vision, 2017: 4780-4788.
[5] Godard C, Mac Aodha O, Brostow G J, et al. Unsupervised Monocular Depth Estimation with Left-Right Consistency. 30th IEEE Conference on Computer Vision and Pattern Recognition, 2017: 6602-6611.
Those skilled in the art will appreciate that the drawings are schematic diagrams of a preferred embodiment, and that the serial numbers of the embodiments are for description only and do not indicate relative merit.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. An image dehazing method based on a deep neural network, characterized in that the method comprises: selecting a global atmospheric light and an atmospheric scattering coefficient, and generating hazy images and their transmittance maps from scene depth; forming a training set from the haze-free images, the hazy images and the transmittance maps;
constructing, on an encoder-decoder architecture, a generator network comprising a transmittance estimation sub-network and a dehazing sub-network; and training the generator with a linear combination of an adversarial loss function, a transmittance L1-norm loss function and a dehazed-image L1-norm loss function;
constructing a discriminator network from convolutional layers, a sigmoid activation function and LeakyReLU functions; taking real haze-free images and the dehazed images produced by the dehazing sub-network as positive and negative samples respectively, and training the discriminator with cross-entropy as the cost function;
performing adversarial training by training the generator and the discriminator alternately;
after training is complete, feeding a hazy image to be dehazed into the generator and obtaining the dehazed image in a single forward pass.
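The first step of claim 1 — generating a hazy image and its transmittance map from scene depth — can be sketched with the standard atmospheric scattering model I = J·t + A·(1 − t), t = exp(−β·d). The array shapes and the values of A and β below are illustrative assumptions, not values specified by the patent.

```python
import numpy as np

def synthesize_hazy(J, depth, A=0.9, beta=1.2):
    """Generate a hazy image and its transmittance map from a haze-free
    image J (H x W x 3, values in [0, 1]) and a depth map (H x W),
    following the atmospheric scattering model I = J*t + A*(1 - t)."""
    t = np.exp(-beta * depth)        # transmittance decays with scene depth
    t3 = t[..., np.newaxis]          # broadcast over the 3 color channels
    I = J * t3 + A * (1.0 - t3)      # hazy observation
    return I, t

# Toy example: at zero depth the transmittance is 1 and no haze is added
J = np.full((4, 4, 3), 0.5)
I, t = synthesize_hazy(J, depth=np.zeros((4, 4)))
```

With a constant nonzero depth, every pixel is blended toward the global atmospheric light A, which is the visual effect of haze.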
2. The image dehazing method based on a deep neural network according to claim 1, wherein the generator network comprises:
the transmittance estimation sub-network, which produces a single-channel transmittance map, and the dehazing sub-network, which produces a three-channel dehazed image;
an encoder composed of n convolutional layers, whose activation function is the LeakyReLU function and which applies batch normalization to the image data;
a decoder composed of n convolutional layers, which enlarges the image size by transposed convolution; the activation function of the last convolutional layer is the Tanh function, while the other layers use the ReLU function.
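The elementary building blocks named in claim 2 — LeakyReLU, ReLU and batch normalization — can be sketched in plain NumPy. The negative slope 0.2 and the epsilon value are common defaults assumed here; the claim does not specify them.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    """LeakyReLU: identity for x >= 0, small negative slope otherwise."""
    return np.where(x >= 0, x, slope * x)

def relu(x):
    """ReLU: clamp negative values to zero."""
    return np.maximum(x, 0.0)

def batch_norm(x, eps=1e-5):
    """Normalize a batch of feature maps (N, C, H, W) to zero mean and
    unit variance per channel, as applied after the encoder convolutions."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(8, 16, 32, 32)   # a batch of 16-channel feature maps
y = batch_norm(x)
```

The Tanh output of the decoder's last layer (np.tanh) bounds the generated image to (−1, 1), which is why generated images are typically rescaled to that range during training.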
3. The image dehazing method based on a deep neural network according to claim 2, wherein
skip connections are used between corresponding layers of the encoder and the decoder to pass the encoder's results to the decoder, the encoder and decoder being symmetric in structure;
the feature maps produced by the encoder's convolutions are concatenated along the channel dimension with the decoder feature maps of the same size, yielding new feature maps.
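The channel-wise concatenation in claim 3 can be sketched as follows; the feature-map sizes are illustrative.

```python
import numpy as np

def skip_connect(encoder_feat, decoder_feat):
    """Concatenate an encoder feature map with a same-sized decoder
    feature map (both N, C, H, W) along the channel axis, as in a
    U-Net style skip connection."""
    assert encoder_feat.shape[2:] == decoder_feat.shape[2:], \
        "spatial sizes must match before concatenation"
    return np.concatenate([encoder_feat, decoder_feat], axis=1)

enc = np.random.randn(1, 64, 32, 32)   # feature map from an encoder layer
dec = np.random.randn(1, 64, 32, 32)   # same-sized decoder feature map
fused = skip_connect(enc, dec)          # channels double: (1, 128, 32, 32)
```

The skip connection lets high-resolution detail from the encoder bypass the bottleneck, which is why such symmetric architectures preserve edges better than a plain encoder-decoder.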
4. The image dehazing method based on a deep neural network according to claim 1, wherein the adversarial loss function is:
L_adv = (1/N) Σ_{i=1}^{N} log(1 − D(Ii, G2(Ii)))
where Ii is the hazy image generated from the i-th haze-free image, i = 1, 2, ..., N, N is the number of haze-free images in the training set, and D(Ii, G2(Ii)) denotes the output of the discriminator for the dehazed image G2(Ii) generated from the hazy image Ii by the dehazing sub-network G2.
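A NumPy sketch of an adversarial loss of the form described in claim 4, assuming the common log(1 − D(·)) formulation for the generator side; the exact expression in the granted patent may differ, and the discriminator here is represented only by its scores in (0, 1).

```python
import numpy as np

def adversarial_loss(d_outputs):
    """Generator-side adversarial loss: the mean of log(1 - D(I_i, G2(I_i)))
    over a batch of discriminator scores in (0, 1). Minimizing it pushes
    the dehazed images toward being classified as real."""
    d = np.asarray(d_outputs, dtype=float)
    return np.mean(np.log(1.0 - d))

# The closer D scores the dehazed images to 1 ("real"), the lower the loss
loss_fooled = adversarial_loss([0.9, 0.8])   # discriminator fooled
loss_caught = adversarial_loss([0.1, 0.2])   # discriminator confident fakes
```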
5. The image dehazing method based on a deep neural network according to claim 1, wherein the transmittance L1-norm loss function is:
L_t = (1/N) Σ_{i=1}^{N} ‖G1(Ii) − ti‖₁
where Ii and ti are respectively the hazy image and the transmittance map generated from the i-th haze-free image, i = 1, 2, ..., N, N is the number of haze-free images in the training set, and G1(Ii) denotes the transmittance map produced from the hazy image Ii by the transmittance estimation sub-network G1.
6. The image dehazing method based on a deep neural network according to claim 1, wherein the dehazed-image L1-norm loss function is:
L_J = (1/N) Σ_{i=1}^{N} ‖G2(Ii) − Ji‖₁
where Ji and Ii are the i-th haze-free image and the corresponding hazy image, i = 1, 2, ..., N, N is the number of haze-free images in the training set, and G2(Ii) denotes the dehazed image produced from the hazy image Ii by the dehazing sub-network G2.
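The two L1-norm losses of claims 5 and 6 share the same form — an L1 distance between a sub-network output and its ground truth, averaged over the N training images — and can be sketched with one helper. The array shapes below are illustrative.

```python
import numpy as np

def l1_loss(prediction, target):
    """(1/N) * sum_i ||prediction_i - target_i||_1 over a batch of N items:
    used for the transmittance map (claim 5, single channel) and the
    dehazed image (claim 6, three channels)."""
    return np.sum(np.abs(prediction - target)) / prediction.shape[0]

t_true = np.full((2, 32, 32), 0.7)      # ground-truth transmittance maps
t_pred = np.full((2, 32, 32), 0.7)      # a perfect prediction gives zero loss
J_true = np.zeros((2, 3, 32, 32))       # ground-truth haze-free images
J_pred = np.full((2, 3, 32, 32), 0.1)   # off by 0.1 at every pixel
```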
7. The image dehazing method based on a deep neural network according to claim 1, wherein the discriminator network is as follows:
the network consists of m convolutional layers; the output of the last layer uses the sigmoid activation function, and the remaining layers use the LeakyReLU activation function;
the training objective is:
when the input is a real haze-free image J, the discriminator output is 1; when the input is a dehazed image G2(I), the discriminator output is 0.
8. The image dehazing method based on a deep neural network according to claim 7, wherein the loss function used by the discriminator network is:
L_D = −(1/N) Σ_{i=1}^{N} [ log D(Ii, Ji) + log(1 − D(Ii, G2(Ii))) ]
where Ji and Ii are the i-th haze-free image and the corresponding hazy image, i = 1, 2, ..., N, N is the number of haze-free images in the training set; D(Ii, Ji) denotes the discriminator output for the real haze-free image Ji conditioned on the hazy image Ii, and D(Ii, G2(Ii)) denotes the discriminator output for the dehazed image G2(Ii) generated from Ii by the dehazing sub-network, also conditioned on Ii, with D(Ii, G2(Ii)) ∈ (0, 1) and D(Ii, Ji) ∈ (0, 1).
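The cross-entropy objective of claims 7 and 8 can be sketched as a standard binary cross-entropy over discriminator scores, with real (Ii, Ji) pairs labeled 1 and dehazed (Ii, G2(Ii)) pairs labeled 0; the scores below stand in for actual network outputs.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy over a batch: drives D(I_i, J_i) toward 1 for
    real haze-free images and D(I_i, G2(I_i)) toward 0 for dehazed images.
    Both score arrays must lie in the open interval (0, 1)."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

good = discriminator_loss([0.9, 0.95], [0.05, 0.1])  # confident and correct
bad = discriminator_loss([0.5, 0.5], [0.5, 0.5])     # undecided everywhere
```

In the alternating scheme of claim 1, this loss is minimized in the discriminator step while the generator step minimizes the combined loss of claims 4-6.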
CN201811208024.0A 2018-10-17 2018-10-17 An image dehazing method based on deep neural network Expired - Fee Related CN109472818B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811208024.0A CN109472818B (en) 2018-10-17 2018-10-17 An image dehazing method based on deep neural network

Publications (2)

Publication Number Publication Date
CN109472818A 2019-03-15
CN109472818B 2021-07-02

Family

ID=65664610

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811208024.0A Expired - Fee Related CN109472818B (en) 2018-10-17 2018-10-17 An image dehazing method based on deep neural network

Country Status (1)

Country Link
CN (1) CN109472818B (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046653A (en) * 2019-03-22 2019-07-23 赣州好朋友科技有限公司 A kind of white tungsten method for separating and system based on XRT ray
CN110348569A (en) * 2019-07-18 2019-10-18 华中科技大学 Real-time optical chromatography method and system based on convolutional neural networks
CN110544213A (en) * 2019-08-06 2019-12-06 天津大学 An image defogging method based on global and local feature fusion
CN110570371A (en) * 2019-08-28 2019-12-13 天津大学 An image defogging method based on multi-scale residual learning
CN110675462A (en) * 2019-09-17 2020-01-10 天津大学 A Colorization Method of Grayscale Image Based on Convolutional Neural Network
CN110766640A (en) * 2019-11-05 2020-02-07 中山大学 An Image Dehazing Method Based on Deep Semantic Segmentation
CN111105336A (en) * 2019-12-04 2020-05-05 山东浪潮人工智能研究院有限公司 Image watermarking removing method based on countermeasure network
CN111489301A (en) * 2020-03-19 2020-08-04 山西大学 Image defogging method based on image depth information guide for migration learning
CN111507909A (en) * 2020-03-18 2020-08-07 南方电网科学研究院有限责任公司 A method, device and storage medium for sharpening foggy image
CN111539896A (en) * 2020-04-30 2020-08-14 华中科技大学 A method and system for image dehazing based on domain adaptation
CN111833277A (en) * 2020-07-27 2020-10-27 大连海事大学 A sea image dehazing method with unpaired multi-scale hybrid encoder-decoder structure
CN111932466A (en) * 2020-07-10 2020-11-13 北京邮电大学 Image defogging method, electronic equipment and storage medium
CN111986108A (en) * 2020-08-07 2020-11-24 西北工业大学 Complex sea-air scene image defogging method based on generation countermeasure network
CN112116537A (en) * 2020-08-31 2020-12-22 中国科学院长春光学精密机械与物理研究所 Image reflected light elimination method and image reflected light elimination network construction method
CN112598598A (en) * 2020-12-25 2021-04-02 南京信息工程大学滨江学院 Image reflected light removing method based on two-stage reflected light eliminating network
CN112994840A (en) * 2021-02-03 2021-06-18 白盒子(上海)微电子科技有限公司 Decoder based on neural network
CN113191964A (en) * 2021-04-09 2021-07-30 上海海事大学 Unsupervised night image defogging method using high-frequency and low-frequency decomposition
CN113379618A (en) * 2021-05-06 2021-09-10 航天东方红卫星有限公司 Optical remote sensing image cloud removing method based on residual dense connection and feature fusion
CN113658051A (en) * 2021-06-25 2021-11-16 南京邮电大学 A method and system for image dehazing based on recurrent generative adversarial network
CN113724156A (en) * 2021-08-09 2021-11-30 浙江工业大学 Generation countermeasure network defogging method and system combined with atmospheric scattering model
CN116342437A (en) * 2023-05-30 2023-06-27 中国空气动力研究与发展中心低速空气动力研究所 Image defogging method based on densely connected sub-pixel GAN model

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504658A (en) * 2014-12-15 2015-04-08 中国科学院深圳先进技术研究院 Single image defogging method and device on basis of BP (Back Propagation) neural network
US20160169076A1 (en) * 2014-12-15 2016-06-16 Continental Automotive Gmbh Method for monitoring an oxidation catalysis device
CN105719247A (en) * 2016-01-13 2016-06-29 华南农业大学 Characteristic learning-based single image defogging method
CN107798669A (en) * 2017-12-08 2018-03-13 北京小米移动软件有限公司 Image defogging method, device and computer-readable recording medium
CN108615226A (en) * 2018-04-18 2018-10-02 南京信息工程大学 A kind of image defogging method fighting network based on production
CN108665432A (en) * 2018-05-18 2018-10-16 百年金海科技有限公司 A kind of single image to the fog method based on generation confrontation network


Also Published As

Publication number Publication date
CN109472818B (en) 2021-07-02

Similar Documents

Publication Publication Date Title
CN109472818A (en) An image dehazing method based on deep neural network
Zhou et al. Fsad-net: feedback spatial attention dehazing network
CN110544213B (en) An image defogging method based on global and local feature fusion
CN106910175B (en) A single image dehazing algorithm based on deep learning
CN109272455B (en) Image Dehazing Method Based on Weakly Supervised Generative Adversarial Network
CN102289791B (en) A Fast Single Image Dehazing Method
CN109493300B (en) Aerial image real-time defogging method based on FPGA (field programmable Gate array) convolutional neural network and unmanned aerial vehicle
CN108269244B (en) An Image Dehazing System Based on Deep Learning and Prior Constraints
CN105719247A (en) Characteristic learning-based single image defogging method
CN108805839A (en) Combined estimator image defogging method based on convolutional neural networks
CN104504658A (en) Single image defogging method and device on basis of BP (Back Propagation) neural network
CN109993804A (en) A road scene dehazing method based on conditional generative adversarial network
CN108154492B (en) An image haze removal method based on non-local mean filtering
CN109509156A (en) A kind of image defogging processing method based on generation confrontation model
CN114387195A (en) Infrared image and visible light image fusion method based on non-global pre-enhancement
CN109410144A (en) A kind of end-to-end image defogging processing method based on deep learning
CN115861113B (en) A semi-supervised dehazing method based on fusion of depth map and feature mask
CN112950521A (en) Image defogging method and generator network
CN116664454A (en) An Underwater Image Enhancement Method Based on Prediction of Multi-scale Color Migration Parameters
CN115527159A (en) A counting system and method based on cross-modal scale attention aggregation features
CN112184566A (en) An image processing method and system for removing attached water mist and water droplets
CN114841885B (en) A Dehazing Fusion Processing Method Based on Polarized Image Data
CN114820366A (en) Multi-scale lightweight image defogging network based on deep learning
CN109816610A (en) An image dehazing system
CN102968767A (en) Method for real-time restoration of fog-degraded image with white balance correction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210702