
CN116703750A - Image defogging method and system based on edge attention and multi-order differential loss - Google Patents


Info

Publication number
CN116703750A
CN116703750A
Authority
CN
China
Prior art keywords
image
network
loss
foggy
clear
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310519184.1A
Other languages
Chinese (zh)
Inventor
李展
康志清
龙航
杨洋
杜卓耘
朱琳徽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan University
Original Assignee
Jinan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinan University filed Critical Jinan University
Priority to CN202310519184.1A priority Critical patent/CN116703750A/en
Publication of CN116703750A publication Critical patent/CN116703750A/en
Pending legal-status Critical Current

Classifications

    • G06N 3/045: Combinations of networks
    • G06N 3/0455: Auto-encoder networks; encoder-decoder networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/084: Learning methods; backpropagation, e.g. using gradient descent
    • G06N 3/0895: Weakly supervised learning, e.g. semi-supervised or self-supervised learning
    • G06N 3/094: Adversarial learning
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image defogging method and system based on edge attention and multi-order differential loss. The method comprises the following steps: acquiring a foggy image data set, preprocessing it, dividing it into a dense-fog image data set and a thin-fog image data set, and taking the thin-fog image data set as the input image data set; constructing a restoration network and a degradation network; jointly training the restoration network and the degradation network to obtain the trained restoration network and degradation network; and inputting the image to be tested into the trained restoration network to obtain the defogging result. According to the invention, the defogged images and the real haze-free images are processed by multi-order differential convolution templates, constraining the defogged images to be consistent with the real haze-free images in contrast, brightness and edge information, and the clear-image four-class discriminator is trained adversarially against the restoration network, so that the effects of the restoration network and the degradation network become more pronounced, the performance of the network is improved, and the defogging effect is improved.

Description

Image defogging method and system based on edge attention and multi-order differential loss
Technical Field
The invention relates to the technical field of image processing, in particular to an image defogging method and system based on edge attention and multi-order differential loss.
Background
At present, the quality of images captured in foggy environments is greatly reduced: the contrast drops, the color fidelity decreases, edge information is lost, and the visual effect of the whole image is poor. Such degraded images are highly disturbing and destructive for widely applied computer vision tasks.
Research on single-image defogging aims to recover the original haze-free image given a degraded picture caused by fog. Current defogging algorithms are mainly divided into defogging methods based on physical priors and defogging methods based on deep learning.
He observed from a large number of outdoor haze-free images that, in non-sky regions, at least one color channel has very low pixel values; this is called the dark channel. Based on this dark channel prior, the transmission map and the global atmospheric light intensity are estimated from the hazy image, and a haze-free clear image is restored. Since the gray values of sky regions are similar to those of fog, the dark channel prior defogging method cannot handle foggy images containing large sky regions well. Zhu et al. found that the brightness and saturation of pixels in a foggy image change sharply with fog concentration: haze-free regions show low brightness and high saturation, dense-fog regions show high brightness and low saturation, and as fog concentration increases the brightness gradually increases while the saturation gradually decreases. Based on this color attenuation prior model, the scene depth is estimated from brightness and saturation, and a clear image is reconstructed. Defogging methods based on physical priors depend on prior knowledge of the image, which is tied to the actual application scene; when the prior does not hold, the defogging quality is poor.
With the development of deep learning, defogging methods based on deep learning have been widely studied. However, some of these methods do not attend to the edge information of the image during defogging, so the quality of the defogged image is poor, with problems such as color cast and incomplete defogging; meanwhile, some methods depend on paired data sets, which are complex and difficult to acquire.
Deep-learning-based defogging methods fall mainly into two categories. The first relies on the atmospheric scattering model: a transmission map is learned by a convolutional neural network, and single-image defogging is then realized according to the model formula. DehazeNet feeds a foggy image into a constructed convolutional network and outputs the corresponding transmission map, directly estimating the end-to-end mapping between the foggy image and the transmission; it reconstructs a clear image using the atmospheric scattering model and proposes a new activation function, BReLU, to improve defogging quality. Boyun Li et al. propose three combined sub-networks, namely a clear-image estimation network, a transmission-map estimation network and an atmospheric-light estimation network; given a foggy image, the clear image, transmission map and atmospheric light intensity are obtained through the three sub-networks respectively, and image defogging is realized by an unsupervised method without depending on paired data sets.
The other category of deep-learning defogging algorithms does not depend on the atmospheric scattering model: features are extracted by a convolutional neural network, the mapping from foggy images to haze-free clear images is learned from large data sets, and an end-to-end defogging network is built that takes a foggy image as input and outputs a haze-free image, realizing single-image defogging.
GridDehazeNet provides three modules: a preprocessing module, a backbone module and a post-processing module. The preprocessing module enhances the input image in a targeted manner; the backbone extracts multi-scale information from the image based on an attention mechanism and exchanges information effectively across scales; the post-processing module improves the quality of the defogged image. The whole network works well on uniform fog but removes non-uniform fog poorly. GCANet uses smoothed dilated convolution to reduce the grid artifacts brought by ordinary dilated convolution, eliminating artifacts, and uses an additional gated sub-network to fuse information of different scales for image restoration. Dong et al. propose an enhanced decoder that supervises the recovery of haze-free images using only a reconstruction L1 loss. Hong et al. introduce an additional teacher network, using the information of positive samples (clear images) extracted by the teacher network to guide the student network (the defogging network), but do not make good use of the information of negative samples (foggy images).
Existing deep-learning defogging methods remove non-uniform fog poorly and do not attend to the edge information of the image during defogging, so after defogging the image edges are blurred and details occluded by fog are hard to reconstruct; meanwhile, they depend on paired data sets, and paired foggy data sets are complex and difficult to acquire.
Disclosure of Invention
In order to overcome the defects and shortcomings in the prior art, the invention provides an image defogging method and system based on edge attention and multi-order differential loss.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the invention provides an image defogging method based on edge attention and multi-order differential loss, which comprises the following steps:
acquiring a foggy image data set, preprocessing the foggy image data set, dividing it into a dense-fog image data set and a thin-fog image data set, and taking the thin-fog image data set as the input image data set;
constructing a restoration network and a degradation network;
jointly training the restoration network and the degradation network to obtain the trained restoration network and degradation network, which specifically comprises:
first stage of training: the thin-fog image is passed through the restoration network to obtain a first clear image, and the first clear image is passed through the degradation network to obtain a foggy image;
second stage of training: the thin-fog image is passed through the degradation network to obtain a dense-fog image, and the dense-fog image is passed through the restoration network to obtain a second clear image;
content loss calculation is performed between the first clear image, the second clear image and the real haze-free image; multi-order differential loss calculation is performed between the first clear image, the second clear image and the real haze-free image; cycle-consistency loss calculation is performed between the foggy image and the thin-fog image; the first clear image is input into a clear-image four-class discriminator for adversarial loss calculation, the clear-image four-class discriminator being trained adversarially against the restoration network; and the dense-fog image is input into a dense-fog four-class discriminator for adversarial loss calculation, the dense-fog four-class discriminator being trained adversarially against the degradation network;
the total loss is obtained by weighted summation of the content loss, adversarial loss, cycle-consistency loss and multi-order differential loss, and the parameters of the networks are updated by a gradient descent algorithm;
and inputting the image to be tested into the trained restoration network to obtain the defogging result.
As a preferred technical solution, preprocessing the foggy image data set specifically comprises:
measuring the FADE fog-density index of every foggy image in the foggy image data set, calculating the mean of all FADE indexes, classifying images above the mean as dense-fog images and images below the mean as thin-fog images.
As a preferred technical solution, the restoration network comprises an edge attention branch and a super-resolution restoration branch;
the edge attention branch adopts an encoder-decoder structure; the pre-trained network Res2Net is used to extract features in the encoder, and the decoder comprises a plurality of edge attention modules, each containing two edge attention layers;
the super-resolution restoration branch adopts an encoder-decoder structure; the pre-trained network Res2Net is used to extract features in the encoder, and the decoder comprises a plurality of residual blocks for extracting image features and recovering lost image information;
the image features output by the edge attention branch and the super-resolution restoration branch pass sequentially through a progressive compression block, mirror padding, a convolution layer and a Tanh activation function to obtain the restored clear image;
the network structure of the degradation network is the same as that of the edge attention branch in the restoration network.
As a preferred technical solution, the first training stage and the second training stage are performed synchronously, where the first training stage is supervised learning and the second training stage is unsupervised learning.
As a preferred technical solution, the content loss calculation between the first clear image, the second clear image and the real haze-free image is specifically expressed as:
$$L_{con}=\frac{1}{N}\sum_{i=1}^{N}\left(\left\|J_{gt}-b\right\|_1+\left\|J_{gt}-e\right\|_1\right)$$
where L_con is the content loss, N is the total number of images, J_gt is the real haze-free image, b is the first clear image, and e is the second clear image.
As a preferred technical solution, the multi-order differential loss calculation between the first clear image, the second clear image and the real haze-free image is specifically expressed as:
$$L_{frac}=\frac{1}{N}\sum_{i=1}^{N}\sum_{v\in V}\left(\left\|F_v(J_{gt})-F_v(b)\right\|_1+\left\|F_v(J_{gt})-F_v(e)\right\|_1\right)$$
where L_frac is the multi-order differential loss, N is the total number of images, v is the order, F_v is the v-order differential convolution, V is the order list, J_gt is the real haze-free image, b is the first clear image, and e is the second clear image.
As a preferred technical solution, the cycle-consistency loss calculation between the foggy image and the thin-fog image is specifically expressed as:
$$L_{cycle}=\frac{1}{N}\sum_{i=1}^{N}\left\|a-c\right\|_1$$
where N is the total number of images, a is the input image, and c is the foggy image.
As a preferred technical solution, the first clear image is input into the clear-image four-class discriminator for adversarial loss calculation, the clear-image four-class discriminator is trained adversarially against the restoration network, the dense-fog image is input into the dense-fog four-class discriminator for adversarial loss calculation, and the dense-fog four-class discriminator is trained adversarially against the degradation network, specifically expressed as:
$$L_{D_1}=CE(D_1(J_{gt}),C_{gt})+CE(D_1(a),C_a)+CE(D_1(I_{dense\text{-}gt}),C_{dense\text{-}gt})+CE(D_1(b),C_{RNet})$$
$$L_{D_2}=CE(D_2(J_{gt}),C_{gt})+CE(D_2(a),C_a)+CE(D_2(I_{dense\text{-}gt}),C_{dense\text{-}gt})+CE(D_2(d),C_{DNet})$$
$$L_{gan}(RNet,D_1)=CE(D_1(b),C_{gt})$$
$$L_{gan}(DNet,D_2)=CE(D_2(d),C_{dense\text{-}gt})$$
$$L_{adv}=L_{gan}(RNet,D_1)+L_{gan}(DNet,D_2)$$
where L_{D_1} is the loss for training the clear-image four-class discriminator, L_{D_2} is the loss for training the dense-fog four-class discriminator, L_gan(RNet, D_1) is the adversarial loss of the restoration network RNet and the clear-image four-class discriminator D_1, L_gan(DNet, D_2) is the adversarial loss of the degradation network DNet and the dense-fog four-class discriminator D_2, L_adv is the total adversarial loss, CE is the cross-entropy loss, J_gt is the real haze-free image, I_dense-gt is a real dense-fog image, a is the input thin-fog image, b is the first clear image, d is the dense-fog image, C_gt denotes the class of the real haze-free image, C_a the class of the input thin-fog image, C_dense-gt the class of the real dense-fog image, C_RNet the class of clear images generated by the restoration network RNet, and C_DNet the class of foggy images generated by the degradation network DNet.
The present invention also provides an image defogging system based on edge attention and multi-order differential loss, comprising: the system comprises an image acquisition module, an image preprocessing module, a network construction module, a network training module and a defogging module;
the image acquisition module is used for acquiring a foggy image data set;
the image preprocessing module is used for preprocessing the foggy image data set, dividing it into a dense-fog image data set and a thin-fog image data set, and taking the thin-fog image data set as the input image data set;
the network construction module is used for constructing a restoration network and a degradation network;
the network training module is used for jointly training the restoration network and the degradation network to obtain the trained restoration network and degradation network, specifically comprising:
first stage of training: the thin-fog image is passed through the restoration network to obtain a first clear image, and the first clear image is passed through the degradation network to obtain a foggy image;
second stage of training: the thin-fog image is passed through the degradation network to obtain a dense-fog image, and the dense-fog image is passed through the restoration network to obtain a second clear image;
content loss calculation is performed between the first clear image, the second clear image and the real haze-free image; multi-order differential loss calculation is performed between the first clear image, the second clear image and the real haze-free image; cycle-consistency loss calculation is performed between the foggy image and the thin-fog image; the first clear image is input into a clear-image four-class discriminator for adversarial loss calculation, the clear-image four-class discriminator being trained adversarially against the restoration network; and the dense-fog image is input into a dense-fog four-class discriminator for adversarial loss calculation, the dense-fog four-class discriminator being trained adversarially against the degradation network;
the total loss is obtained by weighted summation of the content loss, adversarial loss, cycle-consistency loss and multi-order differential loss, and the parameters of the networks are updated by a gradient descent algorithm;
the defogging module is used for inputting the image to be tested into the trained restoration network to obtain the defogging result.
As a preferred technical solution, the content loss calculation between the first clear image, the second clear image and the real haze-free image is specifically expressed as:
$$L_{con}=\frac{1}{N}\sum_{i=1}^{N}\left(\left\|J_{gt}-b\right\|_1+\left\|J_{gt}-e\right\|_1\right)$$
where L_con is the content loss, N is the total number of images, J_gt is the real haze-free image, b is the first clear image, and e is the second clear image;
the multi-order differential loss calculation between the first clear image, the second clear image and the real haze-free image is specifically expressed as:
$$L_{frac}=\frac{1}{N}\sum_{i=1}^{N}\sum_{v\in V}\left(\left\|F_v(J_{gt})-F_v(b)\right\|_1+\left\|F_v(J_{gt})-F_v(e)\right\|_1\right)$$
where L_frac is the multi-order differential loss, v is the order, F_v is the v-order differential convolution, and V is the order list;
the cycle-consistency loss calculation between the foggy image and the thin-fog image is specifically expressed as:
$$L_{cycle}=\frac{1}{N}\sum_{i=1}^{N}\left\|a-c\right\|_1$$
where a is the input image and c is the foggy image;
the first clear image is input into the clear-image four-class discriminator for adversarial loss calculation, the clear-image four-class discriminator is trained adversarially against the restoration network, the dense-fog image is input into the dense-fog four-class discriminator for adversarial loss calculation, and the dense-fog four-class discriminator is trained adversarially against the degradation network, specifically expressed as:
$$L_{D_1}=CE(D_1(J_{gt}),C_{gt})+CE(D_1(a),C_a)+CE(D_1(I_{dense\text{-}gt}),C_{dense\text{-}gt})+CE(D_1(b),C_{RNet})$$
$$L_{D_2}=CE(D_2(J_{gt}),C_{gt})+CE(D_2(a),C_a)+CE(D_2(I_{dense\text{-}gt}),C_{dense\text{-}gt})+CE(D_2(d),C_{DNet})$$
$$L_{gan}(RNet,D_1)=CE(D_1(b),C_{gt})$$
$$L_{gan}(DNet,D_2)=CE(D_2(d),C_{dense\text{-}gt})$$
$$L_{adv}=L_{gan}(RNet,D_1)+L_{gan}(DNet,D_2)$$
where L_{D_1} is the loss for training the clear-image four-class discriminator, L_{D_2} is the loss for training the dense-fog four-class discriminator, L_gan(RNet, D_1) is the adversarial loss of the restoration network RNet and the clear-image four-class discriminator D_1, L_gan(DNet, D_2) is the adversarial loss of the degradation network DNet and the dense-fog four-class discriminator D_2, L_adv is the total adversarial loss, CE is the cross-entropy loss, J_gt is the real haze-free image, I_dense-gt is a real dense-fog image, a is the input thin-fog image, b is the first clear image, d is the dense-fog image, C_gt denotes the class of the real haze-free image, C_a the class of the input thin-fog image, C_dense-gt the class of the real dense-fog image, C_RNet the class of clear images generated by the restoration network RNet, and C_DNet the class of foggy images generated by the degradation network DNet.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) The invention builds a cyclic dual-model. The defogging network, i.e. the restoration network, adopts a dual-branch design: an edge attention module is designed in the edge attention branch, which enhances the network's perception of image edge information by learning the weights of image edges and improves its ability to restore edge information, while the super-resolution restoration branch reconstructs lost details. The fog-adding network, i.e. the degradation network, ultimately improves the robustness and performance of the restoration network by learning images of different degradation degrees. In the training process, the restoration-degradation pass is supervised learning and the degradation-restoration pass is unsupervised learning; this semi-supervised scheme reduces the dependence on data sets.
(2) Existing loss functions commonly use an L1 loss to constrain the defogged image against the real haze-free image, but the L1 loss only measures the pixel-level difference between the two images and does not capture the differences in contrast and edge information between the defogged image and the real haze-free image. The multi-order differential loss introduced by the invention constrains this consistency.
(3) Compared with conventional binary classification discriminators, the two four-class discriminators constrain the clear image in the restoration-degradation process and the dense-fog image in the degradation-restoration process, making the effects of the restoration network and the degradation network more pronounced and improving the performance of the network.
Drawings
FIG. 1 is a flow chart of an image defogging method based on edge attention and multi-order differential loss according to the present invention;
FIG. 2 is a flow chart of the combined training of the restoration network and the degradation network according to the present invention;
FIG. 3 is a schematic diagram of a network architecture of a restoration network according to the present invention;
FIG. 4 is a schematic diagram of a network structure of an edge attention layer according to the present invention;
FIG. 5 is a block diagram of the four-classification discriminator of the present invention;
FIG. 6 is a schematic diagram of a fractional convolution template in the multi-order differential loss calculation of the present invention;
FIG. 7 is a schematic diagram of a visual result of an edge attention map according to the present invention;
FIG. 8 is a comparative illustration of defogging effect according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Example 1
As shown in fig. 1, the present embodiment provides an image defogging method based on edge attention and multi-order differential loss, which includes the following steps:
s1: acquiring a foggy image data set, preprocessing the foggy image data set, dividing the foggy image data set into a thick foggy image data set and a thin foggy image data set, and taking the thin foggy image data set as an input image data set;
In this embodiment, the step of preprocessing the images specifically comprises:
measuring the FADE fog-density index of every foggy image in the data set (the FADE index evaluates how dense the fog in an image is), calculating the mean of all FADE indexes, classifying images above the mean as dense-fog images and images below the mean as thin-fog images; the thin-fog images are taken as the input images a, their corresponding real haze-free images are used for supervised training, and the dense-fog images are used for unsupervised training, i.e. for the adversarial loss calculation in the defogging process;
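As a minimal sketch of this split, assuming a fade_score(img) helper that returns the FADE fog-density index of an image (no standard Python implementation of FADE is assumed here):

```python
import numpy as np

def split_by_fade(images, fade_score):
    """Split foggy images into thin-fog and dense-fog subsets by mean FADE index.

    `fade_score` is a callable returning the FADE fog-density index of an
    image (higher means denser fog); it must be supplied externally.
    """
    scores = np.array([fade_score(img) for img in images])
    mean_score = scores.mean()
    dense = [img for img, s in zip(images, scores) if s > mean_score]
    thin = [img for img, s in zip(images, scores) if s <= mean_score]
    return thin, dense  # the thin-fog images become the input set a
```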
s2: constructing a restoration network and a degradation network;
As shown in fig. 3, the restoration network comprises an edge attention branch, used to improve the network's perception of edge information, and a super-resolution restoration branch, used to reconstruct lost details and textures; the edge attention branch comprises a plurality of edge attention modules, each containing two edge attention layers;
In this embodiment, the edge attention branch is based on an encoder-decoder structure. In the encoder, the pre-trained network Res2Net is used to extract features at multiple scales, namely E_{1/2}, E_{1/4} and E_{1/8}, denoting feature maps at 1/2, 1/4 and 1/8 of the original input resolution; this accelerates the convergence of the network. In decoding at the different scales, the decoder comprises a plurality of edge attention modules, each containing 2 edge attention layers. During decoding, upsampling is performed with a PixelShuffle module, which converts a low-resolution feature map into a high-resolution one by rearranging groups of low-resolution pixels into larger pixel blocks of the high-resolution image;
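For reference, a minimal PixelShuffle upsampling step as used in the decoder (the channel count is illustrative, not taken from the patent):

```python
import torch
import torch.nn as nn

# The convolution expands the channels by a factor of r^2 = 4; PixelShuffle
# then rearranges those channels into a feature map of twice the resolution.
up = nn.Sequential(nn.Conv2d(64, 64 * 4, 3, padding=1), nn.PixelShuffle(2))
x = torch.randn(1, 64, 32, 32)
print(up(x).shape)  # torch.Size([1, 64, 64, 64])
```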
As shown in fig. 4, each edge attention layer consists of three branches: an X branch (XLayer), a Y branch (YLayer) and an information extraction branch (InfoLayer). The X and Y branches each learn edge weights of the image, and their outputs are added pixel-by-pixel to give the final edge attention map EA. The InfoLayer further extracts feature information from the input F; the resulting output is multiplied pixel-by-pixel by the edge attention map, and F is added back to obtain the output feature F_out.
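A sketch of one edge attention layer following this description; the text does not give the kernel sizes or activations inside XLayer, YLayer and InfoLayer, so the 3×3 convolutions, Sigmoid and ReLU below are assumptions:

```python
import torch.nn as nn

class EdgeAttentionLayer(nn.Module):
    """F_out = InfoLayer(F) * EA + F, where EA = XLayer(F) + YLayer(F)."""

    def __init__(self, channels):
        super().__init__()
        # Assumed layer internals; the patent only names the three branches.
        self.x_layer = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.Sigmoid())
        self.y_layer = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.Sigmoid())
        self.info_layer = nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, f):
        ea = self.x_layer(f) + self.y_layer(f)  # pixel-wise addition -> edge attention map
        return self.info_layer(f) * ea + f      # pixel-wise product plus residual
```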
The invention visualizes the edge attention map: the map is normalized and converted to a Jet color map, as shown in fig. 7. The fog in the input foggy image is unevenly distributed and covers essentially the whole image, and a large amount of texture and detail is lost. In the edge attention map, the weights at the edges of the wood are higher, while the right end face of the wood, the ground and parts of the sky, which are smoother regions, receive lower weights; the edge parts coincide with the Sobel edge map of the haze-free image. This verifies that the edge attention block makes the network focus more on the edge information of the image in foggy regions, improving the network's ability to reconstruct edge information there.
In this embodiment, the encoder of the super-resolution restoration branch uses the pre-trained network Res2Net to extract features and shares these parameters with the edge attention branch; the decoder comprises a plurality of residual blocks for extracting features and recovering lost information, together with several PixelShuffle modules, used for upsampling, and a convolutional layer;
In this embodiment, the input image passes through the edge attention branch and the super-resolution restoration branch to obtain the corresponding feature outputs, which are fused by concatenation and then passed through a progressive compression block, mirror padding (ReflectionPad), a 7×7 convolution and a Tanh activation function to obtain the restored clear image. The progressive compression block is a series of successive convolutions that compress the feature count from 80 down to 20, reducing the loss of features.
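A sketch of this fusion and output head; the 40 + 40 channel split between the two branches and the intermediate compression widths are assumptions, while the 80-to-20 compression, reflection padding, 7×7 convolution and Tanh come from the description:

```python
import torch
import torch.nn as nn

class OutputHead(nn.Module):
    """Fuses the two branch outputs and maps them to a restored RGB image."""

    def __init__(self):
        super().__init__()
        # Progressive compression block: 80 -> 20 channels via successive
        # convolutions (the intermediate widths 60 and 40 are assumptions).
        self.compress = nn.Sequential(
            nn.Conv2d(80, 60, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(60, 40, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(40, 20, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.out = nn.Sequential(
            nn.ReflectionPad2d(3),            # mirror padding for the 7x7 conv
            nn.Conv2d(20, 3, kernel_size=7),
            nn.Tanh(),
        )

    def forward(self, edge_feat, sr_feat):
        fused = torch.cat([edge_feat, sr_feat], dim=1)  # assumes 40 + 40 = 80 channels
        return self.out(self.compress(fused))
```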
In this embodiment, the network structure of the degradation network is the same as that of the edge attention branch in the restoration network. The degradation network is used to simulate and generate blurred, foggy images; by learning degradation factors in images of different degradation degrees, it gradually strengthens the performance of the restoration network.
In this embodiment, supervised training is performed in the restoration-degradation process, and unsupervised training is performed in the degradation-restoration process;
As shown in fig. 5, a clear-image four-class discriminator is constructed and trained adversarially against the restoration network. The discriminator takes as input a real clear haze-free image, a thin-fog image, a dense-fog image, or a clear image generated by the restoration network RNet, and its convolutional network outputs four values, the probabilities that the input belongs to each of the four classes; the clear-image four-class discriminator thus judges whether the generated image belongs to the real clear haze-free class, the thin-fog class, the dense-fog class, or the restoration-network class;
In this embodiment, the structure of the dense-fog four-class discriminator is the same as that of the clear-image four-class discriminator; it takes as input a clear haze-free image, a thin-fog image, a real dense-fog image, or a dense-fog image generated by the degradation network DNet, and outputs the probability of each class. In the adversarial process with the degradation network, the degradation network generates blurred foggy images, and the dense-fog four-class discriminator judges whether the generated image belongs to a real dense-fog image, a clear image, or the degradation network; this adversarial game improves the mapping ability of the degradation network.
In this embodiment, the clear-image four-class discriminator and the dense-fog four-class discriminator are used to constrain the restoration network and the degradation network, strengthening the mapping ability of the networks.
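A sketch of a four-class discriminator consistent with this description; fig. 5 gives the actual structure, which is not reproduced in the text, so the strided-convolution stack and channel widths below are assumptions:

```python
import torch.nn as nn

class FourClassDiscriminator(nn.Module):
    """Outputs 4 logits: real clear, thin fog, dense fog, generated image."""

    def __init__(self, in_ch=3, base=64):
        super().__init__()
        layers, ch = [], in_ch
        for out_ch in (base, base * 2, base * 4, base * 8):  # assumed widths
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out_ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(ch, 4))  # one logit per class

    def forward(self, x):
        return self.head(self.features(x))
```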
S3: as shown in fig. 2, the restoration network and the degradation network are jointly trained to obtain the trained restoration network and degradation network; the training process is as follows:
In the first training stage, the thin-fog image is taken as the input image a; the input image a passes through the restoration network to obtain a first clear image b, which then passes through the degradation network to obtain a foggy image c.
In the second training stage, the input image a passes through the degradation network to obtain a dense-fog image d, which then passes through the restoration network to obtain a second clear image e.
The first and second stages of training proceed synchronously. The intermediate step of the first training stage is supervised learning: loss is calculated between the first clear image b and the real haze-free image, and b is input into the clear-image four-class discriminator for adversarial loss calculation. The intermediate step of the second training stage is unsupervised learning: the dense-fog image d is input into the dense-fog four-class discriminator for adversarial loss calculation.
The specific loss calculation process is as follows. The losses referred to in fig. 1 include L_con and L_frac, pointed to by b and e, the L_gan pointed to by b, and the L_gan pointed to by d. According to the nature of each loss function, the L_con pointed to by b and e is called the content loss and the L_frac pointed to by b and e is called the multi-order differential loss; the L_gan terms pointed to by b and d are called the adversarial losses, and the loss between c and a is called the cycle-consistency loss. Specifically, content loss and multi-order differential loss are calculated between the first clear image b, the second clear image e and the real haze-free image; the first clear image b and the dense-fog image d are input into the clear-image four-class discriminator and the dense-fog four-class discriminator, respectively, for adversarial loss calculation; and the cycle-consistency loss is calculated between the foggy image c and the thin-fog image a. Finally, the total loss is obtained from the content loss, adversarial loss, cycle-consistency loss and multi-order differential loss, and the parameters of the networks are updated by a gradient descent algorithm; through this loss calculation, the loss decreases continuously during training and the performance of the network improves;
In this embodiment, the content loss is specifically expressed as:
$$L_{con}=\frac{1}{N}\sum_{i=1}^{N}\left(\left\|J_{gt}-b\right\|_1+\left\|J_{gt}-e\right\|_1\right)$$
where L_con is the content loss, N is the total number of images, J_gt is the real haze-free image, b is the clear image generated from the input image by the restoration network, and e is the clear image obtained from the input image through the degradation-restoration process.
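A sketch of the content loss as reconstructed above, assuming an L1 pixel-level distance:

```python
import torch.nn.functional as F

def content_loss(b, e, j_gt):
    """L1 distance of both generated clear images to the real haze-free image."""
    return F.l1_loss(b, j_gt) + F.l1_loss(e, j_gt)
```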
In this embodiment, the multi-order differential loss function is specifically expressed as:
$$L_{frac}=\frac{1}{N}\sum_{i=1}^{N}\sum_{v\in V}\left(\left\|F_v(J_{gt})-F_v(b)\right\|_1+\left\|F_v(J_{gt})-F_v(e)\right\|_1\right)$$
where L_frac is the multi-order differential loss, N is the total number of images, v is the order, F_v is the v-order differential convolution, and V is the order list, which may consist of any combination of different orders; this example adopts the classical Tiansi template, and the preferred order list is 0.5, 0.6 and 1.0.
As shown in fig. 6, v is the corresponding order, the numbers in the template are the parameters of the convolution kernel, and the convolution size of the template is 5×5; J_gt is the real haze-free image, b is the clear image generated from the input image by the restoration network, and e is the clear image obtained from the input image through the degradation-restoration process. In the multi-order differential loss function of this embodiment, loss calculation is performed on the images processed by the templates of different orders and the results are then weighted; the three convolution templates of orders 0.5, 0.6 and 1.0 in fig. 6 are used to process the images respectively.
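A sketch of the multi-order differential loss; the 5×5 Tiansi kernel coefficients appear only in fig. 6 and are not reproduced in the text, so TIANSI_KERNELS below is a placeholder to be filled with the template values for orders 0.5, 0.6 and 1.0:

```python
import torch
import torch.nn.functional as F

# Placeholder: fill with the 5x5 Tiansi template coefficients of fig. 6
# for orders 0.5, 0.6 and 1.0 (the values are not reproduced in the text).
TIANSI_KERNELS = {v: torch.zeros(5, 5) for v in (0.5, 0.6, 1.0)}

def multi_order_differential_loss(b, e, j_gt, orders=(0.5, 0.6, 1.0)):
    """Compare v-order differential responses of generated and real images."""
    loss = 0.0
    for v in orders:
        k = TIANSI_KERNELS[v].view(1, 1, 5, 5).repeat(3, 1, 1, 1).to(b.device)
        def conv(x):
            return F.conv2d(x, k, padding=2, groups=3)  # per-channel filtering
        loss = loss + F.l1_loss(conv(b), conv(j_gt)) + F.l1_loss(conv(e), conv(j_gt))
    return loss
```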
In this embodiment, the cycle-consistency loss is specifically expressed as:
$$L_{cycle}=\frac{1}{N}\sum_{i=1}^{N}\left\|a-c\right\|_1$$
where N is the total number of images, a is the input image, and c is the foggy image generated from the input image a after restoration by the restoration network and degradation by the degradation network.
In this embodiment, the training losses and adversarial losses of the clear-image four-class discriminator D_1 and the dense-fog four-class discriminator D_2 are as follows:
$$L_{D_1}=CE(D_1(J_{gt}),C_{gt})+CE(D_1(a),C_a)+CE(D_1(I_{dense\text{-}gt}),C_{dense\text{-}gt})+CE(D_1(b),C_{RNet})$$
$$L_{D_2}=CE(D_2(J_{gt}),C_{gt})+CE(D_2(a),C_a)+CE(D_2(I_{dense\text{-}gt}),C_{dense\text{-}gt})+CE(D_2(d),C_{DNet})$$
$$L_{gan}(RNet,D_1)=CE(D_1(b),C_{gt})$$
$$L_{gan}(DNet,D_2)=CE(D_2(d),C_{dense\text{-}gt})$$
$$L_{adv}=L_{gan}(RNet,D_1)+L_{gan}(DNet,D_2)$$
where L_{D_1} is the loss for training the clear-image four-class discriminator and L_{D_2} the loss for training the dense-fog four-class discriminator; L_gan(RNet, D_1) is the adversarial loss of RNet and D_1, L_gan(DNet, D_2) is the adversarial loss of DNet and D_2, and L_adv is the total adversarial loss; CE is the cross-entropy loss; J_gt is the real haze-free image, i.e. the paired haze-free image corresponding to the input haze image; I_dense-gt is a real dense-fog image; a is the input thin-fog image; b is the clear image generated from the input image by the restoration network; d is the dense-fog image obtained from the input image through the degradation network; C_gt denotes the class of the real haze-free image, C_a the class of the input thin-fog image, C_dense-gt the class of the real dense-fog image, C_RNet the class of clear images generated by the restoration network RNet, and C_DNet the class of dense-fog images generated by the degradation network DNet;
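A sketch of these objectives in PyTorch; the class-index ordering 0 to 3 (real clear, thin fog, dense fog, generated) is an assumed convention:

```python
import torch
import torch.nn.functional as F

# Assumed class indices: 0 = real clear, 1 = thin fog, 2 = dense fog, 3 = generated.
def _ce(logits, cls):
    target = torch.full((logits.size(0),), cls, dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)

def d1_training_loss(d1, j_gt, a, i_dense, b):
    """Teach the clear-image discriminator to separate the four classes."""
    return (_ce(d1(j_gt), 0) + _ce(d1(a), 1) +
            _ce(d1(i_dense), 2) + _ce(d1(b.detach()), 3))

def adversarial_loss(d1, d2, b, d):
    """Generators try to pass their outputs off as the matching real class."""
    return _ce(d1(b), 0) + _ce(d2(d), 2)  # L_gan(RNet,D1) + L_gan(DNet,D2) = L_adv
```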
The total loss function is specifically:
$$L_{total}=\alpha L_{con}+\beta L_{cycle}+\gamma L_{adv}+\lambda L_{frac}$$
where α, β, γ and λ denote weight parameters, preferably 1.0, 0.1 and 0.1 respectively;
and (3) reversely propagating and updating parameters of the network by a random gradient descent method until the network converges, so as to obtain a trained recovery network.
S4: the trained restoration network is obtained, and the image to be tested is processed by the restoration network to obtain the defogging result.
Only the restoration network is needed in the defogging process; the role of the degradation network is to generate images of high degradation degree during training and to be trained jointly with the restoration network, thereby improving the restoration performance of the restoration network.
In this embodiment, an Adam optimizer is used with parameters β1 = 0.9 and β2 = 0.999. The number of edge attention layers is set to 2, the orders in the multi-order differential loss are chosen as 0.5, 0.6 and 1.0, and in the total loss function α is set to 1.0 and β to 0.2. The training epoch count is set to 500 with batch_size set to 1; the learning rate is initialized to 1e-4 and halved every 50 epochs.
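The corresponding optimizer and schedule in PyTorch, as a sketch building on the training step above (rnet and dnet stand for the restoration and degradation networks):

```python
import itertools
import torch

# rnet and dnet are the restoration and degradation networks defined elsewhere.
params = itertools.chain(rnet.parameters(), dnet.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.999))
# Halve the learning rate every 50 epochs, as described above.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(500):
    # ... one pass over the data set with batch_size = 1, calling train_step ...
    scheduler.step()
```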
Experiments were conducted on the NH-HAZE data set; fig. 8 shows the experimental comparison results, where the first row shows the foggy images, the second row the defogging results of FFA, the third row the defogging results of the proposed method, and the fourth row the real haze-free images. The comparison with the defogging results of FFA shows the effectiveness and rationality of the defogging method provided by the invention.
The index comparison is shown in table 1 below; as the table shows, the proposed method is superior to the FFA network in both the PSNR and SSIM indexes.
Table 1 index comparison table
Example 2:
this embodiment is the same as embodiment 1 except for the following technical matters;
the present embodiment provides an image defogging system based on edge attention and multi-order differential loss, comprising: the system comprises an image acquisition module, an image preprocessing module, a network construction module, a network training module and a defogging module;
in this embodiment, the image acquisition module is configured to acquire a foggy image dataset;
In this embodiment, the image preprocessing module is used for preprocessing the foggy image data set, dividing it into a dense-fog image data set and a thin-fog image data set, and taking the thin-fog image data set as the input image data set;
in this embodiment, the network construction module is configured to construct a restoration network and a degradation network;
In this embodiment, the network training module is used for jointly training the restoration network and the degradation network to obtain the trained restoration network and degradation network, specifically comprising:
first stage of training: the thin-fog image is passed through the restoration network to obtain a first clear image, and the first clear image is passed through the degradation network to obtain a foggy image;
second stage of training: the thin-fog image is passed through the degradation network to obtain a dense-fog image, and the dense-fog image is passed through the restoration network to obtain a second clear image;
content loss calculation is performed between the first clear image, the second clear image and the real haze-free image; multi-order differential loss calculation is performed between the first clear image, the second clear image and the real haze-free image; cycle-consistency loss calculation is performed between the foggy image and the thin-fog image; the first clear image is input into a clear-image four-class discriminator for adversarial loss calculation, the clear-image four-class discriminator being trained adversarially against the restoration network; and the dense-fog image is input into a dense-fog four-class discriminator for adversarial loss calculation, the dense-fog four-class discriminator being trained adversarially against the degradation network;
the total loss is obtained by weighted summation of the content loss, adversarial loss, cycle-consistency loss and multi-order differential loss, and the parameters of the networks are updated by a gradient descent algorithm;
In this embodiment, the defogging module is used for inputting the image to be tested into the trained restoration network to obtain the defogging result.
In this embodiment, the restoration network comprises an edge attention branch and a super-resolution restoration branch;
the edge attention branch adopts an encoder-decoder structure; the pre-trained network Res2Net is used to extract features in the encoder, and the decoder comprises a plurality of edge attention modules, each containing two edge attention layers;
the super-resolution restoration branch adopts an encoder-decoder structure; the pre-trained network Res2Net is used to extract features in the encoder, and the decoder comprises a plurality of residual blocks for extracting image features and recovering lost image information;
the image features output by the edge attention branch and the super-resolution restoration branch pass sequentially through a progressive compression block, mirror padding, a convolution layer and a Tanh activation function to obtain the restored clear image;
the network structure of the degradation network is the same as that of the edge attention branch in the restoration network.
In this embodiment, the first and second training stages are performed synchronously; the first stage is supervised learning and the second stage is unsupervised learning.
In this embodiment, the content loss calculation between the first clear image, the second clear image and the real haze-free image is specifically expressed as:
$$L_{con}=\frac{1}{N}\sum_{i=1}^{N}\left(\left\|J_{gt}-b\right\|_1+\left\|J_{gt}-e\right\|_1\right)$$
where L_con is the content loss, N is the total number of images, J_gt is the real haze-free image, b is the first clear image, and e is the second clear image;
the multi-order differential loss calculation between the first clear image, the second clear image and the real haze-free image is specifically expressed as:
$$L_{frac}=\frac{1}{N}\sum_{i=1}^{N}\sum_{v\in V}\left(\left\|F_v(J_{gt})-F_v(b)\right\|_1+\left\|F_v(J_{gt})-F_v(e)\right\|_1\right)$$
where L_frac is the multi-order differential loss, v is the order, F_v is the v-order differential convolution, and V is the order list;
the cycle-consistency loss calculation between the foggy image and the thin-fog image is specifically expressed as:
$$L_{cycle}=\frac{1}{N}\sum_{i=1}^{N}\left\|a-c\right\|_1$$
where a is the input image and c is the foggy image;
the first clear image is input into the clear-image four-class discriminator for adversarial loss calculation, the clear-image four-class discriminator is trained adversarially against the restoration network, the dense-fog image is input into the dense-fog four-class discriminator for adversarial loss calculation, and the dense-fog four-class discriminator is trained adversarially against the degradation network, specifically expressed as:
$$L_{D_1}=CE(D_1(J_{gt}),C_{gt})+CE(D_1(a),C_a)+CE(D_1(I_{dense\text{-}gt}),C_{dense\text{-}gt})+CE(D_1(b),C_{RNet})$$
$$L_{D_2}=CE(D_2(J_{gt}),C_{gt})+CE(D_2(a),C_a)+CE(D_2(I_{dense\text{-}gt}),C_{dense\text{-}gt})+CE(D_2(d),C_{DNet})$$
$$L_{gan}(RNet,D_1)=CE(D_1(b),C_{gt})$$
$$L_{gan}(DNet,D_2)=CE(D_2(d),C_{dense\text{-}gt})$$
$$L_{adv}=L_{gan}(RNet,D_1)+L_{gan}(DNet,D_2)$$
where L_{D_1} is the loss for training the clear-image four-class discriminator, L_{D_2} is the loss for training the dense-fog four-class discriminator, L_gan(RNet, D_1) is the adversarial loss of the restoration network RNet and the clear-image four-class discriminator D_1, L_gan(DNet, D_2) is the adversarial loss of the degradation network DNet and the dense-fog four-class discriminator D_2, L_adv is the total adversarial loss, CE is the cross-entropy loss, J_gt is the real haze-free image, I_dense-gt is a real dense-fog image, a is the input thin-fog image, b is the first clear image, d is the dense-fog image, C_gt denotes the class of the real haze-free image, C_a the class of the input thin-fog image, C_dense-gt the class of the real dense-fog image, C_RNet the class of clear images generated by the restoration network RNet, and C_DNet the class of foggy images generated by the degradation network DNet.
In this embodiment, a multi-order differential loss is introduced into the loss function calculation, constraining the consistency of the defogged image with the real haze-free image in contrast and edge information and improving the performance of the network.
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited to the above examples, and any other changes, modifications, substitutions, combinations, and simplifications that do not depart from the spirit and principle of the present invention should be made in the equivalent manner, and the embodiments are included in the protection scope of the present invention.

Claims (10)

1. An image defogging method based on edge attention and multi-order differential loss, comprising the steps of:
acquiring a foggy image data set, preprocessing the foggy image data set, dividing it into a dense-fog image data set and a thin-fog image data set, and taking the thin-fog image data set as the input image data set;
constructing a restoration network and a degradation network;
jointly training the restoration network and the degradation network to obtain the trained restoration network and degradation network, which specifically comprises:
first stage of training: the thin-fog image is passed through the restoration network to obtain a first clear image, and the first clear image is passed through the degradation network to obtain a foggy image;
second stage of training: the thin-fog image is passed through the degradation network to obtain a dense-fog image, and the dense-fog image is passed through the restoration network to obtain a second clear image;
content loss calculation is performed between the first clear image, the second clear image and the real haze-free image; multi-order differential loss calculation is performed between the first clear image, the second clear image and the real haze-free image; cycle-consistency loss calculation is performed between the foggy image and the thin-fog image; the first clear image is input into a clear-image four-class discriminator for adversarial loss calculation, the clear-image four-class discriminator being trained adversarially against the restoration network; and the dense-fog image is input into a dense-fog four-class discriminator for adversarial loss calculation, the dense-fog four-class discriminator being trained adversarially against the degradation network;
the total loss is obtained by weighted summation of the content loss, adversarial loss, cycle-consistency loss and multi-order differential loss, and the parameters of the networks are updated by a gradient descent algorithm;
and inputting the image to be tested into the trained restoration network to obtain the defogging result.
2. The image defogging method based on edge attention and multi-order differential loss according to claim 1, wherein the preprocessing of the foggy image dataset comprises:
and measuring FADE fog concentration indexes of all the foggy images in the foggy image data set, calculating the average value of all the FADE fog concentration indexes, dividing the images larger than the average value into thick foggy images, and dividing the images lower than the average value into thin foggy images.
3. The image defogging method based on the edge attention and the multi-order differential loss of claim 1, wherein the restoration network comprises an edge attention branch and a super resolution restoration branch;
the edge attention branch adopts an encoder-decoder structure; the pre-trained network Res2Net is used to extract features in the encoder, and the decoder comprises a plurality of edge attention modules, each containing two edge attention layers;
the super-resolution restoration branch adopts an encoder-decoder structure; the pre-trained network Res2Net is used to extract features in the encoder, and the decoder comprises a plurality of residual blocks for extracting image features and recovering lost image information;
the image features output by the edge attention branch and the super-resolution restoration branch pass sequentially through a progressive compression block, mirror padding, a convolution layer and a Tanh activation function to obtain the restored clear image;
the network structure of the degradation network is the same as that of the edge attention branch in the restoration network.
4. The image defogging method based on the edge attention and the multi-order differential loss according to claim 1, wherein the first stage of training and the second stage of training are performed simultaneously, and the first stage of training is supervised learning and the second stage of training is unsupervised learning.
5. The image defogging method based on edge attention and multi-order differential loss according to claim 1, wherein the content loss calculation between the first clear image, the second clear image and the real haze-free image is specifically expressed as:
$$L_{con}=\frac{1}{N}\sum_{i=1}^{N}\left(\left\|J_{gt}-b\right\|_1+\left\|J_{gt}-e\right\|_1\right)$$
where L_con is the content loss, N is the total number of images, J_gt is the real haze-free image, b is the first clear image, and e is the second clear image.
6. The image defogging method based on edge attention and multi-order differential loss according to claim 1, wherein the multi-order differential loss calculation between the first clear image, the second clear image and the real haze-free image is specifically expressed as:
$$L_{frac}=\frac{1}{N}\sum_{i=1}^{N}\sum_{v\in V}\left(\left\|F_v(J_{gt})-F_v(b)\right\|_1+\left\|F_v(J_{gt})-F_v(e)\right\|_1\right)$$
where L_frac is the multi-order differential loss, N is the total number of images, v is the order, F_v is the v-order differential convolution, V is the order list, J_gt is the real haze-free image, b is the first clear image, and e is the second clear image.
7. The image defogging method based on edge attention and multi-order differential loss according to claim 1, wherein the cycle-consistency loss calculation between the foggy image and the thin-fog image is specifically expressed as:
$$L_{cycle}=\frac{1}{N}\sum_{i=1}^{N}\left\|a-c\right\|_1$$
where N is the total number of images, a is the input image, and c is the foggy image.
8. The image defogging method based on edge attention and multi-order differential loss according to claim 1, wherein the first clear image is input into the clear-image four-class discriminator for adversarial loss calculation, the clear-image four-class discriminator is trained adversarially against the restoration network, the dense-fog image is input into the dense-fog four-class discriminator for adversarial loss calculation, and the dense-fog four-class discriminator is trained adversarially against the degradation network, specifically expressed as:
$$L_{D_1}=CE(D_1(J_{gt}),C_{gt})+CE(D_1(a),C_a)+CE(D_1(I_{dense\text{-}gt}),C_{dense\text{-}gt})+CE(D_1(b),C_{RNet})$$
$$L_{D_2}=CE(D_2(J_{gt}),C_{gt})+CE(D_2(a),C_a)+CE(D_2(I_{dense\text{-}gt}),C_{dense\text{-}gt})+CE(D_2(d),C_{DNet})$$
$$L_{gan}(RNet,D_1)=CE(D_1(b),C_{gt})$$
$$L_{gan}(DNet,D_2)=CE(D_2(d),C_{dense\text{-}gt})$$
$$L_{adv}=L_{gan}(RNet,D_1)+L_{gan}(DNet,D_2)$$
where L_{D_1} is the loss for training the clear-image four-class discriminator, L_{D_2} is the loss for training the dense-fog four-class discriminator, L_gan(RNet, D_1) is the adversarial loss of the restoration network RNet and the clear-image four-class discriminator D_1, L_gan(DNet, D_2) is the adversarial loss of the degradation network DNet and the dense-fog four-class discriminator D_2, L_adv is the total adversarial loss, CE is the cross-entropy loss, J_gt is the real haze-free image, I_dense-gt is a real dense-fog image, a is the input thin-fog image, b is the first clear image, d is the dense-fog image, C_gt denotes the class of the real haze-free image, C_a the class of the input thin-fog image, C_dense-gt the class of the real dense-fog image, C_RNet the class of clear images generated by the restoration network RNet, and C_DNet the class of foggy images generated by the degradation network DNet.
9. An image defogging system based on edge attention and multi-order differential loss, comprising: the system comprises an image acquisition module, an image preprocessing module, a network construction module, a network training module and a defogging module;
the image acquisition module is used for acquiring a foggy image data set;
the image preprocessing module is used for preprocessing the foggy image data set, dividing it into a dense-fog image data set and a thin-fog image data set, and taking the thin-fog image data set as the input image data set;
the network construction module is used for constructing a restoration network and a degradation network;
the network training module is used for jointly training the restoration network and the degradation network to obtain the trained restoration network and the trained degradation network, specifically:
first stage of training: the input thin-fog image is passed through the restoration network to obtain the first clear image, and the first clear image is passed through the degradation network to obtain the reconstructed haze image;
second stage of training: the input haze image is passed through the degradation network to obtain the dense-fog image, and the dense-fog image is passed through the restoration network to obtain the second clear image;
content loss calculation is performed on the first clear image, the second clear image and the real haze-free image; multi-order differential loss calculation is performed on the first clear image, the second clear image and the real haze-free image; cycle-consistency loss calculation is performed on the input haze image and the reconstructed haze image; the first clear image is input into the clear-image four-class discriminator for adversarial loss calculation, the clear-image four-class discriminator and the restoration network being adversarially trained; and the dense-fog image is input into the dense-fog four-class discriminator for adversarial loss calculation, the dense-fog four-class discriminator and the degradation network being adversarially trained;
the total loss is obtained as a weighted sum of the content loss, the adversarial loss, the cycle-consistency loss and the multi-order differential loss, and the network parameters are updated by a gradient descent algorithm (one possible realization is sketched after this claim);
the defogging module is used for inputting the image to be defogged into the trained restoration network to obtain the defogging result.
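A compact sketch of one joint training step, reusing content_loss, multi_order_diff_loss, l_gan_rnet and the class indices from the sketches above; rnet, dnet, d1 and d2 are assumed to be instantiated torch.nn.Module networks, and the loss weights w_* and the Adam optimizer are illustrative placeholders, not values fixed by the claims:

    import itertools
    import torch
    import torch.nn.functional as F

    opt_g = torch.optim.Adam(itertools.chain(rnet.parameters(), dnet.parameters()), lr=1e-4)

    def train_step(a, j_gt, w_con=1.0, w_adv=0.1, w_cyc=1.0, w_frac=0.5):
        b = rnet(a)   # stage one: thin-fog input -> first clear image
        c = dnet(b)   #            first clear image -> reconstructed haze image
        d = dnet(a)   # stage two: thin-fog input -> dense-fog image
        e = rnet(d)   #            dense-fog image -> second clear image
        l_con = content_loss(b, e, j_gt)
        l_frac = multi_order_diff_loss(b, e, j_gt)
        l_cyc = F.l1_loss(c, a)
        # DNet wants D_2 to label its dense-fog output as a real dense-fog image
        l_adv = l_gan_rnet(d1, b) + F.cross_entropy(
            d2(d), torch.full((d.size(0),), C_DENSE_GT, dtype=torch.long, device=d.device))
        total = w_con * l_con + w_adv * l_adv + w_cyc * l_cyc + w_frac * l_frac
        opt_g.zero_grad()
        total.backward()
        opt_g.step()
        return total

    # At inference only the trained restoration network is used:
    # defogged = rnet(image_to_defog)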
10. The image defogging system based on edge attention and multi-order differential loss according to claim 9, wherein the content loss is calculated on the first clear image, the second clear image and the real haze-free image, specifically expressed as:
where L_con is the content loss, N is the total number of images, J_gt is the real haze-free image, b is the first clear image, and e is the second clear image;
the multi-order differential loss is calculated on the first clear image, the second clear image and the real haze-free image, specifically expressed as:
where L_frac is the multi-order differential loss, v is the order, F_v is the v-order differential convolution, and V is the list of orders;
the cycle-consistency loss is calculated between the input haze image and the reconstructed haze image, specifically expressed as:
where a is the input haze image and c is the reconstructed haze image;
the first clear image is input into the clear-image four-class discriminator for adversarial loss calculation, the clear-image four-class discriminator and the restoration network being adversarially trained, and the dense-fog image is input into the dense-fog four-class discriminator for adversarial loss calculation, the dense-fog four-class discriminator and the degradation network being adversarially trained, specifically expressed as:
L_gan(RNet, D_1) = CE(D_1(b), C_gt)
where L_D1 is the loss for training the clear-image four-class discriminator, L_D2 is the loss for training the dense-fog four-class discriminator, L_gan(RNet, D_1) is the adversarial loss of the restoration network RNet against the clear-image four-class discriminator D_1, L_gan(DNet, D_2) is the adversarial loss of the degradation network DNet against the dense-fog four-class discriminator D_2, L_adv is the total adversarial loss, CE is the cross-entropy loss, J_gt is the real haze-free image, I_dense-gt is the real dense-fog image, d is the dense-fog image, C_gt is the class of the real haze-free image, C_a is the class of the input haze image, C_dense-gt is the class of the real dense-fog image, C_RNet is the class of the clear image generated by the restoration network RNet, and C_DNet is the class of the dense-fog image generated by the degradation network DNet.
CN202310519184.1A 2023-05-10 2023-05-10 Image defogging method and system based on edge attention and multi-order differential loss Pending CN116703750A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310519184.1A CN116703750A (en) 2023-05-10 2023-05-10 Image defogging method and system based on edge attention and multi-order differential loss

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310519184.1A CN116703750A (en) 2023-05-10 2023-05-10 Image defogging method and system based on edge attention and multi-order differential loss

Publications (1)

Publication Number Publication Date
CN116703750A true CN116703750A (en) 2023-09-05

Family

ID=87830240

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310519184.1A Pending CN116703750A (en) 2023-05-10 2023-05-10 Image defogging method and system based on edge attention and multi-order differential loss

Country Status (1)

Country Link
CN (1) CN116703750A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359107A (en) * 2022-01-11 2022-04-15 暨南大学 An unsupervised image dehazing method and system based on symbiotic dual model
US20220188975A1 (en) * 2019-04-19 2022-06-16 Nippon Telegraph And Telephone Corporation Image conversion device, image conversion model learning device, method, and program
CN115587934A (en) * 2022-10-18 2023-01-10 暨南大学 Method and system for image super-resolution reconstruction and dehazing based on loss classification and dual-branch network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220188975A1 (en) * 2019-04-19 2022-06-16 Nippon Telegraph And Telephone Corporation Image conversion device, image conversion model learning device, method, and program
CN114359107A (en) * 2022-01-11 2022-04-15 暨南大学 An unsupervised image dehazing method and system based on symbiotic dual model
CN115587934A (en) * 2022-10-18 2023-01-10 暨南大学 Method and system for image super-resolution reconstruction and dehazing based on loss classification and dual-branch network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RUTING DENG et al.: "Non-homogeneous Image Dehazing with Edge Attention Based on Relative Haze Density", 20TH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING, 8 August 2024 (2024-08-08) *
FU Yanfang et al.: "Image dehazing via progressive multi-level feature refinement and edge enhancement", Optics and Precision Engineering, 10 May 2022 (2022-05-10), pages 1091-1100 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117952865A (en) * 2024-03-25 2024-04-30 中国海洋大学 A Single Image Dehazing Method Based on Cyclic Generative Adversarial Network
CN118781016A (en) * 2024-07-02 2024-10-15 中国矿业大学 An adaptive image defogging and enhancement method and system in a mine dust and fog environment
CN119693273A (en) * 2024-11-29 2025-03-25 中国矿业大学(北京) Method and system for restoring high-level features of low-light images based on neural network
CN119693273B (en) * 2024-11-29 2025-09-02 中国矿业大学(北京) Method and system for restoring high-level features of low-light images based on neural network

Similar Documents

Publication Publication Date Title
CN111915530B (en) An end-to-end haze concentration adaptive neural network image dehazing method
CN113658051B (en) An image defogging method and system based on recurrent generative adversarial network
CN111784602B (en) Method for generating countermeasure network for image restoration
Long et al. Bishift networks for thick cloud removal with multitemporal remote sensing images
CN108520503B (en) A method for repairing face defect images based on autoencoder and generative adversarial network
CN114187203B (en) Attention-optimized depth codec defogging countermeasure network
CN116703750A (en) Image defogging method and system based on edge attention and multi-order differential loss
CN110443761B (en) Single image rain removing method based on multi-scale aggregation characteristics
CN115631107A (en) Edge-guided single image noise removal
CN115330639B (en) A deep-enhanced image denoising method based on non-local attention
CN114820381B (en) A digital image restoration method based on structural information embedding and attention mechanism
CN116721033B (en) A Single-Image Dehazing Method Based on Random Mask Convolution and Attention Mechanism
CN118333898B (en) Image defogging method and system based on improved generation countermeasure network
CN115082353B (en) Image restoration method based on multi-stream aggregation and dual attention dense connection network
Kumar et al. Underwater image enhancement using deep learning
CN117151990A (en) Image defogging method based on self-attention coding and decoding
CN111553856B (en) Image defogging method based on depth estimation assistance
CN116777782A (en) A multi-patch defogging method based on dual attention level feature fusion
Liu et al. Facial image inpainting using multi-level generative network
CN116228550A (en) A self-enhanced image defogging algorithm based on generative confrontation network
CN114841895A (en) Image shadow removing method based on bidirectional mapping network
Wu et al. Semantic image inpainting based on Generative Adversarial Networks
Yang et al. DBD-CR: Dual-Branch Diffusion Residual Reconstruction for Cloud Removal in Optical Remote Sensing Images
CN120852243B (en) Network training and application method and device based on spatial transformation and contrast learning
CN119579457B (en) An image motion deblurring method based on event-guided attention network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination