CN118279187B - Method and system for defogging foggy images by filling in information - Google Patents
- Publication number: CN118279187B (application CN202410143628.0A)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention relates to an information-filling defogging method and system for foggy images. The method comprises: acquiring a foggy image and preprocessing it; performing a first image enhancement on the processed image data with the ACE algorithm to obtain intermediate image data; applying a graying gamma transformation to the intermediate image data, adjusting its gamma value, and performing contrast adjustment and a second image enhancement to obtain the image data to be filled; and filling missing information into the image data to be filled with a coloring model and outputting the defogged image. Because the acquired foggy image is enhanced twice, and the second enhancement is combined with the coloring model to fill in missing information, much of the information lost in the original foggy image is restored. This aids both computer analysis and subjective observation by the human eye, handles the defogging of foggy images effectively, displays the image information more faithfully, and improves the defogging effect.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an information-filling defogging method and system for foggy images.
Background
With the development of society, living standards keep improving while the ecological environment gradually changes, so that weather such as haze occurs frequently; images captured in such weather are foggy and insufficiently clear. Defogging has therefore been widely applied in many fields. One conventional approach is the video defogging framework MAP-Net, which contains a memory-based physical prior guidance module, aimed at enhancing luminance before restoration, that encodes fog-prior-related features as long-range memory. It also proposes a multi-range scene cycle luminance restoration module, based on spatio-temporal deformable attention and multi-range guided aggregation, that can efficiently capture long-range temporal fog and scene cues from adjacent frames; with the help of this network, a foggy image can be restored to some extent. Another work, the real image dehazing network based on high-quality codebook priors (RIDCP), pre-trains a VQGAN on a large-scale high-quality dataset to obtain a discrete codebook encapsulating high-quality priors (HQPs); after the decoder uses the HQPs to replace the negative effects caused by haze, a novel normalized feature alignment module is introduced, so that the high-quality features can be used effectively and reasonably clean results can be generated.
An end-to-end system, DehazeNet, has been proposed for medium transmission estimation: it takes a hazy image as input, outputs its medium transmission map, and then recovers the haze-free image by means of the atmospheric scattering model (ASM). The innovation of this system is that haze is not removed from the image as a whole; instead, training follows the non-uniform distribution of haze across image blocks, and the transmission functions in the ASM differ between output image blocks when recovering the target image. An end-to-end feature fusion attention network (FFA-Net) has also been proposed, fusing a channel attention mechanism with a pixel attention mechanism, on the grounds that different channel features carry entirely different weighted information and that the haze distribution over an image is not uniform. Treating different features and pixels unequally provides additional flexibility in processing different types of information and extends the capability of convolutional neural networks (CNNs); experiments suggest that this approach is quantitatively and qualitatively superior to the previous state-of-the-art single-image defogging methods.
However, the above image defogging methods are almost all based on IET and IRT applied to the foggy image, and suffer from severe image distortion and unsatisfactory defogging results.
Disclosure of Invention
In order to solve the above technical problems, the present invention provides an information-filling defogging method and system for foggy images that can improve the defogging effect of the image.
An information-filled defogging method for a foggy image, the method comprising:
acquiring a foggy image, and carrying out image preprocessing on the foggy image to obtain processed image data;
Performing first image enhancement on the processed image data based on an ACE algorithm to obtain intermediate image data;
Performing graying gamma conversion on the intermediate image data, adjusting the gamma value of the intermediate image data, and performing contrast adjustment and second image enhancement to obtain image data to be filled;
and filling missing information into the image data to be filled by using a coloring model, and outputting a defogging image.
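Read end to end, the claimed method is a sequential four-step pipeline. The sketch below only illustrates the ordering of the steps; every function name is a hypothetical placeholder, implemented here as an identity stand-in so the sketch runs:

```python
import numpy as np

# Identity stand-ins so the sketch runs; each real operation is described
# in the corresponding embodiment. All names here are hypothetical.
preprocess = lambda img: img            # step 1: image preprocessing
ace_enhance = lambda img: img           # step 2: first enhancement (ACE algorithm)
gamma_and_contrast = lambda img: img    # step 3: graying gamma + contrast adjustment
fill_missing = lambda img: img          # step 4: coloring-model information filling

def defog(foggy: np.ndarray) -> np.ndarray:
    """Hypothetical ordering of the four claimed steps."""
    processed = preprocess(foggy)
    intermediate = ace_enhance(processed)
    to_fill = gamma_and_contrast(intermediate)
    return fill_missing(to_fill)
```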
In one embodiment, the performing, based on the ACE algorithm, the first image enhancement on the processed image data to obtain intermediate image data includes:
Performing color space adjustment on the processed image data based on the ACE algorithm to obtain adjusted image data;
and performing dynamic tone scaling processing on the adjusted image data to obtain intermediate image data.
In one embodiment, the performing color space adjustment on the processed image data based on the ACE algorithm to obtain adjusted image data includes:
Calculating an output pixel value for each channel in the processed image data based on the ACE algorithm;
and performing color comparison according to the output pixel values, performing local or global balance processing, and selecting an image subset to obtain adjusted image data.
In one embodiment, the performing dynamic tone scaling on the adjusted image data to obtain intermediate image data includes:
taking the adjusted image data as an intermediate pixel matrix;
and scaling the intermediate pixel matrix by using a linear scaling method to obtain intermediate image data.
In one embodiment, the performing the grayscale gamma transformation on the intermediate image data to adjust the gamma value of the intermediate image data includes:
Determining an optimal gamma value according to the gray value of the intermediate image data;
And adjusting the gamma value in the intermediate image data to the optimal gamma value through graying gamma conversion.
In one embodiment, the filling missing information into the image data to be filled using a coloring model includes:
extracting image features in the image data to be filled by using a backbone network in the coloring model;
Inputting the image characteristics into a pixel decoder to restore an image space structure;
Performing a color query on visual features in the image data to be filled through a color decoder, and outputting a color representation;
and fusing the image space structure and the color representation, generating a color channel output coloring result, and filling missing information.
In one embodiment, the method further comprises:
inputting the image features and color query instructions into a color decoding block in the coloring model;
Through the cross-attention mechanism, self-attention mechanism and feed forward operation employed in the shading model, a correlation between semantic and color representations is established.
In one embodiment, the method further comprises:
the coloring model colors the image data to be filled in an end-to-end manner.
An information-filled defogging system for a foggy image, the system comprising:
The image acquisition module is used for acquiring a foggy image, and carrying out image preprocessing on the foggy image to obtain processed image data;
the first image enhancement module is used for carrying out first image enhancement on the processed image data based on an ACE algorithm to obtain intermediate image data;
The second image enhancement module is used for carrying out gray-scale gamma conversion on the intermediate image data, adjusting the gamma value of the intermediate image data, and carrying out contrast adjustment and second image enhancement to obtain image data to be filled;
and the information filling module is used for filling missing information into the image data to be filled by using the coloring model and outputting defogging images.
According to the information-filling defogging method and system for foggy images, the acquired foggy image is enhanced twice, and the second enhancement is combined with the coloring model to fill in missing information. Much of the information lost in the original foggy image is thereby restored, which aids computer analysis and subjective observation by the human eye, handles the defogging of foggy images effectively, displays the image information faithfully, and improves the defogging effect.
Drawings
FIG. 1 is an application environment diagram of an information-filled defogging method of a foggy image in one embodiment;
FIG. 2 is a flow chart of an information-filled defogging method for a foggy image in one embodiment;
FIG. 3 is a flow diagram of image processing using the ACE algorithm in one embodiment;
FIG. 4 is a graph showing different gamma functions according to one embodiment;
FIG. 5 is a schematic diagram showing different effects of gamma values in one embodiment;
FIG. 6 is a schematic illustration of a coloring model in one embodiment;
FIG. 7 is a before-and-after comparison of defogging using the information-filled defogging method for a foggy image in one embodiment;
FIG. 8 is a block diagram of an information-filled defogging system for a foggy image in one embodiment;
Fig. 9 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The information-filling defogging method for foggy images provided by the embodiments of the application can be applied to the application environment shown in FIG. 1. As shown in FIG. 1, the application environment includes a computer device 110. The computer device 110 can acquire a foggy image and preprocess it to obtain processed image data; perform a first image enhancement on the processed image data based on the ACE algorithm to obtain intermediate image data; perform a graying gamma transformation on the intermediate image data, adjust its gamma value, and perform contrast adjustment and a second image enhancement to obtain the image data to be filled; and fill missing information into the image data to be filled using a coloring model and output the defogged image. The computer device 110 may be, but is not limited to, various personal computers, notebook computers, smart phones, robots, unmanned aerial vehicles, tablet computers, and the like.
In one embodiment, as shown in fig. 2, there is provided an information-filled defogging method of a foggy image, comprising the steps of:
step 202, acquiring a foggy image, and performing image preprocessing on the foggy image to obtain processed image data.
The computer device can acquire the foggy image through an image acquisition device; the foggy image may, for example, be an offshore foggy image. The computer device may then perform an image preprocessing operation on the foggy image. Specifically, the preprocessing may include denoising and edge detection: the foggy image is first denoised to eliminate noise, edge detection is then performed to obtain the edge information of targets in the image, and finally the image is scale-normalized so that subsequent processing can proceed smoothly.
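As a rough, self-contained illustration of the three preprocessing operations: the embodiment does not specify which filters to use, so the mean filter, Sobel operator, and min-max normalization below are stand-in choices, not the claimed implementation.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k-by-k mean filter as an illustrative denoising step."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def sobel_edges(img):
    """Gradient magnitude via Sobel kernels (edge information of targets)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            win = padded[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def normalize(img):
    """Scale normalization of intensities to [0, 1]."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)
```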
Step 204, performing first image enhancement on the processed image data based on the ACE algorithm to obtain intermediate image data.
The ACE algorithm is an unsupervised global and local color correction algorithm; the first image enhancement of the processed image data by the ACE algorithm yields colored intermediate image data.
Step 206, performing gray-scale gamma conversion on the intermediate image data, adjusting the gamma value of the intermediate image data, and performing contrast adjustment and second image enhancement to obtain the image data to be filled.
The response of the human eye is not linear in the input light intensity but approximately exponential: under low illumination, changes in brightness are easy to distinguish, whereas as illumination increases they become harder to distinguish; camera sensitivity, by contrast, is linear in the input light intensity. To preserve the image brightness information more effectively, in this embodiment the computer device may perform a graying gamma transformation on the image after the first enhancement. Gamma correction is used with 8-bit RGB images to preserve, in a limited storage space, as much of the color content to which human perception is sensitive as possible. The computer device obtains the image data to be filled by adjusting the gamma value of the intermediate image data and performing contrast adjustment and a second image enhancement.
And step 208, filling missing information into the image data to be filled by using the coloring model, and outputting a defogging image.
The coloring model may be the DDColor coloring model, which contains dual decoders for image coloring.
In this embodiment, the acquired foggy image is enhanced twice, and the second enhancement is combined with the coloring model to fill in missing information, so that much of the information lost in the original foggy image is restored. This aids computer analysis and subjective observation by the human eye, handles the defogging of foggy images effectively, displays the image information more faithfully, and improves the defogging effect.
In one embodiment, the provided information filling defogging method of the fog-containing image can further comprise a first image enhancement process, and the specific process comprises the steps of carrying out color space adjustment on processed image data based on an ACE algorithm to obtain adjusted image data, and carrying out dynamic tone scaling on the adjusted image data to obtain intermediate image data.
Specifically, in one embodiment, the computer device may calculate an output pixel value for each channel in the processed image data based on an ACE algorithm, perform color comparison according to the output pixel values, perform local or global balance processing, and select an image subset to obtain adjusted image data.
The first image enhancement of the processed image data based on the ACE algorithm mainly comprises two stages. The first stage is responsible for color space adjustment, handling color constancy and contrast adjustment; it performs a lateral-suppression mechanism weighted by pixel distance, producing local-global filtering. The second stage configures the output range to achieve accurate tone mapping: it dynamically scales the image, normalizing to white only over the full range. This two-stage structure is a feature shared by most computational models of the human visual system in the digital image field.
As shown in fig. 3, in this embodiment, when performing the first stage the computer device generates an output image R by color space adjustment, in which each pixel is recomputed from the image content. Each pixel p of the output image R is calculated separately for each channel c by the formula:

R_c(p) = Σ_{j ∈ Subset, j ≠ p} r(I_c(p) − I_c(j)) / d(p, j)

where I_c(p) − I_c(j) implements lateral suppression, d(·) is a distance function weighting the local or global contribution, and r(·) represents the relative lightness appearance of the pixel. In computing the output pixels, no compensation for the distance of a pixel from the image edges is calculated, so the formula is modified with a normalization coefficient, giving:

R_c(p) = [ Σ_{j ≠ p} r(I_c(p) − I_c(j)) / d(p, j) ] / [ Σ_{j ≠ p} r_max / d(p, j) ]

where r_max is the maximum value of r(·). The lateral-suppression mechanism is simulated by computing the difference between each pixel value and all other pixel values in the selected image subset.
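A minimal numpy sketch of the first ACE stage described above: each output pixel is the distance-weighted sum of saturated differences to all other pixels, normalized by the maximum attainable response. The saturation slope `alpha` and the choice of Euclidean distance for d(·) are illustrative assumptions, not fixed by the text.

```python
import numpy as np

def ace_channel(I, alpha=5.0):
    """First ACE stage for one channel: lateral suppression weighted by distance.

    I     -- 2-D array of intensities in [0, 1]
    alpha -- slope of the saturation function r(.) (a tunable, assumed choice)
    """
    h, w = I.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    vals = I.ravel()
    n = vals.size
    R = np.empty(n)
    r_max = 1.0  # maximum value of the saturation function r(.)
    for p in range(n):
        diff = vals[p] - vals                      # lateral suppression I_c(p) - I_c(j)
        r = np.clip(alpha * diff, -1.0, 1.0)       # relative lightness appearance r(.)
        d = np.hypot(coords[p, 0] - coords[:, 0],
                     coords[p, 1] - coords[:, 1])  # distance function d(p, j)
        mask = d > 0                               # exclude j == p
        R[p] = (r[mask] / d[mask]).sum() / (r_max / d[mask]).sum()
    return R.reshape(h, w)
```

The normalization keeps every output in [−1, 1]; a uniform image maps to all zeros, since every lateral difference vanishes.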
In one embodiment, the computer device may use the adjusted image data as an intermediate pixel matrix, and perform scaling processing on the intermediate pixel matrix by using a linear scaling method to obtain intermediate image data.
When the computer device performs the second stage, i.e. dynamically scaling the image, it maps the intermediate pixel matrix R into the final output image O. Not only can simple dynamic maximization (linear scaling) be performed; different reference values can also be selected in the intermediate matrix, the relative lightness appearance of each channel is mapped into gray levels, and, depending on the chosen reference points, an extra global balance between the gray-world and white-patch assumptions can be added. Two linear scaling methods obtain a standard 24-bit output image from the signed floating-point matrix R, and alternative scaling methods can account for the nonlinearity of human lightness adaptation without changing the two-stage structure of ACE.
In particular, the slope s_c can be calculated by linearly scaling the values in R_c, using M_c, the maximum of R_c, as the white reference point and the zero value of R_c as the estimate of the medium-gray reference point, via the formula O_c(p) = round[127.5 + s_c · R_c(p)]. In this way, hues around which the available dynamic range does not extend to very dark values may be lost, and some values in O_c may come out negative; in that case, values smaller than zero are set to zero. A global gray-world adjustment is added to the final scaling, so the dynamic range of the final image is always centered on medium gray.
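The second-stage linear scaling can be sketched as follows. Taking M_c as the maximum of R follows the white-reference description above; the clipping of negative results implements the rule that values smaller than zero are set to zero. This is an illustrative reading, not the only scaling the text allows.

```python
import numpy as np

def scale_to_output(R):
    """Second ACE stage: map the intermediate matrix R to an 8-bit image.

    Zero in R is anchored to medium gray (127.5); the white reference
    M_c = max(R) is mapped to 255; underflowing values are clipped to 0.
    """
    M = R.max()
    s = 127.5 / M if M > 0 else 0.0   # slope s_c from the white reference point
    O = np.round(127.5 + s * R)       # O_c(p) = round[127.5 + s_c * R_c(p)]
    return np.clip(O, 0, 255).astype(np.uint8)
```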
In one embodiment, the information filling defogging method of the fog-containing image can further comprise a gamma conversion process, wherein the process comprises the steps of determining an optimal gamma value according to the gray value of the intermediate image data, and adjusting the gamma value in the intermediate image data to the optimal gamma value through the gray gamma conversion.
The basic formula of the gamma transformation is V_out = A · V_in^γ, where A and γ are constants, V_in is the input gray level, and V_out is the output gray level. V_out as a function of V_in for various values of γ is plotted in FIG. 4.
As shown in fig. 4, curves with γ < 1 map a narrow band of dark input values to a wide band of output values, while compressing high input values; curves generated with γ > 1 have the opposite effect. Because the image after the first enhancement is hazy and dark overall, gray-level expansion is necessary, and it can be accomplished by the gamma transformation. An optimal gamma value of 0.4545 (about 1/2.2) is selected; the choice of gamma value is judged subjectively by the human eye, so that the twice-enhanced image is better suited to human observation. The gamma transformation of the original gray image with different gamma values is shown in fig. 5: with a gamma value of 0.4545 the image is brighter than before and the detail features in the image are more evident, which facilitates computer analysis and human observation.
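On gray levels normalized to [0, 1], the graying gamma transformation with the selected value γ = 0.4545 is a one-liner; A = 1 is assumed here, since the text does not fix it.

```python
import numpy as np

def gamma_transform(v_in, gamma=0.4545, A=1.0):
    """Gamma transformation V_out = A * V_in**gamma on gray levels in [0, 1].

    gamma = 0.4545 (about 1/2.2) brightens the dark, hazy mid-tones so that
    detail becomes easier to see for both the computer and the human eye.
    """
    return A * np.power(v_in, gamma)
```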
In one embodiment, the provided information filling and defogging method for the fog-containing image can further comprise a process of performing image processing by using a coloring model, and the specific process comprises the steps of extracting image features in image data to be filled by using a backbone network in the coloring model, inputting the image features into a pixel decoder to restore an image space structure, performing color inquiry on visual features in the image data to be filled by using a color decoder to output color representation, fusing the image space structure and the color representation, generating a color channel output coloring result, and performing missing information filling.
After the computer device performs image processing through the ACE algorithm and the gamma transformation, the image is no longer dim; it becomes bright, its local features are evident, and it is well suited to coloring by the coloring model. In this embodiment the DDColor coloring model can be used, an end-to-end method with dual decoders for image coloring. In one embodiment, the coloring model colors the image data to be filled in an end-to-end manner. The principle of the coloring model is shown in fig. 6: the DDColor model colors the grayscale image x_l in an end-to-end manner.
As shown in fig. 6, features are first extracted using a backbone network, i.e. structure (a), and then input into a pixel decoder to restore the spatial structure of the image, while the color decoder performs color queries on visual features at different scales to learn semantically aware color representations. A fusion module combines the outputs of the two decoders to produce a color-channel output ŷ. Finally, ŷ is concatenated with x_l along the channel dimension to obtain the final coloring result.
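The fusion step alone, concatenating the predicted color channels with the grayscale input along the channel dimension, can be illustrated without the decoders. The decoders that would produce the color channels are omitted; this is not the full DDColor model.

```python
import numpy as np

def fuse_coloring(x_l, y_hat):
    """Illustrative fusion: concatenate predicted color channels y_hat (H, W, 2)
    with the grayscale input x_l (H, W, 1) along the channel dimension to form
    the final Lab-style coloring result (H, W, 3)."""
    assert x_l.shape[:2] == y_hat.shape[:2], "spatial sizes must match"
    return np.concatenate([x_l, y_hat], axis=-1)
```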
In one embodiment, the computer device may input image features and color query instructions into a color decoding block in the shading model, and establish correlations between semantic and color representations through cross-attention mechanisms, self-attention mechanisms, and feed-forward operations employed in the shading model.
As shown in fig. 6, (b) the structure is a color decoding block structure, and the color decoding block takes image features and color queries as input, and establishes correlation between semantic and color representations through a cross-attention mechanism, a self-attention mechanism, and a feed-forward operation.
In one embodiment, the provided information-filling defogging method can be compared with conventional enhancement algorithms using the SSIM (Structural Similarity Index Measure). The structural similarity index measures the degree of distortion of a picture and the degree of similarity between two pictures. Unlike absolute-error measures such as MSE and PSNR, SSIM is a perceptual model, and thus more consistent with the visual perception of the human eye. SSIM takes values in [−1, 1] and attains its unique maximum (SSIM = 1) if and only if x = y.
SSIM mainly considers three key features of a picture: luminance, contrast, and structure. In practice the SSIM values lie in [0, 1], and the larger the SSIM value, the smaller the degree of image distortion and the better the image quality. The following table compares the evaluation parameters on 4 images processed by several traditional algorithms; on most of the images, the indices of the proposed method are superior to those of the traditional algorithms:
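A simplified, global (single-window) form of SSIM that combines the luminance, contrast, and structure terms in one expression is sketched below. Practical comparisons normally use a locally windowed version, so this is only illustrative; the stabilizing constants follow the common choice C1 = (0.01 L)^2, C2 = (0.03 L)^2.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Simplified global SSIM over two equally sized grayscale images."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # stabilizers for near-zero means/variances
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # Combined luminance/contrast/structure formula; equals 1 iff x == y.
    return ((2 * mx * my + C1) * (2 * cov + C2)) / ((mx**2 + my**2 + C1) * (vx + vy + C2))
```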
| Evaluation | FIG. 1 | FIG. 2 | FIG. 3 | FIG. 4 |
| Histogram equalization | 0.7783 | 0.7904 | 0.6590 | 0.6838 |
| retinex | 0.9025 | 0.9225 | 0.8981 | 0.9741 |
| Dark channel (MSRCR) | 0.6203 | 0.7068 | 0.6832 | 0.6606 |
| Information filling defogging method for foggy image | 0.9047 | 0.9742 | 0.9190 | 0.8416 |
As the table shows, retinex achieves a better parameter than the information-filled defogging method provided in this embodiment when processing fig. 4; however, from the subjective observation of the human eye, the defogged image of this embodiment is clearly better than the retinex result. This also indicates that the index cannot fully evaluate whether a defogged image is convenient for the computer and the human eye to observe, and has certain shortcomings.
FIG. 7 shows a comparison before and after processing with the information-filled defogging method for a foggy image provided in this embodiment; after defogging with the method of this embodiment, the image is noticeably clearer.
It should be understood that, although the steps in the above-described flowcharts are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential: they may be performed in turn or alternately with at least a part of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 8, an information-filled defogging system for a foggy image is provided, comprising an image acquisition module 810, a first image enhancement module 820, a second image enhancement module 830, and an information-filling module 840, wherein:
The image acquisition module 810 is configured to acquire a foggy image, and perform image preprocessing on the foggy image to obtain processed image data;
a first image enhancement module 820, configured to perform first image enhancement on the processed image data based on an ACE algorithm, to obtain intermediate image data;
the second image enhancement module 830 is configured to perform grayscale gamma conversion on the intermediate image data, adjust a gamma value of the intermediate image data, and perform contrast adjustment and second image enhancement to obtain image data to be filled;
And the information filling module 840 is used for filling missing information into the image data to be filled by using the coloring model and outputting defogging images.
In one embodiment, the first image enhancement module 820 is further configured to perform color space adjustment on the processed image data based on an ACE algorithm to obtain adjusted image data, and perform dynamic tone scaling on the adjusted image data to obtain intermediate image data.
In one embodiment, the first image enhancement module 820 is further configured to calculate an output pixel value for each channel in the processed image data based on the ACE algorithm, perform color comparison according to the output pixel values, perform local or global balance processing, and select a subset of the images to obtain adjusted image data.
In one embodiment, the first image enhancement module 820 is further configured to use the adjusted image data as an intermediate pixel matrix, and perform scaling on the intermediate pixel matrix using a linear scaling method to obtain intermediate image data.
In one embodiment, the second image enhancement module 830 is further configured to determine an optimal gamma value according to the gray value of the intermediate image data, and adjust the gamma value in the intermediate image data to the optimal gamma value by a graying gamma transformation.
In one embodiment, the second image enhancement module 830 is further configured to extract image features in the image data to be filled using a backbone network in the coloring model, input the image features into a pixel decoder, restore an image space structure, perform color query on visual features in the image data to be filled through a color decoder, output a color representation, fuse the image space structure and the color representation, generate a color channel output coloring result, and fill missing information.
In one embodiment, the second image enhancement module 830 is further configured to input image features and color query instructions into a color decoding block in the shading model, and to establish correlations between semantic and color representations by cross-attention mechanisms, self-attention mechanisms, and feed-forward operations employed in the shading model.
In one embodiment, the second image enhancement module 830 is further configured to color the image data to be filled in an end-to-end manner through the coloring model.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure thereof may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of information-filled defogging of a foggy image. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of the structure relevant to the present solution and does not limit the computer devices to which the present solution may be applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided that includes a memory and a processor, the memory storing a computer program; when the processor executes the computer program, it implements the steps of the information-filling defogging method for a foggy image.
In one embodiment, a computer-readable storage medium is provided on which a computer program is stored; when executed by a processor, the computer program implements the steps of the information-filling defogging method for a foggy image.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may carry out the steps of the method embodiments described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination is described; nevertheless, any combination of these technical features that contains no contradiction should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the application; although they are described in detail, they are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the application, all of which fall within the scope of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.
Claims (3)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410143628.0A | 2024-02-01 | 2024-02-01 | Method and system for defogging foggy images by filling in information |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118279187A (en) | 2024-07-02 |
| CN118279187B (en) | 2025-02-14 |
Family
ID=91645267
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410143628.0A | Method and system for defogging foggy images by filling in information | 2024-02-01 | 2024-02-01 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118279187B (en) |
Family Cites Families (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7004904B2 (en) * | 2002-08-02 | 2006-02-28 | Diagnostic Ultrasound Corporation | Image enhancement and segmentation of structures in 3D ultrasound images for volume measurements |
| CN101901482B * | 2009-05-31 | 2012-05-02 | Hanwang Technology Co., Ltd. | Method for judging quality effect of defogged and enhanced image |
| CN110476416B * | 2017-01-26 | 2021-08-17 | FLIR Systems, Inc. | System and method for infrared imaging in multiple imaging modalities |
| CN112907474B * | 2021-02-22 | 2023-08-25 | Dalian Maritime University | Underwater image enhancement method based on background light optimization and gamma transformation |
| CN114022380A * | 2021-11-04 | 2022-02-08 | Anhui University of Technology | A reference-free dehazing image quality evaluation method based on HSI color space |
| KR102618580B1 * | 2022-09-28 | 2023-12-27 | LIG Nex1 Co., Ltd. | Nighttime low-light image enhancement method based on Retinex and atmospheric light estimation, apparatus and computer program for performing the method |
| CN116385359A * | 2023-02-28 | 2023-07-04 | Shaanxi Zhiyin Technology Co., Ltd. | Surface defect image enhancement method for low-contrast magnet device based on machine vision |
| CN116416162A * | 2023-04-17 | 2023-07-11 | Anhui University | Image enhancement method and system based on improved MSR and CLAHE |
| CN116957958A * | 2023-06-25 | 2023-10-27 | Southeast University | An improved VIO front-end method based on inertial prior correction of image grayscale |
Non-Patent Citations (2)
| Title |
|---|
| Zhongjun Ding et al. Underwater Image Fusion Enhancement Algorithm based on Color Correction and Sharpening. The 2nd International Conference on Signal Processing, Computer Networks and Communications. 2023, pp. 284-289. * |
| Zhou Linhong et al. Surface vessel recognition method based on adaptive image enhancement and image denoising. Ship Engineering. 2021-12-31, pp. 101-105. * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||