CN116757945A - A visibility calculation method based on video image gradient factors and dark channels - Google Patents
A visibility calculation method based on video image gradient factors and dark channels
- Publication number
- CN116757945A (application CN202310678919.5A)
- Authority
- CN
- China
- Prior art keywords
- image
- gradient
- visibility
- dark
- sum
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a visibility calculation method based on video image gradient factors and the dark channel, belonging to the technical field of image data information extraction and comprising the following steps: first, acquiring a plurality of images from a single low-visibility weather event together with the visibility data observed by a visibility meter; acquiring a brightness image of each image; computing the gradient sums of the dark channel image and the brightness image respectively to obtain a gradient ratio; and obtaining the visibility from the relationship between the prior visibility and the gradient ratio. The model used is simple and highly general: it effectively suppresses the model instability caused by seasonal and lighting changes in vegetation color and shape and in image contrast, and to a certain extent it also suppresses the "short side controlling the long side" effect caused by shallow scenes.
Description
Technical Field
The application belongs to the technical field of image data information extraction, and particularly relates to a visibility calculation method based on video image gradient factors and the dark channel.
Background
Atmospheric visibility is one of the main factors affecting traffic; low visibility affects people's travel and leads to a sharp increase in traffic accidents. A method for rapidly acquiring atmospheric visibility is therefore of great significance for traffic management on hazy days.
Visibility is one of the important ground meteorological observation elements. It affects aviation, land traffic, and production and daily life, and has an especially large impact on highways. Visibility is generally observed with a visibility meter, but the instrument is expensive; deploying it widely, and in particular erecting it densely along highways, would require a large amount of capital and is not practical. To achieve whole-process monitoring of low visibility, calculating visibility from images is the current main research trend.
In existing practical calculation methods, however, seasonal changes alter the colors and shapes of vegetation, light intensity changes the image contrast, and images may contain many white objects or no sky at all, all of which make the model unstable; in addition, when the photographed scene is shallow, the "short side controlling the long side" effect appears and the calculation accuracy is low. It is therefore important to build a visibility calculation model with a fixed model formula and few influencing factors.
Disclosure of Invention
In view of the shortcomings of the prior art, the application aims to provide a visibility calculation method based on video image gradient factors and the dark channel, so as to solve the problems raised in the background art.
The aim of the application can be achieved by the following technical scheme:
a visibility resolving method based on video image gradient factors and dark channels comprises the following steps:
firstly, acquiring a plurality of images of a once low-visibility weather process and visibility data observed by a visibility instrument;
acquiring a brightness image of each image;
respectively solving the gradient sum of the dark channel image and the brightness image to obtain a gradient ratio;
the visibility is obtained by the ratio of the prior visibility to the gradient.
Preferably, the dark channel image of the image is obtained by:
J_dark(x) = min_{y ∈ Ω(x)} ( min_{c ∈ {r,g,b}} J_c(y) )
where J_c represents a color channel of J and Ω(x) represents a neighborhood centered on x; the dark channel prior holds that, for a haze-free image, J_dark is approximately 0 everywhere except in the sky, and J_dark is the dark channel image of J.
Preferably, the luminance image of the image is obtained by:
V(x)=max(r,g,b)
where the brightness V describes the lightness of the color.
Preferably, the image lateral gradient and longitudinal gradient formulas are as follows:
G_x = [[-1 0 +1], [-2 0 +2], [-1 0 +1]] * A
G_y = [[-1 -2 -1], [0 0 0], [+1 +2 +1]] * A
where G_x is the transverse (lateral) gradient, G_y is the longitudinal gradient, A is the image, and * denotes two-dimensional convolution.
Preferably, the image gradient is obtained as follows, based on the image transverse gradient and the image longitudinal gradient:
G = sqrt(G_x^2 + G_y^2)
where G is the image gradient.
Preferably, the gradient sums of the dark channel image and the brightness image are obtained as follows:
G_dark = sum(gradient(J_dark(x)))
G_V = sum(gradient(V(x)))
where G_dark is the gradient sum of the dark channel image and G_V is the gradient sum of the brightness image.
Preferably, the gradient ratio is:
e = G_dark / G_V
where e is the gradient ratio.
Preferably, the visibility is:
V_p = a·e^b
where V_p is the visibility and the parameters a and b are determined from the observed data.
The application has the beneficial effects that:
the method has the advantages that the model used is simple and has strong universality, the color, shape, image contrast change and the like of the vegetation caused by the change of seasons and light rays can be effectively restrained, the model is unstable, and meanwhile, the situation of controlling the long side on the short side caused by shallower scenes is also restrained to a certain extent.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of the visibility calculation method in an embodiment of the present application;
FIG. 2 is a schematic view of an image of the East Sea observation field in an embodiment of the present application;
FIG. 3 is a schematic view of an image of the southward observation field in an embodiment of the present application;
FIG. 4 is the visibility calculation model of the East Sea scene in an embodiment of the present application;
FIG. 5 is the visibility calculation model of the southward scene in an embodiment of the present application;
FIG. 6 is the error distribution diagram of the East Sea scene in an embodiment of the present application;
FIG. 7 is the error distribution diagram of the southward scene in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to FIG. 1, the present embodiment provides a visibility calculation method based on video image gradient factors and the dark channel, which includes the following steps:
step 1, firstly, acquiring a plurality of images of a low-visibility weather process and visibility data observed by a visibility instrument. For each foggy color image J, it is known from dark primary prior theory:
wherein J is c Representing a color channel of J, Ω (x) represents a neighborhood centered on x. Dark primary prior believes that for a pair of fogless images, except the sky, J dark Has a value of approximately 0, J dark Is a J dark channel image. And obtaining a dark channel image of each image by using the formula.
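For illustration only (not part of the patent text), the following is a minimal sketch of the dark channel computation. It assumes an RGB image stored as a NumPy array and uses SciPy's minimum filter for the neighborhood minimum; the function name and the default radius are assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img_rgb: np.ndarray, radius: int = 10) -> np.ndarray:
    """Dark channel J_dark(x) = min over y in Omega(x) of min over c in {r,g,b} of J_c(y).

    img_rgb: H x W x 3 array; radius: half-width of the square scanning window Omega(x).
    """
    per_pixel_min = img_rgb.min(axis=2)          # minimum over the color channels
    window = 2 * radius + 1                      # square neighborhood centered on x
    return minimum_filter(per_pixel_min, size=window)
```

The default radius of 10 pixels matches the minimum scanning radius mentioned in the verification section below; larger radii (e.g. 20-30 pixels) are used for scenes with large sky areas.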
Step 2: for a hazy color image J(r, g, b), obtain the brightness image of each image:
V(x)=max(r,g,b)
where the brightness V describes the lightness of the color.
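As a companion sketch (an assumed implementation of V(x) = max(r, g, b), not the patent's own code):

```python
def brightness_image(img_rgb: np.ndarray) -> np.ndarray:
    """Brightness image V(x) = max(r, g, b): per-pixel maximum over the color channels."""
    return img_rgb.max(axis=2)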
Step 3: according to the Sobel operator, the transverse gradient and the longitudinal gradient of the image are computed as follows:
G_x = [[-1 0 +1], [-2 0 +2], [-1 0 +1]] * A
G_y = [[-1 -2 -1], [0 0 0], [+1 +2 +1]] * A
where G_x is the transverse (lateral) gradient, G_y is the longitudinal gradient, A is the image, and * denotes two-dimensional convolution.
According to the image transverse gradient and the image longitudinal gradient, the image gradient is as follows:
G = sqrt(G_x^2 + G_y^2)
where G is the image gradient.
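A hedged sketch of this gradient step follows, assuming OpenCV's Sobel operator with a 3x3 kernel and the Euclidean combination of the two directional gradients; the function and parameter names are assumptions.

```python
import cv2
import numpy as np

def gradient_magnitude(img: np.ndarray) -> np.ndarray:
    """Image gradient G = sqrt(G_x^2 + G_y^2) from the Sobel transverse and longitudinal gradients."""
    g_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)   # transverse (horizontal) gradient G_x
    g_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)   # longitudinal (vertical) gradient G_y
    return np.sqrt(g_x ** 2 + g_y ** 2)
```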
Step 4: compute the gradient sums of the dark channel image and the brightness image respectively:
G_dark = sum(gradient(J_dark(x)))
G_V = sum(gradient(V(x)))
where G_dark is the gradient sum of the dark channel image and G_V is the gradient sum of the brightness image.
The gradient ratio is then obtained as:
e = G_dark / G_V
where e is the gradient ratio.
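Building on the sketches above, the gradient sums and their ratio might be computed as follows; the orientation of the ratio (G_dark divided by G_V) is an assumption inferred from the order in which the sums are introduced.

```python
def gradient_ratio(dark: np.ndarray, bright: np.ndarray) -> float:
    """Gradient ratio e = G_dark / G_V from the summed gradient magnitudes of the two images."""
    g_dark = gradient_magnitude(dark).sum()    # G_dark: gradient sum of the dark channel image
    g_v = gradient_magnitude(bright).sum()     # G_V:   gradient sum of the brightness image
    return g_dark / g_v
```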
step 5, through priori visibility V p The following relationship exists with the gradient ratio e:
V p =ae b
wherein V is p For visibility, the a, b parameters are determined by the observed data. The a, b parameters can be determined by a low visibility process.
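To show how a and b could be determined from the observed data, here is a hedged sketch that fits the power law V_p = a·e^b by linear regression in log space; the helper names (fit_power_law, predict_visibility) and the use of NumPy's polyfit are assumptions, not part of the patent.

```python
import numpy as np

def fit_power_law(ratios: np.ndarray, visibilities: np.ndarray) -> tuple[float, float]:
    """Fit V_p = a * e**b via log(V_p) = log(a) + b * log(e); returns (a, b)."""
    b, log_a = np.polyfit(np.log(ratios), np.log(visibilities), 1)
    return float(np.exp(log_a)), float(b)

def predict_visibility(ratio: float, a: float, b: float) -> float:
    """Evaluate the fitted model V_p = a * e**b for a new gradient ratio."""
    return a * ratio ** b
```

In use, the gradient ratio of each frame from one low-visibility process would be paired with the visibility-meter reading taken at the same time, and the fitted (a, b) would then be applied to new frames from the same camera scene.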
The verification process of this embodiment is as follows:
1) Data acquisition:
two scenes are selected in an experiment, namely an east sea observation field and a southward observation field, a photo is from a Haikang Williams camera, visibility observation data is from a DNQ1 forward scattering visibility meter, the visibility alpha is in a range of 10-10000 m, and the error is +/-10%. In an observation field in the east China sea (shown in fig. 2), the target color in the image clearly changes due to the change of seasons, and the definition of the image also changes under different illumination conditions; in the southward observation field (as shown in fig. 3), the scene has no sky, more white targets, shallower scene and closer observation targets to the camera. Overall, the data covers most of the scene.
2) Model establishment:
the radius of a dark channel scanning window needs to be determined according to the content of an image, and when the sky area in the image is too large, the scanning radius needs to be increased when a non-white object in the image is single; when there is almost no sky area in the image and there are more dark objects in the image, the scanning radius needs to be reduced, and according to experiments, the minimum scanning radius is considered to be 10 pixels.
Simulations with observation data from multiple low-visibility weather processes show that different dark channel scanning window radii have some influence on the model parameters. For the East Sea scene, experiments were carried out with scanning window radii of 20, 25 and 30 pixels, and the model with a radius of 25 pixels (T25 in FIG. 4) was adopted; for the southward scene, the model with a scanning window radius of 10 pixels (T10 in FIG. 5) was adopted.
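Purely as an illustration of the radius experiment described above, the sweep below re-fits the model for several scanning-window radii, relying on the imports and helper functions sketched earlier; the variables images and observed_visibility are placeholders, not data from the patent.

```python
# Sketch: compare fitted parameters for several dark-channel scanning-window radii (pixels).
for radius in (10, 20, 25, 30):
    ratios = np.array([
        gradient_ratio(dark_channel(img, radius), brightness_image(img))
        for img in images                               # frames from one low-visibility process
    ])
    a, b = fit_power_law(ratios, observed_visibility)   # paired visibility-meter observations
    print(f"radius={radius}: a={a:.3f}, b={b:.3f}")
```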
3) Precision analysis
The accuracy requirements of the WMO (World Meteorological Organization) for visibility are shown in Table 1:
TABLE 1
As can be seen from FIG. 6, for the 1356 samples in the East Sea scene the overall error is largely concentrated within ±20%; the error grows as the visibility V increases, and the calculated values are on the whole smaller than the observed ones. Of the 599 samples with visibility below 500 m, 521 (87%) have an error within 50 m; of the 489 samples with visibility between 500 and 1500 m, 152 (31%) have an error below 10% and 288 (59%) have an error between 10% and 20%; of the 267 samples with visibility above 1500 m, 146 (55%) have an error below 20%.
FIG. 7 shows the error distribution for the southward scene. The error is concentrated within 20% when the visibility is below 1000 m and increases sharply when the visibility exceeds 1000 m, mainly because the scene is shallow and the targets are close to the camera, so the "short side controlling the long side" phenomenon appears. Of the 695 samples in the southward scene, 258 have visibility below 500 m, of which 163 (63%) have an error within 50 m; 175 have visibility between 500 and 1500 m, of which 51 have an error below 10% and 104 have an error below 20%; 262 have visibility above 1500 m, of which 95 have an error below 20%. The visibility-meter observation is a local visibility and itself contains errors, whereas the visibility calculated from video is an average visibility over the area covered by the image. The model is therefore simple and stable, its errors compare favorably with those of other algorithms, and it is easy to apply in practical work.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical aspects of the present application and not for limiting the same, and although the present application has been described in detail with reference to the above embodiments, it should be understood by those of ordinary skill in the art that: modifications and equivalents may be made to the specific embodiments of the application without departing from the spirit and scope of the application, which is intended to be covered by the claims.
Claims (8)
1. A visibility calculation method based on video image gradient factors and the dark channel, comprising the following steps:
first, acquiring a plurality of images from a single low-visibility weather event together with the visibility data observed by a visibility meter;
acquiring a brightness image of each image;
computing the gradient sums of the dark channel image and the brightness image respectively, and taking their ratio to obtain a gradient ratio;
obtaining the visibility from the relationship between the prior visibility and the gradient ratio.
2. The method of claim 1, wherein the dark channel image of the image is obtained by:
J_dark(x) = min_{y ∈ Ω(x)} ( min_{c ∈ {r,g,b}} J_c(y) )
where J_c represents a color channel of J and Ω(x) represents a neighborhood centered on x; the dark channel prior holds that, for a haze-free image, J_dark is approximately 0 everywhere except in the sky, and J_dark is the dark channel image of J.
3. The method of claim 1, wherein the luminance image of the image is obtained by:
V(x)=max(r,g,b)
where the brightness V describes the lightness of the color.
4. The method of claim 1, wherein the image lateral gradient and longitudinal gradient formulas are as follows:
G_x = [[-1 0 +1], [-2 0 +2], [-1 0 +1]] * A
G_y = [[-1 -2 -1], [0 0 0], [+1 +2 +1]] * A
where G_x is the transverse (lateral) gradient, G_y is the longitudinal gradient, A is the image, and * denotes two-dimensional convolution.
5. The method of claim 4, wherein the image gradient is obtained from the image lateral gradient and the image longitudinal gradient as follows:
G = sqrt(G_x^2 + G_y^2)
where G is the image gradient.
6. The method of claim 5, wherein the gradient sums of the dark channel image and the brightness image are obtained as follows:
G_dark = sum(gradient(J_dark(x)))
G_V = sum(gradient(V(x)))
where G_dark is the gradient sum of the dark channel image and G_V is the gradient sum of the brightness image.
7. The method of claim 6, wherein the gradient ratio is:
e = G_dark / G_V
where e is the gradient ratio.
8. The method of claim 7, wherein the visibility is:
V_p = a·e^b
where V_p is the visibility and the parameters a and b are determined from the observed data.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310678919.5A CN116757945A (en) | 2023-06-09 | 2023-06-09 | A visibility calculation method based on video image gradient factors and dark channels |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310678919.5A CN116757945A (en) | 2023-06-09 | 2023-06-09 | A visibility calculation method based on video image gradient factors and dark channels |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN116757945A true CN116757945A (en) | 2023-09-15 |
Family
ID=87960110
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310678919.5A Pending CN116757945A (en) | 2023-06-09 | 2023-06-09 | A visibility calculation method based on video image gradient factors and dark channels |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116757945A (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104240192A (en) * | 2013-07-04 | 2014-12-24 | 西南科技大学 | Rapid single-image defogging algorithm |
| US20150339811A1 (en) * | 2014-05-20 | 2015-11-26 | Qualcomm Incorporated | Systems and methods for haziness detection |
| CN107194924A (en) * | 2017-05-23 | 2017-09-22 | 重庆大学 | Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning |
- 2023-06-09 CN CN202310678919.5A patent/CN116757945A/en active Pending
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104240192A (en) * | 2013-07-04 | 2014-12-24 | 西南科技大学 | Rapid single-image defogging algorithm |
| US20150339811A1 (en) * | 2014-05-20 | 2015-11-26 | Qualcomm Incorporated | Systems and methods for haziness detection |
| CN107194924A (en) * | 2017-05-23 | 2017-09-22 | 重庆大学 | Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning |
Non-Patent Citations (1)
| Title |
|---|
| Liu Nanhui et al.: "A visibility detection algorithm combining support vector machine and digital image", Journal of Fuzhou University (Natural Science Edition), no. 01, 31 December 2018 (2018-12-31) * |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Zheng et al. | Infrared traffic image enhancement algorithm based on dark channel prior and gamma correction | |
| CN104580925A (en) | Image brightness controlling method, device and camera | |
| CN112288736B (en) | Visibility estimation method based on images | |
| DE102011106050A1 (en) | Shadow removal in an image captured by a vehicle-based camera for detection of a clear path | |
| KR20050067389A (en) | Image fusion system and method | |
| CN110276267A (en) | Method for detecting lane lines based on Spatial-LargeFOV deep learning network | |
| CN106447602A (en) | Image mosaic method and device | |
| CN112529498B (en) | Warehouse logistics management method and system | |
| CN104766307A (en) | Picture processing method and device | |
| CN108513414A (en) | A kind of stage follow spotlight system and method for focus autotracking | |
| CN110986884A (en) | Unmanned aerial vehicle-based aerial survey data preprocessing and vegetation rapid identification method | |
| CN102768757A (en) | A Color Correction Method of Remote Sensing Image Based on Image Type Analysis | |
| CN115620162A (en) | A Flood Inundation Area Extraction Method Based on Decision-Level Data Fusion | |
| CN113506275A (en) | Urban image processing method based on panorama and application | |
| CN120089095A (en) | Image processing method and system for LED special-shaped screen | |
| CN113365400B (en) | Multi-period street lamp control system based on artificial intelligence and video analysis | |
| CN111275698B (en) | Method for detecting visibility of road in foggy weather based on unimodal offset maximum entropy threshold segmentation | |
| Ji et al. | Real-time enhancement of the image clarity for traffic video monitoring systems in haze | |
| CN111241916A (en) | Method for establishing traffic sign recognition model | |
| CN112381896B (en) | Brightness correction method and system for microscopic image and computer equipment | |
| CN105005966B (en) | A kind of single image based on yellow haze physical characteristic goes haze method | |
| CN105809622B (en) | Automatic identification of non-safety area and automatic drawing method of safety range map | |
| CN115797775B (en) | Intelligent illegal building identification method and system based on near-to-ground video image | |
| CN116757945A (en) | A visibility calculation method based on video image gradient factors and dark channels | |
| CN116630349A (en) | Straw returning area rapid segmentation method based on high-resolution remote sensing image |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||