Disclosure of Invention
In view of the above analysis, the invention aims to provide a visible light and infrared image fusion method based on least squares filtering, so as to solve the problems that, in images fused by existing visible light and infrared image fusion methods, detail regions with obvious contrast but low pixel intensity are lost and image edges are blurred.
The aim of the invention is mainly realized by the following technical scheme:
A least squares filtering-based visible light and infrared image fusion method comprises the following steps:
Step 1, obtaining the detail layer image of the visible light grayscale image after least squares filtering;
Step 2, obtaining the high-horizontal and low-vertical gradient maps of the input visible light and infrared grayscale images;
Step 3, comparing the high-horizontal and low-vertical gradient maps of the visible light and infrared grayscale images to obtain a contrast saliency image IStatistical;
Step 4, obtaining guide map images PVis and PIR of the visible light image and the infrared image based on the contrast saliency image;
Step 5, carrying out mean filtering on the input visible light and infrared grayscale images to obtain a detail layer and a background layer for each;
Step 6, performing guided filtering with different filter sizes and different blur coefficients on the input visible light and infrared grayscale images according to the obtained guide map images PVis and PIR, to obtain visible light and infrared detail layer weight coefficients WdVis and WdIR and background layer weight coefficients WbVis and WbIR;
Step 7, weighting the detail layer weight coefficients WdVis and WdIR and the background layer weight coefficients WbVis and WbIR obtained in step 6 with the visible light and infrared detail layers and background layers, to obtain the detail layer FUSIOND and the background layer FUSIONB of the fused image;
Step 8, obtaining the detail layer average value of the visible light least squares detail layer obtained in step 1 and the fused detail layer obtained in step 7;
Step 9, combining the background layer obtained in step 7 and the detail layer average value obtained in step 8 to obtain the final fusion result image.
Further, in step 1, a least squares filter is first used to obtain the least squares filtered image VisWLS of the visible light grayscale image, and then the least squares detail layer image VisWLS-D of the visible light grayscale image is obtained.
Further, in step 2, the high-horizontal and low-vertical gradient maps of the visible light and infrared grayscale images are obtained by subtracting the gradient map in the y direction from the gradient map in the x direction.
Further, in step 2, the gradient maps are obtained using the Sobel operator.
Further, in step 3, each pixel of the contrast saliency image takes the maximum of the visible light and infrared high-horizontal and low-vertical gradient maps, giving the contrast saliency image IStatistical.
Further, in step 4, the guide map images PVis and PIR of the visible light image and the infrared image are obtained by comparing the contrast saliency image with the high-horizontal and low-vertical gradient maps of the visible light and infrared images respectively; at pixel positions where the values are equal, the guide map value is 1, and at pixel positions where the values are unequal, the guide map value is 0.
Further, in step 5, the detail layer and the background layer of each input visible light and infrared grayscale image are obtained by mean filtering; since the background layer should be very smooth, a 31×31 mean filter is adopted to obtain the background layer, and the detail layer is calculated from the obtained background layer:
Vismean-d=Vis-Vismean-b (6)
IRmean-d=IR-IRmean-b (7)
Wherein Vismean-d and IRmean-d are the detail layers of the visible light and infrared grayscale images after mean filtering, Vis is the visible light grayscale image, IR is the infrared grayscale image, and Vismean-b and IRmean-b are the background layers of the visible light and infrared grayscale images after mean filtering.
Further, in step 6, the process of obtaining the weight coefficient WdVis of the detail layer of the visible light image using the guided filter function is expressed by formula (8), and the process of obtaining the weight coefficient WbVis of the background layer of the visible light image using the guided filter function is expressed by formula (9):
WdVis=guidedfilter(Vis,PVis,kd,epsd) (8)
WbVis=guidedfilter(Vis,PVis,kb,epsb) (9)
The process of obtaining the weight coefficient WdIR of the detail layer of the infrared image using the guided filter function is expressed by formula (10), and the process of obtaining the weight coefficient WbIR of the background layer of the infrared image using the guided filter function is expressed by formula (11):
WdIR=guidedfilter(IR,PIR,kd,epsd) (10)
WbIR=guidedfilter(IR,PIR,kb,epsb) (11)
Wherein guidedfilter() represents the guided filter function, Vis is the visible light grayscale image, IR is the infrared grayscale image, PVis and PIR are the guide map images of the visible light and infrared images obtained in step 4, kb and kd are filter windows of different sizes, and epsb and epsd are blur coefficients of different magnitudes.
Further, in step 7, the weight coefficients are normalized during the weighted addition to obtain the detail layer FUSIOND and the background layer FUSIONB of the fused image.
Further, in step 8, the detail layer average value is obtained by averaging the visible light least squares detail layer obtained in step 1 and the fused detail layer obtained in step 7.
The invention can realize the following beneficial effects:
(1) The least squares filtering-based visible light and infrared image fusion method introduces the concept of image contrast saliency, so that detail information with obvious contrast but low pixel intensity in image regions is retained. The invention combines the least squares method with guided filtering to better preserve edge features, so that the fused image contains more effective information. The invention is of significance for human visual perception and for target detection and recognition.
In the invention, the technical schemes can be mutually combined to realize more preferable combination schemes. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
Detailed Description
The following detailed description of preferred embodiments of the invention is made in connection with the accompanying drawings, which form a part hereof, and together with the description of the embodiments of the invention, are used to explain the principles of the invention and are not intended to limit the scope of the invention.
In describing embodiments of the present invention, it should be noted that, unless explicitly stated and limited otherwise, the term "coupled" should be interpreted broadly, for example, as being fixedly coupled, detachably or integrally coupled, mechanically or electrically coupled, directly coupled, or indirectly coupled via an intermediary. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
The terms "top," "bottom," "above," "below," and "above" are used throughout the description to refer to relative positions of components of the device, such as the relative positions of the top and bottom substrates inside the device. It will be appreciated that the devices are versatile, irrespective of their orientation in space.
In one embodiment of the present invention, as shown in figs. 1 to 10, a visible light and infrared image fusion method based on least squares filtering is disclosed, wherein visible light and infrared light are simultaneously transmitted through a spherical primary mirror to form a visible light image and an infrared image. The method comprises the following steps:
Step 1, obtaining the detail layer image of the visible light grayscale image after least squares filtering:
Firstly, a least squares filter is used to obtain the least squares filtered image VisWLS of the visible light grayscale image, and then the least squares detail layer image of the visible light grayscale image is obtained:
VisWLS-D=Vis-VisWLS (1)
Wherein VisWLS-D represents the detail layer of the visible light grayscale image after least squares filtering, and Vis represents the visible light grayscale image.
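For reference, step 1 can be sketched in Python as follows. This assumes the least squares filter is a weighted least squares (WLS) edge-preserving smoother in the sense of Farbman et al. (consistent with the VisWLS notation); the parameters lam, alpha and eps and the helper name wls_filter are illustrative choices, not values prescribed by the invention.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

def wls_filter(img, lam=1.0, alpha=1.2, eps=1e-4):
    """Weighted least squares smoothing of a grayscale image scaled to [0, 1]."""
    g = img.astype(np.float64)
    r, c = g.shape
    n = r * c
    logl = np.log(g + eps)

    # Smoothness weights from log-luminance gradients: large gradients get weak smoothing.
    ax = lam / (np.abs(np.diff(logl, axis=1)) ** alpha + eps)   # (r, c-1) horizontal affinities
    ay = lam / (np.abs(np.diff(logl, axis=0)) ** alpha + eps)   # (r-1, c) vertical affinities

    wx = np.zeros((r, c)); wx[:, :-1] = ax   # weight between a pixel and its right neighbour
    wy = np.zeros((r, c)); wy[:-1, :] = ay   # weight between a pixel and its bottom neighbour
    wx, wy = wx.ravel(), wy.ravel()

    # Affinity matrix W (row-major pixel indexing) and graph Laplacian D - W;
    # solve (I + lam*L) u = g, with lam already folded into the affinities.
    W = diags(wx[:-1], 1, shape=(n, n)) + diags(wy[:-c], c, shape=(n, n))
    W = W + W.T
    d = np.asarray(W.sum(axis=1)).ravel()
    A = identity(n) + diags(d, 0, shape=(n, n)) - W
    return spsolve(A.tocsc(), g.ravel()).reshape(r, c)

# Step 1, formula (1): detail layer of the visible grayscale image.
# vis is assumed to be a float grayscale image in [0, 1].
# vis_wls_d = vis - wls_filter(vis)
```

Solving the sparse system directly is slow for large images; a faster approximation of WLS smoothing could be substituted without changing the rest of the pipeline.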
Step 2, obtaining the high-horizontal and low-vertical gradient maps of the input visible light and infrared grayscale images:
The visible light grayscale image and the infrared grayscale image are simultaneously acquired through a visible light and infrared composite system based on spherical concentric mirrors, which ensures the consistency of the acquired visible light and infrared images.
The high-horizontal and low-vertical gradient maps of the visible light and infrared grayscale images are obtained by subtracting the gradient map in the y direction from the gradient map in the x direction (as shown in figs. 5 and 6). The gradient maps are obtained using the Sobel operator; the Sobel operator templates are shown in fig. 2. The input visible light and infrared images are grayscale images.
gradientVis=gradXVis-gradYVis (2)
gradientIR=gradXIR-gradYIR (3)
Wherein gradientVis represents the high-horizontal and low-vertical gradient map of the visible light grayscale image, gradientIR represents the high-horizontal and low-vertical gradient map of the infrared grayscale image, gradXVis and gradYVis represent the gradient maps of the visible light grayscale image in the X and Y directions, and gradXIR and gradYIR represent the gradient maps of the infrared grayscale image in the X and Y directions.
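A minimal sketch of formulas (2) and (3), assuming OpenCV's Sobel operator with a 3×3 template (consistent with fig. 2); the variable names are illustrative.

```python
import cv2
import numpy as np

def high_horizontal_low_vertical_gradient(gray):
    """gradient = gradX - gradY, as in formulas (2) and (3)."""
    g = gray.astype(np.float64)
    grad_x = cv2.Sobel(g, cv2.CV_64F, 1, 0, ksize=3)  # gradient map in the X direction
    grad_y = cv2.Sobel(g, cv2.CV_64F, 0, 1, ksize=3)  # gradient map in the Y direction
    return grad_x - grad_y

# gradient_vis = high_horizontal_low_vertical_gradient(vis)
# gradient_ir = high_horizontal_low_vertical_gradient(ir)
```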
Step 3, comparing the high-horizontal and low-vertical gradient maps of the visible light and infrared grayscale images to obtain the contrast saliency image IStatistical:
Each pixel of the contrast saliency image takes the maximum of the visible light and infrared high-horizontal and low-vertical gradient maps, giving the contrast saliency image IStatistical (as shown in fig. 7):
IStatistical,i,j=max(gradientVis,i,j, gradientIR,i,j) (4)
Wherein IStatistical,i,j represents the contrast saliency value at pixel position (i, j), gradientVis,i,j represents the value of the high-horizontal and low-vertical gradient map of the visible light grayscale image at pixel position (i, j), and gradientIR,i,j represents the value of the high-horizontal and low-vertical gradient map of the infrared grayscale image at pixel position (i, j).
Step 4, obtaining the guide map images PVis and PIR of the visible light image and the infrared image based on the contrast saliency image:
The contrast saliency image is compared with the high-horizontal and low-vertical gradient maps of the visible light and infrared images respectively; at pixel positions where the values are equal, the guide map value is 1, and at pixel positions where the values are unequal, the guide map value is 0:
PVis,i,j=1 if IStatistical,i,j=gradientVis,i,j, otherwise PVis,i,j=0 (5)
PIR,i,j=1 if IStatistical,i,j=gradientIR,i,j, otherwise PIR,i,j=0 (6)
Wherein PVis,i,j is the visible light guide map value at pixel position (i, j), and PIR,i,j is the infrared guide map value at pixel position (i, j).
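Continuing the sketch, steps 3 and 4 reduce to a pixel-wise maximum followed by equality tests, matching formulas (4) to (6) above; gradient_vis and gradient_ir are the gradient maps from the previous sketch.

```python
import numpy as np

# Step 3: contrast saliency image, the pixel-wise maximum of the two gradient maps (formula (4)).
i_statistical = np.maximum(gradient_vis, gradient_ir)

# Step 4: binary guide maps, 1 where the saliency value equals that source's gradient value
# (formulas (5) and (6)); where the two gradients are equal, both guide maps are 1.
p_vis = (i_statistical == gradient_vis).astype(np.float64)
p_ir = (i_statistical == gradient_ir).astype(np.float64)
```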
Step 5, carrying out mean filtering on the input visible light and infrared grayscale images to obtain a detail layer and a background layer:
Mean filtering is carried out on the input visible light and infrared grayscale images respectively to obtain a detail layer and a background layer; since the background layer should be very smooth, a 31×31 mean filter is adopted to obtain the background layer, and the detail layer is calculated from the obtained background layer:
Vismean-d=Vis-Vismean-b (7)
IRmean-d=IR-IRmean-b (8)
Wherein Vismean-d and IRmean-d are the detail layers of the visible light and infrared grayscale images after mean filtering, Vis is the visible light grayscale image, IR is the infrared grayscale image, and Vismean-b and IRmean-b are the background layers of the visible light and infrared grayscale images after mean filtering.
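A sketch of the 31×31 mean-filter decomposition of formulas (7) and (8), using OpenCV's box filter; vis and ir are the float grayscale inputs.

```python
import cv2

# Step 5: background layers via a 31x31 mean filter, detail layers as the residuals.
vis_mean_b = cv2.blur(vis, (31, 31))
ir_mean_b = cv2.blur(ir, (31, 31))
vis_mean_d = vis - vis_mean_b   # formula (7)
ir_mean_d = ir - ir_mean_b      # formula (8)
```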
Step 6, performing guided filtering with different filter sizes and different blur coefficients on the input visible light and infrared grayscale images according to the obtained guide map images PVis and PIR, to obtain the visible light and infrared detail layer weight coefficients WdVis and WdIR and background layer weight coefficients WbVis and WbIR:
The process of obtaining the weight coefficient WdVis of the detail layer of the visible light image using the guided filter function may be expressed as formula (9), and the process of obtaining the weight coefficient WbVis of the background layer of the visible light image using the guided filter function may be expressed as formula (10):
WdVis=guidedfilter(Vis,PVis,kd,epsd) (9)
WbVis=guidedfilter(Vis,PVis,kb,epsb) (10)
The process of obtaining the weight coefficient WdIR of the detail layer of the infrared image using the guided filter function may be expressed as formula (11), and the process of obtaining the weight coefficient WbIR of the background layer of the infrared image using the guided filter function may be expressed as formula (12):
WdIR=guidedfilter(IR,PIR,kd,epsd) (11)
WbIR=guidedfilter(IR,PIR,kb,epsb) (12)
Wherein guidedfilter() represents the guided filter function, Vis is the visible light grayscale image, IR is the infrared grayscale image, and PVis and PIR are the guide map images of the visible light and infrared images obtained in step 4. A larger filter kb and a larger blur coefficient epsb are adopted when calculating the background layer weights, and a smaller filter kd and a smaller blur coefficient epsd are adopted when calculating the detail layer weights; the filter kb used for the background layer is twice the filter kd used for the detail layer, and the blur coefficient epsb of the background layer differs from the blur coefficient epsd of the detail layer by one order of magnitude.
Further, the principle of guided filtering is as follows:
qi=ak·Ii+bk, i∈ωk
Wherein Ii is a pixel of the guide image, qi is a pixel of the output image, and q can be regarded as a local linear transformation of the guide image I; k is the center of a local window, so all pixels belonging to the window ωk can be calculated by transforming the corresponding pixels of the guide image with the coefficients (ak, bk).
The loss function within the filter window can be written as:
E(ak,bk)=Σi∈ωk((ak·Ii+bk−pi)²+ε·ak²)
Wherein pi is a pixel of the input image, and the regularization parameter ε is introduced to prevent ak from becoming too large.
Solving the above equation yields:
ak=((1/|ω|)·Σi∈ωk Ii·pi−μk·p̄k)/(σk²+ε)
bk=p̄k−ak·μk
Wherein the overbar denotes the mean of the corresponding values within the window, μk is the mean of the guide image I within the window ωk, p̄k is the mean of the input image p within the window ωk, and σk² is the variance of the guide image I within the window ωk.
Illustratively, in this embodiment, the window radius of the filter used when calculating the weight map of the background layer is 8 and the blur coefficient is 0.3×0.3; when calculating the weight map of the detail layer, the window radius of the filter is 4 and the blur coefficient is 0.03×0.03.
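The guided filtering of step 6 can be sketched with box-filter means as below. This is a plain re-implementation of the standard guided filter described above, not code from the invention; the radius and blur coefficient values follow the illustrative settings of this embodiment, and the argument order (source image as guide, binary guide map as filtering input) is one reading of formulas (9) to (12).

```python
import cv2
import numpy as np

def guided_filter(I, p, radius, eps):
    """Standard guided filter: I is the guide image, p the filtering input (float grayscale)."""
    I = I.astype(np.float64)
    p = p.astype(np.float64)
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean_I = cv2.blur(I, ksize)
    mean_p = cv2.blur(p, ksize)
    cov_Ip = cv2.blur(I * p, ksize) - mean_I * mean_p    # covariance of (I, p) per window
    var_I = cv2.blur(I * I, ksize) - mean_I ** 2         # variance of I per window
    a = cov_Ip / (var_I + eps)                           # a_k
    b = mean_p - a * mean_I                              # b_k
    return cv2.blur(a, ksize) * I + cv2.blur(b, ksize)   # q = mean(a)*I + mean(b)

# Step 6, formulas (9)-(12): kd = 4, epsd = 0.03*0.03; kb = 8, epsb = 0.3*0.3 (illustrative values).
wd_vis = guided_filter(vis, p_vis, 4, 0.03 * 0.03)
wb_vis = guided_filter(vis, p_vis, 8, 0.3 * 0.3)
wd_ir = guided_filter(ir, p_ir, 4, 0.03 * 0.03)
wb_ir = guided_filter(ir, p_ir, 8, 0.3 * 0.3)
```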
Step 7, weighting the detail layer weight coefficients WdVis and WdIR and the background layer weight coefficients WbVis and WbIR obtained in step 6 with the visible light and infrared detail layers and background layers to obtain the detail layer FUSIOND and the background layer FUSIONB of the fused image.
During the weighted addition, the weight coefficients are normalized to obtain the detail layer FUSIOND and the background layer FUSIONB of the fused image, as shown in formulas (13) and (14):
FUSIOND=Vismean-d×WdVis/(WdVis+WdIR)+IRmean-d×WdIR/(WdVis+WdIR) (13)
FUSIONB=Vismean-b×WbVis/(WbVis+WbIR)+IRmean-b×WbIR/(WbVis+WbIR) (14)
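A sketch of formulas (13) and (14); the small constant added to the denominators is only a numerical safeguard against zero weight sums and is not part of the formulas.

```python
# Step 7: normalized weighted fusion of the detail and background layers.
eps = 1e-12  # numerical safeguard, not part of formulas (13) and (14)
fusion_d = (vis_mean_d * wd_vis + ir_mean_d * wd_ir) / (wd_vis + wd_ir + eps)   # formula (13)
fusion_b = (vis_mean_b * wb_vis + ir_mean_b * wb_ir) / (wb_vis + wb_ir + eps)   # formula (14)
```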
Step 8, obtaining the detail layer average value of the visible light least squares detail layer obtained in step 1 and the fused detail layer obtained in step 7, as shown in the following formula:
DETAILD=(VisWLS-D+FUSIOND)/2 (15)
Wherein DETAILD is the detail layer average value.
Step 9, fusing the fused background layer obtained in step 7 and the detail layer average value obtained in step 8 to obtain the final fusion result image, as shown in the following formula:
FUSION=DETAILD+FUSIONB (16)
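Steps 8 and 9 then complete the fusion, per formulas (15) and (16) as given above; vis_wls_d, fusion_d and fusion_b come from the earlier sketches.

```python
# Step 8: average the visible light WLS detail layer with the fused detail layer (formula (15)).
detail_d = (vis_wls_d + fusion_d) / 2.0

# Step 9: final fusion result (formula (16)).
fusion = detail_d + fusion_b
```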
As shown in the figures, compared with the images fused by existing image fusion methods such as the pixel saliency-based fusion method (as shown in figs. 9 and 10), the image fused by the method of the invention (as shown in fig. 8) retains the detail parts with obvious contrast but low pixel intensity, so that the image edges are clearer and the fused image contains more effective information.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention.