
CN116109535A - Image fusion method, device and computer readable storage medium - Google Patents

Image fusion method, device and computer readable storage medium Download PDF

Info

Publication number
CN116109535A
Authority
CN
China
Prior art keywords
frequency component
image
low
pixel position
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211743860.5A
Other languages
Chinese (zh)
Inventor
杨坚华
任宇鹏
周宏宾
崔婵婕
李乾坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211743860.5A priority Critical patent/CN116109535A/en
Publication of CN116109535A publication Critical patent/CN116109535A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/40: Image enhancement or restoration using histogram techniques
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/10036: Multispectral image; Hyperspectral image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20172: Image enhancement details
    • G06T 2207/20192: Edge enhancement; Edge preservation
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image fusion method, an image fusion device, and a computer-readable storage medium. The image fusion method comprises: extracting the high-frequency and low-frequency components of a first image to obtain a first high-frequency component and a first low-frequency component; extracting the high-frequency and low-frequency components of a second image to obtain a second high-frequency component and a second low-frequency component, wherein the resolution of the first image is lower than that of the second image; determining the edge intensity of each pixel position in the second high-frequency component; determining a local difference metric value for each pixel position in the respective high-frequency components of the first image and the second image; performing pixel-by-pixel weighted fusion of the first high-frequency component and the second high-frequency component to obtain a fused high-frequency component; and obtaining a fused image based on the fused high-frequency component and a fused low-frequency component. In this way, the method alleviates the loss of edge and other detail information in the fused image that occurs with current image fusion methods.

Description

Image fusion method, device and computer readable storage medium
Technical Field
The present invention relates to the field of image fusion technologies, and in particular, to an image fusion method, apparatus, and computer readable storage medium.
Background
Image fusion has become a research hotspot in the field of image engineering in recent years. Image fusion refers to the comprehensive processing of images from different sources: information is extracted from two or more source images and integrated to obtain a more accurate, comprehensive and reliable description of the same scene or target, thereby improving the reliability of target recognition.
In the prior art, image fusion usually incurs some loss of the spectral information in the original images, so that fine detail textures such as edges are lost and the quality of the fused image is degraded.
Disclosure of Invention
The invention mainly solves the technical problem of providing an image fusion method, an image fusion device and a computer-readable storage medium, which can alleviate the loss of edge detail information in the fused image that occurs with current image fusion methods.
In order to solve the above technical problem, one technical solution adopted by the invention is to provide an image fusion method, including: extracting the high-frequency and low-frequency components of a first image to obtain a first high-frequency component and a first low-frequency component; extracting the high-frequency and low-frequency components of a second image to obtain a second high-frequency component and a second low-frequency component, wherein the resolution of the first image is lower than that of the second image; determining the edge intensity of each pixel position in the second high-frequency component; determining a local difference metric value for each pixel position in the respective high-frequency components of the first image and the second image to obtain a first local difference metric value and a second local difference metric value for each pixel position; performing pixel-by-pixel weighted fusion of the first high-frequency component and the second high-frequency component to obtain a fused high-frequency component; and obtaining a fused image based on the fused high-frequency component and a fused low-frequency component, the fused low-frequency component being obtained by fusing the first low-frequency component and the second low-frequency component. In the pixel-by-pixel weighted fusion, the weight of each pixel position in the first high-frequency component is positively correlated with the first local difference metric value of that pixel position, and the weight of each pixel position in the second high-frequency component is positively correlated with both the second local difference metric value and the edge intensity of that pixel position.
In an embodiment, the weight of each pixel position in the first high-frequency component is the ratio of the first local difference metric value of that pixel position to a preset value; the weight of each pixel position in the second high-frequency component is the ratio of the product of the second local difference metric value and the edge intensity to the preset value, where the preset value equals the first local difference metric value plus the product of the second local difference metric value and the edge intensity.
In one embodiment, determining the edge intensity for each pixel location in the second high frequency component includes: and processing the first local area taking each pixel position as a center in the second image by using a gradient operator to obtain the edge intensity of each pixel position.
In one embodiment, determining a local difference metric value for each pixel position in the respective high-frequency components of the first image and the second image includes: calculating the local variance of a second local area centered on each pixel position in the first image to obtain the first local difference metric value of that pixel position; and calculating the local variance of a third local area centered on each pixel position in the second image to obtain the second local difference metric value of that pixel position.
In one embodiment, before obtaining the fused image based on the fused high-frequency component and the fused low-frequency component, the method includes: performing histogram matching on the second low-frequency component with the first low-frequency component as a reference to obtain a histogram-matched second low-frequency component; and fusing the histogram-matched second low-frequency component and the first low-frequency component to obtain the fused low-frequency component.
In an embodiment, fusing the histogram-matched second low-frequency component and the first low-frequency component to obtain the fused low-frequency component includes: calculating the local area energy of each pixel position of the first low-frequency component and of the histogram-matched second low-frequency component to obtain a first local area energy and a second local area energy of each pixel position, respectively; taking the pixel value of each first pixel position in the first low-frequency component as the pixel value of that position in the fused low-frequency image, wherein the first local area energy of each first pixel position is greater than its second local area energy; and taking the pixel value of each second pixel position in the histogram-matched second low-frequency component as the pixel value of that position in the fused low-frequency image, wherein the first local area energy of each second pixel position is not greater than its second local area energy.
In one embodiment, calculating the local area energy for each pixel location of each of the first low frequency component and the histogram matched second low frequency component includes: and calculating local average gray values of each pixel position of the first low-frequency component and the second low-frequency component after histogram matching to obtain first local area energy and second local area energy of each pixel position respectively.
In one embodiment, the first image is a luminance component of a multispectral remote sensing image and the second image is a panchromatic remote sensing image; based on the fused high frequency component and the fused low frequency component, a fused image is obtained, comprising: based on the fusion high-frequency component and the fusion low-frequency component, obtaining a fusion brightness component; and replacing the brightness component in the multispectral remote sensing image with the fusion brightness component to obtain the fused multispectral remote sensing image.
In order to solve the above technical problems, another technical solution adopted by the invention is to provide an image fusion apparatus comprising a processor configured to execute instructions to implement an image fusion method as defined in any one of the above.
In order to solve the technical problems, the invention adopts another technical scheme that: there is provided a computer readable storage medium for storing instructions/program data executable to implement an image fusion method as any one of the above.
The beneficial effects of the invention are as follows. Different from the prior art, the method extracts the high-frequency and low-frequency components of the first image to obtain a first high-frequency component and a first low-frequency component, and extracts the high-frequency and low-frequency components of the second image to obtain a second high-frequency component and a second low-frequency component, wherein the resolution of the first image is lower than that of the second image. It determines the edge intensity of each pixel position in the second high-frequency component and the local difference metric value of each pixel position in the high-frequency components of the first and second images, performs pixel-by-pixel weighted fusion of the first and second high-frequency components based on these values to obtain a fused high-frequency component, and obtains a fused image based on the fused high-frequency component and the fused low-frequency component. Because the edge intensity of each pixel position in the second high-frequency component contributes to the weight of the second high-frequency component in the weighted fusion, the sharpness of edge details in the fused image is effectively improved while the distortion of its spectral information is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a flow chart of an embodiment of an image fusion method according to the present invention;
FIG. 2 is a flow chart of an embodiment of low-frequency component fusion according to the present invention;
FIG. 3 is a global image and a locally enlarged region map of a multispectral remote sensing image;
FIG. 4 is a global image and a locally enlarged region map of a full color remote sensing image;
FIG. 5 is a schematic representation of a fused multispectral image of the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of the image fusion device of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not limiting. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Referring to fig. 1, fig. 1 is a flow chart of an embodiment of an image fusion method according to the present invention, the method includes:
step 101: extracting a high-frequency component and a low-frequency component of the first image to obtain a first high-frequency component and a first low-frequency component; and extracting the high-frequency component and the low-frequency component of the second image to obtain a second high-frequency component and a second low-frequency component.
Alternatively, the first image may be transformed, and the high frequency component and the low frequency component of the first image may be extracted, thereby obtaining the first high frequency component and the first low frequency component. I.e. the first high frequency component is a high frequency component of the first image and the first low frequency component is a low frequency component of the first image.
And transforming the second image, extracting the high-frequency component and the low-frequency component of the second image, and further obtaining a second high-frequency component and a second low-frequency component. I.e. the second high frequency component is a high frequency component of the second image and the second low frequency component is a low frequency component of the second image.
The transform applied to the first image and the second image is not limited; for example, the nonsubsampled contourlet transform (NSCT) or the nonsubsampled shearlet transform (NSST) may be used, as long as the high-frequency and low-frequency components of the image can be extracted.
Alternatively, the first image and the second image may be subjected to low-pass filtering processing, respectively, to extract a high-frequency component and a low-frequency component of the first image, and a high-frequency component and a low-frequency component of the second image, thereby obtaining a first high-frequency component, a first low-frequency component, a second high-frequency component, and a second low-frequency component. The low-pass filtering process may be a multi-layer low-pass filtering process of the image.
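As an illustrative sketch only (not taken from the patent), the low-pass filtering split described above could be implemented as follows in Python; the Gaussian kernel size and sigma are assumed values chosen for the example.

```python
# Illustrative sketch only: one possible low-pass decomposition of an image into
# low- and high-frequency components. Kernel size and sigma are assumed values.
import numpy as np
import cv2

def split_frequencies(image, ksize=9, sigma=2.0):
    """Return (low_frequency, high_frequency) for a single-channel image."""
    img = image.astype(np.float64)
    low = cv2.GaussianBlur(img, (ksize, ksize), sigma)  # low-frequency approximation
    high = img - low                                    # remaining detail (high frequency)
    return low, high
```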
Or, the first image can be transformed by a pyramid transformation method, and a high-frequency component and a low-frequency component of the first image are extracted, so that a first high-frequency component and a first low-frequency component are obtained; and transforming the second image, extracting the high-frequency component and the low-frequency component of the second image, and further obtaining a second high-frequency component and a second low-frequency component. In particular, the main steps of the pyramid transformation method may include low-pass filtering, upsampling, downsampling, and bandpass filtering. Each time the pyramid transformation is completed, the image may be divided into a hierarchy. For example, the second image is used as an input image of pyramid transformation, and after pyramid transformation, a second high-frequency component and a second low-frequency component of one level of the corresponding second image can be obtained.
In a specific example, pyramid transformation may be performed on the first image and the second image three times, the first image and the second image are divided into three levels, and the first high frequency component, the first low frequency component, the second high frequency component, and the second low frequency component corresponding to each level of the first image and the second image are extracted based on the first image and the second image of each level, respectively. By way of example, the second image is taken as an input image of three pyramid transformations, and after pyramid transformation, a second high-frequency component and a second low-frequency component of the first hierarchy of the corresponding second image can be obtained. And taking the obtained second high-frequency component and second low-frequency component of the first level as input images of the second pyramid transformation, and the like, and obtaining the second high-frequency component and the second low-frequency component of the third level of the second image after three pyramid transformations. And then, the image fusion is carried out based on the low-frequency components and the high-frequency components of a plurality of layers, so that the clear display of the image details can be realized.
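A minimal sketch of a three-level pyramid decomposition of this kind is given below; it assumes OpenCV's pyrDown/pyrUp as stand-ins for the low-pass filtering, down-sampling and up-sampling steps mentioned above, and is not the patent's exact transform.

```python
# Minimal sketch of a three-level pyramid decomposition (assumption: cv2.pyrDown/pyrUp
# stand in for the low-pass filtering, down-sampling and up-sampling steps of the text).
import numpy as np
import cv2

def pyramid_decompose(image, levels=3):
    """Return per-level high-frequency components and the final low-frequency residual."""
    current = image.astype(np.float64)
    highs = []
    for _ in range(levels):
        low = cv2.pyrDown(current)                                           # low-pass + down-sample
        up = cv2.pyrUp(low, dstsize=(current.shape[1], current.shape[0]))    # up-sample back to current size
        highs.append(current - up)                                           # high-frequency detail of this level
        current = low                                                        # pass the low frequency to the next level
    return highs, current
```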
The first image and the second image may also be acquired before extracting the high frequency component and the low frequency component of the first image and the high frequency component and the low frequency component of the second image. Wherein the resolution of the first image is lower than the resolution of the second image.
In one implementation, the first image is a luminance component in a visible light image and the second image is an infrared image.
In another embodiment, the first image is a luminance component of a multispectral remote sensing image and the second image is a full-color remote sensing image.
Optionally, when the first image is a luminance component of the multispectral remote sensing image and the second image is a full-color remote sensing image, the original satellite remote sensing image may be obtained before extracting the high-frequency component and the low-frequency component of the first image and the high-frequency component and the low-frequency component of the second image; preprocessing a multispectral remote sensing image and a panchromatic remote sensing image in an original satellite remote sensing image to obtain a registered multispectral remote sensing image and a panchromatic remote sensing image.
When the first image is a luminance component of the multispectral remote sensing image, the obtaining the first image may be performing luminance component extraction on the multispectral remote sensing image.
Preferably, the first image may be obtained by performing a GIHS transform on the multispectral remote sensing image and extracting its luminance component. The GIHS transform is a generalized IHS (Generalized Intensity-Hue-Saturation) spatial transform. The chromaticity and saturation components obtained by the GIHS transform of the multispectral remote sensing image can be retained for the subsequent inverse transform. Compared with the traditional IHS transform, which can only handle three channels of an image, adopting the GIHS transform makes the method applicable to multi-channel images while keeping the property that the IHS transform does not change the spectral-line shape.
Optionally, the GIHS transform may be expressed as

[I, V_1, ..., V_{N-1}]^T = T_N · [C_1, C_2, ..., C_N]^T

where C_N denotes the N-th channel of the multispectral remote sensing image, I and V_1 ~ V_{N-1} are the components obtained by the GIHS transform, and T_N is the transformation matrix of the GIHS transform. [Formula images in the original publication give the explicit entries of T_N and the recursive rule used to build T_N for increasing N; they are not reproduced here.]
In other embodiments, the multispectral remote sensing image may also be transformed by a color space algorithm. For example, the rgb2yuv conversion formula may be applied to separate the luminance and chrominance of the multispectral remote sensing image and so obtain the first image. The multispectral remote sensing image may also be transformed in other ways, as long as the image luminance and chromaticity can be separated.
Considering that errors may occur when the first image and the second image are acquired, it may be difficult for the acquired images to record the surface information accurately, which reduces the quality of the image data and in turn affects the accuracy of the fusion result. The first image and the second image may therefore be corrected before their high-frequency and low-frequency components are extracted; that is, deformation, distortion, blur and noise introduced during acquisition are corrected to ensure the accuracy of the fusion. For example, the sizes of the first image and the second image may be made consistent by upsampling with a bicubic interpolation algorithm.
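For illustration, a sketch of the luminance-extraction step is given below. It assumes the common generalized-IHS choice of taking the intensity component as the per-pixel mean of the spectral channels; the patent's exact transformation matrix T_N is not reproduced here.

```python
# Sketch under an assumption: the GIHS intensity component is taken as the per-pixel
# mean of the N multispectral channels, the usual generalized-IHS choice.
# The patent's exact transformation matrix T_N is not reproduced here.
import numpy as np

def extract_intensity(ms_image):
    """ms_image: H x W x N multispectral array. Returns the intensity (luminance) component."""
    return ms_image.astype(np.float64).mean(axis=2)
```

The chrominance information can then be carried implicitly as the per-channel differences from this intensity and used in the later inverse step.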
Step 102: an edge intensity of each pixel location in the second high frequency component is determined.
The edge intensity of each pixel location in the second high frequency component may be determined for subsequent pixel-by-pixel weighted fusion of the first high frequency component and the second high frequency component using the edge intensity of each pixel location in the second high frequency component.
In one embodiment, the edge intensity of each pixel in the second high-frequency component may be calculated with a gradient operator. Specifically, the gradient operator may be used to process a first local area centered on each pixel position in the second high-frequency component to obtain the edge intensity of that pixel position. The gradient operator may be, for example, a Sobel operator or a Roberts cross gradient operator (Roberts Cross Edge Detector); the type of gradient operator is not limited.
The gradient value of each pixel is related to edges, textures and other details of the image: the larger the gradient value, the more prominent the detail features such as edges and textures, which indicates richer spatial information and stronger layering of the image. Therefore, if the edge intensity of each pixel position in the second high-frequency component, together with a weight positively correlated with that edge intensity, is subsequently used to perform pixel-by-pixel weighted fusion of the second high-frequency component and the first high-frequency component, detail features such as edges in the second high-frequency component can be preserved and highlighted in the fused high-frequency component as much as possible, effectively increasing the sharpness of edge details in the fused image.
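A minimal sketch of computing a per-pixel edge intensity with a Sobel gradient operator follows; the 3 x 3 kernel size is an assumed choice rather than a value given by the text.

```python
# Sketch: edge intensity per pixel as the Sobel gradient magnitude.
# The 3x3 kernel size is an assumed choice, not specified by the text.
import numpy as np
import cv2

def edge_intensity(image):
    img = image.astype(np.float64)
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
    return np.sqrt(gx ** 2 + gy ** 2)               # gradient magnitude as edge intensity
```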
In another embodiment, the edge intensity of each pixel in the second image (i.e., the gradient value of each pixel) may be calculated by a gradient operator. Alternatively, the first local region of the second image centered at each pixel location may be processed using a gradient operator to obtain the edge intensity for each pixel location.
Because the high-frequency component mainly contains detail information with sharp changes such as edges, extracting the edge intensity from the high-frequency component may lose part of the detail information. Compared with extracting the edge intensity directly from the second high-frequency component, this embodiment can effectively reduce the loss of detail information such as edges.
In another embodiment, the edge intensity of each pixel in the second image may be determined directly by an edge detection algorithm. Illustratively, when the second image is a panchromatic remote sensing image, each pixel of the panchromatic remote sensing image is filtered by the edge detection algorithm to obtain the edge intensity of each pixel. Compared with performing edge detection directly on the high-frequency component of the panchromatic remote sensing image, this can effectively reduce the loss of detail information such as edges. [The edge detection operator is given as a formula image in the original publication; it is a function of the derivative of the panchromatic remote sensing image and of two constants λ and ε, and is not reproduced here.]
By obtaining the edge intensity of each pixel position in this way, the injection of information from non-edge areas of the high-frequency component of the panchromatic remote sensing image can be suppressed, the spectral distortion of the multispectral image is reduced, and the subsequent image fusion yields a better result.
Step 103: determining local difference measurement values of each pixel position in respective high-frequency components of the first image and the second image, and respectively obtaining a first local difference measurement value and a second local difference measurement value of each pixel position.
Specifically, calculating a local difference metric value of each pixel position in a first high-frequency component in a first image to obtain a first local difference metric value of each pixel position; and calculating a local difference measurement value of each pixel position in the second high-frequency component in the second image to obtain a second local difference measurement value of each pixel position so as to facilitate the subsequent calculation of weights based on the local difference measurement values.
In one embodiment, the local variance metric may be calculated by local variance. Specifically, a local variance of each pixel position in the first high-frequency component in the first image may be calculated, so as to obtain a first local variance of each pixel position; and calculating the local variance of each pixel position in the second high-frequency component in the second image to obtain a second local variance of each pixel position. Still alternatively, the local variance metric may be characterized in terms of local standard deviation, local covariance, or the like.
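One way such a per-pixel local variance could be computed is with box filters, as in the sketch below; the window size is an assumed parameter.

```python
# Sketch: local variance of each pixel over a win x win neighbourhood,
# computed as E[x^2] - (E[x])^2 with box filters. The window size is assumed.
import numpy as np
import cv2

def local_variance(image, win=5):
    img = image.astype(np.float64)
    mean = cv2.blur(img, (win, win))               # local mean E[x]
    mean_sq = cv2.blur(img * img, (win, win))      # local mean of squares E[x^2]
    return np.maximum(mean_sq - mean * mean, 0.0)  # clamp tiny negatives caused by rounding
```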
It should be noted that the execution order of steps 102 and 103 is not limited: step 102 may be executed before step 103, step 103 may be executed before step 102, or the two steps may be executed simultaneously; this has no influence on the subsequent calculation of the fused high-frequency component and the fused low-frequency component.
Step 104: and carrying out pixel-by-pixel weighted fusion on the first high-frequency component and the second high-frequency component to obtain a fused high-frequency component.
Specifically, based on the edge intensity of each pixel position in the second high-frequency component obtained in the above step, and the first local difference metric value of each pixel position in the first high-frequency component in the first image and the second local difference metric value of each pixel position in the second high-frequency component in the second image, weighting and fusing each pixel in the first high-frequency component and the second high-frequency component one by one respectively, so as to obtain a fused high-frequency component. In the process of pixel-by-pixel weighted fusion, the weight of each pixel position in the first high-frequency component is positively correlated with the first local difference metric value of each pixel position, and the weight of each pixel position in the second high-frequency component is positively correlated with the second local difference metric value of each pixel position and the edge intensity.
The weight of each pixel position in the first high-frequency component is the ratio of the first local difference metric value of that pixel position to a preset value; the weight of each pixel position in the second high-frequency component is the ratio of the product of the second local difference metric value and the edge intensity to the preset value, where the preset value is the first local difference metric value plus the product of the second local difference metric value and the edge intensity.
Optionally, the fused high-frequency component may be calculated as:

Hf(i,j) = [Var_I(i,j) * HI(i,j) + Wp(i,j) * Var_P(i,j) * Hp(i,j)] / [Var_I(i,j) + Wp(i,j) * Var_P(i,j)]

where Hf(i,j) is the pixel value at position (i,j) in the fused high-frequency component, Var_P(i,j) is the local difference metric value at position (i,j) of the second high-frequency component of the second image, Var_I(i,j) is the local difference metric value at position (i,j) of the first high-frequency component, Wp(i,j) is the edge intensity at position (i,j) of the second high-frequency component, Hp(i,j) is the pixel value at position (i,j) in the second high-frequency component of the second image, and HI(i,j) is the pixel value at position (i,j) in the first high-frequency component of the first image.
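An element-wise implementation of this weighting rule could look like the following sketch; eps is a small assumed constant added only to avoid division by zero and is not part of the formula above.

```python
# Sketch of the pixel-by-pixel weighted fusion of the high-frequency components.
# var_i, var_p: local difference metric maps; wp: edge-intensity map; hi, hp: high-frequency components.
# eps is an assumed small constant guarding against a zero denominator.
import numpy as np

def fuse_high_frequency(hi, hp, var_i, var_p, wp, eps=1e-12):
    denom = var_i + wp * var_p + eps      # preset value: Var_I + Wp * Var_P
    w_first = var_i / denom               # weight of the first high-frequency component
    w_second = (wp * var_p) / denom       # weight of the second high-frequency component
    return w_first * hi + w_second * hp
```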
Because the high-frequency component extracted from an image embodies its detail information, determining the weight with the edge intensity constrains the injection of luminance information from the second image. This reduces the loss of spectral information while enhancing the sharpness of the fused image, and injects as much of the edge information of the high-resolution image as possible, so that details such as edges in the final fused image are relatively clear.
Alternatively, in addition to the fused high-frequency component obtained based on the above steps, a fused low-frequency component may be obtained so that a fused image is obtained based on the fused high-frequency component and the fused low-frequency component in the subsequent steps. The fusion low-frequency component is obtained by fusing the first low-frequency component and the second low-frequency component.
Further, referring to fig. 2, fig. 2 is a flow chart of an embodiment of the present invention for merging low frequency components.
Step 201: and carrying out histogram matching on the second low-frequency component by taking the first low-frequency component as a reference to obtain a second low-frequency component after the histogram matching.
Optionally, the gray-level distributions of the first low-frequency component and the second low-frequency component may first be counted to build gray-level histograms, and a mapping relationship between the gray levels of the two components may then be established from the histograms. With the gray levels of the first low-frequency component as the reference, the gray levels of the second low-frequency component are mapped one by one, i.e., histogram matching is performed on the second low-frequency component. In the subsequent fusion of the low-frequency components, this reduces the spectral distortion that would be caused by a large difference between the overall gray levels of the second and first low-frequency components. Compared with performing histogram matching on the first image and the second image themselves, this avoids affecting the second high-frequency component and thus avoids losing detail features such as edge information in the second high-frequency component.
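A self-contained sketch of such a gray-level mapping is given below; it uses standard CDF-based histogram matching, and the exact mapping construction used in the patent may differ.

```python
# Sketch: CDF-based histogram matching of `source` (second low-frequency component)
# to `reference` (first low-frequency component).
import numpy as np

def match_histogram(source, reference):
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size      # cumulative distribution of source gray levels
    ref_cdf = np.cumsum(ref_counts) / reference.size   # cumulative distribution of reference gray levels
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)     # map each source quantile to a reference gray level
    return mapped[src_idx].reshape(source.shape)
```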
Step 202: and fusing the second low-frequency component and the first low-frequency component after the histogram is matched to obtain a fused low-frequency component.
Specifically, firstly calculating the local area energy of each pixel position of a first low-frequency component to obtain the first local area energy of each pixel position; and calculating the local area energy of each pixel position of the second low-frequency component after histogram matching, and obtaining the second local area energy of each pixel position. It should be noted that, the "region" herein may be a region with a single pixel, or may include a single pixel and a neighborhood region centered on the single pixel.
Illustratively, the local area energy may be calculated as:

Area(i,j) = Σ_x Σ_y w(x,y) · A(i+x, j+y)

where the sum runs over an N × M window, w(x,y) is the weight, (N, M) is the region size, A denotes the low-frequency component, and (i,j) is the center of the local region.
Based on the above formula, the local average gray value of each pixel position of the first low-frequency component can be calculated to obtain the first local area energy of each pixel position, and the local average gray value of each pixel position of the histogram-matched second low-frequency component can be calculated to obtain the second local area energy of each pixel position.
Alternatively, the local area energy of each pixel position may be obtained by modeling the image as a Markov random field: the first local area energy of each pixel position is obtained from the first low-frequency component, and the second local area energy is obtained from the histogram-matched second low-frequency component.
After determining the first local area energy and the second local area energy of each pixel location, the first local area energy and the second local area energy of each pixel location may be compared, and the first low frequency component and the second low frequency component may be fused based on the comparison result.
Specifically, for each first pixel position whose first local area energy is greater than its second local area energy, the pixel value of that position in the first low-frequency component is taken as the pixel value of that position in the fused low-frequency image.
For each second pixel position whose first local area energy is not greater than its second local area energy, the pixel value of that position in the histogram-matched second low-frequency component is taken as the pixel value of that position in the fused low-frequency image.
Illustratively, the low-frequency components may be fused as:

Lf(i,j) = LI(i,j),  if LI_Area(i,j) > LP_Area(i,j)
Lf(i,j) = Lp(i,j),  otherwise

where Lp(i,j) is the pixel value at position (i,j) in the histogram-matched second low-frequency component and LP_Area(i,j) is the second local area energy at position (i,j); LI(i,j) is the pixel value at position (i,j) in the first low-frequency component and LI_Area(i,j) is the first local area energy at position (i,j); Lf(i,j) is the pixel value at position (i,j) in the fused low-frequency component.
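The selection rule above can be sketched as follows; the local area energy is approximated by a box-filtered local mean, matching the local-average-gray-value description, and the window size is an assumed parameter.

```python
# Sketch: fuse two low-frequency components by picking, at every pixel,
# the one with the larger local area energy (here: local average gray value).
# Window size is an assumed parameter.
import numpy as np
import cv2

def fuse_low_frequency(li, lp, win=5):
    """li: first low-frequency component; lp: histogram-matched second low-frequency component."""
    li = li.astype(np.float64)
    lp = lp.astype(np.float64)
    li_area = cv2.blur(li, (win, win))   # first local area energy
    lp_area = cv2.blur(lp, (win, win))   # second local area energy
    return np.where(li_area > lp_area, li, lp)
```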
Step 105: and obtaining a fusion image based on the fusion high-frequency component and the fusion low-frequency component.
Specifically, after the fused high-frequency component and the fused low-frequency component are obtained based on the steps, a fused image can be obtained based on the fused high-frequency component and the fused low-frequency component.
In one implementation, the fused high-frequency component and the fused low-frequency component may be inverse transformed to obtain the fused image. The inverse transform is not limited; for example, the inverse of the nonsubsampled contourlet transform (NSCT) or of the nonsubsampled shearlet transform (NSST) may be applied.
In a specific embodiment, the fused high-frequency component and the fused low-frequency component may be inverse transformed to obtain a fused luminance component, and the fused image is obtained based on the fused luminance component. Taking the first image as the luminance component of a multispectral remote sensing image and the second image as a panchromatic remote sensing image as an example: after the fused luminance component is obtained, the luminance component of the multispectral remote sensing image is replaced by the fused luminance component to obtain the fused multispectral remote sensing image. Specifically, the fused luminance component may be combined with the remaining components of the multispectral remote sensing image other than the luminance component, for example by the inverse of the GIHS or IHS transform, to obtain the fused multispectral remote sensing image.
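As a hedged sketch of this replacement step, the snippet below assumes the common additive generalized-IHS style inverse, in which the change of the intensity component is added back to every spectral channel; the patent's exact inverse transform is not reproduced here.

```python
# Sketch under an assumption: replace the luminance component of the multispectral image
# by the fused luminance using an additive generalized-IHS style inverse,
# i.e. each channel is shifted by the change of the intensity component.
import numpy as np

def replace_luminance(ms_image, fused_intensity):
    """ms_image: H x W x N multispectral array; fused_intensity: H x W fused luminance component."""
    ms = ms_image.astype(np.float64)
    original_intensity = ms.mean(axis=2)   # intensity taken as the channel mean (assumption)
    delta = fused_intensity - original_intensity
    return ms + delta[:, :, np.newaxis]    # inject the luminance change into every channel
```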
In order to better illustrate the image fusion method of the present application, the following specific embodiments of the image fusion method are provided for exemplary illustration:
1. The multispectral remote sensing image is spatially transformed by the GIHS transform and its luminance component (i.e., the first image) is extracted; transforming the luminance component in GIHS space does not change the spectral-line shape of individual pixels in the original multispectral remote sensing image. The GIHS transform is

[I, V_1, ..., V_{N-1}]^T = T_N · [C_1, C_2, ..., C_N]^T

where C_N denotes the N-th channel of the multispectral remote sensing image, I and V_1 ~ V_{N-1} are the components after the GIHS transform, and T_N is the transformation matrix of the GIHS transform. [Formula images in the original publication give the explicit entries of T_N and the recursive rule for constructing it; they are not reproduced here.]
2. And respectively carrying out non-downsampled contourlet transform (Nonsubsampled Contourlet Transform, NSCT) on the luminance component of the multispectral remote sensing image and the panchromatic remote sensing image to obtain corresponding high-frequency components and low-frequency components.
3. In view of the fact that the high-frequency component reflects the detail information of the original image while the low-frequency component reflects its global information, a histogram-constrained fusion rule is adopted for the low-frequency components: histogram matching is used to map the histogram of the low-frequency component of the panchromatic remote sensing image onto the shape of the histogram of the luminance low-frequency component of the multispectral remote sensing image, which reduces the influence of the pixel-value difference between the two low-frequency components on their fusion; then the local area energies of the histogram-matched panchromatic low-frequency component and of the multispectral luminance low-frequency component are compared, and the component with the larger local area energy is selected as the fusion result. The fusion rule for the high-frequency components is: edge detection is performed on the undecomposed panchromatic remote sensing image to obtain the edge intensity of each pixel position, and weighted fusion is performed based on these edge intensities and the local difference metric values.
The histogram matching method mainly comprises the following steps: and counting the gray level distribution of the low-frequency component of the panchromatic remote sensing image and the gray level distribution of the luminance low-frequency component of the multispectral remote sensing image, establishing a mapping relation between the gray level of the low-frequency component of the panchromatic remote sensing image and the gray level of the luminance low-frequency component of the multispectral remote sensing image according to the gray level histogram, and carrying out one-to-one mapping on the gray level of the low-frequency component of the panchromatic remote sensing image to obtain the low-frequency component of the panchromatic remote sensing image after histogram matching.
The local area energy is essentially the average gray value of a local region:

Area(i,j) = Σ_x Σ_y w(x,y) · A(i+x, j+y)

where the sum runs over an N × M window, w(x,y) is the weight, (N, M) is the region size, A denotes the low-frequency component, and (i,j) is the center of the local region. The low-frequency fusion result is obtained by comparing Area(i,j) of the histogram-matched low-frequency component of the panchromatic remote sensing image with that of the luminance low-frequency component of the multispectral remote sensing image:

Lf(i,j) = Lp(i,j),  if LP_Area(i,j) > LI_Area(i,j)
Lf(i,j) = LI(i,j),  otherwise

where Lp(i,j) is the low-frequency component value at position (i,j) of the histogram-matched panchromatic remote sensing image, LI(i,j) is the low-frequency component value at position (i,j) of the luminance component I, LP_Area(i,j) and LI_Area(i,j) are the corresponding local area energies, and Lf(i,j) is the fused low-frequency component value at position (i,j).
The edge intensity of each pixel position in the panchromatic remote sensing image is obtained by filtering the undecomposed panchromatic remote sensing image with an edge detection operator, which constrains the injection of information from non-edge regions of the high-frequency component of the panchromatic remote sensing image and reduces the spectral distortion of the multispectral remote sensing image. [The edge detection operator is given as a formula image in the original publication; it is a function of the derivative of the panchromatic image and of two constants λ and ε, and is not reproduced here.]
The high-frequency fusion formula is:

Hf(i,j) = [Var_I(i,j) * HI(i,j) + Wp(i,j) * Var_P(i,j) * Hp(i,j)] / [Var_I(i,j) + Wp(i,j) * Var_P(i,j)]

where Hf(i,j) is the fused high-frequency component, Var_P(i,j) is the local difference metric value of the high-frequency component of the panchromatic remote sensing image at position (i,j), Var_I(i,j) is the local difference metric value of the luminance high-frequency component of the multispectral remote sensing image at position (i,j), Wp(i,j) is the edge intensity of each pixel position in the panchromatic remote sensing image, Hp(i,j) is the high-frequency component of the panchromatic remote sensing image, and HI(i,j) is the luminance high-frequency component of the multispectral remote sensing image.
4. NSCT inverse transformation is performed on the fused high-frequency component and the fused low-frequency component to obtain the fused luminance component, and GIHS inverse transformation is then performed to obtain the final multispectral fusion image.
For an example of the fusion, refer to fig. 3, 4 and 5. Fig. 3 and fig. 4 are, respectively, the global image and a locally enlarged region of a multispectral remote sensing image and of a panchromatic remote sensing image of the same unprocessed scene. Fig. 5 shows the fusion result of an embodiment of the present invention. It can be seen that the image processed by the embodiment of the invention combines the rich spectral information of the multispectral remote sensing image with the high spatial resolution of the panchromatic remote sensing image; the local features and detail information of the fused image are more prominent and its sharpness is higher.
Meanwhile, the GIHS transform is adopted in the image fusion process: compared with the traditional IHS transform, which can only handle three channels of an image, the method is applicable to multi-channel images while retaining the property that the IHS transform does not change the spectral-line shape, which improves the quality of the fused image. In view of the fact that the high-frequency component reflects the detail information of the original image and the low-frequency component reflects its global information, the invention designs different fusion rules for the low-frequency and high-frequency components. In the low-frequency fusion, the low-frequency component of the panchromatic remote sensing image is gray-mapped by histogram matching, which reduces the spectral distortion of the final fused image that would otherwise be caused by a large overall gray-level difference between the panchromatic low-frequency component and the multispectral luminance low-frequency component. In the high-frequency fusion, edge detection is performed on the undecomposed panchromatic remote sensing image to obtain the edge intensity of each pixel position, and the edge intensity is used as the weight of the panchromatic high-frequency component in the weighted fusion, which effectively increases the sharpness of edge details and reduces spectral distortion. In summary, the invention alleviates the loss of detail information such as edges in the fused image that occurs with current image fusion methods.
The image fusion method provided by the embodiments of the invention can be applied to an electronic device. The electronic device may be an image acquisition device such as a camera or a video camera, or a device such as a mobile phone, a personal computer (PC) or a tablet computer.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of an image fusion device according to the present invention. The image fusion device includes a processor 50 and a memory 51 coupled to each other, which cooperate to implement the image fusion method described in any of the above embodiments. The memory 51 stores at least one computer program 52 executable on the processor 50, and the processor 50 implements the steps of any of the image fusion method embodiments described above when executing the computer program 52.
The processor 50 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), or the like. Further, the memory 51 may also include both an internal storage unit and an external storage device. The memory 51 is used to store an operating system, application programs, boot loader (BootLoader), data, and other programs, etc., such as program codes of computer programs, etc. The memory 51 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application also provide a computer readable storage medium storing a computer program, where the computer program when executed by a processor implements steps of the foregoing method embodiments.
The present embodiments provide a computer program product which, when run on a terminal device, causes the terminal device to perform steps that enable the respective method embodiments described above to be implemented.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application implements all or part of the flow in the methods of the above embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, where the computer program may implement the steps of the above embodiments of the methods when executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, executable files or in some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a terminal device, a recording medium, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunication signal, and a software distribution medium. Such as a U-disk, removable hard disk, magnetic or optical disk, etc.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the patent application, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present application or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (10)

1. An image fusion method, comprising:
extracting a high-frequency component and a low-frequency component of the first image to obtain a first high-frequency component and a first low-frequency component; extracting a high-frequency component and a low-frequency component of the second image to obtain a second high-frequency component and a second low-frequency component; wherein the resolution of the first image is lower than the resolution of the second image;
determining an edge intensity for each pixel location in the second high frequency component;
determining a local difference metric value for each pixel position in the respective high-frequency components of the first image and the second image, and respectively obtaining a first local difference metric value and a second local difference metric value of each pixel position;
the first high-frequency component and the second high-frequency component are subjected to pixel-by-pixel weighted fusion to obtain a fused high-frequency component;
obtaining a fusion image based on the fusion high-frequency component and the fusion low-frequency component; the fusion low-frequency component is obtained by fusion of the first low-frequency component and the second low-frequency component;
wherein, in the pixel-by-pixel weighted fusion process, the weight of each pixel position in the first high frequency component is positively correlated with the first local difference metric value of each pixel position, and the weight of each pixel position in the second high frequency component is positively correlated with the second local difference metric value of each pixel position and the edge intensity.
2. The method of image fusion according to claim 1, wherein,
the weight of each pixel position in the first high-frequency component is the ratio of the first local difference metric value of each pixel position to a preset value; the weight of each pixel position in the second high-frequency component is the ratio of the product of the second local difference metric value and the edge intensity to the preset value, and the preset value is equal to the value obtained by adding the product of the second local difference metric value and the edge intensity to the first local difference metric value.
3. The image fusion method of claim 1, wherein the determining the edge intensity for each pixel location in the second high frequency component comprises:
and processing a first local area taking each pixel position as a center in the second image by using a gradient operator to obtain the edge intensity of each pixel position.
4. The image fusion method of claim 1, wherein the determining a local difference metric value for each pixel position in the respective high frequency components of the first image and the second image comprises:
calculating local variance of a second local area taking each pixel position as a center in the first image to obtain the first local difference metric value of each pixel position;
and calculating the local variance of a third local area taking each pixel position as a center in the second image to obtain a second local difference metric value of each pixel position.
5. The image fusion method of claim 1, wherein the obtaining a fused image based on the fused high-frequency component and the fused low-frequency component comprises:
performing histogram matching on the second low-frequency component with the first low-frequency component as a reference to obtain a histogram-matched second low-frequency component; and
fusing the histogram-matched second low-frequency component and the first low-frequency component to obtain the fused low-frequency component.
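A minimal sketch of the histogram-matching step in claim 5, using standard cumulative-distribution matching to bring the second low-frequency component to the grey-level distribution of the first; the function name is illustrative, and the claim does not prescribe a particular matching algorithm.

import numpy as np

def match_histogram(source, reference):
    """Map the grey levels of `source` so its histogram matches `reference`."""
    src = source.ravel()
    ref = reference.ravel()
    src_values, src_idx, src_counts = np.unique(
        src, return_inverse=True, return_counts=True)
    ref_values, ref_counts = np.unique(ref, return_counts=True)
    src_cdf = np.cumsum(src_counts).astype(np.float64) / src.size
    ref_cdf = np.cumsum(ref_counts).astype(np.float64) / ref.size
    matched = np.interp(src_cdf, ref_cdf, ref_values)
    return matched[src_idx].reshape(source.shape)

# low2_matched = match_histogram(low2, low1)   # first step of claim 5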
6. The image fusion method of claim 5, wherein the fusing the histogram-matched second low-frequency component and the first low-frequency component to obtain the fused low-frequency component comprises:
calculating a local area energy at each pixel position of the first low-frequency component and of the histogram-matched second low-frequency component to obtain a first local area energy and a second local area energy of each pixel position, respectively;
taking the pixel value of each first pixel position in the first low-frequency component as the pixel value of that first pixel position in the fused low-frequency component, wherein the first local area energy of each first pixel position is larger than the second local area energy of that first pixel position; and
taking the pixel value of each second pixel position in the histogram-matched second low-frequency component as the pixel value of that second pixel position in the fused low-frequency component, wherein the first local area energy of each second pixel position is not larger than the second local area energy of that second pixel position.
7. The image fusion method of claim 6, wherein the calculating a local area energy at each pixel position of each of the first low-frequency component and the histogram-matched second low-frequency component comprises:
calculating a local average gray value at each pixel position of the first low-frequency component and of the histogram-matched second low-frequency component to obtain the first local area energy and the second local area energy of each pixel position, respectively.
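A minimal sketch of the low-frequency fusion in claims 6 and 7, taking the local average gray value (a box-filtered mean) as the local area energy and selecting, per pixel, the component with the larger energy; the window size is illustrative.

import cv2
import numpy as np

def fuse_low_frequency(low1, low2_matched, win=5):
    """Select each pixel from the low-frequency component with the larger
    local mean (local area energy)."""
    e1 = cv2.blur(low1.astype(np.float32), (win, win))          # first local area energy
    e2 = cv2.blur(low2_matched.astype(np.float32), (win, win))  # second local area energy
    return np.where(e1 > e2, low1, low2_matched)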
8. The image fusion method of claim 1, wherein the first image is a luminance component of a multispectral remote sensing image and the second image is a panchromatic (full-color) remote sensing image; and
the obtaining the fused image based on the fused high-frequency component and the fused low-frequency component comprises:
obtaining a fused luminance component based on the fused high-frequency component and the fused low-frequency component; and
replacing the luminance component of the multispectral remote sensing image with the fused luminance component to obtain a fused multispectral remote sensing image.
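A minimal sketch of the component-substitution step in claim 8, assuming the multispectral image has been resampled to the panchromatic resolution and that its luminance is taken from an HSV transform; the colour space and helper names are illustrative, since the claim does not fix how the luminance component is extracted.

import cv2
import numpy as np

def replace_luminance(multispectral_bgr, fused_luminance):
    """Replace the luminance (V) channel of the multispectral image with the
    fused luminance component and convert back to BGR."""
    hsv = cv2.cvtColor(multispectral_bgr, cv2.COLOR_BGR2HSV)
    h, s, _ = cv2.split(hsv)
    v = np.clip(fused_luminance, 0, 255).astype(hsv.dtype)
    fused_hsv = cv2.merge([h, s, v])
    return cv2.cvtColor(fused_hsv, cv2.COLOR_HSV2BGR)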
9. An image fusion apparatus, comprising a processor configured to execute instructions to implement the image fusion method of any one of claims 1-8.
10. A computer-readable storage medium storing instructions or program data, the instructions or program data being executable to implement the image fusion method of any one of claims 1-8.
CN202211743860.5A 2022-12-29 2022-12-29 Image fusion method, device and computer readable storage medium Pending CN116109535A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211743860.5A CN116109535A (en) 2022-12-29 2022-12-29 Image fusion method, device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211743860.5A CN116109535A (en) 2022-12-29 2022-12-29 Image fusion method, device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116109535A true CN116109535A (en) 2023-05-12

Family

ID=86257521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211743860.5A Pending CN116109535A (en) 2022-12-29 2022-12-29 Image fusion method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116109535A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116645285A (en) * 2023-05-15 2023-08-25 浙江大华技术股份有限公司 Image detail enhancement method
CN116703769A (en) * 2023-06-08 2023-09-05 福建鼎旸信息科技股份有限公司 Satellite remote sensing image full-color sharpening system
CN116703769B (en) * 2023-06-08 2024-03-12 福建鼎旸信息科技股份有限公司 Satellite remote sensing image full-color sharpening system
CN118195925A (en) * 2024-05-17 2024-06-14 浙江大华技术股份有限公司 Image fusion method, device and storage medium

Similar Documents

Publication Publication Date Title
US20220044375A1 (en) Saliency Map Enhancement-Based Infrared and Visible Light Fusion Method
CN116109535A (en) Image fusion method, device and computer readable storage medium
CN111260580B (en) Image denoising method, computer device and computer readable storage medium
CN112307901A (en) A SAR and optical image fusion method and system for landslide detection
CN106920221A (en) Take into account the exposure fusion method that Luminance Distribution and details are presented
KR102397148B1 (en) Color Correction Method Using Low Resolution Color Image Of Large-capacity Aerial Orthoimage
CN113066030A (en) Multispectral image panchromatic sharpening method and system based on space-spectrum fusion network
KR20210096925A (en) Flexible Color Correction Method for Massive Aerial Orthoimages
CN118691654B (en) Registration and fusion method and device for optical satellite panchromatic image and multispectral image
CN116777964B (en) Remote sensing image fusion method and system based on texture saliency weighting
Zhang et al. Preprocessing and fusion analysis of GF-2 satellite Remote-sensed spatial data
CN112734636A (en) Fusion method of multi-source heterogeneous remote sensing images
Parmehr et al. Automatic parameter selection for intensity-based registration of imagery to LiDAR data
Miao et al. A dual branch network combining detail information and color feature for remote sensing image dehazing
Neamah et al. The Deep Learning Methods for Fusion Infrared and Visible Images: A Survey.
CN117592001B (en) A data fusion method and device
CN109615584B (en) SAR image sequence MAP super-resolution reconstruction method based on homography constraint
CN115147691B (en) A multimodal remote sensing image fusion method based on multi-directional gradient filtering
CN118279183A (en) Unmanned aerial vehicle remote sensing mapping image enhancement method and system
CN116385567A (en) Method, device and medium for obtaining color card ROI coordinate information
Goud et al. Evaluation of image fusion of multi focus images in spatial and frequency domain
CN108470351A (en) A method, device and storage medium for measuring offset by image plaque tracking
CN119762340B (en) A Hyperspectral Image Stitching Method and System Based on Spectral Fitting
Kamal et al. Resoluting multispectral image using image fusion and CNN model
Kashtan et al. Computer Technology of High Resolution Satellite Image Processing Based on Packet Wavelet Transform.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination