
CN118799179A - Dual-domain network reconstruction method for hyperspectral image super-resolution based on progressive hybrid convolution - Google Patents


Info

Publication number
CN118799179A
Authority
CN
China
Prior art keywords
resolution
hyperspectral image
network
domain
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410801601.6A
Other languages
Chinese (zh)
Inventor
刘源
刘婷婷
隋修宝
邱秉文
魏宏光
蒋桐
陈钱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202410801601.6A priority Critical patent/CN118799179A/en
Publication of CN118799179A publication Critical patent/CN118799179A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract


The present invention discloses a dual-domain network reconstruction method for hyperspectral image super-resolution based on progressive hybrid convolution. Hyperspectral images are downsampled to form a dataset of high-resolution/low-resolution image pairs; a super-resolution network based on progressive hybrid convolution is constructed and trained; and a low-resolution hyperspectral image is fed into the trained network to obtain a high-resolution hyperspectral image. The invention uses a dual-domain strategy over the spatial and frequency domains to enhance image feature information: by converting the reconstructed hyperspectral image and the original high-resolution image into frequency-domain representations and adding an extra module, the model is forced to adaptively restore high-frequency information. Through an end-to-end neural network, the invention achieves spatial super-resolution with high spectral fidelity, makes better use of spatial-spectral feature information, and thereby produces hyperspectral reconstructions with high spatial resolution.

Description

Hyperspectral image super-resolution dual-domain network reconstruction method based on progressive hybrid convolution

Technical Field

The present invention relates to a hyperspectral image super-resolution dual-domain network reconstruction method based on progressive hybrid convolution, and belongs to the technical field of computer vision enhancement.

Background Art

Hyperspectral images (HSI) contain geographic information over large areas, but sensor limitations leave their spatial resolution too low to capture fine details, degrading the continuity of object edges and texture structures. Super-resolution image reconstruction is the process of recovering a high-resolution image from a series of noisy, blurred, and undersampled low-resolution images. Super-resolution (SR) reconstruction improves image quality and visualization, and is crucial in fields such as change detection, mapping, and environmental monitoring.

For hyperspectral images, the problem to be solved is how to improve spatial resolution while retaining spectral information. Remote-sensing super-resolution falls into two main categories: fusion-based methods and single-image super-resolution methods. Fusion-based methods aim to find and extract the missing high-frequency detail from high-resolution panchromatic (PAN) or multispectral (MSI) images. In practical application scenarios, however, such co-registered auxiliary images are often difficult to obtain, so single hyperspectral image super-resolution methods are receiving increasing attention. These methods exploit the limited information in the low-resolution image and learn image priors from large external datasets to constrain the uncertainty of the super-resolution solution space. Early studies mainly relied on sparse representation for processing and analysis.

In recent years, the rapid development of deep learning, and in particular the success of convolutional neural networks (CNNs) in image processing, has brought new breakthroughs to image super-resolution. Deep learning models can effectively learn complex mappings from large amounts of training data and predict high-resolution images on that basis. Compared with RGB image super-resolution, however, hyperspectral image super-resolution is more challenging. First, hyperspectral images usually cover a wide area and have limited spatial resolution relative to the size of the detected objects. Second, because hyperspectral image samples are relatively scarce, building a high-precision data-driven model is difficult. Although existing deep learning methods have achieved some success, many challenges remain, such as designing more effective network structures to improve super-resolution performance, coping with insufficient training samples, and balancing reconstruction accuracy against computational complexity. Hyperspectral image super-resolution therefore remains a problem worthy of further study.

Existing CNN-based hyperspectral image super-resolution (HSI SR) methods have the following problems: (1) CNNs are limited in capturing long-range correlations and spectral self-similarity. Some methods attempt to incorporate attention mechanisms, but they often sacrifice low-weight detail features to improve the computational efficiency of the attention maps. (2) Existing methods focus on reconstructing hyperspectral images in the spatial domain, i.e., optimizing the network with spatial-domain loss functions, so the image is poorly modeled in the frequency domain.

Therefore, a hyperspectral image super-resolution dual-domain network reconstruction method based on progressive hybrid convolution is needed to solve the above problems.

Summary of the Invention

Purpose of the invention: in view of the problems in the prior art, the present invention provides a hyperspectral image super-resolution dual-domain network reconstruction method based on progressive hybrid convolution.

A hyperspectral image super-resolution dual-domain network reconstruction method based on progressive hybrid convolution comprises the following steps:

Step 1: degrade the high-resolution hyperspectral images to obtain corresponding low-resolution hyperspectral images, and form a dataset of image pairs from the low-resolution and high-resolution hyperspectral images;

Step 2: construct a hyperspectral image super-resolution network comprising a shallow feature extraction module, a deep feature extraction module, a global upsampling module, and a reconstruction module,

wherein the shallow feature extraction module comprises a 3×3 convolution and a residual block, the residual block containing two convolution layers and one ReLU nonlinearity;

the deep feature extraction module is a hybrid convolutional network comprising a 2D branch network and a 3D branch network;

the hyperspectral image super-resolution network is trained with the dataset obtained in step 1, yielding a trained hyperspectral image super-resolution network;

Step 3: optimize the hyperspectral image super-resolution network with a dual-domain network having spatial-domain and frequency-domain losses, obtaining a hyperspectral image super-resolution dual-domain network based on progressive hybrid convolution;

Step 4: repeat steps 2 and 3, updating the convolutions and weights until convergence, to obtain a trained hyperspectral image super-resolution dual-domain network based on progressive hybrid convolution;

Step 5: use the trained hyperspectral image super-resolution dual-domain network based on progressive hybrid convolution to reconstruct low-resolution hyperspectral images, obtaining hyperspectral images with high spatial resolution.

Furthermore, the shallow feature extraction module in step 2 is expressed as:

x0 = FRB1(FConv(ILR))

where x0 is the shallow feature, FRB1(·) is the first residual block, ILR is the low-resolution hyperspectral image, and FConv(·) is the 3×3 convolution.
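The shallow feature extraction described above (a 3×3 convolution followed by a residual block with two convolution layers and one ReLU) can be sketched in PyTorch as follows. The feature-channel count (64) and the 31-band toy input are assumed hyperparameters, not values taken from the patent:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block: two conv layers and one ReLU, with an identity skip."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class ShallowFeatureExtractor(nn.Module):
    """x0 = RB1(Conv3x3(I_LR)); the 64-channel width is an assumption."""
    def __init__(self, in_bands: int, feats: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(in_bands, feats, 3, padding=1)
        self.rb1 = ResidualBlock(feats)

    def forward(self, i_lr):
        return self.rb1(self.conv(i_lr))

x = torch.randn(1, 31, 16, 16)            # toy LR HSI with 31 bands
x0 = ShallowFeatureExtractor(31, 64)(x)   # shallow feature map
```

The 3×3 padding keeps the spatial size unchanged, so x0 has shape (1, 64, 16, 16).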

Furthermore, the deep feature extraction module in step 2 is expressed as:

xt = FRB2(y2D + y3D) + x0

where xt is the deep feature, x0 is the shallow feature, FRB2(·) is the second residual block, y2D is the output of the 2D branch network, and y3D is the output of the 3D branch network;

y2D is expressed as:

y2D = fL-up2D(σ(xHSL))

where xHSL is the output feature of the self-attention module, fL-up2D(·) is the 2D local upsampling operation, and σ is the ReLU activation function;

y3D is expressed as:

y3D = fSqueeze(fL-up3D(F1×1×1(FConcat(y3D1, y3D2, y3D3))))

where fSqueeze(·) reshapes the feature map after dimensionality reduction, fL-up3D(·) is the 3D local upsampling operation, FConcat(·) is the concatenation of the layered outputs of the 3D unit, F1×1×1(·) is a 3D convolution with a 1×1×1 kernel, and y3D1, y3D2 and y3D3 are the layered outputs of the three 3D convolutions.

Furthermore, the upsampling module in step 2 is expressed as:

xup = FG_up(xt, r')

where xt is the deep feature, FG_up(·) is the global upsampling operation, and r' is the scale factor remaining after the 2D and 3D local upsampling operations.

Furthermore, the reconstruction module in step 2 is expressed as:

xrec = Fconv(xup)
ISR = xrec + FUP(ILR, r)

where xrec is the convolutional reconstruction output, xup is the feature output of the upsampling module, and FUP(ILR, r) is the upsampling of the input low-resolution hyperspectral image by a scale factor of r via bicubic interpolation.
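The reconstruction step above (a convolution over the upsampled features, plus a bicubic skip connection from the input, as implied by the FUP term) can be sketched as follows; the channel counts and scale factor are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def reconstruct(x_up, i_lr, r, conv):
    """x_rec = Conv(x_up); I_SR = x_rec + bicubic(I_LR, r).

    The addition of the bicubic upsampled input is an assumed global
    skip connection, inferred from the F_UP term in the text.
    """
    x_rec = conv(x_up)
    skip = F.interpolate(i_lr, scale_factor=r, mode="bicubic",
                         align_corners=False)
    return x_rec + skip

bands = 31
conv = nn.Conv2d(64, bands, 3, padding=1)  # maps features back to bands
x_up = torch.randn(1, 64, 32, 32)          # upsampled deep features
i_lr = torch.randn(1, bands, 8, 8)         # original LR input (r = 4)
i_sr = reconstruct(x_up, i_lr, 4, conv)
```

The output i_sr has the target spatial size (32×32) and the original number of spectral bands.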

Furthermore, in step 3 the hyperspectral image super-resolution network is optimized with a dual-domain network having spatial-domain and frequency-domain losses; the loss function is expressed as:

Ltotal = L1 + βLHFL

where L1 is the spatial-domain pixel loss, LHFL is the frequency-domain loss, and β is the weight of the frequency-domain loss;

The L1 loss function is expressed as:

L1(Θ) = (1/N) Σn=1..N ‖IHRn − HNet(ILRn; Θ)‖1

where ILR denotes a low-resolution hyperspectral image, IHRn and ISRn denote the nth original high-resolution image and the reconstructed hyperspectral image respectively, HNet(·) denotes the hyperspectral image super-resolution network, N denotes the number of hyperspectral images in a training batch, and Θ denotes the learnable network parameters;

The LHFL loss function is expressed as:

F(u,v) = Σx=0..H−1 Σy=0..W−1 f(x,y) e^(−j2π(ux/H + vy/W))

LHFL = (1/S) Σ(u,v) w(u,v) d(FGT(u,v), FSR(u,v))

where (u,v) are the coordinates of a spatial frequency on the spectrum and F(u,v) is the complex frequency value, obtained as a sum over every image pixel in the spatial domain; H and W denote the spatial size of the image; FGT denotes the frequency-domain value of the original high-resolution image after frequency-domain transformation; FSR denotes the frequency-domain value of the reconstructed hyperspectral image after frequency-domain transformation; d(FGT, FSR) is the Euclidean distance at the kth frequency, with S frequencies in total; and w(u,v) is the weight of the spatial frequency at (u,v).
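The frequency-domain loss described above can be sketched with NumPy's FFT. The per-frequency Euclidean distance between complex spectra follows the text; the focal-style weighting (weights derived from the distances themselves and normalized) is an assumption, since the patent does not spell out how w(u,v) is computed:

```python
import numpy as np

def hfl(sr, gt, alpha=1.0):
    """Sketch of L_HFL: per-band 2D DFT, per-frequency Euclidean distance
    between complex spectra, weighted by a matrix derived from the
    distances (assumed focal-style weighting, normalized to [0, 1])."""
    f_sr = np.fft.fft2(sr, axes=(-2, -1))
    f_gt = np.fft.fft2(gt, axes=(-2, -1))
    d = np.abs(f_gt - f_sr)            # Euclidean distance per frequency
    w = d ** alpha
    w = w / (w.max() + 1e-12)          # normalize the weight matrix
    return float(np.mean(w * d ** 2))  # average over all S frequencies

gt = np.random.rand(4, 8, 8)           # toy 4-band ground-truth patch
loss_zero = hfl(gt, gt)                # perfect reconstruction
loss_pos = hfl(np.zeros_like(gt), gt)  # poor reconstruction
```

A perfect reconstruction gives zero loss, while any spectral mismatch yields a positive value, which is what drives the network to recover the hard-to-synthesize frequencies.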

Furthermore, β is 0.1.

Beneficial effects: the hyperspectral image super-resolution dual-domain network reconstruction method based on progressive hybrid convolution of the present invention designs a hybrid convolution model with progressive upsampling that contains both two-dimensional (2D) and three-dimensional (3D) convolution modules, where the 2D module improves spatial resolution and the 3D module extracts spectral information. A dual-domain network is also provided that better exploits spectral priors to enhance the consistency between spatial and spectral information. In the spatial domain, a self-attention mechanism with a pyramid structure (HSL) models the features globally; in the frequency domain, a frequency-domain loss (HFL) optimizes the generated model to improve image quality.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flow chart of the hyperspectral image super-resolution dual-domain network reconstruction method based on progressive hybrid convolution;

FIG. 2 is a flow chart of the construction and training of the hyperspectral image super-resolution dual-domain network based on progressive hybrid convolution;

FIG. 3 is a framework diagram of the method, comprising the shallow feature extraction, deep feature extraction, upsampling and reconstruction modules, and the dual-domain loss transformation network;

FIG. 4 is a network architecture diagram of the heterogeneous module provided by the present invention;

FIG. 5 is a model diagram of the spectral attention mechanism provided by the present invention.

DETAILED DESCRIPTION

Preferred embodiments of the present invention are described below with reference to the accompanying drawings to set out the technical solution more clearly and completely.

In recent years, neural network algorithms have been widely used to enhance the resolution of hyperspectral images, but two main problems remain: (1) neural networks are limited in capturing long-range dependencies and inter-spectral correlations; (2) existing methods focus on spatial-domain reconstruction and model frequency-domain information insufficiently. To address these problems, the present invention proposes an innovative method that uses a hybrid convolution model with a parallel structure and progressive upsampling, fusing 2D and 3D modules to achieve progressive reconstruction of hyperspectral images; in addition, the present invention proposes a super-resolution dual-domain network that better attends to high-frequency information. Specifically, in the spatial-spectral domain, a heterogeneous group block (IGM) is designed to enhance inter-channel relationships, and a spatial-spectral attention mechanism (HSL) captures correlations and self-similarity; in the frequency domain, a hyperspectral frequency loss (HFL) optimizes the modeling.

The specific implementation, together with the technical difficulties and inventive points of the invention, is further described below with reference to this design example.

With reference to FIGS. 1-5, the present invention discloses a hyperspectral image super-resolution dual-domain network reconstruction method based on progressive hybrid convolution, comprising the following steps:

Step 1: generate the training sample set.

To further implement the above technical solution, in step 1 the high-resolution hyperspectral images are degraded to obtain corresponding low-resolution hyperspectral images; the high- and low-resolution hyperspectral images are then partitioned into patches, and each pair of high- and low-resolution patches forms one training sample. This embodiment proceeds as follows:

Step 1-1: take a hyperspectral dataset and build low-resolution/high-resolution image pairs on it: the original image serves as the high-resolution image, and the low-resolution image is obtained by Gaussian-kernel blurring, downsampling, and interpolation of the original image;

Step 1-2: set the image patch size and sliding-window stride, slide the window over each high/low-resolution image pair to extract patches, and save the corresponding high- and low-resolution patch pairs.
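Steps 1-1 and 1-2 can be sketched as follows. For brevity the degradation is simplified to block averaging (the patent specifies Gaussian-kernel blurring plus downsampling), and the patch size and stride are illustrative values:

```python
import numpy as np

def degrade(hr, r):
    """Toy degradation: box-average downsampling by factor r.
    (Simplification of the Gaussian blur + downsampling in step 1-1.)"""
    c, h, w = hr.shape
    return hr.reshape(c, h // r, r, w // r, r).mean(axis=(2, 4))

def extract_patch_pairs(hr, r, hr_patch=32, stride=16):
    """Slide a window over the HR image; pair each HR patch with the
    corresponding patch of the degraded LR image (step 1-2)."""
    lr = degrade(hr, r)
    pairs = []
    c, h, w = hr.shape
    for i in range(0, h - hr_patch + 1, stride):
        for j in range(0, w - hr_patch + 1, stride):
            hr_p = hr[:, i:i + hr_patch, j:j + hr_patch]
            lr_p = lr[:, i // r:(i + hr_patch) // r,
                         j // r:(j + hr_patch) // r]
            pairs.append((lr_p, hr_p))
    return pairs

hsi = np.random.rand(31, 64, 64)          # toy HSI: 31 bands, 64x64
pairs = extract_patch_pairs(hsi, r=4)     # (LR 8x8, HR 32x32) pairs
```

With a 64×64 image, a 32-pixel patch, and a 16-pixel stride, the window visits a 3×3 grid of positions, yielding 9 patch pairs.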

Step 2: construct the dual-domain network model for hyperspectral image super-resolution based on progressive hybrid convolution. The network constructed by the present invention has three main parts: shallow feature extraction, deep feature extraction, and the upsampling and reconstruction modules; the hyperspectral image super-resolution reconstruction network is shown in FIG. 3. First, a 3×3 convolution and a residual block perform shallow feature extraction, where the residual block contains two convolution layers and one ReLU nonlinearity. Second, a hybrid convolutional network with a parallel structure and progressive upsampling performs deep feature extraction, where the hybrid convolution contains 2D/3D units. Finally, the network upsamples and reconstructs a hyperspectral high-resolution image of the target size.

Step 3: using the training sample set, construct a hyperspectral image super-resolution reconstruction model based on spectral grouping and hybrid convolution that learns the mapping from low-resolution to high-resolution images:

The training process in this embodiment is as follows:

Step 3-1: denote the high/low-resolution image pairs in the training sample set as (x1,y1),(x2,y2),…,(xi,yi),…,(xn,yn), where yi is the high-resolution image corresponding to xi, i = 1,2,…,n, used as input.

Step 3-2: the goal of the present invention is to reconstruct the input low-resolution hyperspectral image ILR through the super-resolution network, generating a high-resolution hyperspectral image ISR, where ILR denotes the low-resolution hyperspectral image, ISR its reconstruction, and IHR the original high-resolution hyperspectral image; W and H are the width and height of the HSI, L is the number of channels, and r is the SR scale factor from LR to SR. Specifically, as in formula (1):

ISR = HNet(ILR) (1)

where HNet(·) denotes the function of the proposed hyperspectral image super-resolution reconstruction model based on spectral grouping and hybrid convolution.

Step 3-3: shallow features are extracted with a 3×3 convolution and the first residual block FRB1:

x0 = FRB1(FConv(ILR)) (2)

Step 3-4: deep features are extracted with the parallel-structure 2D/3D hybrid module (PAM) and the second residual block FRB2. To alleviate vanishing gradients in HSI SR and effectively improve performance, a residual connection is added to the network, combining the shallow feature x0 with the deep feature xt to further improve model stability and information flow. The corresponding feature xt is defined as:

xt = FRB2(FPAM(x0)) + x0 (3)

where FPAM(·) denotes the parallel-structure 2D/3D hybrid module operation. To ease the burden of the final super-resolution reconstruction, the present invention adopts a local upsampling strategy at the end of both the 2D and 3D branch networks, decomposing the complex task into several simpler sub-tasks, as follows:

The 2D branch network contains the heterogeneous module and the attention mechanism. The heterogeneous module consists of a symmetric group-convolution block and a complementary convolution block arranged in parallel to enhance the internal and external relations of different channels, capturing more representative structural information of different types, as shown in FIG. 4. The symmetric group-convolution block contains two 3-layer branch sub-networks, each layer being Conv+ReLU, which converts feature information into nonlinear features. The complementary convolution block has a single 3-layer branch sub-network that considers the overall features of all channels to enhance their external correlation. The hyperspectral attention mechanism comprises spatial and spectral components that explore long-range dependencies between pixel positions and correlations between spectral bands, as shown in FIG. 5.

y2D = fL-up(σ(xHSL)) (4)

where xHSL denotes the output feature of the HSL module, σ denotes the ReLU activation function, and fL-up(·) denotes the local upsampling operation.
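The heterogeneous module in the 2D branch can be sketched as below. The channel split for the two symmetric sub-networks and the additive fusion of the two blocks are assumptions; the patent only specifies two parallel 3-layer Conv+ReLU sub-networks plus one complementary 3-layer sub-network over all channels:

```python
import torch
import torch.nn as nn

def conv_relu_stack(ch, depth=3):
    """A 3-layer branch sub-network, each layer being Conv+ReLU."""
    layers = []
    for _ in range(depth):
        layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class HeterogeneousModule(nn.Module):
    """Sketch: symmetric group-convolution block (two parallel sub-networks,
    each on half the channels, assumed split) plus a complementary block
    over all channels; outputs are fused by addition (assumed)."""
    def __init__(self, ch):
        super().__init__()
        assert ch % 2 == 0
        self.sym_a = conv_relu_stack(ch // 2)
        self.sym_b = conv_relu_stack(ch // 2)
        self.comp = conv_relu_stack(ch)

    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)
        sym = torch.cat([self.sym_a(a), self.sym_b(b)], dim=1)  # internal relations
        return sym + self.comp(x)                               # external relations

y = HeterogeneousModule(64)(torch.randn(1, 64, 16, 16))
```

All convolutions are padded, so spatial size and channel count are preserved through the module.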

In the 3D branch network, 3D convolutions extract the spatial-spectral information of the hyperspectral image, preserving spectral fidelity while enhancing spatial resolution. The 3D unit contains three separable convolution layers and one ReLU nonlinearity:

y3Di = σ(F3Di(y3Di−1)), i = 1, 2, 3 (5)

y'3D = fL-up(F1×1×1(FConcat(y3D1, y3D2, y3D3))) (6)

where FConcat(·) denotes the concatenation of the layered outputs of the 3D unit; F1×1×1(·) denotes a 3D convolution with a 1×1×1 kernel; y3D1, y3D2 and y3D3 denote the layered outputs of the three 3D convolutions; and fL-up(·) denotes the local upsampling operation. Finally, the obtained feature y'3D is reshaped from four dimensions (1×C×W×H) to a three-dimensional (C×W×H) image size, giving the final output of the 3D-unit branch:

y3D = fSqueeze(y'3D) (7)

where fSqueeze(·) reshapes the feature map after dimensionality reduction.
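A minimal PyTorch sketch of the 3D unit follows: three 3D convolution layers whose outputs are concatenated, fused by a 1×1×1 convolution, and squeezed back from (1, C, W, H) to (C, W, H). Kernel sizes and the intermediate channel width are assumptions (the local upsampling step is omitted for clarity):

```python
import torch
import torch.nn as nn

class Unit3D(nn.Module):
    """Sketch of the 3D unit: three stacked 3D convs, concatenation of
    their layered outputs, a 1x1x1 fusion conv, then a squeeze back to a
    3D cube. The spectral axis is treated as the depth dimension."""
    def __init__(self, feats=16):
        super().__init__()
        self.c1 = nn.Conv3d(1, feats, 3, padding=1)
        self.c2 = nn.Conv3d(feats, feats, 3, padding=1)
        self.c3 = nn.Conv3d(feats, feats, 3, padding=1)
        self.fuse = nn.Conv3d(3 * feats, 1, 1)  # 1x1x1 3D convolution
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):            # x: (N, C, W, H) hyperspectral cube
        x = x.unsqueeze(1)           # (N, 1, C, W, H): bands become depth
        y1 = self.act(self.c1(x))
        y2 = self.act(self.c2(y1))
        y3 = self.act(self.c3(y2))
        y = self.fuse(torch.cat([y1, y2, y3], dim=1))
        return y.squeeze(1)          # squeeze back to (N, C, W, H)

out = Unit3D()(torch.randn(1, 31, 16, 16))
```

Treating the spectral axis as the 3D depth dimension lets the same kernels mix neighboring bands, which is what preserves spectral fidelity while the 2D branch handles spatial detail.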

In the PAM network, the outputs of the 3D-unit branch and the 2D-unit branch are fused at the spatial-pixel level:

xt = FRB2(y2D + y3D) + x0 (8)

where y2D denotes the output of the 2D branch network and y3D the output of the 3D branch network; note that formula (8) is equivalent to formula (3) above.

Step 3-5: to upscale the obtained features to the target size, an upsampling module generates the target spatial-spectral feature map. In particular, to ease the burden of the final super-resolution reconstruction, the present invention adopts progressive upsampling, divided into local upsampling and global upsampling:

xup=FG_up(xt,r) (9)x up = F G_up (x t , r) (9)

where FG_up(·) denotes the global upsampling operation; here a transposed 2D convolution layer upsamples the feature map to the required scale by the scale factor r, and the corresponding feature output is xup.
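The global upsampling operation FG_up via a transposed 2D convolution can be sketched as below; choosing the kernel size and stride both equal to r is an assumption that yields an exact r× upscale without overlap artifacts:

```python
import torch
import torch.nn as nn

def global_upsample(ch, r):
    """Sketch of F_G_up: a transposed 2D convolution that upscales the
    feature map by scale factor r. kernel_size = stride = r is an
    assumed choice giving exactly r-times larger output."""
    return nn.ConvTranspose2d(ch, ch, kernel_size=r, stride=r)

x_t = torch.randn(1, 64, 8, 8)       # deep features
x_up = global_upsample(64, 4)(x_t)   # 8x8 -> 32x32
```

With kernel = stride = r, the output size is (in − 1)·r + r = in·r, i.e. an exact 4× upscale here.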

Step 3-6. After upsampling, a convolution-layer reconstruction operation reconstructs the SR hyperspectral image I_SR:

x_rec = F_conv(x_up)    (10)

where x_rec denotes the convolution-layer reconstruction, and F_UP(I_LR↑) denotes the bicubic upsampling of the input low-resolution hyperspectral image by the scale factor r.
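Reading the F_UP(I_LR↑) term as an additive global skip connection, a common super-resolution practice and an assumption here, the reconstruction step can be sketched as:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def reconstruct(x_up, i_lr, r, rec_conv):
    """Final reconstruction: a convolution F_conv over the upsampled features,
    plus the bicubically upsampled LR input added as a global residual."""
    x_rec = rec_conv(x_up)  # x_rec = F_conv(x_up)
    i_bic = F.interpolate(i_lr, scale_factor=r, mode='bicubic',
                          align_corners=False)  # F_UP(I_LR, r)
    return x_rec + i_bic  # I_SR
```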

Step 4. A dual-domain loss network optimizes the model in both the spatial domain and the frequency domain, progressively refining the generated features to improve image quality, as follows:

Step 4-1. This invention uses the L1 loss to compute the spatial-domain pixel loss, which effectively penalizes small errors:

Step 4-2. Because the inherent bias of CNNs makes the network tend to avoid frequencies that are hard to synthesize, and the spatial-domain loss alone does little to help the network find these frequencies, this invention converts the HSI data into a frequency-domain representation and adds an extra module that forces the model to adaptively recover high-frequency information:

where (u, v) are the coordinates of a spatial frequency on the spectrum; F(u, v) is the complex frequency value, obtained as a sum over every image pixel in the spatial domain; the Euclidean distance is taken for each single frequency (the k-th, for example); a weight is assigned to the spatial frequency at (u, v); and L_HFL is the frequency-domain loss.
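A hedged sketch of the frequency-domain loss: both images are transformed with a 2D FFT, the squared Euclidean distance between the complex spectra is taken per frequency, and each frequency is weighted by its own error so that hard-to-synthesize frequencies dominate (a focal-frequency-loss style weighting; the exact weight definition used in the patent is not reproduced here):

```python
import torch


def frequency_loss(sr, gt, alpha=1.0):
    """Frequency-domain loss sketch. F_SR(u, v) and F_GT(u, v) are the complex
    spectra of the reconstructed and ground-truth images; the per-frequency
    weight w(u, v) grows with the spectral error at that frequency."""
    f_sr = torch.fft.fft2(sr, norm='ortho')
    f_gt = torch.fft.fft2(gt, norm='ortho')
    dist = (f_sr - f_gt).abs() ** 2          # d(F_SR, F_GT)^2 at each (u, v)
    w = dist.sqrt() ** alpha                 # weight grows with the error
    w = (w / (w.max() + 1e-12)).detach()     # normalize; no gradient through w
    return (w * dist).mean()
```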

Step 4-3. The total loss used in this invention combines the two domains: the L1 loss measures the pixel loss in the spatial domain, while L_HFL measures the loss in the spectral (frequency) domain:

L_total = L1 + β·L_HFL    (15)

where β balances the two losses; in this invention, β is set to 0.1.
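The combined objective can be sketched directly; the frequency term here is simplified to an unweighted L1 distance between FFT spectra, an illustrative stand-in for the full L_HFL:

```python
import torch


def total_loss(sr, gt, beta=0.1):
    """Dual-domain objective L_total = L1 + beta * L_HFL, with beta = 0.1
    as stated in the text. The spatial term is the mean absolute pixel
    error; the frequency term compares 2D FFT spectra."""
    l1 = (sr - gt).abs().mean()  # spatial-domain pixel loss
    hfl = (torch.fft.fft2(sr, norm='ortho')
           - torch.fft.fft2(gt, norm='ortho')).abs().mean()
    return l1 + beta * hfl
```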

Step 5. Steps 3 and 4 are repeated, updating the convolution kernels and weights, until the model converges. Once training ends, a trained hyperspectral image super-resolution network model based on spectral grouping and hybrid convolution is obtained.

Step 6. The trained hyperspectral image super-resolution network model based on spectral grouping and hybrid convolution processes the low-resolution hyperspectral image to be reconstructed, thereby reconstructing the super-resolution hyperspectral image.

Claims (7)

1. A dual-domain network reconstruction method for hyperspectral image super-resolution based on progressive hybrid convolution, characterized by comprising the following steps:

Step 1: degrading a high-resolution hyperspectral image to obtain a corresponding low-resolution hyperspectral image, and forming a dataset of image pairs from the low-resolution and high-resolution hyperspectral images;

Step 2: constructing a hyperspectral image super-resolution network comprising a shallow feature extraction module, a deep feature extraction module, a global upsampling module and a reconstruction module, wherein the shallow feature extraction module comprises a 3×3 convolution and a residual block, the residual block containing two convolution layers and a ReLU nonlinear function, and the deep feature extraction module is a hybrid convolutional network comprising a 2D branch network and a 3D branch network; and training the hyperspectral image super-resolution network on the dataset obtained in Step 1 to obtain a trained hyperspectral image super-resolution network;

Step 3: optimizing the hyperspectral image super-resolution network with a dual-domain network having spatial-domain and frequency-domain losses, to obtain a hyperspectral image super-resolution dual-domain network based on progressive hybrid convolution;

Step 4: repeating Steps 2 and 3, updating the convolution kernels and weights until convergence, to obtain a trained hyperspectral image super-resolution dual-domain network based on progressive hybrid convolution;

Step 5: reconstructing the low-resolution hyperspectral image with the trained dual-domain network, to obtain a hyperspectral image of high spatial resolution.

2. The method of claim 1, wherein, in Step 2, the shallow feature extraction module produces the shallow feature x0 by applying the first residual block to the output of the 3×3 convolution F_Conv(·) on the low-resolution hyperspectral image.

3. The method of claim 1, wherein, in Step 2, the deep feature extraction module produces the deep feature x_t from the shallow feature x0 through the second residual block, with y2D denoting the output of the 2D branch network and y3D denoting the output of the 3D branch network; y2D is formed from the output features of the self-attention module through a 2D local upsampling operation with the ReLU activation function σ; y3D is formed by the cascade of the layered outputs of three 3D convolutions, fused by a 3D convolution with a 1×1×1 kernel, followed by a 3D local upsampling operation and fSqueese(·), the squeeze operation that produces the reduced feature-map dimensionality.

4. The method of claim 1, wherein the global upsampling module in Step 2 is expressed by the formula

x_up = F_G_up(x_t, r')

where x_t is the deep feature, F_G_up(·) denotes the global upsampling operation, and r' is the scale factor remaining after the 2D local upsampling and 3D local upsampling operations.

5. The method of claim 1, wherein the reconstruction module in Step 2 is expressed by the formula

x_rec = F_conv(x_up)

where x_rec denotes the convolution-layer reconstruction, x_up denotes the feature output of the upsampling module, and F_UP(I_LR↑, r) denotes the bicubic upsampling of the input low-resolution hyperspectral image by the scale factor r.

6. The method of claim 1, wherein, in Step 3, the dual-domain network with spatial-domain and frequency-domain losses optimizes the hyperspectral image super-resolution network with the loss function

L_total = L1 + β·L_HFL

where L1 is the spatial-domain pixel loss, L_HFL is the frequency-domain loss, and β is the weight of the loss function; the L1 loss is computed over the low-resolution hyperspectral image I_LR, the n-th original high-resolution image and the corresponding reconstructed hyperspectral image, with H_Net(·) denoting the hyperspectral image super-resolution network, N the number of hyperspectral images in a training batch, and Θ the learnable network parameters; the L_HFL loss is defined over the spatial-frequency coordinates (u, v), where F(u, v) is the complex frequency value obtained as a sum over every image pixel in the spatial domain, H and W are the spatial dimensions of the image, F_GT and F_SR are the frequency-domain values of the original high-resolution image and of the reconstructed hyperspectral image after the frequency-domain transform, d denotes the Euclidean distance for the k-th of S frequencies in total, and a weight is assigned to the spatial frequency at (u, v).

7. The method of claim 6, characterized in that β is 0.1.
CN202410801601.6A 2024-06-20 2024-06-20 Dual-domain network reconstruction method for hyperspectral image super-resolution based on progressive hybrid convolution Pending CN118799179A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410801601.6A CN118799179A (en) 2024-06-20 2024-06-20 Dual-domain network reconstruction method for hyperspectral image super-resolution based on progressive hybrid convolution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410801601.6A CN118799179A (en) 2024-06-20 2024-06-20 Dual-domain network reconstruction method for hyperspectral image super-resolution based on progressive hybrid convolution

Publications (1)

Publication Number Publication Date
CN118799179A 2024-10-18

Family

ID=93023050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410801601.6A Pending CN118799179A (en) 2024-06-20 2024-06-20 Dual-domain network reconstruction method for hyperspectral image super-resolution based on progressive hybrid convolution

Country Status (1)

Country Link
CN (1) CN118799179A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119599874A (en) * 2024-12-05 2025-03-11 云南大学 Hyperspectral image super-resolution method based on double-domain gating convolution attention mechanism
CN119599890A (en) * 2024-10-29 2025-03-11 南京理工大学 Enhanced photoacoustic microscopy imaging method based on physical degradation learning
CN119625112A (en) * 2025-02-14 2025-03-14 宁波大学科学技术学院 A high-quality HRHS image generation method based on twin network structure
CN119887574A (en) * 2025-01-03 2025-04-25 西华师范大学 Mixed domain dual-branch network for image denoising
CN120259478A (en) * 2025-06-04 2025-07-04 西湖大学 Efficient hyperspectral imaging method, device and readable storage medium for mobile devices
CN120388095A (en) * 2025-06-27 2025-07-29 之江实验室 MRI image reconstruction method, device, electronic device and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination