
CN116630700A - Remote sensing image classification method based on introduction channel-space attention mechanism - Google Patents



Publication number
CN116630700A
CN116630700A (application CN202310577290.5A)
Authority
CN
China
Prior art keywords
layer
channel
feature
spatial
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310577290.5A
Other languages
Chinese (zh)
Inventor
张朝柱
赵茹
薛丹
刘晓
穆虹志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology filed Critical Qilu University of Technology
Priority claimed from CN202310577290.5A
Publication of CN116630700A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/58: Extraction of image or video features relating to hyperspectral data
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/194: Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10: Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the field of hyperspectral remote sensing image classification, and in particular to a remote sensing image classification method based on introducing a channel-spatial attention mechanism, comprising the following steps: S1, obtain a hyperspectral remote sensing image and preprocess it into hyperspectral pixel modules that carry the feature information of the image; S2, feed the hyperspectral pixel modules into a multi-channel convolutional neural network model to extract deep features and obtain a spatial-spectral fusion feature map; S3, introduce a channel-spatial attention module during feature extraction to obtain a full spatial-spectral feature map; S4, feed the extracted feature map into a fully connected layer to integrate the abstract features extracted by the convolutional layers, and at the output layer use a Softmax function to map the input vector elements to the (0, 1) interval, yielding the output classification result map. The method embeds the attention module in a multi-channel convolutional neural network framework for hyperspectral remote sensing image classification, fully exploiting both spatial and spectral characteristics and thereby improving the classification accuracy of hyperspectral images.

Description

Remote Sensing Image Classification Method Based on Introducing a Channel-Spatial Attention Mechanism

Technical Field

The invention relates to the field of hyperspectral remote sensing image classification, and in particular to a remote sensing image classification method based on introducing a channel-spatial attention mechanism, in which a multi-channel convolutional neural network is used.

Background

Remote sensing is a technique for observing and studying the Earth's surface by detecting it remotely and acquiring related information such as its spectrum and radiation. Hyperspectral sensors acquire hyperspectral images from spectral remote sensing platforms; compared with traditional remote sensing images, hyperspectral images carry richer spatial and spectral information, and are therefore widely used in agriculture, environmental monitoring, military applications, and grassland surveys. In a hyperspectral image, every pixel is assigned a unique label, and a computer discriminates automatically by identifying the label attributes, finally classifying the pixels of the hyperspectral image into predefined categories to complete the classification task. Hyperspectral image classification is the foundation of hyperspectral image applications.

In recent years, deep learning (DL) methods for hyperspectral image classification have become a research hotspot thanks to advantages such as parameter sharing, with convolutional neural networks (CNNs) taking the lead. Three types of convolution kernels are common: 1D-CNN, 2D-CNN, and 3D-CNN. Spectral information is a one-dimensional vector and is usually classified with a 1D-CNN; a 2D-CNN can extract local spatial information around a target pixel in a hyperspectral remote sensing image, although the spectral structure may be damaged in the process; the 3D-CNN is the most widely used, because it considers spatial and spectral information jointly and extracts features more comprehensively. In terms of network structure, CNN-based hyperspectral image classification methods fall into two main categories: those based on traditional CNN structures such as LeNet, AlexNet, and VGG, and those based on deeper structures such as deep residual networks (ResNet) and Inception. For feature extraction, researchers have proposed new convolution kernel designs such as spectral-spatial convolution, multi-scale convolution, and multi-channel convolution, which extract richer and more meaningful features and thereby improve the accuracy of hyperspectral image classification.

Feature fusion methods based on multi-channel, multi-scale feature extraction are now widely applied. For example, a dual-channel spectrum-enhanced CNN designs a spectral feature extraction channel to strengthen the spectral representation of specific pixels, uses small convolution kernels in the spatial-spectral channel to extract the spatial-spectral features of the HSI, and improves HSI classification by adjusting the fusion ratio between the two channels.

To improve a model's feature extraction performance, researchers have combined it with other techniques, such as attention mechanisms, to raise classification accuracy. The essence of an attention mechanism is to assign contribution weights over space, spectrum, and channels, deciding which features to focus on at a given moment and ignoring useless information, thereby improving model performance. Woo et al. added the CBAM module to a basic CNN model. CBAM combines a channel attention module, which compresses the feature layer over the spatial domain, with a spatial attention module, which compresses over the channel domain. Weighting spectrum and space in this way captures image features effectively.

When deep learning methods with simple network structures are applied to hyperspectral image classification, the features they can extract are limited and classification accuracy suffers. The main contribution of this application is to introduce a channel-spatial attention module and embed it in a multi-channel convolutional neural network framework for hyperspectral remote sensing image classification, paying more attention to useful feature information and ignoring useless information, thereby improving the classification accuracy of hyperspectral images.

Summary of the Invention

In view of the above problems, the present invention provides a remote sensing image classification method based on introducing a channel-spatial attention mechanism. The attention module is embedded in a multi-channel convolutional neural network framework for hyperspectral remote sensing image classification, fully accounting for spatial and spectral characteristics and thereby improving the classification accuracy of hyperspectral images.

The present invention provides the following technical solution: a remote sensing image classification method based on introducing a channel-spatial attention mechanism, comprising the following steps:

S1. Obtain a hyperspectral remote sensing image and preprocess the data, yielding hyperspectral pixel modules that carry the feature information of the hyperspectral remote sensing image;

S2. Feed the hyperspectral pixel modules into the multi-channel convolutional neural network model AMC-CNN for deep feature extraction to obtain a spatial-spectral fusion feature map. Every channel in the network uses three-dimensional convolution as its feature extractor, extracting spectral and spatial key information from the spatial-spectral fusion feature map to obtain the feature map R4;

S3. During feature extraction, introduce a channel-spatial attention module to improve the classification performance of the model, extracting spectral and spatial key information from the spectral fusion feature map to obtain a full spatial-spectral feature map R5;

S4. Feed the extracted feature map R5 into the fully connected layer FC to integrate the abstract features extracted by the convolutional layers; at the output layer, use the Softmax function to map the input vector elements to the (0, 1) interval, and finally output the per-class probabilities and obtain the classification result map R6.

In step S1, the preprocessing divides the hyperspectral data R1 into modules at two different scales, a 3×3 pixel module R2 and a 5×5 pixel module R3, to obtain the feature information of the hyperspectral pixel modules.

Step S1 specifically comprises:

S101: The original hyperspectral remote sensing image is three-dimensional data containing rich spatial and spectral information;

S102: Centering on each individual pixel of the original hyperspectral data R1, neighboring pixel modules are extracted at the two scales 3×3 and 5×5;

S103: The extracted 3×3 pixel module R2 and 5×5 pixel module R3 are fed into channels of different sizes, with input sizes 1×3×3×B and 1×5×5×B respectively, where B is the number of bands of the hyperspectral image;

S104: The preprocessed pixel modules are divided into a training sample set and a test sample set.
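The multi-scale extraction of S102–S103 can be sketched in plain NumPy. This is a minimal illustration only: the edge padding used at image borders and the toy array sizes are assumptions, since the patent does not specify border handling.

```python
import numpy as np

def extract_patches(cube, size):
    """Extract a (size x size x B) neighborhood centered on every pixel.

    cube: hyperspectral data R1 with shape (H, W, B).
    Border handling (edge padding here) is an assumption, not from the patent.
    """
    h, w, b = cube.shape
    pad = size // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    patches = np.empty((h, w, size, size, b), dtype=cube.dtype)
    for i in range(h):
        for j in range(w):
            # neighborhood of pixel (i, j) in the padded cube
            patches[i, j] = padded[i:i + size, j:j + size, :]
    # reshape to the per-pixel network input layout 1 x size x size x B
    return patches.reshape(h * w, 1, size, size, b)

cube = np.random.rand(8, 8, 20)   # toy image, B = 20 bands
r2 = extract_patches(cube, 3)     # 3x3 pixel modules, shape (64, 1, 3, 3, 20)
r3 = extract_patches(cube, 5)     # 5x5 pixel modules, shape (64, 1, 5, 5, 20)
```

The resulting arrays would then be split into training and test sets as in S104.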

Step S2 specifically comprises:

S201. Every channel in the network uses three-dimensional convolution as its feature extractor, extracting spectral and spatial key information from the spatial-spectral fusion feature map to obtain the feature map R4;

S202. In AMC-CNN, the data passes in sequence through two convolutional layers, a pooling layer (AvgPool), a channel attention module, a convolutional layer, a spatial attention module, a pooling layer, a convolutional layer, and a fully connected (FC) layer, and the classification result is finally produced by a Softmax layer;

S203. A convolutional layer comprises several convolution kernels used for feature extraction and is the most important component of a convolutional neural network. Let the input image be X; the output Q of the convolutional layer is given by formula (1)

Q = f(w * X + b)  (1)

where * denotes the convolution operation, f is the activation function, w is the weight, and b is the bias;

S204. When the convolutional neural network convolves the hyperspectral remote sensing pixel modules R2 and R3, each convolutional layer uses n convolution kernels, and different kernel sizes are used for the 3×3 and the 5×5 pixel modules;

S205. The pooling layer shrinks the feature space after the convolution operation, reducing the network parameters, speeding up computation, limiting the number of parameters reaching the fully connected layer, and preventing overfitting; pooling operations include max pooling and average pooling layers;

S206. The network model uses the cross-entropy loss function, expressed as formula (2)

L = -(1/N) Σ_i Σ_{c=1}^{M} y_ic log(p_ic)  (2)

where N is the number of samples, M is the number of categories, y_ic is an indicator function taking the value 0 or 1 (1 if the true category of sample i is c, 0 otherwise), and p_ic is the predicted probability that sample i belongs to category c.
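Formulas (1) and (2) can be sketched in NumPy as follows; the ReLU activation f, the averaging over N samples, and the toy inputs are assumptions chosen for illustration, not details fixed by the patent.

```python
import numpy as np

def conv_output(x, w, b):
    """Formula (1): Q = f(w * X + b) at a single convolution position.
    ReLU as the activation f is an assumption."""
    return np.maximum(0.0, np.sum(w * x) + b)

def cross_entropy(y, p, eps=1e-12):
    """Formula (2): L = -(1/N) sum_i sum_c y_ic * log(p_ic).
    y: one-hot labels, shape (N, M); p: predicted probabilities, shape (N, M)."""
    return -np.mean(np.sum(y * np.log(p + eps), axis=1))

# one 2x2x2 receptive field, all-ones input, 0.5 weights, bias -1
q = conv_output(np.ones((2, 2, 2)), 0.5 * np.ones((2, 2, 2)), -1.0)

y = np.array([[1, 0, 0], [0, 1, 0]], dtype=float)  # true classes, M = 3
p = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])   # predicted probabilities
loss = cross_entropy(y, p)   # -(log 0.7 + log 0.8) / 2
```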

Step S3 comprises:

S301: The input pixel modules first pass through two consecutive convolutional layers and are then zero-padded so that the multi-scale pixel module vectors match the channel-spatial attention module vectors;

S302: The channel attention module weights the channels of the multi-scale pixel modules. Global max pooling (7, 7, 1) and global average pooling (7, 7, 1) over the width and height produce two feature vectors whose length equals the number of channels of the input feature layer. Both vectors pass through a shared MLP, the outputs are summed element-wise, and a Sigmoid activation maps the result into [0, 1] to give the channel attention Mc. Its elements are multiplied one-to-one with the channels of the input feature layer, and the result is output as the input features required by the spatial attention module, completing the channel attention stage;
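A minimal NumPy sketch of the channel attention stage in S302, assuming a shared two-layer MLP with a reduction ratio of 2 and random toy weights (neither is stated in the patent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """S302 sketch: x has shape (H, W, C). Global max/avg pooling over H and W
    gives two length-C vectors; both pass through the same shared MLP
    (w1: C x C//r, w2: C//r x C), the outputs are summed element-wise, and a
    sigmoid maps the result into [0, 1] to give Mc, which rescales each channel."""
    avg = x.mean(axis=(0, 1))                       # (C,) from average pooling
    mx = x.max(axis=(0, 1))                         # (C,) from max pooling
    mlp = lambda v: np.maximum(0.0, v @ w1) @ w2    # shared MLP, ReLU hidden layer
    mc = sigmoid(mlp(avg) + mlp(mx))                # channel attention weights Mc
    return x * mc, mc                               # broadcast multiply per channel

rng = np.random.default_rng(0)
x = rng.random((7, 7, 8))                 # toy feature map, C = 8 channels
w1 = rng.standard_normal((8, 4))          # reduction ratio r = 2 (assumed)
w2 = rng.standard_normal((4, 8))
out, mc = channel_attention(x, w1, w2)
```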

S303: The feature layer output by the channel attention module is taken as the input to the spatial attention module, which weights the spatial positions, concentrating on the more important features of the spatial layout and weakening features at unimportant positions, completing the spatial attention stage.

In step S303, the feature map output by the channel attention module is first taken as the input of the spatial attention module. Channel-wise max pooling and average pooling are applied to the input feature layer, increasing the number of channels of the result; a convolution of size (7, 7, 1) then reduces the dimensionality, and a Sigmoid function yields the weight of each feature point, generating the spatial attention feature map Ms. Finally, the generated Ms is multiplied with the input feature layer to obtain the final output feature map, completing the spatial attention stage.
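The spatial attention stage of S303 can be sketched likewise; the "same" padding of the 7×7 convolution and the toy kernel values are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(x, kernel, bias=0.0):
    """S303 sketch: x has shape (H, W, C). Max and average pooling along the
    channel axis give a 2-channel map; a 7x7 convolution (kernel shape
    (7, 7, 2)) reduces it to one channel, and a sigmoid yields the spatial
    attention map Ms, which rescales every spatial position of x."""
    stacked = np.stack([x.max(axis=2), x.mean(axis=2)], axis=2)   # (H, W, 2)
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(stacked, ((pad, pad), (pad, pad), (0, 0)))    # "same" conv
    h, w, _ = stacked.shape
    ms = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            ms[i, j] = np.sum(padded[i:i + k, j:j + k, :] * kernel) + bias
    ms = sigmoid(ms)                      # spatial attention feature map Ms
    return x * ms[:, :, None], ms

rng = np.random.default_rng(1)
x = rng.random((9, 9, 8))                 # toy channel-attention output
kernel = rng.standard_normal((7, 7, 2)) * 0.1
out, ms = spatial_attention(x, kernel)
```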

Step S4 comprises:

The feature map R5 extracted by the model is fed into the fully connected layer FC, which uses the Softmax function to output the per-class probabilities; the Softmax function normalizes the feature map output by the fully connected layer before it reaches the output layer, using formula (3)

P_i = exp(Y_i) / Σ_j exp(Y_j)  (3)

where Y_i is the i-th element of the vector Y (i a positive integer), i.e. the output of the hyperspectral pixel after the fully connected layer, and P_i is the output probability of the corresponding land-cover category.
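The Softmax mapping of formula (3) in a minimal sketch; subtracting the maximum before exponentiation is a standard numerical-stability trick, not part of the patent text.

```python
import numpy as np

def softmax(y):
    """Formula (3): P_i = exp(Y_i) / sum_j exp(Y_j).
    Subtracting max(y) keeps the exponentials from overflowing."""
    e = np.exp(y - np.max(y))
    return e / e.sum()

y = np.array([2.0, 1.0, 0.1])   # toy fully-connected-layer output Y
p = softmax(y)                  # per-class probabilities in (0, 1), summing to 1
pred = int(np.argmax(p))        # predicted class index
```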

As the description above shows, this remote sensing image classification method introduces a channel-spatial attention mechanism into a multi-channel convolutional neural network. The multi-scale pixel modules better match the scale of feature extraction and benefit network speed; the multi-channel structure extracts deep spatial-domain and spectral-domain features more fully and effectively; and the channel-spatial attention module focuses on the important features of the spectral information while ignoring useless information, improving efficiency and making the classification more accurate.

Brief Description of the Drawings

Fig. 1 is a flowchart of hyperspectral image classification according to a specific embodiment of the present invention.

Fig. 2 is a network diagram of the pixel-module convolutional neural network for hyperspectral remote sensing image classification.

Fig. 3 is a schematic comparison of hyperspectral remote sensing image classification results according to an embodiment of the present invention.

Detailed Description

The technical solutions in the specific embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiment is only one specific embodiment of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

As can be seen from the accompanying drawings, the remote sensing image classification method of the present invention, based on introducing a channel-spatial attention mechanism, proceeds as follows:

S1. Obtain a hyperspectral remote sensing image and preprocess the data. In the preprocessing stage, the hyperspectral data R1 is divided into modules at two different scales, a 3×3 pixel module R2 and a 5×5 pixel module R3, yielding the feature information of the hyperspectral pixel modules. Specifically:

S101: The original hyperspectral remote sensing image is three-dimensional data containing rich spatial and spectral information;

S102: Centering on each individual pixel of the original hyperspectral data R1, neighboring pixel modules are extracted at the scales 3×3 and 5×5;

S103: The extracted 3×3 pixel module R2 and 5×5 pixel module R3 are fed into channels of different sizes, with input sizes 1×3×3×B and 1×5×5×B respectively, where B is the number of bands of the hyperspectral image;

S104: The preprocessed pixel modules are divided into a training sample set and a test sample set.

S2. The hyperspectral 3×3 pixel module R2 and 5×5 pixel module R3 are fed into the multi-channel convolutional neural network model (AMC-CNN) for deep feature extraction. Every channel in the network uses three-dimensional convolution as its feature extractor, extracting spectral and spatial key information from the spatial-spectral fusion feature map to obtain the feature map R4. Specifically: S201. Every channel in the network uses three-dimensional convolution as its feature extractor, extracting spectral and spatial key information from the spatial-spectral fusion feature map to obtain the feature map R4;

S202. In AMC-CNN, the data passes in sequence through two convolutional layers, a pooling layer, a channel attention module, a convolutional layer, a spatial attention module, a pooling layer, a convolutional layer, and a fully connected layer, and the classification result is finally produced by a Softmax layer;

S203. A convolutional layer comprises several convolution kernels used for feature extraction. Let the input image be X; the output Q of the convolutional layer is given by formula (1), Q = f(w * X + b), where * denotes the convolution operation, f is the activation function, w is the weight, and b is the bias;

S204. When the convolutional neural network convolves the hyperspectral remote sensing pixel modules R2 and R3, each convolutional layer uses n convolution kernels, and different kernel sizes are used for the 3×3 and the 5×5 pixel modules;

S205. The pooling layer shrinks the feature space after the convolution operation, reducing the network parameters, speeding up computation, limiting the number of parameters reaching the fully connected layer, and preventing overfitting; pooling operations include max pooling and average pooling layers;

S206. The network model uses the cross-entropy loss function, expressed as formula (2), L = -(1/N) Σ_i Σ_{c=1}^{M} y_ic log(p_ic), where N is the number of samples, M is the number of categories, y_ic is an indicator function taking the value 0 or 1 (1 if the true category of sample i is c, 0 otherwise), and p_ic is the predicted probability that sample i belongs to category c.

S3. During feature extraction, a channel-spatial attention module is introduced to improve the classification performance of the model. It pays more attention to the useful spatial-spectral information, and therefore better extracts the spectral and spatial key information from the spectral fusion feature map, obtaining a fuller spatial-spectral feature map R5. Specifically: S301: The input pixel modules first pass through two consecutive convolutional layers and are then zero-padded so that the multi-scale pixel module vectors match the channel-spatial attention module vectors;

S302: The channel attention module weights the channels of the multi-scale pixel modules. Global max pooling (7, 7, 1) and global average pooling (7, 7, 1) over the width and height produce two feature vectors whose length equals the number of channels of the input feature layer. Both vectors pass through a shared MLP, the outputs are summed element-wise, and a Sigmoid activation maps the result into [0, 1] to give the channel attention Mc. Its elements are multiplied one-to-one with the channels of the input feature layer, and the result is output as the input features required by the spatial attention module, completing the channel attention stage;

S303: the feature layer output by the channel attention module serves as the input feature layer of the spatial attention module, where spatial positions are weighted: features at spatially important locations are emphasized and features at unimportant locations are suppressed. Specifically, the feature map output by the channel attention is taken as the input of the spatial attention module; a channel-wise max pooling and average pooling are applied to the input feature layer, increasing the number of channels of the result; a convolution of size (7, 7, 1) then reduces the dimensionality, and a Sigmoid function gives the weight of each feature point, producing the spatial attention feature map Ms. Finally, Ms is multiplied with the input feature layer to obtain the final output feature map, completing the spatial attention procedure.
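The spatial branch in S303 can be sketched the same way: channel-wise max and average pooling give two (H, W) maps, a small convolution collapses them back to one map, and a sigmoid turns it into per-pixel weights Ms. The naive 2-channel convolution loop and the kernel size are illustrative; the patent's actual kernel is (7, 7, 1) over a 3-D feature volume.

```python
import numpy as np

def spatial_attention(x, kernel):
    """Spatial attention on an (H, W, C) feature map.

    Channel-wise max- and average-pooling give two (H, W) maps; they are
    stacked into two channels, reduced to a single map by a (k, k, 2)
    convolution, and a sigmoid turns that map into per-pixel weights Ms.
    """
    max_map = x.max(axis=2)                        # (H, W) channel-wise max pooling
    avg_map = x.mean(axis=2)                       # (H, W) channel-wise average pooling
    stacked = np.stack([max_map, avg_map], axis=-1)  # (H, W, 2)
    h, w, _ = stacked.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(stacked, ((pad, pad), (pad, pad), (0, 0)))
    ms = np.empty((h, w))
    for i in range(h):                             # naive same-padding 2-channel convolution
        for j in range(w):
            ms[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    ms = 1.0 / (1.0 + np.exp(-ms))                 # weights Ms in (0, 1)
    return x * ms[:, :, None]                      # weight each spatial position

rng = np.random.default_rng(1)
x = rng.standard_normal((7, 7, 4))                 # illustrative feature layer
y = spatial_attention(x, np.zeros((7, 7, 2)))
# with an all-zero kernel every weight is sigmoid(0) = 0.5, so y == 0.5 * x
```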

S4. The feature map R5 obtained after feature extraction by the model is input to the fully connected layer FC, which integrates the abstract features extracted by the convolutional layers. At the output layer, a Softmax function maps the elements of the input vector into the (0, 1) interval; finally the probabilities of the different classes are output and the classification result map R6 is obtained.
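The Softmax mapping used at the output layer is standard and can be shown directly (the max is subtracted only for numerical stability; the logits here are illustrative):

```python
import numpy as np

def softmax(y):
    """Map FC-layer outputs Y to class probabilities in (0, 1) that sum to 1."""
    e = np.exp(y - y.max())   # subtracting the max avoids overflow, leaves result unchanged
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
# probs sums to 1 and the largest logit receives the largest probability
```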

Hyperspectral image classification also has standard technical metrics for the comparative evaluation of results, each computed by a closed-form formula. Common metrics are the overall accuracy (OA), the average accuracy (AA), and the Kappa coefficient.
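The three metrics can be computed from a confusion matrix as sketched below; this assumes every class appears at least once in the ground truth, and the tiny label vectors are illustrative only.

```python
import numpy as np

def classification_metrics(y_true, y_pred, num_classes):
    """Compute OA, AA, and the Kappa coefficient from integer label vectors."""
    cm = np.zeros((num_classes, num_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                    # confusion matrix
    n = cm.sum()
    oa = np.trace(cm) / n                                # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))           # mean of per-class accuracies
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2  # expected chance agreement
    kappa = (oa - pe) / (1.0 - pe)                       # agreement beyond chance
    return oa, aa, kappa

oa, aa, kappa = classification_metrics([0, 0, 1, 1], [0, 0, 1, 0], num_classes=2)
```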

To validate the model described in step 2, experiments were conducted on the Pavia Center dataset. Table 1 compares the classification results of the present invention on the Pavia Center dataset against the traditional SVM method and the deep-learning-based 3D-CNN, dual-channel 3D-CNN, and multi-channel models. Details are as follows:

The classification result map of the present invention is compared with the traditional SVM, the deep-learning-based 3D-CNN, the dual-channel 3D-CNN, and the multi-channel MC-CNN model; the comparison is shown in Figure 3: (a) three-channel RGB image, (b) ground-truth labels, (c) SVM classification result, (d) 3D-CNN classification result, (e) dual-channel 3D-CNN, (f) MC-CNN, (g) classification result of the present invention (AMC-CNN). The comparison shows that all of the algorithms classify the 9 ground-object categories, but several result maps contain many blurred speckles. As Figure 3 shows, the hyperspectral image classification model of step 2 performs best: in the rectangular-box regions the classification speckles decrease progressively, the deep learning methods outperform the traditional method, and the classification method of step 2 is the best of the deep learning methods, with the highest classification accuracy and the best classification effect.

Although specific embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations may be made to these embodiments without departing from the principles and spirit of the invention; the scope of the invention is defined by the appended claims and their equivalents.

Claims (7)

1. A remote sensing image classification method based on an introduced channel-spatial attention mechanism, characterized by comprising the following steps:
S1. obtaining a hyperspectral remote sensing image and preprocessing the hyperspectral remote sensing image data to obtain hyperspectral pixel modules as the feature information of the hyperspectral remote sensing image;
S2. inputting the hyperspectral pixel modules into the multi-channel convolutional neural network model AMC-CNN for deep feature extraction to obtain a spatial-spectral fusion feature map; every channel in the network uses three-dimensional convolution as the feature extractor, and key spectral information and key spatial information are extracted from the spatial-spectral fusion feature map to obtain the feature map R4;
S3. during feature extraction, introducing a channel-spatial attention module to improve the classification performance of the model, and extracting key spectral information and key spatial information from the spectral fusion feature map to obtain a sufficient spatial-spectral feature map R5;
S4. inputting the feature map R5 obtained after feature extraction by the model into the fully connected layer FC, which integrates the abstract features extracted by the convolutional layers; a Softmax function at the output layer maps the elements of the input vector into the (0, 1) interval; finally the probabilities of the different classes are output and the classification result map R6 is obtained.
2. The remote sensing image classification method based on an introduced channel-spatial attention mechanism according to claim 1, characterized in that in step S1 the hyperspectral data R1 is divided during preprocessing into two modules of different scales, namely a 3×3 pixel module R2 and a 5×5 pixel module R3, to obtain the feature information of the hyperspectral pixel modules.
3. The remote sensing image classification method based on an introduced channel-spatial attention mechanism according to claim 2, characterized in that step S1 comprises:
S101: the original hyperspectral remote sensing image is three-dimensional data containing rich spatial information and rich spectral information;
S102: centered on each single pixel, adjacent pixel modules of the original hyperspectral data R1 are obtained at the set scale sizes 3×3 and 5×5;
S103: the obtained 3×3 pixel module R2 and 5×5 pixel module R3 are input into channels of different sizes, with input sizes 1×3×3×B and 1×5×5×B, where B denotes the number of bands of the hyperspectral image;
S104: the preprocessed pixel modules are divided into a training sample set and a test sample set.
4. The remote sensing image classification method based on an introduced channel-spatial attention mechanism according to claim 2, characterized in that step S2 comprises:
S201: every channel in the network uses three-dimensional convolution as the feature extractor, and key spectral information and key spatial information are extracted from the spatial-spectral fusion feature map to obtain the feature map R4;
S202: in the AMC-CNN, the data passes in sequence through two convolutional layers, a pooling layer, a channel attention module, a convolutional layer, a spatial attention module, a pooling layer, a convolutional layer, and a fully connected layer, and the classification result is finally obtained through a Softmax layer;
S203: the convolutional layers comprise several convolution kernels used for feature extraction; with the input image denoted X, the output Q of a convolutional layer is given by formula (1), where w denotes the weight and b denotes the bias;
S204: during the convolution of the hyperspectral remote sensing pixel modules R2 and R3 by the convolutional neural network, the convolutional layers of different levels use n convolution kernels, and different kernel sizes are used for the 3×3 pixel module and the 5×5 pixel module of different scales;
S205: the pooling layers reduce the size of the feature space after the convolution operation, thereby reducing the network parameters, speeding up computation, reducing the number of parameters passed to the fully connected layer, and preventing overfitting; the pooling operations comprise a max pooling layer and an average pooling layer;
S206: the network model uses the cross-entropy loss function, expressed as formula (2), where M is the number of classes, yic is an indicator function taking the value 0 or 1, equal to 1 if the true class of sample i is c and 0 otherwise, and pic is the predicted probability that the observed sample i belongs to class c.
5. The remote sensing image classification method based on an introduced channel-spatial attention mechanism according to claim 4, characterized in that step S3 comprises:
S301: the input pixel module first passes through two consecutive convolutional layers and is then zero-padded so that the multi-scale pixel module vector matches the channel-spatial attention module vector;
S302: the multi-scale pixel module is channel-weighted in the channel attention module; global max pooling (7, 7, 1) and global average pooling (7, 7, 1) over the width and height produce two feature strips whose length equals the number of channels of the input feature layer; both strips pass through a shared MLP, the two outputs are added element-wise, and a Sigmoid activation maps the sum into [0, 1] to give the channel attention Mc, whose neurons correspond one-to-one to the channels of the input feature layer and are multiplied with them; finally the input features required by the spatial attention module are output, completing the channel attention procedure;
S303: the feature layer output by the channel attention module serves as the input feature layer of the spatial attention module, where spatial positions are weighted, completing the spatial attention procedure.
6. The remote sensing image classification method based on an introduced channel-spatial attention mechanism according to claim 5, characterized in that in step S303, when carrying out the spatial attention procedure, the feature map output by the channel attention is first taken as the input of the spatial attention module; a channel-wise max pooling and average pooling are applied to the input feature layer, increasing the number of channels of the result; a convolution of size (7, 7, 1) then reduces the dimensionality, and a Sigmoid function gives the weight of each feature point, producing the spatial attention feature map Ms; finally Ms is multiplied with the input feature layer to obtain the final output feature map, completing the spatial attention procedure.
7. The remote sensing image classification method based on an introduced channel-spatial attention mechanism according to claim 5, characterized in that step S4 comprises:
inputting the feature map R5 obtained after feature extraction by the model into the fully connected layer FC, which uses the Softmax function to output the probabilities of the different classes, wherein the Softmax function normalizes the feature map output by the fully connected layer before it is passed to the output layer; the Softmax function is given by formula (3), where Yi denotes the i-th element of the vector Y, i is a positive integer, Yi is the output of the hyperspectral pixel after the fully connected layer, and Pi denotes the output probability of the ground-object class to which it belongs.
CN202310577290.5A 2023-05-22 2023-05-22 Remote sensing image classification method based on introduction channel-space attention mechanism Pending CN116630700A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310577290.5A CN116630700A (en) 2023-05-22 2023-05-22 Remote sensing image classification method based on introduction channel-space attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310577290.5A CN116630700A (en) 2023-05-22 2023-05-22 Remote sensing image classification method based on introduction channel-space attention mechanism

Publications (1)

Publication Number Publication Date
CN116630700A true CN116630700A (en) 2023-08-22

Family

ID=87620799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310577290.5A Pending CN116630700A (en) 2023-05-22 2023-05-22 Remote sensing image classification method based on introduction channel-space attention mechanism

Country Status (1)

Country Link
CN (1) CN116630700A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117765402A (en) * 2024-02-21 2024-03-26 山东科技大学 Hyperspectral image matching detection method based on attention mechanism
CN118195553A (en) * 2024-05-15 2024-06-14 山东省地质科学研究院 Ecological product information investigation system based on big data
CN118470553A (en) * 2024-07-15 2024-08-09 吉林大学 Hyperspectral remote sensing image processing method based on spatial spectral attention mechanism
CN118506201A (en) * 2024-05-23 2024-08-16 盐城工学院 Remote sensing image classification method and system based on improvement MobileNet v2
CN119169399A (en) * 2024-11-25 2024-12-20 南京信息工程大学 A hyperspectral image classification method
CN120766048A (en) * 2025-09-05 2025-10-10 中国科学院上海技术物理研究所 Hyperspectral classification method based on reconstructed convolutional Transformer

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705526A (en) * 2021-09-07 2021-11-26 安徽大学 Hyperspectral remote sensing image classification method
CN113822207A (en) * 2021-09-27 2021-12-21 海南长光卫星信息技术有限公司 Hyperspectral remote sensing image recognition method, device, electronic device and storage medium
WO2022073452A1 (en) * 2020-10-07 2022-04-14 武汉大学 Hyperspectral remote sensing image classification method based on self-attention context network
CN115471757A (en) * 2022-09-23 2022-12-13 重庆邮电大学 Hyperspectral image classification method based on convolutional neural network and attention mechanism
CN115719445A (en) * 2022-12-20 2023-02-28 齐鲁工业大学 A Seafood Recognition Method Based on Deep Learning and Raspberry Pi 4B Module
CN115909052A (en) * 2022-10-26 2023-04-04 杭州师范大学 Hyperspectral remote sensing image classification method based on hybrid convolutional neural network
CN115953303A (en) * 2023-03-14 2023-04-11 山东省计算中心(国家超级计算济南中心) Multi-scale image compressed sensing reconstruction method and system combining channel attention

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022073452A1 (en) * 2020-10-07 2022-04-14 武汉大学 Hyperspectral remote sensing image classification method based on self-attention context network
CN113705526A (en) * 2021-09-07 2021-11-26 安徽大学 Hyperspectral remote sensing image classification method
CN113822207A (en) * 2021-09-27 2021-12-21 海南长光卫星信息技术有限公司 Hyperspectral remote sensing image recognition method, device, electronic device and storage medium
CN115471757A (en) * 2022-09-23 2022-12-13 重庆邮电大学 Hyperspectral image classification method based on convolutional neural network and attention mechanism
CN115909052A (en) * 2022-10-26 2023-04-04 杭州师范大学 Hyperspectral remote sensing image classification method based on hybrid convolutional neural network
CN115719445A (en) * 2022-12-20 2023-02-28 齐鲁工业大学 A Seafood Recognition Method Based on Deep Learning and Raspberry Pi 4B Module
CN115953303A (en) * 2023-03-14 2023-04-11 山东省计算中心(国家超级计算济南中心) Multi-scale image compressed sensing reconstruction method and system combining channel attention

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHEN CHEN 等: ""Classification of Hyperspectral Data Using a Multi-Channel Convolutional Neural Network"", 《LECTURE NOTES IN ARTIFICIAL INTELLIGENCE》, 18 June 2019 (2019-06-18), pages 81 - 92 *
吴鸿昊 et al.: "A convolutional neural network method for small-sample hyperspectral image classification", 《中国图象图形学报》, vol. 26, no. 08, 31 August 2021 (2021-08-31), pages 2009 - 2020 *
李向春; 张浩; 刘晓燕; 宗芳伊; 刘军礼: "Underwater image enhancement method based on transmittance optimization and color correction", 《山东科学》, no. 02, 12 April 2019 (2019-04-12) *
杨桄 (ed.): 《高光谱图像处理与分析应用》 (Hyperspectral Image Processing and Analysis Applications), 30 June 2021, 吉林大学出版社 (Jilin University Press), pages: 54 - 57 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117765402A (en) * 2024-02-21 2024-03-26 山东科技大学 Hyperspectral image matching detection method based on attention mechanism
CN117765402B (en) * 2024-02-21 2024-05-17 山东科技大学 A hyperspectral image matching detection method based on attention mechanism
CN118195553A (en) * 2024-05-15 2024-06-14 山东省地质科学研究院 Ecological product information investigation system based on big data
CN118506201A (en) * 2024-05-23 2024-08-16 盐城工学院 Remote sensing image classification method and system based on improvement MobileNet v2
CN118506201B (en) * 2024-05-23 2025-01-17 盐城工学院 Remote sensing image classification method and system based on improvement MobileNet v2
CN118470553A (en) * 2024-07-15 2024-08-09 吉林大学 Hyperspectral remote sensing image processing method based on spatial spectral attention mechanism
CN119169399A (en) * 2024-11-25 2024-12-20 南京信息工程大学 A hyperspectral image classification method
CN120766048A (en) * 2025-09-05 2025-10-10 中国科学院上海技术物理研究所 Hyperspectral classification method based on reconstructed convolutional Transformer

Similar Documents

Publication Publication Date Title
CN116630700A (en) Remote sensing image classification method based on introduction channel-space attention mechanism
CN113095409B (en) Hyperspectral Image Classification Method Based on Attention Mechanism and Weight Sharing
CN112801146B (en) A target detection method and system
CN112348036A (en) Adaptive Object Detection Method Based on Lightweight Residual Learning and Deconvolution Cascade
CN111160273B (en) A hyperspectral image space-spectrum joint classification method and device
CN116486251B (en) A Hyperspectral Image Classification Method Based on Multimodal Fusion
CN112101271A (en) Hyperspectral remote sensing image classification method and device
CN113435254A (en) Sentinel second image-based farmland deep learning extraction method
CN111222545B (en) Image classification method based on linear programming incremental learning
CN113673556B (en) A hyperspectral image classification method based on multi-scale dense convolutional network
Tanwar et al. Deep learning-based hybrid model for severity prediction of leaf smut rice infection
CN118485822A (en) A target detection method based on improved YOLOv8
Lin et al. Determination of the varieties of rice kernels based on machine vision and deep learning technology
Kotwal et al. Yolov5-based convolutional feature attention neural network for plant disease classification
CN117877034A (en) Remote sensing image instance segmentation method and model based on dynamic convolution enhancement
CN113052130A (en) Hyperspectral image classification method based on depth residual error network and edge protection filtering
CN116704241A (en) A hyperspectral remote sensing image classification method with full-channel 3D convolutional neural network
Song et al. Using dual-channel CNN to classify hyperspectral image based on spatial-spectral information
CN113469084B (en) Hyperspectral image classification method based on contrast generation countermeasure network
CN119810539B (en) Semi-supervised hyperspectral image intelligent classification method
CN117115675B (en) A cross-temporal lightweight spatial-spectral feature fusion hyperspectral change detection method, system, device and medium
CN119399541A (en) Hyperspectral image classification method based on feature weighted fusion of GAT and CNN
CN119169384A (en) Cultural relic pattern recognition method and device based on attention mechanism and graph convolutional network
Junos et al. Improved hybrid feature extractor in lightweight convolutional neural network for postharvesting technology: automated oil palm fruit grading
Ma et al. DGCC-Fruit: a lightweight fine-grained fruit recognition network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination