
CN114299382B - Hyperspectral remote sensing image classification method and hyperspectral remote sensing image classification system - Google Patents

Hyperspectral remote sensing image classification method and hyperspectral remote sensing image classification system

Info

Publication number
CN114299382B
CN114299382B (application CN202111401330.8A)
Authority
CN
China
Prior art keywords
spatial
spectral
remote sensing
feature map
attention mechanism
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111401330.8A
Other languages
Chinese (zh)
Other versions
CN114299382A (en)
Inventor
王晶晶
孙增钊
张波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN202111401330.8A priority Critical patent/CN114299382B/en
Publication of CN114299382A publication Critical patent/CN114299382A/en
Application granted granted Critical
Publication of CN114299382B publication Critical patent/CN114299382B/en


Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a hyperspectral remote sensing image classification method and system, belonging to the technical field of image processing. The method performs dimensionality-reduction processing on a hyperspectral remote sensing image; extracts spatial features from the dimension-reduced image to obtain a spatial feature map; extracts spectral features from the spatial feature map to obtain a spatial-spectral fusion feature map; extracts spectral key information and spatial key information from the fusion feature map; and processes the image after key-information extraction with an optimizer to obtain the image classification result. The method adopts residual connections, which effectively mitigate the vanishing-gradient phenomenon, and combines spectral and spatial attention mechanisms to extract more complete spatial and spectral features while effectively suppressing noise. Entropy rate superpixel preprocessing of the hyperspectral image pays more attention to the relations between different bands than the traditional principal component analysis method, improving classification accuracy and yielding more accurate and clearer classification results.

Description

Hyperspectral remote sensing image classification method and hyperspectral remote sensing image classification system
Technical Field
The invention relates to the technical field of image processing, in particular to a hyperspectral remote sensing image classification method and a hyperspectral remote sensing image classification system based on a convolutional neural residual network combined with a spectrum attention mechanism and a space attention mechanism.
Background
A hyperspectral remote sensing image is an image acquired by a hyperspectral imager and contains very rich spatial and spectral information. In addition, a hyperspectral image has many spectral bands and extremely high resolution, so its spectral and spatial features can be analyzed to obtain detailed characteristics. Hyperspectral imaging technology is now widely used in fields such as precision agriculture, atmospheric monitoring, and ocean exploration. As its applications in many fields broaden, how to rapidly and accurately classify each pixel of a hyperspectral image has become a primary problem.
For hyperspectral image classification tasks, conventional methods include Random Forests, Decision Trees, Support Vector Machines, and the like. Because they rely on handcrafted features, these methods require operators to have rich prior knowledge of hyperspectral images, are slow in processing and low in working efficiency, and consume substantial manpower for labeling and discrimination. In addition, conventional methods ignore the abundant spatial information, leading to incomplete feature extraction and ultimately lower classification accuracy.
In recent years, some deep learning models have also been applied to hyperspectral image classification. Convolutional neural networks effectively extract features through local connections, significantly reduce the number of parameters through weight sharing, and are widely used in fields such as target recognition and medical image processing. At present, convolutional neural networks come in three forms of convolution kernel, namely 1D-CNN, 2D-CNN, and 3D-CNN, all of which use back-propagation algorithms to update network parameters.
Most convolutional-neural-network methods are classification methods based on joint spatial-spectral features, which can directly extract spectral and spatial information. However, hyperspectral images have similar textures across many bands, which increases computational complexity. In addition, because training samples for hyperspectral datasets are limited, the vanishing-gradient phenomenon can occur as network depth increases, and traditional dimensionality-reduction methods do not consider the homogeneity of adjacent pixels.
Disclosure of Invention
The invention aims to provide a hyperspectral remote sensing image classification method and a hyperspectral remote sensing image classification system based on a convolutional neural residual network combined with a spectrum attention mechanism and a space attention mechanism, so as to solve at least one technical problem in the background art.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
in one aspect, the invention provides a hyperspectral remote sensing image classification method, which comprises the following steps:
performing dimensionality-reduction processing on the hyperspectral remote sensing image;
extracting spatial features from the dimension-reduced image to obtain a spatial feature map;
extracting spectral features from the spatial feature map to obtain a spatial-spectral fusion feature map;
extracting spectral key information and spatial key information from the spatial-spectral fusion feature map;
and processing the image after the spectral key information and the spatial key information are extracted with an optimizer to obtain an image classification result.
Preferably, the dimensionality-reduction processing of the hyperspectral remote sensing image comprises generating superpixels that fit the boundaries of the hyperspectral image using an entropy rate superpixel algorithm.
Preferably, in the entropy rate superpixel algorithm the image is represented as a vertex-edge graph, in which the vertices are pixels and the weights of the edges between vertices are pairwise similarities given in the form of a similarity matrix.
Preferably, each pixel initially belongs to its own class, and among all possible edges the edge that optimizes the objective function is selected and added to the graph.
Preferably, a hybrid convolutional neural network is adopted, in which two-dimensional convolution learns spatial features of the image without losing spectral information and cooperates with three-dimensional convolution to extract spatial-spectral fusion features.
Preferably, the stride in each convolutional layer of the hybrid convolutional neural network is set to 1, and ReLU is used as the activation function.
Preferably, the network is trained with a classification loss function, and the parameters are updated by the optimizer through back propagation.
In a second aspect, the present invention provides a hyperspectral remote sensing image classification system, the system comprising:
the dimensionality-reduction module, used for performing dimensionality-reduction processing on the hyperspectral remote sensing image;
the extraction module, used for extracting spatial features from the dimension-reduced image to obtain a spatial feature map and extracting spectral features from the spatial feature map to obtain a spatial-spectral fusion feature map;
the attention mechanism module, used for extracting spectral key information and spatial key information from the spatial-spectral fusion feature map;
and the classification module, used for processing the image after key-information extraction with an optimizer to obtain an image classification result.
In a third aspect, the present invention provides a non-transitory computer readable storage medium for storing computer instructions which, when executed by a processor, implement a hyperspectral remote sensing image classification method as described above.
In a fourth aspect, the invention provides an electronic device comprising a processor, a memory, and a computer program, wherein the processor is connected to the memory and the computer program is stored in the memory; when the electronic device runs, the processor executes the computer program stored in the memory so that the electronic device executes the instructions implementing the hyperspectral remote sensing image classification method described above.
The method has the following advantages: residual connections effectively mitigate the vanishing-gradient phenomenon; combining spectral and spatial attention mechanisms extracts more complete spatial and spectral features and effectively suppresses noise; and entropy rate superpixel preprocessing of the hyperspectral image pays more attention to the relations between different bands than the traditional principal component analysis method, improving classification accuracy and yielding more accurate and clearer classification results.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of performing hyperspectral remote sensing image classification by using a convolutional neural residual network based on an attention mechanism of super-pixel preprocessing according to an embodiment of the present invention.
Fig. 2 is a schematic diagram showing comparison of classification results of hyperspectral remote sensing images according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements throughout or elements having like or similar functionality. The embodiments described below by way of the drawings are exemplary only and should not be construed as limiting the invention.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or groups thereof.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
In order that the invention may be readily understood, a further description of the invention will be rendered by reference to specific embodiments that are illustrated in the appended drawings and are not to be construed as limiting embodiments of the invention.
It will be appreciated by those skilled in the art that the drawings are merely schematic representations of examples and that the elements of the drawings are not necessarily required to practice the invention.
Example 1
Embodiment 1 provides a hyperspectral remote sensing image classification system, which includes:
the dimensionality-reduction module, used for performing dimensionality-reduction processing on the hyperspectral remote sensing image;
the extraction module, used for extracting spatial features from the dimension-reduced image to obtain a spatial feature map and extracting spectral features from the spatial feature map to obtain a spatial-spectral fusion feature map;
the attention mechanism module, used for extracting spectral key information and spatial key information from the spatial-spectral fusion feature map;
and the classification module, used for processing the image after the spectral key information and the spatial key information are extracted with an optimizer to obtain an image classification result.
In this embodiment 1, the hyperspectral remote sensing image classification method is implemented by using the hyperspectral remote sensing image classification system, and the method includes:
The dimensionality-reduction module first performs dimensionality-reduction processing on the hyperspectral remote sensing image; the extraction module then extracts spatial features from the dimension-reduced image to obtain a spatial feature map, and extracts spectral features from the spatial feature map to obtain a spatial-spectral fusion feature map; the attention mechanism module extracts spectral key information and spatial key information from the fusion feature map; and finally the classification module processes the image after key-information extraction with the optimizer to obtain the image classification result.
In embodiment 1, the dimensionality-reduction processing of the hyperspectral remote sensing image comprises generating superpixels that fit the boundaries of the hyperspectral image using an entropy rate superpixel algorithm. In this algorithm, the image is represented as a vertex-edge graph, in which the vertices are pixels and the edge weights between vertices are pairwise similarities given in the form of a similarity matrix. Each pixel initially belongs to its own class, and among all possible edges the edge that optimizes the objective function is selected and added to the graph.
Specifically, the image is first represented in the form P = (V, E), where V is the vertex set, composed of pixels, and E is the edge set, whose weights are pairwise similarities given in the form of a similarity matrix. Initially each pixel belongs to its own class; then, among all possible edges, the edge that optimizes the following objective function is selected and added to the graph:
max_A H(A) + λB(A), s.t. A ⊆ E;
where H(A) is an entropy rate term based on random walks that encourages uniform and compact clusters, B(A) is a balance term that makes cluster sizes similar, and λ balances the weights of the entropy rate term and the balance term.
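A toy sketch of this greedy edge selection may help (heavily simplified and assumption-laden: the edge weight stands in for the entropy-rate gain, the balance term follows the form B(A) = H(Z_A) - N_A, and the 4-pixel graph, weights, and λ are invented for illustration):

```python
import numpy as np

def balance_term(labels):
    # B(A) = H(Z_A) - N_A: entropy of the cluster-size distribution
    # minus the number of clusters (favors clusters of similar size).
    _, sizes = np.unique(labels, return_counts=True)
    p = sizes / sizes.sum()
    return -(p * np.log(p)).sum() - len(sizes)

def greedy_superpixels(n_pixels, edges, lam=1.0, k=2):
    # edges: list of (i, j, w). Greedily add the edge with the best
    # objective gain until k clusters remain. The weight w stands in
    # for the entropy-rate gain H(A) here (a simplification).
    labels = np.arange(n_pixels)
    while len(np.unique(labels)) > k:
        best = None
        for i, j, w in edges:
            if labels[i] == labels[j]:
                continue  # edge inside an existing cluster
            trial = labels.copy()
            trial[trial == trial[j]] = trial[i]  # merge j's cluster into i's
            gain = w + lam * (balance_term(trial) - balance_term(labels))
            if best is None or gain > best[0]:
                best = (gain, trial)
        labels = best[1]
    return labels

# 4 pixels on a line; strong similarity inside {0,1} and {2,3}.
edges = [(0, 1, 0.9), (1, 2, 0.1), (2, 3, 0.8)]
seg = greedy_superpixels(4, edges, lam=0.5, k=2)
```

The high-weight edges are absorbed first, so the weakly similar boundary between pixels 1 and 2 survives as the superpixel border.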
In this embodiment 1, a hybrid convolutional neural network is adopted, in which two-dimensional convolution learns spatial features of the image without losing spectral information and cooperates with three-dimensional convolution to extract spatial-spectral fusion features.
Specifically, a 2D-3D hybrid convolutional neural network is constructed. The 2D convolution focuses on spatial feature learning on the image without losing spectral information, and the 3D convolution cooperates to extract spatial-spectral features and reduce network parameters. The first three convolutional layers use 32 convolution blocks of 3×3×3, 16 of 3×3×5, and 8 of 3×3×7, respectively, which then feed into 64 3×3 convolutional layers; the stride in each layer is set to 1. To increase the expressive capacity of the model, ReLU is used as the activation function:
f_{ReLU}(x) = max(0, w^{(i)T} x_j + b^{(i)});
where w^{(i)T} x_j + b^{(i)} is the linear transformation applied to the input vector x of a layer of the neural network, and the nonlinear result finally output depends on the current position of the neuron in the network structure. Multiple groups of parameters are trained simultaneously, and the largest activation value is selected as the activation value of the next layer. Here w denotes the weight, x_j the input of the neuron, w^{(i)T} the transposed weight vector, and b^{(i)} the bias.
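The activation formula above can be sketched in a few lines of numpy (the weight and input shapes are invented for illustration; in the actual network the linear transformation is a convolution):

```python
import numpy as np

def relu_layer(x, w, b):
    # f_ReLU(x) = max(0, w^T x + b): linear transform followed by
    # element-wise rectification.
    return np.maximum(0.0, w.T @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)           # input vector of one layer
w = rng.standard_normal((8, 4))      # weights: 8 inputs -> 4 units
b = np.zeros(4)                      # biases
out = relu_layer(x, w, b)
```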
In this embodiment 1, a spectral attention mechanism and a spatial attention mechanism are added to the attention mechanism module. Specifically, the dual attention mechanism derives attention maps along the two independent dimensions of spectrum and space, multiplies the attention maps with the input feature map, and performs adaptive feature refinement. Given a feature map G ∈ R^{H×W×C}, a one-dimensional spectral attention map M_spe ∈ R^{1×1×C} and a two-dimensional spatial attention map M_spa ∈ R^{H×W×1} are obtained after passing through the extraction module, where H, W, and C denote the height, width, and number of spectral bands, respectively. The dual attention mechanism can be represented by the following formulas:
G' = M_spe ⊗ G; G'' = M_spa ⊗ G';
where ⊗ denotes element-wise multiplication, G' is the output after the spectral attention mechanism, and G'' is the output result after the spatial attention mechanism.
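A loose numpy sketch of this dual attention follows (an assumption-laden simplification: plain sigmoid gates over pooled features stand in for the learned attention sub-networks, which the patent does not specify):

```python
import numpy as np

def spectral_attention(G):
    # M_spe in R^{1x1xC}: squeeze the spatial dims by global average
    # pooling, then a sigmoid gate per spectral band.
    m = G.mean(axis=(0, 1), keepdims=True)   # shape (1, 1, C)
    return 1.0 / (1.0 + np.exp(-m))

def spatial_attention(G):
    # M_spa in R^{HxWx1}: squeeze the spectral dim by averaging,
    # then a sigmoid gate per spatial location.
    m = G.mean(axis=2, keepdims=True)        # shape (H, W, 1)
    return 1.0 / (1.0 + np.exp(-m))

H, W, C = 5, 5, 8
G = np.random.default_rng(1).standard_normal((H, W, C))
G1 = spectral_attention(G) * G     # G'  = M_spe (x) G
G2 = spatial_attention(G1) * G1    # G'' = M_spa (x) G'
```

Broadcasting handles the element-wise multiplication: the (1,1,C) map scales each band, and the (H,W,1) map scales each spatial location.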
In this embodiment 1, the output of the attention mechanism module passes through two Dropout layers, which also include dense connections and ReLU activation functions, and the result is finally obtained by a softmax classification layer.
In this embodiment 1, the network is trained with a classification loss function, and the parameters are updated by the optimizer through back propagation. The classification loss function is expressed as follows:
loss = -(1/M) Σ_{m=1}^{M} Σ_{l=1}^{L} y_m^l ln(ŷ_m^l);
where y_m^l and ŷ_m^l denote the true value and the predicted value, respectively, M is the total number of samples in a mini-batch, and L is the total number of land-cover classes.
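Assuming the classification loss is the standard softmax cross-entropy, which matches the description here, a minimal numpy sketch is:

```python
import numpy as np

def softmax_cross_entropy(y_true, logits):
    # loss = -(1/M) sum_m sum_l y_m^l * log(yhat_m^l)
    # y_true: one-hot labels, shape (M, L); logits: raw scores (M, L).
    z = logits - logits.max(axis=1, keepdims=True)   # numeric stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -(y_true * np.log(probs + 1e-12)).sum(axis=1).mean()

y_true = np.eye(3)[[0, 1, 2]]              # 3 samples, 3 classes
confident = np.log(np.eye(3) * 0.98 + 0.01)  # logits near the labels
loss = softmax_cross_entropy(y_true, confident)
```

A confident prediction gives a loss near zero, while all-zero logits (a uniform prediction) give ln(L).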
Example 2
Embodiment 2 provides a hyperspectral remote sensing image classification method based on a convolutional neural residual network with an attention mechanism and superpixel preprocessing. Residual connections effectively mitigate vanishing gradients; the dual attention mechanism allows the network to extract spatial and spectral features more completely and suppress noise; and entropy rate superpixel preprocessing of the hyperspectral image places more emphasis on the relations between different bands, improving classification accuracy and yielding accurate and clear classification results.
In this embodiment 2, the overall architecture of the attention-mechanism convolutional neural residual network is shown in fig. 1. The image first undergoes dimensionality reduction (PCA), then the ERS (Entropy Rate Superpixel) algorithm, then PCA again, and finally enters the convolutional network (Conv), where batch normalization (BN) is set between each convolutional layer and the activation function layer (ReLU), with residual connections across the convolutional blocks. The Flatten layer "flattens" the input, i.e., makes it one-dimensional, and is often used in the transition from the convolutional layers to the fully connected layers. The Dense layer maps the previously extracted feature correlations onto the output space.
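The PCA dimensionality-reduction step of this pipeline can be sketched with a plain SVD (a generic sketch: the cube size and the choice of 3 retained components are arbitrary here, not taken from the patent):

```python
import numpy as np

def pca_reduce(X, n_components):
    # X: (n_pixels, n_bands). Center the bands, then project onto the
    # top principal components of the band covariance via SVD.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy 10x10 hyperspectral cube with 30 bands.
cube = np.random.default_rng(2).standard_normal((10, 10, 30))
flat = cube.reshape(-1, 30)                 # pixels x bands
reduced = pca_reduce(flat, 3).reshape(10, 10, 3)
```

The retained components are ordered by explained variance, so the first channel of the reduced cube carries the most signal.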
The method comprises the following specific steps:
The hyperspectral image is first reduced in dimensionality. This process uses the entropy rate superpixel algorithm to generate superpixels that fit the boundaries of the hyperspectral image and are uniform and compact in size; pixel-level feature extraction is important for classification, so the differences between different spectral features cannot be ignored. The image is represented in the form P = (V, E), where V is the vertex set, composed of pixels, and E is the edge set, whose weights are pairwise similarities given in the form of a similarity matrix. Initially each pixel belongs to its own class; then, among all possible edges, the edge that optimizes the objective function is selected and added to the graph using the following algorithm:
max_A H(A) + λB(A), s.t. A ⊆ E;
where H(A) is an entropy rate term based on random walks that encourages uniform and compact clusters, B(A) is a balance term that makes cluster sizes similar, and λ balances the weights of the entropy rate term and the balance term.
A 2D-3D hybrid convolutional neural network is constructed, as shown in fig. 1. The 2D convolution focuses on spatial feature learning on the image without losing spectral information, and the 3D convolution cooperates to extract spatial-spectral features and reduce network parameters. The first three layers use 32 convolution blocks of 3×3×3, 16 of 3×3×5, and 8 of 3×3×7, respectively, which then feed into 64 3×3 convolutional layers; the stride in each layer is set to 1.
To increase the expressive capacity of the model, ReLU is used as the activation function:
f_{ReLU}(x) = max(0, w^{(i)T} x_j + b^{(i)})
where w^{(i)T} x_j + b^{(i)} is the linear transformation applied to the input vector x of a layer of the neural network, and the nonlinear result finally output depends on the current position of the neuron in the network structure.
In this embodiment 2, to prevent vanishing gradients, residual connections are added to the convolutional layers; then a spectral attention mechanism and a spatial attention mechanism are added, respectively.
Finally, after two Dropout layers, which also include dense connections and ReLU activation functions, the result is obtained by a softmax classification layer.
In this embodiment 2, the dual attention mechanism derives attention maps along the two independent dimensions of spectrum and space, multiplies the attention maps with the input feature map, and performs adaptive feature refinement. Integrating the attention module into the convolutional network adds negligible overhead but greatly improves the accuracy of image classification and object detection.
Using the above network, given a feature map G ∈ R^{H×W×C}, a one-dimensional spectral attention map M_spe ∈ R^{1×1×C} and a two-dimensional spatial attention map M_spa ∈ R^{H×W×1} can be obtained, where H, W, and C denote the height, width, and number of spectral bands, respectively. The dual attention mechanism can be represented by the following formulas:
G' = M_spe ⊗ G; G'' = M_spa ⊗ G';
where ⊗ denotes element-wise multiplication, G' is the output after the spectral attention mechanism, and G'' is the output result after the spatial attention mechanism.
In this embodiment 2, an Adam optimizer, which is computationally efficient and has small memory requirements, is used; the network is trained with the softmax loss, and the parameters are updated by back propagation. The classification loss function is expressed as follows:
loss = -(1/M) Σ_{m=1}^{M} Σ_{l=1}^{L} y_m^l ln(ŷ_m^l);
where y_m^l and ŷ_m^l denote the true value and the predicted value, respectively, M is the total number of samples in a mini-batch, and L is the total number of land-cover classes.
In this embodiment 2, extensive experiments were performed on the Indian Pines dataset and the Xuzhou dataset. The results obtained by the model are compared with the ground truth of each dataset, and the classification results are evaluated with three indices: OA (overall accuracy), AA (average accuracy), and the Kappa coefficient. The Kappa coefficient is an index measuring classification consistency: multiply the total number of pixels N over all ground-truth classes by the sum of the diagonal elements X_kk of the confusion matrix, subtract the sum over all classes of the product of each class's ground-truth pixel total and the total number of pixels classified into that class, and divide by N squared minus that same sum:
Kappa = (N Σ_k X_kk - Σ_k X_{k+} X_{+k}) / (N² - Σ_k X_{k+} X_{+k});
where X_{k+} and X_{+k} are the row and column sums of the confusion matrix.
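The three evaluation indices can be computed from a confusion matrix as follows (the 2-class matrix is invented for illustration):

```python
import numpy as np

def accuracy_metrics(conf):
    # conf[i, j]: number of pixels of true class i assigned to class j.
    N = conf.sum()
    oa = np.trace(conf) / N                           # overall accuracy
    aa = (np.diag(conf) / conf.sum(axis=1)).mean()    # average accuracy
    # Kappa = (N * sum_k X_kk - sum_k X_k+ * X_+k) / (N^2 - sum_k X_k+ * X_+k)
    pe = (conf.sum(axis=1) * conf.sum(axis=0)).sum() / N**2
    kappa = (oa - pe) / (1 - pe)                      # chance-corrected agreement
    return oa, aa, kappa

conf = np.array([[50, 2],
                 [3, 45]])
oa, aa, kappa = accuracy_metrics(conf)
```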
In this embodiment 2, the evaluation results are compared with current classification models, as shown in fig. 2, where (a) is the original hyperspectral remote sensing image, (b) is the SVM classification image, (c) is the classification result of the 2D-CNN network, (d) of the 3D-CNN network, (e) of the SSRN network, (f) of the DRNN network, (g) of the method described in this embodiment 2, and (h) is the true classification label image. The hyperspectral remote sensing image method described in embodiment 2 has higher classification accuracy, a better classification effect, and certain practicability.
Example 3
Embodiment 3 of the present invention provides a non-transitory computer readable storage medium for storing computer instructions which, when executed by a processor, implement a hyperspectral remote sensing image classification method as described above, the method comprising:
performing dimensionality-reduction processing on the hyperspectral remote sensing image;
extracting spatial features from the dimension-reduced image to obtain a spatial feature map;
extracting spectral features from the spatial feature map to obtain a spatial-spectral fusion feature map;
extracting spectral key information and spatial key information from the spatial-spectral fusion feature map;
and processing the image after the spectral key information and the spatial key information are extracted with an optimizer to obtain an image classification result.
Example 4
Embodiment 4 of the present invention provides a computer program product comprising a computer program which, when run on one or more processors, implements the hyperspectral remote sensing image classification method described above, the method comprising:
performing dimensionality-reduction processing on the hyperspectral remote sensing image;
extracting spatial features from the dimension-reduced image to obtain a spatial feature map;
extracting spectral features from the spatial feature map to obtain a spatial-spectral fusion feature map;
extracting spectral key information and spatial key information from the spatial-spectral fusion feature map;
and processing the image after the spectral key information and the spatial key information are extracted with an optimizer to obtain an image classification result.
Example 5
Embodiment 5 of the invention provides an electronic device comprising a processor, a memory, and a computer program, wherein the processor is connected to the memory and the computer program is stored in the memory; when the electronic device runs, the processor executes the computer program stored in the memory so that the electronic device executes the instructions implementing the hyperspectral remote sensing image classification method described above, the method comprising:
performing dimensionality-reduction processing on the hyperspectral remote sensing image;
extracting spatial features from the dimension-reduced image to obtain a spatial feature map;
extracting spectral features from the spatial feature map to obtain a spatial-spectral fusion feature map;
extracting spectral key information and spatial key information from the spatial-spectral fusion feature map;
and processing the image after the spectral key information and the spatial key information are extracted with an optimizer to obtain an image classification result.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While the foregoing description of the embodiments of the present invention has been presented in conjunction with the drawings, it is not intended to limit the scope of the invention; various changes and modifications that one skilled in the art could make without inventive effort would still fall within the scope of the invention.

Claims (8)

1. A hyperspectral remote sensing image classification method, characterized by comprising:

performing dimensionality reduction on a hyperspectral remote sensing image;

extracting spatial features of the dimension-reduced image to obtain a spatial feature map, and extracting spectral features of the spatial feature map to obtain a spatial-spectral fusion feature map, using a hybrid convolutional neural network in which two-dimensional convolution performs spatial feature learning on the image without losing spectral information and cooperates with three-dimensional convolution to extract the spatial-spectral fusion features; specifically, a 2D-3D hybrid convolutional neural network is constructed whose first three convolutional layers use 32 3×3×3, 16 3×3×5 and 8 3×3×7 convolution blocks respectively, after which the features are fed into 64 3×3 convolutional layers, with the stride in each layer set to 1; ReLU, defined as ReLU(x) = max(0, x), is used as the activation function, where the input is the linear transformation applied to the input vector of the previous network layer and the final nonlinear output depends on the neuron's current position in the network structure; multiple sets of parameters are trained simultaneously and the largest activation value is selected as the activation value of the next layer; the parameters of the formula denote, respectively, the weight, the neuron, the back-propagated weight and the bias;

a spectral attention mechanism and a spatial attention mechanism are added to the attention mechanism module; specifically, the dual attention mechanism derives attention maps along two independent dimensions, the spectral and the spatial order, multiplies the attention maps by the input feature map, and performs adaptive feature refinement; given a feature map F, a one-dimensional spectral attention map M_c and a two-dimensional spatial attention map M_s are obtained after the extraction module, where H, W and C denote the height, the width and the number of spectral bands respectively; the dual attention mechanism is expressed as F' = M_c(F) ⊗ F and F'' = M_s(F') ⊗ F', where ⊗ denotes element-wise multiplication, F' denotes the output after the spectral attention mechanism and F'' denotes the output result after the spatial attention mechanism;

after the output of the attention mechanism module, the features pass through two Dropout layers, which likewise include dense connections and ReLU activation functions, and the result is finally obtained through a softmax classification layer;

extracting spectral key information and spatial key information from the spatial-spectral fusion feature map; and

processing the image from which the spectral key information and the spatial key information have been extracted with an optimizer to obtain an image classification result.

2. The hyperspectral remote sensing image classification method according to claim 1, wherein performing dimensionality reduction on the hyperspectral remote sensing image comprises: using an entropy rate superpixel algorithm to generate superpixels that fit the boundaries of the hyperspectral image.

3. The hyperspectral remote sensing image classification method according to claim 2, wherein in the entropy rate superpixel algorithm the image is represented in the form of a vertex-edge network, in which the vertices consist of pixels and the weight of each edge connecting vertices consists of the pairwise similarity given in the form of a similarity matrix.

4. The hyperspectral remote sensing image classification method according to claim 3, wherein each pixel belongs to one category and, among all feasible edges, the edge that optimizes the objective function is selected and added to the graph.

5. The hyperspectral remote sensing image classification method according to claim 1, wherein the back-propagation algorithm of the optimizer is trained using a classification loss function and the parameters are updated by back-propagation.

6. A hyperspectral remote sensing image classification system, characterized by comprising:

a dimensionality reduction module for performing dimensionality reduction on a hyperspectral remote sensing image;

an extraction module for extracting spatial features of the dimension-reduced image to obtain a spatial feature map and extracting spectral features of the spatial feature map to obtain a spatial-spectral fusion feature map, using a hybrid convolutional neural network in which two-dimensional convolution performs spatial feature learning on the image without losing spectral information and cooperates with three-dimensional convolution to extract the spatial-spectral fusion features; specifically, a 2D-3D hybrid convolutional neural network is constructed whose first three convolutional layers use 32 3×3×3, 16 3×3×5 and 8 3×3×7 convolution blocks respectively, after which the features are fed into 64 3×3 convolutional layers, with the stride in each layer set to 1; ReLU, defined as ReLU(x) = max(0, x), is used as the activation function, where the input is the linear transformation applied to the input vector of the previous network layer and the final nonlinear output depends on the neuron's current position in the network structure; multiple sets of parameters are trained simultaneously and the largest activation value is selected as the activation value of the next layer; the parameters of the formula denote, respectively, the weight, the neuron, the back-propagated weight and the bias; a spectral attention mechanism and a spatial attention mechanism are added to the attention mechanism module; specifically, the dual attention mechanism derives attention maps along two independent dimensions, the spectral and the spatial order, multiplies the attention maps by the input feature map, and performs adaptive feature refinement; given a feature map F, a one-dimensional spectral attention map M_c and a two-dimensional spatial attention map M_s are obtained after the extraction module, where H, W and C denote the height, the width and the number of spectral bands respectively; the dual attention mechanism is expressed as F' = M_c(F) ⊗ F and F'' = M_s(F') ⊗ F', where ⊗ denotes element-wise multiplication, F' denotes the output after the spectral attention mechanism and F'' denotes the output result after the spatial attention mechanism; after the output of the attention mechanism module, the features pass through two Dropout layers, which likewise include dense connections and ReLU activation functions, and the result is finally obtained through a softmax classification layer;

an attention mechanism module for extracting spectral key information and spatial key information from the spatial-spectral fusion feature map; and

a classification module for processing the image, after the key information has been extracted, with an optimizer to obtain an image classification result.

7. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions which, when executed by a processor, implement the hyperspectral remote sensing image classification method according to any one of claims 1-5.

8. An electronic device, characterized by comprising: a processor, a memory and a computer program; wherein the processor is connected to the memory, the computer program is stored in the memory, and when the electronic device runs, the processor executes the computer program stored in the memory so that the electronic device executes instructions implementing the hyperspectral remote sensing image classification method according to any one of claims 1-5.
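As an illustration of the dual attention formulation in the claims — a spectral attention map applied to the feature map by element-wise multiplication, followed by a spatial attention map — the following NumPy sketch shows one way the two refinement steps compose. The average/max pooling and the sigmoid are assumptions following the common CBAM-style design; the claims fix only the order and the element-wise products.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spectral_attention(f):
    """One-dimensional spectral attention map from per-band statistics."""
    # Pool over the spatial dimensions (H, W); CBAM-style assumption.
    avg = f.mean(axis=(0, 1))
    mx = f.max(axis=(0, 1))
    return sigmoid(avg + mx)            # shape (C,), broadcasts over H x W

def spatial_attention(f):
    """Two-dimensional spatial attention map from per-pixel statistics."""
    # Pool over the spectral dimension C.
    avg = f.mean(axis=2, keepdims=True)
    mx = f.max(axis=2, keepdims=True)
    return sigmoid(avg + mx)            # shape (H, W, 1), broadcasts over C

def dual_attention(f):
    """F'' = M_s(F') * F' with F' = M_c(F) * F (element-wise products)."""
    f_prime = spectral_attention(f) * f              # refine along the spectral axis
    f_dprime = spatial_attention(f_prime) * f_prime  # then along the spatial axes
    return f_dprime
```

Because both attention maps are squashed through a sigmoid, refinement only rescales the input feature map band-wise and pixel-wise; the output keeps the H × W × C shape of the input.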
CN202111401330.8A 2021-11-19 2021-11-19 Hyperspectral remote sensing image classification method and hyperspectral remote sensing image classification system Active CN114299382B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111401330.8A CN114299382B (en) 2021-11-19 2021-11-19 Hyperspectral remote sensing image classification method and hyperspectral remote sensing image classification system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111401330.8A CN114299382B (en) 2021-11-19 2021-11-19 Hyperspectral remote sensing image classification method and hyperspectral remote sensing image classification system

Publications (2)

Publication Number Publication Date
CN114299382A CN114299382A (en) 2022-04-08
CN114299382B true CN114299382B (en) 2025-09-16

Family

ID=80966258

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111401330.8A Active CN114299382B (en) 2021-11-19 2021-11-19 Hyperspectral remote sensing image classification method and hyperspectral remote sensing image classification system

Country Status (1)

Country Link
CN (1) CN114299382B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880346B (en) * 2023-02-10 2023-05-23 耕宇牧星(北京)空间科技有限公司 Precise registration method of visible light remote sensing image based on deep learning
CN116403046A (en) * 2023-04-13 2023-07-07 中国人民解放军海军航空大学 Hyperspectral image classification device and method
CN118053051A (en) * 2024-04-16 2024-05-17 南京信息工程大学 Hyperspectral remote sensing image classification method based on superpixel self-attention mechanism
CN119169399B (en) * 2024-11-25 2025-04-22 南京信息工程大学 Hyperspectral image classification method

Citations (2)

Publication number Priority date Publication date Assignee Title
CN111695467A (en) * 2020-06-01 2020-09-22 西安电子科技大学 Spatial spectrum full convolution hyperspectral image classification method based on superpixel sample expansion
CN112580480A (en) * 2020-12-14 2021-03-30 河海大学 Hyperspectral remote sensing image classification method and device

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN109215053B (en) * 2018-10-16 2021-04-27 西安建筑科技大学 A moving vehicle detection method with pause state in UAV aerial video
CN111310598B (en) * 2020-01-20 2023-06-20 浙江工业大学 A Hyperspectral Remote Sensing Image Classification Method Based on 3D and 2D Hybrid Convolution
CN112633202B (en) * 2020-12-29 2022-09-16 河南大学 A hyperspectral image classification algorithm based on double denoising and multi-scale superpixel dimensionality reduction

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN111695467A (en) * 2020-06-01 2020-09-22 西安电子科技大学 Spatial spectrum full convolution hyperspectral image classification method based on superpixel sample expansion
CN112580480A (en) * 2020-12-14 2021-03-30 河海大学 Hyperspectral remote sensing image classification method and device

Also Published As

Publication number Publication date
CN114299382A (en) 2022-04-08

Similar Documents

Publication Publication Date Title
CN114299382B (en) Hyperspectral remote sensing image classification method and hyperspectral remote sensing image classification system
Duan et al. SAR image segmentation based on convolutional-wavelet neural network and Markov random field
Thai et al. Image classification using support vector machine and artificial neural network
CN105608433B (en) A kind of hyperspectral image classification method based on nuclear coordination expression
CN110443286B (en) Training method of neural network model, image recognition method and device
Chen et al. Convolutional neural network based dem super resolution
CN113870157A (en) A SAR Image Synthesis Method Based on CycleGAN
Liao et al. A two-stage mutual fusion network for multispectral and panchromatic image classification
Alipourfard et al. A novel deep learning framework by combination of subspace-based feature extraction and convolutional neural networks for hyperspectral images classification
Bhandari et al. A new beta differential evolution algorithm for edge preserved colored satellite image enhancement
CN109284741A (en) A large-scale remote sensing image retrieval method and system based on deep hash network
Shi et al. (SARN) spatial-wise attention residual network for image super-resolution
Lei et al. Agricultural surface water extraction in environmental remote sensing: A novel semantic segmentation model emphasizing contextual information enhancement and foreground detail attention
CN113850315A (en) Hyperspectral image classification method and device combining EMP (empirical mode decomposition) features and TNT (trinitrotoluene) modules
Park et al. Learning affinity with hyperbolic representation for spatial propagation
Ye et al. A novel semi-supervised learning framework for hyperspectral image classification
CN114220021B (en) A remote sensing image classification algorithm and method based on parallel 3D-2D-1D CNN
Huang et al. DeeptransMap: a considerably deep transmission estimation network for single image dehazing
CN111860068A (en) A fine-grained bird recognition method based on cross-layer simplified bilinear network
Shelare et al. Stridenet: Swin transformer for terrain recognition with dynamic roughness extraction
Sánchez et al. Robust multiband image segmentation method based on user clues
CN116993760A (en) A gesture segmentation method, system, device and medium based on graph convolution and attention mechanism
Rajyalakshmi et al. Hyperspectral Image: A Fusion-Based Extraction and Classification Utilizing Centernet Technique
Ouzounis et al. Interactive collection of training samples from the max-tree structure
Kanth et al. Multi-modal image super-resolution with joint coupled deep transform learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant