
CN114764884A - End-to-end polarization SAR image classification method based on superpixel and graph convolution - Google Patents


Info

Publication number
CN114764884A
Authority
CN
China
Prior art keywords
matrix
pixel
network
polarization
superpixel
Prior art date
Legal status
Granted
Application number
CN202210005850.5A
Other languages
Chinese (zh)
Other versions
CN114764884B (en)
Inventor
金海燕
贺天生
石俊飞
信程
Current Assignee
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN202210005850.5A
Publication of CN114764884A
Application granted
Publication of CN114764884B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G06F18/253 - Fusion techniques of extracted features
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an end-to-end polarimetric SAR image classification method based on superpixels and graph convolution, comprising the following steps: Step 1, input the polarimetric SAR image to be classified and crop it into patches of uniform size; Step 2, divide the cropped patches into a training set and a test set according to a set ratio; Step 3, decompose the complex scattering matrix of each pixel of each image in the training and test sets to generate a polarization coherence matrix, and convert it into a row vector that serves as the pixel's polarization feature; Step 4, concatenate the polarization feature with the pixel's horizontal and vertical coordinates, and use the concatenated row vector as the pixel feature; Step 5, build an end-to-end network from a fully convolutional network, a graph convolutional network, and a convolutional neural network; Step 6, feed the training set into the end-to-end network for joint training, and feed the test set into the trained network to obtain the result. The method further improves the classification accuracy of polarimetric SAR images.

Description

An end-to-end polarimetric SAR image classification method based on superpixels and graph convolution

Technical Field

The invention belongs to the technical field of image processing and remote sensing, and relates to an end-to-end polarimetric SAR image classification method based on superpixels and graph convolution.

Background Art

Polarimetric synthetic aperture radar (PolSAR) image classification is one of the most active research directions in the PolSAR field; it provides basic support for many applications such as land-use surveys, geographic national-conditions monitoring, and urban and rural planning. The classification problem for polarimetric SAR images is to determine, by algorithm, the category of each pixel in the image and thereby identify the ground object it corresponds to.

In traditional methods that classify polarimetric SAR images based on superpixel segmentation, the segmentation is performed as a separate preprocessing task; the segmented image is then fed into a polarimetric SAR classification network to obtain the classification result.

Because PolSAR images contain considerable random speckle noise and have relatively low resolution, and different ground-object types often exhibit similar characteristics, traditional superpixel segmentation methods are strongly disturbed, and the segmentation result directly affects the output of the downstream polarimetric SAR classification network.

Summary of the Invention

The purpose of the present invention is to provide an end-to-end polarimetric SAR image classification method based on superpixels and graph convolution that further improves the classification accuracy of polarimetric SAR images.

The technical scheme adopted by the present invention is as follows:

An end-to-end polarimetric SAR image classification method based on superpixels and graph convolution, comprising the following steps:

Step 1: input the polarimetric SAR image to be classified and crop it into patches of uniform size.

Step 2: divide the cropped patches into a training set and a test set according to a set ratio.

Step 3: decompose the complex scattering matrix of each pixel of each image in the training and test sets to generate a polarization coherence matrix, and convert it into a row vector that serves as the pixel's polarization feature.

Step 4: concatenate the polarization feature with the pixel's horizontal and vertical coordinates; the concatenated row vector is the pixel feature.

Step 5: build an end-to-end network from a fully convolutional network, a graph convolutional network, and a convolutional neural network.

Step 6: feed the training set into the end-to-end network for joint training, and feed the test set into the trained network to obtain the result.

The specific steps of step 3 are:

Decompose the complex scattering matrix of each pixel of each cropped image from step 2 to generate the polarization coherence matrix, and convert it into a 1×9 row vector as the pixel's polarization feature. The polarization coherence matrix T is:

    T = [ T11  T12  T13 ]
        [ T21  T22  T23 ]
        [ T31  T32  T33 ]

The polarization coherence matrix T is flattened into a row vector to obtain the feature matrix T′ = [T11, T12, T13, T21, T22, T23, T31, T32, T33]. Since T is Hermitian (complex conjugate symmetric), it is preprocessed to obtain the pixel's real-valued polarization feature vector F:

    F = [T11, T22, T33, Re(T12), Im(T12), Re(T13), Im(T13), Re(T23), Im(T23)]

where Re denotes the real part of a complex number, Im the imaginary part, and Tij the element in row i, column j of the polarization coherence matrix.

Step 4 specifically: concatenate the polarization feature vector of each pixel generated in step 3 with the pixel's horizontal and vertical coordinates to obtain the pixel feature.

The structure of the fully convolutional network in step 5 is, connected in sequence: an input layer, first, second, third, fourth, and fifth downsampling layers, first, second, third, and fourth upsampling layers, and a softMax output layer. The loss function of the fully convolutional network is:

    loss1 = CE(Φ, Φ̂)

where Φ denotes the pixel features before updating, Φ̂ denotes the updated pixel features, and CE(·, ·) denotes the cross-entropy loss between the two.

The graph convolutional network consists of, connected in sequence: an input layer, first, second, and third graph convolution layers, and a softmax output layer; the activation function of each graph convolution layer is tanh.

The convolutional neural network consists of, connected in sequence: an input layer, a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, and a softMax output layer; the activation function of each convolution layer is LeakyReLU.

The specific steps of step 6 are:

Step 6.1: initialize the training-set and test-set images into superpixel blocks.

Step 6.2: feed the pixel features obtained in steps 3 and 4 into the fully convolutional network; the output matrix Q is the soft association matrix between superpixels and pixels.

Step 6.3: from the soft association matrix, obtain the adjacency matrix A of the superpixel blocks, the feature matrix B, and the conversion matrix C between superpixels and pixels.

According to the soft association matrix Q obtained in step 6.2, each pixel is assigned to the neighboring superpixel block with the highest probability to obtain the superpixel segmentation of the whole image; the adjacency matrix A and feature matrix B of each image are then derived from this segmentation.

In the adjacency matrix A, A(i,j) denotes the element in row i, column j:

    A(i,j) = 1 if superpixel blocks i and j are adjacent, and 0 otherwise

In the feature matrix B, B_i denotes the feature of the i-th superpixel block, s_j^i denotes the feature of the j-th pixel in superpixel block i, and n denotes the number of pixels in the i-th superpixel block:

    B_i = (1/n) · Σ_(j=1..n) s_j^i

The conversion matrix between superpixels and pixels is the matrix C.

Step 6.4: input the adjacency matrix and feature matrix of each image into the graph convolutional network; the output is a two-dimensional tensor H, the superpixel features of the image.

Step 6.5: convert the superpixel features of the image into pixel features:

Hgcn = C · H    (14)

where C denotes the conversion matrix and H the output of the graph convolutional network; Hgcn is the pixel-feature map obtained by applying the superpixel-to-pixel conversion matrix to the superpixel features output by the graph convolutional network.

Step 6.6: the convolutional neural network extracts the pixel-level features of the image.

The polarization features of the training set obtained in step 3 are fed into the convolutional neural network to obtain pixel-level features.

Step 6.7: fuse the pixel-level features obtained in step 6.6 with the pixel-level features obtained in step 6.5 (via the superpixel-to-pixel conversion matrix) and perform classification.

Step 6.8: compute the total loss function and back-propagate to iteratively update the network until convergence. The total loss of the end-to-end network is:

loss = loss1 + loss2    (15)

where loss1 is the fully convolutional network loss for superpixel segmentation and loss2 is the classification loss. The network is updated iteratively until it converges, at which point training of the end-to-end model is complete.

Step 6.9: feed the test set into the trained end-to-end network model to obtain the classification result.

The beneficial effects of the present invention are:

The present invention uses a fully convolutional network to perform superpixel segmentation and couples it with a downstream classification network; the segmentation network and the classification network are trained jointly, end to end. The downstream classification network is composed of two networks, a graph convolutional network and a convolutional neural network. The result of the superpixel segmentation network affects the output of the downstream classification network, and the classification result in turn refines the superpixel segmentation; by iteratively training the network model, the image classification task is completed with good classification results.

On the other hand, the downstream classification network extracts image features with the graph convolutional network and the convolutional neural network simultaneously and fuses the two sets of features: graph convolution learns the global information of the polarimetric SAR image, while the convolutional neural network learns its pixel-level features. Fusing the two further improves the classification accuracy of polarimetric SAR images.

Brief Description of the Drawings

Figure 1 is a flow chart of the present invention.

Detailed Description

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

The end-to-end polarimetric SAR image classification method of the present invention, based on superpixels and graph convolution (Figure 1), includes the following steps:

Step 1: input the polarimetric SAR image to be classified and crop it into 500 patches, each 512 pixels wide and 512 pixels high.

Step 2: divide the cropped patches proportionally, 20% into the training set and 80% into the test set.

Step 3: decompose the complex scattering matrix of each pixel of each image in the training and test sets to generate the polarization coherence matrix, and convert it into a 1×9 row vector as the pixel's polarization feature.

Step 4: concatenate the polarization feature with the pixel's horizontal and vertical coordinates; the resulting 1×11 row vector is the pixel feature.

Step 5: build an end-to-end network from a fully convolutional network, a graph convolutional network, and a convolutional neural network.

Step 6: feed the training set into the end-to-end network for joint training, and feed the test set into the trained network to obtain the classification result.

The specific steps of step 3 are:

Decompose the complex scattering matrix of each pixel of each image in the training and test sets to generate the polarization coherence matrix, and convert it into a 1×9 row vector as the pixel's polarization feature. The polarization coherence matrix T is:

    T = [ T11  T12  T13 ]
        [ T21  T22  T23 ]
        [ T31  T32  T33 ]

T is flattened into a row vector to obtain the feature matrix T′ = [T11, T12, T13, T21, T22, T23, T31, T32, T33]. Since T is Hermitian (complex conjugate symmetric), it is preprocessed to obtain the pixel's real-valued polarization feature vector F:

    F = [T11, T22, T33, Re(T12), Im(T12), Re(T13), Im(T13), Re(T23), Im(T23)]

where Re denotes the real part of a complex number, Im the imaginary part, and Tij the element in row i, column j of the polarization coherence matrix.

The specific steps of step 4 are:

Concatenate the 9-dimensional polarization feature vector of each pixel generated in step 3 with the pixel's 2-dimensional position vector, that is, its horizontal and vertical coordinates; the resulting 11-dimensional vector S is the pixel feature:

    S = [F, x, y]

where x and y are the pixel's horizontal and vertical coordinates.
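Steps 3 and 4 can be sketched as follows. This is a minimal numpy sketch; the function names and the toy coherence matrix are illustrative, not from the patent:

```python
import numpy as np

def polarization_feature(T):
    """Convert a 3x3 Hermitian polarization coherence matrix T into
    the 1x9 real-valued feature vector F of step 3."""
    return np.array([
        T[0, 0].real, T[1, 1].real, T[2, 2].real,  # real diagonal elements
        T[0, 1].real, T[0, 1].imag,                # Re/Im of T12
        T[0, 2].real, T[0, 2].imag,                # Re/Im of T13
        T[1, 2].real, T[1, 2].imag,                # Re/Im of T23
    ])

def pixel_feature(T, x, y):
    """Step 4: append the pixel coordinates to get the 1x11 vector S."""
    return np.concatenate([polarization_feature(T), [x, y]])

# Toy Hermitian coherence matrix for one pixel
T = np.array([[2.0 + 0j, 1 + 1j,   0 + 2j],
              [1 - 1j,   3.0 + 0j, 1 - 0.5j],
              [0 - 2j,   1 + 0.5j, 1.0 + 0j]])
S = pixel_feature(T, 10, 20)
print(S.shape)  # (11,)
```

Because T is Hermitian, the lower triangle carries no extra information, so the 9 real numbers above fully describe the matrix.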

In the specific steps of step 5, the structures of the fully convolutional network, the graph convolutional network, and the convolutional neural network are as follows:

Building the fully convolutional network:

a) Build a fully convolutional network with the following structure: input layer (1, 11, 512, 512) → first downsampling layer (1, 16, 512, 512) → second downsampling layer (1, 32, 512, 512) → third downsampling layer (1, 64, 512, 512) → fourth downsampling layer (1, 128, 512, 512) → fifth downsampling layer (1, 256, 512, 512) → first upsampling layer (1, 128, 512, 512) → second upsampling layer (1, 64, 512, 512) → third upsampling layer (1, 32, 512, 512) → fourth upsampling layer (1, 16, 512, 512) → softMax output layer.

The input data size of the input layer is (1, 11, 512, 512), where 1 is the batch size, 11 is the 1×11 feature vector of each pixel, and 512 × 512 is the image width and height.

The softMax output layer produces output of size (1, 9, 512, 512). We denote the output as Q, the soft association matrix between superpixels and pixels. Here 1 is the batch size, and the 9 channels give the probability that the pixel belongs to its own superpixel block or to one of the eight neighboring superpixel blocks (upper left, upper, upper right, left, right, lower left, lower, lower right). These 9 probabilities sum to 1.

b) Construct the loss function of the fully convolutional network:

Obtain the polarization feature of each superpixel block:

    f̂_m = Σ_(i,j) q_m(i,j) · f(i,j) / Σ_(i,j) q_m(i,j)    (4)

where f̂_m is the polarization feature of superpixel m, f(i,j) is the polarization feature of pixel (i, j), q is the soft association matrix between superpixels and pixels, and q_m(i,j) is the probability that pixel (i, j) belongs to superpixel block m.

Obtain the coordinate feature of each superpixel block:

    p̂_m = Σ_(i,j) q_m(i,j) · p(i,j) / Σ_(i,j) q_m(i,j)    (5)

According to formulas (4) and (5), the superpixel update process can be written as:

    [f̂_m, p̂_m] = Σ_(i,j) q_m(i,j) · Φ(i,j) / Σ_(i,j) q_m(i,j)    (6)

where f̂_m is the polarization feature of superpixel m, p̂_m is its position feature, p(i,j) is the position feature of pixel (i, j), Φ denotes the pixel features before updating (polarization and position concatenated), and q_m(i,j) is the probability that pixel (i, j) belongs to superpixel block m.

Combining the superpixel block features with the soft association matrix gives the updated polarization feature of each pixel:

    f′(i,j) = Σ_m q_m(i,j) · f̂_m    (7)

where f′(i,j) is the updated polarization feature of pixel (i, j), f̂_m is the polarization feature of superpixel m, and q_m(i,j) is the probability that pixel (i, j) belongs to superpixel block m.

Similarly, combining the superpixel block features with the soft association matrix gives the updated position feature of each pixel:

    p′(i,j) = Σ_m q_m(i,j) · p̂_m    (8)

According to formulas (7) and (8), the pixel feature update can be written as:

    Φ̂(i,j) = [f′(i,j), p′(i,j)] = Σ_m q_m(i,j) · [f̂_m, p̂_m]    (9)

where p′(i,j) is the updated position feature of pixel (i, j), p̂_m is the position feature of superpixel m, q_m(i,j) is the probability that pixel (i, j) belongs to superpixel block m, and Φ̂ denotes the updated pixel features.
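Formulas (4) through (9) amount to a soft-assignment weighted average over pixels followed by a soft reconstruction of each pixel from its superpixels. A minimal numpy sketch; the dense (num_pixels × num_superpixels) association matrix q is an illustrative simplification of the 9-neighbor matrix Q described above:

```python
import numpy as np

def update_superpixels(q, phi):
    """Formulas (4)-(6): q has shape (num_pixels, num_superpixels) with
    rows summing to 1; phi has shape (num_pixels, feat_dim) and holds the
    concatenated [polarization, position] features. Returns the updated
    superpixel features [f_hat, p_hat]."""
    weights = q.sum(axis=0)                 # total soft mass per superpixel
    return (q.T @ phi) / weights[:, None]   # weighted mean per superpixel

def update_pixels(q, sp_feat):
    """Formulas (7)-(9): reconstruct each pixel as the q-weighted
    combination of its superpixels' features."""
    return q @ sp_feat

rng = np.random.default_rng(0)
q = rng.random((6, 2))
q /= q.sum(axis=1, keepdims=True)   # soft association, rows sum to 1
phi = rng.random((6, 11))           # 11-dim pixel features (step 4)
phi_hat = update_pixels(q, update_superpixels(q, phi))
print(phi_hat.shape)  # (6, 11)
```

Since every output is a convex combination of the inputs, the updated features stay within the range of the original features; loss1 then compares phi and phi_hat.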

The fully convolutional network loss for superpixel segmentation is:

    loss1 = CE(Φ, Φ̂)    (10)

where Φ denotes the pixel features before updating, Φ̂ denotes the updated pixel features, and CE(·, ·) denotes the cross-entropy loss between the two.

Building the graph convolutional network: its structure is, in sequence: input layer → first graph convolution layer → second graph convolution layer → third graph convolution layer → softmax output layer; the activation function of each graph convolution layer is tanh.

Building the convolutional neural network: its structure is, in sequence: input layer → first convolution layer → first pooling layer → second convolution layer → second pooling layer → softMax output layer; the activation function of each convolution layer is LeakyReLU.

The specific steps of step 6 are:

Feed the training set into the end-to-end network for training, and feed the test set into the trained network to obtain the model's classification accuracy.

Step 6.1: initialize the superpixels.

The superpixels are initialized as blocks 16 pixels high and 16 pixels wide, so an image 512 pixels on each side is divided into 1024 superpixel blocks.
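The grid initialization of step 6.1 can be sketched as follows (the helper name is illustrative, not from the patent):

```python
import numpy as np

def init_superpixel_grid(height=512, width=512, block=16):
    """Assign each pixel the index of its 16x16 grid cell, giving the
    initial superpixel label map of step 6.1."""
    rows = np.arange(height) // block   # grid row of each pixel row
    cols = np.arange(width) // block    # grid column of each pixel column
    return rows[:, None] * (width // block) + cols[None, :]

labels = init_superpixel_grid()
print(labels.shape, labels.max() + 1)  # (512, 512) 1024
```

Each of the 32 × 32 = 1024 grid cells becomes one initial superpixel block, matching the count stated above.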

Step 6.2: obtain the soft association matrix Q between superpixels and pixels.

The pixel features obtained in steps 3 and 4 are fed into the fully convolutional network. The output has size (1, 9, 512, 512); we denote this output matrix as Q, the soft association matrix between superpixels and pixels. Here 1 is the batch size, and the 9 channels give the probability that the pixel belongs to its own superpixel block or to one of the eight neighboring superpixel blocks (upper left, upper, upper right, left, right, lower left, lower, lower right). These 9 probabilities sum to 1.

Step 6.3: from the soft association matrix, obtain the adjacency matrix A of the superpixel blocks, the feature matrix B, and the conversion matrix C between superpixels and pixels.

According to the soft association matrix Q obtained in step 6.2, each pixel is assigned to the neighboring superpixel block with the highest probability to obtain the superpixel segmentation of the whole image. The adjacency matrix A and feature matrix B of each image are derived from this segmentation.

The adjacency matrix A has size (1024, 1024), where A(i,j) denotes the element in row i, column j:

    A(i,j) = 1 if superpixel blocks i and j are adjacent, and 0 otherwise    (11)

The feature matrix B has size (1024, 9); B_i denotes the feature of the i-th superpixel block, s_j^i the feature of the j-th pixel in superpixel block i, and n the number of pixels in the i-th superpixel block:

    B_i = (1/n) · Σ_(j=1..n) s_j^i    (12)

The conversion matrix C between superpixels and pixels has size (512×512, 1024), where C(i,j) denotes the element in row i, column j:

    C(i,j) = 1 if pixel i belongs to superpixel block j, and 0 otherwise    (13)

where i indexes the image pixels and j indexes the superpixel blocks; i ∈ [1, 512×512], j ∈ [1, 1024].
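Given a hard label map (each pixel assigned to its argmax superpixel), the three matrices of step 6.3 can be built as follows. This is a small-scale numpy sketch; using 4-connectivity for adjacency is an assumption, and the dense matrices stand in for what would normally be sparse at full image size:

```python
import numpy as np

def build_matrices(labels, feats, num_sp):
    """labels: (H, W) superpixel index per pixel; feats: (H*W, d) pixel
    features. Returns adjacency matrix A (formula 11), mean-feature
    matrix B (formula 12), and conversion matrix C (formula 13)."""
    H, W = labels.shape
    flat = labels.ravel()
    # C: one-hot pixel-to-superpixel membership, shape (H*W, num_sp)
    C = np.zeros((H * W, num_sp))
    C[np.arange(H * W), flat] = 1
    # B: per-superpixel mean of its pixels' features
    B = (C.T @ feats) / C.sum(axis=0)[:, None]
    # A: 1 where two different superpixels share a 4-connected border
    A = np.zeros((num_sp, num_sp))
    h_pairs = zip(labels[:, :-1].ravel(), labels[:, 1:].ravel())
    v_pairs = zip(labels[:-1, :].ravel(), labels[1:, :].ravel())
    for a, b in list(h_pairs) + list(v_pairs):
        if a != b:
            A[a, b] = A[b, a] = 1
    return A, B, C

labels = np.repeat(np.arange(4), 4).reshape(4, 4)   # 4 horizontal strips
feats = np.arange(32, dtype=float).reshape(16, 2)   # toy 2-dim pixel features
A, B, C = build_matrices(labels, feats, 4)
print(A.shape, B.shape, C.shape)  # (4, 4) (4, 2) (16, 4)
```

In the toy example, strip 0 touches only strip 1, so A(0,1) = 1 while A(0,2) = 0, and B_0 is the mean of the first four pixel features.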

Step 6.4: graph convolution obtains the superpixel features of the image.

The adjacency matrix and feature matrix of each image are input into the graph convolutional network. The output is a two-dimensional tensor H, the superpixel features of the image, of size (1024, 64): the first dimension, 1024, indicates that the image has 1024 superpixel blocks, and the second dimension, 64, is the feature of each superpixel block.

Step 6.5: convert the superpixel features of the image into pixel features:

Hgcn = C·H    (14)

where C is the superpixel-to-pixel conversion matrix obtained in step 6.3 and H is the output of the graph convolutional network, of size (1024, 64). Hgcn is the pixel feature obtained by passing the superpixel features output by the graph convolutional network through the superpixel-to-pixel conversion matrix; its size is (512×512, 64).
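A minimal sketch of equation (14): because each row of C is one-hot, the matrix product simply copies each superpixel's feature row to every pixel assigned to it. Sizes are scaled down from (512×512, 1024) × (1024, 64) for speed.

```python
import numpy as np

# Toy sizes: 100 pixels, 8 superpixels, 64-dimensional graph features.
rng = np.random.default_rng(0)
n_pix, n_sp, F = 100, 8, 64

labels = rng.integers(0, n_sp, size=n_pix)   # pixel -> superpixel assignment
C = np.zeros((n_pix, n_sp))
C[np.arange(n_pix), labels] = 1.0            # one-hot conversion matrix

H = rng.random((n_sp, F))                    # graph-convolution output
H_gcn = C @ H                                # (n_pix, F) pixel features

# Each row of H_gcn equals the feature row of the pixel's superpixel.
assert np.allclose(H_gcn, H[labels])
```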

Step 6.6, obtain the pixel-level features of the image with the convolutional neural network;

The polarization features of the training set obtained in step 3 are fed into the convolutional neural network as input to obtain pixel-level features;

The input of the convolutional neural network is a four-dimensional tensor of size (1, 512, 512, 9). The first dimension, 1, of the input data is the batch size; the second and third dimensions are the width and height of the image; the fourth dimension, 9, is the polarization feature of each pixel point in the image.

The output of the convolutional neural network is a four-dimensional tensor of size (1, 256, 256, 64), where the fourth dimension, 64, is the feature extracted for each pixel point.
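The shape bookkeeping of this step can be illustrated with a deliberately simplified stand-in for the CNN branch: a per-pixel linear map (a 1×1 convolution) from 9 to 64 channels, a LeakyReLU, and one stride-2 average pooling, reproducing the (1, 512, 512, 9) → (1, 256, 256, 64) shapes above. The real network (claim 5) has two convolution-and-pooling stages; this sketch only shows the tensor shapes, and the 0.1 LeakyReLU slope is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1, 512, 512, 9)).astype(np.float32)   # batch of one image

W = rng.standard_normal((9, 64)).astype(np.float32) * 0.1
h = x @ W                              # 1x1 conv == per-pixel linear map
h = np.where(h > 0, h, 0.1 * h)        # LeakyReLU activation (slope assumed)

# 2x2 average pooling with stride 2: (1, 512, 512, 64) -> (1, 256, 256, 64).
h = h.reshape(1, 256, 2, 256, 2, 64).mean(axis=(2, 4))
```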

Step 6.7, fuse the image pixel-level features obtained in step 6.6 with the pixel-level features obtained in step 6.5 through the superpixel-to-pixel conversion matrix, and classify;

The superpixel-derived pixel-level features from step 6.5 have size (512×512, 64), and the pixel-level features from step 6.6 have size (512×512, 64). Fusing the two gives the final extracted features of the image, of size (512×512, 128), on which softmax classification is finally performed. The classification result is obtained and the classification loss loss2 is computed; the classification loss here uses the cross-entropy loss function.
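The fusion-and-classification step above can be sketched as follows, with a hypothetical linear classifier standing in for the trained output layer: concatenate the two 64-dimensional feature sets, apply softmax over the classes, and compute the cross-entropy loss loss2.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_cls = 1000, 5                 # toy pixel count and class count

cnn_feat = rng.random((n_pix, 64))     # CNN branch (step 6.6)
gcn_feat = rng.random((n_pix, 64))     # graph branch mapped to pixels (step 6.5)
fused = np.concatenate([cnn_feat, gcn_feat], axis=1)   # (n_pix, 128)

W = rng.standard_normal((128, n_cls)) * 0.01           # stand-in weights
logits = fused @ W
logits -= logits.max(axis=1, keepdims=True)            # numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

labels = rng.integers(0, n_cls, size=n_pix)            # toy ground truth
loss2 = -np.mean(np.log(probs[np.arange(n_pix), labels] + 1e-12))
pred = probs.argmax(axis=1)                            # per-pixel class map
```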

Step 6.8, compute the total loss function and back-propagate it, iteratively updating the network until convergence;

The total loss function loss of the end-to-end network is:

loss = loss1 + loss2    (15)

where loss1 is the loss function of the fully convolutional network for superpixel segmentation and loss2 is the classification loss function. Finally, the network is iteratively updated until it converges.

Step 6.9, feed the test set into the trained end-to-end network model to obtain the classification result.

Claims (5)

1. The end-to-end polarization SAR image classification method based on the superpixel and graph convolution is characterized by comprising the following steps of:
step 1, inputting polarized SAR images to be classified and cutting the images into uniform sizes;
step 2, dividing the cut pictures into a training set and a test set according to a proportion;
step 3, decomposing the complex scattering matrix of each pixel point of each image of the training set and the test set to generate a polarization coherence matrix and converting the polarization coherence matrix into a row vector serving as the polarization characteristic of the pixel point;
step 4, splicing the polarization characteristics with the horizontal and vertical coordinates of the pixel point, and taking the spliced row vector as the characteristics of the pixel point;
step 5, building an end-to-end network based on the full convolution network, the graph convolution network and the convolution neural network;
and step 6, sending the training set into the end-to-end network for joint training, and sending the test set into the trained end-to-end network to obtain a classification result.
2. The end-to-end polarization SAR image classification method based on superpixel and graph convolution according to claim 1, characterized in that the specific steps of step 3 are:
decomposing the complex scattering matrix of each pixel point of each image cut in the step 2 to generate a polarization coherence matrix, and converting the polarization coherence matrix into a row vector with the size of 1 multiplied by 9 as the polarization characteristic of the pixel point; the generated polarization coherence matrix T is expressed as:
T = k·k^H, wherein k = (1/√2)[S_HH + S_VV, S_HH − S_VV, 2S_HV]^T is the Pauli scattering vector of the pixel point and k^H is its conjugate transpose;
let the polarization coherence matrix T be
    | T11 T12 T13 |
T = | T21 T22 T23 |
    | T31 T32 T33 |
then, converting the polarization coherence matrix into a row vector to obtain the feature vector t = [T11, T12, T13, T21, T22, T23, T31, T32, T33]; since the polarization coherence matrix is conjugate-symmetric, it is preprocessed to obtain the polarization feature vector F of the pixel point:
F = [T11, T22, T33, Re(T12), Im(T12), Re(T13), Im(T13), Re(T23), Im(T23)]
where Re represents the real part of a complex number, Im represents the imaginary part of a complex number, and T_ij represents the element in the i-th row and j-th column of the polarization coherence matrix.
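The preprocessing of claim 2 can be sketched as follows; the exact ordering of the nine entries (real diagonal first, then Re/Im of the upper-triangle elements) is an assumed common convention, the claim only requiring that F collects the real and imaginary parts of the T_ij entries.

```python
import numpy as np

def coherence_to_feature(T: np.ndarray) -> np.ndarray:
    """Reduce a 3x3 Hermitian coherence matrix T to a 9-d real feature vector.

    The entry ordering here is assumed for illustration: the (real) diagonal
    first, then real/imaginary parts of the upper-triangle elements.
    """
    assert T.shape == (3, 3)
    return np.array([
        T[0, 0].real, T[1, 1].real, T[2, 2].real,
        T[0, 1].real, T[0, 1].imag,
        T[0, 2].real, T[0, 2].imag,
        T[1, 2].real, T[1, 2].imag,
    ])

# A Hermitian example: real diagonal, T[j, i] = conj(T[i, j]).
T = np.array([
    [2.0 + 0.0j, 0.3 + 0.1j, 0.2 - 0.4j],
    [0.3 - 0.1j, 1.5 + 0.0j, 0.1 + 0.2j],
    [0.2 + 0.4j, 0.1 - 0.2j, 0.8 + 0.0j],
])
F = coherence_to_feature(T)   # shape (9,), all entries real
```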
3. The end-to-end polarization SAR image classification method based on superpixel and graph convolution according to claim 1, characterized in that said step 4 specifically is: and (4) splicing the polarization characteristic vector of each pixel point generated in the step (3) with the horizontal and vertical coordinates of the pixel point to obtain the pixel point characteristics.
4. The method for classifying the SAR image with end-to-end polarization based on the super pixel and graph convolution as claimed in claim 1, wherein the structure of the full convolution network in step 5 is an input layer, a first down-sampling layer, a second down-sampling layer, a third down-sampling layer, a fourth down-sampling layer, a fifth down-sampling layer, a first up-sampling layer, a second up-sampling layer, a third up-sampling layer, a fourth up-sampling layer and a softMax output layer which are connected in sequence, and the loss function of the full convolution network is:
loss1 = CE(φ, φ̂)
wherein φ represents the pixel point features before updating, φ̂ represents the pixel point features after updating, and CE(φ, φ̂) represents the cross-entropy loss function between the two;
the graph convolution network structure comprises an input layer, a first graph convolution layer, a second graph convolution layer, a third graph convolution layer and a softmax output layer which are connected in sequence, and the activation function of each graph convolution layer is the tanh function;
the convolutional neural network structure comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer and a softMax output layer which are sequentially connected, and the activation function of each convolutional layer is a LeakyRelu function.
5. The end-to-end polarization SAR image classification method based on superpixel and graph convolution according to claim 1, characterized in that the specific steps of step 6 are:
step 6.1, initializing the training set and the test set into a superpixel block;
step 6.2, the pixel point characteristics obtained by the training set through the step 3 and the step 4 are sent to a full convolution network to obtain an output result, and an output matrix Q is a soft correlation matrix of super pixels and pixels;
step 6.3, acquiring an adjacency matrix A and a feature matrix B of the superpixel blocks and a conversion matrix C between the superpixels and the pixels through the superpixel and pixel soft association matrix;
according to the superpixel and pixel soft association matrix Q obtained in the step 6.2, assigning each pixel point to the neighbouring superpixel block with the highest probability to obtain a superpixel segmentation result of the whole image, and obtaining an adjacency matrix A and a feature matrix B of each image according to the superpixel segmentation result;
wherein A_{i,j} represents the element in the i-th row and j-th column of the adjacency matrix A:
A_{i,j} = 1, if superpixel blocks i and j are adjacent; A_{i,j} = 0, otherwise;
B_i in the feature matrix B represents the feature of the i-th superpixel block:
B_i = (1/n) Σ_{j=1}^{n} F_j^i
wherein F_j^i represents the feature of the j-th pixel point in superpixel block i and n represents the number of pixel points in the i-th superpixel block;
the conversion matrix between superpixels and pixels is C, wherein C_{i,j} = 1 if pixel point i belongs to superpixel block j and C_{i,j} = 0 otherwise;
step 6.4, inputting the adjacency matrix and the feature matrix of each image into the graph convolution network, the output being a two-dimensional tensor H, namely the superpixel features of the image;
step 6.5, converting the super pixel characteristics of the image into pixel characteristics:
Hgcn=C·H (14)
wherein C represents the conversion matrix and H represents the output of the graph convolution network; Hgcn represents the pixel features obtained by passing the superpixel features output by the graph convolution network through the superpixel-to-pixel conversion matrix;
step 6.6, obtaining pixel-level characteristics of the image by the convolutional neural network;
the polarization characteristics obtained by the training set through the step 3 are used as input and sent to a convolutional neural network to obtain pixel level characteristics;
step 6.7, fusing and classifying the image pixel level characteristics obtained in the step 6.6 and the pixel level characteristics obtained in the step 6.5 through the conversion matrix of the super pixels and the pixels;
step 6.8, calculating a total loss function and back-propagating it to iteratively update the network until convergence;
the total loss function loss of the end-to-end network is:
loss=loss1+loss2 (15)
wherein loss1 is the loss function of the fully convolutional network for superpixel segmentation and loss2 is the classification loss function; finally, the network is updated iteratively until it converges, and the end-to-end network model training is completed;
and 6.9, sending the test set into the trained end-to-end network model to obtain a classification result.
CN202210005850.5A 2022-01-04 2022-01-04 End-to-end polarimetric SAR image classification method based on superpixels and graph convolution Active CN114764884B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210005850.5A CN114764884B (en) 2022-01-04 2022-01-04 End-to-end polarimetric SAR image classification method based on superpixels and graph convolution


Publications (2)

Publication Number Publication Date
CN114764884A true CN114764884A (en) 2022-07-19
CN114764884B CN114764884B (en) 2025-05-13

Family

ID=82364544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210005850.5A Active CN114764884B (en) 2022-01-04 2022-01-04 End-to-end polarimetric SAR image classification method based on superpixels and graph convolution

Country Status (1)

Country Link
CN (1) CN114764884B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578599A (en) * 2022-10-27 2023-01-06 西北工业大学 A Polarimetric SAR Image Classification Method Based on Superpixel-Hypergraph Feature Enhancement Network
CN119762494A (en) * 2024-12-24 2025-04-04 西安电子科技大学 End-to-end polarimetric SAR image superpixel segmentation method, system, device and medium based on fully convolutional neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9239384B1 (en) * 2014-10-21 2016-01-19 Sandia Corporation Terrain detection and classification using single polarization SAR
CN111339924A (en) * 2020-02-25 2020-06-26 中国电子科技集团公司第五十四研究所 A Polarimetric SAR Image Classification Method Based on Superpixels and Fully Convolutional Networks
CN111626380A (en) * 2020-07-07 2020-09-04 西安邮电大学 Polarized SAR image classification method based on super-pixels and convolution network
CN113298129A (en) * 2021-05-14 2021-08-24 西安理工大学 Polarized SAR image classification method based on superpixel and graph convolution network
CN113313164A (en) * 2021-05-27 2021-08-27 复旦大学附属肿瘤医院 Digital pathological image classification method and system based on superpixel segmentation and image convolution
CN113486967A (en) * 2021-07-15 2021-10-08 南京中科智慧应急研究院有限公司 SAR image classification algorithm combining graph convolution network and Markov random field


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHAI, Yingte: "Polarimetric SAR Image Classification Based on Spatial Information", China Master's Theses Full-text Database, Information Science and Technology, 15 February 2019 (2019-02-15), pages 136-1220 *


Also Published As

Publication number Publication date
CN114764884B (en) 2025-05-13

Similar Documents

Publication Publication Date Title
CN111862126B (en) Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN110287800B (en) Remote sensing image scene classification method based on SGSE-GAN
CN112508936B (en) A method of remote sensing image change detection based on deep learning
CN111369442B (en) Remote sensing image super-resolution reconstruction method based on fuzzy kernel classification and attention mechanism
CN117314811A (en) SAR-optical image fusion method based on hybrid model
Ji et al. Few-shot scene classification of optical remote sensing images leveraging calibrated pretext tasks
CN117934978A (en) Hyperspectral and laser radar multilayer fusion classification method based on countermeasure learning
CN107451528B (en) Method and system for automatic recognition of land cover images based on deep learning
CN113591633B (en) Object-oriented land utilization information interpretation method based on dynamic self-attention transducer
CN113610905A (en) Deep learning remote sensing image registration method and application based on sub-image matching
CN114863266B (en) A land use classification method based on deep spatiotemporal pattern interaction network
CN116310883B (en) Agricultural disaster prediction methods and related equipment based on spatio-temporal fusion of remote sensing images
CN116563682A (en) An Attention Scheme and Strip Convolutional Semantic Line Detection Method Based on Deep Hough Networks
CN114943893B (en) Feature enhancement method for land coverage classification
CN110969182A (en) Convolutional neural network construction method and system based on farmland image
CN113781311A (en) A Generative Adversarial Network-Based Image Super-Resolution Reconstruction Method
CN116503251A (en) Super-resolution reconstruction method for generating countermeasure network remote sensing image by combining hybrid expert
CN114764884B (en) End-to-end polarimetric SAR image classification method based on superpixels and graph convolution
CN111583330A (en) Multi-scale space-time Markov remote sensing image sub-pixel positioning method and system
CN117953375A (en) A method for extracting rural roads from remote sensing images based on a multi-task structure
CN117036884A (en) Remote sensing image space-time fusion method based on self-adaptive normalization and attention mechanism
CN115937704B (en) Remote sensing image road segmentation method based on topology perception neural network
Qian et al. C3DGS: Compressing 3D Gaussian Model for Surface Reconstruction of Large-Scale Scenes Based on Multi-View UAV Images
Zheng et al. An efficient and fast image mosaic approach for highway panoramic UAV images
CN112488413A (en) AWA-DRCN-based population spatialization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant