
CN107292336A - A kind of Classification of Polarimetric SAR Image method based on DCGAN - Google Patents

A kind of Classification of Polarimetric SAR Image method based on DCGAN

Info

Publication number
CN107292336A
CN107292336A (application CN201710440090.XA / CN201710440090A)
Authority
CN
China
Prior art keywords
dcgan
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710440090.XA
Other languages
Chinese (zh)
Inventor
焦李成
屈嵘
张婷
马晶晶
杨淑媛
侯彪
马文萍
刘芳
尚荣华
张向荣
张丹
唐旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201710440090.XA priority Critical patent/CN107292336A/en
Publication of CN107292336A publication Critical patent/CN107292336A/en
Pending legal-status Critical Current

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a polarimetric SAR image classification method based on DCGAN, comprising the following steps: 1) obtain the odd-bounce scattering coefficient, the even-bounce scattering coefficient, and the volume scattering coefficient and construct the pixel-based feature matrix F; 2) normalize each element of the pixel-based feature matrix F to [0, 1] and record the result as feature matrix F1; 3) replace each element of F1 with the 64×64 image patch surrounding it to obtain the patch-based feature matrix F2; 4) construct the feature matrix W1 of the unlabeled training set D1 and the feature matrix W2 of the labeled training set D2; 5) construct the feature matrix W3 of the superpixel cluster centers of the test set T; 6) obtain the trained DCGAN network model; 7) build the discriminative classification network model and use it to classify W3. The method can classify polarimetric SAR images and achieves high classification accuracy.

Description

A Polarimetric SAR Image Classification Method Based on DCGAN

Technical Field

The invention belongs to the technical field of image processing and relates to a polarimetric SAR image classification method based on DCGAN (Deep Convolutional Generative Adversarial Network).

Background

Polarimetric SAR is a high-resolution active microwave remote-sensing imaging radar. It works in all weather conditions and at any time of day, offers high resolution and side-looking imaging, and can therefore acquire richer information about targets. The purpose of polarimetric SAR image classification is to determine the class of each pixel from the polarimetric measurement data acquired by airborne or spaceborne polarimetric SAR sensors; it has broad research and application value in agriculture, forestry, the military, geology, hydrology, oceanography, and other fields. Classic polarimetric SAR image classification methods include:

In 1992, Lee et al. showed that a multi-look polarimetric SAR image can be represented in the form of a polarimetric covariance matrix that approximately follows a complex Wishart distribution. On this basis they proposed a simple and effective Wishart classification algorithm and used it to classify forest, urban, ocean, sea-ice, and other land-cover types.

In 1998, Lee et al. used the features extracted by the H/Alpha decomposition to perform an initial clustering of the image into 8 cluster centers, and then classified the image with a Wishart iterative classifier built on the multi-look covariance matrix (the H/Alpha-Wishart classifier).

In 2000, Pottier et al. proposed the H/Alpha/A-Wishart classifier: on the basis of the H/Alpha decomposition, the anisotropy feature A is added to cluster the image into 16 classes, after which the image is refined by Wishart iterative classification.

Because polarimetric SAR started relatively late, its development is still immature. Many core technologies, such as filtering, polarimetric target decomposition, and classification, urgently need improvement. In particular, polarimetric SAR image classification still lacks efficient and reliable algorithms, and some advanced machine-learning theories and methods have not yet been applied to it. Classic polarimetric SAR classification methods struggle to adapt to the ever-growing volume of polarimetric SAR data: they cannot fully learn and exploit the distribution characteristics of the data, have difficulty extracting good features, and therefore fail to reach high classification accuracy.

Summary of the Invention

The purpose of the present invention is to overcome the above shortcomings of the prior art and to provide a polarimetric SAR image classification method based on DCGAN that can classify polarimetric SAR images with high accuracy.

To achieve this purpose, the DCGAN-based polarimetric SAR image classification method of the present invention comprises the following steps:

1) Acquire the polarimetric scattering matrix S and perform Pauli decomposition on it to obtain the odd-bounce scattering coefficient, the even-bounce scattering coefficient, and the volume scattering coefficient; then take these three coefficients as the three-dimensional image features of the polarimetric SAR image to be classified and construct the pixel-based feature matrix F;

2) Normalize each element of the pixel-based feature matrix F to [0, 1] and record the normalized result as feature matrix F1;

3) Replace each element of feature matrix F1 with the 64×64 image patch surrounding it to obtain the patch-based feature matrix F2;

4) Use the patch-based feature matrix F2 to construct the feature matrix W1 of the unlabeled training set D1 and the feature matrix W2 of the labeled training set D2;

5) Use the patch-based feature matrix F2 to construct the test set T; partition the pixel-based feature matrix F into superpixel blocks with the SLIC superpixel algorithm to obtain the cluster center of each superpixel block; then construct, from the patch-based feature matrix F2, the feature matrix W3 of the superpixel cluster centers of the test set T;

6) Train the DCGAN network model with the unlabeled training set D1 to obtain the trained DCGAN;

7) Replace the binary classifier in the discriminator D of the trained DCGAN with a softmax classifier and take the modified discriminator D as the classification network model;

8) Feed the feature matrix W2 of the labeled training set D2 into the classification network model and update the parameters of the softmax classifier; then update the parameters of the entire classification network model with W2; finally, classify the feature matrix W3 of the superpixel cluster centers of the test set T with the discriminative classification network model and assign class labels to the test set T, thereby realizing DCGAN-based polarimetric SAR image classification.

In step 1), the Pauli decomposition of the polarimetric scattering matrix S, which yields the odd-bounce, even-bounce, and volume scattering coefficients, is carried out as follows:

1a) Set the Pauli basis {S1, S2, S3}, where

$$S_1=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&1\end{bmatrix},\qquad S_2=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&-1\end{bmatrix},\qquad S_3=\frac{1}{\sqrt{2}}\begin{bmatrix}0&1\\1&0\end{bmatrix}\qquad(1)$$

where S1 denotes odd-bounce scattering, S2 denotes even-bounce scattering, and S3 denotes volume scattering;

1b) By the definition of the Pauli decomposition:

$$S=\begin{bmatrix}S_{HH}&S_{HV}\\S_{HV}&S_{VV}\end{bmatrix}=aS_1+bS_2+cS_3\qquad(2)$$

where a is the odd-bounce scattering coefficient, b is the even-bounce scattering coefficient, and c is the volume scattering coefficient;

1c) Solve equation (2) to obtain the odd-bounce scattering coefficient a, the even-bounce scattering coefficient b, and the volume scattering coefficient c:

$$a=\frac{1}{\sqrt{2}}(S_{HH}+S_{VV}),\qquad b=\frac{1}{\sqrt{2}}(S_{HH}-S_{VV}),\qquad c=\sqrt{2}\,S_{HV}\qquad(3)$$

The pixel-based feature matrix F in step 1) is constructed as follows:

Create a feature matrix of size M1×M2×3 and assign the odd-bounce scattering coefficient a, the even-bounce scattering coefficient b, and the volume scattering coefficient c to it, obtaining the pixel-based feature matrix F, where M1 is the height of the polarimetric SAR image to be classified and M2 is its width.

The specific operation of step 2) is: compute the maximum value max(F) of the pixel-based feature matrix F and divide every element of F by max(F) to obtain feature matrix F1.

In step 6), the DCGAN network model is trained with the unlabeled training set D1 as follows:

6a) The generator G of the DCGAN consists of, connected in sequence, input layer a, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer, and an output layer. The input of input layer a is a 100-dimensional noise vector; the first deconvolution layer has 512 feature maps and a filter size of 5; the second deconvolution layer has 256 feature maps and a filter size of 5; the third deconvolution layer has 128 feature maps and a filter size of 5; the fourth deconvolution layer has 64 feature maps and a filter size of 5; the output layer outputs a pseudo-color image of size 64×64×3;

6b) The discriminator D of the DCGAN consists of, connected in sequence, input layer b, a first convolutional layer a, a second convolutional layer a, a third convolutional layer a, a fourth convolutional layer a, and a binary classifier. Input layer b has 3 feature maps; the first convolutional layer a has 64 feature maps and a filter size of 5; the second convolutional layer a has 128 feature maps and a filter size of 5; the third convolutional layer a has 256 feature maps and a filter size of 5; the fourth convolutional layer a has 512 feature maps and a filter size of 5; the binary classifier outputs a single scalar;

6c) Feed 100-dimensional uniform noise into the generator G of the DCGAN and feed the feature matrix W1 of the unlabeled training set D1 together with the output of G into the discriminator D; the generator G and the discriminator D are trained against each other in adversarial learning, completing the training of the DCGAN.

The discriminative classification network model consists of, connected in sequence, input layer c, a first convolutional layer b, a second convolutional layer b, a third convolutional layer b, a fourth convolutional layer b, and a softmax classifier, where the softmax classifier has 5 outputs (one per class).

The present invention has the following beneficial effects:

In operation, the DCGAN-based polarimetric SAR image classification method of the present invention trains the DCGAN network model, reuses the discriminator D of the trained DCGAN, replaces its binary classifier with a softmax classifier to construct the discriminative classification network model, and classifies the polarimetric SAR image with this model. Compared with other deep-learning feature-extraction methods, the present invention requires no heuristic loss function to classify polarimetric SAR images and therefore effectively improves classification accuracy. In addition, by training the DCGAN the method learns the distribution characteristics of the data from a large number of unlabeled samples, so that high classification accuracy can still be reached when only a few labeled training samples are available, overcoming the problems of scarce labeled samples and poor classification accuracy in traditional polarimetric SAR image classification.

Brief Description of the Drawings

Figure 1 is the implementation flowchart of the present invention;

Figure 2 is the manually labeled ground-truth map of the image to be classified;

Figure 3 is the classification result obtained with the present invention on the image to be classified.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings:

Referring to Figure 1, the DCGAN-based polarimetric SAR image classification method of the present invention comprises the following steps:

1) Acquire the polarimetric scattering matrix S and perform Pauli decomposition on it to obtain the odd-bounce scattering coefficient, the even-bounce scattering coefficient, and the volume scattering coefficient; then take these three coefficients as the three-dimensional image features of the polarimetric SAR image to be classified and construct the pixel-based feature matrix F;

In this embodiment, the polarimetric SAR image to be classified is the fully polarimetric San Francisco Bay data acquired in April 2008, with a resolution of 50 m, L-band, and an image size of 1800×1380; all pixels are divided into 5 classes.

In step 1), the Pauli decomposition of the polarimetric scattering matrix S, which yields the odd-bounce, even-bounce, and volume scattering coefficients, is carried out as follows:

1a) Set the Pauli basis {S1, S2, S3}, where

$$S_1=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&1\end{bmatrix},\qquad S_2=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&-1\end{bmatrix},\qquad S_3=\frac{1}{\sqrt{2}}\begin{bmatrix}0&1\\1&0\end{bmatrix}\qquad(1)$$

where S1 denotes odd-bounce scattering, S2 denotes even-bounce scattering, and S3 denotes volume scattering;

1b) By the definition of the Pauli decomposition:

$$S=\begin{bmatrix}S_{HH}&S_{HV}\\S_{HV}&S_{VV}\end{bmatrix}=aS_1+bS_2+cS_3\qquad(2)$$

where a is the odd-bounce scattering coefficient, b is the even-bounce scattering coefficient, and c is the volume scattering coefficient;

1c) Solve equation (2) to obtain the odd-bounce scattering coefficient a, the even-bounce scattering coefficient b, and the volume scattering coefficient c:

$$a=\frac{1}{\sqrt{2}}(S_{HH}+S_{VV}),\qquad b=\frac{1}{\sqrt{2}}(S_{HH}-S_{VV}),\qquad c=\sqrt{2}\,S_{HV}\qquad(3)$$

The pixel-based feature matrix F in step 1) is constructed as follows:

Create a feature matrix of size M1×M2×3 and assign the odd-bounce scattering coefficient a, the even-bounce scattering coefficient b, and the volume scattering coefficient c to it, obtaining the pixel-based feature matrix F, where M1 is the height of the polarimetric SAR image to be classified and M2 is its width.
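For concreteness, steps 1a)-1c) and the construction of F amount to a per-pixel Pauli decomposition of the three complex scattering channels. Below is a minimal NumPy sketch; the channel array names S_hh, S_hv, S_vv and the use of the coefficient magnitudes as the three feature channels are illustrative assumptions, not part of the patent text (the document names TensorFlow as its software platform, so the sketches in this description use Python).

```python
import numpy as np

def pauli_feature_matrix(S_hh, S_hv, S_vv):
    """Steps 1a)-1c): per-pixel Pauli decomposition of the scattering matrix S.

    S_hh, S_hv, S_vv are complex arrays of shape (M1, M2). Returns the
    pixel-based feature matrix F of shape (M1, M2, 3); taking the magnitude
    of each coefficient is an assumption made for illustration.
    """
    a = (S_hh + S_vv) / np.sqrt(2.0)   # odd-bounce scattering coefficient, eq. (3)
    b = (S_hh - S_vv) / np.sqrt(2.0)   # even-bounce scattering coefficient
    c = np.sqrt(2.0) * S_hv            # volume scattering coefficient
    return np.stack([np.abs(a), np.abs(b), np.abs(c)], axis=-1)
```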

2) Normalize each element of the pixel-based feature matrix F to [0, 1] and record the normalized result as feature matrix F1;

Common normalization methods include linear feature scaling, feature standardization, and feature whitening. In this embodiment, the specific operation of step 2) is: compute the maximum value max(F) of the pixel-based feature matrix F and divide every element of F by max(F) to obtain feature matrix F1.

3) Replace each element of feature matrix F1 with the 64×64 image patch surrounding it to obtain the patch-based feature matrix F2;
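Steps 2) and 3) are simple array operations: divide by the global maximum, then look up the 64×64 patch centred on each pixel. A sketch follows; zero-padding at the image border is an assumption, since the patent does not say how border pixels are handled, and in practice F2 can be realized by extracting patches on demand rather than materializing all of them.

```python
import numpy as np

PATCH = 64

def normalize(F):
    """Step 2): scale every element of F into [0, 1] by dividing by max(F)."""
    return F / F.max()

def pad_for_patches(F1, patch=PATCH):
    """Zero-pad F1 so that a patch x patch window exists around every pixel."""
    half = patch // 2
    return np.pad(F1, ((half, half), (half, half), (0, 0)), mode="constant")

def patch_at(F1_padded, row, col, patch=PATCH):
    """Step 3): the patch x patch x 3 block centred on original pixel (row, col)."""
    return F1_padded[row:row + patch, col:col + patch, :]
```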

4) Use the patch-based feature matrix F2 to construct the feature matrix W1 of the unlabeled training set D1 and the feature matrix W2 of the labeled training set D2;

In this embodiment, the specific operation of step 4) is:

4a) Divide the ground objects of the polarimetric SAR image into 5 classes and record the position of every pixel of each class in the image to be classified, generating 5 position sets A1, A2, A3, A4, A5 for the pixels of the different classes, where A1 holds the positions of the class-1 pixels in the image to be classified, A2 the positions of the class-2 pixels, A3 the positions of the class-3 pixels, A4 the positions of the class-4 pixels, and A5 the positions of the class-5 pixels;

4b) Randomly select 0.5% of the elements from each of A1, A2, A3, A4, and A5 to generate 5 position sets B1, B2, B3, B4, B5 of the pixels of each class chosen for the training set, where B1 holds the positions of the class-1 pixels chosen for the training set, B2 those of class 2, B3 those of class 3, B4 those of class 4, and B5 those of class 5; merge the elements of B1, B2, B3, B4, and B5 into L1, the positions of all training-set pixels in the image to be classified;

4c) Define the feature matrix W1 of the unlabeled training set D1 and randomly select 3.5% of all pixels from the patch-based feature matrix F2 to form W1;

4d) Define the feature matrix W2 of the labeled training set D2, take the values at the positions given by L1 from the patch-based feature matrix F2, and assign them to W2.
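Steps 4a)-4d) are index bookkeeping: 0.5% of each class's positions give the labeled training set, and 3.5% of all pixels give the unlabeled set. A sketch under the assumption that the ground truth is an integer label image with classes 1-5 (the array name labels and the random generator are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_training_positions(labels, labeled_frac=0.005, unlabeled_frac=0.035):
    """labels: (M1, M2) integer array with class ids 1..5.

    Returns L1, the positions of the labeled training pixels (0.5% per class,
    i.e. B1..B5 merged), and the positions of the unlabeled training pixels
    (3.5% of all pixels, used to build W1)."""
    L1 = []
    for cls in range(1, 6):
        pos = np.argwhere(labels == cls)                       # A1..A5
        pick = rng.choice(len(pos), size=int(labeled_frac * len(pos)), replace=False)
        L1.append(pos[pick])                                   # B1..B5
    L1 = np.concatenate(L1, axis=0)

    all_pos = np.argwhere(np.ones_like(labels, dtype=bool))    # every pixel position
    pick = rng.choice(len(all_pos), size=int(unlabeled_frac * len(all_pos)), replace=False)
    return L1, all_pos[pick]
```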

5) Use the patch-based feature matrix F2 to construct the test set T; partition the pixel-based feature matrix F into superpixel blocks with the SLIC superpixel algorithm to obtain the cluster center of each superpixel block; then construct, from the patch-based feature matrix F2, the feature matrix W3 of the superpixel cluster centers of the test set T;

The specific operation of step 5) is:

5a) Use the remaining 99.5% of the elements of A1, A2, A3, A4, and A5 from step 4) to generate 5 position sets C1, C2, C3, C4, C5 of the pixels of each class chosen for the test set, where C1 holds the positions of the class-1 pixels chosen for the test set, C2 those of class 2, C3 those of class 3, C4 those of class 4, and C5 those of class 5; merge the elements of C1, C2, C3, C4, and C5 into L2, the positions of all test-set pixels in the image to be classified;

5b) The set of pixels at the positions given by L2 is the test set T;

5c) Initialize the seed points: according to the preset number of superpixels K = 80000, distribute the seed points uniformly over the image. If the image has N pixels in total and is pre-segmented into K superpixels of equal size, each superpixel contains N/K pixels, so the spacing (step size) between adjacent seed points is approximately $S=\sqrt{N/K}$;

Each pixel block therefore contains (1800×1380)÷80000 ≈ 31 pixels, and the side length of a block is about 6;

5d) Re-select each seed point within its n×n neighborhood, as follows: compute the gradient of every pixel in the neighborhood and move the seed point to the position with the smallest gradient in that neighborhood;

5e) Assign a class label (i.e., the cluster center it belongs to) to every pixel in the neighborhood of each seed point, with the search range limited to 2S×2S;

5f) The distance measure combines a color distance and a spatial distance. For every pixel reached in the search, its distance to the seed point is computed as

$$D'=\sqrt{\left(\frac{d_c}{N_c}\right)^2+\left(\frac{d_s}{N_s}\right)^2}$$

where d_c denotes the color distance and d_s the spatial distance; N_s is the maximum spatial distance within a class, defined as N_s = S and applied to every cluster, while the maximum color distance N_c varies both from image to image and from cluster to cluster. Since every pixel is reached by several seed points, each pixel has a distance to each of the surrounding seed points, and the seed point giving the smallest distance is taken as that pixel's cluster center;

5g) Iterate the above steps until the error converges, then re-partition the original image with a block side length of 6; this gives (1800×1380)÷36 = 69000 superpixel blocks, which is the final number of superpixels. Record the positions L3 of the center pixels of these superpixel blocks;

5h) Define the feature matrix W3 of the superpixel cluster centers of the test set T, take the values at the positions given by L3 from the patch-based feature matrix F2, and assign them to W3;
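Steps 5c)-5g) spell out the SLIC iteration itself. For orientation only, the same kind of partition and the cluster-center matrix W3 of step 5h) can be obtained with the SLIC implementation in scikit-image; using skimage.segmentation.slic (with an assumed compactness of 10) is a substitution for the patent's own iteration, and patch_at / pad_for_patches come from the sketch after step 3). The per-segment loop is illustrative, not optimized.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_center_features(F, F1_padded, n_segments=80000, patch=64):
    """Steps 5c)-5h): SLIC superpixels on the pixel-based matrix F, then the
    64x64 patches of the superpixel center pixels collected into W3."""
    segments = slic(F, n_segments=n_segments, compactness=10.0)
    centers = []
    for seg_id in np.unique(segments):
        rows, cols = np.nonzero(segments == seg_id)
        centers.append((int(rows.mean().round()), int(cols.mean().round())))  # positions L3
    W3 = np.stack([patch_at(F1_padded, r, c, patch) for r, c in centers])
    return W3, segments, centers
```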

6) Train the DCGAN network model with the unlabeled training set D1 to obtain the trained DCGAN;

7) Replace the binary classifier in the discriminator D of the trained DCGAN with a softmax classifier and take the modified discriminator D as the classification network model;

8) Feed the feature matrix W2 of the labeled training set D2 into the classification network model and update the parameters of the softmax classifier; then update the parameters of the entire classification network model with W2; finally, classify the feature matrix W3 of the superpixel cluster centers of the test set T with the discriminative classification network model and assign class labels to the test set T, thereby realizing DCGAN-based polarimetric SAR image classification.

In step 6), the DCGAN network model is trained with the unlabeled training set D1 as follows:

6a) The generator G of the DCGAN consists of, connected in sequence, input layer a, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer, and an output layer. The input of input layer a is a 100-dimensional noise vector; the first deconvolution layer has 512 feature maps and a filter size of 5; the second deconvolution layer has 256 feature maps and a filter size of 5; the third deconvolution layer has 128 feature maps and a filter size of 5; the fourth deconvolution layer has 64 feature maps and a filter size of 5; the output layer outputs a pseudo-color image of size 64×64×3;
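A tf.keras sketch of the generator G described in 6a). The patent fixes the feature-map counts (512, 256, 128, 64), the filter size 5, and the 64×64×3 output; the stride of 2, batch normalization, ReLU activations, the tanh output, and the initial 4×4×1024 projection are assumptions taken from the standard DCGAN recipe.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_generator(noise_dim=100):
    """Generator G of step 6a): 100-d noise vector -> 64x64x3 pseudo-color image."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(noise_dim,)),
        layers.Dense(4 * 4 * 1024), layers.Reshape((4, 4, 1024)),   # input projection (assumed)
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(512, 5, strides=2, padding="same"),  # 1st deconv, 8x8x512
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(256, 5, strides=2, padding="same"),  # 2nd deconv, 16x16x256
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same"),  # 3rd deconv, 32x32x128
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same"),   # 4th deconv, 64x64x64
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2D(3, 5, padding="same", activation="tanh"),     # output layer, 64x64x3
    ])
```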

6b) The discriminator D of the DCGAN consists of, connected in sequence, input layer b, a first convolutional layer a, a second convolutional layer a, a third convolutional layer a, a fourth convolutional layer a, and a binary classifier. Input layer b has 3 feature maps; the first convolutional layer a has 64 feature maps and a filter size of 5; the second convolutional layer a has 128 feature maps and a filter size of 5; the third convolutional layer a has 256 feature maps and a filter size of 5; the fourth convolutional layer a has 512 feature maps and a filter size of 5; the binary classifier outputs a single scalar;
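A matching sketch of the discriminator D described in 6b); only the feature-map counts (64, 128, 256, 512), the filter size 5, the 3-channel input, and the scalar output come from the patent, while the stride of 2 and the LeakyReLU activations are standard DCGAN assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator():
    """Discriminator D of step 6b): 64x64x3 patch -> single real/fake logit."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 3)),                               # input layer b, 3 maps
        layers.Conv2D(64, 5, strides=2, padding="same"), layers.LeakyReLU(0.2),
        layers.Conv2D(128, 5, strides=2, padding="same"), layers.LeakyReLU(0.2),
        layers.Conv2D(256, 5, strides=2, padding="same"), layers.LeakyReLU(0.2),
        layers.Conv2D(512, 5, strides=2, padding="same"), layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1),                                                  # binary classifier (one scalar)
    ])
```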

6c) Feed 100-dimensional uniform noise into the generator G of the DCGAN and feed the feature matrix W1 of the unlabeled training set D1 together with the output of G into the discriminator D; the generator G and the discriminator D are trained against each other in adversarial learning, completing the training of the DCGAN.
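A condensed sketch of the adversarial training of 6c), using the standard non-saturating GAN loss; the loss choice, the Adam settings, and the batching are assumptions, while the 100-dimensional uniform noise and the fact that D sees both W1 patches and generated patches follow the patent text.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
d_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)

def train_step(generator, discriminator, real_patches):
    """One adversarial update: D learns to separate W1 patches from G's output,
    G learns to fool D."""
    noise = tf.random.uniform([tf.shape(real_patches)[0], 100], -1.0, 1.0)  # 100-d uniform noise
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(noise, training=True)
        real_logits = discriminator(real_patches, training=True)
        fake_logits = discriminator(fake, training=True)
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)               # non-saturating G loss
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```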

The discriminative classification network model consists of, connected in sequence, input layer c, a first convolutional layer b, a second convolutional layer b, a third convolutional layer b, a fourth convolutional layer b, and a softmax classifier, where the softmax classifier has 5 outputs (one per class);

The softmax classifier is trained as follows:

The feature matrix W2 of the labeled training set D2 is taken as the input of the discriminative classification network model and the class of each pixel in D2 as its output. The softmax classifier is trained by computing the error between the predicted classes and the manually labeled correct classes and back-propagating it while updating only the parameters of the softmax classifier, which yields the trained softmax classifier; the manually labeled ground truth is shown in Figure 2.
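A sketch of steps 7) and 8) as they apply to training: the convolutional stack of the trained discriminator is reused, its binary head is replaced by a 5-way softmax, the head is trained first on W2 and its labels, and then the whole network is fine-tuned. The two learning rates, the epoch counts, and the batch size are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_classifier(trained_discriminator, num_classes=5):
    """Step 7): keep D's convolutional layers, replace the binary head with softmax."""
    conv_stack = tf.keras.Sequential(trained_discriminator.layers[:-1])   # drop the Dense(1) head
    return tf.keras.Sequential([conv_stack,
                                layers.Dense(num_classes, activation="softmax")])

def finetune(classifier, W2, labels_W2):
    """Step 8), training part: first update only the softmax head, then everything."""
    classifier.layers[0].trainable = False                                 # freeze reused conv stack
    classifier.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                       loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    classifier.fit(W2, labels_W2, epochs=20, batch_size=64)

    classifier.layers[0].trainable = True                                  # then fine-tune the whole model
    classifier.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                       loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    classifier.fit(W2, labels_W2, epochs=10, batch_size=64)
    return classifier
```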

In step 8), the feature matrix W3 of the superpixel cluster centers of the test set T is classified with the discriminative classification network model as follows:

The feature matrix W3 of the superpixel cluster centers of the test set T is fed into the trained discriminative classification network model, whose output is the class of each superpixel cluster center; every pixel of the test set T is then labeled with the class of the cluster center of the superpixel block it belongs to, which completes the classification of the test set.
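The final labeling step reduces to predicting one class per superpixel center and spreading it over the superpixel. A sketch that reuses the segments array and W3 from the SLIC sketch after step 5h):

```python
import numpy as np

def classify_test_set(classifier, W3, segments):
    """Step 8), test part: predict a class for every superpixel center (rows of W3)
    and assign it to every pixel of that superpixel, giving a full class map."""
    center_classes = classifier.predict(W3, batch_size=256).argmax(axis=1)
    class_map = np.zeros(segments.shape, dtype=np.int32)
    for seg_id, cls in zip(np.unique(segments), center_classes):
        class_map[segments == seg_id] = cls                   # label all pixels of the superpixel
    return class_map
```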

Simulation Experiment

Simulation conditions: hardware platform: HP Z840; software platform: TensorFlow. Simulation content and results: experiments were carried out with the method of the present invention under the above conditions. First, 3.5% of the samples (3.5% of the 1800×1380 pixels in total, i.e., about 80,000 patches) were selected as unlabeled samples for unsupervised learning; then 0.5% of the labeled samples were randomly selected from each class of the polarimetric SAR data (0.5% of all labeled samples) to train the softmax classifier and then fine-tune the classification network; the remaining labeled pixels were used as test samples. The classification result, divided into 5 land-cover classes, is shown in Figure 3.

As can be seen from Figure 3, the regional consistency of the classification result is good, the boundaries between different regions are clearly distinguishable, detail information is preserved, and there is relatively little noise in the classification map.

The classification accuracy of the present invention on the test set is compared with that of a convolutional neural network (CNN) in Table 1:

Table 1

Classification method    Convolutional neural network    Present invention
Class 1 (%)              99.9849                         99.9913
Class 2 (%)              96.9208                         98.5980
Class 3 (%)              89.4749                         98.5554
Class 4 (%)              98.6352                         99.6494
Class 5 (%)              98.4097                         99.0883
Overall accuracy (%)     97.254                          99.4346

With the labeled samples reduced to 0.2%, the classification accuracy on the test set is again compared with that of the convolutional neural network (CNN) in Table 2:

Table 2

Classification method    Convolutional neural network    Present invention
Class 1 (%)              99.0036                         99.9317
Class 2 (%)              92.6015                         97.6998
Class 3 (%)              92.8231                         98.8479
Class 4 (%)              93.2422                         97.8111
Class 5 (%)              96.0703                         97.3789
Overall accuracy (%)     95.9239                         98.9805

As Tables 1 and 2 show, with 0.5% and with 0.2% labeled samples the classification accuracy of the present invention is higher than that of the convolutional neural network on every class of the test set, so the method improves classification accuracy.

In summary, the present invention uses DCGAN for feature extraction, can learn the data distribution characteristics from a large amount of unlabeled data, and has strong feature-representation ability; even with only a small number of labeled samples, it still achieves high accuracy in classifying polarimetric SAR data.

Claims (6)

1. A polarimetric SAR image classification method based on DCGAN, characterized in that it comprises the following steps:
1) acquiring a polarimetric scattering matrix S and performing Pauli decomposition on it to obtain an odd-bounce scattering coefficient, an even-bounce scattering coefficient, and a volume scattering coefficient, and then taking the three coefficients as the three-dimensional image features of the polarimetric SAR image to be classified to construct a pixel-based feature matrix F;
2) normalizing each element of the pixel-based feature matrix F to [0, 1] and recording the normalized result as feature matrix F1;
3) replacing each element of feature matrix F1 with the 64×64 image patch surrounding it to obtain a patch-based feature matrix F2;
4) using the patch-based feature matrix F2 to construct a feature matrix W1 of an unlabeled training set D1 and a feature matrix W2 of a labeled training set D2;
5) using the patch-based feature matrix F2 to construct a test set T, partitioning the pixel-based feature matrix F into superpixel blocks with the SLIC superpixel algorithm to obtain the cluster center of each superpixel block, and then constructing, from the patch-based feature matrix F2, a feature matrix W3 of the superpixel cluster centers of the test set T;
6) training the DCGAN network model with the unlabeled training set D1 to obtain a trained DCGAN;
7) replacing the binary classifier in the discriminator D of the trained DCGAN with a softmax classifier and taking the modified discriminator D as the classification network model;
8) feeding the feature matrix W2 of the labeled training set D2 into the classification network model and updating the parameters of the softmax classifier, then updating the parameters of the entire classification network model with W2, then classifying the feature matrix W3 of the superpixel cluster centers of the test set T with the discriminative classification network model, and finally labeling the classes of the test set T, thereby realizing DCGAN-based polarimetric SAR image classification.
2. The DCGAN-based polarimetric SAR image classification method according to claim 1, characterized in that in step 1) the Pauli decomposition of the polarimetric scattering matrix S, which yields the odd-bounce, even-bounce, and volume scattering coefficients, is performed as follows:
1a) setting the Pauli basis {S1, S2, S3}, where
$$S_1=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&1\end{bmatrix},\qquad S_2=\frac{1}{\sqrt{2}}\begin{bmatrix}1&0\\0&-1\end{bmatrix},\qquad S_3=\frac{1}{\sqrt{2}}\begin{bmatrix}0&1\\1&0\end{bmatrix}\qquad(1)$$
where S1 denotes odd-bounce scattering, S2 denotes even-bounce scattering, and S3 denotes volume scattering;
1b) by the definition of the Pauli decomposition:
$$S=\begin{bmatrix}S_{HH}&S_{HV}\\S_{HV}&S_{VV}\end{bmatrix}=aS_1+bS_2+cS_3\qquad(2)$$
where a is the odd-bounce scattering coefficient, b is the even-bounce scattering coefficient, and c is the volume scattering coefficient;
1c) solving equation (2) to obtain the odd-bounce scattering coefficient a, the even-bounce scattering coefficient b, and the volume scattering coefficient c, where
$$a=\frac{1}{\sqrt{2}}(S_{HH}+S_{VV}),\qquad b=\frac{1}{\sqrt{2}}(S_{HH}-S_{VV}),\qquad c=\sqrt{2}\,S_{HV}.\qquad(3)$$
3. The DCGAN-based polarimetric SAR image classification method according to claim 1, characterized in that the specific operation of constructing the pixel-based feature matrix F in step 1) is: creating a feature matrix of size M1×M2×3 and assigning the odd-bounce scattering coefficient a, the even-bounce scattering coefficient b, and the volume scattering coefficient c to it, obtaining the pixel-based feature matrix F, where M1 is the height of the polarimetric SAR image to be classified and M2 is its width.
4. The DCGAN-based polarimetric SAR image classification method according to claim 1, characterized in that the specific operation of step 2) is: computing the maximum value max(F) of the pixel-based feature matrix F and dividing every element of F by max(F) to obtain feature matrix F1.
5. The DCGAN-based polarimetric SAR image classification method according to claim 1, characterized in that in step 6) the specific operation of training the DCGAN network model with the unlabeled training set D1 is:
6a) the generator G of the DCGAN comprises, connected in sequence, input layer a, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer, and an output layer, wherein the input of input layer a is a 100-dimensional noise vector; the first deconvolution layer has 512 feature maps and a filter size of 5; the second deconvolution layer has 256 feature maps and a filter size of 5; the third deconvolution layer has 128 feature maps and a filter size of 5; the fourth deconvolution layer has 64 feature maps and a filter size of 5; and the output layer outputs a pseudo-color image of size 64×64×3;
6b) the discriminator D of the DCGAN comprises, connected in sequence, input layer b, a first convolutional layer a, a second convolutional layer a, a third convolutional layer a, a fourth convolutional layer a, and a binary classifier, wherein input layer b has 3 feature maps; the first convolutional layer a has 64 feature maps and a filter size of 5; the second convolutional layer a has 128 feature maps and a filter size of 5; the third convolutional layer a has 256 feature maps and a filter size of 5; the fourth convolutional layer a has 512 feature maps and a filter size of 5; and the binary classifier outputs a single scalar;
6c) feeding 100-dimensional uniform noise into the generator G of the DCGAN, feeding the feature matrix W1 of the unlabeled training set D1 and the output of G into the discriminator D, and training the generator G and the discriminator D against each other in adversarial learning, completing the training of the DCGAN.
6. The DCGAN-based polarimetric SAR image classification method according to claim 1, characterized in that in step 7) the discriminative classification network model comprises, connected in sequence, input layer c, a first convolutional layer b, a second convolutional layer b, a third convolutional layer b, a fourth convolutional layer b, and a softmax classifier, wherein the softmax classifier has 5 outputs (feature maps).
CN201710440090.XA 2017-06-12 2017-06-12 A kind of Classification of Polarimetric SAR Image method based on DCGAN Pending CN107292336A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710440090.XA CN107292336A (en) 2017-06-12 2017-06-12 A kind of Classification of Polarimetric SAR Image method based on DCGAN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710440090.XA CN107292336A (en) 2017-06-12 2017-06-12 A kind of Classification of Polarimetric SAR Image method based on DCGAN

Publications (1)

Publication Number Publication Date
CN107292336A true CN107292336A (en) 2017-10-24

Family

ID=60096544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710440090.XA Pending CN107292336A (en) 2017-06-12 2017-06-12 A kind of Classification of Polarimetric SAR Image method based on DCGAN

Country Status (1)

Country Link
CN (1) CN107292336A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944483A (en) * 2017-11-17 2018-04-20 西安电子科技大学 Classification of Multispectral Images method based on binary channels DCGAN and Fusion Features
CN107943751A (en) * 2017-11-14 2018-04-20 华南理工大学 A kind of autonomous channel convolution method based on depth convolution confrontation network model
CN108564006A (en) * 2018-03-26 2018-09-21 西安电子科技大学 Based on the polarization SAR terrain classification method from step study convolutional neural networks
CN109614979A (en) * 2018-10-11 2019-04-12 北京大学 A data augmentation method and image classification method based on selection and generation
CN109784401A (en) * 2019-01-15 2019-05-21 西安电子科技大学 A kind of Classification of Polarimetric SAR Image method based on ACGAN
CN110009015A (en) * 2019-03-25 2019-07-12 西北工业大学 EO-1 hyperion small sample classification method based on lightweight network and semi-supervised clustering
WO2019237240A1 (en) * 2018-06-12 2019-12-19 中国科学院深圳先进技术研究院 Enhanced generative adversarial network and target sample identification method
CN110610207A (en) * 2019-09-10 2019-12-24 重庆邮电大学 A small-sample SAR image ship classification method based on transfer learning
CN111192221A (en) * 2020-01-07 2020-05-22 中南大学 Image inpainting method of aluminum electrolysis fire eye based on deep convolutional generative adversarial network
CN112307679A (en) * 2020-11-23 2021-02-02 内蒙古工业大学 Method and device for constructing river ice thickness inversion microwave scattering model
CN116385813A (en) * 2023-06-07 2023-07-04 南京隼眼电子科技有限公司 ISAR image classification method, ISAR image classification device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6943724B1 (en) * 2002-10-30 2005-09-13 Lockheed Martin Corporation Identification and tracking of moving objects in detected synthetic aperture imagery
CN101488188A (en) * 2008-11-10 2009-07-22 西安电子科技大学 SAR image classification method based on SVM classifier of mixed nucleus function
CN102999908A (en) * 2012-11-19 2013-03-27 西安电子科技大学 Synthetic aperture radar (SAR) airport segmentation method based on improved visual attention model
CN104331707A (en) * 2014-11-02 2015-02-04 西安电子科技大学 Polarized SAR (synthetic aperture radar) image classification method based on depth PCA (principal component analysis) network and SVM (support vector machine)
CN105718957A (en) * 2016-01-26 2016-06-29 西安电子科技大学 Polarized SAR image classification method based on nonsubsampled contourlet convolutional neural network
CN105868793A (en) * 2016-04-18 2016-08-17 西安电子科技大学 Polarization SAR image classification method based on multi-scale depth filter

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6943724B1 (en) * 2002-10-30 2005-09-13 Lockheed Martin Corporation Identification and tracking of moving objects in detected synthetic aperture imagery
CN101488188A (en) * 2008-11-10 2009-07-22 西安电子科技大学 SAR image classification method based on SVM classifier of mixed nucleus function
CN102999908A (en) * 2012-11-19 2013-03-27 西安电子科技大学 Synthetic aperture radar (SAR) airport segmentation method based on improved visual attention model
CN104331707A (en) * 2014-11-02 2015-02-04 西安电子科技大学 Polarized SAR (synthetic aperture radar) image classification method based on depth PCA (principal component analysis) network and SVM (support vector machine)
CN105718957A (en) * 2016-01-26 2016-06-29 西安电子科技大学 Polarized SAR image classification method based on nonsubsampled contourlet convolutional neural network
CN105868793A (en) * 2016-04-18 2016-08-17 西安电子科技大学 Polarization SAR image classification method based on multi-scale depth filter

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALEC RADFORD et al.: "UNSUPERVISED REPRESENTATION LEARNING WITH DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORKS", arXiv *
史彩娟 et al.: "Web image annotation based on enhanced sparsity feature selection" (基于增强稀疏性特征选择的网络图像标注), Journal of Software (软件学报) *
王鑫: "Research on saliency detection and classification algorithms for SAR images" (SAR图像显著性检测与分类算法研究), China Master's Theses Full-text Database (中国优秀硕士学位论文全文数据库) *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107943751A (en) * 2017-11-14 2018-04-20 华南理工大学 A kind of autonomous channel convolution method based on depth convolution confrontation network model
CN107944483A (en) * 2017-11-17 2018-04-20 西安电子科技大学 Classification of Multispectral Images method based on binary channels DCGAN and Fusion Features
CN107944483B (en) * 2017-11-17 2020-02-07 西安电子科技大学 Multispectral image classification method based on dual-channel DCGAN and feature fusion
CN108564006B (en) * 2018-03-26 2021-10-29 西安电子科技大学 Polarimetric SAR ground object classification method based on self-paced learning convolutional neural network
CN108564006A (en) * 2018-03-26 2018-09-21 西安电子科技大学 Based on the polarization SAR terrain classification method from step study convolutional neural networks
US12154036B2 (en) 2018-06-12 2024-11-26 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Enhanced generative adversarial network and target sample recognition method
WO2019237240A1 (en) * 2018-06-12 2019-12-19 中国科学院深圳先进技术研究院 Enhanced generative adversarial network and target sample identification method
CN109614979A (en) * 2018-10-11 2019-04-12 北京大学 A data augmentation method and image classification method based on selection and generation
CN109614979B (en) * 2018-10-11 2023-05-02 北京大学 A data augmentation method and image classification method based on selection and generation
CN109784401A (en) * 2019-01-15 2019-05-21 西安电子科技大学 A kind of Classification of Polarimetric SAR Image method based on ACGAN
CN110009015A (en) * 2019-03-25 2019-07-12 西北工业大学 EO-1 hyperion small sample classification method based on lightweight network and semi-supervised clustering
CN110610207B (en) * 2019-09-10 2022-11-25 重庆邮电大学 A small-sample SAR image ship classification method based on transfer learning
CN110610207A (en) * 2019-09-10 2019-12-24 重庆邮电大学 A small-sample SAR image ship classification method based on transfer learning
CN111192221A (en) * 2020-01-07 2020-05-22 中南大学 Image inpainting method of aluminum electrolysis fire eye based on deep convolutional generative adversarial network
CN111192221B (en) * 2020-01-07 2024-04-16 中南大学 Aluminum electrolysis fire eye image repair method based on deep convolutional generative adversarial network
CN112307679A (en) * 2020-11-23 2021-02-02 内蒙古工业大学 Method and device for constructing river ice thickness inversion microwave scattering model
CN116385813A (en) * 2023-06-07 2023-07-04 南京隼眼电子科技有限公司 ISAR image classification method, ISAR image classification device and storage medium
CN116385813B (en) * 2023-06-07 2023-08-29 南京隼眼电子科技有限公司 ISAR Image Spatial Target Classification Method, Device and Storage Medium Based on Unsupervised Contrastive Learning

Similar Documents

Publication Publication Date Title
CN107292336A (en) A kind of Classification of Polarimetric SAR Image method based on DCGAN
CN110321963B (en) Hyperspectral image classification method based on fusion of multi-scale and multi-dimensional spatial spectral features
CN102096825B (en) Graph-based semi-supervised high-spectral remote sensing image classification method
CN103034863B (en) The remote sensing image road acquisition methods of a kind of syncaryon Fisher and multiple dimensioned extraction
CN104077599B (en) Polarization SAR image classification method based on deep neural network
CN102542302B (en) Automatic complicated target identification method based on hierarchical object semantic graph
CN107368852A (en) A kind of Classification of Polarimetric SAR Image method based on non-down sampling contourlet DCGAN
CN105138970B (en) Classification of Polarimetric SAR Image method based on spatial information
CN102982338B (en) Classification of Polarimetric SAR Image method based on spectral clustering
CN102999762B (en) Decompose and the Classification of Polarimetric SAR Image method of spectral clustering based on Freeman
CN108564115A (en) Semi-supervised polarization SAR terrain classification method based on full convolution GAN
CN105718942B (en) Hyperspectral Image Imbalance Classification Method Based on Mean Shift and Oversampling
CN104123555A (en) Super-pixel polarimetric SAR land feature classification method based on sparse representation
CN108830243A (en) Hyperspectral image classification method based on capsule network
CN111639587A (en) Hyperspectral image classification method based on multi-scale spectrum space convolution neural network
CN107358260A (en) A kind of Classification of Multispectral Images method based on surface wave CNN
CN104102928B (en) A kind of Classifying Method in Remote Sensing Image based on texture primitive
CN103365985B (en) The adaptive polarization SAR sorting technique of one kind
CN107832797A (en) Classification of Multispectral Images method based on depth integration residual error net
CN107491734A (en) Semi-supervised Classification of Polarimetric SAR Image method based on multi-core integration Yu space W ishart LapSVM
CN107239799A (en) Polarization SAR image classification method with depth residual error net is decomposed based on Pauli
CN110020693A (en) The Classification of Polarimetric SAR Image method for improving network with feature is paid attention to based on feature
CN102073867A (en) Sorting method and device for remote sensing images
CN109784401A (en) A kind of Classification of Polarimetric SAR Image method based on ACGAN
CN109635789B (en) High-resolution SAR image classification method based on intensity ratio and spatial structure feature extraction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171024

RJ01 Rejection of invention patent application after publication