
CN115859167A - Method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network - Google Patents

Method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network

Info

Publication number
CN115859167A
Authority
CN
China
Prior art keywords
loss
sample
semi
discriminator
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211512196.3A
Other languages
Chinese (zh)
Other versions
CN115859167B (en)
Inventor
王增福
张效宣
潘泉
卢琨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202211512196.3A priority Critical patent/CN115859167B/en
Publication of CN115859167A publication Critical patent/CN115859167A/en
Application granted granted Critical
Publication of CN115859167B publication Critical patent/CN115859167B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network. The method selects features from multiple intermediate layers of the discriminator of a semi-supervised generative adversarial network and weights the feature matching loss of each layer to obtain a joint feature matching loss. The standard adversarial loss is then linearly combined with the joint feature matching loss to obtain a weighted loss. Based on the standard adversarial loss, the joint feature matching loss and the weighted loss, a ground-sea clutter classification model, WL-SSGAN, is built on the improved generative adversarial network. The method solves the technical problem of avoiding manual labeling while improving classification performance, thereby saving labeling cost and improving classification efficiency.

Description

Method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network
Technical Field
The invention belongs to spectrum data processing technology in the field of radar data processing, and relates to a semi-supervised ground-sea clutter classification method for sky-wave radar based on an improved generative adversarial network.
Background
Sky-wave over-the-horizon radar (sky-wave radar) is an important device for over-the-horizon long-range detection and is widely applied in military and civil fields. However, when electromagnetic waves propagate through the ionosphere, coordinate registration errors arise that seriously degrade the target positioning accuracy of the sky-wave radar. Sky-wave radar ground-sea clutter recognition is the process of identifying whether the background clutter of each range-azimuth cell of the radar echo originates from the ground or the sea. A ground/sea boundary or terrain contour is formed from the ground-sea clutter recognition result and matched against prior geographic information, which provides coordinate registration parameters for target positioning. Ground-sea clutter recognition has received increasing attention because it requires no additional auxiliary sources or detection equipment.
The sky-wave radar transmitter emits electromagnetic wave signals, which are refracted by the ionosphere to the ground or sea surface; the echo signals of targets or clutter return to the receiver along the original path, forming ground-sea clutter. The first-order Bragg peaks of sea clutter arise from Bragg resonance scattering between the transmitted high-frequency electromagnetic waves and ocean waves, and appear as two peaks symmetric about zero frequency. Ground clutter samples appear as a single peak near zero frequency, because land or islands produce echoes without significant motion. Ground-sea boundary clutter samples combine the characteristics of sea clutter and ground clutter and appear as three peaks near zero frequency.
In recent years, deep learning has been applied very successfully to ground-sea clutter classification. Compared with traditional methods that rely on manually extracted ground-sea clutter features, deep-learning-based classification methods exploit the strong feature extraction capability of neural networks to automatically learn high-level features from a large number of ground-sea clutter samples, enabling accurate prediction in the subsequent classification task. However, existing ground-sea clutter classification methods are fully supervised and rely on a large number of labeled training samples. Because labeling ground-sea clutter samples is extremely time-consuming and labor-intensive and requires domain expertise, fully supervised classification methods have clear limitations in practical applications.
Therefore, a new classification method is needed to reduce the cost of labeling samples and to improve classification efficiency.
Disclosure of Invention
The invention aims to provide a method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network, so as to reduce the cost of labeling ground-sea clutter samples and improve classification accuracy.
The invention adopts the following technical scheme:
An embodiment of the invention provides a method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network, comprising the following steps:
inputting ground-sea clutter training samples into the semi-supervised classification model for training, wherein the training samples comprise labeled samples, unlabeled samples and generated samples, and the semi-supervised generative adversarial network comprises a discriminator and a generator;
calculating a first adversarial loss of the discriminator according to the information entropy of the labeled samples output by the discriminator, wherein the first adversarial loss represents the fully supervised loss of the discriminator;
calculating a second adversarial loss of the discriminator according to the information entropy of the unlabeled samples output by the discriminator and the information entropy of the generated samples, wherein the second adversarial loss represents the semi-supervised loss of the discriminator;
calculating a joint feature loss according to the generated-sample features and the unlabeled-sample features in the multi-layer network of the discriminator;
weighting the second adversarial loss and the joint feature loss to obtain the weighted loss of the generator;
and determining the semi-supervised classification model meeting a preset classification accuracy by repeatedly training the semi-supervised classification model and updating the classification weights of the discriminator based on the first adversarial loss, the second adversarial loss and the weighted loss.
Optionally, the first adversarial loss is calculated as follows:
$$L_{supervised} = -\mathbb{E}_{(x,y)\sim p_{data}}\,\log p_D(y\mid x,\ y<K+1)$$
wherein p_D is the predictive distribution output by the discriminator for a labeled sample, y is the sample label, K denotes the number of real sample classes (the (K+1)-th class corresponding to generated samples), x is the sample feature, E_{(x,y)~p_data} denotes the expectation operator over labeled samples, and L_supervised is the first adversarial loss.
Optionally, the second adversarial loss is calculated as follows:
$$L_{unsupervised} = -\mathbb{E}_{x\sim p_{data}}\,\log\big[1-p_D(y=K+1\mid x)\big] - \mathbb{E}_{x\sim G}\,\log p_D(y=K+1\mid x)$$
wherein log[1 - p_D(y = K+1 | x)] involves the probability that the discriminator assigns an unlabeled sample to the (K+1)-th (generated) class, log p_D(y = K+1 | x) involves the probability that the discriminator assigns a generated sample to the (K+1)-th class, E_{x~p_data} is the expectation operator over unlabeled samples, E_{x~G} is the expectation operator over generated samples, and L_unsupervised is the second adversarial loss.
Optionally, calculating the joint feature loss from the generated-sample features and the unlabeled-sample features in the multi-layer network of the discriminator comprises:
extracting the unlabeled-sample features and the generated-sample features in each layer of a preset number of intermediate layers of the discriminator;
computing the Euclidean distance between the unlabeled-sample features and the generated-sample features in each layer to obtain the feature matching loss of each layer;
and performing a weighted summation of the per-layer feature matching losses to obtain the joint feature matching loss.
Optionally, the joint feature matching loss is calculated as follows:
$$L_{FM} = \sum_{l\in l_{mul}} \frac{1}{Ch^{(l)}\,Le^{(l)}}\,\Big\|\,\mathbb{E}_{x\sim p_{data}} f^{(l)}(x) - \mathbb{E}_{z\sim p_z} f^{(l)}(G(z))\,\Big\|_2^2$$
wherein l_max denotes the total number of intermediate layers of the discriminator, l_mul denotes the selected preset set of layers with l_mul ⊆ {1, 2, ..., l_max}, Ch^(l) is the number of channels of the l-th layer feature, Le^(l) is the length of that feature, the squared norm is the feature matching loss of the l-th layer, E_{z~p_z} f^(l)(G(z)) is the expectation of the generated-sample features at the l-th layer, E_{x~p_data} f^(l)(x) is the expectation of the unlabeled-sample features at the l-th layer, and L_FM is the joint feature matching loss.
Optionally, the weighted loss is calculated as follows:
$$L_{WL\text{-}SSGAN} = \alpha\,(-L_{unsupervised}) + \beta\,L_{FM}$$
wherein α is a first constant, β is a second constant, L_FM is the joint feature matching loss, L_unsupervised is the second adversarial loss, and L_WL-SSGAN is the weighted loss.
Optionally, the method further comprises: when only labeled samples are used as the input of the semi-supervised classification model, only the discriminator operates, and the loss of the discriminator is the sum of the first adversarial loss and the second adversarial loss.
Optionally, the method further comprises: when labeled samples and unlabeled samples are used as the input of the semi-supervised classification model, the discriminator and the generator both operate, and the generated samples are produced by feeding random noise into the generator.
Optionally, a LeakyReLU activation function follows the first convolutional layer of the discriminator and every subsequent convolutional layer.
Optionally, a ReLU activation function follows the first deconvolution layer of the generator, and a Tanh activation function follows the last deconvolution layer.
The invention has the following beneficial effects. Features are selected from the multi-layer intermediate network of the discriminator in the semi-supervised generative adversarial network, and the feature matching losses of the individual layers are weighted to obtain a joint feature matching loss. Further, the standard adversarial loss is linearly weighted with the joint feature matching loss to obtain a weighted loss. Based on the standard adversarial loss, the joint feature loss and the weighted loss, a ground-sea clutter classification model WL-SSGAN based on the improved generative adversarial network is designed. When the input of WL-SSGAN contains only labeled samples, it acts as a fully supervised ground-sea clutter classification model based on the improved generative adversarial network. When the input of WL-SSGAN contains both labeled and unlabeled samples, it acts as a semi-supervised ground-sea clutter classification model based on the improved generative adversarial network. On the one hand, features that help improve classification performance are extracted from different layers of the discriminator, and a weighted loss combining the standard adversarial loss with the joint feature matching loss of the multi-layer network is computed, so a classification model trained with this weighted loss performs better than the traditional approach of using only the adversarial loss or the feature matching loss of a single layer. On the other hand, for samples whose distribution is highly random, the discriminator and the generator can be trained simultaneously; the discriminator extracts useful features from unlabeled samples so that the distribution of the generated samples matches that of the original samples. Manual labeling is thereby avoided, labeling cost is saved and classification efficiency is improved.
Drawings
FIG. 1 is a diagram of the steps of the method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to an embodiment of the present invention;
FIG. 3 is a frequency spectrum of a full-range section of one beam of sky-wave radar data acquired in real time according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a range-azimuth cell sea clutter sample obtained from a spectrogram sample according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a range-azimuth cell ground clutter sample obtained from a spectrogram sample according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a range-azimuth cell ground-sea boundary clutter sample obtained from a spectrogram sample according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
An embodiment of the invention provides a method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network, comprising the following steps:
Step S101, inputting ground-sea clutter training samples into the semi-supervised classification model for training, wherein the training samples comprise labeled samples, unlabeled samples and generated samples, and the semi-supervised generative adversarial network comprises a discriminator and a generator;
in one embodiment, a training sample set is first established before training the semi-supervised classification model. It should be noted that, after the training of the classification model is finished, the classification performance of the classification model is tested by using the test sample, and therefore, a test sample set needs to be established at the same time.
Specifically, the source of the ground-sea clutter data set is the echo power spectrum of sky-wave radar targets or clutter. The sky-wave radar transmitter emits electromagnetic wave signals, which are refracted by the ionosphere to the ground or sea surface, and the echo signals of targets or clutter return to the receiver along the original path, forming ground-sea clutter. Range-azimuth cell clutter samples meeting the project requirements are screened and calibrated from a sky-wave radar ground-sea clutter range-Doppler spectrum database; these samples comprise sea clutter samples, ground clutter samples and ground-sea boundary clutter samples. The sea clutter samples show two peaks symmetric about zero frequency, the ground clutter samples show a single peak near zero frequency, and the ground-sea boundary clutter samples show three peaks near zero frequency. The ground-sea clutter samples are then divided into a training set and a test set.
for example, the total amount of the sea clutter, ground clutter, and ground-sea boundary clutter samples is 1000, wherein the training sample amount is 700 (accounting for 70%) and the testing sample amount is 300 (accounting for 30%). The training samples are randomly divided into two parts, wherein one part is a small number of samples with labels, and the label information is effective; the other part is a large amount of unlabeled samples, and the label information of the unlabeled samples is negligible. Furthermore, the test samples are only used to evaluate the performance of the WL-SSGAN and do not participate in any training process, only the semi-supervised classification model is trained by the training samples.
Step S102, calculating a first adversarial loss of the discriminator according to the information entropy of the labeled samples output by the discriminator, wherein the first adversarial loss represents the fully supervised loss of the discriminator;
Step S103, calculating a second adversarial loss of the discriminator according to the information entropy of the unlabeled samples output by the discriminator and the information entropy of the generated samples, wherein the second adversarial loss represents the semi-supervised loss of the discriminator;
in one embodiment, when the training samples only contain labeled samples and unlabeled samples, the model consists of a semi-supervised generation countermeasure network SSGAN, in which only the discriminators are active and the loss function regularization form consists of two parts, namely a supervised loss and a semi-supervised loss:
L SSGAN =L supervised +L unsupervised
wherein the first adversarial loss is the fully supervised loss, calculated as follows:
$$L_{supervised} = -\mathbb{E}_{(x,y)\sim p_{data}}\,\log p_D(y\mid x,\ y<K+1)$$
wherein p_D is the predictive distribution output by the discriminator for a labeled sample, y is the sample label, K denotes the number of real sample classes (the (K+1)-th class corresponding to generated samples), x is the sample feature, E_{(x,y)~p_data} denotes the expectation operator over labeled samples, and L_supervised is the first adversarial loss.
The second adversarial loss is the semi-supervised loss, calculated as follows:
$$L_{unsupervised} = -\mathbb{E}_{x\sim p_{data}}\,\log\big[1-p_D(y=K+1\mid x)\big] - \mathbb{E}_{x\sim G}\,\log p_D(y=K+1\mid x)$$
wherein log[1 - p_D(y = K+1 | x)] involves the probability that the discriminator assigns an unlabeled sample to the (K+1)-th (generated) class, log p_D(y = K+1 | x) involves the probability that the discriminator assigns a generated sample to the (K+1)-th class, E_{x~p_data} is the expectation operator over unlabeled samples, E_{x~G} is the expectation operator over generated samples, and L_unsupervised is the second adversarial loss.
Step S104, calculating the joint feature loss according to the generated-sample features and the unlabeled-sample features in the multi-layer network of the discriminator;
In one embodiment, optionally, calculating the joint feature loss from the generated-sample features and the unlabeled-sample features in the multi-layer network of the discriminator comprises:
extracting the unlabeled-sample features and the generated-sample features in each layer of a preset number of intermediate layers of the discriminator;
computing the Euclidean distance between the unlabeled-sample features and the generated-sample features in each layer to obtain the feature matching loss of each layer;
and performing a weighted summation of the per-layer feature matching losses to obtain the joint feature matching loss.
Specifically, a set of layers with a preset number is selected from the intermediate layers of the discriminator, the unlabeled-sample features and the generated-sample features are extracted in each selected layer, and the feature matching loss of each layer is obtained as the Euclidean distance between the unlabeled-sample features and the generated-sample features in that layer:
$$L_{FM}^{(l)} = \Big\|\,\mathbb{E}_{x\sim p_{data}} f^{(l)}(x) - \mathbb{E}_{z\sim p_z} f^{(l)}(G(z))\,\Big\|_2^2$$
wherein f^(l)(·) denotes the feature map of the l-th intermediate layer of the discriminator, E_{z~p_z} f^(l)(G(z)) is the expectation of the generated-sample features at the l-th layer, E_{x~p_data} f^(l)(x) is the expectation of the unlabeled-sample features at the l-th layer, and L_FM^(l) is the feature matching loss of the l-th layer.
Further, the per-layer losses are combined, normalizing by the number of channels and the feature length of each layer, and the joint feature matching loss is calculated as follows:
$$L_{FM} = \sum_{l\in l_{mul}} \frac{1}{Ch^{(l)}\,Le^{(l)}}\,L_{FM}^{(l)}$$
wherein l_max denotes the total number of intermediate layers of the discriminator, l_mul denotes the selected preset set of layers with l_mul ⊆ {1, 2, ..., l_max}, Ch^(l) is the number of channels of the l-th layer feature, Le^(l) is the length of that feature, L_FM^(l) is the feature matching loss of the l-th layer, and L_FM is the joint feature matching loss.
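The joint feature matching loss can be sketched as follows; the dictionary-of-features interface and the normalization by Ch^(l)·Le^(l) follow the description above, but the implementation details are assumptions rather than the patent's code:

```python
def joint_feature_matching_loss(feats_unlabeled, feats_generated, layer_set):
    """feats_* : dict {layer index l: tensor of shape (batch, Ch_l, Le_l)} taken from the
    discriminator's intermediate 1-D conv layers; layer_set is the chosen l_mul."""
    loss = 0.0
    for l in layer_set:
        mean_real = feats_unlabeled[l].mean(dim=0)      # E_x f^(l)(x)
        mean_fake = feats_generated[l].mean(dim=0)      # E_z f^(l)(G(z))
        ch, le = mean_real.shape                        # channels Ch^(l), feature length Le^(l)
        # per-layer feature matching loss, normalized by Ch^(l) * Le^(l)
        loss = loss + ((mean_real - mean_fake) ** 2).sum() / (ch * le)
    return loss
```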
Step S105, weighting the second adversarial loss and the joint feature loss to obtain the weighted loss of the generator;
In one embodiment, it should be noted that when the training samples input to the semi-supervised model contain both labeled and unlabeled samples, the discriminator and the generator both operate, and the loss of the generator is the weighted loss, calculated as follows:
$$L_{WL\text{-}SSGAN} = \alpha\,(-L_{unsupervised}) + \beta\,L_{FM}$$
wherein α is a first constant, β is a second constant, L_FM is the joint feature matching loss, L_WL-SSGAN is the weighted loss, and L_unsupervised is the second adversarial loss.
The values of α and β may be set according to the actual situation, with α + β = 1 and α, β ≥ 0. The weights α and β control the balance between the contributions of the adversarial loss and the feature matching loss to WL-SSGAN training. In addition, the choice of l_mul controls which layers are feature-weighted, which in turn measures the influence of different feature combinations on the classification performance of WL-SSGAN. In this embodiment, to improve the generalization ability of the WL-SSGAN model, the features extracted from a single layer are not used directly as the feature matching loss; instead, the features of multiple layers are weighted to obtain the joint feature matching loss L_FM, so as to achieve better semi-supervised classification performance.
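Combining the two terms, the generator's weighted loss reduces to a simple linear combination. A small illustrative helper follows; note that the unlabeled term inside L_unsupervised does not depend on the generator, so only the generated-sample term actually drives the generator's gradient:

```python
def generator_weighted_loss(l_unsupervised, l_fm, alpha=0.5, beta=0.5):
    """WL-SSGAN generator loss: alpha * (-L_unsupervised) + beta * L_FM, with alpha + beta = 1."""
    return alpha * (-l_unsupervised) + beta * l_fm
```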
Step S106, repeatedly training the semi-supervised classification model, updating the classification weights of the discriminator based on the first adversarial loss, the second adversarial loss and the weighted loss, and determining the semi-supervised classification model that meets the preset classification accuracy.
In one embodiment, the semi-supervised model is trained multiple times with the training samples; at each iteration the first adversarial loss, the second adversarial loss and the weighted loss are back-propagated to update the classification weights of the classifier, and the classification category and probability are output, until the classification accuracy of the classifier meets the preset classification accuracy, giving the final semi-supervised classification model.
With reference to fig. 2, the implementation of the above steps S101 to S106 is described as follows.
The training-set sample sizes of the first through tenth groups are as follows:
(1) The amount of labeled sample was 15. The amount of the labeled sea clutter samples is 5, the amount of the labeled ground clutter samples is 5, and the amount of the labeled ground-sea boundary clutter samples is 5. The amount of unlabeled sample was 2085. The amount of the unlabeled sea clutter samples is 695, the amount of the unlabeled ground clutter samples is 695, and the amount of the unlabeled ground sea boundary clutter samples is 695;
(2) The amount of labeled sample was 30. The amount of the labeled sea clutter samples is 10, the amount of the labeled ground clutter samples is 10, and the amount of the labeled ground-sea boundary clutter samples is 10. The unlabeled exemplar size is 2070. The amount of the unlabeled sea clutter samples is 690, the amount of the unlabeled ground clutter samples is 690, and the amount of the unlabeled ground sea boundary clutter samples is 690;
(3) The amount of labeled sample was 45. The amount of the labeled sea clutter samples is 15, the amount of the labeled ground clutter samples is 15, and the amount of the labeled ground-sea boundary clutter samples is 15. The amount of unlabeled sample was 2055. Wherein the unlabeled sea clutter sample size is 685, the unlabeled ground clutter sample size is 685, and the unlabeled ground sea boundary clutter sample size is 685;
(4) The amount of labeled sample was 60. The amount of the labeled sea clutter samples is 20, the amount of the labeled ground clutter samples is 20, and the amount of the labeled ground-sea boundary clutter samples is 20. The unlabeled sample size was 2040. The amount of the unlabeled sea clutter samples is 680, the amount of the unlabeled ground clutter samples is 680, and the amount of the unlabeled ground sea boundary clutter samples is 680;
(5) The amount of labeled sample was 75. The amount of the labeled sea clutter samples is 25, the amount of the labeled ground clutter samples is 25, and the amount of the labeled ground-sea boundary clutter samples is 25. The unlabeled sample size was 2025. Wherein the unlabeled sea clutter sample size is 675, the unlabeled ground clutter sample size is 675, and the unlabeled ground sea boundary clutter sample size is 675;
(6) The labeled sample size was 90. The amount of the labeled sea clutter samples is 30, the amount of the labeled ground clutter samples is 30, and the amount of the labeled ground-sea boundary clutter samples is 30. The unlabeled sample size is 2010. The amount of the unlabeled sea clutter samples is 670, the amount of the unlabeled ground clutter samples is 670, and the amount of the unlabeled ground sea boundary clutter samples is 670;
(7) The amount of labeled sample was 105. The amount of the labeled sea clutter samples is 35, the amount of the labeled ground clutter samples is 35, and the amount of the labeled ground-sea boundary clutter samples is 35. The unlabeled sample size was 1995. The amount of the unlabeled sea clutter samples is 665, the amount of the unlabeled ground clutter samples is 665, and the amount of the unlabeled ground sea boundary clutter samples is 665;
(8) The amount of labeled sample was 120. The amount of the labeled sea clutter samples is 40, the amount of the labeled ground clutter samples is 40, and the amount of the labeled ground-sea boundary clutter samples is 40. The unlabeled sample size was 1980. The amount of the unlabeled sea clutter samples is 660, the amount of the unlabeled ground clutter samples is 660, and the amount of the unlabeled ground sea boundary clutter samples is 660;
(9) The labeled sample size was 135. The amount of the labeled sea clutter samples is 45, the amount of the labeled ground clutter samples is 45, and the amount of the labeled ground-sea boundary clutter samples is 45. The unlabeled sample size was 1965. The amount of the unlabeled sea clutter samples is 655, the amount of the unlabeled ground clutter samples is 655, and the amount of the unlabeled ground sea boundary clutter samples is 655;
(10) The amount of labeled sample was 150. The amount of the labeled sea clutter samples is 50, the amount of the labeled ground clutter samples is 50, and the amount of the labeled ground-sea boundary clutter samples is 50. The amount of unlabeled specimen was 1950. The amount of the unlabeled sea clutter samples is 650, the amount of the unlabeled ground clutter samples is 650, and the amount of the unlabeled ground sea boundary clutter samples is 650.
When only labeled samples are used as the input of the semi-supervised classification model, only the discriminator operates, and the loss of the discriminator is the sum of the first adversarial loss and the second adversarial loss.
In this case, the discriminator loss L_D of WL-SSGAN is the standard SSGAN loss:
$$L_D = L_{supervised} + L_{unsupervised}$$
wherein L_supervised is the first adversarial loss and L_unsupervised is the second adversarial loss.
When both labeled and unlabeled samples are used as the input of the semi-supervised classification model, the discriminator and the generator both operate; the generated samples are produced by feeding random noise into the generator, while the discriminator extracts useful features from the unlabeled samples. It should be noted that the labeled samples are obtained through manual labeling.
At this point, the generator loss of WL-SSGAN is the proposed weighted loss L_WL-SSGAN:
$$L_{WL\text{-}SSGAN} = \alpha L_{adv} + \beta L_{FM} = \alpha\,(-L_{unsupervised}) + \beta\,L_{FM}$$
wherein α + β = 1 and α, β ≥ 0.
The feature matching loss of a single layer is:
$$L_{FM}^{(l)} = \Big\|\,\mathbb{E}_{x\sim p_{data}} f^{(l)}(x) - \mathbb{E}_{z\sim p_z} f^{(l)}(G(z))\,\Big\|_2^2$$
and the joint feature matching loss function is:
$$L_{FM} = \sum_{l\in l_{mul}} \frac{1}{Ch^{(l)}\,Le^{(l)}}\,L_{FM}^{(l)}$$
To facilitate understanding of L_WL-SSGAN, substituting D(x) for 1 - p_D(y = K+1 | x) converts L_unsupervised into the standard GAN loss form:
$$L_{unsupervised} = -\mathbb{E}_{x\sim p_{data}}\log D(x) - \mathbb{E}_{z\sim p_z}\log\big[1 - D(G(z))\big]$$
Further, L_WL-SSGAN can be rewritten in the following more detailed form:
$$L_{WL\text{-}SSGAN} = \alpha\Big(\mathbb{E}_{x\sim p_{data}}\log D(x) + \mathbb{E}_{z\sim p_z}\log\big[1 - D(G(z))\big]\Big) + \beta\sum_{l\in l_{mul}}\frac{1}{Ch^{(l)}\,Le^{(l)}}\Big\|\,\mathbb{E}_{x\sim p_{data}} f^{(l)}(x) - \mathbb{E}_{z\sim p_z} f^{(l)}(G(z))\,\Big\|_2^2$$
based on the established ground-sea clutter training set containing a small number of labeled samples and a large number of unlabeled samples, according to the loss part L of the discriminator D Sum generator loss section L supervised And alternately updating the parameters of the constructed ground-sea clutter semi-supervised classification model based on the improvement generation countermeasure network, and performing semi-supervised learning.
The classification performance of the trained ground-sea clutter classification model is then tested on the test set.
Optionally, a LeakyReLU activation function follows the first convolutional layer of the discriminator and every subsequent convolutional layer.
In one embodiment, the discriminator contains seven 1-dimensional convolutional layers and two fully connected layers, where all convolutional kernels have a size of 4 and a stride of 2. The first convolutional layer is followed by a LeakyReLU activation function; each of the remaining convolutional layers is followed by 1-dimensional batch normalization and a LeakyReLU activation function; and the first fully connected layer is followed by a LeakyReLU activation function. In addition, each convolutional layer is followed by a residual block and a LeakyReLU activation function. The input of the discriminator is a 1×512-dimensional signal, and the output is a 3-dimensional classification result.
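A simplified PyTorch version of such a discriminator is sketched below. The residual blocks mentioned above are omitted, the channel widths and padding are assumptions, and the head outputs K+1 logits so that the SSGAN losses can be computed (the 3-class result reported above corresponds to the first K logits at prediction time):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Seven 1-D conv layers (kernel 4, stride 2) plus two fully connected layers.
    Input: (batch, 1, 512) clutter spectrum; outputs: (K+1)-class logits and a dict
    of intermediate features f^(l) used for feature matching."""
    def __init__(self, num_classes=3):
        super().__init__()
        chans = [1, 32, 64, 128, 256, 256, 512, 512]       # channel widths are assumed
        blocks = []
        for i in range(7):
            layers = [nn.Conv1d(chans[i], chans[i + 1], kernel_size=4, stride=2, padding=1)]
            if i > 0:                                      # batch norm after every conv but the first
                layers.append(nn.BatchNorm1d(chans[i + 1]))
            layers.append(nn.LeakyReLU(0.2))
            blocks.append(nn.Sequential(*layers))
        self.convs = nn.ModuleList(blocks)
        self.fc1 = nn.Linear(512 * 4, 128)                 # 512 / 2**7 = 4 samples remain per channel
        self.fc2 = nn.Linear(128, num_classes + 1)         # K real classes + 1 "generated" class

    def forward(self, x):
        feats = {}
        for l, block in enumerate(self.convs, start=1):
            x = block(x)
            feats[l] = x                                   # intermediate features f^(l)(x)
        h = F.leaky_relu(self.fc1(x.flatten(1)), 0.2)
        return self.fc2(h), feats
```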
Optionally, a ReLU activation function follows the first deconvolution layer of the generator, and a Tanh activation function follows the last deconvolution layer.
In one embodiment, the generator contains eight 1-dimensional deconvolution layers, with all kernels having a size of 4, a stride of 1 for the first layer and a stride of 2 for the remaining layers. Each of the first seven deconvolution layers is followed by 1-dimensional batch normalization and a ReLU activation function, and the last deconvolution layer is followed by a Tanh activation function. In addition, each of the first seven deconvolution layers is followed by a residual block and a ReLU activation function. The input of the generator is 100-dimensional Gaussian random noise, and the output is a 1×512 signal.
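A matching sketch of the generator follows; as with the discriminator, the residual blocks are omitted and the channel widths and padding are assumed, while the layer count, kernel size, strides, activations and 100-dimensional noise input follow the description above:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Eight 1-D transposed conv layers (kernel 4; stride 1 for the first, 2 afterwards).
    Input: (batch, 100) Gaussian noise; output: (batch, 1, 512) synthetic spectrum."""
    def __init__(self, noise_dim=100):
        super().__init__()
        chans = [noise_dim, 512, 512, 256, 256, 128, 64, 32, 1]   # channel widths are assumed
        layers = []
        for i in range(8):
            stride = 1 if i == 0 else 2
            padding = 0 if i == 0 else 1
            layers.append(nn.ConvTranspose1d(chans[i], chans[i + 1], kernel_size=4,
                                             stride=stride, padding=padding))
            if i < 7:                                   # batch norm + ReLU on the first seven layers
                layers += [nn.BatchNorm1d(chans[i + 1]), nn.ReLU()]
            else:
                layers.append(nn.Tanh())                # Tanh after the last deconvolution layer
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z.unsqueeze(-1))                # (batch, 100) -> (batch, 100, 1) -> (batch, 1, 512)
```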
Based on the constructed semi-supervised ground-sea clutter classification model for sky-wave radar based on the improved generative adversarial network, the semi-supervised classification performance of WL-SSGAN is tested.
In the experiment of the present embodiment, the environment and corresponding versions used to train the WL-SSGAN model are: system: Windows 10 (64-bit); GPU: NVIDIA GeForce RTX 3090; CUDA: 11.6; Python: 3.9.0 (Anaconda 4.11.0); torch: 1.11.0; torchvision: 0.12.0; numpy: 1.22.3.
The parameters for training the WL-SSGAN are configured as: batch size 64, learning rate 0.0001, LeakyReLU slope 0.2, Adam optimizer with beta1 = 0.5 and beta2 = 0.999, data normalization, and weight initialization.
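These settings correspond roughly to the following data-loading sketch (the scaling to [-1, 1] is an assumption made to match the generator's Tanh output and is not stated explicitly above):

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_loaders(x_lab, y_lab, x_unlab, batch_size=64):
    """Build labeled/unlabeled loaders with the reported batch size of 64.
    Spectra are expected as (N, 1, 512) arrays and are scaled to [-1, 1]."""
    def scale(x):
        x = np.asarray(x, dtype=np.float32)
        x_min, x_max = x.min(), x.max()
        return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

    x_lab = torch.from_numpy(scale(x_lab))
    x_unlab = torch.from_numpy(scale(x_unlab))
    y_lab = torch.as_tensor(y_lab, dtype=torch.long)
    labeled = DataLoader(TensorDataset(x_lab, y_lab), batch_size=batch_size, shuffle=True)
    unlabeled = DataLoader(TensorDataset(x_unlab), batch_size=batch_size, shuffle=True)
    return {"labeled": labeled, "unlabeled": unlabeled}
```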
To evaluate the performance of WL-SSGAN, the number of labeled samples γ in the training set is set to 15, 30, 45, 60, 75, 90, 105, 120, 135 and 150, with the remainder being the corresponding number of unlabeled samples. The influence of three factors on WL-SSGAN classification performance is then considered: the number of labeled samples γ, the relative weights of α and β, and the layer set l_mul. The results are compared with a fully supervised classifier trained with only the small number of labeled samples.
First, assume that all intermediate layers of the discriminator contribute to the generator loss, i.e. l_mul = {1, 2, 3, 4, 5, 6, 7}, so that the influence of different relative weights of α and β on WL-SSGAN classification performance is measured with the joint feature matching loss fixed. The experimental results are shown in Table 1, where "base" denotes the classification accuracy of the fully supervised classifier trained with only the small number of labeled samples.
Table 1: Classification accuracy of WL-SSGAN under different values of α and β
From Table 1, the following conclusions can be drawn: (1) The classification performance of WL-SSGAN is consistently higher than that of the fully supervised classifier, which indicates that, by adding a generator, the model can extract latent features relevant to ground-sea clutter classification from a large number of unlabeled samples. Therefore, WL-SSGAN can improve the classification performance of the fully supervised model when only a few labeled samples are available. (2) The improvement of WL-SSGAN becomes more pronounced as the labeled ground-sea clutter sample size γ decreases, because the fully supervised classifier tends to overfit when the training sample size is small. (3) With the layer set l_mul of the joint feature matching loss L_FM fixed, the relative weights of α and β clearly influence the classification accuracy, and using only the adversarial loss L_adv or only the joint feature matching loss L_FM, i.e. (α, β) = (1.0, 0.0) or (α, β) = (0.0, 1.0), does not yield the highest accuracy. Thus, the proposed weighted loss function L_WL-SSGAN is superior to L_adv and L_FM alone.
Second, assume that the adversarial loss and the joint feature matching loss contribute equally to the generator loss, i.e. (α, β) = (0.5, 0.5). With the contributions of the adversarial loss and the feature matching loss fixed, the influence of different choices of l_mul on WL-SSGAN classification performance is measured. The experimental results are shown in Table 2.
Table 2: Classification accuracy of WL-SSGAN under different values of l_mul
From Table 2, the following conclusions can be drawn: with the contributions α and β of the adversarial loss L_adv and the joint feature matching loss L_FM fixed, the choice of l_mul clearly influences the classification accuracy, and using the feature matching loss of only a single layer or a joint feature matching loss covering all layers, i.e. l_mul = {1} or l_mul = {1, ..., 7}, does not yield the highest accuracy. This indicates that: (1) the proposed joint feature matching loss L_FM outperforms the conventional single-layer feature matching loss; (2) combining all intermediate-layer features of the discriminator does not provide the best semi-supervised classification performance; on the contrary, the features extracted by some layers may suppress classification performance, so the best classification performance depends on an appropriate combination of layers. Therefore, the proposed WL-SSGAN can realize semi-supervised classification of ground-sea clutter, and the proposed weighted loss is superior to the adversarial loss and the feature matching loss.
As shown in fig. 3, the ground-sea clutter samples are sky-wave radar echo signals whose data distribution is highly random, which makes it extremely difficult to train the SSGAN generator with the standard adversarial loss alone. In contrast, the proposed semi-supervised ground-sea clutter classification method based on the improved generative adversarial network, WL-SSGAN, exploits this randomness so that the feature matching loss captures diverse sample features, as shown in figs. 4-6, which alleviates mode collapse of the generator and overfitting of the discriminator to a certain extent. Therefore, feature weighting is applied to the intermediate layers of the SSGAN discriminator and the joint feature matching loss is proposed. Further, the WL-SSGAN weighted loss is obtained by linearly weighting the standard adversarial loss with the joint feature matching loss. Semi-supervised classification of ground-sea clutter samples is thus realized, and the proposed weighted loss is superior to the adversarial loss or the feature matching loss alone.

Claims (10)

1. A method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network, comprising the following steps:
inputting ground-sea clutter training samples into a semi-supervised classification model for training, wherein the training samples comprise labeled samples, unlabeled samples and generated samples, and the semi-supervised generative adversarial network comprises a discriminator and a generator;
calculating a first adversarial loss of the discriminator according to the information entropy of the labeled samples output by the discriminator, wherein the first adversarial loss represents the fully supervised loss of the discriminator;
calculating a second adversarial loss of the discriminator according to the information entropy of the unlabeled samples and the information entropy of the generated samples, wherein the second adversarial loss represents the semi-supervised loss of the discriminator;
calculating a joint feature loss according to the generated-sample features and the unlabeled-sample features in the multi-layer network of the discriminator;
weighting the second adversarial loss and the joint feature loss to obtain a weighted loss for the generator;
and determining the semi-supervised classification model meeting a preset classification accuracy by repeatedly training the semi-supervised classification model and updating the classification weights of the discriminator based on the first adversarial loss, the second adversarial loss and the weighted loss.
2. The method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to claim 1, wherein the first adversarial loss is calculated as follows:
$$L_{supervised} = -\mathbb{E}_{(x,y)\sim p_{data}}\,\log p_D(y\mid x,\ y<K+1)$$
wherein p_D is the predictive distribution output by the discriminator for a labeled sample, y is the sample label, K denotes the number of real sample classes, x is the sample feature, E_{(x,y)~p_data} denotes the expectation operator over labeled samples, and L_supervised is the first adversarial loss.
3. The method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to claim 1, wherein the second adversarial loss is calculated as follows:
$$L_{unsupervised} = -\mathbb{E}_{x\sim p_{data}}\,\log\big[1-p_D(y=K+1\mid x)\big] - \mathbb{E}_{x\sim G}\,\log p_D(y=K+1\mid x)$$
wherein log[1 - p_D(y = K+1 | x)] involves the probability that the discriminator assigns an unlabeled sample to the (K+1)-th class, log p_D(y = K+1 | x) involves the probability that the discriminator assigns a generated sample to the (K+1)-th class, E_{x~p_data} is the expectation operator over unlabeled samples, E_{x~G} is the expectation operator over generated samples, and L_unsupervised is the second adversarial loss.
4. The method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to claim 1, wherein calculating the joint feature loss from the generated-sample features and the unlabeled-sample features in the multi-layer network of the discriminator comprises:
extracting the unlabeled-sample features and the generated-sample features in each layer of a preset number of intermediate layers of the discriminator;
computing the Euclidean distance between the unlabeled-sample features and the generated-sample features in each layer to obtain the feature matching loss of each layer;
and performing a weighted summation of the per-layer feature matching losses to obtain the joint feature matching loss.
5. The method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to claim 4, wherein the joint feature matching loss is calculated as follows:
$$L_{FM} = \sum_{l\in l_{mul}} \frac{1}{Ch^{(l)}\,Le^{(l)}}\,\Big\|\,\mathbb{E}_{x\sim p_{data}} f^{(l)}(x) - \mathbb{E}_{z\sim p_z} f^{(l)}(G(z))\,\Big\|_2^2$$
wherein l_max denotes the total number of intermediate layers of the discriminator, l_mul denotes the selected preset set of layers with l_mul ⊆ {1, 2, ..., l_max}, Ch^(l) is the number of channels of the l-th layer feature, Le^(l) is the length of that feature, the squared norm is the feature matching loss of the l-th layer, E_{z~p_z} f^(l)(G(z)) is the expectation of the generated-sample features at the l-th layer, E_{x~p_data} f^(l)(x) is the expectation of the unlabeled-sample features at the l-th layer, and L_FM is the joint feature matching loss.
6. The method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to claim 1, wherein the weighted loss is calculated as follows:
$$L_{WL\text{-}SSGAN} = \alpha\,(-L_{unsupervised}) + \beta\,L_{FM}$$
wherein α is a first constant, β is a second constant, L_FM is the joint feature matching loss, and L_WL-SSGAN is the weighted loss.
7. The method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to claim 1, further comprising:
when only labeled samples are used as the input of the semi-supervised classification model, only the discriminator operates, and the loss of the discriminator is the sum of the first adversarial loss and the second adversarial loss.
8. The method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to claim 1, further comprising:
when labeled samples and unlabeled samples are used as the input of the semi-supervised classification model, the discriminator and the generator both operate, and the generated samples are produced by feeding random noise into the generator.
9. The method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to claim 1, wherein a LeakyReLU activation function follows the first convolutional layer of the discriminator and every subsequent convolutional layer.
10. The method for constructing a semi-supervised ground-sea clutter classification model for sky-wave radar based on an improved generative adversarial network according to claim 1, wherein a ReLU activation function follows the first deconvolution layer of the generator and a Tanh activation function follows the last deconvolution layer of the generator.
CN202211512196.3A 2022-11-29 2022-11-29 Method for constructing sky-wave radar land-sea clutter semi-supervised classification model for improving generation of countermeasure network Active CN115859167B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211512196.3A CN115859167B (en) 2022-11-29 2022-11-29 Method for constructing sky-wave radar land-sea clutter semi-supervised classification model for improving generation of countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211512196.3A CN115859167B (en) 2022-11-29 2022-11-29 Method for constructing sky-wave radar land-sea clutter semi-supervised classification model for improving generation of countermeasure network

Publications (2)

Publication Number Publication Date
CN115859167A true CN115859167A (en) 2023-03-28
CN115859167B CN115859167B (en) 2025-08-15

Family

ID=85667829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211512196.3A Active CN115859167B (en) 2022-11-29 2022-11-29 Method for constructing sky-wave radar land-sea clutter semi-supervised classification model for improving generation of countermeasure network

Country Status (1)

Country Link
CN (1) CN115859167B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110110745A (en) * 2019-03-29 2019-08-09 上海海事大学 Based on the semi-supervised x-ray image automatic marking for generating confrontation network
US20220129735A1 (en) * 2019-05-20 2022-04-28 Institute of intelligent manufacturing, Guangdong Academy of Sciences Semi-supervised Hyperspectral Data Quantitative Analysis Method Based on Generative Adversarial Network
CN110689086A (en) * 2019-10-08 2020-01-14 郑州轻工业学院 Semi-supervised high-resolution remote sensing image scene classification method based on generating countermeasure network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘雨溪; 张铂; 王斌: "Semi-supervised semantic segmentation of remote sensing images based on generative adversarial networks", Journal of Infrared and Millimeter Waves, no. 04, 15 August 2020 (2020-08-15) *
朱克凡; 王杰贵; 吴世俊: "GAN-based semi-supervised low-resolution radar target recognition algorithm", Journal of Detection & Control, no. 06, 26 December 2019 (2019-12-26) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117217103A (en) * 2023-11-09 2023-12-12 南京航空航天大学 Spaceborne SAR sea clutter generation method and system based on multi-scale attention mechanism
CN117217103B (en) * 2023-11-09 2024-03-15 南京航空航天大学 Satellite-borne SAR sea clutter generation method and system based on multi-scale attention mechanism

Also Published As

Publication number Publication date
CN115859167B (en) 2025-08-15

Similar Documents

Publication Publication Date Title
Li et al. IncepTCN: A new deep temporal convolutional network combined with dictionary learning for strong cultural noise elimination of controlled-source electromagnetic data
CN115372960B (en) Method for enhancing sky-wave radar land-sea clutter data of improved generation countermeasure network
CN114861537A (en) GNSS-R sea surface wind speed inversion method and system based on CNN multi-information fusion
CN113759323B (en) Signal sorting method and device based on improved K-Means joint convolution self-encoder
CN108846426A (en) Polarization SAR classification method based on the twin network of the two-way LSTM of depth
Mostajabi et al. Single-sensor source localization using electromagnetic time reversal and deep transfer learning: application to lightning
CN111707999A (en) A detection method of small floating objects on sea surface based on the combination of multi-feature and ensemble learning
CN109753874A (en) A kind of low slow small classification of radar targets method based on machine learning
CN112684427A (en) Radar target identification method based on serial quadratic reinforcement training
Du et al. Balanced neural architecture search and its application in specific emitter identification
Qu et al. Human activity recognition based on WRGAN-GP-synthesized micro-Doppler spectrograms
CN113420812B (en) Polarized SAR image classification method based on evolutionary convolutional neural network
CN116166982A (en) Multi-frequency multi-polarization ultra-narrow pulse echo target fusion identification method based on feature constraint
CN115859167B (en) Method for constructing sky-wave radar land-sea clutter semi-supervised classification model for improving generation of countermeasure network
Luwanga et al. Automatic spread-F detection using deep learning
Li et al. CREDIT-X1local: A reference dataset for machine learning seismology from ChinArray in Southwest China
CN107341511A (en) Classification of Polarimetric SAR Image method based on super-pixel Yu sparse self-encoding encoder
CN116580252A (en) Hyperspectral Image Classification Method Based on Multiscale Dense Connection and Feature Aggregation Network
Zhang et al. An incremental recognition method for MFR working modes based on deep feature extension in dynamic observation scenarios
CN115308705A (en) Multi-pose extremely narrow pulse echo generation method based on generation countermeasure network
CN110033043A (en) Radar range profile's based on condition production confrontation network are refused to sentence method
CN109635738A (en) A kind of image characteristic extracting method and system
CN117454245A (en) A method and device for classifying ground and sea clutter based on graph neural network
CN108509835A (en) PolSAR image terrain classification methods based on DFIC super-pixel
CN119939185A (en) A signal source direction positioning method based on machine learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant