
CN120071136A - SAR image sparse countermeasure attack method based on generator - Google Patents

SAR image sparse adversarial attack method based on a generator

Info

Publication number
CN120071136A
CN120071136A (application CN202510137476.8A)
Authority
CN
China
Prior art keywords
difference
image
point
generator
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510137476.8A
Other languages
Chinese (zh)
Inventor
高飞
李明阳
刘天金
王俊
孙进平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202510137476.8A priority Critical patent/CN120071136A/en
Publication of CN120071136A publication Critical patent/CN120071136A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0475 Generative networks
    • G06N3/08 Learning methods
    • G06N3/094 Adversarial learning
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements using pattern recognition or machine learning
    • G06V10/764 Arrangements using classification, e.g. of video objects
    • G06V10/82 Arrangements using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract


The present invention discloses a SAR image sparse adversarial attack method based on a generator, comprising: S1, generating an intensity interference value for each pixel of the SAR image through the generator to obtain an intensity interference image; S2, generating a position interference image through key point extraction, based on a quantization method combining the point group sliding difference and the single-point surround difference; S3, multiplying the generated intensity interference image and the position interference image element by element to obtain an adversarial sample, and updating the parameters of the generator through a difference loss function. The invention uses a generator with an amplitude constraint to produce the intensity interference image, a key point extraction method to produce the position interference image and thereby achieve a sparse attack, and a difference loss function to achieve rapid convergence of the training model, which greatly improves the concealment of the adversarial attack and its ability to deceive different types of neural networks.

Description

SAR image sparse adversarial attack method based on a generator
Technical Field
The invention belongs to the field of automatic image recognition and adversarial attack, and mainly relates to improving the spoofing rate of adversarial attacks on SAR images and improving the concealment and universality of the attack method.
Background
Synthetic aperture radar (SAR) is a high-resolution, all-day, all-weather, multi-polarization, multi-band imaging radar technology that plays an important role in applications such as geographic surveying, climate change research, environmental monitoring, and military information processing. With the wide application of SAR in remote sensing, extracting target information from SAR data has become a research hotspot. Automatic target recognition (ATR) of SAR images is a mature and effective technique in current recognition research, and more and more researchers are exploring the application of deep learning in SAR-ATR.
However, since the linear and nonlinear operations in a neural network model amplify small perturbations, slightly modifying certain positions of a SAR image can strongly affect the confidence of the SAR-ATR classification result and even cause the neural network model to misclassify. The act of deceiving a DNN (Deep Neural Network) by adding interference is referred to as an adversarial attack, and the images modified by the attack are referred to as adversarial examples (AEs); their existence indicates a defect of neural networks in classification tasks. If these vulnerabilities are exploited to design a universal adversarial attack method that adds weak interference to SAR images, the resulting adversarial sample dataset can deceive various types of neural network models and protect one's own military SAR images from being correctly identified by an adversary, which gives the method a certain military defense value.
A universal adversarial attack method must be able to deceive various neural network models and achieve a high spoofing rate while modifying the image as little as possible. Early adversarial attack methods were designed around vulnerabilities of the neural network learning process; for example, the classical FGSM (Fast Gradient Sign Method) attack exploits the gradient-descent learning of neural networks by adding interference to the input along the direction of steepest gradient ascent, causing the network to misclassify the image, and the subsequent I-FGSM and MI-FGSM strengthen the attack with iterative and momentum methods. However, these attacks change almost every pixel of the image; the interference range is too large, so the resulting adversarial sample set differs greatly from the original data and the concealment is poor. Moreover, gradient-based attacks target a specific neural network and depend too heavily on the parameters of that model, so the generated adversarial samples achieve a high spoofing rate only on a single model and cannot efficiently deceive other neural network models.
To address the overly large interference range, some researchers introduce the l0 norm to limit the number of attacked pixels. For example, JSMA (Jacobian-based Saliency Map Attack) uses the gradient of the target prediction to determine the influence of different pixels on the target classification and then perturbs only a specific proportion of pixels, i.e., a sparse adversarial attack. Although this method achieves sparsity by selecting important points, it is designed for a specific trained network model and suits only a single network structure; the generated adversarial samples have a poor spoofing effect on other neural network models, i.e., low transfer spoofing performance and no universal attack capability.
Two problems must therefore be solved. The first is to achieve good spoofing performance while controlling the intensity and range of the interference, that is, to balance the concealment and the deceptiveness of the adversarial sample set. The second is to design a universal attack method, so that the generated adversarial samples can deceive different types of neural network models, improving the universal deception capability of the adversarial attack.
Disclosure of Invention
The invention focuses on the concealment, deceptiveness and universality of adversarial attack methods, and provides a generator-based SAR image sparse attack method that controls the number and intensity of perturbations, is independent of any specific neural network structure, and improves the spoofing rate and transfer spoofing rate of the attack while preserving its concealment.
The invention relates to a generator-based SAR image sparse adversarial attack method, which comprises the following steps:
S1, generating an intensity interference value for each pixel of the SAR image through a generator to obtain an intensity interference image;
S2, generating a position interference image through key point extraction, based on a quantization method combining the point group sliding difference and the single-point surround difference;
S3, multiplying the generated intensity interference image and the position interference image element by element to obtain an adversarial sample, and updating the parameters of the generator through a difference loss function.
To address the low spoofing rate and low transfer spoofing rate discussed in the background, the method designs a generator-based way of producing the intensity interference image and a key-point-based way of producing the position interference image; the nonlinearity of the neural network improves the deceptiveness of the attack, and the structural constraint improves its transfer deceptiveness.
The foregoing is only an overview of the technical solution of the present invention. So that the technical means of the invention can be understood more clearly and implemented according to the description, and to make the above and other objects, features and advantages of the invention more apparent, detailed embodiments are given below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a generator-based SAR image sparse adversarial attack method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a calculation process of a point group sliding difference value according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a calculation process of a single-point surround difference value according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the correspondence between optical images and SAR images of the MSTAR dataset according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a process for extracting key points and removing isolated points according to an embodiment of the present invention;
FIG. 6 is a graph showing the spoofing rate as a function of epoch when the attack method trains a model on the training set, according to an embodiment of the present invention;
fig. 7 is a graph comparing the position interference images obtained at different λ values according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described in connection with the embodiments, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Method embodiment
According to an embodiment of the invention, a generator-based SAR image sparse adversarial attack method is provided, and fig. 1 is a flowchart of the method. As shown in fig. 1, the method specifically includes:
(1) Generating an intensity interference value of each pixel point of the SAR image through a generator to obtain an intensity interference image;
(2) Generating a position interference image through key point extraction based on a quantization method combining the point group sliding difference value and the single-point surrounding difference value;
(3) Multiplying the generated intensity interference image and the position interference image element by element to obtain an adversarial sample, and updating the parameters of the generator through a difference loss function.
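The three steps above can be sketched in code. This is a minimal illustration rather than the patented implementation: the generator, the position mask and the value of ε are placeholders, and the sparse perturbation is interpreted as the amplitude-constrained interference restricted to the masked key points.

```python
import numpy as np

def make_adversarial(x, generator, position_mask, eps=16.0):
    """Sketch of the attack pipeline (names and eps are illustrative).

    x             : original SAR image, float array in [0, 255]
    generator     : callable returning an intensity-interference image for x
    position_mask : binary key-point mask (1 = perturb this pixel)
    eps           : amplitude bound so perturbed pixels stay in [x-eps, x+eps]
    """
    x_g = generator(x)                                # raw intensity interference
    x_strength = np.clip(x_g, x - eps, x + eps)       # l_inf amplitude constraint
    perturbation = (x_strength - x) * position_mask   # sparse: only key points move
    x_adv = np.clip(x + perturbation, 0.0, 255.0)     # adversarial sample
    return x_adv
```

During training, x_adv would be fed to the discriminator network and the difference loss would drive the generator update; here only the sample composition is shown.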
Fig. 2 is a schematic diagram of the calculation process of the point group sliding difference according to an embodiment of the present invention. As shown in fig. 2, the point group sliding difference d_slide is computed over the point group of pixels satisfying (x−x_i)² + (y−y_j)² ≤ r², where w(x, y) is the weight function, chosen as a two-dimensional Gaussian to emphasize the center point, and I(x, y) is the pixel value of the image at (x, y).
The point group sliding difference represents the overall change of the image patch centered on a pixel as the point group slides: if the pixel lies on a boundary such as a contour, the pixel values between different point groups change strongly during sliding, so d_slide is large; the value therefore quantifies whether the pixel lies on the boundary of the target.
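The exact expression for d_slide is given in fig. 2 rather than in the text, so the following is only a plausible reading of the verbal description: a two-dimensional Gaussian weight over the circular point group of radius r, summed over the absolute pixel-value change produced by sliding the group by (u, v). The parameters r, u, v and sigma are illustrative, and boundary pixels are clamped to the edge, matching the claim that overflowing values are set equal.

```python
import numpy as np

def point_group_sliding_difference(img, i, j, r=3, u=1, v=1, sigma=1.5):
    """Gaussian-weighted change of the point group centred at (i, j)
    when it slides by (u, v); an assumed form of the patent's d_slide."""
    h, w = img.shape
    d = 0.0
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            if dx * dx + dy * dy > r * r:
                continue                      # outside the circular point group
            weight = np.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))
            x0 = min(max(i + dx, 0), h - 1)   # clamp: boundary pixels reuse edge values
            y0 = min(max(j + dy, 0), w - 1)
            x1 = min(max(i + dx + u, 0), h - 1)
            y1 = min(max(j + dy + v, 0), w - 1)
            d += weight * abs(float(img[x1, y1]) - float(img[x0, y0]))
    return d
```

On a flat region this measure is zero, and it grows when the group straddles a contour, which is the behaviour the text attributes to d_slide.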
FIG. 3 is a schematic diagram of the calculation process of the single-point surround difference according to an embodiment of the present invention. As shown in fig. 3, the single-point surround difference d_surround is computed from I_1(x_i, y_j), the pixel values of the 8 pixels in the first ring around (x_i, y_j), and I_2(x_i, y_j), the pixel values of the 16 pixels in the second ring, with s_1 and s_2 as adjustment parameters.
The single-point surround difference d_surround measures how strongly a pixel differs from its neighboring pixels, and determines whether the pixel is an abrupt-change point that differs markedly from the surrounding points.
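The text defines the two rings (8 pixels at distance 1, 16 pixels at distance 2) but the combining formula appears only in fig. 3, so the sketch below assumes a weighted sum of mean absolute differences with weights s_1 and s_2; the weights and the clamping at image borders are assumptions.

```python
import numpy as np

def single_point_surround_difference(img, i, j, s1=1.0, s2=0.5):
    """Difference between pixel (i, j) and its two surrounding rings:
    8 pixels at Chebyshev distance 1 and 16 pixels at distance 2.
    The s1/s2-weighted sum of ring means is an assumed combination."""
    h, w = img.shape
    centre = float(img[i, j])

    def ring_mean_diff(dist):
        diffs = []
        for dx in range(-dist, dist + 1):
            for dy in range(-dist, dist + 1):
                if max(abs(dx), abs(dy)) != dist:
                    continue                  # keep only the ring at this distance
                x = min(max(i + dx, 0), h - 1)
                y = min(max(j + dy, 0), w - 1)
                diffs.append(abs(float(img[x, y]) - centre))
        return sum(diffs) / len(diffs)

    return s1 * ring_mean_diff(1) + s2 * ring_mean_diff(2)
```

A bright isolated pixel on a dark background gives a large d_surround, which is exactly the "abrupt-change point" the text describes.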
Fig. 4 is a schematic diagram of the optical images of the MSTAR dataset and the corresponding SAR images according to an embodiment of the present invention. As shown in fig. 4, the experiments use the MSTAR dataset, a public synthetic aperture radar dataset acquired in the X-band with HH polarization, which is widely used in SAR image automatic target recognition research. The MSTAR dataset contains 10 classes that can be grouped into 3 broad categories: 2S1 and ZSU belong to the artillery category; BMP2, BRDM2, BTR70, BTR60, D7 and ZIL131 belong to the truck category; and T62 and T72 belong to the tank category. The experiments train on the MSTAR training set, 2747 images in total, and test on the MSTAR test set, 2425 images in total.
FIG. 6 is a graph showing the spoofing rate as a function of epoch when training the model on the training set according to an embodiment of the present invention. As shown in FIG. 6, training runs for 6 epochs in total. Because the attack range is limited to the key areas of the image, training converges much faster: a spoofing rate of 99.8% is already reached at the 3rd epoch, so the generator parameters meet the requirement of high spoofing performance. After training, the generator model parameters are saved for the subsequent generation of adversarial samples. An advantage of the method is that once model training is complete, the generator produces universal adversarial samples: feeding in an original image is enough to output a corresponding adversarial sample with universal deception capability.
Fig. 7 compares the position interference images obtained with different λ values according to an embodiment of the present invention. As shown in fig. 7, the ratio of λ1 to λ2 weights the relative importance of the point group sliding difference and the single-point surround difference, and therefore changes the proportion of contour points and abrupt-change points among the extracted key points. The experiments set three groups of values, λ1=0.5, λ2=0.5; λ1=0.6, λ2=0.4; and λ1=0.7, λ2=0.3, compare the influence of key point extraction on the attack method, and draw the key point extraction images for the different parameters as shown in fig. 7. When λ1 is smaller and λ2 is larger, the proportion of abrupt-change points increases and more points that differ strongly from their surroundings are extracted, so background clutter points are easily extracted as key points affecting recognition. When λ1 is larger and λ2 is smaller, more contour points are extracted, and contour information may even be misjudged because the pixel values around the true contour are small; the extracted key points are then confined to a small area, which is also unfavorable for subsequently adding interference efficiently in the area with the largest influence on classification and for improving the attack efficiency.
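The key point extraction and isolated-point elimination described above can be sketched as follows. The combination d_key = λ1·d_slide + λ2·d_surround and the (2l+1)×(2l+1) cluster test follow the text; the threshold d_th and the minimum cluster count n_min are illustrative values, not the patent's.

```python
import numpy as np

def extract_key_points(d_slide, d_surround, lam1=0.6, lam2=0.4,
                       d_th=10.0, l=2, n_min=3):
    """Key-point mask from the two difference maps:
    d_key = lam1*d_slide + lam2*d_surround, thresholded at d_th, then
    isolated points whose (2l+1)x(2l+1) neighbourhood holds fewer than
    n_min key points are zeroed out (background clutter removal)."""
    d_key = lam1 * d_slide + lam2 * d_surround
    mask = (d_key > d_th).astype(np.float64)

    h, w = mask.shape
    cleaned = np.zeros_like(mask)
    for i in range(h):
        for j in range(w):
            if mask[i, j] == 0:
                continue
            window = mask[max(i - l, 0):i + l + 1, max(j - l, 0):j + l + 1]
            if window.sum() >= n_min:         # keep only clustered key points
                cleaned[i, j] = 1.0
    return cleaned
```

The returned binary mask plays the role of the position interference image: clustered contour and abrupt-change points survive, while isolated background detections are removed.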
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may alternatively be implemented in program code executable by computing devices, so that they may be stored in a memory device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than that shown or described, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps within them may be fabricated into a single integrated circuit module for implementation. Thus, the present invention is not limited to any specific combination of hardware and software.
It should be noted that the above embodiments are merely for illustrating the technical solution of the present invention, and not for limiting the same, and although the present invention has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that the technical solution described in the above embodiments may be modified or some or all of the technical features may be equivalently replaced, and these modifications or substitutions may not make the essence of the corresponding technical solution deviate from the scope of the present solution.

Claims (6)

1.一种基于生成器的SAR图像稀疏对抗攻击方法,其特征在于,包括:1. A generator-based SAR image sparse adversarial attack method, characterized by comprising: S1、通过生成器生成SAR图像的每个像素点的强度干扰值,得到强度干扰图像;S1, generating an intensity interference value of each pixel point of the SAR image through a generator to obtain an intensity interference image; S2、基于点群滑动差值与单点环绕差值相结合的量化方法,通过关键点提取生成位置干扰图像;S2, a quantization method based on the combination of point group sliding difference and single point surround difference, generates a position interference image through key point extraction; S3、将生成的强度干扰图像与位置干扰图像按元素相乘得到对抗样本,通过差值损失函数更新生成器的参数。S3. Multiply the generated intensity interference image and the position interference image element by element to obtain the adversarial sample, and update the parameters of the generator through the difference loss function. 2.根据权利要求1所述的一种基于生成器的SAR图像稀疏对抗攻击方法,其特征在于,S1具体包括:2. According to the generator-based SAR image sparse adversarial attack method of claim 1, it is characterized in that S1 specifically comprises: 设输入图像x∈Rm×n,Rm×n是维数为m×n的图像域。生成器用Gθ(x)表示,其中θ是生成器的参数,初始化设定为随机值,通过不断的训练优化参数θ的值,使得生成器生成的强度干扰图像与S2中的位置干扰图像按元素相乘后得到的对抗样本集达到更高的欺骗性能。Assume that the input image x∈Rm ×n , Rm ×n is an image domain with dimension m×n. The generator is represented by (x), where θ is the parameter of the generator, which is initialized to a random value. Through continuous training and optimization of the value of the parameter θ, the adversarial sample set obtained by element-wise multiplication of the intensity interference image generated by the generator and the position interference image in S2 can achieve higher deception performance. 由生成器生成的是像素值在[0,255]之间的干扰图像xG,为保证l范数的限制,即控制干扰图像每个像素不能过大,设置阈值ε对xG的幅值进行约束,生成包含攻击强度信息的干扰图像xstrengthThe generator generates an interference image x G with pixel values between [0, 255]. 
To ensure the l∞ norm limit, that is, to control each pixel of the interference image not to be too large, a threshold ε is set to constrain the amplitude of x G and generate an interference image x strength containing attack strength information: 其中ε∈[0,255],是干扰强度的最大值,保证了xstrength每个点的像素值在[x-ε,x+ε]范围内。Among them, ε∈[0,255] is the maximum value of the interference intensity, which ensures that the pixel value of each point of x strength is in the range of [x-ε,x+ε]. 生成器采用神经网络模型生成干扰图像,由于神经网络具有根据损失函数自学习并更新参数的优点,可以通过训练不断提升生成的干扰图像的欺骗性能,最优化每个像素点的干扰强度大小,实现生成高欺骗率的强度干扰图像。The generator uses a neural network model to generate interference images. Since the neural network has the advantage of self-learning and updating parameters according to the loss function, the deception performance of the generated interference images can be continuously improved through training, the interference intensity of each pixel can be optimized, and the generation of intensity interference images with a high deception rate can be achieved. 3.根据权利要求1所述的一种基于生成器的SAR图像稀疏对抗攻击方法,其特征在于,S2具体包括:3. The generator-based SAR image sparse adversarial attack method according to claim 1, wherein S2 specifically comprises: 关键点提取方法包含两个模块,一是通过设计的点群滑动差值与单点环绕差值相结合的量化方法,设置量化阈值对图像的关键点进行提取,旨在提取图像中包含较多目标类别信息的目标轮廓点与突变点等关键点。The key point extraction method includes two modules. The first one is a quantization method combining the designed point group sliding difference with the single point surrounding difference. The quantization threshold is set to extract the key points of the image, aiming to extract key points such as target contour points and mutation points in the image that contain more target category information. 
二是为了得到图像的目标信息,滤除杂波噪声干扰,对关键点图像的点群聚集特性判别,进行孤立点消除得到位置干扰图像,为对抗攻击提供结构化约束,保证了攻击方法的普适性与干扰图像的稀疏性。Secondly, in order to obtain the target information of the image, filter out the interference of clutter and noise, identify the point group clustering characteristics of the key point image, eliminate isolated points to obtain the position interference image, provide structured constraints for counterattacks, and ensure the universality of the attack method and the sparsity of the interference image. 4.根据权利要求3所述的一种基于关键点提取的位置干扰图像生成方法,其特征在于,点群滑动差值与单点环绕差值的计算方法如下:4. According to the method for generating a position interference image based on key point extraction in claim 3, it is characterized in that the calculation method of the point group sliding difference and the single point surrounding difference is as follows: 对于图像的任一像素点(xi,yj),点群滑动差值以(xi,yj)为中心,以r为半径构造一个点群,对于边界溢出部分的像素值在后续计算差值时设为相等。u、v分别为x、y轴的移动步长,计算点群滑动差值dslide的公式如下:For any pixel point (x i ,y j ) of the image, the point group sliding difference constructs a point group with (x i ,y j ) as the center and r as the radius. The pixel values of the overflow part of the boundary are set equal in the subsequent difference calculation. u and v are the moving steps of the x and y axes respectively. The formula for calculating the point group sliding difference d slide is as follows: 其中表示满足(x-xi)2+(y-yj)2≤r2的像素点,w(x,y)表示权重函数,为突出中心点的权值,选择二维高斯函数作为权重,I(x,y)表示图像在点(x,y)处的像素值大小。in represents the pixel point that satisfies (xx i ) 2 +(yy j ) 2 ≤r 2 , w(x,y) represents the weight function. In order to highlight the weight of the center point, a two-dimensional Gaussian function is selected as the weight, and I(x,y) represents the pixel value of the image at the point (x,y). 
计算单点环绕差值dsurround的公式如下:The formula for calculating the single point surround difference d surround is as follows: 其中I1(xi,yj)表示环绕在(xi,yj)第一周的8个像素点的像素值,I2(xi,yj)表示环绕在(xi,yj)第二周的16个像素点的像素值,s1和s2为调节参数。Wherein I 1 ( xi , yj ) represents the pixel values of the 8 pixels surrounding the first circle of ( xi , yj ), I 2 ( xi , yj ) represents the pixel values of the 16 pixels surrounding the second circle of ( xi , yj ), and s 1 and s 2 are adjustment parameters. 点群滑动差值表示以某一像素为中心的部分图像在滑动时整体变化情况,如果像素点处于轮廓等边界位置,则在点群滑动时不同点群之间的像素值变化较大,进而点群滑动差值dslide较大,因此可以量化该像素点是否处于目标的边界上;而单点环绕差值dsurround表示某一像素点与其周围相邻像素的变化差异,可以判断该像素点是否是与周围点差异较大的突变点。The point group sliding difference indicates the overall change of a part of the image centered on a certain pixel when it slides. If the pixel is at a boundary position such as a contour, the pixel values between different point groups will change greatly when the point group slides, and the point group sliding difference d slide will be larger. Therefore, it can be quantified whether the pixel is on the boundary of the target. The single point surround difference d surround indicates the difference in change between a pixel and its surrounding pixels, which can be used to determine whether the pixel is a mutation point with a large difference from the surrounding points. 5.根据权利要求3所述的一种基于关键点提取的位置干扰图像生成方法,其特征在于,孤立点消除方法如下:5. The method for generating a position interference image based on key point extraction according to claim 3 is characterized in that the method for eliminating isolated points is as follows: 将点群滑动差值和单点环绕差值相结合即可提取具有轮廓信息或像素值突变的关键点,定义关键点特征参数dkey为:The key points with contour information or pixel value mutation can be extracted by combining the point group sliding difference and the single point surrounding difference. 
The key point feature parameter d key is defined as: dkey=λ1dslide2dsurround d key1 d slide2 d surround 其中λ1与λ2控制点群滑动差值与单点环绕差值的比例。设置阈值dth进行关键点提取,将阈值以上的点视为关键点:Among them, λ 1 and λ 2 control the ratio of the sliding difference of the point group to the surrounding difference of a single point. Set the threshold dth to extract key points, and regard the points above the threshold as key points: 之后进行孤立点消除,对误检得到的背景区域噪声点置零:Then, isolated points are eliminated and the noise points in the background area that are falsely detected are set to zero: 其中numl[I(xi,yj)]表示以(xi,yj)为中心,长和宽分别为2l+1个像素点的点群中关键点的数量,N为点群关键点的最低数量。Where num l [I( xi , yj )] represents the number of key points in the point group centered at ( xi , yj ) with a length and width of 2l+1 pixels respectively, and N is the minimum number of key points in the point group. 6.根据权利要求1所述的一种基于生成器的SAR图像稀疏对抗攻击方法,其特征在于,S3具体包括:6. The generator-based SAR image sparse adversarial attack method according to claim 1, characterized in that S3 specifically comprises: 将强度干扰图像与位置干扰图像按元素相乘再与原始图像叠加,便得到对抗样本 Intensity interference image Interference image with position Multiply the elements and superimpose them with the original image to get the adversarial sample. 常规神经网络训练模型通过计算模型预测生成的向量与数据集的标签向量进行比对,并计算损失函数衡量神经网络的拟合程度。本方法基于交叉熵损失函数,设计了差值损失函数作为生成器参数更新的指导,定义如下:Conventional neural network training models compare the vectors generated by the calculation model prediction with the label vectors of the data set, and calculate the loss function to measure the degree of fit of the neural network. 
This method is based on the cross entropy loss function and designs a difference loss function as a guide for updating the generator parameters, which is defined as follows: Ldif(θ)=Lcross[Dψ(xadv)-Dψ(x),O(xadv,x)]L dif (θ)=L cross [D ψ (x adv )-D ψ (x),O(x adv ,x)] 其中Ldif(θ)代表差值损失函数,是生成器Gθ(x)的参数θ的函数,控制生成器参数的更新;Dψ(x)是判别器输出的概率标签向量;Lcross为交叉熵损失函数,计算两个向量的偏离程度;O(xadv,x)是差值归一化函数,旨在寻找差异最大的类别并使得差值在训练过程中不断增大该类别的差异,以实现欺骗判别器分类的目的,定义差值归一化函数为:Where L dif (θ) represents the difference loss function, which is a function of the parameter θ of the generator G θ (x) and controls the update of the generator parameters; D ψ (x) is the probability label vector output by the discriminator; L cross is the cross entropy loss function, which calculates the degree of deviation between two vectors; O (x adv , x) is the difference normalization function, which aims to find the category with the largest difference and make the difference increase the difference of this category during the training process, so as to achieve the purpose of deceiving the discriminator classification. The difference normalization function is defined as: O(xadv,x)=norm{Dψ(xadv)-Dψ(x)}O(x adv ,x)=norm{D ψ (x adv )-D ψ (x)} 对于任意向量操作对于xi=max{x1,x2,…,xn}的唯一一个最大值分量设置为1,对于xi≠max{x1,x2,…,xn}的其余n-1个分量全部置0。For any vector The operation sets the only maximum value component of xi = max{ x1 , x2 , ..., xn } to 1, and sets the remaining n-1 components of xi max{ x1 , x2 , ..., xn } to 0. 差值损失函数与差值归一化操作最大化对抗样本与原始图像在某一类别上的差异,由于生成器的更新取决于对抗样本与原始图像输出的差值,并不和特定的判别器类型或神经网络的结构参数配套,所以提升了攻击方法对不同网络模型的普适性,提高了方法的应用价值。The difference loss function and the difference normalization operation maximize the difference between the adversarial sample and the original image in a certain category. Since the update of the generator depends on the difference between the output of the adversarial sample and the original image, it is not compatible with the specific discriminator type or the structural parameters of the neural network. 
Therefore, the universality of the attack method across different network models is improved, and the application value of the method is enhanced.
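The difference loss and the norm operation above can be sketched in a few lines. This is an illustrative NumPy version, not the patent's exact implementation; in particular, applying a softmax to the difference vector before the cross-entropy is an assumption, since the text does not specify how L_cross consumes a (possibly negative) difference vector:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def difference_normalization(p_adv, p_clean):
    """O(x_adv, x): one-hot vector marking the class with the largest output difference."""
    diff = p_adv - p_clean
    one_hot = np.zeros_like(diff)
    one_hot[np.argmax(diff)] = 1.0
    return one_hot

def difference_loss(p_adv, p_clean):
    """L_dif: cross-entropy between the softmaxed output difference and its one-hot target."""
    diff = p_adv - p_clean
    target = difference_normalization(p_adv, p_clean)
    q = softmax(diff)
    return -np.sum(target * np.log(q + 1e-12))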
CN202510137476.8A 2025-02-07 2025-02-07 SAR image sparse countermeasure attack method based on generator Pending CN120071136A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510137476.8A CN120071136A (en) 2025-02-07 2025-02-07 SAR image sparse countermeasure attack method based on generator

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510137476.8A CN120071136A (en) 2025-02-07 2025-02-07 SAR image sparse countermeasure attack method based on generator

Publications (1)

Publication Number Publication Date
CN120071136A true CN120071136A (en) 2025-05-30

Family

ID=95790520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510137476.8A Pending CN120071136A (en) 2025-02-07 2025-02-07 SAR image sparse countermeasure attack method based on generator

Country Status (1)

Country Link
CN (1) CN120071136A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120353716A (en) * 2025-06-23 2025-07-22 中国民航大学 Highly dissimilar test case generation method and system under high imperceptibility constraint

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120353716A (en) * 2025-06-23 2025-07-22 中国民航大学 Highly dissimilar test case generation method and system under high imperceptibility constraint

Similar Documents

Publication Publication Date Title
Huang et al. Adversarial attacks on deep-learning-based SAR image target recognition
CN110109060B (en) Radar radiation source signal sorting method and system based on deep learning network
Li et al. Sparse coding-inspired GAN for hyperspectral anomaly detection in weakly supervised learning
Zhang et al. Polarimetric HRRP recognition based on ConvLSTM with self-attention
CN113822328B (en) Image classification method for defending against sample attack, terminal device and storage medium
Zheng et al. Fast ship detection based on lightweight YOLOv5 network
CN113240047B (en) SAR target recognition method based on component analysis multi-scale convolutional neural network
Xiao et al. Radar signal recognition based on transfer learning and feature fusion
CN110136162B (en) Unmanned aerial vehicle visual angle remote sensing target tracking method and device
Xiao et al. Specific emitter identification of radar based on one dimensional convolution neural network
Wang et al. Target detection and recognition based on convolutional neural network for SAR image
CN116778225A (en) SAR true and false target identification and target recognition method based on decoupling and reconstruction learning
Zhang et al. Learning nonlocal quadrature contrast for detection and recognition of infrared rotary-wing UAV targets in complex background
CN120071136A (en) SAR image sparse countermeasure attack method based on generator
Zanddizari et al. Generating black-box adversarial examples in sparse domain
CN112149526A (en) Lane line detection method and system based on long-distance information fusion
Du et al. Local aggregative attack on SAR image classification models
Xue et al. Single sample per person face recognition algorithm based on the robust prototype dictionary and robust variation dictionary construction
Hao et al. MDFOaNet: A novel multi-modal pedestrian detection network based on multi-scale image dynamic feature optimization and attention mapping
Arai Maximum likelihood classification based on classified result of boundary mixed pixels for high spatial resolution of satellite images
Gao et al. General sparse adversarial attack method for sar images based on key points
Tang et al. Pothole detection‐you only look once: Deformable convolution based road pothole detection
Li et al. Multi-scale ships detection in high-resolution remote sensing image via saliency-based region convolutional neural network
Shi et al. SDNet: Image‐based sonar detection network for multi‐scale objects
Hamouda et al. Modified convolutional neural network based on adaptive patch extraction for hyperspectral image classification

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination