CN116304685A - Method and device for generating countermeasure sample and electronic equipment - Google Patents
- Publication number
- CN116304685A CN116304685A CN202310148707.6A CN202310148707A CN116304685A CN 116304685 A CN116304685 A CN 116304685A CN 202310148707 A CN202310148707 A CN 202310148707A CN 116304685 A CN116304685 A CN 116304685A
- Authority
- CN
- China
- Prior art keywords
- data
- value
- neural network
- layer
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0475—Generative networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The present application provides an adversarial example generation method and apparatus, and an electronic device. The method includes: acquiring training data and a target neural network model trained on the training data; using the training data as input, computing the average of the output values of each layer of the target neural network model, and taking each layer's average as that layer's output reference data; obtaining target data through iterative calculation based on the output reference data; and, in response to determining that a predetermined condition is met, taking the target data as an adversarial example. The method limits how far the generated adversarial examples stray from the manifold of normal samples, reducing the likelihood that they are detected.
Description
Technical Field
The present application relates to the field of artificial intelligence, and in particular to an adversarial example generation method and apparatus, and an electronic device.
Background Art
Adversarial examples are input samples formed by deliberately adding subtle perturbations to a data set, causing a model to produce a wrong output with high confidence. Training a target neural network model with adversarial examples can improve its robustness.
In the related art, a single loss function is generally used to measure the gap between an adversarial example's output label and the true label, which limits and degrades the generation of adversarial examples.
Summary of the Invention
In view of this, the purpose of the present application is to address the limitations on adversarial example generation noted in the Background Art, and to propose an adversarial example generation method and apparatus, and an electronic device.
To this end, the present application provides an adversarial example generation method, including:
acquiring training data and a target neural network model trained on the training data;
using the training data as input, computing the average of the output values of each layer of the target neural network model, and taking each layer's average as that layer's output reference data;
obtaining target data through iterative calculation based on the output reference data;
in response to determining that a predetermined condition is met, taking the target data as an adversarial example.
Optionally, obtaining the target data through iterative calculation based on the output reference data includes iteratively performing the following steps until the predetermined condition is met:
in response to determining that the first iteration round is being performed, using the training data as the input data;
in response to determining that a round other than the first is being performed, using the target data obtained in the previous round as the input data;
calculating a loss function value from the output reference data and the input data;
calculating temporary target data from the loss function value and a predetermined perturbation coefficient;
calculating a perturbation value from the temporary target data;
superimposing the perturbation value on the input data to obtain the target data of the current iteration round.
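As a minimal illustration of the iterative steps above (a sketch under assumed values, not the patented implementation; `loss_grad`, the step size `alpha`, and the bound `eps` are hypothetical stand-ins for the claimed loss gradient, perturbation coefficient, and maximum perturbation):

```python
import numpy as np

def generate_adversarial(train_x, loss_grad, alpha=0.1, eps=0.03, rounds=10):
    """Sketch of the claimed iteration: start from the training data,
    take a gradient-based step, bound the resulting perturbation,
    and feed the result into the next round."""
    x = train_x.copy()                      # round 1: input = training data
    for _ in range(rounds):
        grad = loss_grad(x)                 # gradient of the loss w.r.t. input
        p = x + alpha * grad                # temporary target data
        delta = np.clip(p - x, -eps, eps)   # perturbation value, bounded by eps
        x = x + delta                       # target data of this round
    return x                                # adversarial example once done
```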
Optionally, calculating the loss function value from the output reference data and the input data includes:
determining the true category corresponding to the input data;
feeding the input data through the target neural network model to obtain the output value of each layer of the model and the classification category;
calculating the loss function value from the per-layer output values of the target neural network model, the output reference data, the true category, the classification category, and the input data, using the following formula:
L(θ, x, y) = C(θ, x, y) + Σ_{i=1..k} ζ_i · d(o_i, ō_i)
where θ denotes the parameters of the target neural network model, x the input data, y the true category, L(θ, x, y) the loss function value, C(θ, x, y) the cross-entropy loss between the true category and the classification category, k the number of layers of the target neural network model, ζ_i the mean-square loss term coefficient, d(o_i, ō_i) the Euclidean distance between the i-th layer's output value and its output reference data, o_i the output value of the i-th layer of the target neural network model, and ō_i the output reference data of the i-th layer.
Optionally, calculating the temporary target data from the loss function value and the predetermined perturbation coefficient includes:
calculating a gradient value from the loss function value through the backpropagation algorithm;
superimposing the product of the perturbation coefficient and the gradient value on the input data to obtain the temporary target data.
Optionally, the mean-square loss term coefficients can be obtained as follows:
obtaining predetermined initial mean-square loss term coefficients;
in response to determining that the first iteration round is being performed, setting the mean-square loss term coefficients to the initial mean-square loss term coefficients;
in response to determining that a round other than the first is being performed, calculating the output distance of each layer of the target neural network model in the previous round from that round's per-layer output values and the output reference data, using the following formula:
Dis_i = |o_i − ō_i|
where Dis_i denotes the output distance of the i-th layer, |o_i − ō_i| the absolute value of the difference between the i-th layer's output value and the i-th layer's output reference data in the previous round, o_i the i-th layer output value obtained by passing the previous round's target data through the target neural network model, and ō_i the output reference data;
determining the layer of the target neural network model with the largest output distance in the previous round, setting that layer's mean-square loss term coefficient to a preset multiple of the initial mean-square loss term coefficient, and setting the coefficients of the other layers to the initial mean-square loss term coefficient.
Optionally, the predetermined mean-square loss term coefficients can also be obtained as follows:
in response to determining that the third or a later iteration round is being performed, calculating the total output distance of the target neural network model in each of the previous two rounds using the following formula:
Dis = Σ_{i=1..k} Dis_i
where Dis_i denotes the output distance of the i-th layer and k the number of layers of the target neural network model;
in response to determining that the total output distance of the previous round is greater than that of the round before it, calculating the mean-square loss term coefficients using the following formula:
ζ_{i,t} = ζ_{i,t−1} · Dis_i / Dis_min
where ζ_{i,t} denotes the mean-square loss term coefficient of the i-th layer in the current round, ζ_{i,t−1} that of the i-th layer in the previous round, Dis_i the output distance of the i-th layer of the target neural network model in the previous round, and Dis_min the minimum of the output distances over all layers of the target neural network model in the previous round.
Optionally, calculating the perturbation value from the temporary target data includes:
obtaining the perturbation value from the training data and the temporary target data using the following calculation formula:
ε′ = clip(p − p_o, −ε, ε)
where ε′ is the perturbation, p the temporary target data, ε the preset maximum perturbation, and p_o the input data.
Optionally, the predetermined condition includes the number of iteration rounds reaching a predetermined count.
Based on the same inventive concept, the present application further provides an adversarial example generation apparatus, including:
an acquisition module configured to acquire training data and a target neural network model trained on the training data;
an output reference data calculation module configured to use the training data as input, compute the average of the output values of each layer of the target neural network model, and take each layer's average as that layer's output reference data;
a target data calculation module configured to obtain target data through iterative calculation based on the output reference data;
a generation module configured to take the target data as an adversarial example in response to determining that a predetermined condition is met.
Based on the same inventive concept, the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the adversarial example generation method described in any of the above.
As can be seen from the above, the adversarial example generation method provided by the present application calculates the perturbation value of an adversarial example, and thus the adversarial example itself, by jointly considering the loss between the target neural network's layer outputs and the target classification result. Adversarial examples obtained in this way lie closer to the manifold geometry of the training data and are therefore less likely to be recognized by manifold-based detectors.
Brief Description of the Drawings
To explain the technical solutions of the present application and the related art more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an adversarial example generation method according to one or more embodiments of the present application;
Fig. 2 is a schematic structural diagram of an adversarial example generation apparatus according to one or more embodiments of the present application;
Fig. 3 is a schematic diagram of the hardware structure of an electronic device according to one or more embodiments of the present application.
Detailed Description
To make the purpose, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to specific embodiments and the accompanying drawings.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present application shall have the meaning commonly understood by persons of ordinary skill in the art to which the present application belongs. "First", "second", and similar words used in the embodiments of the present application do not denote any order, quantity, or importance, but merely distinguish different components. "Include", "comprise", and similar words mean that the element or item preceding the word covers the elements or items listed after it and their equivalents, without excluding other elements or items. "Connected", "coupled", and similar words are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Up", "down", "left", "right", and the like indicate only relative positional relationships, which may change accordingly when the absolute position of the described object changes.
A manifold is a space that locally has the properties of Euclidean space, and is used in mathematics to describe geometric shapes. In physics, the phase space of classical mechanics and the four-dimensional pseudo-Riemannian manifold used to construct the space-time model of general relativity are both examples of manifolds. One application of manifolds is to reduce data from a high-dimensional space to a low-dimensional one without loss of information, by jointly considering the distances and topology of the data.
Deep learning uses neural network models to discover the essence of data by learning the intrinsic regularities and representation hierarchies of sample data, and on this basis performs data analysis to realize functions such as object detection. However, because the internal structure of these models is built mainly from linear blocks, the overall functions they implement have been shown in some experiments to be highly linear. Such linear functions are easy to optimize, but their outputs can also change rapidly under small changes in the input data.
To improve the robustness of target neural networks, the related art proposes training them with adversarial examples.
As noted in the Background Art, current methods for generating adversarial examples generally use a single loss function to measure the distance between an adversarial example's output value and the true label, and then modify the training data accordingly to obtain the adversarial example. Adversarial examples generated in this way fail to account for the manifold geometry of the training data: their intermediate-layer outputs in the target neural network model gradually drift away from the manifold of the training data, a deviation arises between the manifolds of the adversarial examples and the training data, and the examples enter the manifold of the target label. Such adversarial examples not only lose information from the original training data but also find it difficult to pass manifold-based detectors. These methods therefore limit and degrade the generation of adversarial examples.
The present application therefore proposes an adversarial example generation method that considers not only the output values of each layer of the target neural network model and the loss between the model's output and the true label, but also the deviation of the generated adversarial examples from the manifold geometry of the training data, with the aim of finding the coupling region between the training-data manifold and the adversarial-example manifold and, during iterative calculation, driving the adversarial examples into that coupling region.
The technical solutions of one or more embodiments of this specification are described in detail below through specific embodiments.
Referring to Fig. 1, an adversarial example generation method according to one or more embodiments of the present application includes the following steps:
Step S101: acquire training data and a target neural network model trained on the training data.
In some embodiments, the training data may be images, text sequences, or other data. In some embodiments, the original data set may be an existing public data set, such as CIFAR-10 or MNIST, or a custom data set. In some embodiments, the target neural network model may be a neural network model such as ResNet34 or Inception v3.
Step S102: using the training data as input, compute the average of the output values of each layer of the target neural network model, and take each layer's average as that layer's output reference data.
In the course of implementing the present application, the applicant found that neural network models have advantages in fields such as object recognition because they perform dimensionality reduction on complex data sets, reducing high-dimensional data with complex manifold geometry and many feature labels to low-dimensional data with simple manifold geometry and few feature labels. The less information is lost during dimensionality reduction, the more accurate the output values. During dimensionality reduction, the data dimensionality and the data's manifold geometry change at every layer of the neural network model. The adversarial example generation method proposed in the present application therefore jointly considers the output values of every layer of the target neural network model, with the aim of obtaining adversarial examples with minimal information loss.
In this step, each layer of the target neural network model produces its output data, and the average of each layer's output data is computed as that layer's output reference data. In one embodiment of the present application, the target neural network model is a convolutional neural network and the input data is at least one 32*32-pixel image. After the image data is fed into the convolutional neural network, each intermediate layer outputs at least one output value, and the average of each layer's output values is computed as that layer's output reference data. Taking the first layer as an example, each input image produces six 28*28-pixel feature maps at this layer. The layer therefore contains multiple 28*28-pixel feature maps; the average at each pixel position is computed, and the 28*28-pixel image composed of all these averages is taken as the layer's output reference data.
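Under the 28*28 feature-map example above, computing a layer's output reference data amounts to a per-pixel average over every feature map the layer produced for the training data. A minimal NumPy sketch, using random arrays as hypothetical stand-ins for the layer's outputs:

```python
import numpy as np

# Hypothetical stand-in for the first layer's outputs: for each of 100
# training images, the layer yields 6 feature maps of 28*28 pixels.
rng = np.random.default_rng(0)
layer_outputs = rng.normal(size=(100, 6, 28, 28))

# Output reference data of the layer: the average at each pixel position
# over all feature maps produced for the training data.
reference = layer_outputs.mean(axis=(0, 1))  # a single 28*28 image
```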
Step S103: obtain target data through iterative calculation based on the output reference data.
As described above, in this step the target data is obtained through iterative calculation, taking the output reference data of step S102 as the basis and jointly considering the features and manifold geometry of the data.
In some embodiments, the input data of the iterative calculation is first determined; the loss value and distance value of the input data are then calculated; finally, a perturbation value is calculated from the loss value and distance value, and the perturbation value is superimposed on the input data to obtain the adversarial example.
Specifically, in some embodiments, in response to determining that the first iteration round is being performed, the training data is used as the input data; in response to determining that a round other than the first is being performed, the target data obtained in the previous round is used as the input data. That is, in some embodiments, the training data serves as the input data in the first iteration, and in each subsequent iteration the target data obtained in the previous round serves as the input data of the current round.
In some embodiments, the loss value of the input data is calculated with a cross-entropy function. The cross-entropy loss function expresses the difference between the probability distribution of the output reference data and that of the current round's output data. In some embodiments, the distance value between the input data and the output reference data is calculated as a Euclidean distance; since a manifold is a space that locally has the properties of Euclidean space, Euclidean distance is suitable for the distance calculation. In some embodiments, the loss function value is calculated by combining the above loss value and distance value through the following formula:
L(θ, x, y) = C(θ, x, y) + Σ_{i=1..k} ζ_i · d(o_i, ō_i)
where θ denotes the parameters of the target neural network model, x the input data, y the true category, L(θ, x, y) the loss function value, C(θ, x, y) the cross-entropy loss between the true category and the classification category, k the number of layers of the target neural network model, ζ_i the mean-square loss term coefficient, d(o_i, ō_i) the Euclidean distance between the i-th layer's output value and its output reference data, o_i the output value of the i-th layer of the target neural network model, and ō_i the output reference data of the i-th layer.
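The combined loss can be sketched as follows (an illustrative NumPy version; the softmax over raw class scores and the array shapes are assumptions, not taken from the patent):

```python
import numpy as np

def combined_loss(logits, true_label, layer_outputs, layer_refs, zeta):
    """L(theta, x, y) = cross-entropy term + sum_i zeta_i * Euclidean
    distance between layer i's output and its output reference data."""
    probs = np.exp(logits - logits.max())  # softmax over the class scores
    probs /= probs.sum()
    ce = -np.log(probs[true_label])        # C(theta, x, y)
    dist = sum(z * np.linalg.norm(o - r)   # weighted per-layer distance terms
               for z, o, r in zip(zeta, layer_outputs, layer_refs))
    return ce + dist
```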
In some embodiments, the gradient value of the target neural network model is calculated from the loss function value through the backpropagation algorithm.
In some embodiments, the product of the gradient value and the perturbation coefficient is superimposed on the input data, and the result is used as the temporary target data.
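This gradient step can be sketched as follows; the finite-difference gradient is only an illustrative stand-in for backpropagation, and `alpha` stands for the perturbation coefficient:

```python
import numpy as np

def numerical_grad(loss, x, h=1e-6):
    """Central-difference approximation of dL/dx, standing in for the
    gradient value that backpropagation would return."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        g.flat[i] = (loss(x + e) - loss(x - e)) / (2 * h)
    return g

def temporary_target(x, grad, alpha):
    # superimpose (perturbation coefficient * gradient value) on the input data
    return x + alpha * grad
```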
In some embodiments, the perturbation coefficient is obtained in a predetermined manner. In some embodiments, an initial perturbation coefficient is predetermined and the perturbation coefficient is derived from it. In some embodiments, the mean-square loss term coefficients are determined as follows:
obtaining predetermined initial mean-square loss term coefficients;
in response to determining that the first iteration round is being performed, setting the mean-square loss term coefficients to the initial mean-square loss term coefficients;
in response to determining that a round other than the first is being performed, calculating the output distance of each layer of the target neural network model in the previous round from that round's per-layer output values and the output reference data, using the following formula:
Dis_i = |o_i − ō_i|
where Dis_i denotes the output distance of the i-th layer, |o_i − ō_i| the absolute value of the difference between the i-th layer's output value and the i-th layer's output reference data in the previous round, o_i the i-th layer output value obtained by passing the previous round's target data through the target neural network model, and ō_i the output reference data;
determining the layer of the target neural network model with the largest output distance in the previous round, setting that layer's mean-square loss term coefficient to 1.05 times the initial mean-square loss term coefficient, and setting the coefficients of the other layers to the initial mean-square loss term coefficient.
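The rule above can be sketched as follows (assumptions: layer outputs and references are given as NumPy arrays, and the per-layer distance is taken as the summed absolute difference):

```python
import numpy as np

def update_zeta(prev_outputs, refs, zeta_init, factor=1.05):
    """After a round, enlarge the mean-square loss term coefficient of the
    layer whose output drifted furthest from its reference data; the
    other layers keep the initial coefficient."""
    dis = np.array([np.abs(o - r).sum() for o, r in zip(prev_outputs, refs)])
    zeta = np.full(len(prev_outputs), float(zeta_init))
    zeta[dis.argmax()] = factor * zeta_init   # largest-distance layer
    return zeta
```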
In some embodiments, in response to determining that the third or a later round of iterative calculation is to be performed, the total output distance of the target neural network model in each of the previous two rounds of iterative calculation is calculated according to the formula:

Dis_total = Dis_1 + Dis_2 + ... + Dis_k

where Dis_i denotes the output distance of the i-th layer and k denotes the number of layers of the target neural network model.
In response to determining that the total output distance of the previous round of iterative calculation is greater than that of the round before it, the mean square loss term coefficients are calculated according to the formula:

ζ_{i,t} = ζ_{i,t-1} × Dis_i / Dis_min

where ζ_{i,t} denotes the mean square loss term coefficient of the i-th layer in the current round, ζ_{i,t-1} denotes the mean square loss term coefficient of the i-th layer in the previous round, Dis_i denotes the output distance of the i-th layer of the target neural network model in the previous round of calculation, and Dis_min denotes the minimum of the output distances over all layers of the target neural network model in the previous round of calculation.
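Assuming the coefficient update is the ratio form suggested by the listed symbols, ζ_{i,t} = ζ_{i,t-1} × Dis_i / Dis_min, a minimal sketch (function name hypothetical) is:

```python
def scale_coefficients(prev_coeffs, distances):
    """When the previous round's total output distance grew, scale each
    layer's mean square loss term coefficient by how far its output
    distance exceeds the smallest per-layer distance (ratio-form update,
    an assumption based on the symbols the embodiment defines)."""
    dis_min = min(distances)
    # zeta_{i,t} = zeta_{i,t-1} * Dis_i / Dis_min
    return [z * d / dis_min for z, d in zip(prev_coeffs, distances)]
```

Under this form, the layer closest to its reference keeps its coefficient unchanged while layers that lag behind are weighted more heavily.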
In some embodiments, the perturbation value is obtained from the training data and the temporary target data according to the formula:

ε′ = min(max(p - p_o, -ε), ε)

where ε′ is the perturbation, p is the temporary target data, ε is the preset maximum perturbation value, and p_o is the input data.

In some embodiments, the perturbation value is superimposed on the input data to obtain the target data of the current round of iterative calculation. In some embodiments, after the perturbation value is superimposed on the input data, a maximum and minimum value check is performed on the resulting value, so that the data is distributed within a predetermined value range. In some embodiments, the maximum and minimum value check is performed according to the formula:

p′ = min(max(p, γ_min), γ_max)

where p is the input data with the perturbation superimposed, γ_max is the predetermined maximum value, and γ_min is the predetermined minimum value.
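Both formulas are elementwise clamps, so they can be sketched with NumPy's `clip`; the function names are hypothetical:

```python
import numpy as np

def clipped_perturbation(p, p_o, eps):
    """Clamp the raw perturbation (temporary target data minus input
    data) to the preset maximum perturbation eps, elementwise."""
    return np.clip(p - p_o, -eps, eps)

def minmax_check(x, gamma_min, gamma_max):
    """Maximum and minimum value check: force every element of the
    perturbed data into the predetermined value range."""
    return np.clip(x, gamma_min, gamma_max)
```

In practice, `eps`, `gamma_min`, and `gamma_max` would be chosen for the data domain, for example a pixel value range for image inputs.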
Step S104: in response to determining that a predetermined condition is met, the target data is taken as the adversarial example.

In some embodiments, the predetermined condition is that the number of iteration rounds reaches a predetermined value.

In some embodiments, the predetermined value is set according to past experimental results. The ultimate goal of an adversarial example is for its data manifold to enter the coupling region of the training data manifold. In some embodiments, the average number of iterations needed for the manifold of an adversarial example to enter the coupling region is determined through past experiments, and the predetermined value is set according to this average number of iterations.
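The stopping rule above can be sketched as a fixed-round loop; `step_fn` is a hypothetical stand-in for one round of the iterative calculation, not the patent's implementation:

```python
def generate_adversarial(input_data, step_fn, num_rounds):
    """Run the iterative calculation for a predetermined number of
    rounds, then return the resulting target data as the adversarial
    example (predetermined condition: round count reaches num_rounds)."""
    target = input_data
    for t in range(num_rounds):
        target = step_fn(target, t)  # one round of iterative calculation
    return target
```

`num_rounds` corresponds to the predetermined value derived from the experimentally observed average number of iterations.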
It should be noted that the method of the embodiments of the present application may be executed by a single device, such as a computer or a server. The method of this embodiment may also be applied in a distributed scenario and completed by multiple devices cooperating with one another. In such a distributed scenario, one of the multiple devices may perform only one or more of the steps of the method of the embodiments of the present application, and the multiple devices interact with one another to complete the described method.

It should be noted that some embodiments of the present application are described above. Other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims may be performed in an order different from that in the above embodiments and still achieve the desired results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. In certain implementations, multitasking and parallel processing are also possible or may be advantageous.
Based on the same inventive concept, and corresponding to the method of any of the above embodiments, the present application further provides an adversarial example generation apparatus.

Referring to FIG. 2, the adversarial example generation apparatus includes:

an acquisition module configured to acquire training data and a target neural network model trained on the training data;

an output reference data calculation module configured to take the training data as an input value, calculate the average of the output values of each layer of the target neural network model, and take the average of each layer as the output reference data of that layer;

a target data calculation module configured to obtain target data through iterative calculation according to the output reference data; and

a generation module configured to take the target data as the adversarial example in response to determining that a predetermined condition is met.
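The output reference data computed by the second module reduces to a per-layer mean over the training data. A minimal NumPy sketch follows; the function name, and the assumption that per-sample layer outputs have already been collected, are hypothetical rather than taken from the patent:

```python
import numpy as np

def layer_output_means(per_sample_layer_outputs):
    """Output reference data: for each layer, the mean of that layer's
    output values over all training samples."""
    # per_sample_layer_outputs[s][i] is the layer-i output array
    # produced by training sample s.
    num_layers = len(per_sample_layer_outputs[0])
    return [np.mean([sample[i] for sample in per_sample_layer_outputs], axis=0)
            for i in range(num_layers)]
```

Each returned array then serves as the reference against which the per-layer output distances of later iterations are measured.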
For convenience of description, the above apparatus is described with its functions divided into various modules. Of course, when implementing the present application, the functions of the modules may be realized in one or more pieces of software and/or hardware.

The apparatus of the above embodiment is used to implement the corresponding adversarial example generation method of any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not repeated here.

Based on the same inventive concept, and corresponding to the method of any of the above embodiments, the present application further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the adversarial example generation method described in any of the above embodiments.
FIG. 3 shows a more specific schematic diagram of the hardware structure of an electronic device provided by this embodiment. The device may include a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. The processor 1010, the memory 1020, the input/output interface 1030, and the communication interface 1040 are communicatively connected to one another within the device through the bus 1050.

The processor 1010 may be implemented as a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and is used to execute relevant programs to implement the technical solutions provided by the embodiments of this specification.

The memory 1020 may be implemented in the form of a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system and other application programs. When the technical solutions provided by the embodiments of this specification are implemented through software or firmware, the relevant program code is stored in the memory 1020 and invoked and executed by the processor 1010.

The input/output interface 1030 is used to connect an input/output module to realize information input and output. The input/output module may be configured as a component within the device (not shown in the figure) or externally connected to the device to provide the corresponding functions. Input devices may include a keyboard, a mouse, a touch screen, a microphone, and various sensors; output devices may include a display, a speaker, a vibrator, and indicator lights.

The communication interface 1040 is used to connect a communication module (not shown in the figure) to realize communication interaction between this device and other devices. The communication module may communicate in a wired manner (e.g., USB or network cable) or in a wireless manner (e.g., mobile network, WiFi, or Bluetooth).

The bus 1050 includes a path that transfers information between the components of the device (e.g., the processor 1010, the memory 1020, the input/output interface 1030, and the communication interface 1040).

It should be noted that although the above device shows only the processor 1010, the memory 1020, the input/output interface 1030, the communication interface 1040, and the bus 1050, in specific implementation the device may also include other components necessary for normal operation. In addition, those skilled in the art will understand that the above device may also include only the components necessary to implement the solutions of the embodiments of this specification, and need not include all the components shown in the figure.

The electronic device of the above embodiment is used to implement the corresponding adversarial example generation method of any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which are not repeated here.
Those of ordinary skill in the art should understand that the discussion of any of the above embodiments is merely exemplary and is not intended to imply that the scope of the present application (including the claims) is limited to these examples. Under the concept of the present application, technical features in the above embodiments or in different embodiments may also be combined, steps may be implemented in any order, and many other variations of the different aspects of the embodiments of the present application as described above exist, which are not provided in detail for the sake of brevity.

In addition, to simplify illustration and discussion, and so as not to obscure the embodiments of the present application, well-known power and ground connections to integrated circuit (IC) chips and other components may or may not be shown in the provided figures. Furthermore, devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present application, and this also takes into account the fact that details of the implementation of these block diagram devices are highly dependent on the platform on which the embodiments are to be implemented (i.e., such details should be well within the understanding of those skilled in the art). Where specific details (e.g., circuits) are set forth to describe exemplary embodiments of the present application, it will be apparent to those skilled in the art that the embodiments may be practiced without, or with variations of, these specific details. Accordingly, the description is to be regarded as illustrative rather than restrictive.

Although the present application has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of these embodiments will be apparent to those of ordinary skill in the art from the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.

The embodiments of the present application are intended to cover all such alternatives, modifications, and variations that fall within the broad scope of the appended claims. Therefore, any omissions, modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the embodiments of the present application shall be included within the protection scope of the present application.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310148707.6A CN116304685B (en) | 2023-02-14 | 2023-02-14 | Adversarial sample generation method, device and electronic device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN116304685A true CN116304685A (en) | 2023-06-23 |
| CN116304685B CN116304685B (en) | 2025-08-12 |
Family
ID=86795372
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310148707.6A Active CN116304685B (en) | 2023-02-14 | 2023-02-14 | Adversarial sample generation method, device and electronic device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116304685B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111626367A (en) * | 2020-05-28 | 2020-09-04 | 深圳前海微众银行股份有限公司 | Countermeasure sample detection method, apparatus, device and computer readable storage medium |
| US20220172000A1 (en) * | 2020-02-25 | 2022-06-02 | Zhejiang University Of Technology | Defense method and an application against adversarial examples based on feature remapping |
Also Published As
| Publication number | Publication date |
|---|---|
| CN116304685B (en) | 2025-08-12 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||