CN112839034B - A Network Intrusion Detection Method Based on CNN-GRU Hierarchical Neural Network
- Publication number
- CN112839034B (application no. CN202011590155.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- network
- gru
- convolution
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1416—Event detection, e.g. attack signature detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Description
Technical Field
The present invention relates to the technical field of network security, and in particular to a network intrusion detection method based on a CNN-GRU hierarchical neural network.
Background Art
With the rapid development of the Internet, a large number of devices and users have joined the Internet environment, and problems related to the security of network traffic have increased accordingly. Network attackers frequently exploit vulnerabilities on the Internet to paralyze networks, causing immeasurable losses to users. In the past such attacks mainly caused financial losses to enterprises, but they now also include the theft of personal privacy, which seriously harms the rights and interests of most network users.
To avoid such problems, attack behavior must often be detected by analyzing the traffic data generated by network users, and the key challenge is how to effectively identify traffic data that carries attack behavior. Traditional approaches that crack and decrypt network traffic require additional equipment, so their cost and deployment difficulty are high, and traditional payload-based methods can no longer handle today's growing volume of encrypted traffic. Conventional machine learning models are therefore commonly used for network intrusion detection, but a common problem is that it is difficult to find suitable features to serve as the reference standard: machine learning models usually require many quantifiable features as training references and are poorly suited to classification tasks whose features are unclear. Using such methods for classification further leads to an accuracy bottleneck that is difficult to overcome.
With the development of chip technology, the computing power of computers has grown greatly in recent years, and the development of the Internet has produced massive amounts of data. In this context, deep learning networks have been widely applied, including to network intrusion detection. Compared with traditional machine learning methods, deep learning methods can automatically discover correlations among different traffic attributes and assign different weights to features through training on massive data. Compared with manually defined features, the features obtained in this way are more broadly applicable and better suited to implementing a network intrusion detection system.
Summary of the Invention
The purpose of the present invention is to analyze the feature attributes of collected network traffic with a CNN-GRU hierarchical neural network and to propose a network intrusion detection method based on such a network. Starting from the practical problem of network intrusion detection, the available raw data are collected and, together with the already determined label data, used to build the complete CNN-GRU hierarchical neural network sample set. Through feature engineering, the raw data are preprocessed and the invalid parts of the packets are removed. After dividing the full sample set into a training set and a test set at a suitable ratio, the model is trained; the test set is then used to verify the validity of the model, yielding a CNN-GRU hierarchical neural network classification model that enables accurate monitoring of network intrusion behavior.
To solve the above problems, the technical solution adopted by the present invention is as follows:
A network intrusion detection method based on a CNN-GRU hierarchical neural network, characterized by comprising the following steps:
(1) Capture network traffic with the Wireshark software to obtain network traffic packets, i.e., the packets to be classified;
(2) Label the packets to be classified and, at the same time, preprocess them through feature engineering to remove invalid content; clean the packets within every flow and clean each data flow; then parse the packets into decimal data and convert the decimal data into 40*40 single-channel grayscale images, producing all image samples required for model training and thereby the complete CNN-GRU hierarchical neural network sample set;
(3) Divide the full sample set into a training set and a test set at a suitable ratio; based on the CNN-GRU hierarchical neural network algorithm, use the single-channel grayscale image matrix as the input vector and build the CNN-GRU hierarchical neural network classification model on the training set, so that the model learns how to classify the samples;
(4) After model training is complete, feed the test set data into the model; the model predicts the input data according to the parameters obtained from training and classifies unknown network traffic to determine whether it is attack traffic and, if so, what type of attack traffic it is.
Further, the network traffic packets captured in step (1) store their content as binary data at this stage.
Further, the specific procedure of step (2) includes:
(2.1) Label the packets to be classified, marking normal traffic and attack traffic as required; if the attack traffic needs to be classified by type, the different kinds of attack traffic must also be labeled; the traffic type labels are stored as integers starting from 0;
(2.2) Preprocess the packets to be classified through feature engineering, splitting the captured network traffic into flows according to source IP address, source port, and destination IP address, with the splitting performed by the SliptCat software;
(2.3) Clean the packets within every flow: remove the MAC source address, MAC destination address, and network protocol type information from each packet; extract the first 160 bytes of each packet and pad packets shorter than 160 bytes with zeros;
(2.4) Clean each data flow: extract the first 10 packets of each flow; if a flow contains fewer than 10 packets, pad it with 160-byte all-zero packets until it has 10 packets;
(2.5) At this point each flow contains 160*10 = 1600 bytes of data; convert each byte to decimal, obtaining values in the range 0-255, and reshape the 1600-dimensional decimal data into a 40*40 matrix;
(2.6) Convert the values in the 40*40 matrix into gray levels, obtaining for each matrix a 40*40 single-channel grayscale image, and thereby all image samples required for model training (a preprocessing sketch is given below).
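As a minimal sketch of steps (2.3)-(2.6), assuming each flow's packets are already available as Python byte strings with headers stripped (the helper name `flow_to_image` and the use of NumPy and Pillow are illustrative choices, not part of the invention):

```python
import numpy as np
from PIL import Image

PKT_BYTES = 160      # bytes kept per packet (step 2.3)
PKTS_PER_FLOW = 10   # packets kept per flow (step 2.4)

def flow_to_image(packets):
    """Convert a list of raw packet byte strings into a 40*40 grayscale image.

    packets : list[bytes] -- packet bytes of one flow, headers already stripped.
    """
    # Keep at most the first 10 packets; truncate or zero-pad each to 160 bytes.
    packets = packets[:PKTS_PER_FLOW]
    rows = [p[:PKT_BYTES].ljust(PKT_BYTES, b"\x00") for p in packets]
    # Pad the flow itself with all-zero packets up to 10 packets.
    rows += [b"\x00" * PKT_BYTES] * (PKTS_PER_FLOW - len(rows))

    # 160*10 = 1600 bytes -> values 0..255 -> 40*40 matrix (step 2.5).
    flat = np.frombuffer(b"".join(rows), dtype=np.uint8)
    matrix = flat.reshape(40, 40)

    # Interpret the matrix as a single-channel grayscale image (step 2.6).
    return Image.fromarray(matrix, mode="L")
```

Each returned image, or equivalently its underlying 40*40 matrix, is then paired with the integer label from step (2.1) to form one training sample.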
Further, the specific procedure of step (3) includes:
(3.1) The data first enter an improved LeNet-5 network, which uses two convolutional layers and two max-pooling layers to extract the spatial features of the raw network traffic data. The first convolutional layer uses 32 5*5 convolution kernels followed by a max-pooling operation, and the second convolutional layer uses 64 3*3 convolution kernels followed by a max-pooling operation; after each convolution, the CNN hidden layer first applies the ReLU activation function and then max pooling. After this processing, the original single-channel 40*40 image is transformed into an 8*8 feature map with 64 channels. After flattening, a 4096-dimensional vector is obtained and passed to the output layer of the CNN network; the output layer is a fully connected layer with 1600 neurons, so this transformation retains the same dimensionality as the original 40*40 input. After feature extraction, dropout is applied to the fully connected layer so that some neurons are randomly deactivated, preventing overfitting;
(3.2) A GRU network is then used to automatically extract the temporal features of the raw flow data; the GRU network uses two layers of units to extract the temporal features. Each layer contains 256 GRU units, and the activation function of each layer is a sigmoid function for the nonlinear operations. The last layer of the GRU network is a fully connected layer, and the number of neurons in the fully connected layer equals the number of flow categories;
(3.3) Train on the training set to obtain the network intrusion detection model (a sketch of the full model is given below).
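A minimal PyTorch sketch of the hierarchy described in steps (3.1)-(3.2); the module name `CNNGRU`, the dropout rate, and the reshaping of the 1600-dimensional CNN output into a 10-step sequence of 160 features for the GRU are assumptions made for illustration, since the patent text does not fix these details:

```python
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    """CNN-GRU hierarchical classifier for 40*40 single-channel flow images."""

    def __init__(self, num_classes: int, seq_len: int = 10):
        super().__init__()
        # Spatial feature extractor (improved LeNet-5 style, step 3.1):
        # 32 kernels of 5*5, max-pool, then 64 kernels of 3*3, max-pool.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5),   # 40*40 -> 36*36
            nn.ReLU(),
            nn.MaxPool2d(2),                   # 36*36 -> 18*18
            nn.Conv2d(32, 64, kernel_size=3),  # 18*18 -> 16*16
            nn.ReLU(),
            nn.MaxPool2d(2),                   # 16*16 -> 8*8 with 64 channels
        )
        self.fc_cnn = nn.Linear(64 * 8 * 8, 1600)  # 4096 -> 1600
        self.dropout = nn.Dropout(0.5)             # rate is an assumption

        # Temporal feature extractor (step 3.2): two GRU layers, 256 units each.
        self.seq_len = seq_len
        self.gru = nn.GRU(input_size=1600 // seq_len, hidden_size=256,
                          num_layers=2, batch_first=True)
        self.fc_out = nn.Linear(256, num_classes)

    def forward(self, x):                          # x: (batch, 1, 40, 40)
        feats = self.cnn(x).flatten(1)             # (batch, 4096)
        feats = self.dropout(self.fc_cnn(feats))   # (batch, 1600)
        seq = feats.view(-1, self.seq_len, 1600 // self.seq_len)  # (batch, 10, 160)
        out, _ = self.gru(seq)                     # (batch, 10, 256)
        return self.fc_out(out[:, -1])             # logits, (batch, num_classes)
```

Here `out[:, -1]` takes the last GRU time step as the flow-level summary, which is one reasonable reading of the description; softmax is applied at the loss/prediction stage shown later.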
Further, the convolution operation in step (3.1) uses a convolution kernel ω of size f*f to perform a sliding convolution over an image of size n*n, each sliding step producing a new feature. Let X be the input of the convolution, b the bias term, c_i the new feature produced by the convolution at layer i, and σ_r the ReLU activation function; the new feature obtained by the convolution operation is then c_i = σ_r(ω * X_i + b_i). After the convolution operation, an n*n image yields a feature map of size (n-f+1)*(n-f+1), the size being determined by the sliding window of the f*f convolution kernel. After convolution, max pooling is applied to the feature map c, taking the maximum value in each selected window as the final feature; with a 2*2 pooling window, the final feature map size is [(n-f+1)/2]*[(n-f+1)/2].
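Applying these formulas to the concrete sizes in step (3.1) reproduces the 8*8, 64-channel result and the 4096-dimensional vector stated above (a worked check, not part of the patent text):

```latex
40\times 40 \;\xrightarrow{\,f=5\,}\; (40-5+1)\times(40-5+1)=36\times 36
\;\xrightarrow{\,2\times 2\ \text{max-pool}\,}\; 18\times 18
\;\xrightarrow{\,f=3\,}\; (18-3+1)\times(18-3+1)=16\times 16
\;\xrightarrow{\,2\times 2\ \text{max-pool}\,}\; 8\times 8,
\qquad 8\cdot 8\cdot 64 = 4096.
```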
Further, in the GRU network of step (3.2), each unit uses the state h_{t-1} passed down from the previous step and the input x_t of the current node to obtain two gating signals, where r is the reset gate and z is the update gate. After the gating signals are obtained, the reset gate is first applied to give h_{t-1}' = h_{t-1} ⊙ r; h_{t-1}' is then concatenated with the input x_t and passed through a tanh activation function that scales the data to the range -1 to 1, producing the candidate state h'. The forgetting and memorizing steps are carried out simultaneously using the previously obtained update gate z, giving the final expression h_t = (1-z) ⊙ h_{t-1} + z ⊙ h', where ⊙ denotes element-wise multiplication.
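These update rules can be transcribed directly into code; the sketch below is illustrative only (bias terms are omitted and the weight matrices W_r, W_z, W_h are assumed to be supplied by the caller — in practice a library cell such as torch.nn.GRU, as in the model sketch above, is used instead):

```python
import torch

def gru_cell(x_t, h_prev, W_r, W_z, W_h):
    """One GRU step following the equations above (biases omitted).

    x_t    : (batch, d_in)   input at the current node
    h_prev : (batch, d_hid)  state h_{t-1} passed down from the previous step
    W_r, W_z, W_h : (d_hid + d_in, d_hid) weight matrices
    """
    concat = torch.cat([h_prev, x_t], dim=1)
    r = torch.sigmoid(concat @ W_r)                   # reset gate
    z = torch.sigmoid(concat @ W_z)                   # update gate
    h_reset = h_prev * r                              # h_{t-1}' = h_{t-1} (.) r
    h_cand = torch.tanh(torch.cat([h_reset, x_t], dim=1) @ W_h)  # candidate h'
    return (1 - z) * h_prev + z * h_cand              # h_t
```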
The output of the fully connected layer is fed into a softmax regression layer, and the softmax classifier outputs the classification probability of each flow type; the label with the highest probability is the classification result of the hierarchical network for the flow. The loss function used in the model is the mean square loss function, and the training optimizer is AdamOptimizer, which uses adaptive moment estimation for gradient descent.
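A hedged sketch of this training configuration, reusing the `CNNGRU` module sketched above (the learning rate, class count, and the use of one-hot targets with the mean square loss are illustrative assumptions; the patent only names the loss and the optimizer):

```python
import torch
import torch.nn.functional as F

NUM_CLASSES = 11                                 # e.g. normal + 10 attack types
model = CNNGRU(num_classes=NUM_CLASSES)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images, labels):
    """One gradient step: images (batch, 1, 40, 40), labels (batch,) int."""
    model.train()
    optimizer.zero_grad()
    probs = torch.softmax(model(images), dim=1)              # softmax layer
    targets = F.one_hot(labels, num_classes=NUM_CLASSES).float()  # one-hot for MSE
    loss = F.mse_loss(probs, targets)                        # mean square loss
    loss.backward()
    optimizer.step()
    return loss.item()
```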
Further, step (4) also includes: comparing the results predicted by the model with the actual results of the test set and judging the model's predictions against specific metrics, the reference items being accuracy, precision, recall, F1-Measure, and convergence speed.
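The first four reference metrics can be computed from the test-set predictions, for example with scikit-learn (a sketch: y_true and y_pred are assumed to be integer label arrays, macro averaging is an assumption for the multi-class case, and convergence speed is measured separately as the number of iterations or wall-clock time until the loss stabilizes):

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(y_true, y_pred):
    """Return the four reference metrics for multi-class predictions."""
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
        "recall":    recall_score(y_true, y_pred, average="macro", zero_division=0),
        "f1":        f1_score(y_true, y_pred, average="macro", zero_division=0),
    }
```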
The beneficial effects of the above technical solution provided by the present invention include at least the following. Compared with the deep-learning-based network intrusion detection methods currently in wide use, the method has these advantages:
1. The acquired traffic data come directly from the transmitted packets, so the cost of data acquisition is extremely low, which significantly broadens the range of possible data sources;
2. The one-dimensional packet data are innovatively converted into two-dimensional images; in this way the different features within a packet are fully combined, yielding feature combinations that better describe the packet type;
3. A GRU network is used to describe the temporal relationship between packets. Given that, among the first 10 packets intercepted from the same flow, only some packets may contain attack information, the GRU network's selective sequential dependencies capture this situation by design and describe packet transmission more realistically;
4. After the two networks are combined hierarchically, the proposed method achieves a clear improvement in accuracy over traditional machine learning methods. According to the experimental results, the accuracy of distinguishing normal traffic from attack traffic reaches 99.92%, and the accuracy of classifying normal traffic and the types of attack traffic reaches 99.77%; traditional methods cannot reach such high prediction accuracy;
5. Compared with traditional machine learning methods or single-network deep learning methods, the drawback of this model is its longer convergence time.
Other features and advantages of the present invention will be set forth in the description that follows and will in part become apparent from the description or be learned by practicing the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description, the claims, and the accompanying drawings.
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
The accompanying drawings provide a further understanding of the present invention and constitute a part of the specification; together with the embodiments of the present invention they serve to explain the invention and do not limit it. In the drawings:
Fig. 1 is a model structure diagram of a network intrusion detection method based on a CNN-GRU hierarchical neural network disclosed in an embodiment of the present invention;
Fig. 2 is a flow chart of the graphical conversion of network traffic data.
Detailed Description of the Embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and so that its scope can be fully conveyed to those skilled in the art.
As shown in Fig. 1, a specific implementation of the network intrusion detection method based on a CNN-GRU hierarchical neural network is as follows:
Network traffic is captured with the Wireshark software to obtain network traffic packets, i.e., the packets to be classified; at this stage the content stored in the packets is binary data.
The packets to be classified are labeled, marking normal traffic and attack traffic as required. If the attack traffic needs to be classified by type, the different kinds of attack traffic are also labeled; the traffic type labels are stored as integers starting from 0.
The packets to be classified are preprocessed through feature engineering, and the captured network traffic is split into flows according to source IP address, source port, and destination IP address; the splitting is performed with the SliptCat software.
The packets within every flow are cleaned: the MAC source address, the MAC destination address, and the network protocol type information are removed from each packet; the first 160 bytes of each packet are extracted, and packets shorter than 160 bytes are padded with zeros.
Choosing 160 bytes of data balances two effects: a shorter length might not capture enough features to characterize the packet well, while a longer length would significantly increase the model's training time and introduce excessive zero padding, which in turn would distort the representation of the packet's features.
Each data flow is cleaned: the first 10 packets of each flow are extracted, and flows with fewer than 10 packets are padded with 160-byte all-zero packets until they contain 10 packets.
Ten packets are chosen because, under most network attacks, the amount of network traffic is small; to describe this process well it is inadvisable to select too many packets, otherwise several kinds of network attacks, or attack traffic and normal traffic, are very likely to be mixed together.
At this point the data in each flow amount to 160*10 = 1600 bytes. Each byte is converted to decimal, giving values in the range 0-255, and the 1600-dimensional decimal data are reshaped into a 40*40 matrix.
The values in the 40*40 matrix are converted into gray levels, giving for each matrix a 40*40 single-channel grayscale image and thereby all image samples required for model training.
Of all the processed samples, 20% are taken as test samples and the rest are training samples; the training samples are fed into the model for training so that the model learns how to classify the samples (a split sketch follows).
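A minimal sketch of this 80/20 split (the stratify option and the fixed random seed are assumptions added for reproducibility; `images` and `labels` stand for the arrays produced by the preprocessing sketch earlier):

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    images, labels,
    test_size=0.20,      # 20% of the processed samples become the test set
    stratify=labels,     # keep class proportions similar in both splits (assumption)
    random_state=42,
)
```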
The training samples enter the model, which begins with an improved LeNet-5 network that uses two convolutional layers and two max-pooling layers to extract the spatial features of the raw network traffic data. These features describe the content of each packet; converting the packets into images brings together packet features that were originally far apart, making it easier to learn feature combinations that favor classification.
The CNN has two convolutional layers. The first layer of the convolution process uses 32 5*5 convolution kernels followed by a max-pooling operation, and the second convolutional layer uses 64 3*3 convolution kernels followed by a max-pooling operation. After each convolution, the CNN hidden layer first applies the ReLU activation function and then max pooling; after this processing, the original single-channel 40*40 image is transformed into an 8*8 feature map with 64 channels. After flattening, a 4096-dimensional vector is obtained and passed to the output layer of the CNN network; the output layer is a fully connected layer with 1600 neurons, so this transformation retains the same dimensionality as the original input. After feature extraction, dropout is applied to the fully connected layer so that some neurons are randomly deactivated, preventing overfitting.
The convolution operation uses a convolution kernel ω of size f*f to perform a sliding convolution over an image of size n*n, each sliding step producing a new feature. Let X be the input of the convolution, b the bias term, c_i the new feature produced by the convolution at layer i, and σ_r the ReLU activation function; the new feature obtained by the convolution operation is c_i = σ_r(ω * X_i + b_i). After the convolution operation, an n*n image yields a feature map of size (n-f+1)*(n-f+1), the size being determined by the sliding window of the f*f convolution kernel. After convolution, max pooling is applied to the feature map c, taking the maximum value in each selected window as the final feature; with a 2*2 pooling window, the final feature map size is [(n-f+1)/2]*[(n-f+1)/2].
The second stage of the model is a two-layer GRU network used to extract the temporal features of the raw network traffic data. These features describe the relationships among the packets of the same flow in timestamp order, which matches the actual process of packet transmission and characterizes the network flow more comprehensively for identifying its type.
Each layer of this network contains 256 GRU units, and the activation function of each layer is a sigmoid function for the nonlinear operations. The last layer of the GRU network is a fully connected layer, and the number of neurons in the fully connected layer equals the number of flow categories.
Each GRU unit uses the state h_{t-1} passed down from the previous step and the input x_t of the current node to obtain two gating signals, where r is the reset gate and z is the update gate. After the gating signals are obtained, the reset gate is first applied to give h_{t-1}' = h_{t-1} ⊙ r; h_{t-1}' is then concatenated with the input x_t and passed through a tanh activation function that scales the data to the range -1 to 1, producing the candidate state h'. The forgetting and memorizing steps are carried out simultaneously using the previously obtained update gate z, giving the final expression h_t = (1-z) ⊙ h_{t-1} + z ⊙ h', where ⊙ denotes element-wise multiplication.
The output of the fully connected layer is fed into a softmax regression layer, and the softmax classifier outputs the classification probability of each flow type. The label with the highest probability is the classification result of the hierarchical network for the flow. The loss function used in the model is the mean square loss function, and the training optimizer is AdamOptimizer, which uses adaptive moment estimation for gradient descent.
After model training is complete, the test set data are fed into the model; the model predicts the input data according to the parameters obtained from training and classifies unknown network traffic to determine whether it is attack traffic and, if so, what type of attack traffic it is.
The results predicted by the model are compared with the actual results of the test set, and the model's predictions are judged against specific metrics; the reference items are accuracy, precision, recall, F1-Measure, and convergence speed.
Specific embodiment:
The model is tested with the CICIDS2017 dataset, whose advantages are its richer variety of traffic types and its relatively recent release date, which better reflect the actual situation of current networks. The dataset comes from attack scenarios designed by the researchers: all data collected on the first day are normal network traffic, while over the following four days the network was attacked and the traffic information was recorded. The final result is stored in PCAP files containing all traffic, labeled as normal network traffic or as various network attacks. Considering the reliability of the training results, the top ten attack traffic types and the normal traffic were selected as the training and test sets, ensuring that each type contains at least two thousand flows. Since the labels provided with the dataset do not meet the actual requirements, labels were re-added to the traffic data to satisfy the training requirements. After this processing, the numbers and proportions of network flows are shown in the table below:
Table 1: Number and proportion of network flows
To make the testing of the model more complete, we tested both the result of classifying only normal traffic versus attack traffic and the result of classifying each type of attack traffic, with twenty thousand training iterations:
Table 2: Model classification results
As can be seen from the table, the prediction accuracy of the model exceeds 99.5% in either classification mode, demonstrating very high training accuracy.
This example shows that the method can effectively and accurately classify traffic for network intrusion detection and thereby realize network intrusion detection.
It should be understood that the specific order or hierarchy of steps in the disclosed processes is an example of an exemplary approach. Based on design preferences, it should be understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in an exemplary order and are not meant to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment to simplify the disclosure. This method of disclosure should not be interpreted as reflecting an intention that embodiments of the claimed subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, the invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
Those skilled in the art will also appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in a user terminal. Of course, the processor and the storage medium may also reside in the user terminal as discrete components.
For a software implementation, the techniques described in this application may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory unit and executed by a processor. The memory unit may be implemented within the processor or external to the processor, in which case it is communicatively coupled to the processor via various means, as is known in the art.
The above description includes examples of one or more embodiments. It is of course not possible to describe every conceivable combination of components or methods for the purpose of describing the above embodiments, but one of ordinary skill in the art will recognize that further combinations and permutations of the various embodiments are possible. Accordingly, the embodiments described herein are intended to cover all such alterations, modifications, and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in the specification or the claims, it is intended to be inclusive in a manner similar to the term "comprising," as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a "non-exclusive or."
Claims (4)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011590155.7A CN112839034B (en) | 2020-12-29 | 2020-12-29 | A Network Intrusion Detection Method Based on CNN-GRU Hierarchical Neural Network |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011590155.7A CN112839034B (en) | 2020-12-29 | 2020-12-29 | A Network Intrusion Detection Method Based on CNN-GRU Hierarchical Neural Network |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN112839034A CN112839034A (en) | 2021-05-25 |
| CN112839034B true CN112839034B (en) | 2022-08-05 |
Family
ID=75925146
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011590155.7A Active CN112839034B (en) | 2020-12-29 | 2020-12-29 | A Network Intrusion Detection Method Based on CNN-GRU Hierarchical Neural Network |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN112839034B (en) |
Families Citing this family (19)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113364787B (en) * | 2021-06-10 | 2023-08-01 | 东南大学 | A Botnet Traffic Detection Method Based on Parallel Neural Network |
| CN113556328B (en) * | 2021-06-30 | 2022-09-30 | 杭州电子科技大学 | A Deep Learning-Based Encrypted Traffic Classification Method |
| CN113569992B (en) * | 2021-08-26 | 2024-01-09 | 中国电子信息产业集团有限公司第六研究所 | Abnormal data identification method and device, electronic equipment and storage medium |
| CN114157513B (en) * | 2022-02-07 | 2022-09-13 | 南京理工大学 | Vehicle networking intrusion detection method and equipment based on improved convolutional neural network |
| CN114760098A (en) * | 2022-03-16 | 2022-07-15 | 南京邮电大学 | CNN-GRU-based power grid false data injection detection method and device |
| CN114615172B (en) * | 2022-03-22 | 2024-04-16 | 中国农业银行股份有限公司 | Flow detection method and system, storage medium and electronic equipment |
| CN114724020B (en) * | 2022-04-14 | 2024-12-06 | 南京邮电大学 | A mobile application recognition method based on CNN-GRU |
| CN115001781B (en) * | 2022-05-25 | 2023-05-26 | 国网河南省电力公司信息通信公司 | Terminal network state safety monitoring method |
| CN115037535B (en) * | 2022-06-01 | 2023-07-07 | 上海磐御网络科技有限公司 | Intelligent recognition method for network attack behaviors |
| CN115102773A (en) * | 2022-06-29 | 2022-09-23 | 苏州浪潮智能科技有限公司 | Smuggling attack detection method, system, equipment and readable storage medium |
| CN115580445B (en) * | 2022-09-22 | 2024-06-28 | 东北大学 | Unknown attack intrusion detection method, unknown attack intrusion detection device and computer readable storage medium |
| CN115277258B (en) * | 2022-09-27 | 2022-12-20 | 广东财经大学 | Network attack detection method and system based on temporal-spatial feature fusion |
| CN115865486B (en) * | 2022-11-30 | 2024-04-09 | 山东大学 | Network intrusion detection method and system based on multi-layer perception convolutional neural network |
| CN115865534B (en) * | 2023-02-27 | 2023-05-12 | 深圳大学 | Malicious encryption-based traffic detection method, system, device and medium |
| CN117640254A (en) * | 2024-01-25 | 2024-03-01 | 浙江大学 | An industrial control network intrusion detection method and device |
| CN117914618B (en) * | 2024-01-30 | 2025-04-22 | 广东技术师范大学 | Network intrusion detection method, system, equipment and medium based on contrast learning |
| CN118659934B (en) * | 2024-08-20 | 2024-11-15 | 温州中壹技术研究院有限公司 | Network traffic security assessment method and system based on big data |
| CN119363386A (en) * | 2024-09-27 | 2025-01-24 | 广州大学 | A CNN-GRU industrial control network attack detection method based on packet and flow-level feature fusion |
| CN119544378A (en) * | 2024-12-31 | 2025-02-28 | 深圳市东美通科技有限公司 | Network defense method, device, equipment and medium based on unknown attack detection |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109034264A (en) * | 2018-08-15 | 2018-12-18 | 云南大学 | Traffic accident seriousness predicts CSP-CNN model and its modeling method |
| CN109086878A (en) * | 2018-10-19 | 2018-12-25 | 电子科技大学 | Keep the convolutional neural networks model and its training method of rotational invariance |
| CN110351244A (en) * | 2019-06-11 | 2019-10-18 | 山东大学 | A kind of network inbreak detection method and system based on multireel product neural network fusion |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9904893B2 (en) * | 2013-04-02 | 2018-02-27 | Patternex, Inc. | Method and system for training a big data machine to defend |
| CN109117634B (en) * | 2018-09-05 | 2020-10-23 | 济南大学 | Malicious software detection method and system based on network traffic multi-view fusion |
| CN110619049A (en) * | 2019-09-25 | 2019-12-27 | 北京工业大学 | Message anomaly detection method based on deep learning |
| CN110597240B (en) * | 2019-10-24 | 2021-03-30 | 福州大学 | A fault diagnosis method for hydro-generator units based on deep learning |
| CN111064678A (en) * | 2019-11-26 | 2020-04-24 | 西安电子科技大学 | Network traffic classification method based on lightweight convolutional neural network |
| CN111371806B (en) * | 2020-03-18 | 2021-05-25 | 北京邮电大学 | A kind of Web attack detection method and device |
| CN111683108B (en) * | 2020-08-17 | 2020-11-17 | 鹏城实验室 | Method for generating network flow anomaly detection model and computer equipment |
- 2020-12-29: application CN202011590155.7A filed in China; granted as patent CN112839034B (status: Active)
Non-Patent Citations (2)
| Title |
|---|
| Research on construction and optimization of an IoT intrusion detection classification model based on the fusion of ResNet and bidirectional LSTM; Chen Hongsong et al.; Journal of Hunan University (Natural Sciences); 2020-08-25 (No. 08); full text * |
| Research on intrusion detection based on deep learning; Zhang Lulu et al.; Information & Computer (Theory Edition); 2019-06-15 (No. 11); full text * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN112839034A (en) | 2021-05-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN112839034B (en) | A Network Intrusion Detection Method Based on CNN-GRU Hierarchical Neural Network | |
| CN112953924B (en) | Network abnormal flow detection method, system, storage medium, terminal and application | |
| CN106599869B (en) | A vehicle attribute recognition method based on multi-task convolutional neural network | |
| CN112003870A (en) | Network encryption traffic identification method and device based on deep learning | |
| CN109104441A (en) | A kind of detection system and method for the encryption malicious traffic stream based on deep learning | |
| CN111915437A (en) | RNN-based anti-money laundering model training method, device, equipment and medium | |
| CN113128287B (en) | Method and system for training cross-domain facial expression recognition model and facial expression recognition | |
| CN112104570A (en) | Traffic classification method and device, computer equipment and storage medium | |
| CN112165484B (en) | Network encryption traffic identification method and device based on deep learning and side channel analysis | |
| CN111741002B (en) | A method and device for training a network intrusion detection model | |
| CN111835763B (en) | DNS tunnel traffic detection method and device and electronic equipment | |
| CN110225030A (en) | Malice domain name detection method and system based on RCNN-SPP network | |
| CN111355671B (en) | Network traffic classification method, medium and terminal equipment based on self-attention mechanism | |
| CN110781980B (en) | Training method of target detection model, target detection method and device | |
| CN114998330A (en) | Unsupervised wafer defect detection method, unsupervised wafer defect detection device, unsupervised wafer defect detection equipment and storage medium | |
| CN111367908A (en) | Incremental intrusion detection method and system based on security assessment mechanism | |
| CN115630298A (en) | Network flow abnormity detection method and system based on self-attention mechanism | |
| CN118018260A (en) | Network attack detection method, system, equipment and medium | |
| CN117633627A (en) | Deep learning unknown network traffic classification method and system based on evidence uncertainty evaluation | |
| CN115713669B (en) | An image classification method, device, storage medium and terminal based on inter-class relationships | |
| CN108805211A (en) | IN service type cognitive method based on machine learning | |
| CN118965201B (en) | A malware detection and classification method and system based on multimodal feature fusion | |
| CN114884704A (en) | Network traffic abnormal behavior detection method and system based on involution and voting | |
| CN115987631A (en) | Malicious traffic identification method and system based on deep learning | |
| CN117834184A (en) | A detection method and storage medium for malicious Internet entities |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |