
CN117036771A - Indicator diagram feature extraction method and system based on full convolution neural network - Google Patents


Info

Publication number
CN117036771A
CN117036771A
Authority
CN
China
Prior art keywords
dynamometer
neural network
convolutional neural
model
fully convolutional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310747344.8A
Other languages
Chinese (zh)
Inventor
王相
邵志伟
何岩峰
芮诚
丁阳阳
陈林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou University
Original Assignee
Changzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou University
Priority to CN202310747344.8A
Publication of CN117036771A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method and system for extracting features from dynamometer diagrams based on a fully convolutional neural network, relating to the technical field of oil production fault diagnosis. The method comprises: constructing a dynamometer diagram image data set; constructing a fully convolutional neural network model whose input and output are both dynamometer diagram images; pre-training the model, where the training evaluation index is the similarity between the dynamometer diagram image output by the model and the input image; performing hyperparameter optimization on the model by grid search to improve its feature extraction capability; and truncating the model at its convolution stage to serve as the dynamometer diagram feature extractor. The disclosed method avoids the poor performance, slow speed, and difficult model optimization of learning directly from a large number of samples. When oil well fault diagnosis is subsequently established from dynamometer diagrams, only a small number of labeled samples are needed to meet the model's learning requirements, which reduces the labor and time costs of sample labeling and model optimization and improves the accuracy of dynamometer diagram recognition.

Description

A method and system for dynamometer diagram feature extraction based on a fully convolutional neural network

Technical Field

The present invention relates to the technical field of oil production fault diagnosis, and in particular to a method and system for extracting features from dynamometer diagrams based on a fully convolutional neural network.

Background

Pumping units are the main equipment in oilfield development, and during oil extraction they are prone to failure under the influence of factors such as temperature and the formation. Fault diagnosis of pumping wells is therefore a key problem in the field of oil extraction. The dynamometer diagram is an important basis for diagnosing pumping unit faults. With the rapid development of data mining and storage technology, sensors collect all kinds of well data, which are transmitted to the oilfield data center for storage and analysis, forming big data for oil well production fault diagnosis. A new generation of artificial intelligence technology based on "big data + deep learning" keeps breaking through the limitations of existing techniques and is leading oil well fault diagnosis into a new stage.

The diagnostic accuracy of a pumping well fault diagnosis model depends on the quantity and quality of the dynamometer diagram sample set. On the one hand, fault diagnosis requires training on a large number of labeled samples, and such training iterates and converges slowly, making model optimization time-consuming and difficult. On the other hand, current fault diagnosis studies use small sample sets that cover few fault types, and sample imbalance is a prominent problem.

The purpose of the present invention is to provide a dynamometer diagram feature extraction method based on a fully convolutional neural network that achieves high recognition accuracy while reducing the model's dependence on large amounts of labeled data and the time spent on training.

Summary of the Invention

The purpose of this section is to outline some aspects of embodiments of the invention and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, the abstract, and the title of the invention to avoid obscuring their purpose; such simplifications or omissions cannot be used to limit the scope of the invention.

The present invention is proposed in view of the above and/or the problems existing in current dynamometer diagram feature extraction methods based on fully convolutional neural networks.

Therefore, the problems to be solved by the present invention are: fault diagnosis requires training on a large number of labeled samples, and such training iterates and converges slowly, making model optimization time-consuming and difficult; moreover, current fault diagnosis studies use small sample sets that cover few fault types, and sample imbalance is a prominent problem.

To solve the above technical problems, the present invention provides the following technical solution: a dynamometer diagram feature extraction method based on a fully convolutional neural network, comprising: constructing a dynamometer diagram image data set; constructing a fully convolutional neural network model whose input and output are both dynamometer diagram images; pre-training the fully convolutional neural network model with the constructed data set, where the training evaluation index is the similarity between the dynamometer diagram image output by the model and the input image; performing hyperparameter optimization on the model by grid search to improve its extraction capability; and truncating the convolution stage of the model to serve as the dynamometer diagram feature extractor.

As a preferred solution of the dynamometer diagram feature extraction method of the present invention, constructing the dynamometer diagram image data set comprises: from the massive displacement-load data in the database, plotting standardized dynamometer diagram images in batches, with displacement on the abscissa and load on the ordinate, to build the data set.

As a preferred solution, the fully convolutional neural network model comprises convolution, pooling, deconvolution, and unpooling operations. The input of the model is a dynamometer diagram image; the deconvolution stage is the inverse of the convolution stage and decodes the output of the convolution layers; the output of the model is a dynamometer diagram image; the input and output of the model are the same dynamometer diagram image.

As a preferred solution, the deconvolution stage comprises padding, convolution, and cropping.

As a preferred solution, the pre-training comprises: pre-training the fully convolutional neural network with the dynamometer diagram image data set, restoring the dynamometer diagram features output by the convolution operations to the input dimensions through deconvolution operations, and, after each training round, computing the similarity between the features of the dynamometer diagram at the input of the convolution layers and the features restored by the deconvolution operations. Feature similarity is measured with the Jaccard similarity coefficient: for two feature matrices A and B, the Jaccard similarity coefficient of each row vector is computed and then averaged, expressed as,

J(A,B) = (J(A1,B1) + J(A2,B2) + ... + J(An,Bn)) / n

For two vectors An and Bn, regarded as two sets, the Jaccard similarity coefficient is expressed as,

J(An,Bn) = |S| / |U|

where S is the intersection of the sets An and Bn and U is their union. If the feature similarity exceeds 90%, the convolution stage of the fully convolutional neural network model is truncated; if the feature similarity is below 90%, hyperparameter optimization is performed.

As a preferred solution, the hyperparameter optimization comprises: optimizing the hyperparameters of the fully convolutional neural network model until the feature similarity between the dynamometer diagram input to the convolution layers and the dynamometer diagram output by the deconvolution layers reaches 90% or above, yielding the optimized model. Grid search is adopted as the hyperparameter optimization method, with the value range of each hyperparameter to be optimized defined in advance.

As a preferred solution, obtaining the dynamometer diagram feature extractor comprises: removing the deconvolution layers of the fully convolutional neural network model, yielding a feature extractor based on the convolution stage of the model.

Another object of the present invention is to provide a system implementing the above method, which, by constructing a dynamometer diagram feature extraction system, reduces the model's dependence on large amounts of labeled data and the time spent on training.

As a preferred solution of the dynamometer diagram feature extraction system based on a fully convolutional neural network of the present invention, the system comprises a dynamometer diagram input module, a pre-training module, a hyperparameter optimization module, and a feature extraction module. The dynamometer diagram input module inputs the dynamometer diagram data whose features are to be extracted. The pre-training module pre-trains the fully convolutional neural network with the dynamometer diagram image data set, restores the features output by the convolution operations to the input dimensions through deconvolution operations, and computes the similarity between the features at the input of the convolution layers and the features restored by the deconvolution operations, using the Jaccard similarity coefficient as the similarity measure. The hyperparameter optimization module optimizes the hyperparameters of the trained model, searching for the best hyperparameter combination by grid search until the feature similarity between the dynamometer diagram input to the convolution layers and the dynamometer diagram output by the deconvolution layers reaches 90% or above, yielding the optimized model. The feature extraction module uses the dynamometer diagram feature extractor to extract features from dynamometer diagram data.

A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that when the processor executes the computer program, the steps of the above method are implemented.

A computer-readable storage medium on which a computer program is stored, characterized in that when the computer program is executed by a processor, the steps of the above method are implemented.

The beneficial effect of the present invention is that, when labeled data are scarce, the provided feature extraction method uses a large number of unlabeled samples to improve learning performance. A "convolution + deconvolution" fully convolutional neural network serves as the trainer, extraction performance is measured by the similarity between input and output features, and a feature extractor that can accurately characterize dynamometer diagrams is obtained, avoiding the poor performance, slow speed, and difficult optimization of learning directly from a large number of samples. When oil well fault diagnosis is subsequently established from dynamometer diagrams, only a small number of labeled samples are needed to meet the model's learning requirements, which reduces the labor and time costs of sample labeling and model optimization and improves the accuracy of dynamometer diagram recognition.

Brief Description of the Drawings

To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort. In the drawings:

Figure 1 is an overall flow chart of a dynamometer diagram feature extraction method based on a fully convolutional neural network according to one embodiment of the present invention.

Figure 2 is a feature extraction flow chart of the dynamometer diagram feature extraction method according to the first embodiment of the present invention.

Figure 3 is a system structure diagram of a dynamometer diagram feature extraction system based on a fully convolutional neural network according to the first embodiment of the present invention.

Figure 4 is a schematic diagram of some dynamometer diagrams in the data set of the method according to the first embodiment of the present invention.

Figure 5 is a schematic diagram of the fully convolutional neural network of the method according to the first embodiment of the present invention.

Detailed Description

To make the above objects, features, and advantages of the present invention more apparent and understandable, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.

Many specific details are set forth in the following description to facilitate a full understanding of the present invention, but the present invention can also be implemented in ways other than those described here; those skilled in the art can make similar generalizations without departing from the essence of the present invention, and therefore the present invention is not limited to the specific embodiments disclosed below.

Furthermore, "one embodiment" or "an embodiment" herein refers to a specific feature, structure, or characteristic that may be included in at least one implementation of the present invention. "In one embodiment" appearing in different places in this specification does not always refer to the same embodiment, nor to a separate or alternative embodiment that is mutually exclusive with other embodiments.

Embodiment 1

Referring to Figures 1 and 2, a first embodiment of the present invention provides a dynamometer diagram feature extraction method based on a fully convolutional neural network, comprising: constructing a dynamometer diagram image data set; constructing a fully convolutional neural network model whose input and output are both dynamometer diagram images; pre-training the model with the constructed data set, where the training evaluation index is the similarity between the dynamometer diagram image output by the model and the input image; performing hyperparameter optimization on the model by grid search to improve its extraction capability; and truncating the convolution stage of the model to serve as the dynamometer diagram feature extractor.

S1. Construct the dynamometer diagram data set. From the massive displacement-load data in the database, standardized dynamometer diagram images are plotted in batches, with displacement on the abscissa and load on the ordinate, as shown in Figure 2.
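As a rough illustration of step S1, the snippet below maps one curve of (displacement, load) samples onto a fixed 200x100 pixel canvas so that every plotted diagram shares the same coordinate frame. The canvas size matches the 200*100 input dimension mentioned later; the data layout (one polyline of samples per pump stroke) is an assumption.

```python
def to_canvas(points, width=200, height=100):
    """Map raw (displacement, load) pairs to integer pixel coordinates
    on a fixed-size canvas (assumed per-stroke data layout)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    pixels = []
    for x, y in points:
        px = round((x - x_min) / (x_max - x_min) * (width - 1))
        # Flip the load axis so larger loads sit higher in the image.
        py = round((1 - (y - y_min) / (y_max - y_min)) * (height - 1))
        pixels.append((px, py))
    return pixels

# A toy stroke: displacement 0..3 m, load 20..60 kN.
curve = [(0.0, 20.0), (1.5, 55.0), (3.0, 60.0), (1.5, 25.0)]
print(to_canvas(curve))
```

Batch plotting would then rasterize each pixel list into an image file; that step is omitted here.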

S2. Construct the fully convolutional neural network model. The input of the model is a dynamometer diagram image. The model comprises convolution, pooling, deconvolution, unpooling, and related operations, and is built from a convolution stage and a deconvolution stage; the deconvolution stage is the inverse of the convolution stage and decodes the output of the convolution layers. The output of the model is a dynamometer diagram image, and the input and output of the model are the same dynamometer diagram image. The convolution stage uses, but is not limited to, VGG16 as the feature extractor: when a 200*100*3 dynamometer diagram is input, the first layer of the VGG16 model extracts its features into 100*50*32 dimensions, and after a series of feature extractions the last layer compresses the dynamometer diagram features to 7*4*512.
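The encoder's spatial bookkeeping can be sketched as follows: five 2x2 poolings with ceiling rounding take 200x100 down to 7x4, consistent with the 100*50 and 7*4 figures above. The intermediate channel progression (32 through 512) is an assumption interpolated from the 32 and 512 values in the text.

```python
import math

def encoder_shapes(w=200, h=100, channels=(32, 64, 128, 256, 512)):
    """Track the feature-map size through five pooling stages."""
    shapes = []
    for c in channels:
        w = math.ceil(w / 2)  # "same"-padded 2x2 pool halves each axis (ceil)
        h = math.ceil(h / 2)
        shapes.append((w, h, c))
    return shapes

print(encoder_shapes())
# first stage gives (100, 50, 32), the last (7, 4, 512), matching the text
```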

Furthermore, the deconvolution stage can be regarded as the inverse of the convolution stage: by decoding the output of the convolution layers, a reconstruction of the original input is obtained. The detailed process is divided into the following steps:

Padding: a padding operation is performed around the output feature map to keep the output the same size as the original input. Zero padding is used, adding a border of zero-valued pixels around the output feature map.

Convolution: a convolution operation is performed on the padded feature map, usually with the same kernel as the convolution layer; the kernel size and stride are usually also the same. The purpose of the convolution is to convolve the padded pixels with the input feature information to obtain more refined features.

Cropping: finally, the edges of the output feature map are cropped to remove the padded portion and obtain the final deconvolution result.
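A minimal single-channel sketch of the three sub-steps above (zero-pad, convolve, crop) might look like this; the 3x3 averaging kernel is an illustrative stand-in for a learned kernel.

```python
import numpy as np

def deconv_step(feature, kernel):
    """Illustrate pad -> convolve -> crop on one 2-D feature map."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # 1) Padding: surround the map with zeros so the size is preserved.
    padded = np.pad(feature, ((ph, ph), (pw, pw)), mode="constant")
    # 2) Convolution: slide the kernel over the padded map.
    out = np.zeros(padded.shape)
    for i in range(ph, padded.shape[0] - ph):
        for j in range(pw, padded.shape[1] - pw):
            out[i, j] = np.sum(padded[i - ph:i + ph + 1,
                                      j - pw:j + pw + 1] * kernel)
    # 3) Cropping: drop the padded border to recover the original size.
    return out[ph:-ph, pw:-pw]

f = np.ones((4, 5))              # toy feature map
k = np.full((3, 3), 1 / 9)       # 3x3 averaging kernel (stand-in)
r = deconv_step(f, k)
print(r.shape)                   # same as the input: (4, 5)
```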

Furthermore, the first layer of the deconvolution stage reads the dynamometer diagram features at 7*4*512 dimensions, and the last layer restores the features to 200*100*3.

S3. Pre-train the fully convolutional neural network model. The network is pre-trained with the dynamometer diagram data set, the weight parameters are initialized, and the dynamometer diagram features output by the convolution operations are continuously restored to the input dimensions through deconvolution operations.

Furthermore, after each training round, the similarity is computed between the features of the dynamometer diagram at the input of the convolution layers and the features restored by the deconvolution operations; both feature volumes have dimensions 200*100*3.

Feature similarity is measured with the Jaccard similarity coefficient: the closer the coefficient is to 1, the more similar the two feature matrices are.

Furthermore, for two feature matrices A and B, the Jaccard similarity coefficient of each row vector is computed and then averaged; the formula is:

J(A,B) = (J(A1,B1) + J(A2,B2) + ... + J(An,Bn)) / n

Furthermore, for two vectors An and Bn, regarded as two sets with intersection S and union U, the Jaccard similarity coefficient can be expressed as:

J(An,Bn) = |S| / |U|
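The metric above can be sketched directly: each row of the two feature matrices is treated as a set, the per-row coefficients |S|/|U| are computed, and the row-wise results are averaged.

```python
def jaccard(a_row, b_row):
    """Jaccard coefficient of two vectors regarded as sets: |S| / |U|."""
    s_a, s_b = set(a_row), set(b_row)
    return len(s_a & s_b) / len(s_a | s_b)

def matrix_similarity(A, B):
    """Average the row-wise Jaccard coefficients of two feature matrices."""
    return sum(jaccard(a, b) for a, b in zip(A, B)) / len(A)

A = [[1, 2, 3], [4, 5, 6]]
B = [[1, 2, 4], [4, 5, 6]]
print(matrix_similarity(A, B))  # rows score 0.5 and 1.0, so the mean is 0.75
```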

S4. Hyperparameter optimization. The hyperparameters of the trained fully convolutional neural network model are optimized until the feature similarity between the dynamometer diagram input to the convolution layers and the dynamometer diagram output by the deconvolution layers reaches 90% or above, yielding the optimized model.

Furthermore, grid search is adopted as the hyperparameter optimization method for the model, with the possible value range of each hyperparameter to be optimized defined in advance.

Furthermore, the hyperparameters to be optimized and their value ranges are defined as follows:

Batch size: batch_size_list = [10, 20, 30, 40, 50]

Dropout rate: dropout_list = [0.3, 0.4, 0.5, 0.6, 0.7]

Learning rate: learning_rate_list = [0.1, 0.01, 0.001, 0.0001]

Activation function: activation_list = [sigmoid, tanh, relu]

Optimizer: optimizer_list = [sgd, Adadelta, Adam, RMSProp]

Furthermore, the grid search evaluates candidates by enumerating all possible hyperparameter combinations.
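The enumeration can be sketched with itertools.product over the ranges defined above; `train_and_score` below is a hypothetical stand-in for training the autoencoder with one combination and returning the input/output similarity.

```python
import itertools

batch_size_list = [10, 20, 30, 40, 50]
dropout_list = [0.3, 0.4, 0.5, 0.6, 0.7]
learning_rate_list = [0.1, 0.01, 0.001, 0.0001]
activation_list = ["sigmoid", "tanh", "relu"]
optimizer_list = ["sgd", "Adadelta", "Adam", "RMSProp"]

# Every candidate configuration, in a fixed order.
grid = list(itertools.product(batch_size_list, dropout_list,
                              learning_rate_list, activation_list,
                              optimizer_list))
print(len(grid))  # 5 * 5 * 4 * 3 * 4 = 1200 combinations

def train_and_score(params):
    # Hypothetical stand-in: the real version would train the model with
    # `params` and return the input/output Jaccard similarity.
    batch_size, dropout, lr, activation, optimizer = params
    return -abs(lr - 0.001)  # toy score that favors lr = 0.001

best = max(grid, key=train_and_score)
```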

S5. Remove the deconvolution layers of the model. When the input/output feature similarity reaches 90%, the 7*4*512 features extracted by the convolution stage can essentially restore all the information contained in the dynamometer diagram; at this point the convolution stage of the fully convolutional neural network is truncated and used as the dynamometer diagram feature extractor.
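Step S5 can be sketched abstractly: model the network as a list of layer callables and keep only the encoder prefix once the similarity criterion is met. The toy layers here are placeholders, not the actual convolution operations.

```python
# Toy layers standing in for the conv stage (kept) and deconv stage (dropped).
encoder = [lambda x: x * 2, lambda x: x + 1]
decoder = [lambda x: x - 1, lambda x: x / 2]

autoencoder = encoder + decoder

def run(layers, x):
    """Apply the layers in sequence."""
    for layer in layers:
        x = layer(x)
    return x

# Truncate after the conv stage: the remaining prefix is the feature extractor.
feature_extractor = autoencoder[:len(encoder)]
print(run(feature_extractor, 3))  # encoder only: (3 * 2) + 1 = 7
```

With the full autoencoder, `run(autoencoder, 3)` reconstructs the input (returns 3.0); the truncated prefix instead emits the compressed features.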

Furthermore, exploiting this model's strong ability to extract dynamometer diagram features, the subsequent task of diagnosing oil well faults from dynamometer diagrams needs only a small number of labeled samples to meet the model's learning requirements, reducing the labor and time costs of sample labeling, model training, and optimization, and effectively improving the accuracy of dynamometer diagram recognition.

Embodiment 2

Referring to Figure 3, a second embodiment of the present invention is shown. Different from the previous embodiment, it provides a dynamometer diagram feature extraction system based on a fully convolutional neural network, comprising: a dynamometer diagram input module, a pre-training module, a hyperparameter optimization module, and a feature extraction module.

The dynamometer diagram input module is used to input the dynamometer diagram data from which features are to be extracted.

The pre-training module uses the dynamometer diagram image data set to pre-train the fully convolutional neural network, restores the dynamometer diagram features output by the model's convolution operations to the input dimensions through deconvolution operations, and computes the similarity between the features fed into the convolution layers and the features restored by the deconvolution operations, using the Jaccard similarity coefficient as the measure of feature similarity.

The hyperparameter optimization module performs hyperparameter optimization on the trained fully convolutional neural network model, searching for the best hyperparameter combination by grid search until the feature similarity between the dynamometer diagram input to the convolution layers and the one output by the deconvolution layers exceeds 90%, at which point the optimized fully convolutional neural network model is obtained.

The feature extraction module uses the dynamometer diagram feature extractor to extract features from the dynamometer diagram data.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device.

More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection having one or more wires (an electronic device), a portable computer disk cartridge (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically by, for example, optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it as necessary, and then stored in a computer memory.

It should be understood that the various parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.

Embodiment 3

Referring to Figures 4 and 5, a third embodiment of the present invention differs from the first two embodiments in that it verifies and illustrates the technical effects achieved by the present method.

In this embodiment, sensors arranged at the wellhead record the displacement and load of the sucker rod every 30 minutes, continuously collecting the raw dynamometer diagram data in real time and transmitting it to a database.

Each 30-minute record contains 200 sample points for displacement and 200 sample points for load; together, these 400 sample points constitute the vector of one dynamometer diagram. In this embodiment, a total of 20,000 such vectors were obtained from the database, as shown in Table 1:

Table 1. Displacement and load data

Further, with the first 200 sample points of each vector as the abscissa (displacement) and the last 200 sample points as the corresponding ordinate (load), standardized dynamometer diagram images are drawn in batches. Each image consists of red, green, and blue channels, has a size of 200*100*3 (3 denotes the three channels) and an aspect ratio of 2:1, has no axis labels, and covers a variety of oil well fault types; some of the images are shown in Figure 4.
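As a sketch of this preprocessing step (assuming Python; the min-max normalization is an illustrative assumption, since the embodiment only states that the images are standardized), each 400-point record can be split into a displacement/load curve before being rasterized into a 200*100*3 image:

```python
def normalize(points):
    """Min-max scale a sequence to [0, 1] (an assumed standardization)."""
    lo, hi = min(points), max(points)
    return [(p - lo) / (hi - lo) for p in points]

def vector_to_curve(vec):
    """Split a 400-point record into a normalized (displacement, load) curve.

    The first 200 points form the abscissa (displacement) and the last 200
    points the corresponding ordinate (load), as described in the embodiment.
    """
    if len(vec) != 400:
        raise ValueError("each record must contain exactly 400 sample points")
    x = normalize(vec[:200])   # displacement
    y = normalize(vec[200:])   # load
    return list(zip(x, y))
```

The resulting 200 (x, y) pairs can then be plotted, without axis labels, onto a fixed 2:1 canvas so that every dynamometer diagram image shares the same scale.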

As shown in Figure 5, the model comprises two stages, convolution and deconvolution. The convolution stage uses VGG16 as the feature extractor: when a 200*100*3 dynamometer diagram is input, the first layer of the VGG16 model extracts its features into a 100*50*32 tensor, and after a series of feature extraction steps, the features in the last layer are compressed to 7*4*512.

Further, the first layer of the deconvolution stage reads the 7*4*512 dynamometer diagram features, and the last layer restores the features to 200*100*3.
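The 7*4 bottleneck resolution is consistent with the five stride-2 pooling stages of a VGG16-style encoder applied to a 200*100 input, assuming ceil-mode rounding at each pooling stage (an assumption; the rounding rule is not stated here). A quick check in Python:

```python
import math

def pooled_size(size, n_pools=5):
    """Spatial extent after n_pools stride-2 pooling stages with ceil rounding."""
    for _ in range(n_pools):
        size = math.ceil(size / 2)
    return size

height, width = 200, 100          # input dynamometer image is 200*100*3
bottleneck = (pooled_size(height), pooled_size(width))
print(bottleneck)                 # (7, 4), matching the 7*4*512 feature map
```

The channel depth of 512 then comes from the last VGG16 convolution block, giving the 7*4*512 features that the deconvolution stage must expand back to 200*100*3.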

Further, after each training round, the similarity between the features fed into the convolution layers and the features restored by the deconvolution operations is computed; both feature tensors have dimensions 200*100*3.

The feature similarity is measured by the Jaccard similarity coefficient: the closer the coefficient is to 1, the more similar the two feature matrices are. After 10 rounds of training, the Jaccard similarity coefficient between the model's input and output reached 0.6.

Further, for two feature matrices A and B, the Jaccard similarity coefficient of each pair of vectors is computed and then averaged:

J(A,B) = (J(A1,B1) + J(A2,B2) + ... + J(An,Bn)) / n

Further, for two vectors An and Bn, regarding them as two sets with intersection S and union U, the Jaccard similarity coefficient can be expressed as:

J(An,Bn) = |S| / |U|
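A minimal sketch of this measure in Python (treating each row vector as a set of values, exactly as the formulas above do; how the real-valued features are discretized before the set comparison is not specified here and would in practice require, e.g., quantization):

```python
def jaccard_vector(a, b):
    """J(An, Bn) = |S| / |U| for two vectors regarded as sets."""
    set_a, set_b = set(a), set(b)
    union = set_a | set_b
    return len(set_a & set_b) / len(union) if union else 1.0

def jaccard_matrix(A, B):
    """Average the per-row Jaccard coefficients of two feature matrices."""
    if len(A) != len(B):
        raise ValueError("matrices must have the same number of rows")
    return sum(jaccard_vector(a, b) for a, b in zip(A, B)) / len(A)
```

A coefficient of 1.0 means every row of A shares exactly the same value set as the corresponding row of B, which is why the training target of 0.9 indicates a near-complete reconstruction.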

Hyperparameter optimization is then performed on the trained fully convolutional neural network model until the feature similarity between the dynamometer diagram input to the convolution layers and the one output by the deconvolution layers exceeds 90%, yielding the optimized fully convolutional neural network model.

Further, grid search is used as the hyperparameter optimization method for the fully convolutional neural network model, and the possible value range of each hyperparameter to be optimized is defined in advance.

Further, the hyperparameters to be optimized and their value ranges are defined as follows:

Batch size: batch_size_list=[10, 20, 30, 40, 50]

Dropout rate: dropout_list=[0.3, 0.4, 0.5, 0.6, 0.7]

Learning rate: learning_rate_list=[0.1, 0.01, 0.001, 0.0001]

Activation function: activation_list=[sigmoid, tanh, relu]

Optimizer: optimizer_list=[sgd, Adadelta, Adam, RMSProp]

Further, grid search evaluates by enumerating all possible hyperparameter combinations. After 10 rounds of training, the best combination was: batch size batch_size_list=20, dropout rate dropout_list=0.3, learning rate learning_rate_list=0.01, activation function activation_list=relu, optimizer optimizer_list=sgd; at this point, the Jaccard similarity coefficient between the input and output reached 0.9.

When the feature similarity between input and output reaches 90%, the 7*4*512 features extracted in the convolution stage can essentially restore all of the information contained in the dynamometer diagram; at that point, the convolution stage of the fully convolutional neural network is taken as the dynamometer diagram feature extractor.

Further, by exploiting the model's strong feature extraction capability for dynamometer diagrams, the subsequent task of diagnosing oil well faults from dynamometer diagrams requires only a small number of labeled samples to satisfy the model's learning requirements. This reduces the labor and time cost of sample labeling, model training, and optimization, and effectively improves the accuracy of dynamometer diagram recognition.

It should be noted that the above embodiments are only intended to illustrate, not limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical solution of the present invention may be modified or equivalently substituted without departing from its spirit and scope, and all such modifications shall fall within the scope of the claims of the present invention.

Claims (10)

1. A dynamometer diagram feature extraction method based on a fully convolutional neural network, characterized by comprising: constructing a dynamometer diagram image data set; constructing a fully convolutional neural network model whose input and output are both dynamometer diagram images; pre-training the fully convolutional neural network model with the constructed dynamometer diagram image data set, the evaluation index of the training being the similarity between the dynamometer diagram image output by the model and the input dynamometer diagram image; performing hyperparameter optimization on the fully convolutional neural network model by grid search to improve the model's extraction capability; and taking the convolution stage of the fully convolutional neural network model as the dynamometer diagram feature extractor.

2. The dynamometer diagram feature extraction method based on a fully convolutional neural network according to claim 1, characterized in that the dynamometer diagram image data set comprises: according to the massive dynamometer diagram displacement-load data in the database, drawing standardized dynamometer diagram images in batches with displacement as the abscissa and load as the ordinate, thereby constructing the dynamometer diagram data set.

3. The dynamometer diagram feature extraction method based on a fully convolutional neural network according to claim 2, characterized in that the fully convolutional neural network model comprises convolution, pooling, deconvolution, and unpooling operation steps; the input of the model is a dynamometer diagram image; the deconvolution stage is the inverse process of the convolution stage and decodes the output of the convolution layers; the output of the model is a dynamometer diagram image; the input and output of the model are the same dynamometer diagram image.

4. The dynamometer diagram feature extraction method based on a fully convolutional neural network according to claim 3, characterized in that the deconvolution stage comprises padding, convolution, and cropping.

5. The dynamometer diagram feature extraction method based on a fully convolutional neural network according to claim 4, characterized in that the pre-training comprises: pre-training the fully convolutional neural network with the dynamometer diagram image data set, restoring the dynamometer diagram features output by the model's convolution operations to the input dimensions through deconvolution operations, and computing, after each training round, the feature similarity between the features fed into the convolution layers and the features restored by the deconvolution operations; the feature similarity is measured by the Jaccard similarity coefficient; for two feature matrices A and B, the Jaccard similarity coefficient of each pair of vectors is computed and averaged, the calculation formula being J(A,B) = (J(A1,B1) + J(A2,B2) + ... + J(An,Bn)) / n; for two vectors An and Bn regarded as two sets, the Jaccard similarity coefficient is expressed as J(An,Bn) = |S| / |U|, where S is the intersection of the sets An and Bn, and U is their union; if the feature similarity exceeds 90%, the convolution stage of the fully convolutional neural network model is taken; if the feature similarity is below 90%, hyperparameter optimization is performed.

6. The dynamometer diagram feature extraction method based on a fully convolutional neural network according to claim 5, characterized in that the hyperparameter optimization comprises: performing hyperparameter optimization on the fully convolutional neural network model until the feature similarity between the dynamometer diagram input to the convolution layers and the one output by the deconvolution layers exceeds 90%, yielding the optimized fully convolutional neural network model; grid search is used as the hyperparameter optimization method for the fully convolutional neural network model, and the value range of each hyperparameter to be optimized is defined in advance.

7. The dynamometer diagram feature extraction method based on a fully convolutional neural network according to claim 6, characterized in that the dynamometer diagram feature extractor comprises: removing the deconvolution layers of the fully convolutional neural network model to obtain a dynamometer diagram feature extractor based on the convolution stage of the fully convolutional neural network model.

8. A system employing the dynamometer diagram feature extraction method based on a fully convolutional neural network according to any one of claims 1 to 7, characterized by comprising: a dynamometer diagram input module, a pre-training module, a hyperparameter optimization module, and a feature extraction module; the dynamometer diagram input module is used to input the dynamometer diagram data from which features are to be extracted; the pre-training module uses the dynamometer diagram image data set to pre-train the fully convolutional neural network, restores the dynamometer diagram features output by the model's convolution operations to the input dimensions through deconvolution operations, and computes the similarity between the features fed into the convolution layers and the features restored by the deconvolution operations, using the Jaccard similarity coefficient as the measure of feature similarity; the hyperparameter optimization module performs hyperparameter optimization on the trained fully convolutional neural network model, searching for the best hyperparameter combination by grid search until the feature similarity between the dynamometer diagram input to the convolution layers and the one output by the deconvolution layers exceeds 90%, yielding the optimized fully convolutional neural network model; the feature extraction module uses the dynamometer diagram feature extractor to extract features from the dynamometer diagram data.

9. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.

10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202310747344.8A 2023-06-25 2023-06-25 Indicator diagram feature extraction method and system based on full convolution neural network Pending CN117036771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310747344.8A CN117036771A (en) 2023-06-25 2023-06-25 Indicator diagram feature extraction method and system based on full convolution neural network


Publications (1)

Publication Number Publication Date
CN117036771A true CN117036771A (en) 2023-11-10

Family

ID=88643696


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170113251A (en) * 2016-03-24 2017-10-12 재단법인 아산사회복지재단 Method and device for automatic inner and outer vessel wall segmentation in intravascular ultrasound images using deep learning
CN108428227A (en) * 2018-02-27 2018-08-21 浙江科技学院 No-reference image quality assessment method based on fully convolutional neural network
US20190311202A1 (en) * 2018-04-10 2019-10-10 Adobe Inc. Video object segmentation by reference-guided mask propagation
CN112668494A (en) * 2020-12-31 2021-04-16 西安电子科技大学 Small sample change detection method based on multi-scale feature extraction
CN113095414A (en) * 2021-04-15 2021-07-09 中国石油大学(华东) Indicator diagram identification method based on convolutional neural network and support vector machine
CN113192062A (en) * 2021-05-25 2021-07-30 湖北工业大学 Arterial plaque ultrasonic image self-supervision segmentation method based on image restoration
CN113780652A (en) * 2021-09-07 2021-12-10 中国石油化工股份有限公司 Oil well indicator diagram fault diagnosis and prediction method and device
US20220138573A1 (en) * 2020-11-05 2022-05-05 Leica Microsystems Cms Gmbh Methods and systems for training convolutional neural networks
CN114966860A (en) * 2022-05-13 2022-08-30 重庆科技学院 Seismic data denoising method based on convolutional neural network
WO2022178946A1 (en) * 2021-02-25 2022-09-01 平安科技(深圳)有限公司 Melanoma image recognition method and apparatus, computer device, and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PENGYU WANG et al.: "3D shape Segmentation via Shape Fully Convolutional Networks", COMPUTERS & GRAPHICS, 26 May 2018 (2018-05-26) *
冯其红;李玉润;王森;任佳伟;周代余;范坤;: "基于深度卷积生成对抗神经网络预测气窜方向", 中国石油大学学报(自然科学版), no. 04, 30 July 2020 (2020-07-30) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination