
CN117036665B - Knob switch state identification method based on twin neural network - Google Patents

Knob switch state identification method based on twin neural network

Info

Publication number
CN117036665B
CN117036665B
Authority
CN
China
Prior art keywords
image
knob
knob switch
feature
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311137629.6A
Other languages
Chinese (zh)
Other versions
CN117036665A (en)
Inventor
陈为祥
徐贵力
刘若鹏
程月华
董文德
马栎敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202311137629.6A
Publication of CN117036665A
Application granted
Publication of CN117036665B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

This application discloses a knob switch state identification method based on a twin (Siamese) neural network, relating to the field of electric power technology. The method first performs image preprocessing on an acquired original preset-position image to extract a corrected, positive-angle image of the knob switch whose state is to be identified, reducing the influence of complex backgrounds, lighting, and varying shooting angles. A similarity calculation model based on a twin neural network then learns and represents the state features of the knob switch and the differences between its gear states, and the state is identified by determining the image similarity between the image to be identified and the standard image of each knob switch. The method offers good recognition accuracy and robustness, is well suited to automation control, smart substations, and related fields, and helps improve the efficiency and safety of equipment monitoring and management.

Description

Knob switch state identification method based on twin neural network
Technical Field
The application relates to the technical field of electric power, in particular to a knob switch state identification method based on a twin neural network.
Background
The transformer substation is a connection point of power grid lines in the power system and plays a key role in keeping the power system running stably and in supporting normal production and daily life, so ensuring its normal operation is vitally important. The knob switch, an indispensable device in a substation, is widely used in all kinds of equipment and machines and realizes the distribution and scheduling of electric energy by switching circuits on and off. The knob switch therefore plays an important role in substation operation and maintenance, and determining its state is critical to guaranteeing the substation's normal operation.
Traditionally, the various devices in a substation are checked by manual inspection: designated personnel inspect and record the states of knob switches and other devices in an assigned area every day. However, manual inspection has a low degree of automation and low efficiency, is affected by human factors, and its accuracy is difficult to guarantee.
With the development of visual recognition technology, some methods have introduced image processing techniques to recognize the state of a knob switch automatically. However, distinguishing criteria must be set manually in advance for the unique characteristics of each knob switch, so as the number and variety of knob switches keep growing, it becomes difficult to meet the real-time requirements of large data streams; traditional image processing techniques also suffer from low recognition accuracy, sensitivity to environmental changes, and similar problems.
Deep learning-based methods have therefore gradually been introduced into the knob switch state recognition problem, and two main approaches currently exist. One uses classification networks such as VGG16, MobileNetV3, or ResNet to identify the state from images containing only the knob switch. The other uses object detection models such as the YOLO or R-CNN series to find the knob switch and identify its state directly in an image containing a large amount of background. However, the environmental factors of a substation are complex, so both approaches give unstable recognition results, lack anti-interference capability, and carry a considerable probability of false detection. For example, when the knob switch occupies a very small proportion of an image containing background, an object detection model may fail to detect it at all, so its state cannot be identified; and a classification network easily misclassifies states with small differences (for example, when the gear rotation angle of the knob switch is small, or when the gear is rotated 180 degrees).
Disclosure of Invention
In view of the above problems and technical requirements, this application proposes a knob switch state identification method based on a twin neural network. The technical solution of this application is as follows:
A knob switch state identification method based on a twin neural network comprises the following steps:
acquiring an original preset-position image, the original preset-position image comprising a knob image of the area where the knob switch in the state to be identified is located, together with a background image;
performing image preprocessing on the original preset-position image and extracting the corrected knob image from it as the image to be identified, the image to be identified being a positive-angle image of the knob switch in the state to be identified after image correction;
traversing all knob switch standard images in the knob switch standard library in sequence, and feeding each traversed knob switch standard image together with the image to be identified into a pre-trained similarity calculation model to obtain the image similarity between that standard image and the image to be identified; the knob switch standard library contains standard images of every type of knob switch in every gear state, each standard image being a positive-angle image of the area where the knob switch is located, captured from a shooting angle directly facing the switch; the similarity calculation model is constructed and trained in advance on the basis of a twin neural network;
obtaining the gear state of the knob switch in the state to be identified according to the image similarities between the image to be identified and the knob switch standard images in the different gear states.
A further technical solution is that the knob switch state identification method further comprises:
shooting original sample images of each type of knob switch in each gear state from a number of different shooting angles, extracting the image of the area where the knob switch is located from each original sample image, labeling it with the corresponding gear state as a training sample, and thereby constructing a training data set;
constructing the network framework of the similarity calculation model, comprising a twin neural network, a fully connected layer, and a Softmax layer, the twin neural network comprising two feature extraction modules that are connected together and share network parameters, each feature extraction module taking ResNet50 as its basic network structure and introducing a CBAM module to learn an attention mechanism;
performing model training on this network framework with the training data set to obtain the trained similarity calculation model, in which the two feature extraction modules each independently receive one input image and output a feature map; the twin neural network concatenates the feature maps output by the two feature extraction modules into a final feature representation, which passes through the fully connected layer and the Softmax layer in turn before a Euclidean distance is calculated to obtain the image similarity between the two images.
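As an illustrative sketch only (not the patented implementation), the shared-weight twin branches, feature concatenation, and FC + Softmax head described above might look as follows in PyTorch. A small CNN stands in for the ResNet50 + CBAM backbone, and reading one Softmax output as the similarity score is an assumption, since the text leaves the final distance step underspecified.

```python
import torch
import torch.nn as nn

class TwinNet(nn.Module):
    """Minimal twin (Siamese) similarity sketch: both branches reuse one
    backbone's weights; their features are concatenated and scored."""
    def __init__(self, embed_dim=64):
        super().__init__()
        # stand-in backbone (the text prescribes ResNet50 + CBAM here)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # fully connected head over the concatenated representation
        self.fc = nn.Linear(2 * embed_dim, 2)

    def forward(self, img_a, img_b):
        feat_a = self.backbone(img_a)   # branch 1
        feat_b = self.backbone(img_b)   # branch 2 (same module: shared weights)
        joint = torch.cat([feat_a, feat_b], dim=1)  # final representation
        # Softmax over {dissimilar, similar}; index 1 read as the similarity
        return torch.softmax(self.fc(joint), dim=1)[:, 1]

model = TwinNet()
sim = model(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```

Because the two branches are the same `nn.Module`, any gradient update affects both identically, which is exactly the parameter-sharing property the text relies on.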
A further technical solution is that obtaining the gear state of the knob switch in the state to be identified comprises:
after the image similarity between each knob switch standard image and the image to be identified has been obtained, calculating, for each gear state, the average of the image similarities between the image to be identified and the standard images sharing that gear state, and taking this average as the gear similarity of that gear state;
taking the gear state with the highest gear similarity as the gear state of the knob switch in the state to be identified.
A further technical solution is that extracting the corrected knob image from the original preset-position image as the image to be identified comprises:
determining the preset-position image template corresponding to the preset shooting position of the original preset-position image, different preset shooting positions being provided with different preset-position image templates, the template of each preset shooting position indicating the area where the knob switch is located in images shot at that position;
performing image preprocessing on the original preset-position image with the determined preset-position image template, and extracting the image to be identified.
A further technical solution is that performing image preprocessing on the original preset-position image with the determined preset-position image template comprises:
performing feature extraction on the preset-position image template and the original preset-position image respectively with a convolutional neural network to obtain their feature maps;
performing feature matching on the two feature maps to align the original preset-position image with the preset-position image template, and, using the area where the knob switch is located in the template, extracting from the original preset-position image the knob image of the area where the knob switch in the state to be identified is located;
performing image correction on the extracted knob image and extracting the image to be identified.
A further technical solution is that extracting the knob image of the area where the knob switch in the state to be identified is located in the original preset-position image comprises:
detecting the image feature points of the preset-position image template on its feature map, and detecting the image feature points of the original preset-position image on its feature map;
filtering the image feature points of the preset-position image template and those of the original preset-position image with a random sample consensus (RANSAC) algorithm, and screening out the four best-matched pairs of image feature points, each pair comprising one image feature point in the preset-position image template and the matching image feature point in the original preset-position image;
solving for the transformation matrix $M$ from the coordinates of the four pairs of image feature points according to $\begin{bmatrix} x'_i \\ y'_i \\ 1 \end{bmatrix} \sim M \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}, \; i = 1, 2, 3, 4$, where $(x_i, y_i)$ are the coordinates of an image feature point in the preset-position image template and $(x'_i, y'_i)$ are the coordinates of the matching image feature point in the original preset-position image;
using the transformation matrix $M$ to transform the position coordinates of the area where the knob switch is located in the preset-position image template, thereby obtaining the position coordinates of the area where the knob switch in the state to be identified is located in the original preset-position image, and extracting the knob image.
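In practice this step is typically done with OpenCV (`cv2.findHomography` combines the RANSAC filtering and the solve), but the four-correspondence solve itself can be sketched as a direct linear transform in NumPy. The function names below are illustrative:

```python
import numpy as np

def solve_transform(src_pts, dst_pts):
    """Solve the 3x3 matrix M mapping template points (src) to
    original-image points (dst) from four point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the solution is the null vector of the 8x9 system (last row of V^T)
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    M = vt[-1].reshape(3, 3)
    return M / M[2, 2]          # normalize so that M[2,2] == 1

def map_point(M, pt):
    """Apply M to a 2-D point via homogeneous coordinates."""
    u, v, w = M @ np.array([pt[0], pt[1], 1.0])
    return u / w, v / w
```

With `M` solved, the template's knob-area corner coordinates are pushed through `map_point` to locate the knob in the original preset-position image.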
A further technical solution is that performing image correction on the extracted knob image comprises:
determining the coordinates of the four vertices of the knob image in the original preset-position image as $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$, and $(x_4, y_4)$, corresponding to the top-left, top-right, bottom-left, and bottom-right corners;
calculating from the coordinates of the four vertices the Euclidean distance $w_1$ between vertices $(x_1, y_1)$ and $(x_2, y_2)$, the Euclidean distance $w_2$ between vertices $(x_3, y_3)$ and $(x_4, y_4)$, the Euclidean distance $h_1$ between vertices $(x_1, y_1)$ and $(x_3, y_3)$, and the Euclidean distance $h_2$ between vertices $(x_2, y_2)$ and $(x_4, y_4)$;
determining the coordinates of the four vertices of the transformed image as $(0, 0)$, $(W, 0)$, $(0, H)$, and $(W, H)$, where $W$ is the maximum of $w_1$ and $w_2$ and $H$ is the maximum of $h_1$ and $h_2$;
solving for the transformation matrix $M'$ from the correspondence between the four vertices of the knob image and the four vertices of the transformed image, and using $M'$ to transform the knob image in the original preset-position image to obtain the image to be identified.
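A small sketch of the vertex geometry above, with illustrative names; in practice `cv2.getPerspectiveTransform` plus `cv2.warpPerspective` would then perform the actual correction:

```python
import numpy as np

def corrected_size(tl, tr, bl, br):
    """Target width/height for the corrected image: the larger of the
    two opposite edge lengths in each direction, as described above."""
    dist = lambda p, q: float(np.hypot(p[0] - q[0], p[1] - q[1]))
    W = max(dist(tl, tr), dist(bl, br))  # top vs. bottom edge
    H = max(dist(tl, bl), dist(tr, br))  # left vs. right edge
    return W, H

# the corrected image's vertices are then (0,0), (W,0), (0,H), (W,H)
W, H = corrected_size((10, 20), (110, 25), (12, 140), (115, 150))
```

Taking the maximum of the two opposing edge lengths avoids shrinking the longer edge, so no detail of the knob face is lost during warping.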
A further technical solution is that using the transformation matrix $M'$ to transform the knob image in the original preset-position image into the image to be identified comprises:
using the transformation matrix $M'$ to transform the knob image in the original preset-position image into an output image, the output image being a positive-angle image of the knob switch in the state to be identified;
calculating the gray-level histogram of the output image and the cumulative distribution function of each gray level in it, where for any gray level $k$ the cumulative distribution function is $\mathrm{cdf}(k) = \sum_{j=k_{\min}}^{k} n_j$, $n_j$ being the number of pixels with gray level $j$, and $k_{\min}$ and $k_{\max}$ being the minimum and maximum gray levels;
converting, according to $h(k) = \mathrm{round}\!\left(\dfrac{\mathrm{cdf}(k) - \mathrm{cdf}(k_{\min})}{N - \mathrm{cdf}(k_{\min})} \times (k_{\max} - k_{\min})\right) + k_{\min}$, every pixel of gray level $k$ in the output image to gray level $h(k)$, where $N$ is the total number of pixels in the output image and $\mathrm{round}(\cdot)$ denotes rounding.
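A NumPy sketch of this equalization step for an 8-bit grayscale image; here the output is stretched over the full 0-255 range, a common simplification of the formula above:

```python
import numpy as np

def equalize(img):
    """Histogram-equalize an 8-bit grayscale image via its CDF."""
    hist = np.bincount(img.ravel(), minlength=256)  # gray-level histogram
    cdf = hist.cumsum()                             # cumulative distribution
    cdf_min = cdf[cdf > 0][0]   # CDF at the lowest occurring gray level
    n = img.size                # total number of pixels
    # map each gray level through the normalized CDF, rounding to integers
    lut = np.clip(np.round((cdf - cdf_min) / (n - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[img]
```

The lookup-table form evaluates the mapping once per gray level rather than once per pixel, which is why equalization is cheap even for large images.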
A further technical solution is that detecting the image feature points of the preset-position image template and detecting the image feature points of the original preset-position image comprises, for an input image that is either the preset-position image template or the original preset-position image:
extracting each key pixel point and its descriptor from the feature map of the input image, where each key pixel point attains the maximum spatial feature value within a preset local neighborhood and the maximum channel feature value along the channel direction, and the descriptor of each key pixel point is the channel-direction vector at its position;
restoring, by bilinear interpolation, each key pixel point to the image size of the input image according to the extracted key pixel points and their descriptors, thereby extracting the image feature points of the input image.
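One possible NumPy reading of this key-point rule, as a sketch; the neighborhood radius and tie handling are assumptions not fixed by the text:

```python
import numpy as np

def key_points(fmap, r=1):
    """fmap: (C, H, W) feature map. A pixel is kept as a key point when
    its channel-direction maximum is also the maximum within the
    (2r+1) x (2r+1) spatial neighborhood; its descriptor is the
    channel-direction vector at that position."""
    C, H, W = fmap.shape
    chan_max = fmap.max(axis=0)       # channel-direction maximum per pixel
    pts, descs = [], []
    for y in range(r, H - r):
        for x in range(r, W - r):
            patch = chan_max[y - r:y + r + 1, x - r:x + r + 1]
            if chan_max[y, x] == patch.max():   # spatial maximum too
                pts.append((x, y))
                descs.append(fmap[:, y, x])     # descriptor vector
    return pts, descs
```

Key points found on the 1/4-size feature map are then mapped back to the input image's resolution (by bilinear interpolation in the text).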
A further technical solution is that performing feature extraction on the preset-position image template and the original preset-position image respectively with a convolutional neural network to obtain their feature maps comprises, for an input image that is either the preset-position image template or the original preset-position image:
downsampling the input image twice, each time by a combination of convolution and max pooling, to obtain a deep feature map whose size is 1/4 of the size of the input image;
applying convolution, average pooling, and dilated (atrous) convolution to the deep feature map, and fusing the results to extract the feature map of the input image.
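A PyTorch sketch of this extractor; the channel widths, kernel sizes, and summation-based fusion are assumptions, since the text only fixes the 1/4 downsampling and the conv / average-pool / dilated-conv combination:

```python
import torch
import torch.nn as nn

class TemplateFeatureCNN(nn.Module):
    """Two conv + max-pool stages (1/4 spatial size), then three
    parallel branches fused by summation into the final feature map."""
    def __init__(self, in_ch=3, ch=32):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)          # plain conv
        self.pool = nn.AvgPool2d(3, stride=1, padding=1)     # average pool
        self.dilated = nn.Conv2d(ch, ch, 3, padding=2, dilation=2)

    def forward(self, x):
        deep = self.down(x)   # deep feature map, 1/4 the input size
        return self.conv(deep) + self.pool(deep) + self.dilated(deep)

fmap = TemplateFeatureCNN()(torch.randn(1, 3, 64, 64))
```

The dilated branch widens the receptive field without further downsampling, which keeps the feature map dense enough for the point-matching step that follows.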
The beneficial technical effects of this application are:
The method uses a similarity calculation model based on a twin neural network to learn and represent the state features of the knob switch and the differences between gear states. The image preprocessing of the original preset-position image effectively reduces the influence of complex backgrounds, lighting, and varying shooting angles; after the image to be identified is extracted, state recognition is achieved by determining the image similarity between the knob switch standard images and the image to be identified. The method has good recognition accuracy and robustness, is well suited to automation control, smart substations, and related fields, and helps improve the efficiency and safety of equipment monitoring and management.
When the twin neural network of the similarity calculation model is constructed, ResNet50 is used as the basic network structure and a CBAM attention mechanism is added, so that the deep convolutional neural network can concentrate on extracting knob features, giving a better feature extraction effect.
Drawings
FIG. 1 is a flowchart of the knob switch state identification method according to one embodiment of the present application.
FIG. 2 is a flowchart of the method for training the similarity calculation model in one embodiment of the present application.
FIG. 3 is a flowchart of the method for extracting the image to be identified by preprocessing the original preset-position image in an embodiment of the present application.
FIG. 4 is a transformation diagram of the output image obtained by transforming the knob image in the original preset-position image in one embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application further with reference to the accompanying drawings.
The application discloses a knob switch state identification method based on a twin neural network. Referring to the flowchart shown in FIG. 1, the method comprises the following steps:
Step 1: acquire an original preset-position image.
In the application scenario of a transformer substation, multiple cameras are fixed at different positions in the substation, and each camera can rotate to several different target angles to capture images within its field of view. Each combination of camera and target angle is one preset shooting position, and an image acquired by a camera at a preset shooting position is an original preset-position image in this application. Since the fixed position of each camera and the target angles it can rotate to are predetermined, all preset shooting positions in the substation can be enumerated; every original preset-position image acquired in this application is taken at one of these preset shooting positions, and it can be determined exactly which one.
Because the substation environment is complex, the original preset-position image shot at a preset shooting position usually contains other substation equipment besides the knob switch, so each original preset-position image comprises, in addition to the knob image of the area where the knob switch in the state to be identified is located, a background image.
Step 2: perform image preprocessing on the original preset-position image, and extract the corrected knob image from it as the image to be identified.
Because a substation contains many types and large numbers of knob switches, it is usually impossible to assign each knob switch a camera that faces it directly. The acquired original preset-position image is therefore generally not a head-on shot of the knob switch in the state to be identified but an image taken at some non-uniform tilt angle, and it is also affected by other environmental factors.
Therefore, after the original preset-position image is acquired, image preprocessing is first performed to compensate for the interference caused by the shooting angle and environment; the extracted image to be identified is a positive-angle image of the knob switch in the state to be identified after image correction.
Step 3: traverse all knob switch standard images in the knob switch standard library in sequence, and feed each traversed standard image together with the image to be identified into the pre-trained similarity calculation model to obtain the image similarity between that standard image and the image to be identified.
The knob switch standard library is constructed in advance. It contains standard images of every type of knob switch in every gear state, each standard image being a positive-angle image of the area where the knob switch is located, captured from a shooting angle directly facing the switch. Each knob switch has several gear states according to its gear rotation angle, each gear state covering one range of rotation angles; the gear states are divided in advance according to the actual situation, for example into ranges of 30 degrees each starting from 0 degrees, and can be defined and divided as actually required. When the standard library is constructed, each type of knob switch in the substation is adjusted to each of its gear states in turn; in each gear state the switch is photographed with a camera to obtain one standard image, then the gear state is switched and the switch is photographed again. After the standard images of one switch in all gear states have been obtained, the same is done for the other types of knob switch, yielding standard images of every type of knob switch in the substation in every gear state.
This step also requires the similarity calculation model constructed and trained in advance on the basis of a twin neural network; its training method is described later.
Step 4: obtain the gear state of the knob switch in the state to be identified according to the image similarities between the image to be identified and the knob switch standard images in the different gear states.
In one embodiment, the gear state of the knob switch standard image with the highest image similarity to the image to be identified is taken as the gear state of the knob switch in the state to be identified.
To improve recognition accuracy, however, in another embodiment, after the image similarity between each knob switch standard image and the image to be identified has been obtained, the average of the image similarities over the standard images sharing a gear state is calculated as the gear similarity of that gear state. After the gear similarity of every gear state has been obtained, the gear state with the highest gear similarity is taken as the gear state of the knob switch in the state to be identified.
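This averaging decision rule is a few lines of standard-library Python; the names below are illustrative:

```python
from collections import defaultdict
from statistics import mean

def decide_gear_state(scores):
    """scores: (gear_state, image_similarity) pairs, one per standard
    image compared with the image to be identified. Returns the gear
    state with the highest average (gear) similarity."""
    by_state = defaultdict(list)
    for state, sim in scores:
        by_state[state].append(sim)
    gear_sim = {s: mean(v) for s, v in by_state.items()}  # gear similarity
    return max(gear_sim, key=gear_sim.get)

state = decide_gear_state([("0°", 0.91), ("0°", 0.88),
                           ("30°", 0.35), ("30°", 0.42)])
```

Averaging over all standard images of a gear state, rather than trusting the single best match, damps the effect of any one noisy comparison.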
In the application of step 3, a similarity calculation model is needed, so the method further includes a method for pre-training the similarity calculation model, including the following steps, please refer to the flowchart shown in fig. 2:
1. Shoot original sample images of each type of knob switch in each gear state from a number of different shooting angles, extract the image of the area where the knob switch is located from each original sample image, label it with the corresponding gear state as a training sample, and construct the training data set.
The method of constructing the training data set is similar to that of constructing the knob switch standard library; in practice the constructed training data set contains all the standard images in the knob switch standard library, so the two share one process and the standard library can be built while the training data set is built.
The difference from building the standard library is that, for each type of knob switch in the substation, with the switch adjusted to each gear state, the switch is photographed not only head-on but also from a number of different shooting angles with the camera.
A three-dimensional virtual coordinate system is established at the knob switch: its xOy plane is parallel to the horizontal plane, the z-axis points vertically upward, the y-axis points toward the front of the knob switch, and the x-axis is perpendicular to the y-axis. Shooting from multiple angles covers different horizontal angles along the x-axis direction relative to the knob switch, different vertical angles along the z-axis direction, and different front-to-back distances along the y-axis direction. In one embodiment, with each type of knob switch in each gear state, a dome camera is stepped along the x-axis direction through angles of 30° to 150° relative to the positive x-axis in steps of 30°, photographing the switch at each step; the dome camera is likewise stepped along the z-axis direction through angles of 30° to 150° relative to the positive z-axis in steps of 30°; and the dome camera is stepped along the y-axis direction through front-to-back distances of 0.5 m to 2 m from the knob switch in steps of 0.5 m.
After each original sample image has been shot, the image of the area where the knob switch is located is extracted from it, the background is removed, and the corresponding gear state is labeled.
2. The network framework for constructing the similarity calculation model comprises a twin neural network, a full-connection layer and a Softmax layer, wherein the twin neural network comprises two feature extraction modules which are connected together and share network parameters, and the two feature extraction modules share the network parameters so as to increase network efficiency and reduce the number of parameters. Each feature extraction module takes ResNet50 as an underlying network structure and introduces the learning of the attention mechanism by the CBAM module.
3. Model training is performed on the network framework of the similarity calculation model using the training data set, obtaining the trained similarity calculation model.
The two feature extraction modules in the similarity calculation model each independently receive one input image and output a feature map. The twin neural network concatenates the feature maps output by the two feature extraction modules into a final feature representation, which passes through the fully connected layer and the Softmax layer in turn before the Euclidean distance is computed, yielding the image similarity between the two images.
The similarity calculation model is trained by cross-matching training samples of different kinds of knob switches, in different gear states and at different shooting angles, from the training data set, so that the similarity calculation model acquires stronger feature extraction capability and robustness.
After the similarity calculation model in step 3 receives the knob switch standard image and the image to be identified, the two images are first normalized to the same image shape and image size. They are then fed into the feature extraction modules for forward propagation, each passing through the shared and independent layers of the twin neural network to obtain its feature map; the feature vectors at this stage are high-level representations describing the features and structure of the images. Finally, the feature maps of the two images are concatenated into the final feature representation of the twin neural network, which passes through the fully connected layer, the Softmax layer and subsequent processing modules before the Euclidean distance is computed, yielding the image similarity between the knob switch standard image and the image to be identified.
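The final distance computation can be sketched with the standard library alone. This shows only the distance-to-similarity step on two already-extracted embedding vectors; the learned feature extraction, fully connected and Softmax layers are omitted, and the 1/(1 + d) similarity mapping is an illustrative assumption rather than the patent's formulation.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two embedding vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def image_similarity(feat_a, feat_b):
    """Map the distance to a similarity score in (0, 1]: identical embeddings
    give 1.0, and similarity falls off as the distance grows. The 1/(1 + d)
    mapping is an assumption for illustration only."""
    return 1.0 / (1.0 + euclidean_distance(feat_a, feat_b))
```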
In one embodiment, when step 2 performs image preprocessing on the original preset position image, the preset position image template corresponding to the preset shooting position at which the original preset position image was taken is determined first, and the determined preset position image template is then used to perform image preprocessing on the original preset position image and extract the image to be identified. Different preset shooting positions have different preset position image templates; the preset position image template corresponding to each preset shooting position indicates the region where a knob switch is located in images taken at that preset shooting position, and is determined in advance.
Performing image preprocessing on the original preset position image using the determined preset position image template includes the following steps (see the flowchart shown in fig. 3):
1. Feature extraction is performed on the preset position image template and the original preset position image respectively using a convolutional neural network, obtaining their respective feature maps.
In order to capture the key information in an image, for either input image among the preset position image template and the original preset position image, extracting the feature map of the input image includes: (1) Two rounds of combined convolution and max-pooling downsampling are applied to the input image to obtain its deep feature map, whose image size is 1/4 of the image size of the input image; this prevents excessive loss of spatial features during convolutional feature extraction. (2) Convolution, average pooling and dilated (atrous) convolution are then applied to the deep feature map to fuse and extract sparse features and obtain a larger receptive field, thereby extracting the feature map of the input image and capturing more global image information and more discriminative features.
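The 1/4 size relation in step (1) can be illustrated with plain max pooling on a nested-list "image": two 2×2 stride-2 pooling rounds shrink each side to a quarter. The convolutions are omitted here, so this is only a stand-in for the downsampling behaviour, not the patent's network.

```python
def max_pool2x2(img):
    """2x2 max pooling with stride 2 on a 2-D list of numbers.
    Assumes even side lengths for brevity."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[max(img[2 * i][2 * j], img[2 * i][2 * j + 1],
                 img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1])
             for j in range(w)] for i in range(h)]

img = [[r * 8 + c for c in range(8)] for r in range(8)]  # 8x8 toy "image"
deep = max_pool2x2(max_pool2x2(img))                     # two rounds -> 2x2
```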
2. Feature matching is performed on the two feature maps to align the original preset position image with the preset position image template, and the knob image of the region where the knob switch in the state to be identified is located is extracted from the original preset position image in combination with the region where the knob switch is located in the preset position image template. This comprises the following steps:
(1) Image feature points of the preset position image template are detected based on its feature map, and image feature points of the original preset position image are detected based on its feature map.
In one embodiment, for either input image among the preset position image template and the original preset position image, detecting the image feature points of the input image based on its feature map includes:
Each key pixel point and the descriptor of each key pixel point are extracted from the feature map of the input image; each key pixel point has both the maximum spatial feature value within a predetermined local neighborhood and the maximum channel feature value along the channel direction. The descriptor of each key pixel point is the channel-direction vector at the location of that key pixel point; these descriptors carry richer and more comprehensive information and, compared with traditional feature point extraction algorithms, better resist interference such as scale, rotation, illumination, viewing angle and non-rigid transformation.
The extraction of the key pixel points can be expressed as K = D ∩ C. Here D is the set of pixel points whose spatial feature value is the maximum within a predetermined local neighborhood, that is, D = {(i, j, k) | F[i, j, k] = max of F[i', j', k] over all (i', j') in the neighborhood N(i, j)}, where the neighborhood N(i, j) can be customized, such as a 3*3 neighborhood, and F[i, j, k] denotes the value of the output feature map F at location (i, j) on feature channel k. C is the set of pixel points whose channel feature value is the maximum along the channel direction, that is, C = {(i, j, k) | F[i, j, k] = max of F[i, j, k'] over all channel indices k'}, where F[i, j, :] denotes all channels of the output feature map at location (i, j). The symbol ":" indicates that the corresponding index takes its full (default) range.
Then, according to the extracted key pixel points and their descriptors, the key pixel points are restored to the image size of the input image through bilinear interpolation, and the image feature points of the input image are extracted.
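The joint spatial-maximum and channel-maximum selection rule can be sketched over a small nested-list feature map. The double test (channel maximum at the location, then spatial maximum on that channel within the neighborhood) follows the description above; the function name and data layout are illustrative, not from the patent.

```python
def key_pixels(fmap, radius=1):
    """Return (i, j, k) triples that are both the channel-wise maximum at
    their location and a spatial maximum on channel k within the
    (2*radius+1)^2 local neighborhood. fmap is an H x W x C nested list."""
    H, W, C = len(fmap), len(fmap[0]), len(fmap[0][0])
    out = []
    for i in range(H):
        for j in range(W):
            for k in range(C):
                v = fmap[i][j][k]
                if v != max(fmap[i][j]):      # channel-maximum test
                    continue
                neigh = [fmap[ii][jj][k]      # spatial neighborhood on channel k
                         for ii in range(max(0, i - radius), min(H, i + radius + 1))
                         for jj in range(max(0, j - radius), min(W, j + radius + 1))]
                if v == max(neigh):           # spatial-maximum test
                    out.append((i, j, k))
    return out
```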
(2) The image feature points of the preset position image template and the image feature points of the original preset position image are filtered with a random sample consensus (RANSAC) algorithm, and the four best-matching pairs of image feature points are selected; each pair comprises one image feature point in the preset position image template and its matching image feature point in the original preset position image.
(3) According to (u_n', v_n', 1)^T = H · (u_n, v_n, 1)^T, n = 1, 2, 3, 4, the transformation matrix H is solved from the coordinates of the four pairs of image feature points; the transformation matrix H is a 3*3 matrix.

Here (u_1, v_1), (u_2, v_2), (u_3, v_3) and (u_4, v_4) are the coordinates of the image feature points in the preset position image template, and (u_1', v_1'), (u_2', v_2'), (u_3', v_3') and (u_4', v_4') are the coordinates of their matching image feature points in the original preset position image.
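With the bottom-right entry of H normalized to 1, the four point correspondences give eight linear equations in the eight remaining entries. The sketch below solves that system with the standard library; it is an illustrative four-point solve, and in practice a library routine such as OpenCV's getPerspectiveTransform() would be used instead.

```python
def solve_homography(src, dst):
    """Solve the 3x3 homography H (h33 fixed to 1) mapping each (u, v) in src
    to the matching (u', v') in dst, via Gauss-Jordan elimination on the
    8x8 linear system from the four correspondences."""
    A = []
    for (u, v), (up, vp) in zip(src, dst):
        A.append([u, v, 1, 0, 0, 0, -u * up, -v * up, up])
        A.append([0, 0, 0, u, v, 1, -u * vp, -v * vp, vp])
    n = 8
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))  # partial pivoting
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
    h = [A[i][n] / A[i][i] for i in range(n)] + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_homography(H, pt):
    """Apply H to a 2-D point using homogeneous coordinates."""
    u, v = pt
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return ((H[0][0] * u + H[0][1] * v + H[0][2]) / w,
            (H[1][0] * u + H[1][1] * v + H[1][2]) / w)
```

The same apply_homography() step is what transforms the template's knob-region coordinates into the original image in step (4) below.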
(4) The transformation matrix H is used to transform the position coordinates of the region where the knob switch is located in the preset position image template, completing the image alignment of the original preset position image with the preset position image template.
After the original preset position image is aligned with the preset position image template, since the region where the knob switch is located in the template is known, the position coordinates of the region where the knob switch in the state to be identified is located in the original preset position image can be obtained correspondingly and the knob image extracted.
In a more common practice, the coordinates of the four vertices of the rectangular region where the knob switch is located in the preset position image template are known. The transformation matrix H is used to transform these four vertex coordinates, obtaining the coordinates of the four vertices of the region where the knob switch in the state to be identified is located in the original preset position image. The four vertex coordinates are then connected in sequence to delimit the region where the knob switch in the state to be identified is located, and the image within that region is cropped out to obtain the knob image.
3. Image correction is performed on the extracted knob image, and the image to be identified is extracted.
When the knob switch in the state to be identified is photographed from a directly facing angle, the boundary of the knob image forms a rectangular structure. Owing to the shooting angle, however, the boundary of the knob image extracted here is often not rectangular but an irregular quadrilateral. During image correction, the irregular boundary of the knob image must first be converted into a rectangle, so that the knob image is restored as far as possible to its appearance under positive-angle shooting and the influence of the shooting angle on identification is reduced. The procedure is as follows:
(1) The coordinates of the four vertices of the knob image in the original preset position image are determined as A1(x1, y1), A2(x2, y2), A3(x3, y3) and A4(x4, y4). As shown in fig. 4, the quadrilateral formed by the four vertices of the knob image is generally irregular. From the four vertex coordinates, the Euclidean distances between adjacent vertices are then calculated: the distance d12 between vertex A1 and vertex A2, the distance d23 between vertex A2 and vertex A3, the distance d34 between vertex A3 and vertex A4, and the distance d41 between vertex A4 and vertex A1.

(2) According to the Euclidean distances between adjacent vertices of the knob image, the coordinates of the four vertices of the transformed image are determined as (0, 0), (W, 0), (W, H) and (0, H), where W is the maximum of d12 and d34, and H is the maximum of d23 and d41.
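The target-rectangle computation can be sketched directly. Here the four vertices are assumed to be given in order around the quadrilateral, with d12 and d34 the two "horizontal" edges and d23 and d41 the two "vertical" edges; this edge pairing is an assumption of the sketch.

```python
import math

def target_rectangle(vertices):
    """From the four knob-image vertices [A1, A2, A3, A4], compute the
    corrected rectangle's width W, height H, and its four target vertices
    (0,0), (W,0), (W,H), (0,H)."""
    a1, a2, a3, a4 = vertices
    d12, d23 = math.dist(a1, a2), math.dist(a2, a3)
    d34, d41 = math.dist(a3, a4), math.dist(a4, a1)
    w, h = max(d12, d34), max(d23, d41)
    return w, h, [(0, 0), (w, 0), (w, h), (0, h)]
```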
(3) According to (x_n', y_n', 1)^T = H' · (x_n, y_n, 1)^T, n = 1, 2, 3, 4, the transformation matrix H' is solved, where (x_n, y_n) are the four vertices of the knob image and (x_n', y_n') are the corresponding four vertices of the transformed image; the transformation matrix H' is a 3*3 matrix, and the value range of each element of the transformation matrix H' is [1, 3].
(4) The transformation matrix H' is used to perform image transformation on the knob image in the original preset position image, obtaining the image to be identified. This step can be completed with the OpenCV library function warpPerspective().
In this step (4), the output image obtained by transforming the knob image in the original preset position image with the transformation matrix H' is a positive-angle image of the knob switch in the state to be identified; that is, the original knob image is restored as far as possible to its appearance under positive-angle shooting, effectively reducing the influence of the shooting angle on state identification. However, the complex environment of a transformer substation also often suffers from uneven illumination. To further reduce the influence of uneven illumination on state identification, in one embodiment the output image is not used directly as the image to be identified; instead, a remapping process is applied first, comprising the following steps:
(5) The gray level histogram of the pixels in the output image is calculated, together with the cumulative distribution function of each gray level in the gray level histogram. The cumulative distribution function c(r) of an arbitrary gray level r is equal to the number of pixels whose gray level lies in the range [r_min, r], where r_min is the minimum gray level and r_max is the maximum gray level.

(6) According to s = round((r_max - r_min) · c(r) / N), each pixel in the output image whose gray level is r is converted to gray level s, where N is the total number of pixels contained in the output image and round(·) denotes rounding to the nearest integer.
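The remapping above is a form of histogram equalization, and can be sketched on a flat list of gray levels. Counting the cumulative distribution from level 0 is equivalent to counting from r_min, since the histogram is empty below r_min; the function name and flat-list layout are illustrative.

```python
def equalize(gray, levels=256):
    """Remap each gray level r to round((r_max - r_min) * c(r) / N),
    where c(r) counts pixels with level in [r_min, r] and N = len(gray)."""
    n = len(gray)
    hist = [0] * levels
    for g in gray:
        hist[g] += 1
    present = [r for r in range(levels) if hist[r]]
    r_min, r_max = present[0], present[-1]
    cdf, total = [0] * levels, 0
    for r in range(levels):
        total += hist[r]
        cdf[r] = total                  # pixels with level <= r
    return [round((r_max - r_min) * cdf[g] / n) for g in gray]
```

Applied to a 2-D image, the same mapping would be computed once from the histogram and then applied pixel by pixel.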
In this way, the image-corrected positive-angle image of the knob switch in the state to be identified is extracted, yielding an image to be identified in which the influence of shooting angle and uneven illumination is reduced.
What has been described above is only a preferred embodiment of the present application, which is not limited to the above examples. It is to be understood that other modifications and variations which may be directly derived or contemplated by those skilled in the art without departing from the spirit and concepts of the present application are to be considered as being included within the scope of the present application.

Claims (9)

1. A knob switch state identification method based on a twin neural network, characterized in that the knob switch state identification method comprises: acquiring an original preset position image, the original preset position image containing a knob image of the region where a knob switch in a state to be identified is located and a background image; performing image preprocessing on the original preset position image, and extracting the image-corrected knob image in the original preset position image as an image to be identified, the image to be identified being a positive-angle image of the knob switch in the state to be identified after image correction; sequentially traversing each knob switch standard image in a knob switch standard library, and inputting each traversed knob switch standard image together with the image to be identified into a pre-trained similarity calculation model to obtain the image similarity between the knob switch standard image and the image to be identified; wherein the knob switch standard library contains knob switch standard images of each kind of knob switch in each gear state, each knob switch standard image being a positive-angle image of the region where the knob switch is located, taken from a shooting angle directly facing the knob switch; and the similarity calculation model is constructed and trained in advance based on a twin neural network; obtaining the gear state of the knob switch in the state to be identified according to the image similarities between the image to be identified and knob switch standard images of different gear states, comprising: after obtaining the image similarity between each knob switch standard image and the image to be identified, calculating the average of the image similarities between the knob switch standard images having the same gear state and the image to be identified as the gear similarity corresponding to that gear state; and taking the gear state with the highest corresponding gear similarity as the gear state of the knob switch in the state to be identified.

2. The knob switch state identification method according to claim 1, characterized in that the knob switch state identification method further comprises: photographing original sample images of each kind of knob switch in each gear state from a plurality of different shooting angles, extracting the image of the region where the knob switch is located from each original sample image and annotating the gear state of the corresponding knob switch as a training sample, to construct a training data set; constructing the network framework of the similarity calculation model, comprising a twin neural network, a fully connected layer and a Softmax layer, the twin neural network comprising two feature extraction modules that are connected together and share network parameters, each feature extraction module using ResNet50 as its base network structure and introducing a CBAM module to learn an attention mechanism; performing model training on the network framework of the similarity calculation model using the training data set to obtain the trained similarity calculation model, in which the two feature extraction modules each independently receive one input image and output a feature map, and the twin neural network concatenates the feature maps output by the two feature extraction modules into a final feature representation, which passes through the fully connected layer and the Softmax layer in turn before the Euclidean distance is computed to obtain the image similarity between the two images.

3. The knob switch state identification method according to claim 1, characterized in that extracting the image-corrected knob image in the original preset position image as the image to be identified comprises: determining the preset position image template corresponding to the preset shooting position at which the original preset position image was taken, different preset shooting positions having different preset position image templates, the preset position image template corresponding to each preset shooting position indicating the region where the knob switch is located in images taken at that preset shooting position; and performing image preprocessing on the original preset position image using the determined preset position image template, and extracting the image to be identified.

4. The knob switch state identification method according to claim 3, characterized in that performing image preprocessing on the original preset position image using the determined preset position image template comprises: performing feature extraction on the preset position image template and the original preset position image respectively using a convolutional neural network to obtain their respective feature maps; performing feature matching on the two feature maps to align the original preset position image with the preset position image template, and extracting, in combination with the region where the knob switch is located in the preset position image template, the knob image of the region where the knob switch in the state to be identified is located in the original preset position image; and performing image correction on the extracted knob image, and extracting the image to be identified.

5. The knob switch state identification method according to claim 4, characterized in that extracting the knob image of the region where the knob switch in the state to be identified is located in the original preset position image comprises: detecting image feature points of the preset position image template based on the feature map of the preset position image template, and detecting image feature points of the original preset position image based on the feature map of the original preset position image; filtering the image feature points of the preset position image template and the image feature points of the original preset position image using a random sample consensus algorithm, and selecting the four best-matching pairs of image feature points, each pair comprising one image feature point in the preset position image template and its matching image feature point in the original preset position image; solving for the transformation matrix H according to (u_n', v_n', 1)^T = H · (u_n, v_n, 1)^T, n = 1, 2, 3, 4, from the coordinates of the four pairs of image feature points, where (u_n, v_n) are the coordinates of the image feature points in the preset position image template and (u_n', v_n') are the coordinates of their matching image feature points in the original preset position image; and using the transformation matrix H to transform the position coordinates of the region where the knob switch is located in the preset position image template, obtaining the position coordinates of the region where the knob switch in the state to be identified is located in the original preset position image, and extracting the knob image.

6. The knob switch state identification method according to claim 4, characterized in that performing image correction on the extracted knob image comprises: determining the coordinates of the four vertices of the knob image in the original preset position image as A1(x1, y1), A2(x2, y2), A3(x3, y3) and A4(x4, y4); calculating, from the four vertex coordinates, the Euclidean distance d12 between vertex A1 and vertex A2, the Euclidean distance d23 between vertex A2 and vertex A3, the Euclidean distance d34 between vertex A3 and vertex A4, and the Euclidean distance d41 between vertex A4 and vertex A1; determining the coordinates of the four vertices of the transformed image as (0, 0), (W, 0), (W, H) and (0, H), where W is the maximum of d12 and d34, and H is the maximum of d23 and d41; and solving for the transformation matrix H' according to (x_n', y_n', 1)^T = H' · (x_n, y_n, 1)^T, n = 1, 2, 3, 4, where (x_n, y_n) are the four vertices of the knob image and (x_n', y_n') are the corresponding four vertices of the transformed image, and using the transformation matrix H' to perform image transformation on the knob image in the original preset position image to obtain the image to be identified.

7. The knob switch state identification method according to claim 6, characterized in that using the transformation matrix H' to perform image transformation on the knob image in the original preset position image to obtain the image to be identified comprises: using the transformation matrix H' to perform image transformation on the knob image in the original preset position image to obtain an output image, the output image being a positive-angle image of the knob switch in the state to be identified; calculating the gray level histogram of the pixels in the output image and the cumulative distribution function of each gray level in the gray level histogram, the cumulative distribution function c(r) of an arbitrary gray level r being equal to the number of pixels whose gray level lies in the range [r_min, r], where r_min is the minimum gray level and r_max is the maximum gray level; and converting each pixel in the output image whose gray level is r to gray level round((r_max - r_min) · c(r) / N), where N is the total number of pixels contained in the output image and round(·) denotes rounding to the nearest integer.

8. The knob switch state identification method according to claim 5, characterized in that detecting the image feature points of the preset position image template and detecting the image feature points of the original preset position image comprises, for either input image among the preset position image template and the original preset position image: extracting each key pixel point and the descriptor of each key pixel point in the feature map of the input image, each key pixel point having both the maximum spatial feature value within a predetermined local neighborhood and the maximum channel feature value along the channel direction, and the descriptor of each key pixel point being the channel-direction vector at the location of that key pixel point; and, according to the extracted key pixel points and their descriptors, restoring the key pixel points to the image size of the input image through bilinear interpolation, and extracting the image feature points of the input image.

9. The knob switch state identification method according to claim 4, characterized in that performing feature extraction on the preset position image template and the original preset position image respectively using a convolutional neural network to obtain their respective feature maps comprises, for either input image among the preset position image template and the original preset position image: performing two rounds of combined convolution and max-pooling downsampling on the input image to obtain a deep feature map of the input image, the image size of the deep feature map being 1/4 of the image size of the input image; and performing convolution, average pooling and dilated convolution operations on the deep feature map again, and extracting the feature map of the input image by fusion.
CN202311137629.6A 2023-09-04 2023-09-04 Knob switch state identification method based on twin neural network Active CN117036665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311137629.6A CN117036665B (en) 2023-09-04 2023-09-04 Knob switch state identification method based on twin neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311137629.6A CN117036665B (en) 2023-09-04 2023-09-04 Knob switch state identification method based on twin neural network

Publications (2)

Publication Number Publication Date
CN117036665A CN117036665A (en) 2023-11-10
CN117036665B true CN117036665B (en) 2024-03-08

Family

ID=88637374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311137629.6A Active CN117036665B (en) 2023-09-04 2023-09-04 Knob switch state identification method based on twin neural network

Country Status (1)

Country Link
CN (1) CN117036665B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118690250B (en) * 2024-08-22 2024-11-08 北京尚优力达科技有限公司 A method for knob switch state recognition based on multimodal model
CN120707906B (en) * 2025-08-22 2025-11-25 合肥中科类脑智能技术有限公司 State identification method for power transformation knob equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145927A (en) * 2017-06-16 2019-01-04 杭州海康威视数字技术股份有限公司 The target identification method and device of a kind of pair of strain image
CN112381104A (en) * 2020-11-16 2021-02-19 腾讯科技(深圳)有限公司 Image identification method and device, computer equipment and storage medium
CN114202731A (en) * 2022-02-15 2022-03-18 南京天创电子技术有限公司 Multi-state knob switch identification method
CN115861210A (en) * 2022-11-25 2023-03-28 国网重庆市电力公司潼南供电分公司 Transformer substation equipment abnormity detection method and system based on twin network
CN116363573A (en) * 2023-01-31 2023-06-30 智洋创新科技股份有限公司 Transformer substation equipment state anomaly identification method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI330808B (en) * 2007-01-23 2010-09-21 Pixart Imaging Inc Quasi-analog knob controlling method and apparatus using the same

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109145927A (en) * 2017-06-16 2019-01-04 杭州海康威视数字技术股份有限公司 The target identification method and device of a kind of pair of strain image
CN112381104A (en) * 2020-11-16 2021-02-19 腾讯科技(深圳)有限公司 Image identification method and device, computer equipment and storage medium
CN114202731A (en) * 2022-02-15 2022-03-18 南京天创电子技术有限公司 Multi-state knob switch identification method
CN115861210A (en) * 2022-11-25 2023-03-28 国网重庆市电力公司潼南供电分公司 Transformer substation equipment abnormity detection method and system based on twin network
CN116363573A (en) * 2023-01-31 2023-06-30 智洋创新科技股份有限公司 Transformer substation equipment state anomaly identification method and system

Also Published As

Publication number Publication date
CN117036665A (en) 2023-11-10

Similar Documents

Publication Publication Date Title
CN112380952B (en) Real-time detection and recognition method of infrared image of power equipment based on artificial intelligence
CN112199993B (en) Method for identifying transformer substation insulator infrared image detection model in any direction based on artificial intelligence
CN111986240A (en) Drowning person detection method and system based on visible light and thermal imaging data fusion
CN117036665B (en) Knob switch state identification method based on twin neural network
CN109308447A (en) Method for Automatically Extracting Equipment Operating Parameters and Operating Status in Electric Power Remote Monitoring
CN109190446A (en) Pedestrian&#39;s recognition methods again based on triple focused lost function
Le et al. Surface defect detection of industrial parts based on YOLOv5
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN113869122B (en) Reinforced management and control method for distribution network engineering
Xiong et al. Speal: Skeletal prior embedded attention learning for cross-source point cloud registration
CN113505808A (en) Detection and identification algorithm for power distribution facility switch based on deep learning
CN111402224A (en) Target identification method for power equipment
CN116052222A (en) Cattle face recognition method for naturally collecting cattle face image
CN114241194A (en) Instrument identification and reading method based on lightweight network
CN117036891A (en) Cross-modal feature fusion-based image recognition method and system
CN113670268B (en) Distance measurement method between UAV and power pole tower based on binocular vision
CN119027970A (en) A recognition method for two-dimensional drawings of substations
Peng et al. Automatic recognition of pointer meter reading based on Yolov4 and improved U-net algorithm
Sun et al. An infrared-optical image registration method for industrial blower monitoring based on contour-shape descriptors
CN111709429A (en) A method for identifying structural parameters of woven fabrics based on convolutional neural network
CN115409789A (en) Power transmission line engineering defect detection method based on image semantic segmentation
CN114862920B (en) Cross-camera pedestrian re-identification method and device based on multi-scale image restoration
Liao et al. Automatic meter reading based on bi-fusion MSP network and carry-out rechecking
CN117671367A (en) System and method for solving shielding problem in gangue identification process
Sun et al. Light-YOLOv3: License plate detection in multi-vehicle scenario

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant