CN119559701A - A method and system for identifying abnormal behavior in financial escort process - Google Patents


Info

Publication number
CN119559701A
CN119559701A (application CN202411782328.3A)
Authority
CN
China
Prior art keywords
target
abnormal behavior
data
feature
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411782328.3A
Other languages
Chinese (zh)
Inventor
刘勇
赵国辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Research Institute of Dalian University of Technology
Original Assignee
Ningbo Research Institute of Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Research Institute of Dalian University of Technology filed Critical Ningbo Research Institute of Dalian University of Technology
Priority to CN202411782328.3A
Publication of CN119559701A
Legal status: Pending

Classifications

    • G06V 40/20 Recognition of movements or behaviour, e.g. gesture recognition
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; context analysis; selection of dictionaries
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G06V 10/82 Pattern recognition or machine learning using neural networks
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 40/1359 Extracting features related to ridge properties; determining the fingerprint type, e.g. whorl or loop
    • G06V 40/171 Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/193 Preprocessing; feature extraction (eye characteristics, e.g. of the iris)
    • G06V 40/70 Multimodal biometrics, e.g. combining information from different biometric modalities
    • G06T 2207/10016 Video; image sequence
    • G06T 2207/30232 Surveillance
    • G06T 2207/30241 Trajectory
    • G06V 2201/07 Target detection
    • G06V 2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a method and system for identifying abnormal behavior in a financial escort process, and relates to the field of data processing. The method comprises: collecting personnel feature data and establishing a multimodal biometric database; using a convolutional neural network to match the features extracted from real-time personnel feature data with the features stored in the multimodal biometric database to obtain a matching result; when the matching result is a successful match, continuing to obtain the video stream of the monitored area in real time and preprocessing it; using a deep learning algorithm to perform target detection on the preprocessed video stream and identify the tracking target; using a target tracking algorithm to track the detected tracking target across consecutive frames and obtain its motion trajectory; using a pose estimation algorithm to extract the pose information of the target, and combining the motion trajectory and spatiotemporal features of the target to construct a multidimensional behavior feature vector; establishing a preset abnormal behavior rule base containing a set of known abnormal behaviors; and, based on the output of the multidimensional behavior feature vector in combination with the preset abnormal behavior rule base, analyzing the identified behavior in real time to determine whether the target is performing abnormal behavior. This solves the technical problem that the prior art, relying on a single personnel identification technology, cannot simultaneously identify the risks of the escort process and struggles to meet the requirement of discovering escort risks in time. By verifying the multimodal biometric characteristics of personnel and tracking the motion trajectory of the target, the real-time motion trajectory is compared against the preset trajectory and abnormal behavior is identified, thereby improving the efficiency of real-time monitoring and reducing escort risk.

Description

Abnormal behavior identification method and system in financial escort process
Technical Field
The application relates to the technical field of data processing, in particular to a method and a system for identifying abnormal behaviors in a financial escort process.
Background
With the rapid development of financial transactions, financial escort work is becoming increasingly important and complex. Traditional authentication modes such as passwords and certificates carry potential safety hazards: they are easily leaked and easily counterfeited, and can hardly meet the security requirements of modern financial escort.
Existing solutions adopt face recognition or iris recognition. Although such single-modality personnel recognition improves the convenience and accuracy of identity verification to a certain extent, the risks arising during the escort process itself still cannot be recognized once the personnel check is complete, and the financial institution cannot grasp escort dynamics in real time, leading to increased security risk, an unsmooth escort flow, and similar problems.
In summary, the prior art, being built on a single personnel identification technology, cannot simultaneously identify risks in the escort process and can hardly satisfy the requirement of discovering escort risks in time.
Disclosure of Invention
Based on the above technical problems, it is necessary to provide a method and a system for identifying abnormal behavior in a financial escort process. By verifying the multimodal biometric characteristics of personnel and tracking the motion trajectories of targets, the real-time motion trajectory is compared against the preset trajectory and abnormal behavior is identified, which improves the efficiency of real-time monitoring and reduces escort risk, thereby solving the technical problem that a single personnel identification technology cannot identify risks in the escort process simultaneously and can hardly discover escort risks in time.
In a first aspect, a method for identifying abnormal behavior in a financial escort process is provided. The method comprises: collecting personnel feature data, the feature data including fingerprint data, facial data and iris data; performing feature extraction on the personnel feature data and establishing a multimodal biometric database; acquiring real-time personnel feature data, and matching the features extracted from the real-time personnel feature data with the features stored in the multimodal biometric database by means of a convolutional neural network to obtain a matching result; when the matching result is a successful match, continuously obtaining the video stream of the monitored area in real time and preprocessing it; performing target detection on the preprocessed video stream with a deep learning algorithm and identifying tracking targets, the tracking targets including personnel and vehicles; tracking the detected tracking targets across consecutive frames with a target tracking algorithm to obtain the motion trajectory of each tracking target; extracting the pose information of the target with a pose estimation algorithm and, in combination with the target's motion trajectory and spatiotemporal features, constructing a multidimensional behavior feature vector; establishing a preset abnormal behavior rule base containing a set of known abnormal behaviors; and, based on the output of the multidimensional behavior feature vector and in combination with the preset abnormal behavior rule base, analyzing the identified behavior in real time to determine whether the target is performing abnormal behavior.
In a second aspect, a system for identifying abnormal behavior in a financial escort process is provided. The system comprises a data acquisition module, a feature extraction module, a feature matching module, a video stream acquisition module, a target recognition module, a motion trajectory acquisition module, a multidimensional behavior feature vector construction module, an abnormal behavior rule base establishing module and an abnormal behavior recognition module. The data acquisition module is used for collecting personnel feature data, the feature data including fingerprint data, facial data and iris data; the feature extraction module is used for performing feature extraction on the personnel feature data and establishing a multimodal biometric database; the feature matching module is used for acquiring real-time personnel feature data and matching the features extracted from the real-time personnel feature data with the features stored in the multimodal biometric database by means of a convolutional neural network to obtain a matching result; the video stream acquisition module is used for continuously acquiring and preprocessing the video stream of the monitored area in real time when the matching result is a successful match; the target recognition module is used for performing target detection on the preprocessed video stream with a deep learning algorithm and identifying tracking targets, the tracking targets including personnel and vehicles; the motion trajectory acquisition module is used for tracking the detected tracking targets across consecutive frames with a target tracking algorithm and obtaining the motion trajectory of the tracking target; the multidimensional behavior feature vector construction module is used for extracting the pose information of the target with a pose estimation algorithm and constructing a multidimensional behavior feature vector in combination with the target's motion trajectory and spatiotemporal features; the abnormal behavior rule base establishing module is used for establishing a preset abnormal behavior rule base containing a set of known abnormal behaviors; and the abnormal behavior recognition module is used for analyzing the recognized behavior in real time, based on the output of the multidimensional behavior feature vector in combination with the preset abnormal behavior rule base, to recognize whether the target is performing abnormal behavior.
The abnormal behavior identification method and system for the financial escort process described above solve the technical problem that, in the prior art, a single personnel identification technology cannot simultaneously identify risks in the escort process and makes it difficult to discover escort risks in time.
The foregoing is only an overview of the technical solution of the present application. So that the technical means of the application may be more clearly understood and implemented in accordance with the contents of the specification, and to make the above and other objects, features and advantages of the application more readily apparent, specific embodiments are set forth below.
Drawings
FIG. 1 is a flowchart of a method for identifying abnormal behavior in a financial escort process according to one embodiment;
FIG. 2 is a schematic flowchart of the step of extracting the pose information of the target with a pose estimation algorithm and constructing a multidimensional behavior feature vector in combination with the motion trajectory and spatiotemporal features of the target, in the method for identifying abnormal behaviors in a financial escort process, according to one embodiment;
FIG. 3 is a block diagram of a system for identifying abnormal behavior of a financial escort process in one embodiment.
Reference numerals: data acquisition module 11, feature extraction module 12, feature matching module 13, video stream acquisition module 14, target recognition module 15, motion trajectory acquisition module 16, multidimensional behavior feature vector construction module 17, abnormal behavior rule base establishing module 18, abnormal behavior recognition module 19.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
As shown in fig. 1, the present application provides a method for identifying abnormal behavior in a financial escort process, which includes:
collecting personnel characteristic data, wherein the characteristic data comprises fingerprint data, facial data and iris data;
Fingerprint data, facial data and iris data are all unique biometric characteristics with high distinctiveness and stability. In the financial escort process, the fingerprint, facial and iris data of escort personnel are collected to identify and verify personnel identity. Iris recognition in particular provides a higher level of authentication for enhanced security, since iris features are difficult to forge or imitate, preventing unauthorized persons from gaining access to escorted items.
Collecting personnel characteristic data, wherein the characteristic data comprises fingerprint data, facial data and iris data, and the method comprises the following steps of:
acquiring fingerprints by using professional fingerprint acquisition equipment to acquire fingerprint data;
acquiring face data of a person by using a high-resolution camera;
Performing multiple scans with an iris acquisition device to obtain iris data under different angles and illumination conditions.
An optical fingerprint acquisition device captures the fine ridge patterns and feature points of the fingerprint, ensuring that the collected fingerprint data have high resolution and definition. When a finger is pressed on the acquisition surface, the ridges and valleys of the finger surface reflect light differently; the device captures these differences in reflected light through its sensor to form a fingerprint image. When collecting facial data, a high-resolution camera captures facial details and generates facial data, including the shape and outline of the eyes, nose, mouth and other parts, as well as skin texture and similar characteristics. The facial expressions and movements of escort personnel are captured in real time, making it possible to recognize potential abnormal states such as tension or anxiety. To adapt to different illumination environments, the camera provides automatic focusing and automatic exposure: in dim environments it automatically increases the exposure, and in overly bright environments it automatically reduces it, so that the collected facial images have suitable contrast and brightness. The iris acquisition device is a high-precision biometric device that uses image processing and optical technology to accurately capture the fine textures and feature points of the iris, achieving high-precision iris recognition. A dedicated built-in optical system adjusts the incidence angle and intensity of light within a certain range and focuses the light on the iris region, so that the texture information of the iris is captured accurately.
Extracting the characteristics of the personnel characteristic data, and establishing a multi-mode biological characteristic database;
First, feature extraction is performed on the personnel feature data obtained from the acquisition devices, covering fingerprint features, facial features and iris features. Fingerprint feature extraction determines the pattern type of the fingerprint image by analyzing the overall trend and mode of its ridges; the types include whorl, loop and arch. A whorl has circular or spiral ridge lines, a loop has dustpan-shaped ridge lines, and an arch has smooth, arc-shaped ridge lines. The fingerprint image also contains a number of unique minutiae, such as endpoints and bifurcation points. The ridges are thinned to single-pixel-wide lines with an image thinning method, and each minutia is recorded with its coordinate position (x, y), its direction (expressed as an angle) and its type (endpoint or bifurcation point). Facial feature extraction locates facial key points, such as the corners of the eyes, the nose and the corners of the mouth, through an active appearance model. A facial feature point detection algorithm learns the distribution pattern of facial key points on the training set and searches for the matching key point positions in the input facial image. The positions of the facial key points are then determined, and relations such as the relative distances and angles between key points are calculated, including the distance between the eyes and the distance from the tip of the nose to the corner of the mouth. Finally, texture information of the facial skin is extracted. Iris feature extraction locates the collected iris image to determine the inner and outer boundaries of the iris, normalizes the iris region to a fixed size, and filters the normalized iris region.
Extracting the characteristics of the personnel characteristic data, and establishing a multi-mode biological characteristic database comprises the following steps:
Extracting characteristic information such as texture, direction and frequency of the fingerprint from the collected fingerprint data by using a fingerprint characteristic extraction algorithm;
Extracting feature information such as key points, textures, shapes and the like of the face from the acquired face data by using a face feature extraction algorithm, wherein the key points of the face comprise eye corners, mouth corners and the like;
extracting characteristic information such as texture, spots, lines and the like of the iris from the acquired iris data by using an iris characteristic extraction algorithm;
And integrating the extracted fingerprint features, facial features and iris features to form multi-modal biological feature data of each person, and establishing a multi-modal biological feature database.
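As a concrete illustration of the representations described above, the sketch below shows one plausible way to encode fingerprint minutiae and derive relational facial-geometry features; the field names and key-point labels are assumptions for illustration, not part of the patent.

```python
# Illustrative sketch only: one possible encoding of the extracted features.
from dataclasses import dataclass
from math import hypot

@dataclass
class Minutia:
    x: int          # pixel coordinates of the minutia point
    y: int
    angle: float    # local ridge direction, in degrees
    kind: str       # "endpoint" or "bifurcation"

def facial_geometry(keypoints: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Derive relational features (inter-key-point distances) from located facial key points."""
    def dist(a: str, b: str) -> float:
        (x1, y1), (x2, y2) = keypoints[a], keypoints[b]
        return hypot(x2 - x1, y2 - y1)
    # Example relations named in the text: eye-to-eye and nose-tip-to-mouth-corner distances
    return {
        "eye_distance": dist("left_eye_corner", "right_eye_corner"),
        "nose_to_mouth": dist("nose_tip", "mouth_corner"),
    }
```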
Feature extraction is performed by a convolutional neural network (CNN). First, a convolution operation outputs a feature map; next, the size of the feature map generated by the convolution is computed; then a pooling operation is applied to the feature map; finally, a fully connected layer operation is performed.
Convolution operation expression:
Y = conv2(W, X, 'valid') + b
wherein Y is the output feature map;
W is the convolution kernel;
X is the input image;
'valid' is the convolution type (no zero padding);
b is a bias term;
conv2 denotes the two-dimensional convolution operation.
The size of the feature map generated after the convolution operation is:
o = floor((w - k + 2p) / s) + 1
where w is the size of the input image;
p is the number of padding layers;
k is the size of the convolution kernel;
s is the stride;
and the division rounds down.
The max pooling operation expression is:
S = max_pooling(X, k)
where S is the output feature map;
max_pooling denotes the max pooling operation;
X is the input feature map;
k is the pooling window size.
The output Y of the fully connected layer is expressed as:
Y = W * A + b
where W is the weight matrix of the fully connected layer;
A is the output of the previous layer;
b is the bias of the fully connected layer.
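Read together, the four expressions above describe a standard convolution, pooling and fully connected pipeline. A minimal PyTorch sketch of such a network follows; the channel counts, kernel size and input resolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Minimal conv -> max-pool -> fully-connected pipeline matching the formulas above."""
    def __init__(self, num_features: int = 128):
        super().__init__()
        # 'valid' convolution: padding p = 0, so o = (w - k + 2p)/s + 1
        self.conv = nn.Conv2d(1, 16, kernel_size=3, stride=1, padding=0)
        self.pool = nn.MaxPool2d(kernel_size=2)            # S = max_pooling(X, k)
        self.fc = nn.Linear(16 * 63 * 63, num_features)    # Y = W * A + b

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.conv(x))                       # Y = conv2(W, X, 'valid') + b
        x = self.pool(x)
        return self.fc(x.flatten(1))

# A 128x128 single-channel biometric image: (128 - 3)/1 + 1 = 126, pooled to 63
features = FeatureNet()(torch.randn(1, 1, 128, 128))
```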
Acquiring real-time personnel feature data, and matching the extracted features of the real-time personnel feature data with the features stored in the multi-mode biological feature database by using a convolutional neural network to acquire a matching result;
Feature matching is performed with a convolutional neural network: a CNN is defined and trained to extract features from the input data, and a similarity matrix between samples is computed from those features. In the similarity measurement stage, the similarity between feature vectors is calculated with measures such as cosine similarity, completing the verification and matching task. Finally, the verification results of the individual modalities are integrated into a final verification conclusion, and feedback based on that result (e.g., "verification success" or "verification failure") is provided to the user.
The output feature map expression is:
Y = conv2(W, X, 'valid')
where X is an m x n input image matrix;
W is a k x k convolution kernel matrix (k < m, k < n);
Y is the resulting (m - k + 1) x (n - k + 1) output feature map.
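The matching stage just described can be sketched as follows: cosine similarity is computed per modality and the per-modality results are fused into a single verdict. The threshold and the equal-weight fusion are assumptions; the patent specifies only cosine similarity and multimodal integration.

```python
# Hedged sketch of verification: cosine similarity per modality, fused by averaging.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify(live: dict[str, np.ndarray], stored: dict[str, np.ndarray],
           threshold: float = 0.85) -> str:
    # Average the per-modality similarities (fingerprint, face, iris); threshold is assumed
    scores = [cosine_sim(live[m], stored[m]) for m in ("fingerprint", "face", "iris")]
    return "verification success" if np.mean(scores) >= threshold else "verification failure"
```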
When the matching result is that the matching is successful, continuously acquiring the video stream of the monitoring area in real time and preprocessing the video stream;
After the match succeeds, the video stream of the monitored area is continuously acquired in real time, and the captured video frames are preprocessed to improve image visibility for subsequent analysis. First, the video frames are resized. Because of the complexity of the monitoring environment, the stream may contain various kinds of noise, so Gaussian filtering is applied for denoising. Contrast enhancement is then applied to the license plates of escort vehicles and the faces of escort personnel.
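A plausible OpenCV rendering of this preprocessing chain is sketched below; the target resolution, blur kernel and CLAHE parameters are assumptions.

```python
import cv2

def preprocess(frame, size=(640, 480)):
    frame = cv2.resize(frame, size)                # resizing operation
    frame = cv2.GaussianBlur(frame, (5, 5), 0)     # Gaussian filtering for denoising
    # Contrast enhancement (CLAHE) applied to the luminance channel
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
```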
Performing target detection on the preprocessed video stream by using a deep learning algorithm, and identifying a tracking target, wherein the tracking target comprises personnel and vehicles;
Escort personnel are distinguished from other persons by the color and style of the escort uniform. By analyzing multi-frame images in the video stream, it is identified whether a person is walking or running normally or performing abnormal actions (such as suddenly squatting or waving an arm). Vehicles have distinctive shape features, such as the rectangular contour of the body and the round wheels; the color and license plate of the vehicle are also important identification features. The driving state of the vehicle, such as speed and driving direction, can be obtained by analyzing the change of the vehicle's position across consecutive frames, so as to monitor whether the escort vehicle travels along the preset route, whether there is abnormal acceleration or deceleration, and so on.
Target detection is carried out on the preprocessed video stream by using a deep learning algorithm, and a tracking target is identified, wherein the tracking target comprises personnel and vehicles and comprises the following components:
Extracting images from the video stream frame by frame for frame analysis;
performing enhancement processing on the extracted image and adjusting the image to a size suitable for the input of the deep learning model;
carrying out normalization processing on the image to enable the pixel value to be in a specific range;
performing feature extraction and classification by using a convolutional neural network;
Training the selected deep learning model by using the marked data set;
Applying a trained model to each frame in the video stream to perform target detection, and identifying the positions and the categories of personnel and vehicles;
a tracking algorithm is applied between successive frames to track the motion trajectory of the object in the video.
The method comprises the steps of extracting images from a video stream frame by frame, carrying out contrast enhancement and noise reduction on the extracted images, adjusting the images to a size suitable for deep learning model input, carrying out normalization on the images, normalizing pixel values to a specific range, carrying out characteristic extraction and classification by using a convolutional neural network, carrying out convolution operation firstly, carrying out pooling operation secondly, and carrying out full-connection layer operation. Training the selected deep learning model by using the marked data set, applying the trained model to each frame in the video stream to perform target detection, sliding a convolution kernel on the image, extracting features and judging whether personnel or vehicle targets exist according to the output of the full connection layer. If an object is present, the model determines the location and class of the person and vehicle based on the feature map and associated algorithms. And finally, a tracking algorithm is applied between the continuous frames to track the motion trail of the target in the video.
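The patent does not name a particular detection model, so the sketch below stands in with a pretrained torchvision Faster R-CNN purely for illustration (COCO label 1 = person, 3 = car), applied to each preprocessed frame.

```python
import torch
import torchvision

# Pretrained detector used only as an illustrative stand-in for the trained model
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect(frame_rgb01: torch.Tensor, score_thresh: float = 0.6):
    """frame_rgb01: float tensor (3, H, W), pixel values normalized to [0, 1]."""
    with torch.no_grad():
        out = model([frame_rgb01])[0]
    # Keep confident detections of persons (label 1) and cars (label 3)
    keep = (out["scores"] > score_thresh) & torch.isin(out["labels"], torch.tensor([1, 3]))
    return out["boxes"][keep], out["labels"][keep]
```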
Tracking the detected tracking target in continuous frames by utilizing a target tracking algorithm to acquire a motion trail of the tracking target;
Regions that may contain the target are generated with an algorithm and then classified and regressed to determine the exact location and class of the target. The optical flow method then assumes that pixel intensities remain unchanged over a short time, and analyzes the gray-level changes of pixels between adjacent frames to determine their direction and speed of movement, thereby achieving target tracking. During tracking, features of the target are continuously extracted from the current frame as the target moves through the video. A sliding window method is used to search for the most similar region in subsequent frames: a window close to the target's initial size is slid over each subsequent frame, and the similarity between the region inside the window and the target features is computed. When the most similar region is found (as determined by a preset similarity threshold), the target's location in that frame is considered found, and its position information is updated in time so that the target is accurately identified.
Tracking the detected tracking target in continuous frames by using a target tracking algorithm to acquire a motion trail of the tracking target, wherein the method comprises the following steps:
performing target detection on each frame of the video stream by using a target detection algorithm;
Initializing the detected target, wherein the initialization comprises the initial position, the size and possible appearance characteristics of the target;
Calculating the motion of pixels in the image by using an optical flow method, so as to track the motion trail of the target;
extracting features of the target from the current frame;
Searching a region most similar to the target in a subsequent frame according to the characteristics of the extracted target;
When the most similar area is found, updating the position information of the target;
and estimating the motion state of the target according to the position information of the target in the continuous frames, and obtaining the motion trail of the target.
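One way to realize the optical-flow step from the list above is pyramidal Lucas-Kanade flow, sketched below; the window size and pyramid depth are assumptions.

```python
# Minimal sketch: Lucas-Kanade optical flow propagates feature points of a detected
# target from one frame to the next, building up a trajectory point per frame.
import cv2
import numpy as np

def track_step(prev_gray, next_gray, points):
    """points: (N, 1, 2) float32 feature points inside the target's bounding box."""
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, points, None, winSize=(21, 21), maxLevel=3)
    good = nxt[status.ravel() == 1]
    # Trajectory point for this frame = centroid of the successfully tracked features
    centroid = good.reshape(-1, 2).mean(axis=0) if len(good) else None
    return good.reshape(-1, 1, 2), centroid
```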
Extracting the pose information of the target with a pose estimation algorithm, and constructing a multidimensional behavior feature vector in combination with the motion trajectory and spatiotemporal features of the target;
The poses of escort personnel and vehicles in the video frames, such as the joint positions of the human body and the angles of the limbs, are determined with a pose estimation algorithm. Position coordinates at different moments represent the movement trajectories of escort personnel and vehicles. Temporal features cover the time, duration and rhythm of the movement of escort personnel and vehicles in the video, while spatial features cover whether the vehicle is in a lane, at an intersection, in a parking lot or in another spatial region. The pose information, the motion trajectory features and the spatiotemporal features are combined into a multidimensional behavior feature vector.
Extracting the pose information of the target with a pose estimation algorithm, and constructing a multidimensional behavior feature vector in combination with the motion trajectory and spatiotemporal features of the target, comprises:
performing pose estimation on a target in the video stream with a pose estimation algorithm to obtain the key points of the target and their position information;
extracting the pose information of the target from the output of the pose estimation algorithm, the pose information including the coordinates of the key points, the relative positional relationships between the key points, and the motion trajectories of the key points;
acquiring the motion trajectory of the target with a target tracking algorithm, the motion trajectory being the sequence of position information of the target in consecutive frames;
extracting the spatiotemporal features of the target, the spatiotemporal features being the change information of the target in time and space, including the target's acceleration, speed changes and pose changes;
combining the extracted pose information, motion trajectory and spatiotemporal features to construct the multidimensional behavior feature vector.
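A hedged sketch of assembling the multidimensional behavior feature vector from these three ingredients follows; the exact vector layout and the summary statistics chosen are assumptions, since the patent specifies only the ingredients.

```python
import numpy as np

def behavior_vector(keypoints: np.ndarray, trajectory: np.ndarray, dt: float = 1.0):
    """keypoints: (K, 2) pose key points; trajectory: (T, 2) positions, assumes T >= 3."""
    velocity = np.diff(trajectory, axis=0) / dt    # per-frame velocity vectors
    speed = np.linalg.norm(velocity, axis=1)
    accel = np.diff(speed) / dt                    # scalar acceleration
    return np.concatenate([
        keypoints.ravel(),            # pose: key point coordinates
        trajectory[-5:].ravel(),      # recent motion trajectory
        [speed.mean(), speed.std(), accel.mean() if len(accel) else 0.0],
    ])
```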
Establishing a preset abnormal behavior rule base, wherein the abnormal behavior rule base comprises a known abnormal behavior set;
An abnormal behavior rule base is established with the aim of identifying abnormal behaviors of financial escort personnel during task execution. By defining rules for these behaviors, abnormal behaviors are detected automatically with monitoring technology, ensuring the security of on-site financial escort.
Establishing a preset abnormal behavior rule base, wherein the abnormal behavior rule base comprises a known abnormal behavior set and comprises the following steps:
collecting sample data containing various abnormal behaviors, and establishing a preset abnormal behavior rule base;
extracting behavior feature information capable of describing the abnormal behavior samples, the feature information including the target's pose, motion trajectory, speed, acceleration, and interactions with other targets;
and constructing an abnormal behavior rule base based on the extracted characteristic information.
A preset abnormal behavior rule base is established by collecting abnormal behavior sample data of escort personnel and vehicles. Abnormal behaviors of escort personnel include abnormal sitting postures, dozing, making phone calls, eating, smoking, whispering to one another, and the like; abnormal behaviors of vehicles include not travelling along the set track, overspeeding or travelling abnormally slowly, collision of the target vehicle with other vehicles or persons, abnormal vehicle components, and the like. The rule base is constructed by extracting behavior feature information from the abnormal behavior samples, including the posture of escort personnel, the motion trajectory of escort vehicles, vehicle speed, acceleration, and interactions with other targets.
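The rule base itself can be sketched as a set of named threshold rules over the extracted behavior features; the concrete rules and limits below are illustrative assumptions, not values from the patent.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]   # predicate over the extracted behavior features

# Illustrative rules only; thresholds are assumptions
RULE_BASE = [
    Rule("vehicle overspeed", lambda f: f["speed"] > 16.7),                   # > ~60 km/h
    Rule("vehicle off preset route", lambda f: f["route_deviation"] > 10.0),  # metres
    Rule("guard abnormal posture", lambda f: f["torso_angle"] > 45.0),        # degrees
]

def match_rules(features: dict) -> list[str]:
    return [r.name for r in RULE_BASE if r.check(features)]
```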
Based on the output of the multidimensional behavior feature vector, the identified behavior is analyzed in real time in combination with a preset abnormal behavior rule base to identify whether the target is performing abnormal behavior.
The behaviors of escort personnel and vehicles in the video frames, represented by the output multidimensional behavior feature vectors, are compared against the preset abnormal behavior rule base to identify whether escort personnel or vehicles are performing abnormal behaviors.
Based on the output of the multidimensional behavior feature vector, analyzing the identified behavior in real time in combination with a preset abnormal behavior rule base to identify whether the target is performing abnormal behavior comprises the following steps:
extracting the multidimensional behavior feature vector of the target from the video stream in real time with a pose estimation algorithm, a target tracking algorithm and a spatiotemporal feature extraction method;
matching the extracted multidimensional behavior feature vector with a preset abnormal behavior rule base;
Judging whether the target is performing abnormal behavior in real time according to the matching result of the feature vector and the rule base;
when abnormal behavior is identified, the system immediately triggers an alarm mechanism.
The positions of the joints of the human body and the motion trajectory information of the targets are identified with a pose estimation algorithm, a target tracking algorithm and a spatiotemporal feature extraction method. This information includes the coordinates of joints such as the head, shoulders, elbows, wrists, hips, knees and ankles of escort personnel, and the motion trajectory of the escort vehicle. The posture of the human body, such as standing, bending down or raising a hand, and the movement track of the vehicle are described through the positional relationships of the joint points, and features such as the target's movement direction and speed changes are further analyzed: the motion direction vector of the target is obtained by computing vectors between adjacent points on the trajectory, and the speed information of the target is obtained from the rate of change of the distance between trajectory points over time, yielding the multidimensional behavior feature vector of the target. After extraction, the multidimensional behavior feature vector is matched one by one against the rules in the abnormal behavior rule base. If an abnormal behavior is matched, the system raises an alarm and sends the alarm information to the remote monitoring center over the network.
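The sketch below ties the pieces together in the spirit of this paragraph: direction vectors from adjacent trajectory points, speed from the rate of change of distance over time, matching against the rule base sketched earlier, and an alarm placeholder. The alarm transport and the omitted route-deviation computation are assumptions.

```python
import numpy as np

def analyze(trajectory: np.ndarray, torso_angle: float, fps: float = 25.0):
    d = np.diff(trajectory, axis=0)                 # direction vectors between adjacent points
    speed = np.linalg.norm(d, axis=1).mean() * fps  # rate of change of distance over time
    features = {
        "speed": speed,
        "route_deviation": 0.0,     # deviation from the preset route; computation omitted here
        "torso_angle": torso_angle,
    }
    hits = match_rules(features)    # rule base from the earlier sketch
    if hits:
        send_alarm(hits)            # hypothetical hook: notify the remote monitoring center

def send_alarm(rule_names: list[str]) -> None:
    print("ALARM ->", ", ".join(rule_names))        # placeholder for the network alert
```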
As shown in fig. 3, an embodiment of the present application includes a system for identifying abnormal behavior of a financial escort process, the system including:
The data acquisition module 11 is used for acquiring personnel characteristic data, wherein the characteristic data comprises fingerprint data, facial data and iris data;
the feature extraction module 12 is used for carrying out feature extraction on the personnel feature data and establishing a multi-mode biological feature database;
The feature matching module 13 is used for acquiring real-time personnel feature data, and matching the features extracted from the real-time personnel feature data with the features stored in the multi-mode biological feature database by using a convolutional neural network to acquire a matching result;
The video stream obtaining module 14 is configured to continuously obtain the video stream of the monitoring area in real time and perform preprocessing when the matching result is that the matching is successful;
The target recognition module 15 is used for performing target detection on the preprocessed video stream by using a deep learning algorithm and recognizing a tracking target, wherein the tracking target comprises personnel and vehicles;
A motion track acquisition module 16, configured to track the detected tracking target in successive frames by using a target tracking algorithm, and acquire a motion track of the tracking target;
The multidimensional behavior feature vector construction module 17 is used for extracting the pose information of the target with a pose estimation algorithm and constructing a multidimensional behavior feature vector in combination with the motion trajectory and spatiotemporal features of the target;
an abnormal behavior rule base establishing module 18, configured to establish a preset abnormal behavior rule base, where the abnormal behavior rule base includes a known abnormal behavior set;
The abnormal behavior recognition module 19 is configured to analyze the recognized behavior in real time based on the output of the multidimensional behavior feature vector in combination with a preset abnormal behavior rule base, and recognize whether the target is performing an abnormal behavior.
Further, the data acquisition module 11 further includes:
acquiring fingerprints by using professional fingerprint acquisition equipment to acquire fingerprint data;
acquiring face data of a person by using a high-resolution camera;
Performing multiple scans with an iris acquisition device to obtain iris data under different angles and illumination conditions.
Further, the feature extraction module 12 further includes:
Extracting characteristic information such as texture, direction and frequency of the fingerprint from the collected fingerprint data by using a fingerprint characteristic extraction algorithm;
Extracting feature information such as key points, textures, shapes and the like of the face from the acquired face data by using a face feature extraction algorithm, wherein the key points of the face comprise eye corners, mouth corners and the like;
extracting characteristic information such as texture, spots, lines and the like of the iris from the acquired iris data by using an iris characteristic extraction algorithm;
And integrating the extracted fingerprint features, facial features and iris features to form multi-modal biological feature data of each person, and establishing a multi-modal biological feature database.
Further, the object recognition module 15 further includes:
Extracting images from the video stream frame by frame for frame analysis;
performing enhancement processing on the extracted image and adjusting the image to a size suitable for the input of the deep learning model;
carrying out normalization processing on the image to enable the pixel value to be in a specific range;
performing feature extraction and classification by using a convolutional neural network;
Training the selected deep learning model by using the marked data set;
Applying a trained model to each frame in the video stream to perform target detection, and identifying the positions and the categories of personnel and vehicles;
a tracking algorithm is applied between successive frames to track the motion trajectory of the object in the video.
Further, the motion trajectory acquisition module 16 further includes:
performing target detection on each frame of the video stream by using a target detection algorithm;
Initializing the detected target, wherein the initialization comprises the initial position, the size and possible appearance characteristics of the target;
Calculating the motion of pixels in the image by using an optical flow method, so as to track the motion trail of the target;
extracting features of the target from the current frame;
Searching a region most similar to the target in a subsequent frame according to the characteristics of the extracted target;
When the most similar area is found, updating the position information of the target;
and estimating the motion state of the target according to the position information of the target in the continuous frames, and obtaining the motion trail of the target.
Further, the multidimensional behavior feature vector constructing module 17 further includes:
performing pose estimation on a target in the video stream with a pose estimation algorithm to obtain the key points of the target and their position information;
extracting the pose information of the target from the output of the pose estimation algorithm, the pose information including the coordinates of the key points, the relative positional relationships between the key points, and the motion trajectories of the key points;
acquiring the motion trajectory of the target with a target tracking algorithm, the motion trajectory being the sequence of position information of the target in consecutive frames;
extracting the spatiotemporal features of the target, the spatiotemporal features being the change information of the target in time and space, including the target's acceleration, speed changes and pose changes;
combining the extracted pose information, motion trajectory and spatiotemporal features to construct the multidimensional behavior feature vector.
Further, the abnormal behavior rule base building module 18 further includes:
collecting sample data containing various abnormal behaviors, and establishing a preset abnormal behavior rule base;
extracting behavior feature information capable of describing the abnormal behavior samples, the feature information including the target's pose, motion trajectory, speed, acceleration, and interactions with other targets;
and constructing an abnormal behavior rule base based on the extracted characteristic information.
Further, the abnormal behavior recognition module 19 further includes:
extracting the multidimensional behavior feature vector of the target from the video stream in real time with a pose estimation algorithm, a target tracking algorithm and a spatiotemporal feature extraction method;
matching the extracted multidimensional behavior feature vector with a preset abnormal behavior rule base;
Judging whether the target is performing abnormal behavior in real time according to the matching result of the feature vector and the rule base;
when abnormal behavior is identified, the system immediately triggers an alarm mechanism.
For specific embodiments of the abnormal behavior recognition system for the financial escort process, reference may be made to the above embodiments of the abnormal behavior recognition method for the financial escort process, which are not described here again. Each of the above modules may be embedded in hardware, independent of the processor in the computer device, or stored as software in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not every possible combination of these technical features is described; however, as long as a combination contains no contradiction, it should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the application; their description is specific and detailed, but should not therefore be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the application, all of which fall within its protection scope. Accordingly, the scope of protection of the present application is defined by the appended claims.

Claims (9)

1.一种金融押运过程异常行为识别方法,其特征在于,所述方法包括:1. A method for identifying abnormal behavior in a financial escort process, characterized in that the method comprises: 采集人员特征数据,所述特征数据包括指纹数据、面部数据、虹膜数据;Collecting personnel characteristic data, including fingerprint data, facial data, and iris data; 对所述人员特征数据进行特征提取,建立多模态生物特征数据库;Extracting features from the personnel feature data and establishing a multimodal biometric database; 获取实时人员特征数据,使用卷积神经网络将所述实时人员特征数据提取的特征与多模态生物特征数据库中存储的特征进行匹配,获得匹配结果;Acquire real-time personnel feature data, and use a convolutional neural network to match features extracted from the real-time personnel feature data with features stored in a multimodal biometric database to obtain a matching result; 当所述匹配结果为匹配成功,继续实时获取监控区域的视频流并进行预处理;When the matching result is successful, continue to obtain the video stream of the monitored area in real time and perform preprocessing; 利用深度学习算法对预处理后的视频流进行目标检测,识别出跟踪目标,所述跟踪目标包括人员和车辆;Using a deep learning algorithm to perform target detection on the preprocessed video stream and identify tracking targets, which include people and vehicles; 利用目标跟踪算法对检测到的跟踪目标进行连续帧的跟踪,获取跟踪目标的运动轨迹;The target tracking algorithm is used to track the detected tracking target in continuous frames to obtain the motion trajectory of the tracking target; 利用姿态估计算法提取目标的姿态信息,并结合目标的运动轨迹和时空特征,构建多维行为特征向量;The posture estimation algorithm is used to extract the posture information of the target, and the multi-dimensional behavior feature vector is constructed by combining the target's motion trajectory and spatiotemporal characteristics. 建立预设的异常行为规则库,所述异常行为规则库包含了已知的异常行为集合;Establishing a preset abnormal behavior rule library, wherein the abnormal behavior rule library includes a known abnormal behavior set; 基于所述多维行为特征向量的输出,结合预设的异常行为规则库,对识别到的行为进行实时分析,识别出目标是否正在进行异常行为。Based on the output of the multi-dimensional behavior feature vector and in combination with a preset abnormal behavior rule library, the identified behavior is analyzed in real time to identify whether the target is performing abnormal behavior. 2.如权利要求1所述的方法,其特征在于,采集人员特征数据,所述特征数据包括指纹数据、面部数据、虹膜数据,包括:2. The method according to claim 1, characterized in that the personnel characteristic data is collected, and the characteristic data includes fingerprint data, facial data, and iris data, including: 使用专业的指纹采集设备采集指纹获得指纹数据;Use professional fingerprint collection equipment to collect fingerprints and obtain fingerprint data; 使用高分辨率摄像头采集人员面部获得面部数据;Use a high-resolution camera to capture a person's face to obtain facial data; 使用虹膜采集设备进行多次扫描以获取不同角度和光照条件下的虹膜数据。Use the iris acquisition device to perform multiple scans to obtain iris data under different angles and lighting conditions. 3.如权利要求1所述的方法,其特征在于,对所述人员特征数据进行特征提取,建立多模态生物特征数据库,包括:3. 
The method according to claim 1, wherein extracting features from the personnel feature data and establishing a multimodal biometric feature database comprises: 使用指纹特征提取算法,从采集到的指纹数据中提取出指纹的纹理、方向、频率等特征信息;Use fingerprint feature extraction algorithm to extract fingerprint texture, direction, frequency and other feature information from the collected fingerprint data; 使用面部特征提取算法,从采集到的面部数据中提取出面部的关键点、纹理和形状等特征信息,所述面部的关键点包括眼角、嘴角等;Using a facial feature extraction algorithm, feature information such as facial key points, texture and shape are extracted from the collected facial data, wherein the facial key points include the corners of the eyes and the corners of the mouth; 使用虹膜特征提取算法,从采集到的虹膜数据中提取出虹膜的纹理、斑点、线条等特征信息;Use iris feature extraction algorithm to extract iris texture, spots, lines and other feature information from the collected iris data; 将提取出的指纹特征、面部特征和虹膜特征进行整合,形成每个人的多模态生物特征数据,建立多模态生物特征数据库。The extracted fingerprint features, facial features and iris features are integrated to form each person's multimodal biometric data and establish a multimodal biometric database. 4.如权利要求1所述的方法,其特征在于,利用深度学习算法对预处理后的视频流进行目标检测,识别出跟踪目标,所述跟踪目标包括人员和车辆,包括:4. The method according to claim 1, characterized in that the deep learning algorithm is used to perform target detection on the preprocessed video stream to identify the tracking target, wherein the tracking target includes a person and a vehicle, including: 从视频流中逐帧提取图像,以便进行逐帧分析;Extract images frame by frame from the video stream for frame-by-frame analysis; 对提取的图像进行增强处理并将图像调整到适合深度学习模型输入的尺寸;Enhance the extracted images and resize them to a size suitable for deep learning model input; 对图像进行归一化处理,使像素值在特定范围内;Normalize the image so that the pixel values are within a specific range; 利用卷积神经网络进行特征提取和分类;Use convolutional neural networks for feature extraction and classification; 使用标注好的数据集对选择的深度学习模型进行训练;Use the labeled dataset to train the selected deep learning model; 对视频流中的每一帧应用训练好的模型进行目标检测,识别出人员和车辆的位置和类别;Apply the trained model to each frame in the video stream for object detection, identifying the location and category of people and vehicles; 在连续帧之间应用跟踪算法,跟踪目标在视频中的运动轨迹。A tracking algorithm is applied between consecutive frames to track the movement trajectory of the target in the video. 5.如权利要求1所述的方法,其特征在于,利用目标跟踪算法对检测到的跟踪目标进行连续帧的跟踪,获取跟踪目标的运动轨迹,包括:5. The method according to claim 1, characterized in that the detected tracking target is tracked in consecutive frames using a target tracking algorithm to obtain a motion trajectory of the tracking target, comprising: 利用目标检测算法对视频流的每一帧进行目标检测;Use the target detection algorithm to detect the target in each frame of the video stream; 对检测到目标进行初始化,所述的初始化包括目标的初始位置、大小、以及可能的外观特征;Initializing the detected target, wherein the initialization includes the initial position, size, and possible appearance features of the target; 利用光流法计算图像中像素的运动,从而跟踪目标的运动轨迹;Use the optical flow method to calculate the movement of pixels in the image, thereby tracking the movement trajectory of the target; 从当前帧中提取目标的特征;Extract the features of the target from the current frame; 根据所述提取目标的特征,在后续帧中搜索与目标最相似的区域;According to the features of the extracted target, searching for the area most similar to the target in subsequent frames; 当找到所述最相似的区域,更新目标的位置信息;When the most similar area is found, the location information of the target is updated; 根据连续帧中目标的位置信息,预估目标的运动状态,获取目标的运动轨迹。According to the position information of the target in the continuous frames, the motion state of the target is estimated and the motion trajectory of the target is obtained. 
6. The method according to claim 1, characterized in that extracting posture information of the target using a posture estimation algorithm and constructing a multi-dimensional behavior feature vector by combining the target's motion trajectory and spatiotemporal features comprises:
estimating the posture of the target in the video stream using a posture estimation algorithm to obtain the target's key points and their position information;
extracting the target's posture information from the output of the posture estimation algorithm, the posture information including the coordinates of the key points, the relative positional relationships between the key points, and the motion trajectories of the key points;
obtaining the target's motion trajectory using a target tracking algorithm, the motion trajectory being the sequence of the target's position information across consecutive frames;
extracting the target's spatiotemporal features, the spatiotemporal features referring to the target's changes in time and space, including the target's acceleration, velocity changes, and posture changes;
combining the extracted posture information, motion trajectory, and spatiotemporal features to construct the multi-dimensional behavior feature vector.
7. The method according to claim 1, characterized in that establishing a preset abnormal behavior rule library, the abnormal behavior rule library containing a set of known abnormal behaviors, comprises:
collecting sample data containing various abnormal behaviors to establish the preset abnormal behavior rule library;
extracting behavioral feature information that describes the abnormal behavior samples, the feature information including the target's posture, motion trajectory, velocity, acceleration, and interactions with other targets;
constructing the abnormal behavior rule library based on the extracted feature information.
8. The method according to claim 1, characterized in that analyzing the identified behavior in real time based on the multi-dimensional behavior feature vector and the preset abnormal behavior rule library to determine whether the target is performing abnormal behavior comprises:
extracting the target's multi-dimensional behavior feature vector from the video stream in real time using the posture estimation algorithm, the target tracking algorithm, and the spatiotemporal feature extraction method;
matching the extracted multi-dimensional behavior feature vector against the preset abnormal behavior rule library;
determining in real time whether the target is performing abnormal behavior according to the matching result between the feature vector and the rule library;
triggering an alarm mechanism immediately when abnormal behavior is identified.
9. A system for identifying abnormal behavior in a financial escort process, characterized in that the system comprises:
a data acquisition module for collecting personnel feature data, the feature data including fingerprint data, facial data, and iris data;
a feature extraction module for performing feature extraction on the personnel feature data to establish a multimodal biometric database;
a feature matching module for acquiring real-time personnel feature data and using a convolutional neural network to match features extracted from the real-time personnel feature data against features stored in the multimodal biometric database to obtain a matching result;
a video stream acquisition module for, when the matching result indicates a successful match, continuing to acquire the video stream of the monitored area in real time and preprocessing it;
a target recognition module for performing target detection on the preprocessed video stream using a deep learning algorithm to identify tracking targets, the tracking targets including persons and vehicles;
a motion trajectory acquisition module for tracking the detected tracking targets across consecutive frames using a target tracking algorithm to obtain the motion trajectories of the tracking targets;
a multi-dimensional behavior feature vector construction module for extracting the target's posture information using a posture estimation algorithm and constructing a multi-dimensional behavior feature vector by combining the target's motion trajectory and spatiotemporal features;
an abnormal behavior rule library establishment module for establishing a preset abnormal behavior rule library, the abnormal behavior rule library containing a set of known abnormal behaviors;
an abnormal behavior recognition module for analyzing the identified behavior in real time based on the multi-dimensional behavior feature vector and the preset abnormal behavior rule library, to determine whether the target is performing abnormal behavior.
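The claims above recite several standard computer-vision and biometrics building blocks. The sketches that follow are illustrative only and are not part of the claimed invention: every function name, threshold, and library choice in them is an assumption made for exposition. First, the multimodal matching of claims 1-3 can be read as score-level fusion of per-modality CNN embeddings. In this minimal Python sketch, `embed()` is a placeholder for any trained fingerprint/face/iris encoder, and the 0.8 cosine-similarity acceptance threshold is invented:

```python
import numpy as np

def embed(sample: np.ndarray) -> np.ndarray:
    """Placeholder for a trained CNN encoder (assumed): raw sample -> unit feature vector."""
    v = sample.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def match_person(live: dict, database: dict, threshold: float = 0.8):
    """Score-level fusion over fingerprint/face/iris modalities.

    live:     {"fingerprint": array, "face": array, "iris": array}
    database: {person_id: {modality: stored unit feature vector}}
    Returns (person_id, fused_score) on success, else (None, best_score).
    """
    live_feats = {m: embed(x) for m, x in live.items()}
    best_id, best_score = None, -1.0
    for pid, stored in database.items():
        # Average the per-modality similarities that are available for this person.
        scores = [cosine(live_feats[m], stored[m]) for m in live_feats if m in stored]
        fused = float(np.mean(scores)) if scores else -1.0
        if fused > best_score:
            best_id, best_score = pid, fused
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```

Averaging per-modality similarities is only one reading of claim 3's "integrating" step; feature-level fusion, concatenating the three embeddings before comparison, would be equally consistent with the claim language.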
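For the deep-learning detection of claim 4, an off-the-shelf detector suffices as a sketch. The snippet below assumes torchvision's pretrained Faster R-CNN (weights are fetched on first use) and keeps only COCO's person and car classes; the 0.6 score threshold is likewise an assumption:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# COCO class ids used by the pretrained detector: 1 = person, 3 = car.
KEEP = {1: "person", 3: "vehicle"}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detect(frame_rgb, score_thresh: float = 0.6):
    """Run the detector on one RGB frame (H, W, 3 uint8 numpy array).

    Returns a list of (label, score, box) for persons and vehicles only.
    """
    out = model([to_tensor(frame_rgb)])[0]
    results = []
    for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
        if score >= score_thresh and int(label) in KEEP:
            results.append((KEEP[int(label)], float(score), box.tolist()))
    return results
```

In a deployment matching claim 4, the model would instead be fine-tuned on an escort-scenario dataset labeled with the persons and vehicles of interest.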
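Claim 5's optical-flow step maps naturally onto OpenCV's pyramidal Lucas-Kanade tracker. A minimal sketch, assuming a detector has already supplied an initial bounding box; the corner-detection parameters and the choice to report the centroid of surviving points are illustrative:

```python
import cv2
import numpy as np

def track_target(cap: cv2.VideoCapture, bbox):
    """Track points inside an initial bounding box with Lucas-Kanade optical flow.

    bbox = (x, y, w, h) from a detector; returns the per-frame centroid
    trajectory of the surviving points.
    """
    ok, frame = cap.read()
    if not ok:
        return []
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x, y, w, h = bbox
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255                      # restrict corners to the target box
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=5, mask=mask)
    trajectory = []
    while pts is not None and len(pts) > 0:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = nxt[status.ravel() == 1]               # keep points that were re-found
        if len(good) == 0:
            break
        trajectory.append(tuple(good.mean(axis=0).ravel()))  # centroid (cx, cy)
        prev_gray, pts = gray, good.reshape(-1, 1, 2)
    return trajectory
```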
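The multi-dimensional behavior feature vector of claim 6 can be assembled by concatenating posture, position, and motion statistics. This sketch assumes 2-D keypoints from any pose estimator, a centroid trajectory at least three frames long, and a known frame rate; the particular statistics chosen are assumptions:

```python
import numpy as np

def behavior_feature_vector(keypoints: np.ndarray, trajectory: np.ndarray,
                            fps: float = 25.0) -> np.ndarray:
    """Concatenate posture, trajectory, and spatiotemporal descriptors.

    keypoints:  (K, 2) pose keypoints for the current frame.
    trajectory: (T, 2) centroid positions over the last T >= 3 frames.
    Returns a fixed-length 1-D feature vector.
    """
    dt = 1.0 / fps
    center = keypoints.mean(axis=0)
    posture = (keypoints - center).ravel()            # keypoints relative to body center
    velocity = np.diff(trajectory, axis=0) / dt       # (T-1, 2) frame-to-frame velocity
    accel = np.diff(velocity, axis=0) / dt            # (T-2, 2) acceleration
    spatiotemporal = np.array([
        np.linalg.norm(velocity, axis=1).mean(),      # mean speed
        np.linalg.norm(velocity, axis=1).max(),       # peak speed
        np.linalg.norm(accel, axis=1).mean(),         # mean acceleration magnitude
    ])
    return np.concatenate([posture, trajectory[-1], spatiotemporal])
```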
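Finally, the rule-library matching of claims 7-8 reduces, in its simplest reading, to evaluating predicates over the extracted features and raising an alarm on any match. The rule names, feature keys, and thresholds below are invented for illustration:

```python
# Illustrative rule library: each rule pairs a behavior name with a predicate
# over the extracted features. All thresholds are made-up example values.
RULES = [
    ("running",         lambda f: f["mean_speed"] > 3.0),                       # assumed m/s calibration
    ("loitering",       lambda f: f["mean_speed"] < 0.1 and f["duration"] > 60.0),
    ("sudden_movement", lambda f: f["peak_accel"] > 8.0),
]

def detect_abnormal(features: dict) -> list[str]:
    """Return the names of all rules the current behavior matches."""
    return [name for name, pred in RULES if pred(features)]

# Example: a stationary target observed for two minutes.
alarm = detect_abnormal({"mean_speed": 0.05, "duration": 120.0, "peak_accel": 1.2})
if alarm:
    print("ALARM:", alarm)   # -> ALARM: ['loitering']
```

In practice the library would be learned from the abnormal-behavior samples of claim 7 rather than hand-written, but the matching loop stays the same.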
CN202411782328.3A 2024-12-05 2024-12-05 A method and system for identifying abnormal behavior in financial escort process Pending CN119559701A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411782328.3A CN119559701A (en) 2024-12-05 2024-12-05 A method and system for identifying abnormal behavior in financial escort process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411782328.3A CN119559701A (en) 2024-12-05 2024-12-05 A method and system for identifying abnormal behavior in financial escort process

Publications (1)

Publication Number Publication Date
CN119559701A (en) 2025-03-04

Family

ID=94747259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411782328.3A Pending CN119559701A (en) 2024-12-05 2024-12-05 A method and system for identifying abnormal behavior in financial escort process

Country Status (1)

Country Link
CN (1) CN119559701A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120689814A (en) * 2025-06-17 2025-09-23 天元电子技术(山东)有限公司 A method for identifying people with abnormal behavior based on multi-source surveillance image integration


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination