
CN120599681A - Touch screen driving method and system - Google Patents

Touch screen driving method and system

Info

Publication number: CN120599681A
Authority: CN (China)
Prior art keywords: information, feature, dynamic, face recognition, video frame
Legal status: Withdrawn
Application number: CN202510750449.8A
Other languages: Chinese (zh)
Inventors: 方定有, 张慧芳, 王世英
Current Assignee: Aerospace Optoelectronics Technology Shenzhen Co ltd
Original Assignee: Aerospace Optoelectronics Technology Shenzhen Co ltd
Application filed by Aerospace Optoelectronics Technology Shenzhen Co ltd
Priority to CN202510750449.8A
Publication of CN120599681A

Landscapes

  • Collating Specific Patterns (AREA)

Abstract


The present application provides a touch screen driving method and system applicable to the field of image processing technology. The method includes: performing dynamic face recognition calculation on user face video frame information to obtain a dynamic face recognition probability; when the dynamic face recognition probability is greater than or equal to a dynamic face probability threshold, generating a dynamic face video frame; performing multi-angle feature extraction and analysis on the dynamic face video frame to generate multiple dynamic face recognition features for matching against the registered user's face features to generate face feature matching information; and, in response to the generation of the feature matching information, generating a touch screen driving signal and sending it to the terminal where the touch screen is located, so that the terminal performs touch screen driving processing. The present application effectively distinguishes real dynamic faces from static photos, preventing criminals from using static photos to activate the touch screen, and effectively recognizes faces at multiple angles, such as frontal and side faces and the chin, increasing the convenience with which users activate the touch screen.

Description

Touch screen driving method and system
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a touch screen driving method and system.
Background
With the continuous progress of science and technology, intelligent devices are becoming increasingly popular, and touch screens are widely applied in various scenes as important human-computer interaction interfaces, from everyday intelligent door locks and smartphones to equipment such as automobile central control units. Throughout the development of touch screen driven unlocking, the pursuit of convenience and security, from the original simple mechanical key, through digital coded locks and radio-frequency keys, to today's intelligent keys and face recognition, has pushed touch screen driven unlocking technology to innovate continuously.
In the prior art, common face recognition schemes for unlocking a touch screen mostly focus on recognizing the front face: features are extracted from an acquired face image and matched against pre-stored registered user face features to judge whether to drive the touch screen and realize unlocking.
However, in actual use scenes, such schemes recognize the various face poses of a user poorly, which greatly limits the convenience of using the touch screen device; their ability to distinguish a photo from a real dynamic face is also weak, so the risk of a lawless person attempting to start the touch screen with a static photo cannot be effectively resisted, reducing the security of the device and of user data.
Disclosure of Invention
In view of the above, the embodiments of the present application provide a touch screen driving method and system, aiming to solve the problems in the prior art that the recognition of various face poses of a user is weak, which reduces the convenience of using a touch screen device, and that the risk of a lawless person starting the touch screen with a static photo cannot be resisted, making it difficult to improve the security of the touch screen device and of user data.
A first aspect of an embodiment of the present application provides a touch screen driving method, including:
acquiring a plurality of pieces of user face video frame information;
carrying out dynamic face recognition calculation on a plurality of pieces of user face video frame information to obtain a plurality of pieces of dynamic face recognition probability information, wherein the dynamic face recognition probability information corresponds to the user face video frame information one by one;
when the dynamic face recognition probability information is larger than or equal to a preset dynamic face probability threshold value, taking the user face video frame information corresponding to the dynamic face recognition probability as dynamic face video frame information;
performing multi-angle feature extraction and analysis processing on the dynamic face video frame information to generate a plurality of pieces of dynamic face recognition feature information;
performing matching processing according to the dynamic face recognition feature information and preset registered user face feature information to generate face feature matching information;
and generating a touch screen driving signal in response to the generation of the face feature matching information, and sending the touch screen driving signal to a terminal where the touch screen is located so as to carry out touch screen driving processing through the terminal where the touch screen is located.
A second aspect of an embodiment of the present application provides a touch screen driving system, including:
The user face video frame information acquisition module is used for acquiring a plurality of user face video frame information;
The dynamic face recognition probability information generation module is used for carrying out dynamic face recognition calculation on the plurality of user face video frame information to obtain a plurality of dynamic face recognition probability information, wherein the dynamic face recognition probability information corresponds to the user face video frame information one by one;
The dynamic face video frame information determining module is used for taking the user face video frame information corresponding to the dynamic face recognition probability as dynamic face video frame information when the dynamic face recognition probability information is larger than or equal to a preset dynamic face probability threshold value;
The dynamic face recognition feature information generation module is used for carrying out multi-angle feature extraction and analysis processing on the plurality of dynamic face video frame information to generate a plurality of dynamic face recognition feature information;
The face feature matching information generation module is used for carrying out matching processing according to the dynamic face recognition feature information and the face feature information of the preset registered user to generate face feature matching information;
The touch screen driving signal generation module is used for responding to the generation of the face feature matching information, generating a touch screen driving signal and sending the touch screen driving signal to a terminal where the touch screen is located so as to carry out touch screen driving processing through the terminal where the touch screen is located.
Compared with the prior art, the embodiments of the present application have the following advantages: by acquiring a plurality of pieces of user face video frame information and performing dynamic face recognition calculation, a plurality of pieces of dynamic face recognition probability information can be accurately obtained, effectively distinguishing a real dynamic face from a static photo, preventing a lawless person from starting the touch screen with a static photo, and greatly improving the security of the device and of user data. By comparing the dynamic face recognition probability information with the dynamic face probability threshold, dynamic face video frame information is screened out, ensuring the validity of face identification. Multi-angle feature extraction and analysis are then performed on the dynamic face video frame information, so that not only the front face but also faces at other angles, such as the side face and the chin, can be effectively recognized, breaking the single-viewing-angle limitation of the prior art, enabling the user to start the touch screen smoothly from various angles, and markedly improving the convenience of using the touch screen device. Finally, the extracted dynamic face recognition feature information is matched with the preset registered user face feature information, and a touch screen driving signal is generated according to the matching result, realizing accurate, reliable, and efficient driving of the touch screen.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic implementation flowchart of a touch screen driving method according to an embodiment of the present application;
Fig. 2 is a schematic implementation flowchart of a touch screen driving method according to a second embodiment of the present application;
Fig. 3 is a schematic implementation flowchart of a touch screen driving method according to a third embodiment of the present application;
Fig. 4 is a schematic implementation flowchart of a touch screen driving method according to a fourth embodiment of the present application;
Fig. 5 is a schematic implementation flowchart of a touch screen driving method according to a fifth embodiment of the present application;
Fig. 6 is a schematic implementation flowchart of a touch screen driving method according to a sixth embodiment of the present application;
Fig. 7 is a schematic implementation flowchart of a touch screen driving method according to a seventh embodiment of the present application;
Fig. 8 is a schematic implementation flowchart of a touch screen driving method according to an eighth embodiment of the present application;
Fig. 9 is a schematic implementation flowchart of a touch screen driving method according to a ninth embodiment of the present application;
Fig. 10 is a schematic structural diagram of a touch screen driving system according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical scheme of the application, the following description is made by specific examples.
Fig. 1 shows a flowchart of an implementation of a touch screen driving method according to an embodiment of the present application, which is described in detail below:
step S101, acquiring a plurality of user face video frame information.
In this embodiment, the user face video frame information may refer to single-frame image information in a continuous video containing the facial features of a user. It records the facial states of the user at different times and different angles within a certain period, such as front-face, side-face, lowered-head, or raised-head poses, and may be obtained by shooting with a camera installed in the device where the touch screen is located or in a device separate from the touch screen.
In this embodiment, optionally, a camera may be mounted on the target terminal, such as an intelligent door lock or a mobile device. When the user triggers an authentication operation, for example by approaching the device or clicking an unlock button, the camera acquires a video stream containing the user's facial information in real time at a manually preset frame rate, and the continuous video stream is then disassembled into a plurality of pieces of single-frame image information by a video decoding technique, to serve as the plurality of pieces of user face video frame information. The manually preset frame rate may be, for example, 25 frames per second.
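A minimal sketch of this acquisition step, assuming OpenCV as the capture and decoding library; the camera index, frame count, and helper name are illustrative, not from the patent:

```python
import cv2

def capture_face_video_frames(camera_index=0, max_frames=50):
    """Collect single-frame images as 'user face video frame information' (step S101)."""
    cap = cv2.VideoCapture(camera_index)
    cap.set(cv2.CAP_PROP_FPS, 25)  # the manually preset frame rate of 25 frames/second
    frames = []
    while len(frames) < max_frames:
        ok, frame = cap.read()  # each read yields one decoded single-frame image
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames
```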
Step S102, carrying out dynamic face recognition calculation on a plurality of pieces of user face video frame information to obtain a plurality of pieces of dynamic face recognition probability information, wherein the dynamic face recognition probability information corresponds to the user face video frame information one by one.
In this embodiment, the method may first examine the subtle changes across the plurality of pieces of user face video frame information, analyzing dynamic facial behaviors such as changes in blink frequency, expression changes, and head shaking or rotation, while also capturing how static features such as facial texture and contour vary across angles. One or more information bases combining facial dynamic behaviors with static features can be constructed by collecting a large amount of real face video information. The dynamic behavior features in the user face video frame information, such as blink changes, expression changes, and head shaking or rotation, are then compared with the combinations of dynamic behavior features and static features, such as facial texture and contour, in the information bases, so as to evaluate the degree to which each piece of user face video frame information conforms to the dynamic and feature rules of a real face. This degree is computed for each piece of user face video frame information as its corresponding dynamic face recognition probability information, reflecting the likelihood that the face in the frame is a real dynamic face, thereby effectively distinguishing a real dynamic face from static photos and other spoofed interference information.
Step S103, judging whether the dynamic face recognition probability information is larger than or equal to a preset dynamic face probability threshold, if so, taking the user face video frame information corresponding to the dynamic face recognition probability as dynamic face video frame information, and if not, taking the user face video frame information corresponding to the dynamic face recognition probability as static face video frame information.
In this embodiment, the preset dynamic face probability threshold may be set manually, for example to a value of 0.7 or 0.8. When the dynamic face recognition probability information is greater than or equal to the preset dynamic face probability threshold, the corresponding user face video frame information is, with high probability, real dynamic face information rather than a photo of an authorized user exploited by a lawless person; that user face video frame information is then used as dynamic face video frame information for subsequent further feature extraction and recognition, so as to perform the driven unlocking operation of the touch screen. When the dynamic face recognition probability information is smaller than the preset dynamic face probability threshold, the corresponding user face video frame information is, with high probability, static face image information, which a lawless person may be using in an attempt to start the touch screen illegally with a static face photo; that user face video frame information is then treated as static face video frame information and is not used for subsequent feature extraction and recognition.
In this embodiment, when the dynamic face recognition probability information is smaller than the preset dynamic face probability threshold, the corresponding user face video frame information is treated as static face video frame information; a security early-warning mechanism is immediately triggered in response to this determination, and the driven unlocking request associated with the static face video frame information is refused, preventing a lawless person from passing verification by forgery means such as a static photo. Meanwhile, information such as the time of the abnormal verification behavior, the device identification, and face feature fragments can be recorded to form a security log for subsequent security auditing and tracing. If static face information is detected in repeated verification attempts, the user can be required to verify through other means such as a password or a fingerprint; this protects the terminal where the touch screen is located and the user's privacy and property, while also preventing the user from being unable to drive and unlock the touch screen normally when dynamic face information is erroneously detected as static.
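The threshold screening of step S103 can be expressed as a short sketch; the 0.7 threshold is one of the example values above, and the function name and log format are assumptions:

```python
DYNAMIC_FACE_PROB_THRESHOLD = 0.7  # example value from the text (0.8 is also suggested)

def screen_dynamic_frames(frames, probabilities, threshold=DYNAMIC_FACE_PROB_THRESHOLD):
    """Split frames into dynamic and static per step S103; static frames feed the security log."""
    dynamic_frames, security_log = [], []
    for index, (frame, p) in enumerate(zip(frames, probabilities)):
        if p >= threshold:
            dynamic_frames.append(frame)  # kept for further feature extraction
        else:
            # static face suspected: refuse unlocking and record for auditing
            security_log.append({"frame_index": index, "probability": p})
    return dynamic_frames, security_log
```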
Step S104, multi-angle feature extraction and analysis processing are carried out on the dynamic face video frame information to generate dynamic face recognition feature information.
In this embodiment, the edges and texture details of each part of the dynamic face video frame information may first be enhanced, so that the face contour, facial lines, and skin texture become clearer and more prominent, facilitating the subsequent accurate capture of features. The focus is then placed on the parts that most affect the recognition result, such as the eyes, nose, and mouth; these key parts may be given higher feature weights, while feature interference from relatively secondary areas such as the cheeks may be reduced. Comprehensive and detailed feature extraction is performed on the face video frames from different angles, covering multiple dimensions including the overall shape of the face, the specific shapes of the facial features, and unique texture patterns. Finally, the extracted multi-angle features are deeply fused and analyzed, repeated and redundant information is removed, and the most representative and discriminative feature combinations are extracted, generating a plurality of pieces of dynamic face recognition feature information and providing accurate and effective data support for subsequent identity recognition and matching.
Step S105, performing matching processing according to the plurality of dynamic face recognition feature information and the preset registered user face feature information to generate face feature matching information.
In this embodiment, the preset registered user face feature information may refer to a face feature data set pre-stored in a computer for identifying a specific user's identity. It may comprehensively record multi-dimensional features of the registered user's face, covering the overall contour shape of the face, specific morphological parameters of the five sense organs, and how the facial features change across different angles. When the user registers in the computer system, multiple groups of face images at different angles and with different expressions may be shot through a designated image acquisition device; for example, the user may be required to hold front-face, left-side, right-side, and 45-degree poses, and face images in various states such as a natural expression, a smile, or raised eyebrows are recorded, ensuring that the acquired images fully cover the dynamic and static features of the user's face. The face feature vectors of the same user at different angles and with different expressions are then integrated, repeated or redundant features are rejected, and the core feature data that can uniquely identify the user's identity is retained, finally forming the preset registered user face feature information, which is stored in a database in a specific face identification format for subsequent use. The method can compare whether the contour shapes and size proportions of the dynamic face recognition feature information and the registered user face feature information are similar at different angles, and during the comparison a specific screening and strengthening method may be adopted to highlight the regions that play a key role in identity recognition, such as the eyes and nose. The cosine similarity between the dynamic face recognition feature information and the registered user face feature information can be calculated: when the cosine similarity is greater than a manually set similarity threshold, the dynamic face recognition feature information belongs to the face features of the registered user and is taken as face feature matching information, and the user's driven unlocking request for the touch screen is responded to; when the cosine similarity is less than or equal to the manually set similarity threshold, the dynamic face recognition feature information does not belong to the face features of the registered user, and the request is refused. The manually set similarity threshold may take a value of 0.8.
The specific screening and strengthening method may combine multidimensional information in the dynamic face video frames, such as color information, depth information, and optical flow information, for cross-verification and reinforcement. For example, the color information can highlight the uniformity and blood-flow-induced color changes of the facial complexion; the depth information can be used to analyze the three-dimensional structure of the face, such as the height of the nose bridge and the curvature of the mandible; and the optical flow information can track the fine movement trajectories of the facial muscles. Fusing the color, depth, and optical flow information comprehensively builds the registered user's feature map, enhancing robustness to interference factors such as makeup and illumination changes.
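A minimal sketch of the cosine-similarity matching in step S105, assuming both features are fixed-length numeric vectors; the 0.8 threshold is the example value given above:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # the manually set similarity threshold from the text

def match_face_features(dynamic_feature, registered_feature, threshold=SIMILARITY_THRESHOLD):
    """Return (matched, similarity) for one dynamic feature against one registered feature."""
    cos_sim = float(np.dot(dynamic_feature, registered_feature)
                    / (np.linalg.norm(dynamic_feature) * np.linalg.norm(registered_feature)))
    # strictly greater than the threshold counts as belonging to the registered user
    return cos_sim > threshold, cos_sim
```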
And step S106, generating a touch screen driving signal in response to the generation of the face feature matching information, and sending the touch screen driving signal to a terminal where the touch screen is located so as to perform touch screen driving processing through the terminal where the touch screen is located.
In this embodiment, it can be understood that the terminal where the touch screen is located may be the same as, or different from, the device that generates the touch screen driving signal. The case where they are the same can apply to an intelligent door lock in which the face recognition module and the touch screen are integrated: when the system inside the intelligent door lock completes the recognition and matching of the user face video frame information and generates the face feature matching information, the touch screen driving signal can be generated directly within the door lock's system and then immediately act on the touch screen on the door lock, activating and lighting it so that the user can directly perform subsequent operations on the touch screen, such as entering a password or selecting function options. The same-device case can also apply to intelligent vehicle-mounted central control equipment that has a face recognition function and is equipped with touch screen hardware: when such equipment completes the dynamic face recognition calculation, multi-angle feature extraction and analysis, and matching of the face video frame information of a driver or passenger, and successfully generates the face feature matching information, the touch screen driving signal is generated inside the equipment and immediately drives its touch screen to wake it from a dormant state, displaying function options such as navigation and multimedia playback for the user to operate. The case where the terminal where the touch screen is located differs from the device that generates the touch screen driving signal can apply to the linkage between the user management system of an office building and an intelligent conference panel in an office: the user management system runs on a server in the office building, and when a user requests to enter the building, the system processes the user's face video frame information and generates the face feature matching information, then generates the touch screen driving signal through its server and transmits it over the building's wireless communication network, such as a local area network, to the intelligent conference panel. When the panel receives the touch screen driving signal, its touch screen is automatically driven to turn on and enter a specific welcome or conference preparation interface, so that the user can directly write notes and display material on the conference panel's touch screen without additional operations, realizing efficient collaborative work between different devices based on face recognition matching results.
The case where the terminal of the touch screen differs from the device generating the touch screen driving signal can also apply to the linkage between the intelligent access control system of a residential district and the elevator touch screens in its buildings. For example, the access control system at the entrance of the district collects a visitor's face video frame information and, after the processing described above, generates the face feature matching information and then the touch screen driving signal, which is transmitted by wireless communication to the terminal of the elevator touch screen in a specific building. After that terminal receives the touch screen driving signal, the touch screen is automatically driven to display a floor selection interface, so that the visitor can directly select the target floor on the touch screen without undergoing identity verification again in the elevator, realizing intelligent interaction between different devices based on face feature matching results.
In this embodiment, it may be understood that when the terminal where the touch screen is located is the same as the terminal that generates the touch screen driving signal, the signal generation mechanism is triggered immediately in response to the generation of the face feature matching information: user authentication is determined to have passed according to the face feature matching information, a corresponding touch screen driving signal is generated and transmitted inside the terminal, and driving processing is performed so that the terminal carries out operation flows such as unlocking the screen and entering the operation interface. When the terminal where the touch screen is located differs from the terminal that generates the touch screen driving signal, the face feature matching information is packaged by the generating device and sent over wireless communication to the terminal where the touch screen is located; that terminal receives and processes the packaged face feature matching information, and upon identifying valid face feature matching information, performs driving processing on the touch screen according to the touch screen driving signal, realizing driven unlocking of the touch screen.
According to the touch screen driving method provided by this embodiment of the application, a plurality of pieces of dynamic face recognition probability information can be accurately obtained by acquiring a plurality of pieces of user face video frame information and performing dynamic face recognition calculation, effectively distinguishing a real dynamic face from a static photo, preventing a lawless person from starting the touch screen with a static photo, and greatly improving the security of the device and of user data. By comparing the dynamic face recognition probability information with the dynamic face probability threshold, the dynamic face video frame information is screened out, ensuring the validity of face identification. Multi-angle feature extraction and analysis are then performed on the dynamic face video frame information, so that not only the front face but also faces at other angles, such as the side face and the chin, can be effectively recognized, breaking the single-viewing-angle limitation of the prior art, enabling the user to start the touch screen smoothly from various angles and markedly improving the convenience of using the touch screen device. Finally, the extracted dynamic face recognition feature information is matched with the preset registered user face feature information, and the touch screen driving signal is generated according to the matching result, improving the accuracy of identity verification and the security and reliability of the touch screen.
Fig. 2 shows a flowchart for implementing the touch screen driving method according to the second embodiment of the present application, which is different from the first embodiment in that the step S102 specifically includes:
Step S201, based on the plurality of user face video frame information, generating a plurality of initial face recognition frames according to the preset face recognition frame height information and the preset face recognition frame width information.
In this embodiment, the preset face recognition frame height information and the preset face recognition frame width information may be manually set. It may be understood that the user face video frame information may be presented in a form of a plurality of pixels, and then a plurality of face recognition frames may be dynamically generated, for capturing the pixel content in the user face video frame information of each frame, where the generated plurality of initial face recognition frames may include the pixel content in the user face video frame information captured in the frame, or may not include the pixel content in the user face video frame information captured in the frame.
Step S202, calculating the logical distance among a plurality of initial face recognition frames to obtain a plurality of face recognition frame distance information.
In this embodiment, the spacing between the initial face recognition frames is quantified by calculating the logical distance between them. The initial face recognition frames are first paired two by two. The distance between the center points of two initial face recognition frames may be calculated and used as the face recognition frame distance information. Alternatively, the minimum distance between the frame boundaries of the two initial face recognition frames may be calculated: the distances between the two frames in the upper, lower, left, and right directions are found, and the minimum distance value is selected as the face recognition frame distance information between the two frames. As a further alternative, the reciprocal of the overlap-area proportion of the two initial face recognition frames may represent the logical distance: the area of the overlapping part of the two frames is calculated, the areas of the two frames themselves are calculated, the overlap area is divided by the sum of the two frame areas to obtain the overlap-area proportion, and the reciprocal of this proportion is used as the face recognition frame distance information between the two initial face recognition frames.
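The three logical-distance variants can be sketched as follows, with each recognition frame given as an (x, y, width, height) tuple; the helper names, the tuple layout, and the boundary-gap reading are assumptions:

```python
import math

def center_distance(a, b):
    """Distance between the center points of two recognition frames."""
    ax, ay = a[0] + a[2] / 2.0, a[1] + a[3] / 2.0
    bx, by = b[0] + b[2] / 2.0, b[1] + b[3] / 2.0
    return math.hypot(ax - bx, ay - by)

def min_boundary_distance(a, b):
    """Minimum gap between frame boundaries (one reading of the text's description)."""
    gap_x = max(0.0, max(a[0], b[0]) - min(a[0] + a[2], b[0] + b[2]))  # horizontal gap
    gap_y = max(0.0, max(a[1], b[1]) - min(a[1] + a[3], b[1] + b[3]))  # vertical gap
    return min(gap_x, gap_y)

def inverse_overlap_ratio(a, b):
    """Reciprocal of (overlap area / sum of the two frame areas)."""
    ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    overlap = ix * iy
    total = a[2] * a[3] + b[2] * b[3]
    return math.inf if overlap == 0 else total / overlap  # larger means farther apart
```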
Step S203, judging whether the face recognition frame distance information is larger than a preset face recognition frame distance threshold, if so, generating a plurality of intermediate face recognition frames according to the initial face recognition frames corresponding to the face recognition frame distance information, and if not, deleting the initial face recognition frames corresponding to the face recognition frame distance information.
In this embodiment, when the face recognition frame distance information is greater than the preset face recognition frame distance threshold, the two corresponding initial face recognition frames are far apart and can be used as intermediate face recognition frames for extracting pixels from the user face video frame information in the subsequent steps. When the face recognition frame distance information is smaller than or equal to the preset face recognition frame distance threshold, the two corresponding initial face recognition frames are close together and need not serve as intermediate face recognition frames; the initial face recognition frames corresponding to that distance information are therefore deleted, to avoid introducing redundant information into the subsequent calculation.
Step S204, carrying out pixel extraction on the user face video frame information according to the intermediate face recognition frames to obtain a plurality of pieces of user face video frame pixel information.
In this embodiment, the pixel ranges delimited by the intermediate face recognition frames within the user face video frame information are extracted and used as the plurality of pieces of user face video frame pixel information. It can be appreciated that each piece of user face video frame pixel information corresponds one-to-one with an intermediate face recognition frame.
Step S205, performing dynamic face recognition calculation on the pixel information of the plurality of user face video frames to obtain a plurality of dynamic face recognition probability information.
In this embodiment, the plurality of pieces of user face video frame pixel information may be finely disassembled according to the spatial distribution and dynamic variation characteristics of the collected registered user face features. The user face video frame pixel information is deconstructed into a plurality of pixel sub-information units with explicit semantics, each focusing on a different key facial area or dynamic expression; for example, a first pixel sub-information unit characterizes the subtle changes of the eyes, a second characterizes the motion features of the mouth, and a third characterizes the overall morphology of the face contour. For each pixel sub-information unit, a large amount of previously collected and validated user face feature data is introduced as a reference standard for comparison, and the deep features in each sub-information unit can be analyzed from multiple angles such as morphology, spatial position relations, and dynamic timing changes. The similarity between each sub-information unit and the reference features can then be computed, for example as a cosine similarity or a normalized distance, and the results combined and normalized into the interval from 0 to 1 to obtain the plurality of pieces of dynamic face recognition probability information.
According to the touch screen driving method provided by this embodiment, potential target areas in the user face video frame information are rapidly and accurately located by generating a plurality of initial face recognition frames; the intermediate face recognition frames meeting the face recognition frame distance threshold condition are then screened out by calculating the logical distance between the initial frames, effectively eliminating overlapping or overly close invalid areas, improving the pertinence and accuracy of the subsequent pixel feature extraction, and reducing interference from redundant information. Pixel feature extraction is then performed on the user face video frame information with the intermediate face recognition frames as a reference, ensuring the integrity and validity of the key features, enhancing the recognition of dynamic face features, effectively resisting attacks by lawless persons, and improving the security and reliability of driven unlocking of the touch screen.
Fig. 3 shows a flowchart for implementing the touch screen driving method according to the third embodiment of the present application, which is different from the second embodiment in that the step S205 specifically includes:
Step S301, generating a plurality of pieces of user face video frame mask tensor information according to the plurality of pieces of user face video frame pixel information and a preset face video frame pixel multidimensional mask tensor.
In this embodiment, the preset face video frame pixel multidimensional mask tensor may be set manually and may be designed as a feature tensor with a three-dimensional structure, so as to perform feature extraction and screening on the user face video frame information in the spatial and temporal dimensions simultaneously. Specifically, a convolution calculation is performed between the face video frame pixel multidimensional mask tensor and the user face video frame pixel information, and the convolution result is used as the user face video frame mask tensor information.
Step S302, generating a plurality of user face video frame mask tensor variables according to the plurality of user face video frame mask tensor information and preset face video frame mask tensor bias information.
In this embodiment, the preset face video frame mask tensor bias information may be set manually. The user face video frame mask tensor information and the face video frame mask tensor bias information can be added, and the result used as the user face video frame mask tensor variable, avoiding calculation errors caused by the mask tensor information exceeding the numeric range representable by the computing system.
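Steps S301 and S302 amount to a convolution with the preset mask tensor plus a bias. A sketch, assuming a grayscale 2-D pixel array for brevity (the patent describes a three-dimensional tensor) and illustrative mask and bias values:

```python
import numpy as np
from scipy.ndimage import convolve

MASK_TENSOR = np.ones((3, 3)) / 9.0  # preset mask tensor (assumed 3x3 averaging form)
MASK_BIAS = 0.01                     # preset mask tensor bias information (assumed value)

def frame_mask_tensor_variable(frame_pixels):
    """Convolve the frame pixels with the mask tensor (S301), then add the bias (S302)."""
    mask_info = convolve(frame_pixels.astype(np.float64), MASK_TENSOR, mode="nearest")
    return mask_info + MASK_BIAS
```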
Step S303, counting the user face video frame mask tensor variables corresponding to each piece of user face video frame information to obtain a plurality of pieces of user face video single-frame mask tensor information and corresponding user face video single-frame time information.
In this embodiment, it may be understood that one piece of user face video frame information refers to single frame information, and then all user face video frame mask tensor variables in the one piece of user face video frame information are counted to be used as a plurality of pieces of user face video single frame mask tensor information corresponding to the one piece of user face video frame information, and shooting time or acquiring time of the user face video frame information is synchronously extracted to be used as user face video single frame time information.
Step S304, calculating the difference value of each single-frame mask tensor information of the face video of the user according to the single-frame time information of the face video of the user, and obtaining a plurality of inter-frame mask tensor difference information of the face video of the user.
In this embodiment, the difference between the single-frame mask tensor information of the user face video corresponding to the two frames of the user face video frame information is calculated, that is, the difference between the single-frame mask tensor information of the user face video corresponding to the user face video frame information of two adjacent frames is calculated, and the calculated difference is used as the mask tensor difference information between the user face video frames. It can be understood that two frames of user face video frame information can be calculated to obtain mask tensor difference information between two frames of user face video frames, and then more than two frames of multi-frame user face video frame information can be calculated to obtain mask tensor difference information between a plurality of user face video frames.
Step S305, generating a plurality of user face video frame mask feature information according to the plurality of user face video inter-frame mask tensor difference information and the plurality of user face video frame mask tensor information.
In this embodiment, the sum of the user face video inter-frame mask tensor difference information and the corresponding user face video frame mask tensor information may be used as the user face video frame mask feature information.
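Steps S304 and S305 can be sketched as adjacent-frame differences summed back onto the per-frame tensors; the function name is an assumption:

```python
import numpy as np

def mask_feature_information(frame_mask_tensors):
    """frame_mask_tensors: per-frame mask tensor arrays, already ordered by frame time."""
    features = []
    for i in range(1, len(frame_mask_tensors)):
        diff = frame_mask_tensors[i] - frame_mask_tensors[i - 1]  # inter-frame tensor difference (S304)
        features.append(frame_mask_tensors[i] + diff)             # static plus dynamic component (S305)
    return features
```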
Step S306, carrying out dynamic face recognition calculation on the plurality of pieces of user face video frame mask feature information to obtain a plurality of pieces of dynamic face recognition probability information.
In this embodiment, matching calculation may be performed on mask feature information of each user face video frame and registered user face features collected in advance, similarity of feature information in space structure and time dynamic change may be quantitatively evaluated in the matching process, and matching degree between current user face video frame mask feature information and registered user face features may be calculated through multi-level and multi-dimensional analysis, and the matching degree is used as dynamic face recognition probability information. The matching degree can be quantitatively calculated by using Euclidean distance.
According to the touch screen driving method provided by this embodiment, spatio-temporal feature extraction is performed on the user face video frame pixel information through the preset face video frame pixel multidimensional mask tensor, effectively capturing both the texture details of the pixel information in the spatial dimension and its dynamic changes in the temporal dimension, and comprehensively preserving the three-dimensional face feature information. The face video frame mask tensor bias information is further introduced for dimensional calibration, ensuring the accuracy and reliability of the dynamic face recognition probability information. The tensor information and time information of each user face video frame are then tallied, and dynamic features are extracted by calculating inter-frame differences, effectively enhancing the recognition of dynamic behaviors such as face pose, blinking, expression changes, and head rotation or shaking, and breaking the viewing-angle limitations of the prior art. The organic combination of the inter-frame mask tensor difference information and the frame mask tensor information yields a feature expression that unites static structure with dynamic change, providing more discriminative input for the dynamic face recognition calculation and markedly improving the security and reliability of driven unlocking of the touch screen.
Fig. 4 shows a flowchart for implementing the touch screen driving method according to the fourth embodiment of the present application, which is different from the third embodiment in that the step S306 specifically includes:
Step S401, flattening and splicing the plurality of user face video frame mask feature information according to the user face video single-frame time information to generate a plurality of user face video frame mask feature sequences, wherein the user face video frame mask feature sequences correspond one-to-one with the user face video single-frame time information.
In this embodiment, it may be understood that each piece of user face video frame mask feature information may be in a multi-dimensional matrix form. Flattening the mask feature information means converting each multi-dimensional matrix into a one-dimensional array; the one-dimensional data converted from each piece of mask feature information are then spliced together according to the time order given by the user face video single-frame time information, and the spliced array is used as the user face video frame mask feature sequence.
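A sketch of step S401, assuming each frame contributes a list of multi-dimensional mask feature matrices and a capture timestamp:

```python
import numpy as np

def build_frame_feature_sequences(per_frame_features, frame_times):
    """Flatten and splice each frame's mask feature matrices, ordered by single-frame time."""
    paired = sorted(zip(frame_times, per_frame_features), key=lambda pair: pair[0])
    # one spliced one-dimensional sequence per frame, in time order
    return [np.concatenate([matrix.ravel() for matrix in feats]) for _, feats in paired]
```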
Step S402, calculating the average value of a plurality of the face video frame mask feature sequences of the user to obtain average value information of the face video frame mask feature sequences.
In this embodiment, the average value of the mask feature sequence of each user face video frame may be calculated and obtained, and used as the average value information of the mask feature sequence of the face video frame, to perform the dimension reduction processing on the mask feature sequence of the face video frame, and to effectively filter the outlier introduced by the local feature fluctuation or noise.
Step S403, obtaining a plurality of facial video frame mask feature sequence spatial transformation information according to the average value information of the plurality of facial video frame mask feature sequences, the preset facial video frame mask feature sequence transformation coefficient matrix and the preset facial video frame mask feature sequence transformation displacement matrix.
In this embodiment, the preset face video frame mask feature sequence transform coefficient matrix and the preset face video frame mask feature sequence transform displacement matrix may be set manually. The average value information of the mask feature sequence of the face video frame is multiplied by the transformation coefficient matrix of the mask feature sequence of the face video frame, then the multiplication result is added with the transformation displacement matrix of the mask feature sequence of the face video frame, and the addition result is used as the spatial transformation information of the mask feature sequence of the face video frame.
Step S404, performing value domain mapping processing on the face video frame mask feature sequence spatial transformation information to obtain a plurality of pieces of dynamic face recognition probability information.
In this embodiment, the Sigmoid function may be used to perform value domain mapping processing on all the face video frame mask feature sequence spatial transformation information, that is, the face video frame mask feature sequence spatial transformation information is mapped into a (0, 1) interval, and the face video frame mask feature sequence spatial transformation information after the value domain mapping processing is used as dynamic face recognition probability information.
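Steps S402 through S404 reduce to a mean, an affine transform, and a Sigmoid. A sketch in which the preset coefficient and displacement matrices are collapsed to scalars for brevity (an assumption; the patent describes matrices):

```python
import math
import numpy as np

TRANSFORM_COEFF = 2.0          # preset transform coefficient matrix (assumed scalar stand-in)
TRANSFORM_DISPLACEMENT = -1.0  # preset transform displacement matrix (assumed scalar stand-in)

def dynamic_face_probability(feature_sequence):
    mean_value = float(np.mean(feature_sequence))                        # step S402
    transformed = mean_value * TRANSFORM_COEFF + TRANSFORM_DISPLACEMENT  # step S403
    return 1.0 / (1.0 + math.exp(-transformed))                          # step S404: Sigmoid into (0, 1)
```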
According to the touch screen driving method provided by this embodiment, flattening and splicing the user face video frame mask feature information converts the multi-dimensional mask features into ordered low-dimensional sequences, effectively integrating the spatial information of the mask features while completely preserving the dynamic change process of the face features. Calculating the average of the mask feature sequences reduces the processing dimension and complexity and effectively filters noise and abnormal values, improving the stability and reliability of face feature identification. The spatial transformation applied through the preset transform coefficient matrix and transform displacement matrix increases the spatial distance between the mask features and enhances the distinguishability between them, improving the accuracy and robustness of recognition, preventing a lawless person from successfully driving the touch screen with a static photo, and improving the security of the device and of user information.
Fig. 5 shows a flowchart for implementing a touch screen driving method according to a fifth embodiment of the present application, which is different from the first embodiment in that the step S104 specifically includes:
Step S501, scaling the dynamic face video frame information based on a plurality of preset dynamic face video frame scaling granularity information to obtain a plurality of dynamic face video frame scaling information.
In this embodiment, the preset dynamic face video frame scaling granularity information may be set manually. Scaling processing at the various scaling granularities is performed on the dynamic face video frame information according to the plurality of pieces of preset scaling granularity information, generating dynamic face video frame scaling information at multiple resolutions and sizes. This covers multi-level face features so as to capture morphological changes at multiple angles, such as side-face, lowered-head, and raised-head poses, improving the adaptability and accuracy of face recognition from complex viewing angles.
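A sketch of the multi-granularity scaling in step S501, assuming OpenCV resizing and illustrative scale factors:

```python
import cv2

SCALING_GRANULARITIES = [1.0, 0.75, 0.5, 0.25]  # preset scaling granularity information (assumed values)

def scale_dynamic_frame(frame):
    """Produce multi-resolution versions of one dynamic face video frame."""
    height, width = frame.shape[:2]
    return [cv2.resize(frame, (max(1, int(width * s)), max(1, int(height * s))))
            for s in SCALING_GRANULARITIES]
```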
Step S502, based on a plurality of preset dynamic face video frame feature granularity information and preset dynamic face video frame feature extraction interval information, performing feature extraction processing on a plurality of dynamic face video frame scaling information to obtain a plurality of dynamic face video frame scaling feature information.
In this embodiment, the preset granularity information of the dynamic face video frame features and the preset interval information of the dynamic face video frame feature extraction may be set manually. The extraction granularity specified according to the dynamic face video frame feature granularity information can be used for extracting features at different spatial positions and at different intervals in the dynamic face video frame scaling information, wherein the intervals are determined by the dynamic face video frame feature extraction interval information, so that the extracted features are used as dynamic face video frame scaling feature information.
Step S503, performing value domain scaling processing on the plurality of dynamic face video frame scaling feature information to obtain a plurality of dynamic face video frame feature certainty information.
In this embodiment, the value range scaling processing may be performed on the dynamic face video frame scaling feature information by using a ReLU function, that is, the dynamic face video frame scaling feature information is mapped into a range from 0 to 1, and the value after the value range scaling processing is used as the certainty factor information of the dynamic face video frame feature.
Step S504, judging whether the dynamic face video frame feature certainty information is greater than or equal to a preset dynamic face video frame feature certainty threshold; if so, taking the dynamic face video frame scaling feature information corresponding to that certainty information as dynamic face recognition video frame information, and if not, skipping the corresponding dynamic face video frame scaling feature information.
In this embodiment, the preset dynamic face video frame feature certainty threshold may be set manually. When the certainty information is below the threshold, the corresponding dynamic face video frame scaling feature information is considered invalid: it cannot serve as dynamic face recognition video frame information and is therefore skipped.
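Steps S503 and S504 together amount to squashing each feature into a 0-to-1 certainty and keeping only the confident ones. A minimal sketch, with a hypothetical value standing in for the preset certainty threshold:

```python
import numpy as np

CERTAINTY_THRESHOLD = 0.6   # illustrative stand-in for the preset threshold

def filter_by_certainty(scaled_features):
    """Keep scaling features whose mean squashed certainty passes the threshold."""
    kept = []
    for feat in scaled_features:
        certainty = 1.0 / (1.0 + np.exp(-feat))   # sigmoid map into (0, 1)
        if certainty.mean() >= CERTAINTY_THRESHOLD:
            kept.append(feat)   # becomes dynamic face recognition video frame info
        # otherwise the feature is skipped, as in step S504
    return kept
```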
Step S505, performing parsing processing on the dynamic face recognition video frame information to generate a plurality of dynamic face recognition feature information.
In this embodiment, the dynamic face recognition video frame information may first be disassembled and divided into a plurality of dynamic face recognition sub-features according to the different characteristics of facial areas such as the eyes, nose and mouth. Each sub-feature is then subjected to relevance analysis: by evaluating the similarity and complementarity of the sub-features in spatial position, morphological structure and dynamic change, potential links between features are mined, for example how a change in the eye features correlates with head pose and mouth motion. In this way the cooperative change patterns of facial features across different angles and moments can be captured. Corresponding weights are then assigned to each sub-feature according to the correlation strength between the sub-features, highlighting the importance of each one. Finally, the weighted sub-features are added to the unweighted sub-features, i.e. a fusion processing, integrating them into complete dynamic face recognition feature information.
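A minimal sketch of the correlation-weighted, residual-style fusion described above; the cosine-similarity weighting is one plausible realization of the relevance analysis, not a formula fixed by the application.

```python
import numpy as np

def fuse_subfeatures(subfeats):
    """Weight each facial sub-feature by its correlation with the others,
    then add the weighted result back to the unweighted features."""
    F = np.stack(subfeats)                                  # (n_sub, dim)
    normed = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-8)
    corr = normed @ normed.T                                # pairwise cosine similarity
    weights = corr.sum(axis=1)
    weights = weights / weights.sum()                       # correlation strength
    return F + weights[:, None] * F                         # weighted + unweighted
```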
According to the touch screen driving method provided by this embodiment of the application, scaling the dynamic face video frame information generates scaled frame information covering multi-level face features, effectively capturing the morphological changes of side faces, lowered heads and other complex angles and enhancing the adaptability of recognition to face image information at different viewing angles. Accurate feature sampling of the scaled information through the preset feature granularity information and the preset feature extraction interval information retains the key feature information while greatly reducing computational complexity and improving processing efficiency, thereby realizing high-speed recognition of dynamic face information and effectively enhancing the security and applicability of the touch screen.
Fig. 6 shows a flowchart for implementing a touch screen driving method according to a sixth embodiment of the present application, which is different from the fifth embodiment in that the step S505 specifically includes:
Step S601, calculating the offsets of a plurality of pieces of dynamic face recognition video frame information based on a preset dynamic face recognition video frame feature base point, to obtain a plurality of pieces of dynamic face recognition video frame feature offset information.
In this embodiment, the preset dynamic face recognition video frame feature base point may be set manually. Taking the feature base point as the reference coordinate origin, the distance between each piece of dynamic face recognition video frame information and the base point is calculated and taken as the dynamic face recognition video frame feature offset information, quantifying how face features at different angles shift relative to a standard viewing angle.
Step S602, affine transformation processing is carried out on the dynamic face video frame scaling feature information according to the dynamic face recognition video frame feature offset information and the preset dynamic face recognition feature standard points, so as to obtain the dynamic face recognition feature affine information.
In this embodiment, the preset dynamic face recognition feature standard points may be set manually; they may represent the facial feature positions of a face at an ideal viewing angle, i.e. the coordinates of the facial features (eyes, eyebrows, nose, mouth) of a frontal face. The dynamic face recognition video frame feature offset information is matched against the standard points, and the dynamic face video frame scaling feature information is geometrically corrected through affine transformation processing, i.e. translation, rotation and scaling; the corrected information is used as the dynamic face recognition feature affine information. For example, a side-face feature can be mapped into the standard frontal-face coordinate system through rotation and translation, eliminating the feature distortion caused by the viewing-angle difference and so generating the dynamic face recognition feature affine information.
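A minimal sketch of the landmark-based affine correction, assuming OpenCV; the standard-point coordinates and the three-landmark choice are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical frontal-face landmark targets (x, y) standing in for the
# preset "dynamic face recognition feature standard points".
STANDARD_POINTS = np.float32([[30, 40], [70, 40], [50, 70]])  # eyes, nose tip

def align_to_standard(face_crop, detected_points):
    """Affine-correct a face crop so its landmarks land on the standard points.

    `detected_points` are three observed landmark positions derived from the
    feature offset information; getAffineTransform solves the implied
    translation, rotation and scaling from the three point pairs.
    """
    M = cv2.getAffineTransform(np.float32(detected_points), STANDARD_POINTS)
    h, w = face_crop.shape[:2]
    return cv2.warpAffine(face_crop, M, (w, h))
```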
Step S603, generating a plurality of dynamic face recognition feature information according to a plurality of the dynamic face recognition feature affine information.
In this embodiment, affine information of the dynamic face recognition features after affine transformation may be structured and integrated, and spatial distribution relations of the face contours, textures and dynamic features after correction are reserved to directly generate the dynamic face recognition feature information with the unified view angle reference.
According to the touch screen driving method provided by this embodiment of the application, introducing the dynamic face recognition video frame feature base point as the reference for geometric transformation establishes a unified measurement system for multi-view face features. For key areas such as the eyes and nose in the dynamic face recognition video frame information, the spatial offset relative to the feature base point is accurately calculated, effectively capturing feature position changes under side-face, raised-head and lowered-head poses. Affine transformation processing combined with the dynamic face recognition feature standard points geometrically corrects the offset features, mapping face features at different angles into a standard frontal-face coordinate system and eliminating the contour distortion and proportional deformation caused by viewing-angle differences, so that side-face features can participate in subsequent matching in a frontal-view form. This significantly improves the consistency and comparability of face feature expression in multi-angle scenes, enhances the adaptability of the device where the touch screen is located to the user's natural interactive face poses, and effectively improves the convenience and reliability of driving and unlocking the touch screen.
Fig. 7 shows a flowchart for implementing the touch screen driving method according to the seventh embodiment of the present application, which is different from the sixth embodiment in that the step S603 specifically includes:
Step S701, generating a plurality of dynamic face recognition feature affine matrices according to a plurality of the dynamic face recognition feature affine information.
In this embodiment, affine information of each dynamic face recognition feature may be converted into a matrix form, where the matrix elements correspond to the spatial coordinates and intensity values of the face feature to form an affine matrix of the dynamic face recognition feature.
Step S702, generating a plurality of dynamic face affine feature multidimensional modulation functions according to a plurality of pieces of preset dynamic face affine feature modulation frequency information, a plurality of pieces of preset dynamic face affine feature modulation direction information, a plurality of pieces of preset dynamic face affine feature modulation phase offset information and a plurality of pieces of preset dynamic face affine feature modulation spatial range information.
In this embodiment, the several pieces of preset dynamic face affine feature modulation frequency, direction, phase offset and spatial range information may be set manually. The modulation frequency information measures the wavelength range used during feature extraction and determines the ability to capture textures of different scales in the face image; for example, high frequencies correspond to detail textures and low frequencies to the overall contour. The modulation direction information represents the angular range of feature extraction, capturing edges and structures in different directions such as horizontal, vertical and diagonal. The modulation phase offset information controls the initial position of feature extraction and adjusts the sensitivity to periodic variation of textures. The modulation spatial range information limits the size of the region used for feature extraction, focusing on local features or the global structure. Specifically, function bases with multi-scale, multi-direction characteristics can be constructed from these four kinds of information, each base corresponding to a specific frequency-direction combination. The periodic change of texture can be simulated by a sine or cosine curve whose initial phase is adjusted through the phase offset parameter to match texture patterns of different phases. Meanwhile, a Gaussian window function generated from the spatial range information spatially truncates each function base so that it is effective only in a designated area, realizing focused extraction of local features. Linear superposition over the frequency, direction, phase offset and spatial range information then forms the dynamic face affine feature multidimensional modulation functions; during superposition the contributions of features at different scales and directions can be balanced by adjusting the weight of each function base.
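The sinusoid-times-Gaussian-window construction described here matches the classical Gabor filter, so a Gabor bank is a natural sketch; the parameter grids are hypothetical stand-ins for the preset modulation frequency, direction, phase offset and spatial range information.

```python
import numpy as np
import cv2

WAVELENGTHS = [4, 8, 16]                                   # frequency, as wavelength in px
ORIENTATIONS = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]    # direction
PHASES = [0, np.pi / 2]                                    # phase offset
SIGMA, KSIZE = 4.0, 21                                     # Gaussian window / spatial range

def build_modulation_bank():
    """Build a Gabor-style bank of multidimensional modulation functions."""
    bank = []
    for lam in WAVELENGTHS:
        for theta in ORIENTATIONS:
            for psi in PHASES:
                bank.append(cv2.getGaborKernel(
                    (KSIZE, KSIZE), SIGMA, theta, lam, gamma=0.5, psi=psi))
    return bank
```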
Step S703, performing multidimensional modulation processing on the plurality of dynamic face recognition feature affine matrices based on the plurality of dynamic face affine feature multidimensional modulation functions, so as to generate a plurality of dynamic face recognition feature affine modulation matrices.
In this embodiment, based on the dynamic face affine feature multidimensional modulation functions, a convolution operation is performed on each dynamic face recognition feature affine matrix; multichannel features are extracted through modulation functions of different frequencies and directions, generating the dynamic face recognition feature affine modulation matrices.
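A minimal sketch of the convolution step, reusing the `build_modulation_bank` sketch above; the stacked responses play the role of a dynamic face recognition feature affine modulation matrix, with one channel per frequency/direction/phase combination.

```python
import numpy as np
import cv2

def modulate(affine_matrix, bank):
    """Convolve a feature affine matrix with each modulation kernel."""
    img = np.float32(affine_matrix)
    return np.stack([cv2.filter2D(img, cv2.CV_32F, k) for k in bank])
```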
Step S704, generating a plurality of pieces of dynamic face recognition feature information according to a plurality of the dynamic face recognition feature affine modulation matrices, a preset dynamic face recognition feature affine modulation channel function and a preset dynamic face recognition feature affine modulation space function.
In this embodiment, the preset dynamic face recognition feature affine modulation channel function and the preset dynamic face recognition feature affine modulation spatial function may be set manually. Each dynamic face recognition feature affine modulation matrix may be input to the channel function, which extracts global statistical information, such as the mean and variance, of each modulation matrix. The extracted global statistics are then input to the spatial function, whose calculation extracts local features at each spatial position as the dynamic face recognition feature information.
According to the touch screen driving method provided by this embodiment of the application, constructing and applying the dynamic face affine feature multidimensional modulation functions simulates the response mechanism of the human visual system to different frequencies, directions and spatial positions, realizing adaptive extraction of the multidimensional texture features in the face image. The preset channel and spatial affine modulation functions capture face video frame features of different dimensions, extracting a dynamic face recognition feature affine modulation matrix containing multichannel frequency characteristics and effectively enhancing the expressiveness and discriminability of multi-angle face features. The method is suitable for face feature extraction in complex environments such as low resolution and low illumination, and provides rich feature dimensions for subsequent high-precision matching, effectively improving the authentication accuracy and robustness of the touch screen device.
Fig. 8 shows a flowchart for implementing the touch screen driving method according to the eighth embodiment of the present application, which is different from the seventh embodiment in that the step S704 specifically includes:
Step S801, calculating to obtain a dynamic face recognition feature affine modulation channel weight matrix according to a plurality of the dynamic face recognition feature affine modulation matrices and a preset dynamic face recognition feature affine modulation channel function.
In this embodiment, the preset affine modulation channel function with the dynamic face recognition feature may be set manually. The method comprises the steps of receiving a dynamic face recognition feature affine modulation matrix through a preset dynamic face recognition feature affine modulation channel function, firstly compressing the space dimension of the dynamic face recognition feature affine modulation matrix into a channel level scalar value to form a channel feature vector, then carrying out linear transformation on the channel feature vector, further generating a channel weight vector through a ReLU function, and further expanding the channel weight vector into a channel weight matrix with the same dimension as the affine modulation matrix to serve as the dynamic face recognition feature affine modulation channel weight matrix.
Step S802, calculating to obtain a dynamic face recognition feature affine modulation space weight matrix according to a plurality of the dynamic face recognition feature affine modulation matrices and a preset dynamic face recognition feature affine modulation space function.
In this embodiment, the preset dynamic face recognition feature affine modulation space function may be set manually. The space-dimension weights may be calculated over the dynamic face recognition feature affine modulation matrices through the preset space function, and the calculated weight matrix is then used as the dynamic face recognition feature affine modulation space weight matrix.
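A minimal sketch of one plausible spatial weighting: pool across channels and squash into (0, 1), so that salient positions receive larger weights; the exact spatial function is not fixed by the embodiment.

```python
import numpy as np

def spatial_weights(mod_matrix):
    """Per-position weights over a (C, H, W) modulation matrix."""
    pooled = mod_matrix.mean(axis=0, keepdims=True)   # (1, H, W) channel pooling
    return 1.0 / (1.0 + np.exp(-pooled))              # sigmoid into (0, 1)
```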
Step S803, generating a dynamic face recognition feature affine modulation weight matrix according to the dynamic face recognition feature affine modulation channel weight matrix and the dynamic face recognition feature affine modulation space weight matrix.
In this embodiment, the dynamic face recognition feature affine modulation channel weight matrix and the dynamic face recognition feature affine modulation space weight matrix may be multiplied, and the multiplication result is used as the dynamic face recognition feature affine modulation weight matrix.
Step S804, performing weighting processing on a plurality of pieces of the dynamic face recognition feature affine information based on the dynamic face recognition feature affine modulation weight matrix, to generate a plurality of pieces of dynamic face recognition feature information.
In this embodiment, the weighting processing may be performed on the dynamic face recognition feature affine information based on the dynamic face recognition feature affine modulation weight matrix, which may be a dot product operation performed on the dynamic face recognition feature affine modulation weight matrix and the dynamic face recognition feature affine information, and the operation result is used as the dynamic face recognition feature information.
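Putting steps S803 and S804 together with the two sketches above, a hedged reading of the combination and weighting:

```python
import numpy as np

def weighted_features(affine_info, mod_matrix):
    """Multiply channel and spatial weight matrices, then weight the
    affine information elementwise (one reading of the dot-product step)."""
    w = channel_weights(mod_matrix) * spatial_weights(mod_matrix)  # step S803
    # collapse the channel axis so the weights match the 2-D affine info
    return affine_info * w.mean(axis=0)                            # step S804
```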
According to the touch screen driving method provided by this embodiment of the application, the channel function captures global statistical information of the feature channels, highlighting the high-frequency texture channels corresponding to key areas such as the eyes and nose, while the spatial function captures local face features, suppressing background interference and focusing on key positions such as the nose tip and mouth corners. This effectively solves the prior-art problems of treating different channel features uniformly and treating spatial areas indiscriminately during extraction. In particular, under occlusion, such as wearing glasses or a hat, or at changing angles, effective features can be adaptively strengthened and invalid information suppressed, significantly improving the robustness and anti-interference capability of the face features, ensuring the accuracy and reliability of dynamic face recognition, and so ensuring the applicability, security and effectiveness of the touch screen device.
Fig. 9 shows a flowchart of a touch screen driving method according to a ninth embodiment of the present application, which is different from the first embodiment in that:
The preset registered user face feature information can comprise preset registered user face feature vector information, preset registered user face feature input time information and preset registered user identity information, wherein the preset registered user face feature vector information, the preset registered user face feature input time information and the preset registered user identity information are in one-to-one correspondence;
the step S105 specifically includes:
Step S901, generating a plurality of dynamic face recognition feature vectors according to a plurality of the dynamic face recognition feature information.
In this embodiment, vectorization processing may be performed on each piece of dynamic face recognition feature information, converting the multidimensional feature matrix into a one-dimensional feature vector through a flattening operation. The vector encodes the textures, contours and dynamic feature parameters of each facial area in sequence, forming a compact dynamic face recognition feature vector convenient for subsequent efficient matching calculation.
Step S902, calculating to obtain dynamic face feature matching degree information according to the dynamic face recognition feature vector and preset registered user face feature vector information.
In this embodiment, the dynamic face recognition feature vector may be compared dimension by dimension with the preset registered user face feature vector information, and the similarity of the two vectors in feature space may be quantified by calculating their cosine similarity or Euclidean distance to obtain the dynamic face feature matching degree information. For example, the closer the cosine similarity is to 1, the higher the directional consistency between the current feature and the registered feature, and the greater the likelihood that the user identity match succeeds.
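A minimal sketch of the cosine-similarity matching degree:

```python
import numpy as np

def match_degree(dynamic_vec, registered_vec):
    """Cosine similarity between the dynamic feature vector and the
    registered user's feature vector; values near 1 suggest a match."""
    num = float(np.dot(dynamic_vec, registered_vec))
    den = float(np.linalg.norm(dynamic_vec) * np.linalg.norm(registered_vec)) + 1e-8
    return num / den
```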
Step S903, judging whether the dynamic face feature matching degree information is greater than or equal to a preset dynamic face feature matching degree threshold; if so, generating face feature matching information according to the dynamic face recognition feature vector, the preset registered user face feature input time information and the preset registered user identity information, and if not, skipping the dynamic face feature information corresponding to the dynamic face feature matching degree information.
In this embodiment, the preset registered user face feature vector information may be facial feature information that uniquely identifies the user, and the preset registered user face feature input time information may be timestamp information recording when the registered feature vector was generated or updated, accurate to the second or millisecond. The preset registered user identity information may be an identifier establishing the user's uniqueness, typically structured data such as user ID, name and authority level. The preset dynamic face feature matching degree threshold may be set manually. When the dynamic face feature matching degree information is below the threshold, the user is judged to be unregistered: the corresponding dynamic face feature information is skipped, the verification request is refused, and an abnormal log is recorded to trigger a security audit. The registered user face feature input time information is used to verify the timeliness of the features.
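A hedged sketch of the threshold decision of step S903, bundling the registered vector, input-time and identity information in a plain dictionary; the threshold value and field names are assumptions.

```python
MATCH_THRESHOLD = 0.85   # illustrative stand-in for the preset matching threshold

def verify(dynamic_vec, registered):
    """Return face feature matching information on success, else None."""
    score = match_degree(dynamic_vec, registered["vector"])
    if score >= MATCH_THRESHOLD:
        return {"identity": registered["identity"],      # registered identity info
                "entered_at": registered["entered_at"],  # feature input time info
                "score": score}
    return None   # below threshold: skip and refuse verification
```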
According to the touch screen driving method provided by this embodiment of the application, extracting and introducing the registered user face feature vector information, the registered user face feature input time information and the registered user identity information constructs a dynamically updated user feature map. Matching compares not only the spatial similarity of the dynamic face feature vector with the registered feature vector, but also uses the input time information to verify the timeliness of the features, avoiding the problem of outdated face feature information caused by factors such as the user's aging or hairstyle changes. When the dynamic face feature matching degree information is greater than or equal to the preset threshold, complete face feature matching information is generated, ensuring the accuracy and traceability of identity verification. This effectively solves the adaptability problem of traditional static feature registration under dynamically changing user features, expands the application scenarios of the touch screen, and enhances its applicability and security in a variety of complex environments.
Corresponding to the method of the above embodiment, fig. 10 shows a block diagram of a touch screen driving system according to an embodiment of the present application, and for convenience of explanation, only a portion related to the embodiment of the present application is shown. The touch screen driving system illustrated in fig. 10 may be an execution subject of the touch screen driving method provided in the first embodiment.
Referring to fig. 10, the touch screen driving system includes:
the user face video frame information obtaining module 1010 is configured to obtain a plurality of user face video frame information;
The dynamic face recognition probability information generation module 1020 is used for carrying out dynamic face recognition calculation on a plurality of pieces of user face video frame information to obtain a plurality of pieces of dynamic face recognition probability information, wherein the dynamic face recognition probability information corresponds one-to-one to the user face video frame information;
the dynamic face video frame information determining module 1030 is configured to use, when the dynamic face recognition probability information is greater than or equal to a preset dynamic face probability threshold, user face video frame information corresponding to the dynamic face recognition probability as dynamic face video frame information;
the dynamic face recognition feature information generating module 1040 is configured to perform multi-angle feature extraction and analysis processing on a plurality of dynamic face video frame information, so as to generate a plurality of dynamic face recognition feature information;
The face feature matching information generating module 1050 is configured to perform matching processing according to a plurality of the dynamic face recognition feature information and preset face feature information of the registered user, so as to generate face feature matching information;
The touch screen driving signal generating module 1060 is configured to generate a touch screen driving signal in response to the generation of the face feature matching information, and send the touch screen driving signal to a terminal where the touch screen is located, so as to perform touch screen driving processing through the terminal where the touch screen is located.
The process of implementing the respective functions of each module in the touch screen driving system provided by the embodiment of the present application may refer to the description of the first embodiment shown in fig. 1, and will not be repeated here.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in the present specification and the appended claims, the term "if" may be interpreted as "when", "once", "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as meaning "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, the terms "first," "second," "third," and the like in the description of the present specification and in the appended claims, are used for distinguishing between descriptions and not necessarily for indicating or implying a relative importance. It will also be understood that, although the terms "first," "second," etc. may be used herein in some embodiments of the application to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first table may be named a second table, and similarly, a second table may be named a first table without departing from the scope of the various described embodiments. The first table and the second table are both tables, but they are not the same table.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The touch screen driving method provided by the embodiments of the application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks and personal digital assistants (PDA); the embodiments of the application do not limit the specific type of the terminal device.
For example, the terminal device may be a station (ST) in a WLAN, a cellular telephone, a cordless telephone, a Session Initiation Protocol (SIP) telephone, a wireless local loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, an in-vehicle device, an Internet-of-Vehicles terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite radio, a wireless modem card, a television set-top box (STB), customer premises equipment (CPE) and/or another device for communicating over a wireless system or a next-generation communication system, such as a mobile terminal in a 5G network or in a future evolved public land mobile network (PLMN).
By way of example and not limitation, when the terminal device is a wearable device, the wearable device may be a daily wearable item developed by applying wearable technology to intelligent design, such as glasses, gloves, watches, clothing and shoes. A wearable device is a portable device worn directly on the body or integrated into the user's clothing or accessories; it is not merely a hardware device, but realizes powerful functions through software support, data interaction and cloud interaction. Broadly, wearable smart devices include full-featured, large-sized devices that can realize complete or partial functions independently of a smartphone, such as smart watches or smart glasses, as well as devices focused on a particular class of application functions that must be used together with other devices such as a smartphone, for example various smart bracelets and smart jewelry for physical sign monitoring.
An embodiment of the present application provides a terminal device, which includes at least one processor and a memory storing a computer program executable on the processor. When the processor executes the computer program, the steps in the embodiments of the touch screen driving method are implemented; alternatively, when the processor executes the computer program, the functions of the modules/units in the system embodiments described above are performed.
The terminal device can be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, a processor and a memory, and may also include input and output devices, a network access device, a bus, and the like.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may, in some embodiments, be an internal storage unit of the terminal device, such as a hard disk or memory of the terminal device. The memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal device. Further, the memory may include both an internal storage unit and an external storage device of the terminal device. The memory is used to store the operating system, application programs, a boot loader (BootLoader), data and other programs, such as the program code of the computer program, and may also be used to temporarily store data that has been transmitted or is to be transmitted.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The embodiment of the application also provides a terminal device, which comprises at least one memory, at least one processor and a computer program stored in the at least one memory and capable of running on the at least one processor, wherein the processor executes the computer program to enable the terminal device to realize the steps in any of the method embodiments.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product enabling a terminal device to carry out the steps of the method embodiments described above when the computer program product is run on the terminal device.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments through a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The foregoing embodiments are merely illustrative of the technical solutions of the present application, and not restrictive, and although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that modifications may still be made to the technical solutions described in the foregoing embodiments or equivalent substitutions of some technical features thereof, and that such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A touch screen driving method, characterized by comprising:
acquiring a plurality of pieces of user face video frame information;
performing dynamic face recognition calculation on the plurality of pieces of user face video frame information to obtain a plurality of pieces of dynamic face recognition probability information, wherein the dynamic face recognition probability information corresponds one-to-one to the user face video frame information;
when the dynamic face recognition probability information is greater than or equal to a preset dynamic face probability threshold, taking the user face video frame information corresponding to the dynamic face recognition probability as dynamic face video frame information;
performing multi-angle feature extraction and parsing processing on the plurality of pieces of dynamic face video frame information to generate a plurality of pieces of dynamic face recognition feature information;
performing matching processing according to the plurality of pieces of dynamic face recognition feature information and preset registered user face feature information to generate face feature matching information; and
in response to the generation of the face feature matching information, generating a touch screen driving signal and sending the touch screen driving signal to the terminal where the touch screen is located, so that touch screen driving processing is performed by the terminal where the touch screen is located.
2. The touch screen driving method according to claim 1, characterized in that the step of performing dynamic face recognition calculation on the plurality of pieces of user face video frame information to obtain a plurality of pieces of dynamic face recognition probability information specifically comprises:
based on the plurality of pieces of user face video frame information, generating a plurality of initial face recognition frames according to preset face recognition frame height information and preset face recognition frame width information;
calculating logical distances between the initial face recognition frames to obtain a plurality of pieces of face recognition frame distance information;
when the face recognition frame distance information is greater than a preset face recognition frame distance threshold, generating a plurality of intermediate face recognition frames according to the initial face recognition frames corresponding to the face recognition frame distance information;
performing pixel extraction on the user face video frame information according to the intermediate face recognition frames to obtain a plurality of pieces of user face video frame pixel information; and
performing dynamic face recognition calculation on the plurality of pieces of user face video frame pixel information to obtain the plurality of pieces of dynamic face recognition probability information.
3. The touch screen driving method according to claim 2, characterized in that the step of performing dynamic face recognition calculation on the plurality of pieces of user face video frame pixel information to obtain a plurality of pieces of dynamic face recognition probability information specifically comprises:
generating a plurality of pieces of user face video frame mask tensor information according to the plurality of pieces of user face video frame pixel information and a preset face video frame pixel multidimensional mask tensor;
generating a plurality of user face video frame mask tensor variables according to the plurality of pieces of user face video frame mask tensor information and preset face video frame mask tensor bias information;
counting the user face video frame mask tensor variables corresponding to each piece of user face video frame information to obtain a plurality of pieces of user face video single-frame mask tensor information and user face video single-frame time information;
calculating differences between the pieces of user face video single-frame mask tensor information according to the user face video single-frame time information to obtain a plurality of pieces of user face video inter-frame mask tensor difference information;
generating a plurality of pieces of user face video frame mask feature information according to the plurality of pieces of user face video inter-frame mask tensor difference information and the plurality of pieces of user face video frame mask tensor information; and
performing dynamic face recognition calculation on the plurality of pieces of user face video frame mask feature information to obtain the plurality of pieces of dynamic face recognition probability information.
4. The touch screen driving method according to claim 3, characterized in that the step of performing dynamic face recognition calculation on the plurality of pieces of user face video frame mask feature information to obtain a plurality of pieces of dynamic face recognition probability information specifically comprises:
flattening and splicing the plurality of pieces of user face video frame mask feature information according to the user face video single-frame time information to generate a plurality of user face video frame mask feature sequences, wherein the mask feature sequences correspond one-to-one to the user face video single-frame time information;
calculating average values of the plurality of user face video frame mask feature sequences to obtain a plurality of pieces of face video frame mask feature sequence average value information;
obtaining a plurality of pieces of face video frame mask feature sequence spatial transformation information according to the plurality of pieces of face video frame mask feature sequence average value information, a preset face video frame mask feature sequence transformation coefficient matrix and a preset face video frame mask feature sequence transformation displacement matrix; and
performing value-range mapping processing on the plurality of pieces of face video frame mask feature sequence spatial transformation information to obtain the plurality of pieces of dynamic face recognition probability information.
5. The touch screen driving method according to claim 1, characterized in that the step of performing multi-angle feature extraction and parsing processing on the plurality of pieces of dynamic face video frame information to generate a plurality of pieces of dynamic face recognition feature information specifically comprises:
scaling the dynamic face video frame information based on a plurality of pieces of preset dynamic face video frame scaling granularity information to obtain a plurality of pieces of dynamic face video frame scaling information;
performing feature extraction processing on the plurality of pieces of dynamic face video frame scaling information based on a plurality of pieces of preset dynamic face video frame feature granularity information and preset dynamic face video frame feature extraction interval information to obtain a plurality of pieces of dynamic face video frame scaling feature information;
performing value-range scaling processing on the plurality of pieces of dynamic face video frame scaling feature information to obtain a plurality of pieces of dynamic face video frame feature certainty information;
when the dynamic face video frame feature certainty information is greater than or equal to a preset dynamic face video frame feature certainty threshold, taking the dynamic face video frame scaling feature information corresponding to the dynamic face video frame feature certainty information as dynamic face recognition video frame information; and
performing parsing processing on the dynamic face recognition video frame information to generate the plurality of pieces of dynamic face recognition feature information.
6. The touch screen driving method according to claim 5, characterized in that the step of performing parsing processing on the dynamic face recognition video frame information to generate a plurality of pieces of dynamic face recognition feature information specifically comprises:
performing offset calculation on the plurality of pieces of dynamic face recognition video frame information based on a preset dynamic face recognition video frame feature base point to obtain a plurality of pieces of dynamic face recognition video frame feature offset information;
performing affine transformation processing on the plurality of pieces of dynamic face video frame scaling feature information according to the plurality of pieces of dynamic face recognition video frame feature offset information and preset dynamic face recognition feature standard points to obtain a plurality of pieces of dynamic face recognition feature affine information; and
generating the plurality of pieces of dynamic face recognition feature information according to the plurality of pieces of dynamic face recognition feature affine information.
7. The touch screen driving method according to claim 6, characterized in that the step of generating a plurality of pieces of dynamic face recognition feature information according to the plurality of pieces of dynamic face recognition feature affine information specifically comprises:
generating a plurality of dynamic face recognition feature affine matrices according to the plurality of pieces of dynamic face recognition feature affine information;
generating a plurality of dynamic face affine feature multidimensional modulation functions according to a plurality of pieces of preset dynamic face affine feature modulation frequency information, a plurality of pieces of preset dynamic face affine feature modulation direction information, a plurality of pieces of preset dynamic face affine feature modulation phase offset information and a plurality of pieces of preset dynamic face affine feature modulation spatial range information;
performing multidimensional modulation processing on the plurality of dynamic face recognition feature affine matrices based on the plurality of dynamic face affine feature multidimensional modulation functions to generate a plurality of dynamic face recognition feature affine modulation matrices; and
generating the plurality of pieces of dynamic face recognition feature information according to the plurality of dynamic face recognition feature affine modulation matrices, a preset dynamic face recognition feature affine modulation channel function and a preset dynamic face recognition feature affine modulation spatial function.
8. The touch screen driving method according to claim 7, characterized in that the step of generating a plurality of pieces of dynamic face recognition feature information according to the plurality of dynamic face recognition feature affine modulation matrices, the preset dynamic face recognition feature affine modulation channel function and the preset dynamic face recognition feature affine modulation spatial function specifically comprises:
calculating a dynamic face recognition feature affine modulation channel weight matrix according to the plurality of dynamic face recognition feature affine modulation matrices and the preset dynamic face recognition feature affine modulation channel function;
calculating a dynamic face recognition feature affine modulation spatial weight matrix according to the plurality of dynamic face recognition feature affine modulation matrices and the preset dynamic face recognition feature affine modulation spatial function;
generating a dynamic face recognition feature affine modulation weight matrix according to the dynamic face recognition feature affine modulation channel weight matrix and the dynamic face recognition feature affine modulation spatial weight matrix; and
weighting the plurality of pieces of dynamic face recognition feature affine information based on the dynamic face recognition feature affine modulation weight matrix to generate the plurality of pieces of dynamic face recognition feature information.
9. The touch screen driving method according to claim 1, characterized in that:
the preset registered user face feature information may comprise preset registered user face feature vector information, preset registered user face feature entry time information and preset registered user identity information, wherein the preset registered user face feature vector information, the preset registered user face feature entry time information and the preset registered user identity information correspond one-to-one; and
the step of performing matching processing according to the plurality of pieces of dynamic face recognition feature information and the preset registered user face feature information to generate face feature matching information specifically comprises:
generating a plurality of dynamic face recognition feature vectors according to the plurality of pieces of dynamic face recognition feature information;
calculating dynamic face feature matching degree information according to the dynamic face recognition feature vectors and the preset registered user face feature vector information; and
when the dynamic face feature matching degree information is greater than or equal to a preset dynamic face feature matching degree threshold, generating the face feature matching information according to the dynamic face recognition feature vector, the preset registered user face feature entry time information and the preset registered user identity information.
10. A touch screen driving system, characterized by comprising:
a user face video frame information acquisition module, configured to acquire a plurality of pieces of user face video frame information;
a dynamic face recognition probability information generation module, configured to perform dynamic face recognition calculation on the plurality of pieces of user face video frame information to obtain a plurality of pieces of dynamic face recognition probability information, wherein the dynamic face recognition probability information corresponds one-to-one to the user face video frame information;
a dynamic face video frame information determination module, configured to take, when the dynamic face recognition probability information is greater than or equal to a preset dynamic face probability threshold, the user face video frame information corresponding to the dynamic face recognition probability as dynamic face video frame information;
a dynamic face recognition feature information generation module, configured to perform multi-angle feature extraction and parsing processing on the plurality of pieces of dynamic face video frame information to generate a plurality of pieces of dynamic face recognition feature information;
a face feature matching information generation module, configured to perform matching processing according to the plurality of pieces of dynamic face recognition feature information and preset registered user face feature information to generate face feature matching information; and
a touch screen driving signal generation module, configured to generate a touch screen driving signal in response to the generation of the face feature matching information, and send the touch screen driving signal to the terminal where the touch screen is located, so that touch screen driving processing is performed by the terminal where the touch screen is located.
Priority Application (1)

CN202510750449.8A — Touch screen driving method and system — priority and filing date 2025-06-06 — status: withdrawn

Publication (1)

CN120599681A — published 2025-09-05

Family ID: 96900404

Country Status (1)

CN — CN120599681A (en)


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
WW01 — Invention patent application withdrawn after publication (application publication date: 2025-09-05)