
CN119322562A - Input interaction method and device based on fingerprint - Google Patents

Input interaction method and device based on fingerprint

Info

Publication number
CN119322562A
Authority
CN
China
Prior art keywords
fingerprint
information
fingerprint sensor
action
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411109259.XA
Other languages
Chinese (zh)
Inventor
冯建江
许展玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202411109259.XA
Publication of CN119322562A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 - Fingerprints or palmprints
    • G06V 40/13 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 - Fingerprints or palmprints
    • G06V 40/1365 - Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides a fingerprint-based input interaction method and device, relating to the technical field of human-computer interaction. The method comprises: acquiring continuous fingerprint frame images through fingerprint sensors while a user performs input interaction with a smart device; authenticating the user's identity based on the continuous fingerprint frame images and pre-registered user fingerprint information; when the user passes identity authentication, recognizing the action of the touch operation on each fingerprint sensor from the temporal change information of the plurality of continuous fingerprint frame images corresponding to that sensor; and determining an input interaction instruction matching the input interaction according to the application scene of the smart device and the action information. By authenticating the user's identity and recognizing the action, the method and device improve the security of the user's input interaction. Because a variety of actions can be recognized from the temporal change information of continuous fingerprint frame images, input interaction modes are enriched, providing users with a richer, more convenient, and more personalized interaction experience.

Description

Input interaction method and device based on fingerprint
Technical Field
The present disclosure relates to the technical field of human-computer interaction, and in particular to a fingerprint-based input interaction method and device.
Background
With the rise of AR/VR technology and the growing popularity of wearable electronic devices, demand for more convenient and efficient human-computer interaction continues to increase. As a classic biometric sensor, the fingerprint sensor is commonly used for identity authentication, and in the related art it has also been used for action recognition to support human-computer interaction. At present, smart devices such as smart glasses and in-ear headphones adopt long, narrow touch strips as input, which support only simple interactions such as pressing; the interaction mode is monotonous and cannot support richer input interaction functions.
Disclosure of Invention
The present disclosure aims to solve, at least to some extent, one of the technical problems in the related art.
To this end, an embodiment of a first aspect of the present disclosure proposes a fingerprint-based input interaction method, applied to a smart device, comprising the following steps:
in response to a user performing input interaction with the smart device using at least one fingerprint sensor, acquiring a plurality of continuous fingerprint frame images corresponding to each fingerprint sensor while the user performs a touch operation on the at least one fingerprint sensor;
authenticating the user's identity based on the plurality of continuous fingerprint frame images corresponding to one or more of the at least one fingerprint sensor and pre-registered user fingerprint information;
in response to the user passing identity authentication, determining whether the touch operation corresponding to each fingerprint sensor is of the click action type according to the temporal change information of the plurality of continuous fingerprint frame images corresponding to that fingerprint sensor;
for a first fingerprint sensor, among the at least one fingerprint sensor, whose touch operation is of the click action type, extracting contact-time change information from the plurality of continuous fingerprint frame images corresponding to the first fingerprint sensor, and determining a first action corresponding to the first fingerprint sensor according to the contact-time change information;
for a second fingerprint sensor, among the at least one fingerprint sensor, whose touch operation is of a non-click action type, extracting total displacement information, centroid displacement information, and angle change information of fingerprint features in the plurality of continuous fingerprint frame images corresponding to the second fingerprint sensor, and determining a second action corresponding to the second fingerprint sensor according to the total displacement information, the centroid displacement information, and the angle change information; and
determining an input interaction instruction matching the input interaction according to the application scene of the smart device and action information, wherein the action information comprises the first action and/or the second action.
In some embodiments of the present disclosure, determining the second action corresponding to the second fingerprint sensor according to the total displacement information, the centroid displacement information, and the angle change information comprises: determining that the second action is a sliding action in response to the total displacement information being greater than a first threshold, the centroid displacement information being less than or equal to a second threshold, and the angle change information being less than or equal to a third threshold; or determining that the second action is a rolling action in response to the total displacement information being less than or equal to the first threshold, the centroid displacement information being greater than the second threshold, and the angle change information being less than or equal to the third threshold; or determining that the second action is a rotating action in response to the total displacement information being less than or equal to the first threshold, the centroid displacement information being less than or equal to the second threshold, and the angle change information being greater than the third threshold.
In some embodiments of the present disclosure, determining the second action corresponding to the second fingerprint sensor according to the total displacement information, the centroid displacement information, and the angle change information comprises: determining the second action corresponding to the second fingerprint sensor through a pre-trained classifier according to the total displacement information, the centroid displacement information, and the angle change information, wherein the classifier learns the mapping relationship between these three kinds of information and actions through a machine learning method.
In some embodiments of the present disclosure, determining the input interaction instruction matching the input interaction according to the application scene of the smart device and the action information comprises: identifying finger information of the touch operation on each fingerprint sensor based on the plurality of continuous fingerprint frame images corresponding to that fingerprint sensor and the pre-registered user fingerprint information; determining a mapping relationship among actions, fingers, and input interaction instructions based on the application scene; and determining the input interaction instruction matching the input interaction according to the action information, the finger information, and the mapping relationship.
In some embodiments of the disclosure, the input interaction instruction includes an action execution instruction and/or a symbol input instruction of the smart device.
In some embodiments of the present disclosure, the method further includes generating interactive feedback information based on the touch operation, the interactive feedback information being used to provide touch feedback to the user and comprising at least one of image feedback information, sound feedback information, and vibration feedback information.
In some embodiments of the disclosure, the at least one fingerprint sensor is located in a side region of the smart device.
An embodiment of a second aspect of the present disclosure provides a fingerprint-based input interaction device, provided in a smart device, comprising:
an acquisition module, configured to, in response to a user performing input interaction with the smart device using at least one fingerprint sensor, acquire a plurality of continuous fingerprint frame images corresponding to each fingerprint sensor while the user performs a touch operation on the at least one fingerprint sensor;
an identity authentication module, configured to authenticate the user's identity based on the plurality of continuous fingerprint frame images corresponding to one or more of the at least one fingerprint sensor and pre-registered user fingerprint information;
a first determining module, configured to, in response to the user passing identity authentication, determine whether the touch operation corresponding to each fingerprint sensor is of the click action type according to the temporal change information of the plurality of continuous fingerprint frame images corresponding to that fingerprint sensor;
a second determining module, configured to, for a first fingerprint sensor whose touch operation is of the click action type, extract contact-time change information from the plurality of continuous fingerprint frame images corresponding to the first fingerprint sensor, and determine a first action corresponding to the first fingerprint sensor according to the contact-time change information;
a third determining module, configured to, for a second fingerprint sensor whose touch operation is of a non-click action type, extract total displacement information, centroid displacement information, and angle change information of fingerprint features in the plurality of continuous fingerprint frame images corresponding to the second fingerprint sensor, and determine a second action corresponding to the second fingerprint sensor according to the total displacement information, the centroid displacement information, and the angle change information; and
a fourth determining module, configured to determine an input interaction instruction matching the input interaction according to the application scene of the smart device and action information, wherein the action information comprises the first action and/or the second action.
An embodiment of a third aspect of the present disclosure provides a smart device, comprising a processor and a memory communicatively connected to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of the first aspect.
An embodiment of a fourth aspect of the present disclosure proposes a computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, the computer-executable instructions being for implementing the method according to the first aspect.
According to the fingerprint-based input interaction method provided by the embodiments of the present disclosure, identity authentication and action recognition are performed on the user by analyzing the continuous fingerprint frame images corresponding to the at least one fingerprint sensor, which improves the security of the user's input interaction. A variety of actions can be recognized from the temporal change information of continuous fingerprint frame images, enabling action recognition even on small-area fingerprint sensors; the corresponding input interaction instructions are then determined, enriching input interaction modes and providing users with a richer, more convenient, and more personalized interaction experience.
Additional aspects and advantages of the disclosure will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the disclosure.
Drawings
The foregoing and/or additional aspects and advantages of the present disclosure will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of a fingerprint-based input interaction method according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of the sliding, rolling, and rotating actions provided by embodiments of the present disclosure;
Fig. 3 is a schematic diagram of continuous fingerprint frame images corresponding to the sliding, rolling, and rotating actions according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of a mapping relationship among actions, fingers, and symbol input instructions according to an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of a fingerprint-based input interaction device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present disclosure and are not to be construed as limiting the present disclosure.
The disclosure provides an input interaction method and device based on fingerprints. In particular, the fingerprint-based input interaction method and apparatus of the embodiments of the present disclosure are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a fingerprint-based input interaction method according to an embodiment of the present disclosure; the method is applied to a smart device. As shown in Fig. 1, the fingerprint-based input interaction method may include, but is not limited to, the following steps:
In step 101, in response to a user performing input interaction with the smart device using at least one fingerprint sensor, a plurality of continuous fingerprint frame images corresponding to each fingerprint sensor are acquired while the user performs a touch operation on the at least one fingerprint sensor.
In embodiments of the present disclosure, a user may perform input interaction with the smart device using a single fingerprint sensor, or using multiple fingerprint sensors.
The fingerprint sensor may be one or more of an optical sensor, an ultrasonic sensor, and a capacitive sensor. It may be a strip-shaped fingerprint sensor, or a sensor of another shape, such as a round fingerprint sensor or a larger-area square fingerprint sensor. The smart device may be a wearable device such as smart glasses, in-ear headphones, or a smart watch, or another smart device such as a mobile phone or a tablet computer.
Optionally, the at least one fingerprint sensor in embodiments of the present disclosure may be located in a side region of the smart device. For example, one or more fingerprint sensors may be disposed on the side of the frame or the temples of smart glasses.
To improve the user experience, optionally, in some embodiments of the present disclosure, when the user performs a touch operation on the fingerprint sensor, the smart device may generate interactive feedback information based on the touch operation. The interactive feedback information is used to provide touch feedback to the user and includes at least one of image feedback information, sound feedback information, and vibration feedback information.
In some embodiments of the present disclosure, after each fingerprint sensor captures successive fingerprint frame images of the user, the captured raw images may be pre-processed, for example by conversion to grayscale, Gaussian blurring, image enhancement, cropping, and correction.
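Purely as an illustrative sketch of such preprocessing (assuming Python with OpenCV and NumPy; the function name, kernel sizes, and crop margin are placeholder assumptions, not values from this disclosure):

```python
import cv2
import numpy as np

def preprocess_frame(raw: np.ndarray) -> np.ndarray:
    """Hypothetical preprocessing of one raw fingerprint frame."""
    gray = cv2.cvtColor(raw, cv2.COLOR_BGR2GRAY) if raw.ndim == 3 else raw
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.0)              # suppress sensor noise
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(blurred)                            # enhance ridge contrast
    h, w = enhanced.shape
    return enhanced[2:h - 2, 2:w - 2]                          # crop border artifacts
```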
Step 102, authenticating the user based on a plurality of continuous fingerprint frame images corresponding to one or more fingerprint sensors in the at least one fingerprint sensor and pre-registered user fingerprint information.
In one implementation, the user fingerprint information may be registered in advance when the user first uses the smart device or first enables fingerprint permissions. The user can slide one or more fingers transversely across the fingerprint sensor; a sequence of frame images is recorded for each registered finger, complete fingerprint features (such as minutiae, ridge orientations, and other features of the fingerprint image) are extracted from the sequence, and the user's fingerprint information is generated and stored. During registration, a fingerprint quality assessment function can be added to automatically identify poor-quality fingerprint images and remind the user to register the finger again, improving the quality of the registered fingerprint template.
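A quality gate of the kind described could be sketched as follows (illustrative only; the mean-gradient heuristic and the threshold value are assumptions, not the disclosure's quality metric):

```python
import cv2
import numpy as np

def frame_quality_ok(frame: np.ndarray, min_ridge_strength: float = 12.0) -> bool:
    """Crude quality proxy: mean gradient magnitude as a stand-in for ridge clarity.
    If this returns False, the device could prompt the user to re-enroll the finger."""
    gx = cv2.Sobel(frame, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(frame, cv2.CV_32F, 0, 1, ksize=3)
    ridge_strength = float(np.mean(cv2.magnitude(gx, gy)))
    return ridge_strength >= min_ridge_strength
```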
When the user performs input interaction with the smart device, the continuous fingerprint frame images acquired during the current interaction can be matched against the pre-registered user fingerprint information to authenticate the user's identity, ensuring the security of human-computer interaction and protecting the user's personal privacy.
Step 103, in response to the user passing identity authentication, determining whether the touch operation corresponding to each fingerprint sensor is of the click action type according to the temporal change information of the continuous fingerprint frame images corresponding to that fingerprint sensor.
As an example, in the current touch operation, an action whose contact time is less than a touch-time threshold and/or whose fingerprint features show no significant displacement, centroid, or angle change across the plurality of continuous fingerprint frame images may be determined to be of the click action type, while an action whose contact time is greater than or equal to the touch-time threshold and whose fingerprint features change significantly may be determined to be a non-click action. Whether a change is significant can be determined by comparison with a preset threshold.
Taking a single click as an example, it can be confirmed from the continuous fingerprint frame images that one touch occurred in the touch operation and that the user's contact time with the fingerprint sensor was less than the touch-time threshold. Taking a double click as an example, it can be confirmed from the continuous fingerprint frame images that two touches occurred and that the contact time of both touches was less than the touch-time threshold. Taking a single long press as an example, it can be confirmed that one touch occurred; although the contact time is greater than or equal to the touch-time threshold, comparing the feature changes between continuous fingerprint frame images shows no significant displacement, centroid, or angle change. Taking the combination of a click and a long press as an example, two touches can be confirmed from the continuous fingerprint frame images: one whose contact time is less than the touch-time threshold (the click) and one whose contact time is greater than or equal to the threshold but whose fingerprint features do not change significantly (the long press). Since both the click and the long press belong to the click action type, the combined click-and-long-press action is also judged to be of the click action type.
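The contact-time logic above might be expressed as follows (a sketch; the Touch representation and the 0.3 s threshold are assumptions for illustration):

```python
from dataclasses import dataclass

TOUCH_TIME_THRESHOLD = 0.3  # seconds; assumed placeholder value

@dataclass
class Touch:
    duration: float         # contact time of one touch, from frame timestamps
    features_changed: bool  # significant displacement/centroid/angle change?

def classify_click_sequence(touches: list[Touch]) -> str | None:
    """Return a click-type action name, or None for a non-click action."""
    if any(t.features_changed for t in touches):
        return None  # significant fingerprint-feature change -> non-click type
    kinds = ["click" if t.duration < TOUCH_TIME_THRESHOLD else "long_press"
             for t in touches]
    if kinds == ["click"]:
        return "single_click"
    if kinds == ["click", "click"]:
        return "double_click"
    if kinds == ["long_press"]:
        return "long_press"
    return "+".join(kinds)  # combined actions such as "click+long_press"
```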
Step 104, for a first fingerprint sensor, among the at least one fingerprint sensor, whose touch operation is of the click action type, extracting contact-time change information from the plurality of continuous fingerprint frame images corresponding to the first fingerprint sensor, and determining a first action corresponding to the first fingerprint sensor according to the contact-time change information.
From the contact-time change information in the plurality of continuous fingerprint frame images, the number of touches in the current touch operation can be confirmed, along with whether each touch is a click or a long press, thereby determining the first action corresponding to the first fingerprint sensor (such as a single click, a double click, a long press, or a combination of a click and a long press).
Step 105, for a second fingerprint sensor, among the at least one fingerprint sensor, whose touch operation is of a non-click action type, extracting total displacement information, centroid displacement information, and angle change information of fingerprint features in the plurality of continuous fingerprint frame images corresponding to the second fingerprint sensor, and determining a second action corresponding to the second fingerprint sensor according to these three kinds of information.
Optionally, in some embodiments of the present disclosure, three non-click actions are provided: a sliding action, a rolling action, and a rotating action. Fig. 2 is a schematic diagram of the sliding, rolling, and rotating actions provided by an embodiment of the present disclosure. It should be noted that the total displacement information indicates the total displacement of the finger relative to the smart device or the fingerprint sensor during the touch operation, and may include the total displacement distance, direction, and speed. The centroid in the centroid displacement information is the centroid of the fingerprint pattern or fingerprint features in the fingerprint frame image; the centroid displacement information may include the centroid displacement distance, direction, and speed. The angle change information describes the angle change of fingerprint features in the fingerprint frame images and may include the amount of angle change, the direction of change, and the angular speed.
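One plausible way to estimate these three quantities from a pair of consecutive frames is sketched below (phase correlation, a contact-region centroid, and a partial affine fit on ORB keypoints are our assumed tools; the disclosure does not prescribe them):

```python
import cv2
import numpy as np

def frame_motion_features(prev: np.ndarray, curr: np.ndarray):
    """Per-frame-pair estimates of pattern displacement, contact-centroid shift,
    and rotation angle; accumulating them over the sequence gives the totals."""
    # Translation of the fingerprint pattern between frames (phase correlation).
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev), np.float32(curr))

    # Centroid of the contact region (here: pixels darker than a fixed threshold).
    def centroid(img: np.ndarray) -> np.ndarray:
        ys, xs = np.nonzero(img < 128)
        return np.array([xs.mean(), ys.mean()]) if xs.size else np.zeros(2)

    centroid_shift = centroid(curr) - centroid(prev)

    # Rotation angle from a partial affine transform fitted to matched keypoints.
    orb = cv2.ORB_create(200)
    k1, d1 = orb.detectAndCompute(prev, None)
    k2, d2 = orb.detectAndCompute(curr, None)
    angle = 0.0
    if d1 is not None and d2 is not None:
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        if len(matches) >= 3:
            src = np.float32([k1[m.queryIdx].pt for m in matches])
            dst = np.float32([k2[m.trainIdx].pt for m in matches])
            M, _ = cv2.estimateAffinePartial2D(src, dst)
            if M is not None:
                angle = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))
    return (dx, dy), centroid_shift, angle
```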
In one implementation, the second action may be determined based on the total displacement information, the centroid displacement information, the angle change information, and their corresponding thresholds. Fig. 3 is a schematic diagram of the continuous fingerprint frame images corresponding to the sliding, rolling, and rotating actions according to an embodiment of the present disclosure. As one example, when the total displacement information is greater than the first threshold, the centroid displacement information is less than or equal to the second threshold, and the angle change information is less than or equal to the third threshold, the second action may be determined to be a sliding action. That is, as shown in Fig. 3, when the finger slides (corresponding to the up-down and left-right translation actions in Fig. 2), the position of the finger itself changes significantly (for example, it moves from one end of the sensor to the other), but the centroid and rotation angle of the fingerprint pattern do not change significantly across the continuous fingerprint frame images. When the total displacement information is less than or equal to the first threshold, the centroid displacement information is greater than the second threshold, and the angle change information is less than or equal to the third threshold, the second action is determined to be a rolling action: the fingerprint pattern is clearly displaced within the fingerprint acquisition area across the continuous frames, but the finger itself does not move or rotate noticeably, and the same fingerprint features show no significant displacement between frames. When the total displacement information is less than or equal to the first threshold, the centroid displacement information is less than or equal to the second threshold, and the angle change information is greater than the third threshold, the second action is determined to be a rotating action: the angle of the fingerprint pattern changes significantly across the continuous frames, but neither the finger nor the centroid of the pattern in the acquisition area changes noticeably.
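With the three quantities in hand, the threshold rule above reduces to a few comparisons (a sketch; the threshold values are placeholder assumptions):

```python
def classify_non_click(total_disp: float, centroid_disp: float,
                       angle_change: float,
                       t1: float = 10.0, t2: float = 5.0, t3: float = 15.0) -> str:
    """Threshold decision rule from the description; t1/t2/t3 are assumed values."""
    if total_disp > t1 and centroid_disp <= t2 and angle_change <= t3:
        return "slide"
    if total_disp <= t1 and centroid_disp > t2 and angle_change <= t3:
        return "roll"
    if total_disp <= t1 and centroid_disp <= t2 and angle_change > t3:
        return "rotate"
    return "unknown"  # ambiguous case; could fall back to a learned classifier
```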
In another implementation, the second action may also be recognized by a pre-trained classifier. As one example, the total displacement information, the centroid displacement information, and the angle change information may be input to a pre-trained classifier to determine the second action corresponding to the second fingerprint sensor. The classifier learns the mapping relationship between these three kinds of information and actions through a machine learning method (such as a support vector machine or a neural network) or an empirical algorithm, and thereby recognizes the user's actions.
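A minimal learned alternative might look as follows, assuming scikit-learn and a labelled set of (total displacement, centroid displacement, angle change) triples collected from recorded gestures:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_action_classifier(X, y):
    """X: rows of [total_displacement, centroid_displacement, angle_change];
    y: labels such as 'slide', 'roll', 'rotate'."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X, y)
    return clf

# Usage sketch:
# action = train_action_classifier(X_train, y_train).predict([[12.3, 1.1, 2.0]])
```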
Alternatively, the fingerprint features may include any of minutiae features, local features, ridge features, or deep-learning descriptors of the fingerprint.
It should be noted that, because the fingerprint-based input interaction method provided in the embodiments of the present disclosure recognizes different user actions from the temporal feature changes of continuous fingerprint frame images, it places no strong requirement on the area of the fingerprint sensor. The method is therefore applicable to action recognition on small-area sensors, such as narrow strip fingerprint sensors, and can equally be used with larger-area sensors.
Step 106, determining an input interaction instruction matching the input interaction according to the application scene of the smart device and the action information, wherein the action information comprises the first action and/or the second action.
The input interaction instruction comprises an action execution instruction and/or a symbol input instruction of the smart device.
It should be noted that, in different application scenarios, the same touch operation performed by the user on the same smart device triggers different interaction instructions. Taking smart glasses as an example, in a music-playing scene, the following action execution instructions may be triggered by touch operations on the fingerprint sensor:
Rolling left and right: fast forward / rewind;
Sliding up and down: increase / decrease the volume;
Sliding left and right: previous track / next track.
In a map navigation scene, the same touch operations may trigger the following action execution instructions:
Rolling left and right: zoom the map in / out;
Sliding up and down: move the map up and down;
Sliding left and right: move the map left and right;
Rotating left and right: rotate the map.
Note that a symbol input instruction means that symbol input interaction is achieved through the user's touch operations on the fingerprint sensor. Through the mapping relationship between symbols and actions, the user can input symbols purely through finger interaction with the fingerprint sensor, without relying on vision.
It should be further noted that, when there is a single fingerprint sensor, the first or second action corresponding to that sensor is determined from the user's touch operation on it, and the input interaction instruction is determined accordingly. When there are two fingerprint sensors, the action corresponding to each sensor is determined from the user's touch operations on both. The two sensors may both produce a first action (for example, the left finger clicking the left sensor and the right finger clicking the right sensor on smart glasses), may both produce a second action (for example, both sensors register a forward sliding action), or may produce a combination of a first and a second action (for example, one sensor registers a click while the other registers a rotation). When there are more than two fingerprint sensors, actions can be combined across sensors, and corresponding instructions can be assigned to different actions (or action combinations) by configuring the mapping between actions and input interaction instructions. Users can customize the instructions corresponding to different actions to match their personal habits, making the interaction more personalized.
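As a sketch, the scene-dependent mapping could be a plain lookup table keyed by application scene and action (or action combination); the scene names and instruction strings below are illustrative assumptions:

```python
# (application_scene, action or action combination) -> input interaction instruction
INSTRUCTION_MAP = {
    ("music", "roll_horizontal"):  "fast_forward_rewind",
    ("music", "slide_vertical"):   "volume_up_down",
    ("music", "slide_horizontal"): "previous_next_track",
    ("map",   "roll_horizontal"):  "zoom_map",
    ("map",   "slide_vertical"):   "pan_map_vertically",
    ("map",   "slide_horizontal"): "pan_map_horizontally",
    ("map",   "rotate"):           "rotate_map",
    # A multi-sensor combination, e.g. a click on both temples of smart glasses:
    ("any",   ("single_click", "single_click")): "confirm",
}

def resolve_instruction(scene: str, action_key) -> str | None:
    """Scene-specific mapping first, then a scene-independent fallback."""
    return (INSTRUCTION_MAP.get((scene, action_key))
            or INSTRUCTION_MAP.get(("any", action_key)))
```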
To further enrich the interaction, optionally, in some embodiments of the present disclosure, the finger performing the touch operation on each fingerprint sensor may be identified based on the plurality of continuous fingerprint frame images corresponding to that sensor and the pre-registered user fingerprint information. A mapping relationship among actions, fingers, and input interaction instructions is determined based on the application scene, and the input interaction instruction matching the input interaction is determined according to the action information, the finger information, and the mapping relationship. In this way, the device determines which finger performed which action, and which input interaction instruction that corresponds to. The mapping among actions, fingers, and input interaction instructions can be customized by the user to meet personalized needs. Optionally, in embodiments of the present disclosure, when identifying the finger and the region that the imaged fingerprint features occupy within the complete fingerprint from the continuous fingerprint frame images, methods including, but not limited to, sliding-window search, correlation matching, feature-point matching, and similarity-distance judgment can be used.
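Finger identification against the enrolled templates could be sketched as a best-match search; normalized cross-correlation stands in here for whichever of the listed matching methods is actually used:

```python
import cv2
import numpy as np

def identify_finger(frame: np.ndarray, templates: dict[str, np.ndarray],
                    min_score: float = 0.55) -> str | None:
    """Slide the small sensor frame over each enrolled full-finger template
    (each template must be at least as large as the frame) and keep the
    finger with the best normalized cross-correlation score."""
    best_finger, best_score = None, min_score
    for finger, template in templates.items():
        scores = cv2.matchTemplate(template, frame, cv2.TM_CCOEFF_NORMED)
        score = float(scores.max())
        if score > best_score:
            best_finger, best_score = finger, score
    return best_finger  # None if no template matches confidently
```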
Fig. 4 is a schematic diagram of a mapping relationship among actions, fingers, and symbol input instructions according to an embodiment of the present disclosure. As shown in Fig. 4, when different fingers perform touch operations with different actions, the corresponding symbols can be input. This replaces traditional button, keyboard, or voice input with a low false-touch rate; it avoids the environmental-noise and privacy problems of voice interaction, occupies no device surface or screen space, and enriches both the functions of the fingerprint sensor and the user interaction experience.
Optionally, in some embodiments of the present disclosure, after the smart device performs the corresponding operation according to the input interaction instruction, interaction-completion feedback information may be generated to inform the user that the smart device has executed the input interaction instruction. When the instruction is an action execution instruction, the feedback may take the form of an image, a sound, a vibration, or the like. When the instruction is a symbol input instruction, the feedback may take the form of an image (displaying the symbol), a sound (reading the symbol aloud), a vibration, or the like.
By implementing the embodiments of the present disclosure, identity authentication and action recognition are performed on the user by analyzing the continuous fingerprint frame images corresponding to at least one fingerprint sensor, improving the security of the user's input interaction. A variety of actions can be recognized from the temporal change information of continuous fingerprint frame images, enabling action recognition on small-area fingerprint sensors; the corresponding input interaction instructions are then determined, enriching human-computer interaction modes and providing users with a richer, more convenient, and more personalized interaction experience.
Fig. 5 is a schematic diagram of a fingerprint-based input interaction device according to an embodiment of the present disclosure; the device is disposed in a smart device. As shown in Fig. 5, the fingerprint-based input interaction device comprises an acquisition module 501, an identity authentication module 502, a first determining module 503, a second determining module 504, a third determining module 505, and a fourth determining module 506.
The acquiring module 501 is configured to, in response to a user performing input interaction with the smart device using at least one fingerprint sensor, acquire a plurality of continuous fingerprint frame images corresponding to each fingerprint sensor when the user performs a touch operation on the at least one fingerprint sensor.
The identity authentication module 502 is configured to authenticate the user based on a plurality of continuous fingerprint frame images corresponding to one or more fingerprint sensors in the at least one fingerprint sensor and pre-registered user fingerprint information.
The first determining module 503 is configured to determine whether the touch operation corresponding to each fingerprint sensor is a click action type according to the change information of the plurality of continuous fingerprint frame images corresponding to each fingerprint sensor in time sequence in response to the authentication of the user passing.
The second determining module 504 is configured to extract contact time change information in a plurality of continuous fingerprint frame images corresponding to the first fingerprint sensor for a first fingerprint sensor that is a type of click action in the touch operation of the at least one fingerprint sensor, and determine a first action corresponding to the first fingerprint sensor according to the contact time change information.
The third determining module 505 is configured to, for a second fingerprint sensor, among the at least one fingerprint sensor, whose touch operation is of a non-click action type, extract total displacement information, centroid displacement information, and angle change information of fingerprint features in the plurality of continuous fingerprint frame images corresponding to the second fingerprint sensor, and determine a second action corresponding to the second fingerprint sensor according to the total displacement information, the centroid displacement information, and the angle change information.
A fourth determining module 506, configured to determine an input interaction instruction that matches the input interaction according to an application scenario and action information of the smart device, where the action information includes the first action and/or the second action.
In some embodiments of the present disclosure, the third determining module 505 is specifically configured to: determine that the second action is a sliding action in response to the total displacement information being greater than the first threshold, the centroid displacement information being less than or equal to the second threshold, and the angle change information being less than or equal to the third threshold; or determine that the second action is a rolling action in response to the total displacement information being less than or equal to the first threshold, the centroid displacement information being greater than the second threshold, and the angle change information being less than or equal to the third threshold; or determine that the second action is a rotating action in response to the total displacement information being less than or equal to the first threshold, the centroid displacement information being less than or equal to the second threshold, and the angle change information being greater than the third threshold.
In some embodiments of the present disclosure, the third determining module 505 is specifically configured to determine, according to the total displacement information, the centroid displacement information, and the angle change information, the second action corresponding to the second fingerprint sensor through a pre-trained classifier, where the classifier learns the mapping relationship between these three kinds of information and actions through a machine learning method.
In some embodiments of the present disclosure, the fourth determining module 506 is specifically configured to identify finger information of the touch operation on each fingerprint sensor based on the plurality of continuous fingerprint frame images corresponding to that sensor and the pre-registered user fingerprint information, determine a mapping relationship among actions, fingers, and input interaction instructions based on the application scene, and determine the input interaction instruction matching the input interaction according to the action information, the finger information, and the mapping relationship.
In some embodiments of the present disclosure, the input interaction instruction includes an action execution instruction and/or a symbol input instruction of the smart device.
In some embodiments of the present disclosure, based on the embodiment shown in Fig. 5, the fingerprint-based input interaction device may further comprise a feedback module, configured to generate interactive feedback information based on the touch operation. The interactive feedback information is used to provide touch feedback to the user and includes at least one of image feedback information, sound feedback information, and vibration feedback information.
In some embodiments of the present disclosure, at least one fingerprint sensor is located in a side region of the smart device.
The specific manner in which the various modules perform their operations in the devices of the above embodiments has been described in detail in the method embodiments and will not be elaborated here.
In order to implement the above embodiments, the present disclosure further provides a smart device comprising a processor and a memory communicatively connected to the processor, wherein the memory stores computer-executable instructions, and the processor executes the computer-executable instructions stored in the memory to implement the method provided by the above embodiments. As an example, the smart device may be a wearable smart device such as smart glasses, in-ear headphones, or a smart watch, or another smart device such as a mobile phone or a tablet computer.
In order to implement the above-described embodiments, the present disclosure also proposes a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, are adapted to implement the methods provided by the foregoing embodiments.
To achieve the above embodiments, the present disclosure also proposes a computer program product comprising a computer program which, when executed by a processor, implements the method provided by the foregoing embodiments.
The collection, storage, use, processing, transmission, provision, and disclosure of the user's personal information involved in the present disclosure all comply with the relevant laws and regulations and do not violate public order and good morals.
In the foregoing descriptions of embodiments, descriptions of the terms "one embodiment," "some embodiments," "examples," "particular examples," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the disclosure. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is at least two, such as two, three, etc., unless explicitly specified otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present disclosure in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present disclosure.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with appropriate combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and so on.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in the embodiments of the present disclosure may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.

Although embodiments of the present disclosure have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the present disclosure, and that variations, modifications, alternatives, and variants of the above embodiments may be made by those of ordinary skill in the art within the scope of the present disclosure.

Claims (10)

1. A fingerprint-based input interaction method, applied to a smart device, characterized by comprising the following steps:
in response to a user performing input interaction with the smart device using at least one fingerprint sensor, acquiring a plurality of continuous fingerprint frame images corresponding to each fingerprint sensor while the user performs a touch operation on the at least one fingerprint sensor;
authenticating the user's identity based on the plurality of continuous fingerprint frame images corresponding to one or more of the at least one fingerprint sensor and pre-registered user fingerprint information;
in response to the user passing identity authentication, determining whether the touch operation corresponding to each fingerprint sensor is of the click action type according to the temporal change information of the plurality of continuous fingerprint frame images corresponding to that fingerprint sensor;
for a first fingerprint sensor, among the at least one fingerprint sensor, whose touch operation is of the click action type, extracting contact-time change information from the plurality of continuous fingerprint frame images corresponding to the first fingerprint sensor, and determining a first action corresponding to the first fingerprint sensor according to the contact-time change information;
for a second fingerprint sensor, among the at least one fingerprint sensor, whose touch operation is of a non-click action type, extracting total displacement information, centroid displacement information, and angle change information of fingerprint features in the plurality of continuous fingerprint frame images corresponding to the second fingerprint sensor, and determining a second action corresponding to the second fingerprint sensor according to the total displacement information, the centroid displacement information, and the angle change information; and
determining an input interaction instruction matching the input interaction according to the application scene of the smart device and action information, wherein the action information comprises the first action and/or the second action.
2. The method of claim 1, wherein determining the second action corresponding to the second fingerprint sensor based on the total displacement information, the centroid displacement information, and the angle change information comprises:
in response to the total displacement information being greater than a first threshold, the centroid displacement information being less than or equal to a second threshold, and the angle change information being less than or equal to a third threshold, determining that the second action is a sliding action; or
in response to the total displacement information being less than or equal to the first threshold, the centroid displacement information being greater than the second threshold, and the angle change information being less than or equal to the third threshold, determining that the second action is a rolling action; or
in response to the total displacement information being less than or equal to the first threshold, the centroid displacement information being less than or equal to the second threshold, and the angle change information being greater than the third threshold, determining that the second action is a rotating action.
3. The method of claim 1, wherein determining the second action corresponding to the second fingerprint sensor based on the total displacement information, the centroid displacement information, and the angle change information comprises:
determining the second action corresponding to the second fingerprint sensor through a pre-trained classifier according to the total displacement information, the centroid displacement information, and the angle change information;
wherein the classifier learns the mapping relationship between the total displacement information, the centroid displacement information, and the angle change information and actions through a machine learning method.
4. The method of claim 1, wherein the determining the input interaction instruction matching the input interaction according to the application scenario and the action information of the smart device comprises:
Identifying finger information of touch operation on each fingerprint sensor based on a plurality of continuous fingerprint frame images corresponding to each fingerprint sensor and pre-registered user fingerprint information;
Determining a mapping relation among actions, fingers and input interaction instructions based on the application scene;
And determining an input interaction instruction matched with the input interaction according to the action information, the finger information and the mapping relation.
5. The method of any of claims 1-4, wherein the input interaction instruction comprises an action execution instruction and/or a symbol input instruction of the smart device.
6. The method according to claim 1, wherein the method further comprises:
And generating interactive feedback information based on the touch operation, wherein the interactive feedback information is used for providing touch feedback for the user and comprises at least one of image feedback information, sound feedback information and vibration feedback information.
7. The method of claim 1, wherein the at least one fingerprint sensor is located in a side region of the smart device.
8. A fingerprint-based input interaction device, provided in a smart device, characterized by comprising:
an acquisition module, configured to, in response to a user performing input interaction with the smart device using at least one fingerprint sensor, acquire a plurality of continuous fingerprint frame images corresponding to each fingerprint sensor while the user performs a touch operation on the at least one fingerprint sensor;
an identity authentication module, configured to authenticate the user's identity based on the plurality of continuous fingerprint frame images corresponding to one or more of the at least one fingerprint sensor and pre-registered user fingerprint information;
a first determining module, configured to, in response to the user passing identity authentication, determine whether the touch operation corresponding to each fingerprint sensor is of the click action type according to the temporal change information of the plurality of continuous fingerprint frame images corresponding to that fingerprint sensor;
a second determining module, configured to, for a first fingerprint sensor, among the at least one fingerprint sensor, whose touch operation is of the click action type, extract contact-time change information from the plurality of continuous fingerprint frame images corresponding to the first fingerprint sensor, and determine a first action corresponding to the first fingerprint sensor according to the contact-time change information;
a third determining module, configured to, for a second fingerprint sensor, among the at least one fingerprint sensor, whose touch operation is of a non-click action type, extract total displacement information, centroid displacement information, and angle change information of fingerprint features in the plurality of continuous fingerprint frame images corresponding to the second fingerprint sensor, and determine a second action corresponding to the second fingerprint sensor according to the total displacement information, the centroid displacement information, and the angle change information; and
a fourth determining module, configured to determine an input interaction instruction matching the input interaction according to the application scene of the smart device and action information, wherein the action information comprises the first action and/or the second action.
9. A smart device, characterized by comprising a processor and a memory communicatively connected to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of claims 1-7.
10. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of claims 1-7.
CN202411109259.XA 2024-08-13 2024-08-13 Input interaction method and device based on fingerprint Pending CN119322562A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411109259.XA CN119322562A (en) 2024-08-13 2024-08-13 Input interaction method and device based on fingerprint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411109259.XA CN119322562A (en) 2024-08-13 2024-08-13 Input interaction method and device based on fingerprint

Publications (1)

Publication Number Publication Date
CN119322562A true CN119322562A (en) 2025-01-17

Family

ID=94227986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411109259.XA Pending CN119322562A (en) 2024-08-13 2024-08-13 Input interaction method and device based on fingerprint

Country Status (1)

Country Link
CN (1) CN119322562A (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945362A (en) * 2012-10-18 2013-02-27 中国科学院计算技术研究所 Isomerous data fusion based coordinated gesture recognition method and system of sensor
CN104331656A (en) * 2014-11-22 2015-02-04 广东欧珀移动通信有限公司 Method and device for safely operating file upon fingerprint identification sensor
CN104932817A (en) * 2015-05-27 2015-09-23 努比亚技术有限公司 Terminal side frame inductive interaction method and device
CN106547338A (en) * 2015-09-22 2017-03-29 小米科技有限责任公司 Instruction generation method and device
CN114578989A (en) * 2022-01-18 2022-06-03 清华大学 Man-machine interaction method and device based on fingerprint deformation
CN117707361A (en) * 2023-12-29 2024-03-15 清华大学 Sign input method based on finger fingerprint identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
熊大红 (ed.), 《实用信息技术基础》 [Practical Fundamentals of Information Technology], Hunan University Press (湖南大学出版社), 31 December 2023, page 204 *

Similar Documents

Publication Publication Date Title
US8649575B2 (en) Method and apparatus of a gesture based biometric system
US9436862B2 (en) Electronic apparatus with segmented guiding function and small-width biometrics sensor, and guiding method thereof
US8941466B2 (en) User authentication for devices with touch sensitive elements, such as touch sensitive display screens
US9195878B2 (en) Method of controlling an electronic device
US9223397B2 (en) Personal computing device control using face detection and recognition
CN106778141B (en) Unlocking method and device based on gesture recognition and mobile terminal
KR20160099497A (en) Method and apparatus for recognizing handwriting
CN107251052B (en) Method for forming fingerprint image and fingerprint sensing system
US20110190060A1 (en) Around device interaction for controlling an electronic device, for controlling a computer game and for user verification
CN102982527A (en) Image segmentation method and image segmentation system
KR100641434B1 (en) Mobile communication terminal equipped with fingerprint recognition means and its operation method
CN105354560A (en) Fingerprint identification method and device
KR20150055342A (en) Method for fingerprint authentication, fingerprint authentication device, and mobile terminal performing thereof
EP4290338A1 (en) Method and apparatus for inputting information, and storage medium
US20250039537A1 (en) Screenshot processing method, electronic device, and computer readable medium
CN112995757A (en) Video clipping method and device
KR20150003501A (en) Electronic device and method for authentication using fingerprint information
Ahmad et al. Analysis of interaction trace maps for active authentication on smart devices
KR20190132885A (en) Apparatus, method and computer program for detecting hand from video
CN111160251A (en) Living body identification method and device
KR20190128536A (en) Electronic device and method for controlling the same
CN119322562A (en) Input interaction method and device based on fingerprint
WO2013145874A1 (en) Information processing device, information processing method and program
CN106126087A (en) A kind of based on the display picture approach of intelligent terminal and the device with touch screen
CN108040284A (en) Radio station control method for playing back, device, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination