WO2018086543A1 - Living body identification method, identity authentication method, terminal, server and storage medium - Google Patents
- Publication number
- WO2018086543A1 (application PCT/CN2017/109989)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- region
- pulse
- characteristic curve
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/15—Biometric patterns based on physiological signals, e.g. heartbeat, blood flow
Definitions
- The present application relates to the field of computer technology, and in particular to a living body discrimination method, an identity authentication method, a terminal, a server, and a storage medium.
- Face recognition has been widely used in access control and in financial and identity-verification settings, where it must prevent users from registering, for example, bank accounts through a face recognition system or machine using illegitimate photos.
- Traditional living body discrimination usually needs to combine certain interactions in practical application scenarios, such as shaking the head or blinking, to distinguish a real person from a photo by the positional movement of points on the face. Traditional optical plethysmography uses a close-contact instrument to detect the blood volume change at the extremities of the human body, estimates the person's pulse, and distinguishes a real person from a photo according to the change of the pulse.
- A living body discrimination method, an identity authentication method, a terminal, a server, and a storage medium are provided.
- a living body discrimination method comprising:
- the terminal collects a multi-frame face image
- the terminal extracts a face region for each frame of the face image
- the terminal acquires the illumination intensity of the face region, and calculates a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region;
- the terminal establishes a pulse characteristic curve according to the pulse feature corresponding to the face image of each frame.
- the terminal compares the pulse characteristic curve with a pre-stored standard non-living pulse characteristic curve; if the characteristic value of the pulse characteristic curve differs from that of the pre-stored standard non-living pulse characteristic curve by more than a preset feature threshold, it is determined that a living face image has been collected; otherwise, it is determined that a non-living face image has been collected.
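The comparison in the final step above can be sketched as follows. Taking peak-to-peak amplitude as the scalar "characteristic value" is an illustrative assumption; the source leaves the feature abstract.

```python
def is_live(curve, baseline_curve, feature_threshold):
    """Decide liveness by comparing a characteristic value of the
    measured pulse curve against a stored non-living baseline curve.

    The characteristic value here is peak-to-peak amplitude, an
    assumed concretization of the abstract feature in the text."""
    def characteristic_value(c):
        return max(c) - min(c)

    diff = abs(characteristic_value(curve) - characteristic_value(baseline_curve))
    # A non-living target yields a nearly flat curve, so a large
    # difference from the flat baseline indicates a living face.
    return diff > feature_threshold

# A flat "photo" curve vs. a periodically varying "live" curve:
flat = [100.0] * 50
live = [100.0 + 5.0 * ((i % 10) / 10.0) for i in range(50)]
```

Under this sketch, `is_live(live, flat, 1.0)` passes while `is_live(flat, flat, 1.0)` fails, matching the decision rule described above.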
- An identity authentication method comprising:
- the server receives a user identity authentication request sent by the terminal, where the user identity authentication request carries the user identifier;
- the server extracts a face region for each frame of the face image
- the server acquires the illumination intensity of the face region, and calculates a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region;
- the server establishes a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image
- the server compares the pulse characteristic curve with a pre-stored standard pulse characteristic curve corresponding to the user identifier; if the characteristic value of the pulse characteristic curve differs from that of the standard pulse characteristic curve within a preset range, the user identity authentication passes; otherwise, the user identity authentication fails.
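A minimal sketch of the matching rule in the last step, assuming the "characteristic values" are a small set of scalars such as change period and maximum amplitude (properties named later in the text) and that "within a preset range" means an absolute tolerance — both assumptions for illustration:

```python
def authenticate(curve_features, stored_features, tolerance):
    """Pass authentication only if every characteristic value of the
    measured pulse curve is within a preset tolerance of the user's
    stored standard curve. The dict-of-scalars representation is an
    illustrative assumption, not the source's exact formulation."""
    return all(abs(curve_features[k] - stored_features[k]) <= tolerance
               for k in stored_features)

measured = {"period": 0.92, "max_amplitude": 4.6}
stored   = {"period": 0.90, "max_amplitude": 4.5}
```

With a tolerance of 0.2 the example `measured` curve matches `stored`; a much tighter tolerance would reject it.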
- A terminal includes a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the following steps:
- A server includes a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the following steps:
- user identity authentication is performed according to the comparison result: if the characteristic value of the pulse characteristic curve differs from that of the standard pulse characteristic curve within a preset range, the user identity authentication passes; otherwise, it fails.
- One or more non-volatile readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
- One or more non-transitory readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
- FIG. 1 is an application environment diagram of a living body discrimination method and an identity authentication method in an embodiment
- FIG. 2A is a diagram of the internal structure of a server in an embodiment
- FIG. 2B is a diagram of the internal structure of a terminal in an embodiment
- FIG. 3 is a flow chart of a living body discrimination method in an embodiment
- FIG. 4 is a diagram showing a result of face region segmentation in the method of FIG. 3 in an embodiment
- FIG. 5 is a flow chart of the pulse feature calculation method of FIG. 3 in an embodiment
- FIG. 6 is a flow chart of an identity authentication method in an embodiment
- FIG. 7 is a flow chart of the pulse feature calculation method of FIG. 6 in an embodiment
- FIG. 9 is a structural block diagram of a terminal in an embodiment
- FIG. 10 is a structural block diagram of a server in an embodiment
- FIG. 11 is a structural block diagram of a server in another embodiment.
- the living body discrimination method provided by the embodiment of the present application can be applied to the environment as shown in FIG. 1.
- The server 102 can receive and process multiple frames of face images collected by the terminal 104. Specifically, the server 102 communicates with the terminal 104 through the network, receives the collected multi-frame face image sent by the terminal 104, extracts the face region of each frame, acquires the illumination intensity of the face region, calculates the pulse feature corresponding to each frame, establishes the pulse characteristic curve, compares the established pulse characteristic curve with the standard non-living pulse characteristic curve, performs living body discrimination according to the comparison result, and sends the discrimination result to the terminal 104.
- Terminals herein include, but are not limited to, various personal computers, notebook computers, smart phones, tablets, portable wearable devices, and the like. It should be noted that, in other embodiments, the terminal may directly collect and process the multi-frame face image to determine whether a living body is depicted.
- the identity authentication method provided by the embodiment of the present application is also applicable to the environment shown in FIG. 1.
- the server 102 can receive the user identity authentication request sent by the terminal 104, and can also return the user identity authentication result to the terminal 104.
- The server 102 communicates with the terminal 104 through the network, receives the user identity authentication request sent by the terminal 104 and the multi-frame face image of the user collected by the terminal 104 according to the request, extracts the face region of each frame, acquires the illumination intensity of the face region, calculates the pulse feature corresponding to each frame, establishes a pulse characteristic curve, compares the established curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, performs identity authentication according to the comparison result, and transmits the authentication result to the terminal 104.
- Terminals herein include, but are not limited to, various personal computers, notebook computers, smart phones, tablets, portable wearable devices, and the like.
- In other embodiments, the terminal 104, upon obtaining the user identity authentication request, can also verify the user identity directly: the terminal 104 processes the collected multi-frame face image of the user to obtain the user's pulse characteristic curve, and compares it with the user's standard pulse characteristic curve to obtain the verification result.
- A server in one embodiment, as shown in FIG. 2A, includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus.
- An operating system and computer readable instructions are stored in the non-volatile storage medium, the computer readable instructions implementing a live discriminating method when executed, or the computer readable instructions implementing an identity authentication method when executed.
- The processor provides the computing and control capability that supports the operation of the entire server.
- The internal memory provides an operating environment for the living body discriminating device or the identity authentication device in the non-volatile storage medium, and can store computer readable instructions that, when executed by the processor, cause the processor to perform a living body discrimination method or an identity authentication method.
- The network interface is used for network communication with the terminal, receiving or transmitting data: for example, receiving a face image sent by the terminal and sending a living body discrimination result to the terminal, or receiving an identity authentication request and the collected face images sent by the terminal and sending the identity authentication result to the terminal.
- A terminal in one embodiment, as shown in FIG. 2B, includes a processor, a non-volatile storage medium, an internal memory, a network interface, and a display screen connected through a system bus.
- An operating system and computer readable instructions are stored in the non-volatile storage medium, the computer readable instructions implementing a living body discrimination method or an identity authentication method when executed.
- The processor provides the computing and control capability that supports the operation of the entire terminal.
- The internal memory provides an environment for the operation of the living body discriminating device in the non-volatile storage medium; the internal memory can store computer readable instructions that, when executed by the processor, cause the processor to perform a living body discrimination method.
- the network interface is used for network communication with the server, receiving or transmitting data, for example, receiving a pulse characteristic comparison result sent by the server.
- the display screen of the terminal may be a liquid crystal display or an electronic ink display screen
- The input device of the terminal may be a touch layer covering the display screen, or a button, trackball, or touchpad on the terminal housing, or an external keyboard, trackpad, or mouse.
- The terminal can be a personal computer, a mobile terminal, or a wearable device, such as a mobile phone, a tablet, or a personal digital assistant. Those skilled in the art will understand that the structure shown in FIG. 2 is only a block diagram of the part of the structure related to the solution of the present application and does not limit the terminal to which the solution is applied; a specific terminal may include more or fewer components than shown in FIG. 2, combine some components, or have a different component arrangement.
- In one embodiment, a living body discrimination method is provided, which for illustration is described as applied to the server shown in FIG. 2A or the terminal shown in FIG. 2B, and includes:
- Step 302 The terminal collects a multi-frame face image.
- The terminal may collect a video containing face images within a preset time, or collect a face image every preset interval, to obtain the multi-frame face image for processing.
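The interval-based collection just described can be sketched as follows. The function shape and the iterable frame source are illustrative assumptions; a real implementation would read frames from a camera (for example via OpenCV's VideoCapture), which the source does not prescribe.

```python
def sample_frames(frame_source, interval, count):
    """Take one frame every `interval` frames from a source until
    `count` frames are collected, approximating "collect a face
    image every preset time to obtain a multi-frame face image".

    `frame_source` is any iterable of frames; wrapping a capture
    device (e.g. iter(lambda: cap.read()[1], None) with OpenCV) is
    an assumed usage, not part of the source text."""
    frames = []
    for i, frame in enumerate(frame_source):
        if i % interval == 0:
            frames.append(frame)
            if len(frames) == count:
                break
    return frames
```

For example, sampling every 5th frame from a 100-frame stream yields 10 evenly spaced frames.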
- Step 304 The terminal extracts a face region for each frame of the face image.
- the terminal extracts a face region from each frame of the face image.
- The terminal may use an image integral map and the Adaboost method to extract a face region from each frame of the face image. The terminal quickly obtains Haar facial features by computing the image integral map, trains classifiers on training samples with the Adaboost algorithm using these Haar features, and classifies test samples with the resulting final classifier, thereby extracting the face region.
- The Haar feature is a commonly used feature descriptor in computer vision; a Haar feature value reflects the gray-level changes of an image. Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set, and then combine these weak classifiers into a stronger final classifier (strong classifier).
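The integral-image trick that makes Haar features fast to evaluate can be shown in a few lines. This is a minimal pure-Python illustration of the mechanism, not the patent's implementation; in practice a pretrained Adaboost cascade (for example OpenCV's CascadeClassifier loaded with a Haar cascade file) performs the full face detection.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0,y0)-(x1,y1) in O(1)."""
    total = ii[y1][x1]
    if x0 > 0: total -= ii[y1][x0 - 1]
    if y0 > 0: total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0: total += ii[y0 - 1][x0 - 1]
    return total

def haar_two_rect(ii, x0, y0, x1, y1):
    """A two-rectangle Haar feature: left half minus right half.
    One of several Haar templates, shown for illustration only."""
    xm = (x0 + x1) // 2
    return rect_sum(ii, x0, y0, xm, y1) - rect_sum(ii, xm + 1, y0, x1, y1)
```

Each rectangle sum takes four lookups regardless of rectangle size, which is what lets an Adaboost cascade evaluate thousands of Haar features per window quickly.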
- Step 306 The terminal acquires the illumination intensity of the face region, and calculates a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region.
- The face region is first divided into a plurality of face sub-regions. Specifically, a face registration algorithm is used to obtain face key points, such as the forehead, left eye, right eye, left cheek, and right cheek; according to these key points, the face region is segmented into sub-regions such as the forehead, left eye, right eye, left cheek, and right cheek. The illumination intensity of all pixels in each face sub-region is then obtained; if the difference in illumination intensity between any two pixels in a sub-region exceeds the preset light intensity threshold, that sub-region is segmented further, until the difference between any two pixels in each resulting sub-region does not exceed the threshold. This makes the face region segmentation finer, for example dividing the forehead region into left, middle, and right parts, as shown in FIG. 4.
- To reduce the influence of ambient light on facial skin color changes, and to accurately reflect the skin color change caused by blood oxygen saturation and blood volume variation in each sub-region, the face region is finely divided into a plurality of face sub-regions such that the illumination intensity at all positions within each sub-region can be approximated as a constant; that constant is taken as the illumination intensity of the sub-region.
- The illumination intensity of each face sub-region and the weight corresponding to each sub-region are then obtained, and the illumination intensities are weighted and summed by the corresponding weights; the summation result is the pulse feature corresponding to the face image.
- Step 308 The terminal establishes a pulse characteristic curve according to a pulse feature corresponding to each frame of the face image.
- The pulse feature corresponding to each frame of the face image is a single static value, and a living body cannot be distinguished from a non-living body by one static value alone. The pulse features corresponding to all collected face images are therefore connected into a line to establish the pulse characteristic curve, and properties of this curve, such as whether it changes periodically and, if so, its change period and maximum amplitude, are analyzed for the subsequent living body discrimination.
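The curve properties named above (change period, maximum amplitude) can be estimated as sketched below. The zero-crossing period estimate is an illustrative simplification; the source does not specify how the properties are computed, and a real system might use spectral analysis instead.

```python
def curve_properties(samples, frame_interval):
    """Estimate the properties inspected on a pulse characteristic
    curve: peak-to-peak amplitude and, if the curve oscillates, a
    rough change period from upward zero crossings of the
    mean-centred signal (an assumed stand-in for spectral analysis).
    `frame_interval` is the time between consecutive samples."""
    amplitude = max(samples) - min(samples)
    mean = sum(samples) / len(samples)
    centred = [s - mean for s in samples]
    # Count upward zero crossings; each marks one cycle.
    crossings = sum(1 for a, b in zip(centred, centred[1:]) if a < 0 <= b)
    period = len(samples) * frame_interval / crossings if crossings else None
    return amplitude, period
```

A flat "photo" curve yields an amplitude near zero and no period, while a live face yields a periodic curve whose period tracks the heartbeat.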
- Step 310: The terminal compares the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve. If the characteristic value of the pulse characteristic curve differs from that of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, it is determined that a living face image has been collected; otherwise, it is determined that a non-living face image has been collected.
- Non-living objects here are objects without life features, such as paper.
- A multi-frame non-living image is obtained by collecting video containing a non-living body within a preset time, or by collecting a non-living image at preset intervals. Since a non-living body has no skin color change caused by changes in blood volume and oxygen saturation, the light intensity at all positions of the non-living body tracks the ambient light: if the ambient light intensity is constant, the pre-stored standard non-living pulse characteristic curve is a straight line whose pulse feature value is close to the ambient light intensity; if the ambient light intensity changes, it is a curve whose pulse feature value approximates the ambient light intensity.
- For a non-living body, the blood oxygen saturation and blood volume are fixed and the apparent skin color does not change, so the light intensity of the non-living face region is constant across adjacent frames, and the pre-stored standard non-living pulse characteristic curve is flat and unchanging. If the characteristic value of the pulse characteristic curve obtained in the previous step differs from that of the pre-stored standard non-living curve by more than the preset feature threshold, the obtained curve is changing, and the face images corresponding to it are determined to be living face images; conversely, if the pulse characteristic curve is a single constant, the images are determined to be non-living face images.
- By calculating the pulse features from the illumination intensity of the face region and comparing the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve, living body determination is achieved without additional equipment, saving hardware cost; and since no user interaction is required to complete the determination, the detection rate of living body discrimination is improved.
- the foregoing step 306 includes:
- Step 316 The terminal divides the face area to obtain a face sub-area.
- the segmentation algorithm can be used to segment the face region.
- Face key points, such as the forehead, left eye, right eye, left cheek, and right cheek, are usually obtained with a face registration algorithm; according to these key points, the face region is segmented into sub-regions such as the forehead, left eye, right eye, left cheek, and right cheek.
- The terminal then applies the region segmentation algorithm to each face sub-region: it acquires the illumination intensity of all pixels in the sub-region and, if the difference in illumination intensity between any two pixels exceeds the preset light intensity threshold, continues to segment the sub-region until the difference between any two pixels in each resulting sub-region no longer exceeds the threshold.
- By checking whether the difference in illumination intensity between any two pixels in a face sub-region is within the preset light intensity threshold, the terminal determines how finely each sub-region must be subdivided, so that the light intensity at all positions in any face sub-region can be approximated as a constant. This reduces the effect of lighting on skin color changes, so that the skin color change caused by blood flow is accurately reflected.
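The subdivision rule above can be sketched as a recursive split on a 2-D intensity grid. Quadrant splitting and the pixel-bound interface are illustrative assumptions; the source only requires that splitting continue until the intensity spread inside each sub-region is within the threshold.

```python
def split_region(pixels, x0, y0, x1, y1, light_threshold, out):
    """Recursively split a face sub-region (given as inclusive pixel
    bounds on a 2-D intensity grid) until the difference between any
    two pixel intensities inside it is within the preset light
    threshold, as in the forehead -> left/middle/right example.
    Splitting into quadrants is an assumed geometry."""
    vals = [pixels[y][x] for y in range(y0, y1 + 1) for x in range(x0, x1 + 1)]
    if max(vals) - min(vals) <= light_threshold or (x0 == x1 and y0 == y1):
        out.append((x0, y0, x1, y1))  # uniform enough: keep as one sub-region
        return
    xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
    # Split along whichever axes still have room to split.
    xs = [(x0, xm), (xm + 1, x1)] if x1 > x0 else [(x0, x1)]
    ys = [(y0, ym), (ym + 1, y1)] if y1 > y0 else [(y0, y1)]
    for sy0, sy1 in ys:
        for sx0, sx1 in xs:
            split_region(pixels, sx0, sy0, sx1, sy1, light_threshold, out)
```

On a grid whose left half is dark and right half is bright, one split yields four sub-regions, each with a constant intensity; a uniform grid is kept whole.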
- Step 336: The terminal acquires the illumination intensity corresponding to each face sub-region and the weight corresponding to each sub-region, and calculates the pulse feature corresponding to each frame of the face image from these intensities and weights.
- Skin color changes differ across regions of the face: in regions where blood vessels are concentrated, the change is relatively obvious and the corresponding weight is relatively large; conversely, the weight is relatively small. The acquired illumination intensity of each face sub-region is weighted and summed according to the sub-region weights, and the pulse feature of each frame is obtained.
- The pulse feature can be calculated according to a weighted-sum formula in which n is the total number of sub-regions and G_i is the weight corresponding to sub-region i. The function I is an indicator function: within a certain time range, when the spread between the maximum and minimum intensity in sub-region i exceeds a certain threshold, that sub-region is ignored and does not participate in the pulse feature calculation.
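The formula itself does not survive in this text, but its stated ingredients (n sub-regions, weights G_i, and the indicator I) suggest a masked weighted sum; the sketch below is an assumed reconstruction, not the patent's exact formula.

```python
def pulse_feature(region_intensities, weights, history, stability_threshold):
    """Weighted sum of per-sub-region illumination intensities,
    skipping any region whose intensity swing over a recent time
    window exceeds a threshold (the indicator function I).

    `history[i]` holds recent intensity samples for sub-region i;
    this representation is an illustrative assumption."""
    total = 0.0
    for i, (intensity, g) in enumerate(zip(region_intensities, weights)):
        swing = max(history[i]) - min(history[i])
        if swing > stability_threshold:
            continue  # I = 0: region too affected by ambient light
        total += g * intensity  # I = 1: contribute G_i * intensity
    return total
```

With two regions weighted 0.5 each, a region whose recent intensities swing by 35 units is dropped when the stability threshold is 10, and included when it is 100.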
- The face sub-regions are obtained by segmenting the face region, and the pulse feature corresponding to each frame of the face image is calculated from the light intensity and weight corresponding to each sub-region; obtaining the pulse feature estimate by weighted summation improves the accuracy of the pulse feature calculation.
- an identity authentication method is provided, which is applied to a server as shown in FIG. 1, and the method includes:
- Step 602 The server receives a user identity authentication request sent by the terminal, where the user identity authentication request carries the user identifier.
- The user identity authentication request is a request to verify identity, sent by the user's terminal to the server. The user identifier identifies each user uniquely, and may be any of the user's ID number, instant messaging number, social account, email address, or mobile number.
- For example, the user places an ID card in a designated scannable area; the user's terminal scans the card to obtain the user identifier, i.e., the ID number, and after obtaining it successfully sends an identity authentication request to the server, so that the server can subsequently find the standard pulse characteristic curve corresponding to the user identifier in the database.
- Step 604 The server acquires a multi-frame face image of the user collected by the terminal according to the user identity authentication request.
- The server may compare the user identifier carried in the user identity authentication request with the user identifiers pre-stored on the server. If they match, the server holds the user information corresponding to the identifier, such as the user's standard pulse characteristic curve used later, and then acquires the multi-frame face image of the user collected by the terminal.
- The terminal may collect the user's multi-frame face image by recording video containing face images within a preset time, or by collecting a face image every preset interval.
- the server extracts a face region for each frame of the face image.
- the server needs to extract the face region from each frame of the face image.
- The server may use an image integral map and the Adaboost method to extract a face region from each frame of the face image. The server quickly obtains Haar facial features by computing the image integral map, trains classifiers on training samples with the Adaboost algorithm using these Haar features, and classifies test samples with the resulting final classifier, thereby extracting the face region.
- The Haar feature is a commonly used feature descriptor in computer vision; a Haar feature value reflects the gray-level changes of an image. Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set, and then combine these weak classifiers into a stronger final classifier (strong classifier).
- Step 608 The server acquires the illumination intensity of the face region, and calculates a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region.
- The server first divides the face region into a plurality of face sub-regions. Specifically, a face registration algorithm is used to obtain face key points such as the forehead, left eye, right eye, left cheek, and right cheek; according to these key points, the face region is segmented into the forehead, left eye, right eye, left cheek, and right cheek sub-regions, and the illumination intensity of all pixels in each face sub-region is obtained.
- If the difference in illumination intensity between any two pixels in a sub-region exceeds the preset light intensity threshold, the sub-region is segmented further until the threshold is no longer exceeded, making the face region segmentation finer, for example dividing the forehead region into left, middle, and right parts, as shown in FIG. 4.
- The face region is finely divided into a plurality of face sub-regions so that the illumination intensity at all positions within each sub-region can be approximated as a constant; that constant is the illumination intensity of the sub-region.
- The server obtains the light intensity of each face sub-region and the weight corresponding to each sub-region, and weights and sums the illumination intensities by their corresponding weights; the summation result is the pulse feature corresponding to the face image.
- If, within a certain time range, the spread between the maximum and minimum intensity in a face sub-region exceeds a certain threshold, the region is heavily affected by ambient light and is ignored, i.e., it does not participate in the pulse feature calculation.
- Step 610 The server establishes a pulse characteristic curve according to a pulse feature corresponding to each frame of the face image.
- The pulse feature corresponding to each frame of the face image is a single static value, and a real person cannot be distinguished from a picture by one static value alone. The pulse features corresponding to all collected face images are therefore connected into a line to establish the pulse characteristic curve, and properties of the curve, such as whether it changes periodically and, if so, its change period and maximum amplitude, are analyzed for the subsequent living body discrimination.
- Step 612: The server compares the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier. If the characteristic value of the pulse characteristic curve differs from that of the standard pulse characteristic curve within a preset range, the user identity authentication passes; otherwise, it fails.
- The standard pulse characteristic curve corresponding to the user identifier is pre-stored in the database. It may be built in advance by collecting the user's multi-frame face image, extracting the face region of each frame, obtaining the illumination intensity of the face region, calculating the pulse feature corresponding to each frame from that intensity, and establishing the pulse characteristic curve from those features; this curve is the user's standard pulse characteristic curve, stored in the database in correspondence with the user identifier and compared when authentication is required.
- The server compares the pulse characteristic curve obtained in the living body discrimination with the standard pulse characteristic curve pre-stored for the user identifier. As long as the difference between their characteristic values (such as change period and maximum amplitude) is within an acceptable range, the face image collected by the terminal is a living face image, and the user it depicts is the same person as the user corresponding to the identifier in the server database, i.e., identity verification passes.
- By comparing the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, the user needs no additional interaction and no additional instrument, which improves the efficiency of identity authentication and saves hardware cost.
- In an embodiment, before the server compares the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, the method further includes: the server compares the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve, and only if the characteristic value of the pulse characteristic curve differs from that of the standard non-living curve by more than a threshold does it perform the step of comparing the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier.
- the subsequent identity authentication is performed only after living body discrimination has ruled out impersonation with a still image, and the identity authentication itself compares the currently collected pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, which eliminates the possibility of impersonation; the identity authentication method is therefore highly secure.
- the foregoing step 608 includes:
- Step 618: the server segments the face region to obtain face sub-regions.
- a segmentation algorithm can be used to segment the face region. In practice, face key points such as the forehead, the left eye, the right eye, the left side of the cheek and the right side of the cheek are usually obtained by a face registration algorithm, and the face region is divided according to these key points into sub-regions for the forehead, the left eye, the right eye, the left side of the cheek and the right side of the cheek.
- the illumination intensity of all the pixels in a face sub-region is obtained. If the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, segmentation of the face sub-region continues until the illumination intensity difference between any two pixels in every resulting face sub-region no longer exceeds the preset light intensity threshold.
- checking whether the illumination intensity difference between any two pixels in a face sub-region is within the preset light intensity threshold determines the degree of subdivision, so that the illumination intensity at all positions within any single face sub-region can be approximated as a constant. This reduces the effect of lighting on skin color changes, so that the skin color changes caused by blood flow are accurately reflected.
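The iterative subdivision described above can be sketched as follows. This is a minimal illustration, not the application's actual algorithm: the split strategy (halving along the longer axis) and the threshold value are assumptions made for the example.

```python
import numpy as np

def subdivide(region, threshold=10.0):
    """Recursively split a 2-D array of illumination intensities until the
    intensity spread (max - min) inside every resulting sub-region is within
    `threshold`, so each sub-region's lighting is approximately constant."""
    if region.size == 0:
        return []
    if region.max() - region.min() <= threshold:
        return [region]
    h, w = region.shape
    # split along the longer axis into two halves and recurse on each
    if h >= w:
        halves = [region[: h // 2, :], region[h // 2 :, :]]
    else:
        halves = [region[:, : w // 2], region[:, w // 2 :]]
    out = []
    for half in halves:
        out.extend(subdivide(half, threshold))
    return out
```

Every sub-region returned by this sketch satisfies the light-intensity criterion, mirroring the "continue segmenting until the difference no longer exceeds the threshold" behavior.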
- Step 638: the server obtains the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculates the pulse feature corresponding to each frame of the face image according to these illumination intensities and weights.
- the server performs a weighted summation of the acquired illumination intensities of the face sub-regions, using the weight corresponding to each face sub-region, and obtains the pulse feature corresponding to each frame of the face image.
- the server obtains face sub-regions by segmenting the face region, and then calculates the pulse feature corresponding to each frame of the face image according to the illumination intensity and the weight corresponding to each face sub-region. In this way, an estimate of the pulse feature is obtained by weighted summation, which improves the accuracy of the pulse feature calculation.
- another identity authentication method comprising:
- Step 802 The server receives a user identity authentication request sent by the terminal, where the user identity authentication request carries the user identifier.
- the user identity authentication request is a request for verifying identity sent by the terminal where the user is located to the server;
- the user identifier is a unique credential that distinguishes each user, such as the user's phone number, a social account number, or an email address.
- Step 804 The server acquires a multi-frame face image of the user collected by the terminal according to the user identity authentication request.
- the present embodiment performs identity authentication based on face images. Living body discrimination needs to be performed first, and living body discrimination requires feature analysis across multiple images, so a multi-frame face image is collected first.
- Step 806 The server extracts a face region for each frame of the face image.
- the server needs to extract the face region from each frame of the face image.
- in one embodiment, the server may use an image integral map and the AdaBoost method to extract the face region from each frame of the image.
- the Haar facial features are obtained quickly by computing the image integral map.
- the training samples are classified by the AdaBoost classification algorithm, and the final classifier obtained from training is used to classify the test samples, thereby extracting the face region.
- the Haar feature is a commonly used feature description operator in the field of computer vision; the Haar eigenvalue reflects the gray-level change of the image. AdaBoost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (strong classifier).
- Step 808: the server segments the face region to obtain face sub-regions.
- a segmentation algorithm can be used to segment the face region. In practice, face key points such as the forehead, the left eye, the right eye, the left side of the cheek and the right side of the cheek are usually obtained by a face registration algorithm, and the face region is divided according to these key points into sub-regions for the forehead, the left eye, the right eye, the left side of the cheek and the right side of the cheek.
- Step 810: The server acquires the illumination intensity of all the pixels in each face sub-region. If the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, segmentation of the face sub-region continues; otherwise, segmentation of the face sub-region stops.
- in this way the degree of subdivision of the face sub-regions is determined, so that the illumination intensity at all positions in any single face sub-region can be approximated as a constant. This reduces the effect of lighting on skin color changes, so that the skin color changes caused by blood flow are accurately reflected.
- Step 812 The server obtains the light intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculates a pulse feature corresponding to each frame face image according to the illumination intensity corresponding to the face sub-region and the weight corresponding to the face sub-region.
- the skin color changes differ between regions of the face: in regions where the skin color change is relatively obvious, the corresponding weight is relatively large; conversely, the corresponding weight is relatively small.
- the obtained illumination intensity of each face sub-region is weighted and summed according to the weight corresponding to each face sub-region, yielding the pulse feature corresponding to each frame of the face image. The specific calculation formula is a weighted sum over the face sub-regions, with the following terms:
- n is the total number of sub-regions;
- G i is the weight corresponding to each sub-region; and
- the function I is an indicator function: within a certain time range, when the difference between the maximum intensity and the minimum intensity in region i exceeds a certain threshold, the region is ignored and does not participate in the calculation of the pulse feature.
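A minimal sketch of this weighted summation, including the indicator behavior described above; the function signature, the time-window representation, and the threshold value are all illustrative assumptions:

```python
def pulse_feature(region_intensities, weights, history, threshold=30.0):
    """Weighted sum of per-sub-region illumination intensities.

    region_intensities: current mean intensity L_i of each face sub-region.
    weights:            weight G_i of each sub-region.
    history:            per-region list of recent intensities (time window);
                        a region whose max-min spread over the window exceeds
                        `threshold` is ignored (indicator function I = 0).
    """
    feature = 0.0
    for L, G, past in zip(region_intensities, weights, history):
        window = past + [L]
        if max(window) - min(window) > threshold:
            continue  # indicator zeroes out unstable regions
        feature += G * L
    return feature
```

For example, a sub-region whose intensity jumped from 10 to 100 within the window would be skipped entirely, while stable sub-regions contribute their weighted intensity to the per-frame pulse feature.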
- Step 814 The server establishes a pulse characteristic curve according to a pulse feature corresponding to each frame of the face image.
- the pulse feature corresponding to a single frame of the face image is a static value, and a real person cannot be distinguished from a picture by looking at one static value alone.
- therefore the pulse features corresponding to all the collected face images are connected into a line to establish a pulse characteristic curve. Subsequent living body discrimination is then performed by analyzing properties of the pulse characteristic curve, such as whether it changes periodically and, if so, its change period, maximum amplitude, and so on.
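Extracting the change period and maximum amplitude from such a curve might be sketched as follows. The FFT-based period estimate and the `fps` parameter are assumptions made for illustration; the application does not specify how these properties are computed:

```python
import numpy as np

def curve_properties(samples, fps=30.0):
    """Estimate the peak-to-peak amplitude and the dominant period (in
    seconds) of a pulse characteristic curve sampled at `fps` frames/s."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()                      # remove the DC offset
    amplitude = x.max() - x.min()         # maximum amplitude of the curve
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    k = spectrum[1:].argmax() + 1         # strongest non-DC frequency bin
    period = 1.0 / freqs[k] if freqs[k] > 0 else float("inf")
    return amplitude, period
```

A roughly periodic curve (a living face) yields a finite period near the heart rate, while a flat curve (a photo) has near-zero amplitude, which is what the comparison in the next step exploits.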
- Step 816: The server compares the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve. If the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, it is determined that the collected image is a living face image.
- Non-living objects here are objects without life features, such as paper sheets.
- a multi-frame non-living image is obtained by collecting a video containing a non-living body within a preset time period, or by collecting non-living images at preset times. Since a non-living body exhibits no skin color change caused by changes in blood volume and oxygen saturation, the light intensity at all positions of the non-living region is constant.
- if the ambient light intensity is constant, the pre-stored standard non-living pulse characteristic curve is a straight line whose pulse characteristic value is close to the ambient light intensity; if the ambient light intensity changes, the pre-stored standard non-living pulse characteristic curve is a curve whose pulse characteristic value approximates the ambient light intensity.
- for a non-living body, the oxygen saturation in the blood vessels and the blood volume are fixed and the corresponding skin color does not change, so the light intensity in the non-living face region is constant across adjacent frames, and the pre-stored standard non-living pulse characteristic curve shows no change. If the pulse characteristic curve obtained in the previous step does change, so that its characteristic value differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, the face image corresponding to that pulse characteristic curve is a living face image.
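The decision in step 816 can be sketched as a simple amplitude comparison against the flat non-living baseline. The specific feature chosen (peak-to-peak amplitude) and the threshold value are illustrative assumptions:

```python
def is_live(curve, nonliving_baseline, feature_threshold=5.0):
    """Liveness check: compare the peak-to-peak amplitude of the measured
    pulse characteristic curve against the pre-stored non-living baseline,
    which under constant ambient light is essentially flat."""
    measured = max(curve) - min(curve)
    baseline = max(nonliving_baseline) - min(nonliving_baseline)
    # a living face produces periodic skin-color changes, so its amplitude
    # differs from the flat non-living curve by more than the threshold
    return abs(measured - baseline) > feature_threshold
```

In a full implementation other characteristic values (such as the change period) would be compared as well, per the description above.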
- Step 818: the server compares the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier. If the difference between the characteristic value of the pulse characteristic curve and the characteristic value of the standard pulse characteristic curve is within a preset range, the user identity authentication passes; otherwise, the user identity authentication fails.
- the standard pulse characteristic curve corresponding to the user identifier is pre-stored in the database, and the pulse characteristic curve obtained during living body discrimination is compared with it. As long as the difference between their characteristic values (such as the change period and the maximum amplitude) is within an acceptable range, the user corresponding to the collected image is the same person as the user corresponding to the user identifier in the database, that is, the authentication passes.
- the database here can be an online database or a local database.
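The per-feature comparison in step 818 can be sketched as follows; the feature names and tolerance values are illustrative, since the application only names the change period and maximum amplitude as examples:

```python
def authenticate(curve_features, stored_features, tolerances):
    """Compare feature values of the measured pulse characteristic curve
    against the standard curve pre-stored for the user identifier; the
    authentication passes only if every difference is within its preset
    range."""
    for key, tol in tolerances.items():
        if abs(curve_features[key] - stored_features[key]) > tol:
            return False
    return True
```

For example, a measured curve whose period and amplitude both fall within the stored user's tolerances passes, while a curve from a different person (or a replayed photo) is expected to fall outside at least one range.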
- the subsequent identity authentication is performed only after living body discrimination has ruled out impersonation with a still image, and the identity authentication itself compares the currently collected pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, which eliminates the possibility of impersonation; the identity authentication method is therefore highly secure.
- a terminal is also provided, the internal structure of which can be as shown in FIG. 2B, and each of the following modules can be implemented in whole or in part by software, hardware or a combination thereof.
- the terminal includes a living body discriminating device, and the device includes:
- the face image collecting module 902 is configured to collect a multi-frame face image.
- the face region extraction module 904 is configured to extract a face region for each frame of the face image.
- the pulse feature calculation module 906 is configured to acquire the illumination intensity of the face region, and calculate a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region.
- the pulse characteristic curve establishing module 908 is configured to establish a pulse characteristic curve according to a pulse feature corresponding to each frame of the face image.
- the living body discriminating module 910 is configured to compare the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve. If the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, it is determined that a living face image was collected; otherwise, it is determined that a non-living face image was collected.
- the pulse feature calculation module 906 is configured to segment the face region to obtain face sub-regions, obtain the illumination intensity and the weight corresponding to each face sub-region, and calculate the pulse feature corresponding to each frame of the face image according to those illumination intensities and weights.
- the pulse feature calculation module 906 is configured to acquire the illumination intensity of all the pixels in a face sub-region; if the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, segmentation of the face sub-region continues; otherwise, segmentation of the face sub-region stops.
- a server is provided, the internal structure of which may be as shown in FIG. 2A, and each of the following modules may be implemented in whole or in part by software, hardware or a combination thereof.
- the server includes an identity authentication device, the device comprising:
- the identity authentication request receiving module 1002 is configured to receive a user identity authentication request, where the user identity authentication request carries the user identifier.
- the face image collecting module 1004 is configured to receive a multi-frame face image of the user collected by the terminal according to the user identity authentication request.
- the face region extraction module 1006 is configured to extract a face region for each frame of the face image.
- the pulse feature calculation module 1008 is configured to acquire the illumination intensity of the face region, and calculate a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region.
- the pulse characteristic curve establishing module 1010 is configured to establish a pulse characteristic curve according to a pulse feature corresponding to each frame of the face image.
- the identity authentication module 1012 is configured to compare the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier. If the difference between the characteristic value of the pulse characteristic curve and the characteristic value of the standard pulse characteristic curve is within a preset range, the user identity authentication passes; otherwise, the user identity authentication fails.
- the pulse feature calculation module 1008 is configured to segment the face region to obtain face sub-regions, obtain the illumination intensity and the weight corresponding to each face sub-region, and calculate the pulse feature corresponding to each frame of the face image according to those illumination intensities and weights.
- the identity authentication device in the server further includes a living body discriminating module 1011, configured to compare the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve; if the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, it is determined that a living face image was collected.
- the pulse feature calculation module 1008 is configured to acquire the illumination intensity of all the pixels in a face sub-region; if the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, segmentation of the face sub-region continues until the illumination intensity difference between any two pixels in each resulting face sub-region no longer exceeds the preset light intensity threshold.
- the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
Description
This application claims priority to Chinese Patent Application No. 201610992121.8, filed with the Chinese Patent Office on November 10, 2016 and entitled "Living Body Discrimination Method and Apparatus, and Identity Authentication Method and Apparatus", the entire contents of which are incorporated herein by reference.
The present application relates to the field of computer technology, and in particular to a living body discrimination method, an identity authentication method, a terminal, a server, and a storage medium.
With the ever-increasing demand for information security, face liveness discrimination has been widely applied in fields such as face-based access control and financial identity verification, to prevent users from using illegitimate photos to pass face recognition systems and, for example, have machines automatically register bank accounts on a large scale.
Traditional living body discrimination techniques usually require certain interactions in practical application scenarios, such as shaking the head or blinking, and distinguish real people from photos through the positional movement of a certain point on the face. Traditional optical plethysmography uses close contact: the blood volume change at the extremities of the human body is detected by additional instruments, the person's pulse is estimated, and real people are distinguished from photos according to the change of the pulse.
However, such traditional living body discrimination requires user interaction; the liveness check passes only if the user performs the correct interaction as prompted, resulting in a low detection rate. Traditional optical plethysmography requires additional instruments and equipment, which makes the hardware cost high.
Summary of the Invention
According to various embodiments of the present application, a living body discrimination method, an identity authentication method, a terminal, a server, and a storage medium are provided.
A living body discrimination method, the method comprising:
a terminal collecting a multi-frame face image;
the terminal extracting a face region from each frame of the face image;
the terminal acquiring the illumination intensity of the face region, and calculating a pulse feature corresponding to each frame of the face image according to the illumination intensity of the face region;
the terminal establishing a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image; and
the terminal comparing the pulse characteristic curve with a pre-stored standard non-living pulse characteristic curve; if the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than a preset feature threshold, determining that a living face image was collected, and otherwise determining that a non-living face image was collected.
An identity authentication method, the method comprising:
a server receiving a user identity authentication request sent by a terminal, the user identity authentication request carrying a user identifier;
the server acquiring, according to the user identity authentication request, a multi-frame face image of the user collected by the terminal;
the server extracting a face region from each frame of the face image;
the server acquiring the illumination intensity of the face region, and calculating a pulse feature corresponding to each frame of the face image according to the illumination intensity of the face region;
the server establishing a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image; and
the server comparing the pulse characteristic curve with a pre-stored standard pulse characteristic curve corresponding to the user identifier; if the difference between the characteristic value of the pulse characteristic curve and the characteristic value of the standard pulse characteristic curve is within a preset range, the user identity authentication passes, and otherwise the user identity authentication fails.
A terminal, comprising a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the following steps:
collecting a multi-frame face image;
extracting a face region from each frame of the face image;
acquiring the illumination intensity of the face region, and calculating a pulse feature corresponding to each frame of the face image according to the illumination intensity of the face region;
establishing a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image; and
comparing the pulse characteristic curve with a pre-stored standard non-living pulse characteristic curve; if the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than a preset feature threshold, determining that a living face image was collected, and otherwise determining that a non-living face image was collected.
A server, comprising a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the following steps:
receiving a user identity authentication request sent by a terminal, the user identity authentication request carrying a user identifier;
acquiring, according to the user identity authentication request, a multi-frame face image of the user collected by the terminal;
extracting a face region from each frame of the face image;
acquiring the illumination intensity of the face region, and calculating a pulse feature corresponding to each frame of the face image according to the illumination intensity of the face region;
establishing a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image;
comparing the pulse characteristic curve with a pre-stored standard pulse characteristic curve corresponding to the user identifier; and
performing user identity authentication according to the comparison result: if the difference between the characteristic value of the pulse characteristic curve and the characteristic value of the standard pulse characteristic curve is within a preset range, the user identity authentication passes, and otherwise the user identity authentication fails.
One or more non-volatile readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
collecting a multi-frame face image;
extracting a face region from each frame of the face image;
acquiring the illumination intensity of the face region, and calculating a pulse feature corresponding to each frame of the face image according to the illumination intensity of the face region;
establishing a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image; and
comparing the pulse characteristic curve with a pre-stored standard non-living pulse characteristic curve; if the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than a preset feature threshold, determining that a living face image was collected, and otherwise determining that a non-living face image was collected.
One or more non-volatile readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the following steps:
receiving a user identity authentication request, the user identity authentication request carrying a user identifier;
acquiring, according to the user identity authentication request, a multi-frame face image of the user collected by a terminal;
extracting a face region from each frame of the face image;
acquiring the illumination intensity of the face region, and calculating a pulse feature corresponding to each frame of the face image according to the illumination intensity of the face region;
establishing a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image; and
comparing the pulse characteristic curve with a pre-stored standard pulse characteristic curve corresponding to the user identifier; if the difference between the characteristic value of the pulse characteristic curve and the characteristic value of the standard pulse characteristic curve is within a preset range, the user identity authentication passes, and otherwise the user identity authentication fails.
Details of one or more embodiments of the present application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the application will become apparent from the specification, the drawings, and the claims.
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of the living body discrimination method and the identity authentication method in an embodiment;
FIG. 2A is a diagram of the internal structure of a server in an embodiment;
FIG. 2B is a diagram of the internal structure of a terminal in an embodiment;
FIG. 3 is a flowchart of a living body discrimination method in an embodiment;
FIG. 4 is a diagram of the segmentation result of the forehead region of the face in FIG. 3 in an embodiment;
FIG. 5 is a flowchart of the pulse feature calculation method of FIG. 3 in an embodiment;
FIG. 6 is a flowchart of an identity authentication method in an embodiment;
FIG. 7 is a flowchart of the pulse feature calculation method of FIG. 6 in an embodiment;
FIG. 8 is a flowchart of an identity authentication method in another embodiment;
FIG. 9 is a structural block diagram of a terminal in an embodiment;
FIG. 10 is a structural block diagram of a server in an embodiment;
FIG. 11 is a structural block diagram of a server in another embodiment.
In order to make the objects, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit it.
The living body discrimination method provided by the embodiments of the present application can be applied to the environment shown in FIG. 1. Referring to FIG. 1, the server 102 can receive the multi-frame face image collected by the terminal 104 and process it. Specifically, the server 102 communicates with the terminal 104 through a network, receives the collected multi-frame face image sent by the terminal 104, extracts the face region of each frame of the face image, acquires the illumination intensity of the face region, calculates the pulse feature corresponding to each frame of the face image and establishes a pulse characteristic curve, compares the established pulse characteristic curve with the standard non-living pulse characteristic curve, performs living body discrimination according to the comparison result, and sends the determination result to the terminal 104. The terminal here includes, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers, portable wearable devices, and the like. It should be noted that in other embodiments, the terminal that collects the multi-frame face image may also process the multi-frame face image directly to determine whether the image is of a living body.
The identity authentication method provided by the embodiments of the present application can also be applied to the environment shown in FIG. 1. Referring to FIG. 1, the server 102 can receive the user identity authentication request sent by the terminal 104, and can also return the user identity authentication result to the terminal 104. Specifically, the server 102 communicates with the terminal 104 through a network, receives the user identity authentication request sent by the terminal 104 and the multi-frame face image of the user collected by the terminal 104 according to the user identity authentication request, extracts the face region of each frame of the face image, acquires the illumination intensity of the face region, calculates the pulse feature corresponding to each frame of the face image and establishes a pulse characteristic curve, compares the established pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, performs identity authentication according to the comparison result, and sends the identity authentication result to the terminal 104. The terminal here includes, but is not limited to, various personal computers, notebook computers, smartphones, tablet computers, portable wearable devices, and the like. In other embodiments, the terminal 104 may also verify the user's identity directly upon obtaining the user identity authentication request. Specifically, the terminal 104 may directly process the collected multi-frame face image of the user to obtain the user's pulse characteristic curve, and compare it with the user's standard pulse characteristic curve to obtain the user identity verification result.
In one embodiment, as shown in FIG. 2A, a server is also provided. The server includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus. The non-volatile storage medium stores an operating system and computer-readable instructions that, when executed, implement a living body discrimination method or an identity authentication method. The processor provides computing and control capabilities and supports the operation of the entire server. The internal memory provides a running environment for the living body discrimination apparatus or identity authentication apparatus in the non-volatile storage medium; it may store computer-readable instructions that, when executed by the processor, cause the processor to perform a living body discrimination method or an identity authentication method. The network interface is used for network communication with the terminal, i.e., for receiving or sending data, for example receiving face images sent by the terminal and sending living body discrimination results to the terminal, or receiving an identity authentication request and the collected face images sent by the terminal and sending identity authentication results to the terminal.
In one embodiment, as shown in FIG. 2B, a terminal is also provided. The terminal includes a processor, a non-volatile storage medium, an internal memory, a network interface, and a display screen connected through a system bus. The non-volatile storage medium stores an operating system and computer-readable instructions that, when executed, implement a living body discrimination method or an identity authentication method. The processor provides computing and control capabilities and supports the operation of the entire terminal. The internal memory provides a running environment for the living body discrimination apparatus in the non-volatile storage medium; it may store computer-readable instructions that, when executed by the processor, cause the processor to perform a living body discrimination method. The network interface is used for network communication with the server, i.e., for receiving or sending data, for example receiving a pulse feature comparison result sent by the server. The display screen of the terminal may be a liquid crystal display or an electronic ink display; the input device of the terminal may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the terminal housing, or an external keyboard, touchpad, or mouse. The terminal may be a personal computer, a mobile terminal, or a wearable device; a mobile terminal is, for example, a mobile phone, a tablet, or a personal digital assistant. Those skilled in the art will understand that the structure shown in FIG. 2 is only a block diagram of part of the structure related to the solution of the present application and does not limit the terminal to which the solution is applied; a specific terminal may include more or fewer components than shown in FIG. 2, combine certain components, or have a different arrangement of components.
As shown in FIG. 3, in one embodiment, a living body discrimination method is provided. The method is illustrated as applied to the server shown in FIG. 2A or the terminal shown in FIG. 2B, and includes:
Step 302: The terminal collects multiple frames of face images.

In this embodiment, multiple frames of face images are processed because a pulse characteristic curve needs to be obtained. Specifically, the terminal may collect a video containing face images within a preset time, or collect a face image at preset intervals, to obtain the multiple frames of face images.
Step 304: For each frame of face image, the terminal extracts the face region.

In a face image, the face region best reflects the pulse characteristics of a living body. Therefore, in this embodiment, the terminal extracts the face region from each frame of face image.
Specifically, the terminal may use an image integral map and the Adaboost method to extract the face region from each frame of face image. This includes: the terminal computes the image integral map to quickly obtain Haar face features; based on the Haar face features, it uses the Adaboost classification algorithm to train on training samples and uses the resulting final classifier to classify test samples, thereby extracting the face region. Here, the Haar feature is a feature descriptor commonly used in computer vision, and the Haar feature value reflects the grayscale variation of the image; Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (a strong classifier).
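The integral-map step above can be sketched as follows. This is a generic illustration of how an integral image lets a rectangular (Haar-like) sum be evaluated with four lookups, not the patent's exact implementation; the function names and the simple two-rectangle feature are illustrative choices, and a real detector (e.g. an Adaboost cascade) combines thousands of such features.

```python
# Sketch: integral image + one two-rectangle Haar-like feature.

def integral_image(img):
    """img: 2D list of gray values. Returns ii with
    ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of pixels in the inclusive rectangle (x0,y0)-(x1,y1),
    computed with at most four lookups in the integral image."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    left = rect_sum(ii, x, y, x + half - 1, y + h - 1)
    right = rect_sum(ii, x + half, y, x + w - 1, y + h - 1)
    return left - right
```

The constant-time rectangle sum is what makes evaluating very many Haar features per window cheap enough for real-time detection.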
Step 306: The terminal obtains the illumination intensity of the face region and, based on it, calculates the pulse feature corresponding to each frame of face image.
In this embodiment, the face region is first divided into multiple face sub-regions. Specifically: a face registration algorithm is used to obtain face key points, such as points on the forehead, left eye, right eye, left cheek, and right cheek; based on these key points, the face region is segmented into face sub-regions such as the forehead, left eye, right eye, left cheek, and right cheek. The illumination intensities of all pixels in each face sub-region are then obtained; if the difference in illumination intensity between any two pixels in a face sub-region exceeds a preset light intensity threshold, the face sub-region is split further, until in every resulting face sub-region the difference in illumination intensity between any two pixels does not exceed the preset light intensity threshold. This makes the segmentation of the face region finer, for example dividing the forehead region into left, middle, and right parts, as shown in FIG. 4.
In this embodiment, to reduce the influence of ambient illumination on face skin color changes, so that the skin color changes of a face sub-region caused by changes in blood oxygen saturation and blood volume are accurately reflected, the face region is finely divided into multiple face sub-regions, so that the illumination intensity at all positions within each face sub-region can be approximated as a constant; the illumination intensity of that face sub-region is then this constant.
Next, the illumination intensity of each face sub-region and the weight corresponding to each face sub-region are obtained, and the illumination intensities of the face sub-regions are summed, each weighted by its corresponding weight; the result of this weighted sum is the pulse feature corresponding to the face image. If, within a given time range, the difference between the maximum and minimum intensity of a face sub-region exceeds a certain threshold, that sub-region is strongly affected by ambient illumination and is therefore ignored, i.e., it does not participate in the calculation of the pulse feature.
Step 308: The terminal builds a pulse characteristic curve from the pulse features corresponding to the individual frames of face images.

The pulse feature corresponding to a single frame of face image is a static value, and a single static value cannot distinguish a living body from a non-living body. Connecting the pulse features corresponding to all the collected face images into a line yields a pulse characteristic curve; analyzing attributes of this curve, such as whether it varies periodically and, if so, what its variation period and maximum amplitude are, enables the subsequent living body discrimination.
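The curve attributes mentioned above (variation period, maximum amplitude) could be estimated from the sampled curve as in the following sketch. The mean-crossing period estimator and the peak-to-peak amplitude are illustrative choices; the description does not fix a particular estimator.

```python
# Sketch: estimating (variation period, maximum amplitude) from a
# sampled pulse characteristic curve. Illustrative only.

def curve_attributes(samples, frame_interval):
    """samples: per-frame pulse feature values; frame_interval:
    seconds between frames. Returns (period_seconds, max_amplitude);
    period_seconds is None when no periodic variation is detected."""
    mean = sum(samples) / len(samples)
    # Peak-to-peak amplitude of the curve.
    max_amplitude = max(samples) - min(samples)
    # Indices where the curve crosses its mean upward.
    crossings = [
        i for i in range(1, len(samples))
        if samples[i - 1] < mean <= samples[i]
    ]
    if len(crossings) < 2:
        return None, max_amplitude
    cycles = len(crossings) - 1
    period = (crossings[-1] - crossings[0]) * frame_interval / cycles
    return period, max_amplitude
```

A flat (non-living) curve yields no mean crossings and near-zero amplitude, while a living curve yields a period on the order of a heartbeat interval.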
Step 310: The terminal compares the pulse characteristic curve with a pre-stored standard non-living pulse characteristic curve. If the characteristic value of the pulse characteristic curve differs from that of the pre-stored standard non-living pulse characteristic curve by more than a preset feature threshold, the terminal determines that the collected images are living face images; otherwise, it determines that the collected images are non-living face images.
A non-living body here is an object without vital signs, for example a sheet of paper. Multiple frames of non-living images are obtained by collecting a video containing the non-living body within a preset time, or by capturing an image of the non-living body at preset intervals. Since a non-living body shows no skin color change caused by changes in blood volume and oxygen saturation, the illumination intensity at all positions of the non-living body is constant. Within the preset time, if the ambient illumination intensity remains unchanged, the pre-stored standard non-living pulse characteristic curve is a straight line whose pulse feature value approximates that ambient illumination intensity; if the ambient illumination intensity changes, the pre-stored standard non-living pulse characteristic curve is a curve whose pulse feature value approximates the variation of the ambient illumination intensity.
Because the oxygen saturation in the blood vessels and the blood volume of a non-living body are fixed, its skin color does not change, so the illumination intensity in the non-living face region is constant across adjacent frames; the pre-stored standard non-living pulse characteristic curve is therefore flat and unvarying. If the characteristic value of the pulse characteristic curve obtained in the previous step differs from that of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, the obtained pulse characteristic curve varies, and the face images corresponding to it can be determined to be living face images; otherwise, the pulse characteristic curve is flat and unvarying, i.e., the images are non-living face images.
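A minimal sketch of the comparison in step 310, assuming the peak-to-peak amplitude of the curve is used as its characteristic value (the description leaves the exact characteristic value open, mentioning variation period and maximum amplitude as examples):

```python
# Sketch of step 310: compare a characteristic value of the measured
# pulse curve with that of the stored standard non-living curve.
# Using peak-to-peak amplitude as the characteristic value is an
# illustrative assumption.

def is_live(pulse_curve, nonlive_curve, feature_threshold):
    """Both curves are lists of per-frame pulse feature values.
    A living face shows periodic variation, so its amplitude should
    differ from the nearly flat non-living baseline by more than
    feature_threshold."""
    def amplitude(curve):
        return max(curve) - min(curve)
    return abs(amplitude(pulse_curve) - amplitude(nonlive_curve)) > feature_threshold
```

Under constant ambient illumination the non-living baseline is nearly flat, so its amplitude is close to zero and any genuinely pulsating curve clears the threshold.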
In this embodiment, the pulse feature corresponding to a face image is calculated from the illumination intensity of the face region, and living body determination is achieved by comparing the resulting pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve. No additional instruments are needed to estimate the pulse feature, which saves hardware cost, and no user interaction is needed to complete the living body determination, which improves the detection rate of living body discrimination.
In one embodiment, as shown in FIG. 5, the above step 306 includes:

Step 316: The terminal segments the face region to obtain face sub-regions.
In this embodiment, a region segmentation algorithm is used to segment the face region. Before segmentation, however, face key points are usually obtained first with a face registration algorithm, such as points on the forehead, left eye, right eye, left cheek, and right cheek; based on these key points, the face region is segmented into face sub-regions such as the forehead, left eye, right eye, left cheek, and right cheek.
In one embodiment, after the terminal applies the region segmentation algorithm to the face region, the terminal obtains the illumination intensities of all pixels in each face sub-region; if the difference in illumination intensity between any two pixels exceeds a preset light intensity threshold, the terminal continues to split the face sub-region, until in every resulting face sub-region the difference in illumination intensity between any two pixels does not exceed the preset light intensity threshold. In this embodiment, the terminal decides how finely to subdivide a face sub-region by checking whether the difference in illumination intensity between any two pixels in it is within the preset light intensity threshold, so that the illumination intensity at all positions of any face sub-region can be approximated as a constant. This reduces the influence of illumination on skin color changes, so that the skin color changes caused by blood flow are accurately reflected.
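The stopping criterion above can be sketched as follows. Splitting a region into halves along its longer axis is an illustrative choice; the patent fixes only the stopping criterion (no two pixels in a sub-region may differ in intensity by more than the threshold).

```python
# Sketch of the subdivision rule: keep splitting a face sub-region
# until no two pixels in it differ in illumination intensity by more
# than the preset threshold.

def split_until_uniform(region, threshold):
    """region: 2D list of pixel intensities. Returns a list of
    sub-regions (2D lists) each satisfying max - min <= threshold."""
    flat = [p for row in region for p in row]
    if max(flat) - min(flat) <= threshold:
        return [region]                       # already uniform enough
    h, w = len(region), len(region[0])
    if w >= h and w > 1:                      # split into left/right halves
        mid = w // 2
        halves = ([row[:mid] for row in region],
                  [row[mid:] for row in region])
    elif h > 1:                               # split into top/bottom halves
        halves = (region[:h // 2], region[h // 2:])
    else:                                     # single pixel: cannot split
        return [region]
    result = []
    for half in halves:
        result.extend(split_until_uniform(half, threshold))
    return result
```

A uniform patch is returned unchanged, while a patch with an illumination gradient is recursively subdivided, which is the behavior the forehead-into-three-parts example in FIG. 4 illustrates.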
Step 336: The terminal obtains the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculates the pulse feature corresponding to each frame of face image from these illumination intensities and weights.
Because facial blood vessels are unevenly distributed, the skin color changes in different regions of the face vary in strength: in regions where blood vessels are concentrated, the skin color change is relatively obvious and the corresponding weight is relatively large; conversely, the corresponding weight is relatively small. In this embodiment, the obtained illumination intensities of the face sub-regions are summed, each weighted by the weight corresponding to its face sub-region, to obtain the pulse feature corresponding to each frame of face image.
In one embodiment, the pulse feature can be calculated according to the following formula:
s = Σ_{i=1}^{n} G_i · I(i) · L_i

where s is the pulse feature, n is the total number of sub-regions, G_i is the weight corresponding to sub-region i, L_i is the illumination intensity of sub-region i, and I is an indicator function: if, within a given time range, the difference between the maximum and minimum intensity of sub-region i exceeds a certain threshold, I(i) = 0 and the sub-region is ignored, i.e., it does not participate in the calculation of the pulse feature; otherwise I(i) = 1.
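The weighted sum can be sketched in code as follows. The per-region data layout (one dict per sub-region, with an intensity history over the observation window feeding the indicator) is an illustrative assumption consistent with the description; the patent does not prescribe a data structure.

```python
# Sketch of the weighted pulse-feature sum. Each sub-region i
# contributes its illumination intensity L_i weighted by G_i, and the
# indicator drops any sub-region whose intensity swing over the
# observation window exceeds the ambient-light threshold.

def pulse_feature(regions, ambient_threshold):
    """regions: list of dicts with keys
         'weight'    - G_i for sub-region i,
         'intensity' - L_i, current illumination intensity,
         'history'   - intensities of this sub-region over the window.
    Returns the pulse feature s = sum_i G_i * I(i) * L_i."""
    s = 0.0
    for r in regions:
        swing = max(r['history']) - min(r['history'])
        if swing > ambient_threshold:   # I(i) = 0: region ignored
            continue
        s += r['weight'] * r['intensity']   # I(i) = 1
    return s
```

Sub-regions with large intensity swings are the ones most affected by ambient lighting, so excluding them keeps the estimate driven by blood-flow-induced changes.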
In this embodiment, the face region is segmented to obtain face sub-regions, and the pulse feature corresponding to each frame of face image is calculated from the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region. Obtaining an estimate of the pulse feature by weighted summation in this way improves the accuracy of the pulse feature calculation.
In one embodiment, as shown in FIG. 6, an identity authentication method is provided. The method is illustrated as applied to the server shown in FIG. 1, and includes:
Step 602: The server receives a user identity authentication request sent by the terminal, the user identity authentication request carrying a user identifier.

The user identity authentication request is a request to verify identity that the user's terminal sends to the server. The user identifier identifies each user and is unique; it may be any of the user's ID card number, instant messaging number, social account, email address, or mobile phone number.
For example, in face access control, the user places an ID card in a designated scannable area; the user's terminal scans the ID card to obtain the user's user identifier, i.e., the ID card number, and after obtaining it successfully sends an identity authentication request to the server, so that the server can subsequently look up the standard pulse characteristic curve corresponding to that user identifier in the database.
Step 604: The server acquires, according to the user identity authentication request, multiple frames of face images of the user collected by the terminal.
Specifically, the server may compare the user identifier carried in the user identity authentication request with the user identifiers pre-stored in the server. If there is a match, the server stores user information corresponding to that user identifier, such as the user's pulse characteristic curve used later, and the server acquires the multiple frames of face images of the user collected by the terminal. The terminal may collect the user's multiple frames of face images by collecting a video containing face images within a preset time, or by collecting a face image at preset intervals, to obtain the multiple frames of face images.

Step 606: The server extracts the face region from each frame of face image.
In a face image, the face region best reflects the pulse characteristics of a living body, whereas regions such as hair and clothes reflect no living body characteristics. Therefore, in this embodiment, the server needs to extract the face region from each frame of face image.
Specifically, the server may use an image integral map and the Adaboost method to extract the face region from each frame of face image. This includes: the server computes the image integral map to quickly obtain Haar face features; based on the Haar face features, it uses the Adaboost classification algorithm to train on training samples and uses the resulting final classifier to classify test samples, thereby extracting the face region. Here, the Haar feature is a feature descriptor commonly used in computer vision, and the Haar feature value reflects the grayscale variation of the image; Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (a strong classifier).
Step 608: The server obtains the illumination intensity of the face region and, based on it, calculates the pulse feature corresponding to each frame of face image.
In this embodiment, the server first divides the face region into multiple face sub-regions. Specifically: a face registration algorithm is used to obtain face key points, such as points on the forehead, left eye, right eye, left cheek, and right cheek; based on these key points, the face region is segmented into face sub-regions such as the forehead, left eye, right eye, left cheek, and right cheek. The illumination intensities of all pixels in each face sub-region are then obtained; if the difference in illumination intensity between any two pixels in a face sub-region exceeds a preset light intensity threshold, the face sub-region is split further, until in every resulting face sub-region the difference in illumination intensity between any two pixels does not exceed the preset light intensity threshold. This makes the segmentation of the face region finer, for example dividing the forehead region into left, middle, and right parts, as shown in FIG. 4.
In this embodiment, to reduce the influence of ambient illumination on face skin color changes, so that the skin color changes of a face sub-region caused by changes in blood oxygen saturation and blood volume are accurately reflected, the face region is finely divided into multiple face sub-regions, so that the illumination intensity at all positions within each face sub-region can be approximated as a constant; the illumination intensity of that face sub-region is then this constant.

Next, the server obtains the illumination intensity of each face sub-region and the weight corresponding to each face sub-region, and sums the illumination intensities of the face sub-regions, each weighted by its corresponding weight; the result of this weighted sum is the pulse feature corresponding to the face image. If, within a given time range, the difference between the maximum and minimum intensity of a face sub-region exceeds a certain threshold, that sub-region is strongly affected by ambient illumination and is therefore ignored, i.e., it does not participate in the calculation of the pulse feature.
Step 610: The server builds a pulse characteristic curve from the pulse features corresponding to the individual frames of face images.

The pulse feature corresponding to a single frame of face image is a static value, and a single static value cannot distinguish a real person from a picture. Connecting the pulse features corresponding to all the collected face images into a line establishes a pulse characteristic curve; analyzing attributes of this curve, such as whether it varies periodically and, if so, what its variation period and maximum amplitude are, enables the subsequent living body discrimination.
Step 612: The server compares the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier. If the characteristic value of the pulse characteristic curve differs from that of the standard pulse characteristic curve within a preset range, the user identity authentication passes; otherwise, it fails.
The standard pulse characteristic curve corresponding to the user identifier is pre-stored in the database. It can be obtained in advance by collecting multiple frames of face images of the user, extracting the face region from each frame, obtaining the illumination intensity of the face region, calculating the pulse feature corresponding to each frame of face image from that illumination intensity, and building a pulse characteristic curve from the per-frame pulse features. This pulse characteristic curve is the user's standard pulse characteristic curve; it is stored in the database in correspondence with the user identifier and compared against when identity verification is needed.
In this embodiment, the server compares the pulse characteristic curve obtained during living body discrimination with the pre-stored standard pulse characteristic curve corresponding to the user identifier. If the characteristic values of the two (such as the variation period and the maximum amplitude) differ within an acceptable range, this first indicates that the face images collected by the terminal are living face images, and second that the user in the face images collected by the terminal is the same person as the user corresponding to the user identifier in the server database, i.e., the identity verification passes. In this embodiment, by comparing the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, the user does not need to perform additional interactions and no additional instruments are required, which improves the efficiency of identity authentication and saves hardware cost.

In one embodiment, before the server compares the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, the method further includes: the server compares the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve, and if the characteristic value of the pulse characteristic curve differs from that of the pre-stored standard non-living pulse characteristic curve by more than a threshold, the step of comparing the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier is performed. In this embodiment, subsequent identity authentication is performed only after living body discrimination has passed, which rules out impersonation by a picture; and since identity authentication compares the currently collected pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, impersonation by another person is also ruled out, so this identity authentication method is highly secure.
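The two-stage check described above can be sketched as follows. Treating (variation period, maximum amplitude) as the characteristic values, and representing each curve by those two numbers, are illustrative assumptions; the description only requires a living-body check against the non-living baseline followed by a match against the user's enrolled curve.

```python
# Sketch of the two-stage check: first living body discrimination
# against the standard non-living curve, then identity authentication
# against the user's enrolled standard curve.

def authenticate(measured, nonlive_std, user_std, live_threshold, auth_range):
    """Each curve is summarized as {'period': ..., 'amplitude': ...}.
    Returns True only if the measured curve differs enough from the
    non-living baseline AND matches the enrolled user curve."""
    # Stage 1: living body discrimination. A nearly flat curve
    # (amplitude close to the non-living baseline) is rejected.
    if abs(measured['amplitude'] - nonlive_std['amplitude']) <= live_threshold:
        return False
    # Stage 2: identity authentication against the enrolled curve.
    return (abs(measured['period'] - user_std['period']) <= auth_range
            and abs(measured['amplitude'] - user_std['amplitude']) <= auth_range)
```

A photo fails at stage 1 (no pulse variation), while a different living person fails at stage 2 (characteristic values outside the preset range of the enrolled curve).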
In one embodiment, as shown in FIG. 7, the above step 608 includes:

Step 618: The server segments the face region to obtain face sub-regions.
A region segmentation algorithm can be used to segment the face region. Before segmentation, however, face key points are usually obtained first with a face registration algorithm, such as points on the forehead, left eye, right eye, left cheek, and right cheek; based on these key points, the face region is segmented into regions such as the forehead, left eye, right eye, left cheek, and right cheek.
In one embodiment, after the region segmentation algorithm is applied to the face region, the illumination intensities of all pixels in each face sub-region are obtained; if the difference in illumination intensity between any two pixels exceeds a preset light intensity threshold, the face sub-region continues to be split, until in every resulting face sub-region the difference in illumination intensity between any two pixels does not exceed the preset light intensity threshold. In this embodiment, how finely a face sub-region is subdivided is decided by checking whether the difference in illumination intensity between any two pixels in it is within the preset light intensity threshold, so that the illumination intensity at all positions of any face sub-region can be approximated as a constant. This reduces the influence of illumination on skin color changes, so that the skin color changes caused by blood flow are accurately reflected.
Step 638: The server obtains the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculates the pulse feature corresponding to each frame of the face image according to these illumination intensities and weights.
Because blood vessels are unevenly distributed over the face, the skin color change varies in strength across different facial regions. In a region where blood vessels are concentrated, the skin color change is relatively obvious and the corresponding weight is relatively large; conversely, the weight corresponding to a region with few vessels is relatively small. In this embodiment, the server computes a weighted sum of the illumination intensities obtained for the face sub-regions, using the weight obtained for each sub-region, to obtain the pulse feature corresponding to each frame of the face image.
In this embodiment, the server segments the face region into face sub-regions and then calculates the pulse feature corresponding to each frame of the face image from the illumination intensity and the weight of each sub-region. Obtaining the estimated pulse feature through a weighted sum in this way improves the accuracy of the pulse feature calculation.
In one embodiment, as shown in FIG. 8, another identity authentication method is provided. The method includes:
Step 802: The server receives a user identity authentication request sent by a terminal, the user identity authentication request carrying a user identifier.
As before, the user identity authentication request is a request for identity verification sent to the server by the terminal where the user is located, and the user identifier is a unique credential that distinguishes each user, such as the user's phone number, social account or email address.
Step 804: The server acquires, according to the user identity authentication request, multiple frames of face images of the user collected by the terminal.
As before, this embodiment performs identity authentication based on face images. To ensure high authentication accuracy, liveness discrimination must be performed first, and liveness discrimination can only be achieved through feature analysis over a larger number of images; multiple frames of face images are therefore collected first.
Step 806: The server extracts a face region from each frame of the face image.
In a face image, the face region best reflects the pulse characteristics of a living body, whereas regions such as hair and clothing reflect no living characteristics at all. Therefore, in this embodiment, the server needs to extract the face region from each frame of the face image.
Specifically, the server may use an image integral map together with the Adaboost method to extract the face region from each frame of the face image. This specifically includes: computing the image integral map to obtain Haar face features quickly; classifying training samples with the Adaboost classification algorithm according to the Haar face features; and classifying test samples with the final classifier obtained from training, thereby extracting the face region. Here, the Haar feature is a feature descriptor commonly used in the field of computer vision, and the Haar feature value reflects the gray-level variation of the image; Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (strong classifier).
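The image integral map (summed-area table) mentioned here is what makes Haar feature evaluation fast: after one pass over the image, the sum of any rectangle can be read off with four table lookups. A minimal pure-Python sketch, with illustrative function names:

```python
def integral_image(img):
    """Summed-area table: ii[r][c] holds the sum of img over rows < r
    and cols < c, so ii has one extra leading row and column of zeros."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * (cols + 1) for _ in range(rows + 1)]
    for r in range(rows):
        row_sum = 0
        for c in range(cols):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four lookups --
    the operation Haar-feature evaluation repeats many times."""
    return ii[r1][c1] - ii[r0][c1] - ii[r1][c0] + ii[r0][c0]
```

A Haar feature value is then a signed combination of a few such rectangle sums.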
Step 808: The server segments the face region to obtain face sub-regions.
A region segmentation algorithm can be used to segment the face region. Before segmentation, however, a number of face key points are usually obtained first with a face alignment algorithm, such as points on the forehead, the left eye, the right eye, the left cheek and the right cheek. Based on these key points, the face region is then divided into sub-regions such as the forehead, the left eye, the right eye, the left cheek and the right cheek.
Step 810: The server obtains the illumination intensity of all pixels in each face sub-region; if the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, the face sub-region continues to be split; otherwise, splitting of the face sub-region stops.
The degree of subdivision of a face sub-region is decided by checking whether the difference between the illumination intensities of any two of its pixels is within the preset light intensity threshold, so that the illumination intensity at every position within any one face sub-region can be approximated as a constant. This reduces the influence of illumination on skin color changes, so that the skin color changes caused by blood flow are accurately reflected.
Step 812: The server obtains the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculates the pulse feature corresponding to each frame of the face image according to these illumination intensities and weights.
Because blood vessels are unevenly distributed over the face, the skin color change varies in strength across different facial regions. In a region where blood vessels are concentrated, the skin color change is relatively obvious and the corresponding weight is relatively large; conversely, the weight corresponding to a region with few vessels is relatively small.
The illumination intensity obtained for each face sub-region is weighted by the weight obtained for that sub-region and summed, yielding the pulse feature corresponding to each frame of the face image. The specific calculation formula is as follows:

s = Σ_{i=1}^{n} G_i · F_i · I_i

where s is the pulse feature, n is the total number of sub-regions, G_i is the weight corresponding to region i, F_i is the illumination intensity of region i, and I is an indicator function: if, within a certain time window, the difference between the maximum and minimum intensity in region i exceeds a certain threshold, then I_i = 0 and that region is ignored, taking no part in the pulse feature calculation; otherwise I_i = 1.
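The weighted sum with region exclusion described above can be sketched as follows. The function and argument names are illustrative, and the per-region intensity history used to evaluate the indicator function is an assumption of this sketch:

```python
def pulse_feature(intensities, weights, history, spread_threshold):
    """Weighted sum of per-region illumination intensities for one frame.
    `history[i]` is the recent intensity series of region i; a region
    whose max-min spread over that window exceeds `spread_threshold`
    is treated as unstable (e.g. a specular highlight) and skipped,
    playing the role of the indicator function in the formula."""
    s = 0.0
    for i, (f, g) in enumerate(zip(intensities, weights)):
        if max(history[i]) - min(history[i]) > spread_threshold:
            continue  # indicator = 0: region takes no part in the sum
        s += g * f
    return s
```

One such value is produced per frame; the sequence of values over all frames feeds the pulse characteristic curve built in the next step.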
Step 814: The server establishes a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image.
The pulse feature corresponding to a single frame of the face image is a static value, and a single static value cannot distinguish a real person from a picture. The pulse features corresponding to all the collected face images are therefore joined point by point into a line to establish a pulse characteristic curve, and subsequent liveness discrimination is performed by analyzing properties of this curve, such as whether it varies periodically and, if so, what its variation period and maximum amplitude are.
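Extracting the curve properties used for discrimination (variation period and amplitude) can be sketched as below; the mean-crossing period estimator is an illustrative choice of this sketch, not a method prescribed by the specification:

```python
def curve_properties(samples, fps):
    """Rough period (in seconds) and peak-to-peak amplitude of a pulse
    feature series sampled at `fps` frames per second.  The period is
    estimated from rising crossings of the series mean; a flat
    (non-living) series yields period None."""
    amplitude = max(samples) - min(samples)
    mean = sum(samples) / len(samples)
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < mean <= samples[i]]
    if len(crossings) < 2:
        return None, amplitude  # no oscillation detected
    gaps = [b - a for a, b in zip(crossings, crossings[1:])]
    period = (sum(gaps) / len(gaps)) / fps
    return period, amplitude
```

A living face yields a finite period near the heart rate, while a photograph yields a flat series and no period at all.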
Step 816: The server compares the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve; if the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than a preset feature threshold, it is determined that the collected images are living face images.
A non-living body here is an object without vital signs, such as a sheet of paper. Multiple frames of non-living images are obtained by collecting video containing a non-living body for a preset time, or by collecting non-living video at preset intervals. A non-living body exhibits no skin color change caused by variations in blood volume and oxygen saturation; that is, the illumination intensity at every position of the non-living region is constant. Within the preset time, if the ambient light intensity stays unchanged, the pre-stored standard non-living pulse characteristic curve is a straight line whose pulse feature value approximates that ambient light intensity; if the ambient light intensity changes, the pre-stored standard non-living pulse characteristic curve is a curve whose pulse feature value approximately follows the variation of the ambient light intensity.
Because the oxygen saturation in its blood vessels and its blood volume are fixed, the skin color of a non-living body does not change, so the illumination intensity in the non-living face region is unchanged from one frame (or several frames) to the next; the pre-stored standard non-living pulse characteristic curve is therefore flat and featureless. If the characteristic value of the pulse characteristic curve obtained in the previous step differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, the pulse characteristic curve obtained in the previous step is varying, and the face images corresponding to that curve can be determined to be living face images.
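A minimal sketch of this liveness decision, using peak-to-peak amplitude as the compared characteristic value (the specification leaves the exact characteristic value open, so the amplitude choice and the names below are illustrative):

```python
def is_live(curve, baseline, feature_threshold):
    """Liveness test sketched from the description: a non-living
    baseline tracks ambient light only, so its peak-to-peak swing is
    tiny, while a living curve pulses.  Declare live when the compared
    characteristic values differ by more than the preset threshold."""
    def amplitude(series):
        return max(series) - min(series)
    return abs(amplitude(curve) - amplitude(baseline)) > feature_threshold
```

Only curves that pass this test proceed to the identity comparison of the next step.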
Step 818: The server compares the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier; if the difference between the characteristic value of the pulse characteristic curve and the characteristic value of the standard pulse characteristic curve is within a preset range, the user identity authentication passes; otherwise, the user identity authentication fails.
A standard pulse characteristic curve corresponding to the user identifier is pre-stored in a database. The pulse characteristic curve obtained during liveness discrimination is compared with the pre-stored standard pulse characteristic curve corresponding to the user identifier; as long as their characteristic values (such as the variation period and the maximum amplitude) differ by an acceptable amount, the user in the collected images is the same person as the user associated with that user identifier in the database, that is, the identity verification passes. The database here may be an online database or a local database.
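The acceptance test against the stored standard curve can be sketched as a per-feature tolerance check; the feature names and tolerance values below are illustrative, not values from the specification:

```python
def authenticate(curve_features, stored_features, tolerances):
    """Identity check sketched from the description: every compared
    characteristic value (e.g. variation period, maximum amplitude) of
    the freshly measured curve must fall within an acceptable range of
    the user's pre-stored standard curve."""
    for name, tol in tolerances.items():
        if abs(curve_features[name] - stored_features[name]) > tol:
            return False  # one feature outside its preset range
    return True
```

The per-feature tolerances would in practice be calibrated per deployment, since pulse period and amplitude vary between capture conditions.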
In this embodiment, the subsequent identity authentication is performed only after the liveness check has been passed, which rules out spoofing with a photograph; and because the identity authentication is implemented by comparing the currently collected pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, impersonation by another person is also ruled out, so this identity authentication method offers high security.
In one embodiment, a terminal is further provided. The internal structure of the terminal may be as shown in FIG. 2B, and each of the following modules may be implemented in whole or in part by software, hardware or a combination thereof. As shown in FIG. 9, in one embodiment, the terminal includes a living body discrimination apparatus, which includes:
a face image collection module 902, configured to collect multiple frames of face images;
a face region extraction module 904, configured to extract a face region from each frame of the face image;
a pulse feature calculation module 906, configured to obtain the illumination intensity of the face region and calculate, according to the illumination intensity of the face region, the pulse feature corresponding to each frame of the face image;
a pulse characteristic curve establishment module 908, configured to establish a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image; and
a living body discrimination module 910, configured to compare the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve, determine that living face images have been collected if the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, and otherwise determine that non-living face images have been collected.
In one embodiment, the pulse feature calculation module 906 is configured to segment the face region to obtain face sub-regions, obtain the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculate the pulse feature corresponding to each frame of the face image according to these illumination intensities and weights.
In one embodiment, the pulse feature calculation module 906 is configured to obtain the illumination intensity of all pixels in a face sub-region, continue splitting the face sub-region if the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, and otherwise stop splitting the face sub-region.
In one embodiment, a server is provided. The internal structure of the server may be as shown in FIG. 2A, and each of the following modules may be implemented in whole or in part by software, hardware or a combination thereof. In one embodiment, as shown in FIG. 10, the server includes an identity authentication apparatus, which includes:
an identity authentication request receiving module 1002, configured to receive a user identity authentication request sent by a terminal, the user identity authentication request carrying a user identifier;
a face image collection module 1004, configured to receive, according to the user identity authentication request, multiple frames of face images of the user collected by the terminal;
a face region extraction module 1006, configured to extract a face region from each frame of the face image;
a pulse feature calculation module 1008, configured to obtain the illumination intensity of the face region and calculate, according to the illumination intensity of the face region, the pulse feature corresponding to each frame of the face image;
a pulse characteristic curve establishment module 1010, configured to establish a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image; and
an identity authentication module 1012, configured to compare the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, pass the user identity authentication if the difference between the characteristic value of the pulse characteristic curve and the characteristic value of the standard pulse characteristic curve is within the preset range, and otherwise fail the user identity authentication.
In one embodiment, the pulse feature calculation module 1008 is configured to segment the face region to obtain face sub-regions, obtain the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculate the pulse feature corresponding to each frame of the face image according to these illumination intensities and weights.
In one embodiment, the identity authentication apparatus in the server further includes a living body discrimination module 1011, configured to compare the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve and determine that living face images have been collected if the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold.
In one embodiment, the pulse feature calculation module 1008 is configured to obtain the illumination intensity of all pixels in a face sub-region and, if the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, continue splitting the face sub-region until the difference between the illumination intensities of any two pixels in each resulting face sub-region no longer exceeds the preset light intensity threshold.
A person of ordinary skill in the art can understand that all or part of the processes of the methods in the foregoing embodiments may be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The technical features of the foregoing embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the foregoing embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be considered to fall within the scope of this specification.
The foregoing embodiments merely express several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art may further make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.
Claims (16)
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201610992121.8A (published as CN106570489A) | 2016-11-10 | 2016-11-10 | Living body determination method and apparatus, and identity authentication method and device |
| CN201610992121.8 | 2016-11-10 | | |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2018086543A1 | 2018-05-17 |
Family
ID=58541303
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/109989 (WO2018086543A1, ceased) | Living body identification method, identity authentication method, terminal, server and storage medium | 2016-11-10 | 2017-11-08 |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN106570489A (en) |
| WO (1) | WO2018086543A1 (en) |
Families Citing this family (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106570489A (en) * | 2016-11-10 | 2017-04-19 | 腾讯科技(深圳)有限公司 | Living body determination method and apparatus, and identity authentication method and device |
| CN107038428B (en) * | 2017-04-28 | 2020-04-07 | 北京小米移动软件有限公司 | Living body identification method and apparatus |
| CN107392606B (en) * | 2017-06-28 | 2020-08-04 | 阿里巴巴集团控股有限公司 | Account management method and device |
| CN107506713A (en) * | 2017-08-15 | 2017-12-22 | 哈尔滨工业大学深圳研究生院 | Living body faces detection method and storage device |
| CN108875333B (en) * | 2017-09-22 | 2023-05-16 | 北京旷视科技有限公司 | Terminal unlocking method, terminal and computer readable storage medium |
| CN108197279B (en) * | 2018-01-09 | 2020-08-07 | 北京旷视科技有限公司 | Attack data generation method, device, system, and computer-readable storage medium |
| CN110141246A (en) * | 2018-02-10 | 2019-08-20 | 上海聚虹光电科技有限公司 | Biopsy method based on colour of skin variation |
| CN110473311B (en) * | 2018-05-09 | 2021-07-23 | 杭州海康威视数字技术股份有限公司 | Preventing illegal attack methods, devices and electronic equipment |
| CN109446981B (en) * | 2018-10-25 | 2023-03-24 | 腾讯科技(深圳)有限公司 | Face living body detection and identity authentication method and device |
| CN109858375B (en) * | 2018-12-29 | 2023-09-26 | 简图创智(深圳)科技有限公司 | Living body face detection method, terminal and computer readable storage medium |
| CN109766849B (en) * | 2019-01-15 | 2023-06-20 | 深圳市凯广荣科技发展有限公司 | Living body detection method, detection device and self-service terminal equipment |
| CN110335216B (en) * | 2019-07-09 | 2021-11-30 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, terminal device, and readable storage medium |
| CN111464519B (en) * | 2020-03-26 | 2023-06-20 | 支付宝(杭州)信息技术有限公司 | Method and system for account registration based on voice interaction |
| CN111523438B (en) * | 2020-04-20 | 2024-02-23 | 支付宝实验室(新加坡)有限公司 | Living body identification method, terminal equipment and electronic equipment |
| CN111931153B (en) * | 2020-10-16 | 2021-02-19 | 腾讯科技(深圳)有限公司 | Identity verification method and device based on artificial intelligence and computer equipment |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102113883A (en) * | 2010-01-05 | 2011-07-06 | 精工爱普生株式会社 | Biological information detector and biological information measuring device |
| US20110190646A1 (en) * | 2010-02-01 | 2011-08-04 | Seiko Epson Corporation | Biological information measuring device |
| CN103761465A (en) * | 2014-02-14 | 2014-04-30 | 上海云亨科技有限公司 | Method and device for identity authentication |
| CN106570489A (en) * | 2016-11-10 | 2017-04-19 | 腾讯科技(深圳)有限公司 | Living body determination method and apparatus, and identity authentication method and device |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101369315A (en) * | 2007-08-17 | 2009-02-18 | 上海银晨智能识别科技有限公司 | Human face detection method |
| JP5780053B2 (en) * | 2011-08-22 | 2015-09-16 | 富士通株式会社 | Biometric authentication device, biometric authentication method, and program |
| US9734418B2 (en) * | 2014-01-17 | 2017-08-15 | Htc Corporation | Methods for identity authentication and handheld electronic devices utilizing the same |
| CN105844206A (en) * | 2015-01-15 | 2016-08-10 | 北京市商汤科技开发有限公司 | Identity authentication method and identity authentication device |
- 2016-11-10: CN application CN201610992121.8A filed; published as CN106570489A (status: pending).
- 2017-11-08: PCT application PCT/CN2017/109989 filed; published as WO2018086543A1 (status: ceased).
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108470169A (en) * | 2018-05-23 | 2018-08-31 | 国政通科技股份有限公司 | Face identification system and method |
| CN111666786A (en) * | 2019-03-06 | 2020-09-15 | 杭州海康威视数字技术股份有限公司 | Image processing method, device, electronic device and storage medium |
| CN111666786B (en) * | 2019-03-06 | 2024-05-03 | 杭州海康威视数字技术股份有限公司 | Image processing method, device, electronic equipment and storage medium |
| CN112784661A (en) * | 2019-11-01 | 2021-05-11 | 宏碁股份有限公司 | Real face recognition method and real face recognition device |
| CN112784661B (en) * | 2019-11-01 | 2024-01-19 | 宏碁股份有限公司 | Real face recognition method and real face recognition device |
| CN112016482A (en) * | 2020-08-31 | 2020-12-01 | 成都新潮传媒集团有限公司 | Method and device for distinguishing false face and computer equipment |
| US11443527B2 (en) | 2021-01-13 | 2022-09-13 | Ford Global Technologies, Llc | Material spectroscopy |
| US11657589B2 (en) | 2021-01-13 | 2023-05-23 | Ford Global Technologies, Llc | Material spectroscopy |
| US11741747B2 (en) | 2021-01-13 | 2023-08-29 | Ford Global Technologies, Llc | Material spectroscopy |
| US12307814B2 (en) | 2021-01-13 | 2025-05-20 | Ford Global Technologies, Llc | Material spectroscopy |
| CN116389647A (en) * | 2023-06-02 | 2023-07-04 | 深圳市尚哲医健科技有限责任公司 | Emergency first-aid integrated platform |
| CN116389647B (en) * | 2023-06-02 | 2023-08-08 | 深圳市尚哲医健科技有限责任公司 | Emergency first-aid integrated platform |
Also Published As
| Publication number | Publication date |
|---|---|
| CN106570489A (en) | 2017-04-19 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17870097; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17870097; Country of ref document: EP; Kind code of ref document: A1 |