
WO2019080797A1 - Liveness detection method, terminal and storage medium - Google Patents

Liveness detection method, terminal and storage medium

Info

Publication number
WO2019080797A1
Authority
WO
WIPO (PCT)
Prior art keywords
preset
sequence
change
image
light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/111218
Other languages
English (en)
French (fr)
Inventor
刘尧
李季檩
汪铖杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Publication of WO2019080797A1 publication Critical patent/WO2019080797A1/zh
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships

Definitions

  • the present application relates to the field of communications technologies, and in particular, to a living body detecting method, a terminal, and a storage medium.
  • identity verification technologies such as fingerprint recognition, eye pattern recognition, iris recognition, and face recognition have been greatly developed.
  • face recognition technology is the most prominent, and it has been more and more widely applied to various identity authentication systems.
  • the identity authentication system based on face recognition mainly needs to solve two problems, one is face verification and the other is living body detection.
  • the living body detection is mainly used to confirm that the collected face image and the like are from the user himself, rather than playing back or forging materials.
  • a "randomized interaction" technique has been proposed. The so-called "randomized interaction" technology uses the movement changes of different parts of the face in the video as random interactions; it requires the user to actively cooperate, such as by blinking, shaking the head, or lip-reading, in order to judge whether the detection object is a living body, and so on.
  • the inventors of the present application found that the algorithm used in the existing scheme for living body detection does not have high discrimination accuracy, and is also unable to effectively resist synthetic face attacks.
  • in addition, the cumbersome active interaction will also greatly reduce the pass rate of correct samples. Therefore, overall, the living body detection effect of the existing scheme is not good, which greatly affects the accuracy and security of identity verification.
  • the embodiment of the present application provides a living body detecting method, a terminal, and a storage medium, which can improve the living body detection effect, thereby improving the accuracy and security of the identity verification.
  • the embodiment of the present application provides a living body detecting method, including:
  • the terminal receives a living body detection request, starts a light source according to the living body detection request to project light to a detection object, and performs image acquisition on the detection object to obtain an image sequence;
  • it is detected that a surface of the detection object in the image sequence has a reflected light signal generated by the projected light, and the reflected light signal forms an image feature on the surface of the detection object; the type of the object to which the image feature belongs is identified by using a preset recognition model; and
  • if the recognition result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detection object is a living body.
  • the embodiment of the present application further provides a living body detecting terminal, including a memory, a processor, and a computer program stored in the memory, where the processor implements the following method steps when executing the computer program:
  • the preset recognition model being trained from a plurality of feature samples, where a feature sample is an image feature formed by a reflected light signal on an object surface of a labeled type; if the recognition result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detection object is a living body.
  • the embodiment of the present application further provides a storage medium, where the storage medium stores a plurality of instructions, and the instructions are adapted to be loaded by a processor to perform the steps of any of the living body detection methods provided by the embodiments of the present application.
  • FIG. 1a is a schematic diagram of a scene of a living body detecting method according to an embodiment of the present application;
  • FIG. 1b is another schematic diagram of a living body detecting method according to an embodiment of the present application.
  • FIG. 1c is a flowchart of a living body detecting method provided by an embodiment of the present application.
  • FIG. 2 is another flowchart of a living body detecting method provided by an embodiment of the present application.
  • FIG. 3a is another flowchart of a living body detecting method provided by an embodiment of the present application.
  • FIG. 3b is a diagram showing an example of color change in a living body detecting method provided by an embodiment of the present application.
  • FIG. 3c is another exemplary diagram of color change in the living body detecting method provided by the embodiment of the present application.
  • FIG. 4a is a schematic structural view of a living body detecting device provided by an embodiment of the present application.
  • FIG. 4b is another schematic structural diagram of a living body detecting apparatus according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
  • the embodiment of the present application provides a living body detecting method, a terminal, and a storage medium.
  • the living body detecting device may be specifically integrated in a device such as a terminal. The device may use the screen light intensity and color change of the terminal, or use other components or devices such as a flash or an infrared emitter, as a light source to project light onto the detection object; then, living body detection is performed by analyzing the reflected light signal on the surface of the detection object, such as a face, in the received image sequence.
  • for example, when the terminal receives the living body detection request, a detection interface can be started according to the living body detection request. As shown in FIG. 1a, in addition to the detection area, the detection interface is also provided with a non-detection area (the gray part marked in FIG. 1a), which is mainly used to flash a color mask; the color mask can be used as a light source to project light to the detection object, for example, see FIG. 1b.
  • since the reflected light signal of a real living body differs from that of a forged living body (a carrier of a composite picture or video, such as a photo, a mobile phone, or a tablet computer), living body recognition can be performed by judging whether a reflected light signal generated by the projected light exists on the surface of the detection object and whether the reflected light signal meets a preset condition.
  • for example, image acquisition is performed on the detection object (the monitoring situation can be displayed through the detection area in the detection interface); it is then determined whether the surface of the detection object in the acquired image sequence has a reflected light signal generated by the projected light. If present, the type of the object to which the image feature formed by the reflected light signal on the surface of the detection object belongs is identified by using a preset recognition model, and if the recognition result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detection object is a living body, and so on.
  • this embodiment will be described from the perspective of a living body detecting device, which may be integrated into a device such as a terminal; the terminal may be a mobile phone, a tablet computer, a notebook computer, a personal computer (PC, Personal Computer), or another such device.
  • a living body detecting method includes: receiving a living body detection request; starting a light source according to the living body detection request to project light to a detection object; performing image acquisition on the detection object to obtain an image sequence; detecting that the surface of the detection object in the image sequence has a reflected light signal generated by the projected light, the reflected light signal forming an image feature on the surface of the detection object; identifying, by using a preset recognition model, the type of the object to which the image feature belongs; and, if the recognition result indicates that the type of the object to which the image feature belongs is a living body, determining that the detection object is a living body.
  • the specific process of the living body detection method can be as follows:
  • for example, a living body detection request triggered by a user may be received, or a living body detection request sent by another device may be received, and so on.
  • the corresponding living body detection process may be invoked according to the living body detection request, the light source is activated according to the living body detection process, and the like.
  • the light source can be set according to the needs of the actual application; for example, it may be implemented by adjusting the brightness of the terminal screen, by using other light-emitting components such as a flash or an infrared emitter or an external device, or by setting a color mask on the terminal or on the display interface of the terminal, and so on. That is, the step of "starting the light source according to the living body detection request" may be implemented in any one of the following manners:
  • the predetermined light-emitting component is turned on according to the living body detection request, so that the light-emitting component emits light as a light source to the detection object.
  • the light emitting part may comprise a component such as a flash lamp or an infrared emitter.
  • a preset color mask is activated according to the living body detection request, and the color mask is used as a light source to project light to the detection object.
  • the color mask on the terminal may be activated according to the living body detection request.
  • the so-called color mask refers to a light-emitting area or component that can flash with color light.
  • for example, a component that can flash a color mask may be provided at the edge of the terminal shell; then, after receiving the living body detection request, the component can be activated to flash the color mask. Alternatively, the color mask may be flashed by displaying a detection interface, as follows:
  • the detection interface is activated according to the living body detection request, and the detection interface flashes a color mask, and the color mask is used as a light source to project light to the detection object.
  • the area in which the color mask is flashed in the detection interface may be determined according to the requirements of the actual application.
  • for example, the detection interface may include a detection area and a non-detection area; the detection area is mainly used to display the monitoring situation, and the non-detection area can flash a color mask that acts as a light source to project light to the detection object, and so on.
  • the area in which the color mask is flashed in the non-detection area may be determined according to the needs of the actual application: the entire non-detection area may be provided with a color mask, or only a certain part or certain parts of the non-detection area may be provided with a color mask, and so on.
  • parameters of the color mask, such as its color, can be set according to the needs of the actual application.
  • the color mask can be preset by the system and directly retrieved when the detection interface is started, or it can be automatically generated after the living body detection request is received; that is, after the step of "receiving the living body detection request", the living body detecting method may further include:
  • a color mask is generated such that the light projected by the color mask can be changed according to a preset rule.
  • the preset rule may be determined according to the requirements of the actual application, and the manner of adjusting the intensity of the light change may also be various. For example, for light of the same color (i.e., light of the same wavelength), the screen brightness before and after the change may be adjusted to adjust the intensity of the light change, e.g., by setting the screen brightness before and after the change to the maximum and the minimum; for light of different colors (i.e., light of different wavelengths), the color difference before and after the change may be adjusted to adjust the intensity of the light change, and so on. That is, after the color mask is generated, the living body detecting method may further include:
  • for light of the same color, obtaining a preset screen brightness adjustment parameter, and adjusting the screen brightness before and after the change of the light of the same color according to the screen brightness adjustment parameter, to adjust the intensity of the light change;
  • for light of different colors, obtaining a preset color difference adjustment parameter, and adjusting the color difference before and after the change of the light of the different colors according to the color difference adjustment parameter, to adjust the intensity of the light change.
  • the adjustment range of the intensity of the light change may be set according to the needs of the actual application, and may include large adjustments, such as maximizing the intensity of the light change, as well as small adjustments. For convenience of description, the following takes maximizing the intensity of the light change as an example.
  • for example, when the screen changes from the brightest red to the brightest green, the chromaticity of the reflected light changes the most, and so on.
  • optionally, the step of "generating a color mask so that the light projected by the color mask changes according to a preset rule" may include:
  • obtaining a preset coding sequence, where the coding sequence includes multiple codes; sequentially determining, according to a preset coding algorithm, the color corresponding to each code in the coding order of the coding sequence, to obtain a color sequence; and generating a color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.
  • the preset coding sequence may be randomly generated, or may be set according to actual application requirements; the preset coding algorithm may also be determined according to actual application requirements, and the coding algorithm may reflect the correspondence between each code in the coding sequence and the various colors. For example, red may represent -1, green 0, and blue 1; then, if the obtained coding sequence is "0, -1, 1, 0", the color sequence "green, red, blue, green" can be obtained to generate a color mask, so that the light projected by the color mask changes in the order "green, red, blue, green".
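The code-to-color correspondence above can be sketched as follows. This is a minimal illustration of the example mapping in the text (red = -1, green = 0, blue = 1); the actual coding algorithm used in a deployment is a design choice, and the function name is hypothetical.

```python
# Example mapping from the text: red <-> -1, green <-> 0, blue <-> 1.
CODE_TO_COLOR = {-1: "red", 0: "green", 1: "blue"}

def encode_to_colors(code_sequence):
    """Map a preset coding sequence to the color sequence that the
    color mask should flash, in coding order."""
    return [CODE_TO_COLOR[code] for code in code_sequence]

print(encode_to_colors([0, -1, 1, 0]))  # ['green', 'red', 'blue', 'green']
```

The inverse mapping would be applied at decoding time, when the captured reflections are turned back into a code sequence.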
  • the display duration of each color and the waiting interval when switching between colors can be set according to the needs of the actual application; for example, the display duration of each color can be set to 2 seconds, and the waiting interval can be set to 0 seconds or 1 second, and so on.
  • during the waiting interval, no light may be projected, or a predetermined light may be projected. For example, if the waiting interval is not 0 and no light is projected during it, and the color order of the projected light is "green, red, blue, green", then the projected light is specifically expressed as: "green -> no light -> red -> no light -> blue -> no light -> green". If the waiting interval is 0, the light projected by the color mask can switch colors directly, that is, the projected light is specifically expressed as: "green -> red -> blue -> green", and so on; details are not repeated here.
  • optionally, the variation rule of the light can be further complicated; for example, the display duration of each color and the waiting interval when switching between different colors may also be set to inconsistent values. For instance, the green display duration can be 3 seconds, while the red display duration is 2 seconds and the blue is 4 seconds; the waiting interval between green and red is 1 second, and the waiting interval between red and blue is 1.5 seconds, and so on.
  • the camera device may be specifically called to capture the detected object in real time, obtain an image sequence, and display the captured image sequence in the detection area, and the like.
  • the camera device includes, but is not limited to, a camera that is provided by the terminal, a webcam, a surveillance camera, and other devices that can collect images.
  • the light projected to the detection object may be visible light or invisible light
  • different light receivers may be configured according to actual application requirements. For example, an infrared light receiver or the like is used to sense different light rays to acquire a desired image sequence, which will not be described herein.
  • optionally, the image sequence may also be denoised. For example, modeling the noise as Gaussian noise, temporal multi-frame averaging and/or intra-frame multi-scale averaging may be used to reduce the noise as much as possible; details are not repeated here.
  • optionally, pre-processing of the image sequence, such as scaling, cropping, sharpening, or background blurring, may also be performed.
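The temporal multi-frame averaging mentioned above can be sketched as follows. This is a minimal NumPy illustration under the stated Gaussian-noise assumption, not the patent's implementation; the function name and window size are illustrative.

```python
import numpy as np

def temporal_denoise(frames, window=3):
    """Temporal multi-frame averaging: each output frame is the mean of
    a sliding window of consecutive frames, which suppresses zero-mean
    (e.g. Gaussian) sensor noise. `frames` is a (T, H, W) array."""
    frames = np.asarray(frames, dtype=np.float64)
    out = np.empty_like(frames)
    t = len(frames)
    for i in range(t):
        # clamp the window at the sequence boundaries
        lo, hi = max(0, i - window // 2), min(t, i + window // 2 + 1)
        out[i] = frames[lo:hi].mean(axis=0)
    return out
```

Averaging over a window of k frames reduces the variance of independent zero-mean noise by roughly a factor of k, at the cost of temporal blur, so the window must stay short relative to the light-change rate.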
  • the reflected light signal forms an image feature on the surface of the detection object, and the image feature may include a color feature, a texture feature, a shape feature, and a spatial relationship feature of the image.
  • the color feature is a global feature that describes the surface properties of the scene corresponding to the image or image region.
  • the texture feature is also a global feature; it likewise describes the surface properties of the scene corresponding to the image or image region.
  • shape features have two types of representation methods: one is contour features and the other is regional features. The contour features of an image mainly concern the outer boundary of the object, while the regional features relate to the entire shape region. The spatial relationship features refer to the mutual spatial positions or relative direction relationships between the multiple objects segmented from the image; these relationships may also be divided into connection/adjacency relationships, overlap relationships, and inclusion relationships. In specific implementation, the image feature may specifically include Local Binary Patterns (LBP) feature descriptors, scale-invariant feature transform (SIFT) feature descriptors, and/or feature descriptors extracted by a convolutional neural network, which are not repeated here.
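As an illustration of the LBP descriptors mentioned above, a basic 8-neighbour LBP can be computed as follows. This is a minimal sketch; production systems typically use uniform or rotation-invariant LBP variants, and the function name is illustrative.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Patterns: each interior pixel is
    re-coded as an 8-bit number whose bits record whether each
    neighbour is >= the centre pixel."""
    g = np.asarray(gray, dtype=np.int32)
    h, w = g.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = g[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= centre).astype(np.uint8) << bit
    return out
```

A histogram of these per-pixel codes over a face region would serve as the texture feature fed into the recognition model.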
  • the manner of identifying whether the surface of the detection object in the image sequence has a reflected light signal generated by the projected light may be various. For example, the reflected light information may be detected by using the change of frames in the image sequence, as follows:
  • the chrominance/luminance of each frame in the image sequence may be regression-analyzed to obtain a numerical expression, which may be a numerical sequence; then, the chrominance/luminance change of the frames in the image sequence is determined according to the numerical expression, such as the numerical sequence, to obtain a regression result. That is, the change of the numerical sequence may be used to represent the chrominance change or luminance change of each frame, and this chrominance or luminance change can be used as the regression result.
  • for example, a preset image regression analysis model may be used to perform regression analysis on the chrominance/luminance of each frame in the image sequence, to obtain a numerical expression of the chrominance/luminance of each frame, and so on.
  • the image regression analysis model may be a regression tree or a regression convolutional neural network, etc., and may be set according to actual application requirements.
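The per-frame numerical expression and its changes can be sketched as follows. This is a crude stand-in, assuming the mean gray level as the per-frame luminance value in place of a trained regression model; the function names are illustrative.

```python
import numpy as np

def luminance_sequence(frames):
    """Reduce each frame to a single luminance value (here simply the
    mean gray level), giving the numerical sequence described above."""
    return [float(np.mean(f)) for f in frames]

def luminance_changes(seq):
    """First-order differences of the numerical sequence: the
    frame-to-frame brightness changes used as the regression result."""
    return [b - a for a, b in zip(seq, seq[1:])]
```

A trained regression tree or regression CNN would replace the mean-gray reduction, but the downstream analysis of the numerical sequence is the same.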
  • the image regression analysis model may be pre-trained by other devices, or may be trained by the living body detecting device.
  • the training process may be as follows:
  • for example, a preset number of images with different surface reflection information, such as facial reflection information, may be collected as a training sample set; then a preset initial image regression analysis model (such as a regression tree or a regression convolutional neural network) is used to learn the training sample set, to obtain the image regression analysis model.
  • the preset labeling strategy may be determined according to the requirements of the actual application. For example, for regression analysis of frame brightness changes, the types of reflective information may be divided into three types: strong reflection, weak reflection, and no reflection; the label of a collected sample with strong reflection can then be marked as 1, the label of a sample with weak reflection as 0.5, and the label of a sample without reflection as 0, and so on.
  • then, the training sample set is learned to find the regression function over the training sample set that can best fit the mapping relationship between the original images and the continuous numerical labels of the regression analysis, whereby the image regression analysis model is obtained.
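The labelling-and-fitting step above can be sketched as follows. As a deliberately simple stand-in for a regression tree or regression CNN, this fits a 1-D least-squares line from a hand-crafted feature (mean brightness) to the reflection labels; the function name and feature choice are illustrative assumptions.

```python
import numpy as np

def train_reflection_regressor(samples, labels):
    """Fit mean-brightness -> label (1 strong, 0.5 weak, 0 no
    reflection) by least squares, returning a predictor clipped to
    [0, 1] as the continuous reflection score."""
    x = np.array([np.mean(img) for img in samples])
    y = np.asarray(labels, dtype=np.float64)
    a, b = np.polyfit(x, y, deg=1)  # y ~= a * x + b
    return lambda img: float(np.clip(a * np.mean(img) + b, 0.0, 1.0))
```

A real model would learn from the raw pixels rather than a single summary statistic, but the training-set shape (images paired with continuous labels) is the same.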
  • the regression analysis of the chrominance change of frames is similar. For example, image frames having light of different colors on the surface of the detection object (i.e., samples) are collected, and the image frames are labeled, where the label is no longer a one-dimensional scalar but an RGB (Red, Green, Blue) color triple, such as (255, 0, 0) for red, and so on; then a preset initial image regression analysis model is used to learn the labeled image frames (i.e., the training sample set), to obtain the image regression analysis model, and the like.
  • after the image regression analysis model is obtained, it can be used to perform regression analysis on each frame of the image sequence. For example, if the image regression analysis model is trained on images with different light intensity changes, then for an image frame with different light intensity changes (i.e., brightness changes) on the surface of the detection object, the model may directly regress a continuous value between 0 and 1 to express the degree of facial reflection of that image frame, and so on; details are not repeated here.
  • the change between frames in the image sequence may be obtained by calculating a difference between frames in the image sequence, and the difference between the frames may be an interframe difference or a frame difference, and the interframe difference refers to The difference between two adjacent frames, and the frame difference is the difference between the frames corresponding to before and after the change of the projected light.
  • for example, when it is determined that the degree of position change of the detection object is less than a preset change value, the pixel coordinates of adjacent frames in the image sequence may be respectively acquired, and then the inter-frame difference is calculated based on the pixel coordinates; similarly, the pixel coordinates of the frames corresponding to before and after the change of the projected light are respectively obtained from the image sequence, and the frame difference is calculated based on those pixel coordinates.
  • the method for calculating the inter-frame difference or the frame difference based on the pixel coordinates may be various, for example, as follows:
  • the pixel coordinates of the adjacent frames are transformed to minimize the registration error of the pixel coordinates; pixel points whose correlation meets a preset condition are selected according to the transformation result, and the inter-frame difference is calculated according to the selected pixel points.
  • similarly, the pixel coordinates of the frames corresponding to before and after the change of the projected light are transformed to minimize the registration error of the pixel coordinates; pixel points whose correlation meets the preset condition are selected according to the transformation result, and the frame difference is calculated according to the selected pixel points.
  • the preset change value and the preset condition may be set according to actual application requirements, and details are not described herein again.
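The inter-frame difference itself can be sketched as follows. For simplicity this assumes the frames are already registered (i.e., the alignment and correlation filtering described above have been applied), and the function name is illustrative.

```python
import numpy as np

def inter_frame_difference(frame_a, frame_b):
    """Mean absolute per-pixel difference between two frames. In the
    scheme above this would be computed only over the pixel points
    kept after registration/correlation filtering; here the frames
    are assumed already aligned."""
    a = np.asarray(frame_a, dtype=np.float64)
    b = np.asarray(frame_b, dtype=np.float64)
    return float(np.abs(b - a).mean())
```

Applied to adjacent frames it gives the inter-frame difference; applied to the frames captured before and after a projected-light change it gives the frame difference.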
  • optionally, the frame difference may be a relative value of the chromaticity change or a relative value of the luminance change; that is, the step of "performing regression analysis on the change of frames in the image sequence to obtain a regression result" may include:
  • respectively obtaining, from the image sequence, the chromaticity/luminance of the frames corresponding to before and after the change of the projected light, and calculating the relative value of the chromaticity change or luminance change between the frames corresponding to before and after the change of the projected light; the chromaticity/luminance change relative value is used as the frame difference between the frames corresponding to before and after the change of the projected light, and this frame difference is the regression result.
  • for example, the chromaticity/luminance may be calculated by a preset regression function to obtain the relative value of the chrominance/luminance change between the frames corresponding to before and after the change of the projected light (i.e., the relative value of the chromaticity change or of the luminance change), and so on.
  • the regression function can be set according to the requirements of the actual application, for example, specifically, a regression neural network, and the like.
  • the preset threshold may be determined according to the requirements of the actual application, and details are not described herein again.
  • the regression result is classified and analyzed by a preset global feature algorithm or a preset recognition model. If the analysis result indicates that the inter-frame variation of the surface of the detection object is greater than a set value, it is determined that the projected light exists on the surface of the detection object in the image sequence. If the analysis result indicates that the inter-frame change of the surface of the detection object is not greater than the set value, it is determined that the reflected light signal generated by the projected light is not present on the surface of the detection object in the image sequence.
  • the setting value may be determined according to the requirements of the actual application, and the manner of "classifying and analyzing the inter-frame difference by using the preset global feature algorithm or the preset recognition model" may also be various, for example, as follows:
  • the regression result is analyzed to determine whether there is a reflected light signal generated by the projected light in the image sequence. If there is no reflected light signal generated by the projected light, an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value is generated; if there is a reflected light signal generated by the projected light, a preset global feature algorithm or a preset recognition model is used to determine whether the reflector of the reflected light information is the detection object. If it is the detection object, an analysis result indicating that the inter-frame change of the surface of the detection object is greater than the set value is generated; if it is not the detection object, an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value is generated.
  • the image in the image sequence may be classified by a preset global feature algorithm or a preset recognition model to filter out a frame in which the detection object exists, obtain a candidate frame, and analyze an interframe difference of the candidate frame to determine Whether the detected object has a reflected light signal generated by the projected light, and if there is no reflected light signal generated by the projected light, an analysis result indicating that the inter-frame change of the surface of the detection target is not greater than a set value is generated; The reflected light signal generated by the projected light generates an analysis result indicating that the inter-frame change of the surface of the detection target is larger than a set value, and the like.
  • the global feature algorithm refers to an algorithm based on global features, where the global features may include the gray-level mean and variance, the gray-level co-occurrence matrix, and the spectrum after a fast Fourier transform (FFT, Fast Fourier Transformation) or a discrete cosine transform (DCT, Discrete Cosine Transform).
  • the preset recognition model may include a classifier or other recognition model (such as a face recognition model), and the classifier may include a support vector machine (SVM), a neural network, a decision tree, and the like.
  • the optical signal may be decoded to identify whether the surface of the detected object in the image sequence has a reflected light signal generated by the projected light, for example, as follows:
• the image sequence is decoded according to a preset decoding algorithm to obtain a decoded sequence, and it is determined whether the decoded sequence matches the encoded sequence; if they match, it is determined that the reflected light signal generated by the projected light exists on the surface of the detected object in the image sequence; if the decoded sequence does not match the encoded sequence, it is determined that the reflected light signal generated by the projected light does not exist on the surface of the detected object in the image sequence.
• for example, a preset decoding algorithm may be used to sequentially calculate, from the relative values of the chrominance/luminance change in the image sequence (i.e., the relative values of the chrominance/luminance change between the frames corresponding to before and after each change of the projected light), the absolute values of the chrominance/luminance of the frames corresponding to before and after each change of the projected light; the obtained absolute values of chrominance/luminance are then used as the decoded sequence, or the obtained absolute values of chrominance/luminance are converted according to a preset strategy to obtain the decoded sequence.
  • the preset decoding algorithm is matched with the encoding algorithm, and may be determined according to the encoding algorithm.
  • the preset policy may also be set according to the requirements of the actual application, and details are not described herein.
• the manner of determining whether the decoded sequence matches the coding sequence may vary; for example, it may be determined whether the decoded sequence is consistent with the coding sequence: if they are consistent, it is determined that the decoded sequence matches the coding sequence, and otherwise it is determined that the decoded sequence does not match the coding sequence; or, it may be determined whether the relationship between the decoded sequence and the coding sequence conforms to a preset correspondence relationship: if so, it is determined that the decoded sequence matches the coding sequence, and otherwise it is determined that the decoded sequence does not match the coding sequence, and so on.
  • the preset correspondence may be set according to requirements of an actual application.
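The matching logic described above can be sketched as follows. This is a minimal illustration only; the function name and the dictionary form of the preset correspondence relationship are assumptions, not part of the patent:

```python
def sequences_match(decoded, encoded, correspondence=None):
    """Check whether a decoded sequence matches the preset coding sequence.

    With no correspondence table, matching means the two sequences are
    identical; otherwise each decoded symbol must map to the encoded one
    under the preset correspondence relationship.
    """
    if len(decoded) != len(encoded):
        return False
    if correspondence is None:
        return list(decoded) == list(encoded)
    return all(correspondence.get(d) == e for d, e in zip(decoded, encoded))
```

For example, a decoded color sequence could be matched against a numeric coding sequence through a correspondence table such as `{"green": 0, "red": -1, "blue": 1}`.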
• for example, the image may be collected again (i.e., returning to step 103), or the activated light source may be checked: if it is determined that there is no problem with the light projected by the light source onto the detection object, the process ends and the detection object is determined not to be a living body, or the image of the detection object is re-acquired; if it is determined that there is a problem with the light projected by the light source onto the detection object, the process returns to step 102 to restart the light source and project light onto the detection object. The specific execution strategy may be set according to the requirements of the actual application, and details are not described herein.
• for example, the reflected light signal acquired in the detection forms an image feature on the surface of the object; the image feature is then identified by the preset recognition model, and whether the detection object is a living body is determined based on the recognition result. For example, if the recognition result indicates that the type of the object to which the image feature belongs is a living body, for example, the type of the object to which the image feature belongs is "human face", then the detection object can be determined to be a living body; otherwise, if the recognition result indicates that the type of the object to which the image feature belongs is not a living body, for example, the type of the object to which the image feature belongs is "mobile phone screen", then it can be determined that the detection object is not a living body, and so on.
  • the preset recognition model may include a classifier or other recognition model (such as a face recognition model), and the classifier may include an SVM, a neural network, a decision tree, and the like.
• the preset recognition model can be trained from a plurality of feature samples, i.e., image features formed by the reflected light signal on the surfaces of objects of labeled types. For example, the image feature formed by the projected light on a person's face may be used as a feature sample labeled "human face", the image feature formed by the projected light on a mobile phone screen may be used as a feature sample labeled "mobile phone screen", and so on; after a large number of feature samples are collected, these feature samples (i.e., the image features of labeled types) can be used to train and establish the recognition model.
• the recognition model may be established by another device and saved in a preset storage space; when the living body detecting device needs to identify the type of the object to which the image feature belongs, it obtains the model directly from the storage space. Alternatively, the recognition model may also be established by the living body detecting device itself; that is, before the step of "identifying the type of the object to which the image feature belongs by using the preset recognition model", the living body detection method may further include: acquiring a plurality of feature samples, and training a preset initial recognition model according to the feature samples to obtain the preset recognition model.
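The training step above can be sketched as follows. The patent names SVM, neural network and decision tree classifiers; as a hedged stand-in, this sketch trains a tiny nearest-centroid model on invented two-dimensional reflectance features labeled "human face" and "mobile phone screen" (the feature values and class names used here are illustrative assumptions):

```python
import numpy as np

class NearestCentroidModel:
    """Minimal stand-in for the preset recognition model; the patent
    mentions SVM, neural network or decision tree classifiers, but a
    nearest-centroid rule keeps the training step easy to follow."""

    def fit(self, samples, labels):
        labels = np.asarray(labels)
        self.labels_ = sorted(set(labels))
        # one centroid per labeled type of feature sample
        self.centroids_ = np.stack(
            [samples[labels == lbl].mean(axis=0) for lbl in self.labels_])
        return self

    def predict(self, samples):
        # assign each sample to the closest class centroid
        dists = np.linalg.norm(
            samples[:, None, :] - self.centroids_[None, :, :], axis=2)
        return [self.labels_[i] for i in dists.argmin(axis=1)]

# hypothetical 2-D features of the reflected-light image patch,
# e.g. (diffuseness, specular-highlight strength) -- invented values
rng = np.random.default_rng(0)
faces = rng.normal([0.8, 0.2], 0.05, size=(20, 2))    # labeled "human face"
screens = rng.normal([0.2, 0.9], 0.05, size=(20, 2))  # labeled "mobile phone screen"
X = np.vstack([faces, screens])
y = ["human face"] * 20 + ["mobile phone screen"] * 20
model = NearestCentroidModel().fit(X, y)
```

In a real deployment the model would be trained once on many labeled reflected-light samples and then saved to the preset storage space, as the text describes.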
• in some embodiments, the recognition result may directly determine whether the detection object is a living body; that is, the preset recognition model can be used both to identify whether the surface of the detected object in the image sequence has a reflected light signal generated by the projected light and, when the reflected light signal generated by the projected light is recognized, to identify the type of the object to which the corresponding image feature (i.e., the image feature formed by the reflected light on the surface of the detection object) belongs, so as to determine whether the detection object is a living body.
• as can be seen from the above, in the embodiment of the present application, the light source can be started to project light onto the detection object, image collection is performed on the detection object, and when it is detected that the reflected light signal generated by the projected light exists on the surface of the detection object in the captured image sequence, the type of the object to which the image feature formed by the reflected light signal on the surface of the detection object belongs is identified by using a preset recognition model; if the recognition result indicates that the type of the object to which the image feature belongs is a living body, the detection object is determined to be a living body. Since this scheme does not require complicated interaction and operation with the user, the requirement for hardware configuration can be reduced; and since this scheme is based on detecting the reflected light signal on the surface of the object, and the reflected light signals of a real living body and a forged living body (a carrier of a composite picture or video, such as a photo, a mobile phone or a tablet computer) are different, this scheme can also effectively resist synthetic face attacks and improve the accuracy of discrimination. In short, this scheme can improve the effect of living body detection, thereby improving authentication accuracy and security.
• in this embodiment, the description is given by taking as an example that the living body detecting device is specifically integrated in the terminal, the light source is specifically a color mask, the detection object is specifically a human face, and the regression result is specifically obtained by regression analysis of the changes between the frames in the image sequence.
  • a living body detection method can be as follows:
  • the terminal receives a living body detection request.
  • the terminal may specifically receive a biometric detection request triggered by the user, or may also receive a biometric detection request sent by another device, and the like.
  • the living body detecting request may be triggered to be generated, so that the terminal receives the living body detecting request.
  • the terminal generates a color mask, so that the light projected by the color mask can be changed according to a preset rule.
• the preset rule may be determined according to the needs of the actual application, and there may be various ways of maximizing the intensity of the change of the light. For example, for light of the same color, the difference in screen brightness before and after the change may be maximized, for example, by setting the screen brightness before and after the change to the maximum and the minimum respectively; for light of different colors, the intensity of the change of the light can be maximized by adjusting the color difference before and after the change, such as changing the screen from the darkest black to the brightest white; alternatively, if the screen changes from the brightest red to the brightest green, the chromaticity of the reflected light changes the most, and so on.
  • the terminal starts the detection interface according to the living body detection request, and flashes a color mask through the non-detection area in the detection interface, so that the color mask acts as a light source to project light to the detection object, such as a person's face.
  • the terminal may specifically invoke the corresponding living body detection process according to the living body detection request, activate a corresponding detection interface according to the living body detection process, and the like.
  • the detection interface may include a detection area and a non-detection area.
• the detection area is mainly used to display the acquired image sequence, and the non-detection area may flash a color mask, with the color mask acting as a light source to project light onto the detection object (specifically, refer to FIG. 1b), so that the light generates reflected light on the object to be detected, where the reflected light differs depending on parameters such as the color and intensity of the light.
• it should be noted that the detection object needs to be kept within a certain distance from the screen of the mobile device; for example, when the user needs to detect whether a certain face is a living body, the mobile device can be held at a suitable position directly in front of the face to detect the face, and so on.
  • the terminal performs image collection on the detection object to obtain an image sequence.
  • the camera of the terminal may be specifically called to capture the detected object in real time to obtain a sequence of images, and the captured image sequence is displayed in the detection area.
• in some embodiments, in order to improve the accuracy of recognition, the image sequence can also be denoised; for example, taking the noise model as Gaussian noise, temporal multi-frame averaging and/or same-frame multi-scale averaging can be used to reduce the noise as much as possible, and details are not described herein again.
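Temporal multi-frame averaging, mentioned above for Gaussian noise, can be sketched as follows. This assumes the frames are already co-registered, and the frame contents here are synthetic:

```python
import numpy as np

def temporal_average(frames):
    """Average co-registered frames of the same scene; for zero-mean
    Gaussian noise the noise standard deviation shrinks roughly by
    a factor of sqrt(len(frames))."""
    return np.mean(np.stack(frames), axis=0)

# synthetic demonstration: 16 noisy captures of a flat gray patch
rng = np.random.default_rng(42)
clean = np.full((32, 32), 100.0)               # idealized noise-free frame
frames = [clean + rng.normal(0, 4.0, clean.shape) for _ in range(16)]
denoised = temporal_average(frames)
```

With 16 frames, the residual noise standard deviation drops from about 4 to about 1 in this synthetic case.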
  • the terminal calculates an interframe difference in the sequence of images.
• in some embodiments, in the case where the detected face of the user has no sharp position change, the inter-frame alignment method can be used to correct the pixel pairs used for the inter-frame difference more precisely; that is, when it is determined that the degree of position change of the detection object is less than the preset change value, the pixel coordinates of adjacent frames in the image sequence are respectively acquired, the pixel coordinates are transformed to minimize the registration error of the pixel coordinates, and the inter-frame difference is then calculated based on the result of the transformation, for example, as follows:
• for example, the transformation type of the transformation matrix M employed may be the homography transformation, which has the highest degree of freedom, so that the registration error can be minimized; the registration error may be measured by the mean square error (MSE, Mean Square Error), and the transformation matrix may be estimated robustly by random sample consensus (RANSAC, Random Sample Consensus).
  • the step "calculate the inter-frame difference based on the result of the transformation" can include:
• for example, the correlation of corresponding pixel points may be calculated based on the result of the transformation, the pixel points whose correlation meets the preset condition are selected, and the inter-frame difference is calculated according to the selected pixel points.
  • the preset change value and the preset condition may be set according to actual application requirements, and details are not described herein again.
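The homography estimation step above can be sketched with a plain least-squares direct linear transform (DLT); the RANSAC robustness loop and MSE-based pixel selection mentioned in the text are omitted for brevity, so this is an illustration under simplifying assumptions rather than the patent's exact procedure:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: fit the 3x3 homography H mapping src
    points to dst points (>= 4 correspondences). A production pipeline
    would wrap this in RANSAC to reject bad pixel correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # cross-multiplied projection equations, one pair per point
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)        # null-space vector = homography entries
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# synthetic check: recover a known near-identity homography
H_true = np.array([[1.02, 0.01, 3.0],
                   [-0.005, 0.98, -2.0],
                   [1e-4, 2e-5, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0],
                [100.0, 100.0], [40.0, 60.0]])
dst = apply_homography(H_true, src)
H_est = estimate_homography(src, dst)
```

Once the frames are registered with the estimated homography, the inter-frame difference can be computed on the aligned pixel pairs.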
• the terminal determines, according to the inter-frame difference, whether the reflected light signal generated by the projected light is present on the person's face in the images of the image sequence; if yes, step 207 is performed; if not, the terminal may operate according to a preset policy. For example, the terminal may determine whether the inter-frame difference is greater than a preset threshold: if yes, it is determined that the reflected light signal generated by the projected light is present on the person's face in the image sequence, and step 207 may be performed; if not, it is determined that the reflected light signal generated by the projected light is not present on the person's face in the image sequence, and the terminal may operate according to the preset strategy.
• the preset threshold may be determined according to the requirements of the actual application, and the preset policy may also be determined according to the requirements of the actual application; for example, it may be set to "end the process", to "generate prompt information indicating that no reflected light signal exists", or to "determine that the detection object is not a living body"; or it may return to step 204 to re-acquire images of the detection object; or the activated light source may be checked to determine whether the light source projects light onto the detection object such as the person's face: if it is determined that there is no problem with the light projected by the light source, the process ends and the detection object is determined not to be a living body, or image collection is performed on the detection object again; if it is determined that there is a problem with the light projected by the light source onto the detection object such as the person's face, for example, the light is not projected onto the person's face but onto an object beside it, or the light source projects no light, the process returns to step 203 to restart the light source and project light onto the detection object, and so on, which will not be repeated here.
• in some embodiments, in order to improve processing efficiency, a cascading discriminant model may be used for processing; for example, a global feature algorithm or a preset recognition model (such as a classifier) may first be used to pre-process the inter-frame difference to roughly determine whether a reflected light signal occurs, so that the subsequent processing of most normal frames without a reflected light signal can be skipped, that is, only the frames in which a reflected light signal exists are processed later. That is, the step "the terminal determines, according to the inter-frame difference, whether the reflected light signal generated by the projected light is present on the person's face in the images of the image sequence" may include:
• the inter-frame difference is classified and analyzed by a preset global feature algorithm or a preset recognition model; if the analysis result indicates that the inter-frame change of the person's face is greater than the set value, it is determined that the reflected light signal generated by the projected light is present on the person's face in the image sequence; if the analysis result indicates that the inter-frame change of the person's face is not greater than the set value, it is determined that the reflected light signal generated by the projected light is not present on the person's face in the image sequence.
• the set value may be determined according to the requirements of the actual application, and there may also be various manners of "classifying and analyzing the inter-frame difference by using the preset global feature algorithm or the preset recognition model", for example, as follows: the inter-frame difference is analyzed to determine whether there is a reflected light signal generated by the projected light in the image sequence; if there is no reflected light signal generated by the projected light, an analysis result indicating that the inter-frame change of the person's face is not greater than the set value is generated; if there is a reflected light signal generated by the projected light, a preset global feature algorithm or a preset recognition model is used to determine whether the reflector of the reflected light is the person's face; if so, an analysis result indicating that the inter-frame change of the person's face is greater than the set value is generated, and if not, an analysis result indicating that the inter-frame change of the person's face is not greater than the set value is generated.
• alternatively, the images in the image sequence may first be classified by a preset global feature algorithm or a preset recognition model to filter out the frames in which a person's face exists, obtaining candidate frames; the inter-frame difference of the candidate frames is then analyzed to determine whether the reflected light signal generated by the projected light exists on the person's face; if there is no reflected light signal generated by the projected light, an analysis result indicating that the inter-frame change of the person's face is not greater than the set value is generated; if there is a reflected light signal generated by the projected light, an analysis result indicating that the inter-frame change of the person's face is greater than the set value is generated, and so on.
• the global feature algorithm refers to an algorithm based on global features, where the global features may include the gray-scale mean and variance, the gray level co-occurrence matrix, and transformed spectra such as those obtained by the FFT and the DCT.
  • the preset recognition model may specifically be a classifier or other recognition model (such as a face recognition model).
• the classifier may be set according to actual application requirements; for example, if it is only used to determine whether there is reflected light, a simpler classifier can be used, while if it is used to determine whether the reflector is a human face or the like, a more complicated classifier, such as a neural network classifier, can be used for processing, and details are not described herein.
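The cascade's cheap pre-filter stage can be sketched as follows. The features follow the global features named above (gray-scale mean/variance, FFT spectrum), while the threshold value and the synthetic "flash" frames are invented for illustration:

```python
import numpy as np

def global_features(diff):
    """Cheap global features of an inter-frame difference image; a
    screen-wide reflected flash shifts the mean and concentrates FFT
    energy near DC, while ordinary motion noise does not."""
    spectrum = np.abs(np.fft.fft2(diff))
    return {
        "mean": float(diff.mean()),
        "var": float(diff.var()),
        "dc_energy": float(spectrum[0, 0]),
    }

def passes_prefilter(diff, mean_thresh=5.0):
    # crude cascade stage: only frames whose global change exceeds the
    # set value are passed on to the heavier classifier
    return abs(global_features(diff)["mean"]) > mean_thresh

# synthetic inter-frame differences
rng = np.random.default_rng(1)
base = rng.uniform(80, 120, size=(64, 64))
flash_diff = (base + 20.0) - base            # uniform brightness jump: reflection
motion_diff = rng.normal(0, 2.0, (64, 64))   # noise-only difference
```

In the cascade described above, frames rejected by this cheap stage would simply be skipped, and only the remaining frames would reach the more complicated classifier.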
• the terminal uses a preset recognition model to identify the type of the object to which the image feature (i.e., the image feature formed by the reflected light signal on the surface of the detection object, such as a person's face) belongs; if the recognition result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detection object is a living body.
• otherwise, if the recognition result indicates that the type of the object to which the image feature belongs is not a living body, such as a "mobile phone screen" or the like, it may be determined that the detection object is not a living body.
  • the preset recognition model may include a classifier or other recognition model, and the like, and the classifier may include an SVM, a neural network, a decision tree, and the like.
  • the preset recognition model can be trained from a plurality of feature samples that are image features formed by the reflected light signal on the surface of the object of the labeled type.
• the recognition model may be established by another device and saved in a preset storage space; when the terminal needs to identify the type of the object to which the image feature belongs, it acquires the model directly from the storage space. Alternatively, the recognition model can also be established by the terminal itself; for example, the terminal can acquire multiple feature samples and train a preset initial recognition model according to the feature samples to obtain the preset recognition model. For details, refer to the previous embodiment, which will not be repeated here.
• in some embodiments, the living body detection method may further include: prompting the detection object, such as a person's face, to perform a preset action, and performing detection accordingly. The preset action can be set according to the requirements of the actual application; it should be noted that, in order to avoid cumbersome interaction, the number and difficulty of the preset actions may be limited, for example, only one simple interaction, such as blinking or opening the mouth, is required, which will not be repeated here.
• as can be seen from the above, in this embodiment, a color mask can be flashed in a non-detection area set on the detection interface, where the color mask can act as a light source to project light onto a detection object, such as a person's face. Thus, when living body detection needs to be performed on the detection object such as a person's face, images of the person's face can be acquired, it can then be detected whether the reflected light signal generated by the projected light is present on the person's face in the obtained image sequence, and whether the type of the object to which the image feature formed by the reflected light signal on the person's face belongs is a living body can be identified; if the reflected light signal exists and the type is a living body, it is determined that the person's face is a living body. Since this solution does not require complicated interaction and operation with the user, the requirement for hardware configuration can be reduced; and since this solution is based on detecting the reflected light signal on the surface of the object, and the reflected light signals of a real living body and a forged living body (a carrier of a composite picture or video, such as a photo, a mobile phone or a tablet computer) are different, this solution can also effectively resist synthetic face attacks and improve the accuracy of discrimination. In summary, this solution can improve the living body detection effect even on terminals with limited hardware configuration, thereby improving authentication accuracy and security.
• in this embodiment, similarly to the previous embodiment, the description is given by taking as an example that the living body detecting device is integrated in the terminal, the light source is specifically a color mask, and the detection object is specifically a human face. The difference from the previous embodiment is that, in this embodiment, a combination of light rays with a preset code is used as the color mask, which will be described in detail below.
  • a living body detection method can be as follows:
  • the terminal receives a living body detection request.
  • the terminal may specifically receive a biometric detection request triggered by the user, or may also receive a biometric detection request sent by another device, and the like.
  • the biometric detection request may be triggered to be generated, so that the terminal receives the biometric detection request.
  • the terminal acquires a preset coding sequence, where the coding sequence includes multiple codes.
  • the preset coding sequence may be randomly generated or set according to actual application requirements.
  • the code sequence can be a sequence of numbers, such as: 0, -1, 1, 0, ..., and so on.
  • the terminal sequentially determines, according to a preset encoding algorithm, colors corresponding to the codes according to the coding sequence in the coding sequence, to obtain a color sequence.
• the preset encoding algorithm may reflect the correspondence between each code in the coding sequence and the various colors, and the correspondence may be determined according to actual application requirements; for example, red may represent the number -1, green may represent 0, and blue may represent 1, and so on.
• taking red representing the number -1, green representing 0, and blue representing 1 as an example, if the code sequence obtained in step 302 is "0, -1, 1, 0", then the terminal can, according to the correspondence between each code and the various colors, sequentially determine the color corresponding to each code in the order of the codes in the code sequence, obtaining the color sequence "green, red, blue, green".
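The encoding step can be sketched directly from the example correspondence above (red = -1, green = 0, blue = 1); the function names are illustrative:

```python
# correspondence from the example: -1 -> red, 0 -> green, 1 -> blue
CODE_TO_COLOR = {-1: "red", 0: "green", 1: "blue"}
COLOR_TO_CODE = {color: code for code, color in CODE_TO_COLOR.items()}

def encode_to_colors(code_sequence):
    """Map each code to its color, in the order given by the coding sequence."""
    return [CODE_TO_COLOR[code] for code in code_sequence]

def decode_from_colors(color_sequence):
    """Inverse mapping, as used when checking the decoded reflected light."""
    return [COLOR_TO_CODE[color] for color in color_sequence]
```

For the example coding sequence "0, -1, 1, 0" this yields exactly the color sequence "green, red, blue, green", and decoding the observed colors recovers the codes.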
  • the terminal generates a color mask based on the color sequence, so that the light projected by the color mask changes according to a color indicated by the color sequence.
• for example, the terminal may generate a color mask so that the light projected by the color mask changes in the order "green, red, blue, green"; see FIG. 3b and FIG. 3c.
• the display duration of each color and the waiting time interval when switching between colors can be set according to actual application requirements; for example, as shown in FIG. 3b, the display duration of each color can be set to 1 second and the waiting interval to 0 seconds, and so on. That is, following the direction indicated by the time axis in FIG. 3b, the projected light can be expressed as "green -> red -> blue -> green", where the moment of switching from one color to another is called a color break point.
• in some embodiments, the waiting time interval may also be non-zero; for example, the display duration of each color may be 1 second and the waiting time interval set to 0.5 seconds, and so on, where no light is projected during the waiting time interval (i.e., no light). That is, following the direction indicated by the time axis in FIG. 3c, the projected light can be expressed as "green -> no light -> red -> no light -> blue -> no light -> green".
• in some embodiments, the variation rule of the light can be made more complicated; for example, the display duration of each color and the waiting time interval when switching between different colors can also be set to inconsistent values: the green display duration can be 3 seconds, the red display duration 2 seconds, the blue display duration 4 seconds, the waiting interval between green and red 1 second, the waiting interval between red and blue 1.5 seconds, and so on.
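A display schedule like those of FIG. 3b and FIG. 3c can be sketched as a list of (state, start, end) intervals; the helper below is an illustration, not part of the patent:

```python
def build_light_schedule(colors, durations, gaps):
    """Expand a color sequence into (state, start, end) display intervals,
    inserting "no light" waits between color switches."""
    t, schedule = 0.0, []
    for i, color in enumerate(colors):
        schedule.append((color, t, t + durations[i]))
        t += durations[i]
        if i < len(colors) - 1 and gaps[i] > 0:
            schedule.append(("no light", t, t + gaps[i]))
            t += gaps[i]
    return schedule

# FIG. 3c style pattern: 1 s per color, 0.5 s dark gap between switches
schedule = build_light_schedule(
    ["green", "red", "blue", "green"], [1, 1, 1, 1], [0.5, 0.5, 0.5])
```

Setting all gaps to zero reproduces the FIG. 3b style pattern, and passing unequal durations and gaps gives the more complicated variation rule described above.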
  • the terminal performs image collection on the detection object to obtain an image sequence.
  • the terminal may specifically call the camera of the terminal to capture the detected object in real time, obtain an image sequence, and display the captured image sequence, for example, display the captured image sequence in the detection area, and the like.
• taking the light projected by the color mask being the color sequence shown in FIG. 3b (i.e., green -> red -> blue -> green) and the detection object being the user's face as an example, in the four seconds of video, there is green screen reflected light on the face in the image frame corresponding to the first second (image frame 1), red screen reflected light on the face in the image frame corresponding to the second second (image frame 2), blue screen reflected light on the face in the image frame corresponding to the third second (image frame 3), and green screen reflected light on the face in the image frame corresponding to the fourth second (image frame 4). All of these image frames are the raw data carrying the reflected light signals, which constitute the image sequence of the embodiment of the present application.
• in some embodiments, in order to improve the accuracy of recognition, the image sequence can also be denoised; for example, taking the noise model as Gaussian noise, temporal multi-frame averaging and/or same-frame multi-scale averaging can be used to reduce the noise as much as possible, and details are not described herein again.
• when determining that the degree of position change of the detection object is less than a preset change value, the terminal respectively obtains from the image sequence the chrominance/luminance of the frames corresponding to before and after each change of the projected light.
• for example, if the image sequence includes, in order, image frame 1, image frame 2, image frame 3, and image frame 4, where the face in image frame 1 has green screen reflected light, the face in image frame 2 has red screen reflected light, the face in image frame 3 has blue screen reflected light, and the face in image frame 4 has green screen reflected light, then the chrominance/luminance of image frame 1, image frame 2, image frame 3, and image frame 4 can be respectively acquired.
• for another example, if the image sequence includes, in order, image frame 1 to image frame 12, where the faces in image frames 1, 2, and 3 have green screen reflected light, the faces in image frames 4, 5, and 6 have red screen reflected light, the faces in image frames 7, 8, and 9 have blue screen reflected light, and the faces in image frames 10, 11, and 12 have green screen reflected light, then the chrominance/luminance of image frame 3, image frame 4, image frame 6, image frame 7, image frame 9, and image frame 10 can be respectively acquired, where image frame 3 and image frame 4 are the two frames before and after the color changes from green to red, image frame 6 and image frame 7 are the two frames before and after the color changes from red to blue, and image frame 9 and image frame 10 are the two frames before and after the color changes from blue to green.
  • the terminal calculates, according to the acquired chromaticity/luminance, a relative value of chrominance change or a relative value of brightness change between frames corresponding to before and after the change of the projected ray.
• for example, the terminal may specifically calculate, from the acquired chrominance/luminance by a preset regression function, the relative value of the chrominance/luminance change between the frames corresponding to before and after each change of the projected light (i.e., the relative value of the chromaticity change or the relative value of the brightness change), and so on.
  • the regression function can be set according to the requirements of the actual application, for example, specifically, a regression neural network, and the like.
• still taking the above example, image frame 3 and image frame 4 are the two frames before and after the color changes from green to red, image frame 6 and image frame 7 are the two frames before and after the color changes from red to blue, and image frame 9 and image frame 10 are the two frames before and after the color changes from blue to green; then, for example, the relative value of the chrominance change between image frame 9 and image frame 10 can be obtained by using a preset regression function, such as a regression neural network, to calculate the difference between the chromaticity of image frame 9 and the chromaticity of image frame 10, and similarly for the other frame pairs. The relative value of the chrominance change or the relative value of the brightness change corresponds to a measure ΔI of the signal strength; for example, the relative value of the change between image frame 3 and image frame 4 (the relative value of the chrominance/luminance change) is ΔI34, the relative value of the change between image frame 6 and image frame 7 is ΔI67, and the relative value of the change between image frame 9 and image frame 10 is ΔI910.
• the terminal decodes the image sequence according to the preset decoding algorithm based on the relative values of the chrominance/luminance change (i.e., the relative values of the chrominance/luminance change between the frames corresponding to before and after each change of the projected light), and obtains the decoded sequence.
  • the terminal may use a preset decoding algorithm to sequentially calculate, from the relative values of the chrominance/luminance changes in the image sequence (i.e., the relative values of the chrominance/luminance changes between the frames captured before and after each change of the projected light), the absolute value of the chrominance/luminance of each frame captured before and after a change of the projected light, and use the obtained absolute values as the decoded sequence, or convert the obtained absolute values according to a preset strategy to obtain the decoded sequence.
  • the preset decoding algorithm is matched with the encoding algorithm, and may be determined according to the encoding algorithm.
  • the preset policy may also be set according to the requirements of the actual application, and details are not described herein.
  • the relative value of the change between image frame 3 and image frame 4 (the relative value of the chrominance/luminance change) is ΔI34, the relative value of the change between image frame 6 and image frame 7 (the relative value of the chrominance/luminance change) is ΔI67, and the relative value of the change between image frame 9 and image frame 10 (the relative value of the chrominance/luminance change) is ΔI910; based on these relative values, the absolute intensity value of the reflected signal in every frame (such as the absolute value of the color or the absolute value of the brightness) can then be obtained, as follows:
  • the digital sequence can then be decoded, that is, the decoded sequence 0, -1, 1, 0 is obtained (representing green, red, blue, and green, respectively).
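One plausible realization of this decoding step is to accumulate the signed relative changes starting from the known code of the first lit color; this is a sketch under that assumption, not the patent's prescribed algorithm. With green = 0, red = -1, blue = 1, the switches green→red, red→blue, blue→green measure as -1, +2, -1:

```python
def decode_sequence(first_value, relative_changes):
    """Recover absolute code values from the relative changes measured
    at each light switch, starting from the known first code
    (e.g. 0 for green in the example above)."""
    decoded = [first_value]
    for delta in relative_changes:
        decoded.append(decoded[-1] + delta)
    return decoded

decoded = decode_sequence(0, [-1, 2, -1])  # -> [0, -1, 1, 0]
```

The recovered sequence 0, -1, 1, 0 (green, red, blue, green) is then compared with the code sequence in the matching step that follows.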
  • the terminal determines whether the decoded sequence matches the code sequence. If yes, it is determined that the reflected light signal generated by the projected light exists on the surface of the detected object in the image sequence, and step 310 may be performed; otherwise, if the decoded sequence does not match the code sequence, it is determined that the reflected light signal generated by the projected light does not exist on the surface of the detected object in the image sequence, and operation may proceed according to a preset strategy.
  • for example, the terminal may determine whether the decoded sequence is consistent with the code sequence. If consistent, it is determined that the reflected light signal generated by the projected light exists on the surface of the detected object in the image sequence, and step 310 may be performed; otherwise, if the decoded sequence is inconsistent with the code sequence, it is determined that the reflected light signal generated by the projected light does not exist on the surface of the detected object in the image sequence, and operation proceeds according to the preset strategy.
  • for example, in step 308 the decoded sequence "0, -1, 1, 0" is obtained, which is consistent with the code sequence "0, -1, 1, 0"; therefore, it can be determined that the reflected light signal generated by the projected light exists on the surface of the detected object in the image sequence, and the like.
  • the preset policy may be determined according to the requirements of the actual application. For details, refer to step 206 in the previous embodiment, and details are not described herein again.
  • the terminal uses a preset recognition model to identify the type of the object to which the image feature belongs (i.e., the image feature formed by the reflected light signal on the surface of the detected object, such as a person's face). If the recognition result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detected object is a living body.
  • if the recognition result indicates that the type of the object to which the image feature belongs is not a living body, such as a "phone screen" or the like, it may be determined that the detected object is not a living body.
  • the preset recognition model may include a classifier or other recognition model, and the like, and the classifier may include an SVM, a neural network, a decision tree, and the like.
  • the preset recognition model can be trained from a plurality of feature samples, which are image features formed by the reflected light signal on the surface of objects of labeled types.
  • the identification model may be established by other devices and provided to the terminal for use, or may be established by the terminal. For details, refer to the previous embodiment, and details are not described herein.
  • the living body detecting method can also include:
  • the terminal generates prompt information instructing the detection object (such as a person's face) to perform a preset action, displays the prompt information, and monitors the detected object. If it is detected that the object performs the preset action, it is determined that the detected object is a living body; otherwise, if it is detected that the object does not perform the preset action, it is determined that the detected object is not a living body.
  • the preset action can be set according to the requirements of the actual application. It should be noted that, in order to avoid cumbersome interaction, the number and difficulty of the preset actions may be limited; for example, only one simple interaction, such as blinking or opening the mouth, may be required. Details are not repeated here.
  • each frame image bearing the reflected light signal is recorded with a corresponding time stamp. After determining that the decoded sequence matches the code sequence, it is further determined whether the time stamps correspond to the times at which the color mask switches the projected light. If they correspond, it is determined that the reflected light signal generated by the projected light exists on the surface of the detected object in the image sequence; otherwise, if they do not correspond, it is determined that the reflected light signal does not match the preset light signal sample.
  • if an attacker wants to mount a face synthesis rendering attack, the forged sequence must not only match the encoded color sequence but also show no offset at the absolute time points (because real-time synthesis and rendering takes at least milliseconds), so the attack difficulty is greatly increased and security can be further improved.
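The timestamp check described above can be sketched as a per-switch tolerance comparison. The tolerance value and the millisecond representation are assumptions for illustration; the text only requires that the frame time stamps "correspond" to the mask's switch times.

```python
def timestamps_match(frame_times_ms, switch_times_ms, tolerance_ms=200):
    """Accept the signal only if every captured change frame lines up
    with the scheduled color-switch time within a tolerance.
    tolerance_ms is a hypothetical preset value."""
    if len(frame_times_ms) != len(switch_times_ms):
        return False
    return all(abs(f - s) <= tolerance_ms
               for f, s in zip(frame_times_ms, switch_times_ms))

timestamps_match([1000, 2010, 3005], [1000, 2000, 3000])  # -> True
timestamps_match([1000, 2500, 3005], [1000, 2000, 3000])  # -> False
```

A replayed or synthesized video that matches the colors but lags the schedule by more than the tolerance fails this check even though its decoded sequence matches.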
  • as can be seen from the above, the present embodiment can generate a color mask from a code sequence, where the color mask serves as a light source to project light onto a detection object such as a person's face. When living body detection is required, the face is monitored, and the reflected light signal on the face in the monitored image sequence is decoded to determine whether it matches the code sequence; if it matches, the face is determined to be a living body. This avoids cumbersome interaction and operation with the user and reduces the hardware configuration required.
  • since the scheme discriminates based on the reflected light signal on the surface of the object, and the reflected light signals of a real living body and of a forged living body (a carrier of a composite picture or video, such as a photo, a mobile phone or a tablet computer) are different, the scheme can also effectively resist synthetic face attacks and improve the accuracy of discrimination.
  • in addition, because the projected light is generated according to a random code sequence and the reflected light signal must be decoded during discrimination, even if an attacker synthesizes a corresponding reflective video according to the current color sequence, it cannot be reused next time. Therefore, relative to the solution of the previous embodiment, the living body detection effect can be further improved, thereby improving the accuracy and security of identity verification.
  • the embodiment of the present application further provides a living body detecting device, which is referred to as a living body detecting device.
  • the living body detecting device includes a receiving unit 401, a starting unit 402, a collecting unit 403, and a detecting unit 404, as follows:
  • the receiving unit 401 is configured to receive a living body detection request.
  • the receiving unit 401 may be specifically configured to receive a living body detection request triggered by a user, or to receive a living body detection request sent by another device, and the like.
  • the starting unit 402 is configured to start the light source according to the living body detection request, and project the light to the detection object.
  • the initiating unit 402 may be specifically configured to invoke a corresponding living body detection process according to the living body detection request, activate a light source according to the living body detection process, and the like.
  • the light source can be set according to the needs of the actual application; for example, light may be projected by adjusting the brightness of the terminal screen, by using other light-emitting components such as a flash, an infrared emitter, or an external device, or by setting a color mask on the display interface. That is, the startup unit 402 can specifically perform any of the following operations:
  • the activation unit 402 is specifically configured to adjust the brightness of the screen according to the living body detection request, so that the screen is used as a light source to project light to the detection object.
  • the activation unit 402 is specifically configured to turn on the preset light-emitting component according to the living body detection request, so that the light-emitting component emits light as a light source to the detection object.
  • the light emitting part may comprise a component such as a flash lamp or an infrared emitter.
  • the activation unit 402 is specifically configured to: start a preset color mask according to the living body detection request, and the color mask is used as a light source to project light to the detection object.
  • the initiating unit 402 may be specifically configured to start a color mask on the terminal according to the living body detection request.
  • for example, a component that can flash a color mask may be disposed at an edge of the terminal shell; after receiving the living body detection request, the component can be activated to flash the color mask. Alternatively, the color mask may be flashed by displaying a detection interface, as follows:
  • the detection interface is activated according to the living body detection request, and the detection interface may flash a color mask, and the color mask is used as a light source to project light to the detection object.
  • the area of the flashing color mask may be determined according to the requirements of the actual application.
  • the detection interface may include a detection area and a non-detection area.
  • the detection area is mainly used to display the monitoring situation, and the non-detection area may flash.
  • the area in which the color mask is flashed in the non-detection area may be determined according to the needs of the actual application: the entire non-detection area may be provided with a color mask, or only one or several partial areas of the non-detection area may be set with color masks, and so on.
  • parameters of the color mask, such as its color, can be set according to the needs of the actual application.
  • the color mask can be preset by the system and directly retrieved when the detection interface is started, or automatically generated after the living body detection request is received. As shown in FIG. 4b, the living body detecting device may further include a generating unit 405, as follows:
  • the generating unit 405 can be configured to generate a color mask such that the light projected by the color mask can be changed according to a preset rule.
  • the generating unit 405 can also be used to maximize the intensity of the change of the light.
  • the preset rule may be determined according to the needs of the actual application, and there are various ways to maximize the intensity of the change of the light. For example, for light of the same color, the screen brightness before and after the change may be set to the maximum and the minimum; for light of different colors, the intensity of the change can be maximized by adjusting the color difference before and after the change, and so on. That is:
  • the generating unit 405 is specifically configured to: for light of the same color, obtain a preset screen brightness adjustment parameter, and adjust the screen brightness of the same-color light before and after the change according to the screen brightness adjustment parameter so as to adjust the intensity of the light change; and for light of different colors, obtain a preset color difference adjustment parameter, and adjust the color difference of the different-color light before and after the change according to the color difference adjustment parameter so as to adjust the intensity of the light change.
  • the adjustment range of the intensity of the change of the light may be set according to the needs of the actual application, and may include a large adjustment, such as maximizing the intensity of the change of the light, and may also include a small adjustment, etc., and will not be described here.
  • a preset combination of light combinations can also be used as the color mask, namely:
  • the generating unit 405 is specifically configured to: obtain a preset encoding sequence, where the encoding sequence includes multiple codes; according to a preset encoding algorithm, sequentially determine the color corresponding to each code in the order of the encoding sequence to obtain a color sequence; and generate a color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.
  • the preset coding sequence may be randomly generated or set according to actual application requirements, and the preset coding algorithm may also be determined according to actual application requirements.
  • the coding algorithm can reflect the correspondence between each code in the coding sequence and various colors. For example, red can represent the number -1, green 0, and blue 1; then, if the obtained code sequence is "0, -1, 1, 0", the color sequence "green, red, blue, green" can be obtained, and a color mask is thereby generated so that the light projected by the color mask changes in the order of green, red, blue, and green.
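The code-to-color mapping in this example can be sketched directly; the dictionary below just restates the example correspondence (red = -1, green = 0, blue = 1) and is not the only possible encoding algorithm:

```python
CODE_TO_COLOR = {-1: "red", 0: "green", 1: "blue"}  # example mapping

def color_sequence(code_sequence):
    """Translate a code sequence into the order of colors the mask
    flashes, per the preset encoding algorithm."""
    return [CODE_TO_COLOR[code] for code in code_sequence]

color_sequence([0, -1, 1, 0])  # -> ["green", "red", "blue", "green"]
```

The mask driver then displays each color for its configured duration, switching in this order.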
  • the display duration of each color and the waiting time interval when switching between colors may be set according to actual application requirements; in addition, during the waiting time interval, no light may be projected, or a predetermined light may be projected.
  • the collecting unit 403 is configured to perform image collection on the detected object to obtain a sequence of images.
  • the collecting unit 403 may specifically be configured to invoke the camera device to capture the detected object in real time, obtain an image sequence, and display the captured image sequence in the detection area.
  • the camera device includes, but is not limited to, a camera that is provided by the terminal, a webcam, a surveillance camera, and other devices that can collect images.
  • the acquisition unit 403 may perform the denoising processing on the image sequence after the image sequence is obtained. For details, refer to the previous embodiment, and details are not described herein again.
  • the acquisition unit 403 can also perform other pre-processing on the image sequence, such as scaling, cropping, sharpening, background blurring, etc., to improve the efficiency and accuracy of subsequent recognition.
  • the detecting unit 404 is configured to identify, in the image sequence, the reflected light signal generated by the projected light on the surface of the detected object, the reflected light signal forming an image feature on the surface of the detected object, and to identify the type of the object to which the image feature belongs by using a preset recognition model. If the recognition result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detected object is a living body.
  • the detecting unit 404 is further configured to operate according to a preset strategy when the reflected light signal generated by the projected light is not present on the surface of the detected object in the image sequence.
  • the preset policy may be specifically set according to the requirements of the actual application.
  • for example, the detection object may be determined to be non-living, or the collection unit 403 may be triggered to perform image collection on the detection object again, or the activation unit 402 may be triggered to restart the light source and project light onto the detection object, and so on.
  • the detecting unit 404 is further configured to determine that the detection object is not a living body when the recognition result indicates that the type of the object to which the image feature belongs is not a living body.
  • the detecting unit 404 can include a calculating subunit, a determining subunit, and an identifying subunit, as follows:
  • the calculation subunit can be used to perform regression analysis on the changes between frames in the image sequence to obtain a regression result.
  • the calculation subunit may be specifically configured to perform regression analysis on the numerical expression of the chrominance/luminance of each frame in the image sequence; the numerical expression may be a numerical sequence, and the change in chrominance/luminance between frames in the image sequence is determined according to the numerical expression, such as the numerical sequence, to obtain a regression result.
  • or, the calculation subunit may be specifically configured to perform regression analysis on the changes between frames in the image sequence to obtain a regression result, and the like.
  • the difference between frames may be an inter-frame difference or a frame difference; the inter-frame difference refers to the difference between two adjacent frames, and the frame difference refers to the difference between the two frames captured before and after a change of the projected light.
  • the calculating subunit may be specifically configured to: when determining that the degree of change of the position of the detection object is less than a preset change value, respectively acquire pixel coordinates of adjacent frames in the image sequence, and calculate an inter-frame difference based on the pixel coordinates. For example, the pixel coordinates may be transformed to minimize the registration error of the pixel coordinates; then, pixels whose correlation satisfies a preset condition are filtered according to the transformation result, and the inter-frame difference is calculated from the selected pixels, and so on.
  • or, the calculating subunit may be specifically configured to, when determining that the degree of change of the position of the detection object is less than a preset change value, respectively acquire from the image sequence the pixel coordinates of the frames captured before and after a change of the projected light, and calculate the frame difference based on the pixel coordinates. For example, the pixel coordinates may be transformed to minimize the registration error of the pixel coordinates; then, pixels whose correlation satisfies a preset condition are filtered according to the transformation result, and the frame difference is calculated from the selected pixels, and so on.
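The "filter pixels satisfying a preset condition, then compute the difference" step can be sketched crudely as follows. This is a simplification under stated assumptions: instead of a full coordinate transform to minimize registration error, the fraction of pixels with the smallest raw difference is treated as well registered (keep_ratio is a hypothetical preset condition):

```python
def interframe_difference(frame_a, frame_b, keep_ratio=0.8):
    """Inter-frame (or frame) difference averaged over only the pixels
    that satisfy a preset condition, here the keep_ratio fraction with
    the smallest absolute difference, to suppress misregistered pixels."""
    diffs = sorted(abs(b - a)
                   for row_a, row_b in zip(frame_a, frame_b)
                   for a, b in zip(row_a, row_b))
    k = max(1, int(len(diffs) * keep_ratio))
    return sum(diffs[:k]) / k

# One outlier pixel (e.g. a moved region) is excluded by the filter
frame_a = [[0, 0], [0, 0]]
frame_b = [[0, 0], [0, 100]]
interframe_difference(frame_a, frame_b)  # -> 0.0 (outlier dropped)
```

A real system would use an explicit registration transform and a correlation test rather than this sort-and-trim heuristic, but the structure (filter, then difference over selected pixels) is the same.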
  • or, the calculating subunit may be specifically configured to, when it is determined that the degree of change of the position of the detection object is less than a preset change value, respectively acquire from the image sequence the chrominance/luminance of the frames captured before and after each change of the projected light, calculate, according to the acquired chrominance/luminance, the relative value of the chrominance change or the relative value of the luminance change between those frames, and use the relative value of the chrominance/luminance change as the frame difference between the frames captured before and after the change of the projected light.
  • for example, the calculation subunit can be specifically configured to calculate the chrominance/luminance by a preset regression function and obtain the relative value of the chrominance/luminance change between the frames captured before and after each change of the projected light (i.e., the relative value of the chromaticity change or the relative value of the brightness change), and so on.
  • the regression function can be set according to the requirements of the actual application, for example, specifically, a regression neural network.
  • the preset change value and the preset condition may be set according to actual application requirements.
  • the determining subunit may be configured to determine, according to the regression result, whether a surface of the detecting object in the image sequence has a reflected light signal generated by the projected light, and the reflected light signal forms an image feature on the surface of the detecting object.
  • the identification subunit may be configured to, when the determining subunit determines that the reflected light signal generated by the projected light exists on the surface of the detected object in the image sequence, identify the type of the object to which the image feature belongs by using the preset recognition model; if the recognition result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detected object is a living body.
  • optionally, the identification subunit may be further configured to operate according to a preset strategy when the determining subunit determines that there is no reflected light signal generated by the projected light; for details, refer to the description of the preset strategy in the detecting unit 404, which is not repeated here.
  • the identification subunit may be further configured to determine that the detection object is not a living body when the recognition result indicates that the type of the object to which the image feature belongs is not a living body.
  • the preset recognition model may be trained from a plurality of feature samples, where the feature samples are image features formed by the reflected light signal on the surface of objects of labeled types.
  • the identification model may be established by other devices and provided to the living body detecting device, or may be established by the living body detecting device.
  • the living body detecting device may further include a model establishing unit, as follows:
  • the model establishing unit is configured to acquire a plurality of feature samples, and perform training on the preset initial recognition model according to the feature samples to obtain a preset recognition model.
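The training step above can be illustrated with a minimal classifier. This is a stand-in sketch only: the text names SVM, neural network, and decision tree as real candidates for the recognition model, and the two-dimensional feature samples and labels below are hypothetical.

```python
class NearestCentroidModel:
    """Toy recognition model: classify an image feature by its nearest
    class centroid. A production system would use one of the models the
    text names (SVM, neural network, decision tree)."""

    def fit(self, samples, labels):
        self.centroids = {}
        for label in set(labels):
            group = [s for s, l in zip(samples, labels) if l == label]
            self.centroids[label] = [sum(col) / len(group)
                                     for col in zip(*group)]
        return self

    def predict(self, feature):
        def sq_dist(centroid):
            return sum((a - b) ** 2 for a, b in zip(feature, centroid))
        return min(self.centroids, key=lambda lbl: sq_dist(self.centroids[lbl]))

# Hypothetical labeled feature samples: image features formed by the
# reflected light signal (1 = living face, 0 = non-living carrier)
samples = [[0.9, 0.8], [0.85, 0.9], [0.1, 0.2], [0.15, 0.1]]
labels = [1, 1, 0, 0]
model = NearestCentroidModel().fit(samples, labels)
model.predict([0.88, 0.85])  # -> 1 (classified as living body)
```

The point of the sketch is the training contract: labeled reflected-light feature samples in, a type-of-object classifier out.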
  • the method for determining, according to the regression result, whether the reflected light signal generated by the projected light exists on the surface of the detected object in the image sequence may take various forms; for example, any one of the following methods may be adopted:
  • the determining subunit may be specifically configured to determine whether the regression result is greater than a preset threshold, and if yes, determining that a reflected light signal generated by the projected light exists on a surface of the detected object in the image sequence; if not, determining the image sequence The surface of the object to be detected does not have a reflected light signal generated by the projected light.
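The first determination mode reduces to a threshold comparison; the threshold value below is hypothetical, since the text only says it is preset:

```python
def has_reflected_signal(regression_result, threshold=0.5):
    """First determination mode: the regression result is compared with
    a preset threshold (0.5 here is an assumed value)."""
    return regression_result > threshold

has_reflected_signal(0.8)  # -> True: reflected light signal present
has_reflected_signal(0.3)  # -> False: no reflected light signal
```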
  • or, the determining subunit may be configured to perform classification analysis on the regression result by using a preset global feature algorithm or a preset recognition model. If the analysis result indicates that the inter-frame change of the surface of the detection object is greater than a set value, it is determined that the reflected light signal generated by the projected light exists on the surface of the detected object in the image sequence; if the analysis result indicates that the inter-frame change of the surface of the detection object is not greater than the set value, it is determined that the reflected light signal generated by the projected light does not exist on the surface of the detected object in the image sequence.
  • the set value may be determined according to the requirements of the actual application, and there are various ways to "perform classification analysis on the inter-frame difference by using the preset global feature algorithm or the preset recognition model"; for example, the ways may be as follows:
  • for example, the determining subunit may be specifically configured to analyze the regression result to determine whether the reflected light signal generated by the projected light exists in the image sequence. If the reflected light signal does not exist, an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value is generated; if the reflected light signal exists, it is determined, by the preset global feature algorithm or the preset recognition model, whether the reflector of the existing reflected light signal is the detection object. If it is the detection object, an analysis result indicating that the inter-frame change of the surface of the detection object is greater than the set value is generated; if it is not the detection object, an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value is generated.
  • or, the determining subunit may be configured to classify the images in the image sequence by using a preset global feature algorithm or a preset recognition model so as to filter out the frames in which the detection object exists, obtain candidate frames, and analyze the inter-frame differences of the candidate frames to determine whether the detection object bears the reflected light signal generated by the projected light. If the reflected light signal does not exist, an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value is generated; if the reflected light signal exists, an analysis result indicating that the inter-frame change of the surface of the detection object is greater than the set value is generated.
  • the global feature algorithm refers to an algorithm based on global features, where the global features may include the mean and variance of gray scales, a gray level co-occurrence matrix, a transform spectrum such as an FFT or DCT spectrum, and the like.
  • in addition, a third method may be adopted, that is, by decoding the optical signal, it is determined whether the surface of the detection object in the image sequence bears the reflected light signal generated by the projected light, as follows:
  • the determining subunit may be specifically configured to decode the image sequence, according to the regression result, with a preset decoding algorithm to obtain a decoded sequence, and determine whether the decoded sequence matches the code sequence. If they match, it is determined that the reflected light signal generated by the projected light exists on the surface of the detected object in the image sequence; if the decoded sequence does not match the code sequence, it is determined that the reflected light signal generated by the projected light does not exist on the surface of the detected object in the image sequence.
  • for example, the determining subunit may adopt a preset decoding algorithm to sequentially calculate, from the relative values of the chrominance/luminance changes in the image sequence, the absolute value of the chrominance/luminance of each frame captured before and after a change of the projected light, and use the obtained absolute values, or their conversion according to a preset strategy, as the decoded sequence.
  • the preset decoding algorithm is matched with the encoding algorithm, and may be determined according to the encoding algorithm.
  • the preset policy may also be set according to the requirements of the actual application, and details are not described herein.
  • there are various ways to determine whether the decoded sequence matches the code sequence. For example, the determining subunit may determine whether the decoded sequence is consistent with the code sequence: if consistent, it is determined that the decoded sequence matches the code sequence; if inconsistent, it is determined that the decoded sequence does not match the code sequence. Or, the determining subunit may determine whether the relationship between the decoded sequence and the code sequence conforms to a preset correspondence: if so, it is determined that the decoded sequence matches the code sequence; otherwise, it is determined that the decoded sequence does not match the code sequence, and so on.
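Both matching modes can be sketched in one helper. The exact-equality mode follows the text directly; the mapping passed as the preset correspondence is hypothetical, since the text leaves the correspondence to the actual application:

```python
def sequences_match(decoded, coded, correspondence=None):
    """Match a decoded sequence against the code sequence either by
    exact equality or via a preset correspondence mapping each code to
    the decoded value expected for it."""
    if correspondence is None:
        return decoded == coded
    return len(decoded) == len(coded) and all(
        correspondence.get(c) == d for c, d in zip(coded, decoded))

sequences_match([0, -1, 1, 0], [0, -1, 1, 0])          # exact match
sequences_match([10, 9, 11, 10], [0, -1, 1, 0],
                correspondence={0: 10, -1: 9, 1: 11})  # mapped match
```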
  • the preset correspondence may be set according to requirements of an actual application.
  • the foregoing units may be implemented as separate entities, or may be combined arbitrarily and implemented as one or several entities. For the specific implementation of each of the foregoing units, reference may be made to the foregoing method embodiments, and details are not described herein again.
  • the device can be integrated into a device such as a terminal, and the terminal can be a device such as a mobile phone, a tablet computer, a notebook computer, or a PC.
  • as can be seen from the above, when a living body detection request is received, the activation unit 402 can activate the light source to project light onto the detection object, and the collection unit 403 can perform image collection on the detection object through the detection area in the detection interface. The detecting unit 404 then identifies, in the captured image sequence, the reflected light signal generated by the projected light on the surface of the detected object, the reflected light signal forming an image feature on the surface of the detected object, and identifies the type of the object to which the image feature belongs by using the preset recognition model. If the recognition result indicates that the type of the object to which the image feature belongs is a living body, it is determined that the detected object is a living body. This avoids cumbersome interaction and operation with the user and reduces the hardware configuration required. Moreover, because the scheme discriminates based on the reflected light signal on the surface of the object, and the reflected light signals of a real living body and of a forged living body (a carrier of a composite picture or video, such as a photo, a mobile phone or a tablet computer) are different, the scheme can also effectively resist synthetic face attacks and improve the accuracy of discrimination.
  • the generating unit 405 of the living body detecting device may further generate the projected light according to a random code sequence, and the detecting unit 404 determines by decoding the reflected light signal, so that even if the attacker synthesizes according to the current color sequence A corresponding reflective video can't be used in the next time, so it can greatly improve its security. All in all, the program can greatly improve the effect of living body detection, and is conducive to improving the accuracy and security of identity verification.
  • the embodiment of the present application further provides a terminal.
  • The terminal may include a radio frequency (RF) circuit 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a WiFi module 507, a processor 508, and a power supply 509, among other components.
  • The terminal structure shown in FIG. 5 does not constitute a limitation on the terminal; the terminal may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components. Among them:
  • The RF circuit 501 can be used for receiving and sending signals during information transmission and reception or during a call. In particular, after receiving downlink information from a base station, it hands the information to one or more processors 508 for processing, and it sends uplink data to the base station.
  • The RF circuit 501 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 501 can also communicate with networks and other devices through wireless communication.
  • The wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • the memory 502 can be used to store software programs and modules, and the processor 508 executes various functional applications and data processing by running software programs and modules stored in the memory 502.
  • the memory 502 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the terminal (such as audio data, phone book, etc.).
  • The memory 502 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 508 and the input unit 503 with access to the memory 502.
  • the input unit 503 can be configured to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • input unit 503 can include a touch-sensitive surface as well as other input devices.
  • A touch-sensitive surface, also known as a touch screen or trackpad, collects the user's touch operations on or near it (such as operations performed by the user with a finger, stylus, or any other suitable object or accessory on or near the touch-sensitive surface), and drives the corresponding connecting device according to a preset program.
  • the touch sensitive surface can include two portions of a touch detection device and a touch controller.
  • The touch detection device detects the user's touch orientation and the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 508, and receives and executes commands sent by the processor 508.
  • In addition, touch-sensitive surfaces can be implemented in a variety of types, such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input unit 503 can also include other input devices. Specifically, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • Display unit 504 can be used to display information entered by the user or information provided to the user, as well as various graphical user interfaces of the terminal, which can be composed of graphics, text, icons, video, and any combination thereof.
  • the display unit 504 may include a display panel, and the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
  • The touch-sensitive surface can cover the display panel. When the touch-sensitive surface detects a touch operation on or near it, it passes the operation to the processor 508 to determine the type of the touch event, and the processor 508 then provides a corresponding visual output on the display panel according to the type of the touch event.
  • Although in FIG. 5 the touch-sensitive surface and the display panel are implemented as two separate components to perform input and output functions, in some embodiments the touch-sensitive surface can be integrated with the display panel to implement both.
  • the terminal may also include at least one type of sensor 505, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor may close the display panel and/or the backlight when the terminal moves to the ear.
  • As one type of motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in all directions (usually on three axes), and when stationary it can detect the magnitude and direction of gravity.
  • The terminal can also be configured with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors, which are not described here again.
  • the audio circuit 506, the speaker, and the microphone provide an audio interface between the user and the terminal.
  • On one hand, the audio circuit 506 can transmit the electrical signal converted from received audio data to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which the audio circuit 506 receives and converts into audio data. After being processed by the processor 508, the audio data is sent via the RF circuit 501 to, for example, another terminal, or is output to the memory 502 for further processing.
  • The audio circuit 506 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
  • WiFi is a short-range wireless transmission technology.
  • Through the WiFi module 507, the terminal can help users send and receive e-mails, browse web pages, access streaming media, and so on, providing users with wireless broadband Internet access.
  • Although FIG. 5 shows the WiFi module 507, it can be understood that the module is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
  • The processor 508 is the control center of the terminal. It connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing software programs and/or modules stored in the memory 502 and invoking data stored in the memory 502, thereby monitoring the terminal as a whole.
  • Optionally, the processor 508 can include one or more processing cores. Preferably, the processor 508 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, while the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 508.
  • the terminal also includes a power source 509 (such as a battery) for powering various components.
  • the power source can be logically coupled to the processor 508 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
  • the power supply 509 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the terminal may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • Specifically, the processor 508 in the terminal loads the executable files corresponding to the processes of one or more applications into the memory 502 according to the following instructions, and runs the applications stored in the memory 502 to implement various functions:
  • Receiving a living body detection request; starting a light source according to the living body detection request and projecting light to the detection object; capturing images of the detection object to obtain an image sequence; identifying that the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light, the reflected light signal forming an image feature on the surface of the detection object; and using a preset recognition model to identify the type of the object to which the image feature belongs. If the recognition result indicates that the type of the object to which the image feature belongs is a living body, the detection object is determined to be a living body.
  • There may be multiple ways of determining whether the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light, and of identifying the type of the object to which the image feature belongs. For details, refer to the previous embodiments; they are not repeated here.
  • The light source may be implemented in various ways, for example, by adjusting the brightness of the terminal's screen, by using another light-emitting component such as a flash or an infrared emitter or an external device, or by setting a color mask on the display interface. That is, the application in the memory 502 can also implement the following functions:
  • the screen brightness is adjusted according to the living body detection request such that the screen as a light source projects light to the detection object.
  • the preset light-emitting component is turned on according to the living body detection request, so that the light-emitting component emits light as a light source to the detection object.
  • the light emitting part may comprise a component such as a flash lamp or an infrared emitter.
  • The color mask is activated according to the living body detection request; for example, a detection interface is started, and the detection interface flashes a color mask, which serves as a light source projecting light to the detection object, and so on.
  • the area of the flashing color mask may be determined according to the requirements of the actual application.
  • the detection interface may include a detection area and a non-detection area.
  • the detection area is mainly used to display the monitoring situation, and the non-detection area may flash.
  • Parameters of the color mask, such as its color and transparency, can be set according to the requirements of the actual application. The color mask can be preset by the system and retrieved directly when the detection interface is started, or it can be generated automatically after the living body detection request is received. That is, the application stored in the memory 502 can also implement the following functions:
  • a color mask is generated such that the light projected by the color mask can be changed according to a preset rule.
  • A combination of light encoded in a preset manner may also be used as the color mask. That is, the application stored in the memory 502 can also implement the following functions:
  • Obtain a preset coding sequence, where the coding sequence includes multiple codes; according to a preset coding algorithm, determine the color corresponding to each code in the order of the codes in the coding sequence, to obtain a color sequence; and generate a color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.
  • The preset coding sequence may be randomly generated or set according to actual application requirements, and the preset coding algorithm may also be determined according to actual application requirements. The coding algorithm can reflect the correspondence between each code in the coding sequence and the various colors; for example, red can represent the number -1, green 0, blue 1, and so on.
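Using the example mapping just given (red for -1, green for 0, blue for 1), turning a coding sequence into the color sequence the mask flashes is a one-line lookup. The function name and table below are illustrative, not from the patent:

```python
# Mapping assumed from the example in the text: red = -1, green = 0, blue = 1.
CODE_TO_COLOR = {-1: "red", 0: "green", 1: "blue"}

def coding_sequence_to_colors(codes):
    """Map a preset coding sequence to the color sequence the mask will flash."""
    return [CODE_TO_COLOR[c] for c in codes]
```

For instance, the coding sequence "0, -1, 1, 0" would yield the color sequence "green, red, blue, green".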
  • The light signal may be decoded to determine whether the surface of the detection object in the image sequence shows a reflected light signal produced by the projected light. For details, refer to the foregoing embodiments; they are not described here again.
  • the image sequence can also be subjected to denoising processing, that is, the application stored in the memory 502 can also realize the following functions:
  • the image sequence is subjected to denoising processing.
  • Taking Gaussian noise as the noise model, for example, temporal multi-frame averaging and/or same-frame multi-scale averaging can be used to reduce the noise as much as possible.
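A minimal numpy sketch of the two averaging strategies mentioned above, under the assumption of grayscale frames whose dimensions divide evenly by every scale (function names and scales are illustrative):

```python
import numpy as np

def temporal_average(frames):
    """Temporal multi-frame averaging: the mean of N aligned frames shrinks
    zero-mean Gaussian noise by roughly 1/sqrt(N)."""
    return np.mean(np.stack(frames, axis=0), axis=0)

def multiscale_average(frame, scales=(1, 2, 4)):
    """Same-frame multi-scale averaging (grayscale sketch): box-average the
    frame at several block sizes, upsample back, and average the results.
    Assumes the frame dimensions are divisible by every scale."""
    h, w = frame.shape
    outs = []
    for s in scales:
        if s == 1:
            outs.append(frame.astype(float))
        else:
            # Average s x s blocks, then replicate each averaged value back up.
            small = frame.astype(float).reshape(h // s, s, w // s, s).mean(axis=(1, 3))
            outs.append(np.kron(small, np.ones((s, s))))
    return np.mean(np.stack(outs, axis=0), axis=0)
```

In practice the frames would first be registered so the face occupies the same pixels before averaging across time.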
  • In summary, when the terminal needs to perform living body detection, it can start the light source to project light to the detection object and capture images of the detection object, and then determine whether the surface of the detection object in the captured image sequence shows the reflected light signal produced by the projected light. If the signal is present, the terminal uses a preset recognition model to identify the type of the object to which the image feature belongs; if the recognition result indicates that this type is a living body, the detection object is determined to be a living body. Because the solution requires no complicated interaction or computation with the user, it lowers the hardware requirements. Moreover, the basis for living body discrimination is the reflected light signal on the object's surface, and a real living body and a forged one (a carrier of a synthesized picture or video, such as a photo, a mobile phone, or a tablet computer) reflect light differently; the solution can therefore also effectively resist synthetic face attacks and improve the accuracy of discrimination. In summary, the solution can improve the effect of living body detection even under the limited hardware configurations of terminals, especially mobile terminals, thereby improving the accuracy and security of identity verification.
  • the embodiment of the present application further provides a storage medium, where a plurality of instructions are stored, and the instructions can be loaded by the processor to perform the steps in any of the living body detection methods provided by the embodiments of the present application.
  • the instruction can be as follows:
  • Receiving a living body detection request; starting a light source according to the living body detection request and projecting light to the detection object; capturing images of the detection object to obtain an image sequence; identifying that the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light, the reflected light signal forming an image feature on the surface of the detection object; and using a preset recognition model to identify the type of the object to which the image feature belongs. If the recognition result indicates that the type of the object to which the image feature belongs is a living body, the detection object is determined to be a living body.
  • There may be multiple ways of determining whether the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light, and of identifying the type of the object to which the image feature belongs. For details, refer to the previous embodiments; they are not repeated here.
  • The light source may be implemented in various ways, for example, by adjusting the brightness of the terminal's screen, by using another light-emitting component such as a flash or an infrared emitter or an external device, or by setting a color mask on the display interface. That is, the instructions can also be as follows:
  • the screen brightness is adjusted according to the living body detection request such that the screen as a light source projects light to the detection object.
  • the preset light-emitting component is turned on according to the living body detection request, so that the light-emitting component emits light as a light source to the detection object.
  • the light emitting part may comprise a component such as a flash lamp or an infrared emitter.
  • the color mask is activated according to the living body detection request, such as starting the detection interface, and the detection interface may flash a color mask, and the color mask is used as a light source to project light to the detection object.
  • the area of the flashing color mask may be determined according to the requirements of the actual application.
  • the detection interface may include a detection area and a non-detection area.
  • the detection area is mainly used to display the monitoring situation, and the non-detection area may flash.
  • Parameters of the color mask, such as its color and transparency, can be set according to the requirements of the actual application. The color mask can be preset by the system and retrieved directly when the detection interface is started, or it can be generated automatically after the living body detection request is received; that is, the instructions can also be as follows:
  • a color mask is generated such that the light projected by the color mask can be changed according to a preset rule.
  • a combination of preset encoded light can be used as the color mask, and the like.
  • The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
  • Since the instructions stored in the storage medium can perform the steps in any living body detection method provided by the embodiments of the present application, they can achieve the beneficial effects that any of those methods can achieve. For details, refer to the previous embodiments; they are not described here again.


Abstract

Embodiments of the present application disclose a living body detection method, terminal, and storage medium. When living body detection is required, a light source can be started to project light onto a detection object, and images of the detection object are captured. When it is identified that the surface of the detection object in the captured image sequence shows a reflected light signal produced by the projected light, a preset recognition model is used to identify the type of the object to which the image feature formed on the surface by the reflected light signal belongs. If the recognition result indicates that this type is a living body, the detection object is determined to be a living body. This solution can improve the effect of living body detection, thereby improving the accuracy and security of identity verification.

Description

Living body detection method, terminal, and storage medium
This application claims priority to the Chinese patent application filed with the China Patent Office on October 26, 2017, with application number 201711012244.1 and the title "Living body detection method, apparatus, and storage medium", which in turn claims priority to the Chinese patent application filed with the China Patent Office on December 30, 2016, with application number 2016112570522 and the title "Living body detection method and apparatus". The entire contents of the Chinese patent application with application number 201711012244.1 and title "Living body detection method, apparatus, and storage medium" and of the Chinese patent application with application number 2016112570522 and title "Living body detection method and apparatus" are incorporated herein by reference.
Technical Field
This application relates to the field of communication technologies, and in particular to a living body detection method, terminal, and storage medium.
Background
In recent years, identity verification technologies such as fingerprint recognition, eyeprint recognition, iris recognition, and face recognition have developed greatly. Among them, face recognition is the most prominent and has been applied more and more widely in all kinds of identity authentication systems.
An identity authentication system based on face recognition mainly needs to solve two problems: face verification and living body detection. Living body detection is mainly used to confirm that captured data such as face images come from the user in person rather than from playback or forged material. Against current attacks on living body detection, such as photo attacks, video playback attacks, and synthetic face attacks, a "randomized interaction" technique has been proposed. It starts from the motion of different parts of the face in a video and incorporates randomized interactive actions that require the user's active cooperation, such as blinking, shaking the head, or lip-reading recognition, and judges on that basis whether the detection object is a living body.
During research and practice on the prior art, the inventors of the present application found that the algorithms used for living body detection in existing schemes are not highly accurate and cannot effectively resist synthetic face attacks; in addition, cumbersome active interaction greatly reduces the pass rate of genuine samples. Overall, the living body detection effect of existing schemes is poor, which greatly affects the accuracy and security of identity verification.
Summary
Embodiments of the present application provide a living body detection method, terminal, and storage medium, which can improve the effect of living body detection and thereby improve the accuracy and security of identity verification.
An embodiment of the present application provides a living body detection method, including:
receiving, by a terminal, a living body detection request;
starting a light source according to the living body detection request, and projecting light to a detection object;
capturing images of the detection object to obtain an image sequence;
identifying that the surface of the detection object in the image sequence shows a reflected light signal produced by the projected light, the reflected light signal forming an image feature on the surface of the detection object;
identifying, by using a preset recognition model, the type of the object to which the image feature belongs, the preset recognition model being trained from multiple feature samples, each feature sample being an image feature formed by the reflected light signal on the surface of an object of a labeled type; and
if the recognition result indicates that the type of the object to which the image feature belongs is a living body, determining that the detection object is a living body.
Correspondingly, an embodiment of the present application further provides a living body detection terminal, including a memory, a processor, and a computer program stored in the memory, where the processor, when executing the computer program, implements the following method steps:
receiving a living body detection request;
starting a light source according to the living body detection request, and projecting light to a detection object;
capturing images of the detection object to obtain an image sequence;
identifying that the surface of the detection object in the image sequence shows a reflected light signal produced by the projected light, the reflected light signal forming an image feature on the surface of the detection object; identifying, by using a preset recognition model, the type of the object to which the image feature belongs, the preset recognition model being trained from multiple feature samples, each feature sample being an image feature formed by the reflected light signal on the surface of an object of a labeled type; and, if the recognition result indicates that the type of the object to which the image feature belongs is a living body, determining that the detection object is a living body.
In addition, an embodiment of the present application further provides a storage medium storing multiple instructions, the instructions being suitable for being loaded by a processor to perform the steps in any of the living body detection methods provided by the embodiments of the present application.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1a is a schematic diagram of a scenario of the living body detection method provided by an embodiment of the present application;
FIG. 1b is a schematic diagram of another scenario of the living body detection method provided by an embodiment of the present application;
FIG. 1c is a flowchart of the living body detection method provided by an embodiment of the present application;
FIG. 2 is another flowchart of the living body detection method provided by an embodiment of the present application;
FIG. 3a is another flowchart of the living body detection method provided by an embodiment of the present application;
FIG. 3b is an example diagram of color changes in the living body detection method provided by an embodiment of the present application;
FIG. 3c is another example diagram of color changes in the living body detection method provided by an embodiment of the present application;
FIG. 4a is a schematic structural diagram of the living body detection apparatus provided by an embodiment of the present application;
FIG. 4b is another schematic structural diagram of the living body detection apparatus provided by an embodiment of the present application;
FIG. 5 is a schematic structural diagram of the terminal provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Embodiments of the present application provide a living body detection method, terminal, and storage medium.
The living body detection apparatus may be integrated into equipment such as a terminal. It can use the terminal screen's light intensity and color changes, or other components or devices such as a flash or an infrared emitter, as the light source projected onto the detection object, and then perform living body detection by analyzing the reflected light signal from the surface of the detection object, such as the face, in the received image sequence.
For example, take the case where the apparatus is integrated in a terminal and the light source is a color mask. When the terminal receives a living body detection request, it can start a detection interface according to the request. As shown in FIG. 1a, besides a detection area, the detection interface also has a non-detection area (the gray part marked in FIG. 1a), which is mainly used to flash a color mask; the color mask can serve as a light source projecting light to the detection object, as shown for example in FIG. 1b. Because a real living body and a forged one (a carrier of a synthesized picture or video, such as a photo, a mobile phone, or a tablet computer) reflect light differently, the living body can be discriminated by judging whether the surface of the detection object shows the reflected light signal produced by the projected light and whether that signal meets a preset condition. For instance, images of the detection object can be captured (the monitoring situation can be displayed in the detection area of the detection interface), and it is then determined whether the surface of the detection object in the captured image sequence shows the reflected light signal produced by the projected light. If so, a preset recognition model is used to identify the type of the object to which the image feature formed on the surface by the reflected light signal belongs; if the recognition result indicates that this type is a living body, the detection object is determined to be a living body, and so on.
Detailed descriptions are given below. It should be noted that the numbering of the following embodiments does not limit the preferred order of the embodiments.
This embodiment is described from the perspective of the terminal's living body detection apparatus (the living body detection apparatus for short), which may be integrated into equipment such as a terminal; the terminal may be a mobile phone, a tablet computer, a notebook computer, or a personal computer (PC).
A living body detection method includes: receiving a living body detection request; starting a light source according to the request and projecting light to a detection object; capturing images of the detection object to obtain an image sequence; identifying that the surface of the detection object in the image sequence shows a reflected light signal produced by the projected light, the reflected light signal forming an image feature on the surface of the detection object; identifying, using a preset recognition model, the type of the object to which the image feature belongs; and, if the recognition result indicates that this type is a living body, determining that the detection object is a living body.
As shown in FIG. 1c, a specific flow of the living body detection method may be as follows:
101. Receive a living body detection request.
For example, a living body detection request triggered by the user may be received, or a living body detection request sent by another device may be received, and so on.
102. Start a light source according to the living body detection request, and project light to the detection object.
For example, a corresponding living body detection process may be invoked according to the living body detection request, and the light source started according to that process, and so on.
The light source can be set according to the needs of the actual application. For example, it can be implemented by adjusting the brightness of the terminal screen, by using another light-emitting component or external device such as a flash or an infrared emitter, or by setting a color mask on the terminal or its display interface, and so on. That is, the step "starting a light source according to the living body detection request" can be implemented in any of the following ways:
(1) Adjust the screen brightness according to the living body detection request, so that the screen, as a light source, projects light to the detection object.
(2) Turn on a preset light-emitting component according to the living body detection request, so that the component, as a light source, projects light to the detection object.
The light-emitting component may include a component such as a flash or an infrared emitter.
(3) Start a preset color mask according to the living body detection request, the color mask serving as a light source projecting light to the detection object.
For example, a color mask on the terminal may be started according to the living body detection request. A color mask is a light-emitting area or component that can flash colored light. For instance, a component that can flash a color mask may be set on the edge of the terminal housing and started after the living body detection request is received; alternatively, the color mask may be flashed by displaying a detection interface, as follows:
Start a detection interface according to the living body detection request; the detection interface flashes a color mask, which serves as a light source projecting light to the detection object.
The area of the detection interface that flashes the color mask can be determined by the needs of the actual application. For example, the detection interface may include a detection area and a non-detection area: the detection area is mainly used to display the monitoring situation, while the non-detection area can flash the color mask, which, as a light source, projects light to the detection object, and so on.
The region of the non-detection area that flashes the color mask can also be determined by the needs of the actual application: the whole non-detection area may carry the color mask, or only one or several parts of it, and so on. Parameters of the color mask such as its color and transparency can be set according to the needs of the actual application. The color mask can be preset by the system and retrieved directly when the detection interface is started, or generated automatically after the living body detection request is received; that is, after the step "receiving a living body detection request", the living body detection method may further include:
generating a color mask so that the light projected by the color mask can change according to a preset rule.
To make subsequent recognition of the light changes easier, the intensity of the light change can also be adjusted.
The preset rule can be determined by the needs of the actual application, and there are many ways to adjust the intensity of the light change. For example, for light of the same color (i.e., light of the same wavelength), the change intensity can be adjusted by adjusting the screen brightness before and after the change, such as setting the brightness before and after the change to the maximum and minimum; for light of different colors (i.e., light of different wavelengths), the change intensity can be adjusted by adjusting the color difference before and after the change, and so on. That is, after the color mask is generated, the living body detection method may further include:
for light of the same color, obtaining a preset screen brightness adjustment parameter and adjusting, according to that parameter, the screen brightness before and after the change of the same-colored light, so as to adjust the intensity of the light change;
for light of different colors, obtaining a preset color difference adjustment parameter and adjusting, according to that parameter, the color difference before and after the change of the differently colored light, so as to adjust the intensity of the light change.
The magnitude of the adjustment can be set according to the needs of the actual application and can be large, such as maximizing the intensity of the light change, or small. For convenience, the following description takes maximizing the intensity of the light change as an example.
So that the reflected light signal can later be better detected from inter-frame differences, besides adjusting the intensity of the light change, the color space most robust for signal analysis can be chosen when selecting colors. For example, in a preset color space, when the screen changes from brightest red to brightest green, the chroma change of the reflected light is largest, and so on.
To improve the accuracy and security of identity verification, a combination of light encoded in a preset manner can also be used as the color mask. That is, the step "generating a color mask so that the light projected by the color mask can change according to a preset rule" may include:
obtaining a preset coding sequence including multiple codes; determining, according to a preset coding algorithm, the color corresponding to each code in the order of the codes in the coding sequence, to obtain a color sequence; and generating a color mask based on the color sequence, so that the light projected by the color mask changes according to the colors indicated by the color sequence.
The preset coding sequence may be randomly generated or set according to the needs of the actual application, and the preset coding algorithm can also be determined by the needs of the actual application. The coding algorithm can reflect the correspondence between each code in the coding sequence and the various colors. For example, let red represent the number -1, green represent 0, and blue represent 1; then, if the obtained coding sequence is "0, -1, 1, 0", the color sequence "green, red, blue, green" is obtained, and a color mask is generated so that the light it projects changes in the order "green, red, blue, green".
It should be noted that, during projection, the display duration of each color and the waiting interval when switching between colors can be set according to the needs of the actual application; for example, each color may be displayed for 2 seconds, with the waiting interval set to 0 or 1 second, and so on.
During the waiting interval, no light may be projected, or a predetermined light may be projected. For example, suppose the waiting interval is not 0 and no light is projected during it: if the color order of the light projected by the color mask is "green, red, blue, green", the projected light is "green -> no light -> red -> no light -> blue -> no light -> green". If the waiting interval is 0 seconds, the light projected by the color mask can switch colors directly, i.e., "green -> red -> blue -> green", and so on; details are not repeated here.
To further improve security, the rule of light change can be made more complex; for example, the display duration of each color and the waiting interval between different colors can be set to unequal values: green may be displayed for 3 seconds, red for 2 seconds, and blue for 4 seconds, with a 1-second interval when switching between green and red and a 1.5-second interval when switching between red and blue, and so on.
103. Capture images of the detection object to obtain an image sequence.
For example, a camera device may be invoked to photograph the detection object in real time to obtain an image sequence, and the captured image sequence may be displayed in the detection area, and so on.
The camera device includes, but is not limited to, the terminal's built-in camera, a web camera, a surveillance camera, and other devices capable of capturing images. It should be noted that, since the light projected to the detection object may be visible or invisible, the camera device provided by the embodiments of the present application can also be configured with different light receivers, such as an infrared light receiver, according to the needs of the actual application, to sense different kinds of light and thereby capture the required image sequence; details are not repeated here.
To reduce the effect on the signal of value fluctuations caused by noise, the image sequence can also be denoised after it is obtained. For example, taking Gaussian noise as the noise model, temporal multi-frame averaging and/or same-frame multi-scale averaging can be used to reduce noise as much as possible; details are not repeated here.
Other preprocessing, such as scaling, cropping, sharpening, or background blurring, can also be applied to the image sequence to improve the efficiency and accuracy of subsequent recognition.
104. Identify that the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light.
The reflected light signal forms an image feature on the surface of the detection object. Image features can include color features, texture features, shape features, and spatial relationship features. A color feature is a global feature describing the surface properties of the scene corresponding to an image or image region; a texture feature is also a global feature that likewise describes the surface properties of the scene corresponding to an image or image region. Shape features have two kinds of representation: contour features, which concern the outer boundary of an object, and region features, which concern the whole shape region. Spatial relationship features refer to the mutual spatial positions or relative directional relationships among multiple targets segmented from an image; these relationships can be divided into connection/adjacency, overlap, and containment/inclusion relationships, among others. In specific implementations, the image feature may include Local Binary Patterns (LBP) feature descriptors, Scale-invariant feature transform (SIFT) feature descriptors, and/or feature descriptors extracted by a convolutional neural network; details are not repeated here.
There are many ways to identify whether the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light. For example, the change of frames in the image sequence can be used to detect the reflected light information, as follows:
(1) Perform regression analysis on the change of frames in the image sequence to obtain a regression result.
For example, the numerical expression of the chroma/brightness of each frame in the image sequence can be regressed; the numerical expression may be a numerical sequence, and the change of chroma/brightness of the frames in the image sequence is then judged from this numerical expression to obtain the regression result. In other words, the change of the numerical expression, such as a numerical sequence, can represent the chroma or brightness change of each frame, and that per-frame chroma or brightness change can serve as the regression result.
There are many ways to regress the numerical expression of the chroma/brightness of each frame in the image sequence; for example, a preset image regression analysis model can be used to perform regression analysis on the chroma/brightness of each frame, obtaining the numerical expression of each frame's chroma/brightness, and so on.
The image regression analysis model may be a regression tree, a regression convolutional neural network, or the like, set according to the needs of the actual application. It can be trained in advance by another device or by the living body detection apparatus itself; for example, the training process may be as follows:
Collect a preset number of images with different surface reflection information (such as facial reflection information) as a collection sample set, label the collected samples according to a preset strategy, take the labeled samples as training samples to obtain a training sample set, and then use a preset initial image regression analysis model (such as a regression tree or a regression convolutional neural network) to learn from the training sample set, obtaining the image regression analysis model.
The preset labeling strategy can be determined by the needs of the actual application. For example, suppose the brightness change of frames is regressed and the reflection information is divided into three classes: strong reflection, weak reflection, and no reflection. Samples with strong reflection can then be labeled 1, samples with weak reflection 0.5, and samples with no reflection 0, and so on. The preset initial image regression analysis model is then used to learn from the training sample set to find the regression function that best fits, within the training sample set, the mapping between the original images and the continuous numerical labels of the regression analysis; this yields the image regression analysis model.
Similarly, regression analysis of the chroma change of frames works in the same way. For example, image frames whose detection-object surface shows light of different colors can be collected (as collection samples) and labeled, where the label is no longer a one-dimensional scalar but a triple corresponding to RGB (Red, Green, Blue) colors, such as (255, 0, 0) for red, and so on; the preset initial image regression analysis model then learns from the labeled image frames (the training sample set) to obtain the image regression analysis model, and so on.
After the image regression analysis model is obtained, it can be used to perform regression analysis on each frame of the image sequence. For example, if the model was trained on images with different light intensity changes, then for image frames whose detection-object surface shows different light intensity (i.e., brightness) changes, a continuous value from 0 to 1 can be directly regressed to express the strength of facial reflection in the frame, and so on; details are not repeated here.
Besides computing the regression result by regressing the numerical expression of the chroma/brightness of each frame, the change between frames in the image sequence can also be regressed directly to obtain the regression result.
The change between frames in the image sequence can be obtained by computing the difference between frames, which may be an inter-frame difference or a frame difference: the inter-frame difference is the difference between two adjacent frames, while the frame difference is the difference between the frames corresponding to before and after the projected light changes.
For example, to compute the inter-frame difference, when it is determined that the degree of position change of the detection object is smaller than a preset change value, the pixel coordinates of neighboring frames in the image sequence can be obtained respectively, and the inter-frame difference computed based on those pixel coordinates.
As another example, to compute the frame difference, when it is determined that the degree of position change of the detection object is smaller than the preset change value, the pixel coordinates of the frames corresponding to before and after the projected light changes can be obtained from the image sequence respectively, and the frame difference computed based on those pixel coordinates.
There are many ways to compute the inter-frame difference or frame difference based on the pixel coordinates, for example:
transform the pixel coordinates of neighboring frames to minimize their registration error, select pixels whose correlation meets a preset condition according to the transformation result, and compute the inter-frame difference from the selected pixels; or
transform the pixel coordinates of the frames corresponding to before and after the projected light changes to minimize their registration error, select pixels whose correlation meets a preset condition according to the transformation result, and compute the frame difference from the selected pixels.
Both the preset change value and the preset condition can be set according to the needs of the actual application; details are not repeated here.
Besides the above ways of computing the frame difference, other ways can be used. For example, on a channel of some color space, or on any dimensions that can describe chroma or brightness change, the relative value of the chroma change or brightness change between two frames can be analyzed. That is, the step "performing regression analysis on the change of frames in the image sequence to obtain a regression result" may include:
when it is determined that the degree of position change of the detection object is smaller than the preset change value, obtaining from the image sequence the chroma/brightness of the frames corresponding to before and after the projected light changes, computing from them the relative value of the chroma change or brightness change between those frames, and taking the relative chroma/brightness change value as the frame difference between those frames; this frame difference is the regression result.
For example, the chroma/brightness can be processed by a preset regression function to obtain the relative chroma/brightness change value between the frames corresponding to before and after the projected light changes, and so on.
The regression function can be set according to the needs of the actual application; for example, it may be a regression neural network, and so on.
It should be noted that if the degree of position change of the detection object is greater than or equal to the preset change value, other neighboring frames, or other frames corresponding to before and after the projected light changes, can be obtained from the image sequence for the computation, or the image sequence can be re-captured.
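As a minimal sketch of computing such a relative change between the frames before and after the light changes: the function below takes the mean brightness of each frame as the per-frame value and reports its relative change. The registration step and the preset regression function are omitted, and the function name is illustrative:

```python
import numpy as np

def relative_brightness_change(frame_before, frame_after, eps=1e-6):
    """Relative change in mean brightness between the frames captured before
    and after the projected light changes; a simple stand-in for the frame
    difference described above (pixel registration is assumed done)."""
    b0 = float(np.mean(frame_before))
    b1 = float(np.mean(frame_after))
    # Normalize by the pre-change brightness so the value is scale-free.
    return (b1 - b0) / (b0 + eps)
```

The same computation on a chroma channel of a suitable color space would give the relative chroma change instead.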
(2) Determine, according to the regression result, whether the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light. For example, any of the following ways can be used:
First way:
Determine whether the regression result is greater than a preset threshold; if so, determine that the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light; if not, determine that it does not.
The preset threshold can be determined by the needs of the actual application; details are not repeated here.
Second way:
Perform classification analysis on the regression result through a preset global feature algorithm or a preset recognition model. If the analysis result indicates that the inter-frame change of the surface of the detection object is greater than a set value, determine that the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light; if the analysis result indicates that the inter-frame change is not greater than the set value, determine that it does not.
The set value can be determined by the needs of the actual application, and there are also many ways to "perform classification analysis on the inter-frame difference through a preset global feature algorithm or a preset recognition model", for example:
Analyze the regression result to judge whether the image sequence contains the reflected light signal produced by the projected light. If it does not, generate an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value. If it does, judge through the preset global feature algorithm or preset recognition model whether the reflector of the reflected light information is the detection object: if it is, generate an analysis result indicating that the inter-frame change of the surface of the detection object is greater than the set value; if it is not, generate an analysis result indicating that the inter-frame change is not greater than the set value.
Alternatively, the images in the image sequence can be classified through the preset global feature algorithm or preset recognition model to select the frames in which the detection object is present, obtaining candidate frames; the inter-frame difference of the candidate frames is then analyzed to judge whether the detection object shows the reflected light signal produced by the projected light. If it does not, generate an analysis result indicating that the inter-frame change of the surface of the detection object is not greater than the set value; if it does, generate an analysis result indicating that the inter-frame change is greater than the set value, and so on.
A global feature algorithm is an algorithm based on global features, which can include the mean and variance of grayscale, the gray-level co-occurrence matrix, and spectra after transforms such as the Fast Fourier Transformation (FFT) and the Discrete Cosine Transform (DCT). The preset recognition model can include classifiers or other recognition models (such as a face recognition model); classifiers can include Support Vector Machines (SVM), neural networks, decision trees, and the like.
Third way:
If the color mask was generated according to a preset coding sequence, whether the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light can also be identified by decoding the light signal, for example as follows:
According to the regression result, decode the image sequence according to a preset decoding algorithm to obtain a decoded sequence, and determine whether the decoded sequence matches the coding sequence. If they match, determine that the surface of the detection object in the image sequence shows the reflected light signal produced by the projected light; if they do not match, determine that it does not.
For example, if the regression result is the relative chroma/brightness change value between the frames corresponding to before and after the projected light changes (see the description of computing the regression result in (1)), a preset decoding algorithm can be applied in turn to the relative chroma/brightness change values in the image sequence to obtain the absolute chroma/brightness value of each frame corresponding to before and after the light changes; the absolute values so obtained are then taken as the decoded sequence, or converted into the decoded sequence according to a preset strategy.
The preset decoding algorithm matches the coding algorithm and can be determined from it; the preset strategy can also be set according to the needs of the actual application; details are not repeated here.
There can also be many ways to determine whether the decoded sequence matches the coding sequence. For example, it can be determined whether the decoded sequence is identical to the coding sequence: if so, they match; if not, they do not. Alternatively, it can be determined whether the relationship between the decoded sequence and the coding sequence conforms to a preset correspondence: if so, they match; otherwise they do not. The preset correspondence can be set according to the needs of the actual application.
It should be noted that if it is identified that the surface of the detection object in the image sequence does not show the reflected light signal produced by the projected light, the flow ends, or the detection object can be determined to be a non-living body, or step 103 can be performed again, i.e., images of the detection object re-captured. Alternatively, the started light source can be checked: if the light projected by the light source to the detection object has no problem, the flow ends, the detection object is determined to be a non-living body, or images are re-captured; if the light does have a problem, step 102 is performed again, i.e., the light source is restarted and light projected to the detection object. The specific strategy can be set according to the needs of the actual application; details are not repeated here.
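The decode-then-match idea in the third way above can be sketched as simple threshold quantization of the measured relative changes into the coding symbols. The thresholds and function names below are hypothetical illustrations, not the patented decoding algorithm:

```python
def decode_changes(relative_changes, pos_th=0.1, neg_th=-0.1):
    """Quantize measured relative chroma/brightness changes into the symbols
    -1/0/1 used by the coding sequence (thresholds are illustrative)."""
    decoded = []
    for r in relative_changes:
        if r > pos_th:
            decoded.append(1)
        elif r < neg_th:
            decoded.append(-1)
        else:
            decoded.append(0)
    return decoded

def reflected_signal_present(relative_changes, coding_sequence):
    """The reflected light signal is considered present when the decoded
    sequence matches the preset coding sequence exactly."""
    return decode_changes(relative_changes) == list(coding_sequence)
```

Because the coding sequence can be randomly regenerated per request, a video synthesized to match one sequence would fail to decode against the next.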
105、采用预设识别模型对该图像特征(即反射光信号在该检测对象表面所形成的图像特征)所属对象的类型进行识别,若识别结果指示该图像特征所属对象的类型为活体,则确定该检测对象为活体。
反之,若识别结果指示该图像特征所属对象的类型为非活体,比如为“手机屏幕”等,则可确定该检测对象为非活体。
由于反射光信号在该检测对象表面会形成图像特征,因此,当识别出检测对象的表面存在该投射光线所产生的反射光信号(即步骤104)时,可以获取该“反射光信号在该检测对象表面所形成的图像特征”,然后,采用预设识别模型对该图像特征进行识别,进而基于识别结果判断该检测对象是否为活体,比如,如果识别结果指示该图像特征所属对象的类型为活体,譬如指示该图像特征所属对象的类型为“人脸”,那么,此时便可以确定该检测对象为活体;否则,如果识别结果指示该图像特征所属对象的类型不为活体,譬如指示该图像特征所属对象的类型为“手机屏幕”,则此时可以确定该检测对象为非活体,以此类推,等等。
其中,预设识别模型可以包括分类器或其他的识别模型(如人脸识别模型)等,分类器可以包括SVM、神经网络和决策树等。
该预设识别模型可以由多个特征样本训练而成,该特征样本为该反射光信号在已标注类型的对象表面所形成的图像特征。比如,可以将该投射光线照射在人的面 部上后会所形成的图像特征作为特征样本,并标注为“人脸”,将该投射光线照射在手机屏幕上所形成的图像特征作为特征样本,并标注为“手机屏幕”,以此类推,在采集到大量的特征样本后,便可以通过这些特征样本(即已标注类型的图像特征)来建立识别模型。
需说明的是,该识别模型可以由其他的设备建立之后,保存在预设存储空间中,当该活体检测装置需要对图像特征所属对象的类型进行识别时,从该存储空间中直接获取,或者,该识别模型也可以由该活体检测装置自行进行建立,即在步骤“采用预设识别模型对该图像特征所属对象的类型进行识别”之前,该活体检测方法还可以包括:
获取多个特征样本,根据该特征样本对预设初始识别模型进行训练,得到预设识别模型。
此外,还需说明的是,若在步骤104中,主要是通过该预设的识别模型来识别出该图像序列中检测对象的表面存在该投射光线所产生的反射光信号的话,则也可以根据识别结果来直接判断该检测对象是否为活体,即识别模型在判断该图像序列中检测对象的表面是否存在该投射光线所产生的反射光信号的同时,也可以识别出该图像特征所属对象的类型;换而言之,即可以通过该预设的识别模型,来识别该图像序列中检测对象的表面是否存在该投射光线所产生的反射光信号,并且在识别出存在该投射光线所产生的反射光信号时,识别相应图像特征(即该反射光线在检测对象的表面所形成的图像特征)所属对象的类型,在此不再赘述。
由上可知,本实施例在需要进行活体检测时,可以启动光源向检测对象投射光线,并对该检测对象进行图像采集,当识别出采集得到的图像序列中检测对象的表面存在该投射光线所产生的反射光信号时,采用预设识别模型对该反射光信号在该检测对象表面形成图像特征所属对象的类型进行识别,若识别结果指示该图像特征所属对象的类型为活体,则确定该检测对象为活体;由于该方案无需与用户进行繁琐的交互操作和运算,因此,可以降低对硬件配置的需求,而且,由于该方案进行活体判别的依据是检测对象表面的反射光信号,而真正的活体与伪造的活体(合成图片或视频的载体,比如相片、手机或平板电脑等)的反射光信号是不同的,因此,该方案也可以有效抵挡合成人脸攻击,提高判别的准确性;所以,总而言之,该方 案可以提高活体检测效果,从而提高身份验证的准确性和安全性。
根据上一个实施例所描述的方法,以下将举例作进一步详细说明。
在本实施例中,将以该活体检测装置具体集成在终端中,光源具体为颜色遮罩,检测对象具体为人的面部,且具体通过回归分析该图像序列中帧间的变化来得到回归结果为例进行说明。
如图2所示,一种活体检测方法,具体流程可以如下:
201、终端接收活体检测请求。
例如,终端具体可以接收用户触发的活体检测请求,或者,也可以接收其他设备发送的活体检测请求,等等。
比如,以用户触发为例,当用户启动该活体检测功能,比如点击活体检测的启动键时,便可以触发生成该活体检测请求,从而使得终端接收到该活体检测请求。
202、终端生成颜色遮罩,使得该颜色遮罩所投射出的光线能够按照预设规律进行变化。
为了便于后续可以更好地识别出光线的变化,还可以最大化该光线的变化强度。
其中,该预设规律可以根据实际应用的需求而定,而最大化该光线的变化强度的方式也可以有多种,例如,对于同颜色的光线,可以通过调整变化前后的屏幕亮度来最大化光线的变化强度,比如,让变化前后的屏幕亮度设置为最大和最小,而对于不同颜色的光线,则可以通过调整变化前后的色差来最大化光线的变化强度,比如将屏幕由黑色最暗转变为白色最亮,等等。
为了后续可以更好地从图像帧间差中检测出反射光信号,除了可以最大化该光线的变化强度之外,还可以在颜色的选择上,尽量选择对信号分析最鲁棒的颜色空间,比如,在预设颜色空间下,屏幕由红色最亮转变到绿色最亮,其反射光的色度变化最大,以此类推,等等。
203、终端根据该活体检测请求启动检测界面,并通过该检测界面中的非检测区域闪现颜色遮罩,使得该颜色遮罩作为光源向检测对象,比如人的面部投射光线。
例如,终端具体可以根据该活体检测请求调用相应的活体检测进程,根据该活体检测进程启动相应的检测界面,等等。
其中,该检测界面可以包括检测区域和非检测区域,检测区域主要用于对获取到的图像序列进行显示,而该非检测区域可以闪现颜色遮罩,该颜色遮罩作为光源向检测对象投射光线,具体可以参见图1b,这样,在检测对象上,便会因该光线而产生反射光,而且,根据光的颜色和强度等参数的不同,其产生的反射光也会有所区别。
需说明的是,为了保证颜色遮罩所发射的光线可以投射至检测对象,该检测对象需要与该移动设备的屏幕保持在一定的距离内,比如,当用户需要检测某个人脸是否为活体时,可以将移动设备拿到该人脸的正前方距离适当的地方,以便对该人脸进行监控,等等。
204、终端对该检测对象进行图像采集,以得到图像序列。
例如,具体可以调用终端的摄像头,实时对检测对象进行拍摄,得到图像序列,并将拍摄得到的图像序列在该检测区域中进行显示。
为了减少噪声所造成的数值浮动对信号的影响,在得到图像序列后,还可以对该图像序列进行去噪声处理。例如,以噪声模型为高斯噪声为例,具体可以使用时序上多帧平均和/或同帧多尺度平均来尽可能地减小噪声,在此不再赘述。
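以高斯噪声为例,“时序上多帧平均”去噪可用如下草图示意,其中窗口大小win为假设参数,同帧多尺度平均此处从略:

```python
import numpy as np

def temporal_average(frames, win=3):
    """时序上多帧平均:每帧取以其为中心、宽度约为 win 的窗口内各帧的均值;
    对零均值高斯噪声而言,平均 k 帧可将噪声方差降为约 1/k。"""
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    n = len(frames)
    for i in range(n):
        lo, hi = max(0, i - win // 2), min(n, i + win // 2 + 1)
        out[i] = frames[lo:hi].mean(axis=0)  # 对时间窗口内的帧逐像素求均值
    return out
```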
为了提高后续识别的效率和准确性,也可以对该图像序列进行其他的预处理,比如缩放、裁剪、锐化、背景模糊等操作。
205、终端计算该图像序列中的帧间差。
其中,用帧间差来检测反射光信号,就需要图像序列中图像之间的二维像素点能够尽量一一对应。因此,可以在检测到用户面部没有剧烈位置变化的情况下,使用帧间对齐方法来更加精细地矫正作帧间差的像素对。即可以在确定检测对象的位置变化程度小于预设变化值时,分别获取该图像序列中邻近帧的像素坐标,然后,对该像素坐标进行变换,以最小化该像素坐标的配准误差,再基于变换结果来计算帧间差,例如,可以如下:
令物体上的同一点在两张邻近帧I和I'的像素坐标分别为p=[x,y,w]ᵀ和p'=[x',y',w']ᵀ,其中w为齐次坐标项,求解3×3变换矩阵M,如下:
[x',y',w']ᵀ=Mp=M[x,y,w]ᵀ
在这里,所采用的变换矩阵M的变换类型为自由度最高的单应性变换,从而可以最小化配准误差。
在最优化求解M的方法上,较常用的方法是均方误差(MSE,Mean Square Error)估计和随机抽样一致算法(RANSAC,Random Sample Consensus)。为了得到更鲁棒的结果,还可以使用单应流算法(homography flow)。
由于即便能求解出最优变换矩阵M,帧间还是有可能无法匹配的像素点,因此,可以筛选出相关性较强的像素点,而忽略掉相关性较弱的像素点,然后,基于筛选出的像素点再来作帧间差计算,从而一方面可以减少计算量,另一方面,可以增强结果,即步骤“基于变换结果来计算帧间差”可以包括:
根据变换结果筛选出相关性符合预设条件的像素点,并根据筛选出的像素点计算帧间差。
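在两帧已按变换矩阵对齐的前提下,“筛选相关性符合预设条件的像素点并据此计算帧间差”这一步可示意如下;其中以局部块的皮尔逊相关系数度量相关性、以corr_thresh作为预设条件,均为示例假设:

```python
import numpy as np

def masked_frame_diff(f1, f2, patch=3, corr_thresh=0.5):
    """对已对齐的两帧灰度图,以局部块的皮尔逊相关系数筛选相关性较强的像素点,
    仅在这些像素点上计算平均绝对帧间差,忽略相关性较弱(无法匹配)的像素点。"""
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    h, w = f1.shape
    r = patch // 2
    mask = np.zeros((h, w), dtype=bool)
    for i in range(r, h - r):
        for j in range(r, w - r):
            a = f1[i - r:i + r + 1, j - r:j + r + 1].ravel()
            b = f2[i - r:i + r + 1, j - r:j + r + 1].ravel()
            sa, sb = a.std(), b.std()
            if sa > 0 and sb > 0:
                corr = ((a - a.mean()) * (b - b.mean())).mean() / (sa * sb)
                mask[i, j] = corr >= corr_thresh  # 相关性符合预设条件才保留
    return float(np.abs(f1 - f2)[mask].mean()) if mask.any() else 0.0
```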
其中,预设变化值和预设条件均可根据实际应用的需求进行设置,在此不再赘述。
需说明的是,若确定检测对象的位置变化程度大于等于预设变化值,则可从图像序列中获取其他的邻近帧来进行计算,或重新获取图像序列,再进行计算。
206、终端根据该帧间差确定该图像序列中人的面部是否存在该投射光线所产生的反射光信号,若存在,则执行步骤207,若不存在,则可以按照预设策略进行操作。
例如,终端可以确定该帧间差是否大于预设阈值,若是,则确定该图像序列中人的面部存在该投射光线所产生的反射光信号,可以执行步骤207,若否,则确定该图像序列中人的面部不存在该投射光线所产生的反射光信号,可以按照预设策略进行操作。
其中,该预设阈值可以根据实际应用的需求而定,而该预设策略也可以根据实际应用的需求而定,比如,可以设置为“结束流程”,或者,可以设置为“生成不存在反射光信号的提示信息”,或者,也可以设置为“确定该检测对象为非活体”,或者,也可以返回执行步骤204,即重新对该检测对象进行图像采集,又或者,还可以 对启动的光源进行检测,以确定该光源是否投射到该检测对象如人的面部上,若确定光源向该检测对象所投射的光线不存在问题,则流程结束、确定该检测对象为非活体、或者重新对该检测对象进行图像采集,而若确定光源向检测对象如人的面部所投射的光线存在问题,比如该光源并没有投射到人的面部,而是投射到其旁边的物体上,或者,该光源并没有投射出光线,则返回执行步骤203,即重新启动光源,并向检测对象投射光线,等等,在此不再赘述。
为了提高检测的准确率,以及减小检测的计算量,还可以使用级联判别模型来进行处理,比如,可以采用全局特征算法或者预设识别模型(如分类器)来对帧间差进行预先处理,以粗略判定反射光信号的发生,使得可以跳过大部分没有反射光信号的普通帧的后续处理,即后续只需要对存在有反射光信号的帧进行处理即可。即,步骤“终端根据该帧间差确定该图像序列中人的面部是否存在该投射光线所产生的反射光信号”可以包括:
通过预设全局特征算法或预设识别模型对该帧间差进行分类分析,若分析结果指示人的面部的帧间变化大于设定值,则确定该图像序列中人的面部存在该投射光线所产生的反射光信号,若分析结果指示人的面部的帧间变化不大于设定值,则确定该图像序列中人的面部不存在该投射光线所产生的反射光信号。
其中,该设定值可以根据实际应用的需求而定,而该“通过预设全局特征算法或预设识别模型对该帧间差进行分类分析”的方式也可以有多种,例如,可以如下:
对该帧间差进行分析,以判断该图像序列中是否存在该投射光线所产生的反射光信号,若不存在该投射光线所产生的反射光信号,则生成指示人的面部的帧间变化不大于设定值的分析结果;若存在该投射光线所产生的反射光信号,则通过预设全局特征算法或预设识别模型判断存在的反射光信息的反射体是否为人的面部,若为人的面部,则生成指示人的面部的帧间变化大于设定值的分析结果,若不是人的面部,则生成指示人的面部的帧间变化不大于设定值的分析结果。
或者,也可以通过预设全局特征算法或预设识别模型对该图像序列中的图像进行分类,以筛选出存在人的面部的帧,得到候选帧,分析该候选帧的帧间差,以判断该人的面部是否存在该投射光线所产生的反射光信号,若不存在该投射光线所产生的反射光信号,则生成指示人的面部的帧间变化不大于设定值的分析结果;若存 在该投射光线所产生的反射光信号,则生成指示人的面部的帧间变化大于设定值的分析结果,等等。
其中,全局特征算法指的是基于全局特征的算法,其中,全局特征可以包括灰度的均值方差、灰度共生矩阵、FFT和DCT等变换后的频谱。
而预设识别模型具体可以为分类器或其他识别模型(如人脸识别模型),以分类器为例,该分类器可以根据实际应用的需求进行设置,比如,若只用于判别是否存在反射光信号,则可采用较为简单的分类器,而若用于判别是否为人的面部等处,则可采用更加复杂的分类器,比如神经网络分类器等来进行处理,在此不再赘述。
需说明的是,除了可以通过计算帧间差来分析图像序列中人的面部是否存在该投射光线所产生的反射光信号之外,还可以通过计算投射光线变化前后的帧差(即不一定是相邻的两帧)来分析图像序列中人的面部是否存在该投射光线所产生的反射光信号,具体可参见前面的实施例,在此不再赘述。
207、终端采用预设识别模型对该图像特征(即反射光信号在该检测对象表面,如人的面部所形成的图像特征)所属对象的类型进行识别,若识别结果指示该图像特征所属对象的类型为活体,则确定该检测对象为活体。
反之,若识别结果指示该图像特征所属对象的类型为非活体,比如为“手机屏幕”等,则可确定该检测对象为非活体。
其中,预设识别模型可以包括分类器或其他的识别模型等,分类器可以包括SVM、神经网络和决策树等。
该预设识别模型可以由多个特征样本训练而成,该特征样本为该反射光信号在已标注类型的对象表面所形成的图像特征。
该识别模型可以由其他的设备建立之后,保存在预设存储空间中,当该终端需要对图像特征所属对象的类型进行识别时,由该终端从该存储空间中直接获取,或者,该识别模型也可以由该终端自行进行建立,比如,终端可以获取多个特征样本,根据该特征样本对预设初始识别模型进行训练,得到预设识别模型,等等,具体可参见前面的实施例,在此不再赘述。
为了更加进一步提高检测的准确率,还可以适当地加入一些互动操作,比如,让用户执行眨眼或张嘴等动作,即在步骤“确定该图像序列中人的面部存在该投射 光线所产生的反射光信号”之后,该活体检测方法还可以包括:
生成指示检测对象(比如人的面部)执行预设动作的提示信息,显示该提示信息,并对该检测对象进行监控,若监控到检测对象执行了该预设动作,才确定该检测对象为活体,否则,若监控到检测对象没有执行该预设动作,则确定该检测对象为非活体。
其中,该预设动作可以根据实际应用的需求进行设置,需说明的是,为了避免繁琐的交互操作,可以对该预设动作的数量和难易程度进行一定限制,比如,只需进行一次简单的交互,如眨眼或张嘴等即可,在此不再赘述。
由上可知,本实施例可以通过在检测界面设置一非检测区域,可以闪现颜色遮罩,其中,该颜色遮罩可以作为光源向检测对象,如人的面部投射光线,这样,当需要进行活体检测时,便可以对该人的面部进行图像采集,然后确定得到的图像序列中人的面部是否存在该投射光线所产生的反射光信号,且该反射光信号在该人的面部所形成图像特征所属对象的类型是否为活体,如果存在且类型为活体,则确定该人的面部为活体;由于该方案无需与用户进行繁琐的交互操作和运算,因此,可以降低对硬件配置的需求,而且,由于该方案进行活体判别的依据是检测对象表面的反射光信号,而真正的活体与伪造的活体(合成图片或视频的载体,比如相片、手机或平板电脑等)的反射光信号是不同的,因此,该方案也可以有效抵挡合成人脸攻击,提高判别的准确性;所以,总而言之,该方案可以在终端有限的硬件配置下,提高活体检测效果,从而提高身份验证的准确性和安全性。
与前一个实施例相同的是,本实施例同样以该活体检测装置具体集成在终端中,光源具体为颜色遮罩,且检测对象的表面具体为人的面部为例进行说明,与前一个实施例不同的是,在本实施例中,将采用预设编码的光线组合来作为该颜色遮罩,以下将进行详细说明。
如图3a所示,一种活体检测方法,具体流程可以如下:
301、终端接收活体检测请求。
例如,终端具体可以接收用户触发的活体检测请求,或者,也可以接收其他设备发送的活体检测请求,等等。
比如,以用户触发为例,当用户启动该活体检测功能,比如点击活体检测的启 动键时,便可以触发生成该活体检测请求,从而使得终端接收到该活体检测请求。
302、终端获取预设编码序列,该编码序列包括多个编码。
其中,该预设编码序列可以是随机生成的,也可以根据实际应用的需求进行设置。
例如,该编码序列可以是数字序列,比如:0,-1,1,0,……,等等。
303、终端根据预设编码算法,按照该编码序列中编码的顺序依次确定各个编码对应的颜色,得到颜色序列。
其中,该预设编码算法可以反映编码序列中的各个编码与各种颜色之间的对应关系,该对应关系具体可以根据实际应用的需求而定,比如,可以令红色代表数字-1,绿色代表0,蓝色代表1,等等。
例如,以令红色代表数字-1,绿色代表0,蓝色代表1为例,若在步骤302中,获取到的编码序列为“0,-1,1,0”,则此时,终端可以根据各个编码与各种颜色之间的对应关系,按照该编码序列中编码的顺序依次确定各个编码对应的颜色,得到颜色序列“绿色,红色,蓝色,绿色”。
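按照上述对应关系(红色代表-1、绿色代表0、蓝色代表1),由编码序列确定颜色序列的过程可示意如下,其中英文颜色名仅为示例写法:

```python
# 预设编码算法中编码与颜色的对应关系(与正文示例一致)
CODE_TO_COLOR = {-1: "red", 0: "green", 1: "blue"}

def encode_to_colors(code_seq):
    """按编码序列中编码的顺序依次确定各个编码对应的颜色,得到颜色序列。"""
    return [CODE_TO_COLOR[c] for c in code_seq]
```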
304、终端基于该颜色序列生成颜色遮罩,使得该颜色遮罩所投射出的光线按照该颜色序列所指示的颜色进行变化。
例如,若在步骤303中,得到颜色序列“绿色,红色,蓝色,绿色”,则终端可以生成一颜色遮罩,使得该颜色遮罩所投射出的光线可以按照“绿色,红色,蓝色,绿色”的顺序进行变化,参见图3b和图3c。
需说明的是,在投射时,各颜色的显示时长、以及各颜色之间切换时的等待时间间隔可以根据实际应用的需求进行设置,比如,如图3b所示,可以让每一种颜色的显示时长为1秒,而等待时间间隔设置为0秒,等等,即按照图3b中时间轴所指的方向,该投射光线具体可以表现为“绿色—>红色—>蓝色—>绿色”,其中,从一种颜色切换到另一种颜色的时刻称为颜色突变点。
该等待时间间隔也可以不为0,比如,如图3c所示,可以让每一种颜色的显示时长为1秒,而等待时间间隔设置为0.5秒,等等,其中,在等待时间间隔期间,可以不投射光线(即无光线),即按照图3c中时间轴所指的方向,该投射光线具体可以表现为“绿色—>无光线—>红色—>无光线—>蓝色—>无光线—>绿色”。
为了进一步提高安全性,还可以进一步复杂化光线的变化规则,比如,将每一种颜色的显示时长、以及不同颜色之间切换时的等待时间间隔也设置为不一致的数值,比如,绿色的显示时长可以为3秒,而红色的显示时长为2秒,蓝色为4秒,绿色与红色之间切换时的等待时间间隔为1秒,而红色与蓝色之间切换时的等待时间间隔为1.5秒,以此类推,等等。
305、终端对该检测对象进行图像采集,以得到图像序列。
例如,终端具体可以调用终端的摄像头,实时对检测对象进行拍摄,得到图像序列,并对拍摄得到的图像序列进行显示,比如,将拍摄得到的图像序列在该检测区域中进行显示,等等。
比如,以颜色遮罩所投射出的光线为如图3b所示的颜色序列(即绿色—>红色—>蓝色—>绿色),且检测对象为用户的面部为例,那么在这四秒的视频中,第一秒内的图像帧1对应的面部上有绿色的屏幕反射光,第二秒内的图像帧2对应的面部上有红色的屏幕反射光,第三秒内的图像帧3对应的面部上有蓝色的屏幕反射光,第四秒的图像帧4对应的面部上有绿色的屏幕反射光。所有的图像帧,便是原始的带有反射光信号的数据,即为本申请实施例的图像序列。
为了减少噪声所造成的数值浮动对信号的影响,在得到图像序列后,还可以对该图像序列进行去噪声处理。例如,以噪声模型为高斯噪声为例,具体可以使用时序上多帧平均和/或同帧多尺度平均来尽可能地减小噪声,在此不再赘述。
306、终端在确定检测对象的位置变化程度小于预设变化值时,分别从图像序列中获取投射光线变化前后所对应的帧的色度/亮度。
例如,还是以颜色变化为“绿色—>红色—>蓝色—>绿色”为例,当终端在确定检测对象的位置变化程度小于预设变化值时,可以分别从图像序列中获取投射光线由绿色转换为红色时所对应的帧的色度/亮度、由红色转换为蓝色时所对应的帧的色度/亮度、由蓝色转换为绿色时所对应的帧的色度/亮度。
比如,若该图像序列依次包括图像帧1、图像帧2、图像帧3、以及图像帧4。其中,图像帧1对应的面部上有绿色的屏幕反射光,图像帧2对应的面部上有红色的屏幕反射光,图像帧3对应的面部上有蓝色的屏幕反射光,图像帧4对应的面部上有绿色的屏幕反射光,则此时,可以分别获取图像帧1、图像帧2、图像帧3、以及图像帧 4的色度/亮度。
又比如,若该图像序列依次包括图像帧1、图像帧2、图像帧3、图像帧4、图像帧5、图像帧6、图像帧7、图像帧8、图像帧9、图像帧10、图像帧11、以及图像帧12。其中,图像帧1、图像帧2和图像帧3对应的面部上有绿色的屏幕反射光,图像帧4、图像帧5和图像帧6对应的面部上有红色的屏幕反射光,图像帧7、图像帧8和图像帧9对应的面部上有蓝色的屏幕反射光,图像帧10、图像帧11和图像帧12对应的面部上有绿色的屏幕反射光,则此时,可以分别获取图像帧3、图像帧4、图像帧6、图像帧7、图像帧9、以及图像帧10的色度/亮度,其中,图像帧3和图像帧4为颜色由绿变为红时前后的两个帧,图像帧6和图像帧7为颜色由红变为蓝时前后的两个帧,图像帧9和图像帧10为颜色由蓝变为绿时前后的两个帧。
需说明的是,若确定检测对象的位置变化程度大于等于预设变化值,则可从图像序列中获取其他的邻近帧或其他投射光线变化前后所对应的帧来进行计算,或重新获取图像序列。
307、终端根据获取到的色度/亮度计算投射光线变化前后所对应的帧之间的色度变化相对值或亮度变化相对值。
例如,终端具体可以通过预设回归函数对该色度/亮度进行计算,得到投射光线变化前后所对应的帧之间的色度/亮度变化相对值(即色度变化相对值或亮度变化相对值),等等。
其中,该回归函数可以根据实际应用的需求进行设置,比如,具体可以是回归神经网络,等等。
比如,以图像帧3和图像帧4为颜色由绿变为红时前后的两个帧,图像帧6和图像帧7为颜色由红变为蓝时前后的两个帧,图像帧9和图像帧10为颜色由蓝变为绿时前后的两个帧,且以计算色度变化相对值为例,则可以计算出如下色度变化相对值:
通过预设回归函数,比如回归神经网络计算图像帧3的色度和图像帧4的色度的差值,得到图像帧3和图像帧4的色度变化相对值;
通过预设回归函数,比如回归神经网络计算图像帧6的色度和图像帧7的色度的差值,得到图像帧6和图像帧7的色度变化相对值;
通过预设回归函数,比如回归神经网络计算图像帧9的色度和图像帧10的色度 的差值,得到图像帧9和图像帧10的色度变化相对值。
需说明的是,亮度变化相对值的计算方式与此类似,在此不再赘述。
其中,这些色度变化相对值或亮度变化相对值相当于信号强度的一种度量△I,在上面的例子中,图像帧3与图像帧4的变化相对值(色度/亮度变化相对值)即为△I₃₄,图像帧6与图像帧7的变化相对值(色度/亮度变化相对值)为△I₆₇,图像帧9与图像帧10的变化相对值(色度/亮度变化相对值)为△I₉₁₀。
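上述变化相对值△I的计算可示意如下;此处用“两帧平均色度之差”充当最简单的回归函数(实际实现可为回归神经网络),帧号从1开始计数,与正文一致:

```python
import numpy as np

def chroma_delta(frame_before, frame_after):
    """以两帧平均色度之差作为最简单的回归函数示意,得到变化相对值△I。"""
    return float(np.mean(frame_after) - np.mean(frame_before))

def relative_changes(frames, transitions):
    """transitions 为若干 (变化前帧号, 变化后帧号) 对,帧号从 1 开始计数。"""
    return [chroma_delta(frames[i - 1], frames[j - 1]) for i, j in transitions]
```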
308、终端根据该色度/亮度变化相对值(即投射光线变化前后所对应的帧之间的色度/亮度变化相对值),按照预设解码算法对该图像序列进行解码,得到解码后序列。
例如,终端可以采用预设解码算法,依次对该图像序列中的色度/亮度变化相对值(即投射光线变化前后所对应的帧之间的色度/亮度变化相对值)进行计算,得到各投射光线变化前后所对应的帧的色度/亮度绝对值,将得到的色度/亮度绝对值作为解码后序列,或按照预设策略对得到的色度/亮度绝对值进行转换,得到解码后序列。
其中,预设解码算法与编码算法相匹配,具体可根据编码算法而定;而预设策略也可以根据实际应用的需求进行设置,在此不再赘述。
比如,若在步骤307中,确定图像帧3与图像帧4的变化相对值(色度/亮度变化相对值)为△I₃₄,图像帧6与图像帧7的变化相对值(色度/亮度变化相对值)为△I₆₇,图像帧9与图像帧10的变化相对值(色度/亮度变化相对值)为△I₉₁₀,则此时可以根据这些变化相对值,得出所有帧的反光信号的绝对强度值(如色度绝对值或亮度绝对值),具体如下:
假设该空间内的原点和最小单位长度已给定,比如令图像帧1的绝对强度值I₁=0,而各个变化相对值△I₃₄=-1、△I₆₇=2、△I₉₁₀=-1,则由于图像帧1、图像帧2和图像帧3均为绿色,因此,可知图像帧1、图像帧2和图像帧3的反光信号的绝对强度值相同,即I₃=I₂=I₁=0;而由于图像帧4、图像帧5和图像帧6均为红色,因此,可知图像帧4、图像帧5和图像帧6的反光信号的绝对强度值相同;同理,由于图像帧7、图像帧8和图像帧9均为蓝色,图像帧10、图像帧11和图像帧12均为绿色,因此,图像帧7、图像帧8和图像帧9的反光信号的绝对强度值相同,图像帧10、图像帧11和图像帧12的反光信号的绝对强度值相同,据此,可以通过如下公式计算出各个图像帧的绝对强度值:
I₄=I₅=I₆=I₃+△I₃₄=0-1=-1;
I₇=I₈=I₉=I₆+△I₆₇=-1+2=1;
I₁₀=I₁₁=I₁₂=I₉+△I₉₁₀=1-1=0;
至此,便可以解码出该数字序列,即得到解码后序列:0,-1,1,0(分别代表绿,红,蓝,绿)。
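由各变化相对值△I依次累加恢复绝对强度、进而得到解码后序列的过程,可用如下草图复现正文中的例子(原点I₁=0已给定):

```python
def decode_sequence(delta_list, origin=0):
    """由原点强度 origin(如 I1=0)出发,依次累加各变化相对值△I,
    得到每个颜色段反光信号的绝对强度,即解码后序列。"""
    decoded = [origin]
    for d in delta_list:
        decoded.append(decoded[-1] + d)  # 当前段强度 = 上一段强度 + 相对变化
    return decoded
```

例如,对△I=[-1, 2, -1]且I₁=0,即可解码出正文中的数字序列0, -1, 1, 0(分别代表绿、红、蓝、绿)。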
309、终端确定该解码后序列与编码序列是否匹配,若匹配,则确定该图像序列中检测对象的表面存在所述投射光线所产生的反射光信号,可以执行步骤310;否则,若解码后序列与编码序列不匹配,则确定所述图像序列中检测对象的表面不存在该投射光线所产生的反射光信号,此时可以按照预设策略进行操作。
例如,终端可以确定该解码后序列与编码序列是否一致,若一致,则确定该图像序列中检测对象的表面存在所述投射光线所产生的反射光信号,可以执行步骤310;否则,若解码后序列与编码序列不一致,则确定所述图像序列中检测对象的表面不存在该投射光线所产生的反射光信号,进而按照预设策略进行操作。
比如,若在步骤308中,得到解码后序列“0,-1,1,0”,与编码序列为“0,-1,1,0”一致,因此,可以确定该图像序列中检测对象的表面存在该投射光线所产生的反射光信号,等等。
其中,该预设策略可以根据实际应用的需求而定,具体可参见上一个实施例中的步骤206,在此不再赘述。
310、终端采用预设识别模型对该图像特征(即反射光信号在该检测对象表面,如人的面部所形成的图像特征)所属对象的类型进行识别,若识别结果指示该图像特征所属对象的类型为活体,则确定该检测对象为活体。
反之,若识别结果指示该图像特征所属对象的类型为非活体,比如为“手机屏幕”等,则可确定该检测对象为非活体。
其中,预设识别模型可以包括分类器或其他的识别模型等,分类器可以包括SVM、神经网络和决策树等。该预设识别模型可以由多个特征样本训练而成,该特征样本为该反射光信号在已标注类型的对象表面所形成的图像特征。
该识别模型可以由其他的设备建立,并提供给该终端进行使用,也可以由该终 端自行进行建立,具体可参见前面的实施例,在此不再赘述。
为了更加进一步提高检测的准确率,还可以适当地加入一些互动操作,比如,让用户执行眨眼或张嘴等动作,即在“确定该解码后序列与编码序列匹配”之后,该活体检测方法还可以包括:
终端生成指示检测对象(比如人的面部)执行预设动作的提示信息,显示该提示信息,并对该检测对象进行监控,若监控到检测对象执行了该预设动作,才确定该检测对象为活体,否则,若监控到检测对象没有执行该预设动作,则确定该检测对象为非活体。
其中,该预设动作可以根据实际应用的需求进行设置,需说明的是,为了避免繁琐的交互操作,可以对该预设动作的数量和难易程度进行一定限制,比如,只需进行一次简单的交互,如眨眼或张嘴等即可,在此不再赘述。
由于采集到的带有反射光信号的每一帧图像都记录有相应的时间戳,因此,在确定解码后序列与编码序列匹配之后,还可以进一步确定这些时间戳是否能够与颜色遮罩变换光线的时间一一对应上,若可以对应上,才确定该图像序列中检测对象的表面存在所述投射光线所产生的反射光信号;否则,若不能对应上,则确定反射光信号与预设光信号样本不匹配。也就是说,若攻击者想用人脸合成渲染攻击,就不仅仅要在编码的颜色序列顺序上能够匹配,还要在绝对时间点上不能有偏移(因为实时合成的话,合成渲染的运算也需要至少毫秒级的时间),其攻击难度大大提高,因此,可以进一步提高安全性。
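时间戳与颜色遮罩变换光线时间“一一对应”的校验可示意如下;其中容差tol为假设参数,用于容纳正常的采集延迟,而实时合成渲染引入的更大偏移将被判定为不匹配:

```python
def timestamps_aligned(frame_ts, switch_ts, tol=0.05):
    """逐一比较带反射光信号的各帧时间戳与颜色突变点时间(单位:秒),
    任意一对的偏移超过容差 tol 即判定为不匹配。"""
    if len(frame_ts) != len(switch_ts):
        return False
    return all(abs(a - b) <= tol for a, b in zip(frame_ts, switch_ts))
```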
由上可知,本实施例可以通过编码序列生成一颜色遮罩,其中,该颜色遮罩可以作为光源向检测对象,如人的面部投射光线,这样,当需要进行活体检测时,便可以对该人的面部进行监控,然后对监控得到的图像序列中人的面部上的反射光信号进行解码,以确定是否与编码序列相匹配,若匹配,则确定该人的面部为活体;由于该方案无需与用户进行繁琐的交互操作和运算,因此,可以降低对硬件配置的需求,而且,由于该方案进行活体判别的依据是检测对象表面的反射光信号,而真正的活体与伪造的活体(合成图片或视频的载体,比如相片、手机或平板电脑等)的反射光信号是不同的,因此,该方案也可以有效抵挡合成人脸攻击,提高判别的准确性。
进一步的,由于该投射光线是根据随机的编码序列生成的,且后续在判别时需要对反射光信号进行解码,因此,即便攻击者根据当前的颜色序列合成了一个对应的反光视频,也无法在下一次中使用,所以,相对于上一个实施例的方案而言,可以进一步改善活体检测效果,进而提高身份验证的准确性和安全性。
为了更好地实施以上方法,本申请实施例还提供一种活体检测装置,简称活体检测装置,如图4a所示,该活体检测装置包括接收单元401、启动单元402、采集单元403和检测单元404,如下:
(1)接收单元401;
接收单元401,用于接收活体检测请求。
例如,接收单元401,具体可以用于接收用户触发的活体检测请求,或者,也可以接收其他设备发送的活体检测请求,等等。
(2)启动单元402;
启动单元402,用于根据该活体检测请求启动光源,并向检测对象投射光线。
比如,该启动单元402,具体可以用于根据该活体检测请求调用相应的活体检测进程,根据该活体检测进程启动光源,等等。
其中,该光源可以根据实际应用的需求进行设置,比如,可以通过调节终端屏幕的亮度来实现,或者,也可以利用闪光灯或红外发射器等其他发光部件或外置设备来实现,或者,还可以通过在显示界面上设置一颜色遮罩来实现,等等,即启动单元402具体可以执行如下任意一种操作:
(1)启动单元402,具体可以用于根据该活体检测请求调整屏幕亮度,使得该屏幕作为光源向检测对象投射光线。
(2)启动单元402,具体可以用于根据该活体检测请求开启预设发光部件,使得该发光部件作为光源向检测对象投射光线。
其中,该发光部件可以包括闪光灯或红外发射器等部件。
(3)启动单元402,具体可以用于根据该活体检测请求启动预设的颜色遮罩,该颜色遮罩作为光源向检测对象投射光线。
例如,启动单元402,具体可以用于根据该活体检测请求启动终端上的颜色遮罩,比如,可以在终端外壳边缘设置可以闪现颜色遮罩的部件,然后,在接收到该 活体检测请求后,便可以启动该部件,以闪现颜色遮罩;或者,也可以通过显示检测界面来闪现颜色遮罩,如下:
根据该活体检测请求启动检测界面,该检测界面可以闪现颜色遮罩,该颜色遮罩作为光源向检测对象投射光线。
其中,该闪现颜色遮罩的区域可以根据实际应用的需求而定,例如,该检测界面可以包括检测区域和非检测区域,检测区域主要用于对监控情况进行显示,而该非检测区域可以闪现颜色遮罩,该颜色遮罩作为光源向检测对象投射光线,等等。
其中,该非检测区域中闪现颜色遮罩的区域可以根据实际应用的需求而定,可以是整个非检测区域均设置有颜色遮罩,也可以是在该非检测区域的某部分区域或某若干个部分区域设置有颜色遮罩,等等。该颜色遮罩的颜色和透明度等参数可以根据实际应用的需求进行设置,该颜色遮罩可以由系统预先进行设定,并在启动检测界面时直接调取,或者,也可以在接收到活体检测请求之后自动生成,即如图4b所示,该活体检测装置还可以包括生成单元405,如下:
该生成单元405,可以用于生成颜色遮罩,使得该颜色遮罩所投射出的光线能够按照预设规律进行变化。
为了便于后续可以更好地识别出光线的变化,该生成单元405还可以用于最大化该光线的变化强度。
其中,该预设规律可以根据实际应用的需求而定,而最大化该光线的变化强度的方式也可以有多种,例如,对于同颜色的光线,可以通过调整变化前后的屏幕亮度来最大化光线的变化强度,比如,让变化前后的屏幕亮度设置为最大和最小,而对于不同颜色的光线,则可以通过调整变化前后的色差来最大化光线的变化强度,等等。即:
生成单元405,具体可以用于对于同颜色的光线,获取预设的屏幕亮度调整参数,根据该屏幕亮度调整参数调整该同颜色的光线在变化前后的屏幕亮度,以调整光线的变化强度;对于不同颜色的光线,获取预设的色差调整参数,根据该色差调整参数调整该不同颜色的光线在变化前后的色差,以调整光线的变化强度。
其中,该光线的变化强度的调整幅度可以根据实际应用的需求进行设置,可以包括大幅调整,比如最大化光线的变化强度,也可以包括小幅调整,等等,在此不 再赘述。
为了后续可以更好地从图像帧间差中检测出反射光信号,除了可以调整该光线的变化强度之外,还可以在颜色的选择上,尽量选择对信号分析最鲁棒的颜色空间,具体可参见前面的实施例,在此不再赘述。
为了提高身份验证的准确性和安全性,还可以采用预设编码的光线组合来作为该颜色遮罩,即:
生成单元405,具体可以用于:获取预设编码序列,该编码序列包括多个编码,根据预设编码算法,按照该编码序列中编码的顺序依次确定各个编码对应的颜色,得到颜色序列,基于该颜色序列生成颜色遮罩,使得该颜色遮罩所投射出的光线按照该颜色序列所指示的颜色进行变化。
其中,该预设编码序列可以是随机生成的,也可以根据实际应用的需求进行设置,而该预设编码算法也可以根据实际应用的需求而定。该编码算法,可以反映编码序列中的各个编码与各种颜色之间的对应关系,比如,可以令红色代表数字-1,绿色代表0,蓝色代表1,等等,则若获取到的编码序列为“0,-1,1,0”,那么,可以得到颜色序列“绿色,红色,蓝色,绿色”,从而生成一颜色遮罩,使得该颜色遮罩所投射出的光线可以按照“绿色,红色,蓝色,绿色”的顺序进行变化。
需说明的是,在投射时,各颜色的显示时长、以及各颜色之间切换时的等待时间间隔可以根据实际应用的需求进行设置;此外,在等待时间间隔期间,可以不投射光线,或者,也可以投射预定的光线,具体可参见前面的方法实施例,在此不再赘述。
(3)采集单元403;
采集单元403,用于对该检测对象进行图像采集,以得到图像序列。
例如,采集单元403,具体可以用于调用摄像装置,实时对检测对象进行拍摄,得到图像序列,并将拍摄得到的图像序列在该检测区域中进行显示。
其中,该摄像装置包括但不限于终端自带的摄像头、网络摄像头、以及监控摄像头、以及其他可以采集图像的设备等。
为了减少噪声所造成的数值浮动对信号的影响,在得到图像序列后,采集单元403还可以对该图像序列进行去噪声处理,详见前面的实施例,在此不再赘述。
采集单元403还可以对该图像序列进行其他的预处理,比如缩放、裁剪、锐化、背景模糊等操作,以提高后续识别的效率和准确性。
(4)检测单元404;
检测单元404,用于识别出该图像序列中检测对象的表面存在该投射光线所产生的反射光信号,该反射光信号在所述检测对象表面形成图像特征,采用预设识别模型对该图像特征所属对象的类型进行识别,若识别结果指示该图像特征所属对象的类型为活体,则确定该检测对象为活体。
该检测单元404,还可以用于确定该图像序列中检测对象的表面不存在该投射光线所产生的反射光信号时,按照预设策略进行操作。
其中,该预设策略具体可以根据实际应用的需求进行设置,比如,可以确定该检测对象为非活体,又或者,也可以触发采集单元403重新对该检测对象进行图像采集,又或者,还可以触发启动单元402重新启动光源,并向检测对象投射光线,等等,具体可参见前面的方法实施例,在此不再赘述。
该检测单元404,还可以用于在识别结果指示该图像特征所属对象的类型为非活体时,确定该检测对象为非活体。
例如,该检测单元404可以包括计算子单元、判断子单元和识别子单元,如下:
计算子单元,可以用于回归分析该图像序列中帧的变化,得到回归结果。
例如,计算子单元,具体可以用于回归分析该图像序列中每帧的色度/亮度的数值表达,该数值表达可以为数值序列,根据该数值表达如数值序列来判断该图像序列中帧的色度/亮度的变化,得到回归结果。
或者,计算子单元,具体可以用于回归分析该图像序列中帧之间的变化,得到回归结果,等等。
其中,回归分析该图像序列中每帧的色度/亮度的数值表达的具体方式可参见前面的方法实施例,而该图像序列中帧之间的变化可以通过计算该图像序列中帧之间的差分来得到,该帧之间的差分可以是帧间差,也可以是帧差,帧间差指的是相邻两帧之间的差,而帧差为投射光线变化前后所对应的帧之间的差。
比如,该计算子单元,具体可以用于确定检测对象的位置变化程度小于预设变化值时,分别获取该图像序列中邻近帧的像素坐标,基于该像素坐标计算帧间差;比如,可以对该像素坐标进行变换,以最小化该像素坐标的配准误差,然后,根据 变换结果筛选出相关性符合预设条件的像素点,并根据筛选出的像素点计算帧间差,等等。
又比如,该计算子单元,具体可以用于确定检测对象的位置变化程度小于预设变化值时,分别从该图像序列中获取投射光线变化前后所对应的帧的像素坐标,基于该像素坐标计算帧差;比如,可以对该像素坐标进行变换,以最小化该像素坐标的配准误差,然后,根据变换结果筛选出相关性符合预设条件的像素点,并根据筛选出的像素点计算帧差,等等。
除了上述计算帧差的方式之外,还可以采用其他的方式,比如,可以在某一个颜色空间的通道上,或任意个能描述色度或亮度变化的维度上,去分析两帧之间的色度变化的相对值或亮度变化的相对值即可,即:
计算子单元,具体可以用于在确定检测对象的位置变化程度小于预设变化值时,分别从图像序列中获取投射光线变化前后所对应的帧的色度/亮度,根据获取到的色度/亮度计算投射光线变化前后所对应的帧之间的色度变化相对值或亮度变化相对值,将色度/亮度变化相对值作为投射光线变化前后所对应的帧之间的帧差。
比如,计算子单元,具体可以用于通过预设回归函数对该色度/亮度进行计算,得到投射光线变化前后所对应的帧之间的色度/亮度变化相对值(即色度变化相对值或亮度变化相对值),等等。
其中,该回归函数可以根据实际应用的需求进行设置,比如,具体可以是回归神经网络等。
其中,该预设变化值和预设条件均可以根据实际应用的需求进行设置。
判断子单元,可以用于根据该回归结果确定该图像序列中检测对象的表面是否存在该投射光线所产生的反射光信号,该反射光信号在所述检测对象表面形成图像特征。
识别子单元,可以用于在判断子单元确定该图像序列中检测对象的表面存在该投射光线所产生的反射光信号时,采用预设识别模型对该图像特征所属对象的类型进行识别,若识别结果指示该图像特征所属对象的类型为活体,则确定该检测对象为活体。
该识别子单元,还可以用于在判断子单元确定不存在该投射光线所产生的反射光信号时,按照预设策略进行操作,具体可参见检测单元404中关于预设策略的描述, 在此不再赘述。
该识别子单元,还可以用于在识别结果指示该图像特征所属对象的类型为非活体时,确定该检测对象为非活体。
其中,该预设识别模型可以由多个特征样本训练而成,该特征样本为该反射光信号在已标注类型的对象表面所形成的图像特征。该识别模型可以由其他的设备进行建立,并提供给该活体检测装置,也可以由该活体检测装置自行进行建立,即该活体检测装置还可以包括模型建立单元,如下:
模型建立单元,用于获取多个特征样本,根据该特征样本对预设初始识别模型进行训练,得到预设识别模型。
其中,根据该回归结果确定该图像序列中检测对象的表面是否存在该投射光线所产生的反射光信号的方式可以有多种,例如,可以采用如下任意一种方式:
第一种方式:
判断子单元,具体可以用于确定该回归结果是否大于预设阈值,若是,则确定该图像序列中检测对象的表面存在该投射光线所产生的反射光信号;若否,则确定该图像序列中检测对象的表面不存在该投射光线所产生的反射光信号。
第二种方式:
该判断子单元,具体可以用于通过预设全局特征算法或预设识别模型对该回归结果进行分类分析,若分析结果指示检测对象的表面的帧间变化大于设定值,则确定该图像序列中检测对象的表面存在该投射光线所产生的反射光信号;若分析结果指示检测对象的表面的帧间变化不大于设定值,则确定该图像序列中检测对象的表面不存在该投射光线所产生的反射光信号。
其中,该设定值可以根据实际应用的需求而定,而该“通过预设全局特征算法或预设识别模型对该回归结果进行分类分析”的方式也可以有多种,比如,可以如下:
该判断子单元,具体可以用于对该回归结果进行分析,以判断该图像序列中是否存在该投射光线所产生的反射光信号,若不存在该投射光线所产生的反射光信号,则生成指示检测对象的表面的帧间变化不大于设定值的分析结果;若存在该投射光线所产生的反射光信号,则通过预设全局特征算法或预设识别模型判断存在的反射光信息的反射体是否为该检测对象,若为该检测对象,则生成指示检测对象的表面 的帧间变化大于设定值的分析结果,若不是该检测对象,则生成指示检测对象的表面的帧间变化不大于设定值的分析结果。
或者,该判断子单元,具体可以用于通过预设全局特征算法或预设识别模型对该图像序列中的图像进行分类,以筛选出存在该检测对象的帧,得到候选帧,分析该候选帧的帧间差,以判断该检测对象是否存在该投射光线所产生的反射光信号,若不存在该投射光线所产生的反射光信号,则生成指示检测对象的表面的帧间变化不大于设定值的分析结果;若存在该投射光线所产生的反射光信号,则生成指示检测对象的表面的帧间变化大于设定值的分析结果。
其中,全局特征算法指的是基于全局特征的算法,其中,全局特征可以包括灰度的均值方差、灰度共生矩阵、FFT和DCT等变换后的频谱。
若颜色遮罩为根据预设编码序列而生成的,则可以采用第三种方式,即通过对光信号进行解码,来确定该图像序列中检测对象的表面是否存在该投射光线所产生的反射光信号,具体如下:
第三种方式:
判断子单元,具体可以用于根据该回归结果,按照预设解码算法对该图像序列进行解码,得到解码后序列,确定该解码后序列与编码序列是否匹配,若匹配,则确定该图像序列中检测对象的表面存在该投射光线所产生的反射光信号;若解码后序列与编码序列不匹配,则确定该图像序列中检测对象的表面不存在该投射光线所产生的反射光信号。
比如,若回归结果为投射光线变化前后所对应的帧之间的色度/亮度变化相对值,则此时,判断子单元可以采用预设解码算法,依次对该图像序列中的色度/亮度变化相对值(即投射光线变化前后所对应的帧之间的色度/亮度变化相对值)进行计算,得到各投射光线变化前后所对应的帧的色度/亮度绝对值,然后,将得到的色度/亮度绝对值作为解码后序列,或按照预设策略对得到的色度/亮度绝对值进行转换,得到解码后序列。
其中,预设解码算法与编码算法相匹配,具体可根据编码算法而定;而预设策略也可以根据实际应用的需求进行设置,在此不再赘述。
确定该解码后序列与编码序列是否匹配的方式也可以有多种,比如,判断子单元可以确定该解码后序列与编码序列是否一致,若一致,则确定该解码后序列与编码序列匹配,若不一致,则确定该解码后序列与编码序列不匹配;或者,判断子单元也可以确定该解码后序列与编码序列之间的关系是否符合预设对应关系,若符合,则确定该解码后序列与编码序列匹配,若不符合,则确定该解码后序列与编码序列不匹配,等等。其中,该预设对应关系可以根据实际应用的需求进行设置。
具体实施时,以上各个单元可以作为独立的实体来实现,也可以进行任意组合,作为同一或若干个实体来实现,以上各个单元的具体实施可参见前面的方法实施例,在此不再赘述。
该活体检测装置具体可以集成在终端等设备中,该终端具体可以为手机、平板电脑、笔记本电脑或PC等设备。
由上可知,本实施例的活体检测装置在需要进行活体检测时,可以由启动单元402启动光源向检测对象投射光线,并由采集单元403对该检测对象进行图像采集,然后由检测单元404在识别出采集得到的图像序列中检测对象的表面存在该投射光线所产生的反射光信号时,采用预设识别模型对该反射光信号在该检测对象表面形成的图像特征所属对象的类型进行识别,若识别结果指示该图像特征所属对象的类型为活体,则确定该检测对象为活体;由于该方案无需与用户进行繁琐的交互操作和运算,因此,可以降低对硬件配置的需求,而且,由于该方案进行活体判别的依据是检测对象表面的反射光信号,而真正的活体与伪造的活体(合成图片或视频的载体,比如相片、手机或平板电脑等)的反射光信号是不同的,因此,该方案也可以有效抵挡合成人脸攻击,提高判别的准确性。
进一步的,该活体检测装置的生成单元405还可以根据随机的编码序列来生成该投射光线,并由检测单元404通过解码该反射光信号来进行判别,因此,即便攻击者根据当前的颜色序列合成了一个对应的反光视频,也无法在下一次中使用,所以,可以使得其安全性得到大大提高。总而言之,该方案可以大大提高活体检测效果,有利于提高身份验证的准确性和安全性。
相应的,本申请实施例还提供一种终端,如图5所示,该终端可以包括射频(RF,Radio Frequency)电路501、包括有一个或一个以上计算机可读存储介质的存储器502、输入单元503、显示单元504、传感器505、音频电路506、无线保真(WiFi,Wireless Fidelity)模块507、包括有一个或者一个以上处理核心的处理器508、以及电 源509等部件。本领域技术人员可以理解,图5中示出的终端结构并不构成对终端的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。其中:
RF电路501可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,交由一个或者一个以上处理器508处理;另外,将涉及上行的数据发送给基站。通常,RF电路501包括但不限于天线、至少一个放大器、调谐器、一个或多个振荡器、用户身份模块(SIM,Subscriber Identity Module)卡、收发信机、耦合器、低噪声放大器(LNA,Low Noise Amplifier)、双工器等。此外,RF电路501还可以通过无线通信与网络和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(GSM,Global System of Mobile communication)、通用分组无线服务(GPRS,General Packet Radio Service)、码分多址(CDMA,Code Division Multiple Access)、宽带码分多址(WCDMA,Wideband Code Division Multiple Access)、长期演进(LTE,Long Term Evolution)、电子邮件、短消息服务(SMS,Short Messaging Service)等。
存储器502可用于存储软件程序以及模块,处理器508通过运行存储在存储器502的软件程序以及模块,从而执行各种功能应用以及数据处理。存储器502可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据终端的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器502可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地,存储器502还可以包括存储器控制器,以提供处理器508和输入单元503对存储器502的访问。
输入单元503可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。具体地,在一个具体的实施例中,输入单元503可包括触敏表面以及其他输入设备。触敏表面,也称为触摸显示屏或者触控板,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触敏表面上或在触敏表面附近的操作),并根据预先设定的程式驱动相应的连接装置。触敏表面可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号, 将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器508,并能接收处理器508发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触敏表面。除了触敏表面,输入单元503还可以包括其他输入设备。具体地,其他输入设备可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元504可用于显示由用户输入的信息或提供给用户的信息以及终端的各种图形用户接口,这些图形用户接口可以由图形、文本、图标、视频和其任意组合来构成。显示单元504可包括显示面板,可以采用液晶显示器(LCD,Liquid Crystal Display)、有机发光二极管(OLED,Organic Light-Emitting Diode)等形式来配置显示面板。进一步的,触敏表面可覆盖显示面板,当触敏表面检测到在其上或附近的触摸操作后,传送给处理器508以确定触摸事件的类型,随后处理器508根据触摸事件的类型在显示面板上提供相应的视觉输出。虽然在图5中,触敏表面与显示面板是作为两个独立的部件来实现输入和输出功能,但是在某些实施例中,可以将触敏表面与显示面板集成而实现输入和输出功能。
终端还可包括至少一种传感器505,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板的亮度,接近传感器可在终端移动到耳边时,关闭显示面板和/或背光。作为运动传感器的一种,重力加速度传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于终端还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路506、扬声器、传声器可提供用户与终端之间的音频接口。音频电路506可将接收到的音频数据转换为电信号,传输到扬声器,由扬声器转换为声音信号输出;另一方面,传声器将收集的声音信号转换为电信号,由音频电路506接收后转换为音频数据,再将音频数据输出至处理器508处理后,经RF电路501以发送给比如另一终端,或者将音频数据输出至存储器502以便进一步处理。音频电路506还可能包括耳塞插孔,以提供外设耳机与终端的通信。
WiFi属于短距离无线传输技术,终端通过WiFi模块507可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图5示出了WiFi模块507,但是可以理解的是,其并不属于终端的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。
处理器508是终端的控制中心,利用各种接口和线路连接整个手机的各个部分,通过运行或执行存储在存储器502内的软件程序和/或模块,以及调用存储在存储器502内的数据,执行终端的各种功能和处理数据,从而对手机进行整体监控。处理器508可包括一个或多个处理核心;优选的,处理器508可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器508中。
终端还包括给各个部件供电的电源509(比如电池),优选的,电源可以通过电源管理系统与处理器508逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。电源509还可以包括一个或一个以上的直流或交流电源、再充电系统、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。
尽管未示出,终端还可以包括摄像头、蓝牙模块等,在此不再赘述。具体在本实施例中,终端中的处理器508会按照如下的指令,将一个或一个以上的应用程序的进程对应的可执行文件加载到存储器502中,并由处理器508来运行存储在存储器502中的应用程序,从而实现各种功能:
接收活体检测请求,根据该活体检测请求启动光源,并向检测对象投射光线,然后,对检测对象进行图像采集,以得到图像序列,识别出该图像序列中检测对象的表面存在该投射光线所产生的反射光信号,该反射光信号在该检测对象表面形成图像特征,采用预设识别模型对该图像特征所属对象的类型进行识别,若识别结果指示该图像特征所属对象的类型为活体,则确定该检测对象为活体。
其中,确定该图像序列中检测对象的表面是否存在该投射光线所产生的反射光信号,以及对该图像特征所属对象的类型进行识别的方式可以有多种,具体可参见前面的实施例,在此不再赘述。
其中,该光源的实现方式也可以有多种,比如,可以通过调节终端屏幕的亮度来实现,或者,也可以利用闪光灯或红外发射器等其他发光部件或外置设备来实现,或者,还可以通过在显示界面上设置一颜色遮罩来实现,等等,即该存储器502中的应用程序,也可以实现如下功能:
根据该活体检测请求调整屏幕亮度,使得该屏幕作为光源向检测对象投射光线。
或者,根据该活体检测请求开启预设发光部件,使得该发光部件作为光源向检测对象投射光线。其中,该发光部件可以包括闪光灯或红外发射器等部件。
或者,根据该活体检测请求启动颜色遮罩,比如启动检测界面,该检测界面可以闪现颜色遮罩,该颜色遮罩作为光源向检测对象投射光线,等等。
其中,该闪现颜色遮罩的区域可以根据实际应用的需求而定,例如,该检测界面可以包括检测区域和非检测区域,检测区域主要用于对监控情况进行显示,而该非检测区域可以闪现颜色遮罩,该颜色遮罩作为光源向检测对象投射光线,等等。
另外,需说明的是,该颜色遮罩的颜色和透明度等参数可以根据实际应用的需求进行设置,该颜色遮罩可以由系统预先进行设定,并在启动检测界面时直接调取,或者,也可以在接收到活体检测请求之后自动生成,即该存储在存储器502中的应用程序,还可以实现如下功能:
生成颜色遮罩,使得该颜色遮罩所投射出的光线能够按照预设规律进行变化。
为了便于后续可以更好地识别出光线的变化,还可以最大化该光线的变化强度。
其中,最大化该光线的变化强度的方式也可以有多种,具体可参见前面的实施例,在此不再赘述。
为了后续可以更好地从图像帧间差中检测出反射光信号,除了可以最大化该光线的变化强度之外,还可以在颜色的选择上,尽量选择对信号分析最鲁棒的颜色空间。
为了提高身份验证的准确性和安全性,还可以采用预设编码的光线组合来作为该颜色遮罩,即该存储在存储器502中的应用程序,还可以实现如下功能:
获取预设编码序列,该编码序列包括多个编码,根据预设编码算法,按照该编码序列中编码的顺序依次确定各个编码对应的颜色,得到颜色序列,基于该颜色序列生成颜色遮罩,使得该颜色遮罩所投射出的光线按照该颜色序列所指示的颜色进 行变化。
其中,该预设编码序列可以是随机生成的,也可以根据实际应用的需求进行设置,而该预设编码算法也可以根据实际应用的需求而定,该编码算法,可以反映编码序列中的各个编码与各种颜色之间的对应关系,比如,可以令红色代表数字-1,绿色代表0,蓝色代表1,等等。
若颜色遮罩为根据预设编码序列而生成的,则也可以通过对光信号进行解码,来确定该图像序列中检测对象的表面是否存在该投射光线所产生的反射光信号,具体可参见前面的实施例,在此不再赘述。
为了减少噪声所造成的数值浮动对信号的影响,在得到图像序列后,还可以对该图像序列进行去噪声处理,即该存储在存储器502中的应用程序,还可以实现如下功能:
对该图像序列进行去噪声处理。
例如,以噪声模型为高斯噪声为例,具体可以使用时序上多帧平均和/或同帧多尺度平均来尽可能地减小噪声,等等。
以上各个操作的具体实施可参见前面的实施例,在此不再赘述。
由上可知,本实施例的终端在需要进行活体检测时,可以启动光源向检测对象投射光线,并对该检测对象进行图像采集,然后确定采集得到的图像序列中检测对象的表面是否存在该投射光线所产生的反射光信号,如果存在,则采用预设识别模型对该图像特征所属对象的类型进行识别,若识别结果指示该图像特征所属对象的类型为活体,则确定该检测对象为活体;由于该方案无需与用户进行繁琐的交互操作和运算,因此,可以降低对硬件配置的需求,而且,由于该方案进行活体判别的依据是检测对象表面的反射光信号,而真正的活体与伪造的活体(合成图片或视频的载体,比如相片、手机或平板电脑等)的反射光信号是不同的,因此,该方案也可以有效抵挡合成人脸攻击,提高判别的准确性;所以,总而言之,该方案可以在终端,特别是移动终端有限的硬件配置下,提高活体检测效果,从而提高身份验证的准确性和安全性。
本领域普通技术人员可以理解,上述实施例的各种方法中的全部或部分步骤可以通过指令来完成,或通过指令控制相关的硬件来完成,该指令可以存储于一计算机可读存储介质中,并由处理器进行加载和执行。
为此,本申请实施例还提供一种存储介质,其中存储有多条指令,该指令能够被处理器进行加载,以执行本申请实施例所提供的任一种活体检测方法中的步骤。例如,该指令可以执行如下步骤:
接收活体检测请求,根据该活体检测请求启动光源,并向检测对象投射光线,然后,对该检测对象进行图像采集,以得到图像序列,识别出该图像序列中检测对象的表面存在该投射光线所产生的反射光信号,该反射光信号在该检测对象表面形成图像特征,采用预设识别模型对该图像特征所属对象的类型进行识别,若识别结果指示该图像特征所属对象的类型为活体,则确定该检测对象为活体。
其中,确定该图像序列中检测对象的表面是否存在该投射光线所产生的反射光信号,以及对该图像特征所属对象的类型进行识别的方式可以有多种,具体可参见前面的实施例,在此不再赘述。
其中,该光源的实现方式也可以有多种,比如,可以通过调节终端屏幕的亮度来实现,或者,也可以利用闪光灯或红外发射器等其他发光部件或外置设备来实现,或者,还可以通过在显示界面上设置一颜色遮罩来实现,等等,即该指令还可以执行如下步骤:
根据该活体检测请求调整屏幕亮度,使得该屏幕作为光源向检测对象投射光线。
或者,根据该活体检测请求开启预设发光部件,使得该发光部件作为光源向检测对象投射光线。其中,该发光部件可以包括闪光灯或红外发射器等部件。
或者,根据该活体检测请求启动颜色遮罩,比如启动检测界面,该检测界面可以闪现颜色遮罩,该颜色遮罩作为光源向检测对象投射光线。
其中,该闪现颜色遮罩的区域可以根据实际应用的需求而定,例如,该检测界面可以包括检测区域和非检测区域,检测区域主要用于对监控情况进行显示,而该非检测区域可以闪现颜色遮罩,该颜色遮罩作为光源向检测对象投射光线,等等。
另外,需说明的是,该颜色遮罩的颜色和透明度等参数可以根据实际应用的需求进行设置,该颜色遮罩可以由系统预先进行设定,并在启动检测界面时直接调取,或者,也可以在接收到活体检测请求之后自动生成,即该指令还可以执行如下步骤:
生成颜色遮罩,使得该颜色遮罩所投射出的光线能够按照预设规律进行变化,比如,具体可以用预设编码的光线组合来作为该颜色遮罩,等等。
以上各个操作的具体实施可参见前面的实施例,在此不再赘述。
其中,该存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM,Random Access Memory)、磁盘或光盘等。
由于该存储介质中所存储的指令,可以执行本申请实施例所提供的任一种活体检测方法中的步骤,因此,可以实现本申请实施例所提供的任一种活体检测方法所能实现的有益效果,详见前面的实施例,在此不再赘述。
以上对本申请实施例所提供的一种活体检测方法、装置和存储介质进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (29)

  1. 一种活体检测方法,其中,包括:
    终端接收活体检测请求;
    根据所述活体检测请求启动光源,并向检测对象投射光线;
    对所述检测对象进行图像采集,以得到图像序列;
    识别出所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号,所述反射光信号在所述检测对象表面形成图像特征;
    采用预设识别模型对所述图像特征所属对象的类型进行识别,所述预设识别模型由多个特征样本训练而成,所述特征样本为所述反射光信号在已标注类型的对象表面所形成的图像特征;
    若识别结果指示所述图像特征所属对象的类型为活体,则确定所述检测对象为活体。
  2. 根据权利要求1所述的方法,其中,所述根据所述活体检测请求启动光源,包括:
    根据所述活体检测请求启动预设的颜色遮罩,所述颜色遮罩作为光源向检测对象投射光线。
  3. 根据权利要求2所述的方法,其中,所述根据所述活体检测请求启动预设的颜色遮罩,包括:
    根据所述活体检测请求启动检测界面,所述检测界面包括非检测区域,所述非检测区域闪现颜色遮罩。
  4. 根据权利要求2所述的方法,其中,所述接收活体检测请求之后,还包括:
    生成颜色遮罩,使得所述颜色遮罩所投射出的光线能够按照预设规律进行变化。
  5. 根据权利要求4所述的方法,其中,所述生成颜色遮罩之后,还包括:
    对于同颜色的光线,获取预设的所述终端的屏幕亮度调整参数,根据所述屏幕亮度调整参数调整所述同颜色的光线在变化前后的屏幕亮度,以调整光线的变化强度;
    对于不同颜色的光线,获取预设的色差调整参数,根据所述色差调整参数调整 所述不同颜色的光线在变化前后的色差,以调整光线的变化强度。
  6. 根据权利要求4所述的方法,其中,所述生成颜色遮罩,使得所述颜色遮罩所投射出的光线能够按照预设规律进行变化包括:
    获取预设编码序列,所述编码序列包括多个编码;
    根据预设编码算法,按照所述编码序列中编码的顺序依次确定各个编码对应的颜色,得到颜色序列;
    基于所述颜色序列生成颜色遮罩,使得所述颜色遮罩所投射出的光线按照所述颜色序列所指示的颜色进行变化。
  7. 根据权利要求1至6任一项所述的方法,其中,所述识别出所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号,包括:
    回归分析所述图像序列中帧的变化,得到回归结果;
    根据所述回归结果识别出所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号。
  8. 根据权利要求7所述的方法,其中,所述根据所述回归结果识别出所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号,包括:
    当所述回归结果大于预设阈值时,则确定所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号;
    当所述回归结果不大于预设阈值时,则确定所述图像序列中检测对象的表面不存在所述投射光线所产生的反射光信号。
  9. 根据权利要求7所述的方法,其中,所述根据所述回归结果识别出所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号,包括:
    通过预设全局特征算法或预设识别模型对所述回归结果进行分类分析;
    若分析结果指示检测对象的表面的帧间变化大于设定值,则确定所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号;
    若分析结果指示检测对象的表面的帧间变化不大于设定值,则确定所述图像序列中检测对象的表面不存在所述投射光线所产生的反射光信号。
  10. 根据权利要求9所述的方法,其中,所述通过预设全局特征算法或预设识别模型对所述回归结果进行分类分析,包括:
    对所述回归结果进行分析;
    若所述图像序列中不存在所述投射光线所产生的反射光信号,则生成指示检测对象的表面的帧间变化不大于设定值的分析结果;
    若所述图像序列中存在所述投射光线所产生的反射光信号,若通过预设全局特征算法或预设识别模型确定存在的反射光信息的反射体为所述检测对象,则生成指示检测对象的表面的帧间变化大于设定值的分析结果,若通过预设全局特征算法或预设识别模型确定存在的反射光信息的反射体不是所述检测对象,则生成指示检测对象的表面的帧间变化不大于设定值的分析结果。
  11. 根据权利要求9所述的方法,其中,所述通过预设全局特征算法或预设识别模型对所述回归结果进行分类分析,包括:
    通过预设全局特征算法或预设识别模型对所述图像序列中的图像进行分类,以筛选出存在所述检测对象的帧,得到候选帧;
    分析所述候选帧的帧间差;
    当根据所述帧间差确定所述检测对象的表面不存在所述投射光线所产生的反射光信号,则生成指示检测对象的表面的帧间变化不大于设定值的分析结果;
    当根据所述帧间差确定所述检测对象的表面存在所述投射光线所产生的反射光信号,则生成指示检测对象的表面的帧间变化大于设定值的分析结果。
  12. 根据权利要求7所述的方法,其中,所述颜色遮罩为根据预设编码序列生成的,所述根据所述回归结果识别出所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号,包括:
    根据所述回归结果,按照预设解码算法对所述图像序列进行解码,得到解码后序列;
    若所述解码后序列与所述编码序列匹配,则确定所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号;
    若所述解码后序列与所述编码序列不匹配,则确定所述图像序列中检测对象的表面不存在所述投射光线所产生的反射光信号。
  13. 根据权利要求12所述的方法,其中,所述回归结果为投射光线变化前后所对应的帧之间的色度/亮度变化相对值,所述根据所述回归结果,按照预设解码算法 对所述图像序列进行解码,得到解码后序列,包括:
    采用预设解码算法,依次对所述图像序列中的所述色度/亮度变化相对值进行计算,得到各投射光线变化前后所对应的帧的色度/亮度绝对值;
    将得到的色度/亮度绝对值作为解码后序列,或按照预设策略对得到的色度/亮度绝对值进行转换,得到解码后序列。
  14. 根据权利要求7所述的方法,其中,所述回归分析所述图像序列中帧的变化,得到回归结果,包括:
    确定检测对象的位置变化程度小于预设变化值时,分别从所述图像序列中获取投射光线变化前后所对应的帧的色度/亮度,通过预设回归函数对所述色度/亮度进行计算,得到投射光线变化前后所对应的帧之间的色度/亮度变化相对值,将色度/亮度变化相对值作为回归结果。
  15. 一种活体检测终端,其中,包括存储器、处理器及存储在存储器上的计算机程序,所述处理器执行所述计算机程序时实现以下方法步骤:
    接收活体检测请求;
    根据所述活体检测请求启动光源,并向检测对象投射光线;
    对所述检测对象进行图像采集,以得到图像序列;
    识别出所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号,所述反射光信号在所述检测对象表面形成图像特征,采用预设识别模型对所述图像特征所属对象的类型进行识别,所述预设识别模型由多个特征样本训练而成,所述特征样本为所述反射光信号在已标注类型的对象表面所形成的图像特征,若识别结果指示所述图像特征所属对象的类型为活体,则确定所述检测对象为活体。
  16. 根据权利要求15所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    根据所述活体检测请求启动预设的颜色遮罩,所述颜色遮罩作为光源向检测对象投射光线。
  17. 根据权利要求16所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    根据所述活体检测请求启动检测界面,所述检测界面包括非检测区域,所述非检测区域闪现颜色遮罩。
  18. 根据权利要求16所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    生成颜色遮罩,使得所述颜色遮罩所投射出的光线能够按照预设规律进行变化。
  19. 根据权利要求18所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    对于同颜色的光线,获取预设的所述活体检测终端的屏幕亮度调整参数,根据所述屏幕亮度调整参数调整所述同颜色的光线在变化前后的屏幕亮度,以调整光线的变化强度;
    对于不同颜色的光线,获取预设的色差调整参数,根据所述色差调整参数调整所述不同颜色的光线在变化前后的色差,以调整光线的变化强度。
  20. 根据权利要求18所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    获取预设编码序列,所述编码序列包括多个编码;
    根据预设编码算法,按照所述编码序列中编码的顺序依次确定各个编码对应的颜色,得到颜色序列;
    基于所述颜色序列生成颜色遮罩,使得所述颜色遮罩所投射出的光线按照所述颜色序列所指示的颜色进行变化。
  21. 根据权利要求15至20任一项所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    回归分析所述图像序列中帧的变化,得到回归结果;
    根据所述回归结果识别出所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号。
  22. 根据权利要求21所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    当所述回归结果大于预设阈值时,则确定所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号;
    当所述回归结果不大于预设阈值时,则确定所述图像序列中检测对象的表面不 存在所述投射光线所产生的反射光信号。
  23. 根据权利要求21所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    通过预设全局特征算法或预设识别模型对所述回归结果进行分类分析;
    若分析结果指示检测对象的表面的帧间变化大于设定值,则确定所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号;
    若分析结果指示检测对象的表面的帧间变化不大于设定值,则确定所述图像序列中检测对象的表面不存在所述投射光线所产生的反射光信号。
  24. 根据权利要求23所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    对所述回归结果进行分析;
    若所述图像序列中不存在所述投射光线所产生的反射光信号,则生成指示检测对象的表面的帧间变化不大于设定值的分析结果;
    若所述图像序列中存在所述投射光线所产生的反射光信号,若通过预设全局特征算法或预设识别模型确定存在的反射光信息的反射体为所述检测对象,则生成指示检测对象的表面的帧间变化大于设定值的分析结果,若通过预设全局特征算法或预设识别模型确定存在的反射光信息的反射体不是所述检测对象,则生成指示检测对象的表面的帧间变化不大于设定值的分析结果。
  25. 根据权利要求23所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    通过预设全局特征算法或预设识别模型对所述图像序列中的图像进行分类,以筛选出存在所述检测对象的帧,得到候选帧;
    分析所述候选帧的帧间差;
    当根据所述帧间差确定所述检测对象的表面不存在所述投射光线所产生的反射光信号,则生成指示检测对象的表面的帧间变化不大于设定值的分析结果;
    当根据所述帧间差确定所述检测对象的表面存在所述投射光线所产生的反射光信号,则生成指示检测对象的表面的帧间变化大于设定值的分析结果。
  26. 根据权利要求21所述的终端,其中,所述颜色遮罩为根据预设编码序列生 成的,所述处理器执行所述计算机程序时实现以下方法步骤:
    根据所述回归结果,按照预设解码算法对所述图像序列进行解码,得到解码后序列;
    若所述解码后序列与所述编码序列匹配,则确定所述图像序列中检测对象的表面存在所述投射光线所产生的反射光信号;
    若所述解码后序列与所述编码序列不匹配,则确定所述图像序列中检测对象的表面不存在所述投射光线所产生的反射光信号。
  27. 根据权利要求26所述的终端,其中,所述回归结果为投射光线变化前后所对应的帧之间的色度/亮度变化相对值,所述处理器执行所述计算机程序时实现以下方法步骤:
    采用预设解码算法,依次对所述图像序列中的所述色度/亮度变化相对值进行计算,得到各投射光线变化前后所对应的帧的色度/亮度绝对值;
    将得到的色度/亮度绝对值作为解码后序列,或按照预设策略对得到的色度/亮度绝对值进行转换,得到解码后序列。
  28. 根据权利要求21所述的终端,其中,所述处理器执行所述计算机程序时实现以下方法步骤:
    确定检测对象的位置变化程度小于预设变化值时,分别从所述图像序列中获取投射光线变化前后所对应的帧的色度/亮度,通过预设回归函数对所述色度/亮度进行计算,得到投射光线变化前后所对应的帧之间的色度/亮度变化相对值,将色度/亮度变化相对值作为回归结果。
  29. 一种存储介质,其中,所述存储介质存储有多条指令,所述指令适于处理器进行加载,以执行权利要求1至14任一项所述的活体检测方法中的步骤。
PCT/CN2018/111218 2016-12-30 2018-10-22 一种活体检测方法、终端和存储介质 Ceased WO2019080797A1 (zh)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201611257052 2016-12-30
CN201711012244.1A CN107992794B (zh) 2016-12-30 2017-10-26 一种活体检测方法、装置和存储介质
CN201711012244.1 2017-10-26

Publications (1)

Publication Number Publication Date
WO2019080797A1 true WO2019080797A1 (zh) 2019-05-02

Family

ID=62031297

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2017/117958 Ceased WO2018121428A1 (zh) 2016-12-30 2017-12-22 一种活体检测方法、装置及存储介质
PCT/CN2018/111218 Ceased WO2019080797A1 (zh) 2016-12-30 2018-10-22 一种活体检测方法、终端和存储介质

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117958 Ceased WO2018121428A1 (zh) 2016-12-30 2017-12-22 一种活体检测方法、装置及存储介质

Country Status (2)

Country Link
CN (1) CN107992794B (zh)
WO (2) WO2018121428A1 (zh)

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992794B (zh) * 2016-12-30 2019-05-28 腾讯科技(深圳)有限公司 Liveness detection method and apparatus, and storage medium
CN107832712A (zh) * 2017-11-13 2018-03-23 深圳前海微众银行股份有限公司 Liveness detection method and apparatus, and computer-readable storage medium
CN109101881B (zh) * 2018-07-06 2021-08-20 华中科技大学 Real-time blink detection method based on multi-scale time-series images
CN109376592B (zh) 2018-09-10 2021-04-27 创新先进技术有限公司 Liveness detection method and apparatus, and computer-readable storage medium
CN111310515A (zh) * 2018-12-11 2020-06-19 上海耕岩智能科技有限公司 Coded-mask biometric analysis method, storage medium and neural network
CN111310514A (zh) * 2018-12-11 2020-06-19 上海耕岩智能科技有限公司 Coded-mask biometric reconstruction method and storage medium
CN109660745A (zh) * 2018-12-21 2019-04-19 深圳前海微众银行股份有限公司 Video recording method, apparatus, terminal and computer-readable storage medium
CN111488756B (zh) * 2019-01-25 2023-10-03 杭州海康威视数字技术股份有限公司 Face-recognition-based liveness detection method, electronic device and storage medium
CN109961025B (zh) * 2019-03-11 2020-01-24 烟台市广智微芯智能科技有限责任公司 Real/fake face recognition and detection method and system based on image skewness
CN110119719B (zh) * 2019-05-15 2025-01-21 深圳前海微众银行股份有限公司 Liveness detection method, apparatus, device and computer-readable storage medium
CN110414346A (zh) * 2019-06-25 2019-11-05 北京迈格威科技有限公司 Liveness detection method, apparatus, electronic device and storage medium
CN110298312B (zh) * 2019-06-28 2022-03-18 北京旷视科技有限公司 Liveness detection method, apparatus, electronic device and computer-readable storage medium
CN112183156B (zh) * 2019-07-02 2023-08-11 杭州海康威视数字技术股份有限公司 Liveness detection method and device
CN110516644A (zh) * 2019-08-30 2019-11-29 深圳前海微众银行股份有限公司 Liveness detection method and apparatus
CN110969077A (zh) * 2019-09-16 2020-04-07 成都恒道智融信息技术有限公司 Liveness detection method based on color change
CN110688946A (zh) * 2019-09-26 2020-01-14 上海依图信息技术有限公司 Public-cloud silent liveness detection device and method based on image recognition
CN111126229A (zh) * 2019-12-17 2020-05-08 中国建设银行股份有限公司 Data processing method and apparatus
CN111274928B (zh) * 2020-01-17 2023-04-07 腾讯科技(深圳)有限公司 Liveness detection method and apparatus, electronic device and storage medium
CN111310575B (zh) * 2020-01-17 2022-07-08 腾讯科技(深圳)有限公司 Face liveness detection method, related apparatus, device and storage medium
CN113298747B (zh) * 2020-02-19 2025-02-21 北京沃东天骏信息技术有限公司 Image and video detection method and apparatus
CN113342432B (zh) * 2020-03-02 2024-09-24 北京小米移动软件有限公司 Display method and apparatus for terminal interface themes, and storage medium
CN111444831B (zh) * 2020-03-25 2023-03-21 深圳中科信迅信息技术有限公司 Face recognition method with liveness detection
SG10202005395VA (en) * 2020-06-08 2021-01-28 Alipay Labs Singapore Pte Ltd Face liveness detection system, device and method
CN111797735A (zh) * 2020-06-22 2020-10-20 深圳壹账通智能科技有限公司 Face video recognition method, apparatus, device and storage medium
CN111783640A (zh) * 2020-06-30 2020-10-16 北京百度网讯科技有限公司 Detection method, apparatus, device and storage medium
CN111899232B (zh) * 2020-07-20 2023-07-04 广西大学 Method for non-destructive testing of bamboo-wood composite container floors using image processing
CN111914763B (zh) * 2020-08-04 2023-11-28 网易(杭州)网络有限公司 Liveness detection method, apparatus and terminal device
CN114596638A (zh) * 2020-11-30 2022-06-07 华为技术有限公司 Face liveness detection method, apparatus and storage medium
CN112528909B (zh) * 2020-12-18 2024-05-21 平安银行股份有限公司 Liveness detection method, apparatus, electronic device and computer-readable storage medium
CN113807159B (zh) * 2020-12-31 2024-08-20 京东科技信息技术有限公司 Face recognition processing method, apparatus, device and storage medium
CN112818782B (zh) * 2021-01-22 2021-09-21 电子科技大学 Generalizable silent liveness detection method based on medium perception
CN113837930B (zh) * 2021-09-24 2024-02-02 重庆中科云从科技有限公司 Face image synthesis method, apparatus and computer-readable storage medium
CN113869219B (zh) * 2021-09-29 2024-05-21 平安银行股份有限公司 Face liveness detection method, apparatus, device and storage medium
CN113888500B (zh) * 2021-09-29 2024-07-02 平安银行股份有限公司 Glare degree detection method, apparatus, device and medium based on face images
CN115995102A (zh) * 2021-10-15 2023-04-21 北京眼神科技有限公司 Silent face liveness detection method, apparatus, storage medium and device
CN114821822B (zh) * 2022-04-02 2024-11-05 北京海鑫智圣技术有限公司 Liveness detection method and apparatus
CN116978078A (zh) * 2022-04-14 2023-10-31 京东科技信息技术有限公司 Liveness detection method and apparatus, system, electronic device and computer-readable medium
CN114973315B (zh) * 2022-05-11 2025-10-03 新瑞鹏宠物医疗集团有限公司 Pet identity verification method based on liveness detection, and related products
CN115147936B (zh) * 2022-05-16 2025-07-18 北京旷视科技有限公司 Liveness detection method, electronic device, storage medium and program product
WO2023221996A1 (zh) * 2022-05-16 2023-11-23 北京旷视科技有限公司 Liveness detection method, electronic device, storage medium and program product
CN115512447A (zh) * 2022-10-10 2022-12-23 京东科技控股股份有限公司 Liveness detection method and apparatus
CN116052284A (zh) * 2022-12-06 2023-05-02 深圳芯启航科技有限公司 Fingerprint liveness detection method, system, apparatus, terminal and storage medium
CN116403261A (zh) * 2023-04-07 2023-07-07 平安银行股份有限公司 Face liveness detection method, system, computer device and storage medium
CN117058420A (zh) * 2023-08-15 2023-11-14 湖南三湘银行股份有限公司 Face-recognition-based process handling method
CN117115410A (zh) * 2023-08-15 2023-11-24 京东科技控股股份有限公司 Liveness detection information generation method, apparatus, device, medium and program product
CN119851357A (zh) * 2024-07-25 2025-04-18 北京达佳互联信息技术有限公司 Video capture method and apparatus, video verification method and apparatus, and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105260731A (zh) * 2015-11-25 2016-01-20 商汤集团有限公司 Face liveness detection system and method based on light pulses
CN106529512A (zh) * 2016-12-15 2017-03-22 北京旷视科技有限公司 Live face verification method and apparatus
CN107220635A (zh) * 2017-06-21 2017-09-29 北京市威富安防科技有限公司 Face liveness detection method addressing multiple spoofing modes
CN107992794A (zh) * 2016-12-30 2018-05-04 腾讯科技(深圳)有限公司 Liveness detection method and apparatus, and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105637532B (zh) * 2015-06-08 2020-08-14 北京旷视科技有限公司 Liveness detection method, liveness detection system and computer program product
WO2016197298A1 (zh) * 2015-06-08 2016-12-15 北京旷视科技有限公司 Liveness detection method, liveness detection system and computer program product
CN104951769B (zh) * 2015-07-02 2018-11-30 京东方科技集团股份有限公司 Living-body recognition apparatus, living-body recognition method and living-body authentication system
CN105912986B (zh) * 2016-04-01 2019-06-07 北京旷视科技有限公司 Liveness detection method and system
CN107273794A (zh) * 2017-04-28 2017-10-20 北京建筑大学 Liveness discrimination method and apparatus in a face recognition process

Also Published As

Publication number Publication date
CN107992794B (zh) 2019-05-28
WO2018121428A1 (zh) 2018-07-05
CN107992794A (zh) 2018-05-04

Similar Documents

Publication Publication Date Title
CN107992794B (zh) Liveness detection method and apparatus, and storage medium
CN108399349B (zh) Image recognition method and apparatus
CN112037162B (zh) Facial acne detection method and device
CN113591517B (zh) Liveness detection method and related device
WO2017054605A1 (zh) Image processing method and apparatus
US20160173752A1 Techniques for context and performance adaptive processing in ultra low-power computer vision systems
CN120833619A (zh) Video image processing method and apparatus
CN115880213B (zh) Display anomaly detection method, apparatus and system
WO2019052329A1 (zh) Face recognition method and related products
WO2017071085A1 (zh) Alarm method and apparatus
CN109348135A (zh) Photographing method, apparatus, storage medium and terminal device
CN108566516A (zh) Image processing method, apparatus, storage medium and mobile terminal
CN110765924B (zh) Liveness detection method and apparatus, and computer-readable storage medium
US20190019024A1 Method for Iris Recognition and Related Products
US11978231B2 Wrinkle detection method and terminal device
CN105635776A (zh) Remote control method and system for a virtual operation interface
WO2019024718A1 (zh) Anti-counterfeiting processing method, anti-counterfeiting processing apparatus and electronic device
CN111557007B (zh) Method for detecting eye open/closed state, and electronic device
WO2024082976A1 (zh) OCR recognition method for text images, electronic device and medium
CN109639981B (zh) Image capture method and mobile terminal
US9684828B2 Electronic device and eye region detection method in electronic device
US20170112381A1 Heart rate sensing using camera-based handheld device
CN109040588A (zh) Face image photographing method, apparatus, storage medium and terminal
CN109104522B (zh) Face recognition method and mobile terminal
CN108174101B (zh) Photographing method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18870241

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18870241

Country of ref document: EP

Kind code of ref document: A1