Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 is a block diagram of an image processing device 10 according to an embodiment of the present application. In this embodiment, the image processing device 10 may be configured to perform high-accuracy face identity detection on a face image to be detected, and includes a face identity detection device 100, a memory 11, a processor 12, and a communication unit 13. The memory 11, the processor 12, and the communication unit 13 are electrically connected to one another, directly or indirectly, to realize data transmission or interaction. For example, these components may be electrically connected to one another via one or more communication buses or signal lines. The face identity detection device 100 includes at least one software functional module that can be stored in the memory 11 in the form of software or firmware, and the processor 12 executes various functional applications and performs data processing by running the software functional modules of the face identity detection device 100 stored in the memory 11. In this embodiment, the image processing device 10 may be, but is not limited to, a server, a mobile terminal, or the like.
In this embodiment, the memory 11 may be configured to store a feature point extraction model for extracting feature points of a face in a face image. The face feature points include any one or a combination of: the two corner points and the center point of each eyebrow; the two corner points of each eye, the center points of the upper and lower eyelids, and the center point of each eye; the nose tip point, the nose apex point, the two nose wing points, and the nose septum point; and the two mouth corner points, the mouth center point, the uppermost point of the upper lip, and the lowermost point of the lower lip. Through the feature point extraction model, the image processing device 10 may extract from a face image all face feature points that are actually present in the image and fall within the capability coverage of the model. The feature point extraction model is obtained by training, based on a convolutional neural network (CNN), on sample face images whose face feature points have been manually annotated. The model may be obtained by the image processing device 10 itself through sample training, or may be obtained from an external device and stored in the memory 11.
In this embodiment, the memory 11 is further configured to store a correspondence between image topology operator vector groups and adjustment parameters. An adjustment parameter expresses the image processing parameters a pixel point requires in order to reverse its beautification processing effect, and an image topology operator vector group expresses the image topological relationship, in the RGB color space, between an image pixel point and its connected (adjacent) pixel points. The image topology operator may be expressed as a Laplacian operator or a gradient operator, and the specific operator expression may be configured differently according to requirements. In the embodiments of the present application, the Laplacian operator is taken as an example in the following description; it can be understood that the Laplacian operator is only one expression form of the image topology operator, the expression form is not limited to the Laplacian operator, and the related operations when the image topology operator is expressed by other operators are similar to those when it is expressed by the Laplacian operator.
In this embodiment, when the image topology operator is expressed by the Laplacian operator, the Laplacian operator vector group represents the transformation difference, in the RGB color space, between the color vectors (the R (red), G (green), and B (blue) vectors) corresponding to an image pixel point and those of its adjacent pixel points, and can reflect the degree of beautification processing applied to the pixel point. If the image processing device 10 calculates the Laplacian operator vector group using a 4-neighborhood system, it performs the calculation over the image pixel point and the 4 pixel points adjacent to it; if it uses an 8-neighborhood system, it performs the calculation over the image pixel point and the 8 pixel points adjacent to it. In one implementation of this embodiment, the image processing device 10 calculates the Laplacian operator vector group using an 8-neighborhood system.
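As an illustrative sketch (not part of the claimed embodiments), the 8-neighborhood Laplacian operator vector group described above can be computed as the set of RGB color-vector differences between a pixel point and each of its adjacent pixel points. The function name and array layout below are assumptions for illustration only:

```python
import numpy as np

# Offsets of the 8-neighborhood around a pixel, as (dy, dx) pairs.
NEIGHBOURS_8 = [(-1, -1), (-1, 0), (-1, 1),
                ( 0, -1),          ( 0, 1),
                ( 1, -1), ( 1, 0), ( 1, 1)]

def laplacian_vector_group(image, y, x, offsets=NEIGHBOURS_8):
    """Return the image-topology (Laplacian) operator vector group for one
    pixel: the RGB color-vector difference between the pixel and each of
    its adjacent pixels.  `image` is an H x W x 3 array in RGB order."""
    h, w, _ = image.shape
    centre = image[y, x].astype(np.int32)       # signed to allow negative differences
    group = []
    for dy, dx in offsets:
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:         # skip neighbors outside the image
            group.append(centre - image[ny, nx].astype(np.int32))
    return np.stack(group)                      # shape: (n_neighbors, 3)
```

For an interior pixel the group has 8 difference vectors; at image borders only the in-bounds neighbors contribute, mirroring the neighborhood-system choice described above.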
The correspondence between the Laplacian operator vector groups and the adjustment parameters can be represented by a parameter comparison model obtained through convolutional neural network training. The training samples used by the parameter comparison model are the Laplacian operator vector groups calculated at the face image feature points as a face image goes from non-beautified to beautified under different beautification parameters. During training, the parameter comparison model compares the Laplacian operator vector group under a beautification parameter with the Laplacian operator vector group without beautification (including a vector value comparison and a vector direction comparison), and generates a corresponding adjustment parameter based on the comparison result. The Laplacian operator vector group under the beautification parameter is then vector-transformed based on the generated adjustment parameter, the overall difference between the transformed vector group and the non-beautified vector group is calculated based on a least squares method, and the adjustment parameter is adjusted in reverse based on that overall difference. This continues until the overall difference after the vector transformation is minimized, and the adjustment parameter at which the overall difference is minimal is taken as the adjustment parameter corresponding to that Laplacian operator vector group.
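The reverse adjustment by least squares can be sketched as follows. For illustration a single per-channel scale is assumed as the adjustment parameter, and the closed-form least-squares solution replaces the iterative reverse adjustment, so this is a simplification of the training procedure described above, not the embodiment itself:

```python
import numpy as np

def fit_adjustment_parameter(beautified_group, original_group):
    """Fit an illustrative per-RGB-channel scale: the value that, applied to
    the beautified Laplacian vector group, minimizes the least-squares
    difference to the un-beautified group.  Both inputs are arrays of shape
    (n_neighbors, 3); the scalar-scale model is an assumption."""
    b = np.asarray(beautified_group, dtype=np.float64)
    o = np.asarray(original_group, dtype=np.float64)
    # Closed-form solution of  min_a || a*b - o ||^2, solved per channel.
    num = (b * o).sum(axis=0)
    den = (b * b).sum(axis=0)
    safe_den = np.where(den > 0, den, 1.0)
    scale = np.where(den > 0, num / safe_den, 1.0)
    # Overall difference after applying the fitted transform.
    residual = ((b * scale - o) ** 2).sum()
    return scale, residual
```

When the beautified group is an exact per-channel scaling of the original, the fitted scale recovers the inverse transform and the residual (the overall difference) is zero.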
The beautification parameters include any one or a combination of sub-parameters such as whitening operator parameters, skin-smoothing (peeling) operator parameters, stretching operator parameters, and skin-softening operator parameters, and the adjustment parameters include any one or a combination of parameters such as image sharpening parameters, image stretching parameters, and image noise parameters. The parameter comparison model may be obtained by the image processing device 10 itself through sample training, or may be obtained from an external device and stored in the memory 11.
In this embodiment, the memory 11 may also be used to store a face identity image indicating a specific identity of a person. The memory 11 may be, but is not limited to, a random access memory, a read only memory, a programmable read only memory, an erasable programmable read only memory, an electrically erasable programmable read only memory, and the like. The memory 11 may also be used to store various applications that the processor 12 executes upon receiving execution instructions. Further, the software programs and modules in the memory 11 may also include an operating system, which may include various software components and/or drivers for managing system tasks (e.g., memory management, storage device control, power management, etc.), and may communicate with various hardware or software components to provide an operating environment for other software components.
In this embodiment, the processor 12 may be an integrated circuit chip having signal processing capabilities. The processor 12 may be a general-purpose processor, including a central processing unit (CPU), a graphics processing unit (GPU), a network processor (NP), and the like, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In the present embodiment, the communication unit 13 is configured to establish a communication connection between the image processing apparatus 10 and another external apparatus via a network, and perform data transmission via the network. For example, the image processing apparatus 10 may obtain, from the external apparatus through the communication unit 13, a face image to be detected that needs face identity detection, where the face image to be detected may be a face image subjected to face beautification processing, or may be an original face image without face beautification.
In this embodiment, through the face identity detection device 100 stored in the memory 11, the image processing device 10 can reduce the influence of beautification processing on the face identity recognition result and improve the accuracy of face identity recognition.
It is to be understood that the configuration shown in Fig. 1 is only a schematic configuration of the image processing device 10, and the image processing device 10 may include more or fewer components than those shown in Fig. 1, or have a configuration different from that shown in Fig. 1. The components shown in Fig. 1 may be implemented in hardware, software, or a combination thereof.
Fig. 2 is a schematic flow chart of a face identity detection method according to an embodiment of the present application. In the embodiment of the present application, the face identity detection method is applied to the image processing device 10, and the image processing device 10 stores a corresponding relationship between an image topology operator vector group and an adjustment parameter, and a face identity image for indicating a specific identity of a person. The following describes the specific flow and steps of the face identity detection method shown in fig. 2 in detail.
Step S210, feature point extraction is carried out on the face image to be detected, and the face feature point of the face image to be detected and adjacent pixel points corresponding to the face feature point are obtained.
In this embodiment, after the image processing device 10 acquires the face image to be detected, it may perform feature point extraction on the face image to be detected based on the feature point extraction model to obtain the face feature points of the face image to be detected, and at the same time obtain the adjacent pixel points corresponding to each face feature point in the face image to be detected.
Step S220, calculating an image topology operator vector group between each facial feature point and the corresponding adjacent pixel point.
In this embodiment, taking laplacian as an expression form of the image topology operator as an example, the image processing apparatus 10 calculates a laplacian vector group between each human face feature point and a corresponding adjacent pixel point in the RGB color space when obtaining the human face feature point in the human face image to be detected and the adjacent pixel point of each human face feature point.
In step S230, the adjustment parameters of each facial feature point corresponding to the image topology operator vector set of the facial feature point are obtained.
Optionally, please refer to fig. 3, which is a flowchart illustrating the sub-steps included in step S230 shown in fig. 2. In this embodiment, the correspondence between the image topology operator vector sets and the adjustment parameters includes adjustment parameters corresponding to different image topology operator vector sets, and the step S230 includes sub-steps S231 and S232.
In sub-step S231, a corresponding matched target vector set is searched for from all image topology operator vector sets stored in the image processing device 10 according to the image topology operator vector set of each facial feature point.
In the substep S232, if the search is successful, the adjustment parameter corresponding to the searched target vector set is used as the adjustment parameter of the face feature point.
In the present embodiment, taking the Laplacian operator as the expression form of the image topology operator as an example, the image processing device 10 inputs the Laplacian operator vector group of each face feature point into the parameter comparison model, searches the model for a target vector group having the same vector values and vector directions as that vector group, and takes the adjustment parameter corresponding to the target vector group in the parameter comparison model as the adjustment parameter of the face feature point. It is understood that the Laplacian operator is only one expression form of the image topology operator; the expression form is not limited to the Laplacian operator, and the related operations when the image topology operator is expressed by other operators are similar.
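The search for a matched target vector group can be sketched as a lookup keyed by the vector group's values. In practice the trained parameter comparison model plays the role of the lookup table; the function and key encoding below are illustrative assumptions:

```python
def lookup_adjustment(vector_group, table):
    """Find the adjustment parameter whose stored target vector group matches
    the feature point's vector group in both value and direction.  `table`
    maps tuple-encoded vector groups to adjustment parameters; a trained
    parameter comparison model would play this role in practice."""
    # Encode the vector group as a hashable key (one tuple per neighbor vector).
    key = tuple(tuple(int(c) for c in v) for v in vector_group)
    return table.get(key)   # None when no matching target vector group is found
```

A successful lookup corresponds to sub-step S232; a `None` result corresponds to the search failing to find a matched target vector group.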
Step S240, performing image adjustment on the face image to be detected based on the acquired adjustment parameters of all the face feature points to obtain a corresponding target adjustment image.
In this embodiment, after obtaining the adjustment parameters of the face feature points in the face image to be detected, the image processing device 10 performs image adjustment on the face image to be detected according to the degree to which each pixel point is influenced by the adjustment parameters of all the face feature points, and obtains the target adjustment image corresponding to the face image to be detected after any beautification processing effect has been reversed. Here, the adjustment influence degree indicates the degree to which the adjustment parameter at each face feature point participates in the image adjustment of the other pixel points.
Optionally, please refer to fig. 4, which is a flowchart illustrating the sub-steps included in step S240 shown in fig. 2. In this embodiment, the step S240 may include a sub-step S241 and a sub-step S242.
Sub-step S241: establishing a blending field among all the pixel points in the face image to be detected, and uniformly spreading, from the position corresponding to each face feature point in the blending field, the adjustment parameter of that face feature point to the other pixel points in the blending field.
Sub-step S242: performing pixel adjustment on each pixel point in the blending field according to the adjustment parameter obtained by that pixel point, and correspondingly generating the target adjustment image.
In this embodiment, the blending field is a source-free and irrotational field. When each face feature point propagates its adjustment parameter from its corresponding position in the blending field, the adjustment parameter radiates uniformly outward to the surrounding pixel points with the face feature point as the origin, so that each pixel point in the blending field obtains an adjustment parameter from the radiation of each face feature point. The image processing device 10 then performs pixel adjustment and transformation on each pixel point according to the adjustment parameter that pixel point has obtained, thereby obtaining the target adjustment image corresponding to the face image to be detected.
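One way to realize the uniform radiation of adjustment parameters through a blending field is inverse-distance weighting, sketched below. The embodiment does not fix a particular field model, so the weighting scheme and names here are assumptions for illustration:

```python
import numpy as np

def propagate_adjustments(shape, feature_points, eps=1e-6):
    """Spread each face feature point's adjustment parameter uniformly outward
    through the blending field: every pixel point receives an inverse-distance
    weighted combination of all feature-point parameters, so nearer feature
    points dominate.  `feature_points` is a list of ((row, col), value) pairs;
    a scalar parameter per feature point is assumed for simplicity."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]                 # pixel coordinate grids
    weights = np.zeros((h, w))
    field = np.zeros((h, w))
    for (fy, fx), value in feature_points:
        d = np.hypot(ys - fy, xs - fx) + eps    # radial distance from the feature point
        wgt = 1.0 / d                           # uniform radiation: weight decays with distance
        weights += wgt
        field += wgt * value
    return field / weights                      # per-pixel adjustment parameter
```

With a single feature point the whole field takes that point's value; with several, each pixel blends the values of all feature points in proportion to their proximity, matching the radiation-propagation behavior described above.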
Step S250, performing image comparison between the target adjustment image and the stored face identity image to obtain a target identity image matched with the target adjustment image in all the face identity images, so as to complete face identity detection on the face image to be detected.
In this embodiment, after obtaining the target adjustment image in which the beautification processing effect on the face image to be detected has been reduced, the image processing device 10 compares the target adjustment image with the face identity images stored in the image processing device 10 to obtain, among all the stored face identity images, the target identity image that matches the target adjustment image and indicates the specific identity of the person corresponding to the face image to be detected, thereby reducing the influence of beautification technology on face identity recognition and realizing high-accuracy face identity detection.
Optionally, please refer to fig. 5, which is a flowchart illustrating the sub-steps included in step S250 shown in fig. 2. In the embodiment of the present application, the step S250 may include a sub-step S251 and a sub-step S252.
Sub-step S251: calculating an image confidence between the target adjustment image and each stored face identity image.
Sub-step S252: selecting the face identity image with the maximum image confidence as the target identity image corresponding to the face image to be detected.
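The confidence calculation and selection of sub-steps S251 and S252 can be sketched as follows. Normalized cross-correlation is assumed here purely for illustration, since the embodiment does not specify a particular confidence measure:

```python
import numpy as np

def match_identity(target_image, identity_images):
    """Select the stored face identity image with the highest confidence
    relative to the target adjustment image.  `identity_images` maps an
    identity label to an image array; normalized cross-correlation of the
    flattened images serves as the illustrative confidence measure."""
    t = np.asarray(target_image, dtype=np.float64).ravel()
    t = (t - t.mean()) / (t.std() + 1e-12)      # zero-mean, unit-variance
    best_label, best_conf = None, -np.inf
    for label, img in identity_images.items():
        v = np.asarray(img, dtype=np.float64).ravel()
        v = (v - v.mean()) / (v.std() + 1e-12)
        conf = float(np.dot(t, v)) / t.size     # in [-1, 1]; 1 means identical structure
        if conf > best_conf:
            best_label, best_conf = label, conf
    return best_label, best_conf
```

A production system would more likely compare learned face embeddings than raw pixels; the maximum-confidence selection of sub-step S252 is the same in either case.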
Fig. 6 is a schematic flow chart of a face identity detection method according to an embodiment of the present application. In this embodiment of the application, the face identity detection method may further include step S209 before step S210.
Step S209, configuring and storing the corresponding relationship between the image topology operator vector group and the adjustment parameter.
In this embodiment, taking the Laplacian operator as the expression form of the image topology operator as an example, the image processing device 10 may configure the corresponding relationship between the Laplacian operator vector groups and the adjustment parameters by training, based on a convolutional neural network, a parameter comparison model between them using samples, and may store the corresponding relationship in the memory 11.
When training the parameter comparison model, the image processing device 10 records each beautification parameter and the Laplacian operator vector groups at the face image feature points under the different beautification parameters. It then compares the Laplacian operator vector group under a beautification parameter with the Laplacian operator vector group under the non-beautified condition, and preliminarily generates an adjustment parameter for that beautification parameter according to the comparison result. The Laplacian operator vector group under the beautification parameter is vector-transformed with the generated adjustment parameter, the overall difference between the transformed vector group and the non-beautified vector group is calculated based on the least squares method, and the adjustment parameter is adjusted in reverse based on that overall difference, until the overall difference between the transformed vector group and the non-beautified vector group is minimized. The adjustment parameter at which the overall difference is minimal is taken as the adjustment parameter corresponding to that Laplacian operator vector group.
Fig. 7 is a block diagram of a face identity detection apparatus 100 according to an embodiment of the present application. In the embodiment of the present application, the face identity detection apparatus 100 is applied to the image processing device 10 shown in fig. 1, and the image processing device 10 stores therein a corresponding relationship between an image topology operator vector set and an adjustment parameter, and a face identity image for indicating a specific identity of a person. The face identity detection device 100 includes a feature point extraction module 110, a vector group calculation module 120, a parameter acquisition module 130, an image adjustment module 140, and an identity comparison module 150.
The feature point extraction module 110 is configured to perform feature point extraction on a face image to be detected to obtain a face feature point of the face image to be detected and an adjacent pixel point corresponding to the face feature point.
In this embodiment, the feature point extraction module 110 may execute step S210 shown in fig. 2, and the specific execution process may refer to the above detailed description of step S210.
The vector group calculating module 120 is configured to calculate an image topology operator vector group between each facial feature point and the corresponding adjacent pixel point.
In this embodiment, the vector group calculating module 120 may execute step S220 shown in fig. 2, and the specific execution process may refer to the above detailed description of step S220.
The parameter obtaining module 130 is configured to obtain an adjustment parameter of each facial feature point corresponding to the image topology operator vector set of the facial feature point.
In this embodiment, the parameter obtaining module 130 may execute step S230 shown in fig. 2 and the sub-steps S231 and S232 shown in fig. 3, and the specific execution process may refer to the above detailed description of step S230, sub-step S231, and sub-step S232.
The image adjusting module 140 is configured to perform image adjustment on the face image to be detected based on the obtained adjustment parameters of all the face feature points, so as to obtain a corresponding target adjustment image.
In this embodiment, the image adjustment module 140 may execute the step S240 shown in fig. 2, and the sub-steps S241 and S242 shown in fig. 4, and the specific execution process may refer to the above detailed description of the step S240, the sub-step S241, and the sub-step S242.
The identity comparison module 150 is configured to perform image comparison on the target adjustment image and the stored face identity image to obtain a target identity image matched with the target adjustment image in all the face identity images, so as to complete face identity detection on the face image to be detected.
In this embodiment, the identity comparison module 150 may execute step S250 shown in fig. 2, and sub-steps S251 and S252 shown in fig. 5, and the specific execution process may refer to the above detailed description of step S250, sub-step S251, and sub-step S252.
Fig. 8 is a schematic block diagram of a face identity detection apparatus 100 according to an embodiment of the present application. In this embodiment, the face identity detection apparatus 100 may further include a relationship configuration module 160.
The relationship configuration module 160 is configured to configure and store the corresponding relationship between the image topology operator vector group and the adjustment parameter.
In this embodiment, the relationship configuration module 160 may execute step S209 shown in fig. 6, and the specific execution process may refer to the above detailed description of step S209.
In summary, in the face identity detection method and apparatus provided in the embodiments of the present application, the face identity detection method can reduce the influence of beautification technology on the face identity recognition result and improve the accuracy of face identity recognition. First, the method obtains, by feature point extraction, the face feature points in the face image to be detected and the adjacent pixel points corresponding to those face feature points. Next, by calculating the image topology operator vector group between each face feature point and its corresponding adjacent pixel points, and using the correspondence between image topology operator vector groups and adjustment parameters stored by the image processing device, the method obtains, for each face feature point, the adjustment parameter used to reverse the beautification processing effect. The method then performs image adjustment on the face image to be detected based on the obtained adjustment parameters to obtain the corresponding target adjustment image, in which any beautification processing effect on the face image to be detected is reduced. Finally, the method compares the target adjustment image with the face identity images stored in the image processing device to obtain the target identity image that matches the target adjustment image and indicates the specific identity of the person corresponding to the face image to be detected, thereby reducing the influence of beautification technology on face identity recognition and realizing high-accuracy face identity detection.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.