CN113723386A - Cross-age face recognition method, system, electronic device and storage medium - Google Patents
Cross-age face recognition method, system, electronic device and storage medium
- Publication number
- CN113723386A CN202111297685.7A CN202111297685A
- Authority
- CN
- China
- Prior art keywords
- age
- identity
- feature
- face image
- loss
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The application relates to a cross-age face recognition method, system, electronic device and storage medium. A mixed feature map of a face image to be recognized is acquired; an age feature map and an identity feature map are obtained from a feature mask and the mixed feature map; the expectation of each age and the predicted age are obtained from the age feature map, a predicted age group is obtained from the expectations of each age, and an identity feature vector is obtained from the identity feature map; the recognition result of the face image to be recognized is obtained from the identity feature vector and a pre-acquired face database. The feature encoder, the age regression network, the age group classification network and the feature extraction network are trained according to the predicted age, the predicted age group and the identity feature vector. The attention mechanism improves the reliability of feature separation, and training the networks on both the age features and the identity features better eliminates age information from the identity features, thereby improving the accuracy of cross-age face recognition.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, a system, an electronic device, and a storage medium for cross-age face recognition.
Background
As an important biometric recognition technology, face recognition is being applied more and more widely. However, the recognition rate of current face recognition is strongly affected by age: the facial difference between different individuals is often smaller than the facial difference of the same individual under different conditions (such as different ages).
In the related art, cross-age face recognition methods fall into two major categories: discrimination-based methods and generation-based methods. A discrimination-based method separates the age features and the identity features of a face and uses only the separated identity features for recognition; a generation-based method uses a generative adversarial network to synthesize a face image of a specific age from the face image to be recognized before recognition. However, the discrimination-based methods in the related art cannot completely separate the age features from the identity features, resulting in low accuracy.

At present, no effective solution has been proposed for the problems that, when cross-age face recognition is performed with a discrimination-based method in the related art, the age features cannot be thoroughly separated from the identity features and the accuracy is low.
Disclosure of Invention
The embodiments of the present application provide a cross-age face recognition method, system, electronic device and storage medium, so as to solve the problems in the related art that, when cross-age face recognition is performed with a discrimination-based method, the age features cannot be thoroughly separated from the identity features and the accuracy is low.
In a first aspect, an embodiment of the present application provides a cross-age face recognition method, where the method includes:
inputting a face image to be recognized into a feature encoder to obtain a mixed feature map;
inputting the mixed feature map into an attention module, wherein the attention module generates a corresponding feature mask, and obtains an age feature map and an identity feature map according to the feature mask and the mixed feature map;
inputting the age feature map into an age regression network to obtain the expectation of each age and a predicted age, inputting the expectations of each age into an age group classification network and outputting a predicted age group, and inputting the identity feature map into a feature extraction network and outputting an identity feature vector;
acquiring the recognition result of the face image to be recognized according to the identity feature vector and a face database acquired in advance;
wherein the feature encoder, the age regression network, the age group classification network, and the feature extraction network are trained based on the predicted age, the predicted age group, and the identity feature vector.
In some embodiments, after obtaining the recognition result of the facial image to be recognized, the method further includes:
inputting the recognition result of the face image to be recognized into the feature encoder and outputting feature maps of different layers, inputting the identity feature map into an identity condition module and outputting identity-conditional age features, wherein the identity condition module comprises a series of identity condition blocks, and the age groups in each identity condition block share part of the channels;
and inputting the feature maps of different layers and the identity-conditional age features into a generative adversarial network, and outputting a target-age face image of the face image to be recognized, wherein an age loss and an identity loss are obtained through the age feature map and the identity feature map, and the identity condition module and the generative adversarial network are trained according to the age loss, the identity loss and the loss of face image authenticity.
In some embodiments, after outputting the face image of the target age of the face image to be recognized, the method further comprises:
and acquiring a face image in a face database, and generating a target-age face image corresponding to the face image in the face database according to the feature encoder, the attention module, the identity condition module and the generative adversarial network.
In some embodiments, before inputting the identity feature map into the identity condition module and outputting the identity-conditional age features, the method further comprises:
training the identity condition module and the generative adversarial network according to an age face generation loss function, wherein the generative adversarial network comprises a decoder and a discriminator;
the loss function of the discriminator for discriminating the authenticity of the face image is as follows:
wherein,in order to lose the authenticity of the face image,is a face image with the age of t,identification of the age t encoded for one-hot,is the output of the discriminator;
acquiring the age feature map and the identity feature map of the target-age face image, and obtaining the age loss and the identity loss through the age feature map and the identity feature map of the target-age face image;
and constructing an age face generation loss function according to the loss of the authenticity of the face image, the age loss and the identity loss.
In some embodiments, the discriminator is optimized by constructing a discriminator loss function, where the constructed discriminator loss function is:
In some embodiments, before the face image to be recognized is input to the feature encoder and the mixed feature map is obtained, the method further includes:
constructing an age estimation loss function and a final identification loss function according to the predicted age, the predicted age group and the identity characteristic vector;
and training a feature encoder, an age regression network, an age group classification network and a feature extraction network through the age estimation loss function and the final identification loss function to obtain the trained feature encoder, age regression network, age group classification network and feature extraction network.
In some of these embodiments, the age estimation loss function constructed is:
wherein,the loss is estimated for the age of the person,for the said predicted age, the age of the person,in order to be of the true age,in order to be a loss of the mean variance,for the set of predicted ages, the age of the subject,in order to be a group of real ages,is the cross entropy loss;
the final recognition loss function constructed was:
wherein,in order to identify the loss at the end,for the purpose of the identity feature vector,for the purpose of the real identity id,in order to be a loss of identification,the loss is estimated for the age of the person,for gradient inversion, the balance between losses isAnd (5) controlling.
In a second aspect, an embodiment of the present application provides a cross-age face recognition system, which includes a feature coding module, an attention module, a feature extraction module, and a face recognition module,
the feature coding module is used for inputting a face image to be recognized into a feature encoder to obtain a mixed feature map;
the attention module is used for inputting the mixed feature map into the attention module, generating a corresponding feature mask, and obtaining an age feature map and an identity feature map according to the feature mask and the mixed feature map;
the feature extraction module is used for inputting the age feature map into an age regression network to obtain the expectation of each age and a predicted age, inputting the expectations of each age into an age group classification network and outputting a predicted age group, and inputting the identity feature map into a feature extraction network and outputting an identity feature vector;
the face recognition module is used for acquiring a recognition result of the face image to be recognized according to the identity characteristic vector and a face database acquired in advance;
wherein the feature encoder, the age regression network, the age group classification network, and the feature extraction network are trained based on the predicted age, the predicted age group, and the identity feature vector.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the processor implements the cross-age face recognition method according to the first aspect.
In a fourth aspect, the present application provides a storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the cross-age face recognition method according to the first aspect.
Compared with the related art, the cross-age face recognition method provided by the embodiments of the application obtains a mixed feature map by inputting the face image to be recognized into a feature encoder; an attention module generates a feature mask and obtains an age feature map and an identity feature map from the feature mask and the mixed feature map; the age feature map is input into an age regression network to obtain the expectation of each age and the predicted age, the expectations of each age are input into an age group classification network to output a predicted age group, and the identity feature map is input into a feature extraction network to output an identity feature vector; the recognition result of the face image to be recognized is obtained from the identity feature vector and a pre-acquired face database. The feature encoder, the age regression network, the age group classification network and the feature extraction network are trained according to the predicted age, the predicted age group and the identity feature vector. The attention mechanism improves the reliability of feature separation, and training the networks in this way better eliminates age information from the identity features, improving the accuracy of cross-age face recognition.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a flow chart of a cross-age face recognition method according to an embodiment of the application;
FIG. 2 is a schematic diagram of a cross-age face recognition method according to an embodiment of the application;
FIG. 3 is a schematic diagram of a single identity condition block according to an embodiment of the present application;
fig. 4 is a block diagram of a cross-age face recognition system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms referred to herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words in this application are not to be construed as limiting in number and may refer to the singular or the plural. The terms "including," "comprising," "having," and any variations thereof in this application are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. References to "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference herein to "a plurality" means greater than or equal to two. "And/or" describes an association relationship of associated objects, meaning that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The terms "first," "second," "third," and the like herein merely distinguish similar objects and do not denote a particular ordering of the objects.
The present embodiment provides a cross-age face recognition method, and fig. 1 is a flowchart of the cross-age face recognition method according to the embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step S101, inputting a face image to be recognized into a feature encoder to obtain a mixed feature image; in this embodiment, the feature encoder E (encoder) may select a mainstream trunk neural network structure in the deep learning field, such as ResNet, and the feature encoder E that completes training may recognize the face image from the face image to be recognizedExtracting mixed characteristic diagramI.e. by。
Step S102, inputting the mixed feature map into an attention module, the attention module generating a corresponding feature mask, and obtaining an age feature map and an identity feature map according to the feature mask and the mixed feature map. In this embodiment, the attention module may use any attention mechanism widely applied in deep learning, such as SE or CBAM. The input of the trained attention module is the mixed feature map output by the feature encoder E; the attention module generates a feature mask, and the point-wise product of the feature mask with the mixed feature map decomposes the mixed features, outputting an age feature map and an identity feature map.
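A sketch of one possible attention module, assuming an SE-style channel-attention head and assuming (as the point-wise products above suggest) that the identity feature map uses the complementary mask; all names are illustrative.

```python
import torch

class AttentionSplit(torch.nn.Module):
    """Attention module: generates a feature mask from the mixed feature map and
    splits it into an age feature map and an identity feature map by
    point-wise multiplication with the mask and its complement."""
    def __init__(self, channels=2048, reduction=16):
        super().__init__()
        self.mask_head = torch.nn.Sequential(
            torch.nn.AdaptiveAvgPool2d(1),
            torch.nn.Conv2d(channels, channels // reduction, kernel_size=1),
            torch.nn.ReLU(inplace=True),
            torch.nn.Conv2d(channels // reduction, channels, kernel_size=1),
            torch.nn.Sigmoid(),            # mask values in [0, 1]
        )

    def forward(self, mixed_map):
        mask = self.mask_head(mixed_map)
        age_map = mask * mixed_map         # age feature map
        id_map = (1.0 - mask) * mixed_map  # identity feature map
        return age_map, id_map
```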
Step S103, inputting the age feature map into an age regression network to obtain the expectation of each age and a predicted age, inputting the expectations of each age into an age group classification network and outputting a predicted age group, and inputting the identity feature map into a feature extraction network and outputting an identity feature vector;
in this embodiment, the age regression network A may include a 512-dimensional and a 101-dimensional linear layer and, similarly to the deep expectation (DEX) method, performs the regression task through a classification network: the expectation is computed over the 0-100 age classes with the softmax function, where O = {0, 1, ..., 100} indexes the 101-dimensional output, the softmax gives the output probability of each class, and each class i corresponds to a discrete age. The input of the trained age regression network A is the age feature map output by the attention module, and the output is the predicted age.
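Written out, the expected-age computation described above takes the standard DEX form (the symbols below are chosen for illustration):

$$\hat{a} \;=\; \sum_{i \in O} i \cdot p_i, \qquad p_i \;=\; \frac{e^{o_i}}{\sum_{j \in O} e^{o_j}}, \qquad O = \{0, 1, \dots, 100\},$$

where $o$ is the 101-dimensional output of the age regression network, $p_i$ is the softmax probability of class $i$ (each class corresponding to a discrete age), and $\hat{a}$ is the predicted age.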
The age group classification network W is an N-dimensional linear layer, where N is the number of age groups. The input of the trained age group classification network W is the output of the 101-dimensional linear layer in the age regression network A, and the output is the predicted age group.
The feature extraction network L is a 512-dimensional linear layer. The input of the trained feature extraction network L is the identity feature map output by the attention module, and the output is a 512-dimensional identity feature vector.
And step S104, acquiring a recognition result of the face image to be recognized according to the identity feature vector and a pre-acquired face database, wherein the feature encoder, the age regression network, the age group classification network and the feature extraction network are obtained by training according to the predicted age, the predicted age group and the identity feature vector.
In this embodiment, the identity feature vector of each face image in the face database is obtained by the method described above, and the face similarity between the identity feature vector of the face image to be recognized and the identity feature vector of each face image in the face database is calculated. If the face similarity is smaller than a similarity threshold, the face image in the face database does not match the face image to be recognized; if the face similarity is greater than or equal to the similarity threshold, the face image in the face database corresponding to the maximum face similarity is taken as the recognition result of the face image to be recognized.
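A minimal sketch of the database matching step; cosine similarity and the threshold value are assumptions, since the patent does not fix the similarity metric or its threshold.

```python
import numpy as np

def match_face(query_vec, db_vecs, db_ids, sim_threshold=0.3):
    """Match a 512-d identity feature vector against a pre-built face database.

    query_vec: (512,) identity feature vector of the face to be recognized
    db_vecs:   (N, 512) identity feature vectors of the database faces
    db_ids:    list of N identifiers
    Returns the best-matching identifier, or None if no similarity reaches the threshold.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    sims = d @ q                      # cosine similarity to every database face
    best = int(np.argmax(sims))
    return db_ids[best] if sims[best] >= sim_threshold else None
```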
Compared with cross-age face recognition based on discrimination methods in the prior art, which cannot completely separate the age features from the identity features and therefore has low accuracy, the embodiment of the application, through steps S101 to S104, improves the reliability of feature separation through the attention mechanism; and since the feature encoder, the age regression network, the age group classification network and the feature extraction network are trained according to the predicted age, the predicted age group and the identity feature vector, age information in the identity features can be better eliminated, improving the accuracy of cross-age face recognition.
In some embodiments, fig. 2 is a schematic diagram of a cross-age face recognition method according to an embodiment of the present application. As shown in fig. 2, a first module, a second module and a third module form a face recognition model, and a fourth module is a face generation model. The face image to be recognized is input into the face recognition model as the input image; after the recognition result of the face image to be recognized is output, the recognition result is input into the feature encoder as the input image and feature maps of different layers are output, and the identity feature map is input into an identity condition module and identity-conditional age features are output, where the identity condition module comprises a series of identity condition blocks and the age groups in each identity condition block share part of the channels. The feature maps of different layers and the identity-conditional age features are input into a generative adversarial network, which outputs a target-age face image of the face image to be recognized, where an age loss and an identity loss are obtained through the age feature map and the identity feature map, and the identity condition module and the generative adversarial network are trained according to the age loss, the identity loss and the loss of face image authenticity.
In this embodiment, the Identity Conditional Module (ICM) is composed of a series of Identity Conditional Blocks (ICBs). Fig. 3 is a schematic diagram of a single identity conditional block according to an embodiment of the present application. As shown in fig. 3, if ages range from 0 to 100, the age groups may be set to 0-10, 11-20, ..., 91-100; if the 0-10 group has 20 channels, some of those 20 channels are shared with the 11-20 group, and the remaining groups are handled in the same way. The generative adversarial network (GAN) comprises a decoder D and a discriminator. The feature maps of different layers and the identity-conditional age features are input into the generative adversarial network, which outputs the face image of the face image to be recognized at the target age t.
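One way the channel sharing described above could be realized is sketched below, under the assumption that each age group selects an overlapping window of channels from a single 1x1 convolution so that adjacent groups share part of their channels; this is an illustrative reading, not the patent's verbatim design.

```python
import torch

class IdentityConditionalBlock(torch.nn.Module):
    """ICB sketch: every age group owns `ch_per_group` channels, of which `shared`
    channels overlap with the neighbouring group, so the age condition varies
    smoothly across groups instead of being a set of unrelated one-hot codes."""
    def __init__(self, in_ch=2048, ch_per_group=20, shared=8, n_groups=10):
        super().__init__()
        self.ch_per_group = ch_per_group
        self.stride = ch_per_group - shared                # window step between groups
        total = self.stride * (n_groups - 1) + ch_per_group
        self.conv = torch.nn.Conv2d(in_ch, total, kernel_size=1)

    def forward(self, identity_map, age_group):
        feats = self.conv(identity_map)
        start = age_group * self.stride                    # overlapping channel window
        return feats[:, start:start + self.ch_per_group]   # identity-conditional age feature
```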
In the related art, one-hot age coding is usually adopted; in this coding scheme each age or age group has a unique code but the ages or age groups are unrelated, so the generated feature changes between ages are inconsistent. This embodiment addresses the age smoothness neglected by one-hot coding by letting the age groups in each identity condition block share part of the channels. In addition, generation modules in the related art generate a face image directly from the input image without considering the identity features, so the generated face image may contain features inconsistent with the identity.
In the related art, a discrimination model achieves high accuracy after separating the age features from the identity features, but its judgment of whether two cross-age faces belong to the same ID lacks interpretability. In some key application scenarios, such as searching for a lost child, whether a child photo matches an adult photo therefore lacks a visual basis. In this embodiment, the identity feature vector of the face image to be recognized is matched against the identity feature vectors of the face images in the face database, and after the recognition result of the face image to be recognized is obtained, a target-age face image of the face image to be recognized is generated through the identity condition module and the generative adversarial network. For example, in the lost-child scenario, the face image to be recognized is a face image of adult A; matching is performed between the identity feature vector of A's face image and the identity feature vectors of the face images in the face database, and if the face similarity between child B and adult A is greater than the similarity threshold, child B and adult A are judged to be the same person. Because this judgment alone lacks a visual basis, a child-age face image of adult A is then generated through the identity condition module and the generative adversarial network to provide one.
In some embodiments, after the target-age face image of the face image to be recognized is output, face images in the face database are acquired, and target-age face images corresponding to the face images in the face database are generated through the feature encoder, the attention module, the identity condition module and the generative adversarial network.
In the related art, when a face recognition model based on a generation method is trained, the required training data must be large in volume and cover a large age span, but such training data is scarce, the quality of generated face images is poor and unsuitable for training, and the face recognition accuracy is consequently low. In this embodiment, the face images in the face database can be rejuvenated and/or aged through the identity condition module and the generative adversarial network, expanding the training data and the face images with a large age span; the augmented training data may also be used to further optimize the feature encoder, the age regression network, the age group classification network and the feature extraction network.
In some embodiments, before the identity feature map is input into the identity condition module and the identity-conditional age features are output, the identity condition module and the generative adversarial network are trained according to an age face generation loss function, where the generative adversarial network includes a decoder D and a discriminator.
The loss function of the discriminator for discriminating the authenticity of the face image is shown as the following formula 1:

where the symbols denote, in order, the face image authenticity loss, a face image of age t, the one-hot encoded label of age t, and the output of the discriminator, which gives the probability (0 to 1) that the face image is real; the input of the discriminator is the concatenation of the face image and the age label along the channel dimension.
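A hedged sketch of such an authenticity term, assuming a conditional least-squares form and the symbols $\hat{x}_t$ (generated face of age $t$), $c_t$ (one-hot age label) and $D$ (discriminator); both the symbols and the exact form are illustrative:

$$\mathcal{L}_{\mathrm{gan}} \;=\; \mathbb{E}\Big[\big(D(\hat{x}_t, c_t) - 1\big)^2\Big],$$

so the generator is rewarded when the discriminator, fed the synthesized age-$t$ face concatenated with its age label along the channel dimension, scores it as real.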
the method is used for acquiring the face image of the target ageAge characteristic map ofAnd identity profileAcquiring age loss and identity loss through an age characteristic diagram and an identity characteristic diagram of a target age face image; the process of acquiring the age loss and the identity loss of the face image of the target age is shown in the following formulas 2 to 6:
wherein,in order to mix the characteristic maps, the method comprises the following steps,in the case of age loss, the age loss,in order to be a loss of identity,for cross-entropy loss, F represents the Frobenius norm.
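An illustrative reconstruction of these terms, consistent with the cross-entropy and Frobenius-norm description above (symbols assumed): the age loss penalizes a generated face whose predicted age group deviates from the target group $t$, and the identity loss penalizes deviation between the identity feature maps of the generated face and of the input face:

$$\mathcal{L}_{\mathrm{age}} \;=\; \mathcal{L}_{ce}\big(W(A(X^{\mathrm{gen}}_{\mathrm{age}})),\, t\big), \qquad \mathcal{L}_{\mathrm{id}} \;=\; \big\lVert X^{\mathrm{gen}}_{\mathrm{id}} - X_{\mathrm{id}} \big\rVert_{F}^{2},$$

where $X^{\mathrm{gen}}_{\mathrm{age}}$ and $X^{\mathrm{gen}}_{\mathrm{id}}$ are the age and identity feature maps of the generated target-age face and $X_{\mathrm{id}}$ is the identity feature map of the input face.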
The age face generation loss function is constructed from the face image authenticity loss, the age loss and the identity loss, as shown in the following formula 7:
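A natural combined objective under the same assumptions, with illustrative weights $\lambda_{\mathrm{age}}$ and $\lambda_{\mathrm{id}}$:

$$\mathcal{L}_{\mathrm{gen}} \;=\; \mathcal{L}_{\mathrm{gan}} \;+\; \lambda_{\mathrm{age}}\,\mathcal{L}_{\mathrm{age}} \;+\; \lambda_{\mathrm{id}}\,\mathcal{L}_{\mathrm{id}}.$$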
Optionally, a discriminator loss function is constructed to optimize the discriminator, where the constructed discriminator loss function is shown in the following formula 8:
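A matching sketch of the discriminator objective, again assuming a conditional least-squares form with $x_t$ a real face of age $t$:

$$\mathcal{L}_{D} \;=\; \mathbb{E}\Big[\big(D(x_t, c_t) - 1\big)^2\Big] \;+\; \mathbb{E}\Big[D(\hat{x}_t, c_t)^2\Big],$$

which pushes real age-$t$ faces toward a score of 1 and generated ones toward 0.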
In some embodiments, before the face image to be recognized is input to the feature encoder and the mixed feature map is obtained, an age estimation loss function and a final recognition loss function are constructed according to the predicted age, the predicted age group and the identity feature vector; and training the feature encoder, the age regression network, the age group classification network and the feature extraction network through the age estimation loss function and the final identification loss function to obtain the trained feature encoder, age regression network, age group classification network and feature extraction network.
Alternatively, the constructed age estimation loss function is shown in the following equation 9:
where the symbols denote, in order, the age estimation loss, the predicted age, the true age, the mean-squared-error loss, the predicted age group, the true age group, and the cross-entropy loss;
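An illustrative form of this loss, combining the regression and age-group terms with assumed weights $\lambda_{1}$ and $\lambda_{2}$:

$$\mathcal{L}_{\mathrm{est}} \;=\; \lambda_{1}\,\mathcal{L}_{mse}\big(\hat{a},\, a\big) \;+\; \lambda_{2}\,\mathcal{L}_{ce}\big(\hat{g},\, g\big),$$

where $\hat{a}$ and $a$ are the predicted and true ages, and $\hat{g}$ and $g$ the predicted and true age groups.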
the final recognition loss function constructed is shown in equation 10 below:
where the symbols denote, in order, the final recognition loss, the identity feature vector, the true identity ID, the identity recognition loss (for which a CosFace loss function is adopted), the age estimation loss, and the gradient reversal operation; the balance between the losses is controlled by a weighting coefficient. To encourage feature separation and make the identity features as independent of age as possible, a gradient reversal layer (GRL) is introduced: by optimizing the final recognition loss, the age estimation loss on the identity features is maximized, eliminating age-related information from the identity features.
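Gradient reversal is a standard construct; a minimal PyTorch sketch is given below, with the loss combination shown only as a comment since the exact weighting in formula 10 is an assumption here (the coefficient `lam` is illustrative).

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam in the
    backward pass, so minimizing the age-estimation loss computed on identity
    features maximizes it w.r.t. the encoder, stripping age cues from identity."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Illustrative final objective (loss functions and weighting assumed):
# total_loss = cosface_loss(identity_vec, true_id) \
#            + age_estimation_loss(grad_reverse(identity_map, lam))
```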
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.
The embodiment also provides a cross-age face recognition system, which is used for implementing the above embodiments and preferred embodiments; what has already been described will not be repeated. As used hereinafter, the terms "module," "unit," "subunit," and the like may be implemented as a combination of software and/or hardware for a predetermined function. Although the means described in the following embodiments are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram of the structure of a cross-age face recognition system according to an embodiment of the present application. As shown in fig. 4, the system includes a feature coding module 41, an attention module 42, a feature extraction module 43, and a face recognition module 44. The feature coding module 41 is configured to input the face image to be recognized into the feature encoder to obtain a mixed feature map; the attention module 42 is configured to receive the mixed feature map, generate a corresponding feature mask, and obtain an age feature map and an identity feature map according to the feature mask and the mixed feature map; the feature extraction module 43 is configured to input the age feature map into an age regression network to obtain the expectation of each age and the predicted age, input the expectations of each age into an age group classification network to output a predicted age group, and input the identity feature map into a feature extraction network to output an identity feature vector; the face recognition module 44 is configured to obtain the recognition result of the face image to be recognized according to the identity feature vector and a pre-acquired face database. The feature encoder, the age regression network, the age group classification network and the feature extraction network are trained according to the predicted age, the predicted age group and the identity feature vector; the attention mechanism improves the reliability of feature separation, and training the networks in this way better eliminates age information from the identity features, improving the accuracy of cross-age face recognition.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the cross-age face recognition method in the foregoing embodiments, an embodiment of the present application may provide a storage medium for implementation. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the cross-age face recognition methods of the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a cross-age face recognition method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It should be understood by those skilled in the art that various features of the above-described embodiments can be combined in any combination, and for the sake of brevity, all possible combinations of features in the above-described embodiments are not described in detail, but rather, all combinations of features which are not inconsistent with each other should be construed as being within the scope of the present disclosure.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
Claims (10)
1. A cross-age face recognition method, comprising:
inputting a face image to be recognized into a feature encoder to obtain a mixed feature map;
inputting the mixed feature map into an attention module, wherein the attention module generates a corresponding feature mask, and obtains an age feature map and an identity feature map according to the feature mask and the mixed feature map;
inputting the age feature map into an age regression network to obtain the expectation of each age and a predicted age, inputting the expectations of each age into an age group classification network and outputting a predicted age group, and inputting the identity feature map into a feature extraction network and outputting an identity feature vector;
acquiring the recognition result of the face image to be recognized according to the identity feature vector and a face database acquired in advance;
wherein the feature encoder, the age regression network, the age group classification network, and the feature extraction network are trained based on the predicted age, the predicted age group, and the identity feature vector.
2. The method according to claim 1, wherein after obtaining the recognition result of the face image to be recognized, the method further comprises:
inputting the recognition result of the face image to be recognized into the feature encoder and outputting feature maps of different layers, inputting the identity feature map into an identity condition module and outputting identity-conditional age features, wherein the identity condition module comprises a series of identity condition blocks, and the age groups in each identity condition block share part of the channels;

and inputting the feature maps of different layers and the identity-conditional age features into a generative adversarial network, and outputting a target-age face image of the face image to be recognized, wherein an age loss and an identity loss are obtained through the age feature map and the identity feature map, and the identity condition module and the generative adversarial network are trained according to the age loss, the identity loss and the loss of face image authenticity.
3. The method according to claim 2, wherein after outputting the face image of the target age of the face image to be recognized, the method further comprises:
and acquiring a face image in a face database, and generating a target-age face image corresponding to the face image in the face database according to the feature encoder, the attention module, the identity condition module and the generative adversarial network.
4. The method of claim 2, wherein before inputting the identity feature map into the identity condition module and outputting the identity-conditional age features, the method further comprises:

training the identity condition module and the generative adversarial network according to an age face generation loss function, wherein the generative adversarial network comprises a decoder and a discriminator;
the loss function of the discriminator for discriminating the authenticity of the face image is as follows:

where the symbols denote, in order, the face image authenticity loss, a face image of age t, the one-hot encoded label of age t, and the output of the discriminator;
acquiring the age feature map and the identity feature map of the target-age face image, and obtaining the age loss and the identity loss through the age feature map and the identity feature map of the target-age face image;
and constructing an age face generation loss function according to the loss of the authenticity of the face image, the age loss and the identity loss.
6. The method of claim 1, wherein before the face image to be recognized is input to the feature encoder and the mixed feature map is obtained, the method further comprises:
constructing an age estimation loss function and a final identification loss function according to the predicted age, the predicted age group and the identity characteristic vector;
and training a feature encoder, an age regression network, an age group classification network and a feature extraction network through the age estimation loss function and the final identification loss function to obtain the trained feature encoder, age regression network, age group classification network and feature extraction network.
7. The method of claim 6, wherein the constructed age estimation loss function is:

where the symbols denote, in order, the age estimation loss, the predicted age, the true age, the mean-squared-error loss, the predicted age group, the true age group, and the cross-entropy loss;

and the constructed final recognition loss function is:
8. A cross-age face recognition system, comprising a feature coding module, an attention module, a feature extraction module and a face recognition module,
the feature coding module is used for inputting a face image to be recognized into a feature encoder to obtain a mixed feature map;
the attention module is used for inputting the mixed feature map into the attention module, generating a corresponding feature mask, and obtaining an age feature map and an identity feature map according to the feature mask and the mixed feature map;
the feature extraction module is used for inputting the age feature map into an age regression network to obtain the expectation of each age and a predicted age, inputting the expectations of each age into an age group classification network and outputting a predicted age group, and inputting the identity feature map into a feature extraction network and outputting an identity feature vector;
the face recognition module is used for acquiring a recognition result of the face image to be recognized according to the identity characteristic vector and a face database acquired in advance;
wherein the feature encoder, the age regression network, the age group classification network, and the feature extraction network are trained based on the predicted age, the predicted age group, and the identity feature vector.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the cross-age face recognition method of any one of claims 1 to 7.
10. A storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the cross-age face recognition method of any one of claims 1 to 7 when run.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111297685.7A CN113723386A (en) | 2021-11-04 | 2021-11-04 | Cross-age face recognition method, system, electronic device and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111297685.7A CN113723386A (en) | 2021-11-04 | 2021-11-04 | Cross-age face recognition method, system, electronic device and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN113723386A true CN113723386A (en) | 2021-11-30 |
Family
ID=78686572
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111297685.7A Pending CN113723386A (en) | 2021-11-04 | 2021-11-04 | Cross-age face recognition method, system, electronic device and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN113723386A (en) |
- 2021-11-04 CN CN202111297685.7A patent/CN113723386A/en active Pending
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019056000A1 (en) * | 2017-09-18 | 2019-03-21 | Board Of Trustees Of Michigan State University | Disentangled representation learning generative adversarial network for pose-invariant face recognition |
| CN111881722A (en) * | 2020-06-10 | 2020-11-03 | 广东芯盾微电子科技有限公司 | Cross-age face recognition method, system, device and storage medium |
| CN113205017A (en) * | 2021-04-21 | 2021-08-03 | 深圳市海清视讯科技有限公司 | Cross-age face recognition method and device |
Non-Patent Citations (3)
| Title |
|---|
| ZHIZHONG HUANG: "When Age-Invariant Face Recognition Meets Face Age Synthesis: A Multi-Task Learning Framework", 《HTTPS://ARXIV.ORG/ABS/2103.01520》 * |
| SUN WENBIN: "Cross-Age Face Recognition Based on Deep Learning", 《LASER &amp; OPTOELECTRONICS PROGRESS》 * |
| JIAO LICHENG: "Introduction to Computational Intelligence (Frontier Technologies of Artificial Intelligence Series)", 30 September 2019 * |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114373213A (en) * | 2022-01-11 | 2022-04-19 | 中国工商银行股份有限公司 | Juvenile identity recognition method and device based on face recognition |
| CN114373213B (en) * | 2022-01-11 | 2025-07-01 | 中国工商银行股份有限公司 | Method and device for identifying minors based on face recognition |
| CN114821706A (en) * | 2022-03-29 | 2022-07-29 | 中国人民解放军国防科技大学 | Fake image detection and positioning method and system based on regional perception |
| CN114821706B (en) * | 2022-03-29 | 2024-11-05 | 中国人民解放军国防科技大学 | A method and system for detecting and locating forged images based on region perception |
| CN115100709A (en) * | 2022-06-23 | 2022-09-23 | 北京邮电大学 | Feature-separated image face recognition and age estimation method |
| CN115116116A (en) * | 2022-07-15 | 2022-09-27 | 维沃移动通信(杭州)有限公司 | Image recognition method and device, electronic equipment and readable storage medium |
| CN115909454A (en) * | 2022-11-18 | 2023-04-04 | 智慧眼科技股份有限公司 | Face recognition model training method, face recognition method and related equipment |
| CN119206836A (en) * | 2024-11-20 | 2024-12-27 | 杭州登虹科技有限公司 | Cross-age face recognition method |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113723386A (en) | Cross-age face recognition method, system, electronic device and storage medium | |
| Terhörst et al. | Suppressing gender and age in face templates using incremental variable elimination | |
| Lu et al. | Nonlinear invariant risk minimization: A causal approach | |
| US10691909B2 (en) | User authentication method using fingerprint image and method of generating coded model for user authentication | |
| Mathieu et al. | Disentangling factors of variation in deep representation using adversarial training | |
| Windeatt | Accuracy/diversity and ensemble MLP classifier design | |
| CN110084193B (en) | Data processing method, apparatus, and medium for face image generation | |
| Mahmud et al. | A novel multi-stage training approach for human activity recognition from multimodal wearable sensor data using deep neural network | |
| Görnitz et al. | Support vector data descriptions and $ k $-means clustering: one class? | |
| Chen | Dual linear regression based classification for face cluster recognition | |
| CN112101087B (en) | Facial image identity identification method and device and electronic equipment | |
| JP5214760B2 (en) | Learning apparatus, method and program | |
| Zhong et al. | A Group-Based Personalized Model for Image Privacy Classification and Labeling. | |
| Sankaran et al. | Representation learning through cross-modality supervision | |
| Mallet et al. | Deepfake detection analyzing hybrid dataset utilizing cnn and svm | |
| Aufar et al. | Face recognition based on Siamese convolutional neural network using Kivy framework | |
| Taalimi et al. | Robust coupling in space of sparse codes for multi-view recognition | |
| CN111523649B (en) | Method and device for preprocessing data aiming at business model | |
| CN113269120A (en) | Method, system and device for identifying quality of face image | |
| Póka et al. | Data augmentation powered by generative adversarial networks | |
| US20250005912A1 (en) | Detecting face morphing by one-to-many face recognition | |
| Liu et al. | Learning from small data: A pairwise approach for ordinal regression | |
| Yang et al. | CrossDF: improving cross-domain deepfake detection with deep information decomposition | |
| Irhebhude et al. | Northern Nigeria Human Age Estimation From Facial Images Using Rotation Invariant Local Binary Pattern Features with Principal Component Analysis. | |
| WO2018203551A1 (en) | Signal retrieval device, method, and program |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20211130 |