EP4381471A1 - Procede pour determiner si une image du visage d'une personne est apte a former une photographie d'identite - Google Patents
- Publication number
- EP4381471A1 (application EP22754477.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- vector
- face
- facial landmarks
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Definitions
- TITLE: METHOD FOR DETERMINING IF AN IMAGE OF A PERSON'S FACE IS SUITABLE TO FORM AN IDENTITY PHOTOGRAPH
- the technical field of the invention is that of the processing of digital images.
- an identity document usually includes a photograph of the face of the holder of the document, the photograph being called an "identity photograph". It is thus possible to verify biometrically (by facial recognition) the correspondence between the identity photograph and the bearer of the document, and therefore that the bearer is actually its holder.
- Compliance with the applicable rules can be ensured by a human operator, for example a professional photographer when the photograph is taken by such a professional or by a remote operator to whom the photograph has been transmitted when the photograph was taken in a photo booth or by the applicant himself, which is permitted in some countries (United Kingdom).
- facial attack fraud denotes fraud consisting in presenting a made-up face, or a representation of a face, that is expected to be recognised as genuine.
- for an identity photograph, it is important to be able to detect that a photograph represents the face of a real person, and not a face extracted from an image or a video.
- the document EP2751739 addresses this problem and proposes several fraud detection methods implementing the acquisition of two images of a person's face. Processing is carried out to assess the flatness of the face appearing in these images, and fraud is identified if a flatness score exceeds a critical threshold.
- the methods proposed by this document are, however, complex and limited to certain categories of facial attacks (planar or semi-planar).
- CompactNet: learning a compact space for face presentation attack detection
- an object is to propose a method and a program for determining whether an image of a person's face is suitable for forming an identity photograph.
- This method and this program are particularly simple to implement and are not limited to certain categories of facial attacks. This simplicity of implementation makes it possible to carry out the method and the program on a computing device having a limited computing capacity, such as a smartphone (i.e. a multifunction mobile telephone), and therefore to make the identity photograph immediately available to the user.
- the object of the invention proposes a method for detecting an identity theft attempt by facial attack, in order to determine whether an image of a person's face is suitable for forming an identity photograph, the method comprising the following steps, implemented by a computing device:
- a location step for respectively providing a first vector of N facial landmarks extracted from the first image and a second vector of N facial landmarks extracted from the second image; a step of propagating the facial landmarks of the first vector and the facial landmarks of the second vector in two Siamese branches of a main neural network, to respectively provide a first output vector and a second output vector of dimension N; a step of combining the first output vector and the second output vector through a cost function, establishing a digital output measure evaluating the random or non-random nature of the face displacement between the first image and the second image; a step of classifying the output measure to determine the random or non-random nature of the face displacement and to conclude, where applicable, that there has been an identity theft attempt.
- the time elapsed between the acquisition of the first image and the acquisition of the second image is between 0.1 and 2 seconds;
- the location step comprises the identification of bounding boxes of a face respectively present on the first image and on the second image;
- the location step further comprises the identification of the facial landmarks in the areas of the first image and of the second image defined by the bounding boxes;
- the facial landmarks forming the first vector and the second vector are specific descriptors of the face
- the main neural network comprises a plurality of layers downstream of the two Siamese branches and forming a common trunk of the main neural network, the common trunk at least partially implementing the combining step (S4);
- the cost function is a contrastive loss function;
- the classification step comprises comparing the digital output measurement to a predetermined threshold;
- the method comprises a step of transforming the first vector into a first graph of facial landmarks and the second vector into a second graph of facial landmarks, the propagation step comprising the propagation of the first and of the second graphs in, respectively, the Siamese branches of the main neural network.
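Taken together, the claimed steps describe a four-stage pipeline (location S2, propagation S3, combination S4, classification S5). The sketch below is purely illustrative: every helper function (`locate_landmarks`, `siamese_branch`, `combine`, `classify`) is a hypothetical stand-in, not the patent's implementation.

```python
# Minimal sketch of the claimed pipeline (S2-S5); all helper
# functions are hypothetical stand-ins for the patent's modules.

def locate_landmarks(image, n=68):
    # S2: stand-in for the tracking module MR; a real system would
    # detect the face and extract N paired facial landmarks.
    return [(float(i), float(i) % 7.0) for i in range(n)]

def siamese_branch(vector):
    # S3: stand-in for one Siamese branch; both branches share the
    # same weights, so the same function is applied to X1 and X2.
    return [x + y for (x, y) in vector]

def combine(y1, y2):
    # S4: a simple Euclidean distance as the output measure "a".
    return sum((u - v) ** 2 for u, v in zip(y1, y2)) ** 0.5

def classify(a, threshold=1.0):
    # S5: outputs too close together mean the two frames are
    # perfectly correlated (non-random displacement), i.e. fraud.
    return "fraud" if a < threshold else "genuine"

x1 = locate_landmarks("I1")
x2 = locate_landmarks("I2")
a = combine(siamese_branch(x1), siamese_branch(x2))
print(classify(a))  # identical stub landmarks -> "fraud"
```

With real images, the landmarks of a live face would differ unpredictably between the two shots, pushing the distance above the threshold.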
- an object of the invention proposes a computer program comprising instructions adapted to the implementation of each of the steps of the method, when the program is executed on a computing device.
- an object of the invention proposes a system for determining whether an image of the face of a person is suitable for forming an identity photograph, the system comprising:
- a display device connected to a computing device and to storage means, the computing device being configured to implement the method proposed above.
- FIG. 1 represents an overview of a system 1 according to one embodiment
- FIGS. 2a and 2b represent respectively in the form of functional blocks and method steps of a computer program in accordance with the invention
- FIG. 3 represents an architecture of the branches of a main neural network according to a particular embodiment of the invention.
- FIG. 4 represents the evolution of an optimization criterion established during the training of the main neural network represented in FIG. 3;
- FIG. 5 represents the ROC curve of an example implementation of the invention.
- a system in accordance with the various embodiments presented in this description aims to deliver to a user an identity photograph in accordance with a predetermined acceptance regulation.
- This delivery can be made in paper or digital form. It can be accompanied by the delivery of a certificate of conformity, or incorporate this certificate by means of a marking of the photograph.
- system 1 aims to provide the user with an identity photograph (or a certificate), on the sole condition that no attempt at identity theft by facial attack has been identified.
- System 1 can of course apply other rules depending on the nature of the identity document for which the photograph is intended or according to the national regulations that apply, as mentioned in the introduction to this application.
- the delivery or non-delivery of the identity photograph or the certificate is carried out in an automated manner by the system 1, using a computer program implementing an image processing method which will be the subject of a following section of this description.
- Figure 1 shows an overview of a system 1 according to one embodiment. It comprises a shooting device 2 (an image sensor or a camera), an input interface 3 (for example a keyboard or control buttons) and a display device 4 (for example a screen) connected to a computing device 5 and to storage means 6.
- the system 1 can also provide other components, such as for example a communication interface 7 making it possible to connect the system to a computer or telecommunications network, such as the network Internet .
- the calculation device 5 and the storage means 6 have the function, on the one hand, of coordinating the correct operation of the other devices of the system 1 and, on the other hand, of implementing the image processing method allowing to certify the conformity of the identity photograph.
- the computing device 5 and the storage means 6 are in particular configured to execute an operating program of the system 1 making it possible to present to the user, for example on the display device 4, the instructions to be followed to obtain a photograph .
- the operating program collects the information or commands provided by the user using the input interface 3, for example the nature of the document for which the photograph is intended and/or the start command making it possible to initiate an acquisition step of images via the shooting device 2.
- the storage means make it possible to memorize all the data necessary for the proper functioning of the system 1, and in particular to store the images produced by the shooting device 2. These means also store the operating and image-processing programs, these programs being conventionally made up of instructions capable of implementing all of the processing operations and/or steps detailed in the present description.
- the display device 4 can present the images captured by the shooting device 2 to the user so as to allow this user to check his position and, more generally, his correct attitude before providing the system 1 the start command mentioned above.
- this figure 1 is purely illustrative and other bodies than those shown can be provided.
- the input interface 3 symbolized by a keyboard in FIG. 1 can be implemented by a touch surface associated with the display device.
- These may be simple control buttons (physical or virtually represented on the display device) allowing the user to operate the system 1, for example to obtain, by simply pressing such a button, an identity photograph intended to be associated with a predetermined document, such as a driver's license or a passport.
- the identity photograph if it is in conformity, can be stored in the storage means 6, printed, addressed to the user via the communication interface 7 and/or communicated to this user by any appropriate means.
- the system 1 can correspond to a photo booth, to a personal or portable computer, or even to a simple smartphone, that is to say a multifunction mobile telephone.
- the user seeking to exploit the system 1 to receive an identity photograph can specify, first of all and using the input interface 3, the type of photograph chosen (driving license, passport...) and, possibly, the national regulations to be applied, in order to allow the selection of the acceptance rules that the identity photograph must respect. These rules can of course be predefined, in which case the previous step is not necessary. The user positions himself suitably opposite the shooting device 2, possibly with the help of the reproduction of the images acquired by this device 2 on the display device 4. He then engages the command to start the shooting and image processing sequences. At the end of this processing, and if the photograph resulting from the acquired images complies with the selected rules, in particular those concerning identity theft attempts, the photograph may be returned.
- FIGS. 2a and 2b respectively represent, in the form of functional blocks and of method steps, the computer program P implementing the processing aimed at determining whether an image of the user acquired via the shooting device 2 can be delivered to this user.
- this program P can be held in the storage means 6 and executed by the computing device 5 of the system 1.
- prior to the execution of this program P, the system 1 proceeds, during an acquisition step S1 and upon receipt of the start command, to the acquisition of at least a first image I1 and a second image I2 of the face of the user.
- the processing then implemented by the program P aims to determine to what extent the displacement of the facial landmarks present on the first image I1 and on the second image I2 is of an unpredictable nature or not. It is indeed expected that, if the face represented on images I1, I2 is not a real face (but a photo of a face, a mask or any other form of facial attack), the distributions of the facial landmarks on, respectively, the first image I1 and the second image I2 are correlated with each other. This correlation can take the form of a regular mathematical transformation (for example affine, quadratic or more complex) between the facial landmarks of the first image I1 and the facial landmarks of the second image I2.
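As an illustrative aside (this is not the patent's neural approach, just a way to see why correlated landmarks betray a fake), consider the simplest "regular mathematical transformation": a pure translation. If every paired landmark moves by the same displacement vector between I1 and I2, the spread of the displacement field is zero, which would be suspicious for a live face.

```python
# Illustrative check (not the patent's method): measure how much the
# per-landmark displacements between two frames vary around their
# mean. A spread of (near) zero means the motion is a rigid
# translation, i.e. a perfectly correlated, non-random displacement.

def displacement_spread(lm1, lm2):
    # lm1, lm2: paired lists of (row, col) facial landmark coordinates
    dx = [b[0] - a[0] for a, b in zip(lm1, lm2)]
    dy = [b[1] - a[1] for a, b in zip(lm1, lm2)]
    mx, my = sum(dx) / len(dx), sum(dy) / len(dy)
    # variance of the displacement field around its mean displacement
    return sum((x - mx) ** 2 + (y - my) ** 2 for x, y in zip(dx, dy)) / len(dx)

frame1 = [(10, 10), (20, 15), (30, 12)]
photo_of_photo = [(r + 5.0, c + 2.0) for r, c in frame1]  # shifted copy
spread = displacement_spread(frame1, photo_of_photo)
print(spread)  # 0.0 -- perfectly correlated, suspicious
```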
- the expression “random face displacement” will designate in the present application the situation in which the facial landmarks associated with two images are not correlated with each other, that is to say that the images most likely represent a real face.
- “non-random face displacement” will designate the situation in which these facial landmarks are correlated, that is to say that most likely these images represent a simulated face, for example a photo of a face or a mask.
- by “facial landmarks” is meant points of interest of the first image I1 and of the second image I2, defined by their coordinates in the image I1, I2, for example their rows and columns of pixels. These points of interest can correspond to particular morphological elements of the face (corner of the eye, of the lip, etc.), but this is not necessary.
- the point of interest is placed on the face (and not in the background of the face in the image) without however necessarily corresponding to a precise morphological element.
- the computer program P therefore receives as input the two images I1, I2 of the face of the person, which have been acquired during the prior acquisition step S1.
- the time elapsed between the acquisition of the first image I1 and the acquisition of the second image I2 is less than 5 seconds, of the order of a few seconds, and typically between 0.5 and 2 seconds. This is a reasonable waiting time for the user and sufficient for a face displacement of significant amplitude to occur, while limiting this time to avoid any attempt at complex fraud, for example by replacement of one mask by another, or of one photograph of a face by another, during the period of time separating the two shots.
- the two images I1, I2 are supplied, successively or simultaneously, to a tracking module MR.
- This computer module has the function of processing, during a tracking step S2, an image or a plurality of images, and of providing a vector of facial landmarks associated with each image supplied.
- the tracking module MR can thus comprise a first face detection computer sub-module MD, which returns the coordinates/dimensions of a bounding box of the face present in the submitted image.
- such a sub-module is well known per se, and it can in particular implement a histogram of oriented gradients technique (HOG, according to the English terminology of the field) or a technique based on a convolutional neural network trained for this task.
- whatever the technique used, this computer sub-module is available, for example, in pre-trained form in the “Dlib” library of computer functions.
- the detection sub-module MD can be operated successively on the first image I1 and on the second image I2, in order to respectively provide the coordinates/dimensions of a first bounding box and of a second bounding box. These coordinates/dimensions can correspond to the coordinates of a corner of the box and the dimension of a side when this box is square in shape.
- if the face detection sub-module MD does not locate any face in at least one of the images I1, I2 which are submitted to it, it returns an indication which can be intercepted by the system 1 in order to interrupt processing and inform the user of the anomaly.
- the tracking module MR can also include a location sub-module ML, downstream of the face detection sub-module MD.
- This localization computer sub-module ML receives as input the information of the first and second bounding boxes supplied by the detection sub-module MD, as well as the first and the second image I1, I2. This information can be supplied to the sub-module ML to be processed successively or simultaneously by this module.
- this sub-module ML processes the data received as input to provide, at its output, a vector of points of interest of the image, and more precisely of the portion of the image arranged in the bounding box.
- in a first approach, these points of interest do not form specific descriptors of the face. It may thus be a question of the SIFT (“Scale Invariant Feature Transform”), SURF (“Speeded-Up Robust Features”) or ORB (“Oriented FAST and Rotated BRIEF”) techniques, or any other similar technique, a detailed description of which can be found in the paper by Karami, Ebrahim; Prasad, Siva; Shehata, Mohamed (2015), “Image Matching Using SIFT, SURF, BRIEF and ORB: Performance Comparison for Distorted Images”. These techniques can be implemented using freely available computer libraries.
- used in the context of the program P, this sub-module ML simultaneously establishes a first vector X1 of points of interest arranged in the portion of the first image I1 included in the first bounding box, and a second vector X2 of points of interest arranged in the portion of the second image I2 included in the second bounding box.
- the points of interest of the first and of the second vector are paired with each other, that is to say that the same entries of the first and of the second vector consist of points of interest which correspond in the first image I1 and in the second image I2.
- in an alternative approach, the points of interest are specific descriptors of the face (corner of the mouth, of the eye, of the nose, etc.).
- This approach can be implemented by a neural network trained to identify these specific descriptors in an image (here a portion of the first image I1 and/or of the second image I2).
- such a neural network is also available in the previously mentioned Dlib library.
- the points of interest of the first and of the second vector provided according to this alternative approach are also paired with one another.
- the points of interest of the images identified by the various techniques presented above form, within the framework of the present application, the facial landmarks on the faces represented on the processed images.
- it will be chosen to configure the location sub-module ML to identify a number N of points of interest/facial landmarks comprised between 20 and 200, and more particularly between 60 and 90.
- this localization computer sub-module ML outputs a first vector X1 of N facial landmarks extracted from the first image I1 and a second vector X2 of N facial landmarks extracted from the second image I2.
- These paired first and second vectors X1, X2 also form the outputs of the tracking module MR.
- the computer program P comprises, downstream of the tracking module MR, a main neural network RP formed of two Siamese branches.
- a neural network consists of layers of neurons interconnected with one another according to a determined architecture, and each neuron of each layer is defined by neuron parameters collectively forming the learning parameters of the network.
- the two branches RP1, RP2 are themselves neural networks which have precisely the same architecture and the same learning parameters. This is why these two branches are called “Siamese”.
- the first vector X1 is applied to the input of the first branch BR1 of the main neural network RP.
- the second vector X2 is applied to the input of the second branch BR2 of this network RP.
- the first branch provides a first output vector Y1 composed of N scalar values and therefore defining a point in a vector space of dimension N.
- the second branch BR2 provides a second output vector Y2, also composed of N scalar values.
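The defining property of the Siamese branches BR1, BR2 is that they share one set of learning parameters. A toy illustration of that sharing (the tiny linear "layer" below is an assumption for demonstration, not the architecture of FIG. 3):

```python
# Sketch of weight sharing between the two Siamese branches BR1 and
# BR2: "Siamese" means a single set of parameters applied to both
# inputs. The one-layer linear map below is purely illustrative.

weights = [0.5, -1.0, 2.0]  # the shared learning parameters

def branch(x):
    # Using the same function for BR1 and BR2 enforces identical
    # architecture and identical parameters.
    return [w * v for w, v in zip(weights, x)]

y1 = branch([1.0, 2.0, 3.0])  # output of BR1 for X1
y2 = branch([1.0, 2.0, 3.0])  # output of BR2 for X2
print(y1 == y2)  # True: identical inputs give identical outputs
```

Because the branches are the same map, any difference between Y1 and Y2 reflects only the difference between the two landmark vectors, which is exactly what the cost block measures.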
- the main neural network RP has been trained and configured to separate into distinct zones of the vector space the two output vectors Y1, Y2 when the two images I1, I2 with which these vectors are associated present a random face displacement, that is to say when the faces represented on the two images I1, I2 appear to be genuinely real.
- the main neural network RP is configured to group together in the same zone of the vector space the two output vectors Y1, Y2 when the two images I1, I2 with which these vectors are associated present a non-random face displacement, that is to say when the faces represented on the two images I1, I2 do not appear real, which testifies to an attempt at identity theft by facial attack.
- an inverse operation can naturally be chosen (that is to say group together in the same zone of the vector space two output vectors corresponding to a situation of random face displacement and separate the two output vectors in different zones in the opposite case), the important thing being to attempt to discriminate between the two situations of random and non-random face displacement by grouping the output vectors in the same zone or by separating them in distinct zones as the case may be.
- the processing implemented by the main neural network RP, leading to the transformation of the first vector of facial landmarks X1 and the second vector of facial landmarks X2 extracted from the first and second images I1, I2 into a first and a second output vector Y1, Y2, forms a propagation step S3.
- we will illustrate, in a specific example presented at the end of this description, an architecture common to the two branches BR1, BR2; in general, however, this architecture is formed of a sequence of purely convolutional and activation layers allowing the identification of spatial relationships between the facial landmark vectors.
- the main neural network RP can be supplemented, downstream of the two branches, with a small number of fully connected layers of decreasing size, forming a common trunk of the neural network and making it possible to prepare decision-making.
- the output vectors Y1, Y2 do not form outputs of the main neural network RP as such, but an intermediate state of this network which supplies the layers of the common trunk.
- the last layer of this common trunk prepares a combined output vector Z, which combines the two vectors Y1, Y2 together.
- This combined output vector Z can have any dimension, which can in particular be different from those of the output vectors Y1, Y2 and even correspond to a simple scalar value.
- the common trunk part of the main network is trained simultaneously and with the same training data as the two branches BR1, BR2.
- this program P also comprises, downstream of the main neural network RP, a cost block L combining the first output vector Y1 and the second output vector Y2 by means of a cost function, and providing a digital output value a seeking to numerically evaluate the random or non-random nature of the face displacement between the first image I1 and the second image I2.
- the cost block L processes the combined output vector Z to provide this numeric value.
- it can be provided that the cost block L is fully integrated into the main neural network RP, and that the scalar value provided by this network RP constitutes the digital output value a seeking to numerically assess the random or non-random nature of the face displacement.
- This numerical value, which can for example be between 0 and 1, measures in a way the “distance” separating the two output vectors Y1, Y2.
- the cost function implemented by the cost block L can correspond to any suitable function, for example a contrastive loss function as is well known per se. In any event, the processing operations implemented by the cost block L are executed during a combination step S4 of the method.
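For reference, one common form of the contrastive loss (the patent only states that a contrastive loss, or any suitable cost function, may be used; the margin form below is an assumption) is L = y·d² + (1−y)·max(0, m−d)², where d is the Euclidean distance between the two output vectors:

```python
import math

# A common form of the contrastive loss (assumed here for
# illustration): genuine pairs are pulled together (loss = d^2),
# fraud pairs are pushed apart up to a margin m
# (loss = max(0, m - d)^2).

def euclidean(y1, y2):
    # distance between the two branch outputs Y1 and Y2
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(y1, y2)))

def contrastive_loss(y1, y2, same, margin=1.0):
    d = euclidean(y1, y2)
    if same:                          # pair that should be close
        return d ** 2
    return max(0.0, margin - d) ** 2  # pair that should be far apart

print(contrastive_loss([0.0, 0.0], [3.0, 4.0], same=True))   # 25.0
print(contrastive_loss([0.0, 0.0], [3.0, 4.0], same=False))  # 0.0
```

Which class is pulled together and which is pushed apart is a training-time convention; as the description notes, the inverse choice works equally well as long as the two situations end up separable.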
- the program P comprises a classification module K of the output measure a in order, on the basis of this measure, to determine the random or non-random nature of the face displacement and to conclude, where applicable, that there has been an identity theft attempt.
- the classification step S5 implemented by this module K can comprise the comparison of the digital measure a with a predetermined threshold, making it possible, depending on whether the digital measure a is greater than or less than this predetermined threshold, to conclude that there has or has not been an attempt at fraud.
- the information supplied by the classification module concludes the execution of the image processing program, and this information can therefore be exploited by the operating program of the system 1 to validate or not the conformity of the images I1, I2, and to provide or not an identity photograph, which can correspond to the first or to the second image I1, I2.
- the image processing implemented by the program P is not limited to that described and represented in FIG. 2. It is thus possible to provide that this program P performs other processing on at least one of the images I1, I2, for example to identify a non-compliant object therein (glasses or headgear, for example), to make the images compliant (uniformity of the background, removal of red eyes) or even to retouch the images, for example to remove non-compliant objects that may have been identified; this applies to minor alterations accepted by the authority issuing the identity documents.
- the tracking module MR is perfectly identical to that of the main mode of implementation. It therefore prepares a first vector X1 of N facial landmarks extracted from the first image I1 and a second vector X2 of N facial landmarks extracted from the second image I2.
- the vectors X1, X2 are supplied to an additional module which aims to transform each vector X1, X2 into a graph making it possible to describe the face with more precision.
- This graph is thus constructed by associating each entry of a vector (a facial landmark) with a list of other entries (other facial landmarks) that are connected to it.
- for example, a facial landmark associated with the left corner of the lip is connected to the facial landmarks associated with the centre points of the lips, with the base of the left wing of the nose, and with the horizontal projection of the left corner of the mouth onto the oval of the face.
- alternatively, the graph can be constructed by connecting each entry of a vector (a facial landmark of the image) to the k other neighbouring entries (the k closest facial landmarks on the image).
- k can be chosen typically between 3 and 10.
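The k-nearest-neighbour construction described above can be sketched as follows (illustrative only; the coordinates are made up, and a real implementation would operate on the N facial landmarks extracted by the sub-module ML):

```python
# Sketch of the k-nearest-neighbour graph variant: each facial
# landmark is connected to its k closest landmarks in the image
# (k typically between 3 and 10, per the description).

def knn_graph(landmarks, k=3):
    def dist2(a, b):
        # squared Euclidean distance in (row, col) pixel coordinates
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    graph = {}
    for i, p in enumerate(landmarks):
        others = [(dist2(p, q), j) for j, q in enumerate(landmarks) if j != i]
        others.sort()
        graph[i] = [j for _, j in others[:k]]  # indices of k nearest
    return graph

pts = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5)]  # made-up landmarks
g = knn_graph(pts, k=2)
print(g[0])  # indices of the two landmarks nearest to point 0
```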
- the originality of this variant is that it makes it possible to reinforce the quality of the predictions by adding information that can be calculated quickly, while using a neural network suitable for comparing data.
- the results of the two branches of the neural network are compared within the cost block L, producing a value which is then introduced into the classification module K in order to determine, just as in the main mode of implementation, whether the user made a legitimate acquisition or attempted to defraud.
- FIG. 3 shows a particular architecture of the branches BR1, BR2 of the main neural network RP.
- This architecture comprises, successively connected to each other:
- the first and second fully connected layers are each followed by a rectified linear unit (ReLU) on their outputs (not shown in the figure).
- the facial landmark vectors X1, X2 are formed from the coordinates of 81 points of interest of the faces, determined using the functions available in the Dlib library.
- the cost block implements a contrastive loss function (generally designated in the field by the English expression “contrastive loss”).
- This architecture, combined with the cost block L, was trained using a dataset composed of 1075 pairs of images of a real face and 254 pairs of images representative of spoofing attempts by facial attack. This dataset was split into two parts: 60% of each category was used during training of the main neural network, and the remaining 40% was used to assess fraud detection accuracy.
- the example main neural network was trained on the training data over 100 epochs, using an Adam-type optimizer and a learning rate of 10⁻⁶.
- FIG. 4 represents the evolution of the optimization criterion established during this learning. It is observed that this evolution converges whether it is measured on the learning data or on the validation data.
- the curve in FIG. 5 represents the ROC (receiver operating characteristic) curve of this example.
- the graph shows the performance of the program P and of the processing method according to the chosen value of the threshold in the classification module K.
- the graph presents an abscissa axis corresponding to the proportion of false positives and an ordinate axis corresponding to the proportion of true positives.
- the aim is the optimal point of coordinates (0,1), that is to say 0% false positives and 100% true positives.
- the graph in Figure 5 shows the performance of this example according to the chosen value of the classification module threshold. It also makes it possible to choose the value S* of this threshold that places the operating point as close as possible to the optimum point of coordinates (0,1).
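The selection of the threshold S* described above amounts to picking, among the ROC operating points, the one closest to the corner (0, 1). A minimal sketch, with made-up ROC points (the patent does not publish its threshold values):

```python
import math

# Sketch of choosing the classification threshold S* from ROC points:
# keep the threshold whose (false-positive rate, true-positive rate)
# point lies closest to the ideal corner (0, 1). The points below are
# invented for illustration.

def best_threshold(roc_points):
    # roc_points: list of (threshold, fpr, tpr) tuples
    return min(roc_points,
               key=lambda p: math.hypot(p[1] - 0.0, p[2] - 1.0))[0]

roc = [(0.2, 0.30, 0.95), (0.5, 0.10, 0.90), (0.8, 0.02, 0.70)]
print(best_threshold(roc))  # 0.5: closest operating point to (0, 1)
```

Other criteria (e.g. maximising tpr − fpr, Youden's J statistic) are equally standard; the distance-to-corner rule matches the description's wording.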
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR2108488A FR3126051B1 (fr) | 2021-08-04 | 2021-08-04 | Procédé pour déterminer si une image du visage d’une personne est apte à former une photographie d’identité |
| PCT/FR2022/051421 WO2023012415A1 (fr) | 2021-08-04 | 2022-07-15 | Procede pour determiner si une image du visage d'une personne est apte a former une photographie d'identite |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4381471A1 true EP4381471A1 (fr) | 2024-06-12 |
Family
ID=77821911
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP22754477.2A Pending EP4381471A1 (fr) | 2021-08-04 | 2022-07-15 | Procede pour determiner si une image du visage d'une personne est apte a former une photographie d'identite |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US20240338977A1 (fr) |
| EP (1) | EP4381471A1 (fr) |
| FR (1) | FR3126051B1 (fr) |
| WO (1) | WO2023012415A1 (fr) |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| FR2979728B1 (fr) | 2011-09-01 | 2016-05-13 | Morpho | Detection de fraude pour systeme de controle d'acces de type biometrique |
| US9369625B2 (en) | 2014-08-12 | 2016-06-14 | Kodak Alaris Inc. | System for producing compliant facial images for selected identification documents |
2021
- 2021-08-04: FR FR2108488A patent/FR3126051B1/fr active Active

2022
- 2022-07-15: US US18/293,630 patent/US20240338977A1/en active Pending
- 2022-07-15: WO PCT/FR2022/051421 patent/WO2023012415A1/fr not_active Ceased
- 2022-07-15: EP EP22754477.2A patent/EP4381471A1/fr active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| US20240338977A1 (en) | 2024-10-10 |
| FR3126051A1 (fr) | 2023-02-10 |
| FR3126051B1 (fr) | 2023-11-03 |
| WO2023012415A1 (fr) | 2023-02-09 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: UNKNOWN |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| 20240119 | 17P | Request for examination filed | |
| | AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAX | Request for extension of the european patent (deleted) | |
| 20240119 | RAV | Requested validation state of the european patent: fee paid | Extension states: TN (effective 20240119), MA (effective 20240119) |