WO2018139847A1 - Personal identification method through face comparison - Google Patents
Personal identification method through face comparison
- Publication number
- WO2018139847A1 (PCT/KR2018/001055; KR2018001055W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- face
- identification
- image
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/469—Contour-based spatial representations, e.g. vector-coding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- The present invention relates to a method of personal identification through face comparison and to a personal identification system in which the method is executed. More particularly, the present invention relates to a personal identification method through face comparison, and to a personal identification system implementing the method, that can distinguish whether a photographed face is a real face or a sculpture, determine whether it belongs to the same person, and provide additional information such as race.
- Existing personal identification systems are mainly used in access security devices or personalized information providing systems, and output only a same/not-same result obtained by comparing the face image captured on site with an already registered image.
- This approach does not distinguish a real human face from a color-printed photograph of a face or an artificially manufactured sculpture, which can cause serious security problems, and its dichotomous judgment can be wrong.
- As a result, a person may be judged to be a different person despite actually being the same person, causing inconvenience, or conversely may be judged to be the same person despite not being so, creating a security risk.
- In addition, existing systems that identify individuals based on similarity alone do not automatically collect statistical information such as race and gender for the identified individual, and are therefore not suitable for personalized systems.
- The present invention has been proposed to solve the above problems. One object of the present invention is to provide a personal identification method through face comparison that can distinguish whether the object being compared is a real face, a printed image of a face, or a sculpture, and that provides face similarity data.
- Another object of the present invention is to provide a personal identification method through face comparison that can additionally provide information on race and gender.
- The present invention provides a personal identification method through face comparison which is executed in a personal identification system comprising a photographing unit, a control unit connected to the photographing unit, and a storage unit connected to the control unit storing a personal identification program and an identification image composed of a plurality of frames photographed by the photographing unit, the method contrasting the photographed individual with DB face information registered in a database;
- the method includes a personal identification step comprising a face area derivation step in which a face area is derived from the image of the photographed individual, a facial feature point derivation step in which facial feature points are derived, and an identification face rotation angle calculation step in which the rotation angle of the derived face area is calculated,
- together with a DB face information contrast step in which a plurality of DB face information images are contrasted with the identification image, a pixel color application step, and a face DB derivation step in which an identification face DB, a 3D face database for the individual to be identified, is derived;
- whereby information for identifying the photographed individual is provided.
- In the DB face information contrast step, a least square error value is calculated between the identification image and each of the plurality of DB face information images, and in the pixel color application step the DB face information image having the smallest least square error value with respect to the identification image is selected;
- the color information of the pixels constituting that DB face information image is changed to that of the identification image and stored in the storage unit.
- The face identification step includes an eye area detection step in which an eye area is derived from each frame constituting the identification image, a pupil area derivation step in which the pupil area within the eye area is detected, a color information contrast step in which the color information of the pixels constituting the pupil area of each frame is contrasted, and a sculpture identification information providing step in which information for determining whether a real face or a sculpture has been photographed is provided from the change in the color information of the pixels constituting the pupil area across the frames.
- The method may further include a face region derivation step in which the face region is derived from the plurality of frames photographed before the eye area is detected, and, after the pupil area derivation step, a computing step in which the color information of the pixels forming the pupil area of each frame is calculated; in the color information contrast step, the color information of the pupil area pixels of each frame is contrasted.
- Alternatively, the face identification step may include: an eye area detection step in which an eye area is derived from each frame constituting the image captured by the photographing unit; a face area feature point derivation step in which feature points of the face area of each frame constituting the captured image are derived; a pupil area derivation step in which the pupil area within the eye area is detected; a pupil area location derivation step in which the location of the pupil area is derived; a pupil location distance calculation step in which the distance between the pupil area location derived in the pupil area location derivation step and a facial area feature point is calculated;
- and a sculpture identification information providing step in which information for determining whether a real face or a sculpture has been photographed is provided from the change in the pupil location distance across the frames.
- In another form, the face identification step includes a step of deriving a face region from each frame constituting the image captured by the photographing unit, a step of calculating the direction of the face region, and a sculpture identification information providing step in which information for determining whether a real face or a sculpture has been photographed is provided from the change in the face region direction across the frames.
- The system may further comprise a distance measuring sensor connected to the control unit, and each frame is stored in the storage unit together with the distance information measured by the distance measuring sensor;
- in this case the face identification step includes a face area derivation step in which a face area is derived from each frame constituting the image photographed by the photographing unit, a face area information calculation step in which the area information of the derived face region is calculated, and a sculpture identification information providing step in which information for determining whether a real face or a sculpture has been photographed is provided from the face area information and distance information of each frame.
- The storage unit stores a convolutional neural network module, training photographed images, and training photographed image information, and the method further comprises an additional information providing step in addition to the personal identification step;
- the additional information providing step includes a learning step and a face information calculation step, whereby race information and gender information of the identification image are provided.
- The learning step includes a step of forming a convolutional layer from the training photographed image for each of the race information and the gender information, a step of forming a pooling layer from the maps of the convolutional layer,
- a hidden layer information calculation step in which hidden layer node values are calculated from the pooling information of the pooling layer as input information together with the hidden layer weights, and an output layer information calculation step in which the output layer is calculated from the hidden layer node values and the output layer weights, with an error obtained from the output layer information and the training photographed image information;
- update values of the hidden layer weights and the output layer weights are calculated according to the backpropagation algorithm, the hidden layer information calculation step and the output layer information calculation step are repeated, and the calculated output layer information is repeatedly compared with the training photographed image information,
- so that a weight derivation stage is formed in which the hidden layer weight update values and the output layer weight update values are derived.
- The hidden layer weight update values and the output layer weight update values derived in the learning step are used as the hidden layer weights and the output layer weights for each of the race information and the gender information in the face information calculation step.
- The face information calculation step includes a step of forming a convolutional layer from the identification image for each of the race information and the gender information, a step of forming a pooling layer from the maps of the convolutional layer, and a step in which the pooling information of the pooling layer is used as input information.
- According to the present invention, a sculpture or a printout is distinguished from a real face, so that when the invention is applied to an access device or a security device, unauthorized access using a sculpture or printout with malicious intent is blocked, whether the person is the same person is identified by similarity level, and race, gender, and age information can be provided.
- FIG. 1 illustrates the steps of the personal identification method through face comparison according to the present invention.
- FIG. 2 is a view illustrating the face identification step of FIG. 1.
- FIG. 3 is a schematic flowchart illustrating the face identification step of the present invention.
- FIGS. 4 and 5 are schematic diagrams illustrating eye regions.
- FIGS. 6 and 7 are schematic flowcharts illustrating the process of deriving the gaze direction in the face identification step.
- FIGS. 8 and 9 are graphs illustrating the change in gaze according to the frame.
- The personal identification method through face comparison runs on a personal identification system including a photographing unit, a control unit connected to the photographing unit, and a storage unit connected to the control unit which stores a personal identification program and an identification image composed of a plurality of frames photographed by the photographing unit.
- The personal identification system further includes a distance measuring sensor connected to the control unit, which provides the measured distance information to the control unit, and a communication module connected to the control unit for transmitting and receiving signals; a convolutional neural network module is stored in the storage unit.
- The photographing unit, the control unit, the storage unit, and the distance measuring sensor are provided in a main body (not shown).
- The personal identification system may be installed and operated at a door.
- The personal identification program is operated to execute the personal identification method through face comparison according to the present invention.
- The storage unit stores 3D face databases for a plurality of individuals (for example, 300 people). For each individual, face images are stored at rotation angles ranging from -30° to +30° about an axis (the first axis) that extends in the vertical direction of the face and passes through the center of the face.
- The pixel colors of the face region, the coordinates, and the feature points of the face region constitute the face image information.
- This face image information is referred to as DB face information.
- The face with a rotation angle of 0° is the frontal face.
- The first axis is the axis of symmetry passing through the center of the face.
- The process of deriving the feature points of the face region and the content of the derived feature points are well known in the art, and their description is omitted.
- The table exemplarily shows four persons in order to explain that a 3D face database for each individual is stored in the storage unit.
- A, B, C, and D denote different individuals.
- The 3D face database may also be stored on a server in communication with the personal identification system in which the present invention is implemented.
- The personal identification method through face comparison includes a face identification step (ST-100), a personal identification step (ST-200), and an additional information providing step (ST-300).
- In the face identification step (ST-100), information for determining whether the identification image, composed of a plurality of frames photographed by the photographing unit and stored in the storage unit, shows a real person or a human sculpture or printout is provided.
- In the personal identification step (ST-200), material for identifying the photographed individual is provided by contrasting the individual photographed by the photographing unit with the data registered in the database.
- In the additional information providing step (ST-300), information for determining the race and gender of the photographed individual is provided.
- The information derived at each step is transmitted through the communication module and displayed on a monitor, for example in a security room.
- In the face identification step, material for distinguishing whether the image photographed by the photographing unit shows a real person or a print such as a human sculpture or a photograph of a person is provided.
- The face identification step (ST-100) includes an eye region detection step in which the eye region is derived from each frame constituting the identification image, a pupil area derivation step (ST-120) in which the pupil region within the eye region is detected, an eye region color information contrast step (blink detection) in which the color information of the pixels constituting the eye region is contrasted,
- and a sculpture identification information providing step (ST-140) in which information for determining whether a real face or a sculpture has been photographed is provided from the change in the color information of the pixels constituting the eye region across the frames.
- The eye region 100 is derived from each frame of the identification image.
- The eye region 100 may be derived by applying a mask.
- The mask for deriving the eye region 100 may have a size determined from the size of the face region, which is derived as a rectangle, and represents the pupil region 110 and the peripheral region 120, the white region around the pupil, as shown in FIG. 5.
- Within the rectangular derived face region, the pixel portion of the same size as the mask whose arrangement is most similar to the pupil region 110 and the white peripheral region 120 around the pupil is taken as the eye region 100.
- The pupil region 110 is derived from the color information of the pixels within the derived eye region 100.
- Among the pupil pixels, the pixels having the largest and smallest ordinate values and the pixels having the largest and smallest abscissa values are derived, so that the feature points 113 and 115 of the pupil region are derived.
- Eye region feature points 121, which are the left and right end points of the peripheral region of the eye region, may also be derived.
- The horizontal center coordinate is calculated from the maximum and minimum horizontal coordinates of the pupil, and the vertical center coordinate is calculated from the maximum and minimum vertical coordinates, thereby deriving the pupil center point 111. From the information of the pupil area 110 and the pupil center point 111 obtained in this way (ST-130), the number of eye blinks (ST-131) and the gaze direction (ST-133) are calculated.
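A minimal sketch of this coordinate computation is given below; the pixel-coordinate conventions and the binary pupil mask are assumptions introduced for illustration, not taken from the specification.

```python
# Illustrative sketch only: derive the pupil extreme feature points and the pupil
# center point from a boolean mask of pupil pixels. Coordinate layout is assumed.
import numpy as np

def pupil_center(pupil_mask: np.ndarray):
    """pupil_mask: 2D boolean array, True where a pixel belongs to the pupil region."""
    ys, xs = np.nonzero(pupil_mask)
    if xs.size == 0:
        return None  # eyes closed: no pupil pixels in this frame
    # Feature points: pixels with the largest/smallest abscissa and ordinate values
    x_min, x_max = xs.min(), xs.max()
    y_min, y_max = ys.min(), ys.max()
    # Center point: midpoints of the horizontal and vertical extents
    return (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
```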
- The direction of the face is calculated from the change in the distances between facial feature points; it is also possible to calculate the face direction together with the gaze direction.
- The face rotation direction may be calculated from the distance between the left and right end feature points of the left eye area and the distance between the left and right end feature points of the right eye area.
- The degree of face rotation in the vertical direction may be calculated from the change in distance between a feature point of the eye region and a feature point of the lip region, and data for determining whether the photographed object is a sculpture or a real face can be obtained from the change in the face direction, that is the face rotation, across the frames (see FIGS. 8 and 9).
- FIG. 5 schematically illustrates a portion of the eye region in a frame photographed with the eyes closed. Since the closed-eye state is captured, the pupil region and the peripheral region of the eye region have pixels of the same color as the skin. Meanwhile, the eye region feature points 121 may also be derived from the eye region photographed when the eyes are closed.
- The color change of the pixels constituting the eye region 100 is derived from the frames photographed with the eyes open and the frames photographed with the eyes closed.
- Color information of the pixels constituting the pupil regions 110 and 110a is derived.
- The color information of the pixels constituting the pupil region is derived for each frame, the average value of the colors of the pixels constituting the pupil region is calculated, and the change in the color information of the pupil region pixels over the time of each frame may be displayed on the display unit.
- The number of eye blinks over time is calculated and derived from the color information of the pixels constituting the eye region in each frame (ST-131).
- A threshold is specified for the change in the average color value of the pixels constituting the pupil area; information that a real person has been photographed when the change in the average color value of the pupil area pixels exceeds the threshold, and that a print or sculpture has been photographed when it does not, may be provided through the display unit.
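A minimal sketch of this decision, assuming per-frame average pupil-region values are already available and using an arbitrary illustrative threshold:

```python
# Illustrative sketch: decide "real person" vs "print/sculpture" from the change in
# the average pupil-region value across frames. The threshold value is an assumption.
def is_real_person(avg_pupil_values, threshold=30.0):
    """avg_pupil_values: per-frame average gray values of the pupil-region pixels."""
    if len(avg_pupil_values) < 2:
        return False
    # Blinking makes the pupil region take on skin-colored pixels, so a real person
    # produces a large change in the average value; a print or sculpture does not.
    return max(avg_pupil_values) - min(avg_pupil_values) > threshold

def count_blinks(avg_pupil_values, threshold=30.0):
    """Counts transitions into the 'eyes closed' state over the frame sequence."""
    baseline = min(avg_pupil_values)
    closed = [v - baseline > threshold for v in avg_pupil_values]
    return sum(1 for prev, cur in zip(closed, closed[1:]) if cur and not prev)
```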
- When a print or sculpture is determined to have been photographed, a warning means (bell, warning light, etc.) connected to the control unit may be activated through the control unit. In this case, the face comparison personal identification step (ST-200) does not proceed and the process ends.
- The horizontal and vertical displacements of the pupil center point 111 relative to a frame photographed with the eyes open are derived, so that the change across frames may be displayed on the display unit.
- The horizontal and vertical position changes of the pupil center point 111 are derived from the position of the pupil center point 111 in each photographed frame; the horizontal displacement of the pupil center point 111 may also be calculated and derived from the eye feature point 121 and the pupil center point 111.
- FIG. 8 is a graph illustrating the change in gaze according to the frame (the change in face direction) when a real person is photographed,
- and FIG. 9 is a graph illustrating the change in gaze according to the frame (the change in face direction) when a printed matter is photographed.
- The gaze change may also be derived from the rotation angle of the face calculated from the facial feature points.
- A threshold value is set for the gaze change in the horizontal and vertical directions across the frames; when the maximum gaze change is greater than the threshold the subject is judged to be a real person, and when the maximum gaze change is less than the threshold it is judged to be a print or sculpture, and the result may be displayed on the display unit.
- In the latter case, a warning means (bell, warning light, etc.) connected to the control unit may be activated, and the face comparison personal identification step (ST-200) does not proceed.
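Analogously, a minimal sketch of the gaze-change test; the coordinate units and the threshold below are assumptions for illustration only.

```python
# Illustrative sketch: a live face shows natural gaze / face-direction changes across
# frames, while a printed photograph or sculpture does not. Threshold is assumed.
def gaze_change_indicates_person(pupil_centers, eye_feature_points, threshold=3.0):
    """pupil_centers / eye_feature_points: per-frame (x, y) tuples for one eye."""
    # Displacement of the pupil center measured from the eye feature point 121,
    # so that translation of the whole head or camera largely cancels out.
    dx = [pc[0] - ef[0] for pc, ef in zip(pupil_centers, eye_feature_points)]
    dy = [pc[1] - ef[1] for pc, ef in zip(pupil_centers, eye_feature_points)]
    return (max(dx) - min(dx) > threshold) or (max(dy) - min(dy) > threshold)
```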
- Distance information sensed by the distance measuring sensor, which is connected to the control unit and measures the distance to the subject, is stored in the storage unit together with each frame.
- The face region of each frame is extracted (ST-110), and the number of pixels constituting the face region (the face area) is calculated and stored together with the distance information.
- The database stores information on the face area (pixel count) as a function of distance; this is compared with the face area at the measured distance derived from a frame of the captured identification image, and the difference may be displayed on the display unit.
- If the difference indicates that a print or sculpture is being photographed, a warning means (bell, beacon, etc.) connected to the control unit may be activated through the control unit.
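A minimal sketch of this consistency check, assuming a lookup table of expected face area per distance and an illustrative tolerance (the table values and tolerance are not from the specification):

```python
# Illustrative sketch: compare the measured face area (pixel count) at the sensed
# distance against the expected area stored for that distance. Values are assumed.
EXPECTED_AREA_BY_DISTANCE_CM = {50: 42000, 75: 21000, 100: 12000, 150: 5500}

def area_consistent_with_distance(face_area_px, distance_cm, tolerance=0.25):
    # Use the closest stored distance as the reference entry.
    ref = min(EXPECTED_AREA_BY_DISTANCE_CM, key=lambda d: abs(d - distance_cm))
    expected = EXPECTED_AREA_BY_DISTANCE_CM[ref]
    # A small printed photo held close (or a large print held far) breaks this ratio.
    return abs(face_area_px - expected) / expected <= tolerance
```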
- In the personal identification step (ST-200), a 3D face database including a frontal face image of the individual photographed by the photographing unit (hereinafter referred to as the "identification face DB") is provided. Providing 3D information or a frontal face image facilitates identification of the photographed individual.
- The personal identification step includes a face area derivation step (ST-210) in which a face area is derived from the image of the photographed individual (hereinafter referred to as the "identification image"), a facial feature point derivation step (ST-220) in which facial feature points are derived, an identification face rotation angle calculation step (ST-230) in which the rotation angle of the derived face region about the first axis is calculated, a DB face information contrast step (ST-230), a pixel color application step, and a face DB derivation step (ST-240) in which the identification face DB, the 3D face database for the individual to be identified, is derived.
- Deriving a face region from an image of an individual photographed by the photographing unit is well known in the art, and its description is therefore omitted.
- For example, a region of pixels of a specific color having at least a predetermined size (number of pixels) in the image of the photographed individual may be derived as the face region.
- Facial feature points are then derived from the derived face region (ST-220).
- The process of deriving feature points from the face region is a conventional technique that has been described above by way of example, so its description is likewise omitted.
- The face rotation angle, which is the rotation angle about the first axis, is calculated from the feature points (ST-230; for information on calculating the face rotation angle, see, for example, Republic of Korea Patent Registration No. 10-1215751).
- When the face rotation angle of the face region of the identification image has been calculated, the identification image is contrasted with the face image information of the 3D face databases of the plurality of individuals. It is contrasted with the DB face information face image whose rotation angle is closest to the identification face rotation angle. For example, if the face rotation angle of the identification image is -28°, it is contrasted with the DB face information face image having a rotation angle of -30°. If the face rotation angle of the identification image is an intermediate value such as -25°, it may be contrasted with the DB face information face images having rotation angles of -30° or -20°.
- A step of making the identification image and the face image of the DB face information the same size is also performed.
- For example, if the identification image is a 2x2 pixel image and the face image of the DB face information is a 3x3 pixel image, the identification image with pixels (x1, y1), (x1, y2), (x2, y1), and (x2, y2) is enlarged as follows: a pixel having the average value of the color information of the (x1, y1) and (x1, y2) pixels is generated between (x1, y1) and (x1, y2), the coordinate of the original (x1, y2) pixel becomes (x1, y3), and the coordinate of the generated pixel becomes (x1, y2);
- a pixel having the average value of the color information of the (x2, y1) and (x2, y2) pixels is generated between (x2, y1) and (x2, y2), the coordinate of the original (x2, y2) pixel becomes (x2, y3), and the coordinate of the generated pixel becomes (x2, y2);
- a pixel having the average value of the color information of the (x1, y1) and (x2, y1) pixels is generated between (x1, y1) and (x2, y1), the coordinate of the original (x2, y1) pixel becomes (x3, y1), and the coordinate of the generated pixel becomes (x2, y1);
- a pixel having the average value of the color information of the (x1, y2) and (x2, y2) pixels is generated between (x1, y2) and (x2, y2), the coordinate of the original (x2, y2) pixel becomes (x3, y2), and the coordinate of the generated pixel becomes (x2, y2);
- a pixel having the average value of the color information of the (x1, y3) and (x2, y3) pixels is generated between (x1, y3) and (x2, y3), the coordinate of the original (x2, y3) pixel becomes (x3, y3), and the coordinate of the generated pixel becomes (x2, y3); the result is stored in the storage unit.
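The following sketch illustrates the neighbor-averaging enlargement described above for the 2x2 to 3x3 case; the array layout and channel handling are assumptions made for illustration.

```python
# Illustrative sketch: enlarge a 2x2 pixel image to 3x3 by inserting pixels whose
# color is the average of the two neighbors they are generated between.
import numpy as np

def enlarge_2x2_to_3x3(img):
    """img: array of shape (2, 2, 3) holding the pixel colors; returns shape (3, 3, 3)."""
    img = np.asarray(img, dtype=float)
    out = np.zeros((3, 3, 3))
    # The four original pixels keep the corner positions after renumbering.
    out[0, 0], out[0, 2] = img[0, 0], img[0, 1]
    out[2, 0], out[2, 2] = img[1, 0], img[1, 1]
    # Generated pixels take the average color of the two pixels they lie between.
    out[1, 0] = (img[0, 0] + img[1, 0]) / 2   # between (x1, y1) and (x1, y2)
    out[1, 2] = (img[0, 1] + img[1, 1]) / 2   # between (x2, y1) and (x2, y2)
    out[0, 1] = (img[0, 0] + img[0, 1]) / 2   # between (x1, y1) and (x2, y1)
    out[2, 1] = (img[1, 0] + img[1, 1]) / 2   # between the second-row originals
    out[1, 1] = (out[1, 0] + out[1, 2]) / 2   # between the two generated pixels
    return out
```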
- The feature points of the face region are then derived again from the enlarged identification image, so that feature points corresponding to those of the DB face information are obtained.
- To contrast the DB face information (ST-230), the least square error value is calculated between the feature points of the identification image and those of the face image of each item of DB face information.
- The face image of the DB face information having the smallest least square error value is the image to which the pixel colors are applied (the color-information-applied image).
- The colors of the pixels constituting the color-information-applied image are changed to the colors of the pixels constituting the identification image, and the result is stored in the storage unit and displayed on the display unit.
- Specifically, among the face images of the plurality of DB face information items, the least square error value (Equation 1) is calculated between the feature points of the face images having the same face rotation angle as the identification image and the feature points of the identification image.
- The colors of the pixels forming the face image of the DB face information having the smallest least square error value with respect to the identification image are changed to the colors of the pixels of the identification image and stored in the storage unit.
- The DB face information image changed to the pixel colors of the identification image is called the "identification derived face image".
- In Equation 1, n is the number of feature points, x_i and y_i are the x- and y-axis coordinates of the i-th feature point of the identification face image, x_il and y_il are the x- and y-axis coordinates of the i-th feature point of the DB face information image, and dist_rms is the least square error value.
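The figure for Equation 1 is not reproduced in this text; a root-mean-square formulation consistent with the variable definitions above (a reconstruction, not the original figure) would be:

$$\mathrm{dist}_{rms} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[(x_i - x_{il})^2 + (y_i - y_{il})^2\right]}$$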
- For example, if the identification face rotation angle is 30°, the face image of the DB face information rotated by 30° is given the pixel colors of the identification image.
- By applying the frontal (0° rotation angle) image information and pixel position correspondence of that DB face information to the 30°-rotated DB face image carrying the identification image's pixel colors, a frontal (0°) image of the identification face image having the pixel colors of the identification image is obtained.
- In this way, the identification face DB, a 3D face database for the individual to be identified, is derived (ST-240).
- The face image of the DB face information having the smallest least square error value with respect to the identification image is derived and forms the basis of the derived DB face information.
- Its pixel color information is changed to the color information of the pixels constituting the identification image, thereby obtaining the identification face image; by applying the obtained identification face image information to the DB face information of that face image, the 3D information and the frontal image of the identification face image are obtained.
- The personal identification step may further include: a face region extraction step in which a face region is extracted from each frame photographed to form the identification image of the individual stored in the storage unit; a facial feature vector extraction step in which a feature vector of the extracted face region is derived; and a facial feature vector contrast step in which the Euclidean distance between the feature vector of the extracted face region and the facial feature vectors of the data registered in the database is calculated.
- A face region feature vector is derived from the face region information extracted and frontalized in the face region extraction step; in the facial feature vector contrast step, the facial feature vector of the captured identification image is contrasted with the facial feature vectors of the data registered in the database,
- and the Euclidean distance between them is calculated. Since the method of deriving the face region feature vector and the Euclidean distance calculation are conventional techniques, their description is omitted.
- The Euclidean distance is calculated between the facial feature vector of the photographed identification image and the data of each of the plurality of individuals registered in the database.
- The Euclidean distance serves as the face similarity.
- The identification image of the photographed individual, together with the calculated Euclidean distance and the image of the corresponding individual registered in the database, is displayed on a display unit (for example, an LCD panel; not shown) connected to the control unit.
- The similarity is divided into four classes based on the Euclidean distance, and information about the class is also displayed.
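A minimal sketch of the similarity computation; the four class boundaries below are assumed values for illustration, not values from the specification.

```python
# Illustrative sketch: face similarity as the Euclidean distance between feature
# vectors, binned into four classes. The class boundaries are assumed values.
import numpy as np

def face_similarity_class(query_vec, db_vec, boundaries=(0.4, 0.7, 1.0)):
    distance = float(np.linalg.norm(np.asarray(query_vec) - np.asarray(db_vec)))
    # Smaller distance means higher similarity; class 1 is the closest match.
    for cls, bound in enumerate(boundaries, start=1):
        if distance <= bound:
            return distance, cls
    return distance, len(boundaries) + 1  # class 4: least similar

# Usage: distance, cls = face_similarity_class(identification_vec, registered_vec)
```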
- A convolutional neural network module may be executed, and the facial feature vector of the identification image may be derived by applying personal identification weights learned in advance from the training photographed images.
- The personal identification method through face comparison according to the present invention further includes a face information acquiring step in addition to the face comparison personal identification step.
- The face information acquiring step includes a learning step and a face information calculation step and provides the race information and gender information of the identification image.
- The learning step includes a step of forming a convolutional layer from a training photographed image for the gender information, a step of forming a pooling layer from the maps of the convolutional layer, and a hidden layer information calculation step in which the pooling information of the pooling layer is used as input information and combined with the hidden layer weights.
- Update values of the hidden layer weights and the output layer weights are calculated, the hidden layer information calculation step and the output layer information calculation step are recalculated, and the process of comparing the calculated output layer information with the training photographed image information is repeated.
- A map is formed for the convolutional layer by applying a kernel to each training photographed image. The color information of each pixel constituting the training photographed image becomes a component of the training photographed image matrix. Before forming the map, the training photographed image may also be converted into a black-and-white image.
- The kernel may be a matrix of various sizes, such as a 2x2 matrix, a 3x3 matrix, or a 5x5 matrix.
- The inventors formed the maps using a 3x3 matrix as the kernel to form the convolutional layers. Five hidden layers were formed, with 16 kernels in the first hidden layer, 32 in the second, 64 in the third, 128 in the fourth, and 512 in the fifth. In forming the hidden layers, accuracy is improved by increasing the number of kernels toward the output layer.
- The component values of the kernel can be generated using random functions, and each component value should not exceed two.
- A plurality of kernels is provided, and a map is formed for each kernel to form the convolutional layer.
- For example, if the kernel is a 2x2 matrix with components 1, 0, 0, and 1, the kernel is mapped onto the first 2x2 pixels of the training photographed image and the corresponding map component value is calculated.
- The map component values are calculated sequentially by shifting the kernel pixel by pixel over the pixels of the training photographed image, thereby preparing the map, which is a matrix.
- A plurality of kernels having different component values is provided as described above, and a map is formed for each kernel to form the convolutional layer. If the kernel is a 2x2 matrix and the size of the training photographed image is 4x4 pixels, the map is a 3x3 matrix.
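A minimal sketch of the map computation for one kernel, assuming a single-channel (black-and-white) image and a stride of one pixel:

```python
# Illustrative sketch: slide a 2x2 kernel over a 4x4 image one pixel at a time;
# each map component is the element-wise product sum of the kernel and image patch.
import numpy as np

def convolve(image, kernel):
    ih, iw = image.shape
    kh, kw = kernel.shape
    out_h, out_w = ih - kh + 1, iw - kw + 1        # 4x4 image, 2x2 kernel -> 3x3 map
    feature_map = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            feature_map[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return feature_map

image = np.arange(16, dtype=float).reshape(4, 4)   # assumed example image
kernel = np.array([[1.0, 0.0], [0.0, 1.0]])        # the 1, 0, 0, 1 example kernel
print(convolve(image, kernel))                     # 3x3 map
```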
- Each map constituting the convolutional layer is output through an activation function.
- The ReLU function is adopted as the activation function; the sigmoid function may also be used as the activation function.
- A plurality of poolings forms the pooling layer, and a pooling is formed from the output values of each map constituting the convolutional layer.
- The pooling may take the average value or the maximum value.
- The pooling may be performed over a 2x2 matrix region, and the maximum or average of the components within each 2x2 region of the map becomes a component value of the pooling. Therefore, when a map of 4x4 matrix size is pooled with a 2x2 matrix size, the result is a 2x2 matrix.
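A minimal sketch of 2x2 pooling over a 4x4 map, covering both the maximum and average variants under the same assumptions:

```python
# Illustrative sketch: pool a 4x4 map down to 2x2 by taking the maximum (or mean)
# of each non-overlapping 2x2 block.
import numpy as np

def pool(feature_map, size=2, mode="max"):
    h, w = feature_map.shape
    out = np.zeros((h // size, w // size))
    for r in range(0, h, size):
        for c in range(0, w, size):
            block = feature_map[r:r + size, c:c + size]
            out[r // size, c // size] = block.max() if mode == "max" else block.mean()
    return out
```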
- Each pooling component becomes an input value and is combined with the hidden layer weights to calculate the node values constituting the hidden layer.
- Equation 2 is an example of calculating each node of the hidden layer, in which m is the number of components forming a pooling, E_j is a hidden layer node value, p_i is a pooling component, w_ij is a weight, p_0 is 1 as a bias, and φ is the activation function.
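The figure for Equation 2 is not reproduced in this text; a formulation consistent with these variable definitions (a reconstruction under that assumption) would be:

$$E_j = \varphi\!\left(w_{0j}\,p_0 + \sum_{i=1}^{m} w_{ij}\,p_i\right),\qquad p_0 = 1$$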
- Here the sigmoid function is used as the activation function.
- The weight w_ij may be any value selected from between 0 and 2.
- The hidden layer may be formed of one or more layers.
- Equation 2 is the formula by which the node values of the first hidden layer are calculated; the node values of the second hidden layer are calculated from the node values of the first hidden layer and the corresponding weights.
- The weights used in calculating the first-layer node values and the weights used in calculating the second-layer node values may differ from each other. In the case of a plurality of hidden layers, the calculations are performed in sequence in this way.
- The output layer node values are calculated from the node values of the last hidden layer and the output layer weights. Since gender is either male or female, there are two output layer nodes.
- Equation 3 is an example of calculating a node of the output layer, where T_j is an output layer node value, E_i is a node value of the last hidden layer, v_ij is a weight, E_0 is 1 as a bias, and φ is the activation function; ReLU is used as the activation function here.
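The figure for Equation 3 is likewise not reproduced; a formulation consistent with these variable definitions (a reconstruction, with the number of last-hidden-layer nodes written as m) would be:

$$T_j = \varphi\!\left(v_{0j}\,E_0 + \sum_{i=1}^{m} v_{ij}\,E_i\right),\qquad E_0 = 1$$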
- The weight v_ij may be any value selected from between 0 and 2.
- Learning proceeds so that, when the training photographed image is male, the output layer node value calculated through the above process for that image is larger than 0.5,
- and, when the training photographed image is female, the output layer node value calculated through the above process is smaller than 0.5.
- The hidden layer node value calculation weights and the output layer calculation weights are stored in the storage unit.
- The output node values are calculated for all of the prepared training images, and the accuracy of the weights is checked. In the classification, a value larger than 0.5 is classified as male and a value smaller than 0.5 as female (or vice versa).
- If, for example, the training photographed image is male and the output layer node value calculated by the above steps is 0.3, the error is 0.7, and the error value is propagated according to the backpropagation algorithm (the detailed description is omitted since it is a conventional technique) to compute the weight update values.
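As a generic illustration of the kind of update involved (not the patent's specific formulation), a single gradient-descent step on the output-layer weights of a sigmoid output could look like the following; the learning rate and label encoding are assumptions.

```python
# Illustrative sketch only: one backpropagation-style update of the output-layer
# weights for a sigmoid output node. Learning rate and target labels are assumed.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_output_weights(v, hidden_nodes, target, lr=0.1):
    """v: output-layer weight vector; hidden_nodes: last hidden layer values."""
    h = np.append(hidden_nodes, 1.0)          # E0 = 1 as bias
    output = sigmoid(np.dot(v, h))            # e.g. 0.3 for a male (target 1.0) image
    error = target - output                   # e.g. 0.7
    delta = error * output * (1.0 - output)   # derivative of the sigmoid output
    return v + lr * delta * h                 # weights nudged to reduce the error
```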
- For the identification image, the output layer node value is calculated through the above process. For example, if the output layer node value for the identification image is greater than 0.5 it is classified as male, and if it is less than 0.5 it is classified as female, and the result is stored.
- The race information is calculated through the same process, with the number of output layer nodes set equal to the number of race types.
- The sigmoid function is used as the activation function for the hidden layer node calculation,
- and the softmax function is used as the activation function for the output layer node calculation.
- Since the sum of the output layer node values is 1, the race is classified according to the node having the maximum value among the three output layer node values, and the larger that maximum value, the higher the accuracy.
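A minimal sketch of this output-layer evaluation, assuming three race classes with hypothetical labels:

```python
# Illustrative sketch: softmax over three output-layer node inputs; the race class is
# the node with the maximum value. The class labels here are assumed placeholders.
import numpy as np

RACE_LABELS = ["class A", "class B", "class C"]   # hypothetical three-way labels

def classify_race(output_node_inputs):
    z = np.asarray(output_node_inputs, dtype=float)
    probs = np.exp(z - z.max())
    probs /= probs.sum()                          # softmax: values sum to 1
    idx = int(np.argmax(probs))
    return RACE_LABELS[idx], float(probs[idx])    # larger maximum -> higher accuracy
```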
- As described above, a sculpture or a printout is distinguished from a real face, so that when the invention is applied to an access device or a security device, unauthorized entry and exit using a sculpture or printout with malicious intent is blocked, whether the person is the same person can be identified by the similarity level, and race, gender, and age information is provided, improving operational efficiency.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Ophthalmology & Optometry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The present invention relates to a personal identification method through face comparison, the method being executed in a personal identification system comprising a photographing unit, a control unit connected to the photographing unit, and a storage unit which is connected to the control unit and in which a personal identification program and an identification image composed of a plurality of frames photographed by the photographing unit are stored, the method performing a comparison with data registered in a database. The method comprises a personal identification step consisting of: a facial feature vector extraction step in which a facial feature vector of a frame constituting the identification image is derived; and a step in which the Euclidean distance between the facial feature vector extracted in the facial feature vector extraction step and a facial feature vector of the data registered in the database is calculated, so that the calculated degree of similarity is provided as information for identifying the photographed person.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2017-0011653 | 2017-01-25 | ||
| KR1020170011653 | 2017-01-25 | ||
| KR10-2017-0061950 | 2017-05-19 | ||
| KR1020170061950A KR101781361B1 (ko) | 2017-01-25 | 2017-05-19 | 얼굴 비교를 통한 개인 식별 방법 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018139847A1 true WO2018139847A1 (fr) | 2018-08-02 |
Family
ID=60036891
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2018/001055 Ceased WO2018139847A1 (fr) | 2017-01-25 | 2018-01-24 | Procédé d'identification personnelle par comparaison faciale |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR101781361B1 (fr) |
| WO (1) | WO2018139847A1 (fr) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111291668A (zh) * | 2020-01-22 | 2020-06-16 | 北京三快在线科技有限公司 | 活体检测方法、装置、电子设备及可读存储介质 |
| CN111680595A (zh) * | 2020-05-29 | 2020-09-18 | 新疆爱华盈通信息技术有限公司 | 一种人脸识别方法、装置及电子设备 |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108652851B (zh) * | 2018-01-19 | 2023-06-30 | 西安电子科技大学 | 基于视觉定位技术的眼控轮椅控制方法 |
| KR102060694B1 (ko) * | 2018-03-02 | 2019-12-30 | 제주한라대학교산학협력단 | 개인 맞춤형 서비스를 제공하기 위한 고객 인식 시스템 |
| CN108898053A (zh) * | 2018-05-24 | 2018-11-27 | 珠海市大悦科技有限公司 | 一种面部识别方法 |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2007140695A (ja) * | 2005-11-15 | 2007-06-07 | Nippon Telegr & Teleph Corp <Ntt> | 不審顔検出システム、不審顔検出方法および不審顔検出プログラム |
| KR20100118363A (ko) * | 2009-04-28 | 2010-11-05 | 삼성전기주식회사 | 얼굴 인증 시스템 및 그 인증 방법 |
| KR20110105458A (ko) * | 2010-03-19 | 2011-09-27 | 한국산업기술대학교산학협력단 | 얼굴 인식 시스템의 학습 영상을 생성하기 위한 장치 및 그 방법 |
| KR20160042646A (ko) * | 2014-10-10 | 2016-04-20 | 인하대학교 산학협력단 | 얼굴 인식 방법 |
| KR20170006355A (ko) * | 2015-07-08 | 2017-01-18 | 주식회사 케이티 | 모션벡터 및 특징벡터 기반 위조 얼굴 검출 방법 및 장치 |
- 2017-05-19: KR application KR1020170061950A, patent KR101781361B1 (ko), status: active
- 2018-01-24: WO application PCT/KR2018/001055, publication WO2018139847A1 (fr), status: ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2007140695A (ja) * | 2005-11-15 | 2007-06-07 | Nippon Telegr & Teleph Corp <Ntt> | 不審顔検出システム、不審顔検出方法および不審顔検出プログラム |
| KR20100118363A (ko) * | 2009-04-28 | 2010-11-05 | 삼성전기주식회사 | 얼굴 인증 시스템 및 그 인증 방법 |
| KR20110105458A (ko) * | 2010-03-19 | 2011-09-27 | 한국산업기술대학교산학협력단 | 얼굴 인식 시스템의 학습 영상을 생성하기 위한 장치 및 그 방법 |
| KR20160042646A (ko) * | 2014-10-10 | 2016-04-20 | 인하대학교 산학협력단 | 얼굴 인식 방법 |
| KR20170006355A (ko) * | 2015-07-08 | 2017-01-18 | 주식회사 케이티 | 모션벡터 및 특징벡터 기반 위조 얼굴 검출 방법 및 장치 |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111291668A (zh) * | 2020-01-22 | 2020-06-16 | 北京三快在线科技有限公司 | 活体检测方法、装置、电子设备及可读存储介质 |
| CN111680595A (zh) * | 2020-05-29 | 2020-09-18 | 新疆爱华盈通信息技术有限公司 | 一种人脸识别方法、装置及电子设备 |
Also Published As
| Publication number | Publication date |
|---|---|
| KR101781361B1 (ko) | 2017-09-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018139847A1 (fr) | Procédé d'identification personnelle par comparaison faciale | |
| CN114494427B (zh) | 一种对吊臂下站人的违规行为检测方法、系统及终端 | |
| US20130188827A1 (en) | Human tracking method and apparatus using color histogram | |
| WO2021177544A1 (fr) | Système et procédé de reconnaissance faciale capable de mettre à jour un modèle facial enregistré | |
| KR100631235B1 (ko) | 스테레오 이미지의 에지를 체인으로 연결하는 방법 | |
| WO2014073841A1 (fr) | Procédé de détection de localisation intérieure basée sur image et terminal mobile utilisant ledit procédé | |
| CN112926464B (zh) | 一种人脸活体检测方法以及装置 | |
| CN113312965A (zh) | 一种人脸未知欺骗攻击活体检测方法及系统 | |
| CN112434545A (zh) | 一种智能场所管理方法及系统 | |
| JP6773825B2 (ja) | 学習装置、学習方法、学習プログラム、及び対象物認識装置 | |
| WO2019088333A1 (fr) | Procédé de reconnaissance d'une activité de corps humain d'après des informations de carte de profondeur et appareil associé | |
| WO2013151205A1 (fr) | Procédé et appareil d'acquisition de l'image d'un visage pour la reconnaissance faciale | |
| CN111444837B (zh) | 在极端环境下提升人脸检测可用性的测温方法及测温系统 | |
| WO2024101466A1 (fr) | Appareil et procédé de suivi de personne disparue basé sur des attributs | |
| WO2020242089A2 (fr) | Procédé de conservation basé sur l'intelligence artificielle et dispositif pour la mise en oeuvre de ce procédé | |
| WO2016104842A1 (fr) | Système de reconnaissance d'objet et procédé de prise en compte de distorsion de caméra | |
| KR20180087812A (ko) | 얼굴 비교를 통한 개인 식별 방법 | |
| WO2020045903A1 (fr) | Procédé et dispositif de détection d'objet indépendamment de la taille au moyen d'un réseau neuronal convolutif | |
| WO2021182670A1 (fr) | Dispositif et procédé de reconnaissance faciale hétérogène basés sur l'extraction de relations entre des éléments | |
| WO2025135266A1 (fr) | Procédé et système de mesure de poussière fine sur la base d'une image | |
| Yamanaka et al. | Tactile Tile Detection Integrated with Ground Detection using an RGB-Depth Sensor. | |
| WO2023075185A1 (fr) | Procédé permettant de tester la pertinence d'une image pour former ou reconnaître une empreinte nasale d'un animal de compagnie | |
| CN114359840A (zh) | 一种小区楼道防盗方法和装置 | |
| CN114373205A (zh) | 一种基于卷积宽度网络的人脸检测和识别方法 | |
| WO2022114406A1 (fr) | Système de gestion de sécurité par détection de squelette de clé sur la base d'une image |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18744775; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18744775; Country of ref document: EP; Kind code of ref document: A1 |