WO2018139847A1 - Personal identification method through facial comparison
- Publication number
- WO2018139847A1 (PCT/KR2018/001055)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- face
- identification
- image
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/469—Contour-based spatial representations, e.g. vector-coding
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- The present invention relates to a personal identification method through face comparison and to a personal identification system in which the method is executed. More particularly, it relates to a method and system that can distinguish whether the photographed subject is a sculpture, identify whether the subject is the same person, and provide additional information such as race.
- Existing personal identification systems are mainly used in access security devices or personalized information providing systems, and output only a result as to whether the person is the same, obtained by comparing a face image captured in the field with an already registered image.
- This method does not incorporate a way of distinguishing a real human face from a printout of a face image produced by a color printer or from an artificially manufactured sculpture, which may cause a fatal security problem. The dichotomous judgment can also produce wrong results: a person may fail authentication and suffer inconvenience because he or she is recognized as a different person despite being the same person, or may be judged to be the same person despite being a different person.
- In such a dichotomous judgment system, if the similarity threshold for judging two faces to belong to the same person is set high, the same person is frequently reported as non-identical, causing inconvenience at the access system; conversely, if the threshold is set low, a different person may be judged to be the same person.
- In general, most access and security devices cannot distinguish a human face from a printout and accept the inconvenience of a high similarity threshold, so that whenever an alarm occurs, security personnel must call the flagged person and compare the photograph with the actual person again with the naked eye.
- Moreover, an existing system that performs personal identification based on similarity alone cannot automatically collect statistical information on the race and gender of the identified individual after identification, and therefore cannot be applied to personalized systems and the like.
- The present invention has been proposed to solve the above problems. Its object is to provide a personal identification method through face comparison that can distinguish whether the subject to be compared is a real face or a printout or sculpture of a face, that provides face similarity data, and that can also provide information on race and gender.
- To this end, the present invention is executed in a personal identification system comprising a photographing unit, a control unit connected to the photographing unit, and a storage unit connected to the control unit and storing a personal identification program together with an identification image consisting of a plurality of frames captured by the photographing unit, and compares the captured image with DB face information registered in a database.
- The method includes a personal identification step comprising: a face area derivation step in which a face area is derived from the image of the photographed individual; a facial feature point derivation step in which facial feature points are derived; an identification face rotation angle calculation step in which the rotation angle of the derived face area is calculated; a DB face information comparison step in which a plurality of DB face information images are compared with the identification image; a pixel color application step; and a face DB derivation step in which an identification face DB, a 3D face database for the individual to be identified, is derived.
- Through these steps, the present invention provides a personal identification method through face comparison in which information for identifying the photographed individual is provided.
- In the DB face information comparison step, the least error square value of each of the plurality of DB face information images is calculated with respect to the identification image. In the pixel color application step, the color information of the pixels constituting the DB face information image having the smallest least error square value with respect to the identification image is changed to that of the identification image and stored in the storage unit.
- The method further comprises a face identification step before the personal identification step. The face identification step comprises: an eye area detection step in which the eye area is derived from each frame constituting the identification image; a pupil area derivation step in which the pupil area within the eye area is detected; a pupil area color information comparison step in which the color information of the pixels constituting the pupil area is compared across frames; and a sculpture discrimination information providing step in which information for determining whether a real face or a sculpture is being photographed is provided based on the change in the color information of the pixels constituting the pupil area of each frame.
- The method may further comprise a face area derivation step in which the face areas of the plurality of captured frames are derived before the eye area is detected, and a pupil area pixel information calculation step in which the pixel information forming the pupil area in each frame is calculated after the pupil area derivation step. In the color information comparison step, the color information of the pupil area pixels of each frame is compared.
- Alternatively, the face identification step may comprise: an eye area detection step in which the eye area is derived from each frame constituting the image captured by the photographing unit; a face area feature point derivation step in which the feature points of the face area of each frame constituting the captured image are derived; a pupil area derivation step in which the pupil area within the eye area is detected; a pupil area position derivation step in which the position of the pupil area is derived; a pupil position distance calculation step in which the distance between the derived pupil area position and the face area feature points is calculated; and a sculpture discrimination information providing step in which information for determining whether a real face or a sculpture is being photographed is provided based on the change in the pupil position distance of each frame.
- In another variant, the face identification step comprises: a step in which a face area is derived from each frame constituting the image captured by the photographing unit; a step in which the direction of the face area is calculated; and a sculpture discrimination information providing step in which information for determining whether a real face or a sculpture is being photographed is provided based on the change in the face area direction of each frame.
- The system may further comprise a distance measuring sensor connected to the control unit, each frame being stored in the storage unit together with the distance information measured by the distance measuring sensor. In this case, the face identification step comprises: a face area derivation step in which a face area is derived from each frame constituting the image captured by the photographing unit; a face area information calculation step in which the face area information derived in the face area derivation step is calculated; and a sculpture discrimination information providing step in which information for determining whether a real face or a sculpture is being photographed is provided based on the face area information and the distance information of each frame.
- The storage unit further stores a convolutional neural network module, training images, and training image information, and the method further comprises an additional information providing step in addition to the personal identification step. The additional information providing step consists of a learning step and a face information calculation step, through which race information and gender information for the identification image are provided.
- The learning step comprises: a step in which a convolutional layer of the training images is formed for each of the race information and the gender information; a step in which a pooling layer is formed from the maps of the convolutional layer; a hidden layer information calculation step in which the pooling values of the pooling layer serve as input information and the hidden layer node values are calculated from the hidden layer weights and the input information; an output layer information calculation step in which the output layer is calculated from the hidden layer node values and the output layer weights; and a weight derivation step in which an error is calculated from the output layer information and the training image information, updated values of the hidden layer weights and the output layer weights are calculated according to the backpropagation algorithm, the hidden layer and output layer information calculation steps are repeated, and the process of comparing the calculated output layer information with the training image information is repeated until the updated hidden layer weights and output layer weights are derived.
- The hidden layer weight updates and output layer weight updates derived in the learning step become the hidden layer weights and the output layer weights used for the race information and the gender information, respectively, in the face information calculation step.
- The face information calculation step comprises: a step in which a convolutional layer of the identification image is formed for each of the race information and the gender information; a step in which a pooling layer is formed from the maps of the convolutional layer; a hidden layer information calculation step in which the pooling values serve as input information and the hidden layer node values are calculated from the hidden layer weights and the input information; and an output layer information calculation step in which the output layer is calculated from the hidden layer information and the output layer weights. In the computation for obtaining race information, the number of output layer nodes equals the number of races; in the computation for obtaining gender information, the output layer has two nodes.
- According to the personal identification method through face comparison of the present invention and the personal identification system in which it is executed, sculptures and printouts are distinguished from real faces, so that when the method is applied to an access or security device, unauthorized entry using a malicious sculpture or printout is blocked; identical and non-identical persons are distinguished by graded similarity; and race, gender, and age information can be provided.
- FIG. 1 illustrates the steps in which the personal identification method through face comparison according to the present invention is executed;
- FIG. 2 is a view illustrating the face identification step of FIG. 1;
- FIG. 3 is a schematic flowchart illustrating the face identification step of the present invention;
- FIGS. 4 and 5 schematically illustrate the eye area;
- FIGS. 6 and 7 are schematic flowcharts illustrating the process by which the gaze direction used in the face identification step is derived;
- FIGS. 8 and 9 are graphs illustrating the change in gaze across frames.
- Hereinafter, the personal identification method through face comparison according to the present invention will be described in detail with reference to the accompanying drawings.
- The personal identification method through face comparison according to the present invention is executed in a personal identification system comprising a photographing unit, a control unit connected to the photographing unit, and a storage unit connected to the control unit and storing a personal identification program together with an identification image consisting of a plurality of frames captured by the photographing unit.
- The personal identification system further includes a distance measuring sensor connected to the control unit to provide measured distance information to the control unit, and a communication module connected to the control unit to transmit and receive signals; a convolutional neural network module is stored in the storage unit. The photographing unit, the control unit, the storage unit, and the distance measuring sensor are provided in a main body (not shown).
- The personal identification system may be installed and operated at a door. When power is supplied and an operation command is input, the personal identification program runs and executes the personal identification method through face comparison according to the present invention.
- The storage unit stores 3D face databases of a plurality of individuals (for example, 300 people). For each individual, face image information (the pixel colors and coordinates of the face area and the feature points of the face area) is stored for faces rotated at 10° intervals from -30° to +30° about an axis (the first axis) extending in the vertical direction of the face and passing through its center. Hereinafter, this face image information is referred to as "DB face information". The face with a rotation angle of 0° is the frontal face.
- The first axis is the axis passing through the center of the face about which the face is left-right symmetrical.
- The process of deriving the feature points of a face area and the content of the derived feature points are well known in the art, and a description thereof is omitted.
- The table exemplarily shows four persons to illustrate that a 3D face database is stored for each individual in the storage unit. A, B, C, and D denote different individuals.
- The 3D face databases may also be stored in a server that communicates with the personal identification system in which the present invention is executed.
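- As a rough sketch, the DB face information could be organized as follows; the Python representation and field names are illustrative assumptions, not taken from the patent:

```python
# Hypothetical layout of the DB face information described above.
# Field names are illustrative assumptions, not taken from the patent.
db_face_info = {
    "A": {  # one record per registered individual (A, B, C, D, ...)
        angle: {
            "pixel_colors": None,    # colors of the face-area pixels at this angle
            "coordinates": None,     # coordinates of the face-area pixels
            "feature_points": None,  # facial feature point coordinates
        }
        for angle in range(-30, 31, 10)  # -30 deg to +30 deg in 10 deg steps
    },
}
```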
- The personal identification method through face comparison includes a face identification step (ST-100), a personal identification step (ST-200), and an additional information providing step (ST-300).
- In the face identification step (ST-100), information is provided for determining whether the identification image, which consists of a plurality of frames captured by the photographing unit and stored in the storage unit, shows a real person or a sculpture or printout of a person.
- In the personal identification step (ST-200), material for identifying the photographed individual is provided by comparing the individual photographed by the photographing unit with the data registered in the database.
- In the additional information providing step (ST-300), information for determining the race and gender of the photographed individual is provided.
- Information derived at each step is transmitted through the communication module and displayed on a monitor, for example in a security room.
- In the face identification step, material is provided for distinguishing whether the image captured by the photographing unit shows a real person, or a human sculpture or a printout such as a person's photograph.
- The face identification step (ST-100) comprises: an eye area detection step in which the eye area is derived from each frame constituting the identification image; a pupil area derivation step (ST-120) in which the pupil area within the eye area is detected; an eye area color information comparison step (the blink check) in which the color information of the pixels constituting the eye area is compared across frames; and a sculpture discrimination information providing step (ST-140) in which information for determining whether a real face or a sculpture is being photographed is provided based on the change in the color information of the pixels constituting the eye area of each frame.
- The eye area 100 is derived from each frame of the identification image, and may be derived by applying a mask.
- The mask for deriving the eye area 100 has a size determined from the size of the face area, which is derived as a rectangle, and models the pupil area 110 and the surrounding area 120, the white region around the pupil (see FIGS. 4 and 5). Within the derived rectangular face area, the pixel patch of the same size as the mask whose arrangement is most similar to the pupil area 110 and the surrounding white area 120 is derived as the eye area 100.
- The pupil area 110 is detected from the color information of the pixels within the derived eye area 100 (ST-120).
- Among the pixels of the pupil area, the pixels with the largest and smallest ordinate values and the pixels with the largest and smallest abscissa values are derived, yielding the feature points 113 and 115 of the pupil area shown in FIG. 4. The eye area feature points 121, the left and right end points of the surrounding area of the eye, may also be derived. The horizontal center coordinate is calculated from the maximum and minimum horizontal coordinates of the pupil, and the vertical center coordinate from the maximum and minimum vertical coordinates, thereby deriving the pupil center point 111. From the information of the pupil area 110 and the pupil center point 111 calculated in this way (ST-130), the number of eye blinks (ST-131) and the gaze direction (ST-133) are derived.
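- A minimal sketch of the pupil center derivation just described, assuming the pupil pixels have already been segmented (the function name and array layout are illustrative):

```python
import numpy as np

def pupil_center(pupil_pixels: np.ndarray) -> tuple:
    """Derive the pupil center point from the (x, y) coordinates of the
    pixels classified as pupil, as the midpoint of the coordinate extremes.

    pupil_pixels: array of shape (N, 2) holding (x, y) pixel coordinates.
    """
    x_min, y_min = pupil_pixels.min(axis=0)  # smallest abscissa / ordinate
    x_max, y_max = pupil_pixels.max(axis=0)  # largest abscissa / ordinate
    # Horizontal and vertical centers from the extreme coordinates.
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
```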
- The face direction is calculated from the change in the distances between facial feature points, and it is also possible to use the face direction in place of the gaze direction. For example, the horizontal face rotation may be calculated from the distance between the left and right end feature points of the left eye area and the corresponding distance of the right eye area. The degree of vertical face rotation may be calculated from the change in the distance between the eye area feature points and the lip area feature points, and data for determining whether the subject is a sculpture or a real face can be provided from the change in the face direction across frames (see FIGS. 8 and 9).
- FIG. 5 schematically illustrates the eye area of a frame captured with the eyes closed, among the frames constituting the captured image. Since a closed eye is captured, the pupil area and the surrounding area of the eye consist of pixels of the same color as the skin. The eye area feature points 121 can nevertheless still be derived from an eye area captured with the eyes closed.
- As shown in FIG. 6, the color change of the pixels constituting the eye area 100 is derived between frames captured with the eyes open and frames captured with the eyes closed.
- Color information of the pixels constituting the pupil areas 110 and 110a is derived for each frame, the average color value of the pupil area pixels is calculated, and the change in this average over the time of each frame may be shown through the display unit.
- The number of eye blinks over time is calculated from the color information of the pixels constituting the eye area in each frame (ST-131).
- A threshold is specified for the change in the average color value of the pupil area pixels. Information that a real person is being photographed when the average color value of the pupil area pixels changes beyond the threshold, and that a printout or sculpture is being photographed when it does not, may be provided through the display unit.
- When a printout or sculpture is determined to be photographed, warning means (a bell, a warning light, and the like) may be activated through the control unit. In this case, the face comparison personal identification step (ST-200) does not proceed, and the process ends.
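- A minimal sketch of this blink-based check, assuming per-frame average pupil colors have already been computed; the threshold value is not specified by the patent and is left as a parameter:

```python
import numpy as np

def is_live_by_blink(pupil_means: list[float], threshold: float) -> bool:
    """Decide real person vs. printout/sculpture from the per-frame average
    color of the pupil-area pixels (grayscale values assumed here).

    A blink makes the pupil area take on skin-colored pixels, so the
    average color swings; a printout or sculpture shows no such swing.
    """
    values = np.asarray(pupil_means, dtype=float)
    swing = values.max() - values.min()  # change in the per-frame average
    return bool(swing > threshold)       # exceeds threshold -> real person
```

- Counting the eye blinks over time (ST-131) would then amount to counting the threshold crossings of the same per-frame signal.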
- The horizontal and vertical displacements of the pupil center point 111, relative to a frame captured with the eyes open, are derived so that their change across frames may be shown on the display unit. The horizontal and vertical position changes of the pupil center point 111 are derived from its position in each captured frame; the horizontal displacement of the pupil center point 111 may also be calculated from the distance between the eye area feature point 121 and the pupil center point 111.
- FIG. 8 is a graph illustrating the change in gaze across frames (the change in face direction) when a real person is photographed, and FIG. 9 is a graph illustrating the same change when a printout is photographed. The gaze change may also be derived from the face rotation angle calculated from the facial feature points.
- A threshold is set for the horizontal and vertical gaze change across frames. When the maximum gaze change is greater than the threshold, the subject is judged to be a real person; when it is smaller than the threshold, the subject is judged to be a printout or sculpture, and the result may be shown on the display unit.
- In the latter case, warning means (a bell, a warning light, and the like) connected to the control unit may be activated, and the face comparison personal identification step (ST-200) does not proceed.
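- The gaze-based check admits the same shape of sketch, assuming the pupil-center positions of each frame have been extracted relative to an eyes-open reference frame:

```python
import numpy as np

def is_live_by_gaze(centers: np.ndarray, threshold: float) -> bool:
    """centers: (num_frames, 2) array of pupil-center (x, y) positions,
    expressed relative to an eyes-open reference frame.

    A live face shows natural gaze and head motion across frames; a flat
    printout held up to the camera shows almost none.
    """
    # Maximum horizontal and vertical excursion over the frame sequence.
    max_change = (centers.max(axis=0) - centers.min(axis=0)).max()
    return bool(max_change > threshold)
```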
- In another variant of the face identification step, distance information sensed by the distance measuring sensor, which is connected to the control unit and measures the distance to the subject, is stored in the storage unit together with each frame.
- The face area of each frame is extracted (ST-110), and the number of pixels constituting the face area (its area) is calculated and stored together with the distance information.
- The database stores face area information (pixel counts) as a function of distance, and this is compared with the face area derived from a frame of the captured identification image at the measured distance; the difference may be shown on the display unit.
- When the face area does not correspond to the area expected at the measured distance, a warning means (a bell, a warning light, and the like) connected to the control unit may be activated through the control unit.
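- A sketch of the area-versus-distance consistency check; the tolerance and the form of the distance-to-area lookup are assumptions for illustration:

```python
def area_matches_distance(face_pixels: int, distance_mm: float,
                          area_by_distance: dict[int, int],
                          tolerance: float = 0.2) -> bool:
    """Compare the measured face area (pixel count) with the area the
    database expects at the measured distance.

    area_by_distance: assumed lookup of expected pixel counts keyed by
    distance in millimetres (illustrative structure). A printout can be
    held closer or farther to fake the right pixel size, but then its
    area no longer matches the distance the sensor reports.
    """
    # Pick the database entry for the closest stored distance.
    nearest = min(area_by_distance, key=lambda d: abs(d - distance_mm))
    expected = area_by_distance[nearest]
    return abs(face_pixels - expected) <= tolerance * expected
```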
- In the personal identification step (ST-200), a 3D face database containing a frontal face image of the individual photographed by the photographing unit (hereinafter the "identification face DB") is provided. Providing 3D information or a frontal face image facilitates identification of the photographed individual.
- The personal identification step comprises: a face area derivation step (ST-210) in which the face area is derived from the image of the photographed individual (hereinafter the "identification image"); a facial feature point derivation step (ST-220) in which facial feature points are derived; an identification face rotation angle calculation step (ST-230) in which the rotation angle of the derived face area about the first axis is calculated; a DB face information comparison step (ST-230); a pixel color application step; and a face DB derivation step (ST-240) in which the identification face DB, the 3D face database for the individual to be identified, is derived.
- Deriving a face area from the image of an individual captured by the photographing unit is well known in the art, and a description thereof is omitted. For example, a region of pixels of a specific color having at least a predetermined size (number of pixels) in the image of the photographed individual may be derived as the face area.
- Facial feature points are derived from the derived face area (ST-220). The process of deriving feature points from the face area is likewise a known technique, and a description thereof is omitted.
- The face rotation angle, that is, the rotation angle about the first axis, is calculated from the feature points (ST-230; for the calculation of the face rotation angle, see, for example, Korean Patent Registration No. 10-1215751).
- Once the face rotation angle of the face area of the identification image is calculated, the identification image is compared with the face image information in the 3D face databases of the plurality of individuals, specifically with the DB face information image whose rotation angle is closest to the identification face rotation angle. For example, if the face rotation angle of the identification image is -28°, it is compared with the DB face information image with a rotation angle of -30°; if the rotation angle is an intermediate value such as -25°, it may be compared with the DB face information images with rotation angles of -30° and -20°.
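- Selecting the stored rotation angle nearest to the calculated one is straightforward; a sketch, using the patent's -30° to +30° grid in 10° steps:

```python
DB_ANGLES = range(-30, 31, 10)  # stored DB rotation angles in degrees

def nearest_db_angle(face_angle: float) -> int:
    """Return the stored DB rotation angle closest to the measured one,
    e.g. -28 deg maps to -30 deg."""
    return min(DB_ANGLES, key=lambda a: abs(a - face_angle))
```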
- Before the comparison, a step of making the face image of the identification image and that of the DB face information the same size is further performed.
- For example, if the identification image is a 2x2 pixel image and the face image of the DB face information is a 3x3 pixel image, with the identification image pixels at (x1, y1), (x1, y2), (x2, y1), and (x2, y2), the identification image is enlarged as follows:
- a pixel whose color is the average of the (x1, y1) and (x1, y2) pixels is generated between them; the former (x1, y2) pixel becomes (x1, y3) and the generated pixel becomes (x1, y2);
- a pixel whose color is the average of the (x2, y1) and (x2, y2) pixels is generated between them; the former (x2, y2) pixel becomes (x2, y3) and the generated pixel becomes (x2, y2);
- a pixel whose color is the average of the (x1, y1) and (x2, y1) pixels is generated between them; the former (x2, y1) pixel becomes (x3, y1) and the generated pixel becomes (x2, y1);
- a pixel whose color is the average of the (x1, y2) and (x2, y2) pixels is generated between them; the former (x2, y2) pixel becomes (x3, y2) and the generated pixel becomes (x2, y2);
- a pixel whose color is the average of the (x1, y3) and (x2, y3) pixels is generated between them; the former (x2, y3) pixel becomes (x3, y3) and the generated pixel becomes (x2, y3); the enlarged image is then stored in the storage unit.
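- The same enlargement, sketched under the simplifying assumption of a single grayscale channel (the patent averages per-pixel color information; handling three color channels is analogous):

```python
import numpy as np

def enlarge_by_averaging(img: np.ndarray) -> np.ndarray:
    """Enlarge an (h, w) single-channel image to (2h-1, 2w-1) by inserting,
    between every pair of adjacent pixels, a pixel holding their average
    color, following the patent's 2x2 -> 3x3 example."""
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = img  # original pixels keep their relative positions
    # Insert averaged pixels between vertically adjacent pixels.
    out[1::2, ::2] = (img[:-1, :] + img[1:, :]) / 2.0
    # Insert averaged pixels between horizontally adjacent columns,
    # including the rows created in the previous step.
    out[:, 1::2] = (out[:, :-2:2] + out[:, 2::2]) / 2.0
    return out

# enlarge_by_averaging(np.array([[1., 2.], [3., 4.]])) yields the 3x3
# grid [[1, 1.5, 2], [2, 2.5, 3], [3, 3.5, 4]].
```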
- After resizing, the feature points of the face area are derived again from the identification image, so that feature points corresponding to those of the DB face information are obtained.
- In the DB face information comparison step (ST-230), the least square error value is calculated between the feature points of the identification image and those of the face image of each DB face information entry. The face image of the DB face information having the smallest least square error value becomes the image to which pixel colors are applied (the color information application image).
- In the pixel color application step, the colors of the pixels constituting the color information application image are changed to the colors of the corresponding pixels of the identification image, stored in the storage unit, and shown on the display unit.
- More specifically, among the face images of the plurality of DB face information entries, the least error square value (Equation 1) is calculated between the feature points of each face image having the same face rotation angle as the identification image and the feature points of the identification image. The colors of the pixels forming the DB face information face image having the smallest least error square value with respect to the identification image are changed to the colors of the pixels of the identification image and stored in the storage unit. Hereinafter, the DB face information image whose pixel colors have been changed to those of the identification image is called the "identification derived face image".
- In Equation 1, dist_rms = sqrt( (1/n) * Sum_{i=1..n} ( (x_i - x_il)^2 + (y_i - y_il)^2 ) ), where n is the number of feature points, x_i and y_i are the x- and y-axis coordinates of the i-th feature point of the identification face image, x_il and y_il are the x- and y-axis coordinates of the i-th feature point of the DB face information image, and dist_rms is the least error square value.
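- A sketch of the comparison of ST-230 using the RMS form given above (the exact formula of Equation 1 is an assumption recovered from the variable definitions):

```python
import numpy as np

def dist_rms(id_points: np.ndarray, db_points: np.ndarray) -> float:
    """Least error square value between the n feature points of the
    identification image and those of one DB face information image.
    Both arrays have shape (n, 2) holding (x, y) coordinates."""
    diff = id_points - db_points
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

def best_db_match(id_points, db_entries):
    """Pick the DB face information entry with the smallest dist_rms;
    db_entries maps an identifier to its (n, 2) feature point array."""
    return min(db_entries, key=lambda k: dist_rms(id_points, db_entries[k]))
```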
- For example, if the face rotation angle is 30°, the face image of the 30°-rotated DB face information takes on the pixel colors of the identification image. By applying the frontal image information (rotation angle 0°) and the pixel position information of the same DB face information to the 30°-rotated face image, a frontal identification face image (rotation angle 0°) having the pixel colors of the identification image is obtained.
- In this way, the identification face DB, the 3D face database for the individual to be identified, is derived (ST-240). The DB face information face image having the smallest least square error value with respect to the identification image is derived, the color information of the pixels forming it is changed to the color information of the pixels constituting the identification image to obtain the identification face image, and by applying the obtained identification face image information to the DB face information, the 3D information and the frontal image of the identification face are obtained.
- The personal identification step may further include: a face area extraction step in which the face area is extracted from each frame forming the identification image stored in the storage unit; a face feature vector extraction step in which a feature vector of the extracted face area is derived; and a face feature vector comparison step in which the Euclidean distance between the feature vector of the extracted face area and the face feature vectors of the data registered in the database is calculated. The face area feature vector is derived from the face area information extracted and frontalized in the face area extraction step. Since the method of deriving a face area feature vector and the Euclidean distance calculation are conventional techniques, descriptions thereof are omitted.
- The Euclidean distance is calculated between the face feature vector of the captured identification image and the face feature vectors of the data registered for the plurality of individuals in the database; this Euclidean distance serves as the face similarity. The identification image of the photographed individual is displayed, together with the calculated Euclidean distance and the image of the registered individual, on a display unit connected to the control unit (for example, an LCD panel; not shown). The similarity is divided into four classes based on the Euclidean distance, and the class information is displayed as well.
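- A hedged sketch of the similarity computation and four-class grading; the class boundaries are placeholders, since the patent does not give numeric values:

```python
import numpy as np

def face_similarity(v_probe: np.ndarray, v_db: np.ndarray) -> float:
    """Euclidean distance between two face feature vectors; smaller
    means more similar."""
    return float(np.linalg.norm(v_probe - v_db))

def similarity_class(distance: float,
                     bounds: tuple = (0.4, 0.8, 1.2)) -> int:
    """Grade the distance into 4 classes (1 = most similar).
    The boundary values here are illustrative placeholders."""
    return 1 + sum(distance > b for b in bounds)
```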
- Alternatively, the convolutional neural network module is executed, and the face feature vector of the identification image is derived by applying personal identification weights learned in advance from the training images.
- The personal identification method through face comparison according to the present invention further comprises a face information acquisition step in addition to the face comparison personal identification step. The face information acquisition step consists of a learning step and a face information calculation step, and provides race information and gender information for the identification image.
- The learning step comprises, for the gender information: forming a convolutional layer from the training images; forming a pooling layer from the maps of the convolutional layer; a hidden layer information calculation step in which the pooling values serve as input information and the hidden layer node values are calculated from the hidden layer weights and the inputs; an output layer information calculation step; and a weight derivation step in which updated values of the hidden layer weights and the output layer weights are calculated, the hidden layer and output layer information calculation steps are recalculated, and the process of comparing the calculated output layer information with the training image information is repeated.
- A map is formed in the convolutional layer by applying a kernel to each training image. The color information of each pixel constituting the training image becomes a component of the training image matrix. It is also possible to convert the training image to a black-and-white image before forming the map.
- The kernel may be a matrix of various sizes, such as a 2x2, a 3x3, or a 5x5 matrix.
- The inventors formed maps using a 3x3 matrix as the kernel to form the convolutional layers. Five hidden layers were formed, with the number of kernels increasing toward the output layer: 16 kernels for the first hidden layer, 32 for the second, 64 for the third, 128 for the fourth, and 512 for the fifth. Increasing the number of kernels toward the output layer in this way when forming the hidden layers improves accuracy.
- The component values of the kernel can be generated using a random function; each component value should not exceed two.
- A plurality of kernels is provided, and a map is produced for each kernel to form the convolutional layer.
- If, for example, the kernel is a 2x2 matrix with components 1, 0, 0, and 1, the kernel is overlaid on the first 2x2 pixels of the captured training image and the corresponding map component value is calculated. The map component values are then calculated sequentially by shifting the kernel pixel by pixel across the training image, producing the map matrix. A plurality of kernels having different component values is provided as described above, and a map is produced for each kernel to form the convolutional layer. If the kernel is a 2x2 matrix and the training image is 4x4 pixels, the map is formed as a 3x3 matrix.
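- A minimal sketch of this map formation, i.e. a stride-1 sliding-window sum of element-wise products:

```python
import numpy as np

def conv_map(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel pixel by pixel over the image and sum the
    element-wise products; a 2x2 kernel on a 4x4 image yields a 3x3 map."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# The patent's example kernel with components 1, 0, 0, 1:
kernel = np.array([[1, 0], [0, 1]])
```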
- Each map constituting the convolutional layer is output through an activation function. The ReLU function is adopted as the activation function here; it is also possible to use the sigmoid function.
- The pooling layer is formed of a plurality of poolings, one formed on the output values of each map constituting the convolutional layer. Pooling may take the average value or the maximum value. Pooling may be performed over 2x2 matrix regions, the maximum or average of the map components within each 2x2 region becoming the pooling component value; a map of 4x4 matrix size pooled at 2x2 size thus becomes a 2x2 matrix.
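- A sketch of the non-overlapping 2x2 pooling described above; swapping np.max for np.mean gives average pooling:

```python
import numpy as np

def pool_2x2(feature_map: np.ndarray, mode: str = "max") -> np.ndarray:
    """Non-overlapping 2x2 pooling: a 4x4 map becomes a 2x2 matrix."""
    reduce = np.max if mode == "max" else np.mean
    h, w = feature_map.shape
    blocks = feature_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return reduce(blocks, axis=(1, 3))
```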
- Each pooling component becomes an input value, and the node values constituting the hidden layer are calculated from these inputs and the hidden layer weights.
- Equation 2 is an example of the calculation of each hidden layer node: Ej = phi( Sum_i wij * pi ), the sum running over i = 0, ..., m, where m is the number of components forming the pooling, Ej is a hidden layer node value, pi is a pooling component, wij is a weight, p0 = 1 is the bias, and phi is the activation function. The sigmoid function is used as the activation function here. The weight wij may be any value selected between 0 and 2.
- The hidden layer may be formed of one or more layers. Equation 2 calculates the node values of the first hidden layer; the node values of the second hidden layer are calculated from the node values and weights of the first hidden layer. The weights used in calculating the first node values and those used in calculating the second node values may differ from each other, and in the case of a plurality of hidden layers the calculations proceed in sequence as described above.
- The output layer node values are calculated from the node values of the last hidden layer and the output layer weights. Since gender is male or female, there are two output layer nodes. Equation 3 is an example of the calculation of an output layer node: Tj = phi( Sum_i vij * Ei ), the sum running over the nodes of the last hidden layer with E0 = 1 as the bias, where Tj is an output layer node value, Ei is a node value of the last hidden layer, vij is a weight, and phi is the activation function; ReLU is used as the activation function. The weight vij may be any value selected between 0 and 2.
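- Equations 2 and 3 as a forward pass, following the reconstructions above (layer sizes are arbitrary; the weight matrices fold the bias terms in as a first row):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_nodes(pooling: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Equation 2: E_j = sigmoid(sum_i w_ij * p_i) with p_0 = 1 as bias.
    pooling: (m,) pooled inputs; w: (m + 1, num_hidden) weights."""
    p = np.concatenate(([1.0], pooling))  # prepend the bias input
    return sigmoid(p @ w)

def output_nodes(hidden: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Equation 3: T_j = ReLU(sum_i v_ij * E_i) with E_0 = 1 as bias."""
    e = np.concatenate(([1.0], hidden))
    return np.maximum(0.0, e @ v)         # ReLU activation
```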
- The weights are derived so that, when the training image is male, the output layer node value calculated through the above process is larger than 0.5, and, when the training image is female, the output layer node value is smaller than 0.5.
- The hidden layer node value calculation weights and the output layer calculation weights so obtained are stored in the storage unit.
- The output node value is calculated for all of the prepared training images, and the accuracy of the weights is checked. In the classification, a value larger than 0.5 is classified as male and a value smaller than 0.5 as female (or vice versa).
- If, for example, a training image is male and the output layer node value calculated through the above steps is 0.3, the error is 0.7, and from this error value the weight updates are calculated according to the backpropagation algorithm (a detailed description of which is omitted, as it is a conventional technique).
- For the identification image, the output layer node value is calculated through the same process; if, for example, the output layer node value for the identification image is greater than 0.5, the image is classified as male, and if it is less than 0.5, as female, and the result is stored.
- Race information is calculated through the same process, with the number of output layer nodes set equal to the number of race categories. For the race computation, the sigmoid function is used as the activation function for the hidden layer node calculations, and the softmax function as the activation function for the output layer node calculations. The sum of the output layer node values is therefore 1; the race is classified according to whichever of the three output layer node values is largest, and the larger that maximum value, the higher the accuracy.
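- A sketch of the softmax output layer for the race nodes (numerically stabilized; the three-category label set is a placeholder):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Softmax over the output layer nodes; the results sum to 1."""
    z = logits - logits.max()  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def classify_race(output_nodes: np.ndarray, labels=("A", "B", "C")):
    """Pick the race whose node value is largest; labels are placeholders."""
    probs = softmax(output_nodes)
    return labels[int(np.argmax(probs))], float(probs.max())
```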
- As described above, sculptures and printouts are distinguished from real faces, so that when the method is applied to an access or security device, unauthorized entry using a malicious sculpture or printout is blocked; identical and non-identical persons can be distinguished by graded similarity; and race, gender, and age information is provided, improving operational efficiency.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Ophthalmology & Optometry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
Description
본 발명은 얼굴 비교를 통한 개인 식별 방법 및 그 방법이 실행되는 개인 식별 시스템에 관한 것으로, 보다 상세하게는 조형물인지 구분되고 동일인 여부가 식별되며 인종 등과 같은 추가 정보가 제공될 수 있는 본 발명은 얼굴 비교를 통한 개인 식별 방법 및 그 방법이 실행되는 개인 식별 시스템에 관한 것이다.The present invention relates to a method of personal identification through face comparison and a personal identification system in which the method is executed. More particularly, the present invention provides a face that can be identified as a sculpture, identified as the same, and additional information such as race can be provided. A method of personal identification through comparison and a personal identification system in which the method is implemented.
기존의 개인 식별 시스템은 주로 출입 보안 장치나 개인 맞춤형 정보 제공 시스템 등에서 많이 사용되며, 현장에서 촬영된 얼굴 이미지와 이미 등록된 이미지를 비교하여 동일인인지 여부에 대한 결과만을 출력하였다. 이러한 방법은 사람의 얼굴과 컬러프린터 등으로부터 사람의 얼굴 이미지를 출력한 출력물 또는 인위적으로 제작된 조형물을 구분하는 방법이 적용되어있지 않아 보안상 치명적인 문제를 야기할 가능성이 있으며 이분법적 판단 결과는 잘못된 결과, 즉 동일인임에도 불구하고 다른 사람으로 인식되는 경우 본인 인증이 되지 않아 불편을 초래하는 경우와 비동일인임에도 불구하고 동일인으로 판단할 수 있는 문제점이 있다.Existing personal identification system is mainly used in access security device or personalized information providing system, etc., and outputs only the result of whether the same person is compared by comparing the face image taken in the field with the already registered image. This method does not apply the method of distinguishing the print out of the human face image from the human face and the color printer or artificially manufactured sculptures, which may cause a fatal security problem and the dichotomous judgment result is wrong. As a result, in spite of being the same person, if it is recognized as a different person, there is a problem that can be judged as the same person even though the person is not the same person, causing inconvenience and the same person.
이분법적 판단 시스템에서 동일인으로 판단하는 기준 유사도를 높게 설정하면 동일인임에도 비동일인으로 결과를 보여주는 경우가 빈번하게 되어 출입 시스템에서 불편을 초래하고, 반대로 기준 유사도를 낮게 설정하면 비동일인임에도 동일인으로 판단하는 문제가 발생하게 된다.In the dichotomous judgment system, if the standard similarity judged as the same person is set high, the result is often the same person and the result is not the same person, which causes inconvenience in the entrance system. On the contrary, if the standard similarity is set low, the same person is judged as the same person. Problems will arise.
일반적으로 대부분의 출입 장치 및 보안 장치들은 사람의 얼굴과 출력물을 구분하지 못하고 판단에서는 기준 유사도를 높게 설정하여 불편함을 감수하고자 하여 알람이 발생하면 보안 요원 등이 알람이 발생된 대상자를 불러 사진과 실제 사람을 육안으로 다시 비교하는 문제가 있다.In general, most access devices and security devices cannot distinguish human faces and printouts, and in case of judgment, it is necessary to set a high standard similarity and take inconvenience. There is a problem of comparing the real person with the naked eye again.
또한, 유사도만을 기준으로 개인 식별을 하는 기존의 시스템은 식별 후 식별된 개인에 대한 인종, 성별에 관한 통계 정보를 자동으로 수집하여 개인 맞춤형 시스템 등에 응용이 불가능하다.In addition, the existing system for personal identification based on the similarity only automatically collects statistical information about race and gender for the identified individual and is not applicable to a personalized system.
본 발명은 상기와 같은 문제점을 해결하기 위하여 제안된 것으로, 대비하려는 대상이 실제 얼굴인지 얼굴을 인쇄한 인쇄물이나 조형물인지 구분되도록 할 수 있으며, 얼굴의 유사도 자료를 제공하며, 인종이나 성별에 대한 정보도 함께 제공할 수 있는 얼굴 비교를 통한 개인 식별 방법을 제공하는 것을 목적으로 한다.The present invention has been proposed to solve the above problems, and can be distinguished whether the object to be prepared is a real face or printed matter or sculpture printed on the face, provides a similarity data of the face, information on race or gender Another object of the present invention is to provide a personal identification method through face comparison that can be provided together.
상기와 같은 목적을 위하여 본 발명은 촬영부와, 상기 촬영부에 연결된 제어부와, 상기 제어부에 연결되어 촬영부에서 촬영된 복수의 식별 이미지로 이루어지는 식별 영상과 개인 식별 프로그램이 저장되는 저장부로 이루어진 개인 식별 시스템에서 실행되어, 데이터 베이스에 등록된 DB 얼굴 정보와 대비하는데 있어서;To this end, the present invention provides an individual comprising a photographing unit, a control unit connected to the photographing unit, a storage unit storing an identification image and a personal identification program consisting of a plurality of identification images connected to the control unit photographed by the photographing unit. Executed in the identification system to contrast with DB face information registered in the database;
개인 식별 단계를 포함하고, 상기 개인 식별 단계는 촬영된 개인의 식별 이미지로부터 얼굴 영역이 도출되는 얼굴영역도출단계와, 얼굴의 특징점들이 도출되는 얼굴특징점도출단계와, 도출된 얼굴영역의 회전각도가 연산되는 식별얼굴회전각도 연산단계와, 복수의 DB 얼굴 정보 이미지와 식별 이미지가 대비되는 DB얼굴정보 대비단계와, 픽셀색상적용단계와, 식별하려는 개인에 대한 얼굴 3D 데이터 베이스인 식별 얼굴 DB가 도출되는 얼굴DB 도출단계로 이루어져; 촬영된 개인을 식별하기 위한 정보가 제공되는 얼굴 비교를 통한 개인 식별 방법을 제공한다.And a personal identification step, wherein the personal identification step includes a facial area derivation step of deriving a face area from an image of an individual photographed, a facial feature point derivation step of deriving facial feature points, and a rotation angle of the derived face area. An operation for calculating the identification face rotation angle, a DB face information contrast step in which a plurality of DB face information images and an identification image are contrasted, a pixel color application step, and an identification face DB which is a face 3D database for the individual to be identified are derived. Consisting of the face DB derivation step; The present invention provides a personal identification method through face comparison in which information for identifying a photographed individual is provided.
상기에서, DB얼굴정보 대비단계에서는 식별 이미지에 대하여 복수의 DB 얼굴 정보 이미지의 최소오류자승값이 연산되고, 상기 픽셀색상적용단계에서는 DB 얼굴 정보 이미지 중 식별 이미지에 대하여 가장 작은 최소오류자승값을 가지는 DB 얼굴 정보 이미지를 이루는 픽셀의 색상 정보가 식별 이미지로 변경되어 저장부에 저장되는 것을 특징으로 한다.In the DB face information contrasting step, a minimum error square value of a plurality of DB face information images is calculated for the identification image, and in the pixel color applying step, the smallest minimum square error value for the identification image among the DB face information images is calculated. The color information of pixels constituting the DB face information image is changed into an identification image and stored in the storage unit.
상기에서, 개인 식별 단계 전에 얼굴 식별 단계를 더 포함하고; 상기 얼굴 식별 단계는 식별 영상을 이루는 각 프레임에서 눈 영역이 도출되는 눈 영역 검출 단계와, 눈 영역의 눈동자 영역이 검출되는 눈동자 영역 도출 단계와, 각 프레임마다 눈동자 영역을 이루는 픽셀의 색상 정보가 대비되는 눈동자 영역 색상 정보 대비 단계와, 각 프레임의 눈동자 영역을 이루는 픽셀의 색상 정보 변화로부터 실제 얼굴이 촬영되는지 조형물이 촬영되는지 판단되는 정보가 제공되는 조형물 판별 정보 제공 단계로 이루어지는 것을 특징으로 한다.In the above, further comprising a face identification step before the personal identification step; The face identification step includes eye area detection step of eye area derived from each frame constituting the identification image, eye area derivation step of detecting eye area of eye area, and color information of pixels constituting eye area of each frame. It is characterized in that it comprises a step of contrasting the pupil area color information to be provided, and the sculpture identification information providing step of providing information that determines whether the actual face or sculpture is photographed from the change in the color information of the pixels constituting the pupil area of each frame.
상기에서, 눈 영역이 검출되는 단계 전에 촬영된 복수 프레임의 얼굴 영역이 도출되는 얼굴 영역 도출 단계를 더 포함하고, 눈동자 영역 도출 단계 후에 각 프레임에서 눈동자 영역을 이루는 픽셀 정보가 연산되는 눈동자 영역 픽셀 정보 연산 단계를 더 포함하며; 상기 색상 정보 대비 단계에서는 각 프레임의 눈동자 영역 픽셀의 색상 정보가 대비되는 것을 특징으로 한다.The method may further include a face region deriving step of deriving a face region of a plurality of frames photographed before the eye region is detected, and after the pupil region derivation step, pixel information forming a pupil region in each frame is calculated. Further comprising a computing step; In the color information comparing step, the color information of the pupil area pixels of each frame is contrasted.
상기에서, 개인 식별 단계 전에 얼굴 식별 단계를 더 포함하고; In the above, further comprising a face identification step before the personal identification step;
상기 얼굴 식별 단계는 촬영부에서 촬영된 영상을 이루는 각 프레임에서 눈 영역이 도출되는 눈 영역 검출 단계와, 촬영된 영상을 이루는 각 프레임의 얼굴 영역의 특징점이 도출되는 얼굴 영역 특징점 도출 단계와, 눈 영역의 눈동자 영역이 검출되는 눈동자 영역 도출 단계와, 눈동자 영역의 위치가 도출되는 눈동자 영역 위치 도출 단계와, 상기 눈동자 영역 위치 도출 단계에서 도출된 눈동자 영역 위치와 얼굴 영역 특징점의 거리가 연산되는 눈동자 위치 거리 연산 단계와, 각 프레임의 눈동자 위치 거리 변화로부터 실제 얼굴이 촬영되는지 조형물이 촬영되는지 판단되는 정보가 제공되는 조형물 판별 정보 제공 단계로 이루어지는 것을 특징으로 한다.The face identification step may include: an eye area detection step in which an eye region is derived from each frame constituting an image captured by the photographing unit, a face area feature point derivation step in which feature points of a face area of each frame constituting the captured image are derived, and an eye; A pupil area derivation step of detecting a pupil area of an area, a pupil area location derivation step of deriving a pupil area location, and a pupil location from which a distance between a pupil area location and a facial area feature point derived in the pupil area location derivation step is calculated. The distance calculation step and the sculpture identification information providing step of providing information that determines whether the actual face is photographed or the sculpture is photographed from the change in the pupil position distance of each frame.
상기에서, 개인 식별 단계 전에 얼굴 식별 단계를 더 포함하고; In the above, further comprising a face identification step before the personal identification step;
상기 얼굴 식별 단계는 촬영부에서 촬영된 영상을 이루는 각 프레임에서 얼굴 영역이 도출되는 단계와, 얼굴 영역의 방향이 연산되는 단계와, 각 프레임의 얼굴 영역 방향 변화로부터 실제 얼굴이 촬영되는지 조형물이 촬영되는지 판단되는 정보가 제공되는 조형물 판별 정보 제공 단계로 이루어지는 것을 특징으로 한다.The face identification step includes deriving a face region from each frame constituting the image captured by the photographing unit, calculating a direction of the face region, and photographing whether a real face is photographed from a change in the face region direction of each frame. Sculpture identification information providing step is provided, characterized in that the information is determined.
상기에서, 시스템은 제어부에 연결된 거리 측정 센서를 더 포함하여, 상기 각 프레임은 거리 측정 센서로부터 측정된 거리 정보와 함께 저장부에 저장되며; In the above, the system further comprises a distance measuring sensor connected to the control, wherein each frame is stored in the storage with the distance information measured from the distance measuring sensor;
상기 개인 식별 단계 전에 얼굴 식별 단계를 더 포함하고; 상기 얼굴 식별 단계는 상기 촬영부에서 촬영된 영상을 이루는 각 프레임에서 얼굴 영역이 도출되는 얼굴 영역 도출 단계와, 얼굴 영역 도출 단계에서 도출된 얼굴 영역 정보가 연산되는 얼굴 영역 정보 연산 단계와, 각 프레임의 얼굴 영역 정보와 거리 정보로부터 실제 얼굴이 촬영되는지 조형물이 촬영되는지 판단되는 정보가 제공되는 조형물 판별 정보 제공 단계로 이루어지는 것을 특징으로 한다.Further comprising a face identification step before the personal identification step; The face identification step includes a face area derivation step in which a face area is derived from each frame constituting the image photographed by the photographing unit, a face area information calculation step in which face area information derived in the face area derivation step is calculated, and each frame It is characterized in that it comprises a sculpture identification information providing step that provides information that determines whether the actual face is photographed or the sculpture is photographed from the face area information and distance information.
*상기에서, 저장부에는 합성곱신경망 모듈과 학습용 촬영 영상 및 학습용 촬영 영상 정보가 저장되고, 상기 개인 식별 단계에 더하여 추가 정보 제공 단계를 더 포함하며; 상기 추가 정보 제공 단계는 학습 단계와 얼굴 정보 연산 단계로 이루어져 식별 영상의 인종 정보와 성별 정보가 제공되며;In the above, the storage unit stores the composite product neural network module, the learning photographed image and the learning photographed image information, and further comprising the step of providing additional information in addition to the personal identification step; The additional information providing step includes a learning step and a face information calculation step, wherein racial information and gender information of the identification image are provided;
상기 학습 단계는 인종 정보와 성별 정보에 각각에 대한 학습용 촬영 영상의 컨벌루션 계층이 형성되는 단계와, 컨벌루션 계층을 이루는 맵으로부터 풀링 계층이 형성되는 단계와, 풀링 계층을 이루는 풀링의 정보가 입력 정보가 되며 은닉층 가중치와 입력 정보의 연산으로 은닉층 노드값들이 연산되는 은닉층 정보 연산 단계와, 은닉층 노드값과 출력층 가중치의 연산으로 출력층이 연산되는 출력층 정보 연산 단계와, 출력층 정보와 학습용 촬영 영상 정보로부터 에러가 연산되고 역전파 알고리즘에 따라 은닉층 가중치와 출력층 가중치의 갱신값이 연산되어 은닉층 정보 연산 단계와 출력층 정보 연산 단계가 다시 반복 연산되고 연산된 출력층 정보가 학습용 촬영 영상 정보와 대비되는 과정이 반복되어 은닉층 가중치의 갱신값과 출력층 가중치의 갱신값이 도출되는 가중치 도출단계로 이루어지며;The learning step includes the step of forming a convolutional layer of the training photographing image for each of the race information and the gender information, the step of forming a pooling layer from a map of the convolutional layer, and the information of the pooling of the pooling layer. Hidden layer information operation step of calculating hidden layer node values by operation of hidden layer weight and input information, output layer information calculating step of calculating output layer by operation of hidden layer node value and output layer weight, and error from output layer information and training photographing image information. The updated values of the hidden layer weights and the output layer weights are calculated according to the backpropagation algorithm, and the hidden layer information calculation step and the output layer information calculation step are repeatedly calculated, and the process of comparing the calculated output layer information with the photographed image information for training is repeated. Of the update value and output layer weight Made of a weighting stage which is derived singap is derived;
상기 학습 단계에서 도출된 은닉층 가중치 갱신값과 출력층 가중치의 갱신값이 얼굴 정보 연산 단계의 인종 정보와 성별 정보 각각에 대한 은닉층 가중치와 출력층 가중치가 되는 것을 특징으로 한다.The hidden layer weight update value and the output layer weight update value derived in the learning step may be the hidden layer weight and the output layer weight for each of the race information and the gender information in the face information calculation step.
상기에서, 얼굴 정보 연산 단계는 인종 정보와 성별 정보 각각에 대한 식별 영상의 컨벌루션 계층이 형성되는 단계와, 컨벌루션 계층을 이루는 맵으로부터 풀링 계층이 형성되는 단계와, 풀링 계층을 이루는 풀링의 정보가 입력 정보가 되며 은닉층 가중치와 입력 정보의 연산으로 은닉층 노드값들이 연산되는 은닉층 정보 연산 단계와, 은닉층 정보와 출력층 가중치의 연산으로 출력층이 연산되는 출력층 정보 연산 단계로 이루어지며; 인종 정보 획득을 위한 연산에서 출력층을 이루는 노드는 인종의 수와 같고, 성별 정보 획득을 위한 연산에서 출력층을 이루는 노드는 2개인 것을 특징으로 한다.In the above operation, the face information calculation step includes a step of forming a convolutional layer of an identification image for each of the race information and the gender information, a step of forming a pooling layer from a map of the convolutional layer, and a pooling information of the pooling layer. A hidden layer information calculating step of calculating the hidden layer node values by the calculation of the hidden layer weight and the input information, and an output layer information calculating step of calculating the output layer by calculating the hidden layer information and the output layer weight; Nodes constituting the output layer in an operation for obtaining race information are equal to the number of races, and two nodes constituting the output layer in an operation for obtaining gender information are provided.
본 발명에 따르는 얼굴 비교를 통한 개인 식별 방법 및 그 방법이 실행되는 개인 식별 시스템에 의하면, 조형물이나 출력물이 실제 얼굴과 구별되므로 출입 장치나 보안 장치에 적용되면 악의적인 목적을 갖는 조형물이나 출력물 등을 사용한 무단 출입이 차단되며, 동일인과 비동일인이 유사도 단계별로 확인되며, 인종, 성별 및 연령 정보가 제공될 수 있는 효과가 있다.According to the personal identification method through face comparison according to the present invention and the personal identification system on which the method is executed, a sculpture or an output is distinguished from an actual face, so that when applied to an access device or a security device, a sculpture or an output having a malicious purpose may be used. Unauthorized access is blocked, the same person and the same person is identified by similarity step by step, there is an effect that the race, gender and age information can be provided.
Fig. 1 shows the steps in which the personal identification method through face comparison according to the present invention is executed;
Fig. 2 illustrates the face identification step of Fig. 1;
Fig. 3 is a schematic flowchart illustrating the face identification step of the present invention;
Figs. 4 and 5 schematically show the eye region;
Figs. 6 and 7 are schematic flowcharts showing the process by which the gaze direction used in the face identification step is derived;
Figs. 8 and 9 are graphs showing the change of gaze across frames.
Hereinafter, the personal identification method through face comparison according to the present invention is described in detail with reference to the drawings.
The personal identification method through face comparison according to the present invention is executed in a personal identification system comprising a photographing unit, a control unit connected to the photographing unit, and a storage unit connected to the control unit, in which an identification image consisting of a plurality of frames captured by the photographing unit and a personal identification program are stored.
The personal identification system further includes a distance measuring sensor connected to the control unit, which provides measured distance information to the control unit, and a communication module connected to the control unit for transmitting and receiving signals; a convolutional neural network module is stored in the storage unit. The photographing unit, control unit, storage unit, and distance measuring sensor are provided in a main body (not shown).
The personal identification system may be installed and operated at a door. When power is supplied and an operation command is input, the personal identification program runs and the personal identification method through face comparison according to the present invention is executed.
The storage unit stores a 3D face database for a plurality of individuals (e.g., 300 people). For each individual, face image information (pixel colors and coordinates of the face region, and feature points of the face region) is stored for faces rotated at 10° intervals from -30° to +30° about an axis (the first axis) that extends in the vertical direction of the face and passes through its center. Hereinafter, this face image information is referred to as "DB face information."
A face with a rotation angle of 0° is the frontal face. The first axis is the axis passing through the center of the bilaterally symmetric face. The process of deriving the feature points of the face region, and the derived feature points themselves, are well known in the prior art and are not described here.
In -30°, the sign "-" denotes rotation of the face to the right about the first axis, and in +30°, "+" denotes rotation to the left about the first axis. Storage at 10° intervals is described as an example; storage at 5° or 15° intervals is also possible.
The table above lists four individuals as an example to explain how the 3D face database of individuals is stored in the storage unit. In the table, A, B, C, and D denote different individuals. The 3D face database may also be stored on a server that communicates with the personal identification system in which the present invention is executed.
As shown in Fig. 1, the personal identification method through face comparison according to the present invention consists of a face identification step (ST-100), a personal identification step (ST-200), and an additional information provision step (ST-300).
In the face identification step (ST-100), information is derived for judging whether the identification image, which consists of a plurality of frames captured by the photographing unit and stored in the storage unit, shows a real person or a sculpture or printout of a person;

in the personal identification step (ST-200), the individual captured by the photographing unit is compared with the data registered in the database, and material by which the captured individual can be identified is provided;

and in the additional information provision step (ST-300), information for judging the race and gender of the captured individual is provided. The information derived in each step is transmitted through the communication module and displayed on a monitor, for example in a security room.
In the face identification step (ST-100), material is prepared by which it can be distinguished whether the image captured by the photographing unit shows a real person or a sculpture or printed matter such as a photograph of a person.
The face identification step (ST-100) consists of an eye region detection step in which the eye region is derived from each frame of the identification image, a pupil region derivation step (ST-120) in which the pupil region within the eye region is detected, a pupil region color comparison step (ST-120; blink count computation) in which the color information of the pixels forming the pupil region is compared frame by frame, and a sculpture discrimination information provision step (ST-140) in which information for judging whether a real face or a sculpture is being photographed is provided from the change in the color information of the pixels forming the pupil region across frames.
As shown in Fig. 4, the eye region (100) is derived from each frame of the identification image. The eye region (100) can be derived using a mask. The mask for deriving the eye region (100) is rectangular, and its size can be determined from the size of the derived face region. The sizes of the pupil region (110) and of the surrounding region (120), the white area around the pupil shown in Fig. 5, are determined; within the derived face region, the pixel portion of the same size as the mask whose arrangement within the rectangular area is most similar to the pupil region (110) and the surrounding white region (120) is derived as the eye region (100), and the pupil region (110) is then derived from the color information of the pixels in the derived eye region (100). Among the pixels of the derived pupil region, the pixels with the largest and smallest vertical coordinates and those with the largest and smallest horizontal coordinates are found, yielding the feature points (113, 115) of the pupil region shown in Fig. 5. The eye region feature points (121), the left and right end points of the surrounding region of the eye, can also be derived. The horizontal center coordinate is computed from the maximum and minimum horizontal coordinates of the pupil, and the vertical center coordinate from the maximum and minimum vertical coordinates, yielding the pupil center point (111). Computation is performed on information such as the pupil region (110) and the pupil center point (111) (ST-130), and the blink count (ST-131) and the gaze direction (ST-133) are computed.
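As a minimal sketch of these coordinate extremes and of the pupil center point (111), assuming the pupil region has already been segmented into a boolean NumPy mask (the segmentation itself is not specified here), the computation could look like this:

```python
import numpy as np

def pupil_features(pupil_mask: np.ndarray):
    """Derive the extreme feature points and the center point of a pupil region.

    pupil_mask: boolean 2D array, True where a pixel belongs to the pupil region.
    """
    ys, xs = np.nonzero(pupil_mask)           # coordinates of pupil pixels
    if xs.size == 0:
        return None                            # no pupil found (e.g., eye closed)
    x_min, x_max = xs.min(), xs.max()          # horizontal extremes
    y_min, y_max = ys.min(), ys.max()          # vertical extremes
    center = ((x_min + x_max) / 2.0,           # horizontal center coordinate
              (y_min + y_max) / 2.0)           # vertical center coordinate
    return {"left": x_min, "right": x_max,
            "top": y_min, "bottom": y_max, "center": center}
```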
In the computation of the gaze direction, the direction of the face is computed from changes in the distances between facial feature points, and the face direction can be used as the gaze direction. For example, the face rotation direction can be computed from the distance between the left and right end feature points of the left eye region and the distance between the left and right end feature points of the right eye region. When the face is turned to the right, the distance between the left and right end feature points of the right eye region decreases, and the face rotation direction, that is, the gaze direction, can be computed from the rate of this decrease. The degree of vertical face rotation can be computed from the change in distance between the feature points of the eye region and those of the lip region, and from the change of face direction across frames, material can be provided for judging whether the photographed subject is a sculpture or a real face (see Figs. 8 and 9).
Fig. 5 schematically shows the eye region portion of a frame of the captured image taken with the eye closed; since the closed eye is photographed, the pupil region and the surrounding region within the eye region have pixels of the same color as the skin. Even in an eye region captured at the moment the eye is closed, the eye region feature points (121) can still be derived.
As shown in Fig. 5, the color change of the pixels forming the eye region (100) is derived between a frame captured with the eye open and the frame of Fig. 6 captured at the moment the eye is closed. The color information of the pixels forming the pupil region (110, 110a) within the eye region (100) is derived. For each frame, the color information of the pixels of the pupil region is derived, the average color value of those pixels is computed, and the change of this color information over the time of each frame is displayed on the display unit. In this way, the blink count over time is computed from the color information of the pixels forming the eye region in each frame (ST-131).
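A rough sketch of blink counting from the per-frame mean pupil color follows; it assumes closed-eye frames push the mean toward the skin color, and the threshold is an empirical value that the specification leaves open (the thresholding itself is discussed in the next paragraph):

```python
def count_blinks(frame_means, threshold):
    """Count blinks from per-frame mean pupil-region color values.

    frame_means: sequence of mean color values of the pupil region, one per frame.
    threshold: mean value above which the pupil is taken to be hidden (eye closed).
    """
    blinks = 0
    closed = False
    for mean in frame_means:
        if mean > threshold and not closed:   # transition open -> closed
            blinks += 1
            closed = True
        elif mean <= threshold:               # eye open again
            closed = False
    return blinks
```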
A threshold is specified for the change in the average color value of the pixels forming the pupil region; when the change in the average color value of the pupil region pixels exceeds the threshold, information that a person is being photographed can be provided through the display unit, and when it does not exceed the threshold, information that a printout or sculpture is being photographed. When information that a printout or sculpture is being photographed is provided, warning means connected to the control unit (a bell, a warning light, etc.) can be operated through the control unit; in this case, the face comparison personal identification step (ST-200) is not carried out and the process ends.
Meanwhile, as shown in Fig. 5, the horizontal and vertical displacements of the pupil center point (111) in frames captured with the eye open are derived, and their change across frames can be displayed on the display unit. The horizontal and vertical position changes of the pupil center point (111) are derived from its position in the captured frames; the horizontal displacement of the pupil center point (111) can also be derived by a computation involving the eye feature point (121) and the pupil center point (111).
Fig. 8 is a graph showing the change of gaze (change of face direction) across frames when a real person is photographed, and Fig. 9 is a graph showing the change of gaze across frames when a printout is photographed. As shown in Figs. 8 and 9, it was confirmed that when a real person is photographed, the change of gaze across frames is large in the vertical and horizontal directions alike. The gaze change can also be derived from the face rotation angle computed from the facial feature points.
Thresholds can be specified for the horizontal and vertical gaze changes across frames, so that the subject is judged to be a person when the maximum gaze change exceeds the threshold, and a sculpture or printout when the maximum gaze change is below the threshold, with the result displayed on the display unit. When information that a printout or sculpture is being photographed is provided, warning means connected to the control unit (a bell, a warning light, etc.) can be operated through the control unit; in this case, the face comparison personal identification step (ST-200) is not carried out and the process ends.
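One reading of this test, taking the peak-to-peak variation of the per-frame gaze series as "the maximum gaze change" and using empirically chosen thresholds (no values are given in the specification), is sketched below:

```python
def is_live_subject(gaze_x, gaze_y, thresh_x, thresh_y):
    """Judge person vs. printout/sculpture from gaze variation across frames.

    gaze_x, gaze_y: per-frame horizontal and vertical gaze values.
    thresh_x, thresh_y: empirically chosen variation thresholds.
    Returns True when the variation exceeds a threshold in either direction,
    i.e., the subject is judged to be a live person.
    """
    def max_variation(series):
        return max(series) - min(series) if series else 0.0

    return (max_variation(gaze_x) > thresh_x) or (max_variation(gaze_y) > thresh_y)
```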
Meanwhile, the distance information sensed by the distance measuring sensor, which is connected to the control unit and measures the distance to the subject, is stored in the storage unit together with each frame. As described for the face region extraction step, the face region of each frame is extracted (ST-110), and the number of pixels (the area) forming the face region is computed and stored together with the distance information. The database stores information on the face area (pixel count) as a function of distance (distance-dependent face region area information); this is compared with the face region area at the distance derived from a frame of the captured identification image, and the difference can be displayed on the display unit. A threshold is specified for the absolute value of the difference between the face area of the identification image frame at a given distance and the stored distance-dependent face region area, and when the difference exceeds the threshold, warning means connected to the control unit (a bell, a warning light, etc.) can be operated through the control unit.
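A sketch of this consistency check follows; `expected_area_at` stands in for a lookup into the stored distance-to-area database and is an assumed helper, not something named in the specification:

```python
def area_consistent(face_pixels, distance, expected_area_at, threshold):
    """Check whether the measured face area is plausible for the sensed distance.

    face_pixels: pixel count of the extracted face region in the frame.
    distance: distance to the subject from the distance measuring sensor.
    expected_area_at: callable distance -> expected face pixel count,
                      built from the stored database (assumed helper).
    threshold: maximum tolerated absolute difference before a warning fires.
    """
    return abs(face_pixels - expected_area_at(distance)) <= threshold
```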
In the present invention as described above, whether the photographed subject is a person or a sculpture or printout is judged in advance, before the face comparison personal identification step. This shortens the computation time and blocks, at the source, passage through a door or the like by misappropriating information registered in the database.
In the personal identification step (ST-200), a 3D face database including a frontal face image of the individual photographed by the photographing unit (hereinafter the "identification face DB") is provided. Providing the 3D information and the frontal face image facilitates identification of the photographed individual.
The personal identification step consists of a face region derivation step (ST-210) in which the face region is derived from the image of the photographed individual (hereinafter the "identification image"), a facial feature point derivation step (ST-220) in which the feature points of the face are derived, an identification face rotation angle computation step (ST-230) in which the rotation angle of the derived face region about the first axis is computed, a DB face information comparison step (ST-230), a pixel color application step (ST-240), and a face DB derivation step (ST-240) in which the identification face DB, the 3D face database for the individual to be identified, is derived.
Deriving the face region from the image of the individual captured by the photographing unit is well known in the prior art and is not described here. For example, a region of pixels of a specific color whose size (pixel count) lies within a certain range can be derived from the captured image as the face region.
Facial feature points are derived from the derived face region (ST-220). The process of deriving feature points from the face region was illustrated above and is also described in the prior art, so a detailed description is omitted.
Once the feature points of the face region are derived, the rotation angle of the face about the first axis is computed from the feature points (ST-230; for the computation of the face rotation angle, see, e.g., Korean Patent Registration No. 10-1215751).
Once the face rotation angle of the face region of the identification image is computed, it is compared with the face image information of the 3D face databases of the plurality of individuals; specifically, with the face image of the DB face information whose angle is closest to the identification face rotation angle. For example, if the face rotation angle of the identification image is -28°, it is compared with the face image of the DB face information with a rotation angle of -30°. If the face rotation angle of the identification image is an intermediate value such as -25°, it can be compared with the face image of the DB face information with a rotation angle of -30° or -20°.
If the number of pixels forming the identification image differs from that of the face image of the DB face information, a further step is carried out to make the identification image and the DB face image the same size. For example, if the identification image is a 2×2 pixel image and the face image of the DB face information is a 3×3 pixel image, then among the pixels (x1, y1), (x1, y2), (x2, y1), (x2, y2) of the identification image:

a pixel having the average of the color information of the (x1, y1) and (x1, y2) pixels is generated between (x1, y1) and (x1, y2); the (x1, y2) pixel is re-indexed to (x1, y3), and the generated pixel becomes (x1, y2);

a pixel having the average of the color information of the (x2, y1) and (x2, y2) pixels is generated between (x2, y1) and (x2, y2); the (x2, y2) pixel is re-indexed to (x2, y3), and the generated pixel becomes (x2, y2);

a pixel having the average of the color information of the (x1, y1) and (x2, y1) pixels is generated between (x1, y1) and (x2, y1); the (x2, y1) pixel is re-indexed to (x3, y1), and the generated pixel becomes (x2, y1);

a pixel having the average of the color information of the (x1, y2) and (x2, y2) pixels is generated between (x1, y2) and (x2, y2); the (x2, y2) pixel is re-indexed to (x3, y2), and the generated pixel becomes (x2, y2);

and a pixel having the average of the color information of the (x1, y3) and (x2, y3) pixels is generated between (x1, y3) and (x2, y3); the (x2, y3) pixel is re-indexed to (x3, y3), the generated pixel becomes (x2, y3), and the result is stored in the storage unit.
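The renumbering above is intricate; as one consistent reading of it, the sketch below enlarges a 2×2 image to 3×3 by inserting averages of adjacent pixels, matching the example. The function name and the single-channel (grayscale) assumption are illustrative, not taken from the specification:

```python
import numpy as np

def upsample_2x2_to_3x3(img: np.ndarray) -> np.ndarray:
    """Enlarge a 2x2 image to 3x3 by inserting the averages of adjacent
    pixels between each pair of rows and columns."""
    assert img.shape == (2, 2)
    out = np.empty((3, 3), dtype=float)
    out[0, 0], out[0, 2] = img[0, 0], img[0, 1]   # original corner pixels
    out[2, 0], out[2, 2] = img[1, 0], img[1, 1]
    out[0, 1] = (img[0, 0] + img[0, 1]) / 2       # average of top row pair
    out[2, 1] = (img[1, 0] + img[1, 1]) / 2       # average of bottom row pair
    out[1, 0] = (img[0, 0] + img[1, 0]) / 2       # average of left column pair
    out[1, 2] = (img[0, 1] + img[1, 1]) / 2       # average of right column pair
    out[1, 1] = (out[1, 0] + out[1, 2]) / 2       # center from the new mid pixels
    return out
```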
After the rotation angle is computed, the feature points of the face region are derived again from the identification image, so that the same feature points as those of the DB face information are derived.
The least squares error value is then computed over the feature points of the identification image and the face image of each item of DB face information, and the DB face information is compared (ST-230). Among the face images of the DB face information, the face image with the smallest least squares error value becomes the image to which the pixel colors will be applied (the color information application image).
In the pixel color application step (ST-240), the colors of the pixels forming the identification image become the colors of the pixels forming the color information application image; the result is stored in the storage unit and displayed on the display unit.
In summary, among the face images of the plurality of items of DB face information having the same face rotation angle as the identification image, the least squares error value (Equation 1) between their feature points and the feature points of the identification image is computed, and the colors of the pixels forming the face image of the DB face information with the smallest least squares error value with respect to the identification image are changed to the colors of the pixels of the identification image and stored in the storage unit. Hereinafter, the DB face information image changed to the pixel colors of the identification image is called the "identification-derived face image."
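From the variable definitions given in the next paragraph and the subscript "rms", a plausible form of Equation 1, as a root-mean-square distance over the feature points, is:

```latex
\mathrm{dist}_{rms} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[(x_i - x_{il})^2 + (y_i - y_{il})^2\right]}
```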
In the equation above, n is the number of feature points; x_i and y_i are the x- and y-axis coordinates of the i-th feature point of the identification face image; x_il and y_il are the x- and y-axis coordinates of the i-th feature point of the DB face information image; and dist_rms is the least squares error value.
For example, if the face rotation angle is 30°, a 30°-rotated face image of the DB face information bearing the pixel colors of the identification image is obtained. By applying this 30°-rotated face image of the DB face information to the frontal image information (rotation angle 0°) and the pixel position information of that DB face information, a frontal (0°) image of the identification-derived face image bearing the pixel colors of the identification image is obtained from the 30°-rotated image.
Through the above process, the identification face DB, the 3D face database for the individual to be identified, is derived (ST-240).
Among the face images of the plurality of items of DB face information having the same rotation angle as the identification image, the face image of the DB face information with the smallest least squares error value with respect to the identification image is derived; the color information of the pixels forming the derived face image is changed to the color information of the pixels forming the identification image, yielding the identification-derived face image; and by applying the information of the obtained identification-derived face image to the DB face information of that face image, the 3D information and the frontal image of the identification-derived face image are obtained.
The personal identification step may consist of a face region extraction step in which the face region is extracted from each frame of the identification image of the individual captured and stored in the storage unit, a facial feature vector derivation step in which the feature vector of the extracted face region is derived, and a facial feature vector comparison step in which the Euclidean distance between the feature vector of the extracted face region and the facial feature vectors of the data registered in the database is computed.
In the facial feature vector derivation step, the face region feature vector is derived from the face region information extracted and frontalized in the face region extraction step; in the facial feature vector comparison step, the Euclidean distance between the facial feature vector of the captured identification image and the facial feature vectors of the data registered in the database is computed. The method of deriving the face region feature vector and the Euclidean distance computation are conventional techniques and are not described here.
The Euclidean distances between the facial feature vector of the captured identification image and those of the data for the plurality of individuals registered in the database are computed. The Euclidean distance serves as the face similarity. Together with the computed Euclidean distance, the identification image of the captured individual and the image of the individual registered in the database are displayed on a display unit connected to the control unit (e.g., an LCD panel; not shown). The similarity is divided into four grades based on the Euclidean distance, and the grade information is displayed as well.
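A sketch of this comparison and four-grade split follows; the experimental grade boundaries of Table 2 below are not reproduced in this text, so the boundary values here are illustrative placeholders:

```python
import numpy as np

def face_similarity(query_vec, db_vecs, grade_bounds=(0.4, 0.8, 1.2)):
    """Euclidean distances between an identification feature vector and the
    registered database vectors, plus a 1-4 similarity grade.

    grade_bounds: illustrative boundaries standing in for Table 2's values.
    """
    dists = [float(np.linalg.norm(np.asarray(query_vec) - np.asarray(v)))
             for v in db_vecs]
    best = int(np.argmin(dists))                   # closest registered individual
    d = dists[best]
    grade = 1 + sum(d > b for b in grade_bounds)   # grade 1 (best) .. 4 (worst)
    return best, d, grade
```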
The similarity grades, determined from experimental values, are as shown in Table 2 below.
Since the grades are provided as in the table above, when the system is installed and operated at a door, access can be managed so that grade 1 allows passage, grades 2 and 3 require confirmation, and grade 4 denies entry.
For the facial feature vector, the convolutional neural network module is executed, and personal identification weights learned in advance from the training images are applied to derive the facial feature vector of the identification image.
In addition to the face comparison personal identification step, the personal identification method through face comparison according to the present invention further includes a face information acquisition step. The face information acquisition step consists of a learning step and a face information computation step, and provides the race information and gender information of the identification image.
The learning step consists of: forming a convolutional layer from the training images for the gender information; forming a pooling layer from the maps of the convolutional layer; a hidden-layer computation step in which the pooling values become the input information and the hidden-layer node values are computed from the hidden-layer weights and the input information; an output-layer computation step in which the output layer is computed from the hidden-layer node values and the output-layer weights; and a weight derivation step in which an error is computed from the output-layer information and the training image information, updated values of the hidden-layer weights and the output-layer weights are computed according to the backpropagation algorithm, the hidden-layer and output-layer computation steps are carried out again, and the process of comparing the computed output-layer information with the training image information is repeated until the updated hidden-layer weights and output-layer weights are derived.
In the present invention, training images of 40,000 people (50 to 100 photographs per person) were prepared and used for learning. The race information and gender information for each of the 40,000 people are stored in the database together with the training images.
For each training image, a kernel is used to form the maps of the convolutional layer. The color information of each pixel of the training image becomes a component of the training image matrix. The training image may also be converted to a grayscale image before the maps are formed.
Kernels of various sizes, such as 2×2, 3×3, and 5×5 matrices, can be used; the inventors formed the maps with a 3×3 kernel to build the convolutional layers. Five hidden layers were formed; as for kernel counts, the first hidden layer used 16 kernels, the second 32, the third 64, the fourth 128, and the fifth 512. By increasing the kernel count toward the output layer when forming the hidden layers, accuracy is improved.
The component values of a kernel can be generated using a random function, with each value kept below 2. A plurality of kernels is prepared, and a map is produced for each kernel, forming the convolutional layer.
For example, if the kernel is a 2×2 matrix with components 1, 0, 0, 1, the kernel is applied to the first 2×2 pixels of the training image to compute one component value of the map; the kernel is then slid one pixel at a time along the pixels of the training image so that the map component values are computed in turn, producing the map as a matrix. As above, kernels with a plurality of different component values are prepared and a map is produced for each, forming the convolutional layer. If the kernel is a 2×2 matrix and the training image is 4×4 pixels, the map is formed as a 3×3 matrix.
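A minimal sketch of this sliding-kernel map computation (stride 1, no padding), using NumPy:

```python
import numpy as np

def conv_map(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid (no-padding) convolution map with stride 1:
    a 2x2 kernel over a 4x4 image yields a 3x3 map, as in the text."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# kernel with components 1, 0, 0, 1 over a 4x4 example image
kernel = np.array([[1, 0], [0, 1]])
image = np.arange(16, dtype=float).reshape(4, 4)
print(conv_map(image, kernel).shape)   # (3, 3)
```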
Once the convolutional layer is formed, each map of the convolution is passed through an activation function and output. In the present invention, the ReLU function is adopted as the activation function; a sigmoid function may also be used.
The pooling layer consists of a plurality of poolings, one formed over the output values of each map of the convolutional layer. Pooling may be by average value or by maximum value, and may be performed over 2×2 blocks: from the components of the map, the maximum or the average of the components of each 2×2 block becomes a pooling component value. Thus, when a 4×4 map is pooled over 2×2 blocks, it becomes a 2×2 matrix.
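A corresponding sketch of the 2×2 pooling, supporting both the maximum and the average variant described:

```python
import numpy as np

def pool2x2(feature_map: np.ndarray, mode: str = "max") -> np.ndarray:
    """2x2 pooling over non-overlapping blocks ('max' or 'mean');
    a 4x4 map becomes a 2x2 matrix, as in the text."""
    h, w = feature_map.shape
    blocks = feature_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

print(pool2x2(np.arange(16, dtype=float).reshape(4, 4)).shape)  # (2, 2)
```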
The components of each pooling become the input values and are combined with the hidden-layer weights to compute the node values of the hidden layer.
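From the variable definitions in the next paragraph, a plausible form of Equation 2 for the hidden-layer node computation is:

```latex
E_j = \varphi\!\left( w_{0j}\, p_0 + \sum_{i=1}^{m} w_{ij}\, p_i \right), \qquad p_0 = 1
```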
Equation 2 above is an example of the expression for computing each node of the hidden layer; in it, m is the number of pooling components, E_j is the hidden-layer node value, p_i is a pooling component, w_ij is a weight, and p_0 is the bias, equal to 1. φ is the activation function; a sigmoid function is used. The weights w_ij can be set to arbitrary values chosen between 0 and 2.
The hidden layer may be formed of one or more layers. When the hidden layer is formed of two layers, Equation 2 computes the node values of the first hidden layer, and the node values of the second hidden layer are computed, as in Equation 2, from the node values and weights of the first hidden layer. The weights used in computing the first node values and those used in computing the second node values may differ. When there is a plurality of hidden layers in this way, the computations are carried out in turn.
The output-layer node values are then computed from the node values and weights of the last hidden layer. Since gender is male or female, there are two output-layer nodes.
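From the definitions in the next paragraph, a plausible form of Equation 3 is given below; the symbol k, the number of nodes in the last hidden layer, is an assumed notation:

```latex
T_j = \varphi\!\left( v_{0j}\, E_0 + \sum_{i=1}^{k} v_{ij}\, E_i \right), \qquad E_0 = 1
```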
Equation 3 above is an example of the expression for computing the nodes of the output layer; in it, T_j is the output-layer node value, E_i is a node value of the last hidden layer, v_ij is a weight, and E_0 is the bias, equal to 1. φ is the activation function; ReLU is used. The weights v_ij can be set to arbitrary values chosen between 0 and 2.
For example, if a training image is of a man and the output-layer node value computed through the above process is greater than 0.5, and if a training image is of a woman and the computed output-layer node value is less than 0.5, the hidden-layer node computation weights and the output-layer computation weights are stored in the storage unit. The output node values are computed through the above steps for all prepared training images, and the accuracy of the weights is checked. In the classification, values greater than 0.5 can be classified as male and values less than 0.5 as female (or, of course, the reverse), and a value of exactly 0.5 can be treated as unclassified.
Meanwhile, if a training image is of a man and the output-layer node value computed through the above steps is 0.3, the minimum error is 0.7. Based on this error value, the weights of the output layer and the hidden layers are updated by the backpropagation algorithm (a conventional technique, so a detailed description is omitted), the output-layer node value is computed again with the updated weights, the computed output-layer node value is checked, and the weights are updated again if an error remains. Applying this process over the training images yields the updated weights.
The output-layer node value is computed for the identification image through the above process; for example, if the output-layer node value for the identification image is greater than 0.5 it is classified and stored as male, and if less than 0.5 as female.
Race information is computed through the same process, with the output layer set so that the number of its nodes equals the number of races. A sigmoid function is used as the activation function for the hidden-layer node computation, and a softmax function as the activation function for the output-layer node computation. When classifying into three races, the sum of the computed output-layer node values is 1; the race is classified according to the node with the maximum value among the three output-layer node values, and the larger that maximum value, the higher the accuracy.
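A small sketch of the softmax output computation for the race nodes; the input logits are invented example values:

```python
import numpy as np

def softmax(logits):
    """Softmax over the race output nodes: the values sum to 1 and the
    argmax gives the predicted race."""
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

probs = softmax([1.2, 0.3, -0.5])   # three-race example
print(probs.sum(), probs.argmax())  # ~1.0 and the index of the predicted race
```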
According to the personal identification method through face comparison of the present invention described above and the personal identification system in which the method is executed, a sculpture or printout is distinguished from a real face, so that when applied to an access or security device, unauthorized entry using a sculpture or printout made with malicious intent is blocked; the same person and a different person are identified by similarity grade; and race, gender, and age information is provided, improving operational efficiency.
Claims (9)
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR10-2017-0011653 | 2017-01-25 | ||
| KR1020170011653 | 2017-01-25 | ||
| KR10-2017-0061950 | 2017-05-19 | ||
| KR1020170061950A KR101781361B1 (en) | 2017-01-25 | 2017-05-19 | A Method Identifying A Personnel By Comparing Face Area |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2018139847A1 true WO2018139847A1 (en) | 2018-08-02 |
Family
ID=60036891
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2018/001055 Ceased WO2018139847A1 (en) | 2017-01-25 | 2018-01-24 | Personal identification method through facial comparison |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR101781361B1 (en) |
| WO (1) | WO2018139847A1 (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111291668A (en) * | 2020-01-22 | 2020-06-16 | 北京三快在线科技有限公司 | Living body detection method, living body detection device, electronic equipment and readable storage medium |
| CN111680595A (en) * | 2020-05-29 | 2020-09-18 | 新疆爱华盈通信息技术有限公司 | Face recognition method and device and electronic equipment |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108652851B (en) * | 2018-01-19 | 2023-06-30 | 西安电子科技大学 | Eye-controlled wheelchair control method based on visual positioning technology |
| KR102060694B1 (en) * | 2018-03-02 | 2019-12-30 | 제주한라대학교산학협력단 | Customer recognition system for providing personalized service |
| CN108898053A (en) * | 2018-05-24 | 2018-11-27 | 珠海市大悦科技有限公司 | A kind of face recognition method |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2007140695A (en) * | 2005-11-15 | 2007-06-07 | Nippon Telegr & Teleph Corp <Ntt> | Suspicious face detection system, suspicious face detection method, and suspicious face detection program |
| KR20100118363A (en) * | 2009-04-28 | 2010-11-05 | 삼성전기주식회사 | Face authentication system and the authentication method |
| KR20110105458A (en) * | 2010-03-19 | 2011-09-27 | 한국산업기술대학교산학협력단 | Apparatus and Method for Generating Learning Image of Face Recognition System |
| KR20160042646A (en) * | 2014-10-10 | 2016-04-20 | 인하대학교 산학협력단 | Method of Recognizing Faces |
| KR20170006355A (en) * | 2015-07-08 | 2017-01-18 | 주식회사 케이티 | Method of motion vector and feature vector based fake face detection and apparatus for the same |
2017
- 2017-05-19 KR KR1020170061950 patent/KR101781361B1/en active Active

2018
- 2018-01-24 WO PCT/KR2018/001055 patent/WO2018139847A1/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| KR101781361B1 (en) | 2017-09-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2018139847A1 (en) | Personal identification method through facial comparison | |
| CN114494427B (en) | Method, system and terminal for detecting illegal behaviors of person with suspension arm going off station | |
| US20130188827A1 (en) | Human tracking method and apparatus using color histogram | |
| WO2021177544A1 (en) | Facial recognition system and method capable of updating registered facial template | |
| KR100631235B1 (en) | How to chain edges in stereo images | |
| WO2014073841A1 (en) | Method for detecting image-based indoor position, and mobile terminal using same | |
| CN112926464B (en) | Face living body detection method and device | |
| CN113312965A (en) | Method and system for detecting unknown face spoofing attack living body | |
| CN112434545A (en) | Intelligent place management method and system | |
| JP6773825B2 (en) | Learning device, learning method, learning program, and object recognition device | |
| WO2019088333A1 (en) | Method for recognizing human body activity on basis of depth map information and apparatus therefor | |
| WO2013151205A1 (en) | Method and apparatus for acquiring image of face for facial recognition | |
| CN111444837B (en) | Temperature measurement method and temperature measurement system for improving face detection usability in extreme environment | |
| WO2024101466A1 (en) | Attribute-based missing person tracking apparatus and method | |
| WO2020242089A2 (en) | Artificial intelligence-based curating method and device for performing same method | |
| WO2016104842A1 (en) | Object recognition system and method of taking account of camera distortion | |
| KR20180087812A (en) | A Method Identifying A Personnel By Comparing Face Area | |
| WO2020045903A1 (en) | Method and device for detecting object size-independently by using cnn | |
| WO2021182670A1 (en) | Heterogeneous face recognition device and method based on extracting relationships between elements | |
| WO2025135266A1 (en) | Method and system for measuring fine dust on basis of image | |
| Yamanaka et al. | Tactile Tile Detection Integrated with Ground Detection using an RGB-Depth Sensor. | |
| WO2023075185A1 (en) | Method for testing suitability of image for training or recognizing nose print of companion animal | |
| CN114359840A (en) | Community corridor anti-theft method and device | |
| CN114373205A (en) | Face detection and recognition method based on convolution width network | |
| WO2022114406A1 (en) | System for safety management through key skeleton detection based on image |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18744775; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18744775; Country of ref document: EP; Kind code of ref document: A1 |