
WO1998003966A2 - System for object verification and identification - Google Patents


Info

Publication number
WO1998003966A2
WO1998003966A2 (PCT/US1997/012716)
Authority
WO
WIPO (PCT)
Prior art keywords
image
person
data
verification
data sets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US1997/012716
Other languages
English (en)
Other versions
WO1998003966A3 (fr)
Inventor
Kedu Han
David B. Hertz
Lex Van Gelder
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IDENTIFICATION TECHNOLOGIES INTERNATIONAL Inc
Original Assignee
IDENTIFICATION TECHNOLOGIES INTERNATIONAL Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by IDENTIFICATION TECHNOLOGIES INTERNATIONAL Inc filed Critical IDENTIFICATION TECHNOLOGIES INTERNATIONAL Inc
Priority to AU38064/97A priority Critical patent/AU3806497A/en
Publication of WO1998003966A2 publication Critical patent/WO1998003966A2/fr
Publication of WO1998003966A3 publication Critical patent/WO1998003966A3/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual registration on entry or exit
    • G07C9/20Individual registration on entry or exit involving the use of a pass
    • G07C9/22Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
    • G07C9/25Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G07C9/253Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition visually
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris

Definitions

  • This invention relates generally to information storage and retrieval computer and camera systems for obtaining and storing certain data relating to specific images and providing rapid access to that data for purposes of object targeting, enrollment, identification, verification, and classification.
  • image recognition provides a system that captures data relating to an image area of an object or person and then compares that captured image data to information stored in a computer memory.
  • An example of a system exhibiting these drawbacks is Kodak's card-based facial security system.
  • the Kodak system classifies fifty areas of the card holder's face, identifying each area with a 2-byte code. That data is then stored on the stripe of a magnetic card. The card user must then have his/her face compared with the stored facial code in order for a match to be made.
  • a drawback to this technique is that it requires the local computer to recognize multiple areas of the face, then classify each of these areas, and then compare the instant classification to the code stored on the card. Hence, a significant processing burden is required at the recognition station.
  • the Kodak system is relatively inflexible in the sense that it is limited only to those things that have been classified. Thus, for the Kodak system to operate on other objects, a whole new classification scheme needs to be developed. The classification effort is a significant, labor-intensive process.
  • portable media such as magnetic stripe cards, magnetic discs, printed bar codes, semi-conductor devices such as smart cards, or in data bases.
  • various independent attributes of an object such as, for example, a nose on a human face
  • the data sets are generated during registration or enrollment. These data sets are recorded on a card capable of carrying data, such as a magnetic strip or a 2d-barcode, that is issued to a holder.
  • the advantage of the system is that instead of maintaining a central database, the identification data is now decentralized and held on the small cards.
  • a dramatic reduction in the data that is required to be identified is achieved.
  • the reduction of the data required to uniquely identify complex objects such as human faces also achieves a faster response in the identification process.
  • the comparison process involves the verification of data present on the ID card, with the data sets generated from the video or other image of the object that has been registered through an input device such as an electronic camera.
  • the comparison process utilizes a neural network that has been trained so as to recognize or identify a particular data set (such as human facial image component attributes).
  • the training of the neural network is based on a process of polling the various attributes that are obtained at the identification station by the computer, against the component attribute data sets that are present on the ID card. The polling assumes that certain distinctive features, if in agreement with the data sets on the ID card, can override other less distinctive attributes. However, as the security needs of the application increase, the polling process increases the required precision of the comparisons.
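  • To make the polling concrete, the following Python sketch shows one way such a weighted poll could work. It is illustrative only: the patent does not give the polling arithmetic, so the attribute names, weights, and threshold below are hypothetical.

        from typing import Dict

        def poll_attributes(matches: Dict[str, bool],
                            weights: Dict[str, float],
                            required_score: float) -> bool:
            """Weighted poll of attribute comparisons (hypothetical sketch).

            matches        -- per-attribute agreement with the ID-card data sets
            weights        -- distinctive features get larger weights, letting
                              them override less distinctive attributes
            required_score -- raised as the security needs of the application
                              increase
            """
            total = sum(weights.values())
            score = sum(w for name, w in weights.items() if matches.get(name))
            return score / total >= required_score

        # Example: a strong iris match outvotes a weak cheek mismatch.
        matches = {"iris": True, "eyebrow": True, "cheek": False}
        weights = {"iris": 0.60, "eyebrow": 0.25, "cheek": 0.15}
        print(poll_attributes(matches, weights, required_score=0.7))  # True
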
  • a program for reducing the characteristics of an object image, for example, a human face, to a set of characteristic numbers; later recalling that set of characteristics for comparison with an input from an external source.
  • Keys to this set of numbers are encoded as indices and are stored on local source objects, such as 3-3/8" x 2" computer input cards.
  • the indices having been electronically posted to a central computer program, point to a second set of data retrieved from the computer program. A comparison of that set then occurs with a second set of similar stored data retrieved from a local source, such as the card.
  • FIG. 1 is a block diagram of the enrollment station forming the present invention
  • FIG. 2 is a block diagram of the verification station forming the present invention
  • FIGS. 3A and 3B are flow-chart diagrams showing the functions of the enrollment process
  • FIG. 4 is a flow chart diagram illustrating the one-on-one preprocessing steps of the enrollment process shown in FIGS. 3A-3B;
  • FIG. 5 is a flow chart diagram of the binarization routine of the enrollment process shown in Figs. 3A-3B;
  • FIG. 6 is a flow chart diagram of a first embodiment of the targeting process for the enrollment process shown in Figs. 3A-3B;
  • FIG. 7 is a flow chart diagram of the UD and CP coordinate estimation functions of the enrollment process shown in FIGS. 3A-3B;
  • FIG. 8 is a flow chart diagram illustrating the area of interest defining function of the enrollment process shown in FIGS. 3A-3B;
  • FIG. 9 is a flow chart diagram illustrating the normalization procedure of the enrollment process shown in Figs. 3A-3B;
  • FIG. 10 is a flow chart diagram showing the transform step of the enrollment process shown in Figs. 3A-3B;
  • FIG. 11 is a flow chart of the second transformation process for the enrollment process of Figs. 3A-3B;
  • FIG. 12 is a flow chart of the output coding function for the enrollment process of Figs. 3A-3B;
  • FIG. 13 is a flow chart illustrating the encrypt function for determining useful parameter vectors for the enrollment process of Figs. 3A-3B;
  • FIGS. 14A-14B are flow charts of the process for image verification of the present invention;
  • FIG. 15 is a flow chart of the image verification pre-processing function
  • FIG. 16 is a flow chart of the image verification setup control function
  • FIG. 17 is a flow chart of the image verification data decryption function
  • FIG. 18 is a flow chart of the image verification parameter value comparison function.
  • FIG. 19 is a flow chart of the image verification identity decision function
  • FIG. 20 is a diagram showing the dimensional breakdown of the face
  • FIG. 21 is a top view of the array of infra-red light emitting diodes used to light a mini-camera apparatus;
  • FIG. 22 is a perspective transparent diagram of the mini-camera and infra-red lighting device and components thereof of the invention.
  • FIG. 23 is a circuit schematic diagram for the light array of FIG. 21;
  • FIG. 24 is a circuit schematic diagram for the mini-camera of FIG. 21;
  • FIGS. 25 (a) -(c) are respectively front, side and perspective views of a first embodiment of a housing for the mini-camera arrangement shown in FIG. 22;
  • FIGS. 26 (a) -(c) are respectively front, side and perspective views of a second embodiment of a housing for the mini-camera arrangement shown in FIG. 22;
  • FIGS. 27 (a)-(c) are respectively front, side and perspective views of a third embodiment of the housing for the mini-camera arrangement of FIG. 22;
  • FIG. 28 is a flow chart illustrating a second embodiment of the targeting process for the present invention.
  • the first embodiment of the present system is composed of two processes and two hardware systems.
  • the first process of this first embodiment is the enrollment process and is shown in Figs. 3A-13.
  • the purpose of the enrollment process is to code the image of the person or object and to reduce that image to a portable form and format, such as a card with a magnetic strip, or to a database.
  • the second process of the first embodiment is shown in Figs. 14A-15.
  • This process is the verification process.
  • the verification process performs the tasks of taking a picture or image of the person or object, and comparing the captured image to the coded image obtained from the enrollment process. If the image of the person or object obtained during verification and the coded information obtained from the enrollment process match, identity is verified by the verification process.
  • the enrollment process and the verification process have elements in common, which will be described further below.
  • the two hardware systems used in the present invention are the enrollment station, shown in Fig. 1, and the verification station, shown in Fig. 2.
  • Fig. 1 is a block diagram of the enrollment station 100.
  • the object 10 represents the object that will be coded in the enrollment process for later verification during the verification process.
  • the object 10 is a face of a person.
  • the object under consideration may constitute a machine part under inspection, or a house, or a car key, or an automobile, or a hand.
  • the label on the object or its container carries data related to features of the object. These data permit an automatic verification that the object is correctly labeled, and therefore will be properly stored, packaged, and shipped.
  • Object types can vary, as long as an identifiable characteristic can be extracted, and is stored in the enrollment process.
  • one or more video cameras 20, or other cameras as set forth in this application, equipped with lighting devices 30, are used to record an image of the object 10.
  • the video camera or cameras 20 and lighting device 30 are ordinary devices that are readily available.
  • a Panasonic camera, CCD Model No. GP-F602, or similar devices are used with either flash or continuous light sources.
  • the lighting devices can in this first embodiment comprise a ring lamp such as the MICROFLASH 5000, manufactured by VIVITAR Corp., located at Chatsworth, California, or a standard photography lighting fixture. Other camera devices and lighting devices, however, can be substituted.
  • a flash device can be employed with a Panasonic camera.
  • An example of a flash unit is the SUNPAK Softlite 1400M, manufactured by TOCAD Company, Tokyo, Japan.
  • a continuous incandescent light source can be employed. This type of lighting device is particularly useful in conjunction with object identification/verification for quality control applications.
  • an LED lighting device can be employed in conjunction with a mini infra-red camera.
  • the output of the video camera 20 is connected via a port to computer 40.
  • the computer 40 can be a personal computer that includes a digitizer card 42 in an expansion port.
  • the computer also includes other standard components such as a CPU 44, a random access memory 46, permanent storage in the form of a magnetic disk storage 48, and a monitor 50.
  • a personal computer having an Intel ® Pentium ® microprocessor designed to have a minimum processor clock speed of 90 MHz.
  • the computer has in this example 32 MBytes of Random Access memory and at least 1 Gigabyte of static storage.
  • any combination of clock speed and memory size can be used in this system.
  • a conventional hard-drive can be used, although other static storage units (e.g., writable optical drives, EPROM) are also acceptable.
  • a Microsoft Windows® operating system is used. It should be noted that the present invention is designed so that any computer having an adequate processor clock speed (preferably at least 60 MHz) and sufficient RAM (i.e., 16 megabytes) can be employed.
  • the digitizer 42 used is a frame grabber card for transforming camera signals from analog to digital form. This card is inserted in the computer 40.
  • the card used is a CX100 imaging board, manufactured by ImageNation Corporation, located in Beaverton, Oregon.
  • any digitizer device for video input including direct input from digital video cameras can be used with the invention.
  • the output device 60 receives the data from the computer 40, which is a coded representation of the object 10.
  • Device 60 transforms the coded data into an appropriate signal and code for placement on a central storage 72 or portable memory device 70.
  • the output device 60 in the preferred embodiment is a card reader/writer, or a printer. Examples of output devices include magnetic or optical reader/writers or smart card reader/writers or barcode printers, etc.
  • the memory device 70 is a magnetic stripe card.
  • the output device 60 and card 70 are well known in the art. However, other portable memory devices such as optical cards, S-RAM storage devices, or carriers containing EPROM or UVPROM memories, can also be used. In addition, known barcoding schemes can be employed to bar-code the information, where appropriate.
  • Fig. 2 is a block diagram of the verification station hardware 200.
  • the data of an enrolled person or object 210 appropriately lighted by lighting 30 will be compared to the coded representation of the object on the portable memory 70.
  • the object 210 represents the same object as the enrolled object 10 in Fig. 1.
  • the purpose of the verification station 200 is to output a signal to an appropriate security system via external control 230.
  • the external control 230 can be an electronic access device, such as an electronic door lock, or a motor-driven gate, or any other mechanism.
  • the components in the verification station 200 are the same as the components of the enrollment station 100, with the exception of the input device 220 and the external control unit 230.
  • the card 70 is inserted in the input device 220, such as a magnetic or optical card or barcode reader, while the object 210 has its image recorded and processed by the computer 40. Image recordation and processing are done in an identical way as in the enrollment station discussed above. A program in the computer 40 for the verification station compares the image data of the object 210 with the coded image data on the card 70. A signal is then outputted via external control 230, indicating a match or a failure to match.
  • Figs. 3A and 3B are overview flow chart diagrams showing the process used by the enrollment station 100 in order to encode enrollment data onto the portable storage medium 70 or central database 72.
  • Each of the steps of Figs. 3A-3B is detailed in the flow charts of Figs. 4-13. Each overview step will be elaborated on below by reference to the later figures.
  • Cameras 20 provide input for the computer 40 which executes the preprocessing function 302.
  • This preprocessing function 302 is described in detail in Fig. 4.
  • the preprocessing function 302 is performed by the computer 40 in combination with a frame grabber digitizer 42.
  • the frame grabber digitizer 42 first transforms the analog signal at step 3002, then filters the digitized data at step 3004, and enhances the image at step 3006.
  • the filtering step 3004 filters out image noise using conventionally known techniques.
  • the enhancement step 3006 enhances image contrast according to the lighting conditions, using known techniques.
  • the output of these functions is the complete image 305 which is composed of a standardized noise-free, digital image matrix of 512 x 480 pixels.
  • the complete image 305 is used as an input to various other subroutines in the enrollment process, which will be described below.
  • step 310 receives as the input matrix the complete image 305.
  • at step 3102, a center image is taken from the complete image.
  • the coordinates of the upper left-hand corner of the center matrix are defined as 128 x 120 and the coordinates of the bottom right-hand corner are defined as 384 x 360.
  • This central image is then binarized. This process results in an image where each pixel is either black or white, depending on whether the pixel exceeds or falls below a preset threshold.
  • the coordinates are chosen for the preferred embodiment to focus on the center of the image. However, if the object does not have distinguishing features in its central area, different coordinates can be used.
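  • A minimal Python sketch of this center-crop and binarization follows; the concrete threshold value is an assumption, since the text only specifies "a preset threshold".

        import numpy as np

        def binarize_center(complete_image: np.ndarray,
                            threshold: int = 128) -> np.ndarray:
            """Crop the 512 x 480 complete image to the center region bounded
            by (128, 120) and (384, 360), then binarize each pixel."""
            center = complete_image[120:360, 128:384]  # rows = Y, columns = X
            return (center > threshold).astype(np.uint8)  # 1 = white, 0 = black
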
  • This output image is then made available to the targeting procedure 320 shown in Fig. 3A of the enrollment process.
  • the targeting procedure 320 is shown in more detail in Fig. 6.
  • the purpose of the targeting procedure is to find a distinguishing feature in the object in order to detect the presence of the object and determine the location of the object in the image matrix.
  • the distinguishing features looked at are the two irises of the eyes.
  • the input to the targeting function 320 is the binarized image 3104, which is then processed by the labeling function 3202.
  • the labeling function 3202 locates and labels all areas that exhibit characteristics similar to irises.
  • the threshold set for the binarization process 3102 is set to filter out gray scale levels that are not relevant.
  • the gray scale color typically associated with irises can be used as the indicator for the threshold.
  • the output of the labeling process 3202 comprises the object labels 3204.
  • each labeled object produced at step 3204 has the XY coordinates calculated for placement in the complete image matrix 305. This provides a geometric center of each object that was labeled in the previous step.
  • step 3206 the irises are located and are distinguished by their contrast with the surrounding area.
  • other contrasting areas may also be labeled. In this exemplified application, these contrasting areas are, for example, nostrils or lips.
  • Step 3208 involves looking at the XY coordinates of a pair of objects and then determining whether their absolute and relative locations are valid.
  • the validation step 3208 assumes that labeled objects, such as the irises, are appropriately positioned on a face. For example, the eyes cannot be on top of each other and must fall within acceptable distances from each other. Therefore, the validate coordinate step 3208 function determines those pairs of labeled objects that can possibly be irises.
  • the calculations for iris targeting consist of comparing the XY coordinates of each iris to determine if they are within a preset distance apart and on approximately the same horizontal axis.
  • the difference in the X coordinates is measured and compared to a prestored value to make sure that the irises are located at certain specific locations.
  • the coordinates Y1 and Y2 represent the horizontal coordinates
  • X1 and X2 represent the vertical coordinates.
  • Y2 and X2 in the preferred embodiment represent the left iris coordinates
  • Y1 and X1 the right iris.
  • a first calculation determines if Y2 is greater than Y1. In the second calculation, the result of Y2 minus Y1 should be greater than a value of 40 pixels. The third calculation determines the absolute value of X1 minus X2. In the preferred embodiment, that value should be less than 16 pixels. If all three conditions are met, then at step 3208 the object's pair of irises is confirmed, and processing passes to step 3216.
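  • These three calculations translate directly into code. The sketch below assumes the coordinate convention stated above (Y horizontal, X vertical; pair 2 is the left iris, pair 1 the right).

        def valid_iris_pair(x1: int, y1: int, x2: int, y2: int) -> bool:
            """Validate a candidate iris pair using the three calculations."""
            return (y2 > y1                  # left iris lies beyond the right
                    and (y2 - y1) > 40       # more than 40 pixels apart
                    and abs(x1 - x2) < 16)   # roughly the same horizontal axis
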
  • an output message is sent at step 3212 to monitor 50 (Fig. 1) stating that the process has been unable to target the eyes.
  • a new image is acquired again and reprocessed, beginning at step 302.
  • the next step 3216 is to validate the object. This step compares the spots with the eye template to determine whether the cross-correlation coefficient is sufficiently high. If so, it confirms that the system successfully targeted the eyes.
  • one input to the validate object step 3216 is determined at step 315.
  • This input is an average eye template value, which is an average of the iris position on the face across a wide population.
  • the other input, determined at step 305, which was discussed previously, is the complete image.
  • the complete image is a reference standardized noise-free image matrix of 512 x 480 pixels, 8-bit gray scale.
  • the validate object step 3216 performs a gray scale correlation using the complete image 305 and the average eye template 315 and the valid object XY coordinates. This complete image is compared to the average eye template at the valid XY coordinates. If the maximum correlation is above a preset threshold, the object is identified as an eye.
  • the correlation coefficient of two areas {a_ij} and {b_ij} is calculated as:

        r = Σ_ij (a_ij − ā)(b_ij − b̄) / √( Σ_ij (a_ij − ā)² · Σ_ij (b_ij − b̄)² )

  • where a_ij and b_ij are the pixels of the two areas and ā, b̄ are their mean values.
  • the threshold of correlation is 0.9.
  • the outputs of this comparison are two "valid" iris values with the associated XY coordinates in the complete image matrix 305.
  • the outputted values are provided at 3218.
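  • A sketch of this template-correlation check in Python is given below; the exact placement of the template window at the candidate coordinates is an assumption.

        import numpy as np

        def correlation(a: np.ndarray, b: np.ndarray) -> float:
            """Normalized correlation coefficient of two equally sized areas."""
            a = a.astype(float) - a.mean()
            b = b.astype(float) - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom else 0.0

        def validate_object(image: np.ndarray, template: np.ndarray,
                            x: int, y: int, threshold: float = 0.9) -> bool:
            """Correlate the average eye template against the complete image
            at the valid object XY coordinates; above threshold -> an eye."""
            h, w = template.shape
            patch = image[x:x + h, y:y + w]
            return patch.shape == template.shape and \
                correlation(patch, template) >= threshold
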
  • the system retrieves the calculated unit distance/center point by initiating the process set forth at step 325.
  • a detailed flow chart of this process is shown in Fig. 7.
  • the calculate unit distance and center point routine 325 establishes a unit distance for the center point in the image based on the coordinates of the iris provided from the targeting step 320.
  • the unit distance (UD) equals ((X1 − X2)² + (Y1 − Y2)²)^(1/2), i.e., the Euclidean distance between the two iris coordinates.
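  • In code, and under the assumption (drawn from context) that the center point is the midpoint between the two irises:

        import math

        def unit_distance_and_center(x1, y1, x2, y2):
            """UD is the Euclidean distance between the iris coordinates;
            taking the center point (CX, CY) as their midpoint is an
            assumption, not spelled out in the text."""
            ud = math.hypot(x1 - x2, y1 - y2)
            return ud, ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
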
  • the next step of the enrollment process shown in Fig. 3A is to define an area of interest at step 330.
  • the area of interest procedure 330 is shown in detail in the flow chart diagram of Figure 8.
  • the function of step 3301 is to define the areas of interest on the object in relation to the unit distance (UD) and center point values (CX and CY).
  • the areas of interest are predetermined depending on the object to be identified. In the preferred embodiment, eight areas of interest have been selected. These areas of interest are a one-dimensional horizontal bar on the forehead, a one-dimensional vertical bar over the center of the face, a two-dimensional right and left eye section, a two-dimensional right and left eyebrow section, and a two-dimensional right and left cheek section.
  • the areas of interest for a face in the preferred embodiment are dissected into two one-dimensional areas and six two-dimensional areas of interest (see Fig. 20).
  • Step 335 resizes the area of interest to a standard pixel size.
  • the standard pixel size for the one-dimensional pixel area of interest is 8 x 64 pixels.
  • for the two-dimensional areas of interest, the standard pixel size is 64 x 64 pixels.
  • the purpose of this normalization procedure step is to standardize the input to the transform procedures.
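  • A sketch of this normalization step using the pixel sizes given above; bilinear rescaling is an assumption, since the text does not name an interpolation method.

        import numpy as np
        from scipy.ndimage import zoom

        def normalize_area(area: np.ndarray, one_dimensional: bool) -> np.ndarray:
            """Rescale an area of interest to its standard pixel size."""
            th, tw = (8, 64) if one_dimensional else (64, 64)
            h, w = area.shape
            return zoom(area.astype(float), (th / h, tw / w), order=1)
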
  • Step 340 shown in Fig. 3A then performs several transforms, each of which is applicable to particular areas of interest.
  • One of these transform processes is step 342.
  • step 342 applies transforms to the one-dimensional pixel arrays (representing the eight areas of interest) outputted from the resized areas of interest step.
  • in the preferred embodiment, fast Fourier transforms (FFTs) are applied to the one-dimensional arrays.
  • a discrete cosine transform (DCT) is used to compress the two-dimensional areas of interest.
  • each 64 x 64 pixel array is divided into 64 separate pixel arrays of 8 x 8 pixels at step 3440. Then each 8 x 8 pixel array is compressed using the DCT at step 3442. The output of the DCT for each 8 x 8 pixel array is a transformed array with the most significant cell in the upper left-hand corner. Using all the 8 x 8 transformed arrays, ten 1 x 64 vector arrays of the most significant cells are then created at step 3444. Other techniques can be employed, such as edge detection, Kohonen's, and/or geometrical analysis. Step 346 in Fig. 3A depicts these other alternative transforms which can be used to compress and analyze identified areas of interest.
  • the most significant cells of each of the 64 transformed arrays comprise the first 1 x 64 vector array
  • the second most significant cells comprise the second 1 x 64 array
  • and so on. The result is that each 64 x 64 pixel area of interest is transformed into ten 1 x 64 vector arrays of the most significant transformed cells. These arrays are then sent to the coding routine 350.
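  • The layered DCT output described above can be sketched in Python as follows. The text fixes the 8 x 8 DCT blocks and places the most significant cell in the upper left-hand corner; ranking the remaining cells in zig-zag order is an assumption.

        import numpy as np
        from scipy.fftpack import dct

        def dct_layers(area: np.ndarray, n_layers: int = 10) -> np.ndarray:
            """Transform a 64 x 64 area into ten 1 x 64 vector arrays."""
            blocks = [dct(dct(area[r:r + 8, c:c + 8].astype(float).T,
                              norm='ortho').T, norm='ortho')
                      for r in range(0, 64, 8) for c in range(0, 64, 8)]
            # Rank the 64 cells of an 8 x 8 block by significance
            # (zig-zag from the upper left-hand corner -- assumed ordering).
            order = sorted(((r, c) for r in range(8) for c in range(8)),
                           key=lambda rc: (rc[0] + rc[1], rc[0]))
            layers = np.empty((n_layers, 64))
            for k in range(n_layers):
                r, c = order[k]
                layers[k] = [block[r, c] for block in blocks]
            return layers  # layer k holds the k-th most significant cells
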
  • each layer can be binarized, so that if a cell's coefficient is greater than zero, the value for that cell is set to one; otherwise, its binarized value is set to zero.
  • relatively few bytes for multiple layers are necessary. For example, if each layer is 8 x 16 bytes, then the binarization will create an 8 x 16 bit layer. For a 6-layer image, 96 8-bit bytes (6 x 16) will be created for the captured image.
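  • The byte arithmetic above can be checked with a short sketch:

        import numpy as np

        def pack_layers(layers: np.ndarray) -> bytes:
            """Binarize (coefficient > 0 -> 1, else 0) and pack into bytes."""
            bits = (layers > 0).astype(np.uint8)
            return np.packbits(bits.ravel()).tobytes()

        layers = np.random.randn(6, 8, 16)  # six 8 x 16 coefficient layers
        print(len(pack_layers(layers)))     # 96 bytes, matching the text
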
  • Fig. 12 sets forth routine 350 in more detail.
  • the inputs to the coding routine are the sixty-two 1 x 64 vector arrays produced by the transform routine at step 340 (Fig. 3A).
  • for each one-dimensional area of interest, one 1 x 64 vector array is inputted.
  • for each two-dimensional area of interest, ten 1 x 64 vector arrays are inputted. Therefore, in the preferred embodiment, 62 1 x 64 vector arrays are inputted to routine 350.
  • the other input 355 is the eigenspace.
  • the use of eigenspaces is well known in the art as a method for relating the characteristics of an individual observation to a sample of the general population.
  • the first coding step 3502 calculates residuals of the vectors.
  • the residuals are the differences between the sixty-two vectors and the mean vectors estimated for a general population. Next, these residuals are projected into their sixty-two separate eigenspaces, one per parameter. The result of this process provides the two most significant coordinates, per parameter, in their respective eigenspaces. In total, 124 coordinates are calculated.
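  • The residual/eigenspace projection can be sketched as below; the variable names and the convention that the eigenvectors are sorted by decreasing eigenvalue are assumptions.

        import numpy as np

        def eigen_coordinates(vec: np.ndarray, mean_vec: np.ndarray,
                              eigvecs: np.ndarray) -> np.ndarray:
            """Project one 1 x 64 parameter vector's residual into its
            eigenspace and keep the two most significant coordinates;
            applied to all 62 parameters this yields 124 coordinates."""
            residual = vec - mean_vec        # difference from the population mean
            coords = eigvecs.T @ residual    # coordinates in the eigenspace
            return coords[:2]                # the two most significant ones
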
  • Process step 3504, which calculates the mean and standard deviation of the 124 parameter coordinates generated at step 3502, is repeated several times to ensure a statistically appropriate sampling of the enrollment images.
  • Step 3508 evaluates the coordinates with the smallest standard deviation and the highest correlation coefficient with the population average. Based on those criteria, the coordinates and their respective weights are then passed to the encryption process 370.
  • the encryption routine 370 of Fig. 3B is shown in detail in the flow chart of Fig. 13.
  • Such a routine is well known in the art.
  • the encryption algorithm shown at step 3702 determines usable parameters according to encryption criteria 3704 which are related to the mean and the standard deviation of the parameter coordinates.
  • the result is the encryption key and verification data which are written at step 3706 onto the portable storage 70.
  • a code or any other technique well known in the art of recording information can be used. Therefore, the card 70 contains the coded information pertaining to the object that was enrolled. The enrollment process is now complete.
  • Fig. 14A & Fig. 14B show an overview of the verification process using the verification station hardware shown in Fig. 2. Most of the procedures in the verification process are similar to the procedures previously discussed regarding the enrollment process. Thus, a detailed explanation is reserved for those processes that differ. A detailed description of the verification steps is set forth in Figs. 15-19.
  • a prerequisite to the verification process 400 is for the enrollment process to be repeated, up to a certain point.
  • the person that needs to be verified would go through steps 302 (preprocessing) through 350 (coding).
  • in the verification process, the output of step 350 provides parameter values corresponding to the images of the person or object to be verified.
  • card 70, which contains the data from the enrollment process, is inserted into a reader device which then decrypts the magnetic data, yielding process control data (Fig. 16, step 410) and parameter values that correspond to each area of interest (Fig. 17, process 420).
  • the process control data instructs the machine on how to accomplish verification.
  • the parameter values determine what is to be verified.
  • the parameter values are compared to the output of the coding step 350 in the verification process (Fig. 18).
  • One verification methodology, for example, relies on the Hamming distance between the enrolled image and the image to be verified.
  • the image vector stored in the card, or other non-volatile storage media, is lined up bit by bit with the generated image vector.
  • the bits of the two vectors are compared. For different bits, a value of "1" is generated; for identical bits, a "0".
  • a Hamming distance is then generated, as follows:

        HD = (1/N) · Σ_{i=1..N} (a_i ⊕ b_i)

  • where N is the total length of the vector and a_i, b_i are the corresponding bits of the stored and generated vectors.
  • the HD value can be used as a threshold from which system sensitivity can be varied. If, for example, accept is set at .23 (HD), retest at .24 - .74, and reject at .75 or greater, then it is possible that over time the retests will migrate in either direction (i.e., accept or reject).
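  • A sketch of this decision rule, using the example thresholds above:

        import numpy as np

        def hamming_decision(stored: np.ndarray, generated: np.ndarray) -> str:
            """Compare two bit vectors and decide accept/retest/reject."""
            hd = np.count_nonzero(stored != generated) / stored.size
            if hd <= 0.23:
                return "accept"
            if hd < 0.75:
                return "retest"  # the .24 - .74 band from the example settings
            return "reject"
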
  • Figs. 21-27 illustrate a micro-camera assembly and LED lighting apparatus which provide numerous operational advantages both to the various embodiments of the invention and to other known image enrollment/recognition systems.
  • Fig. 21 illustrates a front view of an array of light emitting diodes ("LEDs") 2102 located along the same plane of a plate 2104.
  • the arrangement of the LEDs has a specific size and intensity to optimize the lighting of the target and the capture of the image.
  • a configuration is shown for maximizing iris capture.
  • the LED's are designed to light the iris at a lower visible infra-red spectrum, rather than at the "heat detecting" spectrum.
  • the spectrum tends to have a wavelength of approximately 880 nm, although other low visible spectra are considered applicable. Infra-red spectrum light has been found to be optimal, since flash lighting is distracting, if not painful.
  • infra-red at low levels, standardizes facial fill so that all features of the face are equally exposed.
  • a further advantage to low level infra-red is that, when combined with an appropriate filter, ambient light is cut out altogether. Consequently, image wash-out is avoided.
  • the higher spectrum heat-detecting level has been found to be less accurate in measuring biometric characteristics.
  • an array of nine LED's 2106 is arranged in a square that is angled at 45° relative to the horizontal axis 2108 of plate 2104.
  • Fig. 22 is a transparent perspective view of the microcamera device 2200 incorporating the aforedescribed LED array 2100.
  • the device includes four basic elements: a micro-camera lens 2202, a microcamera circuit board 2204, the aforedescribed IR LED array 2100 and the LED circuit board 2206. These four elements are contained in a housing 2208.
  • the housing 2208 is designed so that the lens and LED array are held in a potting material in order that the microcamera unit may be contained and sealed. As a consequence, the microcamera can be used in underwater applications without substantial water leakage to circuit boards 2204 and 2206.
  • the potting has a sufficient circumferential clearance around the lens element 2202 in order to allow the lens to freely rotate.
  • the top surface of the housing 2208 contains a recess 2210, the top surface of which is co-planar with the top surfaces of the lens 2202 and LEDs 2102. Further, a pair of flanges are arranged parallel to each other and to the longitudinal axis of housing 2208, so that a flat filter element (not shown) that is sized to fit between the flanges can slide across the top surface 2210 and be held in place by the flanges.
  • the filter comprises a sheet of mirrored glass or plastic that transmits near infra-red light. The filter is thus able to cut off the visible light spectrum.
  • the mini camera housing includes a communications port 2220 which provides the image output to an external frame grabber.
  • the port 2220 also connects to an external power supply for the mini-camera.
  • the port 2220 may use any suitable wiring configuration, which in this embodiment is a 9-pin DIN that can connect to the I/O port of a PC.
  • in this embodiment, the camera device 2200 has no potting.
  • a wall 2222 would be placed between the lens 2202 and LED's 2102 to avoid direct reflection on the filter by the LED's.
  • the camera lens 2202 and camera circuit 2204 are manufactured by Sony Corp. as a 1/3 CCD, 12 VDC Board Camera, Model No. UL-BC460/T8.
  • a schematic circuit board layout 2300 is shown for the LED array 2100 (elements D1-D8).
  • the lighting for diodes D1-D8 is continuous but has a quick "on-off" to cover a video view. This on-off cycle is approximately 1/30th of a second.
  • the flash component of the video view period is 1/7000th of a second. Since the period of lighting is so brief, the flash and the lighting exposure render sensitivity to movement of the subject practically irrelevant. Flash nonetheless is essential since, in security applications, movement of the subject occurs frequently. However, the flash can be changed to a continuous lighting mode, if desired.
  • Each of the IR LED's is a focused beam diode, which improves efficiency and also reduces power consumption.
  • pin connections 2301 are adapted to connect directly into a personal computer I/O port.
  • Fig. 24 is an illustration of the circuitry supporting the camera electronics 2400.
  • a constant power source of about 100 milliamps is provided.
  • a 12 volt power supply is used along with a 5V control power supply.
  • in FIGS. 21-24, a micro-camera arrangement for image capture is created whereby the lighting is located below the camera. The position of the lighting below the camera is critical, since a subject farther than 3 feet away from the lens will not be captured. Placement of the camera is also sensitive, since direct sunlight, incandescent, or halogen light will wash out features. Thus any direct light to the camera is problematic.
  • Figures 25a-c are different views (front, side, and perspective) of a housing designed to contain the camera-LED unit.
  • a recess 2502 is shown in the unit, through which the entire housing 2200 can be inserted.
  • the modular plug 2220 (Fig. 22) would also be connected through cable 2504 (Fig. 25c) to the PC I/O port (not shown).
  • the housing includes a stand 2506 which pivots about axle 2508 in the direction of arrow 2510.
  • the camera can be supported in a substantially upright position (Fig. 25(c)) when placed with the stand in an extended position on a horizontal surface.
  • Figs. 26a-c show a second embodiment of the mini-camera housing 2600.
  • the housing includes a stand 2602, which in a closed position (as shown in Figs. 26a, 26b) completely covers the camera lens and LED's. When fully opened, which is accomplished by rotating the stand 2602 about axis 2604, the camera and LED light unit are fully exposed, and the unit is also supported upright by stand 2602 (Fig. 26c).
  • Figs. 27a-c are views of a third embodiment of the mini-camera housing 2700.
  • a stand 2702 is partially cut away, to expose the camera lens only.
  • the LED array and the camera 2202 are both exposed for use.
  • Fig. 28 illustrates a second or alternative embodiment for the targeting process set forth in Fig. 6 of this invention.
  • the advantage of the alternative technique is that it allows targeting without reference to fixed areas, by dynamically finding the image centers.
  • the process 2800 begins at step 2802, where a desired area is isolated and captured by the camera. A histogram for this captured area is then computed by the computer 40. The computer then dynamically determines thresholds by calculating desired threshold barriers that are preprogrammed into the computer 40. For example, high and low rejects can be set above the lowest 5% and below the highest 5%, and high and low thresholds above the bottom 45% and below the top 45%. As a consequence, when the thresholds are compared to the histogram at step 2808, a 10% middle portion of the histogram can be defined, reflecting particular gray-scale characteristics.
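  • A sketch of this dynamic thresholding, using the example percentages above (5% rejects and 45% thresholds, leaving a 10% middle band):

        import numpy as np

        def dynamic_thresholds(area: np.ndarray):
            """Derive reject and threshold barriers from the area's histogram."""
            low_rej, low_thr, high_thr, high_rej = np.percentile(
                area, [5, 45, 55, 95])
            return low_rej, low_thr, high_thr, high_rej

        def middle_band(area: np.ndarray) -> np.ndarray:
            """Mark pixels falling in the 10% middle portion of the histogram."""
            _, low_thr, high_thr, _ = dynamic_thresholds(area)
            return ((area >= low_thr) & (area <= high_thr)).astype(np.uint8)
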
  • the below, between, and above threshold values are then binarized at binarization step 2810, as shown in Fig. 28(c).
  • the first binarization step is the threshold comparison itself, which sets values as follows:
  • Fig. 28(d) represents the true binarized area of the targeted object 2812.
  • the targeted area is then geometrically tested at step 2814 on two candidate points based on preset values which define appropriate quadrants.
  • the points (x1, y1) and (x2, y2) can be isolated based on preset template values, for example, eye templates set for iris targeting.
  • an iteration loop can take three (3) images, binarize those values, average the binarized values, and store the averaged value in the portable memory. As a result of this iteration process, shown at steps 2822 and 2824, a high percentage of accuracy is achieved dynamically.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Collating Specific Patterns (AREA)

Abstract

System and method for identifying or verifying the identity of objects such as faces. The system identifies an object, such as a face, from unique signals and values derived from attributes of the object's elements, such as the eyes, eyebrows, forehead border, mouth, cheeks, ears, and chin on a face. The image of the object is analyzed so that the various elements are determined. These elements in turn undergo a transformation chosen from among several transformations of the geometric, cosine, or Kohonen type to produce a unique description of each element. The data obtained from the object are then transferred either to a portable memory, such as a magnetic stripe card, a smart card, or a one- or two-dimensional barcode card, or to a database. A decision process is subsequently performed, consisting of comparing the two results: the one derived from the image of the object and the one derived from the data encoded on the identification card or in the database. Depending on the security requirements of the applications, this decision process can be adjusted as to the number of parameters to be used and the way in which their attributes are to be weighted in proportion to the reliability of their contribution. These attributes can be used in different combinations and with different weightings to produce identification algorithms whose sensitivity varies according to the specific security application. In one example, this technology is used to identify objects such as human faces in order to verify the authenticity of an identity card and to grant or deny authorization to perform certain actions.
PCT/US1997/012716 1996-07-19 1997-07-18 System for object verification and identification Ceased WO1998003966A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU38064/97A AU3806497A (en) 1996-07-19 1997-07-18 System for object verification and identification

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US68470796A 1996-07-19 1996-07-19
US08/684,707 1996-07-19

Publications (2)

Publication Number Publication Date
WO1998003966A2 true WO1998003966A2 (fr) 1998-01-29
WO1998003966A3 WO1998003966A3 (fr) 1998-04-30

Family

ID=24749226

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/012716 Ceased WO1998003966A2 (fr) System for object verification and identification

Country Status (2)

Country Link
AU (1) AU3806497A (fr)
WO (1) WO1998003966A2 (fr)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4712103A (en) * 1985-12-03 1987-12-08 Motohiro Gotanda Door lock control system
US4754487A (en) * 1986-05-27 1988-06-28 Image Recall Systems, Inc. Picture storage and retrieval system for various limited storage mediums
US4975969A (en) * 1987-10-22 1990-12-04 Peter Tal Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system utilizing the same
JPH0546743A (ja) * 1991-08-09 1993-02-26 Matsushita Electric Ind Co Ltd Personal identification device
US5432864A (en) * 1992-10-05 1995-07-11 Daozheng Lu Identification card verification system
US5466918A (en) * 1993-10-29 1995-11-14 Eastman Kodak Company Method and apparatus for image compression, storage, and retrieval on magnetic transaction cards

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6038333A (en) * 1998-03-16 2000-03-14 Hewlett-Packard Company Person identifier and management system
WO2001078021A3 (fr) * 2000-04-07 2002-02-28 Micro Dot Security Systems Inc Card, system and method for biometric authentication
WO2009035377A3 (fr) * 2007-09-13 2009-05-07 Inst Of Applied Physics Ras Method and system for identifying a person on the basis of a facial image
RU2382408C2 (ru) * 2007-09-13 2010-02-20 Institute of Applied Physics RAS Method and system for identifying a person from a facial image

Also Published As

Publication number Publication date
AU3806497A (en) 1998-02-10
WO1998003966A3 (fr) 1998-04-30

Similar Documents

Publication Publication Date Title
Hamouz et al. Feature-based affine-invariant localization of faces
Chan et al. Face liveness detection using a flash against 2D spoofing attack
Beymer Face recognition under varying pose
US7715596B2 (en) Method for controlling photographs of people
JP3975248B2 (ja) Biometric recognition using neural network classification
JP4543423B2 (ja) Method and apparatus for automatic object recognition and verification
JP5955133B2 (ja) Face image authentication device
Bagherian et al. Facial feature extraction for face recognition: a review
Akarun et al. 3D face recognition for biometric applications
Tsai et al. Face detection using eigenface and neural network
Aydın et al. Face recognition approach by using dlib and k-nn
Das et al. Face Recognition Using ESP32-Cam for Real-Time Tracking and Monitoring
WO2002009024A1 (fr) Identity systems
CN118230395B (zh) Face recognition method and device based on InsightFace and LIS file management
WO1998003966A2 (fr) System for object verification and identification
WO1997005566A1 (fr) System for object verification and identification
WO1997005566A9 (fr) System for object verification and identification
Popoola et al. Comparative analysis of selected facial recognition algorithms
Ibitayo et al. Development Of Iris Based Age And Gender Detection System
Mekami et al. Towards a new approach for real time face detection and normalization
EP1615160A2 (fr) Appareil et procédé d'extraction de caractéristiques pour la reconnaissance d'images
JP4606955B2 (ja) Video recognition system, video recognition method, video correction system, and video correction method
DRICI et al. Inmates Tracking Based on Face and Visual Markers
Tejasri et al. Development of Real Time Face Detection and Recognition System
Lee et al. Measurement of face recognizability for visual surveillance

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN ZW AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH KE LS MW SD SZ UG ZW AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT

121 Ep: the epo has been informed by wipo that ep was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 98507154

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA