WO2024195055A1 - Information processing device, information processing method, and recording medium - Google Patents
Information processing device, information processing method, and recording medium
- Publication number
- WO2024195055A1 (PCT/JP2023/011271)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- authentication
- information processing
- learning data
- subject
- eye image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/117—Identification of persons
- A61B5/1171—Identification of persons based on the shapes or appearances of their bodies or parts thereof
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Definitions
- This disclosure relates to the technical fields of information processing devices, information processing methods, and recording media.
- Patent Document 1 describes a technology in which a target iris image, which is the iris image to be processed, is acquired; one or more similar registered iris images that are similar to the target iris image are searched for from among the registered iris images; and, if the person corresponding to the target iris image and the person corresponding to a similar registered iris image are different people, the target iris image and the similar registered iris image are determined to be iris images of color contact lenses.
- Patent Document 2 describes a technology that acquires an image of a subject's eye, compares the eye image with a reference image, identifies the coloring pattern of the color contact lens worn by the subject, and identifies the subject using features of the iris region of the eye outside the colored region of the coloring pattern.
- Patent Document 3 describes a technology in which the edge of the iris is detected in an image, texture is obtained from the image, and the edge and texture are combined to generate the inner and outer boundaries of the iris; a method is provided for selecting between an ellipse model and a circle model to improve the accuracy of iris boundary detection; and a mask image is obtained using a dome model, occlusion by the eyelid in the unwrapped image is removed, and the iris is extracted from the image.
- Patent Document 4 describes a technique for inputting a captured image into a first learning model and determining whether iris information can be extracted based on the output from the first learning model.
- the first learning model is constructed by machine learning using training data that associates a plurality of eye images with labels indicating whether the iris can be extracted.
- Patent Document 5 describes a technology in which different iris images corresponding to the same person are extracted as cosmetic lens candidates, the iris features of each cosmetic lens candidate are compared with the iris features of the other cosmetic lens candidates, a reliability indicating the likelihood of the lens being a cosmetic lens is calculated based on the comparison results, and if the calculated reliability is equal to or greater than a predetermined threshold, the cosmetic lens candidate is determined to be a cosmetic lens.
- Non-Patent Document 1 describes a technique for dividing an image into patches, calculating weights based on the degree of occlusion of each patch, and applying weights to areas with less occlusion to estimate facial expressions.
- the objective of this disclosure is to provide an information processing device, an information processing method, and a recording medium that improve upon the technology described in the prior art documents.
- One aspect of the information processing device includes a learning data generation means for generating second learning data based on first learning data including an eye image showing the eyes of a subject and information about the subject's accessories shown in the eye image included in the first learning data, a model generation means for generating an authentication model using the second learning data, and an authentication means for authenticating the subject using the authentication model.
- One aspect of the information processing method is to generate second learning data based on first learning data including an eye image showing the subject's eyes and information about the subject's accessories that are shown in the eye image included in the first learning data, generate an authentication model using the second learning data, and authenticate the subject using the authentication model.
- a computer program is recorded to cause a computer to execute an information processing method that generates second learning data based on first learning data including an eye image showing the eyes of a subject and information about the subject's accessories that are shown in the eye image included in the first learning data, generates an authentication model using the second learning data, and authenticates the subject using the authentication model.
- FIG. 1 is a block diagram showing the configuration of an information processing apparatus according to the first embodiment.
- FIG. 2 is a block diagram showing the configuration of an information processing device according to the second embodiment.
- FIG. 3 is a conceptual diagram showing a flow of information processing operations of the information processing device in the second embodiment.
- FIG. 4 is a block diagram showing the configuration of an information processing apparatus according to the third embodiment.
- FIG. 5 is a conceptual diagram showing a flow of information processing operations of the information processing device in the third embodiment.
- FIG. 6 is a block diagram showing the configuration of an information processing device according to the fourth embodiment.
- FIG. 7 is a conceptual diagram showing a flow of information processing operations of the information processing device in the fourth embodiment.
- FIG. 8 is a block diagram showing the configuration of an information processing apparatus according to the fifth embodiment.
- FIG. 9 is a conceptual diagram showing the flow of information processing operations of the information processing device in the fifth embodiment.
- FIG. 10 is a block diagram showing the configuration of an information processing device according to the sixth embodiment.
- FIG. 11 is a conceptual diagram showing the flow of information processing operations of the information processing device in the sixth embodiment.
- FIG. 12 is a block diagram showing the configuration of an information processing device according to the seventh embodiment.
- FIG. 13 is a conceptual diagram showing the flow of information processing operations of the information processing device in the seventh embodiment.
- FIG. 14 is a block diagram showing the configuration of an information processing device according to the eighth embodiment.
- FIG. 15 is a block diagram showing the configuration of an information processing device according to the ninth embodiment.
- FIG. 16 is a conceptual diagram showing the flow of information processing operations of the information processing device in the ninth embodiment.
- a first embodiment of an information processing device, an information processing method, and a recording medium will be described below.
- the first embodiment of the information processing device, the information processing method, and the recording medium will be described using an information processing device 1 to which the first embodiment of the information processing device, the information processing method, and the recording medium is applied.
- FIG. 1 is a block diagram showing the configuration of an information processing device 1 in the first embodiment.
- the information processing device 1 includes a learning data generation unit 11, a model generation unit 12, and an authentication unit 13.
- the training data generating unit 11 generates second training data based on first training data including an eye image showing the eye of a subject and information on an accessory of the subject shown in the eye image included in the first training data.
- the model generating unit 12 generates an authentication model using the second training data.
- the authentication unit 13 authenticates the subject using the authentication model.
- the information processing device 1 in the first embodiment generates second learning data based on information about the subject's accessory shown in the eye image.
- the information processing device 1 authenticates the subject using an authentication model generated using this second learning data, and can therefore perform highly accurate authentication regardless of whether or not the subject is wearing an accessory.
- FIG. 2 is a block diagram showing the configuration of the information processing device 2 in the second embodiment.
- the information processing device 2 includes an arithmetic device 21 and a storage device 22.
- the information processing device 2 may include a communication device 23, an input device 24, and an output device 25.
- the information processing device 2 does not have to include at least one of the communication device 23, the input device 24, and the output device 25.
- the arithmetic device 21, the storage device 22, the communication device 23, the input device 24, and the output device 25 may be connected via a data bus 26.
- the arithmetic device 21 includes, for example, at least one of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and an FPGA (Field Programmable Gate Array).
- the arithmetic device 21 reads a computer program.
- the arithmetic device 21 may read a computer program stored in the storage device 22.
- the arithmetic device 21 may read a computer program stored in a computer-readable, non-transitory recording medium, using a recording medium reading device (e.g., the input device 24 described later) not shown in the figure that is provided in the information processing device 2.
- the arithmetic device 21 may acquire (i.e., download or read) a computer program from a device (not shown) located outside the information processing device 2 via the communication device 23 (or other communication device).
- the arithmetic device 21 executes the read computer program.
- a logical functional block for executing the operation to be performed by the information processing device 2 is realized within the arithmetic device 21.
- the arithmetic device 21 can function as a controller for realizing a logical functional block for executing the operation (in other words, processing) to be performed by the information processing device 2.
- the arithmetic device 21 realizes a learning data generating unit 211, which is a specific example of a "learning data generating means" described in the appendix described later, a model generating unit 212, which is a specific example of a "model generating means" described in the appendix described later, an authentication unit 213, which is a specific example of an "authentication means" described in the appendix described later, a learning data acquiring unit 214, which is a specific example of a "learning data acquiring means" described in the appendix described later, an accessory detecting unit 215, which is a specific example of an "accessory detecting means" described in the appendix described later, and an eye image acquiring unit 216.
- the learning data acquiring unit 214 does not have to be realized in the arithmetic device 21. Details of the operation of each of the learning data generation unit 211, the model generation unit 212, the authentication unit 213, the learning data acquisition unit 214, the accessory detection unit 215, and the eye image acquisition unit 216 will be described later with reference to FIG. 3.
- the storage device 22 can store desired data.
- the storage device 22 may temporarily store a computer program executed by the arithmetic device 21.
- the storage device 22 may temporarily store data that is temporarily used by the arithmetic device 21 when the arithmetic device 21 is executing a computer program.
- the storage device 22 may store data that the information processing device 2 stores for a long period of time.
- the storage device 22 may include at least one of a RAM (Random Access Memory), a ROM (Read Only Memory), a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), and a disk array device.
- the storage device 22 may include a non-transitory recording medium.
- the communication device 23 is capable of communicating with devices external to the information processing device 2 via a communication network (not shown).
- the communication device 23 may be a communication interface based on standards such as Ethernet (registered trademark), Wi-Fi (registered trademark), Bluetooth (registered trademark), and USB (Universal Serial Bus).
- the input device 24 is a device that accepts information input to the information processing device 2 from outside the information processing device 2.
- the input device 24 may include an operating device (e.g., at least one of a keyboard, a mouse, and a touch panel) that can be operated by an operator of the information processing device 2.
- the input device 24 may include a reading device that can read information recorded as data on a recording medium that can be attached externally to the information processing device 2.
- the output device 25 is a device that outputs information to the outside of the information processing device 2.
- the output device 25 may output information as an image. That is, the output device 25 may include a display device (a so-called display) capable of displaying an image showing the information to be output.
- the output device 25 may output information as sound. That is, the output device 25 may include an audio device (a so-called speaker) capable of outputting sound.
- the output device 25 may output information on paper. That is, the output device 25 may include a printing device (a so-called printer) capable of printing desired information on paper.
[2-2: Information Processing Operation Performed by Information Processing Device 2]
- Fig. 3 is a flowchart showing the flow of the information processing operation performed by the information processing device 2.
- Fig. 3(a) is a flowchart showing the flow of the learning operation performed by the information processing device 2
- Fig. 3(b) is a flowchart showing the flow of the authentication operation performed by the information processing device 2.
- the learning data acquisition unit 214 acquires first learning data including an eye image showing the eyes of a target (step S20).
- the target includes a person.
- the target includes animals such as dogs and snakes.
- the first learning data includes eye images of a person not wearing accessories, and eye images of a person wearing accessories.
- the eye images of a person wearing accessories include eye images of people wearing various types of accessories.
- accessories refer to objects worn on or around the eyes, such as glasses, contact lenses, and masks.
- Accessories are objects that affect authentication processing, matching processing, and the like that use the region of a person's eyes included in the eye image, particularly the region of the person's iris.
- the first learning data may be stored in the storage device 22, in which case the learning data acquisition unit 214 may acquire the first learning data from the storage device 22. Alternatively, the learning data acquisition unit 214 may acquire the first learning data from a device external to the information processing device 2 via the communication device 23.
- the accessory detection unit 215 detects the accessory of the person appearing in the eye image from the eye image included in the first learning data (step S21).
- the type of accessory may be determined in advance. That is, the accessory detection unit 215 may detect whether the person appearing in the eye image is wearing an accessory, and if the person appearing in the eye image is wearing an accessory, what type of accessory the person is wearing.
- the type of accessory may be classified according to the effect that the accessory has on the region of the person's eyes included in the eye image.
- the type of accessory may be classified according to the manner in which the region of the person's eyes included in the eye image is hidden by the accessory.
- the type of accessory may be classified according to the effect that this occlusion of the eye region by the accessory has on the authentication result.
- eyeglasses as an accessory may be classified into multiple types according to the type of frame, type of lens, etc.
- the accessory detection unit 215 may detect whether the person in the eye image is (0) not wearing any accessories, (1) wearing a first type of accessory, (2) wearing a second type of accessory, ..., or (N) wearing an Nth type of accessory.
- the accessory detection unit 215 may calculate the likelihood that the person is not wearing any accessories, and the likelihood that the person is wearing each type of accessory when wearing accessories.
- the accessory detection unit 215 may detect which of the above (0) to (N) cases is most likely.
- the person in the eye image may be wearing multiple accessories (for example, a first type of accessory and a second type of accessory). In this case, the accessory detection unit 215 may detect the above (1) and (2).
- this embodiment will be described using an example in which N types of accessories are defined.
- each of the above (0) to (N) may be referred to as an accessory class WC.
- An eye image in which a person is not wearing any accessory may be said to belong to the 0th accessory class WC0.
- An eye image in which a person is wearing the first type of accessory may be said to belong to the first accessory class WC1.
- An eye image in which a person is wearing the Nth type of accessory may be said to belong to the Nth accessory class WCN.
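The detection in step S21 — deciding which of the cases (0) to (N) is most likely, and allowing for multiple accessories worn at once — can be sketched as below. This is an illustrative assumption: the function name `detect_accessory_class`, the class labels, the likelihood values, and the multi-label threshold are hypothetical and do not appear in the disclosure, which leaves the detector's implementation open.

```python
# Hypothetical sketch of the accessory-class detection in step S21.
# The likelihoods would in practice come from a trained detector; here
# they are supplied directly for illustration.
ACCESSORY_CLASSES = ["WC0_none", "WC1_glasses", "WC2_contact_lens", "WC3_mask"]

def detect_accessory_class(likelihoods, multi_label_threshold=0.5):
    """Return the single most likely accessory class, plus every accessory
    class whose likelihood exceeds a threshold (a person may wear several
    accessories at once, e.g. glasses and a mask)."""
    if len(likelihoods) != len(ACCESSORY_CLASSES):
        raise ValueError("one likelihood per accessory class is required")
    best = max(range(len(likelihoods)), key=likelihoods.__getitem__)
    worn = [ACCESSORY_CLASSES[i] for i, p in enumerate(likelihoods)
            if i != 0 and p >= multi_label_threshold]
    return ACCESSORY_CLASSES[best], worn

# Example: glasses are most likely, and a mask is also above threshold.
cls, worn = detect_accessory_class([0.05, 0.60, 0.10, 0.55])
```

With these example likelihoods, `cls` is the glasses class and `worn` lists both glasses and mask, matching the case where detections (1) and (2) are reported together.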
- the learning data generating unit 211 generates second learning data from the first learning data (step S22).
- the second learning data may be learning data generated for learning iris characteristics.
- the second learning data may be learning data having desired characteristics related to the wearable item.
- the second learning data may be learning data having desired characteristics of the iris related to the wearable item.
- the second learning data may be learning data generated in particular for learning the effect on the iris when a person wears the wearable item.
- the learning data generation unit 211 generates second learning data based on the first learning data and information about the accessories of the person appearing in the eye image included in the first learning data.
- the learning data generation unit 211 generates second learning data from the first learning data according to the detection result by the accessory detection unit 215.
- the learning data generation unit 211 may classify the eye images included in the first learning data based on the detection results by the accessory detection unit 215 into one of the following: a 0th accessory class WC0 in which the person is not wearing any accessory, a 1st accessory class WC1 in which the person is wearing a 1st type of accessory, a 2nd accessory class WC2 in which the person is wearing a 2nd type of accessory, ..., an Nth accessory class WCN in which the person is wearing an Nth type of accessory.
- the learning data generating unit 211 may generate the second learning data so that the ratio of the number of eye images belonging to each accessory class WC is a desired ratio.
- the learning data generating unit 211 may generate the second learning data by making adjustments so that no accessory class WC has too few eye images and no accessory class WC has too many.
- the learning data generating unit 211 may perform undersampling to thin out over-represented accessory classes, and oversampling to pad out under-represented ones. Oversampling may be achieved by copying the relevant eye images.
- the learning data generating unit 211 may generate the second learning data so that the number of eye images belonging to each accessory class WC is the same. Alternatively, the learning data generating unit 211 may generate the second learning data based on the detection results of the accessory detection unit 215 so that the ratio of eye images belonging to each accessory class WC is a desired ratio; in other words, the number of eye images belonging to each accessory class WC does not have to be the same. Starting from a state in which the number of eye images is uniform across the accessory classes WC, the learning data generating unit 211 may then adjust the number of eye images belonging to each accessory class WC according to how many eye images of each accessory class WC are included in the first learning data.
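The balancing described above — undersampling over-represented accessory classes and oversampling under-represented ones by copying eye images — can be sketched as follows. The function name `balance_by_class`, the data layout (a dict from class label to image list), and the fixed per-class target are illustrative assumptions, not the disclosed implementation.

```python
import random

def balance_by_class(images_by_class, target_per_class, seed=0):
    """Under/oversample each accessory class to target_per_class images.

    images_by_class maps accessory class labels (e.g. "WC0".."WCN") to
    lists of eye images. Classes with too many images are undersampled;
    classes with too few are oversampled by copying images, as the text
    above describes.
    """
    rng = random.Random(seed)
    balanced = {}
    for label, images in images_by_class.items():
        if len(images) >= target_per_class:
            # undersample: keep a random subset
            balanced[label] = rng.sample(images, target_per_class)
        else:
            # oversample: pad with copies of existing images
            copies = [rng.choice(images)
                      for _ in range(target_per_class - len(images))]
            balanced[label] = list(images) + copies
    return balanced

# Toy first learning data: WC0 is over-represented, WC1 under-represented.
second_learning_data = balance_by_class(
    {"WC0": ["img%d" % i for i in range(10)], "WC1": ["imgA", "imgB"]},
    target_per_class=5,
)
```

After balancing, both classes contribute the same number of eye images, which is one of the desired ratios mentioned above; a non-uniform target ratio could be expressed by passing a per-class target instead.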
- the model generation unit 212 generates an authentication model using the second learning data (step S23).
- the authentication model is an iris authentication model.
- the iris authentication model may be a model that outputs an iris authentication result when an eye image including an iris is input.
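As a concrete (and purely illustrative) stand-in for such a model, the sketch below holds pre-registered feature vectors and, given a probe feature vector extracted from an eye image, outputs a match score and an accept/reject decision. The class name, cosine-similarity matching, and the threshold are assumptions; the disclosure does not specify how the authentication model computes its output, nor how features are extracted from the second learning data.

```python
import math

class IrisAuthenticationModel:
    """Illustrative iris authentication model: nearest-neighbor matching
    of feature vectors by cosine similarity. Feature extraction (learned
    from the second learning data) is assumed to happen elsewhere."""

    def __init__(self, threshold=0.9):
        self.enrolled = {}          # person id -> registered feature vector
        self.threshold = threshold  # assumed acceptance threshold

    def enroll(self, person_id, feature):
        self.enrolled[person_id] = feature

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def authenticate(self, probe_feature):
        """Return (best matching person id, match score, accepted?)."""
        pid, score = max(((p, self._cosine(probe_feature, f))
                          for p, f in self.enrolled.items()),
                         key=lambda t: t[1])
        return pid, score, score >= self.threshold

model = IrisAuthenticationModel(threshold=0.9)
model.enroll("alice", [1.0, 0.0])
model.enroll("bob", [0.0, 1.0])
pid, score, accepted = model.authenticate([0.99, 0.05])
```

Here the probe vector is close to the registered vector for "alice", so the score exceeds the threshold and authentication succeeds.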
- the eye image included in the acquired first learning data may have information regarding the accessories of a person appearing in the eye image.
- the eye image acquisition unit 216 acquires an eye image of the subject (step S24).
- the authentication unit 213 performs iris authentication of the subject using an iris authentication model as an authentication model (step S25).
- identity is determined by comparing the subject's iris region with a pre-registered iris region. Even if the subject's iris region and the pre-registered iris region belong to the same person, there will be differences between them. For example, if the subject is wearing glasses or the like when the iris image is captured, the image of the subject's iris region will reflect the effects of the glasses or other accessories, which may affect the accuracy of authentication.
- an authentication model may be generated by training heavily on the 0th accessory class WC0, which has the largest number of eye images, but lightly on accessory classes WC with fewer eye images.
- the information processing device 2 in the second embodiment generates, from first learning data in which the accessories of the persons captured in the eye images have been detected and classified, second learning data for generating an authentication model, and can thus generate second learning data with a balance of accessory classes suitable for learning.
- the information processing device 2 generates an authentication model using the second learning data, and can therefore perform learning for authentication that covers not only the iris region but also the accessories.
- the information processing device 2 can construct an authentication model that is robust against occlusion, including occlusion by accessories.
- the information processing device 2 uses the authentication model to authenticate the subject, and can therefore perform accurate authentication regardless of whether or not the subject is wearing an accessory. In other words, the information processing device 2 can perform accurate iris authentication even if the subject is wearing an accessory.
- FIG. 4 is a block diagram showing the configuration of an information processing device 3 according to the third embodiment.
- the information processing device 3 according to the third embodiment differs from the information processing device 2 according to the second embodiment in the operation of a learning data generating unit 311, the operation of a model generating unit 312, and the operation of an authentication unit 313.
- Fig. 5 is a flowchart showing the flow of information processing operations performed by the information processing device 3 in the third embodiment.
- Fig. 5(a) is a flowchart showing the flow of a learning operation performed by the information processing device 3
- Fig. 5(b) is a flowchart showing the flow of an authentication operation performed by the information processing device 3.
[3-1-1: Learning Operation]
- the learning data acquisition unit 214 acquires first learning data including an eye image showing a person's eyes (step S20).
- the accessory detection unit 215 detects the accessory of the person shown in the eye image from the eye image included in the first learning data (step S21).
- based on the detection result of the accessory detection unit 215, the learning data generation unit 311 generates second learning data including classification learning data CD obtained by classifying the first learning data according to whether or not the person in the eye image is wearing an accessory and, if so, what type of accessory the person is wearing (step S30).
- the second learning data generated by the learning data generation unit 311 includes 0th classification learning data CD0 consisting of eye images belonging to the 0th accessory class WC0, 1st classification learning data CD1 consisting of eye images belonging to the 1st accessory class WC1, ..., and Nth classification learning data CDN consisting of eye images belonging to the Nth accessory class WCN.
- the learning data generation unit 311 in the third embodiment may appropriately adjust the number of eye images belonging to each accessory class WC, as described in the second embodiment.
- the model generation unit 312 uses each set of classification learning data CD to generate a classification authentication model CM corresponding to that classification (step S31). For example, the model generation unit 312 uses the 0th classification learning data CD0 to generate a 0th classification authentication model CM0 corresponding to the 0th accessory class WC0. The model generation unit 312 uses the 1st classification learning data CD1 to generate a 1st classification authentication model CM1 corresponding to the 1st accessory class WC1. ... The model generation unit 312 uses the Nth classification learning data CDN to generate an Nth classification authentication model CMN corresponding to the Nth accessory class WCN. Note that, in the information processing devices of the other embodiments as well, when the model generation unit 312 is realized in the arithmetic device, each of the classification authentication models CM corresponding to the classifications is generated.
[3-1-2: Authentication Operation]
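Step S31 — one authentication model per accessory class — reduces to mapping each class's learning data through a training routine. In the sketch below, `generate_classified_models`, the dict layout, and the toy `train_fn` are illustrative assumptions; the disclosure does not specify the training algorithm.

```python
def generate_classified_models(classified_learning_data, train_fn):
    """Sketch of step S31: build one authentication model CM per accessory
    class WC.

    classified_learning_data maps an accessory class label ("WC0".."WCN")
    to its classification learning data CD; train_fn is a hypothetical
    training routine that turns one CD into one classified model CM.
    """
    return {wc: train_fn(cd) for wc, cd in classified_learning_data.items()}

# Toy train_fn: the "model" just records how many eye images trained it.
models = generate_classified_models(
    {"WC0": ["i1", "i2", "i3"], "WC1": ["i4"]},
    train_fn=lambda cd: {"n_train": len(cd)},
)
```

Keeping the models in a dict keyed by accessory class makes the later selection step (choosing the CM that matches a detected accessory class) a simple lookup.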
- the eye image acquisition unit 216 acquires an eye image of the subject (step S24).
- the authentication unit 313 performs iris authentication of the subject using a classification authentication model CM (step S32).
- the authentication unit 313 may perform iris authentication using at least one of the classification authentication models CM from the 0th classification authentication model CM0 to the Nth classification authentication model CMN.
- the information processing device 3 in the third embodiment generates second learning data including classification learning data obtained by classifying the first learning data according to whether or not a person is wearing an accessory and, if so, what type of accessory the person is wearing, based on the detection result of the accessory of the person captured in the eye image, so that it is possible to obtain learning data suitable for the accessory class.
- the information processing device 3 can perform learning using the learning data suitable for the accessory class, so that it can generate each of the classification authentication models CM suitable for the accessory class.
- FIG. 6 is a block diagram showing the configuration of the information processing device 4 in the fourth embodiment.
- the information processing device 4 in the fourth embodiment differs from the information processing device 2 in the second embodiment and the information processing device 3 in the third embodiment in that an authentication model selection unit 417 is realized in the arithmetic device 21. Also, in the information processing device 4, as in the information processing device 3, the model generation unit 312 generates each of the classification authentication models CM corresponding to the classifications. Other features of the information processing device 4 may be the same as the corresponding features of at least one of the information processing device 2 and the information processing device 3. Therefore, hereinafter, the parts that differ from the embodiments already described will be described in detail, and duplicate description of the other parts will be omitted as appropriate.
[4-2: Information Processing Operation Performed by Information Processing Device 4]
- FIG. 7 is a flowchart showing the flow of the authentication operation performed by the information processing device 4 in the fourth embodiment.
- the eye image acquisition unit 216 acquires an eye image of the subject (step S24).
- the accessory detection unit 415 detects the accessory of the subject from the eye image of the subject (step S40).
- the authentication model selection unit 417 selects which of the classification authentication models CM the authentication unit 413 will use, based on the detection result of the accessory detection unit 415 (step S41).
- the authentication model selection unit 417 uses the accessory detection result to select a classification authentication model CM suitable for the accessory worn by the subject.
- the authentication unit 413 performs iris authentication of the subject using the selected classification authentication model CM (step S42).
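The selection in steps S40 to S42 amounts to a lookup from the detected accessory class to the corresponding model. The sketch below is an assumption: the function name, the dict of models, and the fallback to the no-accessory model WC0 when no class-specific model exists are illustrative choices not stated in the disclosure.

```python
def select_classified_model(detected_class, classified_models):
    """Sketch of step S41: pick the classification authentication model CM
    matching the detected accessory class, falling back to the
    no-accessory model WC0 when no class-specific model exists
    (fallback policy is an assumption)."""
    return classified_models.get(detected_class, classified_models["WC0"])

# The subject is detected wearing a first-type accessory (e.g. glasses),
# so the glasses-specific model is selected for step S42.
selected = select_classified_model(
    "WC1", {"WC0": "model-for-bare-eyes", "WC1": "model-for-glasses"})
```

The authentication unit 413 would then run iris authentication with `selected`, exactly as step S42 describes.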
- the information processing device 4 selects a classification authentication model CM according to the subject's accessory and authenticates the subject using the selected classification authentication model CM, thereby enabling highly accurate authentication.
- the authentication result may differ depending on whether the subject is wearing an accessory or not.
- the information processing device 4 uses the classification authentication model CM according to the accessory class WC, so that the authentication result is stable.
- the fifth embodiment of the information processing device, the information processing method, and the recording medium will be described below.
- the fifth embodiment of the information processing device, the information processing method, and the recording medium will be described below using an information processing device 5 to which the fifth embodiment of the information processing device, the information processing method, and the recording medium is applied.
- FIG. 8 is a block diagram showing the configuration of an information processing device 5 according to the fifth embodiment.
- the information processing device 5 according to the fifth embodiment differs from the information processing device 2 according to the second embodiment to the information processing device 4 according to the fourth embodiment in the operation of an authentication unit 513.
- FIG. 9 is a flowchart showing the flow of authentication operations performed by the information processing device 5 in the fifth embodiment.
- the eye image acquisition unit 216 acquires an eye image of the subject (step S24).
- the authentication unit 513 performs iris authentication of the subject using each of the categorized authentication models CM (step S50).
- the authentication unit 513 determines whether or not the iris of the subject can be authenticated based on each of the authentication results by each of the categorized authentication models CM (step S51). For example, the authentication unit 513 may perform matching using all of the 0th categorized authentication model CM0 to the Nth categorized authentication model CMN, and determine whether or not authentication can be performed using the highest matching score among the respective matching scores.
- alternatively, the authentication unit 513 may perform matching using all of the 0th categorized authentication model CM0 to the Nth categorized authentication model CMN, calculate an average score by averaging the respective matching scores, and determine whether or not authentication can be performed using the average score.
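The two decision rules just described (highest score versus average score) can be sketched as follows; the threshold value is an illustrative assumption, not one stated in the publication.

```python
# Hypothetical sketch of step S51: aggregate the matching scores produced by
# CM0..CMN and decide whether the subject can be authenticated.

def decide_by_max(scores, threshold=0.8):
    """Accept if the best per-model matching score clears the threshold."""
    return max(scores) >= threshold

def decide_by_average(scores, threshold=0.8):
    """Accept if the mean of all per-model matching scores clears it."""
    return sum(scores) / len(scores) >= threshold
```
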
- the information processing device 5 in the fifth embodiment performs authentication using each of the categorized authentication models CM, and therefore can perform appropriate authentication regardless of whether the subject person is wearing an attachment or not.
- a sixth embodiment of an information processing device, an information processing method, and a recording medium will be described below.
- the sixth embodiment of the information processing device, the information processing method, and the recording medium will be described using an information processing device 6 to which the sixth embodiment of the information processing device, the information processing method, and the recording medium is applied.
- FIG. 10 is a block diagram showing the configuration of the information processing device 6 in the sixth embodiment.
- the information processing device 6 in the sixth embodiment differs from the information processing device 2 in the second embodiment to the information processing device 5 in the fifth embodiment in that a categorized registration data generating unit 618 and a categorized registration data selection unit 619 are realized in the calculation device 21. It also differs in that a registration data storage unit 621 is realized in the storage device 22. Also, in the information processing device 6, as in the information processing device 3, the model generation unit 312 generates each of the categorized authentication models CM according to the categories. Other features of the information processing device 6 may be the same as at least one of the other features of the information processing device 2 to the information processing device 5. Therefore, hereinafter, the parts that differ from the embodiments already described will be described in detail, and the description of other overlapping parts will be omitted as appropriate.
- [6-2: Information Processing Operation Performed by Information Processing Device 6]
- FIG. 11 is a flowchart showing the flow of information processing operations performed by the information processing device 6 in the sixth embodiment.
- FIG. 11(a) is a flowchart showing the flow of registration data generation operations performed by the information processing device 6, and
- FIG. 11(b) is a flowchart showing the flow of authentication operations performed by the information processing device 6.
- the registration data storage unit 621 in the sixth embodiment stores registration data including registered eye images of a person.
- the registration data may include an eye image of only one eye, or may include eye images of both eyes.
- the categorized registration data generating unit 618 acquires registration data (step S60).
- the accessory detection unit 615 detects the accessory of the person appearing in the eye image from the eye image included in the registration data (step S61). Based on the detection result of the accessory detection unit 615, the categorized registration data generating unit 618 generates categorized registration data CRD that classifies the registration data according to whether the person appearing in the eye image is wearing an accessory and, if so, what type of accessory the person is wearing (step S62).
- the categorized registration data generating unit 618 generates 0th categorized registration data CRD0 consisting of eye images belonging to the 0th accessory class WC0, 1st categorized registration data CRD1 consisting of eye images belonging to the 1st accessory class WC1, ..., and Nth categorized registration data CRDN consisting of eye images belonging to the Nth accessory class WCN.
- the categorized registration data generating unit 618 may generate registration data including the 0th categorized registration data CRD0, the 1st categorized registration data CRD1, ..., and the Nth categorized registration data CRDN, and store the data in the registration data storage unit 621.
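The registration data generation operation above (steps S60 to S62) amounts to grouping enrollment records by the accessory class detected in each eye image. A minimal sketch, with all names hypothetical and `detect_class` standing in for the accessory detection unit:

```python
# Hypothetical sketch of steps S60-S62: split registration data into
# CRD0..CRDN by the accessory class detected in each eye image.

from collections import defaultdict

def categorize_registration_data(records, detect_class):
    """records: iterable of (person_id, eye_image) pairs.
    Returns {class_index: [(person_id, eye_image), ...]}."""
    crd = defaultdict(list)
    for person_id, eye_image in records:
        crd[detect_class(eye_image)].append((person_id, eye_image))
    return dict(crd)
```
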
- the eye image acquisition unit 216 acquires an eye image of the subject (step S24).
- the accessory detection unit 615 detects the accessory of the subject from the eye image of the subject (step S63).
- the categorized registration data selection unit 619 selects which of the categorized registration data CRD the authentication unit 613 uses based on the detection result of the accessory detection unit 615 (step S64).
- the authentication unit 613 performs iris authentication of the subject using the categorized registration data CRD selected by the categorized registration data selection unit 619 (step S65). That is, in the sixth embodiment, different registration data is used depending on the subject's accessory.
- the authentication unit 613 may also select the categorized authentication model CM to use depending on the detection result of the accessory.
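A sketch of steps S63 to S65 under the same assumptions: the probe is matched only against the registration subset whose accessory class matches the probe's detected class. The `match` function and threshold are illustrative stand-ins.

```python
# Hypothetical sketch of steps S63-S65: select the categorized registration
# data CRD_k for the probe's detected class k and match only against it.

def authenticate_with_crd(crd, probe_class, match, probe_image, threshold=0.8):
    """Return the best-matching person_id, or None if no score clears
    the threshold. `match` stands in for the iris matcher."""
    gallery = crd.get(probe_class, [])
    best_id, best_score = None, 0.0
    for person_id, enrolled_image in gallery:
        score = match(probe_image, enrolled_image)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```
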
- the information processing device 6 in the sixth embodiment classifies the registration data based on whether or not the person appearing in the eye image is wearing an accessory and, if so, what type of accessory the person is wearing, and selects the categorized registration data CRD to be used for authentication based on the subject's accessory, thereby enabling more accurate authentication.
- a seventh embodiment of an information processing device, an information processing method, and a recording medium will be described below.
- the seventh embodiment of the information processing device, the information processing method, and the recording medium will be described using an information processing device 7 to which the seventh embodiment of the information processing device, the information processing method, and the recording medium is applied.
- FIG. 12 is a block diagram showing the configuration of the information processing device 7 in the seventh embodiment.
- the information processing device 7 in the seventh embodiment is different from the information processing device 2 in the second embodiment to the information processing device 6 in the sixth embodiment in that the authentication unit 713 has a matching unit 7131, a weighting unit 7132, and a judgment unit 7133, and the attachment detection unit 715 has a calculation unit 7151 and a determination unit 7152.
- the model generation unit 312 generates each of the categorized authentication models CM according to the categories.
- Other features of the information processing device 7 may be the same as at least one of the other features of the information processing device 2 to the information processing device 6. Therefore, hereinafter, the parts that are different from the respective embodiments already described will be described in detail, and the explanation of the other overlapping parts will be omitted as appropriate.
- FIG. 13 is a flowchart showing the flow of authentication operations performed by the information processing device 7 in the seventh embodiment.
- the eye image acquisition unit 216 acquires an eye image of the subject (step S24).
- the calculation unit 7151 calculates the likelihood that the subject is not wearing any accessories, and the likelihood that the subject is wearing each type of accessory (step S70).
- the determination unit 7152 determines the weight of each accessory class WC according to the likelihood of each accessory class WC (step S71). The determination unit 7152 may use the likelihood value as the weight as is.
- the collation unit 7131 performs iris authentication of the subject using each of the categorized authentication models CM (step S72).
- the weighting unit 7132 weights each authentication result using the weight of the accessory class WC corresponding to the categorized authentication model CM (step S73). For example, the weighting unit 7132 may calculate a weighted average score by weighting the scores output by each of the categorized authentication models CM using weights according to the likelihood.
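The weighted-average computation described above can be sketched as follows, assuming (as the determination unit may) that the likelihood values are used as the weights directly; all names are illustrative.

```python
# Hypothetical sketch of steps S70-S73: use the per-class likelihoods as
# weights and compute a weighted average of the per-model matching scores.

def weighted_score(likelihoods, scores):
    """Likelihood-weighted average of the scores output by CM0..CMN."""
    total = sum(likelihoods)
    return sum(w * s for w, s in zip(likelihoods, scores)) / total
```
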
- the judgment unit 7133 determines whether or not the iris of the subject can be authenticated based on each of the weighted authentication results (step S74).
- [7-3: Technical Effects of Information Processing Device 7]
- the information processing device 7 in the seventh embodiment determines whether or not to authenticate the subject based on each of the authentication results weighted according to the likelihood of the attachment, and therefore can perform authentication with high accuracy.
- the eighth embodiment of the information processing device, information processing method, and recording medium will be described below.
- the eighth embodiment of the information processing device, information processing method, and recording medium will be described below using an information processing device 8 to which the eighth embodiment of the information processing device, information processing method, and recording medium is applied.
- FIG. 14 is a block diagram showing the configuration of an information processing device 8 according to the eighth embodiment.
- the information processing device 8 according to the eighth embodiment differs from the information processing device 2 according to the second embodiment to the information processing device 7 according to the seventh embodiment in the operation of a model generation unit 812 and the operation of an authentication unit 813.
- the authentication model has a function of outputting features of an eye image when an eye image is input.
- when eye images of the same person are input, the model generation unit 812 generates the authentication model so that, regardless of whether the person in the eye image is wearing an accessory, it outputs features that are similar, at or above a predetermined level, to the features output when an eye image of that person not wearing any accessory is input.
- in other words, the model generation unit 812 generates the authentication model so that the features output when an eye image of a person wearing an accessory is input are similar, at or above a predetermined level, to the features output when an eye image of the same person not wearing an accessory is input.
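One way to express such a training objective is to penalize the distance between the feature of an accessory-wearing eye image and the feature of the same person's accessory-free eye image. The sketch below uses cosine similarity as the similarity measure; that choice, and all names, are assumptions for illustration, as the publication does not specify a particular loss.

```python
# Hypothetical sketch of the training objective: drive the feature of an
# accessory-wearing eye image toward the feature of the same person's
# accessory-free eye image, so learned features become accessory-invariant.

import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def invariance_loss(feat_with_accessory, feat_without_accessory):
    """0 when the two feature vectors point the same way; larger otherwise."""
    return 1.0 - cosine_similarity(feat_with_accessory, feat_without_accessory)
```
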
- the authentication unit 813 in the eighth embodiment performs iris authentication using an accessory-free eye image as the registration data. Unlike in the sixth embodiment described above, the authentication model in the eighth embodiment can extract, from an eye image with an accessory, features similar to those of an accessory-free eye image, so the authentication unit 813 can perform iris authentication even if only accessory-free registration data is prepared. That is, in the eighth embodiment, it is sufficient to register only an eye image without any accessory as the registration data.
- [8-2: Technical Effects of Information Processing Device 8]
- the information processing device 8 in the eighth embodiment can perform authentication as long as an eye image taken when no attachment is worn is registered.
- [9: Ninth embodiment]
- a ninth embodiment of an information processing device, an information processing method, and a recording medium will be described below.
- the ninth embodiment of the information processing device, the information processing method, and a recording medium will be described using an information processing device 9 to which the ninth embodiment of the information processing device, the information processing method, and a recording medium is applied.
- in the ninth embodiment, eye surrounding authentication is performed in addition to, or instead of, iris authentication.
- Eye surrounding authentication may be performed by focusing on the area around the eyes, extracting features from the area around the eyes, and comparing the extraction results with registered data around the eyes.
- the features around the eyes may include the positions and shapes of the inner and outer corners of the eyes, the shape of the eyelids, and the like.
- Fig. 16 is a flowchart showing the flow of information processing operations performed by the information processing device 9 in the ninth embodiment.
- [9-2-1: Learning Operation]
- the learning data acquisition unit 214 acquires first learning data including an eye image showing a person's eyes (step S20).
- the first learning data includes an eye surrounding image including the area around the eye. Note that the first learning data used in the ninth embodiment may be the same as the first learning data used in the second to eighth embodiments.
- the covering object detection unit 915 detects coverings covering the area around the eyes from the eye image included in the first learning data (step S91).
- the covering object detection unit 915 may detect clothing worn by a person appearing in the eye image from the eye image included in the first learning data.
- the covering object detection unit 915 may detect coverings other than clothing worn by a person appearing in the eye image from the eye image included in the first learning data. Coverings other than clothing may include, for example, items that cover the area around the eyes, such as makeup.
- the covering object detection unit 915 may detect factors that affect the features around the eyes from the eye image.
- the covering object detection unit 915 may detect the state of the area around the eyes, such as occlusion or distortion due to the influence of physical condition, etc.
- the covering object detection unit 915 may detect which covering object class CC the eye image is classified into according to the influence on the area around the eyes included in the eye image. In addition, the covering object detection unit 915 may not detect, as a covering, an object that affects the iris but does not affect the features around the eyes. In other words, the detection operation by the attachment detection unit 215 and the detection operation by the covering object detection unit 915 may be different.
- the learning data generating unit 911 generates third learning data from the first learning data based on the detection result of the covering object detection unit 915 (step S92).
- the third learning data may be learning data generated to learn the characteristics of the area around the eyes.
- the third learning data may be learning data having desired characteristics of the area around the eyes related to the covering.
- the third learning data may be learning data generated to learn the influence on the area around the eyes when the area around the eyes is covered by a covering.
- the learning data generating unit 911 may generate the third learning data by adjusting the number of items belonging to each covering object, for example, as described in the second embodiment.
- the learning data generating unit 911 may generate the third learning data including classification learning data classified for each covering object, for example, as described in the third embodiment.
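One way to realize the count-adjustment approach mentioned above (as in the second embodiment) is to equalize the number of samples per covering class by downsampling. This is a sketch under that assumption; the function name and strategy are illustrative, not taken from the publication.

```python
# Hypothetical sketch of generating the third learning data by adjusting
# the number of items belonging to each covering class: downsample every
# class to the size of the smallest one.

import random

def balance_by_class(samples_by_class, seed=0):
    """samples_by_class: {covering_class: [eye_image, ...]}.
    Returns the same mapping with equal-sized sample lists."""
    rng = random.Random(seed)
    n = min(len(v) for v in samples_by_class.values())
    return {c: rng.sample(v, n) for c, v in samples_by_class.items()}
```
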
- the model generating unit 912 generates an eye-periphery authentication model using the third learning data (step S93).
- the model generating unit 912 may generate an eye-periphery authentication model corresponding to any covering, for example, as described in the second embodiment.
- the model generating unit 912 may generate an eye-periphery authentication model specialized for each covering, for example, as described in the third embodiment.
- [9-2-2: Authentication Operation]
- the eye surrounding image acquisition unit 916 acquires the eye surrounding image of the subject (step S94).
- the authentication unit 913 performs iris authentication of the subject using the iris authentication model and the iris area included in the eye surrounding image of the subject (step S95).
- the authentication unit 913 performs eye surrounding authentication of the subject using the eye surrounding authentication model and the eye surrounding image of the subject (step S96). That is, the authentication unit 913 performs two-factor authentication by performing eye surrounding authentication of the subject using the eye surrounding authentication model in addition to iris authentication of the subject using the iris authentication model.
- the authentication unit 913 may calculate a first score using the iris authentication model, calculate a second score using the eye surrounding authentication model, and authenticate the subject using the first score and the second score.
- the information processing device 9 in the ninth embodiment learns the area around the eyes so that eye surrounding authentication can be performed with high accuracy even when the condition of the area around the eyes has changed, for example when part of the area around the eyes is covered.
- the information processing device 9 can therefore perform eye surrounding authentication with high accuracy even when part of the area around the eyes is covered.
- the information processing device 9 can perform two-factor authentication, and therefore can authenticate the subject with higher accuracy.
- [Appendix 1] An information processing device comprising: a learning data generating means for generating second learning data based on first learning data including an eye image showing the eye of a target and information on an attachment of the target shown in the eye image included in the first learning data; a model generating means for generating an authentication model using the second learning data; and an authentication means for authenticating a target person using the authentication model.
- [Appendix 2] The information processing device according to Appendix 1, further comprising: a learning data acquisition means for acquiring the first learning data; and an attachment detection means for detecting an attachment of a target appearing in an eye image included in the first learning data, wherein the learning data generating means generates the second learning data from the first learning data in accordance with a detection result by the attachment detection means.
- [Appendix 3] the learning data generating means generates the second learning data including categorized learning data obtained by classifying the first learning data, based on a detection result by the attachment detection means, according to whether the subject shown in the eye image is wearing an attachment and, if so, what type of attachment the subject is wearing.
- [Appendix 4] The information processing device according to Appendix 3, wherein the model generating means generates each of the categorized authentication models according to the categories by using each of the categorized learning data.
- [Appendix 5] The information processing device according to Appendix 4, wherein the attachment detection means detects an attachment of the subject from the eye image of the subject, the information processing device further comprising an authentication model selection means that selects which of the categorized authentication models the authentication means is to use based on a detection result by the attachment detection means.
- [Appendix 6] The information processing device according to Appendix 4, wherein the authentication means includes: authenticating the subject using each of the categorized authentication models; and determining whether or not the subject can be authenticated based on each of the authentication results obtained by each of the categorized authentication models.
- [Appendix 7] a classification means for generating categorized registration data by classifying registration data, which includes an eye image showing the subject's eyes, based on the attachment of the subject detected by the attachment detection means from the eye image included in the registration data, according to whether the subject captured in the eye image is wearing an attachment and, if so, what type of attachment the subject is wearing;
- the information processing device described in Appendix 2, further comprising a registration data selection means for selecting which of the categorized registration data the authentication means will use based on the attachment of the subject detected by the attachment detection means from the eye image of the subject.
- [Appendix 8] the attachment detection means includes a calculation means for calculating a likelihood that the subject is not wearing an attachment and, for each type of attachment, a likelihood that the subject is wearing that attachment, and a determination means for determining each of the category weights in accordance with the likelihoods;
- the authentication means includes: authenticating the subject using each of the categorized authentication models; weighting each authentication result using the category weights corresponding to the categorized authentication models;
- and determining whether or not the subject can be authenticated based on each of the weighted authentication results.
- [Appendix 9] the authentication model has a function of outputting features of the eye image when the eye image is input;
- the information processing device described in Appendix 1 or 2, wherein the model generation means generates the authentication model so that, when eye images of the same subject are input, it outputs features similar, at or above a predetermined level, to the features output when an eye image of the subject not wearing any attachment is input, regardless of whether the subject in the eye image is wearing an attachment.
- [Appendix 10] the first learning data includes an eye surrounding image including an area around the eye;
- the attachment detection means detects an attachment of a target appearing in the eye-periphery image from the eye-periphery image included in the first learning data
- the learning data generating means generates third learning data from the first learning data based on a detection result by the attachment detecting means;
- the information processing device according to Appendix 2, wherein the model generating means generates an eye-periphery authentication model by using the third learning data.
- [Appendix 11] the authentication using the authentication model is iris authentication;
- the information processing device according to Appendix 10, wherein the authentication means performs eye-periphery authentication of the subject using the eye-periphery authentication model in addition to iris authentication of the subject using the authentication model.
- [Appendix 12] An information processing method comprising: generating second learning data based on first learning data including an eye image showing the eye of a target and information on an attachment of the target shown in the eye image included in the first learning data; generating an authentication model using the second learning data; and authenticating a subject using the authentication model.
- [Appendix 13] A recording medium having recorded thereon a computer program for causing a computer to execute an information processing method comprising: generating second learning data based on first learning data including an eye image showing the eye of the target and information on an attachment of the target shown in the eye image included in the first learning data; generating an authentication model using the second learning data; and authenticating a subject using the authentication model.
Abstract
Description
This disclosure relates to the technical fields of information processing devices, information processing methods, and recording media.
Patent Document 1 describes a technology in which a target iris image, which is the iris image to be processed, is obtained, and one or more similar registered iris images that are similar to the target iris image are searched for from a registered iris image, and if the person corresponding to the target iris image and the person corresponding to the similar registered iris image are different people, the target iris image and the similar registered iris image are determined to be iris images of color contact lenses.
Patent document 2 describes a technology that acquires an image of a subject's eye, compares the image of the eye with a reference image, identifies the coloring pattern of the color contact lenses worn by the subject, and identifies the subject using features of the iris region of the eye other than the coloring region of the coloring pattern.
Patent document 3 describes a technology in which the edge of the iris is detected in an image, texture is obtained from the image, and the edge and texture are combined to generate inner and outer boundaries of the iris, a method is provided for selecting between an ellipse model and a circle model to improve the accuracy of iris boundary detection, and a mask image is obtained using a dome model, and occlusion by the eyelid in the unwrapped image is removed, and the iris is extracted from the image.
Patent Literature 4 describes a technique for inputting a captured image into a first learning model and determining whether iris information can be extracted based on the output from the first learning model. The first learning model is constructed by machine learning using training data that associates a plurality of eye images with labels indicating whether the iris can be extracted.
Patent document 5 describes a technology in which different iris images corresponding to the same person are extracted as cosmetic lens candidates, the iris features of the cosmetic lens candidate are compared with the iris features of other cosmetic lens candidates, a reliability indicating the likelihood of the lens being a cosmetic lens is calculated based on the comparison results, and if the calculated reliability is equal to or greater than a predetermined threshold, the cosmetic lens candidate is determined to be a cosmetic lens.
Non-Patent Document 1 describes a technique for dividing an image into patches, calculating weights based on the degree of occlusion of each patch, and applying weights to areas with less occlusion to estimate facial expressions.
The objective of this disclosure is to provide an information processing device, an information processing method, and a recording medium that aim to improve upon the technology described in prior art documents.
One aspect of the information processing device includes a learning data generation means for generating second learning data based on first learning data including an eye image showing the eyes of a subject and information about the subject's accessories shown in the eye image included in the first learning data, a model generation means for generating an authentication model using the second learning data, and an authentication means for authenticating the subject using the authentication model.
One aspect of the information processing method is to generate second learning data based on first learning data including an eye image showing the subject's eyes and information about the subject's accessories that are shown in the eye image included in the first learning data, generate an authentication model using the second learning data, and authenticate the subject using the authentication model.
In one embodiment of the recording medium, a computer program is recorded to cause a computer to execute an information processing method that generates second learning data based on first learning data including an eye image showing the eyes of a subject and information about the subject's accessories that are shown in the eye image included in the first learning data, generates an authentication model using the second learning data, and authenticates the subject using the authentication model.
Hereinafter, embodiments of an information processing device, an information processing method, and a recording medium will be described with reference to the drawings.
[1: First embodiment]
A first embodiment of an information processing device, an information processing method, and a recording medium will be described below. In the following, the first embodiment of the information processing device, the information processing method, and the recording medium will be described using an information processing device 1 to which the first embodiment of the information processing device, the information processing method, and the recording medium is applied.
[1-1: Configuration of information processing device 1]
図1は、第1実施形態における情報処理装置1の構成を示すブロック図である。図1に示すように、情報処理装置1は、学習データ生成部11と、モデル生成部12と、認証部13とを備える。 FIG. 1 is a block diagram showing the configuration of an information processing device 1 in the first embodiment. As shown in FIG. 1, the information processing device 1 includes a learning data generation unit 11, a model generation unit 12, and an authentication unit 13.
学習データ生成部11は、対象の目が写る目画像を含む第1の学習データと、当該第1の学習データに含まれる目画像に写る対象の装着物に関する情報とに基づいて、第2の学習データを生成する。モデル生成部12は、第2の学習データを用いて認証モデルを生成する。認証部13は、認証モデルを用いて、対象者の認証を行う。 The learning data generating unit 11 generates second learning data based on first learning data including an eye image showing the eye of a subject and information on an accessory of the subject shown in the eye image included in the first learning data. The model generating unit 12 generates an authentication model using the second learning data. The authentication unit 13 authenticates the subject using the authentication model.
[1-2:情報処理装置1の技術的効果]
[1-2: Technical Effects of Information Processing Device 1]
第1実施形態における情報処理装置1は、目画像に写る対象の装着物に関する情報に基づいて第2の学習データを生成する。情報処理装置1は、この第2の学習データを用いて生成された認証モデルを用いて対象者の認証を行うので、対象者が装着物を装着しているか否かに関わらず、精度の良い認証をすることができる。 The information processing device 1 in the first embodiment generates second learning data based on information about the subject's accessory shown in the eye image. The information processing device 1 authenticates the subject using an authentication model generated using this second learning data, and therefore can perform highly accurate authentication regardless of whether the subject is wearing an accessory or not.
[2:第2実施形態]
[2: Second embodiment]
続いて、情報処理装置、情報処理方法、及び記録媒体の第2実施形態について説明する。以下では、情報処理装置、情報処理方法、及び記録媒体の第2実施形態が適用された情報処理装置2を用いて、情報処理装置、情報処理方法、及び記録媒体の第2実施形態について説明する。 Next, a second embodiment of the information processing device, the information processing method, and the recording medium will be described. In the following, the second embodiment of the information processing device, the information processing method, and the recording medium will be described using an information processing device 2 to which the second embodiment of the information processing device, the information processing method, and the recording medium is applied.
[2-1:情報処理装置2の構成]
[2-1: Configuration of information processing device 2]
図2は、第2実施形態における情報処理装置2の構成を示すブロック図である。図2に示すように、情報処理装置2は、演算装置21と、記憶装置22とを備えている。更に、情報処理装置2は、通信装置23と、入力装置24と、出力装置25とを備えていてもよい。但し、情報処理装置2は、通信装置23、入力装置24及び出力装置25のうちの少なくとも一つを備えていなくてもよい。演算装置21と、記憶装置22と、通信装置23と、入力装置24と、出力装置25とは、データバス26を介して接続されていてもよい。 FIG. 2 is a block diagram showing the configuration of the information processing device 2 in the second embodiment. As shown in FIG. 2, the information processing device 2 includes a calculation device 21 and a storage device 22. Furthermore, the information processing device 2 may include a communication device 23, an input device 24, and an output device 25. However, the information processing device 2 does not have to include at least one of the communication device 23, the input device 24, and the output device 25. The calculation device 21, the storage device 22, the communication device 23, the input device 24, and the output device 25 may be connected via a data bus 26.
演算装置21は、例えば、CPU(Central Processing Unit)、GPU(Graphics Processing Unit)及びFPGA(Field Programmable Gate Array)のうちの少なくとも一つを含む。演算装置21は、コンピュータプログラムを読み込む。例えば、演算装置21は、記憶装置22が記憶しているコンピュータプログラムを読み込んでもよい。例えば、演算装置21は、コンピュータで読み取り可能であって且つ一時的でない記録媒体が記憶しているコンピュータプログラムを、情報処理装置2が備える図示しない記録媒体読み取り装置(例えば、後述する入力装置24)を用いて読み込んでもよい。演算装置21は、通信装置23(或いは、その他の通信装置)を介して、情報処理装置2の外部に配置される不図示の装置からコンピュータプログラムを取得してもよい(つまり、ダウンロードしてもよい又は読み込んでもよい)。演算装置21は、読み込んだコンピュータプログラムを実行する。その結果、演算装置21内には、情報処理装置2が行うべき動作を実行するための論理的な機能ブロックが実現される。つまり、演算装置21は、情報処理装置2が行うべき動作(言い換えれば、処理)を実行するための論理的な機能ブロックを実現するためのコントローラとして機能可能である。 The arithmetic device 21 includes, for example, at least one of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and an FPGA (Field Programmable Gate Array). The arithmetic device 21 reads a computer program. For example, the arithmetic device 21 may read a computer program stored in the storage device 22. For example, the arithmetic device 21 may read a computer program stored in a computer-readable and non-transient recording medium using a recording medium reading device (e.g., an input device 24 described later) not shown in the figure that is provided in the information processing device 2. The arithmetic device 21 may acquire (i.e., download or read) a computer program from a device (not shown) located outside the information processing device 2 via the communication device 23 (or other communication device). The arithmetic device 21 executes the read computer program. As a result, a logical functional block for executing the operation to be performed by the information processing device 2 is realized within the calculation device 21. In other words, the calculation device 21 can function as a controller for realizing a logical functional block for executing the operation (in other words, processing) to be performed by the information processing device 2.
図2には、情報処理動作を実行するために演算装置21内に実現される論理的な機能ブロックの一例が示されている。図2に示すように、演算装置21内には、後述する付記に記載された「学習データ生成手段」の一具体例である学習データ生成部211と、後述する付記に記載された「モデル生成手段」の一具体例であるモデル生成部212と、後述する付記に記載された「認証手段」の一具体例である認証部213と、後述する付記に記載された「学習データ取得手段」の一具体例である学習データ取得部214と、後述する付記に記載された「装着物検出手段」の一具体例である装着物検出部215と、目画像取得部216とが実現される。但し、演算装置21内には、学習データ取得部214、装着物検出部215、及び目画像取得部216の少なくとも一つが実現されなくてもよい。学習データ生成部211、モデル生成部212、認証部213、学習データ取得部214、装着物検出部215、及び目画像取得部216の各々の動作の詳細については、図3を参照しながら後に説明する。 FIG. 2 shows an example of a logical functional block realized in the computing device 21 to execute an information processing operation. As shown in FIG. 2, the computing device 21 realizes a learning data generating unit 211, which is a specific example of a "learning data generating means" described in the appendix described later, a model generating unit 212, which is a specific example of a "model generating means" described in the appendix described later, an authentication unit 213, which is a specific example of an "authentication means" described in the appendix described later, a learning data acquiring unit 214, which is a specific example of a "learning data acquiring means" described in the appendix described later, an accessory detecting unit 215, which is a specific example of an "accessory detecting means" described in the appendix described later, and an eye image acquiring unit 216. However, at least one of the learning data acquiring unit 214, the accessory detecting unit 215, and the eye image acquiring unit 216 does not have to be realized in the computing device 21. Details of the operation of each of the learning data generation unit 211, the model generation unit 212, the authentication unit 213, the learning data acquisition unit 214, the accessory detection unit 215, and the eye image acquisition unit 216 will be described later with reference to FIG. 3.
記憶装置22は、所望のデータを記憶可能である。例えば、記憶装置22は、演算装置21が実行するコンピュータプログラムを一時的に記憶していてもよい。記憶装置22は、演算装置21がコンピュータプログラムを実行している場合に演算装置21が一時的に使用するデータを一時的に記憶してもよい。記憶装置22は、情報処理装置2が長期的に保存するデータを記憶してもよい。尚、記憶装置22は、RAM(Random Access Memory)、ROM(Read Only Memory)、ハードディスク装置、光磁気ディスク装置、SSD(Solid State Drive)及びディスクアレイ装置のうちの少なくとも一つを含んでいてもよい。つまり、記憶装置22は、一時的でない記録媒体を含んでいてもよい。 The storage device 22 can store desired data. For example, the storage device 22 may temporarily store a computer program executed by the arithmetic device 21. The storage device 22 may temporarily store data that is temporarily used by the arithmetic device 21 when the arithmetic device 21 is executing a computer program. The storage device 22 may store data that the information processing device 2 stores for a long period of time. The storage device 22 may include at least one of a RAM (Random Access Memory), a ROM (Read Only Memory), a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), and a disk array device. In other words, the storage device 22 may include a non-temporary recording medium.
通信装置23は、不図示の通信ネットワークを介して、情報処理装置2の外部の装置と通信可能である。通信装置23は、イーサネット(登録商標)、Wi-Fi(登録商標)、Bluetooth(登録商標)、USB(Universal Serial Bus)等の規格に基づく通信インターフェースであってもよい。 The communication device 23 is capable of communicating with devices external to the information processing device 2 via a communication network (not shown). The communication device 23 may be a communication interface based on standards such as Ethernet (registered trademark), Wi-Fi (registered trademark), Bluetooth (registered trademark), and USB (Universal Serial Bus).
入力装置24は、情報処理装置2の外部からの情報処理装置2に対する情報の入力を受け付ける装置である。例えば、入力装置24は、情報処理装置2のオペレータが操作可能な操作装置(例えば、キーボード、マウス及びタッチパネルのうちの少なくとも一つ)を含んでいてもよい。例えば、入力装置24は情報処理装置2に対して外付け可能な記録媒体にデータとして記録されている情報を読み取り可能な読取装置を含んでいてもよい。 The input device 24 is a device that accepts information input to the information processing device 2 from outside the information processing device 2. For example, the input device 24 may include an operating device (e.g., at least one of a keyboard, a mouse, and a touch panel) that can be operated by an operator of the information processing device 2. For example, the input device 24 may include a reading device that can read information recorded as data on a recording medium that can be attached externally to the information processing device 2.
出力装置25は、情報処理装置2の外部に対して情報を出力する装置である。例えば、出力装置25は、情報を画像として出力してもよい。つまり、出力装置25は、出力したい情報を示す画像を表示可能な表示装置(いわゆる、ディスプレイ)を含んでいてもよい。例えば、出力装置25は、情報を音声として出力してもよい。つまり、出力装置25は、音声を出力可能な音声装置(いわゆる、スピーカ)を含んでいてもよい。例えば、出力装置25は、紙面に情報を出力してもよい。つまり、出力装置25は、紙面に所望の情報を印刷可能な印刷装置(いわゆる、プリンタ)を含んでいてもよい。 The output device 25 is a device that outputs information to the outside of the information processing device 2. For example, the output device 25 may output information as an image. That is, the output device 25 may include a display device (so-called a display) capable of displaying an image showing the information to be output. For example, the output device 25 may output information as sound. That is, the output device 25 may include an audio device (so-called a speaker) capable of outputting sound. For example, the output device 25 may output information on paper. That is, the output device 25 may include a printing device (so-called a printer) capable of printing desired information on paper.
[2-2:情報処理装置2が行う情報処理動作]
[2-2: Information Processing Operation Performed by Information Processing Device 2]
図3を参照しながら、情報処理装置2が行う情報処理動作について説明する。図3は、情報処理装置2が行う情報処理動作の流れを示すフローチャートである。図3(a)は、情報処理装置2が行う学習動作の流れを示すフローチャートであり、図3(b)は、情報処理装置2が行う認証動作の流れを示すフローチャートである。 An information processing operation performed by the information processing device 2 will be described with reference to Fig. 3. Fig. 3 is a flowchart showing the flow of the information processing operation performed by the information processing device 2. Fig. 3(a) is a flowchart showing the flow of the learning operation performed by the information processing device 2, and Fig. 3(b) is a flowchart showing the flow of the authentication operation performed by the information processing device 2.
[2-2-1:学習動作]
[2-2-1: Learning Operation]
図3(a)に示す様に、学習データ取得部214は、対象の目が写る目画像を含む第1の学習データを取得する(ステップS20)。本実施形態において、対象は、人物を含む。また、本実施形態において、対象は、犬、蛇等の動物を含む。以下、対象が人物である場合を例に挙げて説明を行う。第1の学習データは、装着物を装着していない人物の目画像、及び装着物を装着している人物の目画像を含んでいる。また、装着物を装着している人物の目画像は、様々な種類の装着物を装着している人物の目画像を含んでいる。本実施形態において、装着物とは、例えば、眼鏡、コンタクトレンズ、マスク等の目の周辺に装着する装着物を指している。装着物は、目画像に含まれる人物の目の領域、特に人物の虹彩の領域を用いて認証処理、照合処理等を行う場合に影響を及ぼす物である。 As shown in FIG. 3(a), the learning data acquisition unit 214 acquires first learning data including an eye image showing the eyes of a target (step S20). In this embodiment, the target includes a person. Also, in this embodiment, the target includes animals such as dogs and snakes. Below, an example will be described in which the target is a person. The first learning data includes eye images of a person not wearing accessories, and eye images of a person wearing accessories. Also, the eye images of a person wearing accessories include eye images of people wearing various types of accessories. In this embodiment, accessories refer to accessories worn around the eyes, such as glasses, contact lenses, masks, etc. Accessories are objects that affect authentication processing, matching processing, etc., using the area of a person's eyes included in the eye image, particularly the area of the person's iris.
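As a concrete illustration of how one sample of the first learning data could be represented, here is a minimal Python sketch; the record type `EyeImageSample` and its field names are illustrative assumptions and do not appear in the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EyeImageSample:
    """One hypothetical sample of the first learning data.

    accessory_class is None until detection has run; 0 stands for
    "no accessory" (class WC0), and 1..N are the predefined accessory
    types (glasses, contact lenses, masks, and so on).
    """
    image: bytes                          # raw eye-image data (format left open)
    subject_id: str                       # identifier of the person in the image
    accessory_class: Optional[int] = None

sample = EyeImageSample(image=b"...", subject_id="person-001")
print(sample.accessory_class)  # → None (no detection yet)
```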
第1の学習データは記憶装置22に記憶されていてもよく、この場合、学習データ取得部214は、記憶装置22から第1の学習データを取得してもよい。または、学習データ取得部214は、情報処理装置2の外部の装置から、通信装置23を介して第1の学習データを取得してもよい。 The first learning data may be stored in the storage device 22, in which case the learning data acquisition unit 214 may acquire the first learning data from the storage device 22. Alternatively, the learning data acquisition unit 214 may acquire the first learning data from a device external to the information processing device 2 via the communication device 23.
装着物検出部215は、第1の学習データに含まれる目画像から、当該目画像に写る人物の装着物を検出する(ステップS21)。本実施形態では、装着物の種類は予め定められていてもよい。すなわち、装着物検出部215は、目画像に写る人物が装着物を装着しているか否か、及び、目画像に写る人物が装着物を装着している場合に何れの種類の装着物を装着しているかを検出してもよい。 The accessory detection unit 215 detects the accessory of the person appearing in the eye image from the eye image included in the first learning data (step S21). In this embodiment, the type of accessory may be determined in advance. That is, the accessory detection unit 215 may detect whether the person appearing in the eye image is wearing an accessory, and if the person appearing in the eye image is wearing an accessory, what type of accessory the person is wearing.
装着物の種類は、目画像に含まれる人物の目の領域が装着物により受けた影響により分類された種類であってもよい。装着物の種類は、目画像に含まれる人物の目の領域が装着物により隠れた隠れ方により分類された種類であってもよい。装着物の種類は、目画像に含まれる人物の目の領域が装着物により隠れたことによる影響により分類された種類であってもよい。装着物の種類は、目画像に含まれる人物の目の領域が装着物により隠されたことに起因して、認証結果に与える影響により分類された種類であってもよい。また、例えば、装着物としての眼鏡は、フレームの種類、レンズの種類等によって、複数の種類に分類されてもよい。 The type of accessory may be classified according to the effect that the accessory has on the area of the person's eyes included in the eye image. The type of accessory may be classified according to the manner in which the area of the person's eyes included in the eye image is hidden by the accessory. The type of accessory may be classified according to the effect that the area of the person's eyes included in the eye image is hidden by the accessory. The type of accessory may be classified according to the effect that the area of the person's eyes included in the eye image is hidden by the accessory, on the authentication result. For example, eyeglasses as an accessory may be classified into multiple types according to the type of frame, type of lens, etc.
例えば、N種類の装着物を定めたとする。この場合、装着物検出部215は、目画像に写る人物が、(0)装着物を装着していない、(1)1種類目の装着物を装着している、(2)2種類目の装着物を装着している、・・・、及び、(N)N種類目の装着物を装着している、の何れなのかを検出してもよい。装着物検出部215は、人物が装着物を装着していない尤度、及び、人物が装着物を装着している場合の各々の種類の装着物を装着している尤度を算出してもよい。装着物検出部215は、上記(0)から(N)の何れの場合がもっともらしいかにより、上記(0)から(N)の何れなのかを検出してもよい。また、目画像に写る人物は、複数の装着物(例えば、1種類目の装着物、及び2種類目の装着物)を装着している場合がある。この場合、装着物検出部215は、上記(1)、及び(2)を検出してもよい。以下、N種類の装着物を定めた場合を例に挙げて、本実施形態を説明する。 For example, assume that N types of accessories are defined. In this case, the accessory detection unit 215 may detect whether the person in the eye image is (0) not wearing any accessories, (1) wearing a first type of accessory, (2) wearing a second type of accessory, ..., or (N) wearing an Nth type of accessory. The accessory detection unit 215 may calculate the likelihood that the person is not wearing any accessories, and the likelihood that the person is wearing each type of accessory when wearing accessories. The accessory detection unit 215 may detect which of the above (0) to (N) cases is most likely. In addition, the person in the eye image may be wearing multiple accessories (for example, a first type of accessory and a second type of accessory). In this case, the accessory detection unit 215 may detect the above (1) and (2). Below, this embodiment will be described using an example in which N types of accessories are defined.
また、上記(0)から(N)の各々を、装着物クラスWCと呼んでもよい。人物が装着物を装着していない目画像は、第0装着物クラスWC0に属すると表現してもよい。人物が1種類目の装着物を装着している目画像は、第1装着物クラスWC1に属すると表現してもよい。・・・人物がN種類目の装着物を装着している目画像は、第N装着物クラスWCNに属すると表現してもよい。 Furthermore, each of the above (0) to (N) may be referred to as an accessory class WC. An eye image in which a person is not wearing any accessory may be said to belong to the 0th accessory class WC0. An eye image in which a person is wearing the first type of accessory may be said to belong to the first accessory class WC1. ... An eye image in which a person is wearing the Nth type of accessory may be said to belong to the Nth accessory class WCN.
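The likelihood-based detection of step S21 can be sketched as follows. `detect_accessory_class` is a hypothetical name, and a real detector would compute the likelihoods from the eye image itself; here they are simply passed in.

```python
def detect_accessory_class(likelihoods):
    """Return the most plausible accessory class WC0..WCN.

    likelihoods[k] is the likelihood that the eye image belongs to
    accessory class k (k = 0 meaning "no accessory"); the class with
    the highest likelihood is taken, as described for step S21.
    """
    return max(range(len(likelihoods)), key=lambda k: likelihoods[k])

# Here the 2nd accessory type is the most plausible case.
print(detect_accessory_class([0.1, 0.2, 0.6, 0.1]))  # → 2
```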
学習データ生成部211は、第1の学習データから、第2の学習データを生成する(ステップS22)。本実施形態において、第2の学習データは、虹彩の特徴を学習するために生成された学習データであってもよい。第2の学習データは、装着物に関する所望の特徴を備えた学習データであってもよい。第2の学習データは、装着物に関する虹彩の所望の特徴を備えた学習データであってもよい。第2の学習データは、特に、人物が装着物を装着した場合の虹彩への影響を学習するために生成された学習データであってもよい。 The learning data generating unit 211 generates second learning data from the first learning data (step S22). In this embodiment, the second learning data may be learning data generated for learning iris characteristics. The second learning data may be learning data having desired characteristics related to the wearable item. The second learning data may be learning data having desired characteristics of the iris related to the wearable item. The second learning data may be learning data generated in particular for learning the effect on the iris when a person wears the wearable item.
学習データ生成部211は、第1の学習データと、第1の学習データに含まれる目画像に写る人物の装着物に関する情報とに基づいて、第2の学習データを生成する。学習データ生成部211は、装着物検出部215による検出結果に応じて、第1の学習データから第2の学習データを生成する。 The learning data generation unit 211 generates second learning data based on the first learning data and information about the accessories of the person appearing in the eye image included in the first learning data. The learning data generation unit 211 generates second learning data from the first learning data according to the detection result by the accessory detection unit 215.
学習データ生成部211は、装着物検出部215による検出結果に基づいて、第1の学習データに含まれる目画像を、人物が装着物を装着していない第0装着物クラスWC0、第1種類目の装着物を装着している第1装着物クラスWC1、第2種類目の装着物を装着している第2装着物クラスWC2、・・・第N種類目の装着物を装着している第N装着物クラスWCNの何れかに分類してもよい。 The learning data generation unit 211 may classify the eye images included in the first learning data based on the detection results by the accessory detection unit 215 into one of the following: a 0th accessory class WC0 in which the person is not wearing any accessory, a 1st accessory class WC1 in which the person is wearing a 1st type of accessory, a 2nd accessory class WC2 in which the person is wearing a 2nd type of accessory, ..., an Nth accessory class WCN in which the person is wearing an Nth type of accessory.
学習データ生成部211は、各々の装着物クラスWCに属する目画像の数の割合が所望の割合になるように第2の学習データを生成してもよい。学習データ生成部211は、属する目画像の数が少なすぎる装着物クラスWC、又は、属する目画像の数が多すぎる装着物クラスWCが生じないように調整して第2の学習データを生成してもよい。例えば、学習データ生成部211は、必要な目画像を選択するアンダーサンプリング、及び必要な目画像を水増しするオーバーサンプリングを行ってもよい。オーバーサンプリングは、該当目画像のコピーにより実現してもよい。 The learning data generating unit 211 may generate the second learning data so that the ratio of the number of eye images belonging to each accessory class WC is a desired ratio. The learning data generating unit 211 may generate the second learning data by making adjustments so that there are no accessory classes WC that have too few eye images or no accessory classes WC that have too many eye images. For example, the learning data generating unit 211 may perform undersampling to select the necessary eye images, and oversampling to pad out the necessary eye images. Oversampling may be achieved by copying the relevant eye images.
学習データ生成部211は、各々の装着物クラスWCに属する目画像の数が同じになるように第2の学習データを生成してもよい。または、学習データ生成部211は、装着物検出部215の検出結果を基に、各々の装着物クラスWCに属する目画像の割合が所望の割合になるように第2の学習データを生成してもよい。すなわち、各々の装着物クラスWCに属する目画像の数は同じでなくてもよい。学習データ生成部211は、各々の装着物クラスWCに属する目画像の数が一定である状態から、第1の学習データに含まれる各々の装着物クラスWCに属する目画像の数に応じて、各々の装着物クラスWCに属する目画像の数を調整してもよい。 The learning data generating unit 211 may generate the second learning data so that the number of eye images belonging to each accessory class WC is the same. Alternatively, the learning data generating unit 211 may generate the second learning data based on the detection results of the accessory detection unit 215 so that the ratio of eye images belonging to each accessory class WC is a desired ratio. In other words, the number of eye images belonging to each accessory class WC does not have to be the same. The learning data generating unit 211 may adjust the number of eye images belonging to each accessory class WC according to the number of eye images belonging to each accessory class WC included in the first learning data, starting from a state in which the number of eye images belonging to each accessory class WC is constant.
第2の学習データは、各々の装着物クラスWCに属する、数が調整された目画像を含んだ学習データであってもよい。つまり、学習データ生成部211は、第1の学習データに含まれる目画像を、所望の虹彩認証をするために数を調整して、第2の学習データを生成する。 The second learning data may be learning data that includes eye images, the number of which is adjusted and which belong to each of the attachment classes WC. In other words, the learning data generating unit 211 adjusts the number of eye images included in the first learning data in order to perform the desired iris authentication, and generates the second learning data.
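The number adjustment described above, undersampling classes with too many eye images and oversampling (copying) classes with too few, can be sketched like this; `balance_classes` and its parameters are illustrative names, and strings stand in for eye images.

```python
import random

def balance_classes(samples_by_class, target_per_class, seed=0):
    """Bring every accessory class WC to target_per_class eye images.

    Classes above the target are randomly thinned (undersampling);
    classes below it are padded with copies of their own images
    (oversampling), as described for step S22.
    """
    rng = random.Random(seed)
    balanced = {}
    for wc, samples in samples_by_class.items():
        if len(samples) >= target_per_class:
            balanced[wc] = rng.sample(samples, target_per_class)
        else:
            padded = list(samples)
            while len(padded) < target_per_class:
                padded.append(rng.choice(samples))  # copy an existing image
            balanced[wc] = padded
    return balanced

second_data = balance_classes({0: ["a"] * 50, 1: ["b"] * 5}, target_per_class=20)
print({wc: len(v) for wc, v in second_data.items()})  # → {0: 20, 1: 20}
```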
モデル生成部212は、第2の学習データを用いて認証モデルを生成する(ステップS23)。本実施形態において、認証モデルは、虹彩認証モデルである。虹彩認証モデルは、虹彩を含む目画像が入力された場合、虹彩認証の結果を出力するモデルであってもよい。 The model generation unit 212 generates an authentication model using the second learning data (step S23). In this embodiment, the authentication model is an iris authentication model. The iris authentication model may be a model that outputs an iris authentication result when an eye image including an iris is input.
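As a loose illustration of step S23 and of the model's input/output shape, the toy sketch below "enrolls" (subject, feature-vector) pairs and authenticates by nearest-neighbor distance. This is only a stand-in: the disclosure does not specify the model's actual form, and the feature vectors, names, and threshold are assumptions.

```python
def generate_authentication_model(second_learning_data):
    """Build a toy iris authentication model from (subject_id, features) pairs.

    The returned callable takes a feature vector and returns the enrolled
    subject it matches, or None when nobody is close enough.
    """
    enrolled = {}
    for subject_id, features in second_learning_data:
        enrolled.setdefault(subject_id, []).append(features)

    def authenticate(features, threshold=1.0):
        best_id, best_dist = None, threshold
        for sid, feats in enrolled.items():
            for f in feats:
                d = sum((a - b) ** 2 for a, b in zip(features, f)) ** 0.5
                if d < best_dist:
                    best_id, best_dist = sid, d
        return best_id

    return authenticate

model = generate_authentication_model([("alice", (0.1, 0.9)), ("bob", (0.8, 0.2))])
print(model((0.12, 0.88)))  # → alice
```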
なお、情報処理装置2内において装着物の検出をしなくてもよい。この場合、例えば、取得した第1の学習データに含まれる目画像は、目画像に写る人物の装着物に関する情報を有していてもよい。 It is not necessary to detect accessories within the information processing device 2. In this case, for example, the eye image included in the acquired first learning data may have information regarding the accessories of the person appearing in the eye image.
[2-2-2:認証動作]
[2-2-2: Authentication Operation]
図3(b)に示す様に、目画像取得部216は、対象者の目画像を取得する(ステップS24)。認証部213は、認証モデルとしての虹彩認証モデルを用いて、対象者の虹彩認証を行う(ステップS25)。 As shown in FIG. 3(b), the eye image acquisition unit 216 acquires an eye image of the subject (step S24). The authentication unit 213 performs iris authentication of the subject using an iris authentication model as the authentication model (step S25).
[2-3:情報処理装置2の技術的効果]
[2-3: Technical Effects of Information Processing Device 2]
対象者の目画像の虹彩領域を用いた虹彩認証では、対象者の虹彩領域と予め保存された虹彩領域とを用いて本人判定を行う。対象者の虹彩領域と、予め登録された虹彩領域とでは、同一人物の虹彩であっても差が存在する。例えば、対象が眼鏡等を装着した状態で虹彩撮影を行った場合、対象の虹彩領域画像には眼鏡等の装着物の影響が反映され、認証の精度に影響を与える可能性がある。 In iris authentication using the iris region of the subject's eye image, identity is determined using the subject's iris region and a pre-stored iris region. Even if the subject's iris region and the pre-registered iris region belong to the same person, there will be differences. For example, if the subject is wearing glasses or similar when an iris photograph is taken, the subject's iris region image will reflect the effects of the glasses or other accessories, which may affect the accuracy of authentication.
また、認証モデルの構築に用いる学習データに含まれる各々の装着物クラスWCに属する目画像の数のばらつきが大きいと、属する目画像の数が少ない装着物クラスWCを軽視した学習が行われてしまう場合がある。例えば、属する目画像の数が最も多い第0装着物クラスWC0には重めの学習をするのに対し、属する目画像の数が少ない装着物クラスWCに関しては軽めの学習をして構築された認証モデルが生成されてしまう場合等が生じ得る。 In addition, if there is a large variation in the number of eye images belonging to each of the wearable object classes WC contained in the learning data used to construct the authentication model, learning may be performed with less emphasis on wearable object classes WC with fewer eye images. For example, an authentication model may be generated that is constructed by performing heavy learning on the 0th wearable object class WC0, which has the largest number of eye images, but light learning on wearable object classes WC with fewer eye images.
第2実施形態における情報処理装置2は、目画像に写る人物の装着物を検出して分類した第1の学習データから、認証モデルを生成するための第2の学習データを生成するので、人物の装着物に応じた学習に好適な、装着物クラスの割合がバランスした第2の学習データを生成することができる。情報処理装置2は、当該第2の学習データを用いて認証モデルを生成するので、虹彩領域だけでなく、装着物も含めて認証の学習をすることができる。情報処理装置2は、装着物を含めたオクルージョンに頑健な認証モデルを構築することができる。情報処理装置2は、当該認証モデルを用いて対象者の認証を行うので、対象者が装着物を装着しているか否かに関わらず、精度の良い認証をすることができる。すなわち、情報処理装置2は、対象者が装着物を着用している場合であっても、精度よく虹彩認証を行うことができる。 The information processing device 2 in the second embodiment generates the second learning data for generating an authentication model from the first learning data in which the accessories of the person in the eye image have been detected and classified, and can therefore generate second learning data with a balanced ratio of accessory classes, suitable for learning according to the person's accessories. Because the information processing device 2 generates the authentication model using this second learning data, it can learn authentication that takes into account not only the iris region but also accessories. The information processing device 2 can construct an authentication model that is robust to occlusion, including occlusion by accessories. Because the information processing device 2 authenticates the subject using this authentication model, it can perform highly accurate authentication regardless of whether the subject is wearing an accessory. That is, the information processing device 2 can perform iris authentication with high accuracy even when the subject is wearing an accessory.
[3:第3実施形態]
[3: Third embodiment]
続いて、情報処理装置、情報処理方法、及び記録媒体の第3実施形態について説明する。以下では、情報処理装置、情報処理方法、及び記録媒体の第3実施形態が適用された情報処理装置3を用いて、情報処理装置、情報処理方法、及び記録媒体の第3実施形態について説明する。 Next, a third embodiment of the information processing device, information processing method, and recording medium will be described. Below, the third embodiment of the information processing device, information processing method, and recording medium will be described using an information processing device 3 to which the third embodiment of the information processing device, information processing method, and recording medium is applied.
図4は、第3実施形態における情報処理装置3の構成を示すブロック図である。第3実施形態における情報処理装置3は、学習データ生成部311の動作、モデル生成部312の動作、及び認証部313の動作が第2実施形態における情報処理装置2と異なる。 FIG. 4 is a block diagram showing the configuration of an information processing device 3 according to the third embodiment. The information processing device 3 according to the third embodiment differs from the information processing device 2 according to the second embodiment in the operation of a learning data generating unit 311, the operation of a model generating unit 312, and the operation of an authentication unit 313.
[3-1:情報処理装置3が行う情報処理動作]
[3-1: Information Processing Operation Performed by Information Processing Device 3]
図5を参照して、第3実施形態における情報処理装置3が行う情報処理動作の流れを説明する。図5は、第3実施形態における情報処理装置3が行う情報処理動作の流れを示すフローチャートである。図5(a)は、情報処理装置3が行う学習動作の流れを示すフローチャートであり、図5(b)は、情報処理装置3が行う認証動作の流れを示すフローチャートである。 The flow of information processing operations performed by the information processing device 3 in the third embodiment will be described with reference to Fig. 5. Fig. 5 is a flowchart showing the flow of information processing operations performed by the information processing device 3 in the third embodiment. Fig. 5(a) is a flowchart showing the flow of a learning operation performed by the information processing device 3, and Fig. 5(b) is a flowchart showing the flow of an authentication operation performed by the information processing device 3.
[3-1-1:学習動作]
[3-1-1: Learning Operation]
図5(a)に示す様に、学習データ取得部214は、人物の目が写る目画像を含む第1の学習データを取得する(ステップS20)。装着物検出部215は、第1の学習データに含まれる目画像から、当該目画像に写る人物の装着物を検出する(ステップS21)。 As shown in FIG. 5(a), the learning data acquisition unit 214 acquires first learning data including an eye image showing a person's eyes (step S20). The accessory detection unit 215 detects the accessory of the person shown in the eye image from the eye image included in the first learning data (step S21).
学習データ生成部311は、装着物検出部215の検出結果に基づいて、第1の学習データを、目画像に写る人物が装着物を装着しているか否か、及び、装着物を装着している場合に何れの種類の装着物を装着しているかにより分類した分類学習データCDを含む第2の学習データを生成する(ステップS30)。言い換えると、学習データ生成部311に生成された第2の学習データは、第0装着物クラスWC0に属する目画像からなる第0分類学習データCD0、第1装着物クラスWC1に属する目画像からなる第1分類学習データCD1、・・・、及び第N装着物クラスWCNに属する目画像からなる第N分類学習データCDNを含んでいる。第3実施形態における学習データ生成部311は、第2実施形態において説明した様に、各々の装着物クラスWCに属する目画像の数を、適切に調整してもよい。 Based on the detection result of the accessory detection unit 215, the learning data generation unit 311 generates second learning data including classification learning data CD obtained by classifying the first learning data according to whether or not the person in the eye image is wearing an accessory and, if so, what type of accessory the person is wearing (step S30). In other words, the second learning data generated by the learning data generation unit 311 includes 0th classification learning data CD0 consisting of eye images belonging to the 0th accessory class WC0, 1st classification learning data CD1 consisting of eye images belonging to the 1st accessory class WC1, ..., and Nth classification learning data CDN consisting of eye images belonging to the Nth accessory class WCN. The learning data generation unit 311 in the third embodiment may appropriately adjust the number of eye images belonging to each accessory class WC, as described in the second embodiment.
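The classification into CD0..CDN of step S30 can be sketched as follows; `split_into_classified_data` is a hypothetical name, and each sample is assumed to already carry its detected accessory class from step S21.

```python
from collections import defaultdict

def split_into_classified_data(samples):
    """Group samples into classification learning data CD0..CDN.

    samples is a list of (image, accessory_class) pairs; the result
    maps class index k to the list of images forming CDk.
    """
    cd = defaultdict(list)
    for image, wc in samples:
        cd[wc].append(image)
    return dict(cd)

cd = split_into_classified_data([("img1", 0), ("img2", 1), ("img3", 1)])
print(sorted(cd))  # → [0, 1]

# From each CDk a separate classification authentication model CMk
# would then be trained (step S31); strings stand in for the models here.
models = {wc: f"CM{wc}" for wc in cd}
print(models)  # → {0: 'CM0', 1: 'CM1'}
```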
モデル生成部312は、分類学習データCDの各々を用いて、分類に応じた分類認証モデルCMの各々を生成する(ステップS31)。例えば、モデル生成部312は、第0分類学習データCD0を用いて、第0装着物クラスWC0に応じた第0分類認証モデルCM0を生成する。モデル生成部312は、第1分類学習データCD1を用いて、第1装着物クラスWC1に応じた第1分類認証モデルCM1を生成する。・・・モデル生成部312は、第N分類学習データCDNを用いて、第N装着物クラスWCNに応じた第N分類認証モデルCMNを生成する。なお、他の実施形態における情報処理装置は、演算装置内にモデル生成部312が実現される場合、分類に応じた分類認証モデルCMの各々を生成する。 The model generation unit 312 uses each of the classification learning data CD to generate each of the classification authentication models CM according to the classification (step S31). For example, the model generation unit 312 uses the 0th classification learning data CD0 to generate a 0th classification authentication model CM0 corresponding to the 0th accessory class WC0. The model generation unit 312 uses the first classification learning data CD1 to generate a first classification authentication model CM1 corresponding to the first accessory class WC1. ... The model generation unit 312 uses the Nth classification learning data CDN to generate an Nth classification authentication model CMN corresponding to the Nth accessory class WCN. Note that, in information processing devices in other embodiments, when the model generation unit 312 is realized in the computing device, each of the classification authentication models CM according to the classification is generated.
[3-1-2:認証動作]
[3-1-2: Authentication Operation]
図5(b)に示す様に、目画像取得部216は、対象者の目画像を取得する(ステップS24)。認証部313は、分類認証モデルCMを用いて、対象者の虹彩認証を行う(ステップS32)。第3実施形態において、認証部313は、上記第0分類認証モデルCM0から第N分類認証モデルCMNの少なくとも1つの分類認証モデルCMを用いて虹彩認証を行ってもよい。 As shown in FIG. 5(b), the eye image acquisition unit 216 acquires an eye image of the subject (step S24). The authentication unit 313 performs iris authentication of the subject using a classification authentication model CM (step S32). In the third embodiment, the authentication unit 313 may perform iris authentication using at least one classification authentication model CM from the 0th classification authentication model CM0 to the Nth classification authentication model CMN.
[3-2:情報処理装置3の技術的効果]
[3-2: Technical Effects of Information Processing Device 3]
第3実施形態における情報処理装置3は、第1の学習データを、目画像に写る人物の装着物の検出結果に基づいて、人物が装着物を装着しているか否か、及び、装着している場合に何れの種類の装着物を装着しているかにより分類した分類学習データを含む第2の学習データを生成するので、装着物に関するクラスに適した学習データを取得することができる。情報処理装置3は、装着物に関するクラスに適した学習データを用いた学習をさせることができるので、装着物に関するクラスに適した分類認証モデルCMの各々を生成することができる。 The information processing device 3 in the third embodiment generates second learning data including classification learning data obtained by classifying the first learning data according to whether or not a person is wearing an accessory and, if so, what type of accessory the person is wearing, based on the detection result of the accessory of the person captured in the eye image, so that it is possible to obtain learning data suitable for the accessory class. The information processing device 3 can perform learning using the learning data suitable for the accessory class, so that it can generate each of the classification authentication models CM suitable for the accessory class.
[4:第4実施形態]
[4: Fourth embodiment]
A fourth embodiment of the information processing device, the information processing method, and the recording medium will be described below. In the following, the fourth embodiment of the information processing device, the information processing method, and the recording medium will be described using an information processing device 4 to which the fourth embodiment of the information processing device, the information processing method, and the recording medium is applied.
[4-1: Configuration of information processing device 4]
The configuration of the information processing device 4 in the fourth embodiment will be described with reference to FIG. 6. FIG. 6 is a block diagram showing the configuration of the information processing device 4 in the fourth embodiment.
As shown in Fig. 6, the information processing device 4 in the fourth embodiment differs from the information processing device 2 in the second embodiment and the information processing device 3 in the third embodiment in that an authentication model selection unit 417 is realized in the arithmetic device 21. Also, in the information processing device 4, as in the information processing device 3, the model generation unit 312 generates each of the categorized authentication models CM according to the categories. Other features of the information processing device 4 may be the same as other features of at least one of the information processing device 2 and the information processing device 3. Therefore, hereinafter, the parts that differ from the embodiments already described will be described in detail, and the explanation of other overlapping parts will be omitted as appropriate.
[4-2: Information Processing Operation Performed by Information Processing Device 4]
The flow of the information processing operation performed by the information processing device 4 in the fourth embodiment will be described with reference to FIG. 7. The information processing operation performed by the information processing device 4 in the fourth embodiment differs from the information processing operation performed by the information processing device 3 in the third embodiment in terms of authentication operation. FIG. 7 is a flowchart showing the flow of the authentication operation performed by the information processing device 4 in the fourth embodiment.
As shown in Fig. 7, the eye image acquisition unit 216 acquires an eye image of the subject (step S24). The accessory detection unit 415 detects the accessory of the subject from the eye image of the subject (step S40). The authentication model selection unit 417 selects which of the categorized authentication models CM the authentication unit 413 will use based on the detection result of the accessory detection unit 415 (step S41). The authentication model selection unit 417 uses the accessory detection result to select a categorized authentication model CM suitable for the accessory worn by the subject. The authentication unit 413 performs iris authentication of the subject using the selected categorized authentication model CM (step S42).
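Steps S40 to S42 above can be illustrated with the following non-limiting sketch. The placeholder detector, the dictionary model layout, the negative-squared-distance score, and the threshold value are all assumptions introduced for illustration.

```python
# Hypothetical sketch of steps S40-S42: detect the subject's accessory class,
# select the categorized authentication model CM matching that class, and
# match the subject's features against the selected model.

def detect_accessory_class(eye_image):
    """Placeholder for the accessory detection unit (step S40)."""
    return eye_image["accessory_class"]

def select_model(categorized_models, wc):
    """Step S41: pick the categorized model suited to accessory class WCk."""
    return categorized_models[wc]

def match_score(template, probe):
    """Toy similarity: negative squared distance (higher is better)."""
    return -sum((p - t) ** 2 for p, t in zip(probe, template))

categorized_models = {
    0: {"alice": [0.10, 0.25]},   # CM0: no accessory
    1: {"alice": [0.20, 0.20]},   # CM1: e.g. glasses
}
subject = {"accessory_class": 1, "features": [0.21, 0.19]}

wc = detect_accessory_class(subject)                    # step S40
cm = select_model(categorized_models, wc)               # step S41
score = match_score(cm["alice"], subject["features"])   # step S42
accepted = score > -0.05                                # illustrative threshold
```

Because the subject is detected as wearing glasses (class 1), matching runs against the glasses-specific template rather than the bare-eye one.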
[4-3: Technical Effects of Information Processing Device 4]
In the fourth embodiment, the information processing device 4 selects a categorized authentication model CM according to the subject's accessory and authenticates the subject using the selected categorized authentication model CM, thereby enabling highly accurate authentication.
When iris authentication is performed using the same authentication model, the authentication result may differ depending on whether or not the subject is wearing an accessory. The information processing device 4 uses a different categorized authentication model CM according to the accessory class WC, so the authentication result is stable.
[5: Fifth embodiment]
The fifth embodiment of the information processing device, the information processing method, and the recording medium will be described below. The fifth embodiment of the information processing device, the information processing method, and the recording medium will be described below using an information processing device 5 to which the fifth embodiment of the information processing device, the information processing method, and the recording medium is applied.
FIG. 8 is a block diagram showing the configuration of an information processing device 5 according to the fifth embodiment. The information processing device 5 according to the fifth embodiment differs from the information processing device 2 according to the second embodiment to the information processing device 4 according to the fourth embodiment in the operation of an authentication unit 513.
[5-1: Information Processing Operation Performed by Information Processing Device 5]
The flow of information processing operations performed by the information processing device 5 in the fifth embodiment will be described with reference to FIG. 9. The information processing operations performed by the information processing device 5 in the fifth embodiment differ in authentication operations from the information processing operations performed by at least one of the information processing device 3 in the third embodiment and the information processing device 4 in the fourth embodiment. Also, in the information processing device 5, as in the information processing device 3, the model generation unit 312 generates each of the categorized authentication models CM according to the categories. FIG. 9 is a flowchart showing the flow of authentication operations performed by the information processing device 5 in the fifth embodiment.
As shown in FIG. 9, the eye image acquisition unit 216 acquires an eye image of the subject (step S24). The authentication unit 513 performs iris authentication of the subject using each of the categorized authentication models CM (step S50). The authentication unit 513 determines whether or not the subject's iris authentication succeeds, based on each of the authentication results from each of the categorized authentication models CM (step S51). For example, the authentication unit 513 may perform matching using all of the 0th categorized authentication model CM0 to the Nth categorized authentication model CMN, and determine whether or not to authenticate using the highest matching score among the respective matching scores. Alternatively, the authentication unit 513 may perform matching using all of the 0th categorized authentication model CM0 to the Nth categorized authentication model CMN, calculate an average score by averaging the respective matching scores, and determine whether or not to authenticate using the average score.
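As a non-limiting sketch of steps S50 and S51, matching against every categorized model and deciding by either the highest or the average score might look as follows. The model layout, score function, and threshold are illustrative assumptions.

```python
# Hypothetical sketch of steps S50-S51: run the probe through every
# categorized authentication model CM0..CMN and decide authentication using
# either the highest matching score or the average matching score.

def match_score(template, probe):
    """Toy similarity: negative squared distance (higher is better)."""
    return -sum((p - t) ** 2 for p, t in zip(probe, template))

def authenticate_with_all_models(models, identity, probe, threshold, mode):
    """mode 'max': use the highest score; otherwise: use the average score."""
    scores = [match_score(m[identity], probe) for m in models.values()]
    agg = max(scores) if mode == "max" else sum(scores) / len(scores)
    return agg >= threshold, agg

models = {
    0: {"alice": [0.1, 0.2]},   # CM0: no accessory
    1: {"alice": [0.5, 0.5]},   # CM1: e.g. glasses
}
probe = [0.1, 0.2]              # probe matches the bare-eye template exactly
ok_max, _ = authenticate_with_all_models(models, "alice", probe, -0.05, "max")
ok_avg, _ = authenticate_with_all_models(models, "alice", probe, -0.05, "avg")
```

With these numbers the highest-score rule accepts (the bare-eye model matches perfectly), while the average-score rule is dragged down by the mismatching glasses model; which rule is preferable depends on the deployment.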
[5-2: Technical Effects of Information Processing Device 5]
The information processing device 5 in the fifth embodiment performs authentication using each of the categorized authentication models CM, and therefore can perform appropriate authentication regardless of whether or not the subject is wearing an accessory.
[6: Sixth embodiment]
A sixth embodiment of an information processing device, an information processing method, and a recording medium will be described below. In the following, the sixth embodiment of the information processing device, the information processing method, and the recording medium will be described using an information processing device 6 to which the sixth embodiment of the information processing device, the information processing method, and the recording medium is applied.
[6-1: Configuration of information processing device 6]
The configuration of the information processing device 6 in the sixth embodiment will be described with reference to FIG. 10. FIG. 10 is a block diagram showing the configuration of the information processing device 6 in the sixth embodiment.
As shown in FIG. 10, the information processing device 6 in the sixth embodiment differs from the information processing device 2 in the second embodiment to the information processing device 5 in the fifth embodiment in that a categorized registration data generation unit 618 and a categorized registration data selection unit 619 are realized in the arithmetic device 21. It also differs in that a registration data holding unit 621 is realized in the storage device 22. Also, in the information processing device 6, as in the information processing device 3, the model generation unit 312 generates each of the categorized authentication models CM according to the categories. Other features of the information processing device 6 may be the same as at least one other feature of the information processing device 2 to the information processing device 5. Therefore, hereinafter, the parts that are different from the already described embodiments will be described in detail, and the description of other overlapping parts will be omitted as appropriate.
[6-2: Information Processing Operation Performed by Information Processing Device 6]
The flow of information processing operations performed by the information processing device 6 in the sixth embodiment will be described with reference to FIG. 11. FIG. 11 is a flowchart showing the flow of information processing operations performed by the information processing device 6 in the sixth embodiment. FIG. 11(a) is a flowchart showing the flow of registration data generation operations performed by the information processing device 6, and FIG. 11(b) is a flowchart showing the flow of authentication operations performed by the information processing device 6.
The registration data holding unit 621 in the sixth embodiment holds registered eye images. Specifically, the registration data holding unit 621 holds registration data including registered eye images in which a person's eyes are captured. The registration data may include an eye image of only one eye, or may include eye images of both eyes.
[6-2-1: Registration data generation operation]
As shown in FIG. 11A, the categorized registration data generation unit 618 acquires registration data (step S60). The accessory detection unit 615 detects, from the eye image included in the registration data, the accessory of the person appearing in that eye image (step S61). Based on the detection result of the accessory detection unit 615, the categorized registration data generation unit 618 generates categorized registration data CRD by classifying the registration data according to whether the person appearing in the eye image is wearing an accessory and, if so, which type of accessory the person is wearing (step S62). In other words, the categorized registration data generation unit 618 generates 0th categorized registration data CRD0 consisting of eye images belonging to the 0th accessory class WC0, first categorized registration data CRD1 consisting of eye images belonging to the first accessory class WC1, ..., and Nth categorized registration data CRDN consisting of eye images belonging to the Nth accessory class WCN. The categorized registration data generation unit 618 may generate registration data including the 0th categorized registration data CRD0, the first categorized registration data CRD1, ..., and the Nth categorized registration data CRDN, and cause the registration data holding unit 621 to hold the generated registration data.
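Steps S60 to S62 might be sketched as follows. The dictionary image representation and the placeholder detector are illustrative assumptions, not the disclosed implementation.

```python
# Hypothetical sketch of steps S60-S62: classify the registration data into
# categorized registration data CRD0..CRDN by the detected accessory class.

def detect_accessory_class(eye_image):
    """Placeholder for the accessory detection unit 615 (step S61)."""
    return eye_image["accessory_class"]

def generate_categorized_registration_data(registration_data):
    """Step S62: group registered eye images by accessory class."""
    crd = {}
    for eye_image in registration_data:
        crd.setdefault(detect_accessory_class(eye_image), []).append(eye_image)
    return crd

registration_data = [
    {"person": "alice", "accessory_class": 0},
    {"person": "alice", "accessory_class": 1},   # e.g. wearing glasses
    {"person": "bob", "accessory_class": 0},
]
CRD = generate_categorized_registration_data(registration_data)
```

The resulting mapping holds CRD0 (bare-eye registrations) and CRD1 (glasses registrations) separately, ready for the per-class selection performed at authentication time.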
[6-2-2: Authentication Operation]
As shown in FIG. 11B, the eye image acquisition unit 216 acquires an eye image of the subject (step S24). The accessory detection unit 615 detects the accessory of the subject from the eye image of the subject (step S63). The categorized registration data selection unit 619 selects which of the categorized registration data CRD the authentication unit 613 will use, based on the detection result of the accessory detection unit 615 (step S64). The authentication unit 613 performs iris authentication of the subject using the categorized registration data CRD selected by the categorized registration data selection unit 619 (step S65). That is, in the sixth embodiment, the registration data to be used is switched according to the subject's accessory. The authentication unit 613 may also select the categorized authentication model CM to be used according to the detection result of the accessory.
[6-3: Technical Effects of Information Processing Device 6]
The information processing device 6 in the sixth embodiment classifies the registration data based on whether or not the person appearing in the eye image is wearing an accessory and, if so, what type of accessory the person is wearing, and selects the categorized registration data CRD to be used for authentication based on the subject's accessory, thereby enabling more accurate authentication.
[7: Seventh embodiment]
A seventh embodiment of an information processing device, an information processing method, and a recording medium will be described below. In the following, the seventh embodiment of the information processing device, the information processing method, and the recording medium will be described using an information processing device 7 to which the seventh embodiment of the information processing device, the information processing method, and the recording medium is applied.
[7-1: Configuration of information processing device 7]
The configuration of the information processing device 7 in the seventh embodiment will be described with reference to FIG. 12. FIG. 12 is a block diagram showing the configuration of the information processing device 7 in the seventh embodiment.
As shown in FIG. 12, the information processing device 7 in the seventh embodiment differs from the information processing device 2 in the second embodiment to the information processing device 6 in the sixth embodiment in that the authentication unit 713 has a matching unit 7131, a weighting unit 7132, and a possibility determination unit 7133, and the accessory detection unit 715 has a calculation unit 7151 and a determination unit 7152. Also, in the information processing device 7, as in the information processing device 3, the model generation unit 312 generates each of the categorized authentication models CM according to the categories. Other features of the information processing device 7 may be the same as at least one of the other features of the information processing device 2 to the information processing device 6. Therefore, hereinafter, the parts that are different from the respective embodiments already described will be described in detail, and the explanation of the other overlapping parts will be omitted as appropriate.
[7-2: Information Processing Operation Performed by Information Processing Device 7]
The flow of information processing operations performed by the information processing device 7 in the seventh embodiment will be described with reference to FIG. 13. The information processing operations performed by the information processing device 7 in the seventh embodiment differ in authentication operations from the information processing operations performed by at least one of the information processing device 3 in the third embodiment to the information processing device 6 in the sixth embodiment. FIG. 13 is a flowchart showing the flow of authentication operations performed by the information processing device 7 in the seventh embodiment.
As shown in FIG. 13, the eye image acquisition unit 216 acquires an eye image of the subject (step S24). The calculation unit 7151 calculates the likelihood that the subject is not wearing any accessories, and the likelihood that the subject is wearing each type of accessory (step S70). The determination unit 7152 determines the weight of each accessory class WC according to the likelihood of each accessory class WC (step S71). The determination unit 7152 may use the likelihood value as the weight as is.
The matching unit 7131 performs iris authentication of the subject using each of the categorized authentication models CM (step S72). The weighting unit 7132 weights each authentication result using the weight of the accessory class WC corresponding to the categorized authentication model CM (step S73). For example, the weighting unit 7132 may calculate a weighted average score by taking the weighted average, with weights according to the likelihoods, of the scores output by the respective categorized authentication models CM. The possibility determination unit 7133 determines whether or not the subject's iris authentication succeeds, based on each of the weighted authentication results (step S74).
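Steps S71 to S74 may be sketched as the following weighted-average decision, where, as the text permits, the likelihood values themselves serve as the weights. The concrete likelihoods, scores, and threshold are illustrative.

```python
# Hypothetical sketch of steps S71-S74: weight each categorized model's
# matching score by the likelihood of the corresponding accessory class WC,
# then decide authentication from the weighted average score.

def weighted_average_decision(likelihoods, scores, threshold):
    """likelihoods/scores: {accessory_class: value}; returns (ok, score)."""
    total = sum(likelihoods.values())
    weighted_avg = sum(likelihoods[wc] * scores[wc] for wc in scores) / total
    return weighted_avg >= threshold, weighted_avg

likelihoods = {0: 0.1, 1: 0.9}   # step S70: bare eye unlikely, glasses likely
scores = {0: 0.2, 1: 0.8}        # step S72: per-model matching scores
ok, avg = weighted_average_decision(likelihoods, scores, threshold=0.5)
```

Because the glasses class dominates the likelihoods, the glasses model's high score dominates the fused result, which is the intended effect of likelihood weighting.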
[7-3: Technical Effects of Information Processing Device 7]
The information processing device 7 in the seventh embodiment determines whether or not to authenticate the subject based on each of the authentication results weighted according to the likelihood of each accessory, and therefore can perform authentication with high accuracy.
[8: Eighth embodiment]
The eighth embodiment of the information processing device, information processing method, and recording medium will be described below. The eighth embodiment of the information processing device, information processing method, and recording medium will be described below using an information processing device 8 to which the eighth embodiment of the information processing device, information processing method, and recording medium is applied.
FIG. 14 is a block diagram showing the configuration of an information processing device 8 according to the eighth embodiment. The information processing device 8 according to the eighth embodiment differs from the information processing device 2 according to the second embodiment to the information processing device 7 according to the seventh embodiment in the operation of a model generation unit 812 and the operation of an authentication unit 813.
[8-1: Information Processing Operation Performed by Information Processing Device 8]
The authentication model has a function of outputting, when an eye image is input, the features of that eye image. The model generation unit 812 generates the authentication model so that, when an eye image of a person is input, it outputs features that are similar, to a predetermined degree or more, to the features output when an eye image of the same person not wearing any accessory is input, regardless of whether the person captured in the eye image is wearing an accessory. In other words, the model generation unit 812 generates the authentication model so that the features output when an eye image with an arbitrary accessory is input and the features output when an eye image without any accessory is input are similar to a predetermined degree or more.
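The training objective described above, that features of an accessory-wearing eye image should stay close to those of the same person's accessory-free eye image, could be expressed, purely as an illustrative assumption, with a cosine-similarity penalty:

```python
# Hypothetical sketch of the eighth embodiment's training objective: the
# feature vector extracted from an eye image with an accessory is penalized
# for deviating from the feature vector of the same person's accessory-free
# eye image. The loss form (1 - cosine similarity) is an assumption.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def accessory_invariance_loss(feat_with_accessory, feat_without_accessory):
    """Near 0 when the two features point the same way; up to 2 otherwise."""
    return 1.0 - cosine_similarity(feat_with_accessory, feat_without_accessory)

feat_bare = [0.6, 0.8]
loss_aligned = accessory_invariance_loss([0.6, 0.8], feat_bare)     # same direction
loss_orthogonal = accessory_invariance_loss([0.8, -0.6], feat_bare) # orthogonal
```

Minimizing such a loss during training would push accessory-wearing features toward the accessory-free ones, which is what allows the eighth embodiment to match against accessory-free registration data alone.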
The authentication unit 813 in the eighth embodiment performs iris authentication using, as the registration data, an eye image without any accessory. Unlike in the sixth embodiment described above, the authentication unit 813 in the eighth embodiment can extract, even from an eye image with an accessory, features similar to those of an accessory-free eye image, so the authentication unit 813 can perform iris authentication if only accessory-free registration data is prepared. That is, in the eighth embodiment, it is sufficient to register only an eye image without any accessory as the registration data.
[8-2: Technical Effects of Information Processing Device 8]
The information processing device 8 in the eighth embodiment can perform authentication as long as an eye image captured when no accessory is worn has been registered.
[9: Ninth embodiment]
A ninth embodiment of an information processing device, an information processing method, and a recording medium will be described below. In the following, the ninth embodiment of the information processing device, the information processing method, and a recording medium will be described using an information processing device 9 to which the ninth embodiment of the information processing device, the information processing method, and a recording medium is applied.
[9-1: Eye Surrounding Authentication]
In the ninth embodiment, eye surrounding authentication is performed in addition to, or instead of, iris authentication. Eye surrounding authentication may be performed by focusing on the area around the eye, extracting features from the area around the eye, and matching the extraction result against registered data of the area around the eye. The features of the area around the eye may include, for example, the positions and shapes of the outer and inner corners of the eye and the shape of the eyelid.
[9-2: Information Processing Operation Performed by Information Processing Device 9]
The flow of information processing operations performed by the information processing device 9 in the ninth embodiment will be described with reference to Fig. 16. Fig. 16 is a flowchart showing the flow of information processing operations performed by the information processing device 9 in the ninth embodiment.
[9-2-1: Learning Operation]
As shown in FIG. 16(a), the learning data acquisition unit 214 acquires first learning data including an eye image showing a person's eyes (step S20). In the ninth embodiment, the first learning data includes an eye surrounding image including the area around the eye. Note that the first learning data used in the ninth embodiment may be the same learning data as the first learning data used in the second to eighth embodiments.
The covering object detection unit 915 detects, from the eye surrounding image included in the first learning data, a covering that covers the area around the eye (step S91). The covering object detection unit 915 may detect, from the eye surrounding image included in the first learning data, an accessory of the person captured in the eye surrounding image. The covering object detection unit 915 may also detect a covering other than an accessory of the person captured in the eye surrounding image. Coverings other than accessories may include, for example, items that cover the area around the eye, such as makeup. The covering object detection unit 915 may detect, from the eye surrounding image, factors that affect the features of the area around the eye. The covering object detection unit 915 may detect the state of the area around the eye, such as occlusion or distortion caused by the subject's physical condition. The covering object detection unit 915 may detect into which covering class CC, the classes being defined by the influence on the area around the eye included in the image, the eye surrounding image falls. In addition, the covering object detection unit 915 does not have to detect, as a covering, an object that affects the iris but does not affect the features of the area around the eye. That is, the detection operation by the accessory detection unit 215 and the detection operation by the covering object detection unit 915 may be different.
The learning data generation unit 911 generates third learning data from the first learning data based on the detection result of the covering object detection unit 915 (step S92). In this embodiment, the third learning data may be learning data generated for learning the features of the area around the eye. The third learning data may be learning data having desired characteristics, related to coverings, of the area around the eye. In particular, the third learning data may be learning data generated for learning the influence on the area around the eye when the area around the eye is covered by a covering. The learning data generation unit 911 may generate the third learning data by adjusting the number of images belonging to each covering class, for example as described in the second embodiment. Alternatively, the learning data generation unit 911 may generate the third learning data including categorized learning data classified by covering, for example as described in the third embodiment.
The model generation unit 912 generates an eye surrounding authentication model using the third learning data (step S93). The model generation unit 912 may generate an eye surrounding authentication model that handles any covering, for example as described in the second embodiment. Alternatively, the model generation unit 912 may generate an eye surrounding authentication model specialized for each covering, for example as described in the third embodiment.
[9-2-2: Authentication Operation]
As shown in FIG. 16B, the eye surrounding image acquisition unit 916 acquires the eye surrounding image of the subject (step S94). The authentication unit 913 performs iris authentication of the subject using the iris authentication model and the iris area included in the eye surrounding image of the subject (step S95). The authentication unit 913 performs eye surrounding authentication of the subject using the eye surrounding authentication model and the eye surrounding image of the subject (step S96). That is, the authentication unit 913 performs two-factor authentication by performing eye surrounding authentication of the subject using the eye surrounding authentication model in addition to iris authentication of the subject using the iris authentication model. The authentication unit 913 may calculate a first score using the iris authentication model, calculate a second score using the eye surrounding authentication model, and authenticate the subject using the first score and the second score.
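The two-factor decision using the first and second scores can be sketched as follows. This is only an illustration: the embodiment states that both scores are used, but does not specify a fusion rule, so the weighted sum, the weight values, the threshold, and the function name here are all assumptions.

```python
def two_factor_authenticate(iris_score, periphery_score,
                            w_iris=0.6, w_periphery=0.4, threshold=0.5):
    """Fuse a first (iris) score and a second (eye-periphery) score.

    A weighted sum is only one possible fusion rule; the weights and
    threshold are illustrative assumptions, not values from the disclosure.
    Returns (accepted, fused_score).
    """
    fused = w_iris * iris_score + w_periphery * periphery_score
    return fused >= threshold, fused
```

With these assumed weights, a strong iris match can compensate for a weaker eye-periphery match and vice versa, which is the usual motivation for score-level fusion.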
[9-3: Technical Effects of Information Processing Device 9]
対象者の目の周辺が、登録データを登録した際から変化している場合、認証の精度に影響を与える可能性がある。第9実施形態における情報処理装置9は、目の周辺に覆い隠された領域がある場合等、目の周辺の状態に変化が生じている場合にも、精度よく目周辺認証を行うべく、目の周辺を学習する。情報処理装置9は、目の周辺に覆い隠された領域があった場合にも、精度よく目周辺認証をすることができる。また、情報処理装置9は、二要素認証をすることができるので、より精度よく対象者を認証することができる。
[10:付記]
If the area around the eye of the subject has changed since the registration data was registered, this may affect the accuracy of authentication. The information processing device 9 in the ninth embodiment learns the area around the eye in order to perform eye area authentication with high accuracy even when the condition of the area around the eye has changed, such as when there is an area around the eye that is covered. The information processing device 9 can perform eye area authentication with high accuracy even when there is an area around the eye that is covered. In addition, the information processing device 9 can perform two-factor authentication, and therefore can authenticate the subject with higher accuracy.
[10: Supplementary Note]
以上説明した実施形態に関して、更に以下の付記を開示する。
[付記1]
対象の目が写る目画像を含む第1の学習データと、当該第1の学習データに含まれる目画像に写る対象の装着物に関する情報とに基づいて、第2の学習データを生成する学習データ生成手段と、
前記第2の学習データを用いて認証モデルを生成するモデル生成手段と、
前記認証モデルを用いて、対象者の認証を行う認証手段と
を備える情報処理装置。
[付記2]
前記第1の学習データを取得する学習データ取得手段と、
前記第1の学習データに含まれる目画像から、当該目画像に写る対象の装着物を検出する装着物検出手段と
を更に備え、
前記学習データ生成手段は、前記装着物検出手段による検出結果に応じて、前記第1の学習データから前記第2の学習データを生成する
付記1に記載の情報処理装置。
[付記3]
前記学習データ生成手段は、前記装着物検出手段による検出結果に基づいて、前記第1の学習データを、前記目画像に写る対象が装着物を装着しているか否か、及び、装着している場合に何れの種類の装着物を装着しているかにより分類した分類学習データを含む前記第2の学習データを生成する
付記2に記載の情報処理装置。
[付記4]
前記モデル生成手段は、前記分類学習データの各々を用いて、前記分類に応じた分類認証モデルの各々を生成する
付記3に記載の情報処理装置。
[付記5]
前記装着物検出手段は、前記対象者の目画像から、当該対象者の装着物を検出し、
前記装着物検出手段による検出結果に基づいて、前記認証手段が前記分類認証モデルの各々の何れを用いるかを選択する認証モデル選択手段を更に備える
付記4に記載の情報処理装置。
[付記6]
前記認証手段は、
前記分類認証モデルの各々を用いて前記対象者の認証を行い、
前記分類認証モデルの各々による認証結果の各々に基づいて、前記対象者の認証の可否を判定する
付記4に記載の情報処理装置。
[付記7]
対象の目が写る目画像を含む登録データに含まれる目画像から、前記装着物検出手段が検出した当該目画像に写る対象の装着物に基づいて、前記登録データを、前記目画像に写る対象が装着物を装着しているか否か、及び、装着している場合に何れの種類の装着物を装着しているかにより分類した分類登録データを生成する分類手段と、
前記装着物検出手段が前記対象者の目画像から検出した、当該対象者の装着物に基づいて、前記認証手段が前記分類登録データの各々の何れを用いるかを選択する登録データ選択手段とを更に備える
付記2に記載の情報処理装置。
[付記8]
前記装着物検出手段は、前記対象者が装着物を装着していない尤度、及び、前記対象者が装着物を装着している場合の各々の種類の装着物を装着している尤度を算出し、当該尤度に応じて前記分類の重みの各々を算出する算出手段を有し、
前記認証手段は、
前記分類認証モデルの各々を用いて前記対象者の認証を行い、
前記分類認証モデルに対応する前記分類の重みを用いて、各々の認証結果に重み付けをし、
当該重み付けした認証結果の各々に基づいて、前記対象者の認証の可否を判定する
付記6に記載の情報処理装置。
[付記9]
前記認証モデルは、前記目画像が入力されると、当該目画像の特徴を出力する機能を有し、
前記モデル生成手段は、同一の対象の目画像が入力された場合、当該目画像に写る対象が装着物を装着しているか否かに関わらず、装着物を装着していない場合の当該対象の目画像が入力された場合に出力される特徴と所定以上類似する特徴を出力するように前記認証モデルを生成する
付記1又は2に記載の情報処理装置。
[付記10]
前記第1の学習データは、目の周辺を含む目周辺画像を含み、
前記装着物検出手段は、前記第1の学習データに含まれる前記目周辺画像から、当該目周辺画像に写る対象の装着物を検出し、
前記学習データ生成手段は、前記装着物検出手段による検出結果に基づいて、前記第1の学習データから第3の学習データを生成し、
前記モデル生成手段は、前記第3の学習データを用いて、目周辺認証モデルを生成する
付記2に記載の情報処理装置。
[付記11]
前記認証モデルを用いた認証は虹彩認証であり、
前記認証手段は、前記認証モデルを用いた前記対象者の虹彩認証に加え、前記目周辺認証モデルを用いた前記対象者の目周辺認証を行う
付記10に記載の情報処理装置。
[付記12]
対象の目が写る目画像を含む第1の学習データと、当該第1の学習データに含まれる目画像に写る対象の装着物に関する情報とに基づいて、第2の学習データを生成し、
前記第2の学習データを用いて認証モデルを生成し、
前記認証モデルを用いて、対象者の認証を行う
情報処理方法。
[付記13]
コンピュータに、
対象の目が写る目画像を含む第1の学習データと、当該第1の学習データに含まれる目画像に写る対象の装着物に関する情報とに基づいて、第2の学習データを生成し、
前記第2の学習データを用いて認証モデルを生成し、
前記認証モデルを用いて、対象者の認証を行う
情報処理方法を実行させるためのコンピュータプログラムが記録されている記録媒体。
The following supplementary notes are further disclosed regarding the above-described embodiment.
[Appendix 1]
a learning data generating means for generating second learning data based on first learning data including an eye image showing the eye of a target and information on an attachment of the target shown in the eye image included in the first learning data;
a model generating means for generating an authentication model using the second learning data;
and an authentication means for authenticating a target person using the authentication model.
[Appendix 2]
A learning data acquisition means for acquiring the first learning data;
and an attachment detection means for detecting an attachment of a target appearing in an eye image included in the first learning data,
The information processing device according to Appendix 1, wherein the learning data generating means generates the second learning data from the first learning data in accordance with a detection result by the attachment detecting means.
[Appendix 3]
The information processing device according to Appendix 2, wherein the learning data generation means generates the second learning data including classified learning data obtained by classifying the first learning data, based on a detection result by the accessory detection means, according to whether or not the subject shown in the eye image is wearing an accessory and, if so, what type of accessory the subject is wearing.
[Appendix 4]
The information processing device according to Appendix 3, wherein the model generating means generates each of the categorized authentication models according to the categories by using each of the categorized learning data.
[Appendix 5]
The attachment detection means detects an attachment of the subject from the eye image of the subject,
The information processing device according to Appendix 4, further comprising an authentication model selection unit that selects which of the categorized authentication models the authentication unit is to use based on a detection result by the attachment detection unit.
[Appendix 6]
The authentication means includes:
authenticating the subject using each of the classification authentication models;
determining whether or not the subject can be authenticated based on each of the authentication results obtained by each of the classification authentication models. The information processing device according to Appendix 4.
[Appendix 7]
a classification means for generating classified registration data based on an accessory of the subject captured in an eye image detected by the accessory detection means from an eye image included in registration data including the eye image showing the subject's eyes, by classifying the registration data according to whether the subject captured in the eye image is wearing an accessory and, if so, what type of accessory the subject is wearing;
The information processing device described in Appendix 2, further comprising a registration data selection means for selecting which of the categorized registration data the authentication means will use based on the attachment of the subject detected by the attachment detection means from the eye image of the subject.
[Appendix 8]
the attachment detection means includes a calculation means for calculating a likelihood that the subject is not wearing an attachment and a likelihood that the subject is wearing each type of attachment when the subject is wearing an attachment, and calculating each of the weights of the classifications in accordance with the likelihoods;
The authentication means includes:
authenticating the subject using each of the classification authentication models;
weighting each authentication result using the category weights corresponding to the category authentication model;
determining whether or not to authenticate the subject based on each of the weighted authentication results. The information processing device according to Appendix 6.
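The likelihood-weighted decision of Appendix 8 can be sketched as follows. The note states only that each classification authentication model's result is weighted by the corresponding classification weight; the normalisation of the likelihoods, the threshold, and the names below are assumptions for illustration.

```python
def weighted_authentication(likelihoods, scores, threshold=0.5):
    """Weight each classification model's result by its covering likelihood.

    likelihoods: mapping covering -> likelihood that the subject wears it
        (including an entry such as "none" for no covering), as produced by
        the calculation means; normalised here so the weights sum to 1.
    scores: mapping covering -> matching score from that covering's
        classification authentication model.
    Returns (accepted, fused_score).
    """
    total = sum(likelihoods.values())
    weights = {k: v / total for k, v in likelihoods.items()}
    fused = sum(weights[k] * scores[k] for k in scores)
    return fused >= threshold, fused
```

The effect is that the model matching the subject's most likely covering dominates the final decision, while the other models still contribute in proportion to their likelihoods.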
[Appendix 9]
the authentication model has a function of outputting features of the eye image when the eye image is input;
The information processing device according to Appendix 1 or 2, wherein the model generation means generates the authentication model so that, when an eye image of a given subject is input, the authentication model outputs features that are at least a predetermined degree similar to the features output when an eye image of that subject not wearing any accessory is input, regardless of whether the subject in the input eye image is wearing an accessory.
[Appendix 10]
The first learning data includes an eye surrounding image including an area around the eye,
The attachment detection means detects an attachment of a target appearing in the eye-periphery image from the eye-periphery image included in the first learning data,
the learning data generating means generates third learning data from the first learning data based on a detection result by the attachment detecting means;
The information processing device according to Appendix 2, wherein the model generating means generates an eye-periphery authentication model by using the third learning data.
[Appendix 11]
The authentication using the authentication model is iris authentication,
The information processing device according to Appendix 10, wherein the authentication means performs eye-periphery authentication of the subject using the eye-periphery authentication model in addition to iris authentication of the subject using the authentication model.
[Appendix 12]
generating second learning data based on first learning data including an eye image showing the eye of the target and information on an attachment of the target shown in the eye image included in the first learning data;
generating an authentication model using the second learning data;
An information processing method for authenticating a subject using the authentication model.
[Appendix 13]
On the computer,
generating second learning data based on first learning data including an eye image showing the eye of the target and information on an attachment of the target shown in the eye image included in the first learning data;
generating an authentication model using the second learning data;
A recording medium having a computer program recorded thereon for executing an information processing method for authenticating a subject by using the authentication model.
この開示は、請求の範囲及び明細書全体から読み取ることのできる技術的思想に反しない範囲で適宜変更可能である。そのような変更を伴う情報処理装置、情報処理方法、及び記録媒体もまた、この開示の技術的思想に含まれる。 This disclosure may be modified as appropriate to the extent that such modifications do not depart from the technical idea that can be read from the claims and the entire specification. Information processing devices, information processing methods, and recording media that incorporate such modifications are also included in the technical idea of this disclosure.
1,2,3,4,5,6,7,8,9 情報処理装置
11,211,311,911 学習データ生成部
12,212,312,812,912 モデル生成部
13,213,313,413,513,613,713,813,913 認証部
214 学習データ取得部
215,415,615,715 装着物検出部
216 目画像取得部
417 認証モデル選択部
618 分類登録データ生成部
619 分類登録データ選択部
621 登録データ保持部
7131 照合部
7132 重み付け部
7133 可否判定部
7151 算出部
7152 決定部
915 被覆物検出部
916 目周辺画像取得部
1, 2, 3, 4, 5, 6, 7, 8, 9 Information processing device 11, 211, 311, 911 Learning data generation unit 12, 212, 312, 812, 912 Model generation unit 13, 213, 313, 413, 513, 613, 713, 813, 913 Authentication unit 214 Learning data acquisition unit 215, 415, 615, 715 Wearing object detection unit 216 Eye image acquisition unit 417 Authentication model selection unit 618 Classification registration data generation unit 619 Classification registration data selection unit 621 Registration data storage unit 7131 Collation unit 7132 Weighting unit 7133 Possibility determination unit 7151 Calculation unit 7152 Decision unit 915 Covering object detection unit 916 Eye surroundings image acquisition unit
Claims (13)
対象の目が写る目画像を含む第1の学習データと、当該第1の学習データに含まれる目画像に写る対象の装着物に関する情報とに基づいて、第2の学習データを生成する学習データ生成手段と、
前記第2の学習データを用いて認証モデルを生成するモデル生成手段と、
前記認証モデルを用いて、対象者の認証を行う認証手段と
を備える情報処理装置。 An information processing device comprising: a learning data generating means for generating second learning data based on first learning data including an eye image showing the eye of a target and information on an attachment of the target shown in the eye image included in the first learning data; a model generating means for generating an authentication model using the second learning data; and an authentication means for authenticating a target person using the authentication model.
前記第1の学習データを取得する学習データ取得手段と、
前記第1の学習データに含まれる目画像から、当該目画像に写る対象の装着物を検出する装着物検出手段と
を更に備え、
前記学習データ生成手段は、前記装着物検出手段による検出結果に応じて、前記第1の学習データから前記第2の学習データを生成する
請求項1に記載の情報処理装置。 The information processing apparatus according to claim 1, further comprising: a learning data acquisition means for acquiring the first learning data; and an attachment detection means for detecting an attachment of a target appearing in an eye image included in the first learning data, wherein the learning data generating means generates the second learning data from the first learning data in accordance with a detection result by the attachment detecting means.
前記学習データ生成手段は、前記装着物検出手段による検出結果に基づいて、前記第1の学習データを、前記目画像に写る対象が装着物を装着しているか否か、及び、装着している場合に何れの種類の装着物を装着しているかにより分類した分類学習データを含む前記第2の学習データを生成する
請求項2に記載の情報処理装置。 The information processing device according to claim 2, wherein the learning data generation means generates the second learning data including classified learning data obtained by classifying the first learning data, based on a detection result by the accessory detection means, according to whether or not the subject shown in the eye image is wearing an accessory and, if so, what type of accessory the subject is wearing.
前記モデル生成手段は、前記分類学習データの各々を用いて、前記分類に応じた分類認証モデルの各々を生成する
請求項3に記載の情報処理装置。 The information processing apparatus according to claim 3, wherein the model generating means generates each of the categorized authentication models according to the categories by using each of the categorized learning data.
前記装着物検出手段は、前記対象者の目画像から、当該対象者の装着物を検出し、
前記装着物検出手段による検出結果に基づいて、前記認証手段が前記分類認証モデルの各々の何れを用いるかを選択する認証モデル選択手段を更に備える
請求項4に記載の情報処理装置。 The information processing apparatus according to claim 4, wherein the attachment detection means detects an attachment of the subject from the eye image of the subject, the apparatus further comprising an authentication model selection unit that selects which of the categorized authentication models is to be used by the authentication unit based on a detection result by the attachment detection unit.
前記認証手段は、
前記分類認証モデルの各々を用いて前記対象者の認証を行い、
前記分類認証モデルの各々による認証結果の各々に基づいて、前記対象者の認証の可否を判定する
請求項4に記載の情報処理装置。 The information processing apparatus according to claim 4, wherein the authentication means authenticates the subject using each of the classification authentication models, and determines whether or not the subject can be authenticated based on each of the authentication results obtained by each of the classification authentication models.
対象の目が写る目画像を含む登録データに含まれる目画像から、前記装着物検出手段が検出した当該目画像に写る対象の装着物に基づいて、前記登録データを、前記目画像に写る対象が装着物を装着しているか否か、及び、装着している場合に何れの種類の装着物を装着しているかにより分類した分類登録データを生成する分類手段と、
前記装着物検出手段が前記対象者の目画像から検出した、当該対象者の装着物に基づいて、前記認証手段が前記分類登録データの各々の何れを用いるかを選択する登録データ選択手段とを更に備える
請求項2に記載の情報処理装置。 The information processing device according to claim 2, further comprising: a classification means for generating classified registration data by classifying registration data including an eye image showing the subject's eyes, based on the attachment of the subject detected by the attachment detection means from the eye image included in the registration data, according to whether the subject in the eye image is wearing an attachment and, if so, what type of attachment; and a registration data selection means for selecting which of the classified registration data the authentication means will use, based on the attachment of the subject detected by the attachment detection means from the eye image of the subject.
前記装着物検出手段は、前記対象者が装着物を装着していない尤度、及び、前記対象者が装着物を装着している場合の各々の種類の装着物を装着している尤度を算出し、当該尤度に応じて前記分類の重みの各々を算出する算出手段を有し、
前記認証手段は、
前記分類認証モデルの各々を用いて前記対象者の認証を行い、
前記分類認証モデルに対応する前記分類の重みを用いて、各々の認証結果に重み付けをし、
当該重み付けした認証結果の各々に基づいて、前記対象者の認証の可否を判定する
請求項6に記載の情報処理装置。 The information processing apparatus according to claim 6, wherein the attachment detection means includes a calculation means for calculating a likelihood that the subject is not wearing an attachment and a likelihood of the subject wearing each type of attachment, and calculating each of the classification weights in accordance with the likelihoods; and the authentication means authenticates the subject using each of the classification authentication models, weights each authentication result using the classification weight corresponding to the classification authentication model, and determines whether or not to authenticate the subject based on each of the weighted authentication results.
前記認証モデルは、前記目画像が入力されると、当該目画像の特徴を出力する機能を有し、
前記モデル生成手段は、同一の対象の目画像が入力された場合、当該目画像に写る対象が装着物を装着しているか否かに関わらず、装着物を装着していない場合の当該対象の目画像が入力された場合に出力される特徴と所定以上類似する特徴を出力するように前記認証モデルを生成する
請求項1又は2に記載の情報処理装置。 The information processing device according to claim 1 or 2, wherein the authentication model has a function of outputting features of the eye image when the eye image is input, and the model generation means generates the authentication model so that, when an eye image of a given subject is input, the authentication model outputs features that are at least a predetermined degree similar to the features output when an eye image of that subject not wearing any attachment is input, regardless of whether the subject in the input eye image is wearing an attachment.
前記第1の学習データは、目の周辺を含む目周辺画像を含み、
前記装着物検出手段は、前記第1の学習データに含まれる前記目周辺画像から、当該目周辺画像に写る対象の装着物を検出し、
前記学習データ生成手段は、前記装着物検出手段による検出結果に基づいて、前記第1の学習データから第3の学習データを生成し、
前記モデル生成手段は、前記第3の学習データを用いて、目周辺認証モデルを生成する
請求項2に記載の情報処理装置。 The information processing apparatus according to claim 2, wherein the first learning data includes an eye-periphery image including an area around the eye, the attachment detection means detects an attachment of a target appearing in the eye-periphery image from the eye-periphery image included in the first learning data, the learning data generating means generates third learning data from the first learning data based on a detection result by the attachment detecting means, and the model generating means generates an eye-periphery authentication model by using the third learning data.
前記認証モデルを用いた認証は虹彩認証であり、
前記認証手段は、前記認証モデルを用いた前記対象者の虹彩認証に加え、前記目周辺認証モデルを用いた前記対象者の目周辺認証を行う
請求項10に記載の情報処理装置。 The information processing apparatus according to claim 10, wherein the authentication using the authentication model is iris authentication, and the authentication means performs eye-periphery authentication of the subject using the eye-periphery authentication model in addition to iris authentication of the subject using the authentication model.
対象の目が写る目画像を含む第1の学習データと、当該第1の学習データに含まれる目画像に写る対象の装着物に関する情報とに基づいて、第2の学習データを生成し、
前記第2の学習データを用いて認証モデルを生成し、
前記認証モデルを用いて、対象者の認証を行う
情報処理方法。 An information processing method comprising: generating second learning data based on first learning data including an eye image showing the eye of a target and information on an attachment of the target shown in the eye image included in the first learning data; generating an authentication model using the second learning data; and authenticating a subject using the authentication model.
コンピュータに、
対象の目が写る目画像を含む第1の学習データと、当該第1の学習データに含まれる目画像に写る対象の装着物に関する情報とに基づいて、第2の学習データを生成し、
前記第2の学習データを用いて認証モデルを生成し、
前記認証モデルを用いて、対象者の認証を行う
情報処理方法を実行させるためのコンピュータプログラムが記録されている記録媒体。 A recording medium on which is recorded a computer program for causing a computer to execute an information processing method comprising: generating second learning data based on first learning data including an eye image showing the eye of a target and information on an attachment of the target shown in the eye image included in the first learning data; generating an authentication model using the second learning data; and authenticating a subject using the authentication model.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2025508022A JPWO2024195055A5 (en) | 2023-03-22 | Information processing device, information processing method, and computer program | |
| PCT/JP2023/011271 WO2024195055A1 (en) | 2023-03-22 | 2023-03-22 | Information processing device, information processing method, and recording medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/JP2023/011271 WO2024195055A1 (en) | 2023-03-22 | 2023-03-22 | Information processing device, information processing method, and recording medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024195055A1 true WO2024195055A1 (en) | 2024-09-26 |
Family
ID=92841448
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2023/011271 Pending WO2024195055A1 (en) | 2023-03-22 | 2023-03-22 | Information processing device, information processing method, and recording medium |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024195055A1 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020179378A1 (en) * | 2019-03-04 | 2020-09-10 | 日本電気株式会社 | Information processing system, information processing method, and recording medium |
| JP2021157497A (en) * | 2020-03-27 | 2021-10-07 | キッセイコムテック株式会社 | Data cleansing method, data cleansing program, and data cleansing device |
| WO2021235061A1 (en) * | 2020-05-21 | 2021-11-25 | 株式会社Ihi | Image classification device, image classification method, and image classification program |
| WO2022195819A1 (en) * | 2021-03-18 | 2022-09-22 | 日本電気株式会社 | Feature quantity conversion learning device, authentication device, feature quantity conversion learning method, authentication method, and recording medium |
| JP2022182960A (en) * | 2021-05-26 | 2022-12-08 | キヤノン株式会社 | Image processing device, image processing method and program |
| JP2023025914A (en) * | 2021-08-11 | 2023-02-24 | キヤノン株式会社 | Face authentication device, face authentication method, and computer program |
-
2023
- 2023-03-22 WO PCT/JP2023/011271 patent/WO2024195055A1/en active Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2020179378A1 (en) * | 2019-03-04 | 2020-09-10 | 日本電気株式会社 | Information processing system, information processing method, and recording medium |
| JP2021157497A (en) * | 2020-03-27 | 2021-10-07 | キッセイコムテック株式会社 | Data cleansing method, data cleansing program, and data cleansing device |
| WO2021235061A1 (en) * | 2020-05-21 | 2021-11-25 | 株式会社Ihi | Image classification device, image classification method, and image classification program |
| WO2022195819A1 (en) * | 2021-03-18 | 2022-09-22 | 日本電気株式会社 | Feature quantity conversion learning device, authentication device, feature quantity conversion learning method, authentication method, and recording medium |
| JP2022182960A (en) * | 2021-05-26 | 2022-12-08 | キヤノン株式会社 | Image processing device, image processing method and program |
| JP2023025914A (en) * | 2021-08-11 | 2023-02-24 | キヤノン株式会社 | Face authentication device, face authentication method, and computer program |
Non-Patent Citations (2)
| Title |
|---|
| JUNJEA KAPIL: "WSF-RBF based mining model to identify eye-Glasses worn people from face-images pool", 2015 THIRD INTERNATIONAL CONFERENCE ON IMAGE INFORMATION PROCESSING (ICIIP), IEEE, 21 December 2015 (2015-12-21), pages 462 - 467, XP032870031, DOI: 10.1109/ICIIP.2015.7414817 * |
| MATSUO KENJI, HASHIMOTO MASAYUKI, KOIKE ATSUSHI: "Face Similarity Calculation between Other Persons for Similar Face Retrieval", IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, vol. J92-D, no. 8, 1 August 2009 (2009-08-01), pages 1383 - 1392, XP093212361 * |
Also Published As
| Publication number | Publication date |
|---|---|
| JPWO2024195055A1 (en) | 2024-09-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6449516B2 (en) | Image and feature quality for ocular blood vessel and face recognition, image enhancement and feature extraction, and fusion of ocular blood vessels with facial and / or sub-facial regions for biometric systems | |
| Peixoto et al. | Face liveness detection under bad illumination conditions | |
| KR20220150868A (en) | Method of motion vector and feature vector based fake face detection and apparatus for the same | |
| Rathgeb et al. | Differential detection of facial retouching: A multi-biometric approach | |
| JP5795443B2 (en) | Method, apparatus, and computer-readable recording medium for detecting the location of facial feature points using an Adaboost learning algorithm | |
| CN105320950A (en) | A video human face living body detection method | |
| CN113327212B (en) | Face driving method, face driving model training device, electronic equipment and storage medium | |
| De Silva et al. | Cloud basis function neural network: a modified RBF network architecture for holistic facial expression recognition | |
| Samatha et al. | Securesense: Enhancing person verification through multimodal biometrics for robust authentication | |
| JP2015094973A (en) | Image processing apparatus, image processing method, image processing program, and recording medium | |
| Mohammad et al. | Towards ethnicity detection using learning based classifiers | |
| Sarode et al. | Review of iris recognition: an evolving biometrics identification technology | |
| JP2011133977A (en) | Image processor, image processing method, and program | |
| WO2024195055A1 (en) | Information processing device, information processing method, and recording medium | |
| JP7552698B2 (en) | Image processing device, image processing method, and recording medium | |
| Travieso et al. | Using a Discrete Hidden Markov Model Kernel for lip-based biometric identification | |
| JP7679904B2 (en) | Information processing device, information processing method, and recording medium | |
| CN118171261B (en) | Security password verification method, device, equipment, medium and computer product thereof | |
| Kumar et al. | Iris recognition system in the context of authentication | |
| JP4739087B2 (en) | Red-eye correction device, red-eye correction method, and red-eye correction program | |
| Gofman et al. | Quality-based score-level fusion for secure and robust multimodal biometrics-based authentication on consumer mobile devices | |
| WO2024100891A1 (en) | Information processing device, information processing method, and recording medium | |
| WO2025062645A1 (en) | Information processing system, information processing method, and recording medium | |
| Dandotiya et al. | Intelligent Indian Twins Identification by MNIST Dataset Using Machine Learning | |
| WO2025062630A1 (en) | Information processing device, information processing system, information processing method, and recording medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23928635 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2025508022 Country of ref document: JP Kind code of ref document: A |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2025508022 Country of ref document: JP |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |