
WO2012020591A1 - Individual identification system, feature value specifying device, feature specifying method, and recording medium - Google Patents

Individual identification system, feature value specifying device, feature specifying method, and recording medium

Info

Publication number
WO2012020591A1
Authority
WO
WIPO (PCT)
Prior art keywords
individual
information
feature
feature amount
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2011/062313
Other languages
English (en)
Japanese (ja)
Inventor
昭裕 早坂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to JP2012528607A priority Critical patent/JPWO2012020591A1/ja
Publication of WO2012020591A1 publication Critical patent/WO2012020591A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/179 Human faces, e.g. facial parts, sketches or expressions; metadata assisted face recognition

Definitions

  • the present invention relates to an individual identification system, a feature quantity specifying device, a feature quantity specifying method, and a recording medium, and in particular to an individual identification system, a feature quantity specifying device, a feature quantity specifying method, and a recording medium for specifying a feature quantity for each of a plurality of individuals to be identified.
  • Patent Document 2 describes a technique for simultaneously authenticating a plurality of individuals.
  • the authentication system described in Patent Document 2 includes a weight measurement floor configured from a plurality of weight measurement units, a weight sensor unit, a controller, and an information processing device. When a plurality of authentication subjects pass over the weight measurement floor of this authentication system, each weight sensor unit measures their weight.
  • Patent Document 3 describes a technique for associating each of a plurality of feature amounts output from a plurality of individuals with an individual that has output each feature amount.
  • the conference system described in Patent Document 3 includes a plurality of microphones, voice recognition means, position specifying means, association means, and synthesis means.
  • the voice recognition means recognizes the voice input to each microphone.
  • the position specifying means specifies the position of the speaker in the captured image.
  • even when the acquired facial feature quantity and voice feature quantity are obtained from different persons, the personal recognition system described in Patent Document 1 determines that they are information obtained from the same person. That is, the personal recognition system described in Patent Document 1 results in erroneous identification.
  • the authentication system described in Patent Document 2 performs authentication based on a single type of feature amount. Therefore, when a plurality of feature amounts are collected, it is difficult for the authentication system described in Patent Document 2 to specify an appropriate feature amount to use in individual authentication for each of a plurality of authentication targets.
  • the conference system described in Patent Document 3 described above associates each feature quantity based on the position where the feature quantity obtained from the image information is detected and the position where the feature quantity obtained from the audio information is detected.
  • the conference system identifies an individual by performing individual recognition based on the feature quantity obtained from the image information, and then associates that individual and the feature quantity obtained from the image information with the feature quantity obtained from the audio information. That is, since the conference system described in Patent Document 3 cannot identify an individual based on a plurality of feature quantities, it is difficult for it to specify an appropriate feature quantity to use in individual identification for each of a plurality of identification targets.
  • An example of an object of the present invention is to provide an individual identification system, a feature amount specifying device, a feature amount specifying method, and a recording medium for specifying an appropriate feature amount to use in individual identification for each of a plurality of identification targets.
  • the first individual identification system includes: an environmental information acquisition unit that acquires environmental information, which is information resulting from the environment of a space where a plurality of individuals subject to individual identification processing exist; an individual detection unit that detects the plurality of individuals from the environmental information together with position information indicating the position of each individual; a feature amount extraction unit that extracts a plurality of feature amounts from the environmental information together with attribute information indicating the type of each feature amount; a registration unit that determines, for each feature amount, the individual in which the feature amount occurred, based on the position information of each individual and the attribute information of the feature amounts; an effective value calculation unit that obtains, for each feature amount extracted from the environmental information, an effective value indicating the quality of the feature amount; and a feature amount specifying unit that specifies, for each detected individual, a feature amount used for the individual identification process of that individual, based on the result of the discrimination and the effective value.
  • the first feature amount specifying method includes: receiving information indicating individuals detected from environmental information, which is information derived from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature amounts extracted from the environmental information, and attribute information indicating the attributes of each feature amount; determining, for each feature amount, the individual in which the feature amount occurred, based on the position information and the attribute information; obtaining, for each feature amount extracted from the environmental information, an effective value indicating the quality of the feature amount; and specifying, for each detected individual, a feature amount used for the individual identification process of that individual, based on the determination result and the effective value.
  • the first recording medium in one aspect of the present invention records a program for causing a computer to execute: a process of receiving information indicating individuals detected from environmental information, which is information derived from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature amounts extracted from the environmental information, and attribute information indicating the attributes of each feature amount; a process of determining, for each feature amount, the individual in which the feature amount occurred, based on the position information and the attribute information; a process of obtaining, for each feature amount extracted from the environmental information, an effective value indicating the quality of the feature amount; and a process of specifying, for each detected individual, a feature amount used for the individual identification process of that individual, based on the result of the determination and the effective value.
  • One of the effects of the present invention is that it is possible to specify an appropriate feature amount for use in individual identification for each of a plurality of identification targets.
  • FIG. 1 is a block diagram illustrating a configuration example of an individual identification system 1 according to the first embodiment.
  • FIG. 2 is a block diagram illustrating a configuration example of the sensor unit 1000 according to the first embodiment.
  • FIG. 3 is a block diagram illustrating a configuration example of the feature amount determination unit 1103 according to the first embodiment.
  • FIG. 4 is a flowchart illustrating an operation example of the individual identification system 1 according to the first embodiment.
  • FIG. 5 is a block diagram illustrating a configuration example of the individual identification system 2 according to the second embodiment.
  • FIG. 6 is a block diagram illustrating a configuration example of the composite sensor unit 2200 according to the second embodiment.
  • FIG. 7 is a flowchart illustrating an operation example of the individual identification system 2 according to the second embodiment.
  • FIG. 8 is a block diagram illustrating a configuration example of the individual identification system 3 according to the third embodiment.
  • FIG. 9 is a block diagram illustrating a configuration example of the biological information acquisition unit 3200 according to the third embodiment.
  • FIG. 10 is a flowchart illustrating an operation example of the individual identification system 3 according to the third embodiment.
  • FIG. 11 is a block diagram illustrating a configuration example of the individual identification system 4 according to the fourth embodiment.
  • FIG. 12 is a flowchart illustrating an operation example of the individual identification system 4 according to the fourth embodiment.
  • FIG. 13 is a block diagram illustrating a configuration example of the feature quantity specifying device 50 according to the fifth embodiment.
  • FIG. 14 is a diagram illustrating an example of information stored in the database unit 1107.
  • FIG. 1 is a block diagram showing a configuration of an individual identification system 1 according to the first embodiment of the present invention.
  • the individual identification system 1 according to the first embodiment of the present invention includes a sensor unit 1000 and an identification processing device 1100.
  • the configuration of the sensor unit 1000 will be described.
  • the sensor unit 1000 acquires environmental information that is information resulting from the environment of the space in which the individual that is the target of the individual identification process exists.
  • in the following description, a space in which individuals subject to individual identification processing exist is simply referred to as a real space.
  • the environmental information may include image information in real space.
  • the sensor unit 1000 may include an image information acquisition unit 1001 that acquires real space image information as illustrated in FIG. 2.
  • the environment information may include audio information in real space.
  • the sensor unit 1000 may include an audio information acquisition unit 1002 that acquires audio information in real space, as illustrated in FIG. 2.
  • the image information acquisition unit 1001 may be a video camera capable of capturing a real space image or a still image.
  • the sound information acquisition unit 1002 may be a microphone that can acquire sound in real space.
  • the microphone constituting the audio information acquisition unit 1002 may have a directivity function that can specify the position where the audio is generated.
  • the means for acquiring environment information of the real space is not limited to the above configuration.
  • alternatively, the environment information of the real space may be acquired by reading environment information held in an external storage device.
  • in the following description, the environment information is assumed to include image information and audio information.
  • the configuration of the identification processing device 1100 will be described.
  • the identification processing device 1100 includes an individual detection unit 1101, a feature amount extraction unit 1102, a feature amount determination unit 1103, a feature amount identification unit 1104, a database addition processing unit 1105, a collation unit 1106, and a database unit 1107.
  • the identification processing device 1100 determines the individual in which each feature amount extracted from the environment information occurred. The identification processing device 1100 then obtains, for each feature amount extracted from the environment information, an effective value indicating the quality of the feature amount. The identification processing device 1100 then specifies, for each individual detected from the environment information, a feature amount used for the individual identification process, based on the determined result and the calculated effective value. Therefore, the identification processing device 1100 according to the first embodiment can use, for the individual identification process, a feature amount having a high effective value, that is, a higher-quality feature amount among the feature amounts associated with each individual.
  • each component provided in the identification processing device 1100 according to the first embodiment will be described in detail.
  • the individual detection unit 1101 receives the environmental information of the real space acquired by the sensor unit 1000. The individual detection unit 1101 then detects, from the received environment information, the individuals to be identified that exist in the real space, and also specifies the position of each individual. Specifically, the individual detection unit 1101 may specify an individual to be identified and the position of the individual by applying a background difference method or pattern matching to the image information included in the acquired environment information. Alternatively, when the individual detection unit 1101 receives information including multi-viewpoint images as the environment information, it may use the multi-viewpoint images to specify an individual to be identified and the three-dimensional position of the individual in the space.
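  • A minimal sketch of the image-based detection described above, assuming the OpenCV library is available; the function name, thresholds, and the returned dictionary layout are illustrative assumptions, not taken from the patent:

      import cv2
      import numpy as np

      # Background subtractor: pixels that differ from the learned background
      # model become foreground (the background difference method).
      subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

      def detect_individuals(frame, min_area=500):
          """Return bounding boxes and 2-D positions of foreground objects."""
          mask = subtractor.apply(frame)
          # MOG2 marks shadows with the value 127; keep only solid foreground.
          _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          detections = []
          for c in contours:
              if cv2.contourArea(c) < min_area:
                  continue  # ignore small blobs unlikely to be individuals
              x, y, w, h = cv2.boundingRect(c)
              detections.append({"bbox": (x, y, w, h),
                                 "position": (x + w / 2.0, y + h / 2.0)})
          return detections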
  • alternatively, the individual detection unit 1101 may detect an individual to be identified and specify its position as follows.
  • that is, the individual detection unit 1101 may detect a sounding individual and specify its position based on the environmental information received from the sensor unit 1000 and the estimated position of the sounding individual specified by the directional microphone function included in the sensor unit 1000.
  • the individual detection unit 1101 generates position information indicating the position of the specified individual for each detected individual.
  • for each detected individual, the individual detection unit 1101 outputs, to the feature amount extraction unit 1102, information indicating the individual, the position information of the individual, and image information and audio information of at least a part of the environment information corresponding to the position information. At least a part of the image information may be image information including the region of the individual, and may be video information including the region of the individual. At least a part of the audio information may be audio information indicating a sound estimated to have been generated within a predetermined distance from the position indicated by the position information of the individual. In addition, for each detected individual, the individual detection unit 1101 outputs information indicating the individual and the position information of the individual to a feature amount specifying unit 1104 described later.
  • the feature amount extraction unit 1102 receives information indicating an individual, position information of the individual, image information corresponding to the individual, and audio information from the individual detection unit 1101.
  • the feature amount extraction unit 1102 extracts feature amounts from the received image information and audio information.
  • the feature amount extraction unit 1102 may extract feature amounts based on the color, shape, size, pattern, etc. of the object that can be acquired from the received image information.
  • the feature amount extraction unit 1102 may extract a feature amount based on an action of an object that can be acquired from the received video information.
  • the feature amount extraction unit 1102 may extract a feature amount based on sound emitted from an object that can be acquired from the received audio information.
  • the feature quantity extraction unit 1102 specifies the attribute of the feature quantity when extracting the feature quantity.
  • the attribute of the feature amount may be, for example, the following attribute.
  • (a) The type of information from which the feature amount was acquired
  • (b) Whether or not the feature amount originated from a human
  • (c) Whether the feature amount was generated by a man or a woman
  • (d) The race of the person who generated the feature amount
  • (e) The strength of the feature amount
  • (f) The level of the feature amount
  • (g) Whether or not the feature amount is language
  • (h) How many words the feature amount contains
  • (i) The position where the feature amount is estimated to have occurred
  • the feature amount extraction unit 1102 generates attribute information indicating the attribute of the specified feature amount.
  • for example, if the attribute of the feature amount is (a) "the type of information from which the feature amount was acquired", the feature amount extraction unit 1102 specifies whether the feature amount was obtained from the audio information or from the image information in the environment information. The feature amount extraction unit 1102 then generates information indicating the specified acquisition source as attribute information. For example, if the attribute of the feature amount is (b) "whether or not the feature amount originated from a human", the feature amount extraction unit 1102 analyzes the feature amount and judges whether it includes information peculiar to humans. Any known method can be applied to this judgment. When the feature amount includes information unique to humans, the feature amount extraction unit 1102 generates, as attribute information, information indicating that the feature amount originated from a human.
  • for example, if the attribute of the feature amount is (c) "whether the feature amount was generated by a man or a woman", the feature amount extraction unit 1102 analyzes the feature amount and judges whether it was generated by a man or a woman. Any known method can be applied to this judgment. When the feature amount includes information specific to men or to women, the feature amount extraction unit 1102 generates, as attribute information, information indicating that the feature amount was generated by a man or by a woman. Even when the attribute of the feature amount is other than (a) to (c) above, the feature amount extraction unit 1102 generates predetermined attribute information in the same manner as described above.
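  • The attribute information above can be pictured as a small record attached to each extracted feature amount. The following container is an illustrative assumption; the patent prescribes no particular data layout:

      from dataclasses import dataclass
      from typing import List, Optional, Tuple

      @dataclass
      class FeatureQuantity:
          vector: List[float]                             # the extracted feature values
          source: str                                     # (a) "image" or "audio"
          from_human: Optional[bool] = None               # (b) originated from a human?
          sex: Optional[str] = None                       # (c) "male" or "female"
          strength: Optional[float] = None                # (e) strength of the feature
          position: Optional[Tuple[float, float]] = None  # (i) estimated origin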
  • the feature quantity extraction unit 1102 outputs the extracted feature quantity and attribute information of each feature quantity to a feature quantity determination unit 1103 described later.
  • the feature amount determination unit 1103 discriminates from which individual in the real space each feature amount extracted by the feature amount extraction unit 1102 occurred, and outputs the discrimination result. The feature amount determination unit 1103 also calculates the effective value of each feature amount.
  • the feature amount determination unit 1103 includes a registration unit 1113 that associates an individual with a feature amount, and an effective value calculation unit 1123 that calculates an effective value of each feature amount.
  • the registration unit 1113 receives information indicating an individual and position information of each individual from the individual detection unit 1101.
  • the registration unit 1113 also receives the feature amounts and the attribute information of each feature amount from the feature amount extraction unit 1102. The registration unit 1113 then determines, based on the received position information and attribute information, the individual in which each of the received feature amounts occurred. Specifically, the registration unit 1113 may associate an individual with a feature amount by the following method. First, the registration unit 1113 computes the difference between the position of each individual specified by the individual detection unit 1101 based on the environment information and the position where each feature amount specified by the feature amount extraction unit 1102 is estimated to have occurred. The registration unit 1113 associates the individual with the feature amount when the computed difference is equal to or less than a predetermined threshold. Alternatively, the registration unit 1113 may associate an individual with a feature amount by the following method.
  • that is, the registration unit 1113 associates the individual with the feature amount when the change over time of the individual's position corresponds to the change over time of the position where the feature amount is estimated to occur.
  • here, "corresponds" may mean that, when each change is expressed as a vector, the vector given by the difference between the two vectors is shorter than a predetermined length.
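  • A minimal sketch of the position-difference association performed by the registration unit 1113, assuming individuals and feature amounts carry 2-D positions; the function name, dictionary fields, and threshold are illustrative assumptions:

      import math

      def associate(individuals, features, dist_thresh=1.0):
          """Pair each feature with the nearest individual, if close enough."""
          pairs = []
          for f in features:
              fx, fy = f["position"]            # estimated origin of the feature
              best_id, best_d = None, None
              for ind in individuals:
                  ix, iy = ind["position"]      # detected position of the individual
                  d = math.hypot(fx - ix, fy - iy)
                  if best_d is None or d < best_d:
                      best_id, best_d = ind["id"], d
              # Associate only when the difference is at or below the threshold.
              if best_id is not None and best_d <= dist_thresh:
                  pairs.append((best_id, f))
          return pairs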
  • for this purpose, the individual detection unit 1101 and the feature amount extraction unit 1102 may associate information indicating the time at which the position of the individual or the attribute of the feature amount was specified with the information indicating each individual or with the feature amount. The individual detection unit 1101 and the feature amount extraction unit 1102 may then output the information indicating the individual or the feature amount, with the time information attached, to the feature amount determination unit 1103.
  • the sensor unit 1000 may associate the time when the environment information is acquired with the environment information, and pass the environment information to the individual detection unit 1101 or the feature amount extraction unit 1102.
  • the individual detection unit 1101 or the feature amount extraction unit 1102 may associate the time associated with the received environment information with the position information indicating the position of the individual or the attribute information indicating the attribute of the feature amount. Then, the individual detection unit 1101 or the feature amount extraction unit 1102 may output the position information of the individual associated with the time or the attribute information of the feature amount to the feature amount determination unit 1103.
  • the registration unit 1113 may associate an individual with a feature amount by the following method.
  • that is, when the environment information includes both image information and audio information, the registration unit 1113 may associate an individual with a feature amount as follows. The registration unit 1113 may perform speaker estimation based on the feature amount indicating the movement of the lips among the feature amounts included in the image information. The registration unit 1113 then associates the feature amount extracted from the image information estimated to show the speaker with the feature amount extracted from the audio information. Finally, the registration unit 1113 may specify the individual to be associated with these associated feature amounts based on the relationship between each feature amount and the individual.
  • the registration unit 1113 can perform the association with higher accuracy than the association based on the one-to-one relationship between the feature amount and the individual.
  • the registration unit 1113 may also assign one feature amount to a plurality of individuals with probabilistic weighting, instead of assigning it to only one individual. For example, the registration unit 1113 may associate one feature amount with the first individual with a weight of 80% probability, and with the second individual with a weight of 20% probability.
  • This probability is a probability indicating the likelihood that the feature amount has occurred from the individual. This probability may be calculated by the following method.
  • for example, the registration unit 1113 may calculate the above-described probability based on the difference between the position specified by the position information of each individual and the position specified by the attribute information of each feature amount. The registration unit 1113 may assign a higher probability as the difference becomes smaller and a lower probability as it becomes larger; specifically, the registration unit 1113 may assign probabilities inversely proportional to the difference. The probability may also be determined based on the attribute information of the feature amount. For example, when the attribute information includes information indicating the type of the feature amount, the registration unit 1113 may weight each of the above probabilities according to the type of the feature amount.
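  • A sketch of the probabilistic weighting described above, with weights inversely proportional to the position difference and normalized over the detected individuals; the names and the epsilon guard against division by zero are assumptions:

      import math

      def occurrence_probabilities(feature_pos, individuals, eps=1e-6):
          """P(feature occurred from individual), inversely proportional to distance."""
          weights = {}
          for ind in individuals:
              d = math.hypot(feature_pos[0] - ind["position"][0],
                             feature_pos[1] - ind["position"][1])
              weights[ind["id"]] = 1.0 / (d + eps)  # smaller difference, larger weight
          total = sum(weights.values())
          return {k: w / total for k, w in weights.items()}  # probabilities sum to 1

    For example, if the first individual is four times closer to the estimated origin of a feature amount than the second, this assignment yields weights of roughly 80% and 20%, matching the example above.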
  • the feature quantity specifying unit 1104 uses the feature quantity discrimination result and effective value obtained by the feature quantity discrimination unit 1103 to specify the feature quantity used for collation for individual identification processing.
  • the feature quantity specifying unit 1104 refers to the effective value of the feature quantity determined to be generated from one individual for each individual detected by the individual detection unit 1101.
  • the feature amount specifying unit 1104 specifies a feature amount whose effective value referred to is equal to or greater than a predetermined threshold as a feature amount to be used for individual identification processing of the individual.
  • This predetermined threshold may be a predetermined constant.
  • the predetermined threshold may be a value calculated by the feature amount specifying unit 1104 based on the feature amount attribute information.
  • alternatively, the feature amount specifying unit 1104 may specify the feature amount used for individual identification for each individual detected by the individual detection unit 1101 by the following method. That is, for each individual detected by the individual detection unit 1101, the feature amount specifying unit 1104 calculates the product of the effective value of each feature amount determined to have occurred from that individual and the probability that the feature amount is estimated to have occurred from that individual. When the calculated product is equal to or greater than a predetermined threshold, the feature amount specifying unit 1104 specifies that feature amount as a feature amount used for the individual identification process of that individual.
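  • A sketch of this product rule in the feature amount specifying unit 1104; the threshold value and the dictionary fields are illustrative assumptions:

      def select_features(individual_id, features, probs, threshold=0.5):
          """Keep features whose effective_value * P(origin) reaches the threshold."""
          selected = []
          for f in features:
              # probs maps feature id -> {individual id: occurrence probability}
              p = probs[f["id"]].get(individual_id, 0.0)
              if f["effective_value"] * p >= threshold:
                  selected.append(f)
          return selected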
  • the collation unit 1106 determines whether the correlation values with all the feature amounts stored in the database unit 1107 are equal to or less than a threshold, or whether the distances between the feature amounts are equal to or greater than a threshold.
  • when this is the case, the collation unit 1106 performs the following processing: it determines that the target individual is not registered in the database unit 1107.
  • the database unit 1107 stores the feature amount specified by the feature amount specifying unit 1104 in association with information indicating the individual; this registration is performed by the database addition processing unit 1105 described later.
  • FIG. 4 is a flowchart showing the operation of the individual identification system 1 in the first embodiment for carrying out the present invention.
  • the image information acquisition unit 1001 acquires image information included in environment information in the real space.
  • the voice information acquisition unit 1002 acquires voice information included in the environment information in the real space (step S1).
  • the sensor unit 1000 outputs the acquired environment information, that is, image information and audio information to the individual detection unit 1101.
  • the individual detection unit 1101 analyzes image information and audio information output from the sensor unit 1000, detects an individual to be identified, and specifies position information of the individual.
  • the individual detection unit 1101 outputs information indicating each detected individual and the position information of the individual to the feature amount extraction unit 1102 and the feature amount determination unit 1103. Further, for each detected individual, the individual detection unit 1101 outputs, to the feature amount extraction unit 1102, information indicating the individual, the position information of the individual, image information of at least a part of the environment information corresponding to the position information, and audio information of at least a part of the environment information (step S2).
  • specifically, the individual detection unit 1101 uses a combination of object detection processing based on a background difference method and pattern matching applied to the image information, three-dimensional position detection processing using multi-viewpoint images, and position detection of a sounding object using a directional microphone. Through this combined processing, the individual detection unit 1101 detects the individuals to be identified and specifies the position information of each individual.
  • the feature amount extraction unit 1102 extracts feature amounts for identifying each individual from the image information and audio information received from the individual detection unit 1101, and outputs them to the feature amount specifying unit 1104 (step S3). Specifically, the feature amount extraction unit 1102 may extract feature amounts based on the color, shape, size, pattern, and the like of an object that can be acquired from the received image information.
  • the feature amount extraction unit 1102 may extract a feature amount based on an action of an object that can be acquired from video information.
  • the feature amount extraction unit 1102 may extract a feature amount based on sound emitted by an object that can be acquired from audio information.
  • the feature amount determination unit 1103 uses the image information and audio information corresponding to each individual input from the individual detection unit 1101 and the position information of the individual to discriminate the individual in which each of the feature amounts extracted by the feature amount extraction unit 1102 occurred (step S4).
  • specifically, the feature amount determination unit 1103 identifies the individual in which each feature amount occurred by considering the relationship between the spatial position of the individual and the direction of the audio signal, the relationship between the movement of the individual obtained from the video information and the movement direction of the audio signal, and the like. The feature amount determination unit 1103 then outputs the discrimination result to the feature amount specifying unit 1104. Further, the feature amount determination unit 1103 calculates an effective value representing the validity of the image information and audio information of each individual input from the individual detection unit 1101, and outputs the effective value to the feature amount specifying unit 1104 (step S5).
  • examples of the effective value calculation method include quantitative calculation using information such as the signal-to-noise (SN) ratio and the signal strength, and calculation by a neural network or a support vector machine using learning data collected in advance.
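  • A sketch of the first of these methods, a quantitative effective value derived from the signal-to-noise ratio; the logistic squashing into [0, 1] and its constants are assumptions, not part of the patent:

      import numpy as np

      def effective_value_from_snr(signal, noise):
          """Map the SN ratio (in dB) of NumPy signal/noise arrays to a quality
          score in [0, 1]."""
          snr_db = 10.0 * np.log10(np.mean(signal ** 2) /
                                   (np.mean(noise ** 2) + 1e-12))
          # Logistic mapping: about 0.5 at 10 dB, approaching 1 for clean signals.
          return 1.0 / (1.0 + np.exp(-(snr_db - 10.0) / 5.0))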
  • the feature amount specifying unit 1104 uses the plural types of feature amounts input from the feature amount extraction unit 1102, the discrimination results input from the feature amount determination unit 1103, and the effective value of each signal to specify the feature amounts effective for identifying the target individual (step S6).
  • the identified feature amount is output to the matching unit 1106.
  • the collation unit 1106 collates the feature amounts input from the feature amount specifying unit 1104 with the plural types of feature amounts stored in the database unit 1107, and calculates a collation score indicating the correlation between the two sets of feature amounts (step S7).
  • the matching unit 1106 calculates a correlation value or a distance for each set of feature values, and calculates a matching score by integrating them.
  • methods of integrating the correlation values or distances include taking the average of the values, taking the maximum value, and adding or multiplying the values. Another method is integration by a neural network or a support vector machine using learning data prepared in advance.
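  • A sketch of the simple integration variants named above (a learned combiner such as a neural network or support vector machine could replace this function); the method names are illustrative:

      def integrate_scores(correlations, method="mean"):
          """Integrate per-feature correlation values into one matching score."""
          if method == "mean":
              return sum(correlations) / len(correlations)
          if method == "max":
              return max(correlations)
          if method == "sum":
              return sum(correlations)
          if method == "product":
              score = 1.0
              for c in correlations:
                  score *= c
              return score
          raise ValueError("unknown method: " + method)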
  • the collation unit 1106 determines whether or not the calculated collation score is equal to or less than a first threshold value (or greater than the first threshold value when the distance-based collation score is used). That is, the collation unit 1106 determines whether there is an individual corresponding to the database unit 1107 (step S8).
  • when the collation score is equal to or less than the first threshold, that is, when no corresponding individual exists in the database unit 1107 ("YES" (unregistered) in step S8), the database addition processing unit 1105 executes the following processing: it newly registers the feature amount specified by the feature amount specifying unit 1104 in the database unit 1107 (step S9).
  • otherwise, the collation unit 1106 determines that a corresponding individual exists in the database unit 1107 ("NO" (registered) in step S8). In this case, the collation by the collation unit 1106 is completed.
  • the collation unit 1106 may further have the following function. First, the collation unit 1106 determines whether the calculated collation score is significantly higher than the first threshold. (When the calculated collation score is a distance-based collation score, the collation unit 1106 determines whether the score is significantly smaller than the first threshold.)
  • for this purpose a second threshold is used, which indicates a value larger than the first threshold (or a value smaller than the first threshold when the collation score is a distance-based collation score).
  • the collation unit 1106 determines whether the calculated collation score is equal to or greater than the predetermined second threshold.
  • when the collation score is equal to or greater than the predetermined second threshold, the collation unit 1106 determines that the reliability of the individual collation is extremely high.
  • in that case, the collation unit 1106 adds the feature amount of the individual to the database unit 1107.
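  • The two-threshold policy of steps S8-S9 and the addition above can be sketched as follows, assuming a correlation-based score (higher means more similar) and a dictionary-of-lists database; the threshold values are illustrative:

      def update_database(db, individual_id, features, score,
                          first_threshold=0.6, second_threshold=0.9):
          """Register or enrich database entries according to the matching score."""
          if score <= first_threshold:
              # No corresponding individual in the database: new registration (step S9).
              db[individual_id] = list(features)
          elif score >= second_threshold:
              # Extremely reliable match: add the fresh features to the entry.
              db.setdefault(individual_id, []).extend(features)
          # Scores between the thresholds: the match stands, database unchanged.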
  • the feature amount specifying unit 1104 may determine whether a feature amount associated with information indicating a certain individual is stored in the database unit 1107.
  • when the feature amount specifying unit 1104 determines that such a feature amount is stored in the database unit 1107, it may specify a feature amount having attribute information corresponding to the attribute information of the stored feature amount as a feature amount to be used for the above-described individual identification process.
  • the individual identification system 1 determines whether there is an individual that has not yet been identified among the individuals detected by the individual detection unit 1101 (step S10).
  • when an unidentified individual remains ("YES" in step S10), the individual identification system 1 repeats the steps from the feature determination process in step S4. By this operation, the individual identification system 1 can identify all the individuals existing in the real space.
  • when the individual identification system 1 determines that the identification process has been executed for all the individuals detected by the individual detection unit 1101 ("NO" (no unidentified individual) in step S10), the identification process ends.
  • the individual identification system 1 in the first embodiment determines an individual in which each of the feature amounts extracted from the environmental information has occurred.
  • the individual identification system 1 also obtains, for each feature amount extracted from the environment information, an effective value indicating the quality of the feature amount. The individual identification system 1 then specifies, for each individual detected from the environment information, the feature amount used for the individual identification process, based on the determined result and the calculated effective value. Therefore, the individual identification system 1 according to the first embodiment can use, for the individual identification process, a feature amount having a high effective value, that is, a higher-quality feature amount among the feature amounts associated with each individual. As a result, the individual identification system 1 in the first embodiment can specify an appropriate feature amount to use for individual identification for each of a plurality of identification targets.
  • furthermore, the individual identification system 1 is robust against environmental changes and can perform more accurate individual identification processing even when a plurality of individuals must be identified at the same time.
  • the feature amount determination unit 1103 may be included in the individual detection unit 1101.
  • in this case, when detecting an individual to be identified from the image information and audio information received from the sensor unit 1000, the individual detection unit 1101 may associate the individual with the image information or audio information in a predetermined region including the position of the individual. The individual detection unit 1101 then calculates the effective value of the feature amount included in the image information or audio information based on the signal intensity of the image information or audio information associated with the individual.
  • the composite sensor unit 2200 includes one or more of a shape measuring unit 2201, a weight measuring unit 2202, a calorie measuring unit 2203, a speed measuring unit 2204, an optical property measuring unit 2205, an odor measuring unit 2206, and a material inspection unit 2207, as shown in FIG. 6.
  • the composite sensor unit 2200 is connected to the individual detection unit 1101. That is, the difference of the second embodiment from the first embodiment is that the composite sensor unit 2200 is added to the configuration of the individual identification system. Other components of the individual identification system 2 are the same as those of the individual identification system 1 in the first embodiment.
  • the shape measuring unit 2201 acquires information about the three-dimensional shape and volume of the individual.
  • the weight measuring unit 2202 measures the weight of the individual.
  • the calorie measurement unit 2203 measures the temperature of the individual.
  • the speed measuring unit 2204 measures the speed of the individual.
  • the optical characteristic measurement unit 2205 measures optical characteristics of the surface of the individual, such as reflectance, transmittance, and refractive index.
  • the odor measuring unit 2206 measures the odor of the individual.
  • the material inspection unit 2207 acquires information such as hardness and material of the individual surface by infrared spectroscopy, ultrasonic inspection, or the like.
  • various sensor information acquired by the composite sensor unit 2200 is output to the feature amount extraction unit 1102 in the same manner as individual image information and audio information.
  • the feature amount extraction unit 1102 extracts feature amounts from the input image information, audio information, and various sensor information.
  • the feature quantity discriminating unit 1103 performs processing for associating the feature quantity extracted from the sensor information with the individual and calculating the effective value of each feature quantity.
  • FIG. 7 is a flowchart showing an example of the operation of the individual identification system 2 in the second embodiment for carrying out the present invention.
  • the operation of the second embodiment differs from that of the first embodiment in the following points.
  • the first is that the image information and audio information included in the environment information, together with the sensing data acquired from the composite sensor unit 2200, are given as inputs to the individual detection unit 1101 (step S21).
  • FIG. 8 is a block diagram showing the configuration of the individual identification system 3 according to the third embodiment of the present invention.
  • as shown in FIG. 8, the individual identification system 3 according to the third embodiment differs from the individual identification system 1 according to the first embodiment in the biological information acquisition unit 3200 and the person detection unit 3101. That is, the individual detection unit 1101 of the identification processing device 1100 in the first embodiment is replaced by the person detection unit 3101 of the identification processing device 3100 in the third embodiment.
  • Other components of the individual identification system 3 are the same as those of the individual identification system 1 in the first embodiment.
  • the same components as those of the individual identification system 1 in the first embodiment of the individual identification system 3 are denoted by the same reference numerals as those in FIG. 1, and detailed description thereof is omitted.
  • the biometric information acquisition unit 3200 of the individual identification system 3 according to the third embodiment includes the components shown in FIG.
  • a fingerprint pattern acquisition unit 3202 acquires a fingerprint pattern of a person.
  • the fingerprint pattern acquisition unit 3202 may be configured to acquire a fingerprint pattern using a contact sensor, or may be configured to acquire a fingerprint pattern in a non-contact manner using a camera or the like.
  • a palm print pattern acquisition unit 3203 acquires a palm print and a palm pattern of a person.
  • the palm print pattern acquisition unit 3203 may be configured to acquire the palm print and palm pattern with a contact sensor, or may be configured to acquire them in a non-contact manner using a camera or the like.
  • the palm print pattern acquisition unit 3203 may be configured to simultaneously acquire a fingerprint pattern.
  • the vein pattern acquisition unit 3204 acquires a vein pattern of a person.
  • the vein pattern acquisition unit 3204 may be configured to acquire a vein pattern from a part such as a finger, palm, back of the hand, face, or neck, or may be configured to acquire a vein pattern from another part. Also good. Further, the fingerprint pattern acquisition unit 3202 or the palm print pattern acquisition unit 3203 may also acquire the vein pattern at the same time.
  • the dentition pattern acquisition unit 3205 acquires the shape and arrangement pattern of human teeth.
  • the dentition pattern acquisition unit 3205 may acquire three-dimensional shape information as a dentition pattern in addition to image information captured by a camera.
  • An auricle pattern acquisition unit 3206 acquires a shape pattern of a human ear.
  • the auricle pattern acquisition unit 3206 may be configured to acquire three-dimensional shape information in addition to image information.
  • the gene sequence information acquisition unit 3207 acquires gene sequence information of a person.
  • the gene sequence information acquisition unit 3207 may be configured to acquire gene sequence information from human skin, body hair, body fluid, or the like.
  • the person detection unit 3101 detects a person from the image information acquired by the sensor unit 1000.
  • the person detection unit 3101 detects a person by using face detection for image information, walking pattern detection for video information, and the like.
  • the person detection unit 3101 also has a function of detecting a person from image information and acquiring a person's face pattern and walking pattern.
  • the identification processing device 4100 is a so-called personal computer (PC).
  • the database unit 4107 includes at least communication means such as a network interface and a magnetic storage device.
  • the magnetic storage device stores facial feature amount information and voice feature amount information of a plurality of persons.
  • the magnetic storage device stores at least one or more of face feature amount information and voice feature amount information per person.
  • the magnetic storage device may store a plurality of both feature quantities per person.
  • the facial feature amount information and the voice feature amount information may be managed by a relational database management system (RDBMS).
  • the fourth embodiment is an embodiment in which feature amounts are a face pattern and a voice pattern.
  • the individual identification system 4 according to the fourth embodiment is naturally applicable also to an individual identification system that uses other feature amounts.
  • the person detection unit 4101 detects that there are two persons in the space by processing such as face detection, and passes the detected face image data and audio data to the feature amount extraction unit 4102 and the feature amount determination unit 4103 (step S42).
  • the feature amount extraction unit 4102 extracts feature amounts for personal identification from the face image data and audio data acquired from the person detection unit 4101 (step S43).
  • the feature amount discriminating unit 4103 uses the face image data acquired from the person detecting unit 4101 to specify the speaker by speaker estimation or the like, and associates the speaker's face image and voice data with the speaker (step S44). Also, the feature amount discriminating unit 4103 calculates the effectiveness of the feature amount extracted from the face image or audio signal from the state of the face image and audio signal acquired from the person detection unit 4101 (step S45).
  • each component in each embodiment of the present invention can be realized not only in hardware but also by a computer and a program.
  • the program is provided by being recorded on a computer-readable recording medium such as a magnetic disk or a semiconductor memory, and is read by the computer when the computer is started up.
  • the read program causes the computer to function as a component in each of the embodiments described above by controlling the operation of the computer.
  • the computer includes a central processing unit (CPU) on which the individual identification program is read and executed, a storage device (such as a hard disk) that stores feature quantities as a database, and input means such as a camera and a microphone.
  • the individual identification program read into the CPU causes the computer to function as the identification processing device 1100 described in the first embodiment.
  • An example of the effect of the present invention is that it is possible to specify an appropriate feature amount for use in individual identification for each of a plurality of identification targets.
  • (Appendix 1) A feature quantity specifying device comprising: a registration unit that receives information indicating individuals detected from environmental information, which is information resulting from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature quantities extracted from the environmental information, and attribute information indicating the attributes of each feature quantity, and that determines, for each feature quantity, the individual in which the feature quantity occurred, based on the position information and the attribute information; an effective value calculation unit that obtains, for each feature quantity extracted from the environmental information, an effective value indicating the quality of the feature quantity; and a feature quantity specifying unit that specifies, for each detected individual, a feature quantity used for the individual identification process of the individual, based on the result of the determination and the effective value.
  • The feature quantity specifying device, wherein the feature quantity specifying unit specifies a feature quantity for which the product of the effective value and the probability is equal to or greater than a predetermined threshold as a feature quantity used for the individual identification process of the one individual.
  • The feature quantity specifying device according to claim 3, wherein the attribute information of a feature quantity includes position information indicating the position where the feature quantity occurred, and wherein the registration unit calculates, for each feature quantity, the probability that the feature quantity occurred from an individual, based on the difference between the position specified from the position information of each individual and the position specified from the attribute information of each feature quantity.
  • The feature quantity specifying device, wherein the feature quantity specifying unit, when a feature quantity has attribute information corresponding to the attribute information of the stored feature quantity, specifies it as a feature quantity used for the individual identification process of the one individual.
  • A feature quantity specifying method that specifies, for each detected individual, a feature quantity used for the individual identification process of the individual, based on the result of the determination and the effective value.
  • (Appendix 10) Information indicating individuals detected from environmental information, which is information resulting from the environment of a space where individuals subject to individual identification processing exist, position information indicating the position of each individual, feature quantities extracted from the environmental information, and attribute information indicating the attributes of each feature quantity are received, and the individual in which each feature quantity occurred is determined for each feature quantity based on the position information and the attribute information.
  • The feature quantity specifying device according to Appendix 1, wherein the feature quantity specifying unit stores association information, which is information associating the position information of an individual with the attribute information of a feature quantity, and specifies a feature quantity having attribute information corresponding to the association information that includes the position information of one individual as a feature quantity used for the individual identification process of the one individual.
  • The feature quantity specifying device according to Appendix 1, wherein the position information of an individual includes information indicating the movement direction of the individual, and the attribute information of a feature quantity includes information indicating whether the volume is increasing or decreasing, and wherein the feature quantity specifying unit: when the position information of a first individual includes information indicating a first direction, specifies a feature quantity having attribute information including information that the volume is increasing as a feature quantity used for the individual identification process of the first individual; and when the position information of a second individual includes information indicating a second direction opposite to the first direction, specifies a feature quantity having attribute information including information that the volume is decreasing as a feature quantity used for the individual identification process of the second individual.
  • (Appendix 15) The individual identification system includes a database unit that stores, in association with one another, a collation feature quantity used for individual identification, the attribute information of the feature quantity, and information indicating the individual.
  • The collation unit reads, based on the feature quantity specified by the feature quantity specifying unit, the collation feature quantity used for identifying the individual associated with that feature quantity, and collates the feature quantities.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention relates to a device whereby feature values appropriate for use in identifying individuals are specified for each of a plurality of identification targets. A feature value specifying device comprises a registration unit, an effective value calculation unit, and a feature value specifying unit. The registration unit receives: information indicating an individual that is an identification target and that is detected from environment information derived from the spatial environment in which the individual is present, position information indicating the position of each individual, feature values extracted from the environment information, and attribute information indicating the attributes of each feature value. For each feature value, the registration unit uses the position information and the attribute information to determine the individual that generated the feature value. For each feature value extracted from the environment information, the effective value calculation unit obtains an effective value indicating the quality of that feature value. For each detected individual, the feature value specifying unit uses the determination result and the effective value to specify each feature value used to identify that individual.
PCT/JP2011/062313 2010-08-09 2011-05-24 Système d'identification de personnes, dispositif de spécification de valeur de caractéristique, procédé de spécification de caractéristique et support d'enregistrement Ceased WO2012020591A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012528607A JPWO2012020591A1 (ja) 2010-08-09 2011-05-24 個体識別システム、特徴量特定装置、特徴量特定方法およびプログラム

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010-178644 2010-08-09
JP2010178644 2010-08-09

Publications (1)

Publication Number Publication Date
WO2012020591A1 true WO2012020591A1 (fr) 2012-02-16

Family

ID=45567561

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/062313 Ceased WO2012020591A1 (fr) 2010-08-09 2011-05-24 Système d'identification de personnes, dispositif de spécification de valeur de caractéristique, procédé de spécification de caractéristique et support d'enregistrement

Country Status (2)

Country Link
JP (1) JPWO2012020591A1 (fr)
WO (1) WO2012020591A1 (fr)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0484277A (ja) * 1990-07-26 1992-03-17 Nec Corp 特徴量選択方法及び装置と高速識別方法及び装置
JP2006293644A (ja) * 2005-04-08 2006-10-26 Canon Inc 情報処理装置、情報処理方法
JP2009194857A (ja) * 2008-02-18 2009-08-27 Sharp Corp 通信会議システム、通信装置、通信会議方法、コンピュータプログラム

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015529365A (ja) * 2012-09-05 2015-10-05 エレメント,インク. カメラ付きデバイスに関連する生体認証のためのシステム及び方法
US10135815B2 (en) 2012-09-05 2018-11-20 Element, Inc. System and method for biometric authentication in connection with camera equipped devices
JP2018200716A (ja) * 2012-09-05 2018-12-20 エレメント,インク. カメラ付きデバイスに関連する生体認証のためのシステム及び方法
US10728242B2 (en) 2012-09-05 2020-07-28 Element Inc. System and method for biometric authentication in connection with camera-equipped devices
JP2014092904A (ja) * 2012-11-02 2014-05-19 Nissan Motor Co Ltd ワーク認識方法およびワーク認識装置
US10621404B2 (en) 2013-02-06 2020-04-14 Sonavation, Inc. Biometric sensing device for three dimensional imaging of subcutaneous structures embedded within finger tissue
CN105264542A (zh) * 2013-02-06 2016-01-20 索纳维森股份有限公司 用于对嵌入手指组织内的皮下结构进行三维成像的生物特征感测设备
JP2016513983A (ja) * 2013-02-06 2016-05-19 ソナベーション, インコーポレイテッド 指組織内に埋め込まれた皮下構造の3次元撮像のためのバイオメトリック感知デバイス
US10528785B2 (en) 2013-02-06 2020-01-07 Sonavation, Inc. Method and system for beam control in biometric sensing
US9913135B2 (en) 2014-05-13 2018-03-06 Element, Inc. System and method for electronic key provisioning and access management in connection with mobile devices
US9965728B2 (en) 2014-06-03 2018-05-08 Element, Inc. Attendance authentication and management in connection with mobile devices
US11250860B2 (en) 2017-03-07 2022-02-15 Nec Corporation Speaker recognition based on signal segments weighted by quality
US11837236B2 (en) 2017-03-07 2023-12-05 Nec Corporation Speaker recognition based on signal segments weighted by quality
US10735959B2 (en) 2017-09-18 2020-08-04 Element Inc. Methods, systems, and media for detecting spoofing in mobile authentication
US11425562B2 (en) 2017-09-18 2022-08-23 Element Inc. Methods, systems, and media for detecting spoofing in mobile authentication
JP2019175081A (ja) * 2018-03-28 2019-10-10 株式会社日立パワーソリューションズ 移動経路特定システム及び方法
US11343277B2 (en) 2019-03-12 2022-05-24 Element Inc. Methods and systems for detecting spoofing of facial recognition in connection with mobile devices
JP2024016283A (ja) * 2019-09-29 2024-02-06 ザックダン カンパニー 機械学習を利用した客体画像提供方法及び装置
US11507248B2 (en) 2019-12-16 2022-11-22 Element Inc. Methods, systems, and media for anti-spoofing using eye-tracking
JP2021092809A (ja) * 2021-02-26 2021-06-17 日本電気株式会社 音声処理装置、音声処理方法、および音声処理プログラム
JP7216348B2 (ja) 2021-02-26 2023-02-01 日本電気株式会社 音声処理装置、音声処理方法、および音声処理プログラム

Also Published As

Publication number Publication date
JPWO2012020591A1 (ja) 2013-10-28

Similar Documents

Publication Publication Date Title
WO2012020591A1 (fr) Système d'identification de personnes, dispositif de spécification de valeur de caractéristique, procédé de spécification de caractéristique et support d'enregistrement
KR101189765B1 (ko) 음성 및 영상에 기반한 성별-연령 판별방법 및 그 장치
Jain et al. Integrating faces, fingerprints, and soft biometric traits for user recognition
KR100940902B1 (ko) 손가락 기하학 정보를 이용한 바이오 인식 방법
KR20190009361A (ko) 신원 인증 방법 및 장치
AlMahafzah et al. A survey of multibiometric systems
Nandakumar Integration of multiple cues in biometric systems
TW201201115A (en) Facial expression recognition systems and methods and computer program products thereof
WO2009131209A1 (fr) Dispositif de mise en correspondance d'images, procédé de mise en correspondance d'images et programme de mise en correspondance d'images
Soltane et al. Multi-modal biometric authentications: concept issues and applications strategies
CN113343198A (zh) 一种基于视频的随机手势认证方法及系统
Senarath et al. Behaveformer: A framework with spatio-temporal dual attention transformers for imu-enhanced keystroke dynamics
JP2021015443A (ja) 補完プログラム、補完方法、および補完装置
Majekodunmi et al. A review of the fingerprint, speaker recognition, face recognition and iris recognition based biometric identification technologies
Derawi Smartphones and biometrics: Gait and activity recognition
KR102563153B1 (ko) 마스크 착용에 강인한 얼굴 인식 방법 및 시스템
CN116612542B (zh) 基于多模态生物特征一致性的音视频人物识别方法及系统
Vasavi et al. Novel Multimodal Biometric Feature Extraction for Precise Human Identification.
Abate et al. Smartphone enabled person authentication based on ear biometrics and arm gesture
KR20110100008A (ko) 연령 및 성별을 이용한 사용자 인식 장치 및 방법
JP2002208011A (ja) 画像照合処理システムおよび画像照合方法
Bigun et al. Combining biometric evidence for person authentication
Derawi et al. Fusion of gait and fingerprint for user authentication on mobile devices
CN115668186A (zh) 认证方法、信息处理装置、以及认证程序
Cruz et al. Lip Biometric Authentication Using Viola-Jones and Appearance Based Model (AAM) System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11816257

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2012528607

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11816257

Country of ref document: EP

Kind code of ref document: A1