WO2005038716A1 - Image Matching System and Image Matching Method - Google Patents
Image Matching System and Image Matching Method
- Publication number
- WO2005038716A1 (PCT/JP2004/015612, JP2004015612W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- reference image
- dimensional
- distance value
- matching
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
Definitions
- the present invention relates to an image collation system, an image collation method, and an image collation program.
- more particularly, it relates to an image collation system, an image collation method, and an image collation program that can perform collation with high accuracy even when a three-dimensional model of an object cannot be registered in advance and the database in the system holds only one or a few reference images of each object, captured under different conditions such as posture and lighting.
- FIG. 25 is a block diagram showing a conventional image matching system.
- the conventional image matching system includes an image input unit 115, an image conversion unit 117, an image matching unit 157, a reference image storage unit 130, and a standard three-dimensional object model storage unit 135.
- the reference image storage unit 130 stores in advance a reference image of an object.
- the standard three-dimensional object model storage unit 135 stores a standard three-dimensional object model in advance.
- the image conversion means 117 uses the three-dimensional object model obtained from the standard three-dimensional object model storage unit 135 to convert the input image from the image input means 115 and/or each reference image obtained from the reference image storage unit 130 so that their posture conditions coincide over a common partial area, and generates partial images.
- a partial area is a characteristic portion such as an eye, nose, or mouth; correspondence can be established by designating feature points in advance for each image and for the three-dimensional object model.
- the image matching unit 157 compares the converted input image from the image conversion unit 117 with the partial image of each reference image, calculates the average similarity, and selects the reference image having the highest similarity for each object (see, for example, JP-A-2000-322577 (Patent Document 1)).
- FIG. 26 is a block diagram showing another conventional image matching system.
- This conventional image matching system includes an image input unit 115, an illumination variation correction unit 122, an image conversion unit 118, an image matching unit 158, a reference image storage unit 130, and a standard three-dimensional object model storage unit 135.
- the reference image storage unit 130 stores in advance a reference image obtained by photographing an object.
- the standard three-dimensional object model storage unit 135 stores a standard three-dimensional object model in advance.
- the illumination variation correction unit 122 estimates the illumination condition (surface reflectance) of the input image input from the image input unit 115 using the three-dimensional object model obtained from the standard three-dimensional object model storage unit 135.
- the image conversion means 118 generates an image obtained by converting the input image using the three-dimensional object model so as to match the illumination condition of the reference image.
- the image matching unit 158 compares the input image converted by the image conversion unit 118 with each reference image, calculates the similarity, and selects the reference image having the highest similarity for each object (see, for example, JP-A-2002-24830 (Patent Document 2)).
- FIG. 27 is a block diagram showing still another conventional image matching system.
- This conventional image matching system includes an image input unit 115, a reference three-dimensional object model storage unit 137, and a posture estimation / matching unit 150.
- Posture estimation / collation means 150 includes posture candidate determination means 120, comparative image generation means 140, and image collation means 155.
- the reference three-dimensional object model storage unit 137 stores a reference three-dimensional object model generated by measuring an object in advance.
- the posture estimation/matching means 150 obtains the minimum distance value (or maximum similarity) between the input image obtained from the image input means 115 and the reference three-dimensional object model obtained from the reference three-dimensional object model storage unit 137, and selects the model with the smallest minimum distance value.
- posture candidate determining means 120 generates at least one posture candidate.
- the comparison image generation means 140 generates a comparison image close to the input image while projecting the reference three-dimensional object model onto a two-dimensional image according to the posture candidate.
- the image matching means 155 obtains the distance value between each comparison image and the input image and selects the comparison image having the smallest distance value for each model, thereby estimating the optimal posture and simultaneously obtaining the minimum distance value between the input image and the reference three-dimensional object model. Further, the model having the smallest minimum distance value is selected (see, for example, JP-A-2003-058896 (Patent Document 3)).
- however, with the technique of Patent Document 1, images cannot be matched correctly. The reason is that the posture is estimated for the image and the image is converted so as to match the posture condition, but it is difficult to estimate the posture accurately from an image. In addition, since the image is converted using a standard three-dimensional object model that differs from the three-dimensional shape of the observed object, the distortion caused by the image conversion becomes large when the shape is complicated or when the posture conditions differ significantly.
- in Patent Document 2, the illumination conditions are estimated and the image is converted using a standard three-dimensional object model that differs from the three-dimensional shape of the observed object, so even when the overall correction is roughly right, erroneous corrections may be made in the details.
- in Patent Document 3, matching is difficult when a three-dimensional object model of each object is not registered in advance or when there are few reference images.
- Patent Document 3 registers a three-dimensional object model in advance and matches it with an input image.
- it is necessary to measure each object with a three-dimensional shape measuring device before matching, but this is often difficult.
- it is also possible to generate a three-dimensional object model from a plurality of images, but when there are only a few reference images it is difficult to generate one.
- the present invention has been made in view of the above conventional problems, and its object is to enable high-precision collation and retrieval even when the reference image of each object is photographed under different conditions such as posture and lighting.
- Another object of the present invention is to enable high-precision collation and retrieval even when a three-dimensional object model of each object cannot be obtained in advance.
- Another object of the present invention is to enable high-precision collation and retrieval even when only one or a small number of reference images of each object exist.
- an image matching system according to the present invention includes: input means for inputting three-dimensional data of an object; reference image storage means for storing at least one reference image of an object; posture candidate generation means for generating a posture candidate, that is, a candidate for the posture of the object; comparison image generation means for generating a comparison image close to a reference image while projecting the three-dimensional data onto a two-dimensional image according to the posture candidate; and image matching means for performing matching based on either a distance value or a similarity between the reference image and the comparison image.
- similarly, an image matching method according to the present invention includes a step of inputting three-dimensional data of an object, a step of generating a posture candidate that is a candidate for the posture of the object, a step of generating a comparison image close to a reference image while projecting the three-dimensional data onto a two-dimensional image according to the posture candidate, and a step of performing matching based on either a distance value or a similarity between the reference image and the comparison image.
- an image collation program according to the present invention causes a computer to execute a procedure of inputting three-dimensional data of an object, a procedure of generating a posture candidate that is a candidate for the posture of the object, a procedure of generating a comparison image close to a reference image while projecting the three-dimensional data onto a two-dimensional image according to the posture candidate, and a procedure of performing matching based on either a distance value or a similarity between the reference image and the comparison image.
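As a concrete illustration of the claimed flow, here is a minimal Python sketch. The toy orthographic renderer, the single-angle posture, the image size, and all function and parameter names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def rot_y(deg):
    # Rotation about the vertical axis; a real posture candidate would carry
    # full 3D rotation plus translation, simplified here to one angle.
    t = np.radians(deg)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(t), 0.0, np.cos(t)]])

def render(points, grays, pose_deg, size=64):
    # Toy stand-in for the comparison image generation means: rotate the 3D
    # point set, project orthographically, and splat gray values into a 2D
    # image with a z-buffer so nearer surface points win.
    img = np.zeros((size, size))
    zbuf = np.full((size, size), -np.inf)
    p = points @ rot_y(pose_deg).T
    span = np.ptp(p[:, :2], axis=0) + 1e-9
    xy = ((p[:, :2] - p[:, :2].min(axis=0)) / span * (size - 1)).astype(int)
    for (x, y), z, g in zip(xy, p[:, 2], grays):
        if z > zbuf[y, x]:
            zbuf[y, x], img[y, x] = z, g
    return img

def match(points, grays, reference_images, pose_candidates, threshold):
    # Per reference image, search the posture candidates for the minimum
    # distance; then pick the reference with the smallest minimum distance
    # (one-to-N) or compare that distance to a threshold (one-to-one).
    min_dists = [min(float(np.sum((ref - render(points, grays, e)) ** 2))
                     for e in pose_candidates)
                 for ref in reference_images]
    k = int(np.argmin(min_dists))
    return k if min_dists[k] <= threshold else None
```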
- a first effect of the present invention is that collation and search can be performed with high accuracy even when a reference image of each object is photographed under different conditions such as posture and lighting.
- the reason is that the three-dimensional data of the object is measured, a comparison image matching the shooting conditions such as the posture and lighting of each reference image is generated, and the comparison image is compared with the reference image for collation.
- the second effect is that collation and search can be performed with high accuracy even when a three-dimensional object model of each object cannot be obtained in advance or when only one or a few reference images exist.
- the reason for this is that the three-dimensional data of the object is measured at the time of matching, a comparison image matching the pre-existing reference images is generated, and the comparison image is compared with the reference images to perform the matching.
- FIG. 1 is a diagram showing a configuration of a first embodiment of an image matching system according to the present invention.
- FIG. 2 is a block diagram illustrating a configuration of an image matching unit according to the first embodiment.
- FIG. 3 is a flowchart showing an operation in one-to-one matching in the first embodiment.
- FIG. 4 is a flowchart showing an operation in one-to-N matching of the first embodiment.
- FIG. 5 is a diagram showing a specific example of a reference image of the first embodiment.
- FIG. 6 is a diagram showing a specific example of three-dimensional data of the first embodiment.
- FIG. 7 is a diagram showing a specific example of a comparative image of the first embodiment.
- FIG. 8 is a block diagram showing a configuration of a second exemplary embodiment of the present invention.
- FIG. 9 is a flowchart showing the operation in the one-to-N matching of the second embodiment.
- FIG. 10 is a block diagram showing the configuration of the third embodiment of the present invention.
- FIG. 11 is a block diagram showing the configuration of the image matching means of the third embodiment.
- FIG. 12 is a flowchart showing the operation in the 1: N comparison of the third embodiment.
- FIG. 13 is a diagram showing a specific example of a standard three-dimensional reference point of the third embodiment.
- FIG. 14 is a diagram showing a specific example of the standard three-dimensional weighting factor of the third embodiment.
- FIG. 15 is a diagram showing a specific example of the reference weight coefficient of the third embodiment.
- FIG. 16 is a diagram showing a specific example of the input three-dimensional reference points of the third embodiment.
- FIG. 17 is a diagram showing a specific example of the two-dimensional weighting factor of the third embodiment.
- FIG. 18 is a block diagram showing the configuration of the fourth embodiment of the present invention.
- FIG. 19 is a flowchart showing the operation of the fourth embodiment.
- FIG. 20 is a flowchart showing the operation of the fourth embodiment.
- FIG. 21 is a diagram showing a specific example of a representative three-dimensional object model of the fourth embodiment.
- FIG. 22 is a block diagram showing a configuration of the fifth embodiment of the present invention.
- FIG. 23 is a flowchart showing the operation of the fifth embodiment.
- FIG. 24 is a diagram showing a specific example of a representative image of the fifth embodiment.
- FIG. 25 is a block diagram showing a conventional image matching system.
- FIG. 26 is a block diagram showing another conventional image matching system.
- FIG. 27 is a block diagram showing still another conventional image matching system.

BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 is a block diagram showing a first embodiment of the image matching system according to the present invention.
- reference numeral 10 denotes three-dimensional data input means for inputting three-dimensional data of an object
- reference numeral 30 denotes a reference image storage unit
- reference numeral 50 denotes posture estimation / collation means.
- Posture estimation / collation means 50 includes posture candidate determination means 20, comparison image generation means 40, and image collation means 55.
- the reference image storage unit 30 stores in advance reference images obtained by photographing at least one object.
- the imaging conditions such as the posture and illumination of the reference image are not limited.
- the reference image storage unit 30 may be provided inside the system or outside the system and may be used by connecting to a network.
- the three-dimensional data input means 10 inputs three-dimensional data of an object to be collated (or an object to be searched).
- the three-dimensional data can be obtained, for example, by using the three-dimensional shape measuring device described in JP-A-2001-12925, or it can be generated by using a device that restores a three-dimensional shape from a plurality of images taken by many cameras, as described in JP-A-9-91436.
- the posture estimation/collation means 50 obtains the minimum distance value (or maximum similarity) between the three-dimensional data input from the three-dimensional data input means 10 and the reference image obtained from the reference image storage unit 30. More specifically, the posture candidate determination means 20 generates at least one posture candidate, that is, a candidate for the posture of the object (the posture is represented by the position and orientation of the object), and the comparison image generation means 40 generates a comparison image close to the reference image while projecting the three-dimensional data onto a two-dimensional image according to the posture candidate.
- the image matching means 55 includes a calculation unit 55a, a selection unit 55b, and a matching unit 55c shown in FIG. 2.
- the image matching means 55 obtains the distance value between the comparison image and the reference image in the calculation unit 55a and selects, in the selection unit 55b, the comparison image having the smallest distance value for each reference image, thereby estimating the optimal posture and obtaining the minimum distance value between the three-dimensional data and the reference image.
- in the case of matching against one object (one-to-one matching), the collation unit 55c compares the minimum distance value with a threshold to determine whether or not the objects are the same.
- in the case of searching among a plurality of objects (one-to-N matching), the matching unit 55c selects the reference image with the smallest minimum distance value. If the similarity between the comparison image and the reference image is used instead, the objects are determined to be the same when the similarity is equal to or greater than the threshold, and not the same when it is below the threshold.
- in one-to-one matching (FIG. 3), three-dimensional data is first input by the three-dimensional data input means 10 (step 100).
- next, a posture candidate group {e_j} is determined by the posture candidate determination means 20 (step 110).
- the comparison image generation means 40 generates a comparison image close to the reference image R while projecting the three-dimensional data onto a two-dimensional image according to the posture candidate (step 120).
- the image matching means 55 obtains the distance value between the comparison image and the reference image (step 130). Further, by selecting the comparison image having the smallest distance value, the optimal posture is estimated and the minimum distance value between the three-dimensional data and the reference image R is obtained (step 140).
- here, the posture candidate having the smallest distance value is selected from a group of posture candidates determined in advance; alternatively, the posture candidates may be varied successively so as to search for the posture with the smallest distance value. A sketch of such a candidate group follows.
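A sketch of a predetermined posture candidate group, under the assumption that a posture is parameterized by three rotation angles; the grid ranges and step sizes are invented for illustration:

```python
import itertools

def make_pose_candidates(yaw=(-30, 31, 15), pitch=(-20, 21, 10), roll=(-10, 11, 10)):
    # Posture candidate group {e_j}: every combination of yaw/pitch/roll on a
    # coarse grid (translation and scale are omitted for brevity).
    return list(itertools.product(range(*yaw), range(*pitch), range(*roll)))

# The successive-search alternative mentioned above would instead refine the
# grid around the best candidate found so far, e.g. by halving the step size.
```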
- finally, the posture estimation/collation means 50 compares the minimum distance value with the threshold to determine whether or not the objects are the same (step 155).
- in one-to-N matching (FIG. 4), three-dimensional data is likewise first input by the three-dimensional data input means 10 (step 100).
- the posture estimation/collation means 50 increments the image number k by 1 (step 151) and compares it with the number of images M (the number of reference images) (step 152). If k is equal to or less than M, the process returns to step 110, the same processing is performed, and the minimum distance value of the next reference image is calculated. Finally, when the image number k exceeds the number of images M in step 152, the reference image R_k having the smallest minimum distance value is set as the matching result (step 153).
- as a specific example, the reference image storage unit 30 stores a reference image R_k(r) for each object k (r is an index of a pixel or a feature), as shown in FIG. 5.
- the shooting conditions of the reference images, such as posture, are not necessarily the same (differences in lighting conditions are not shown in the figure).
- although one reference image is used for each object here, a plurality of reference images may be used.
- the three-dimensional data as shown in FIG. 6 is input from the three-dimensional data input means 10 (step 100 in FIG. 4).
- the three-dimensional data consists of the shape P(x, y, z) of the object surface in three-dimensional space (x, y, z) and the texture T(R, G, B) at each point on the object surface. In addition, learning CG images are generated in advance from the three-dimensional data by computer graphics under various lighting conditions, and a base image group is obtained by performing principal component analysis on the learning CG images.
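A hedged sketch of this preparation step: the base image group is taken to be the leading principal components of the learning CG images, and a comparison image's illumination is then fitted to a reference image by linear least squares. The choice of nine basis images is a common heuristic, not a value from the patent.

```python
import numpy as np

def lighting_basis(cg_images, n_basis=9):
    # cg_images: (n_images, n_pixels) renderings of the 3D data under many
    # lighting conditions. PCA of these yields the "base image group".
    mean = cg_images.mean(axis=0)
    _, _, vt = np.linalg.svd(cg_images - mean, full_matrices=False)
    return mean, vt[:n_basis]          # principal components as base images

def fit_illumination(reference, mean, basis):
    # Least-squares fit of basis coefficients so a rendered comparison image
    # approaches the lighting condition of the reference image.
    coeffs, *_ = np.linalg.lstsq(basis.T, reference - mean, rcond=None)
    return mean + basis.T @ coeffs
```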
- a posture candidate group {e_j} is determined by the posture candidate determination means 20 (step 110).
- the posture candidate group may be set in advance irrespective of the reference image.
- alternatively, reference points such as the eyes, nose, and mouth may be extracted manually or automatically from the reference image and the three-dimensional data, and the posture may be estimated from their correspondence.
- an approximate posture may also be estimated and stored in advance by collating the reference image with representative three-dimensional data (a representative model) prepared in advance, instead of using the input three-dimensional data.
- the comparison image generation means 40 generates comparison images G_kj(r) whose lighting conditions are close to those of the reference image R_k while projecting the three-dimensional data onto a two-dimensional image in accordance with each posture candidate e_j (step 120).
- the image matching means 55 obtains the distance value between each comparison image and the reference image (step 130); for example, the Euclidean distance can be used.
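With R_k(r) the reference image, G_kj(r) the comparison image generated for posture candidate e_j, and r indexing pixels, one form consistent with the surrounding description (the exact printed expression is an assumption) is:

```latex
D_{kj} = \sum_{r} \left\{ R_k(r) - G_{kj}(r) \right\}^2 , \qquad
D_k = \min_{j} D_{kj}
```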
- next, the posture estimation/collation means 50 estimates the optimal posture by selecting the comparison image having the smallest distance value, and obtains the minimum distance value D_k between the three-dimensional data and the reference image R_k (step 140). In the case of FIG. 7, the comparison image having the smallest distance value is selected.
- the image number k is incremented by 1 (step 151) and compared with the number of images M (step 152). If k is equal to or less than M, the process returns to step 110, the same processing is performed, and the minimum distance value of the next reference image is calculated. Finally, when the image number k exceeds the number of images M in step 152, the reference image having the smallest minimum distance value is set as the matching result (step 153). In the case of the three-dimensional data of FIG. 6, for example, if the minimum distance values to the reference images R_k are {20, 50, 25}, the first reference image R_1 of FIG. 5 is selected as the matching result.
- the distance value between the comparison image and the reference image is determined, but a similarity may be used instead of the distance value.
- the similarity can be obtained by the above-described calculation method as an example.
- in that case, the reference image having the largest maximum similarity is set as the matching result.
- as described above, in the present embodiment, the three-dimensional data of the object is measured and compared with the reference images while correcting for posture and lighting conditions, so that collation and search can be performed with high accuracy even when the reference images of the objects are captured under different conditions such as posture and lighting.
- also, since the three-dimensional data of the object is measured at the time of matching and compared directly against the reference images, collation and search can be performed with high accuracy even when a three-dimensional object model of each object cannot be obtained in advance or when only one or a few reference images exist.
- FIG. 8 is a block diagram showing a second embodiment of the present invention.
- the same parts as those in FIG. 1 are denoted by the same reference numerals.
- the posture estimating / collating means 51 includes a posture candidate determining means 20, a comparative image generating means 40, an image comparing means 55, and a score correcting means 60.
- the difference from FIG. 1 is that a score correction unit 60 and a reference correction coefficient storage unit 65 are added.
- each of these means operates as follows. The three-dimensional data input unit 10, the reference image storage unit 30, the posture candidate determination unit 20, the comparison image generation unit 40, and the image matching unit 55 perform the same processing as in the first embodiment shown in FIG. 1.
- in the reference correction coefficient storage unit 65, a coefficient for correcting the matching score (distance value or similarity) corresponding to each reference image is stored in advance.
- the posture estimation/matching means 51 obtains the minimum distance value (or maximum similarity) between the three-dimensional data input from the three-dimensional data input means 10 and the reference image obtained from the reference image storage unit 30.
- the minimum distance value is corrected using the correction coefficient obtained from the reference correction coefficient storage unit 65.
- the posture candidate determination means 20 generates at least one posture candidate.
- the comparative image generation means 40 generates a comparative image close to the reference image while projecting the three-dimensional data onto a two-dimensional image according to the posture candidate.
- the image matching means 55 calculates the distance value between the comparison image and the reference image and, by selecting the comparison image with the smallest distance value for each reference image, estimates the optimal posture and obtains the minimum distance value between the three-dimensional data and the reference image.
- the score correcting means 60 corrects the minimum distance value using a correction coefficient corresponding to the reference image. Furthermore, in the case of matching against one object (one-to-one matching), the corrected minimum distance value is compared with a threshold to determine whether or not the objects are the same. In the case of searching for the object (reference image) closest to the input three-dimensional data among a plurality of objects (one-to-N matching), the reference image having the smallest corrected minimum distance value is selected.
- in the one-to-N matching of the second embodiment (FIG. 9), three-dimensional data is first input by the three-dimensional data input means 10 (step 100).
- the score correction means 60 corrects the minimum distance value using the correction coefficient corresponding to the reference image R_k (step 160).
- the posture estimation/collation means 51 increments the image number k by 1 (step 151) and compares it with the number of images M (step 152). If the image number k is equal to or less than M, the process returns to step 110, the same processing is performed, the minimum distance value of the next reference image is calculated, and that minimum distance value is corrected using the corresponding correction coefficient. Finally, the reference image R_k having the smallest corrected minimum distance value is set as the matching result (step 153).
- steps 100, 110, 120, 130, 140, and 160 in FIG. 9 are performed in the same manner as described above; a specific example follows.
- the reference image storage unit 30 stores the reference images R_k(r) as shown in FIG. 5.
- the reference correction coefficient storage unit 65 stores correction coefficients as shown in Table 1.
- the correction coefficient A_k is determined in advance, for example by matching representative three-dimensional data (a representative three-dimensional object model) against each reference image R_k and setting A_k from the resulting distance values. The values of A_k in Table 1 reflect, for example, that the reference image R_1 was photographed under poor conditions and has a large average distance value, so it is assigned a small coefficient.
- the posture candidate determining means 20, the comparison image generating means 40, and the image matching means 55 estimate the optimal posture and simultaneously determine the minimum distance value between the three-dimensional data and the reference image R_k (steps 110-140).
- the score correction means 60 corrects the minimum distance value using the correction coefficient corresponding to the reference image R_k; the corrected minimum distance value is D'_k = A_k D_k (step 160).
- the posture estimation/collation means 51 increments the image number k by 1 (step 151) and compares it with the number of images M (step 152). If the image number k is equal to or less than M, the process returns to step 110, the same processing is performed, the minimum distance value of the next reference image is calculated, and that value is likewise corrected using the correction coefficient corresponding to the reference image. Finally, when the image number k exceeds the number of images M in step 152, the reference image R_k having the smallest corrected minimum distance value is set as the matching result (step 153).
- for example, if the minimum distance values for the reference images R_k are {40, 60, 25}, the corrected minimum distance values become {16, 30, 25}, and the reference image R_1 is selected as the matching result (without the correction, the reference image R_3 would have been selected).
- as described above, in the present embodiment, the three-dimensional data of the object is measured and compared with the reference images while correcting for posture and lighting conditions, so that collation and search can be performed with high accuracy even when the reference images of the objects are captured under different conditions such as posture and lighting.
- also, since the three-dimensional data of the object is measured at the time of matching and compared directly against the reference images, collation and search can be performed with high accuracy even when a three-dimensional object model of each object cannot be obtained in advance or when only one or a few reference images exist.
- furthermore, since the variation in matching score caused by the shooting conditions of each reference image is corrected, matching can be performed with high accuracy even when the shooting conditions of the reference images differ or some reference images have poor image quality.
- in the above description, the correction coefficient A_k is stored and the distance value is multiplied by A_k to compensate; however, the distribution of minimum distance values obtained in advance for each reference image may be stored in its entirety, or only its parameters may be stored. For example, assuming a normal distribution, the average value E_k and the standard deviation σ_k of the distance values may be stored as the parameters.
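A small sketch covering both correction variants. The multiplicative form and the example numbers follow the text; the z-score normalization for the stored-parameter variant is an assumption suggested by the normal-distribution remark, and the coefficient values below are merely back-derived from the quoted example.

```python
def corrected_distance(d_min, coeff=None, mean=None, std=None):
    # Score correction means: either multiply by a per-reference coefficient
    # A_k, or (assumed form) normalize with stored parameters E_k and sigma_k.
    if coeff is not None:
        return coeff * d_min            # D'_k = A_k * D_k
    return (d_min - mean) / std         # assumed: D'_k = (D_k - E_k) / sigma_k

# Minimum distances {40, 60, 25} with coefficients {0.4, 0.5, 1.0} reproduce
# the corrected values {16, 30, 25} quoted in the running example.
print([corrected_distance(d, coeff=a)
       for d, a in zip([40, 60, 25], [0.4, 0.5, 1.0])])   # [16.0, 30.0, 25.0]
```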
- FIG. 10 is a block diagram showing a third embodiment of the present invention.
- the same parts as those in FIG. 1 are given the same reference numerals.
- the third embodiment is composed of the three-dimensional data input unit 10, the reference image storage unit 30, the posture estimation/collation unit 52, the three-dimensional reference point extraction unit 12, the standard three-dimensional reference point storage unit 72, the standard three-dimensional weighting coefficient storage unit 75, and the reference weighting coefficient storage unit 77.
- the posture estimating / collating means 52 includes the posture candidate determining means 20, the comparative image generating means 40, the image comparing means 56, and the input weight coefficient converting means 70.
- each of these means operates roughly as follows. First, the three-dimensional data input unit 10, the reference image storage unit 30, the posture candidate determination unit 20, and the comparison image generation unit 40 perform the same processing as that of the first embodiment.
- the standard three-dimensional reference point storage unit 72 stores standard three-dimensional reference points corresponding to the standard three-dimensional object model.
- the standard three-dimensional weight coefficient storage unit 75 stores standard three-dimensional weight coefficients.
- the reference weight coefficient storage unit 77 stores a weight coefficient corresponding to the reference image.
- the three-dimensional reference point extracting means 12 manually or automatically extracts three-dimensional reference points from the three-dimensional data obtained from the three-dimensional data input means 10.
- the posture estimation/matching means 52 obtains the minimum distance value (or maximum similarity) between the three-dimensional data obtained from the three-dimensional data input means 10 and the reference image obtained from the reference image storage unit 30, using the weighting coefficient corresponding to the input data obtained from the input weight coefficient conversion means 70 and the weighting coefficient corresponding to the reference image obtained from the reference weight coefficient storage unit 77.
- the posture candidate determination means 20 generates at least one posture candidate.
- the comparison image generation means 40 generates a comparison image close to the reference image while projecting the three-dimensional data onto a two-dimensional image according to the posture candidate.
- the input weight coefficient conversion means 70 uses the standard three-dimensional reference points obtained from the standard three-dimensional reference point storage unit 72 and the three-dimensional reference points of the three-dimensional data obtained from the three-dimensional reference point extraction means 12 to establish the correspondence between the coordinates of the standard three-dimensional weighting coefficient obtained from the standard three-dimensional weighting coefficient storage unit 75 and the coordinates of the three-dimensional data obtained from the three-dimensional data input means 10, and converts the standard three-dimensional weighting coefficient into a two-dimensional weighting coefficient according to the posture candidate.
- the image matching means 56 includes a calculation unit 56a, a selection unit 56b, and a matching unit 56c shown in FIG. 11.
- the image matching means 56 calculates, in the calculation unit 56a, the distance value between the comparison image and the reference image using the weighting coefficient corresponding to the input three-dimensional data obtained from the input weight coefficient conversion means 70 and the weighting coefficient corresponding to the reference image obtained from the reference weight coefficient storage unit 77. The selection unit 56b then selects the comparison image with the smallest distance value for each reference image, thereby estimating the optimal posture and obtaining the minimum distance value between the three-dimensional data and the reference image.
- in one-to-one matching, the collation unit 56c compares the minimum distance value with a threshold to determine whether or not the objects are the same; in the process of searching for the object (reference image) closest to the input three-dimensional data among a plurality of objects (one-to-N matching), it selects the reference image having the smallest minimum distance value.
- three-dimensional data is input by the three-dimensional data input means 10 (step 100).
- the three-dimensional reference point extracting means 12 manually or automatically extracts three-dimensional reference points from the three-dimensional data (step 170).
- a posture candidate group {e_j} is determined by the posture candidate determination means 20 (step 110).
- the comparison image generation means 40 generates a comparison image close to the reference image R while projecting the three-dimensional data onto a two-dimensional image according to the posture candidate (step 120).
- next, the input weight coefficient conversion means 70 uses the standard three-dimensional reference points and the three-dimensional reference points of the three-dimensional data to find the correspondence between the standard three-dimensional weighting coefficient and the coordinates of the three-dimensional data, and converts the standard three-dimensional weighting coefficient into a two-dimensional weighting coefficient according to the posture candidate (step 180), as sketched below.
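A hedged sketch of this conversion. The affine fit stands in for the interpolation/extrapolation of reference-point correspondences described later; `std_weight_at` (a lookup into the standard three-dimensional weight map) is an assumed helper, and the final projection reuses the toy render() from the earlier sketch.

```python
import numpy as np

def align_weights(std_ref_pts, data_ref_pts, std_weight_at, data_points):
    # Fit an affine map taking the data's 3D reference points onto the
    # standard 3D reference points, then look up each data vertex's weight
    # in the standard three-dimensional weight map.
    n = len(data_ref_pts)
    X = np.hstack([data_ref_pts, np.ones((n, 1))])         # (n, 4)
    A, *_ = np.linalg.lstsq(X, std_ref_pts, rcond=None)    # affine, (4, 3)
    mapped = np.hstack([data_points, np.ones((len(data_points), 1))]) @ A
    return np.array([std_weight_at(p) for p in mapped])    # per-vertex weights

# The two-dimensional weight W_kj(r) then comes from projecting the per-vertex
# weights with the same posture candidate as the comparison image, e.g.
# w2d = render(points, vertex_weights, pose_deg).
```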
- the image matching means 56 calculates the distance value between the comparison image and the reference image using the weighting coefficient corresponding to the input three-dimensional data obtained by the input weight coefficient conversion means 70 and the weighting coefficient corresponding to the reference image obtained from the reference weight coefficient storage unit 77 (step 131). Further, by selecting the comparison image having the smallest distance value for each reference image, the optimal posture is estimated and the minimum distance value between the three-dimensional data and the reference image is obtained (step 140).
- the posture estimation / matching means 52 increments the image number k by 1 (step 151), compares the image number k with the number M of images (step 152), and when the image number k is equal to or less than the number M of images, Returning to step 110, the same processing is performed, and the minimum distance value of the next reference image is calculated. Finally, when the image number k becomes equal to or larger than the number M of images, the reference image R having the smallest minimum distance value is set as the comparison result (step 153).
- in the case of one-to-one matching, step 155 of FIG. 3 is then performed: as described above, it is determined whether or not the objects are the same by comparing the minimum distance value with the threshold value.
- as a specific example, the reference image storage unit 30 stores the reference images R_k(r) as shown in FIG. 5.
- the standard three-dimensional reference point storage unit 72 stores standard three-dimensional reference points N_i^0 (i is the index of the reference point) corresponding to the standard three-dimensional object model, as shown in FIG. 13.
- the three-dimensional reference points are points used for alignment; the example of FIG. 13 shows five points: the midpoint of the left eye, the midpoint of the right eye, the tip of the nose, the left corner of the mouth, and the right corner of the mouth.
- the three-dimensional reference points may be set manually in advance, or may be set automatically using, for example, the method of Marugame et al., "Extraction of features from facial three-dimensional data using shape information and color information together," FIT (Information Science and Technology Forum) 2002, G100, pp. 199-200, September 2002.
- the standard three-dimensional reference points can be obtained as the average coordinates of the three-dimensional reference points of learning three-dimensional object models prepared in advance, or as the three-dimensional reference points of a standard three-dimensional object model obtained by averaging the learning three-dimensional object models.
- the standard three-dimensional weighting coefficient storage unit 75 stores a standard three-dimensional weighting coefficient V^0 as shown in FIG. 14.
- the standard three-dimensional weighting coefficient is calculated from the three-dimensional weighting coefficients of learning three-dimensional object models prepared in advance: each learning model's weighting coefficients are aligned so that its three-dimensional reference points match the standard three-dimensional reference points, and the aligned coefficients are then averaged. For each point other than a reference point, the correspondence between the coordinate values {s, t} of a three-dimensional weighting coefficient and the coordinates of the standard three-dimensional weighting coefficient is determined by interpolating or extrapolating the correspondence of the reference points. The three-dimensional weighting coefficients themselves can be learned in advance using learning images obtained by photographing the objects of the learning three-dimensional object models under various conditions.
- the learning 3D object model is used as input 3D data, and the learning image is used as a reference image.
- the error between each pixel of the generated comparison image and the reference image is obtained.
- the weighting factor is an amount indicating the importance of a pixel in collation. For example, a pixel having a small average error can be set to a large weight.
- the three-dimensional weighting coefficient can be set by mapping the error of each pixel between the comparison image and the reference image onto the three-dimensional object model, based on the correspondence between the pixels of the comparison image and the three-dimensional object model, and averaging to obtain an average error.
- the reference weight coefficient storage unit 77 stores a weighting coefficient U_k(r) corresponding to each reference image, as shown in FIG. 15; areas with U_k(r) = 1, areas with U_k(r) = 0, and hatched areas with intermediate values 0 < U_k(r) < 1 are shown. The weighting coefficient corresponding to a reference image is set manually or automatically in advance, for example by setting the weight of areas other than the face area to 0, or by reducing the weight of areas whose luminance values are very large or very small.
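One plausible way to construct such a reference weighting coefficient; the thresholds and the penalty factor are illustrative assumptions:

```python
import numpy as np

def reference_weight(image, face_mask, lo=20, hi=235, penalty=0.5):
    # U_k(r): zero weight outside the face area, reduced weight where the
    # luminance is very dark or saturated (unreliable pixels).
    u = face_mask.astype(float)                  # 1 inside the face, 0 outside
    u[(image < lo) | (image > hi)] *= penalty    # down-weight extreme luminance
    return u
```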
- FIG. 16 shows an example of the three-dimensional reference points extracted from the three-dimensional data of FIG. 6.
- a posture candidate group {e_j} is determined by the posture candidate determination means 20 (step 110).
- the comparison image generation means 40 generates a comparison image G_kj(r) close to the reference image R_k while projecting the three-dimensional data onto a two-dimensional image according to each posture candidate (step 120).
- FIG. 7 shows an example of the comparison images generated for a reference image R_k.
- the input weight coefficient conversion means 70 uses the standard three-dimensional reference points and the three-dimensional reference points of the three-dimensional data to determine the correspondence between the standard three-dimensional weighting coefficient and the coordinates of the three-dimensional data, and converts the standard three-dimensional weighting coefficient V^0 into a two-dimensional weighting coefficient W_kj(r) according to the posture candidate (step 180).
- FIG. 17 shows an example of the two-dimensional weighting coefficients generated corresponding to the comparison images of FIG. 7.
- the image matching means 56 calculates a weighted distance value D'_kj between the comparison image and the reference image using the weighting coefficient W_kj(r) corresponding to the input data obtained from the input weight coefficient conversion means 70 and the reference weighting coefficient U_k(r) obtained from the reference weight coefficient storage unit 77 (step 131). Further, by selecting the comparison image having the smallest distance value for each reference image, the optimal posture is estimated and the minimum distance value between the three-dimensional data and the reference image is obtained (step 140).
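With W_kj(r) the two-dimensional weighting coefficient and U_k(r) the reference weighting coefficient, a weighted distance consistent with this description (the exact expression and any normalization are assumptions) is:

```latex
D'_{kj} = \sum_{r} W_{kj}(r)\, U_k(r) \left\{ R_k(r) - G_{kj}(r) \right\}^2
```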
- alternatively, only one of the weighting coefficients W_kj(r) and U_k(r) may be used.
- finally, the posture estimation/matching means 52 increments the image number k by 1 (step 151) and compares it with the number of images M (step 152); if k is equal to or less than M, the process returns to step 110 and the same processing is performed to calculate the minimum distance value of the next reference image. When the image number k exceeds the number of images M in step 152, the reference image R_k having the smallest minimum distance value is set as the matching result (step 153).
- as described above, in the present embodiment, the three-dimensional data of the object is measured and compared with the reference images while correcting for posture and lighting conditions, so that collation and search can be performed with high accuracy even when the reference images are captured under different conditions such as posture and lighting. Also, since the three-dimensional data is measured at the time of matching and compared directly against the reference images, high-accuracy collation and search are possible even when a three-dimensional object model of each object cannot be obtained in advance or when only one or a few reference images exist. Further, since image matching is performed as weighted matching with weighting coefficients that depend on the part of the object, collation and search can be performed with still higher accuracy.
- the number of the standard three-dimensional weighting coefficients (and the standard three-dimensional reference points) is described as one, but there may be a plurality.
- information on which standard three-dimensional weighting factor is used for each reference image is stored in advance.
- the standard three-dimensional weighting coefficient is not limited to one obtained by calculating the average pixel error between the generated comparison images and the learning images; other methods of setting it may be used.
- in the above description, the weighted distance is also used during posture estimation; alternatively, the optimal posture may be obtained using a distance calculation without weighting coefficients, and the weighted distance then recalculated for the estimated posture.
- FIG. 18 is a block diagram showing a fourth embodiment of the present invention.
- the same parts as those in FIG. 1 of the first embodiment are denoted by the same reference numerals.
- the fourth embodiment is composed of the three-dimensional data input unit 10, the reference image storage unit 30, the posture estimation/matching unit 53, the representative three-dimensional object model storage unit 36, the three-dimensional matching unit 80, the group storage unit 85, and the reference image selection means 82.
- the posture estimating / collating means 53 includes a posture candidate determining means 20, a comparative image generating means 40, and an image comparing means 55.
- each of these means operates roughly as follows. The three-dimensional data input unit 10, the reference image storage unit 30, the posture candidate determination unit 20, the comparison image generation unit 40, and the image matching unit 55 perform the same processing as in the first embodiment shown in FIG. 1.
- the representative three-dimensional object model storage unit 36 stores a representative three-dimensional object model prepared in advance.
- the group storage unit 85 stores in advance information related to the representative three-dimensional object model and the reference image (information for associating the representative three-dimensional object model with the reference image).
- the three-dimensional collation means 80 matches the three-dimensional data obtained from the three-dimensional data input means 10 against each representative three-dimensional object model obtained from the representative three-dimensional object model storage unit 36, and selects the most similar representative three-dimensional object model.
- the reference image selection means 82 selects a reference image group corresponding to the selected representative three-dimensional object model obtained by the three-dimensional collation means 80 from the related information obtained from the group storage part 85.
- the posture estimation/matching means 53 obtains the minimum distance value (or maximum similarity) between the three-dimensional data obtained from the three-dimensional data input means 10 and each reference image obtained from the reference image storage unit 30, and selects the reference image having the smallest minimum distance value; the reference images considered are limited to the reference image group obtained by the reference image selection means 82, as sketched below.
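A miniature sketch of this two-stage flow; `similarity_3d`, `min_distance`, and the group-table format are assumed placeholders for the three-dimensional collation means 80 and the posture estimation/matching means 53:

```python
import numpy as np

def two_stage_match(data_3d, rep_models, groups, reference_images,
                    similarity_3d, min_distance):
    # Stage 1: pick the most similar representative 3D object model.
    h = int(np.argmax([similarity_3d(data_3d, c) for c in rep_models]))
    # Stage 2: pose-estimating matching only over its reference image group.
    candidates = groups[h]                    # e.g. groups = {0: [0, 2], 1: [1]}
    dists = {k: min_distance(data_3d, reference_images[k]) for k in candidates}
    return min(dists, key=dists.get)          # reference with smallest distance
```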
- three-dimensional data is input by the three-dimensional data input means 10 (step 100 in FIG. 19).
- next, the three-dimensional matching means 80 calculates the similarity S_h between the three-dimensional data and each representative three-dimensional object model C_h (step 210).
- the model number h is incremented by 1 (step 211) and compared with the number of models H (step 212). If the model number h is equal to or less than H, the process returns to step 210, the same processing is performed, and the similarity with the next representative three-dimensional object model is calculated.
- when matching with all the representative three-dimensional object models is completed in step 212, the model C_h having the highest similarity is selected (step 221). Next, the reference image selection means 82 selects the reference image group corresponding to the selected representative three-dimensional object model from the related information obtained from the group storage unit 85 (step 230). Step 230 in FIG. 19 is followed by step 150 in FIG. 20.
- if the reference image R_k is included in the selected reference image group (step 240), the process proceeds to step 110; if not, the process proceeds to step 151.
- in step 110, the posture candidate determining means 20, the comparison image generating means 40, and the image matching means 55 perform the same processing as in the first embodiment to estimate the optimal posture and obtain the minimum distance value between the three-dimensional data and the reference image R_k (steps 110-140).
- the posture estimation/matching means 53 increments the image number k by 1 (step 151) and compares it with the number of images M (step 152); if k is equal to or less than M, the process returns to step 240 and the same processing is performed. Finally, when the image number k exceeds the number of images M, the reference image R_k having the smallest minimum distance value is set as the matching result (step 153).
- as a specific example, the reference image storage unit 30 stores the reference images R_k(r) as shown in FIG. 5.
- the representative three-dimensional object model storage unit 36 stores representative three-dimensional object models C_h as shown in FIG. 21. The group storage unit 85 stores, as shown in Table 2, the image numbers of the top candidates (the reference image group) obtained when the reference images are matched using each representative three-dimensional object model. This matching can be performed, for example, by inputting each representative three-dimensional object model C_h into the image matching system of the first embodiment.
- the model number h is incremented by 1 (step 211) and compared with the number of models H (step 212). If h is equal to or less than H, the process returns to step 210, the same processing is performed, and the similarity with the next representative three-dimensional object model is calculated.
- when matching with all the representative three-dimensional object models is completed in step 212, the model C_h having the highest similarity is selected (step 221).
- the reference image selection means 82 selects the reference image group corresponding to the selected representative three-dimensional object model from the list obtained from the group storage unit 85, shown in Table 2.
- if the reference image R_k is included in the selected reference image group (step 240), the process proceeds to step 110; if not, the process proceeds to step 151.
- in step 110, the posture candidate determining means 20, the comparison image generating means 40, and the image matching means 55 perform the same processing as in the first embodiment to estimate the optimal posture and obtain the minimum distance value between the three-dimensional data and the reference image R_k (steps 110-140).
- the posture estimation/matching means 53 increments the image number k by 1 (step 151) and compares it with the number of images M (step 152); if k is equal to or less than M, the process returns to step 240 and the same processing is performed. The minimum distance value is thus computed only for the reference images in the selected group, and the reference image R_k having the smallest minimum distance value is set as the matching result.
- as described above, in the present embodiment, the three-dimensional data of the object is measured and compared with the reference images while correcting for posture and lighting conditions, so that collation and search can be performed with high accuracy even when the reference images are captured under different conditions such as posture and lighting. Also, since the three-dimensional data is measured at the time of matching and compared directly against the reference images, high-accuracy collation and search are possible even when a three-dimensional object model of each object cannot be obtained in advance or when only one or a few reference images exist. In addition, since the reference images to be matched are pre-selected using the representative three-dimensional object models, the search can be performed at high speed.
- one representative three-dimensional object model is described as being selected, but a plurality of representative three-dimensional object models may be selected.
- in that case, the union of the reference image groups corresponding to the selected representative three-dimensional object models is used as the reference image group.
- FIG. 22 is a block diagram showing a configuration of the fifth exemplary embodiment of the present invention.
- the same parts as those in FIGS. 1 and 18 are denoted by the same reference numerals.
- the fifth embodiment is composed of the three-dimensional data input means 10, the reference image storage unit 30, the posture estimation/matching unit 53, the representative image storage unit 31, the second posture estimation/matching unit (representative image selection unit) 54, the group storage unit 86, and the reference image selection means 82.
- the posture estimation/collation means 53 and 54 each include the posture candidate determination means 20, the comparison image generation means 40, and the image collation means 55.
- each of these means operates roughly as follows. The three-dimensional data input unit 10, the reference image storage unit 30, the posture candidate determination unit 20, the comparison image generation unit 40, and the image matching unit 55 perform the same processing as in the first embodiment shown in FIG. 1.
- the representative image storage unit 31 stores representative images prepared in advance. A representative image may be one of the reference images in the reference image storage unit 30, or a new image generated, for example, by averaging reference images. If it is one of the reference images, only its image number need be stored, and the image itself can be referenced from the reference image storage unit 30.
- in the group storage unit 86, information relating the representative images to the reference images (information for associating each representative image with a group of reference images) is stored in advance.
- the second posture estimation/matching means 54 matches the three-dimensional data obtained from the three-dimensional data input means 10 against each representative image obtained from the representative image storage unit 31, and selects the most similar representative image.
- the reference image selection means 82 selects a reference image group corresponding to the selected representative image obtained by the second posture estimation / collation means 54 from the related information obtained from the group storage section 86.
- the posture estimation/collation means 53 obtains the minimum distance value (or maximum similarity) between the three-dimensional data obtained from the three-dimensional data input means 10 and each reference image obtained from the reference image storage unit 30, and selects the reference image having the smallest minimum distance value; the reference images considered are limited to the reference image group obtained by the reference image selection means 82.
- three-dimensional data is input by the three-dimensional data input means 10 (step 100 in FIG. 23).
- next, the second posture estimation/matching means 54 obtains the similarity S_h between the three-dimensional data and each representative image R'_h (step 215).
- the image number h is incremented by 1 (step 216) and compared with the number of representative images H (step 217). If h is equal to or less than H, the process returns to step 215, the same processing is performed, and the similarity with the next representative image is calculated. When all representative images have been processed, the representative image R'_h having the highest similarity is selected (step 226).
- the reference image selecting means 82 selects a reference image group corresponding to the selected representative image from the related information obtained from the group storage section 86 (step 235).
- step 235 in FIG. 23 is followed by step 150 in FIG. 20.
- In step 240, if the reference image R_k is included in the selected reference image group, the process proceeds to step 110; if it is not included, the process proceeds to step 151.
- In step 110, the posture candidate determination means 20, the comparison image generation means 40, and the image matching means 55 perform the same processing as in the first embodiment to estimate the optimal posture and obtain the minimum distance value between the three-dimensional data and the reference image R_k (steps 110-140).
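- The minimum distance value of steps 110-140 can be sketched as a search over posture candidates. This is an illustrative stand-in only: `render` here merely shifts pixels, whereas the embodiment generates comparison images from the 3D shape and texture under each posture (and illumination) candidate.

```python
import numpy as np

# Invented grid of posture candidates; means 20 would propose real 3D postures.
POSE_CANDIDATES = [(dy, dx) for dy in (-2, 0, 2) for dx in (-2, 0, 2)]

def render(data_3d, pose):
    """Hypothetical stand-in for comparison image generation (means 40):
    shift the projected image by the candidate's pixel offsets."""
    dy, dx = pose
    return np.roll(data_3d, shift=(dy, dx), axis=(0, 1))

def min_distance(data_3d, reference_image, pose_candidates):
    """Minimum distance value over posture candidates (steps 110-140): the
    posture whose comparison image is closest to the reference image wins."""
    return min(
        float(np.mean((render(data_3d, pose) - reference_image) ** 2))
        for pose in pose_candidates
    )
```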
- The posture estimation/matching means 53 then increments the image number k by 1 (step 151) and compares it with the number of reference images M (step 152). If k is equal to or smaller than M, the process returns to step 240 and the same processing is performed to obtain the minimum distance value with the next reference image. Finally, if k exceeds M in step 152, the reference image R_k having the smallest minimum distance value is set as the matching result (step 153).
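- Continuing the sketches above, the whole two-stage flow of FIG. 23 reduces to a few lines. The `similarity` measure is an assumed stand-in (plain correlation), and iterating over the group directly replaces the explicit k-loop with its membership test at step 240.

```python
import numpy as np

def similarity(data_3d, image):
    """Stand-in for the similarity used by the second posture
    estimation/matching means 54 (step 225): plain pixel correlation."""
    return float(np.corrcoef(data_3d.ravel(), image.ravel())[0, 1])

def match(data_3d):
    # Stage 1 (steps 225-226): select the most similar representative image.
    best_h = max(representative_images,
                 key=lambda h: similarity(data_3d, representative_images[h]))
    # Stage 2 (step 235): restrict the search to that representative's group.
    candidates = group_table[best_h]
    # Steps 240-153: minimum distance value only over the selected group.
    return min(candidates,
               key=lambda k: min_distance(data_3d, reference_images[k],
                                          POSE_CANDIDATES))

probe = np.random.default_rng(1).random((32, 32))  # stand-in for input 3D data
print("matched reference image number:", match(probe))
```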
- As a concrete example, suppose the reference image storage unit 30 stores reference images R_k and the representative image storage unit 31 stores representative images R'_h, as shown in the corresponding figures.
- The group storage unit 86 stores, as shown in Table 4, the image numbers of the top candidates (the reference image group) obtained when the reference images are matched against each representative image. For this matching, an existing image matching system such as those described in Patent Documents 1 and 2 can be used.
- In step 217, if the image number h is equal to or smaller than the number of representative images H, the process returns to step 225, and the similarity with the next representative image is computed in the same way.
- Next, the representative image having the highest similarity is selected (step 226). For example, if the similarities to the representative images R'_h are {0.7, 0.9}, the representative image R'_2 is selected.
- Next, according to the correspondence shown in Table 4, the reference image selection means 82 selects the reference image group (two reference images) corresponding to R'_2 (step 235). Thereafter, the processing in FIG. 20 is performed.
- In step 240, if the reference image R_k is included in the selected reference image group, the process proceeds to step 110; if it is not included, the process proceeds to step 151.
- In step 110, the posture candidate determination means 20, the comparison image generation means 40, and the image matching means 55 perform the same processing as in the first embodiment to estimate the optimal posture and obtain the minimum distance value between the three-dimensional data and the reference image R_k (steps 110-140).
- The posture estimation/matching means 53 increments the image number k by 1 (step 151) and compares it with the number of reference images M (step 152). If k is equal to or smaller than M, the process returns to step 240 and the same processing is performed.
- As a result, minimum distance values are computed only for the two reference images in the selected group.
- Finally, the reference image having the smallest minimum distance value is set as the matching result (step 153).
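- Tracing the sketch above on this example's numbers: with S_1 = 0.7 and S_2 = 0.9, stage 1 selects R'_2, and stage 2 computes minimum distance values only for the reference images in R'_2's group (the group members in our sketch are invented; Table 4 defines the real ones).

```python
similarities = {1: 0.7, 2: 0.9}                    # S_1 and S_2 from the example
best_h = max(similarities, key=similarities.get)   # -> 2, i.e. R'_2 is selected
candidates = group_table[best_h]                   # R'_2's group (invented members)
# Only these image numbers enter the loop of steps 240-153; every other
# reference image is skipped, which is what makes the search fast.
```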
- In this example, one representative image is selected, but a plurality of representative images may be selected instead. In that case, the union of the reference image groups corresponding to the selected representative images is used as the reference image group, as in the sketch below.
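- In terms of the sketch above, merging the groups is a one-line set union (the selected numbers below are hypothetical):

```python
selected = [1, 2]  # image numbers of several selected representative images
merged_group = sorted(set().union(*(group_table[h] for h in selected)))
print(merged_group)  # union of the corresponding reference image groups
```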
- As described above, in the present embodiment, the three-dimensional data of an object is measured at the time of matching and compared with the reference images while correcting for posture and illumination conditions, so matching and retrieval can be performed with high accuracy even when the reference images were captured under different conditions. Moreover, because the measured three-dimensional data is compared directly with the reference images, high-accuracy matching and retrieval are possible even when a three-dimensional object model of each object cannot be obtained in advance and only one or a few reference images per object are available. Furthermore, since the reference images are first narrowed down by matching against the representative images, the search can be performed at high speed.
- In each of the above embodiments, the three-dimensional data contains, as information, the shape and texture of the object surface in three-dimensional space (x, y, z). However, the present invention is not limited to this representation as long as equivalent information can be obtained; for example, a distance image expressing the distance to the object surface as seen from a certain direction, together with a texture image captured from that direction, may be used instead, as in the sketch below.
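- A minimal container for the two equivalent representations might look as follows; this is our own illustration, not a data format defined by the invention.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SurfaceData:
    """3D shape and texture in object space (x, y, z), as in the embodiments."""
    points: np.ndarray   # (N, 3) surface coordinates
    texture: np.ndarray  # (N, 3) RGB texture sampled at each surface point

@dataclass
class DepthTexturePair:
    """Equivalent information seen from one direction: a distance image plus
    a texture image captured from that same direction."""
    depth: np.ndarray    # (H, W) distance to the object surface
    texture: np.ndarray  # (H, W, 3) texture image from the same viewpoint
```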
- Needless to say, the functions of the respective means constituting the image matching system of the present invention can be realized in hardware; they can also be realized by loading an image matching program (application) that executes the functions of the respective means of the first to fifth embodiments described above into the memory of a computer processing device and controlling that device.
- The image matching program is stored on a magnetic disk, semiconductor memory, or other recording medium, is loaded from the recording medium into the computer processing device, and realizes the functions described above by controlling the operation of the computer processing device.
- The present invention can be suitably applied to an image matching system that searches a database for images of objects such as human faces, and to a program that realizes such an image matching system on a computer.
- The present invention can also be applied to searching for images of objects such as human faces on a network or the Internet, and is furthermore suitable for determining whether an image such as an identification photograph and the person presenting it show the same person, among other uses.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Geometry (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
Description
Claims
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| AU2004282790A AU2004282790A1 (en) | 2003-10-21 | 2004-10-21 | Image collation system and image collation method |
| CN2004800308445A CN1871622B (zh) | 2003-10-21 | 2004-10-21 | 图像比较系统和图像比较方法 |
| EP04792761A EP1677250B9 (en) | 2003-10-21 | 2004-10-21 | Image collation system and image collation method |
| JP2005514861A JP4556873B2 (ja) | 2003-10-21 | 2004-10-21 | 画像照合システム及び画像照合方法 |
| US10/576,498 US7715619B2 (en) | 2003-10-21 | 2004-10-21 | Image collation system and image collation method |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2003360713 | 2003-10-21 | ||
| JP2003-360713 | 2003-10-21 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2005038716A1 true WO2005038716A1 (ja) | 2005-04-28 |
Family
ID=34463419
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2004/015612 Ceased WO2005038716A1 (ja) | 2003-10-21 | 2004-10-21 | 画像照合システム及び画像照合方法 |
Country Status (7)
| Country | Link |
|---|---|
| US (1) | US7715619B2 (ja) |
| EP (2) | EP1677250B9 (ja) |
| JP (1) | JP4556873B2 (ja) |
| KR (1) | KR100816607B1 (ja) |
| CN (1) | CN1871622B (ja) |
| AU (1) | AU2004282790A1 (ja) |
| WO (1) | WO2005038716A1 (ja) |
Families Citing this family (43)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2004008392A1 (ja) * | 2002-07-10 | 2004-01-22 | Nec Corporation | 3次元物体モデルを用いた画像照合システム、画像照合方法及び画像照合プログラム |
| KR100831187B1 (ko) * | 2003-08-29 | 2008-05-21 | 닛본 덴끼 가부시끼가이샤 | 웨이팅 정보를 이용하는 객체 자세 추정/조합 시스템 |
| AU2004212605A1 (en) * | 2003-09-26 | 2005-04-14 | Nec Australia Pty Ltd | Computation of soft bits for a turbo decoder in a communication receiver |
| EP1722331B1 (en) * | 2004-03-03 | 2010-12-01 | NEC Corporation | Image similarity calculation system, image search system, image similarity calculation method, and image similarity calculation program |
| JP4216824B2 (ja) * | 2005-03-07 | 2009-01-28 | 株式会社東芝 | 3次元モデル生成装置、3次元モデル生成方法および3次元モデル生成プログラム |
| US7720316B2 (en) * | 2006-09-05 | 2010-05-18 | Microsoft Corporation | Constraint-based correction of handwriting recognition errors |
| CN101785025B (zh) * | 2007-07-12 | 2013-10-30 | 汤姆森特许公司 | 用于从二维图像进行三维对象重构的系统和方法 |
| CN101350016B (zh) * | 2007-07-20 | 2010-11-24 | 富士通株式会社 | 三维模型检索装置及方法 |
| JP2009054018A (ja) * | 2007-08-28 | 2009-03-12 | Ricoh Co Ltd | 画像検索装置、画像検索方法及びプログラム |
| KR100951890B1 (ko) * | 2008-01-25 | 2010-04-12 | 성균관대학교산학협력단 | 상황 모니터링을 적용한 실시간 물체 인식 및 자세 추정 방법 |
| JP5176572B2 (ja) * | 2008-02-05 | 2013-04-03 | ソニー株式会社 | 画像処理装置および方法、並びにプログラム |
| US8401276B1 (en) * | 2008-05-20 | 2013-03-19 | University Of Southern California | 3-D reconstruction and registration |
| US20100118037A1 (en) * | 2008-09-08 | 2010-05-13 | Apple Inc. | Object-aware transitions |
| US7721209B2 (en) * | 2008-09-08 | 2010-05-18 | Apple Inc. | Object-aware transitions |
| JP4963306B2 (ja) | 2008-09-25 | 2012-06-27 | 楽天株式会社 | 前景領域抽出プログラム、前景領域抽出装置、及び前景領域抽出方法 |
| US9495583B2 (en) | 2009-01-05 | 2016-11-15 | Apple Inc. | Organizing images by correlating faces |
| US8503720B2 (en) | 2009-05-01 | 2013-08-06 | Microsoft Corporation | Human body pose estimation |
| KR101068465B1 (ko) * | 2009-11-09 | 2011-09-28 | 한국과학기술원 | 삼차원 물체 인식 시스템 및 방법 |
| JP5560722B2 (ja) * | 2010-01-12 | 2014-07-30 | セイコーエプソン株式会社 | 画像処理装置、画像表示システム、および画像処理方法 |
| JP5434708B2 (ja) * | 2010-03-15 | 2014-03-05 | オムロン株式会社 | 照合装置、デジタル画像処理システム、照合装置制御プログラム、コンピュータ読み取り可能な記録媒体、および照合装置の制御方法 |
| JP5045827B2 (ja) * | 2011-02-01 | 2012-10-10 | カシオ計算機株式会社 | 画像処理装置、画像処理方法、及び、プログラム |
| US8942917B2 (en) | 2011-02-14 | 2015-01-27 | Microsoft Corporation | Change invariant scene recognition by an agent |
| JP5467177B2 (ja) * | 2011-05-31 | 2014-04-09 | 楽天株式会社 | 情報提供装置、情報提供方法、情報提供処理プログラム、情報提供処理プログラムを記録した記録媒体、及び情報提供システム |
| JP6058256B2 (ja) * | 2011-06-13 | 2017-01-11 | アルパイン株式会社 | 車載カメラ姿勢検出装置および方法 |
| US9644942B2 (en) * | 2012-11-29 | 2017-05-09 | Mitsubishi Hitachi Power Systems, Ltd. | Method and apparatus for laser projection, and machining method |
| DE102012113009A1 (de) * | 2012-12-21 | 2014-06-26 | Jenoptik Robot Gmbh | Verfahren zum automatischen Klassifizieren von sich bewegenden Fahrzeugen |
| US9857470B2 (en) | 2012-12-28 | 2018-01-02 | Microsoft Technology Licensing, Llc | Using photometric stereo for 3D environment modeling |
| US9940553B2 (en) * | 2013-02-22 | 2018-04-10 | Microsoft Technology Licensing, Llc | Camera/object pose from predicted coordinates |
| GB2536493B (en) * | 2015-03-20 | 2020-11-18 | Toshiba Europe Ltd | Object pose recognition |
| CN105654048A (zh) * | 2015-12-30 | 2016-06-08 | 四川川大智胜软件股份有限公司 | 一种多视角人脸比对方法 |
| CN107305556A (zh) * | 2016-04-20 | 2017-10-31 | 索尼公司 | 用于3d打印的装置及方法 |
| CN106017420B (zh) * | 2016-05-24 | 2019-03-29 | 武汉轻工大学 | 焊接衬垫片的姿态识别方法及识别装置 |
| US10089756B2 (en) * | 2016-06-30 | 2018-10-02 | Zhiping Mu | Systems and methods for generating 2D projection from previously generated 3D dataset |
| US11443233B2 (en) * | 2017-02-21 | 2022-09-13 | Nec Corporation | Classification apparatus, classification method, and program |
| EP3460756B1 (en) * | 2017-07-24 | 2021-02-17 | HTC Corporation | Tracking system and method thereof |
| CN108062390B (zh) * | 2017-12-15 | 2021-07-23 | 广州酷狗计算机科技有限公司 | 推荐用户的方法、装置和可读存储介质 |
| US11681303B2 (en) * | 2018-07-06 | 2023-06-20 | Verity Ag | Methods and systems for estimating the orientation of an object |
| US11521460B2 (en) | 2018-07-25 | 2022-12-06 | Konami Gaming, Inc. | Casino management system with a patron facial recognition system and methods of operating same |
| AU2019208182B2 (en) | 2018-07-25 | 2021-04-08 | Konami Gaming, Inc. | Casino management system with a patron facial recognition system and methods of operating same |
| CN109708649B (zh) * | 2018-12-07 | 2021-02-09 | 中国空间技术研究院 | 一种遥感卫星的姿态确定方法及系统 |
| US11803585B2 (en) * | 2019-09-27 | 2023-10-31 | Boe Technology Group Co., Ltd. | Method and apparatus for searching for an image and related storage medium |
| CN112135121B (zh) * | 2020-09-17 | 2023-04-28 | 中国信息通信研究院 | 智能视频监控识别性能评价系统及方法 |
| CN113469134B (zh) * | 2021-07-27 | 2025-06-17 | 浙江大华技术股份有限公司 | 动作识别方法、装置、电子设备及存储介质 |
Family Cites Families (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH04119475A (ja) | 1990-09-10 | 1992-04-20 | Nippon Telegr & Teleph Corp <Ntt> | 三次元形状識別装置 |
| US5555316A (en) * | 1992-06-30 | 1996-09-10 | Matsushita Electric Industrial Co., Ltd. | Inspecting apparatus of mounting state of component or printing state of cream solder in mounting line of electronic component |
| KR100201739B1 (ko) * | 1995-05-18 | 1999-06-15 | 타테이시 요시오 | 물체 관측 방법 및 그 방법을 이용한 물체 관측장치와,이 장치를 이용한 교통흐름 계측장치 및 주차장 관측장치 |
| JPH0991436A (ja) | 1995-09-21 | 1997-04-04 | Toyota Central Res & Dev Lab Inc | 画像処理方法及びその装置 |
| US6002782A (en) * | 1997-11-12 | 1999-12-14 | Unisys Corporation | System and method for recognizing a 3-D object by generating a 2-D image of the object from a transformed 3-D model |
| JP3417377B2 (ja) | 1999-04-30 | 2003-06-16 | 日本電気株式会社 | 三次元形状計測方法及び装置並びに記録媒体 |
| JP3926059B2 (ja) | 1999-05-12 | 2007-06-06 | 日本電気株式会社 | 画像照合装置及びその画像照合方法並びにその制御プログラムを記録した記録媒体 |
| JP4341135B2 (ja) * | 2000-03-10 | 2009-10-07 | コニカミノルタホールディングス株式会社 | 物体認識装置 |
| US6956569B1 (en) * | 2000-03-30 | 2005-10-18 | Nec Corporation | Method for matching a two dimensional image to one of a plurality of three dimensional candidate models contained in a database |
| US6580821B1 (en) | 2000-03-30 | 2003-06-17 | Nec Corporation | Method for computing the location and orientation of an object in three dimensional space |
| JP2001283216A (ja) * | 2000-04-03 | 2001-10-12 | Nec Corp | 画像照合装置、画像照合方法、及びそのプログラムを記録した記録媒体 |
| JP2002024830A (ja) | 2000-07-05 | 2002-01-25 | Nec Corp | 画像照合装置、方法及びコンピュータ読み取り可能な記憶媒体 |
| JP4573085B2 (ja) | 2001-08-10 | 2010-11-04 | 日本電気株式会社 | 位置姿勢認識装置とその位置姿勢認識方法、及び位置姿勢認識プログラム |
| JP3880818B2 (ja) | 2001-08-30 | 2007-02-14 | シャープ株式会社 | メモリ膜、メモリ素子、半導体記憶装置、半導体集積回路および携帯電子機器 |
| US7853085B2 (en) * | 2003-03-06 | 2010-12-14 | Animetrics, Inc. | Viewpoint-invariant detection and identification of a three-dimensional object from two-dimensional imagery |
- 2004
- 2004-10-21 US US10/576,498 patent/US7715619B2/en not_active Expired - Lifetime
- 2004-10-21 JP JP2005514861A patent/JP4556873B2/ja not_active Expired - Lifetime
- 2004-10-21 AU AU2004282790A patent/AU2004282790A1/en not_active Abandoned
- 2004-10-21 EP EP04792761A patent/EP1677250B9/en not_active Expired - Lifetime
- 2004-10-21 EP EP12164443.9A patent/EP2479726B9/en not_active Expired - Lifetime
- 2004-10-21 KR KR1020067007592A patent/KR100816607B1/ko not_active Expired - Fee Related
- 2004-10-21 CN CN2004800308445A patent/CN1871622B/zh not_active Expired - Lifetime
- 2004-10-21 WO PCT/JP2004/015612 patent/WO2005038716A1/ja not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH10232934A (ja) * | 1997-02-18 | 1998-09-02 | Toshiba Corp | 顔画像登録装置及びその方法 |
| JPH11238135A (ja) * | 1998-02-23 | 1999-08-31 | Sony Corp | イメージ認識方法およびイメージ認識装置 |
| JP2000306106A (ja) * | 1999-02-15 | 2000-11-02 | Medeikku Engineering:Kk | 3次元有向体の定位方法及び画像処理装置 |
| JP2001134765A (ja) * | 1999-11-09 | 2001-05-18 | Canon Inc | 画像検索方法及び装置 |
| JP2001222716A (ja) * | 2000-02-08 | 2001-08-17 | Minolta Co Ltd | 人物認証方法および同装置 |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| GB2411532B (en) * | 2004-02-11 | 2010-04-28 | British Broadcasting Corp | Position determination |
| JP2006338313A (ja) * | 2005-06-01 | 2006-12-14 | Nippon Telegr & Teleph Corp <Ntt> | 類似画像検索方法,類似画像検索システム,類似画像検索プログラム及び記録媒体 |
| JP2008040774A (ja) * | 2006-08-07 | 2008-02-21 | Fujitsu Ltd | 形状データ検索プロブラム及び方法 |
| WO2010122721A1 (ja) * | 2009-04-22 | 2010-10-28 | 日本電気株式会社 | 照合装置、照合方法および照合プログラム |
| US8958609B2 (en) | 2009-04-22 | 2015-02-17 | Nec Corporation | Method and device for computing degree of similarly between data sets |
| JP2014517380A (ja) * | 2011-04-28 | 2014-07-17 | コーニンクレッカ フィリップス エヌ ヴェ | 顔の位置検出 |
| US9582706B2 (en) | 2011-04-28 | 2017-02-28 | Koninklijke Philips N.V. | Face location detection |
| US9740914B2 (en) | 2011-04-28 | 2017-08-22 | Koninklijke Philips N.V. | Face location detection |
| JP2017120632A (ja) * | 2015-12-30 | 2017-07-06 | ダッソー システムズDassault Systemes | 探索のための3dから2dへの再画像化 |
| WO2017149755A1 (ja) * | 2016-03-04 | 2017-09-08 | 楽天株式会社 | 検索装置、検索方法、プログラム、ならびに、非一時的なコンピュータ読取可能な情報記録媒体 |
Also Published As
| Publication number | Publication date |
|---|---|
| EP1677250B9 (en) | 2012-10-24 |
| CN1871622A (zh) | 2006-11-29 |
| JPWO2005038716A1 (ja) | 2007-01-25 |
| KR100816607B1 (ko) | 2008-03-24 |
| KR20060058147A (ko) | 2006-05-29 |
| EP1677250A1 (en) | 2006-07-05 |
| EP2479726B9 (en) | 2013-10-23 |
| EP2479726A1 (en) | 2012-07-25 |
| EP2479726B1 (en) | 2013-07-10 |
| US7715619B2 (en) | 2010-05-11 |
| EP1677250B1 (en) | 2012-07-25 |
| JP4556873B2 (ja) | 2010-10-06 |
| EP1677250A4 (en) | 2011-03-16 |
| US20070031001A1 (en) | 2007-02-08 |
| AU2004282790A1 (en) | 2005-04-28 |
| CN1871622B (zh) | 2010-07-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2005038716A1 (ja) | 画像照合システム及び画像照合方法 | |
| JP4553141B2 (ja) | 重み情報を用いた物体姿勢推定・照合システム | |
| JP4692773B2 (ja) | 物体の姿勢推定及び照合システム、物体の姿勢推定及び照合方法、並びにそのためのプログラム | |
| US7894636B2 (en) | Apparatus and method for performing facial recognition from arbitrary viewing angles by texturing a 3D model | |
| JP4466951B2 (ja) | 立体結合顔形状の位置合わせ | |
| CN109299643B (zh) | 一种基于大姿态对准的人脸识别方法及系统 | |
| JP3926059B2 (ja) | 画像照合装置及びその画像照合方法並びにその制御プログラムを記録した記録媒体 | |
| JP2005339288A (ja) | 画像処理装置及びその方法 | |
| JP2008176645A (ja) | 3次元形状処理装置、3次元形状処理装置の制御方法、および3次元形状処理装置の制御プログラム | |
| US20100098301A1 (en) | Method and Device for Recognizing a Face and Face Recognition Module | |
| JP4141090B2 (ja) | 画像認識装置、陰影除去装置、陰影除去方法及び記録媒体 | |
| CN110990604A (zh) | 图像底库生成方法、人脸识别方法和智能门禁系统 | |
| JP4816874B2 (ja) | パラメータ学習装置、パラメータ学習方法、およびプログラム | |
| JP7643096B2 (ja) | 認識装置、ロボット制御システム、認識方法、およびプログラム | |
| González-Jiménez et al. | Automatic pose correction for local feature-based face authentication | |
| WO2022190533A1 (ja) | テンプレート生成装置、照合システム、照合装置、テンプレート生成方法、照合方法およびプログラム |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WWE | Wipo information: entry into national phase |
Ref document number: 200480030844.5 Country of ref document: CN |
|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| WWE | Wipo information: entry into national phase |
Ref document number: 2007031001 Country of ref document: US Ref document number: 2005514861 Country of ref document: JP Ref document number: 1020067007592 Country of ref document: KR Ref document number: 10576498 Country of ref document: US |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2004792761 Country of ref document: EP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2004282790 Country of ref document: AU |
|
| ENP | Entry into the national phase |
Ref document number: 2004282790 Country of ref document: AU Date of ref document: 20041021 Kind code of ref document: A |
|
| WWP | Wipo information: published in national office |
Ref document number: 2004282790 Country of ref document: AU |
|
| WWP | Wipo information: published in national office |
Ref document number: 1020067007592 Country of ref document: KR |
|
| WWP | Wipo information: published in national office |
Ref document number: 2004792761 Country of ref document: EP |
|
| WWP | Wipo information: published in national office |
Ref document number: 10576498 Country of ref document: US |