WO2009150361A2 - Procede et dispositif de reconnaissance invariante-affine de formes - Google Patents
- Publication number
- WO2009150361A2 WO2009150361A2 PCT/FR2009/050923 FR2009050923W WO2009150361A2 WO 2009150361 A2 WO2009150361 A2 WO 2009150361A2 FR 2009050923 W FR2009050923 W FR 2009050923W WO 2009150361 A2 WO2009150361 A2 WO 2009150361A2
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- tilt
- image
- digital image
- sifs
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/754—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries involving a deformation of the sample pattern or of the reference pattern; Elastic matching
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/11—Technique with transformation invariance effect
Definitions
- the present invention relates to a method and a device for recognizing objects in at least one digital image.
- a pattern recognition method is intended to recognize an object or type of object that has been photographed when the relative position of the object and the actual or simulated camera is unknown, or when the object itself is possibly deformed.
- the object itself can be a graphic and non-physical object (such as a digital logo or the result of a simulation).
- the shooting device may be a real camera or a simulation of one.
- the invention relates to any acquisition of images, and any distortion or geometric deformation of the view of an object driven by the change of position relative to the object of the camera, or by the peculiarities of the device for acquiring or simulating images.
- the objects photographed or simulated do not need to be identical; it is enough that they are similar, a common situation for objects resulting from industrial or graphic production.
- One or more images of the object to be recognized are available: these are the "request” images.
- the image or images where the object is searched do not necessarily contain it. The goal is to find reliable clues as to whether the object is present in these analyzed images, and to give its position in the image.
- the first simplification proposed by all methods dealing with the recognition problem is to assume that the object has a sufficiently regular relief that the local deformations in the target images can be interpreted as plane affine deformations of the request image.
- Most of the physical objects of interest are indeed volumes whose surface has flat or slightly curved faces. Exceptions are rare.
- An example of an exception is a leafless tree, whose appearance can change drastically by a change of viewing angle, or the eddy of a liquid.
- any deformation that is regular in the mathematical sense of the term (differentiable) is, locally in the image, close to an affine deformation.
- the distortion of its image caused by a change of position of the camera observing it is a planar homography, which is at every point tangent to an affine map. If, in addition, the camera is far enough from the observed object, this distortion of the image looks more and more like a global affine transformation.
- any affine transform of the image plane with positive determinant can be interpreted as a distortion of the image due to the movement in space of a camera observing the image from far away (virtually at infinity).
- the problem of pattern recognition can be reduced to the search for local features of images which are invariant modulo an affine transformation. These characteristics are then robust to the apparent local deformations caused by the relative movements of the object and the camera, as well as to the distortions caused by the acquisition device, such as for example the optical distortion of a lens, and finally distortions due to deformations of the object itself.
- “tilt” and “digital” are terms commonly used by those skilled in the art.
- SIF and SIFT are abbreviations known by those skilled in the art, meaning respectively “scale invariant feature” and “scale invariant feature transform”.
- the invention also aims to allow object recognition in an image for which the shooting is oblique, in comparison with a frontal shot facing the object, or also oblique.
- the invention therefore aims to improve the recognition rate regardless of the shots.
- At least one of the above-mentioned objects is reached with a method for the recognition of objects in at least one digital image in which: a) from said digital image, a plurality of digital rotations and at least two digital tilts different from 1 are simulated, so as to produce, for each rotation-tilt pair, a simulated image, and b) an algorithm producing values invariant by translation, rotation and zoom is applied to the simulated images so as to determine local characteristics called SIFs ("scale invariant features") which are used for object recognition.
- each position of the camera is defined by a rotation-tilt pair
- those skilled in the art will readily understand that other more or less complex transformations can be used to define a position of the camera.
- the invention is remarkable in that any change of orientation of the axis of the camera can be reduced to a rotation followed by a tilt.
- the method according to the present invention is based on the observation that any affine transformation of the plane can be interpreted as a transformation of the image due to a change of position of a camera to infinity. Thanks to this interpretation one can decompose an affine transformation into the product:
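The factors of this product appear to have been lost in extraction; in the standard notation for this kind of decomposition (the symbols below are chosen to match the parameters named in the surrounding lines), a positive-determinant affine map A factors as:

```latex
A \;=\; \lambda \, R_1(\psi)\, T_t \, R_2(\phi)
\;=\; \lambda
\begin{pmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{pmatrix}
\begin{pmatrix} t & 0 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix},
\qquad \lambda > 0,\; t \ge 1 .
```

Here λ is the zoom, ψ the rotation of the camera around its optical axis, t the tilt associated with the latitude, and φ the longitude.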
- Prior art algorithms make it possible to recognize an image for which the first three transformations are arbitrary. They correspond to the four parameters of axial rotation of the camera, zoom, and translation parallel to the focal plane (and therefore perpendicular to the optical axis).
- the SIFT method makes it possible to determine SIFs ("scale invariant features"), that is to say more precisely invariant characteristics by zoom, translation and rotation of the image, but does not take into account the last two parameters relating to the optical axis direction change of the camera.
- Lowe provides additional views to improve the sensitivity of the SIFT method, but these are real views, which involves additional manipulations as well as a considerable increase in the data to be processed.
- Pritchard provides for only four simulated images, because going beyond was considered counterproductive and prohibitive in terms of computing time. The present invention thus goes against a generally accepted prejudice according to which the computation time would be prohibitive if the number of simulated images were increased.
- with the method according to the present invention, it is possible to simulate with sufficient accuracy all the distortions of the image due to the variations of the two parameters not processed in the SIFT method, which are the parameters of change of direction of the optical axis of the camera.
- several simulated images are first produced according to said last two parameters which are described by a rotation and a tilt.
- rotation-tilt pairs can fit in a half-sphere above the digital image.
- Rotation and tilt are considered to correspond to longitude and latitude in space, respectively.
- Pritchard actually describes four rotations and a single tilt value from a frontal image.
- the initial images can be obtained by non-frontal shots, i.e. oblique up to about 80 degrees.
- the systems of the prior art make it possible to recognize objects with a tolerance for changes in camera axis orientation resulting in real tilts up to 3 or 4.
- a tilt much greater than 2, for example up to 30 or more, is possible, and the method according to the invention makes it possible to recognize such oblique views one from the other.
- This process is therefore capable of recognizing all the possible views of the image taken from infinity, since the simulated views only require a recognition algorithm invariant by translation, rotation and zoom, a well-mastered problem in the state of the art, which knows how to compute SIFs.
- the method is applied to a so-called request image and a so-called target image, the SIFs of the simulated images of the request being compared to the SIFs of the simulated images of the target so as to recognize similar or identical objects between the query and the target.
- the SIFs relating to the request can be determined during a preliminary calibration step to form a dictionary of SIFs. The SIFs relating to the targets can then be determined during an operating step, during which the SIFs obtained from each target are compared with the SIFs of said dictionary.
- the method according to the invention is carried out in which the request contains any shot of an object of similar or identical shape to the shape of another object contained in the target under any shot, and optimal rotation-tilt pairs are determined, i.e. the optimum number and the optimal positions are those for which the SIFs of the two objects are similar, for a large number of objects tested.
- the method according to the invention provides for producing the same number of simulated images for the request and for the target, and for the same rotation-tilt pairs. But it also covers the case where a different number of simulated images is produced for the query and for the target, in particular with different or identical tilts.
- the number of rotations per tilt increases as the tilt value increases.
- the tilt is defined as a function of the latitude in a half-sphere above the digital image, and the difference in latitude between two consecutive tilts decreases as the tilt increases.
- the tilts considered form, approximately (that is to say within a tolerance), a finite geometric sequence 1, a, a², a³, ..., aⁿ, a being a number greater than 1.
- a is of the order of the square root of 2 (√2), and n can range from 2 to 6 if the rotation-tilt pairs are applied both to the target and to the query, and from 2 to 12 if the rotation-tilt pairs are applied to only one of the two images.
- b is of the order of 72 degrees
- k is the last integer value such that kb / t is less than 180 degrees.
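Taken together, the bullets above specify a sampling grid of rotation-tilt pairs. A minimal sketch of that grid, assuming a = √2, n = 5 and b = 72° as suggested in the text (the function name is ours):

```python
import math

def rotation_tilt_pairs(a=math.sqrt(2), n=5, b=72.0):
    """Sample the half-sphere of camera directions: tilts form the geometric
    sequence 1, a, a^2, ..., a^n, and for each tilt t > 1 the longitudes are
    0, b/t, 2*b/t, ... for every integer k such that k*b/t < 180 degrees."""
    pairs = [(1.0, 0.0)]              # frontal view: tilt 1, rotation irrelevant
    for i in range(1, n + 1):
        t = a ** i
        step = b / t                  # rotations get denser as the tilt grows
        phi = 0.0
        while phi < 180.0 - 1e-6:     # epsilon guards float accumulation
            pairs.append((t, phi))
            phi += step
    return pairs
```

With these defaults the grid holds a few tens of simulated views, consistent with the order of magnitude stated later in the text.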
- applying a tilt t consists in subsampling the digital image in one direction by a factor equal to t, which divides its area by t.
- a tilt can also be applied by combining subsampling of the digital image in one direction with oversampling in a direction orthogonal to the previous one.
- the method according to the invention takes a time comparable to the SIFT method, for example, while allowing the recognition of oblique views up to a transition tilt of 16.
- a tilt can be simulated by combining an oversampling in one direction and a subsampling in the orthogonal direction, so that the surface of the image remains constant instead of decreasing (see the definition of tilt further on).
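A minimal sketch of the directional subsampling described above, assuming a grayscale image stored as a 2-D array, an integer tilt t ≥ 2 applied along x, and a Gaussian antialiasing filter of standard deviation 0.8√(t²−1) (a common choice in the literature; the function name and constant are our assumptions):

```python
import numpy as np

def apply_tilt(img, t):
    """Simulate a tilt t along x: blur along x to avoid aliasing, then keep
    every t-th column, which divides the image area by t (integer t >= 2)."""
    t = int(t)
    sigma = 0.8 * np.sqrt(t * t - 1.0)
    # build a normalized 1-D Gaussian kernel
    radius = max(1, int(3 * sigma))
    xs = np.arange(-radius, radius + 1)
    kernel = np.exp(-xs**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # convolve each row (direction x), keeping the image size
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img.astype(float))
    return blurred[:, ::t]            # directional subsampling by factor t
```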
- simulating all the views depending on two parameters while maintaining a reasonable computation time and memory footprint is made possible by the fact that the space of the two parameters, rotation and tilt, is sampled with relatively few values for each parameter, and that simulating tilt distortions can decrease the size of the images by subsampling. This makes it possible to virtually generate all the possible views at a given accuracy without extending the required memory capacity too much.
- the method according to the invention can be applied to said digital image in comparison with the same digital image or a transformation thereof, for example an axial symmetry, so as to determine in this digital image symmetries, repeated forms or forms with periodicities.
- a device for implementing a method for recognizing objects in at least one digital image comprises a processing circuit configured to: a) apply, from said digital image, a plurality of digital rotations and at least two digital tilts t different from 1, so as to produce, for each rotation-tilt pair, a simulated image, and b) apply an algorithm producing values invariant by translation, rotation and zoom to the simulated images to determine local characteristics called SIFs ("scale invariant features") that are used for object recognition.
- This device advantageously comprises a memory space in which is stored a dictionary of SIFs; and the processing circuit is configured to compare the SIFs of said digital image (initial image) with the SIFs of said dictionary.
- the processing circuit can be configured to process an arbitrary number of images in parallel.
- Figure 1 is a general view of a device implementing the method according to the invention
- FIG. 2 is a flowchart illustrating in a simplified manner steps of the method according to the invention.
- Figure 3 is a general view illustrating the four main parameters describing the positions of a camera;
- Figure 4 is a general view illustrating multiple comparisons between simulated images;
- FIG. 5 is a general view illustrating a sphere in which rotation-tilt pairs are inscribed
- FIG. 6 is a general view illustrating a distribution of the positions of the simulated tilts and rotations on the sphere of FIG. 5;
- Figures 7 and 8 are views illustrating the difference between absolute tilts and relative tilts, or transition tilts.
- a processing unit 1 such as a computer with software and hardware means necessary for its proper operation. It comprises in particular a processing circuit 2 such as a microprocessor or a dedicated microcontroller which is configured to process images according to the method according to the present invention. There is also a conventional memory space 3 capable of storing in particular SIFs in the form of a dictionary. This computer is equipped with a visualization monitor 4 on which the processed images can be displayed.
- a camera 5 is connected to the computer 1 via a connection cable, but other means of connection, in particular wireless ones, can be used. It is also possible to retrieve images previously acquired and stored in the permanent storage means of a desktop or laptop computer.
- the flowchart illustrates the parallel processing of the two images request 6 and target 10.
- a first simulated image is produced for a rotation-tilt pair (rotation at steps 7, 11 and tilt at steps 8, 12), and these steps are performed several times, for example p times, so as to generate p simulated images at 9 and 13.
- each of the images undergoes the same treatment, consisting in simulating all the possible distortions due to the changes of orientation of the axis of the camera, a two-parameter space whose parameters are called longitude and latitude.
- the angle theta θ is the latitude and the angle phi φ is the longitude.
- the output 15 may be a list (possibly empty) of sub-image pairs of the request and of the target where there is an object recognized on the two images, as well as the affine transformation identified as making it possible to transform one of the sub-images into the other.
- FIG. 3 illustrates the four main parameters inducing a deformation of the image taken by a camera: the camera can turn on itself by an angle psi ψ, its optical axis can adopt an angle theta θ (latitude) with respect to the frontal axis, and this inclination by the angle theta is made in a vertical plane making an angle phi φ (longitude) with a fixed direction.
- the method according to the invention makes it possible to generate all the affine deformations that would be due to the changes of direction of the axis of the camera at infinity frontally observing the plane image, these deformations therefore depending on the two parameters, the longitude and the latitude, which are sampled so that the number of views generated is a few tens.
- the simulated longitudes are more and more numerous as the latitude increases. But when the latitude increases, the images may also be more and more subsampled in one direction and therefore smaller and smaller, the subsampling rates then forming a geometric sequence.
- Longitude is described by a parameter ⁇ (see Figure 3).
- the values of tilt t are logarithmically scaled and those of ⁇ arithmetically.
- Transform A is a linear transform of the plane associated with a 2x2 matrix with four elements (a, b, c, d).
- the application u(x, y) → u(A(x, y)) is then interpreted as the deformation of the image that will be observed when the camera rotates around its optical axis by an angle psi ψ, slides along its optical axis while moving away (or getting closer if λ < 1) by a factor lambda λ, and its optical axis moves away from its frontal position by a combination of a change of latitude theta θ and a change of longitude phi φ.
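This interpretation can be checked numerically. A sketch that builds A as the product λ·R(ψ)·T_t·R(φ), with T_t = diag(t, 1) (function and variable names are ours, chosen for illustration):

```python
import math

def affine_from_camera(lam, psi, t, phi):
    """Build the 2x2 matrix A = lam * R(psi) * T_t * R(phi), where R is a
    plane rotation and T_t = diag(t, 1) is the tilt matrix."""
    def rot(a):
        c, s = math.cos(a), math.sin(a)
        return [[c, -s], [s, c]]
    def mul(m, n):
        return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    tilt = [[t, 0.0], [0.0, 1.0]]
    a = mul(rot(psi), mul(tilt, rot(phi)))
    return [[lam * a[i][j] for j in range(2)] for i in range(2)]
```

Since the two rotations have determinant 1 and the tilt matrix has determinant t, the determinant of A is λ²t, which is positive as required.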
- the camera can also begin to move by a translational movement perpendicular to its optical axis, resulting in a prior translation of the image (e, f) not taken into account in the previous formula.
- This translation (e, f), the zoom lambda λ and the rotation psi ψ are the four parameters mastered by the state of the art.
- the present invention relates to the manner of recognizing an image when it has also undergone the deformations caused by the changes of latitude and longitude.
- Figure 5 illustrates a sphere on which tilts and rotations are positioned. In this perspective figure we see the positions of the cameras that would be simulated for tilts 2, 2√2 and 4, respectively for angles 60°, 69.30° and 75.52°. There are more and more angles of rotation as the tilts increase.
- Figure 6 illustrates a distribution of the positions of the tilts and rotations.
- Each circle corresponds to a tilt.
- the points indicated therefore have for coordinates sin(θ)cos(φ) and sin(θ)sin(φ).
- the rectangles indicate the distortion of a square image caused by each tilt.
- the visual effect is that the image rotates on the computer screen by an angle φ. This operation simulates the effect that a rotation of the camera around its optical axis would have had when taking the image in front view.
- This operation simulates the result on an image u (x, y), assumed to be frontally observed by a camera at infinity, of the inclination in the x direction of the optical axis of the camera.
- the image u (x, y) is the front view and the image v (x, y) is the oblique view after tilt t of an angle ⁇ in the direction of x.
- This operation simulates the camera moving away from the image, the distance to the object before the move being in the ratio h to the distance after the move.
- the function G(x, y), often a Gaussian, simulates the optical convolution kernel of a camera.
- a digital zoom is obtained by simple interpolation. Zooming out, or reverse zoom, is a zoom by a ratio smaller than 1.
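A minimal sketch of such a digital zoom-out, assuming a grayscale array, an integer zoom-out factor h, and a Gaussian standing in for the optical kernel G(x, y) mentioned above (the function name and blur constant are our assumptions):

```python
import numpy as np

def zoom_out(img, h):
    """Digital zoom-out by an integer factor h: convolve with a Gaussian
    kernel standing in for the camera's optical kernel, then subsample
    uniformly in both directions."""
    h = int(h)
    sigma = 0.8 * h
    radius = max(1, int(3 * sigma))
    xs = np.arange(-radius, radius + 1)
    g = np.exp(-xs**2 / (2 * sigma**2))
    g /= g.sum()
    out = img.astype(float)
    # separable convolution: once along rows, once along columns
    out = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, out)
    return out[::h, ::h]
```

Unlike the tilt, which subsamples one direction only, the zoom-out reduces both directions by the same factor.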
- the tilt satisfies t = 1 / cos θ, θ being the latitude.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Medical Informatics (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2011510028A JP5442721B2 (ja) | 2008-05-19 | 2009-05-18 | 形状の不変量アフィン認識方法及びデバイス |
| EP09761912A EP2289026A2 (fr) | 2008-05-19 | 2009-05-18 | Procédé et dispositif de reconnaissance invariante-affine de formes |
| US12/993,499 US8687920B2 (en) | 2008-05-19 | 2009-05-18 | Method and device for the invariant-affine recognition of shapes |
| CN200980127996.XA CN102099815B (zh) | 2008-05-19 | 2009-05-18 | 用于形状的仿射不变识别的方法和装置 |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR0853244 | 2008-05-19 | ||
| FR0853244A FR2931277B1 (fr) | 2008-05-19 | 2008-05-19 | Procede et dispositif de reconnaissance invariante-affine de formes |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2009150361A2 true WO2009150361A2 (fr) | 2009-12-17 |
| WO2009150361A3 WO2009150361A3 (fr) | 2010-02-25 |
Family
ID=40352778
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/FR2009/050923 Ceased WO2009150361A2 (fr) | 2008-05-19 | 2009-05-18 | Procede et dispositif de reconnaissance invariante-affine de formes |
Country Status (7)
| Country | Link |
|---|---|
| US (1) | US8687920B2 (fr) |
| EP (1) | EP2289026A2 (fr) |
| JP (1) | JP5442721B2 (fr) |
| KR (1) | KR20110073386A (fr) |
| CN (1) | CN102099815B (fr) |
| FR (1) | FR2931277B1 (fr) |
| WO (1) | WO2009150361A2 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12462422B2 (en) | 2019-01-24 | 2025-11-04 | Panasonic Intellectual Property Management Co., Ltd. | Calibration method and calibration apparatus |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110054872A1 (en) * | 2009-08-31 | 2011-03-03 | Aptina Imaging Corporation | Optical simulator using parallel computations |
| KR101165357B1 (ko) * | 2011-02-14 | 2012-07-18 | (주)엔써즈 | 이미지 특징 데이터 생성 장치 및 방법 |
| CN102231191B (zh) * | 2011-07-17 | 2012-12-26 | 西安电子科技大学 | 基于asift的多模态图像特征提取与匹配方法 |
| US9020982B2 (en) * | 2012-10-15 | 2015-04-28 | Qualcomm Incorporated | Detection of planar targets under steep angles |
| CN103076943B (zh) * | 2012-12-27 | 2016-02-24 | 小米科技有限责任公司 | 一种图标的转换方法和图标的转换装置 |
| JP6014008B2 (ja) * | 2013-11-13 | 2016-10-25 | 日本電信電話株式会社 | 幾何検証装置、幾何検証方法及びプログラム |
| CN104156413A (zh) * | 2014-07-30 | 2014-11-19 | 中国科学院自动化研究所 | 一种基于商标密度的个性化商标匹配识别方法 |
| US9026407B1 (en) * | 2014-10-16 | 2015-05-05 | Christine Marie Kennefick | Method of making and using a material model of elements with planar faces |
| CN109801234B (zh) * | 2018-12-28 | 2023-09-22 | 南京美乐威电子科技有限公司 | 图像几何校正方法及装置 |
Family Cites Families (33)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5161204A (en) * | 1990-06-04 | 1992-11-03 | Neuristics, Inc. | Apparatus for generating a feature matrix based on normalized out-class and in-class variation matrices |
| US5903454A (en) * | 1991-12-23 | 1999-05-11 | Hoffberg; Linda Irene | Human-factored interface corporating adaptive pattern recognition based controller apparatus |
| JP3244798B2 (ja) * | 1992-09-08 | 2002-01-07 | 株式会社東芝 | 動画像処理装置 |
| US5911035A (en) * | 1995-04-12 | 1999-06-08 | Tsao; Thomas | Method and apparatus for determining binocular affine disparity and affine invariant distance between two image patterns |
| US5802525A (en) * | 1996-11-26 | 1998-09-01 | International Business Machines Corporation | Two-dimensional affine-invariant hashing defined over any two-dimensional convex domain and producing uniformly-distributed hash keys |
| US6005986A (en) * | 1997-12-03 | 1999-12-21 | The United States Of America As Represented By The National Security Agency | Method of identifying the script of a document irrespective of orientation |
| US6181832B1 (en) * | 1998-04-17 | 2001-01-30 | Mclean Hospital Corporation | Methods and systems for removing artifacts introduced by image registration |
| US6912293B1 (en) * | 1998-06-26 | 2005-06-28 | Carl P. Korobkin | Photogrammetry engine for model construction |
| US7016539B1 (en) * | 1998-07-13 | 2006-03-21 | Cognex Corporation | Method for fast, robust, multi-dimensional pattern recognition |
| US6711293B1 (en) | 1999-03-08 | 2004-03-23 | The University Of British Columbia | Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image |
| US6507661B1 (en) * | 1999-04-20 | 2003-01-14 | Nec Research Institute, Inc. | Method for estimating optical flow |
| US6975755B1 (en) * | 1999-11-25 | 2005-12-13 | Canon Kabushiki Kaisha | Image processing method and apparatus |
| US6771808B1 (en) * | 2000-12-15 | 2004-08-03 | Cognex Corporation | System and method for registering patterns transformed in six degrees of freedom using machine vision |
| US7142718B2 (en) * | 2002-10-28 | 2006-11-28 | Lee Shih-Jong J | Fast pattern searching |
| US7289662B2 (en) * | 2002-12-07 | 2007-10-30 | Hrl Laboratories, Llc | Method and apparatus for apparatus for generating three-dimensional models from uncalibrated views |
| JP4865557B2 (ja) * | 2003-08-15 | 2012-02-01 | スカーペ テクノロジーズ アクティーゼルスカブ | 有界三次元対象物の分類と空間ローカライゼーションのためのコンピュータ視覚システム |
| WO2005055138A2 (fr) * | 2003-11-26 | 2005-06-16 | Yesvideo, Inc. | Modelisation statistique en reponse-processus d'une image visuelle permettant de determiner la similarite entre des images visuelles et utilisation de cette similarite d'images pour generer une interaction avec une collection d'images visuelles |
| US7697792B2 (en) * | 2003-11-26 | 2010-04-13 | Yesvideo, Inc. | Process-response statistical modeling of a visual image for use in determining similarity between visual images |
| US7382897B2 (en) * | 2004-04-27 | 2008-06-03 | Microsoft Corporation | Multi-image feature matching using multi-scale oriented patches |
| US7493243B2 (en) * | 2004-12-27 | 2009-02-17 | Seoul National University Industry Foundation | Method and system of real-time graphical simulation of large rotational deformation and manipulation using modal warping |
| US7653264B2 (en) * | 2005-03-04 | 2010-01-26 | The Regents Of The University Of Michigan | Method of determining alignment of images in high dimensional feature space |
| US7929775B2 (en) * | 2005-06-16 | 2011-04-19 | Strider Labs, Inc. | System and method for recognition in 2D images using 3D class models |
| EP1736928A1 (fr) * | 2005-06-20 | 2006-12-27 | Mitsubishi Electric Information Technology Centre Europe B.V. | Recalage robuste d'images |
| US7454037B2 (en) * | 2005-10-21 | 2008-11-18 | The Boeing Company | System, method and computer program product for adaptive video processing |
| US7912321B1 (en) * | 2005-12-19 | 2011-03-22 | Sandia Corporation | Image registration with uncertainty analysis |
| EP1850270B1 (fr) * | 2006-04-28 | 2010-06-09 | Toyota Motor Europe NV | Détecteur robuste et descripteur de point d'intérêt |
| CN100587518C (zh) * | 2006-07-20 | 2010-02-03 | 中国科学院自动化研究所 | 遥感影像高精度控制点自动选择方法 |
| KR100796849B1 (ko) * | 2006-09-04 | 2008-01-22 | 삼성전자주식회사 | 휴대 단말기용 파노라마 모자이크 사진 촬영 방법 |
| CN100530239C (zh) * | 2007-01-25 | 2009-08-19 | 复旦大学 | 基于特征匹配与跟踪的视频稳定方法 |
| JP4668220B2 (ja) * | 2007-02-20 | 2011-04-13 | ソニー株式会社 | 画像処理装置および画像処理方法、並びにプログラム |
| KR100884904B1 (ko) * | 2007-09-12 | 2009-02-19 | 아주대학교산학협력단 | 평행 투영 모델을 이용한 자기위치 인식 방법 |
| US8090160B2 (en) * | 2007-10-12 | 2012-01-03 | The University Of Houston System | Automated method for human face modeling and relighting with application to face recognition |
| US8004576B2 (en) * | 2008-10-31 | 2011-08-23 | Digimarc Corporation | Histogram methods and systems for object recognition |
-
2008
- 2008-05-19 FR FR0853244A patent/FR2931277B1/fr not_active Expired - Fee Related
-
2009
- 2009-05-18 US US12/993,499 patent/US8687920B2/en not_active Expired - Fee Related
- 2009-05-18 EP EP09761912A patent/EP2289026A2/fr not_active Withdrawn
- 2009-05-18 WO PCT/FR2009/050923 patent/WO2009150361A2/fr not_active Ceased
- 2009-05-18 KR KR1020107028499A patent/KR20110073386A/ko not_active Ceased
- 2009-05-18 JP JP2011510028A patent/JP5442721B2/ja not_active Expired - Fee Related
- 2009-05-18 CN CN200980127996.XA patent/CN102099815B/zh not_active Expired - Fee Related
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12462422B2 (en) | 2019-01-24 | 2025-11-04 | Panasonic Intellectual Property Management Co., Ltd. | Calibration method and calibration apparatus |
Also Published As
| Publication number | Publication date |
|---|---|
| US20110069889A1 (en) | 2011-03-24 |
| US8687920B2 (en) | 2014-04-01 |
| FR2931277A1 (fr) | 2009-11-20 |
| KR20110073386A (ko) | 2011-06-29 |
| CN102099815B (zh) | 2014-09-17 |
| WO2009150361A3 (fr) | 2010-02-25 |
| JP2011521372A (ja) | 2011-07-21 |
| FR2931277B1 (fr) | 2010-12-31 |
| CN102099815A (zh) | 2011-06-15 |
| JP5442721B2 (ja) | 2014-03-12 |
| EP2289026A2 (fr) | 2011-03-02 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2009150361A2 (fr) | Procede et dispositif de reconnaissance invariante-affine de formes | |
| EP1421442B1 (fr) | Procede de capture d'une image panoramique au moyen d'un capteur d'image de forme rectangulaire | |
| Lumsdaine et al. | Full resolution lightfield rendering | |
| EP1386480B1 (fr) | Procede de capture et d'affichage d'une image panoramique numerique a resolution variable | |
| EP2923330B1 (fr) | Procede de reconstruction 3d et de mosaïquage 3d panoramique d'une scene | |
| EP0977148B1 (fr) | Dispositif électronique de recalage automatique d'images | |
| US20100033551A1 (en) | Content-Aware Wide-Angle Images | |
| EP2828834A2 (fr) | Modèle et procédé de production de modèle 3d photo-réalistes | |
| Jung et al. | Deep360Up: A deep learning-based approach for automatic VR image upright adjustment | |
| FR2955409A1 (fr) | Procede d'integration d'un objet virtuel dans des photographies ou video en temps reel | |
| CN101356546A (zh) | 图像高分辨率化装置、图像高分辨率化方法、图像高分辨率化程序以及图像高分辨率化系统 | |
| WO2005010820A2 (fr) | Procede et dispositif automatise de perception avec determination et caracterisation de bords et de frontieres d'objets d'un espace, construction de contours et applications | |
| Chandramouli et al. | Convnet-based depth estimation, reflection separation and deblurring of plenoptic images | |
| FR3047103A1 (fr) | Procede de detection de cibles au sol et en mouvement dans un flux video acquis par une camera aeroportee | |
| EP2432660A1 (fr) | Procede et dispositif pour etendre une zone de visibilite | |
| FR2729236A1 (fr) | Guidage de robot par eclairage actif | |
| EP2994813B1 (fr) | Procede de commande d'une interface graphique pour afficher des images d'un objet tridimensionnel | |
| EP3018625B1 (fr) | Procédé de calibration d'un système de visée | |
| WO2011033186A1 (fr) | Procédé de numérisation tridimensionnelle d'une surface comprenant la projection d'un motif combiné | |
| CA3230088A1 (fr) | Procede de mise en relation d'une image candidate avec une image de reference | |
| WO2011033187A1 (fr) | Procédé de numérisation tridimensionnelle comprenant une double mise en correspondance | |
| WO2020094441A1 (fr) | Capteur d'image pour la reconnaissance optique de code(s) | |
| EP4567755A1 (fr) | Procédé et système de capture sans contact d'empreinte biométrique | |
| Kopanas | Point based representations for novel view synthesis | |
| EP3454118B1 (fr) | Dispositif et procédé pour reconstruire la surface 3d du tour complet d'un sujet |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| WWE | Wipo information: entry into national phase |
Ref document number: 200980127996.X Country of ref document: CN |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09761912 Country of ref document: EP Kind code of ref document: A2 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2011510028 Country of ref document: JP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 12993499 Country of ref document: US Ref document number: 12010502606 Country of ref document: PH |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 20107028499 Country of ref document: KR Kind code of ref document: A |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2009761912 Country of ref document: EP |