WO2003073367A2 - Method for measuring the location of an object by phase detection (Procédé de mesure de la localisation d'un objet par détection de phase)
- Publication number
- WO2003073367A2 (application PCT/FR2003/000636, FR0300636W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- network
- pattern
- periodic pattern
- lines
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
Definitions
- the present invention relates to a method for measuring the location of an object by phase detection.
- the invention relates to a method for measuring the location of an object located in a space observed by a fixed observation system connected to a processing unit to generate an image composed of a matrix of pixels.
- in order to locate the object with precision, the latter is, in a manner known per se, provided with a test pattern carrying two periodic gratings whose representation in the pixel matrix forms two periodic gratings intended to constitute, after passage into the frequency domain, two bidirectional phase references that can be exploited by extracting phase information by means of a frequency analysis function such as the Morlet wavelet transform.
- the phase information thus detected is then combined to determine the Cartesian coordinates of the reference point of the target as well as the orientation of the target relative to the observation system.
- the application of this measurement method to the image of an appropriate target, obtained by a standard sensor, allows high-resolution localization of the reference point of the target.
- This measurement method mainly consists in using a test pattern formed by a first network comprising a plurality of first parallel and regularly spaced lines, and by a second network comprising a plurality of second parallel and also regularly spaced lines. Furthermore, these first and second networks are arranged so that the first lines are substantially perpendicular to the second lines, the first and second networks being however physically separated from each other by a certain distance.
- the image of this target, or more exactly the image of the two arrays in the pixel matrix obtained by the observation system, is then processed by a processing unit which mainly performs the following operations for each array (a minimal sketch of this per-grating pipeline is given below):
- use the frequency in pixels of this array to define an analysis function which is applied to this array along a first alignment of pixels,
- extract the phase and the modulus associated with this array by correlation with the analysis function, in order to calculate the Cartesian position of the midpoint of at least one line of the array in the direction of the first alignment of pixels,
- successively extract the phase and the modulus associated with this array by correlation with the analysis function along a plurality of alignments of pixels parallel to the first alignment of pixels, in order to determine independently the Cartesian position of each midpoint of said at least one line in the direction of each corresponding alignment of pixels,
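The operations above can be illustrated with a minimal NumPy sketch of per-grating phase detection. The helper names and the use of a plain complex exponential at the grating frequency as the analysis function are illustrative assumptions; the patent's own implementation relies on a Morlet wavelet and is not reproduced here.

```python
import numpy as np

def pixel_frequency(profile):
    """Dominant spatial frequency (cycles/pixel) of a 1-D intensity profile
    that crosses every line of the grating, estimated from the FFT peak."""
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    k = 1 + np.argmax(spectrum[1:])              # skip the DC bin
    return k / profile.size

def subpixel_offset(profile, f0):
    """Phase of the f0 component of the profile, converted to a lateral
    offset (in pixels, modulo one grating period) of the lines along this
    pixel alignment."""
    n = np.arange(profile.size)
    c = np.sum(profile * np.exp(-2j * np.pi * f0 * n))   # single-frequency correlation
    return -np.angle(c) / (2 * np.pi * f0)

def grating_offsets(grating_image):
    """One sub-pixel offset per pixel alignment (here, per image row)."""
    f0 = pixel_frequency(grating_image[0])
    return np.array([subpixel_offset(row, f0) for row in grating_image])
```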
- An example of a known device making it possible to implement the method described above is shown schematically in FIG. 1.
- This device comprises an observation system 1 including a matrix image sensor such as a CCD camera 2 and a lens 3 making it possible to form the image of the observed scene on the matrix image sensor 2.
- This matrix image sensor is connected to a processing unit 4 intended to allow the phase analysis of the image, formed by a matrix of pixels, obtained from the matrix image sensor 2.
- This processing unit 4 is also suitable for performing logical and arithmetic operations on the images recorded from the matrix image sensor.
- a test pattern 6 is fixed on this mobile object 5.
- the test pattern 6 comprises a first network P1 formed of N1 regularly spaced parallel lines T1 and a second network P2 formed of N2 regularly spaced parallel lines.
- the P1 and P2 networks are, for example, etched by photolithography on a glass mask, the latter being illuminated by a lighting device making it possible to obtain, from the matrix image sensor, a matrix of pixels representing the images of the P1 and P2 networks.
- the processing unit 4 therefore makes it possible to calculate the equation of a median line D2 of the network P2 and the equation of a median line D1 of the network P1, these median lines D1 and D2 being respectively defined by the set of midpoints of the central line of each network P1 and P2 considered.
- the position of the target 6, and therefore of the object 5, is given by the Cartesian coordinates (x, y) of the point of intersection P of the two median lines D1 and D2.
- the orientation of the object 5 is, for its part, defined by the angle formed, for example, by the median line D1 of the network P1, chosen as reference, with one of the axes x, y of the Cartesian coordinate system provided, for example, by the matrix of pixels constituting the image.
- the present invention aims in particular to overcome the aforementioned drawbacks.
- the measuring method of the kind in question is essentially characterized in that it comprises the following steps:
- the object is provided with a test pattern comprising at least one two-dimensional periodic pattern formed by a plurality of substantially point-like elements arranged in parallel lines and in parallel columns substantially perpendicular to the lines, the point elements being regularly spaced along the lines and the columns,
- a first image of the pattern is recorded and a digital processing of the first image of the pattern is carried out to generate, from said pattern, an image containing a first network comprising a plurality of first regularly spaced parallel lines and an image containing a second network comprising a plurality of second regularly spaced parallel lines, the second lines being substantially perpendicular to the first lines, and for each of the first and second networks,
- the frequency in pixels of this network is calculated along a first alignment of pixels which intersects all the lines of this network,
- the frequency in pixels of this network is used to define an analysis function which is applied to this network along the first alignment of pixels,
- the phase and the modulus associated with this network are successively extracted by correlation with the analysis function along a plurality of alignments of pixels parallel to the first alignment of pixels, each alignment of pixels intersecting all of the lines of this network, to independently determine the Cartesian position of each midpoint of said at least one line in the direction of each corresponding pixel alignment,
- a median line is calculated for each network, passing substantially through all of the midpoints of said at least one line, the median line of the first network being perpendicular to the median line of the second network,
- a second image of said at least one periodic pattern is recorded after a displacement of the object in the space observed by the fixed observation system, and the Cartesian position of the point of intersection of the two median lines of the first and second networks obtained from the second recorded image is calculated in order to compute the displacement of the object;
- the digital processing of the first image of said at least one periodic pattern comprises the following steps:
- two independent filterings are carried out to obtain, on the one hand, a first filtered Fourier spectrum associated with the direction of the columns of the periodic pattern and, on the other hand, a second filtered Fourier spectrum associated with the direction of the lines of the periodic pattern, and
- the test pattern comprises a matrix of identical periodic patterns arranged along parallel lines and along parallel columns substantially perpendicular to the lines, the periodic patterns being regularly spaced along the lines and the columns, and each periodic pattern is associated with a positioning element allowing the location of the periodic pattern associated therewith within the matrix of periodic patterns;
- each positioning element comprises a row number index and a column number index to allow the localization of the pattern associated therewith within the matrix of patterns;
- the image in the pixel matrix of each index of row and column number is in the form of a bar code which is read by the processing unit;
- the fixed observation system comprises first and second matrix image sensors which are contained in a plane perpendicular to a plane defined by the two dimensions of the periodic pattern of the test pattern, the first and second sensors having viewing axes that each form a predetermined angle with the axis perpendicular to the viewed plane, and
- the position of the intersection point is calculated from the first and second Cartesian positions of the point of intersection and from the predetermined angles, in a direction parallel to the plane defined by the two dimensions of said at least one periodic pattern and in a direction perpendicular to that plane;
- the fixed observation system comprises a first matrix image sensor having an aiming axis perpendicular to the plane defined by the two dimensions of the periodic pattern of the test pattern, and a second matrix image sensor having an aiming axis parallel to the plane defined by the two dimensions of the periodic pattern, a light-beam-splitting element being, moreover, interposed between the periodic pattern and the first and second sensors, and an image of said at least one periodic pattern is recorded for each sensor,
- the Cartesian position of the point of intersection in a plane parallel to the plane defined by the two dimensions of the periodic pattern is calculated from the image obtained with the first sensor, and
- the Cartesian position of the point of intersection in a plane perpendicular to the plane defined by the two dimensions of the periodic pattern is calculated from the image obtained with the second sensor;
- the frequency of the periodic pattern calculated by the processing unit is compared to the real frequency of the periodic pattern to determine, as a function of the magnification of the fixed observation system, the position of the point of intersection in a direction perpendicular to the plane defined by the two dimensions of said at least one periodic pattern.
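As a rough illustration of this magnification-based depth estimate, the sketch below assumes a pinhole-like model in which the pixel frequency of the imaged pattern grows in proportion to the working distance; the function name, the calibration pair (f_reference, z_reference) and the proportional model are assumptions, not values or formulas taken from the patent.

```python
def axial_position(f_measured, f_reference, z_reference):
    """Estimate the distance of the pattern along the optical axis from the
    apparent pixel frequency of its periodic pattern.  f_reference is the
    frequency measured at the known calibration distance z_reference.
    Under the assumed pinhole-like model, frequency (cycles/pixel) scales
    linearly with distance; real optics require a magnification-versus-z
    calibration instead of this simple ratio."""
    return z_reference * f_measured / f_reference
```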
- FIG. 1 represents the device making it possible to implement the method previously described and in accordance with the prior art
- FIG. 2 represents an example of a network of lines in accordance with the prior art for calculating position
- FIG. 3 represents a measuring device making it possible to implement the method according to the invention
- Figure 4 represents a test pattern according to the invention to allow a position calculation
- Figure 5 shows an image of the test pattern according to the invention obtained from the observation system of the device
- FIG. 6 represents an enlargement of a portion of the image of the test pattern in FIG. 5
- FIG. 7 represents the Fourier spectrum of the image of the test pattern according to the invention
- FIGS. 8a and 8b show a representation of the Fourier spectrum along the direction of the columns of the target and along the direction of the lines of the target, respectively
- FIGS. 9a and 9b are views of the spatial representations in pixels of two arrays obtained from the frequency processing of the image of the target
- FIGS. 10a and 10b represent areas of interest of the networks of FIGS. 9a and 9b, used for the position calculation
- FIG. 11 represents the intensity of the signal emitted by the network of FIG. 10b along a column
- FIG. 12 is a view of the Fourier spectrum of the intensity of the signal shown in FIG. 11,
- FIG. 13 represents the modulus of the wavelet transform along the column Cc shown in FIG. 10b
- FIG. 14 represents the phase of the wavelet transform along the column Cc shown in FIG. 10b
- FIG. 15 shows the product of the derivative of the modulus shown in FIG. 13 by the phase shown in FIG. 14 (the peaks defining the ends of the network of lines along the column Cc shown in FIG. 10b);
- FIG. 16 represents the superposition of the unwrapped phase and of the intensity along the column Cc of FIG. 10b
- FIGS. 17, 18 and 19 represent the images of the networks of lines reconstituted by digital processing, as well as the intersecting lines calculated from each of the networks of lines and their point of intersection representing the position of the mobile object relative to the fixed reference frame formed by the pixel frame of the recorded image
- FIG. 20 represents an alternative embodiment of the test pattern according to the invention
- FIGS. 21 and 22 represent positioning elements intended to be produced on the target shown in FIG. 20,
- FIG. 23 represents an alternative embodiment of the device allowing the implementation of the method according to the invention.
- FIG. 24 represents another alternative embodiment of the device making it possible to implement the method according to the invention.
- FIG. 25 shows yet another alternative embodiment of the device allowing the implementation of the method.
- the same references designate identical or similar elements.
- FIG. 3 represents an example of a measuring device necessary for implementing the method according to the invention.
- This device comprises a matrix image sensor such as a CCD camera 2, a microscope objective 3 and an adaptation tube 7 connecting the sensor 2 to the microscope objective 3 to form the observation system 1 of said device.
- the device can also simply comprise an imaging lens and a matrix image sensor.
- This observation system 1 is intended to remain stationary.
- An object 5 is placed in the field of vision of the sensor 2 and this object 5 is provided with a target 8 fixed on a support or, more exactly in the example considered, on a backlighting table 13 itself fixed on the object 5.
- This object 5 is intended to move in a two-dimensional space defined by the plane [xoy].
- the sensor 2 is also arranged so that its line of sight 2a is substantially perpendicular to the plane [xoy].
- the test pattern 8, represented in FIG. 4, comprises, in this embodiment, a two-dimensional periodic pattern 8a formed by a plurality of point elements 9 arranged in parallel lines (12 in number in the example considered) and parallel columns (also 12 in number in the example considered) perpendicular to the lines.
- the target 8 can for example be formed by a glass mask 8b covered with an opaque layer over its entire surface and in which the transparent point elements 9 are obtained by photolithography, so that the surface of the target 8 is opaque except at the point elements 9.
- the number of rows and columns of the test pattern 8 can vary significantly depending on the type of test pattern used, without departing from the scope of the invention.
- the target 8 is arranged above a diffusing lighting table 13 so that the point elements 9 produce light points distributed over the dark background of the target and detectable by the matrix image sensor.
- a variant could consist in giving the point elements 9 a different reflectivity from the rest of the pattern so that these point elements have a different luminosity from the rest of the pattern, the whole being illuminated from above.
- the test pattern 8 is arranged so that its periodic pattern 8a lies substantially in the reference plane [xoy].
- the distance d1 which separates two lines of the periodic pattern 8a, as well as the distance d2 which separates two columns, are constant, the distance d2 being possibly equal to or different from the distance d1.
- the point elements 9 can be of substantially square shape with sides having a length of the order of 5 ⁇ m.
- the test pattern 8 can also be formed by any support on which the point elements 9 are arranged; these can also take the form of reflective elements that reflect the light of an excitation source illuminating the test pattern 8 so as to obtain an image of a periodic network at the sensor.
- according to an alternative embodiment, the test pattern 8 can also be formed of a support in which a plurality of periodic through-holes 9 are made, allowing, when lit by a backlighting table, an image of a periodic network to be obtained.
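For experimentation, a synthetic image of such a dot-grid test pattern can be generated as below; the pixel dimensions, the square shape of the elements and every parameter name are illustrative choices, not values from the patent.

```python
import numpy as np

def make_dot_pattern(n_rows=12, n_cols=12, d1=20, d2=20, dot=4, margin=30):
    """Synthetic test-pattern image: a regular grid of small bright square
    elements on a dark background (all sizes in pixels)."""
    h = 2 * margin + (n_rows - 1) * d1 + dot
    w = 2 * margin + (n_cols - 1) * d2 + dot
    img = np.zeros((h, w))
    for i in range(n_rows):
        for j in range(n_cols):
            r = margin + i * d1                  # line pitch d1
            c = margin + j * d2                  # column pitch d2
            img[r:r + dot, c:c + dot] = 1.0      # one bright point element
    return img
```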
- FIG. 5 represents an image of the test pattern 8 represented in FIG. 4, this image being taken by a CCD sensor with a pixel matrix of 578 rows of pixels by 760 columns of pixels.
- the first step of the method consists in carrying out a preliminary digital processing of the image of this test pattern 8 in order to generate two distinct images by computer representing respectively a first network formed of a first series of parallel lines and a second network formed of a second series of lines parallel and perpendicular to the first series of lines.
- the image of the test pattern 8 represented in FIG. 5 and obtained by the CCD sensor is recorded by means of the processing unit 4 (FIG. 3) of the device.
- This frequency processing consists, for example, of a direct Fourier transform in order to obtain, as can be seen in FIG. 7, the Fourier spectrum of the recorded image of the two-dimensional periodic pattern 8a of the test pattern 8.
- two appropriate and independent filterings are carried out in order to obtain, on the one hand, a filtered Fourier spectrum associated with the direction of the columns of the periodic pattern of the test pattern (FIG. 8a) and, on the other hand, a filtered Fourier spectrum associated with the direction of the lines of the periodic pattern of the test pattern (FIG. 8b).
- the network R1 is therefore formed by 12 lines T1 parallel to one another and substantially vertical, while the network R2 is formed by 12 lines T2 also parallel to each other and substantially horizontal.
- the phase information associated with the rows and columns of the periodic pattern 8a is preserved, and the networks R1 and R2 generated in this way contain all of the position information already available in the test pattern 8, or more exactly in the recorded digital image of the periodic pattern 8a of the test pattern 8.
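A possible way to carry out this frequency-domain separation is sketched below: the 2-D spectrum of the dot-grid image is masked around the fundamental peaks found on each frequency axis, and an inverse transform returns the two line gratings. The Gaussian mask, its bandwidth and the peak-search details are illustrative assumptions, not the patent's exact filtering.

```python
import numpy as np

def split_into_gratings(image, bandwidth=3.0):
    """Separate a dot-grid image into two line gratings (vertical lines R1,
    horizontal lines R2) by band-pass filtering its 2-D Fourier spectrum."""
    F = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    h, w = image.shape
    cy, cx = h // 2, w // 2
    fx = np.arange(w) - cx
    fy = np.arange(h) - cy

    # Fundamental peak along the horizontal frequency axis (vertical lines).
    row = np.abs(F[cy, :]).copy()
    row[cx - 2:cx + 3] = 0                       # ignore the DC neighbourhood
    kx = np.argmax(row) - cx
    # Fundamental peak along the vertical frequency axis (horizontal lines).
    col = np.abs(F[:, cx]).copy()
    col[cy - 2:cy + 3] = 0
    ky = np.argmax(col) - cy

    FX, FY = np.meshgrid(fx, fy)
    mask_x = (np.exp(-((FX - kx) ** 2 + FY ** 2) / (2 * bandwidth ** 2))
              + np.exp(-((FX + kx) ** 2 + FY ** 2) / (2 * bandwidth ** 2)))
    mask_y = (np.exp(-(FX ** 2 + (FY - ky) ** 2) / (2 * bandwidth ** 2))
              + np.exp(-(FX ** 2 + (FY + ky) ** 2) / (2 * bandwidth ** 2)))

    r1 = np.fft.ifft2(np.fft.ifftshift(F * mask_x)).real   # vertical-line grating
    r2 = np.fft.ifft2(np.fft.ifftshift(F * mask_y)).real   # horizontal-line grating
    return r1, r2
```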
- the calculation of the location of the test pattern 8 in the image amounts to calculating respectively the location of the network R1 in the first generated image and the location of the network R2 in the second generated image.
- an area of interest R10, R20 is defined for each network R1, R2. This area of interest R10, R20 of each network R1, R2 is determined by systematically excluding the end edges of the lines T1, T2.
- Each area of interest R10, R20 comprises sides which are pairwise parallel to the axes defined by the pixel frame of the sensor, namely the axis of the lines and the axis of the columns of the pixel matrix.
- FIGS. 10a and 10b respectively give the pixel images of the two areas of interest R10 and R20.
- the spatial frequency in pixels of the network R2 is determined.
- the spatial frequency of the network R2 is determined for example by Fourier transform.
- the frequency of the imaged line network corresponds to a maximum in the Fourier spectrum.
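A hedged sketch of this frequency estimation follows; the parabolic refinement of the FFT peak is a common signal-processing trick assumed here for sub-bin accuracy, not a step described in the patent.

```python
import numpy as np

def refined_pixel_frequency(profile):
    """Spatial frequency (cycles/pixel) of a grating profile along one pixel
    alignment: locate the FFT magnitude peak, then refine it with a parabolic
    fit of the three bins around the maximum."""
    mag = np.abs(np.fft.rfft(profile - profile.mean()))
    k = 1 + np.argmax(mag[1:])                   # skip the DC bin
    if k < mag.size - 1:
        a, b, c = mag[k - 1], mag[k], mag[k + 1]
        k = k + 0.5 * (a - c) / (a - 2 * b + c)  # parabolic peak interpolation
    return k / profile.size
```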
- the analysis function can be a Morlet wavelet allowing, by correlation with the network R2, the extraction of the phase and of the modulus associated with this network.
- Lw defines the width of the wavelet.
- the objective of the digital processing being to reconstruct the total phase excursion of the imaged network R2, which is equal to 2Nπ where N equals the total number N2 of lines of the network R2, it is therefore necessary to extract the phase of the wavelet transform, which is itself equal, apart from noise, to 2Nπ.
- the phase and the modulus are given respectively by the argument and the modulus of the complex wavelet coefficient.
- the wavelet frequency being fixed at f0, which is the pixel frequency of the network R2 along the column Cc,
- the wavelet transform of the network R2 reduces to a convolution between the wavelet and the imaged network R2 along one direction.
- the processing unit allows, after calculation, the extraction of the modulus and of the phase of the wavelet transform along the column Cc.
- the representations of the modulus and of the phase of this wavelet transform along the column Cc are given respectively in FIGS. 13 and 14.
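The sketch below shows one way such a fixed-frequency Morlet-style correlation can be coded; the Gaussian envelope, its 3·Lw truncation and the normalisation are illustrative choices rather than the patent's exact wavelet definition.

```python
import numpy as np

def morlet_transform(profile, f0, lw):
    """Wavelet transform of a grating profile along one column at the fixed
    pixel frequency f0, with a Gaussian envelope of width lw (pixels).
    Returns the modulus and the unwrapped phase; over N imaged lines the
    unwrapped phase spans roughly 2*N*pi."""
    half = int(3 * lw)
    n = np.arange(-half, half + 1)
    wavelet = np.exp(-(n / lw) ** 2) * np.exp(2j * np.pi * f0 * n)
    coeffs = np.convolve(profile - profile.mean(), np.conj(wavelet), mode='same')
    modulus = np.abs(coeffs)                     # drops at the grating edges
    phase = np.unwrap(np.angle(coeffs))          # near-linear inside the grating
    return modulus, phase
```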
- the result of this operation (the product of the derivative of the modulus by the phase) along the column Cc is shown in FIG. 15, in which the indices ib1 and ib2 correspond respectively to the upper and lower edges of the network R2 along the column Cc.
- This least-squares line makes it possible to pass from the discrete domain of the image to a continuous space, this least-squares line being defined by a fitted line equation.
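A minimal sketch of the line fitting and intersection step is given below, assuming the per-alignment midpoints of each grating have already been obtained; the parameterisation of the two median lines and the orientation comment are illustrative conventions.

```python
import numpy as np

def median_line(midpoints_px, alignments_px):
    """Least-squares line through the midpoints measured along successive
    pixel alignments.  For a roughly vertical median line D1 this returns
    (a1, b1) in  x = a1 * y + b1;  swap the roles of x and y for D2."""
    a, b = np.polyfit(alignments_px, midpoints_px, 1)
    return a, b

def intersection(d1, d2):
    """Intersection P of D1: x = a1*y + b1 and D2: y = a2*x + b2.
    The target orientation can be taken, for example, as atan(a1), the tilt
    of D1 with respect to the column axis of the pixel frame."""
    a1, b1 = d1
    a2, b2 = d2
    x = (a1 * b2 + b1) / (1.0 - a1 * a2)
    y = a2 * x + b2
    return x, y
```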
- the test pattern 8 comprises a single periodic pattern 8a.
- the presence of a single periodic pattern thus makes it possible to measure subpixel displacement by successively recording two images of the test pattern 8.
- the distribution of incident light intensity on the pixels of the matrix image sensor changes, giving rise to a different recorded image of the test pattern, which leads to a different phase distribution during digital processing and therefore to the measurement of the new position of the moving target.
- the displacement value is provided by the difference between the positions measured after and before the displacement. In other words, all of these variations therefore cause a significant modification of the phase between the two images.
- This modification of the phase distribution is detected and measured by the method described above, which makes it possible to calculate the new Cartesian coordinates of the point of intersection P of the median lines D1 and D2 for the second recorded image of the test pattern 8.
- the displacement measurement from two recorded images is limited by the fact that the periodic pattern 8a of the test pattern 8 must necessarily be contained in its entirety in the pixel matrix of the matrix image sensor.
- the periodic pattern is then liable to leave, at least partially, the field of vision of the fixed sensor, which makes it impossible to determine the position of the point P and the angular orientation of the target 8.
- the test pattern 8 is provided with a plurality of periodic patterns 8n identical to the periodic pattern 8a.
- the periodic patterns 8n are arranged regularly, for example periodically along parallel and regularly spaced lines and also along columns parallel and perpendicular to the lines.
- Each periodic pattern is for example engraved by photolithography.
- each periodic pattern 8n is associated with a positioning element 10n, which is adapted to store position information making it possible to locate the periodic pattern which is associated with it inside the matrix formed by the set of periodic patterns 8n.
- Each positioning element 10n comprises, for example, a row number index and a column number index making it possible to know with precision the position of the periodic pattern 8n which is associated with it inside the matrix of periodic patterns.
- the spacing between two adjacent periodic patterns 8n is physically known, being chosen when designing the test pattern 8. Consequently, the displacements can be measured with two levels of precision, namely the spacing between two periodic patterns and the subpixel precision inside the image of the periodic pattern 8n being processed by the processing unit 4.
- the displacements are calculated from two complementary values, namely the spacing between the patterns observed during recordings before and after displacement and the position of the pattern observed in the pixel matrix of the images recorded before and after displacement.
- the processing unit can for example process the recorded image of a periodic pattern seen as a whole by the observation system.
- This periodic pattern 8n is identified in the pattern matrix by its line index i1 and its column index j1.
- the processing unit can then determine the location point P of this pattern by means of the various processing operations described above, for example using the sixth line of its imaged networks R1 and R2.
- the field of vision of the sensor detects another periodic pattern. Thanks to its positioning element, this other periodic pattern is identified in the pattern matrix by its line index i2 and its column index j2.
- the processing unit can then determine the location point P of this new periodic pattern by taking as reference the sixth line of its imaged networks R1 and R2. From this processing, the displacement of the test pattern 8 is therefore deduced, which in this example amounts to calculating the known spacing between the lines i1 and i2 and the columns j1 and j2, and the subpixel displacement by means of the two location points P of the two periodic patterns.
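A hedged sketch of this two-level displacement computation is given below; the argument names, the single isotropic pattern spacing and the sign conventions (how image axes map onto the target axes) are all assumptions made for illustration.

```python
def displacement(idx_before, p_before, idx_after, p_after,
                 pattern_spacing, metres_per_pixel):
    """Coarse + fine in-plane displacement:
    coarse part = difference of the (line, column) indices of the periodic
                  patterns observed before/after, times the known spacing;
    fine part   = difference of the sub-pixel location points P in the two
                  images, converted to physical units."""
    (i1, j1), (i2, j2) = idx_before, idx_after
    (x1, y1), (x2, y2) = p_before, p_after
    dx = (j2 - j1) * pattern_spacing + (x2 - x1) * metres_per_pixel
    dy = (i2 - i1) * pattern_spacing + (y2 - y1) * metres_per_pixel
    return dx, dy
```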
- FIG. 21 represents an embodiment of a positioning element 10 according to the invention.
- the positioning element mainly comprises a reference part 11 and an information writing part 12 intended to allow the localization of the periodic pattern associated with it.
- each positioning element is for example in the form of a succession of white and black bands in order to allow the reading of the part 12 by means of the processing unit 4.
- This information writing part 12 includes, for example, a portion 12a for inscribing the line number i and a portion 12b for inscribing the column number j, the two portions 12a and 12b each being formed by five bands arranged in alignment with the white and black bands of the reference part 11.
- each positioning element 10n makes it possible to code 10 bits of information (5 bits for the lines and 5 bits for the columns), thus making it possible to work with matrices of 32 × 32 periodic patterns 8n.
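Reading such an element can be sketched as below, assuming the ten band intensities have already been sampled from the image and normalised; the band ordering, bit order and fixed threshold are assumptions, not the patent's encoding.

```python
def decode_positioning_element(band_values, threshold=0.5):
    """Decode ten normalised band intensities into (line index, column index):
    the first five bands give a 5-bit line number, the last five a 5-bit
    column number, addressing a 32 x 32 matrix of periodic patterns."""
    bits = ['1' if v > threshold else '0' for v in band_values]
    line_index = int(''.join(bits[:5]), 2)
    column_index = int(''.join(bits[5:10]), 2)
    return line_index, column_index
```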
- the bands forming the two portions 12a and 12b are also obtained during the etching of the target and black or white bands can be produced depending on the position assigned to each positioning element.
- FIG. 22 represents a positioning element 10n obtained by photolithography and which is intended to precisely locate a periodic pattern in the matrix of patterns.
- the use of the matrix of periodic patterns 8n associated with positioning elements offers the possibility of detecting a periodic pattern located near the center of the image of the sensor, which thus makes it possible to reduce the distortions linked to the lens optics.
- the object 5 to which the target 8 is attached is intended to move in the plane (XOY).
- the fixed observation system 1 comprises a first matrix image sensor 2 as well as a second matrix image sensor 21 which are both substantially contained in the plane (YOZ), which is perpendicular to the plane (XOY), i.e. to the plane defined by the two dimensions of the periodic pattern of the test pattern 8.
- the first sensor 2 has an aiming axis 2a which extends along the axis (OZ) and the second sensor 21 has an aiming axis 21a which forms an angle α with the axis (OZ), this angle α being determined during the mounting of the two sensors 2 and 21.
- the two sensors are arranged so that the point of intersection of the two viewing axes 2a and 21a is located in the vicinity of the test pattern 8.
- with this device, it is then possible to record, for each of the sensors 2 and 21, an image of the same periodic pattern of the test pattern 8. Then, it suffices to calculate the first Cartesian position (x, y) of the point of intersection P obtained from the first sensor 2 and also to calculate the second Cartesian position (x, y') of the same point of intersection P obtained from the second sensor 21. After these calculations, and if the two sensors 2 and 21 are indeed contained in the same plane (YOZ), then the Cartesian values (x, y) and (x, y') of the point of intersection P must have the same value x. Conversely, the value y' given by the image obtained from the second sensor 21 is different from the value y obtained from the image of the first sensor 2. In fact, this value y' is a function of the value of the angle α as well as of the position of the point of intersection P along the axis Z.
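One way to exploit the two measurements is sketched below under strongly simplified assumptions: both sensors are treated as orthographic, the first looks straight down the Z axis and measures y directly, and the tilted one measures y' = y·cos(α) − z·sin(α); the sign convention and the orthographic model are assumptions, not the patent's calibration.

```python
import math

def height_from_two_views(y_perp, y_tilted, alpha_rad):
    """Out-of-plane coordinate z of the intersection point P, from the y value
    seen by the perpendicular sensor and the y' value seen by the sensor
    tilted by alpha in the (YOZ) plane (orthographic approximation)."""
    return (y_perp * math.cos(alpha_rad) - y_tilted) / math.sin(alpha_rad)
```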
- the sensor 2 can also have a sighting axis 2a which forms an angle β2 with the axis (OZ), the sensor 2 remaining substantially contained in the plane (YOZ) and the sensor 21 also remaining in a position in which its line of sight 21a forms an angle β1 with the axis (OZ).
- the value y1 given by the image obtained from the camera 21 is different from the value y2 obtained from the image from the camera 2, these two values y1 and y2 being themselves different from the actual value y of the point of intersection P.
- FIG. 24 represents another variant embodiment of the device making it possible to implement the method of the invention.
- the aiming axis 2a of the sensor 2 is arranged perpendicular to the plane (XOY) containing the periodic pattern of the test pattern 8.
- the sensor 21, meanwhile, has an aiming axis 21a perpendicular to the viewing axis 2a of the sensor 2 and therefore parallel to the plane (XOY) containing the periodic pattern of the test pattern 8.
- a beam-splitting element secured to the test pattern 8, which can take the form of a cube 15 or of a splitter plate, is interposed between the periodic pattern 8a or the patterns 8n of the test pattern 8 and the sensors 2 and 21.
- it is possible to light the test pattern 8 by backlighting so that part of the light beam passing through the periodic pattern of this test pattern 8 goes towards the sensor 2 while another part of the light beam is directed towards the sensor 21.
- the image of the first sensor 2 makes it possible to determine the Cartesian position (x, y) of the point of intersection P, while the image obtained from the second sensor 21 allows the calculation of the Cartesian position (x, z) of the point of intersection P.
- these coordinates are obtained for each position of the test pattern 8.
- the calculation of the frequency f0 of the periodic pattern is carried out by the processing unit.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP03732624A EP1479044A2 (fr) | 2002-02-28 | 2003-02-27 | Procede de mesure de la localisation d'un objet par detection de phase |
| AU2003239652A AU2003239652A1 (en) | 2002-02-28 | 2003-02-27 | Method for measuring the location of an object by phase detection |
| US10/506,021 US20050226533A1 (en) | 2002-02-28 | 2003-02-27 | Method for measuring the location of an object by phase detection |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR02/02547 | 2002-02-28 | ||
| FR0202547A FR2836575B1 (fr) | 2002-02-28 | 2002-02-28 | Procede de mesure de la localisation d'un objet par detection de phase |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2003073367A2 true WO2003073367A2 (fr) | 2003-09-04 |
| WO2003073367A3 WO2003073367A3 (fr) | 2004-04-01 |
Family
ID=27676171
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/FR2003/000636 Ceased WO2003073367A2 (fr) | 2002-02-28 | 2003-02-27 | Procédé de mesure de la localisation d'un objet par détection de phase |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20050226533A1 (fr) |
| EP (1) | EP1479044A2 (fr) |
| AU (1) | AU2003239652A1 (fr) |
| FR (1) | FR2836575B1 (fr) |
| WO (1) | WO2003073367A2 (fr) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2006026211A1 (fr) * | 2004-08-31 | 2006-03-09 | Hewlett-Packard Development Company L.P. | Mesures de deplacement au moyen de changements de phase |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| IT1391829B1 (it) * | 2008-11-21 | 2012-01-27 | C N R Consiglio Naz Delle Ricerche | Apparato basato sugli ultrasuoni per misurare parametri indicatori di avanzamento di un parto |
| CN106408553B (zh) * | 2015-07-29 | 2019-10-22 | 北京空间飞行器总体设计部 | 斜扫的红外线阵探测器目标响应分析方法 |
| CN107977994B (zh) * | 2016-10-21 | 2023-02-28 | 上海交通大学 | 用于测量被测物在参考物上的平面位置的方法 |
| CN113478068A (zh) * | 2021-06-16 | 2021-10-08 | 西安理工大学 | 一种激光加工薄壁零件热变形实时检测方法 |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP0830632B9 (fr) * | 1995-04-28 | 2002-12-18 | Forskningscenter Riso | Imagerie a contraste de phase |
| JPH09189519A (ja) * | 1996-01-11 | 1997-07-22 | Ushio Inc | パターン検出方法およびマスクとワークの位置合わせ装置 |
| US7043082B2 (en) * | 2000-01-06 | 2006-05-09 | Canon Kabushiki Kaisha | Demodulation and phase estimation of two-dimensional patterns |
| WO2002039055A1 (fr) * | 2000-10-25 | 2002-05-16 | Electro Scientific Industries, Inc. | Alignement integre et calibrage de dispositif d'optique |
| AUPR676201A0 (en) * | 2001-08-01 | 2001-08-23 | Canon Kabushiki Kaisha | Video feature tracking with loss-of-track detection |
| CN1271569C (zh) * | 2001-10-15 | 2006-08-23 | Dsmip资产有限公司 | 定位物体的设备和方法 |
- 2002
- 2002-02-28 FR FR0202547A patent/FR2836575B1/fr not_active Expired - Fee Related
- 2003
- 2003-02-27 EP EP03732624A patent/EP1479044A2/fr not_active Withdrawn
- 2003-02-27 AU AU2003239652A patent/AU2003239652A1/en not_active Abandoned
- 2003-02-27 US US10/506,021 patent/US20050226533A1/en not_active Abandoned
- 2003-02-27 WO PCT/FR2003/000636 patent/WO2003073367A2/fr not_active Ceased
Non-Patent Citations (3)
| Title |
|---|
| BANI-HASHEMI A: "A FOURIER APPROACH TO CAMERA ORIENTATION" IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE INC. NEW YORK, US, vol. 15, no. 11, 1 novembre 1993 (1993-11-01), pages 1197-1202, XP000413110 ISSN: 0162-8828 * |
| MATAS J ET AL: "OBJECT RECOGNITION USING A TAG" PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING. ICIP 1997. SANTA BARBARA, CA, OCT. 26 - 29, 1997, LOS ALAMITOS, CA: IEEE, US, vol. 1, 26 octobre 1997 (1997-10-26), pages 877-880, XP000792903 ISBN: 0-8186-8184-5 * |
| SANDOZ P ET AL: "PHASE-SENSITIVE VISION TECHNIQUE FOR HIGH ACCURACY POSITION MEASUREMENT OF MOVING TARGETS" IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, IEEE INC. NEW YORK, US, vol. 49, no. 4, août 2000 (2000-08), pages 867-872, XP000959262 ISSN: 0018-9456 * |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2006026211A1 (fr) * | 2004-08-31 | 2006-03-09 | Hewlett-Packard Development Company L.P. | Mesures de deplacement au moyen de changements de phase |
| US7609858B2 (en) | 2004-08-31 | 2009-10-27 | Hewlett-Packard Development Company, L.P. | Displacement measurements using phase changes |
Also Published As
| Publication number | Publication date |
|---|---|
| US20050226533A1 (en) | 2005-10-13 |
| WO2003073367A3 (fr) | 2004-04-01 |
| EP1479044A2 (fr) | 2004-11-24 |
| FR2836575B1 (fr) | 2004-07-02 |
| AU2003239652A8 (en) | 2003-09-09 |
| AU2003239652A1 (en) | 2003-09-09 |
| FR2836575A1 (fr) | 2003-08-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10684589B2 (en) | Apparatus and method for in-line holographic imaging | |
| FR2849245A1 (fr) | Procede d'authentification et d'identification optique d'objets et dispositif de mise en oeuvre | |
| EP0985902A1 (fr) | Dispositif interférométrique pour relever les caractéristiques de réflexion et/ou de transmission optiques en profondeur d'un objet | |
| WO2014016526A1 (fr) | Dispositif et procede de caracterisation d'un echantillon par des mesures localisees | |
| WO2011117539A1 (fr) | Methode et installation pour detecter la presence et l'altitude de defauts dans un composant optique | |
| FR2718871A1 (fr) | Procédé pour la reconnaissance automatique d'un panneau de signalisation routière, et dispositif pour la mise en Óoeuvre de ce procédé. | |
| WO2014067886A1 (fr) | Systeme d'imagerie holographique auto-reference | |
| FR2915601A1 (fr) | Dispositif de comptage de cartes dans des petites series. | |
| EP1157261B1 (fr) | Procede et dispositif d'analyse d'un front d'onde a grande dynamique | |
| EP1012549A1 (fr) | Proc d et dispositif optiques d'analyse de surface d'onde | |
| FR3010182A1 (fr) | Methode et dispositif de determination de la position et de l'orientation d'une surface speculaire formant un dioptre | |
| EP1479044A2 (fr) | Procede de mesure de la localisation d'un objet par detection de phase | |
| BE1017316A7 (fr) | Appareil pour determiner le forme d'une gemme. | |
| CA1314619C (fr) | Procede de numerisation de la surface d'un objet tridimensional et appareil de releve en vue de sa mise en oeuvre | |
| EP1430271B1 (fr) | Procede et dispositif de mesure d'au moins une grandeur geometrique d'une surface optiquement reflechissante | |
| EP2078184B1 (fr) | Procédé de correction d'un analyseur de front d'onde, et analyseur implémentant ce procédé | |
| FR2940423A1 (fr) | Dispositif de numerisation tridimensionnelle a reconstruction dense | |
| EP3751278B1 (fr) | Système d'imagerie acousto-optique | |
| EP1371958A1 (fr) | Procédé et dispositif d'extraction de signature spectrale d'une cible ponctuelle | |
| EP2877979B1 (fr) | Methode monocamera de determination d'une direction d'un solide | |
| WO1980002882A1 (fr) | Procede et dispositif de traitement optique d'objets par intercorrelation avec des anneaux | |
| EP3749919B1 (fr) | Procédé et dispositif d'inspection d'une surface d'un objet comportant des matériaux dissimilaires | |
| EP1626402A1 (fr) | Equipement pour la lecture optique des disques phonographiques analogiques | |
| FR2950139A1 (fr) | Procede de numerisation tridimensionnelle comprenant l'acquisition d'un quadruplet d'images stereoscopique | |
| WO2024170734A1 (fr) | Dispositif de localisation d'une particule individualisée dans un échantillon et procédé associé |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| WWE | Wipo information: entry into national phase |
Ref document number: 2003732624 Country of ref document: EP |
|
| WWP | Wipo information: published in national office |
Ref document number: 2003732624 Country of ref document: EP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 10506021 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: JP |
|
| WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |
|
| WWW | Wipo information: withdrawn in national office |
Ref document number: 2003732624 Country of ref document: EP |