
WO2014181581A1 - Calibration device, calibration system, and imaging device - Google Patents


Info

Publication number
WO2014181581A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
imaging
correction information
imaging system
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2014/056816
Other languages
English (en)
Japanese (ja)
Inventor
青木 利幸
健 志磨
笹田 義幸
松浦 一雄
未来 樋口
謙 大角
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Astemo Ltd
Original Assignee
Hitachi Automotive Systems Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Automotive Systems Ltd
Priority to JP2015515807A (patent JP6186431B2)
Publication of WO2014181581A1
Current legal status: Ceased

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 - Stereo camera calibration
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30204 - Marker
    • G06T2207/30208 - Marker matrix

Definitions

  • The present invention relates to an imaging device that calculates a distance image from images acquired by a plurality of imaging system means, to a calibration device for the imaging device, and to a calibration system.
  • In the geometric calibration process of Patent Document 1, an image of a chart on which a plurality of feature points are drawn is photographed from one viewpoint, and the luminance distribution caused by lens aberration is corrected in the image. The feature point positions on the corrected image are then measured, and the camera parameters are estimated. As a result, errors due to lens aberration can be removed, accurate camera parameters can be estimated, and accurate geometric correction parameters (a geometric correction table, geometric correction information) can be obtained using the camera parameters. Further, the imaging apparatus corrects its images using the geometric correction parameters and thereby calculates the distance to an object accurately.
  • In an image pickup apparatus (stereo camera), however, a manufacturing error occurs in the mounting position of each imaging system means.
  • An error also occurs in the installation position of the imaging device with respect to the chart.
  • As a result, the estimated camera parameters and the geometric correction information contain errors caused by the error in the mounting positions between the plurality of imaging system means in the imaging device and by the error in the installation position of the imaging device with respect to the chart.
  • An object of the present invention is to reduce the error of the geometric correction information caused by manufacturing errors in the mounting positions of the imaging system means and by errors in the installation position of the imaging apparatus with respect to the chart, in an imaging apparatus having a plurality of imaging system means.
  • To achieve this object, a calibration apparatus of the present invention for an imaging apparatus having a plurality of imaging system means comprises: first image capturing means for acquiring a first image captured by the imaging apparatus placed a predetermined first distance from a chart on which a plurality of feature points are drawn; correction information generating means for generating first correction information based on the first image, the camera parameters of the plurality of imaging system means, the baseline length between the plurality of imaging system means, and the plurality of feature point positions; second image capturing means for acquiring a second image captured by the imaging apparatus placed a predetermined second distance, different from the first distance, from the chart; and correction information correcting means for correcting the first correction information and generating second correction information based on the second image, the camera parameters, the baseline length, and the feature point positions.
  • The calibration system of the present invention comprises: an installation table on which the imaging device can be installed at a position a predetermined first distance from a chart on which a plurality of feature points are drawn, and at a position a predetermined second distance, different from the first distance, from the chart; first image capturing means for acquiring a first image captured by the imaging device installed at the first position; correction information generating means for generating the first correction information; second image capturing means for acquiring a second image captured by the imaging device installed at the second position; and correction information correcting means for correcting the first correction information and generating second correction information based on the second image, the camera parameters of the plurality of imaging system means, the baseline length between the plurality of imaging system means, and the plurality of feature point positions.
  • The imaging apparatus of the present invention comprises: first imaging system means having first optical element means and first imaging element means that receives the light passing through the first optical element means and outputs a first image to be processed as a reference image; second imaging system means having second optical element means and second imaging element means that receives the light passing through the second optical element means and outputs a second image to be processed as a comparison image; geometric correction means for correcting the reference image and the comparison image based on the geometric correction information of the first and second imaging system means; and calculating means for generating a distance image using the corrected reference image and comparison image.
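  • For orientation, the following minimal sketch illustrates what such calculating means might do; it is not the patent's implementation. It performs SAD block matching between a rectified reference image and comparison image and converts the resulting disparity d (pixels) to distance via Z = f·B/(c·d). The function names, window size, and handling of invalid columns are assumptions of the sketch.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def sad_disparity(reference, comparison, max_disp=64, win=9):
            """Brute-force SAD block matching along horizontal epipolar lines."""
            h, w = reference.shape
            ref = reference.astype(np.float32)
            cmp_ = comparison.astype(np.float32)
            best = np.full((h, w), np.inf, dtype=np.float32)
            disp = np.zeros((h, w), dtype=np.float32)
            for d in range(max_disp):
                # SAD over a win x win window, comparison shifted by d pixels
                cost = uniform_filter(np.abs(ref - np.roll(cmp_, d, axis=1)), size=win)
                better = cost < best
                best[better] = cost[better]
                disp[better] = d
            disp[:, :max_disp] = 0.0  # columns corrupted by np.roll wrap-around
            return disp

        def disparity_to_distance(disp_px, f_mm, pitch_mm, baseline_mm):
            """Z = f * B / (c * d): distance from disparity for a rectified pair."""
            with np.errstate(divide="ignore"):
                return np.where(disp_px > 0,
                                f_mm * baseline_mm / (pitch_mm * disp_px), np.inf)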
  • According to the present invention, in an imaging apparatus having a plurality of imaging system means, it is possible to reduce errors in the geometric correction information caused by manufacturing errors in the mounting positions of the imaging system means and by errors in the installation position of the imaging apparatus with respect to the chart.
  • The embodiment of the calibration device of the present invention and of the calibration system using it shown in FIG. 1 calculates, for an imaging device having two imaging system means (cameras) arranged on the left and right,
  • the correction amount (geometric correction information) of each pixel that corrects the distortion on the image and the horizontal and vertical deviations from the design values.
  • FIG. 1 shows the configuration of an embodiment of the calibration apparatus and system of the present invention.
  • One embodiment of the calibration apparatus of the present invention comprises a calculation means 110, and one embodiment of the calibration system comprises, in addition to the calculation means 110 serving as the calibration apparatus, a chart 101, an installation table 102, an installation jig 103a, an installation jig 103b, and a screen output means 130.
  • The chart 101 is installed within the visual fields 104a and 104b of the imaging device 100 (imaging device 100a denotes the imaging device 100 installed on the installation jig 103a, and imaging device 100b denotes it installed on the installation jig 103b).
  • Its size is large enough to cover both visual fields 104a and 104b.
  • As shown in FIG. 2, a plurality of feature points are drawn on the chart 101.
  • The chart example 101a shown in FIG. 2A is a case where the shape of the feature points is a rectangle,
  • and the chart example 101b shown in FIG. 2B is a case where the shape of the feature points is a ring.
  • The installation table 102 is a table to which the imaging apparatus 100 is attached via the installation jigs 103a and 103b.
  • The installation jig 103a holds and fixes the imaging device 100.
  • The imaging device 100 is attached so that its optical axis is perpendicular to the surface of the chart 101.
  • The installation table 102 allows the imaging apparatus to be installed at a position (viewpoint 1) a predetermined first distance from the chart 101,
  • and at a position (viewpoint 2) a predetermined second distance, different from the first distance, from the chart 101.
  • The height of the position at the first distance from the chart 101 is set lower than the height of the position at the second distance, the second distance being shorter than the first distance.
  • The installation jig 103a has three pins 401a to 401c.
  • In the housing of the imaging apparatus 100, the portions 402a to 402c corresponding to the pins 401a to 401c are machined into flat surfaces.
  • The three pins 401a to 401c are applied to the flat portions 402a to 402c, respectively, and the imaging apparatus 100 is thereby attached and fixed to the installation jig 103a. The three pins 401a to 401c are not arranged on one straight line.
  • The installation jig 103b is installed at a distance from the chart 101 different from the distance between the chart 101 and the installation jig 103a, and in a place that does not fall within the visual field 104a of the imaging device 100 attached to the installation jig 103a.
  • The jigs are placed so that the visual fields 104a and 104b on the chart 101 of the imaging apparatus 100, when installed on the installation jigs 103a and 103b, overlap each other. Like the installation jig 103a, the installation jig 103b has three pins 401a to 401c, as shown in FIG. 4.
  • The calculation means 110, comprising a CPU (central processing unit) and a memory (storage device), includes an image storage means 111a, an image storage means 111b, a luminance correction information storage means 112, a design information storage means 113, a feature point position storage means 114, a geometric correction information storage means 115, an error storage means 116, an image capture means 117a, an image capture means 117b, a luminance correction means 118, a feature point position design value calculation means 119, a feature point position measurement means 120, a geometric correction information generation means 121, a correction parameter calculation means 122, a geometric correction information correction means 123, a geometric correction information transmission means 124, and a geometric correction means 125.
  • The image storage means 111a, such as a memory or a hard disk, stores the images taken when the imaging apparatus 100 is installed on the installation jig 103a (viewpoint 1), together with the luminance-corrected and geometrically corrected versions of those images.
  • The image storage means 111b, such as a memory or a hard disk, stores the images taken when the imaging apparatus 100 is installed on the installation jig 103b (viewpoint 2), together with the luminance-corrected and geometrically corrected versions of those images.
  • The feature point position storage means 114, such as a memory or a hard disk, stores the design values, measured values, and correction values of the feature point positions.
  • The luminance correction information storage means 112, such as a memory or a hard disk, stores the luminance correction coefficient of each pixel of the images output by the imaging system means (the two cameras) of the imaging apparatus 100.
  • Each correction coefficient is a value that makes the luminance of the image uniform when a uniformly lit surface or object fills the entire image.
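  • A minimal sketch of such a per-pixel luminance (flat-field) correction follows; deriving the coefficients from a single image of a uniformly lit surface is an assumption of the sketch, not a procedure stated in the text.

        import numpy as np

        def luminance_gain(flat_field, target=None):
            """Per-pixel correction coefficient from an image of a uniform surface.
            Multiplying later images by this gain flattens shading/vignetting."""
            flat = flat_field.astype(np.float32)
            if target is None:
                target = flat.mean()  # bring the whole frame to the mean level
            return target / np.maximum(flat, 1e-6)

        def correct_luminance(image, gain):
            return np.clip(image.astype(np.float32) * gain, 0.0, 255.0)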
  • The design information storage means 113, such as a memory or a hard disk, stores the design values (design information) of the camera parameters of the left and right imaging system means, the positions and orientations of the two imaging system means in the imaging apparatus 100, the position and orientation of viewpoint 1 with respect to the chart 101, the position and orientation of viewpoint 2 with respect to the chart 101, and the number, positions, and shapes of the feature points drawn on the chart 101.
  • The camera parameters are the focal length, the pixel pitch, the horizontal and vertical resolution, and the optical axis position on the image.
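  • As an illustration, these camera parameters are sufficient to assemble the usual pinhole intrinsic matrix; the representation below (names and units included) is an assumption of this sketch, not a structure defined by the patent.

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class CameraParams:
            f_mm: float      # focal length
            pitch_mm: float  # pixel pitch
            width: int       # horizontal resolution
            height: int      # vertical resolution
            cx: float        # optical axis position on the image (pixels)
            cy: float

            def intrinsic_matrix(self) -> np.ndarray:
                fpx = self.f_mm / self.pitch_mm  # focal length in pixels
                return np.array([[fpx, 0.0, self.cx],
                                 [0.0, fpx, self.cy],
                                 [0.0, 0.0, 1.0]])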
  • The geometric correction information storage means 115, such as a memory or a hard disk, stores the geometric correction amount (geometric correction information) of each pixel of the images output by the imaging system means (the two cameras) of the imaging apparatus 100.
  • It stores the geometric correction information output by the geometric correction information generation means 121 and by the geometric correction information correction means 123.
  • The error storage means 116, such as a memory or a hard disk, stores the position errors between the imaging system means in the imaging apparatus 100 and the installation errors of the imaging apparatus 100.
  • It stores the errors calculated by the correction parameter calculation means 122.
  • The image capturing means 117a acquires the image output by the imaging device 100 installed at viewpoint 1; in other words, it acquires the first image captured by the imaging device placed a predetermined first distance from the chart 101 on which the plurality of feature points are drawn.
  • The image capturing means 117b acquires the image output by the imaging device 100 installed at viewpoint 2; in other words, it acquires the second image captured by the imaging device placed a predetermined second distance, different from the first distance, from the chart 101.
  • The luminance correction means 118 reads the luminance correction coefficient of each pixel from the luminance correction information storage means 112 and corrects the left and right images, respectively.
  • The feature point position design value calculation means 119 reads, from the design information storage means 113, the camera parameters, the positions and orientations of the two imaging system means, the distance between the chart 101 and viewpoint 1, the distance between the chart 101 and viewpoint 2, and the design values of the number and positions of the feature points drawn on the chart 101. Based on these, it calculates the design values of the feature point positions for the case where the imaging apparatus 100 has no manufacturing errors and no installation errors.
  • The feature point position measurement means 120 reads, from the design information storage means 113, the camera parameters, the positions and orientations of the two imaging system means, the distance between the chart 101 and viewpoint 1, the distance between the chart 101 and viewpoint 2,
  • and the design values of the number, positions, and shapes of the feature points drawn on the chart 101, and creates an image of one feature point for viewpoint 1 and for viewpoint 2 based on this design information. Since the distances from the chart 101 to viewpoint 1 and to viewpoint 2 differ, the sizes of the two feature point images differ.
  • The feature point image example 301a shown in FIG. 3A is the case where the shape of the feature point is a rectangle, and the feature point image example 301b shown in FIG. 3B is the case where it is a ring.
  • The luminance-corrected viewpoint-1 image is read from the image storage means 111a, the feature point image is matched against the feature points of the left and right viewpoint-1 images, and the positions (measured values) of the feature points on the left and right images are measured.
  • Likewise, the luminance-corrected viewpoint-2 image is read from the image storage means 111b, the feature point image is matched against the feature points of the left and right viewpoint-2 images, and the positions (measured values) of the feature points on the left and right images are measured.
  • The geometric correction information generation means 121 reads the design values and measured values of the feature point positions on the left and right images from the feature point position storage means 114 and, for each of the left and right images, calculates the difference between the design value and the measured value of each feature point position. This difference is used as the correction amount at the design value of the feature point position.
  • The correction amount (geometric correction information) at each pixel is calculated by interpolating the correction amounts between the design feature point positions.
  • In other words, the geometric correction information generation means 121 generates geometric correction information (first correction information) based on the first image (the image photographed from viewpoint 1) captured by the imaging device placed a predetermined first distance from the chart 101 on which the plurality of feature points are drawn, the camera parameters of the two imaging system means, the baseline length between the two imaging system means, and the plurality of feature point positions.
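  • A minimal sketch of this generation step follows; the use of SciPy's griddata for the interpolation between feature point positions is an assumption, since the text does not specify the interpolation method.

        import numpy as np
        from scipy.interpolate import griddata

        def correction_map(design_pts, measured_pts, shape):
            """Per-pixel correction amounts from per-feature-point residuals.
            design_pts, measured_pts: (N, 2) arrays of (u, v) positions.
            Returns (du, dv) maps holding the correction amount at each pixel."""
            residual = design_pts - measured_pts  # correction at each feature point
            h, w = shape
            gu, gv = np.meshgrid(np.arange(w), np.arange(h))
            du = griddata(design_pts, residual[:, 0], (gu, gv),
                          method="linear", fill_value=0.0)
            dv = griddata(design_pts, residual[:, 1], (gu, gv),
                          method="linear", fill_value=0.0)
            return du, dv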
  • The correction parameter calculation means 122 corrects the measured values of the feature point positions on the image photographed from viewpoint 2 using the correction amount at each pixel generated by the geometric correction information generation means 121, yielding correction values of the feature point positions. If there were no manufacturing error in the positions of the imaging system means of the imaging apparatus 100 and no error in its installation position, the correction values of the feature point positions would match the design values. In reality these errors occur, so more accurate correction is achieved by further correcting the corrected image in the horizontal, vertical, rotational, and enlargement/reduction directions.
  • From the correction values and the design values, the horizontal and vertical manufacturing errors of the positions of the imaging system means of the imaging apparatus 100 and the depth-direction error of the installation position of the imaging apparatus 100 are calculated.
  • Parameter values for correcting the left and right images in the horizontal, vertical, rotational, and enlargement/reduction directions are also calculated.
  • The error in the lateral positions of the imaging system means of the imaging apparatus 100, that is, the error in the baseline length between the two imaging system means, is calculated based on the second image (the image photographed from viewpoint 2) captured by the imaging device placed a predetermined second distance, different from the first distance, from the chart 101 on which the plurality of feature points are drawn, the camera parameters of the two imaging system means, the baseline length between the two imaging system means, and the geometric correction information (first correction information).
  • The geometric correction information correction means 123 calculates the correction amount at each pixel when the image is transformed in the horizontal, vertical, rotational, and enlargement/reduction directions by the correction parameter values calculated by the correction parameter calculation means 122, and adds it to the correction amount at each pixel calculated by the geometric correction information generation means 121 to obtain a new correction amount at each pixel. That is, the geometric correction information correction means 123 corrects the first correction information (geometric correction information) generated by the geometric correction information generation means 121 and generates second correction information (geometric correction information) based on the second image captured by the image capturing means 117b, the camera parameters of the two imaging system means, the baseline length between the two imaging system means, and the plurality of feature point positions. The second correction information (geometric correction information) is stored in the geometric correction information storage means 115.
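  • The composition of the per-pixel correction map with the horizontal, vertical, rotational, and enlargement/reduction correction parameters could look like the following sketch. The composition order and sign conventions are assumptions; the text interpolates the old correction amount at the transformed position, for which the direct sum below is a close approximation when the transform is small.

        import numpy as np

        def fold_in_similarity(du, dv, tu, tv, theta, m, cx, cy):
            """Add to an existing correction map (du, dv) the per-pixel displacement
            of a similarity transform: rotate by theta and scale by m about the
            optical axis (cx, cy), then translate by (tu, tv)."""
            h, w = du.shape
            gu, gv = np.meshgrid(np.arange(w, dtype=np.float64),
                                 np.arange(h, dtype=np.float64))
            x, y = gu - cx, gv - cy
            xr = m * (np.cos(theta) * x - np.sin(theta) * y)
            yr = m * (np.sin(theta) * x + np.cos(theta) * y)
            su = xr - x + tu  # displacement of the similarity transform
            sv = yr - y + tv
            return du + su, dv + sv  # new correction amount at each pixel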
  • The geometric correction information transmission means 124 reads the geometric correction information (second correction information) corrected and stored by the geometric correction information correction means 123 from the geometric correction information storage means 115, and reads the horizontal and vertical position errors between the imaging system means in the imaging apparatus 100 from the error storage means 116. The geometric correction information and the position errors are sent to the imaging apparatus.
  • The geometric correction means 125 reads the luminance-corrected images photographed from viewpoint 1 or viewpoint 2 from the image storage means 111a or 111b, and reads the geometric correction information corrected by the geometric correction information correction means 123 from the geometric correction information storage means 115. The left and right images are corrected based on the correction amount of each pixel.
  • The screen output means 130, such as a monitor, reads the left and right input images (reference image and comparison image), the luminance-corrected images, or the geometrically corrected images from the image storage means 111a or 111b and displays them on the screen. It also reads the design values, measured values, and correction values of the feature point positions from the feature point position storage means 114 and displays marks at any or all of those positions on the image.
  • In step 501, the imaging apparatus 100 outputs the images captured by the left and right imaging system means while installed on the installation jig 103a (viewpoint 1).
  • The image capturing means 117a acquires the left and right images from the imaging apparatus and sends them to the image storage means 111a.
  • The image storage means 111a stores the images.
  • In step 502, the luminance correction means 118 reads the luminance correction coefficient of each pixel of the left and right images from the luminance correction information storage means 112, and reads the viewpoint-1 images output by the imaging device 100 from the image storage means 111a.
  • The left and right images are corrected by multiplying the luminance of each pixel by its correction coefficient.
  • The corrected images are stored in the image storage means 111a.
  • In step 503, the feature point position measurement means 120 reads, from the design information storage means 113, the camera parameters, the positions and orientations of the two imaging system means in the imaging apparatus 100, the position and orientation of viewpoint 1 with respect to the chart 101,
  • and the design values of the number, positions, and shapes of the feature points drawn on the chart 101, and reads the luminance-corrected viewpoint-1 image from the image storage means 111a.
  • A feature point image for viewpoint 1 is created, and the feature point positions are measured by matching the luminance-corrected viewpoint-1 image against the feature point image.
  • The measured values of the feature point positions are stored in the feature point position storage means 114. The detailed processing will be described later with reference to FIG. 6.
  • Next, the feature point position design value calculation means 119 reads, from the design information storage means 113, the camera parameters, the positions and orientations of the two imaging system means in the imaging apparatus 100, the position and orientation of viewpoint 1 with respect to the chart 101, and the design values of the number and positions of the feature points drawn on the chart 101. Based on this design information, the feature point positions on the chart 101 are perspective-transformed to feature point positions on the image, giving the design values of the feature point positions on the viewpoint-1 image. This processing is performed for the left and right images. The design values of the feature point positions are stored in the feature point position storage means 114.
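  • A minimal sketch of this perspective conversion follows, assuming a standard pinhole projection with the pose (R, t) taken from the design information; it reuses the intrinsic matrix sketched earlier.

        import numpy as np

        def project_chart_points(chart_pts_mm, R, t_mm, K):
            """Perspective conversion of chart feature points (chart frame, mm)
            to design positions on the image (pixels)."""
            pts = chart_pts_mm @ R.T + t_mm  # chart frame -> camera frame
            uvw = pts @ K.T                  # camera frame -> homogeneous pixels
            return uvw[:, :2] / uvw[:, 2:3]  # perspective divide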
  • In step 505, the geometric correction information generation means 121 reads the design values and measured values of the feature point positions on the left and right images. The difference between the design value and the measured value of each feature point position is calculated, and this difference is used as the correction amount at the design value of the feature point position.
  • The correction amount (geometric correction information) at each pixel is calculated by interpolating the correction amounts between the design feature point positions. This processing is performed for the left and right images. The geometric correction information for the left and right images is stored in the geometric correction information storage means 115.
  • In step 506, the imaging apparatus 100 outputs the images captured by the left and right imaging system means while installed on the installation jig 103b (viewpoint 2).
  • The image capturing means 117b acquires the left and right images from the imaging apparatus and sends them to the image storage means 111b.
  • The image storage means 111b stores the images.
  • In step 507, the luminance correction means 118 reads the luminance correction coefficient of each pixel of the left and right images from the luminance correction information storage means 112, and reads the viewpoint-2 images output by the imaging device 100 from the image storage means 111b.
  • The left and right images are corrected by multiplying the luminance of each pixel by its correction coefficient.
  • The corrected images are stored in the image storage means 111b.
  • In step 508, the feature point position measurement means 120 reads, from the design information storage means 113, the camera parameters, the positions and orientations of the two imaging system means in the imaging apparatus 100, the position and orientation of viewpoint 2 with respect to the chart 101,
  • and the design values of the number, positions, and shapes of the feature points drawn on the chart 101, and reads the luminance-corrected viewpoint-2 image from the image storage means 111b.
  • A feature point image for viewpoint 2 is created, and the feature point positions are measured by matching the luminance-corrected viewpoint-2 image against the feature point image.
  • The measured values of the feature point positions are stored in the feature point position storage means 114. The detailed processing will be described later with reference to FIG. 6.
  • Next, the feature point position design value calculation means 119 reads, from the design information storage means 113, the camera parameters, the positions and orientations of the two imaging system means in the imaging apparatus 100, the position and orientation of viewpoint 2 with respect to the chart 101, and the design values of the number and positions of the feature points drawn on the chart 101. Based on this design information, the feature point positions on the chart 101 are perspective-transformed to feature point positions on the image, giving the design values of the feature point positions on the viewpoint-2 image. This processing is performed for the left and right images. The design values of the feature point positions are stored in the feature point position storage means 114.
  • In step 510, the correction parameter calculation means 122 reads the geometric correction information created in step 505 from the geometric correction information storage means 115, and reads the measured values and design values of the feature point positions on the viewpoint-2 image from the feature point position storage means 114. Using the correction amount (geometric correction information) at each pixel, the measured values of the feature point positions on the image photographed from viewpoint 2 are corrected to give the correction values of the feature point positions. Using the correction values and the design values of the feature point positions, the horizontal and vertical manufacturing errors of the positions of the imaging system means of the imaging apparatus 100 and the depth-direction error of the installation position of the imaging apparatus 100 are calculated, and the parameter values for correcting the left and right images in the horizontal, vertical, rotational, and enlargement/reduction directions are calculated.
  • The correction values of the feature point positions are stored in the feature point position storage means 114; the horizontal and vertical manufacturing errors of the positions of the imaging system means of the imaging apparatus 100 and the depth-direction error of the installation position of the imaging apparatus 100 at viewpoint 1 are stored in the error storage means 116; and the correction parameters are sent to the geometric correction information correction means 123. The detailed processing will be described later with reference to FIG. 7.
  • In step 511, the geometric correction information correction means 123 reads the geometric correction information created in step 505 from the geometric correction information storage means 115.
  • It receives from the correction parameter calculation means 122 the correction parameter values in the horizontal, vertical, rotational, and enlargement/reduction directions for the left and right images.
  • For each of the left and right images, the transformation amount at each pixel when the image is transformed in the horizontal, vertical, rotational, and enlargement/reduction directions by the correction parameter values is calculated, giving the transformed position of each pixel.
  • The rotation and the enlargement/reduction transformations are performed about the position of the optical axis on the image.
  • The correction amount of each pixel at the transformed position is calculated by interpolation.
  • The sum of the transformation amount and the correction amount at each pixel is taken as the new correction amount at that pixel.
  • The new correction amount (geometric correction information) of each pixel is stored in the geometric correction information storage means 115.
  • Next, the geometric correction information transmission means 124 reads the geometric correction information from the geometric correction information storage means 115, and reads the horizontal and vertical position errors between the imaging system means in the imaging apparatus 100 from the error storage means 116.
  • The geometric correction information and the horizontal and vertical position errors between the imaging system means in the imaging apparatus 100 are sent to the imaging apparatus.
  • The imaging apparatus 100 receives the geometric correction information and the horizontal and vertical position errors between its imaging system means, and stores them in its geometric correction information storage means and error storage means, respectively (described in the fifth embodiment).
  • In step 513, the geometric correction means 125 reads the luminance-corrected left and right images from the image storage means 111a or 111b, and reads the geometric correction information corrected in step 511 from the geometric correction information storage means 115.
  • The left and right images are corrected based on the correction amount of each pixel.
  • The image storage means 111a or 111b stores the left and right geometrically corrected images.
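  • A minimal sketch of applying a per-pixel correction map to an image follows, using bilinear resampling via SciPy; the inverse-mapping sign convention is an assumption of the sketch.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def apply_geometric_correction(image, du, dv):
            """Resample the image with the per-pixel correction amounts:
            output pixel (u, v) is fetched from (u - du, v - dv) in the input."""
            h, w = image.shape
            gu, gv = np.meshgrid(np.arange(w, dtype=np.float64),
                                 np.arange(h, dtype=np.float64))
            coords = np.stack([gv - dv, gu - du])  # (row, col) sample positions
            return map_coordinates(image.astype(np.float64), coords,
                                   order=1, mode="nearest")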
  • The screen output means 130 reads the left and right input images, the luminance-corrected images, or the geometrically corrected images from the image storage means 111a or 111b and displays them on the screen. It also reads the design values, measured values, and correction values of the feature point positions from the feature point position storage means 114 and displays marks at any or all of those positions on the image.
  • Steps 503 and 508 of the operation procedure of the embodiment of the calibration apparatus of the present invention shown in FIG. 1 will now be described with reference to FIG. 6.
  • In step 508, information on viewpoint 2 is used instead of information on viewpoint 1.
  • In step 601, the feature point position measurement means 120 reads, from the design information storage means 113, the camera parameters, the positions and orientations of the two imaging system means in the imaging apparatus 100, the position and orientation of viewpoint 1 (viewpoint 2) with respect to the chart 101, and the design values of the feature points drawn on the chart 101, and reads the luminance-corrected viewpoint-1 (viewpoint-2) image from the image storage means 111a (111b).
  • In step 602, the feature point position measurement means 120 creates the image of one feature point shown in FIG. 3A or 3B for viewpoint 1 (viewpoint 2), based on the camera parameters, the positions and orientations of the two imaging system means in the imaging apparatus 100, the position and orientation of viewpoint 1 (viewpoint 2) with respect to the chart 101, and the design value of the shape of the feature points drawn on the chart 101.
  • In step 603, the feature point position measurement means 120 calculates the sum of absolute differences in luminance (SAD, Sum of Absolute Differences) between the luminance-corrected viewpoint-1 (viewpoint-2) image and the feature point image.
  • The SAD with respect to the feature point image is also calculated at the positions horizontally adjacent to the position where the SAD is smallest.
  • Equation 1 below is then used to calculate the horizontal measured value u of the feature point position to sub-pixel precision by polygonal-line approximation (equiangular fitting, first-order symmetric function fitting).
  • Here Sc is the SAD at the position where the SAD is smallest, and Sl and Sr are the SADs at the pixels adjacent to that position on the left and right.
  • The same processing is performed for the vertical direction to calculate the vertical measured value of the feature point position. The position of one feature point on the image is thereby measured. This processing is performed for each feature point in turn, and the positions of all feature points on the image are measured. The measured values of the feature point positions are stored in the feature point position storage means 114.
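  • A minimal sketch of this measurement follows: a one-dimensional SAD search and the standard equiangular (polygonal-line) sub-pixel fit, which is assumed here to correspond to the patent's Equation 1, whose body is not reproduced in this text.

        import numpy as np

        def sad(patch, template):
            return np.abs(patch.astype(np.float32) - template.astype(np.float32)).sum()

        def subpixel_peak(Sl, Sc, Sr):
            """Equiangular fit around the SAD minimum Sc; Sl and Sr are the SADs
            one pixel to the left and right. Returns an offset in [-0.5, 0.5]."""
            if Sl >= Sr:
                return 0.5 * (Sl - Sr) / (Sl - Sc) if Sl > Sc else 0.0
            return 0.5 * (Sl - Sr) / (Sr - Sc) if Sr > Sc else 0.0

        def locate_feature(image, template, v, u_lo, u_hi):
            """Horizontal search: integer SAD minimum, then sub-pixel refinement."""
            th, tw = template.shape
            costs = [sad(image[v:v + th, u:u + tw], template)
                     for u in range(u_lo, u_hi)]
            i = int(np.argmin(costs))
            if 0 < i < len(costs) - 1:
                return u_lo + i + subpixel_peak(costs[i - 1], costs[i], costs[i + 1])
            return float(u_lo + i)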
  • Step 510 of the operation procedure of the embodiment of the calibration apparatus of the present invention shown in FIG. 1 will now be described with reference to FIG. 7.
  • Steps 711 to 714 (block 710) calculate the correction parameter in the enlargement/reduction direction, steps 721 to 724 (block 720) calculate the rotation and vertical correction parameters, and steps 731 to 734 (block 730) calculate the horizontal correction parameter.
  • In step 701, the correction parameter calculation means 122 reads the geometric correction information created in step 505 from the geometric correction information storage means 115, and reads the measured values and design values of the feature point positions on the viewpoint-2 image from the feature point position storage means 114. Using the correction amount (geometric correction information) at each pixel, the measured values of the feature point positions on the image photographed from viewpoint 2 are corrected, and the correction values of the feature point positions are calculated.
  • In step 711, the correction parameter calculation means 122 reads the design values of the feature point positions on the left and right viewpoint-2 images.
  • For each horizontal row, the correction value and the design value of the interval between the rightmost and the leftmost feature point positions are calculated.
  • The ratio of the correction value of each interval to its design value is calculated, and the ratios are averaged.
  • The same average ratio is calculated for the intervals in the vertical direction, and the horizontal and vertical averages are combined into a single average ratio for each image.
  • In step 712, the depth installation error Δz1 between the chart 101 and viewpoint 1 is calculated using Equations 2 to 4 below.
  • Here rl and rr are the ratios between the correction values and the design values of the feature point intervals in the left and right images,
  • and L1 and L2 are the distances from the chart 101 to viewpoint 1 and viewpoint 2, respectively.
  • In step 713, the correction parameter m of the geometric correction information in the enlargement/reduction direction is calculated using Equation 5 below.
  • The correction parameter values in the enlargement/reduction direction are the same for the left and right images.
  • In step 714, the correction values of the feature point positions on the viewpoint-2 image are transformed by the enlargement/reduction correction parameter about the optical axis position.
  • Taking the depth installation error between the chart 101 and viewpoint 2 to be the same as that between the chart 101 and viewpoint 1,
  • the feature point positions on the chart 101 are perspective-transformed to positions on the image.
  • The difference between the transformed value and the design value of each feature point position is calculated, and the correction amount of that feature point position in the viewpoint-2 image is changed by this difference.
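  • The bodies of Equations 2 to 5 are not reproduced in this text, so the sketch below rederives plausible forms under a pinhole model: a common depth error dz scales the chart image seen from distance L by L/(L + dz), and the viewpoint-1 correction information absorbs the spurious factor (L1 + dz)/L1, so the corrected viewpoint-2 interval ratio satisfies r = L2(L1 + dz) / (L1(L2 + dz)). Solving for dz and undoing the absorbed factor gives the scale parameter m. These closed forms are assumptions of the sketch, not the patent's literal equations.

        def depth_error_and_scale(rl, rr, L1, L2):
            """Steps 712-713 under the assumptions stated above.
            rl, rr: average corrected/design interval ratios in the left and
            right viewpoint-2 images; L1, L2: design distances (L1 != L2)."""
            r = 0.5 * (rl + rr)
            # r = L2 * (L1 + dz) / (L1 * (L2 + dz))  ->  solve for dz
            dz = L1 * L2 * (1.0 - r) / (r * L1 - L2)
            # the viewpoint-1 correction absorbed (L1 + dz) / L1; m removes it
            m = L1 / (L1 + dz)
            return dz, m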
  • In step 721, the correction parameter calculation means 122 calculates, for each feature point, the difference between the correction values of its vertical positions in the left and right viewpoint-2 images and the difference between the corresponding design values,
  • then the difference between these two differences, and finally the average of these values over all feature points.
  • In step 722, the manufacturing error Δh between the vertical positions of the two imaging system means in the imaging apparatus 100 is calculated using Equation 6 below.
  • ⁇ v is an average value of the difference between the correction values of the vertical positions of the feature points in the left image and the right image from the viewpoint 2 and the design value
  • c is the pixel pitch of the image element of the imaging system means
  • f is This is the focal length of the imaging system means.
  • In step 723, the rotation and vertical correction parameters θ and tv of the geometric correction information are calculated using Equations 7 and 8 below.
  • Here B is the baseline length (the distance between the imaging system means).
  • In step 724, the correction values of the feature point positions on the viewpoint-2 image are transformed by the rotation correction parameter θ about the optical axis position, and the correction values in the left and right images are translated in the vertical direction.
  • The design feature point positions on the chart 101 are then perspective-transformed to positions (transformed values) on the viewpoint-2 image. The difference between the transformed value and the design value of each feature point position is calculated, and the correction amount of that feature point position in the viewpoint-2 image is changed by this difference.
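  • Equations 6 to 8 are likewise not reproduced here; the sketch below uses plausible pinhole-model forms: a vertical offset dh between the two imaging systems shifts the chart image by dv pixels with dv·c ≈ f·dh/L2, the rotation angle θ is the angle between the actual and design lines joining the two focal points, and the residual is split evenly between the two images. All three forms are assumptions of the sketch.

        import math

        def vertical_and_rotation_params(dv_px, c_mm, f_mm, L2_mm, B_mm):
            dh_mm = dv_px * c_mm * L2_mm / f_mm  # assumed form of Equation 6
            theta = math.atan2(dh_mm, B_mm)      # assumed form of Equation 7:
                                                 # actual vs. design focal-point line
            tv_px = 0.5 * dv_px                  # assumed form of Equation 8: split
                                                 # the residual between both images
            return dh_mm, theta, tv_px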
  • In step 731, the correction parameter calculation means 122 calculates, for each feature point, the difference between the correction values of its horizontal positions in the left and right viewpoint-2 images and the difference between the corresponding design values, then the difference between these two differences, and finally the average over all feature points.
  • In step 732, the manufacturing error ΔB between the horizontal positions of the two imaging system means in the imaging apparatus 100 is calculated using Equation 9 below.
  • ⁇ u is an average value of the difference between the correction value of the lateral position of the feature point in the left image and the right image from the viewpoint 2 and the difference between the design values.
  • In step 733, the horizontal correction parameter tu of the geometric correction information is calculated using Equation 10 below.
  • In step 734, the correction values of the feature point positions in the left and right viewpoint-2 images are shifted in the horizontal direction by tu/2 and -tu/2, respectively.
  • The design feature point positions on the chart 101 are then perspective-transformed to positions (transformed values) on the viewpoint-2 image. The difference between the transformed value and the design value of each feature point position is calculated, and the correction amount of that feature point position in the viewpoint-2 image is changed by this difference.
  • The correction parameters in the horizontal, vertical, rotational, and enlargement/reduction directions are sent to the geometric correction information correction means 123.
  • The correction amounts of the feature point positions are stored in the feature point position storage means 114, and the horizontal and vertical errors of the positions of the imaging system means of the imaging apparatus 100 and the depth-direction error of the installation position of the imaging apparatus 100 are stored in the error storage means 116.
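  • Equations 9 and 10 are also not reproduced here; the sketch below assumes the usual disparity relation d = f·B/(c·L), under which a baseline error dB shifts the chart disparity at distance L2 by f·dB/(c·L2) pixels. Both forms are assumptions of the sketch.

        def horizontal_params(du_px, c_mm, f_mm, L2_mm):
            dB_mm = du_px * c_mm * L2_mm / f_mm  # assumed form of Equation 9
            tu_px = du_px                        # assumed form of Equation 10;
                                                 # step 734 applies +tu/2 and -tu/2
                                                 # to the left and right images
            return dB_mm, tu_px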
  • As shown in FIG. 8(a), if there is no manufacturing error between the two imaging system means 801a and 801b in the imaging apparatus 100 and no installation error of the imaging apparatus 100 at viewpoint 1, then, as shown in FIGS. 8(b) and 8(c), in the left and right images 802a and 802b corrected using the geometric correction information generated from the viewpoint-1 image in step 505, the objects 803a and 803b appear at the centers of the images as specified by the design information.
  • As shown in FIG. 9(a), if an error 903 occurs in the depth direction of the installation position of the imaging apparatus 100 with respect to the chart 101, so that the apparatus is installed farther away than designed, then, as shown in FIGS. 9(b) and 9(c),
  • the objects 901a and 901b that by design should appear at the centers of the images appear smaller than in FIGS. 8(b) and 8(c).
  • For example, the feature point position at the center of the left image is as designed, but the corresponding measured feature point position in the left region of the right image has a rightward error relative to the design value.
  • An error therefore arises in the geometric correction information because the measured feature point positions contain horizontal errors.
  • When images are corrected with such geometric correction information in the imaging apparatus 100, an error occurs in the parallax, and hence in the distance image.
  • To counter this, in step 701 the measured feature point positions of viewpoint 2 are corrected using the geometric correction information calculated from the viewpoint-1 image; in steps 711 to 714, the depth error between the chart 101 and viewpoint 1 is calculated from the ratio between the correction values and the design values of the horizontal and vertical feature point intervals in the viewpoint-2 image, the correction parameter in the enlargement/reduction direction that removes this error is calculated, and in step 511 the geometric correction information is corrected in the enlargement/reduction direction with this correction parameter.
  • The influence of the depth error between the chart 101 and viewpoint 1 can thus be removed from the geometric correction information; as shown in FIGS. 9(d) and 9(e), in the left and right images 804a and 804b corrected using the geometric correction information corrected in step 511, the objects 902a and 902b at the centers of the images are corrected as designed, just as in FIGS. 8(b) and 8(c), and the imaging apparatus 100 using the corrected geometric correction information can calculate an accurate distance image.
  • If there is a manufacturing error between the vertical positions of the two imaging system means in the imaging apparatus 100, then in the images corrected using the geometric correction information generated in step 505,
  • the objects 1001a and 1001b appearing at the centers of the images are shifted in the vertical direction, as shown in FIGS. 10(b) and 10(c).
  • The measured values of the feature point positions then contain vertical errors.
  • An error arises in the geometric correction information because the measured feature point positions contain vertical errors.
  • When images are corrected with geometric correction information containing vertical errors in the imaging apparatus 100, a vertical position error occurs on the image, no region of the comparison image matches the template image taken from the reference image, and the distance image cannot be obtained.
  • To counter this, in step 701 the measured feature point positions of viewpoint 2 are corrected using the geometric correction information calculated from the viewpoint-1 image; in steps 721 to 724, the manufacturing error between the vertical positions of the two imaging system means is calculated from the average difference between the correction values and the design values of the vertical feature point positions in the left and right viewpoint-2 images, the rotation and vertical correction parameters that remove this error are calculated, and in step 511 the geometric correction information is corrected in the rotational and vertical directions with these parameters.
  • The influence of the manufacturing error between the vertical positions of the two imaging system means in the imaging apparatus 100 can thus be removed from the geometric correction information; as shown in FIGS. 10(d) and 10(e), the objects 1002a and 1002b at the centers of the left and right images 804a and 804b corrected using the geometric correction information corrected in step 511 are corrected as designed, the region of the comparison image that matches the template image of the reference image can be found, and the distance image can be obtained.
  • The geometric correction information generated in step 505 corrects the images so as to remove the influence of the manufacturing error in the rotation direction about the optical axis of each imaging system means. In addition, by the rotation correction parameter calculated in steps 721 to 724, as shown in Equation 7 and FIGS. 10(d) and 10(e), the left and right images are rotated by the angle 1003 between the line connecting the actual positions (focal points) of the imaging system means and the line connecting their design positions, which removes the influence of the manufacturing error on the vertical positions of the two imaging system means in the imaging apparatus 100.
  • In total, each image is therefore rotated by the sum of the rotation-direction manufacturing error about the optical axis of its imaging system means and the angle 1003 between the actual and design lines connecting the imaging system means.
  • If there is a manufacturing error between the horizontal positions of the two imaging system means in the imaging apparatus 100, then in the images corrected using the geometric correction information generated in step 505,
  • the objects 1101a and 1101b that by design should appear at the centers of the images are shifted in the left-right direction, as shown in FIGS. 11(b) and 11(c).
  • The measured values of the feature point positions then contain left-right errors.
  • An error arises in the geometric correction information because the measured feature point positions contain horizontal errors.
  • When images are corrected with such geometric correction information in the imaging apparatus 100, an error occurs in the parallax, and hence in the distance image.
  • To counter this, in step 701 the measured feature point positions of viewpoint 2 are corrected using the geometric correction information calculated from the viewpoint-1 image; in steps 731 to 734, the manufacturing error between the horizontal positions of the two imaging system means in the imaging apparatus 100 is calculated from the average difference between the correction values and the design values of the horizontal feature point positions in the left and right viewpoint-2 images, the horizontal correction parameter that removes this error is calculated, and in step 511 the geometric correction information is corrected in the horizontal direction with this parameter.
  • The influence of the manufacturing error between the horizontal positions of the two imaging system means in the imaging apparatus 100 can thus be removed from the geometric correction information; the objects 1102a and 1102b at the centers of the left and right images 804a and 804b corrected using the geometric correction information corrected in step 511 are corrected as designed, and the imaging apparatus 100 using the corrected geometric correction information can calculate an accurate distance image.
  • Steps 731 to 734 remove the errors in the horizontal (parallax) direction on the image caused by the manufacturing error between the horizontal positions of the two imaging system means in the imaging apparatus 100. That is, the parallax between the left and right images corrected by the geometric correction information obtained in step 510 is the parallax that would be obtained if the distance between the horizontal positions of the two imaging system means were the design value plus the manufacturing error.
  • If an installation shift occurs when the imaging apparatus 100 is installed on the installation jig 103a or 103b, an error occurs in the depth position of the imaging apparatus 100 with respect to the chart 101, and a lateral error occurs in the measured feature point positions on the image.
  • An error then occurs in the parallax, and hence in the distance image.
  • To prevent this, three pins 401a to 401c are arranged on each of the installation jigs 103a and 103b, and the housing of the imaging apparatus 100 has three portions 402a to 402c machined into flat surfaces against which the pins 401a to 401c are applied. Setting the three pins on the three flat surfaces prevents installation shifts, so errors due to installation shifts of the imaging apparatus 100 can be removed from the geometric correction information, and the imaging apparatus 100 can calculate an accurate distance image.
  • The calibration apparatus of the present invention is not limited to the embodiment described above and can be applied with various modifications. Modifications of the calibration apparatus of the present invention are described below.
  • In one modification, the luminance correction means 118 reads the luminance correction coefficient of each pixel of the left and right images from the luminance correction information storage means 112 and corrects each image by multiplying the luminance of each pixel by its correction coefficient; when the image is a color image, the luminance values of the R, G, and B pixels are corrected.
  • For each feature point, the averages of the luminance values of the R, G, and B pixels in the black portion and in the white portion of the feature point image are calculated, and, for each of R, G, and B,
  • a linear function is obtained that converts the black and white portions to common predetermined luminance values.
  • The two coefficients of the linear function are calculated for each of R, G, and B at each feature point.
  • The two coefficients of the linear function are then calculated for each pixel by linear interpolation between the feature point positions.
  • The luminance value of each pixel is corrected using the two coefficients of the linear function at that pixel.
  • Without this correction, the luminance values differ between R, G, and B within the same image region, an error occurs in the measured feature point positions when the feature point image is matched against the corrected image in steps 503 and 508, and as a result an error occurs in the geometric correction information.
  • By correcting the R, G, and B luminance values in steps 502 and 507 of the operation procedure (FIGS. 5 to 7) of the embodiment of the calibration apparatus shown in FIG. 1 when the image is a color image, the luminance values become substantially the same for R, G, and B within the same image region, and the errors in the measured feature point positions and in the geometric correction information are removed. An accurate distance image can then be calculated in the imaging apparatus 100 using geometric correction information free of errors caused by differing R, G, and B luminance values.
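  • A minimal sketch of the per-channel linear luminance correction described above follows; the black and white target levels are illustrative assumptions.

        import numpy as np

        def rgb_linear_coeffs(black_mean, white_mean,
                              black_target=32.0, white_target=224.0):
            """Per-channel linear map a*x + b sending the measured black and white
            levels around one feature point onto common target levels.
            black_mean, white_mean: length-3 arrays of mean R, G, B luminance."""
            black_mean = np.asarray(black_mean, dtype=np.float64)
            white_mean = np.asarray(white_mean, dtype=np.float64)
            a = (white_target - black_target) / np.maximum(white_mean - black_mean, 1e-6)
            b = black_target - a * black_mean
            return a, b  # interpolate a, b between feature points, apply per pixel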
  • In a modification of step 603 of the operation procedure (FIGS. 5 to 7) of the embodiment of the calibration apparatus shown in FIG. 1, the feature point position measurement means 120 compares the luminance-corrected viewpoint-1 (viewpoint-2) image with the feature point image using any one of SAD (sum of absolute differences), ZSAD (zero-mean sum of absolute differences), SSD (sum of squared differences), ZSSD (zero-mean sum of squared differences), NCC (normalized cross-correlation), or ZNCC (zero-mean normalized cross-correlation), and the measured value of the feature point position can still be calculated to sub-pixel precision using Equation 1.
  • In that case Sc, Sl, and Sr are the SAD, ZSAD, SSD, ZSSD, NCC, or ZNCC values at the minimum-cost position and at the pixels adjacent to it on the left and right.
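  • For reference, the sketch below implements the listed matching measures for a template and an equally sized patch, using the standard textbook definitions; these are assumed rather than quoted from the patent. Note that SAD, ZSAD, SSD, and ZSSD are minimized at the best match, while NCC and ZNCC are maximized, so the latter should be negated before reuse in the Equation-1 style fit.

        import numpy as np

        def match_costs(t, p):
            t = t.astype(np.float64)
            p = p.astype(np.float64)
            tz, pz = t - t.mean(), p - p.mean()
            return {
                "SAD":  np.abs(t - p).sum(),
                "ZSAD": np.abs(tz - pz).sum(),
                "SSD":  ((t - p) ** 2).sum(),
                "ZSSD": ((tz - pz) ** 2).sum(),
                "NCC":  (t * p).sum() / np.sqrt((t ** 2).sum() * (p ** 2).sum()),
                "ZNCC": (tz * pz).sum() / np.sqrt((tz ** 2).sum() * (pz ** 2).sum()),
            }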
  • In another modification, when the imaging apparatus 100 has three or more imaging system means, steps 501 to 513 of the operation procedure (FIGS. 5 to 7) of the embodiment of the calibration apparatus shown in FIG. 1 are performed for each combination of two imaging system means.
  • The imaging apparatus 100 then corrects the images with the geometric correction information of each combination of two imaging system means and calculates a distance image. This enables geometric calibration of an imaging apparatus 100 having three or more imaging system means.
  • In another modification, step 506 is performed immediately after step 501.
  • Even if the processing order of step 506 is changed in this way, the effects of the manufacturing errors of the two imaging system means and of the installation error of the imaging apparatus 100 can be removed from the geometric correction information, and an accurate distance image can be calculated.
  • In another modification, instead of the image capturing means 117a and 117b, a single image capturing means 117 acquires the images from the imaging device 100 installed at viewpoint 1 and at viewpoint 2.
  • A single image storage means 111 stores the viewpoint-1 and viewpoint-2 images, the luminance-corrected images, and the geometrically corrected images.
  • an installation jig 103c is newly provided on the installation table 102, and the image capturing means 117b is installed on the installation jig 103c (viewpoint 3).
  • An image of the imaging device 100 when installed (the imaging device 100c is the imaging device 100 when installed on the installation jig 103c) is acquired (a first distance from the chart 101, a third distance different from the second distance).
  • the image storage unit 111b stores the image from the viewpoint 3, the luminance corrected image, and the geometrically corrected image.
  • the installation table 102 can install the imaging device at a position (viewpoint 1) away from the chart 101 by a first distance, and is a position away from the chart 101 by a predetermined second distance different from the first distance.
  • An imaging device can be installed at (viewpoint 2), and an imaging device can be installed at a position (viewpoint 3) that is separated from the chart 101 by a predetermined third distance different from the first distance and the second distance. It is a stand.
  • the visual field 104c of the imaging device 100 installed in the installation jig 103c overlaps the visual fields 104a and 104b of the imaging device 100 installed in the installation jigs 103a and 104b, and the installation jig 103c overlaps the visual fields 104a and 104b. It is arranged not to enter.
  • steps 506 to 510 are also performed on the image from the viewpoint 3.
  • the correction parameter calculation unit 122 uses the image of the viewpoint 3 to correct the correction parameters in the horizontal, vertical, rotation, and enlargement / reduction directions of the geometric correction information, the manufacturing error, and the installation error. Is calculated.
  • the geometric correction information is corrected based on the average value of the correction parameters calculated using the viewpoint 2 and viewpoint 3 images. Thereby, the geometric correction information can be corrected based on the images from the two viewpoints, and the accuracy of the geometric correction information is improved. In addition, a more accurate distance image can be calculated in the imaging apparatus 100 using the geometric correction information with improved accuracy.
FIG. 13 shows the configuration of an embodiment of the calibration apparatus and calibration system of the present invention. This embodiment includes a chart 101, an installation table 142, an installation jig 103, slider means 105, calculation means 150, and screen output means 130. The chart 101, the installation jig 103, and the screen output means 130 are the same as those shown in FIG. 1.
The installation table 142 is a table to which the imaging apparatus 100 is attached via the slider means 105 and the installation jig 103. That is, instead of the installation base of FIG. 1 and FIG. 12, which provides a step for each viewpoint, the slider means 105 is provided, and images are photographed and captured at different distances by varying the distance between the chart 101 and the imaging device. This makes it possible to capture images from different viewpoints.
FIG. 15 shows the slider means 105 of this embodiment applied to the calibration system; apart from the slider means, the contents are exactly the same as described above.
The slider means 105, such as a slider, receives a control signal and moves the table on which the installation jig 103 is mounted to the position specified by the control signal. The imaging apparatus 100 is installed via the installation jig 103 so that its optical axis is perpendicular to the surface of the chart, and the slider means 105 moves the installation jig 103 and the imaging device 100 in the direction perpendicular to the chart surface.
The calculation means 150, which is a calibration device including a CPU and a memory, includes an image storage unit 111a, an image storage unit 111b, a luminance correction information storage unit 112, a design information storage unit 113, a feature point position storage unit 114, a geometric correction information storage unit 115, a correction information correction unit 123, a geometric correction information transmission unit 124, a geometric correction unit 125, and a control unit 126.
The image capture unit 117 captures the image output from the imaging device 140. The control means 126 sends to the slider means 105 a control signal to move the imaging device to the position of viewpoint 1' (imaging device 140a denotes the imaging device when positioned at viewpoint 1') or to the position of viewpoint 2' (imaging device 140b denotes the imaging device when positioned at viewpoint 2'). Viewpoint 1' and viewpoint 2' are at different distances from the chart 101.
The fields of view 141a and 141b of the imaging device at viewpoint 1' and viewpoint 2' both include the surface of the chart 101. Since the platform of the slider means 105 moves perpendicularly to the surface of the chart 101 and the distance between the chart 101 and viewpoint 2' is shorter than the distance between the chart 101 and viewpoint 1', the field of view 141b of the imaging device at viewpoint 2' is contained in the field of view 141a of the imaging device at viewpoint 1'.
The processing in steps 502 to 505 and steps 507 to 514 is the same as that in steps 502 to 505 and steps 507 to 514 of the operation procedure described above (FIGS. 5 to 7).
In step 1401, the control means 126 sends a control signal to the slider means 105 to move the platform to the position of viewpoint 1'; the slider means 105 receives this signal and moves the table to which the installation jig 103 is attached to viewpoint 1'. In step 1402, the imaging apparatus outputs the images photographed by the left and right imaging system means; the image capture unit 117 acquires the left and right images from the imaging apparatus and sends them to the image storage unit 111a, which stores them.
In step 1403, the control means 126 sends a control signal to the slider means 105 to move the platform to the position of viewpoint 2'; the slider means 105 receives this signal and moves the table to which the installation jig 103 is attached to viewpoint 2'. In step 1404, the imaging apparatus outputs the images photographed by the left and right imaging system means; the image capture unit 117 acquires the left and right images from the imaging apparatus 100 and sends them to the image storage unit 111b, which stores them.
In this way, the slider means 105 moves the imaging apparatus 100 to viewpoint 1' and viewpoint 2' in steps 1401 and 1403, so the movement of the imaging device 100 can be automated, as sketched below.
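A minimal sketch of this automated two-viewpoint capture sequence (steps 1401 to 1404); the slider and imaging-device interfaces here are hypothetical stand-ins, not APIs defined in this document:

```python
def capture_two_viewpoints(slider, imaging_device, store_a, store_b,
                           pos_viewpoint_1, pos_viewpoint_2):
    # step 1401: move the platform with the installation jig to viewpoint 1'
    slider.move_to(pos_viewpoint_1)
    # step 1402: acquire the left and right images and store them
    store_a.save(imaging_device.capture_left_right())
    # step 1403: move the platform to viewpoint 2'
    slider.move_to(pos_viewpoint_2)
    # step 1404: acquire and store the left and right images at viewpoint 2'
    store_b.save(imaging_device.capture_left_right())
```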
The calibration apparatus of the present invention is not limited to the embodiments described above and can be applied with various modifications. Modifications of the calibration apparatus of the present invention are described below.
In one modification, after step 510 the imaging apparatus is moved to the position of viewpoint 3' (imaging device 140c denotes the imaging apparatus when positioned at viewpoint 3'), steps 1403, 1404, and 507 to 510 are performed, and in step 511 the geometric correction information is corrected based on the average value of the correction parameters calculated using the images of viewpoints 2' and 3'. The geometric correction information can thereby be corrected based on the images from two viewpoints, which improves its accuracy, and a more accurate distance image can be calculated in the imaging apparatus using the geometric correction information with improved accuracy.
FIG. 16 shows the configuration of an embodiment of the calibration system of the present invention. This embodiment includes a chart 1601, an installation table 1602, a first installation jig (not shown), a second installation jig (not shown), calculation means 1610a to 1610e, and screen output means 1620a to 1620e. A plurality of imaging devices (five imaging devices 1600a to 1600e) are arranged in a direction parallel to the surface of the chart 1601, each arranged so that their imaging regions overlap one another.
The chart 1601 has a plurality of feature points; the feature point example 301a shown in FIG. 3A is a rectangle, and the feature point example 301b shown in FIG. 3B is a ring.
The installation table 1602 is a table to which the five imaging devices 1600a to 1600e are attached via the first installation jig and the second installation jig; that is, a table on which a plurality of imaging devices are arranged in a direction parallel to the surface of the chart 1601 such that their imaging regions overlap one another. The first installation jigs are installed at the same distance from the chart 1601 and are arranged so that the visual fields 1604a to 1604e of the imaging devices 1600a to 1600e installed on them overlap. The second installation jigs are installed at a common distance from the chart 1601 that differs from the distance between the chart 1601 and the first installation jigs, and are arranged so that the visual fields of the imaging devices 1600a to 1600e installed on them overlap the visual fields 1604a to 1604e of the imaging devices 1600a to 1600e installed on the first installation jigs.
The calculation means 1610a to 1610e, which are calibration apparatuses, have the same configuration as the calculation means 110 of the embodiment of the calibration apparatus of the present invention shown in FIG. 1, and the screen output means 1620a to 1620e are the same as the screen output means 130 of the embodiment shown in FIG. 1; their detailed description is therefore omitted.
With this configuration, a plurality of imaging devices can be calibrated, and because the imaging devices are installed so that their fields of view overlap, the space required can be made narrower than when installing a plurality of the embodiments of the calibration apparatus of the present invention shown in FIG. 1.
FIG. 17 shows the configuration of another embodiment of the calibration system of the present invention. This embodiment includes a chart 1701, an installation table 1702, slider means 1705a to 1705e, installation jigs (not shown), five imaging devices 1700a to 1700e, calculation means 1710a to 1710e, and screen output means 1720a to 1720e.
The chart 1701 is the same as the chart of the embodiment of the calibration apparatus of the present invention shown in FIG. 1; its detailed description is therefore omitted.
The installation table 1702 is a table to which the five imaging devices 1700a to 1700e are attached via the slider means 1705a to 1705e and the installation jigs. The slider means 1705a to 1705e, such as sliders, receive a control signal and move the stage to which the installation jig is attached to the position specified by the control signal. The imaging devices 1700a to 1700e are installed via the installation jigs so that their optical axes are perpendicular to the surface of the chart 1701, and the slider means 1705a to 1705e move the installation jigs and the imaging devices 1700a to 1700e in this direction. The slider means 1705a to 1705e are installed so that the visual fields 1704a to 1704e of the imaging devices 1700a to 1700e overlap one another.
The installation jigs have the same configuration as the installation jig of the embodiment of the calibration apparatus of the present invention shown in FIG. 13, the calculation means 1710a to 1710e have the same configuration as the calculation means 150 of the embodiment shown in FIG. 13, and the screen output means 1720a to 1720e are the same as the screen output means 130 of the embodiment shown in FIG. 13; their detailed description is therefore omitted.
With this configuration, a plurality of imaging devices can be calibrated, and because the imaging devices are installed so that their fields of view overlap, the space required can be made narrower than when installing a plurality of the embodiments shown in FIG. 13. Further, the slider means move the imaging devices to viewpoint 1' and viewpoint 2' in steps 1401 and 1403, so the movement of the imaging devices can be automated.
  • FIG. 18 shows a configuration of an embodiment of the imaging apparatus of the present invention.
  • One embodiment of the present invention includes an imaging system unit 1800a, an imaging system unit 1800b, a calculation unit 1810, a screen audio output unit 1830, and a control unit 1840.
  • the imaging system unit 1800a such as a camera includes an optical element unit 1801a, a shutter unit 1802a, and an imaging element unit 1803a.
  • Optical element means 1801a such as a lens refracts light and forms an image on the image sensor.
A shutter unit 1802a, such as a shutter, is installed where the light that has passed through the optical element unit 1801a travels; immediately after receiving the shutter open/close signal and the exposure time information, it opens the shutter mechanism so that the light passes only for the exposure time, and at all other times it keeps the shutter mechanism closed to block the light.
  • the image sensor unit 1803a such as an image sensor receives the image of the light refracted by the optical element unit 1801a and generates an image corresponding to the intensity of the light.
  • An imaging system unit 1800b such as a camera includes an optical element unit 1801b, a shutter unit 1802b, and an imaging element unit 1803b.
  • the design values of the focal lengths of the imaging system unit 1800a and the imaging system unit 1800b are the same.
  • the directions of the optical axes of the imaging system means 1800a and the imaging system means 1800b are substantially the same.
  • Optical element means 1801b such as a lens refracts light and forms an image on the image sensor.
Shutter means 1802b, such as a shutter, is installed where the light that has passed through the optical element means 1801b travels; immediately after receiving the shutter open/close signal and the exposure time information, it opens the shutter mechanism so that the light passes only for the exposure time, and at all other times it keeps the shutter mechanism closed to block the light.
  • Imaging element means 1803b such as an imaging element receives the image of the light refracted by the optical element means 1801b and generates an image corresponding to the intensity of the light.
An arithmetic unit 1810 including a CPU and a memory includes a reference image storage unit 1811, a comparison image storage unit 1812, a processed image storage unit 1813, a luminance correction information storage unit 1814, a geometric correction information storage unit 1815, a design information storage unit 1816, and the further means described below.
  • Reference image storage means 1811 such as a memory or a hard disk stores an image photographed by the imaging system means 1800a. In the parallax calculation, since the template image is cut out from the image stored in the reference image storage unit 1811, this image is a reference image.
  • Comparison image storage means 1812 such as a memory or a hard disk stores an image photographed by the imaging system means 1800b. In the parallax calculation, an image stored in the comparison image storage unit 1812 is searched for using a template image, so this image is a comparison image.
  • Processed image storage means 1813 such as a memory or a hard disk stores an image processed by the computing means 1810 and generated.
  • the luminance correction information storage unit 1814 such as a memory or a hard disk stores the luminance correction coefficient of each pixel in the images (reference image and comparison image) of the imaging system unit 1800a and the imaging system unit 1800b.
This correction coefficient is a value that makes the luminance of the image uniform when uniform light or a uniform object is photographed over the entire surface of the image.
The geometric correction information storage unit 1815, such as a memory or a hard disk, stores the geometric correction amount (geometric correction information) of each pixel in the images (reference image and comparison image) of the imaging system unit 1800a and the imaging system unit 1800b. This correction amount is the value that corrects the images to those that would be obtained if the distortion of the optical element means 1801a and 1801b, the focal length errors of the imaging system means 1800a and 1800b, the errors of the optical axis positions on the images, and the mounting errors were all zero. The stored geometric correction information is the information calculated by the embodiments of the calibration apparatus of the present invention shown in FIGS. 1, 13, 16, and 17; it is therefore geometric correction information that rotates the image by the angle obtained by adding the manufacturing error in the rotation direction about the optical axis of the imaging system means and the angle formed between the line connecting the actual positions (focal points) of the imaging system means and the line connecting their design positions (focal points).
Design information storage means, such as a memory or a hard disk, stores the distance between the imaging system means (baseline length), the design value of the focal length of the imaging system means, the pixel pitch of the imaging element means, and the design value of the resolution. Error storage means, such as a memory or a hard disk, stores the lateral position error between the imaging system means; this position error is calculated by the embodiments of the calibration apparatus of the present invention shown in FIGS. 1, 13, 16, and 17.
  • Synchronization signal transmission means 1818 generates and transmits a synchronization signal.
  • the reference image capturing means 1819a sends a signal for opening the shutter to the shutter means 1802a in accordance with the synchronization signal of the synchronization signal transmitting means 1818, and obtains an image generated by the image sensor means 1803a.
  • the comparison image capturing means 1819b sends a signal for opening the shutter to the shutter means 1802b in accordance with the synchronization signal of the synchronization signal transmitting means 1818 and obtains an image generated by the image sensor means 1803b.
  • the luminance correction unit 1820 reads the luminance correction coefficient of each pixel from the luminance correction information storage unit 1814, and corrects the luminance of the reference image and the comparison image.
The geometric correction unit 1821 reads the geometric two-dimensional correction amount (geometric correction information) of each pixel from the geometric correction information storage unit 1815, geometrically corrects the reference image and the comparison image, and corrects the shape of the captured images. That is, the reference image and the comparison image are corrected based on the geometric correction information of the imaging system unit 1800a (first imaging system unit) and the imaging system unit 1800b (second imaging system unit). In addition, the rotation direction is corrected by the angle obtained by adding the angle formed between the actual baseline connecting the imaging system means 1800a and the imaging system means 1800b and the ideal baseline connecting them, and the error of the rotation angle of the imaging system means 1800a from its ideal value.
The parallax calculation means 1822 searches for the region on the comparison image corresponding to a region of predetermined size (template image) extracted from the reference image, and calculates the difference between the position of the matching region on the comparison image and the position of the template image on the reference image, that is, the parallax. A parallax image is obtained by calculating the parallax for each pixel.
The distance calculation means 1823 calculates, based on the parallax calculated by the parallax calculation means 1822, the distance between the imaging system means 1800a and 1800b (baseline length), the focal length, and the pixel pitch, the distance in the optical axis direction of the imaging system means 1800a and 1800b from the imaging device to each object on the image. A distance image is obtained by calculating the distance for each pixel. The baseline length used in this calculation is the value obtained by adding the baseline length between the imaging system unit 1800a and the imaging system unit 1800b and the error of the baseline length from its ideal value.
The recognition unit 1824 recognizes objects and their positions on the reference image using the reference image and the distance image, and calculates the three-dimensional relative position and relative speed of each object with respect to the imaging apparatus. The three-dimensional relative position coordinate system with respect to the imaging apparatus takes the midpoint between the focal points of the imaging system means 1800a and the imaging system means 1800b as the origin, with the x coordinate to the right with respect to the imaging system means 1800a and 1800b, the y coordinate upward, and the z coordinate along the optical axis direction. The time until collision is calculated from the relative position and relative speed between the imaging device and the object, and whether a collision occurs within a predetermined time is determined. The relative position, relative speed, collision determination result, and collision time between the imaging device and the object are sent to the screen audio output means 1830 and the control means 1840.
Screen audio output means 1830, such as a monitor and a speaker, displays the reference image, the parallax image, or the distance image on the screen, and displays a frame or marker at the position of each object. The frame or marker color of an object that the recognition unit 1824 has determined to be on a collision course is made different from that of objects determined not to collide, and a warning sound is output.
  • the control means 1840 such as a CPU generates a control signal based on the relative position, relative speed, collision time, and collision determination result between the imaging apparatus and the object, and outputs the control signal to the outside of the imaging apparatus.
  • the synchronization signal transmitting means 1818 generates a synchronization signal and sends it to the reference image capturing means 1819a and the comparison image capturing means 1819b.
  • the reference image capturing unit 1819a sends the shutter open / close signal and exposure time information to the shutter unit 1802a.
  • the shutter unit 1802a opens the shutter mechanism for the exposure time immediately after receiving the shutter opening / closing signal and the exposure time information from the reference image capturing unit 1819a, and then closes the shutter mechanism.
  • the image sensor unit 1803a receives the image of the light refracted by the optical element unit 1801a, generates an image according to the intensity of the light, and sends the image to the reference image capturing unit 1819a.
  • the reference image capturing unit 1819a receives the image from the image sensor unit 1803a and stores it in the reference image storage unit 1811.
Immediately after receiving the synchronization signal from the synchronization signal transmitting unit 1818, the comparison image capturing unit 1819b sends the shutter open/close signal and the exposure time information to the shutter unit 1802b.
  • the shutter unit 1802b opens the shutter mechanism for the exposure time immediately after receiving the shutter opening / closing signal and the exposure time information from the comparison image capturing unit 1819b, and then closes the shutter mechanism.
  • the image sensor unit 1803b receives the image of the light refracted by the optical element unit 1801b, generates an image according to the intensity of the light, and sends the image to the comparative image capturing unit 1819b.
  • the comparative image capturing unit 1819b receives the image from the image sensor unit 1803b and stores it in the comparative image storage unit 1812.
The luminance correction unit 1820 reads the correction coefficient of each pixel in the images of the image sensor unit 1803a and the image sensor unit 1803b from the luminance correction information storage unit 1814, and reads the reference image and the comparison image from the reference image storage unit 1811 and the comparison image storage unit 1812, respectively. The luminance value of the reference image is corrected by multiplying the luminance value of each pixel of the reference image by the correction coefficient of the corresponding pixel in the image of the image sensor unit 1803a, and the luminance value of the comparison image is corrected likewise using the correction coefficients for the image of the image sensor unit 1803b. The corrected reference image and comparison image are stored in the reference image storage unit 1811 and the comparison image storage unit 1812, respectively.
The geometric correction unit 1821 reads the geometric two-dimensional correction amount of each pixel in the images of the image sensor unit 1803a and the image sensor unit 1803b from the geometric correction information storage unit 1815, and reads the reference image and the comparison image from the reference image storage unit 1811 and the comparison image storage unit 1812, respectively. For each pixel of the reference image, the position on the reference image shifted by the two-dimensional correction amount is calculated, and the luminance value at that position is obtained by interpolation from the luminance values of the surrounding pixels; this calculation is performed for all pixels on the reference image. The same calculation is performed for all pixels on the comparison image. The corrected reference image and comparison image are stored in the reference image storage unit 1811 and the comparison image storage unit 1812, respectively.
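A minimal sketch of this per-pixel geometric correction; the patent states only that the luminance is interpolated from surrounding pixels, so the bilinear form and the sign convention of the correction amounts (dx, dy) are assumptions:

```python
import numpy as np

def geometric_correct(image, dx, dy):
    """For each output pixel, sample the input image at the position shifted
    by the per-pixel two-dimensional correction amounts (dx, dy), using
    bilinear interpolation of the four surrounding pixels."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    u = np.clip(xs + dx, 0, w - 1.001)   # corrected source x positions
    v = np.clip(ys + dy, 0, h - 1.001)   # corrected source y positions
    x0, y0 = np.floor(u).astype(int), np.floor(v).astype(int)
    fu, fv = u - x0, v - y0
    img = image.astype(np.float32)
    top = img[y0, x0] * (1 - fu) + img[y0, x0 + 1] * fu
    bottom = img[y0 + 1, x0] * (1 - fu) + img[y0 + 1, x0 + 1] * fu
    return (top * (1 - fv) + bottom * fv).astype(image.dtype)
```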
The parallax calculation means 1822 extracts an image 2003 (template image) of a region of predetermined size on the reference image 2001, as shown in the corresponding figure, and searches the comparison image 2002 for the region showing the same object as the template image 2003 by the following template matching. An image 2004 of a region of the predetermined size on the comparison image 2002 is extracted, and the sum of the absolute values of the differences (SAD) between the luminance values of the template image 2003 on the reference image 2001 and the luminance values of the region image 2004 on the comparison image 2002 is calculated. This sum is calculated for each region image 2004 on the comparison image 2002, and the region image 2004 with the smallest value is taken as the match.
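A minimal sketch of this SAD template search; the search range and the assumption that, after geometric correction, the matching region lies on the same row with a non-negative shift are illustrative choices, not taken from the patent:

```python
import numpy as np

def find_match_sad(reference, comparison, top, left, size, max_disp):
    """Cut a size x size template from the reference image at (top, left) and
    find the horizontal position on the same row of the comparison image
    that minimizes the sum of absolute luminance differences (SAD)."""
    tmpl = reference[top:top + size, left:left + size].astype(np.float32)
    best_cost, best_u = np.inf, left
    for u in range(max(0, left - max_disp), left + 1):
        cand = comparison[top:top + size, u:u + size].astype(np.float32)
        cost = np.abs(tmpl - cand).sum()
        if cost < best_cost:
            best_cost, best_u = cost, u
    return left - best_u, best_cost   # integer parallax d and its SAD
```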
The distance calculation unit 1823 reads the parallax image from the processed image storage unit 1813; the design values of the baseline length, the focal length, and the pixel pitch from the design information storage unit; and the position error between the imaging system units in the horizontal (baseline length) direction from the error storage unit.
The distance L in the optical axis direction between the object shown in the image 2003 of each region on the reference image and the imaging device is then calculated using Equation 12:

L = (B + ΔB) × f / (c × d)   (Equation 12)

where B is the design value of the distance (baseline length) between the focal points of the imaging system unit 1800a and the imaging system unit 1800b, ΔB is the error of the baseline length, f is the design value of the focal length, d is the parallax calculated in step 1904, and c is the design value of the pixel pitch of the imaging element means.
This processing is performed for all regions on the reference image to calculate the distance in the optical axis direction between the imaging device and the object in each region of the entire reference image, and the distance image calculated in this way is stored in the processed image storage means 1813.
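Equation 12 can be transcribed directly as a function; the units (metres for B, ΔB, and f, metres per pixel for c, pixels for d) and the numbers in the usage lines are illustrative assumptions:

```python
def distance_from_parallax(d, B, dB, f, c):
    """Equation 12: L = (B + dB) * f / (c * d)."""
    return (B + dB) * f / (c * d)

# illustrative numbers only: 0.35 m baseline, 6 mm focal length,
# 4.25 um pixel pitch, 20 px parallax -> roughly 24.7 m
L = distance_from_parallax(20.0, 0.35, 0.0, 0.006, 4.25e-6)
```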
The recognition unit 1824 reads the reference image from the reference image storage unit 1811 and the distance image from the processed image storage unit 1813, and then performs calculation of the position of the vanishing point on the reference image, detection of objects such as automobiles and pedestrians, calculation of the relative position and relative speed of each object with respect to the imaging apparatus, and determination of collision between each object and the imaging apparatus.
The recognition unit 1824 calculates the position of the vanishing point on the reference image as follows. The white lines on both sides of the lane boundary on the reference image are detected, and the slope of each white line on the reference image is calculated. Assuming that the white lines on both sides are straight lines, the position on the reference image of the point where they intersect is calculated from the calculated slopes; this is the position of the vanishing point.
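A minimal sketch of the intersection step; the white lines are assumed to have already been fitted as straight lines v = m·u + b on the reference image, and the fitting itself is omitted:

```python
def vanishing_point(m_left, b_left, m_right, b_right):
    """Intersection of the two straight lane-boundary lines v = m*u + b;
    returns (u, v) of the vanishing point on the reference image."""
    u = (b_right - b_left) / (m_left - m_right)
    return u, m_left * u + b_left
```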
The recognition means 1824 detects objects such as automobiles and pedestrians in the following procedure. First, each region 1 is obtained by connecting pixels whose distances fall within a predetermined range; as the predetermined ranges, a plurality of ranges 5 m wide and overlapping every 2.5 m are set, such as 5 to 10 m, 7.5 to 12.5 m, and 10 to 15 m. Next, the vertical and horizontal lengths on the reference image of each region 1 are obtained. The three-dimensional vertical length of each region 1 is calculated by multiplying its vertical length on the reference image by the pixel pitch and the distance and dividing by the focal length, and the three-dimensional horizontal length of each region 1 is calculated in the same way from its horizontal length on the reference image.
The vertical position Vg on the reference image of the ground of region 1 is given by Equation 13: Vg = Vv + f × Hi / (c × Lr), where Vv is the height (vertical position) of the vanishing point, Hi is the mounting height of the imaging device, and Lr is the average distance of region 1.
If the three-dimensional vertical and horizontal lengths of region 1 are within the predetermined ranges for an automobile, and the difference between the vertical position on the reference image of the lower edge of region 1 and the vertical position on the reference image of the ground of region 1 calculated by Equation 13 above is within a threshold, the object in region 1 is determined to be an automobile. Similarly, if the three-dimensional vertical and horizontal lengths of region 1 are within the predetermined ranges for a pedestrian and the same difference is within the threshold, the object in region 1 is determined to be a pedestrian. These processes are carried out for all regions 1 to determine whether each contains an automobile or a pedestrian.
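A minimal sketch of the size and ground-position checks used in this determination; the size ranges and row tolerance are illustrative assumptions, and ground_row uses Equation 13 with the image row axis assumed to increase downward:

```python
def region_3d_size(len_v_px, len_u_px, distance, c, f):
    # 3-D size = image length [px] * pixel pitch * distance / focal length
    return len_v_px * c * distance / f, len_u_px * c * distance / f

def ground_row(Vv, Hi, Lr, c, f):
    # Equation 13: image row of the ground at the average distance Lr
    return Vv + f * Hi / (c * Lr)

def is_car(height_m, width_m, bottom_row, Vv, Hi, Lr, c, f,
           height_range=(1.0, 2.5), width_range=(1.2, 3.0), row_tol=10.0):
    # size ranges and tolerance are illustrative, not values from the patent
    ok_size = (height_range[0] <= height_m <= height_range[1]
               and width_range[0] <= width_m <= width_range[1])
    ok_ground = abs(bottom_row - ground_row(Vv, Hi, Lr, c, f)) <= row_tol
    return ok_size and ok_ground
```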
The relative position (Xo, Yo, Zo) of each object with respect to the imaging device is calculated using Equations 14 to 16, where (Uo, Vo) is the position on the reference image of the center of the region 1 determined to be an automobile or a pedestrian.
The processing of steps 1901 to 1908 is repeated at a predetermined cycle. If the difference between the position on the reference image of a region 1 detected in step 1906 of the previous cycle and that of the current cycle is within a threshold, the objects are determined to be the same, and the relative speed (Vx, Vy, Vz) of the object with respect to the imaging device is obtained by subtracting the relative position calculated in step 1906 of the previous cycle from the relative position calculated in the current cycle and dividing by the time interval of the processing cycle of steps 1901 to 1908.
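Equations 14 to 16 are referenced but not reproduced in this text; the sketch below assumes the standard pinhole back-projection consistent with the coordinate system defined above, with (Uo, Vo) measured in pixels from the optical-axis center of the reference image:

```python
def relative_position(Uo, Vo, Lr, c, f):
    # assumed form of Equations 14-16: back-project the region-1 center
    # at the average distance Lr of the region
    Zo = Lr
    Xo = Uo * c * Zo / f
    Yo = Vo * c * Zo / f
    return Xo, Yo, Zo

def relative_velocity(pos_now, pos_prev, cycle_dt):
    # (current position - position one cycle earlier) / cycle time
    return tuple((a - b) / cycle_dt for a, b in zip(pos_now, pos_prev))
```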
The collision determination between each object and the imaging device is performed as follows. If the relative speed Vz of the object with respect to the imaging device is 0 or more, it is determined that the imaging device will not collide with the object in the region 1 determined to be an automobile or a pedestrian. If Vz is negative, the time until collision (collision time) is calculated by dividing the relative position Zo of the object with respect to the imaging device calculated in the current cycle by the absolute value of Vz, and the relative position Xo of the object with respect to the imaging apparatus at the time of collision is calculated by adding the current relative position Xo to the product of the relative speed Vx and the collision time. If the collision time is within a threshold and the absolute value of the relative position Xo of the object at the time of collision is within a threshold, it is determined that the imaging device will collide with the object in that region 1.
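A minimal sketch of this collision test; the two thresholds are illustrative assumptions:

```python
def collision_check(Xo, Zo, Vx, Vz, ttc_threshold=3.0, lateral_threshold=1.2):
    """Returns (collides, collision_time); Vz < 0 means the object approaches."""
    if Vz >= 0:
        return False, None
    ttc = Zo / abs(Vz)                  # collision time
    x_at_collision = Xo + Vx * ttc      # lateral position at collision
    collides = ttc <= ttc_threshold and abs(x_at_collision) <= lateral_threshold
    return collides, ttc
```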
The recognition unit 1824 sends the positions of the four corners on the reference image of each region 1 determined to be an automobile or a pedestrian, the relative position and relative speed of the object with respect to the imaging device, the collision determination result, and the collision time to the screen audio output means 1830 and the control means 1840. The screen audio output means 1830 receives this information, reads the reference image from the reference image storage means 1811, displays the reference image on the screen with each region 1 determined to be an automobile or a pedestrian shown as a frame, displays the frames of regions 1 whose collision determination result is a collision in a color different from that of the frames of objects determined not to collide, and outputs a warning sound.
In step 1908, the control means 1840 receives from the recognition means 1824 the positions of the four corners on the reference image of each region 1 determined to be an automobile or a pedestrian, the relative position and relative speed of the object with respect to the imaging device, the collision determination result, and the collision time. When any region 1 determined to be an automobile or a pedestrian has a collision determination result of collision, a control signal for avoiding the collision is generated and output to the outside of the imaging apparatus.
When, as shown in FIGS. 10B and 10C, the image of viewpoint 1 displayed in step 505 shows the pattern at the center of the image shifted vertically from its design position, the measured values of the feature point positions contain a vertical error, and this error propagates into the geometric correction information. If an image is corrected in the imaging device with geometric correction information containing a vertical error, a vertical position error occurs on the image, no region of the comparison image matches the template image of the reference image, and a distance image cannot be obtained. Therefore, in step 512, the geometric correction information is corrected in the rotation and vertical directions with the correction parameters, the corrected geometric correction information is stored in the geometric correction information storage means, and the reference image and the comparison image are geometrically corrected using this geometric correction information in step 1903. Since the vertical error of the image is thereby removed, the region of the comparison image that matches the template image of the reference image can be found, and a distance image can be obtained.
Similarly, when, as shown in FIGS. 11B and 11C, the image of viewpoint 1 displayed in step 505 shows the pattern at the center of the image shifted horizontally from its design position, the measured values of the feature point positions contain a horizontal error, which propagates into the geometric correction information. If an image is corrected in the imaging device with geometric correction information containing a horizontal error, an error occurs in the parallax and hence in the distance image. Therefore, in step 512, the geometric correction information is corrected in the horizontal direction with the correction parameters, stored in the geometric correction information storage means, and used to geometrically correct the reference image and the comparison image in step 1903. The region of the comparison image that matches the template image of the reference image can then be found accurately, and an accurate distance image can be calculated.
If an installation shift occurs when the imaging apparatus 100 is installed on the installation jig 103a or the installation jig 103b, an error occurs in the depth position of the imaging apparatus 100 with respect to the chart 101, and a lateral error occurs in the measured values of the feature point positions on the image; as a result, an error occurs in the parallax and in the distance image. For this reason, the installation jigs 103a and 103b for installing the imaging apparatus 100 are each provided with three pins 401a to 401c, and the imaging apparatus 100 is positioned by bringing the pins 401a to 401c into contact with the flat machined portions 402a to 402c of the housing of the imaging apparatus 100.
The imaging apparatus of the present invention is not limited to the embodiment described above and can be applied with various modifications. Modifications of the imaging apparatus of the present invention are described below.
Instead of the polygonal-line approximation of Equation 1, the parallax calculation means 1822 may calculate the horizontal shift u between the region 2004 on the comparison image 2002 and the region of the template image 2003, that is, the parallax, by parabolic fitting based on the sums Sc, Sl, and Sr of the absolute values of the luminance differences at the position where the sum of absolute differences is smallest and at the pixel positions to its left and right.
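A minimal sketch of the parabolic fit; this is the standard three-point parabola vertex, assumed here rather than quoted from the patent:

```python
def parabolic_subpixel(sl, sc, sr):
    """Vertex of the parabola through (-1, sl), (0, sc), (1, sr); the result
    is added to the integer best-match position to give sub-pixel parallax."""
    denom = sl - 2.0 * sc + sr
    return 0.0 if denom == 0 else 0.5 * (sl - sr) / denom
```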

Abstract

The objective of the present invention is to reduce, in an imaging device having a plurality of imaging means, errors in the geometric calibration information caused by manufacturing errors in the attachment positions of the imaging means and by installation errors of the imaging device with respect to a chart. The present invention has a first image acquisition means for acquiring a first image taken by an imaging device separated by a predetermined first distance from a chart on which a plurality of feature points are drawn; a calibration information generation means for generating first calibration information based on the first image, the camera parameters of the plurality of imaging means, the baseline lengths between the plurality of imaging means, and a plurality of feature point positions; a second image acquisition means for acquiring a second image taken by the imaging device separated from the chart by a predetermined second distance different from the first distance; and a calibration information correction means for correcting the first calibration information based on the second image, the camera parameters of the plurality of imaging means, the baseline lengths between the plurality of imaging means, and the plurality of feature point positions, and generating second calibration information.
PCT/JP2014/056816 2013-05-07 2014-03-14 Calibration device, calibration system, and imaging device Ceased WO2014181581A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2015515807A JP6186431B2 (ja) 2013-05-07 2014-03-14 Calibration device, calibration system, and imaging device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013097272 2013-05-07
JP2013-097272 2013-05-07

Publications (1)

Publication Number Publication Date
WO2014181581A1 true WO2014181581A1 (fr) 2014-11-13

Family

ID=51867067

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/056816 Ceased WO2014181581A1 (fr) 2013-05-07 2014-03-14 Dispositif d'étalonnage, système d'étalonnage, et dispositif d'imagerie

Country Status (2)

Country Link
JP (1) JP6186431B2 (fr)
WO (1) WO2014181581A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018179981A (ja) * 2017-04-18 2018-11-15 Panasonic IP Management Co., Ltd. Camera calibration method, camera calibration program, and camera calibration device
CN109945779A (zh) * 2017-12-21 2019-06-28 Hanchen Technology Co., Ltd. Calibration system having at least one camera and corresponding method
CN112655022A (zh) * 2018-08-27 2021-04-13 LG Innotek Co., Ltd. Image processing device and image processing method
US11050999B1 (en) * 2020-05-26 2021-06-29 Black Sesame International Holding Limited Dual camera calibration
US20210241541A1 (en) * 2020-02-05 2021-08-05 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
US11346663B2 (en) 2017-09-20 2022-05-31 Hitachi Astemo, Ltd. Stereo camera
RU2785952C1 (ru) * 2022-03-21 2022-12-15 Federal State Autonomous Educational Institution of Higher Education "Sevastopol State University" Method for external calibration of a binocular technical vision system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0727514A (ja) * 1993-07-12 1995-01-27 Sumitomo Electric Ind Ltd Calibration method for image measuring device
JPH10115506A (ja) * 1996-10-11 1998-05-06 Fuji Heavy Ind Ltd Stereo camera adjustment device
JP2007263669A (ja) * 2006-03-28 2007-10-11 Denso It Laboratory Inc Three-dimensional coordinate acquisition device
JP2012202694A (ja) * 2011-03-23 2012-10-22 Canon Inc Camera calibration method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018179981A (ja) * 2017-04-18 2018-11-15 Panasonic IP Management Co., Ltd. Camera calibration method, camera calibration program, and camera calibration device
US11346663B2 (en) 2017-09-20 2022-05-31 Hitachi Astemo, Ltd. Stereo camera
CN109945779A (zh) * 2017-12-21 2019-06-28 Hanchen Technology Co., Ltd. Calibration system having at least one camera and corresponding method
CN109945779B (zh) * 2017-12-21 2023-09-05 Hanchen Technology Co., Ltd. Calibration system having at least one camera and corresponding method
CN112655022A (zh) * 2018-08-27 2021-04-13 LG Innotek Co., Ltd. Image processing device and image processing method
CN112655022B (zh) * 2018-08-27 2023-11-03 LG Innotek Co., Ltd. Image processing device and image processing method
US20210241541A1 (en) * 2020-02-05 2021-08-05 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium
US11836879B2 (en) * 2020-02-05 2023-12-05 Canon Kabushiki Kaisha Information processing apparatus, information processing method, and storage medium for correcting a shift between three-dimensional positions
US11050999B1 (en) * 2020-05-26 2021-06-29 Black Sesame International Holding Limited Dual camera calibration
RU2785952C1 (ru) * 2022-03-21 2022-12-15 Federal State Autonomous Educational Institution of Higher Education "Sevastopol State University" Method for external calibration of a binocular technical vision system
RU2785952C9 (ру) * 2022-03-21 2023-02-10 Federal State Autonomous Educational Institution of Higher Education "Sevastopol State University" Method for external calibration of a binocular technical vision system

Also Published As

Publication number Publication date
JPWO2014181581A1 (ja) 2017-02-23
JP6186431B2 (ja) 2017-08-23

Similar Documents

Publication Publication Date Title
JP3983573B2 (ja) Stereo image characteristic inspection system
US9965870B2 (en) Camera calibration method using a calibration target
US9197866B2 (en) Method for monitoring a traffic stream and a traffic monitoring device
JP6186431B2 (ja) Calibration device, calibration system, and imaging device
JP5898475B2 (ja) In-vehicle camera system, calibration method therefor, and calibration program therefor
US7342669B2 (en) Three-dimensional shape measuring method and its device
JP5811327B2 (ja) Camera calibration device
US8836766B1 (en) Method and system for alignment of a pattern on a spatial coded slide image
WO2021259151A1 (fr) Calibration method and apparatus for a laser calibration system, and laser calibration system
CN111263142B (zh) Test method, apparatus, device, and medium for optical image stabilization of a camera module
JP6209833B2 (ja) Inspection tool, inspection method, stereo camera production method and system
EP3332387B1 (fr) Method for calibrating a stereo camera
KR101597163B1 (ko) Stereo camera calibration method and apparatus
Xu et al. An omnidirectional 3D sensor with line laser scanning
JP5228614B2 (ja) Parameter calculation device, parameter calculation system, and program
JP7405710B2 (ja) Processing device and in-vehicle camera device
KR20130075712A (ко) Laser vision sensor and calibration method therefor
JP2016200557A (ja) Calibration device, distance measurement device, and calibration method
US20210256729A1 (en) Methods and systems for determining calibration quality metrics for a multicamera imaging system
Xu et al. 3D multi-directional sensor with pyramid mirror and structured light
KR102498028B1 (ko) Surveillance camera system and method of using the same
WO2019058729A1 (fr) Stereoscopic imaging apparatus
WO2019087253A1 (fr) Stereo camera calibration method
KR102065337B1 (ко) Apparatus and method for measuring movement information of an object using an anharmonic (cross) ratio
Fasogbon et al. Calibration of fisheye camera using entrance pupil

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14794862

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015515807

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14794862

Country of ref document: EP

Kind code of ref document: A1