
WO2019021591A1 - Image processing device, image processing method, program, and image processing system


Info

Publication number: WO2019021591A1
Authority: WIPO (PCT)
Prior art keywords: gradient, polarization, depth, unit, pixel
Legal status: Ceased
Application number: PCT/JP2018/019088
Other languages: French (fr), Japanese (ja)
Inventors: Ying Lu (穎 陸), Yasutaka Hirasawa (康孝 平澤), Yuhi Kondo (雄飛 近藤)
Current Assignee: Sony Corp
Original Assignee: Sony Corp
Application filed by: Sony Corp

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00: Measuring distances in line of sight; Optical rangefinders
    • G01C 3/02: Details
    • G01C 3/06: Use of electric means to obtain final indication
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 7/593: Depth or shape recovery from multiple images from stereo images

Definitions

  • This technology relates to an image processing apparatus, an image processing method, a program, and an image processing system that enable depth calculation for an area without texture.
  • Conventionally, an epipolar plane image is generated from images obtained by a plurality of imaging devices in order to calculate the depth, which is the distance information to a subject.
  • The depth is calculated based on a gradient detected from the epipolar plane image using texture information in the images obtained by the plurality of imaging devices.
  • However, a method that relies on texture information cannot detect the gradient from the epipolar plane image when there is no texture, so the depth cannot be calculated for an area without texture.
  • It is therefore an object of the present technology to provide an image processing apparatus, an image processing method, a program, and an image processing system capable of calculating the depth of an area without texture.
  • The first aspect of this technology is an image processing apparatus including a depth calculation unit that performs depth calculation of a depth calculation target pixel based on polarization epipolar plane images generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions.
  • In this image processing apparatus, the depth calculation target pixel is classified as either a pixel having texture information or a pixel having no texture information, on the basis of an epipolar plane image generated from the plurality of polarization imaging images.
  • For pixels having texture information, the gradient is calculated based on the epipolar plane image; for pixels having no texture information, the gradient is calculated based on the polarization epipolar plane image. The calculated gradient is converted into depth, whereby the depth of the depth calculation target pixel is obtained.
  • Only a gradient whose calculated reliability is equal to or higher than a predetermined reliability threshold may be converted into depth.
  • When the gradient is calculated based on polarization information, a trigonometric function is fitted using pixels on a straight line passing through the position of the depth calculation target pixel in the polarization epipolar plane image, and the slope of the straight line that minimizes the difference between the trigonometric-function waveforms produced by different subsets of fitting pixels is taken as the gradient of the depth calculation target pixel.
  • The polarization directions of the plurality of polarization imaging images are set to four or more directions within a range of angle differences of less than 180 degrees.
  • In this image processing apparatus, the normal of the depth calculation target pixel is calculated using the pixels located in the direction of the gradient calculated using the polarization epipolar plane image, and interpolation processing using the depth and the normal of the depth calculation target pixel yields depth at a resolution higher than the pixel unit.
  • Further, for a depth calculation target pixel for which the gradient cannot be calculated, the depth is calculated by interpolation processing using the depths and normals of pixels close to the depth calculation target pixel, and the normal of such a pixel is calculated using pixels located in the direction of an arbitrary gradient from that pixel.
  • In this image processing apparatus, either the gradient of the depth calculation target pixel calculated based on a first polarization epipolar plane image or a first epipolar plane image, generated from a plurality of polarization imaging images with different viewpoint positions in a first direction, or the gradient of the depth calculation target pixel calculated based on a second polarization epipolar plane image or a second epipolar plane image, generated from a plurality of polarization imaging images with different viewpoint positions in a second direction different from the first direction, may be selected, and the selected gradient converted into depth.
  • When one of the gradients is calculated based on the texture information and the other is calculated based on the polarization information, the gradient calculated based on the texture information is selected.
  • When both gradients are calculated based on the texture information, or both are calculated based on the polarization information, the gradient with the higher reliability is selected.
  • The second aspect of this technology is an image processing method including performing depth calculation of a depth calculation target pixel based on polarization epipolar plane images generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions.
  • The third aspect of this technology is a program that causes a computer to execute image processing using polarization images, the program causing the computer to execute a step of generating a polarization epipolar plane image from a plurality of polarization imaging images differing in polarization direction and viewpoint position, and a step of calculating the depth of a depth calculation target pixel based on the generated polarization epipolar plane image.
  • The program of the present technology can be provided, for example, in a computer-readable format to a general-purpose computer capable of executing various program codes, via a storage medium such as an optical disc, a magnetic disc, or a semiconductor memory, or via a communication medium such as a network. By providing the program in a computer-readable form, processing according to the program is realized on the computer.
  • The fourth aspect of this technology is an image processing system having an imaging device for acquiring a plurality of polarization imaging images with different polarization directions and viewpoint positions, and an image processing device that performs depth calculation of a depth calculation target pixel from the plurality of polarization imaging images acquired by the imaging device.
  • The image processing device includes a polarization epipolar plane image generation unit that generates a polarization epipolar plane image including the depth calculation target pixel from the plurality of polarization imaging images, and a depth calculation unit that calculates the depth of the depth calculation target pixel based on the polarization epipolar plane image generated by the polarization epipolar plane image generation unit.
  • According to this technology, the depth of the depth calculation target pixel is calculated based on polarization epipolar plane images generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions. This makes it possible to calculate depth based on polarization information even for areas without texture.
  • The effects described in the present specification are merely examples and are not limiting; additional effects may also be present.
  • FIG. 1 illustrates the configuration of an imaging system using the image processing device of the present technology.
  • The imaging system 10 includes an imaging device that acquires a plurality of polarization imaging images differing in polarization direction and viewpoint position, and an image processing device that performs image processing using the acquired polarization imaging images.
  • In the imaging device, polarizing elements such as polarizing plates or polarizing filters are provided in front of the imaging lenses or imaging elements to obtain the plurality of polarization images.
  • a plurality of imaging units are arranged linearly in parallel, and the polarization directions of the polarization elements are set in different directions in each imaging unit.
  • the polarization direction of the polarizing element is set to an angle of 0 degrees or more and less than 180 degrees.
  • six imaging units 20-1 to 20-6 are linearly arranged in parallel.
  • The polarization elements 21-1 to 21-6 of the imaging units 20-1 to 20-6 are set so that the angle difference in polarization direction between adjacent imaging units is, for example, 30 degrees, with the polarization-direction angle increasing from one unit to the next.
  • The six imaging units 20-1 to 20-6 capture, for example, a prismatic object OBa and a cylindrical object OBb located farther away than the object OBa within the imaging range.
  • Polarization imaging images differing from one another in viewpoint position and polarization direction are thereby acquired and output to the image processing device 30.
  • the imaging apparatus acquires four or more polarization imaging images different in viewpoint position and polarization direction as described later.
  • The image processing apparatus 30 generates polarization epipolar plane images and epipolar plane images from the plurality of polarization imaging images having different polarization directions and viewpoint positions, and calculates the depth, which is the distance information to the subject, using the generated polarization epipolar plane images.
  • FIG. 2 illustrates the configuration of the image processing apparatus.
  • the image processing apparatus 30 includes a preprocessing unit 31, a parameter holding unit 32, a polarization epipolar plane image generation unit 33, and a depth calculation unit 34.
  • The preprocessing unit 31 uses the parameters stored in the parameter holding unit 32 to perform preprocessing on the polarization imaging images acquired by the imaging device.
  • The parameter holding unit 32 stores in advance internal parameters and external parameters obtained by performing calibration using a predetermined subject such as a checkerboard.
  • The internal parameters are parameters unique to each imaging unit, such as the focal length of the imaging unit and its lens distortion coefficients.
  • The external parameters specify the arrangement of the imaging units and indicate their translation and rotation.
  • the preprocessing unit 31 performs distortion correction or the like of the polarization imaging image using the polarization imaging image and an internal parameter of the imaging unit that has acquired the polarization imaging image.
  • The preprocessing unit 31 also performs registration of the polarization imaging images using the images processed with the internal parameters and the external parameters. As a result, in the preprocessed polarization imaging images, the pixels indicating a desired position on the subject have no vertical displacement between images, and exhibit only a lateral displacement corresponding to the depth, which is the distance information to the subject.
  • the preprocessing unit 31 outputs the polarization imaging image after the preprocessing to the polarization epipolar plane image generating unit 33.
  • The polarization epipolar plane image generation unit 33 extracts, from the plurality of polarization imaging images having different polarization directions and viewpoint positions, an image line in the arrangement direction of the viewpoint positions (the horizontal direction in the configuration shown in FIG. 1) so as to include the image indicating a desired position on the subject.
  • An epipolar plane image is generated by stacking the extracted lines in viewpoint-position order in the vertical direction, orthogonal to the arrangement direction, at intervals corresponding to the intervals of the viewpoint positions (the interval between the imaging units, i.e., the baseline).
  • The polarization epipolar plane image generation unit 33 generates the polarization epipolar plane image by stacking the extracted lines in the same way.
  • Each pixel of the polarization epipolar plane image carries polarization-direction information indicating the polarization direction of the polarization imaging image from which it was extracted.
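  • As an illustration of this construction, the following sketch (names and array conventions are assumptions, not from the patent) stacks one scanline per viewpoint into an epipolar plane image and attaches the per-row polarization directions for the polarization epipolar plane image:

```python
import numpy as np

def build_epi(images, row):
    """Stack the scanline `row` from each viewpoint image (ordered by viewpoint
    position, equally spaced) into an EPI of shape (num_viewpoints, width)."""
    return np.stack([img[row, :] for img in images], axis=0)

def build_polarization_epi(images, row, polarization_angles_deg):
    """Same stacking; each EPI row additionally carries the polarization
    direction of the polarization imaging image it was extracted from."""
    epi = build_epi(images, row)
    angles = np.asarray(polarization_angles_deg, dtype=float)  # one angle per row
    return epi, angles
```

With the six imaging units of FIG. 1, `polarization_angles_deg` would be [0, 30, 60, 90, 120, 150].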
  • FIG. 3 illustrates a plurality of polarization imaging images and an epipolar plane image and a polarization epipolar plane image.
  • FIG. 3A illustrates the polarization directions of the imaging units 20-1 to 20-6.
  • The polarization element 21-1 provided in the imaging unit 20-1 has a polarization-direction angle of “0°”, and the polarization element 21-2 provided in the imaging unit 20-2 has a polarization-direction angle of “30°”.
  • The polarization elements provided in the other imaging units are likewise set so that the angle difference in polarization direction between adjacent imaging units is 30 degrees, with the polarization-direction angle increasing.
  • A distance ds is set between adjacent imaging units; with the imaging unit 20-1 as a reference, for example, the imaging unit 20-6 is positioned at a distance of 5ds.
  • (B) of FIG. 3 shows the polarization imaging images acquired by the imaging units 20-1 to 20-6, with the positions of the images indicating the desired positions on the subject marked by broken lines.
  • The broken lines indicate the position PS1 on the side surface of the prismatic object OBa, the position PS2 on the side surface of the cylindrical object OBb located farther away than the object OBa, and the position PS3 on the side surface of the object OBb.
  • The left end of the image at the positions indicated by the broken lines is taken as the reference pixel position in the lateral direction of the epipolar plane image and the polarization epipolar plane image.
  • (C) of FIG. 3 shows an epipolar plane image, and (d) of FIG. 3 shows a polarization epipolar plane image.
  • In both, the horizontal axis is the lateral position in the polarization imaging image, and the vertical axis is the physical distance of the viewpoint position (the physical distance between imaging units).
  • The line connecting the points (PS1-1 to PS1-6) corresponding to the position PS1 in the respective polarization imaging images is a straight line, since the viewpoint positions are equally spaced.
  • Likewise, the lines connecting the points for the positions PS2 and PS3 are straight lines.
  • In this way, the polarization epipolar plane image generation unit 33 generates an epipolar plane image and a polarization epipolar plane image using the plurality of preprocessed polarization imaging images, and outputs them to the depth calculation unit 34.
  • The depth calculation unit 34 calculates the depth using the epipolar plane image and the polarization epipolar plane image generated by the polarization epipolar plane image generation unit 33.
  • The depth calculation unit 34 calculates the depth at positions having texture based on the epipolar plane image, and calculates the depth at positions without texture based on the polarization epipolar plane image.
  • FIG. 4 illustrates the configuration of the depth calculation unit.
  • The depth calculation unit 34 includes a gradient calculation unit 341 and a depth conversion unit 343.
  • the gradient calculation unit 341 calculates the gradient of each pixel in the epipolar plane image generated by the polarization epipolar plane image generation unit 33 and the polarization epipolar plane image.
  • the gradient calculating unit 341 includes a pixel classification unit 3411, a texture gradient calculating unit 3412, and a polarization gradient calculating unit 3413.
  • The pixel classification unit 3411 classifies the pixels of the epipolar plane image into pixels having texture information and pixels having no texture information.
  • The pixel classification unit 3411 applies, to the determination target pixel, a first-derivative filter such as a Sobel filter or a Prewitt filter, or a second-derivative filter such as a Laplacian filter, to obtain the horizontal differential value Iu and the vertical differential value Is.
  • The pixel classification unit 3411 calculates the texture determination value G by performing the calculation of Expression (1) using the horizontal differential value Iu and the vertical differential value Is.
  • If the calculated texture determination value G is equal to or greater than a predetermined determination threshold, the pixel classification unit 3411 determines that the pixel has texture information; if the texture determination value G is smaller than the determination threshold, it determines that the pixel does not have texture information.
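  • A minimal sketch of this classification, assuming Expression (1) (not reproduced in this text) is the gradient magnitude G = sqrt(Iu² + Is²):

```python
import numpy as np
from scipy.ndimage import sobel

def classify_texture(epi, threshold):
    """Classify each EPI pixel as having / not having texture information.
    Assumes the texture determination value of Expression (1) is the gradient
    magnitude sqrt(Iu^2 + Is^2); the patent's own expression is not shown here."""
    Iu = sobel(epi, axis=1)   # horizontal differential value Iu
    Is = sobel(epi, axis=0)   # vertical differential value Is
    G = np.hypot(Iu, Is)      # texture determination value G
    return G >= threshold, Iu, Is
```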
  • The texture gradient calculation unit 3412 calculates, using the texture information (the color and luminance-pattern information of the texture), the gradient of the pixels classified by the pixel classification unit 3411 as having texture information.
  • The texture gradient calculation unit 3412 calculates the gradient θt by performing the calculation of Expression (2) using, for example, the horizontal differential value Iu and the vertical differential value Is calculated by the pixel classification unit 3411.
  • The texture gradient calculation unit 3412 may also calculate the gradient θt with higher accuracy by using pixels in the neighborhood region of the gradient calculation target pixel.
  • FIG. 5 is a diagram for explaining the case where the gradient of a pixel having texture information is calculated with high accuracy.
  • The optimal gradient is the direction that minimizes, over the pixels q lying along that direction in the neighborhood region of the calculation target pixel O, the differences in pixel value and differential value between the calculation target pixel O and the pixels q.
  • the direction satisfies the equation (3).
  • The evaluation value Cθ(O) is calculated by Expression (4); the larger the evaluation value Cθ(O), the lower the reliability.
  • “Θ” is the set (0, 180) of gradient directions.
  • “S(θ)” is the set of pixels in the direction along the gradient θ within the neighborhood region of the calculation target pixel O.
  • “I(O)” is the pixel value of the calculation target pixel.
  • “Iu(O), Is(O)” are the differential values of the calculation target pixel.
  • “I(q)” is the pixel value of the pixel q in the neighborhood region.
  • “Iu(q), Is(q)” are the differential values of the pixel q in the neighborhood region.
  • “λ” is a parameter that adjusts the relative weight of the pixel-value difference and the differential-value difference.
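  • The following sketch combines both steps under stated assumptions: Expression (2) is taken to be θt = atan2(Is, Iu) mapped to [0°, 180°), and the evaluation value Cθ(O) of Expression (4) is taken as a weighted sum of pixel-value and differential differences along each candidate direction; neither expression is reproduced in this text.

```python
import numpy as np

def texture_gradient(Iu, Is, v, u):
    """Gradient from the differentials at pixel (v, u); a guessed form of
    Expression (2)."""
    return np.degrees(np.arctan2(Is[v, u], Iu[v, u])) % 180.0

def refined_gradient(epi, Iu, Is, v, u, half=3, lam=0.5):
    """Neighborhood refinement: keep the direction theta minimizing an
    evaluation value that sums pixel-value and differential differences
    between O = (v, u) and the pixels q in S(theta); a larger minimum cost
    means lower reliability."""
    H, W = epi.shape
    best_theta, best_cost = texture_gradient(Iu, Is, v, u), np.inf
    for theta in np.arange(0.0, 180.0, 1.0):            # candidate set Theta
        du, dv = np.cos(np.radians(theta)), np.sin(np.radians(theta))
        cost, n = 0.0, 0
        for t in range(-half, half + 1):
            if t == 0:
                continue
            qv, qu = int(round(v + t * dv)), int(round(u + t * du))
            if 0 <= qv < H and 0 <= qu < W:             # pixels q in S(theta)
                cost += abs(epi[v, u] - epi[qv, qu]) + lam * (
                    abs(Iu[v, u] - Iu[qv, qu]) + abs(Is[v, u] - Is[qv, qu]))
                n += 1
        if n and cost / n < best_cost:
            best_cost, best_theta = cost / n, theta
    return best_theta, best_cost
```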
  • The polarization gradient calculation unit 3413 calculates the gradient using polarization information for the pixels classified by the pixel classification unit 3411 as having no texture information.
  • FIG. 6 is a diagram for explaining the relationship between the polarization direction and the pixel value (brightness).
  • the light source LT is used to illuminate the subject OB, and the subject OB is imaged by the imaging device CM via the polarization element PL. In this case, it is known that, in the polarization imaging image generated by the imaging device CM, the luminance of the subject OB changes in accordance with the rotation of the polarization element PL.
  • the highest luminance when the polarizing element PL is rotated is Imax, and the lowest luminance is Imin.
  • The x-axis and y-axis of the two-dimensional coordinates are taken in the plane of the polarizing element PL, and the angle on the xy plane with respect to the x-axis when the polarizing element PL is rotated is taken as the polarization angle υ.
  • The polarizing element PL returns to its original polarization state when rotated by 180 degrees; that is, it has a period of 180 degrees.
  • The polarization angle υ at which the maximum luminance Imax is observed is taken as the azimuth angle φ.
  • The luminance I observed when the polarizing element PL is rotated can be expressed as Expression (5). That is, the luminance change caused by varying the polarization direction of the polarizing element follows a trigonometric-function waveform that peaks where the polarization angle equals the azimuth angle, and the phase of the waveform depends on the azimuth angle.
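  • Expression (5) itself is not reproduced in this text; the form commonly used for this luminance model is I(υ) = (Imax + Imin)/2 + (Imax - Imin)/2 · cos(2(υ - φ)), sketched below:

```python
import numpy as np

def polarization_luminance(upsilon_deg, i_max, i_min, phi_deg):
    """Luminance observed through a polarizer at angle upsilon: a 180-degree
    periodic cosine with mean (Imax+Imin)/2, amplitude (Imax-Imin)/2, and
    phase given by the azimuth angle phi (assumed form of Expression (5))."""
    d = np.radians(upsilon_deg) - np.radians(phi_deg)
    return 0.5 * (i_max + i_min) + 0.5 * (i_max - i_min) * np.cos(2.0 * d)
```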
  • FIG. 7 illustrates the relationship between the luminance and the polarization angle.
  • the pixels located in the gradient direction when the gradient is a true value are physically the same point viewed from different viewpoint positions.
  • When the gradient of the depth calculation target pixel is the true value, the pixel values of the pixels along that gradient, plotted against the polarization direction of the polarizing element, form a trigonometric-function waveform (for example, a sine wave), and the phase and amplitude of that waveform are equal no matter which of those pixels are used.
  • Accordingly, the polarization gradient calculating unit 3413 takes, as the gradient of the depth calculation target pixel, the gradient that minimizes the difference between the trigonometric-function waveforms obtained from combinations with different polarization directions.
  • For this purpose, polarization imaging images with four or more polarization directions within a range of angle differences of less than 180 degrees are used as the plurality of polarization imaging images.
  • FIG. 8 is a diagram for explaining the case where the gradient can be calculated using polarization information.
  • (A) of FIG. 8 shows the gradient of the true value by a solid line and an incorrect gradient by a broken line.
  • In the polarization epipolar plane image, the polarization directions in the vertical direction are “0°, 30°, 60°, 90°, 120°, 150°”, so the pixel values of six pixels are obtained along a candidate gradient.
  • When the gradient is incorrect, the pixels of the respective polarization directions indicate different positions on the subject, as shown in (b) of FIG. 8. Therefore, if the combination of polarization directions used to obtain the trigonometric-function waveform from the pixel values differs, the phase and amplitude of the fitted functions differ, as shown in (c) of FIG. 8. For example, the waveform obtained from the pixel values of the three pixels whose polarization directions are “0°, 60°, 120°” and the waveform obtained from the pixel values of the three pixels whose polarization directions are “30°, 90°, 150°” differ in phase and amplitude.
  • When the gradient is the true value, however, even if the combination of pixel values used to obtain the trigonometric-function waveform differs, the phase and amplitude of the fitted functions match, as shown in (e) of FIG. 8: the waveforms obtained from the two sets of pixel values have the same phase and amplitude.
  • The polarization gradient calculating unit 3413 therefore calculates the gradient that minimizes the difference between the trigonometric-function waveforms obtained from the pixel values with different combinations of polarization directions. For example, the polarization gradient calculating unit 3413 generates a histogram whose horizontal axis is the gradient angle and whose vertical axis is the waveform difference, and takes the angle at which the difference is minimum as the gradient of the true value. In addition, the polarization gradient calculating unit 3413 may determine that the gradient cannot be calculated when the difference between the maximum and minimum of the waveform differences is smaller than a preset threshold.
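  • A sketch of this search, under assumptions: waveforms are fitted by linear least squares to I = a0 + a1·cos 2υ + a2·sin 2υ, the candidate line is parameterized by its horizontal shift per EPI row, and the two fitting subsets are the even and odd rows; the weighting of phase versus amplitude differences is illustrative.

```python
import numpy as np

def fit_sinusoid(angles_deg, values):
    """Least-squares fit of I = a0 + a1*cos(2v) + a2*sin(2v); returns the
    amplitude and phase (degrees, in [0, 180)) of the fitted waveform."""
    v = np.radians(np.asarray(angles_deg, dtype=float))
    A = np.column_stack([np.ones_like(v), np.cos(2 * v), np.sin(2 * v)])
    a0, a1, a2 = np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)[0]
    return np.hypot(a1, a2), np.degrees(0.5 * np.arctan2(a2, a1)) % 180.0

def polarization_gradient(pol_epi, angles_deg, u0):
    """For each candidate slope through (row 0, u0), sample the pixel values
    along the line, fit waveforms to two subsets (even vs. odd rows), and keep
    the slope minimizing the phase/amplitude disagreement."""
    H, W = pol_epi.shape
    rows = np.arange(H)
    angles = np.asarray(angles_deg, dtype=float)
    best_slope, best_diff = None, np.inf
    for slope in np.linspace(-2.0, 2.0, 181):       # pixels of shift per row
        us = np.round(u0 + slope * rows).astype(int)
        if us.min() < 0 or us.max() >= W:
            continue
        vals = pol_epi[rows, us]
        amp_a, ph_a = fit_sinusoid(angles[0::2], vals[0::2])
        amp_b, ph_b = fit_sinusoid(angles[1::2], vals[1::2])
        d_ph = min(abs(ph_a - ph_b), 180.0 - abs(ph_a - ph_b))
        diff = d_ph + abs(amp_a - amp_b)            # illustrative weighting
        if diff < best_diff:
            best_slope, best_diff = slope, diff
    return best_slope, best_diff
```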
  • The combination of pixels used to detect the amplitude and phase of the trigonometric-function waveform is not limited to selecting every other pixel.
  • For example, the polarization gradient calculation unit 3413 may group pixel values at successive positions into sets and detect the amplitude and phase of the waveform for each set.
  • Alternatively, the polarization gradient calculation unit 3413 may select pixels at random to form sets and detect the amplitude and phase of the waveform for each set.
  • Randomly selected combinations can be less susceptible to noise than pixels at successive positions, because the angle differences in polarization direction within a set become larger, which reduces the influence of noise and the like.
  • To reduce the influence of noise, the polarization gradient calculation unit 3413, for example, repeatedly detects the phase and amplitude of the trigonometric-function waveform from the pixel values of pixels randomly selected from those located in the direction of a candidate gradient, and calculates the dispersion (for example, the standard deviation) of the detected phases and amplitudes. The polarization gradient calculating unit 3413 repeats this dispersion calculation for gradients over the range of 0° to 180°.
  • The polarization gradient calculating unit 3413 then generates a histogram whose horizontal axis is the gradient angle and whose vertical axis is the dispersion, and determines the gradient with the smallest dispersion as the gradient of the true value.
  • When the difference between the angle at which the phase difference (or phase dispersion) is minimum and the angle at which the amplitude difference (or amplitude dispersion) is minimum is larger than a preset threshold, the polarization gradient calculating unit 3413 may determine that the gradient cannot be calculated for that pixel.
  • Otherwise, the polarization gradient calculating unit 3413 may take either of the two angles as the true value of the gradient, or may use a weighted average of the angle minimizing the phase difference (dispersion) and the angle minimizing the amplitude difference (dispersion) as the true value of the gradient.
  • The polarization gradient calculating unit 3413 may also use more than three pixel values per set. Furthermore, the difference or dispersion of the phase and amplitude of the trigonometric-function waveform at the true-value gradient may be used as the evaluation value of that gradient; the larger the difference or dispersion at the true-value gradient, the lower the reliability.
  • For a point on a flat surface, the amplitude and phase of the trigonometric-function waveform are constant regardless of the gradient; compared with a point on a surface whose orientation changes, the dispersion of the phase and amplitude is too small to determine the gradient of the true value.
  • However, the normal direction can be calculated from the pixel values of polarization images with three or more polarization directions. Therefore, in the present technology, among the pixels without texture information for which the gradient of the true value cannot be calculated, a pixel whose trigonometric-function waveform has constant amplitude and phase regardless of the gradient, that is, a pixel with small dispersion, is treated as a gradient-uncalculated pixel with a normal.
  • Pixels without texture information that differ from such pixels, and pixels whose gradient reliability is lower than a predetermined reliability threshold, are treated as gradient-uncalculated pixels without a normal.
  • the depth conversion unit 343 converts the gradients calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 of the gradient calculation unit 341 into depths.
  • For example, the depth conversion unit 343 performs the calculation of Expression (6) using the gradient θx calculated by the texture gradient calculation unit 3412 or the polarization gradient calculation unit 3413, and obtains the depth D to the subject position corresponding to the pixel for which the gradient θx was calculated.
  • “f” is the focal length of the imaging unit that captured the pixel for which the gradient was calculated.
  • The gradient θx is the gradient θt or the gradient θp.
  • Note that the depth conversion unit 343 may convert into depth only those gradients whose reliability is equal to or higher than a predetermined reliability threshold, thereby preventing depths with low reliability from being calculated.
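  • Expression (6) is not reproduced in this text. For a rectified linear camera array, a point at depth D shifts by f·ds/D pixels per baseline step ds, so the EPI-line slope converts to depth as sketched below (an assumption consistent with, but not quoted from, the patent):

```python
def gradient_to_depth(shift_px_per_row, baseline_m, focal_px):
    """Depth from the EPI-line slope expressed as pixels of lateral shift per
    viewpoint step: D = f * ds / shift (standard light-field relation, used
    here as a stand-in for Expression (6))."""
    if shift_px_per_row == 0:
        return float("inf")   # a vertical EPI line means a point at infinity
    return focal_px * baseline_m / abs(shift_px_per_row)
```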
  • FIG. 9 is a flowchart showing the operation of the first embodiment.
  • In step ST1, the image processing apparatus acquires a plurality of polarization imaging images.
  • The image processing apparatus 30 acquires polarization imaging images from the plurality of imaging units, and proceeds to step ST2.
  • In step ST2, the image processing apparatus performs preprocessing of the polarization images.
  • the image processing apparatus 30 performs distortion correction and the like on each of the plurality of polarization imaging images acquired from the plurality of imaging units using an internal parameter corresponding to the imaging unit. Furthermore, the image processing apparatus 30 performs registration using the external parameter on each polarization imaging image processed using the internal parameter, and proceeds to step ST3.
  • In step ST3, the image processing apparatus generates an epipolar plane image and a polarization epipolar plane image.
  • The image processing apparatus 30 generates the epipolar plane image and the polarization epipolar plane image by, for example, extracting the image line of the depth calculation target from each preprocessed polarization imaging image and stacking the lines in order of viewpoint position. Each pixel in the polarization epipolar plane image carries information indicating its polarization direction.
  • the image processing device 30 generates an epipolar plane image and a polarization epipolar plane image, and proceeds to step ST4.
  • In step ST4, the image processing apparatus performs depth calculation.
  • The image processing device 30 calculates the depth to the subject position indicated by each depth calculation target pixel in the depth calculation target line, based on the epipolar plane image and the polarization epipolar plane image.
  • FIG. 10 is a flowchart illustrating the depth calculation operation in step ST4.
  • In step ST11, the image processing apparatus performs pixel classification.
  • the image processing device 30 calculates the texture determination value G using the pixel value. Furthermore, the image processing apparatus 30 determines whether the pixel has texture information or not based on the calculated texture determination value G, and proceeds to step ST12.
  • In step ST12, the image processing apparatus calculates the gradient using texture information.
  • The image processing apparatus 30 calculates the gradient θt using the texture information for the pixels determined to have texture information, and proceeds to step ST13.
  • In step ST13, the image processing apparatus calculates the gradient using polarization information.
  • The image processing apparatus 30 calculates the gradient θp using polarization information for the pixels determined to have no texture information, and proceeds to step ST14.
  • In step ST14, the image processing apparatus calculates the depth.
  • The image processing device 30 converts the gradient θx calculated in step ST12 or step ST13 into depth.
  • The gradient θx is the gradient θt or the gradient θp.
  • By performing the above-described processing, the image processing apparatus calculates the depth based on the gradient calculated using texture information, for example, for the position PS1 on the textured side surface of the object OBa, and calculates the depth based on the gradient calculated using polarization information, for example, for the position PS3 on the textureless side surface of the object OBb.
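  • Tying steps ST11 to ST14 together, a sketch for one depth calculation target line, reusing the hypothetical helpers above (the angle-to-slope conversion depends on the EPI axis convention and is illustrative):

```python
import numpy as np

def depth_for_line(images, row, angles_deg, tex_threshold, baseline_m, focal_px):
    epi, angles = build_polarization_epi(images, row, angles_deg)
    has_texture, Iu, Is = classify_texture(epi, tex_threshold)        # step ST11
    ref = 0                                   # reference viewpoint row of the EPI
    depths = []
    for u in range(epi.shape[1]):
        if has_texture[ref, u]:
            theta, _ = refined_gradient(epi, Iu, Is, ref, u)          # step ST12
            shift = np.tan(np.radians(theta))  # angle -> shift/row (convention-dependent)
        else:
            shift, _ = polarization_gradient(epi, angles, u)          # step ST13
        if shift is None:
            depths.append(None)                # gradient could not be calculated
        else:
            depths.append(gradient_to_depth(shift, baseline_m, focal_px))  # step ST14
    return depths
```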
  • <Second embodiment> Next, a second embodiment of the image processing apparatus will be described. In the second embodiment, depth calculation is performed at a higher resolution than in the first embodiment.
  • Like the first embodiment shown in FIG. 2, the image processing apparatus of the second embodiment has the preprocessing unit 31, the parameter holding unit 32, the polarization epipolar plane image generation unit 33, and the depth calculation unit 34.
  • The preprocessing unit 31 uses the parameters stored in the parameter holding unit 32 to perform preprocessing on the polarization imaging images acquired by the plurality of imaging units.
  • The parameter holding unit 32 stores in advance internal parameters and external parameters obtained by performing calibration using a predetermined subject such as a checkerboard.
  • the preprocessing unit 31 performs distortion correction or the like of the polarization imaging image using the polarization imaging image and an internal parameter of the imaging unit that has acquired the polarization imaging image. Further, the preprocessing unit 31 performs registration of the polarization imaging image using the polarization imaging image processed using the internal parameter and the external parameter.
  • the preprocessing unit 31 outputs the polarization imaging image after the preprocessing to the polarization epipolar plane image generating unit 33.
  • The polarization epipolar plane image generation unit 33 extracts, from the plurality of polarization imaging images, an image line in the arrangement direction of the plurality of imaging units, that is, the arrangement direction of the viewpoint positions, so as to include the image of a desired position on the subject. It then generates the epipolar plane image by stacking the extracted lines in imaging-unit order in the vertical direction, orthogonal to the arrangement direction, at intervals corresponding to the intervals of the imaging units (the intervals of the viewpoint positions). In addition, the polarization epipolar plane image generation unit 33 generates the polarization epipolar plane image by stacking the extracted lines in viewpoint-position order in the same way.
  • the polarization epipolar plane image generation unit 33 generates an epipolar plane image and a polarization epipolar plane image using the plurality of pre-processed polarization imaging images, and outputs the epipolar plane image and the polarization epipolar plane image to the depth calculation unit 34.
  • the configuration of the depth calculation unit is different from that of the first embodiment.
  • The depth calculation unit in the second embodiment calculates the normal from the pixel values of the pixels located in the direction of the true-value gradient in the polarization epipolar plane image, and uses the calculated normal to perform depth calculation at a higher resolution than in the first embodiment.
  • FIG. 11 illustrates the configuration of the depth calculation unit according to the second embodiment.
  • the depth calculation unit 34 includes a gradient calculation unit 341, a depth conversion unit 343, a normal calculation unit 344, and an integration processing unit 345.
  • the gradient calculation unit 341 calculates the gradient of each pixel in the epipolar plane image generated by the polarization epipolar plane image generation unit 33 and the polarization epipolar plane image.
  • the gradient calculating unit 341 includes a pixel classification unit 3411, a texture gradient calculating unit 3412, and a polarization gradient calculating unit 3413.
  • The pixel classification unit 3411 classifies the pixels of the epipolar plane image into pixels having texture information and pixels having no texture information.
  • The pixel classification unit 3411 calculates the texture determination value G for the determination target pixel as described above. If the calculated texture determination value G is equal to or greater than the predetermined determination threshold, the pixel classification unit 3411 determines that the pixel has texture information; if the texture determination value G is smaller than the determination threshold, it determines that the pixel does not have texture information.
  • The texture gradient calculation unit 3412 calculates the gradient θt as described above, using the texture information, for the pixels classified by the pixel classification unit 3411 as having texture information.
  • the texture gradient calculation unit 3412 outputs the calculated gradient to the depth conversion unit 343.
  • The polarization gradient calculation unit 3413 calculates the gradient using polarization information for the pixels classified by the pixel classification unit 3411 as having no texture information. As described above, the polarization gradient calculation unit 3413 obtains the trigonometric-function waveform for each different combination of pixels located in the direction of a candidate gradient in the polarization epipolar plane image, and takes the gradient with the smallest waveform difference as the gradient of the true value. The polarization gradient calculation unit 3413 outputs the calculated gradient of the true value to the depth conversion unit 343.
  • The polarization gradient calculating unit 3413 also outputs to the normal calculation unit 344 the pixel values of the pixels for which the differences in amplitude and phase are minimum, that is, the pixels located in the direction of the gradient of the true value. Furthermore, the polarization gradient calculating unit 3413 outputs to the normal calculation unit 344 not only the pixel values of the pixels located in the direction of the gradient of the true value, but also the pixel values of pixels located in the directions of arbitrary gradients.
  • the depth conversion unit 343 converts the gradients calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 of the gradient calculation unit 341 into depths.
  • The depth conversion unit 343 converts the gradients θt and θp calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 into depth as described above.
  • the depth conversion unit 343 outputs the converted depth to the normal calculation unit 344 and the integration processing unit 345.
  • The normal calculation unit 344 calculates the normal using the pixel values (luminance) supplied from the polarization gradient calculation unit 3413. As described above, the luminance observed when the polarization direction is rotated can be expressed as Expression (5).
  • The normal calculation unit 344 fits the function of Expression (5) using the luminances of polarization images with three or more polarization directions, and determines the azimuth angle φ at which the maximum luminance is obtained based on the fitted function indicating the relationship between luminance and polarization angle.
  • The object-surface normal is expressed in a polar coordinate system; the normal information consists of the azimuth angle φ and the zenith angle θz.
  • The zenith angle θz is the angle from the z-axis toward the normal, and the azimuth angle φ is the angle with respect to the x-axis in the y-axis direction, as described above.
  • The degree of polarization ρ can be calculated by the operation of Expression (7).
  • The relationship between the degree of polarization and the zenith angle is known, from the Fresnel equations, to have the characteristics shown in FIG. 12, for example, and the zenith angle θz can be determined from the degree of polarization ρ based on these characteristics.
  • The characteristics shown in FIG. 12 are an example; they change depending on the refractive index of the subject.
  • In this way, the normal calculation unit 344 obtains the relationship between luminance and polarization angle from the polarization directions and pixel values of the polarization images with three or more polarization directions, and determines the azimuth angle φ. The normal calculation unit 344 also calculates the degree of polarization ρ using the maximum luminance and minimum luminance obtained from that relationship, and determines the zenith angle θz corresponding to the degree of polarization ρ based on the characteristic curve relating the degree of polarization and the zenith angle.
  • The normal calculation unit 344 thus generates normal information indicating the normal direction of the subject (the azimuth angle φ and the zenith angle θz) based on the pixel values of polarization images with three or more polarization directions.
  • The normal calculation unit 344 outputs the normal information indicating the calculated normal direction to the integration processing unit 345.
  • Although the azimuth of the normal has an indeterminacy of 180°, this indeterminacy can be eliminated by estimating the inclination of the object surface based on the depth obtained by the depth conversion unit 343.
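  • A sketch of the normal computation under stated assumptions: Expression (7) is taken as ρ = (Imax - Imin)/(Imax + Imin), and the FIG. 12 curve is replaced by the standard diffuse-reflection Fresnel model with an assumed refractive index; `fit_sinusoid` is the hypothetical helper sketched earlier.

```python
import numpy as np

def normal_from_polarization(angles_deg, values, refractive_index=1.5):
    """Azimuth phi from the phase of the fitted waveform; degree of
    polarization rho = (Imax - Imin) / (Imax + Imin); zenith theta_z by
    numerically inverting a diffuse-polarization curve (a stand-in for the
    FIG. 12 characteristic, which depends on the subject's refractive index).
    The returned azimuth retains the 180-degree indeterminacy noted above."""
    amp, phi_deg = fit_sinusoid(angles_deg, values)
    mean = float(np.mean(values))   # approximates (Imax + Imin) / 2 for spread angles
    rho = amp / mean                # = (Imax - Imin) / (Imax + Imin)

    # Tabulate rho(theta_z) for the diffuse Fresnel model, then invert by lookup.
    n = refractive_index
    tz = np.radians(np.linspace(0.0, 89.0, 900))
    s2 = np.sin(tz) ** 2
    rho_tab = ((n - 1.0 / n) ** 2 * s2) / (
        2.0 + 2.0 * n**2 - (n + 1.0 / n) ** 2 * s2
        + 4.0 * np.cos(tz) * np.sqrt(n**2 - s2))
    theta_z_deg = float(np.interp(rho, rho_tab, np.degrees(tz)))
    return phi_deg, theta_z_deg
```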
  • The integration processing unit 345 performs integration processing using the depth obtained by the depth conversion unit 343 and the normal calculated by the normal calculation unit 344 to obtain depth with high resolution and high accuracy.
  • Specifically, the integration processing unit 345 calculates depth that is denser and of higher resolution and accuracy than the pixel unit, based on the depth and normal of pixels for which the true-value gradient was calculated from the polarization information, or based on the normal of pixels for which the true-value gradient was not calculated.
  • FIG. 13 is a diagram for explaining depth interpolation processing based on the depth and the normal.
  • the depth D1 of the pixel position PX1 and the depth D2 of the pixel position PX2 are obtained by the depth conversion unit 343.
  • the normal line calculation unit 344 obtains the normal line F1 of the pixel position PX1 and the normal line F2 of the pixel position PX2.
  • the integration processing unit 345 performs interpolation processing using the depths D1 and D2 and the normals F1 and F2, and calculates, for example, the depth D12 of the object position corresponding to the boundary position PX12 between the pixel position PX1 and the pixel position PX2.
  • the depth D12 can be calculated based on the equation (13).
  • Let the focal length of the imaging unit be “f” and take the image coordinate system with its origin at the image center. The coordinates of the pixel position PX1 are (u1, v1) and the coordinates of the pixel position PX2 are (u2, v2); the normal vector of the normal F1 is (Nx1, Ny1, Nz1) and the normal vector of the normal F2 is (Nx2, Ny2, Nz2). Assuming the pixel positions PX1 and PX2 are adjacent in the u direction (rightward) of the image coordinate system, the relationships of Expressions (14) and (15) hold, and the depth D12 can therefore be calculated based on Expression (16).
  • Compared with the depth at the boundary position PX12 obtained by simple linear interpolation of the depth D1 and the depth D2 without using normal information, performing the depth interpolation processing based on the depths and the normals indicated by the normal information allows the depth D12 to be calculated with higher accuracy.
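  • Expressions (13) to (16) are not reproduced in this text. One reconstruction consistent with the description extends each pixel's tangent plane (from its depth and normal) to the ray through the boundary position and averages the two plane-predicted depths:

```python
import numpy as np

def interpolate_depth_with_normals(f, uv1, D1, n1, uv2, D2, n2, uv12):
    """Predict the depth at the boundary position uv12 from the tangent planes
    at pixel positions uv1 and uv2 (image coordinates, origin at the image
    center), then average the two predictions. A hedged reconstruction, not
    the patent's own Expressions (13)-(16)."""
    def plane_depth(uv, D, n, uv_t):
        p = D * np.array([uv[0] / f, uv[1] / f, 1.0])   # surface point on the plane
        r = np.array([uv_t[0] / f, uv_t[1] / f, 1.0])   # ray through the target
        return float(np.dot(n, p) / np.dot(n, r))       # plane / ray intersection
    return 0.5 * (plane_depth(uv1, D1, np.asarray(n1, float), uv12)
                  + plane_depth(uv2, D2, np.asarray(n2, float), uv12))
```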
  • By performing integration processing in the same manner as in, for example, JP-A-2015-114307, the depth of a gradient-uncalculated pixel with a normal, that is, a pixel for which normal information is generated but no depth is obtained, can also be calculated with high accuracy.
  • In this way, the integration processing unit 345 performs integration processing using the depth and the normal, or the normal alone, and calculates depth with higher resolution and accuracy than the pixel unit.
  • FIG. 14 is a flow chart showing the operation of the second embodiment.
  • In step ST21, the image processing device performs pixel classification.
  • The image processing device 30 calculates the texture determination value using the pixel values. Based on the calculated texture determination value, the image processing apparatus 30 determines whether or not each pixel has texture information, and proceeds to step ST22.
  • In step ST22, the image processing apparatus calculates the gradient using texture information.
  • the image processing apparatus 30 calculates the gradient using the texture information for the pixel determined to have the texture information, and proceeds to step ST23.
  • In step ST23, the image processing apparatus calculates the gradient using polarization information.
  • the image processing apparatus 30 calculates the gradient of the true value using the polarization information for the pixel determined to have no texture information, and proceeds to step ST24.
  • In step ST24, the image processing apparatus calculates the depth.
  • The image processing device 30 calculates the depth based on the gradients calculated in step ST22 and step ST23, and proceeds to step ST25.
  • In step ST25, the image processing apparatus generates normal information.
  • the image processing device 30 calculates the normal based on the polarization information when the gradient is calculated in step ST23, generates normal information indicating the calculated normal, and proceeds to step ST26.
  • In step ST26, the image processing apparatus performs integration processing.
  • the image processing device 30 performs integration processing using the depth obtained in the pixel unit and the normal indicated by the normal information, and calculates the depth with a higher resolution and higher accuracy than the pixel unit.
  • <Third embodiment> Next, a third embodiment of the image processing apparatus will be described. In the third embodiment, the depth can be calculated even for a depth calculation target pixel whose gradient cannot be calculated based on texture information or polarization information.
  • Like the first embodiment shown in FIG. 2, the image processing apparatus of the third embodiment has the preprocessing unit 31, the parameter holding unit 32, the polarization epipolar plane image generation unit 33, and the depth calculation unit 34.
  • The preprocessing unit 31 uses the parameters stored in the parameter holding unit 32 to perform preprocessing on the polarization imaging images acquired by the plurality of imaging units.
  • The parameter holding unit 32 stores in advance internal parameters and external parameters obtained by performing calibration using a predetermined subject such as a checkerboard.
  • the preprocessing unit 31 performs distortion correction or the like of the polarization imaging image using the polarization imaging image and an internal parameter of the imaging unit that has acquired the polarization imaging image. Further, the preprocessing unit 31 performs registration of the polarization imaging image using the polarization imaging image processed using the internal parameter and the external parameter.
  • the preprocessing unit 31 outputs the polarization imaging image after the preprocessing to the polarization epipolar plane image generating unit 33.
  • The polarization epipolar plane image generation unit 33 extracts, from the plurality of polarization imaging images, an image line in the arrangement direction of the plurality of imaging units, that is, the arrangement direction of the viewpoint positions, so as to include the image of a desired position on the subject. It then generates the epipolar plane image by stacking the extracted lines in imaging-unit order in the vertical direction, orthogonal to the arrangement direction, at intervals corresponding to the intervals of the imaging units (the intervals of the viewpoint positions). In addition, the polarization epipolar plane image generation unit 33 generates the polarization epipolar plane image by stacking the extracted lines in viewpoint-position order in the same way.
  • the polarization epipolar plane image generation unit 33 generates an epipolar plane image and a polarization epipolar plane image using the plurality of pre-processed polarization imaging images, and outputs the epipolar plane image and the polarization epipolar plane image to the depth calculation unit 34.
  • The third embodiment differs from the first and second embodiments in the configuration of the depth calculation unit.
  • The depth calculation unit in the third embodiment additionally performs processing to calculate the depth of depth calculation target pixels for which the gradient cannot be calculated from either the texture information or the polarization information.
  • FIG. 15 illustrates the configuration of the depth calculation unit in the third embodiment.
  • the depth calculation unit 34 includes a gradient calculation unit 341, a depth conversion unit 343, a normal calculation unit 344, an integration processing unit 345, and a depth interpolation unit 346.
  • the gradient calculation unit 341 calculates the gradient of each pixel in the epipolar plane image generated by the polarization epipolar plane image generation unit 33 and the polarization epipolar plane image.
  • the gradient calculating unit 341 includes a pixel classification unit 3411, a texture gradient calculating unit 3412, and a polarization gradient calculating unit 3413.
  • The pixel classification unit 3411 classifies the pixels of the epipolar plane image into pixels having texture information and pixels having no texture information.
  • The pixel classification unit 3411 calculates the texture determination value G for the determination target pixel as described above. If the calculated texture determination value G is equal to or greater than the predetermined determination threshold, the pixel classification unit 3411 determines that the pixel has texture information; if the texture determination value G is smaller than the determination threshold, it determines that the pixel does not have texture information.
  • The texture gradient calculation unit 3412 calculates the gradient θt as described above for the pixels classified by the pixel classification unit 3411 as having texture information, using, for example, the differential values calculated by the pixel classification unit 3411, and outputs the calculated gradient to the depth conversion unit 343.
  • The polarization gradient calculation unit 3413 calculates the gradient using polarization information for the pixels classified by the pixel classification unit 3411 as having no texture information. As described above, the polarization gradient calculation unit 3413 obtains the trigonometric-function waveform for each different combination of pixels located in the direction of a candidate gradient in the polarization epipolar plane image, and takes the gradient with the smallest waveform difference as the gradient of the true value. The polarization gradient calculation unit 3413 outputs the calculated gradient of the true value to the depth conversion unit 343.
  • The polarization gradient calculating unit 3413 outputs to the normal calculation unit 344 the pixel values of the pixels for which the differences in amplitude and phase are minimum, that is, the pixels located in the direction of the gradient of the true value, as well as the pixel values of pixels located in the directions of arbitrary gradients. Further, the polarization gradient calculating unit 3413 generates no-normal gradient-uncalculated pixel information indicating the gradient-uncalculated pixels without a normal, and outputs this information to the depth interpolation unit 346.
  • the depth conversion unit 343 converts the gradients calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 of the gradient calculation unit 341 into depths.
  • The depth conversion unit 343 converts the gradients θt and θp calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 into depth as described above.
  • the depth conversion unit 343 outputs the converted depth to the normal calculation unit 344 and the integration processing unit 345.
  • The normal calculation unit 344 determines the azimuth angle φ at which the luminance is maximum. Further, the normal calculation unit 344 determines the zenith angle θz corresponding to the degree of polarization ρ calculated using the maximum luminance and the minimum luminance obtained from the relationship between luminance and polarization angle. The normal calculation unit 344 generates normal information indicating the calculated azimuth angle φ and zenith angle θz, and outputs the generated normal information to the integration processing unit 345.
  • The integration processing unit 345 performs integration processing using the depth obtained by the depth conversion unit 343 and the normal calculated by the normal calculation unit 344, calculates a depth with high resolution and high accuracy, and outputs it to the depth interpolation unit 346.
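The integration algorithm itself is not spelled out in this excerpt. One plausible reading, sketched below purely under that assumption, is to treat each pixel as a small tangent plane (its depth plus its normal) and evaluate that plane at sub-pixel ray positions to obtain a denser depth map; a paraxial approximation (rays near the image centre) keeps the geometry simple:

```python
import numpy as np

def integrate_depth_normal(depth, normal, scale, focal_px):
    """Upsample a per-pixel depth map by 'scale' using per-pixel normals.
    depth: (H, W); normal: (H, W, 3) unit vectors; returns (H*scale, W*scale)."""
    h, w = depth.shape
    out = np.empty((h * scale, w * scale))
    for y in range(h):
        for x in range(w):
            nx, ny, nz = normal[y, x]
            z0 = depth[y, x]
            for dy in range(scale):
                for dx in range(scale):
                    # sub-pixel ray offset from the pixel centre, in pixels
                    u = (dx + 0.5) / scale - 0.5
                    v = (dy + 0.5) / scale - 0.5
                    # intersect the offset ray with the local tangent plane
                    denom = nz + (nx * u + ny * v) / focal_px
                    out[y * scale + dy, x * scale + dx] = z0 * nz / max(denom, 1e-6)
    return out
```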
  • The depth interpolation unit 346 calculates, by interpolation processing, the depths of the pixels indicated by the no-normal gradient-uncalculated pixel information supplied from the polarization gradient calculation unit 3413.
  • A no-normal gradient-uncalculated pixel is a pixel for which the gradient could not be calculated, or for which the reliability of the gradient or the normal is low, that is, a pixel strongly affected by noise. The depth interpolation unit 346 therefore performs interpolation processing using the depths of pixels that are located around the no-normal gradient-uncalculated pixel and whose depths have been calculated by the integration processing unit 345, and thereby calculates the depth of the no-normal gradient-uncalculated pixel.
  • FIG. 16 is a diagram for explaining the depth interpolation processing of the depth interpolation unit 346.
  • The pixel P1 is the closest pixel, in the left direction from the no-normal gradient-uncalculated pixel Pt, for which a depth has been calculated.
  • The pixel P2 is the closest pixel, in the right direction from the no-normal gradient-uncalculated pixel Pt, for which a depth has been calculated.
  • Let the distance (for example, the number of pixels) from the no-normal gradient-uncalculated pixel Pt to the pixel P1 be "L1", and the distance from the pixel Pt to the pixel P2 be "L2".
  • Let the depth of the pixel P1 be "D1" and the depth of the pixel P2 be "D2".
  • The depth interpolation unit 346 calculates the depth "Dt" of the no-normal gradient-uncalculated pixel Pt based on Expression (17).
  • In this manner, the depth interpolation unit 346 performs interpolation processing using the depths of the pixels located around a no-normal gradient-uncalculated pixel to calculate the depth of that pixel.
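Expression (17) is not reproduced in this excerpt; the inverse-distance weighting below is its natural reading and is shown here as an assumption:

```python
def interpolate_depth(d1, d2, l1, l2):
    """Depth of Pt from its nearest valid neighbours P1 (left) and P2 (right):
    the closer neighbour gets the larger weight."""
    return (l2 * d1 + l1 * d2) / (l1 + l2)

# Example: P1 at distance 1 with depth 2.0 m and P2 at distance 3 with depth
# 4.0 m give Dt = (3*2.0 + 1*4.0) / 4 = 2.5 m, closer to the nearer pixel P1.
```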
  • FIG. 17 is a flow chart showing the operation of the third embodiment.
  • In step ST31, the image processing apparatus performs pixel classification.
  • The image processing device 30 calculates the texture determination value using the pixel values, determines based on the calculated texture determination value whether each pixel has texture information, and proceeds to step ST32.
  • In step ST32, the image processing apparatus calculates gradients using texture information.
  • The image processing device 30 calculates the gradient, using texture information, for the pixels determined to have texture information, and proceeds to step ST33.
  • In step ST33, the image processing apparatus calculates gradients using polarization information.
  • The image processing device 30 calculates the gradient, using polarization information, for the pixels determined not to have texture information, and proceeds to step ST34.
  • In step ST34, the image processing apparatus generates no-normal gradient-uncalculated pixel information.
  • The image processing device 30 identifies the no-normal gradient-uncalculated pixels on the basis of the gradient calculation result of step ST33, the reliability of the calculated gradients, and the amplitude and phase of the trigonometric-function waveforms used for calculating the gradients; it then generates no-normal gradient-uncalculated pixel information based on the determination result and proceeds to step ST35.
  • In step ST35, the image processing apparatus calculates the depth.
  • The image processing device 30 calculates the depth based on the gradients calculated in step ST32 and step ST33, and proceeds to step ST36.
  • In step ST36, the image processing apparatus generates normal information.
  • The image processing device 30 calculates normals based on the polarization information used when the gradients were calculated in step ST33, generates normal information indicating the calculated normals, and proceeds to step ST37.
  • In step ST37, the image processing apparatus performs integration processing.
  • The image processing device 30 performs integration processing using the depths obtained in pixel units and the normals indicated by the normal information, calculates depths at a resolution higher than the pixel unit, and proceeds to step ST38.
  • In step ST38, the image processing apparatus performs depth interpolation processing.
  • The image processing device 30 calculates the depths of the no-normal gradient-uncalculated pixels indicated by the no-normal gradient-uncalculated pixel information generated in step ST34, by interpolation processing using the depths of nearby pixels.
  • In this way, according to the third embodiment, the depth can be calculated even for pixels whose depth cannot be calculated in the first embodiment or the second embodiment.
  • FIG. 18 illustrates the configuration of the imaging apparatus according to the fourth embodiment.
  • In the imaging apparatus, six imaging units are arranged in the horizontal direction, and six such rows are stacked in the vertical direction.
  • Polarizing elements 21-(1,1) to 21-(6,6) are provided in the imaging units 20-(1,1) to 20-(6,6), respectively.
  • FIG. 19 illustrates the polarization direction of the imaging device in the fourth embodiment.
  • The polarizing elements are arranged so that the polarization direction differs by a predetermined angle, for example 30°, between imaging units that are adjacent vertically and horizontally; along each row, for example, the polarization directions take values of 0°, 30°, and so on up to 150°.
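A small sketch of the 6 x 6 layout assumed above, where adjacent units differ by 30 degrees both horizontally and vertically and directions wrap within the 0 to 180 degree range:

```python
# Hypothetical direction grid for the imaging units 20-(1,1) to 20-(6,6).
directions = [[(30 * (row + col)) % 180 for col in range(6)] for row in range(6)]
for row in directions:
    print(row)  # e.g. first row: [0, 30, 60, 90, 120, 150]
```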
  • The image processing apparatus includes the preprocessing unit 31, the parameter holding unit 32, the polarization epipolar plane image generation unit 33, and the depth calculation unit 34, as in the first embodiment.
  • The preprocessing unit 31 uses the parameters stored in the parameter holding unit 32 to perform preprocessing on the polarization imaging images acquired by the plurality of imaging units.
  • The parameter holding unit 32 stores in advance internal parameters and external parameters obtained by performing calibration using a predetermined subject such as a checkerboard.
  • The preprocessing unit 31 performs distortion correction and the like on each polarization imaging image using the internal parameters of the imaging unit that acquired it. Further, the preprocessing unit 31 performs registration of the polarization imaging images using the images processed with the internal parameters and the external parameters.
  • the preprocessing unit 31 outputs the polarization imaging image after the preprocessing to the polarization epipolar plane image generating unit 33.
  • The polarization epipolar plane image generation unit 33 generates a first polarization epipolar plane image and a first epipolar plane image from a plurality of polarization imaging images having different viewpoint positions in a first direction, and generates a second polarization epipolar plane image and a second epipolar plane image from a plurality of polarization imaging images having different viewpoint positions in a second direction different from the first direction.
  • From the polarization imaging images acquired by the plurality of imaging units arranged in the horizontal direction, the polarization epipolar plane image generation unit 33 extracts, from each image, a line image in the arrangement direction of the imaging units, that is, the horizontal direction, so as to include the image of a desired position on the subject.
  • The epipolar plane image is generated by arranging the extracted line images, in the order of the imaging units, in the vertical direction orthogonal to the arrangement direction, at intervals corresponding to the intervals of the imaging units (the intervals of the viewpoint positions).
  • The polarization epipolar plane image generation unit 33 likewise generates the polarization epipolar plane image by arranging the extracted line images in the order of the viewpoint positions in the vertical direction orthogonal to the arrangement direction, at intervals corresponding to the intervals of the viewpoint positions.
  • Similarly, from the polarization imaging images acquired by the plurality of imaging units arranged in the vertical direction, the polarization epipolar plane image generation unit 33 extracts, from each image, a line image in the arrangement direction, that is, the vertical direction, so as to include the image of the desired position on the subject.
  • The epipolar plane image is generated by arranging the extracted line images, in the order of the imaging units, in the horizontal direction orthogonal to the arrangement direction, at intervals corresponding to the intervals of the imaging units (the intervals of the viewpoint positions).
  • The polarization epipolar plane image generation unit 33 likewise generates the polarization epipolar plane image by arranging the extracted line images in the order of the viewpoint positions in the horizontal direction orthogonal to the arrangement direction, at intervals corresponding to the intervals of the viewpoint positions.
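A minimal sketch of this construction, assuming rectified images and equally spaced viewpoints so the extracted lines can be stacked directly in viewpoint order:

```python
import numpy as np

def horizontal_epi(images, row):
    """Stack the scanline 'row' (the line through the desired subject position)
    from each horizontally arranged viewpoint; result shape (num_views, W)."""
    return np.stack([img[row, :] for img in images], axis=0)

def vertical_epi(images, col):
    """Place the column 'col' from each vertically arranged viewpoint side by
    side in viewpoint order; result shape (H, num_views)."""
    return np.stack([img[:, col] for img in images], axis=1)
```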
  • The polarization epipolar plane image generation unit 33 outputs to the depth calculation unit 34 the polarization epipolar plane image and the epipolar plane image whose epipolar lines run in the horizontal direction (hereinafter referred to as the "horizontal polarization epipolar plane image" and the "horizontal epipolar plane image") and the polarization epipolar plane image and the epipolar plane image whose epipolar lines run in the vertical direction (hereinafter referred to as the "vertical polarization epipolar plane image" and the "vertical epipolar plane image").
  • the fourth embodiment differs from the first to third embodiments in the configuration of the depth calculation unit.
  • The depth calculation unit according to the fourth embodiment calculates the depth based on whichever is more reliable of the gradient calculated using the polarization imaging images acquired by the plurality of imaging units aligned in the horizontal direction and the gradient calculated using the polarization imaging images acquired by the plurality of imaging units aligned in the vertical direction.
  • FIG. 20 illustrates the configuration of the depth calculation unit according to the fourth embodiment.
  • The depth calculation unit 34 includes gradient calculation units 341H and 341V, a gradient selection unit 342, a depth conversion unit 343, a normal calculation unit 344, an integration processing unit 345, and a depth interpolation unit 346.
  • The gradient calculation unit 341H calculates the gradient of the depth calculation target pixel in the horizontal epipolar plane image and the horizontal polarization epipolar plane image generated by the polarization epipolar plane image generation unit 33.
  • The gradient calculation unit 341H includes a pixel classification unit 3411H, a texture gradient calculation unit 3412H, and a polarization gradient calculation unit 3413H.
  • The pixel classification unit 3411H classifies the pixels of the horizontal epipolar plane image into pixels having texture information and pixels not having texture information.
  • The pixel classification unit 3411H calculates the texture determination value G, as described above, for the determination target pixel.
  • If the texture determination value G is equal to or greater than the determination threshold, the pixel classification unit 3411H determines that the pixel has texture information; if the texture determination value G is smaller than the determination threshold, it determines that the pixel does not have texture information.
  • The texture gradient calculation unit 3412H calculates the gradient θth, using texture information, for the pixels having texture information classified by the pixel classification unit 3411H.
  • The texture gradient calculation unit 3412H calculates the gradient θth as described above, for example, using the differential values calculated by the pixel classification unit 3411H.
  • The texture gradient calculation unit 3412H outputs the calculated gradient θth and the reliability Rth to the gradient selection unit 342, using as the reliability Rth the dispersion obtained when the true-value gradient is calculated.
  • The polarization gradient calculation unit 3413H calculates the gradient, using polarization information, for the pixels not having texture information classified by the pixel classification unit 3411H.
  • The polarization gradient calculation unit 3413H calculates, as described above, a trigonometric-function waveform for each different combination of pixels located in a candidate gradient direction in the horizontal polarization epipolar plane image, and takes the gradient with the smallest waveform difference as the true-value gradient θph.
  • The polarization gradient calculation unit 3413H outputs the calculated gradient θph and the reliability Rph to the gradient selection unit 342, using as the reliability Rph the dispersion obtained when the true-value gradient θph is calculated.
  • The pixel classification unit 3411V classifies the pixels of the vertical epipolar plane image into pixels having texture information and pixels not having texture information.
  • The pixel classification unit 3411V calculates the texture determination value G, as described above, for the determination target pixel.
  • If the texture determination value G is equal to or greater than the determination threshold, the pixel classification unit 3411V determines that the pixel has texture information; if the texture determination value G is smaller than the determination threshold, it determines that the pixel does not have texture information.
  • The texture gradient calculation unit 3412V calculates the gradient θtv, using texture information, for the pixels having texture information classified by the pixel classification unit 3411V.
  • The texture gradient calculation unit 3412V calculates the gradient θtv as described above, for example, using the differential values calculated by the pixel classification unit 3411V.
  • The texture gradient calculation unit 3412V outputs the calculated gradient θtv and the reliability Rtv to the gradient selection unit 342, using as the reliability Rtv the dispersion obtained when the true-value gradient is calculated.
  • The polarization gradient calculation unit 3413V calculates the gradient, using polarization information, for the pixels not having texture information classified by the pixel classification unit 3411V.
  • The polarization gradient calculation unit 3413V calculates, as described above, a trigonometric-function waveform for each different combination of pixels located in a candidate gradient direction in the vertical polarization epipolar plane image, and takes the gradient with the smallest waveform difference as the true-value gradient θpv.
  • The polarization gradient calculation unit 3413V outputs the calculated gradient θpv and the reliability Rpv to the gradient selection unit 342, using as the reliability Rpv the dispersion obtained when the true-value gradient θpv is calculated.
  • Based on the gradients θth, θtv, θph, and θpv and the reliabilities Rth, Rtv, Rph, and Rpv obtained from the texture gradient calculation units 3412H and 3412V and the polarization gradient calculation units 3413H and 3413V, the gradient selection unit 342 selects the gradient of the depth calculation target pixel and outputs it to the depth conversion unit 343.
  • The gradient selection unit 342 also outputs to the normal calculation unit 344 the pixel values of the pixels located in the direction of an arbitrary gradient in the polarization epipolar plane image used for calculating the selected gradient.
  • Further, the gradient selection unit 342 generates no-normal gradient-uncalculated pixel information, indicating whether the depth calculation target pixel is a no-normal gradient-uncalculated pixel, and outputs this information to the depth interpolation unit 346.
  • FIG. 21 is a diagram for explaining the gradient selection operation. For example, when the depth calculation target pixel is determined to be a pixel having texture information based on the horizontal epipolar plane image and the gradient θth is calculated based on the texture information, and the pixel is also determined to be a pixel having texture information based on the vertical epipolar plane image and the gradient θtv is calculated based on the texture information, the gradient selection unit 342 compares the reliability Rth with the reliability Rtv, selects the gradient with the higher reliability, and outputs it to the depth conversion unit 343. In addition, when the reliability of the selected gradient is lower than a preset reliability threshold, the gradient selection unit 342 may invalidate the calculated gradient and treat the depth calculation target pixel as a no-normal gradient-uncalculated pixel.
  • Similarly, when the depth calculation target pixel is determined to be a pixel not having texture information based on the horizontal epipolar plane image and the gradient θph is calculated based on the polarization information, and the pixel is also determined to be a pixel not having texture information based on the vertical epipolar plane image and the gradient θpv is calculated based on the polarization information, the gradient selection unit 342 compares the reliability Rph with the reliability Rpv, selects the gradient with the higher reliability, and outputs it to the depth conversion unit 343.
  • Here too, when the reliability of the selected gradient is lower than the reliability threshold, the gradient selection unit 342 may invalidate the calculated gradient and treat the depth calculation target pixel as a no-normal gradient-uncalculated pixel.
  • When the depth calculation target pixel is determined to be a pixel having texture information based on the epipolar plane image in one direction (for example, the horizontal direction) and a gradient (for example, θth) is calculated based on the texture information, while a gradient (for example, θpv) is calculated based on the polarization information in the other direction, the gradient selection unit 342 regards the texture-based gradient as the more reliable one, selects the gradient calculated based on the texture information (for example, θth), and outputs it to the depth conversion unit 343.
  • Alternatively, the gradient selection unit 342 may select the gradient calculated based on the polarization information (for example, θpv) and output it to the depth conversion unit 343. Furthermore, when the gradient calculated based on the polarization information is selected instead of the gradient calculated based on the texture information and its reliability is lower than the reliability threshold, the calculated gradient may be invalidated and the depth calculation target pixel may be treated as a no-normal gradient-uncalculated pixel.
  • When a gradient is calculated based on the epipolar plane image or the polarization epipolar plane image in one direction, and no gradient is calculated based on the epipolar plane image or the polarization epipolar plane image in the other direction, the gradient selection unit 342 outputs the calculated gradient to the depth conversion unit 343. In addition, when the reliability of the calculated gradient is lower than the reliability threshold, the gradient selection unit 342 may treat the depth calculation target pixel as a no-normal gradient-uncalculated pixel.
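A minimal sketch of these selection rules in Python, where `None` stands for "not calculated":

```python
def select_gradient(texture_h, texture_v, polar_h, polar_v, r_min):
    """Each argument except r_min is a (gradient, reliability) tuple or None.
    Texture-based gradients are preferred over polarization-based ones,
    higher reliability wins within the same kind, and a result whose
    reliability falls below r_min is invalidated (the function then returns
    None, meaning a no-normal gradient-uncalculated pixel)."""
    texture = [c for c in (texture_h, texture_v) if c is not None]
    polar = [c for c in (polar_h, polar_v) if c is not None]
    pool = texture if texture else polar      # prefer texture information
    if not pool:
        return None
    gradient, reliability = max(pool, key=lambda c: c[1])
    return gradient if reliability >= r_min else None
```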
  • The depth conversion unit 343 converts the gradient selected by the gradient selection unit 342 into a depth, as described above, and outputs the depth to the normal calculation unit 344 and the integration processing unit 345.
  • The normal calculation unit 344 calculates the normal of each gradient-uncalculated pixel for which a normal can be calculated. Based on the pixel values (luminance) supplied from the gradient selection unit 342, the normal calculation unit 344 determines the azimuth angle φ at which the luminance is maximum. Further, the normal calculation unit 344 determines the zenith angle θz corresponding to the degree of polarization ρ calculated using the maximum luminance and the minimum luminance obtained from the relationship between luminance and polarization angle.
  • When the gradient selected by the gradient selection unit 342 is the gradient θpv calculated by the gradient calculation unit 341V, the normal may be calculated by the same processing as when the azimuth angle φ and the zenith angle θz are calculated from the pixel values used by the gradient calculation unit 341H for the gradient θph, as described above, with the coordinate axes rotated by 90 degrees.
  • The normal calculation unit 344 generates normal information indicating the azimuth angle φ and the zenith angle θz calculated for the gradient-uncalculated pixels, and outputs the generated normal information to the integration processing unit 345.
  • The integration processing unit 345 performs integration processing using the depth obtained by the depth conversion unit 343 and the normals calculated by the normal calculation unit 344, calculates a depth with high resolution and high accuracy, and outputs it to the depth interpolation unit 346.
  • The depth interpolation unit 346 calculates, by interpolation processing, the depths of the no-normal gradient-uncalculated pixels indicated by the no-normal gradient-uncalculated pixel information supplied from the gradient selection unit 342.
  • The depth interpolation unit 346 performs interpolation processing using the depths of pixels that are located around a no-normal gradient-uncalculated pixel and whose depths have been calculated by the integration processing unit 345, and thereby calculates the depth of the no-normal gradient-uncalculated pixel.
  • FIG. 22 is a flow chart showing the operation of the fourth embodiment.
  • In step ST41, the image processing apparatus performs pixel classification.
  • The image processing device 30 calculates the texture determination value using the pixel values and, based on the calculated texture determination value, determines for each of the horizontal epipolar plane image and the vertical epipolar plane image whether the pixel has texture information; the process then proceeds to step ST42.
  • In step ST42, the image processing apparatus calculates gradients using texture information.
  • The image processing device 30 calculates, using texture information, the gradient and the reliability of the pixels determined to have texture information in each of the horizontal epipolar plane image and the vertical epipolar plane image, and proceeds to step ST43.
  • In step ST43, the image processing apparatus calculates gradients using polarization information.
  • The image processing device 30 calculates, using polarization information, the gradient and the reliability of the pixels determined not to have texture information in each of the horizontal polarization epipolar plane image and the vertical polarization epipolar plane image, and proceeds to step ST44.
  • In step ST44, the image processing apparatus selects a gradient.
  • The image processing device 30 selects the gradient to be used for calculating the depth from the gradients calculated in step ST42 and step ST43, and proceeds to step ST45.
  • In step ST45, the image processing apparatus generates no-normal gradient-uncalculated pixel information.
  • The image processing device 30 identifies the no-normal gradient-uncalculated pixels on the basis of the gradient calculation results of step ST42 and step ST43, the reliability of the gradient selected in step ST44, and the amplitude and phase of the trigonometric-function waveforms used for calculating the gradients; it then generates no-normal gradient-uncalculated pixel information based on the determination result and proceeds to step ST46.
  • In step ST46, the image processing apparatus calculates the depth.
  • The image processing device 30 calculates the depth based on the gradient selected in step ST44, or based on the gradient that was selected in step ST44 and whose reliability is equal to or higher than the preset reliability threshold, and proceeds to step ST47.
  • In step ST47, the image processing apparatus generates normal information.
  • The image processing device 30 calculates the normals of the gradient-uncalculated pixels for which a normal can be calculated, and generates normal information indicating the calculated normals.
  • For a pixel whose gradient selected in step ST44 was calculated using the polarization epipolar plane image, whose gradient reliability is lower than the reliability threshold, and for which the amplitude and phase of the trigonometric-function waveform are constant regardless of the gradient, that is, for a gradient-uncalculated pixel whose normal can be calculated, the normal is calculated using the pixel values of the pixels located in the direction of an arbitrary gradient in the polarization epipolar plane image used for calculating the selected gradient; normal information indicating the calculated normal is generated, and the process proceeds to step ST48.
  • In step ST48, the image processing apparatus performs integration processing.
  • The image processing device 30 performs integration processing using the depths obtained in pixel units and the normals indicated by the normal information, calculates a depth with a resolution higher than before the integration processing, and proceeds to step ST49.
  • In step ST49, the image processing apparatus performs depth interpolation processing.
  • The image processing device 30 calculates, by interpolation processing using the depths of nearby pixels, the depths of the no-normal gradient-uncalculated pixels indicated by the no-normal gradient-uncalculated pixel information generated in step ST45.
  • According to the fourth embodiment, it is thus possible to calculate a highly reliable, high-accuracy depth using a plurality of polarization imaging images obtained with viewpoint positions arranged two-dimensionally.
  • Although the imaging system 10 has been illustrated for the case where a plurality of imaging units with different polarization directions are arranged linearly, a single imaging device may instead be moved while the polarization direction of its polarizing element is switched.
  • In this case, the imaging device is sequentially moved to the positions of the imaging devices 20-1 to 20-6 shown in FIG. 1, the polarization direction of the polarizing element 21 is switched according to the movement of the imaging device, and the stationary objects OBa and OBb are imaged.
  • For the preprocessing, external parameters corresponding to the movement of the imaging device are used.
  • When an imaging system is configured in this manner, a plurality of polarization imaging images having different polarization directions and viewpoint positions can be obtained by moving one imaging device in the horizontal or vertical direction, without using a plurality of imaging devices, and the depth can be calculated with high resolution and accuracy.
  • the technology according to the present disclosure can be applied to various products.
  • The technology according to the present disclosure may be realized as a device mounted on any type of mobile object such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 23 is a block diagram showing a schematic configuration example of a vehicle control system that is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an external information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
  • As a functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
  • The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs.
  • For example, the drive system control unit 12010 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, blinkers, or fog lamps.
  • In this case, radio waves transmitted from a portable device that substitutes for a key, or signals of various switches, can be input to the body system control unit 12020.
  • The body system control unit 12020 receives the input of these radio waves or signals, and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
  • The external information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted.
  • For example, an imaging unit 12031 is connected to the external information detection unit 12030.
  • The external information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle, and receives the captured image.
  • The external information detection unit 12030 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, or the like based on the received image.
  • The imaging unit 12031 is an optical sensor that receives light and outputs an electric signal corresponding to the amount of received light.
  • the imaging unit 12031 can output an electric signal as an image or can output it as distance measurement information.
  • The light received by the imaging unit 12031 may be visible light, or may be invisible light such as infrared light.
  • The in-vehicle information detection unit 12040 detects information inside the vehicle.
  • a driver state detection unit 12041 that detects a state of a driver is connected to the in-vehicle information detection unit 12040.
  • The driver state detection unit 12041 includes, for example, a camera that images the driver, and the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing, based on the detection information input from the driver state detection unit 12041.
  • The microcomputer 12051 can calculate a control target value for the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the external information detection unit 12030 or the in-vehicle information detection unit 12040, and can output a control command to the drive system control unit 12010.
  • The microcomputer 12051 can also perform cooperative control for the purpose of automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the external information detection unit 12030 or the in-vehicle information detection unit 12040.
  • Further, the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the external information detection unit 12030.
  • For example, the microcomputer 12051 can perform cooperative control for the purpose of anti-glare, such as switching from high beam to low beam, by controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the external information detection unit 12030.
  • The audio image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying information to a passenger of the vehicle or to the outside of the vehicle.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
  • The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 24 is a diagram illustrating an example of the installation position of the imaging unit 12031.
  • imaging units 12101, 12102, 12103, 12104, and 12105 are provided as the imaging unit 12031.
  • The imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper portion of the windshield in the vehicle interior of the vehicle 12100.
  • The imaging unit 12101 provided on the front nose and the imaging unit 12105 provided at the upper portion of the windshield in the vehicle interior mainly acquire images ahead of the vehicle 12100.
  • The imaging units 12102 and 12103 provided on the side mirrors mainly acquire images of the sides of the vehicle 12100.
  • The imaging unit 12104 provided on the rear bumper or the back door mainly acquires images behind the vehicle 12100.
  • The imaging unit 12105 provided at the upper portion of the windshield in the vehicle interior is mainly used to detect a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 24 shows an example of the imaging range of the imaging units 12101 to 12104.
  • The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view of the vehicle 12100 viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging devices, or an imaging device having pixels for phase difference detection.
  • Based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative velocity with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the closest three-dimensional object that is on the traveling path of the vehicle 12100 and traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Further, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance behind the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, cooperative control can be performed for the purpose of automated driving in which the vehicle travels autonomously without depending on the driver's operation.
  • For example, the microcomputer 12051 can classify three-dimensional object data relating to three-dimensional objects into categories such as two-wheeled vehicles, ordinary vehicles, and large vehicles, extract the data, and use it for automatic avoidance of obstacles.
  • For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see.
  • The microcomputer 12051 then determines a collision risk indicating the degree of danger of collision with each obstacle and, when the collision risk is equal to or higher than a set value and there is a possibility of collision, can perform driving support for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104.
  • Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not it is a pedestrian.
  • When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose a rectangular contour line for emphasis on the recognized pedestrian.
  • The audio image output unit 12052 may also control the display unit 12062 to display an icon or the like indicating a pedestrian at a desired position.
  • Among the configurations described above, the image processing apparatus according to the present disclosure can be applied to the external information detection unit 12030.
  • The imaging device according to the present disclosure may be applied to the imaging units 12101, 12102, 12103, 12104, 12105, and the like. For example, if a plurality of polarization imaging images having different polarization directions and viewpoint positions are acquired by the imaging unit 12101 provided on the front nose and the imaging unit 12105 provided at the upper portion of the windshield in the vehicle interior, the external information detection unit 12030 can accurately calculate, with high resolution, the distances to a preceding vehicle, a pedestrian ahead, and obstacles.
  • Further, if the imaging units 12102 and 12103 provided on the side mirrors perform imaging while switching the polarization direction according to the movement of the vehicle, thereby acquiring a plurality of polarization imaging images having different viewpoint positions and polarization directions, the external information detection unit 12030 can accurately calculate, with high resolution, the distance to an object located on the side, for example when performing parallel parking, simply by passing in front of the parking space.
  • Likewise, the external information detection unit 12030 can accurately calculate, with high resolution, the distance to a following vehicle or to an object located behind the vehicle when backing up or performing right-angle parking.
  • the series of processes described in the specification can be performed by hardware, software, or a combination of both.
  • For example, a program in which the processing sequence is recorded can be installed in a memory of a computer incorporated in dedicated hardware and executed, or the program can be installed and executed on a general-purpose computer capable of executing various kinds of processing.
  • For example, the program can be recorded in advance on a hard disk, a solid state drive (SSD), or a read only memory (ROM) serving as a recording medium.
  • Alternatively, the program can be stored (recorded) temporarily or permanently on a removable recording medium such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disc, a DVD (Digital Versatile Disc), a BD (Blu-ray Disc (registered trademark)), a magnetic disk, or a semiconductor memory card.
  • Such a removable recording medium can be provided as so-called packaged software.
  • The program may be installed on the computer from the removable recording medium, or may be transferred to the computer wirelessly or by wire from a download site via a network such as a LAN (Local Area Network) or the Internet.
  • the computer can receive the program transferred in such a manner, and install the program on a recording medium such as a built-in hard disk.
  • The effects described in this specification are merely examples and are not limiting, and there may be additional effects that are not described.
  • The present technology should not be construed as being limited to the embodiments described above.
  • The embodiments disclose the present technology by way of example, and it is obvious that those skilled in the art can modify or substitute the embodiments without departing from the gist of the present technology. That is, in order to determine the gist of the present technology, the claims should be taken into consideration.
  • The image processing apparatus of the present technology can also have the following configurations.
  • (1) An image processing apparatus including a depth calculation unit that performs depth calculation of a depth calculation target pixel based on polarization epipolar plane images generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions.
  • (2) The image processing apparatus according to (1), wherein the depth calculation unit includes: a polarization gradient calculation unit that calculates the gradient of the depth calculation target pixel in the polarization epipolar plane image; and a depth conversion unit that converts the gradient calculated by the polarization gradient calculation unit into a depth.
  • (5) The image processing apparatus according to (4), wherein the polarization directions are four or more directions within a range of angle differences of less than 180 degrees.
  • (6) The image processing apparatus according to any one of (2) to (5), further including: a normal calculation unit that calculates the normal of the depth calculation target pixel using pixels located in the direction of the gradient calculated by the polarization gradient calculation unit in the polarization epipolar plane image; and an integration processing unit that performs interpolation processing using the depth obtained by the depth conversion unit and the normal calculated by the normal calculation unit to obtain a depth that is denser and of higher resolution than the pixel unit.
  • (7) The image processing apparatus according to any one of (2) to (6), further including a depth interpolation unit that calculates the depth of the depth calculation target pixel by interpolation processing using the depths of pixels adjacent to the depth calculation target pixel.
  • (8) The image processing apparatus according to any one of (2) to (7), further including a normal calculation unit that calculates a normal using pixels located in the direction of an arbitrary gradient in the polarization epipolar plane image, wherein the normal calculation unit calculates the normal of a depth calculation target pixel for which the gradient cannot be calculated, using pixels located in the direction of an arbitrary gradient from that pixel.
  • (9) The image processing apparatus according to any one of (2) to (8), wherein the depth calculation unit further includes: a pixel classification unit that classifies the depth calculation target pixel as a pixel having texture information or a pixel not having texture information based on an epipolar plane image generated from the plurality of polarization imaging images; and a texture gradient calculation unit that calculates the gradient of the depth calculation target pixel in the epipolar plane image, the texture gradient calculation unit calculating the gradients of pixels having texture information based on the epipolar plane image, and the polarization gradient calculation unit calculating the gradients of pixels not having texture information based on the polarization epipolar plane image.
  • (10) The image processing apparatus according to (9), wherein the depth calculation unit further includes a gradient selection unit that selects one of: the gradient of the depth calculation target pixel calculated by the polarization gradient calculation unit or the texture gradient calculation unit based on a first polarization epipolar plane image or a first epipolar plane image generated from a plurality of polarization imaging images having different viewpoint positions in a first direction; and the gradient of the depth calculation target pixel calculated by the polarization gradient calculation unit or the texture gradient calculation unit based on a second polarization epipolar plane image or a second epipolar plane image generated from a plurality of polarization imaging images having different viewpoint positions in a second direction different from the first direction.
  • (11) The image processing apparatus according to (10), wherein, when one of the gradients is a gradient calculated by the texture gradient calculation unit and the other gradient is a gradient calculated by the polarization gradient calculation unit, the gradient selection unit selects the gradient calculated by the texture gradient calculation unit.
  • (12) The image processing apparatus according to (10), wherein the gradient selection unit selects the gradient with the higher reliability when both gradients are calculated by the texture gradient calculation unit or both are calculated by the polarization gradient calculation unit.
  • According to this technology, the depth of the depth calculation target pixel is calculated based on polarization epipolar plane images generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions. For this reason, it becomes possible to calculate the depth based on polarization information even in an area without texture. The technology is therefore suitable for apparatuses that acquire the distance to a subject using captured images, for example, electronic apparatuses mounted on mobile objects such as automobiles.


Abstract

An imaging system (10) uses a plurality of imaging devices (20-1 to 20-6) and polarizing elements (21-1 to 21-6) to acquire a plurality of polarization images having different polarization directions and viewpoint positions. An image processing device (30) classifies pixels subject to depth calculation into pixels having texture information and pixels not having texture information, based on an epipolar plane image generated from the plurality of polarization images. The epipolar plane image is used to calculate the gradients of pixels having texture information; a polarization epipolar plane image is used to calculate the gradients of pixels not having texture information. By converting the calculated gradients into depths, it is possible to calculate not only the depths of pixels that have texture information but also the depths of pixels that do not.

Description

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, PROGRAM, AND IMAGE PROCESSING SYSTEM


A first aspect of this technology is an image processing apparatus including a depth calculation unit that performs depth calculation of a depth calculation target pixel based on polarization epipolar plane images generated from a plurality of polarization imaging images having different polarization directions and viewpoint positions.

In this technology, whether the depth calculation target pixel is a pixel having texture information or a pixel not having texture information is classified based on an epipolar plane image generated from a plurality of polarization imaging images. For pixels having texture information, the gradient is calculated based on the epipolar plane image; for pixels not having texture information, the gradient is calculated based on the polarization epipolar plane image; and the calculated gradient is converted into a depth, whereby the depth of the depth calculation target pixel is calculated. In addition, only gradients whose calculated reliability is equal to or higher than a preset reliability threshold may be converted into depths.

The gradient based on polarization information is obtained by fitting a trigonometric function to the pixels on a straight line passing through the position of the depth calculation target pixel in the polarization epipolar plane image; the gradient of the straight line that minimizes the difference between the trigonometric-function waveforms obtained from different subsets of the pixels used for fitting is taken as the gradient of the depth calculation target pixel. The polarization directions of the plurality of polarization imaging images are set to four or more directions within a range of angle differences of less than 180 degrees.
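As an illustration of why at least four directions are needed: the luminance model over the polarization angle p has three unknowns, I = c + a*cos(2p) + b*sin(2p), so three directions fix the sinusoid exactly, and a fourth provides the redundancy used to compare waveforms between pixel combinations. A quick check, assuming a 45-degree spacing:

```python
import numpy as np

angles = np.deg2rad([0, 45, 90, 135])  # four directions within 180 degrees
A = np.column_stack([np.ones_like(angles), np.cos(2 * angles), np.sin(2 * angles)])
print(np.linalg.matrix_rank(A))        # 3: the three unknowns are recoverable,
                                       # with one degree of redundancy left over
```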

In addition, the normal of the depth calculation target pixel is calculated using pixels located in the direction of the gradient calculated using the polarization epipolar plane image, and interpolation processing is performed using the depth and the normal of the depth calculation target pixel to obtain a depth at a resolution higher than the pixel unit.

When the gradient of the depth calculation target pixel cannot be calculated, the depth of the depth calculation target pixel is calculated by interpolation processing using the depths and normals of pixels close to the depth calculation target pixel. Further, the normal of a depth calculation target pixel for which the gradient cannot be calculated is calculated using pixels located in the direction of an arbitrary gradient from that pixel.

Furthermore, one of the gradient of the depth calculation target pixel calculated based on a first polarization epipolar plane image or a first epipolar plane image generated from a plurality of polarization imaging images having different viewpoint positions in a first direction, and the gradient calculated based on a second polarization epipolar plane image or a second epipolar plane image generated from a plurality of polarization imaging images having different viewpoint positions in a second direction different from the first direction, may be selected, and the selected gradient converted into a depth. In this case, when one gradient is calculated based on texture information and the other based on polarization information, the gradient calculated based on the texture information is selected. When both gradients are calculated based on texture information, or both based on polarization information, the gradient with the higher reliability is selected.

 A second aspect of this technology resides in an image processing method that includes performing depth calculation of a depth calculation target pixel based on a polarization epipolar plane image generated from a plurality of polarization images that differ in polarization direction and viewpoint position.

 A third aspect of this technology resides in a program that causes a computer to execute image processing using polarization images, the program causing the computer to execute: a procedure of generating a polarization epipolar plane image from a plurality of the polarization images that differ in polarization direction and viewpoint position; and a procedure of performing depth calculation of a depth calculation target pixel based on the generated polarization epipolar plane image.

 Note that the program of the present technology can be provided, for example, to a general-purpose computer capable of executing various program codes, via a storage medium provided in a computer-readable format, such as an optical disc, a magnetic disk, or a semiconductor memory, or via a communication medium such as a network. By providing such a program in a computer-readable format, processing according to the program is realized on the computer.

 A fourth aspect of this technology resides in an image processing system including: an imaging device that acquires a plurality of polarization images differing in polarization direction and viewpoint position; and an image processing device that performs depth calculation of a depth calculation target pixel from the plurality of polarization images acquired by the imaging device, the image processing device including a polarization epipolar plane image generation unit that generates a polarization epipolar plane image containing the depth calculation target pixel from the plurality of polarization images, and a depth calculation unit that performs depth calculation of the depth calculation target pixel based on the polarization epipolar plane image generated by the polarization epipolar plane image generation unit.

 According to this technology, the depth of a depth calculation target pixel is calculated based on a polarization epipolar plane image generated from a plurality of polarization images that differ in polarization direction and viewpoint position. Depth can therefore be calculated from polarization information even for regions without texture. The effects described in this specification are merely examples and are not limiting; additional effects may also be obtained.

Brief description of the drawings:
Fig. 1 illustrates the configuration of the imaging system.
Fig. 2 illustrates the configuration of the image processing device.
Fig. 3 illustrates a plurality of polarization images, an epipolar plane image, and a polarization epipolar plane image.
Fig. 4 illustrates the configuration of the depth calculation unit in the first embodiment.
Fig. 5 is a diagram for explaining the calculation of the gradient of a pixel having texture information.
Fig. 6 is a diagram for explaining the relationship between the polarization direction and the pixel value (luminance).
Fig. 7 illustrates the relationship between luminance and polarization angle.
Fig. 8 is a diagram for explaining the case where the gradient can be calculated using polarization information.
Fig. 9 is a flowchart showing the operation of the first embodiment.
Fig. 10 is a flowchart illustrating the depth calculation operation.
Fig. 11 illustrates the configuration of the depth calculation unit in the second embodiment.
Fig. 12 illustrates the relationship between the degree of polarization and the zenith angle.
Fig. 13 is a diagram for explaining depth interpolation based on depths and normals.
Fig. 14 is a flowchart showing the operation of the second embodiment.
Fig. 15 illustrates the configuration of the depth calculation unit in the third embodiment.
Fig. 16 is a diagram for explaining depth interpolation.
Fig. 17 is a flowchart showing the operation of the third embodiment.
Fig. 18 illustrates the configuration of the imaging device in the fourth embodiment.
Fig. 19 illustrates the polarization directions of the imaging device in the fourth embodiment.
Fig. 20 illustrates the configuration of the depth calculation unit in the fourth embodiment.
Fig. 21 is a diagram for explaining the gradient selection operation.
Fig. 22 is a flowchart showing the operation of the fourth embodiment.
Fig. 23 is a block diagram showing an example of a schematic configuration of a vehicle control system.
Fig. 24 is an explanatory diagram showing an example of installation positions of vehicle exterior information detection units and imaging units.

 Hereinafter, modes for carrying out the present technology will be described. The description proceeds in the following order.
 1. Configuration of the imaging system
 2. First embodiment
 3. Second embodiment
 4. Third embodiment
 5. Fourth embodiment
 6. Other configurations and operations
 7. Application examples

 <1. Configuration of the Imaging System>
 Fig. 1 illustrates the configuration of an imaging system using the image processing device of the present technology. The imaging system 10 is configured using an imaging device that acquires a plurality of polarization images differing in polarization direction and viewpoint position, and an image processing device that performs image processing using the acquired polarization images. In the imaging device, a polarizing element such as a polarizing plate or a polarizing filter is provided in front of the imaging lens or the image sensor, and a plurality of polarization images are acquired. The imaging device has, for example, a plurality of imaging units arranged in parallel along a straight line, with the polarization direction of the polarizing element set to a different direction for each imaging unit. Since rotating the polarization direction by 180 degrees yields the same polarization direction as before the rotation, the polarization direction of each polarizing element is set to an angle of at least 0 degrees and less than 180 degrees. In Fig. 1, for example, six imaging units 20-1 to 20-6 are arranged in parallel along a straight line. The polarizing elements 21-1 to 21-6 of the imaging units 20-1 to 20-6 are set so that the polarization direction angle increases with an angular difference of, for example, 30 degrees between adjacent imaging units. The six imaging units 20-1 to 20-6 capture, within their imaging range, for example a prismatic subject OBa and a cylindrical subject OBb located farther away than the subject OBa, acquire polarization images that differ from each other in viewpoint position and polarization direction, and output them to the image processing device 30. As described later, the imaging device acquires four or more polarization images that differ in viewpoint position and polarization direction.

 The image processing device 30 generates a polarization epipolar plane image and an epipolar plane image from the plurality of polarization images differing in polarization direction and viewpoint position, and uses the generated polarization epipolar plane image and the like to calculate depth, that is, distance information to the subject.

 <2. First Embodiment>
 Next, a first embodiment of the image processing device will be described. Fig. 2 illustrates the configuration of the image processing device. The image processing device 30 includes a preprocessing unit 31, a parameter holding unit 32, a polarization epipolar plane image generation unit 33, and a depth calculation unit 34.

 The preprocessing unit 31 uses the parameters stored in the parameter holding unit 32 to preprocess the polarization images acquired by the imaging device. The parameter holding unit 32 stores intrinsic parameters and extrinsic parameters obtained in advance by calibration using a predetermined subject such as a checkerboard. The intrinsic parameters are parameters unique to each imaging unit, such as its focal length and lens distortion coefficients. The extrinsic parameters specify the arrangement of the imaging units and indicate translation and rotation. The preprocessing unit 31 performs distortion correction and the like on each polarization image using the intrinsic parameters of the imaging unit that acquired that image, and then performs registration of the polarization images using the extrinsic parameters. As a result, in the preprocessed polarization images, the pixels indicating a desired position on the subject have no vertical displacement between images, and exhibit a horizontal displacement corresponding to the difference in depth, that is, the distance information to the subject. The preprocessing unit 31 outputs the preprocessed polarization images to the polarization epipolar plane image generation unit 33.

 The polarization epipolar plane image generation unit 33 extracts, from the plurality of polarization images differing in polarization direction and viewpoint position, an image along the direction in which the viewpoint positions are arranged (the horizontal direction in the configuration of Fig. 1) so as to include the image of a desired position on the subject. It stacks the extracted images in the direction orthogonal to that arrangement direction (the vertical direction), in the order of the viewpoint positions and at intervals corresponding to the spacing of the viewpoint positions (the baseline between imaging units), to generate an epipolar plane image. Likewise, the polarization epipolar plane image generation unit 33 stacks the extracted images in viewpoint order and at intervals corresponding to the viewpoint spacing to generate a polarization epipolar plane image. Each pixel of the polarization epipolar plane image carries polarization direction information indicating the polarization direction of the polarization image from which it was extracted.
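 As a rough illustration of this stacking operation, the sketch below builds an epipolar plane image from a stack of rectified views; the function and variable names, the equal view spacing, and the placeholder data are illustrative assumptions, not details taken from this document.

import numpy as np

def build_epi(views, row):
    # Take the same image row from every view and stack the rows in
    # viewpoint order: each EPI row then corresponds to one viewpoint.
    return np.stack([v[row, :] for v in views], axis=0)

views = [np.random.rand(480, 640) for _ in range(6)]  # placeholder views
polar_angles = [0, 30, 60, 90, 120, 150]              # degrees, per view
epi = build_epi(views, row=240)                       # shape (6, 640)
# For the polarization EPI, row i additionally carries the polarization
# direction polar_angles[i] of the view it was extracted from.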

 Fig. 3 illustrates a plurality of polarization images together with an epipolar plane image and a polarization epipolar plane image. Fig. 3(a) illustrates the polarization directions of the imaging units 20-1 to 20-6. For example, the polarizing element 21-1 provided in the imaging unit 20-1 has a polarization direction angle of 0 degrees, and the polarizing element 21-2 provided in the imaging unit 20-2 has a polarization direction angle of 30 degrees. The polarizing elements of the other imaging units are likewise set so that the polarization direction angle increases by 30 degrees between adjacent imaging units. The spacing between the imaging units is a distance ds; for example, with the imaging unit 20-1 as reference, the imaging unit 20-6 is located at a distance of 5ds.

 Fig. 3(b) shows the polarization images acquired by the imaging units 20-1 to 20-6, with the positions of the images indicating desired positions on the subject shown by broken lines. In the following description, the desired positions on the subject are, for example, the position PS1 on a side edge of the prismatic subject OBa, the position PS2 on the side surface of the cylindrical subject OBb located farther away than the subject OBa, and the position PS3 within a side surface of the prismatic subject OBa. In each polarization image, the left end of the image at the position indicated by the broken line is taken as the horizontal reference pixel position in the epipolar plane image and the polarization epipolar plane image.

 Fig. 3(c) shows the epipolar plane image. In the epipolar plane image, the horizontal axis is the horizontal position in the polarization image, and the vertical axis is the physical distance between viewpoint positions (the physical distance between imaging units).

 Fig. 3(d) shows the polarization epipolar plane image. Its axes are the same: the horizontal axis is the horizontal position in the polarization image, and the vertical axis is the physical distance between viewpoint positions.

 In the epipolar plane image shown in Fig. 3(c), the line connecting the points PS1-1 to PS1-6 corresponding to the position PS1 in each polarization image is straight, because the viewpoint positions are equally spaced. The same holds in the polarization epipolar plane image shown in Fig. 3(d), and the lines connecting the points for the positions PS2 and PS3 are also straight. When the depth, that is, the distance from the imaging units to the desired position, is large, the inclination of the straight line with respect to the vertical axis is small; when the depth is small, the inclination is large. Accordingly, among the positions PS1 to PS3, the straight line for the position PS1, which has the smallest depth, has the largest inclination with respect to the vertical axis, and the straight line for the position PS2, which has the largest depth, has the smallest inclination.
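 Although not spelled out at this point in the text, the link between line inclination and depth follows from standard rectified-camera geometry; as a supporting sketch, for a scene point at depth D and views separated by the baseline ds, the image position shifts between adjacent views by the disparity

\Delta u = \frac{f \, ds}{D},

so on the epipolar plane image (vertical axis: viewpoint distance, horizontal axis: image position) the point traces a straight line whose inclination from the vertical axis satisfies \tan\theta = \Delta u / ds = f / D: distant points give nearly vertical lines, near points give strongly inclined ones.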

 Returning to Fig. 2, the polarization epipolar plane image generation unit 33 generates the epipolar plane image and the polarization epipolar plane image from the plurality of preprocessed polarization images and outputs them to the depth calculation unit 34.

 The depth calculation unit 34 calculates depth using the epipolar plane image and the polarization epipolar plane image generated by the polarization epipolar plane image generation unit 33. It calculates the depth of positions exhibiting texture based on the epipolar plane image, and the depth of positions without texture based on the polarization epipolar plane image.

 Fig. 4 illustrates the configuration of the depth calculation unit. The depth calculation unit 34 includes a gradient calculation unit 341 and a depth conversion unit 343.

 The gradient calculation unit 341 calculates the gradient of each pixel in the epipolar plane image and the polarization epipolar plane image generated by the polarization epipolar plane image generation unit 33. The gradient calculation unit 341 includes a pixel classification unit 3411, a texture gradient calculation unit 3412, and a polarization gradient calculation unit 3413.

 The pixel classification unit 3411 classifies the pixels of the epipolar plane image into pixels having texture information and pixels having no texture information. For the pixel under consideration, the pixel classification unit 3411 calculates the horizontal differential value Iu and the vertical differential value Is using, for example, a first-order differential filter such as a Sobel or Prewitt filter, or a second-order differential filter such as a Laplacian filter. The pixel classification unit 3411 then calculates a texture determination value G from Iu and Is by the calculation of Expression (1).

[Expression (1): texture determination value G computed from the differential values Iu and Is; presented as an image in the original document.]
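 The image of Expression (1) is not reproduced in the text; a plausible reconstruction, assuming the usual gradient-magnitude measure of texture strength (an assumption, not the verbatim expression), is

G = \sqrt{I_u^{\,2} + I_s^{\,2}}.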

 If the calculated texture determination value G is equal to or greater than a preset determination threshold σ, the pixel classification unit 3411 determines that the pixel has texture information; if G is smaller than σ, it determines that the pixel has no texture information.
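 A minimal sketch of this classification step is given below, assuming Expression (1) is the gradient magnitude reconstructed above; the Sobel filter choice and the threshold value are illustrative.

import numpy as np
from scipy import ndimage

def classify_texture(epi, sigma=0.1):
    # Horizontal (u) and vertical (s) derivatives of the EPI.
    iu = ndimage.sobel(epi, axis=1)
    is_ = ndimage.sobel(epi, axis=0)
    g = np.hypot(iu, is_)   # assumed form of Expression (1)
    return g >= sigma       # True where the pixel has texture information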

 The texture gradient calculation unit 3412 calculates the gradient, for pixels classified by the pixel classification unit 3411 as having texture information, using the texture information (color or luminance information of the texture). For example, the texture gradient calculation unit 3412 calculates the gradient θt from the horizontal differential value Iu and the vertical differential value Is calculated by the pixel classification unit 3411, by the calculation of Expression (2).

[Expression (2): gradient θt computed from the differential values Iu and Is; presented as an image in the original document.]

 The texture gradient calculation unit 3412 may also calculate the gradient θt with higher accuracy by using the pixels in a neighborhood of the pixel whose gradient is being calculated. Fig. 5 is a diagram for explaining this higher-accuracy calculation for a pixel having texture information. The optimal gradient is taken to be the direction, within the neighborhood of the calculation target pixel O, in which the differences in pixel value and in differential value between the target pixel O and the pixels q along that direction are smallest; specifically, the direction satisfying Expression (3). In Expression (3), the evaluation value Cθ(O) is the value calculated by Expression (4), and the larger Cθ(O) is, the lower the reliability.

[Expressions (3) and (4): selection of the gradient θt minimizing the evaluation value Cθ(O); presented as an image in the original document.]
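 One consistent reconstruction of Expressions (3) and (4), assuming a squared-difference cost weighted by α (the exact form in the original image may differ), is

\theta_t = \arg\min_{\theta \in \Theta} C_\theta(O),

C_\theta(O) = \sum_{q \in S(\theta)} \left[ \big(I(O)-I(q)\big)^2 + \alpha \Big( \big(I_u(O)-I_u(q)\big)^2 + \big(I_s(O)-I_s(q)\big)^2 \Big) \right].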

 In Expression (3), Θ is the set of gradient directions (0, 180). In Expression (4), S(θ) is the set of pixels along the direction of gradient θ within the neighborhood of the calculation target pixel O. Further, I(O) is the pixel value of the calculation target pixel, Iu(O) and Is(O) are its differential values, I(q) is the pixel value of a pixel q in the neighborhood, and Iu(q) and Is(q) are the differential values of that pixel q. α is a parameter that adjusts the relative weight of the pixel-value difference and the differential-value difference.

 The polarization gradient calculation unit 3413 calculates the gradient, for pixels classified by the pixel classification unit 3411 as having no texture information, using polarization information. Here, the relationship between the polarization direction and the pixel value (luminance) is described. Fig. 6 is a diagram for explaining this relationship. A subject OB is illuminated by a light source LT and imaged by an imaging device CM through a polarizing element PL. In this case, it is known that in the polarization image generated by the imaging device CM, the luminance of the subject OB changes as the polarizing element PL is rotated. Here, the highest luminance observed while rotating the polarizing element PL is Imax, and the lowest is Imin. With the x-axis and y-axis of two-dimensional coordinates taken in the plane of the polarizing element PL, the angle in the xy plane with respect to the x-axis when the polarizing element PL is rotated is the polarization angle υ. The polarizing element PL returns to its original polarization state when rotated by 180 degrees; that is, it has a period of 180 degrees. In the diffuse reflection model, the polarization angle υ at which the maximum luminance Imax is observed is taken as the azimuth angle φ. With these definitions, the luminance I observed while rotating the polarizing element PL can be expressed as Expression (5). That is, the luminance change caused by varying the polarization direction of the polarizing element follows a trigonometric waveform, and the phase of that waveform changes with the polarization angle and the azimuth angle. Fig. 7 illustrates the relationship between luminance and polarization angle.

[Expression (5): luminance I as a function of the polarization angle υ; presented as an image in the original document.]
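 Expression (5) is the standard sinusoidal model of luminance under a rotating polarizer; given the definitions above (a 180-degree period and maximum luminance Imax at υ = φ), it can be reconstructed with reasonable confidence as

I(\upsilon) = \frac{I_{max}+I_{min}}{2} + \frac{I_{max}-I_{min}}{2}\,\cos\!\big(2(\upsilon-\varphi)\big).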

 When the gradient is calculated for a depth calculation target pixel, the pixels located along the gradient direction, if the gradient is the true value, all correspond to the same physical point seen from different viewpoint positions. Therefore, when the pixel values of the pixels located along the true gradient (also called the "gradient of the depth calculation target pixel") are plotted in order of the polarization direction of the polarizing element, they form a trigonometric waveform (for example, a sine wave). Moreover, even if the trigonometric waveform is estimated from different combinations of polarization directions among those pixels, the phase and amplitude of the resulting waveforms are equal. In contrast, pixels located along a direction different from the true gradient correspond to physically different points seen from different viewpoints; if the orientation of the object surface changes smoothly, estimating the trigonometric waveform from different combinations of polarization directions among such pixels yields waveforms with differing phases and amplitudes. The polarization gradient calculation unit 3413 therefore takes as the gradient of the depth calculation target pixel the gradient for which the difference between the trigonometric waveforms obtained from different combinations of polarization directions is smallest. Since pixel values (luminance values) for at least three polarization directions are required to fit the luminance change to a trigonometric waveform, and since the true gradient is determined from the waveform difference between different combinations of polarization directions, polarization images with four or more polarization directions within an angular range of less than 180 degrees are used as the plurality of polarization images.

 Fig. 8 is a diagram for explaining the case where the gradient can be calculated using polarization information. Fig. 8(a) shows the true gradient with a solid line and an incorrect gradient with a broken line. When six imaging units are used as shown in Fig. 1, the polarization epipolar plane image provides, in the vertical direction, the pixel values of six pixels whose polarization directions are 0, 30, 60, 90, 120, and 150 degrees.

 When the gradient is incorrect, the pixel values for the respective polarization directions correspond to different positions on the subject, as shown for example in Fig. 8(b). Consequently, if the combinations of polarization directions used to estimate the trigonometric waveform differ, the phase and amplitude of the waveform differ, as shown in Fig. 8(c). For example, the waveform obtained from the pixel values of the three pixels with polarization directions of 0, 60, and 120 degrees differs in phase and amplitude from the waveform obtained from the three pixels with polarization directions of 30, 90, and 150 degrees. When the gradient is the true value, however, the pixel values for the respective polarization directions correspond to the same position on the subject, as shown in Fig. 8(d), so even if the combinations of pixel values used to estimate the waveform differ, the phase and amplitude coincide, as shown in Fig. 8(e). For example, the waveform obtained from the three pixels with polarization directions of 0, 60, and 120 degrees matches, in both phase and amplitude, the waveform obtained from the three pixels with polarization directions of 30, 90, and 150 degrees.

 Accordingly, the polarization gradient calculation unit 3413 calculates the gradient for which the difference between the trigonometric waveforms obtained from different combinations of polarization directions is smallest. For example, the polarization gradient calculation unit 3413 generates a histogram in which the horizontal axis indicates the inclination angle and the vertical axis indicates the waveform difference, and takes the inclination angle at which the difference is smallest as the true gradient. The polarization gradient calculation unit 3413 may also determine that the gradient cannot be calculated for a pixel when the difference between the maximum and minimum values of this waveform difference is smaller than a preset threshold.
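 A minimal sketch of this search is shown below. For each candidate slope it samples the pixel each view would see, fits the sinusoidal model of Expression (5) to two disjoint subsets of the samples, and keeps the slope with the smallest mismatch between the two fits. The names, the nearest-neighbor sampling, the subset choice, and the candidate-angle grid are illustrative assumptions.

import numpy as np

def fit_sinusoid(angles_deg, values):
    # Least-squares fit of I = a + b*cos(2v) + c*sin(2v); returns (a, b, c).
    u = np.deg2rad(np.asarray(angles_deg, float))
    A = np.stack([np.ones_like(u), np.cos(2 * u), np.sin(2 * u)], axis=1)
    coef, *_ = np.linalg.lstsq(A, np.asarray(values, float), rcond=None)
    return coef

def best_slope(epi, pol_dirs, row0, col0, candidates_deg):
    # epi: (n_views, width) polarization EPI; pol_dirs: polarization
    # direction of each EPI row; (row0, col0): depth calculation target.
    n = epi.shape[0]
    best, best_diff = None, np.inf
    for theta in candidates_deg:
        du = np.tan(np.deg2rad(theta))        # horizontal shift per view
        cols = col0 + du * (np.arange(n) - row0)
        if cols.min() < 0 or cols.max() > epi.shape[1] - 1:
            continue                          # line leaves the EPI
        vals = [epi[i, int(round(c))] for i, c in enumerate(cols)]
        c1 = fit_sinusoid(pol_dirs[0::2], vals[0::2])  # e.g. 0/60/120 deg
        c2 = fit_sinusoid(pol_dirs[1::2], vals[1::2])  # e.g. 30/90/150 deg
        diff = np.linalg.norm(c1 - c2)        # amplitude and phase mismatch
        if diff < best_diff:
            best, best_diff = theta, diff
    return best, best_diff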

 The combinations of pixels used to detect the amplitude and phase of the trigonometric waveform are not limited to selecting every other pixel. For example, the polarization gradient calculation unit 3413 may group pixel values at consecutive positions into sets and detect the amplitude and phase of the waveform for each set, or it may select pixels at random to form the sets. Furthermore, using pixels at distant positions in a combination makes the estimate less susceptible to noise than using pixels at consecutive positions, and increasing the angular difference between polarization directions likewise reduces the influence of noise and the like.

 To reduce the influence of noise, the polarization gradient calculation unit 3413 may also, for example, repeat multiple times the process of detecting the phase and amplitude of the trigonometric waveform from pixel values randomly selected among the pixels located along a given gradient direction, and calculate the dispersion (for example, the standard deviation) of the phase and amplitude of the waveform. The polarization gradient calculation unit 3413 repeats this calculation of the phase and amplitude dispersion for gradients over the range of 0 to 180 degrees, generates a histogram in which the horizontal axis indicates the gradient angle and the vertical axis indicates the dispersion, and determines the gradient at which the dispersion is smallest to be the true gradient.

 The polarization gradient calculation unit 3413 may determine that the gradient cannot be calculated for a pixel when the angular difference between the angle minimizing the phase difference and the angle minimizing the amplitude difference, or between the angle minimizing the phase dispersion and the angle minimizing the amplitude dispersion, is larger than a preset threshold. When the angular difference is within the threshold, the polarization gradient calculation unit 3413 may take either angle as the true gradient, or may take the average or a weighted average of the angle minimizing the phase difference (or dispersion) and the angle minimizing the amplitude difference (or dispersion) as the true gradient.

 Although three points were used in Fig. 8 to calculate the phase and amplitude of the trigonometric waveform, the polarization gradient calculation unit 3413 may use more than three pixel values. Furthermore, the difference or dispersion of the phase and amplitude of the waveform at the true gradient may be used as an evaluation value of the gradient; in that case, the larger the difference or dispersion at the true gradient, the lower the reliability.

 When the pixel whose gradient is being calculated corresponds to a point on a flat surface with no surface variation, the amplitude and phase of the trigonometric waveform are constant regardless of the gradient, the dispersion of the phase and amplitude is small compared with a point on a varying surface, and the true gradient cannot be calculated. On the other hand, as described in the second embodiment below, the normal direction can be calculated from the pixel values of polarization images with three or more polarization directions. In the present technology, therefore, among the pixels having no texture information, a pixel for which the true gradient cannot be calculated because the amplitude and phase of the trigonometric waveform are constant regardless of the gradient, with small phase and amplitude dispersion, is designated a gradient-uncalculated pixel with normal. Among the pixels having no texture information, a pixel that is not a gradient-uncalculated pixel with normal and whose gradient reliability is lower than a preset reliability threshold is designated a gradient-uncalculated pixel without normal.

 Returning to Fig. 4, the depth conversion unit 343 converts the gradients calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 of the gradient calculation unit 341 into depths. The depth conversion unit 343 performs the calculation of Expression (6) using the calculated gradient to obtain the depth D to the subject position corresponding to the pixel for which the gradient θx was calculated. In Expression (6), f is the focal length for the pixel whose gradient was calculated, and the gradient θx is either the gradient θt or the gradient θp. Further, if the depth conversion unit 343 converts a calculated gradient into depth only when the reliability of the gradient is equal to or greater than a preset reliability threshold, the output of low-reliability depths can be prevented.

[Expression (6): conversion of the gradient θx into the depth D using the focal length f; presented as an image in the original document.]
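 Given the line-slope geometry sketched after the description of Fig. 3, where tan θ = f/D for an inclination θ measured from the vertical axis, a plausible reconstruction of Expression (6) (an assumption about conventions, not the verbatim expression) is

D = \frac{f}{\tan\theta_x}.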

 Fig. 9 is a flowchart showing the operation of the first embodiment. In step ST1, the image processing device acquires a plurality of polarization images. The image processing device 30 acquires polarization images from, for example, a plurality of imaging units, and proceeds to step ST2.

 In step ST2, the image processing device preprocesses the polarization images. The image processing device 30 performs distortion correction and the like on each of the polarization images acquired from the plurality of imaging units, using the intrinsic parameters corresponding to each imaging unit. Furthermore, the image processing device 30 performs registration, using the extrinsic parameters, on each polarization image processed with the intrinsic parameters, and proceeds to step ST3.

 In step ST3, the image processing device generates an epipolar plane image and a polarization epipolar plane image. The image processing device 30 extracts, for example, the image of the depth calculation target line from the preprocessed polarization images and stacks the extracted images in order of viewpoint distance, thereby generating the epipolar plane image and the polarization epipolar plane image. Each pixel of the polarization epipolar plane image carries information indicating its polarization direction. The image processing device 30 generates these images and proceeds to step ST4.

 In step ST4, the image processing device performs depth calculation. Based on the epipolar plane image and the polarization epipolar plane image, the image processing device 30 calculates the depth to the subject position indicated by each depth calculation target pixel on the depth calculation target line.

 Fig. 10 is a flowchart illustrating the depth calculation operation in step ST4. In step ST11, the image processing device performs pixel classification. The image processing device 30 calculates the texture determination value G from the pixel values and, based on the calculated value G, determines whether each pixel has texture information or not, then proceeds to step ST12.

 In step ST12, the image processing device calculates gradients using texture information. The image processing device 30 calculates the gradient θt, using texture information, for the pixels determined to have texture information, and proceeds to step ST13.

 In step ST13, the image processing device calculates gradients using polarization information. The image processing device 30 calculates the gradient θp, using polarization information, for the pixels determined to have no texture information, and proceeds to step ST14.

 In step ST14, the image processing device calculates the depth. The image processing device 30 converts the gradient θx calculated in step ST12 or step ST13 into depth, where the gradient θx is either the gradient θt or the gradient θp.
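 The per-pixel flow of steps ST11 to ST14 can be summarized as in the sketch below, which reuses the hypothetical classify_texture, fit_sinusoid, and best_slope helpers introduced above; texture_gradient is a further assumed helper, and the angle conventions are simplified for illustration.

from scipy import ndimage

def texture_gradient(epi, row, col):
    # One plausible reading of Expression (2): orientation of the local
    # EPI structure from the Sobel derivatives Iu and Is.
    iu = ndimage.sobel(epi, axis=1)[row, col]
    is_ = ndimage.sobel(epi, axis=0)[row, col]
    return np.degrees(np.arctan2(iu, is_)) % 180.0

def depth_for_line(epi, pol_dirs, f, row0, sigma=0.1):
    has_texture = classify_texture(epi, sigma)[row0]          # ST11
    depths = []
    for col in range(epi.shape[1]):
        if has_texture[col]:
            theta = texture_gradient(epi, row0, col)          # ST12
        else:
            theta, _ = best_slope(epi, pol_dirs, row0, col,
                                  np.arange(1.0, 180.0, 1.0)) # ST13
        # ST14: slope-to-depth conversion (assumed form of Expression (6)).
        depths.append(f / np.tan(np.deg2rad(theta))
                      if theta is not None else None)
    return depths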

 By performing the above processing, the image processing device calculates the depth of, for example, the textured side-edge position PS1 of the subject OBa based on the gradient calculated using texture information, and calculates the depth of, for example, the untextured side-surface position PS3 of the subject OBa based on the gradient calculated using polarization information.

 <3. Second Embodiment>
 Next, a second embodiment of the image processing device will be described. In the second embodiment, depth calculation is performed at a resolution higher than that of the first embodiment.

 As in the first embodiment shown in Fig. 2, the image processing device of the second embodiment includes the preprocessing unit 31, the parameter holding unit 32, the polarization epipolar plane image generation unit 33, and the depth calculation unit 34.

 The preprocessing unit 31 uses the parameters stored in the parameter holding unit 32 to preprocess the polarization images acquired by the plurality of imaging units. The parameter holding unit 32 stores intrinsic and extrinsic parameters obtained in advance by calibration using a predetermined subject such as a checkerboard. The preprocessing unit 31 performs distortion correction and the like on each polarization image using the intrinsic parameters of the imaging unit that acquired it, performs registration of the polarization images using the extrinsic parameters, and outputs the preprocessed polarization images to the polarization epipolar plane image generation unit 33.

 The polarization epipolar plane image generation unit 33 extracts images from the plurality of polarization images along the arrangement direction of the imaging units, that is, the horizontal arrangement direction of the viewpoint positions, so as to include the image of a desired position on the subject, and stacks the extracted images in the orthogonal (vertical) direction in imaging-unit order and at intervals corresponding to the spacing of the imaging units (viewpoint positions) to generate an epipolar plane image. It likewise stacks the extracted images in viewpoint order and at corresponding intervals to generate a polarization epipolar plane image. The polarization epipolar plane image generation unit 33 generates these images from the plurality of preprocessed polarization images and outputs them to the depth calculation unit 34.

 The second embodiment differs from the first embodiment in the configuration of the depth calculation unit. The depth calculation unit in the second embodiment calculates normals from the pixel values of the pixels located along the true gradient in the polarization epipolar plane image, and uses the calculated normals to perform depth calculation at a resolution higher than that of the first embodiment.

 Fig. 11 illustrates the configuration of the depth calculation unit in the second embodiment. The depth calculation unit 34 includes a gradient calculation unit 341, a depth conversion unit 343, a normal calculation unit 344, and an integration processing unit 345.

 The gradient calculation unit 341 calculates the gradient of each pixel in the epipolar plane image and the polarization epipolar plane image generated by the polarization epipolar plane image generation unit 33, and includes a pixel classification unit 3411, a texture gradient calculation unit 3412, and a polarization gradient calculation unit 3413.

 The pixel classification unit 3411 classifies the pixels of the epipolar plane image into pixels having texture information and pixels having no texture information. It calculates the texture determination value G for each pixel under consideration as described above, determines that a pixel has texture information when G is equal to or greater than the preset determination threshold σ, and determines that it has no texture information when G is smaller than σ.

 The texture gradient calculation unit 3412 calculates the gradient θt, as described above, using texture information for the pixels classified by the pixel classification unit 3411 as having texture information, and outputs the calculated gradient to the depth conversion unit 343.

 The polarization gradient calculation unit 3413 calculates the gradient using polarization information for the pixels classified by the pixel classification unit 3411 as having no texture information. As described above, the polarization gradient calculation unit 3413 calculates the trigonometric waveform for each different combination of pixels located along a candidate gradient direction in the polarization epipolar plane image, and takes the gradient minimizing the waveform difference as the true gradient, which it outputs to the depth conversion unit 343. The polarization gradient calculation unit 3413 also outputs to the normal calculation unit 344 the pixel values of the pixels for which the amplitude and phase differences are smallest, that is, the pixels located along the true gradient. In addition, it outputs to the normal calculation unit 344 not only the pixel values of the pixels along the true gradient but also the pixel values of pixels located along arbitrary gradient directions.

 The depth conversion unit 343 converts the gradients θt and θp calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 of the gradient calculation unit 341 into depths as described above, and outputs the converted depths to the normal calculation unit 344 and the integration processing unit 345.

 The normal calculation unit 344 calculates normals using the pixel values (luminance) supplied from the polarization gradient calculation unit 3413. As described above, the pixel value observed as the polarization direction is rotated can be expressed as Expression (5).

 In Expression (5), the polarization angle υ is known when the polarization image is generated, while the maximum luminance Imax, the minimum luminance Imin, and the azimuth angle φ are variables. Since there are three variables, the normal calculation unit 344 fits the luminances of polarization images with three or more polarization directions to the function of Expression (5), and determines the azimuth angle φ at which the luminance is maximal from the fitted relationship between luminance and polarization angle.

 The object surface normal is expressed in a polar coordinate system, with the normal information given by the azimuth angle φ and the zenith angle θz. The zenith angle θz is the angle from the z-axis toward the normal, and the azimuth angle φ is, as described above, the angle in the y-axis direction with respect to the x-axis. Using the minimum luminance Imin and maximum luminance Imax obtained by rotating the polarizing element PL, the degree of polarization ρ can be calculated by the calculation of Expression (7).

[Expression (7): degree of polarization ρ computed from Imax and Imin; presented as an image in the original document.]
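 Expression (7) is the standard definition of the degree of polarization and can be reconstructed as

\rho = \frac{I_{max} - I_{min}}{I_{max} + I_{min}}.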

 The relationship between the degree of polarization and the zenith angle is known from the Fresnel equations to have, for example, the characteristic shown in Fig. 12, and the zenith angle θz can be determined from the degree of polarization ρ based on this characteristic. Note that the characteristic shown in Fig. 12 is an example; it varies depending on the refractive index of the subject.

 Accordingly, the normal calculation unit 344 obtains the relationship between luminance and polarization angle from the polarization directions and luminances of polarization images with three or more polarization directions, and determines the azimuth angle φ at which the luminance is maximal. It also calculates the degree of polarization ρ from the maximum and minimum luminances obtained from that relationship, and determines the zenith angle θz corresponding to the calculated ρ from the characteristic curve representing the relationship between the degree of polarization and the zenith angle. In this way, the normal calculation unit 344 generates normal information indicating the normal direction of the subject (azimuth angle φ and zenith angle θz) from the pixel values of polarization images with three or more polarization directions. When the normal is calculated from the pixel values of the pixels located along the true gradient, the normal calculation unit 344 outputs the normal information indicating the calculated normal direction to the integration processing unit 345. Although the azimuth of the normal has a 180-degree ambiguity, this ambiguity can be eliminated by estimating the inclination of the subject surface from the depth obtained by the depth conversion unit 343.
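 A minimal sketch of this normal estimation, reusing the assumed fit_sinusoid helper above, is shown below; the zenith_lut callable stands in for the Fresnel-based characteristic of Fig. 12 and is purely illustrative.

def estimate_normal(pol_dirs_deg, values, zenith_lut):
    # Fit I = a + b*cos(2v) + c*sin(2v), then recover azimuth, Imax, Imin.
    a, b, c = fit_sinusoid(pol_dirs_deg, values)
    amp = np.hypot(b, c)
    azimuth = 0.5 * np.degrees(np.arctan2(c, b)) % 180.0  # 180 deg ambiguity
    imax, imin = a + amp, a - amp
    rho = (imax - imin) / (imax + imin)  # degree of polarization, Expr. (7)
    zenith = zenith_lut(rho)             # inverse of Fig. 12's rho-theta curve
    return azimuth, zenith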

 The integration processing unit 345 performs integration processing using the depths obtained by the depth conversion unit 343 and the normals calculated by the normal calculation unit 344, and obtains the depth with high resolution and high accuracy. On the basis of the depth and normal of a pixel for which the true-value gradient has been calculated from the polarization information, or the normal of a pixel for which the true-value gradient has not been calculated, the integration processing unit 345 calculates the depth at a resolution denser than the pixel unit and with high accuracy.

 FIG. 13 is a diagram for explaining depth interpolation processing based on depths and normals. For example, suppose that the depth conversion unit 343 has obtained the depth D1 at pixel position PX1 and the depth D2 at pixel position PX2, and that the normal calculation unit 344 has obtained the normal F1 at pixel position PX1 and the normal F2 at pixel position PX2. The integration processing unit 345 performs interpolation processing using the depths D1, D2 and the normals F1, F2, and calculates, for example, the depth D12 at the subject position corresponding to the boundary position PX12 between pixel positions PX1 and PX2.

 The relationship between the three-dimensional vector of the normal and the zenith angle θz and azimuth angle φ of the normal is as shown in equations (8) to (10).

[Math. 7]
 Nx = cos φ · sin θz   …(8)
 Ny = sin φ · sin θz   …(9)
 Nz = cos θz   …(10)

 Here, assuming orthographic projection and taking the direction in which pixel positions PX1 and PX2 are arranged as the x direction, the relationships of equations (11) and (12) hold. The depth D12 can therefore be calculated on the basis of equation (13).

[Math. 8] (equations (11) to (13), rendered as an image in the original publication)

 In the case of perspective projection, let the focal length of the imaging unit be f, let the coordinates of pixel position PX1 in an image coordinate system whose origin is the image center be (u1, v1) and those of pixel position PX2 be (u2, v2), let the normal vector based on the normal F1 be (Nx1, Ny1, Nz1) and the normal vector based on the normal F2 be (Nx2, Ny2, Nz2), and let pixel positions PX1 and PX2 be two points aligned in the u direction (rightward) of the image coordinate system. Then the relationships of equations (14) and (15) hold, and the depth D12 can be calculated on the basis of equation (16).

[Math. 9] (equations (14) to (16), rendered as an image in the original publication)

 Note that the depth D12′ denotes the depth at the boundary position PX12 calculated by linear interpolation using the depths D1 and D2. By performing depth interpolation processing based on the depths and the normals indicated by the normal information, the depth D12 can be calculated with higher accuracy than when the depth is calculated by linear interpolation without using the normal information.
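 As a concrete illustration, the sketch below (Python/NumPy-style) contrasts normal-guided interpolation under orthographic projection with the plain linear interpolation that yields D12′. It assumes the standard relation that the surface slope along x equals −Nx/Nz; it is a sketch under that assumption, not the patent's exact equations (11) to (13), which appear only as image placeholders above.

```python
def interp_depth_with_normals(d1, n1, d2, n2, dx):
    """Normal-guided depth at the boundary PX12 midway between PX1 and
    PX2 under orthographic projection, assuming dz/dx = -Nx/Nz for a
    surface with normal (Nx, Ny, Nz). Sketch only."""
    est1 = d1 - (n1[0] / n1[2]) * (dx / 2)   # extrapolate forward from PX1
    est2 = d2 + (n2[0] / n2[2]) * (dx / 2)   # extrapolate back from PX2
    return 0.5 * (est1 + est2)

def interp_depth_linear(d1, d2):
    """Plain linear interpolation, corresponding to D12' in the text."""
    return 0.5 * (d1 + d2)
```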

 The integration processing using normals can also calculate with high accuracy the depth of a normal-available gradient-uncalculated pixel, that is, a pixel for which normal information has been generated but no depth has been obtained, by performing the integration processing in the same manner as in, for example, JP 2015-114307 A.

 In this way, the integration processing unit 345 performs integration processing using the depths and normals, or the normals alone, and calculates the depth at a resolution denser than the pixel unit and with high accuracy.

 FIG. 14 is a flowchart showing the operation of the second embodiment. In step ST21, the image processing apparatus performs pixel classification. The image processing apparatus 30 calculates a texture determination value using pixel values, determines on the basis of the calculated texture determination value whether each pixel has texture information or not, and proceeds to step ST22.

 In step ST22, the image processing apparatus calculates gradients using texture information. The image processing apparatus 30 calculates the gradient, using the texture information, for each pixel determined to have texture information, and proceeds to step ST23.

 In step ST23, the image processing apparatus calculates gradients using polarization information. The image processing apparatus 30 calculates the true-value gradient, using the polarization information, for each pixel determined not to have texture information, and proceeds to step ST24.

 In step ST24, the image processing apparatus calculates depths. The image processing apparatus 30 calculates depths on the basis of the gradients calculated in steps ST22 and ST23, and proceeds to step ST25.

 In step ST25, the image processing apparatus generates normal information. The image processing apparatus 30 calculates normals on the basis of the polarization information used when the gradients were calculated in step ST23, generates normal information indicating the calculated normals, and proceeds to step ST26.

 In step ST26, the image processing apparatus performs integration processing. The image processing apparatus 30 performs integration processing using the depths obtained in pixel units and the normals indicated by the normal information, and calculates the depth at a resolution denser than the pixel unit and with high accuracy.

 As described above, according to the second embodiment, the depth can be calculated at a denser, higher resolution and with better accuracy than in the first embodiment.

 <4. Third embodiment>
 Next, a third embodiment of the image processing apparatus will be described. In the third embodiment, the depth can be calculated even for a depth calculation target pixel whose gradient cannot be calculated on the basis of texture information or polarization information.

 Like the first embodiment shown in FIG. 2, the image processing apparatus of the third embodiment has a preprocessing unit 31, a parameter holding unit 32, a polarization epipolar plane image generation unit 33, and a depth calculation unit 34.

 The preprocessing unit 31 performs preprocessing on the polarization imaging images acquired by the plurality of imaging units, using the parameters stored in the parameter holding unit 32. The parameter holding unit 32 stores internal parameters and external parameters obtained in advance by performing calibration using a predetermined subject, for example, a checkerboard. The preprocessing unit 31 performs distortion correction and the like on each polarization imaging image, using the polarization imaging image and the internal parameters of the imaging unit that acquired it. The preprocessing unit 31 also performs registration of the polarization imaging images, using the polarization imaging images processed with the internal parameters and the external parameters. The preprocessing unit 31 outputs the preprocessed polarization imaging images to the polarization epipolar plane image generation unit 33.

 The polarization epipolar plane image generation unit 33 extracts, from the plurality of polarization imaging images, images along the direction in which the plurality of imaging units are arranged, that is, the direction in which the plurality of viewpoint positions are arranged (the horizontal direction), so as to include an image showing a desired position on the subject, and generates an epipolar plane image by arranging the extracted images in the vertical direction orthogonal to the arrangement direction, in the order of the imaging units and at intervals corresponding to the intervals of the imaging units (the intervals of the viewpoint positions). The polarization epipolar plane image generation unit 33 also generates a polarization epipolar plane image by arranging the extracted images in the vertical direction orthogonal to the arrangement direction, in the order of the viewpoint positions and at intervals corresponding to the intervals of the viewpoint positions. The polarization epipolar plane image generation unit 33 generates the epipolar plane image and the polarization epipolar plane image using the plurality of preprocessed polarization imaging images, and outputs them to the depth calculation unit 34.

 In the third embodiment, the configuration of the depth calculation unit differs from those of the first and second embodiments. The depth calculation unit in the third embodiment further performs processing of calculating the depth of a depth calculation target pixel whose gradient cannot be calculated on the basis of texture information or polarization information.

 FIG. 15 illustrates the configuration of the depth calculation unit in the third embodiment. The depth calculation unit 34 has a gradient calculation unit 341, a depth conversion unit 343, a normal calculation unit 344, an integration processing unit 345, and a depth interpolation unit 346.

 The gradient calculation unit 341 calculates the gradient of each pixel in the epipolar plane image and the polarization epipolar plane image generated by the polarization epipolar plane image generation unit 33. The gradient calculation unit 341 has a pixel classification unit 3411, a texture gradient calculation unit 3412, and a polarization gradient calculation unit 3413.

 The pixel classification unit 3411 classifies the pixels of the epipolar plane image into pixels having texture information and pixels having no texture information. The pixel classification unit 3411 calculates the texture determination value G for each determination target pixel, as described above. The pixel classification unit 3411 determines that a pixel has texture information when the calculated texture determination value G is equal to or greater than a preset determination threshold σ, and determines that a pixel has no texture information when the texture determination value G is smaller than the determination threshold σ.
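 A minimal sketch of this classification is given below (Python/NumPy). Treating the texture determination value G as a local gradient magnitude of the epipolar plane image is an assumption for illustration; the exact definition of G is given earlier in the description.

```python
import numpy as np

def classify_pixels(epi, sigma):
    """Classify EPI pixels into textured / textureless. G is assumed to be
    the local gradient magnitude here (illustrative); pixels with G >= sigma
    use the texture gradient, the rest fall back to polarization."""
    gy, gx = np.gradient(epi.astype(np.float64))
    g = np.hypot(gx, gy)
    return g >= sigma   # boolean mask: True = has texture information
```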

 The texture gradient calculation unit 3412 calculates the gradient, using texture information, for each pixel classified by the pixel classification unit 3411 as having texture information. The texture gradient calculation unit 3412 calculates the gradient θt as described above, for example, using the differential values calculated by the pixel classification unit 3411, and outputs the calculated gradient to the depth conversion unit 343.

 The polarization gradient calculation unit 3413 calculates the gradient, using polarization information, for each pixel classified by the pixel classification unit 3411 as having no texture information. As described above, the polarization gradient calculation unit 3413 calculates the waveform of a trigonometric function for each different combination of pixels located in the direction of a candidate gradient in the polarization epipolar plane image, and takes the gradient that minimizes the difference between the waveforms as the true-value gradient. The polarization gradient calculation unit 3413 outputs the calculated true-value gradient to the depth conversion unit 343. The polarization gradient calculation unit 3413 also outputs to the normal calculation unit 344 the pixel values of the pixels for which the differences in amplitude and phase are minimized, that is, the pixel values of the pixels located in the direction of the true-value gradient, and in addition the pixel values of pixels located in the direction of an arbitrary gradient. Furthermore, the polarization gradient calculation unit 3413 generates no-normal gradient-uncalculated pixel information indicating the no-normal gradient-uncalculated pixels, and outputs it to the depth interpolation unit 346.
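 The search can be sketched as follows (Python/NumPy, reusing the fit_polarization_model sketch given earlier). The 30° polarization step per viewpoint row matches the six-camera arrangement described in this document, but the variable names and the use of a single fit residual in place of the pairwise waveform comparison are illustrative simplifications, not the patent's exact procedure.

```python
import numpy as np

def polarization_gradient(pepi, x0, y0, candidate_slopes):
    """Search for the true-value gradient on a polarization EPI `pepi`
    (rows = viewpoints, columns = image positions). For each candidate
    slope, the pixels crossed by the line through (x0, y0) are sampled
    with the polarization angle of their row, the sinusoid of equation
    (5) is fitted, and the slope whose samples agree best with a single
    waveform (minimum residual) is returned."""
    rows = pepi.shape[0]
    angles = np.deg2rad(np.arange(rows) * 30.0)   # 0, 30, ..., 150 degrees
    best_slope, best_err = None, np.inf
    for slope in candidate_slopes:
        xs = x0 + slope * (np.arange(rows) - y0)  # line through the target
        xs = np.clip(np.rint(xs).astype(int), 0, pepi.shape[1] - 1)
        samples = pepi[np.arange(rows), xs]
        i_max, i_min, phi = fit_polarization_model(angles, samples)
        model = ((i_max + i_min) / 2
                 + (i_max - i_min) / 2 * np.cos(2 * angles - 2 * phi))
        err = float(np.sum((samples - model) ** 2))
        if err < best_err:
            best_slope, best_err = slope, err
    return best_slope
```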

 The depth conversion unit 343 converts the gradients calculated by the texture gradient calculation unit 3412 and the polarization gradient calculation unit 3413 of the gradient calculation unit 341 into depths, as described above, and outputs the converted depths to the normal calculation unit 344 and the integration processing unit 345.

 Based on the pixel values (luminances) supplied from the polarization gradient calculation unit 3413, the normal calculation unit 344 determines the azimuth angle φ at which the luminance is maximized. The normal calculation unit 344 also determines the zenith angle θz corresponding to the degree of polarization ρ calculated using the maximum luminance and the minimum luminance obtained from the relationship between luminance and polarization angle. The normal calculation unit 344 generates normal information indicating the calculated azimuth angle φ and zenith angle θz, and outputs it to the integration processing unit 345.

 The integration processing unit 345 performs integration processing using the depths obtained by the depth conversion unit 343 and the normals calculated by the normal calculation unit 344, calculates the depth with high resolution and high accuracy, and outputs it to the depth interpolation unit 346.

 The depth interpolation unit 346 calculates, by interpolation processing, the depths of the pixels indicated by the no-normal gradient-uncalculated pixel information supplied from the polarization gradient calculation unit 3413. As described above, a no-normal gradient-uncalculated pixel is a pixel for which no gradient could be calculated, or a pixel for which the reliability of the gradient or the normal is low, that is, a pixel with strong noise. The depth interpolation unit 346 therefore performs interpolation processing using the depths of pixels that are located around the no-normal gradient-uncalculated pixel and for which depths have been calculated by the integration processing unit 345, and calculates the depth of the no-normal gradient-uncalculated pixel.

 FIG. 16 is a diagram for explaining the depth interpolation processing of the depth interpolation unit 346. For example, let pixel P1 be the nearest pixel that is located to the left of the no-normal gradient-uncalculated pixel Pt and for which a depth has been calculated, and let pixel P2 be the nearest pixel that is located to the right of the pixel Pt and for which a depth has been calculated. Let the distance (for example, the number of pixels) from the pixel Pt to pixel P1 be L1, and the distance from the pixel Pt to pixel P2 be L2. Let pixel P1 have the depth D1 and pixel P2 have the depth D2. In this case, the depth interpolation unit 346 calculates the depth Dt of the no-normal gradient-uncalculated pixel Pt on the basis of equation (17).

[Math. 10]
 Dt = (L2 · D1 + L1 · D2) / (L1 + L2)   …(17)

 In this way, the depth interpolation unit 346 performs interpolation processing using the depths of pixels located around the no-normal gradient-uncalculated pixel, and calculates the depth of the no-normal gradient-uncalculated pixel.
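 In code form, equation (17) is the familiar inverse-distance-weighted average; the sketch below uses variable names mirroring the text.

```python
def interp_no_normal_depth(d1, l1, d2, l2):
    """Equation (17): the neighbor closer to the target pixel receives
    the larger weight (weights L2 and L1, normalized by L1 + L2)."""
    return (l2 * d1 + l1 * d2) / (l1 + l2)

# Example: P1 one pixel away with depth 2.0, P2 three pixels away with
# depth 4.0 -> Dt = (3*2.0 + 1*4.0) / 4 = 2.5
```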

 FIG. 17 is a flowchart showing the operation of the third embodiment. In step ST31, the image processing apparatus performs pixel classification. The image processing apparatus 30 calculates a texture determination value using pixel values, determines on the basis of the calculated texture determination value whether each pixel has texture information or not, and proceeds to step ST32.

 In step ST32, the image processing apparatus calculates gradients using texture information. The image processing apparatus 30 calculates the gradient, using the texture information, for each pixel determined to have texture information, and proceeds to step ST33.

 In step ST33, the image processing apparatus calculates gradients using polarization information. The image processing apparatus 30 calculates the gradient, using the polarization information, for each pixel determined not to have texture information, and proceeds to step ST34.

 In step ST34, the image processing apparatus generates no-normal gradient-uncalculated pixel information. The image processing apparatus 30 determines the no-normal gradient-uncalculated pixels on the basis of the gradient calculation results in step ST33, the reliability of the calculated gradients, the amplitudes and phases of the trigonometric-function waveforms used for the gradient calculation, and the like, generates no-normal gradient-uncalculated pixel information on the basis of the determination results, and proceeds to step ST35.

 In step ST35, the image processing apparatus calculates depths. The image processing apparatus 30 calculates depths on the basis of the gradients calculated in steps ST32 and ST33, and proceeds to step ST36.

 In step ST36, the image processing apparatus generates normal information. The image processing apparatus 30 calculates normals on the basis of the polarization information used when the gradients were calculated in step ST33, generates normal information indicating the calculated normals, and proceeds to step ST37.

 In step ST37, the image processing apparatus performs integration processing. The image processing apparatus 30 performs integration processing using the depths obtained in pixel units and the normals indicated by the normal information, calculates the depth at a resolution higher than the pixel unit and with high accuracy, and proceeds to step ST38.

 In step ST38, the image processing apparatus performs depth interpolation processing. The image processing apparatus 30 calculates the depths of the no-normal gradient-uncalculated pixels indicated by the no-normal gradient-uncalculated pixel information generated in step ST34, by interpolation processing using the depths of pixels located in the vicinity of those pixels.

 As described above, according to the third embodiment, the depth can be calculated even for pixels whose depths cannot be calculated in the first embodiment or the second embodiment.

 <5. Fourth embodiment>
 In the first to third embodiments described above, the case where the depth calculation is performed on the basis of a plurality of polarization imaging images whose viewpoint positions differ in one direction (for example, the horizontal direction) has been described, but the viewpoint positions may differ in a plurality of directions. The fourth embodiment describes a case where the depth calculation is performed on the basis of a plurality of polarization imaging images whose viewpoint positions differ in a plurality of directions, for example, in the horizontal direction and the vertical direction.

 FIG. 18 illustrates the configuration of the imaging apparatus in the fourth embodiment. In the imaging apparatus, six imaging units are arranged side by side in the horizontal direction, and six such rows are stacked in the vertical direction. The imaging units 20-(1,1) to 20-(6,6) are provided with polarizing elements 21-(1,1) to 21-(6,6).

 FIG. 19 illustrates the polarization directions of the imaging apparatus in the fourth embodiment. The polarization directions of the polarizing elements are arranged such that each differs from those of the imaging units adjacent above, below, left, and right by a predetermined angle, for example, 30°; in FIG. 19, the polarization directions are each one of 0°, 30°, 60°, 90°, 120°, and 150°.

 Like the first embodiment shown in FIG. 2, the image processing apparatus of the fourth embodiment has a preprocessing unit 31, a parameter holding unit 32, a polarization epipolar plane image generation unit 33, and a depth calculation unit 34.

 The preprocessing unit 31 performs preprocessing on the polarization imaging images acquired by the plurality of imaging units, using the parameters stored in the parameter holding unit 32. The parameter holding unit 32 stores internal parameters and external parameters obtained in advance by performing calibration using a predetermined subject, for example, a checkerboard. The preprocessing unit 31 performs distortion correction and the like on each polarization imaging image, using the polarization imaging image and the internal parameters of the imaging unit that acquired it. The preprocessing unit 31 also performs registration of the polarization imaging images, using the polarization imaging images processed with the internal parameters and the external parameters. The preprocessing unit 31 outputs the preprocessed polarization imaging images to the polarization epipolar plane image generation unit 33.

 The polarization epipolar plane image generation unit 33 generates a first polarization epipolar plane image and a first epipolar plane image from a plurality of polarization imaging images whose viewpoint positions differ in a first direction, and generates a second polarization epipolar plane image and a second epipolar plane image from a plurality of polarization imaging images whose viewpoint positions differ in a second direction different from the first direction. For example, from the polarization imaging images acquired by the plurality of imaging units arranged in the horizontal direction, the polarization epipolar plane image generation unit 33 extracts images along the arrangement direction of the imaging units, that is, the horizontal direction, so as to include an image showing a desired position on the subject, and generates an epipolar plane image by arranging the extracted images in the vertical direction orthogonal to the arrangement direction, in the order of the imaging units and at intervals corresponding to the intervals of the imaging units (the intervals of the viewpoint positions). The polarization epipolar plane image generation unit 33 also generates a polarization epipolar plane image by arranging the extracted images in the vertical direction orthogonal to the arrangement direction, in the order of the viewpoint positions and at intervals corresponding to the intervals of the viewpoint positions. Furthermore, from the polarization imaging images acquired by the plurality of imaging units arranged in the vertical direction, the polarization epipolar plane image generation unit 33 extracts images along the arrangement direction of the imaging units, that is, the vertical direction, so as to include an image showing the desired position on the subject, and generates an epipolar plane image by arranging the extracted images in the horizontal direction orthogonal to the arrangement direction, in the order of the imaging units and at intervals corresponding to the intervals of the imaging units (the intervals of the viewpoint positions). The polarization epipolar plane image generation unit 33 likewise generates a polarization epipolar plane image by arranging the extracted images in the horizontal direction orthogonal to the arrangement direction, in the order of the viewpoint positions and at intervals corresponding to the intervals of the viewpoint positions. The polarization epipolar plane image generation unit 33 outputs to the depth calculation unit 34 the polarization epipolar plane image whose epipolar lines are horizontal (hereinafter referred to as the "horizontal polarization epipolar plane image") and the corresponding epipolar plane image (hereinafter the "horizontal epipolar plane image"), as well as the polarization epipolar plane image whose epipolar lines are vertical (hereinafter the "vertical polarization epipolar plane image") and the corresponding epipolar plane image (hereinafter the "vertical epipolar plane image").
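 A sketch of this extraction and stacking step is given below (Python/NumPy). The names views, grid_row, grid_col, y, and x are illustrative, and uniform camera spacing is assumed so that the extracted lines can be stacked at equal intervals, as the text requires.

```python
import numpy as np

def build_epis(views, grid_row, grid_col, y, x):
    """Build horizontal and vertical EPIs for the 6x6 grid of FIG. 18.
    views[i][j] is the registered image of imaging unit 20-(i+1, j+1)."""
    # Horizontal EPI: scanline y from the six views of one grid row,
    # stacked in viewpoint order (epipolar lines run horizontally).
    h_epi = np.stack([views[grid_row][j][y, :] for j in range(6)], axis=0)
    # Vertical EPI: image column x from the six views of one grid column,
    # stacked in viewpoint order (epipolar lines run vertically).
    v_epi = np.stack([views[i][grid_col][:, x] for i in range(6)], axis=1)
    return h_epi, v_epi
```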

 In the fourth embodiment, the configuration of the depth calculation unit differs from those of the first to third embodiments. The depth calculation unit in the fourth embodiment compares the reliability of the gradient calculated using the polarization imaging images acquired by the plurality of imaging units arranged in the horizontal direction with the reliability of the gradient calculated using the polarization imaging images acquired by the plurality of imaging units arranged in the vertical direction, and calculates the depth on the basis of the gradient with the higher reliability.

 FIG. 20 illustrates the configuration of the depth calculation unit in the fourth embodiment. The depth calculation unit 34 has gradient calculation units 341H and 341V, a gradient selection unit 342, a depth conversion unit 343, a normal calculation unit 344, an integration processing unit 345, and a depth interpolation unit 346.

 The gradient calculation unit 341H calculates the gradient of the depth calculation target pixel in the horizontal epipolar plane image generated by the polarization epipolar plane image generation unit 33. The gradient calculation unit 341H has a pixel classification unit 3411H, a texture gradient calculation unit 3412H, and a polarization gradient calculation unit 3413H.

 The pixel classification unit 3411H classifies the pixels of the horizontal epipolar plane image into pixels having texture information and pixels having no texture information. The pixel classification unit 3411H calculates the texture determination value G for each determination target pixel, as described above. The pixel classification unit 3411H determines that a pixel has texture information when the calculated texture determination value G is equal to or greater than the preset determination threshold σ, and determines that a pixel has no texture information when the texture determination value G is smaller than the determination threshold σ.

 The texture gradient calculation unit 3412H calculates the gradient θth, using texture information, for each pixel classified by the pixel classification unit 3411H as having texture information. The texture gradient calculation unit 3412H calculates the gradient θth as described above, for example, using the differential values calculated by the pixel classification unit 3411H. The texture gradient calculation unit 3412H outputs the calculated gradient θth and the reliability Rth to the gradient selection unit 342, using as the reliability Rth the variance obtained when the gradient was calculated.

 The polarization gradient calculation unit 3413H calculates the gradient, using polarization information, for each pixel classified by the pixel classification unit 3411H as having no texture information. In the horizontal polarization epipolar plane image, the polarization gradient calculation unit 3413H calculates the waveform of a trigonometric function for each different combination of pixels located in the direction of a candidate gradient, as described above, and takes the gradient that minimizes the difference between the waveforms as the true-value gradient θph. The polarization gradient calculation unit 3413H outputs the calculated gradient θph and the reliability Rph to the gradient selection unit 342, using as the reliability Rph the variance obtained when the true-value gradient θph was calculated.

 The pixel classification unit 3411V classifies the pixels of the vertical epipolar plane image into pixels having texture information and pixels having no texture information. The pixel classification unit 3411V calculates the texture determination value G for each determination target pixel, as described above. The pixel classification unit 3411V determines that a pixel has texture information when the calculated texture determination value G is equal to or greater than the preset determination threshold σ, and determines that a pixel has no texture information when the texture determination value G is smaller than the determination threshold σ.

 The texture gradient calculation unit 3412V calculates the gradient θtv, using texture information, for each pixel classified by the pixel classification unit 3411V as having texture information. The texture gradient calculation unit 3412V calculates the gradient θtv as described above, for example, using the differential values calculated by the pixel classification unit 3411V. The texture gradient calculation unit 3412V outputs the calculated gradient θtv and the reliability Rtv to the gradient selection unit 342, using as the reliability Rtv the variance obtained when the gradient was calculated.

 The polarization gradient calculation unit 3413V calculates the gradient, using polarization information, for each pixel classified by the pixel classification unit 3411V as having no texture information. In the vertical polarization epipolar plane image, the polarization gradient calculation unit 3413V calculates the waveform of a trigonometric function for each different combination of pixels located in the direction of a candidate gradient, as described above, and takes the gradient that minimizes the difference between the waveforms as the true-value gradient θpv. The polarization gradient calculation unit 3413V outputs the calculated gradient θpv and the reliability Rpv to the gradient selection unit 342, using as the reliability Rpv the variance obtained when the true-value gradient θpv was calculated.

 The gradient selection unit 342 selects the gradient of the depth calculation target pixel on the basis of the gradients θth, θtv, θph, θpv and the reliabilities Rth, Rtv, Rph, Rpv obtained from the texture gradient calculation units 3412H, 3412V and the polarization gradient calculation units 3413H, 3413V, and outputs the selected gradient to the depth conversion unit 343. When the selected gradient has been calculated on the basis of a polarization epipolar plane image, the gradient selection unit 342 outputs to the normal calculation unit 344 the pixel values of the pixels located in the direction of an arbitrary gradient in the polarization epipolar plane image used for calculating the selected gradient. Furthermore, the gradient selection unit 342 generates no-normal gradient-uncalculated pixel information indicating whether the depth calculation target pixel is a no-normal gradient-uncalculated pixel, and outputs it to the depth interpolation unit 346.

 FIG. 21 is a diagram for explaining the gradient selection operation. For example, when the depth calculation target pixel has been determined to be a pixel having texture information on the basis of the horizontal epipolar plane image and the gradient θth has been calculated from the texture information, and has also been determined to be a pixel having texture information on the basis of the vertical epipolar plane image and the gradient θtv has been calculated from the texture information, the gradient selection unit 342 compares the reliability Rth with the reliability Rtv, selects the gradient with the higher reliability, and outputs it to the depth conversion unit 343. When the reliability of the selected gradient is lower than a reliability threshold α, the gradient selection unit 342 may invalidate the calculated gradient and treat the depth calculation target pixel as a no-normal gradient-uncalculated pixel.

 When the depth calculation target pixel has been determined to be a pixel having no texture information on the basis of the horizontal epipolar plane image and the gradient θph has been calculated from the polarization information, and has also been determined to be a pixel having no texture information on the basis of the vertical epipolar plane image and the gradient θpv has been calculated from the polarization information, the gradient selection unit 342 compares the reliability Rph with the reliability Rpv, selects the gradient with the higher reliability, and outputs it to the depth conversion unit 343. When the reliability of the selected gradient is lower than a reliability threshold β, the gradient selection unit 342 may invalidate the calculated gradient and treat the depth calculation target pixel as a no-normal gradient-uncalculated pixel.

 When the depth calculation target pixel has been determined to be a pixel having texture information on the basis of the epipolar plane image in one direction (for example, the horizontal direction) and a gradient (for example, θth) has been calculated from the texture information, and has been determined to be a pixel having no texture information on the basis of the epipolar plane image in the other direction (for example, the vertical direction) and a gradient (for example, θpv) has been calculated from the polarization information, the gradient selection unit 342 selects the gradient calculated from the texture information (for example, θth) as the gradient with the higher reliability and outputs it to the depth conversion unit 343. When the reliability of the gradient calculated from the texture information is lower than the reliability threshold α, the gradient selection unit 342 may instead select the gradient calculated from the polarization information (for example, θpv) and output it to the depth conversion unit 343. Furthermore, when selecting the gradient calculated from the polarization information instead of the gradient calculated from the texture information, the gradient selection unit 342 may invalidate the calculated gradient and treat the depth calculation target pixel as a no-normal gradient-uncalculated pixel if the reliability of the gradient calculated from the polarization information is lower than the reliability threshold β.

 Although not illustrated, when a gradient has been calculated on the basis of the epipolar plane image or polarization epipolar plane image in one direction, and no gradient has been calculated on the basis of the epipolar plane image and polarization epipolar plane image in the other direction, the gradient selection unit 342 outputs the calculated gradient to the depth conversion unit 343. When the reliability of the calculated gradient is lower than the reliability threshold, the gradient selection unit 342 may treat the depth calculation target pixel as a no-normal gradient-uncalculated pixel.
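 The selection rules above can be summarized in the following sketch (Python). The dictionary layout and the handling of the thresholds α and β are assumptions for illustration, not the patent's exact control flow.

```python
def select_gradient(h, v, alpha, beta):
    """Pick the gradient for one pixel from the horizontal result h and
    the vertical result v. Each result is a dict like
    {'grad': theta, 'rel': r, 'from_texture': True}, or None when no
    gradient was calculated in that direction. Returns None when the
    pixel should be treated as a no-normal gradient-uncalculated pixel
    and handed to the depth interpolation unit."""
    if h is None or v is None:                    # only one direction available
        cand = h or v
    elif h['from_texture'] != v['from_texture']:  # prefer the texture result
        tex, pol = (h, v) if h['from_texture'] else (v, h)
        cand = tex if tex['rel'] >= alpha else pol
    else:                                         # same kind: higher reliability
        cand = h if h['rel'] >= v['rel'] else v
    if cand is None:
        return None
    thr = alpha if cand['from_texture'] else beta
    return cand if cand['rel'] >= thr else None   # invalidate if unreliable
```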

 The depth conversion unit 343 converts the gradient selected by the gradient selection unit 342 into a depth, as described above, and outputs the converted depth to the normal calculation unit 344 and the integration processing unit 345.

 The normal calculation unit 344 calculates the normals of the normal-available gradient-uncalculated pixels. Based on the pixel values (luminances) supplied from the gradient selection unit 342, the normal calculation unit 344 determines the azimuth angle φ at which the luminance is maximized, and determines the zenith angle θz corresponding to the degree of polarization ρ calculated using the maximum luminance and the minimum luminance obtained from the relationship between luminance and polarization angle. When the gradient selected by the gradient selection unit 342 is the gradient θpv calculated by the gradient calculation unit 341V, the normal can be calculated by performing the same processing as when the normal, that is, the azimuth angle φ and the zenith angle θz, is calculated from the pixel values used by the gradient calculation unit 341H for calculating the gradient θph, with the coordinate axes rotated by 90 degrees. The normal calculation unit 344 generates normal information indicating the azimuth angle φ and the zenith angle θz calculated for the normal-available gradient-uncalculated pixels, and outputs it to the integration processing unit 345.

 The integration processing unit 345 performs integration processing using the depths obtained by the depth conversion unit 343 and the normals calculated by the normal calculation unit 344, calculates the depth with high resolution and high accuracy, and outputs it to the depth interpolation unit 346.

 The depth interpolation unit 346 calculates, by interpolation processing, the depths of the no-normal gradient-uncalculated pixels indicated by the no-normal gradient-uncalculated pixel information supplied from the gradient selection unit 342. The depth interpolation unit 346 performs interpolation processing using the depths of pixels that are located around the no-normal gradient-uncalculated pixels and for which depths have been calculated by the integration processing unit 345, and calculates the depths of the no-normal gradient-uncalculated pixels.

 FIG. 22 is a flowchart showing the operation of the fourth embodiment. In step ST41, the image processing apparatus performs pixel classification. The image processing apparatus 30 calculates a texture determination value using pixel values, determines on the basis of the calculated texture determination value whether each pixel has texture information or not for each of the horizontal epipolar plane image and the vertical epipolar plane image, and proceeds to step ST42.

 In step ST42, the image processing apparatus calculates gradients using texture information. For each of the horizontal epipolar plane image and the vertical epipolar plane image, the image processing apparatus 30 calculates the gradient and its reliability, using the texture information, for each pixel determined to have texture information, and proceeds to step ST43.

 In step ST43, the image processing apparatus calculates gradients using polarization information. For each pixel determined to have no texture information in the horizontal polarization epipolar plane image and the vertical polarization epipolar plane image, the image processing apparatus 30 calculates the gradient and its reliability using the polarization information, and proceeds to step ST44.

 In step ST44, the image processing apparatus selects a gradient. The image processing apparatus 30 selects, from the gradients calculated in steps ST42 and ST43, the gradient to be used for the depth calculation, and proceeds to step ST45.

 In step ST45, the image processing apparatus generates no-normal gradient-uncalculated pixel information. The image processing apparatus 30 determines the no-normal gradient-uncalculated pixels on the basis of the gradient calculation results in steps ST42 and ST43, the reliability of the gradient selected in step ST44, the amplitudes and phases of the trigonometric-function waveforms used for the gradient calculation, and the like, generates no-normal gradient-uncalculated pixel information on the basis of the determination results, and proceeds to step ST46.

 In step ST46, the image processing apparatus calculates depths. The image processing apparatus 30 calculates depths on the basis of the gradient selected in step ST44, or the gradient selected in step ST44 whose reliability is equal to or higher than the preset reliability threshold, and proceeds to step ST47.

 In step ST47, the image processing apparatus generates normal information. The image processing apparatus 30 calculates the normals of the normal-available gradient-uncalculated pixels and generates normal information indicating the calculated normals. When the gradient selected in step ST44 was calculated using a polarization epipolar plane image, the reliability of the gradient is lower than the reliability threshold, and the amplitude and phase of the trigonometric-function waveform are constant regardless of the gradient, that is, when the pixel is a normal-available gradient-uncalculated pixel, the image processing apparatus 30 calculates the normal using the pixel values of the pixels located in the direction of an arbitrary gradient in the polarization epipolar plane image used for calculating the selected gradient, generates normal information indicating the calculated normal, and proceeds to step ST48.

 In step ST48, the image processing apparatus performs integration processing. The image processing apparatus 30 performs integration processing using the depths obtained in pixel units and the normals indicated by the normal information, calculates depths with a resolution higher than before the integration processing, and proceeds to step ST49.

 In step ST49, the image processing apparatus performs depth interpolation processing. The image processing apparatus 30 calculates the depths of the no-normal gradient-uncalculated pixels indicated by the no-normal gradient-uncalculated pixel information generated in step ST45, by interpolation processing using the depths of neighboring pixels.

 As described above, according to the fourth embodiment, a high-resolution, highly reliable depth can be calculated using a plurality of polarization imaging images obtained with the viewpoint positions arranged two-dimensionally.

 <6. Other configurations and operations>
 Although the imaging system 10 described above has been exemplified by a configuration in which a plurality of imaging units with different polarization directions are arranged in parallel in a line, a single imaging apparatus may instead be made movable, with the polarization direction of its polarizing element made switchable. For example, the imaging apparatus is moved sequentially to the positions of the imaging apparatuses 20-1 to 20-6 shown in FIG. 1, the polarization direction of the polarizing element 21 is switched in accordance with the movement of the imaging apparatus to the polarization direction assigned to the imaging apparatus at each position, and the motionless subjects OBa and OBb are imaged. External parameters corresponding to the movement of the imaging apparatus are used. With the imaging system configured in this way, a plurality of polarization imaging images with different polarization directions and viewpoint positions can be acquired by moving a single imaging apparatus in the horizontal or vertical direction and performing imaging, without using a plurality of imaging apparatuses, and the depth can be calculated with high resolution and accuracy.

<7. Application examples>
The technology according to the present disclosure can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of mobile object, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.

FIG. 23 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile object control system to which the technology according to the present disclosure can be applied.

The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in FIG. 23, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050. As the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.

The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating the braking force of the vehicle.

The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, and fog lamps. In this case, radio waves transmitted from a portable device substituting for a key, or signals from various switches, can be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals and controls the door lock device, power window device, lamps, and the like of the vehicle.

The vehicle exterior information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for people, vehicles, obstacles, signs, characters on the road surface, and the like.

The imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light. The imaging unit 12031 can output the electrical signal as an image or as distance measurement information. The light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.

The vehicle interior information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate the driver's degree of fatigue or concentration, or may determine whether the driver is dozing off.

The microcomputer 12051 can calculate control target values for the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including vehicle collision avoidance or impact mitigation, follow-up traveling based on inter-vehicle distance, vehicle speed maintenance, vehicle collision warning, and vehicle lane departure warning.

The microcomputer 12051 can also perform cooperative control aimed at automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on information about the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.

The microcomputer 12051 can also output control commands to the body system control unit 12020 based on information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control for anti-glare purposes, such as controlling the headlamps according to the position of a preceding or oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.

The audio/image output unit 12052 transmits at least one of an audio output signal and an image output signal to an output device capable of visually or audibly notifying the vehicle occupants or the outside of the vehicle of information. In the example of FIG. 23, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.

FIG. 24 is a diagram showing an example of the installation positions of the imaging unit 12031.

In FIG. 24, imaging units 12101, 12102, 12103, 12104, and 12105 are provided as the imaging unit 12031.

The imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield inside the cabin of the vehicle 12100. The imaging unit 12101 provided at the front nose and the imaging unit 12105 provided at the upper part of the windshield inside the cabin mainly acquire images ahead of the vehicle 12100. The imaging units 12102 and 12103 provided on the side mirrors mainly acquire images of the sides of the vehicle 12100. The imaging unit 12104 provided at the rear bumper or the back door mainly acquires images behind the vehicle 12100. The imaging unit 12105 provided at the upper part of the windshield inside the cabin is mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.

Note that FIG. 24 shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided at the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided at the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view of the vehicle 12100 viewed from above can be obtained.

At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera composed of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.

For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can determine the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the nearest three-dimensional object that is on the traveling path of the vehicle 12100 and is traveling in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, 0 km/h or more). Furthermore, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance behind the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, cooperative control aimed at automated driving, in which the vehicle travels autonomously without depending on the driver's operation, can be performed.
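For illustration only, the preceding-vehicle extraction described above can be sketched as follows; the heading threshold and the data layout are assumptions of this sketch.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrackedObject:
    distance_m: float          # distance obtained from the imaging units
    relative_speed_mps: float  # from the temporal change of the distance
    on_path: bool              # lies on the traveling path of the vehicle
    heading_diff_deg: float    # direction relative to the own vehicle

def select_preceding_vehicle(objects: List[TrackedObject],
                             own_speed_mps: float,
                             min_speed_mps: float = 0.0,
                             max_heading_diff_deg: float = 10.0
                             ) -> Optional[TrackedObject]:
    # Pick the nearest on-path object moving in substantially the same
    # direction at or above the minimum speed (e.g., 0 km/h).
    candidates = [o for o in objects
                  if o.on_path
                  and abs(o.heading_diff_deg) <= max_heading_diff_deg
                  and own_speed_mps + o.relative_speed_mps >= min_speed_mps]
    return min(candidates, key=lambda o: o.distance_m, default=None)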

For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data relating to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles, extract them, and use them for automatic obstacle avoidance. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles visible to the driver of the vehicle 12100 and obstacles difficult for the driver to see. The microcomputer 12051 then determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is at or above a set value and there is a possibility of collision, it can output a warning to the driver via the audio speaker 12061 or the display unit 12062, or perform forced deceleration or avoidance steering via the drive system control unit 12010, thereby providing driving assistance for collision avoidance.

At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared light. For example, the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio/image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating the pedestrian is displayed at a desired position.
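For illustration only, the feature-extraction and pattern-matching procedure described above can be sketched with OpenCV's HOG people detector; the disclosure does not name a specific detector, so this choice and the detection parameters are assumptions of this sketch.

import cv2

def detect_and_mark_pedestrians(frame):
    # Feature extraction and pattern matching via OpenCV's HOG descriptor
    # with its pretrained default people detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        # Superimpose a rectangular contour line for emphasis, as the
        # audio/image output unit 12052 instructs the display unit to do.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return frame, rects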

An example of a vehicle control system to which the technology according to the present disclosure can be applied has been described above. Among the configurations described above, the image processing apparatus according to the present disclosure can be applied to the vehicle exterior information detection unit 12030. The imaging device according to the present disclosure can be applied to the imaging units 12101, 12102, 12103, 12104, 12105, and the like. For example, if a plurality of polarization images with different polarization directions and viewpoint positions are acquired by the imaging unit 12101 provided at the front nose and the imaging unit 12105 provided at the upper part of the windshield inside the cabin, the vehicle exterior information detection unit 12030 can accurately calculate, at high resolution, the distances to preceding vehicles, pedestrians ahead, and obstacles. Likewise, if a plurality of polarization images with different polarization directions and viewpoint positions are acquired by the imaging units 12102 and 12103 provided on the side mirrors, the vehicle exterior information detection unit 12030 can accurately calculate, at high resolution, the distances to objects located on the side of the vehicle, for example when performing parallel parking. Furthermore, if the imaging units 12102 and 12103 provided on the side mirrors switch the polarization direction according to the movement of the car and thereby acquire a plurality of polarization images with different viewpoint positions and polarization directions, the distance to an object located on the side can be calculated accurately at high resolution simply by passing in front of a parallel parking space. In addition, if a plurality of polarization images with different polarization directions and viewpoint positions are acquired by the imaging unit 12104 provided at the rear bumper or the back door, the vehicle exterior information detection unit 12030 can accurately calculate, at high resolution, the distance to a following vehicle, or the distance to objects located behind the vehicle when performing perpendicular parking.

The series of processes described in the specification can be executed by hardware, software, or a combined configuration of both. When processing is executed by software, a program recording the processing sequence is installed in memory within a computer incorporated in dedicated hardware and executed. Alternatively, the program can be installed and executed on a general-purpose computer capable of executing various kinds of processing.

For example, the program can be recorded in advance on a hard disk, an SSD (Solid State Drive), or a ROM (Read Only Memory) as a recording medium. Alternatively, the program can be stored (recorded) temporarily or permanently on a removable recording medium such as a flexible disk, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disc, a DVD (Digital Versatile Disc), a BD (Blu-ray Disc (registered trademark)), a magnetic disk, or a semiconductor memory card. Such removable recording media can be provided as so-called packaged software.

In addition to installing the program from a removable recording medium onto a computer, the program may be transferred wirelessly or by wire from a download site to the computer via a network such as a LAN (Local Area Network) or the Internet. The computer can receive the program transferred in this way and install it on a recording medium such as a built-in hard disk.

Note that the effects described in this specification are merely examples and are not limiting, and there may be additional effects not described here. Furthermore, the present technology should not be construed as being limited to the embodiments described above. The embodiments of this technology disclose the present technology by way of example, and it is obvious that those skilled in the art can modify or substitute the embodiments without departing from the gist of the present technology. That is, the claims should be taken into consideration in order to determine the gist of the present technology.

In addition, the image processing apparatus of the present technology can also have the following configurations.
(1) An image processing apparatus including a depth calculation unit that calculates the depth of a depth calculation target pixel based on a polarization epipolar plane image generated from a plurality of polarization images having different polarization directions and viewpoint positions.
(2) The image processing apparatus according to (1), in which the depth calculation unit includes: a polarization gradient calculation unit that calculates the gradient of the depth calculation target pixel in the polarization epipolar plane image; and a depth conversion unit that converts the gradient calculated by the polarization gradient calculation unit into a depth.
(3) The image processing apparatus according to (2), in which the depth conversion unit converts into a depth a gradient whose reliability, as calculated by the polarization gradient calculation unit, is equal to or higher than a preset reliability threshold.
(4) The image processing apparatus according to (2) or (3), in which the polarization gradient calculation unit performs fitting to a trigonometric function using pixels on a straight line passing through the position of the depth calculation target pixel in the polarization epipolar plane image, and takes as the gradient of the depth calculation target pixel the gradient of the straight line that minimizes the difference between the trigonometric waveforms caused by the difference in pixels used for the fitting.
(5) The image processing apparatus according to (4), in which the polarization directions cover four or more directions within an angular difference of less than 180 degrees.
(6) The image processing apparatus according to any one of (2) to (5), further including: a normal calculation unit that calculates the normal of the depth calculation target pixel using pixels located in the direction of the gradient calculated by the polarization gradient calculation unit in the polarization epipolar plane image; and an integration processing unit that performs interpolation processing using the depth obtained by the depth conversion unit and the normal calculated by the normal calculation unit to obtain a depth denser and of higher resolution than the pixel unit.
(7) The image processing apparatus according to any one of (2) to (6), further including a depth interpolation unit that, when the gradient calculation unit cannot calculate the gradient of the depth calculation target pixel, calculates the depth of the depth calculation target pixel by interpolation processing using the depths of pixels adjacent to the depth calculation target pixel.
(8) The image processing apparatus according to any one of (2) to (7), further including a normal calculation unit that calculates a normal using pixels located in the direction of an arbitrary gradient in the polarization epipolar plane image, in which, when the gradient calculation unit cannot calculate the gradient of the depth calculation target pixel, the normal calculation unit calculates the normal of the depth calculation target pixel using pixels located in the direction of an arbitrary gradient from that pixel.
(9) The image processing apparatus according to any one of (2) to (8), in which the depth calculation unit further includes: a pixel classification unit that classifies whether the depth calculation target pixel is a pixel having texture information or a pixel not having texture information, based on an epipolar plane image generated from the plurality of polarization images; and a texture gradient calculation unit that calculates the gradient of the depth calculation target pixel in the epipolar plane image, in which the texture gradient calculation unit calculates the gradient of a pixel having texture information based on the epipolar plane image, and the polarization gradient calculation unit calculates the gradient of a pixel not having texture information based on the polarization epipolar plane image.
(10) The image processing apparatus according to (9), in which the depth calculation unit further includes a gradient selection unit that selects either the gradient of the depth calculation target pixel calculated by the polarization gradient calculation unit or the texture gradient calculation unit based on a first polarization epipolar plane image or a first epipolar plane image generated from a plurality of polarization images with viewpoint positions differing in a first direction, or the gradient of the depth calculation target pixel calculated by the polarization gradient calculation unit or the texture gradient calculation unit based on a second polarization epipolar plane image or a second epipolar plane image generated from a plurality of polarization images with viewpoint positions differing in a second direction different from the first direction, and in which the depth conversion unit converts the gradient selected by the gradient selection unit into a depth.
(11) The image processing apparatus according to (10), in which, when one gradient is a gradient calculated by the texture gradient calculation unit and the other gradient is a gradient calculated by the polarization gradient calculation unit, the gradient selection unit selects the gradient calculated by the texture gradient calculation unit.
(12) The image processing apparatus according to (11), in which, when both gradients are calculated by the texture gradient calculation unit or by the polarization gradient calculation unit, the gradient selection unit selects the gradient with the higher reliability.

In the image processing apparatus, image processing method, program, and image processing system of this technology, the depth of a depth calculation target pixel is calculated based on a polarization epipolar plane image generated from a plurality of polarization images having different polarization directions and viewpoint positions. Therefore, the depth can be calculated based on polarization information even for an area without texture. This technology is thus suitable for devices that use a captured image to acquire the distance to a subject, for example electronic devices mounted on a mobile object such as an automobile.

10 ... Imaging system
20-1 to 20-6, 20-(1,1) to 20-(6,6) ... Imaging device
21-1 to 21-6, 21-(1,1) to 21-(6,6) ... Polarizing element
30 ... Image processing device
31 ... Preprocessing unit
32 ... Parameter holding unit
33 ... Polarization epipolar plane image generation unit
34 ... Depth calculation unit
341, 341H, 341V ... Gradient calculation unit
342 ... Gradient selection unit
343 ... Depth conversion unit
344 ... Normal calculation unit
345 ... Integration processing unit
346 ... Depth interpolation unit
3411, 3411H, 3411V ... Pixel classification unit
3412, 3412H, 3412V ... Texture gradient calculation unit
3413, 3413H, 3413V ... Polarization gradient calculation unit

Claims (17)

1. An image processing apparatus comprising: a depth calculation unit that calculates the depth of a depth calculation target pixel based on a polarization epipolar plane image generated from a plurality of polarization images having different polarization directions and viewpoint positions.

2. The image processing apparatus according to claim 1, wherein the depth calculation unit includes: a polarization gradient calculation unit that calculates the gradient of the depth calculation target pixel in the polarization epipolar plane image; and a depth conversion unit that converts the gradient calculated by the polarization gradient calculation unit into a depth.

3. The image processing apparatus according to claim 2, wherein the depth conversion unit converts into a depth a gradient whose reliability, as calculated by the polarization gradient calculation unit, is equal to or higher than a preset reliability threshold.

4. The image processing apparatus according to claim 2, wherein the polarization gradient calculation unit performs fitting to a trigonometric function using pixels on a straight line passing through the position of the depth calculation target pixel in the polarization epipolar plane image, and takes as the gradient of the depth calculation target pixel the gradient of the straight line that minimizes the difference between the trigonometric waveforms caused by the difference in pixels used for the fitting.

5. The image processing apparatus according to claim 4, wherein the polarization directions cover four or more directions within an angular difference of less than 180 degrees.
6. The image processing apparatus according to claim 2, further comprising: a normal calculation unit that calculates the normal of the depth calculation target pixel using pixels located in the direction of the gradient calculated by the polarization gradient calculation unit in the polarization epipolar plane image; and an integration processing unit that performs interpolation processing using the depth obtained by the depth conversion unit and the normal calculated by the normal calculation unit to obtain a depth denser and of higher resolution than the pixel unit.

7. The image processing apparatus according to claim 2, further comprising a depth interpolation unit that, when the gradient calculation unit cannot calculate the gradient of the depth calculation target pixel, calculates the depth of the depth calculation target pixel by interpolation processing using the depths of pixels adjacent to the depth calculation target pixel.

8. The image processing apparatus according to claim 2, further comprising a normal calculation unit that calculates a normal using pixels located in the direction of an arbitrary gradient in the polarization epipolar plane image, wherein, when the gradient calculation unit cannot calculate the gradient of the depth calculation target pixel, the normal calculation unit calculates the normal of the depth calculation target pixel using pixels located in the direction of an arbitrary gradient from that pixel.

9. The image processing apparatus according to claim 2, wherein the depth calculation unit further includes: a pixel classification unit that classifies whether the depth calculation target pixel is a pixel having texture information or a pixel not having texture information, based on an epipolar plane image generated from the plurality of polarization images; and a texture gradient calculation unit that calculates the gradient of the depth calculation target pixel in the epipolar plane image, wherein the texture gradient calculation unit calculates the gradient of a pixel having texture information based on the epipolar plane image, and the polarization gradient calculation unit calculates the gradient of a pixel not having texture information based on the polarization epipolar plane image.
10. The image processing apparatus according to claim 9, wherein the depth calculation unit further includes a gradient selection unit that selects either the gradient of the depth calculation target pixel calculated by the polarization gradient calculation unit or the texture gradient calculation unit based on a first polarization epipolar plane image or a first epipolar plane image generated from a plurality of polarization images with viewpoint positions differing in a first direction, or the gradient of the depth calculation target pixel calculated by the polarization gradient calculation unit or the texture gradient calculation unit based on a second polarization epipolar plane image or a second epipolar plane image generated from a plurality of polarization images with viewpoint positions differing in a second direction different from the first direction, and wherein the depth conversion unit converts the gradient selected by the gradient selection unit into a depth.

11. The image processing apparatus according to claim 10, wherein, when one gradient is a gradient calculated by the texture gradient calculation unit and the other gradient is a gradient calculated by the polarization gradient calculation unit, the gradient selection unit selects the gradient calculated by the texture gradient calculation unit.

12. The image processing apparatus according to claim 11, wherein, when both gradients are calculated by the texture gradient calculation unit or by the polarization gradient calculation unit, the gradient selection unit selects the gradient with the higher reliability.
13. An image processing method comprising: calculating the depth of a depth calculation target pixel based on a polarization epipolar plane image generated from a plurality of polarization images having different polarization directions and viewpoint positions.

14. A program that causes a computer to execute image processing using polarization images, the program causing the computer to execute: a procedure of generating a polarization epipolar plane image from a plurality of polarization images having different polarization directions and viewpoint positions; and a procedure of calculating the depth of a depth calculation target pixel based on the generated polarization epipolar plane image.

15. An image processing system comprising: an imaging device that acquires a plurality of polarization images having different polarization directions and viewpoint positions; and an image processing device that calculates the depth of a depth calculation target pixel from the plurality of polarization images acquired by the imaging device, wherein the image processing device includes: a polarization epipolar plane image generation unit that generates a polarization epipolar plane image including the depth calculation target pixel from the plurality of polarization images; and a depth calculation unit that calculates the depth of the depth calculation target pixel based on the polarization epipolar plane image generated by the polarization epipolar plane image generation unit.

16. The image processing system according to claim 15, wherein the imaging device arranges imaging units that acquire the polarization images in parallel in a first direction, or in the first direction and a second direction different from the first direction, to acquire the plurality of polarization images having different polarization directions and viewpoint positions.

17. The image processing system according to claim 15, wherein the imaging device changes the polarization direction according to the movement of the viewpoint position.
PCT/JP2018/019088 2017-07-28 2018-05-17 Image processing device, image processing method, program, and image processing system Ceased WO2019021591A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017146331 2017-07-28
JP2017-146331 2017-07-28

Publications (1)

Publication Number Publication Date
WO2019021591A1 (en)

Family

ID=65039657

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/019088 Ceased WO2019021591A1 (en) 2017-07-28 2018-05-17 Image processing device, image processing method, program, and image processing system

Country Status (1)

Country Link
WO (1) WO2019021591A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012247356A (en) * 2011-05-30 2012-12-13 Canon Inc Imaging module, imaging apparatus, image processing apparatus, and image processing method
JP2013044827A (en) * 2011-08-23 2013-03-04 Sharp Corp Imaging apparatus
JP2014199241A (en) * 2012-07-23 2014-10-23 株式会社リコー Stereo camera
JP2015114307A (en) * 2013-12-16 2015-06-22 ソニー株式会社 Image processing apparatus, image processing method, and imaging apparatus
JP2016081088A (en) * 2014-10-09 2016-05-16 キヤノン株式会社 Image processing device, and image processing method and program

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113557709A (en) * 2019-04-19 2021-10-26 索尼集团公司 Imaging apparatus, image processing apparatus, and image processing method
WO2021010067A1 (en) * 2019-07-17 2021-01-21 ソニー株式会社 Information processing device, information processing method, and information processing program
CN112862880A (en) * 2019-11-12 2021-05-28 Oppo广东移动通信有限公司 Depth information acquisition method and device, electronic equipment and storage medium
CN114930800A (en) * 2020-01-09 2022-08-19 索尼集团公司 Image processing apparatus, image processing method, and imaging apparatus
CN114930800B (en) * 2020-01-09 2024-05-28 索尼集团公司 Image processing device, image processing method and imaging device
CN113074661A (en) * 2021-03-26 2021-07-06 华中科技大学 Projector corresponding point high-precision matching method based on polar line sampling and application thereof
CN113074661B (en) * 2021-03-26 2022-02-18 华中科技大学 Projector corresponding point high-precision matching method based on polar line sampling and application thereof
WO2024134935A1 (en) * 2022-12-22 2024-06-27 株式会社Jvcケンウッド Three-dimensional information correction device and three-dimensional information correction method

Legal Events

121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 18839451; Country of ref document: EP; Kind code of ref document: A1)

NENP: Non-entry into the national phase (Ref country code: DE)

122 EP: PCT application non-entry in European phase (Ref document number: 18839451; Country of ref document: EP; Kind code of ref document: A1)

NENP: Non-entry into the national phase (Ref country code: JP)