WO2019009579A1 - Stereo matching method and apparatus using support point interpolation - Google Patents
Stereo matching method and apparatus using support point interpolation
- Publication number
- WO2019009579A1 (PCT/KR2018/007496)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- edge
- pixel
- pixels
- disparity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/158—Switching image signals
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Definitions
- the present invention relates to a stereo matching method and apparatus, and more particularly, to a stereo matching method and apparatus using a support point interpolation method for extracting depth information of a two-dimensional image.
- stereo matching techniques are used to obtain a three-dimensional image from a stereo image, that is, from a plurality of two-dimensional images of the same subject photographed at different positions along the same line.
- the stereo image thus refers to a plurality of two-dimensional images taken at different photographing positions with respect to the subject, i.e., a plurality of two-dimensional images in a pair relationship with each other.
- to generate a three-dimensional image, each pixel requires a z coordinate, which is depth information, in addition to the x and y coordinates, which are the horizontal and vertical position information of the two-dimensional image; to obtain this depth information, a stereo matching technique using a stereo image is used.
- conventional stereo matching technology is designed to run on the CPU of a general-purpose computer; when implemented as an FPGA/ASIC it requires a large amount of memory, which makes it difficult to apply to mass-produced FPGA/ASIC hardware.
- a stereo matching method including: receiving a stereo image including a first image and a second image simultaneously captured by two cameras; extracting a first horizontal edge image and a first vertical edge image for the first image, and extracting a second horizontal edge image and a second vertical edge image for the second image; matching a first edge pixel included in the first image with a second edge pixel included in the second image, for some or all of the edge pixels included in the first image; and calculating a disparity value, which is the distance difference between the matched first edge pixel and second edge pixel, for some or all of the edge pixels included in the first image.
- in the matching step, the pixel values of a plurality of first peripheral pixels arranged around the first edge pixel, taken from the first horizontal edge image and the first vertical edge image, may be compared with the pixel values in the second horizontal edge image and the second vertical edge image, and the best-matching pixel may be determined as the second edge pixel to be matched with the first edge pixel.
- in the matching step, among the pixels of the second image in the row corresponding to the first edge pixel, the pixel for which the sum of the differences between the pixel values of the first peripheral pixels in the first horizontal and vertical edge images and the pixel values of the second peripheral pixels in the second horizontal and vertical edge images is smallest may be determined as the second edge pixel matching the first edge pixel.
- the first peripheral pixels may include more pixels taken from the first vertical edge image than from the first horizontal edge image.
- the first peripheral pixels may be divided into four regions arranged around the first edge pixel, and peripheral pixels arranged in the same shape may be selected for each region.
- the removal step may eliminate unnecessary support points by removing all support points in an area of a specific size when that area contains no more than a certain number of support points.
- generating a first disparity map by performing a first-order interpolation on the disparity values of the support point map, wherein generating the first disparity map comprises: reducing the support point map at a certain ratio; interpolating the disparity values by assigning, to each pixel of the reduced support point map that has no disparity value, the smaller of the disparity values of the pixels located to its left and right; and expanding the reduced support point map to its original size once a disparity value has been assigned to all of its pixels through interpolation.
- the method may further include generating a second disparity map by performing a second-order interpolation on the first disparity map, wherein the disparity value of each pixel to which no disparity value is assigned is interpolated using the disparity values of four surrounding pixels that have disparity values and the distances to those pixels; the second disparity map may be generated by interpolating the disparity values of all such unassigned pixels.
- a computer-readable recording medium on which is recorded a computer program for performing the steps of: receiving a stereo image including a first image and a second image simultaneously captured by two cameras; extracting a first horizontal edge image and a first vertical edge image for the first image, and extracting a second horizontal edge image and a second vertical edge image for the second image; matching a first edge pixel included in the first image with a second edge pixel included in the second image, for some or all of the edge pixels included in the first image; and calculating a disparity value, which is the distance difference between the matched first edge pixel and second edge pixel, for some or all of the edge pixels included in the first image.
- a stereo matching apparatus comprising: a camera unit for capturing a stereo image including a first image and a second image simultaneously captured by two cameras; an edge extraction unit for extracting a first horizontal edge image and a first vertical edge image for the first image and extracting a second horizontal edge image and a second vertical edge image for the second image; and an image processing unit that matches a first edge pixel included in the first image with a second edge pixel included in the second image, for some or all of the edge pixels included in the first image, and calculates a disparity value, which is the distance difference between the matched first edge pixel and second edge pixel, for some or all of those edge pixels.
- An intelligent driver assistance system for recognizing objects around a vehicle using a stereo matching device, comprising: a camera unit for photographing a stereo image including a first image and a second image simultaneously captured by two cameras; an edge extraction unit for extracting a first horizontal edge image and a first vertical edge image for the first image and extracting a second horizontal edge image and a second vertical edge image for the second image; and an image processing unit that matches a first edge pixel included in the first image with a second edge pixel included in the second image, for some or all of the edge pixels included in the first image, and calculates a disparity value, which is the distance difference between the matched first edge pixel and second edge pixel, for some or all of those pixels.
- according to the stereo matching method described above, which extracts edge images from the input stereo image and calculates matching and disparity values from them, the memory usage and processing time required to obtain a three-dimensional image from a stereo image can be reduced, which makes it easy to apply stereo matching to automotive electrical and electronic systems.
- FIG. 1 is a block diagram showing the configuration of a stereo matching apparatus according to an embodiment of the present invention
- Figure 2 is a flow chart provided to illustrate a stereo matching method, in accordance with an embodiment of the present invention
- FIG. 3 illustrates a process of a stereo matching method according to an embodiment of the present invention
- FIG. 4 is a diagram illustrating a process of finding matching pixels according to an exemplary embodiment of the present invention.
- FIG. 5 is a diagram showing peripheral pixels of a horizontal edge image and peripheral pixels of a vertical edge image according to an embodiment of the present invention
- FIG. 6 is a diagram for explaining a process of calculating a disparity value and a depth value using the disparity value according to an embodiment of the present invention
- Figure 7 illustrates a support point map, in accordance with one embodiment of the present invention.
- Figure 8 illustrates an algorithm for eliminating unnecessary support points according to an embodiment of the present invention
- FIG. 9 is a diagram illustrating a support point map in which unnecessary support points are removed, according to an embodiment of the present invention.
- FIG. 10 is a diagram illustrating a process of performing a first-order interpolation according to an embodiment of the present invention.
- FIG. 11 is a view showing some pixels of a support point map before reduction according to an embodiment of the present invention.
- FIG. 12 is a diagram illustrating a process of reducing a support point map at a certain rate according to an embodiment of the present invention
- FIG. 13 is a diagram illustrating a process of interpolating a disparity value of a reduced support point map and enlarging the disparity value back to its original size according to an embodiment of the present invention
- Figure 14 illustrates a method of performing a quadratic interpolation according to an embodiment of the present invention
- Figure 15 illustrates a post-processed disparity map, in accordance with one embodiment of the present invention.
- FIG. 16 is a diagram illustrating a disparity map for an input stereo image according to an embodiment of the present invention.
- FIG. 17 is a diagram illustrating a parallel processing process using a line unit buffer according to an embodiment of the present invention.
- Figure 18 illustrates a pipeline for a matching process, according to one embodiment of the present invention
- FIG. 19 is a diagram illustrating a hardware parallel processing structure according to an embodiment of the present invention.
- Stereo matching technology is a technique used to obtain a three-dimensional image from a stereo image, that is, from a plurality of two-dimensional images of the same subject photographed at different positions along the same line.
- the stereo image means a plurality of two-dimensional images taken at different photographing positions with respect to the subject, that is, a plurality of two-dimensional images having a pair relationship with each other.
- to generate a three-dimensional image, each pixel requires a z coordinate, which is depth information, in addition to the x and y coordinates, which are the horizontal and vertical position information of the two-dimensional image.
- this depth information is obtained from the disparity between the paired images.
- Stereo matching is a technique used to obtain such a disparity. For example, if the stereo image is left and right images taken by two left and right cameras, one of the left and right images is set as the reference image and the other is set as the search image. In this case, the distance between the reference image and the search image for the same point in space, that is, the difference between the coordinates, is called disparity, and the disparity can be obtained using the stereo matching technique.
- the stereo matching apparatus 100 obtains depth information for each pixel using the disparities between the reference image and the search image obtained as described above for all the pixels of the image, and generates a disparity map expressed in three-dimensional coordinates.
- the stereo matching apparatus 100 sets a plurality of windows around a reference pixel of a reference image, and sets a window of the same size around the search pixels with respect to the search image.
- the reference pixel means a corresponding point in the search image among the edge pixels of the reference image, i.e., a pixel to which the corresponding point is to be currently searched.
- the search pixel means a pixel to be currently identified as a corresponding point to the reference pixel among the pixels of the search image.
- the plurality of windows are in the form of a plurality of matrices each including a center pixel and surrounding pixels surrounding the center pixel.
- the stereo matching apparatus 100 calculates the similarity between some of the surrounding pixels in the reference pixel windows and some of the surrounding pixels in each of the search pixel windows, determines the search pixel of the most similar search pixel window as the corresponding pixel (corresponding point), and obtains the distance between the reference pixel and the corresponding pixel as the disparity.
- the stereo matching apparatus 100 can generate a disparity map by performing a stereo matching method on the inputted stereo image.
- the stereo matching apparatus 100 includes a camera unit 110, an edge extraction unit 120, and an image processing unit 130.
- the camera unit 110 includes a first camera 111 and a second camera 112.
- the camera unit 110 photographs a stereo image including a first image and a second image simultaneously captured by two cameras.
- the first camera 111 and the second camera 112 may be a left eye camera and a right eye camera, and the first image may be a left eye image and the second image may be a right eye image.
- the first image may be the reference image and the second image may be the search image.
- the edge extracting unit 120 extracts a first horizontal edge image and a first vertical edge image for the first image and extracts a second horizontal edge image and a second vertical edge image for the second image.
- the edge extracting unit 120 includes a Sobel filter, and extracts a horizontal edge image and a vertical edge image using a Sobel filter.
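- the Sobel edge extraction can be sketched as follows. This is a minimal sketch, not the patent's implementation: 3x3 kernels and replicated borders are assumed (the patent specifies neither), and the function name `sobel_edges` is ours. Note the naming convention assumed here: the "horizontal edge image" responds to horizontal edges (vertical intensity gradient) and the "vertical edge image" to vertical edges (horizontal intensity gradient).

```python
import numpy as np

def sobel_edges(img):
    """Return (horizontal_edge, vertical_edge) images for a 2-D grayscale
    array using 3x3 Sobel kernels and replicated borders."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal gradient -> vertical edges
    ky = kx.T                                   # vertical gradient -> horizontal edges
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                          # accumulate the 3x3 correlation
        for j in range(3):
            patch = padded[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return gy, gx   # horizontal edge image, vertical edge image
```

Applying this to each of the two input images yields the four edge images (311, 312, 321, 322) that the matching step consumes.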
- the image processing unit 130 processes the first and second images and the corresponding edge images, and generates a disparity map used to generate a three-dimensional image.
- the image processor 130 may match a first edge pixel included in the first image with a second edge pixel included in the second image, for some or all of the edge pixels included in the first image, and calculate a disparity value, which is the distance difference between the matched first edge pixel and second edge pixel, for some or all of those edge pixels.
- the image processing unit 130 generates a support point map composed of the matched pixels, generates a disparity map through the primary interpolation and the secondary interpolation, and performs post-processing to further refine the disparity map.
- FIG. 2 is a flow chart provided to illustrate a stereo matching method, in accordance with an embodiment of the present invention.
- FIG. 3 is a diagram illustrating the process of a stereo matching method according to an embodiment of the present invention.
- the stereo matching apparatus 100 receives a stereo image including a first image and a second image simultaneously captured by two cameras (S210).
- the original left image 310 and the original right image 320 of FIG. 3 correspond to the first image and the second image, respectively.
- the stereo matching apparatus 100 extracts a first horizontal edge image and a first vertical edge image for the first image, and a second horizontal edge image and a second vertical edge image for the second image (S220). At this time, the stereo matching apparatus 100 can extract the edge images using a Sobel filter, generating the first horizontal edge image 311, the first vertical edge image 312, the second horizontal edge image 321, and the second vertical edge image 322 shown in FIG. 3.
- the stereo matching apparatus 100 matches the first edge pixels included in the first image with the second edge pixels included in the second image, for some or all of the edge pixels included in the first image, and calculates a disparity value, which is the distance difference between the matched first and second edge pixels, for some or all of the edge pixels included in the first image (S230).
- the first image becomes the reference image and the second image becomes the search image.
- the stereo matching apparatus 100 compares the pixel values of a plurality of first peripheral pixels arranged around the first edge pixel, taken from the first horizontal edge image and the first vertical edge image, with the pixel values in the second horizontal edge image and the second vertical edge image, and determines the best-matching pixel as the second edge pixel to be matched with the first edge pixel.
- specifically, among the pixels of the second image in the row corresponding to the first edge pixel, the stereo matching apparatus 100 determines, as the second edge pixel matching the first edge pixel, the pixel for which the sum of the differences between the pixel values of the first peripheral pixels in the first horizontal and vertical edge images and the pixel values of the second peripheral pixels in the second horizontal and vertical edge images is minimal.
- the first peripheral pixels are divided into four regions arranged around the first edge pixel, and peripheral pixels arranged in the same shape are selected for each region.
- the selection of the neighboring pixels used for matching and the matching process will be described with reference to FIGS. 4 and 5.
- FIG. 4 is a diagram for explaining a process of finding matching pixels according to an embodiment of the present invention.
- in FIG. 4, the Left Image represents the first image, which is the reference image, and the Right Image represents the second image, which is the search image.
- a first edge pixel, which is a reference pixel in the first image, corresponds to I_desc(x, y) in FIG. 4.
- the stereo matching apparatus 100 selects 5x5 window regions in the upper left, upper right, lower left, and lower right regions based on the first edge pixel (x, y).
- each window region is represented by Lf1, Lf2, Lf3, and Lf4.
- the stereo matching apparatus 100 selects 4 surrounding pixels from the horizontal edge image and 12 surrounding pixels from the vertical edge image for each window region, i.e., 16 peripheral pixels per window region.
- the first peripheral pixels are set to include more pixels from the first vertical edge image than from the first horizontal edge image, because vertical edges have a more significant effect on matching than horizontal edges. Accordingly, the stereo matching apparatus 100 selects more peripheral pixels from the first vertical edge image than from the first horizontal edge image when matching. The method of selecting 16 peripheral pixels for one 5x5 window region is shown in detail in FIG. 5.
- FIG. 5 is a diagram showing peripheral pixels of a horizontal edge image and peripheral pixels of a vertical edge image, according to an embodiment of the present invention.
- the image on the left represents a horizontal edge image, and in the horizontal edge image, four peripheral pixels around the center point of the 5x5 window region are selected.
- the image on the right represents a vertical edge image; a total of 12 peripheral pixels are selected, consisting of 10 peripheral points and the center point of the 5x5 window region counted twice.
- since the center point plays an important role in matching, it is given extra weight by being counted twice.
- in this way, the stereo matching apparatus 100 selects 4 surrounding pixels from the horizontal edge image and 12 surrounding pixels from the vertical edge image for each 5x5 window region, for a total of 16 peripheral pixels per region.
- the stereo matching apparatus 100 selects 16 first peripheral pixels for each of the four window regions Lf1, Lf2, Lf3, and Lf4.
- for each candidate pixel (x-d, y) of the second image (Right image) in the row corresponding to the first edge pixel (x, y) of the first image (Left image), the stereo matching apparatus 100 selects second peripheral pixels in the same manner as shown in FIG. 4 and compares their pixel values.
- the stereo matching apparatus 100 obtains a matching cost value, which is the sum of the difference values between the pixel values of the first and second neighboring pixels.
- the stereo matching apparatus 100 determines a pixel having a minimum matching cost value as a second edge pixel to be matched with the first edge pixel.
- in other words, the pixel of the second image whose second neighboring pixels are most similar to the first neighboring pixels of the first edge pixel, i.e., the pixel with the minimum matching cost value, is determined as the second edge pixel corresponding to the first edge pixel.
- Sum1 represents the sum of the differences between the pixel values of the peripheral pixels in the Lf1 window region and those in the Rf1 window region; Sum2 is the corresponding sum for the Lf2 and Rf2 window regions; Sum3, for the Lf3 and Rf3 window regions; and Sum4, for the Lf4 and Rf4 window regions.
- the total Sum1 + Sum2 + Sum3 + Sum4 is the matching cost value, i.e., the sum of the differences between the pixel values of the first peripheral pixels and those of the corresponding second peripheral pixels.
- matching cost values are calculated for all 245 pixels in the y-th row of the second image, and the pixel having the lowest matching cost value among those 245 pixels is determined as the second edge pixel.
- the stereo matching apparatus 100 can determine a second edge pixel that matches the first edge pixel.
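- the matching step described above can be sketched as follows. This is a minimal sketch, not the patent's implementation: the function name `match_edge_pixel` and the offset sets passed in are illustrative (the patent selects 4 horizontal-edge and 12 vertical-edge peripheral pixels per 5x5 window region), the cost is a sum of absolute differences, and keeping all indices in bounds is left to the caller.

```python
import numpy as np

def match_edge_pixel(h1, v1, h2, v2, x, y, max_disp, offsets_h, offsets_v):
    """For the reference edge pixel (x, y), scan the candidates (x-d, y)
    in the search image and return the disparity d with the minimum
    matching cost. offsets_h / offsets_v are (dy, dx) positions of the
    selected peripheral pixels in the horizontal / vertical edge images."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        xs = x - d                      # candidate column in the search image
        cost = 0.0
        for dy, dx in offsets_h:        # horizontal-edge contribution
            cost += abs(float(h1[y + dy, x + dx]) - float(h2[y + dy, xs + dx]))
        for dy, dx in offsets_v:        # vertical-edge contribution (more pixels)
            cost += abs(float(v1[y + dy, x + dx]) - float(v2[y + dy, xs + dx]))
        if cost < best_cost:            # keep the candidate with minimum cost
            best_cost, best_d = cost, d
    return best_d, best_cost
```

The candidate whose selected peripheral pixels are most similar in both edge images wins, mirroring the Sum1..Sum4 minimization described above.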
- next, the stereo matching apparatus 100 calculates a disparity value, which is the distance difference between the first edge pixel and the second edge pixel; this will be described with reference to FIG. 6. FIG. 6 is a diagram for explaining the process of calculating a disparity value, and a depth value from the disparity value, according to an embodiment of the present invention.
- the disparity is a value indicating how far apart the positions of the pixels representing the same point in space are in the two images. Referring to the formula shown in FIG. 6, the disparity value (x_R - x_T) can be obtained.
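- assuming the standard rectified-stereo triangulation geometry (which the formula in FIG. 6 presumably follows, though the figure itself is not reproduced here), the depth Z can be recovered from the disparity d as Z = f * B / d, with focal length f in pixels and baseline B in meters:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo triangulation sketch: Z = f * B / d.
    Assumes a rectified camera pair; disparity (x_R - x_T) in pixels,
    focal length in pixels, baseline in meters, depth in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Note that depth is inversely proportional to disparity: distant objects produce small disparities, so depth resolution degrades with distance.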
- the stereo matching apparatus 100 designates each of the matched pixels as a support point to generate a support point map 330 (S240).
- FIG. 7 is a diagram illustrating a support point map according to an embodiment of the present invention. Referring to FIG. 7, the support point map (Initial SPOINT), which is the collection of pixels matched between the left image and the right image, is displayed.
- the stereo matching apparatus 100 removes unnecessary support points from the support point map (S250). Specifically, when an area of a specific size contains no more than a specific number of support points, the stereo matching apparatus 100 removes all support points in that area, thereby eliminating unnecessary support points and generating a refined support point map 340. This will be described with reference to FIGS. 8 and 9.
- Figure 8 is a diagram illustrating an algorithm for eliminating unwanted support points, in accordance with an embodiment of the present invention.
- referring to FIG. 8, when there are five or fewer support points in a 40x40 area, the algorithm deletes the support points of that area. However, it goes without saying that the size of the area and the threshold number of support points can be changed flexibly.
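- the removal algorithm of FIG. 8 can be sketched as follows, using the patent's example values (40x40 areas, removing areas with five or fewer support points). The array convention, disparity > 0 at support points and 0 elsewhere, and the function name are our assumptions:

```python
import numpy as np

def remove_sparse_support_points(spoint, block=40, min_points=6):
    """Remove unnecessary support points: every block x block area that
    contains fewer than min_points support points (min_points=6 keeps the
    patent's 'five or fewer are removed' rule) has all of its support
    points deleted. spoint holds disparity > 0 at support points, else 0."""
    out = spoint.copy()
    h, w = out.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            area = out[r:r + block, c:c + block]   # view into out
            if np.count_nonzero(area) < min_points:
                area[:] = 0                        # drop the sparse area
    return out
```

Isolated mismatches (e.g. spurious matches in the sky region of FIG. 9) fall below the threshold and are wiped, while dense clusters on real structure survive.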
- FIG. 9 is a diagram illustrating a support point map in which unnecessary support points are removed according to an embodiment of the present invention.
- in the support point map (Refined SPOINT) from which unnecessary support points were removed, it can be seen that the unnecessary support points in the sky area have disappeared.
- the stereo matching apparatus 100 performs a first-order interpolation on the disparity values of the support point map to generate the primary disparity map 350 (S260). Specifically, the stereo matching apparatus 100 reduces the support point map at a certain ratio and assigns to each pixel of the reduced support point map that has no disparity value the smaller of the disparity values of the pixels located to its left and right. When disparity values have been assigned to all pixels of the reduced support point map through interpolation, the map is expanded back to its original size, completing the first-order interpolation.
- FIG. 10 is a diagram illustrating a process of performing a first-order interpolation according to an embodiment of the present invention.
- referring to FIG. 10, the stereo matching apparatus 100 reduces the support point map to 1/16 of its size, assigns to each pixel of the reduced support point map that has no disparity value the smaller of the disparity values of the pixels to its left and right, and, once disparity values have been assigned to all pixels of the reduced support point map, enlarges the map back to its original size; this interpolation process can be confirmed in the figure.
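- the first-order interpolation can be sketched as follows. This is a minimal sketch with several of our own conventions: a 4x4 sampling grid realizes the 1/16 reduction, 0 marks "no disparity", the left/right neighbors are the nearest assigned pixels on the same row, and the expansion step is simplified to scattering the filled values back to the sampled positions:

```python
import numpy as np

def first_order_interpolation(spoint, factor=4):
    """Sample one pixel per factor x factor block (1/16 reduction for
    factor=4), fill each empty pixel of the reduced map with the smaller
    of the nearest assigned disparities to its left and right on the same
    row, then scatter the values back to the sampled grid positions.
    Convention (ours): 0 means 'no disparity assigned'."""
    small = spoint[::factor, ::factor].astype(float)   # reduced map (copy)
    for row in small:
        assigned = np.flatnonzero(row)                 # originally assigned columns
        if assigned.size == 0:
            continue                                   # nothing to interpolate from
        for j in np.flatnonzero(row == 0):
            left = assigned[assigned < j]
            right = assigned[assigned > j]
            candidates = []
            if left.size:
                candidates.append(row[left.max()])     # nearest assigned on the left
            if right.size:
                candidates.append(row[right.min()])    # nearest assigned on the right
            row[j] = min(candidates)                   # smaller of the two disparities
    out = np.zeros_like(spoint, dtype=float)
    out[::factor, ::factor] = small                    # expand back to original grid
    return out
```

Taking the smaller disparity is a conservative choice: it prefers the farther (background) surface when a gap spans a depth discontinuity.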
- FIG. 11 is a diagram showing some pixels of a support point map before reduction according to an embodiment of the present invention.
- the stereo matching apparatus 100 selects pixels at regular intervals (shown in black and with oblique lines) when reducing the map at a constant rate.
- referring to FIG. 11, it can be confirmed that, to reduce the map to 1/16, only the uppermost pixel in each 4x4 region is kept, and the remaining pixels are selected as pixels to be removed.
- FIG. 12 is a diagram illustrating the process of reducing the support point map at a predetermined ratio according to an embodiment of the present invention. When the map is reduced to 1/16, only the black and oblique-line pixels remain.
- FIG. 13 is a diagram illustrating a process of interpolating a disparity value of a reduced support point map and enlarging the disparity value back to its original size according to an embodiment of the present invention.
- the disparity value of a black pixel is applied to each oblique-line pixel to which no disparity value is assigned (for example, the smallest disparity value among the surrounding black pixels is applied). Then, when the map is enlarged 16 times back to its original size, it can be confirmed that the disparity value is assigned to the uppermost pixel of each 4x4 region.
- the stereo matching apparatus 100 performs a first-order interpolation on the disparity value of the support point map to generate the primary disparity map 350.
- in the primary disparity map, it can be confirmed that disparity values are assigned only to some pixels at predetermined intervals.
- the stereo matching apparatus 100 performs a secondary interpolation on the primary disparity map to generate a secondary disparity map (S270). Specifically, for each pixel to which no disparity value is assigned, the stereo matching apparatus 100 interpolates its disparity value using the disparity values of four surrounding pixels that have disparity values and the distances to those pixels; by interpolating every unassigned pixel in this way, the secondary disparity map is generated. This will be described in detail with reference to FIG. 14.
- FIG. 14 is a diagram illustrating a method of performing a second-order interpolation according to an embodiment of the present invention. As shown in FIG. 14, the secondary interpolation is performed using four peripheral support points.
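- the secondary interpolation can be sketched as follows. The patent states only that the four surrounding support points' disparities and the distances to them are used; inverse-distance weighting is one plausible realisation and is assumed here, as is the function name:

```python
import numpy as np

def second_order_interpolate(pixel, points):
    """Interpolate the disparity at `pixel` from four surrounding support
    points, weighting each by the inverse of its distance (assumed scheme).
    `points` is a list of ((y, x), disparity) tuples."""
    y, x = pixel
    num = den = 0.0
    for (py, px), d in points:
        dist = np.hypot(py - y, px - x)
        if dist == 0:
            return d                  # pixel coincides with a support point
        w = 1.0 / dist                # closer support points weigh more
        num += w * d
        den += w
    return num / den
```

With equidistant neighbors this reduces to a plain average; as the pixel approaches one support point, that point's disparity dominates, giving a smooth fill between the sparse first-order samples.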
- the stereo matching apparatus 100 can generate a secondary disparity map to which disparity values for all pixels are allocated.
- the stereo matching apparatus can perform noise elimination and hole filling using a median filter to generate a more accurate secondary disparity map, as shown in FIG. 15. FIG. 15 is a diagram illustrating a post-processed disparity map, in accordance with an embodiment of the present invention.
- the stereo matching apparatus 100 may generate a more precise second-order disparity map by post-processing the second disparity map by a median filter.
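- the median-filter post-processing can be sketched as follows; the 3x3 window size is our assumption (the patent states only that a median filter is used for noise removal and hole filling), and border pixels are simply left unchanged:

```python
import numpy as np

def median_filter3(disp):
    """3x3 median filter over a 2-D disparity map. Each interior pixel is
    replaced by the median of its 3x3 neighbourhood, which suppresses
    isolated speckle values; border pixels are copied through unchanged."""
    out = disp.copy()
    h, w = disp.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(disp[y - 1:y + 2, x - 1:x + 2])
    return out
```

A median filter is preferred over a mean filter here because it removes single-pixel outliers without blurring disparity discontinuities at object boundaries.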
- FIG. 16 is a diagram illustrating a disparity map for an input stereo image according to an embodiment of the present invention.
- the stereo matching apparatus 100 can generate the disparity map using the original images photographed through the two cameras, as shown in FIG. 16.
- the stereo matching apparatus 100 may optimize the memory size and processing speed through parallel processing. This will be described with reference to FIGS. 17 to 19.
- FIG. 17 is a diagram illustrating a parallel processing process using a line unit buffer according to an embodiment of the present invention. As shown in FIG. 17, the stereo matching apparatus 100 may perform parallel processing using two buffer lines in an edge image generation process.
- the stereo matching apparatus 100 can reduce the processing time to 1/4 through the four-pipeline configuration. This allows the stereo matching apparatus 100 to optimize memory size and memory access.
- the stereo matching apparatus 100 can reduce the processing time to 1/4 through four parallel processes when performing the secondary interpolation.
- the stereo matching apparatus 100 can reduce the time complexity and improve the memory usage through parallel processing.
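The four-way parallelism described above can be sketched with a strip decomposition: the disparity map is split into four horizontal strips that are processed concurrently. This is an assumption-laden software analogy to the patent's pipeline, not its hardware design; the strip split and use of a thread pool are illustrative choices.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def parallel_process(disp, process_fn, workers=4):
    """Split the disparity map into `workers` horizontal strips and
    apply `process_fn` to each strip concurrently. Sketch only: a
    software analogy to the four-pipeline configuration."""
    strips = np.array_split(disp, workers, axis=0)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = list(ex.map(process_fn, strips))
    return np.vstack(results)  # reassemble strips in original order
```

Note that this decomposition only works cleanly for per-pixel or per-row operations; a step whose result depends on neighbouring strips would need halo rows shared between workers.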
- the stereo matching apparatus 100 may be applied to an ADAS (Advanced Driver Assistance System) that recognizes objects around the vehicle.
- the technical idea of the present invention may also be applied to a computer-readable recording medium on which a computer program for performing the functions of the stereo matching apparatus 100 and the stereo matching method according to the present embodiment is recorded.
- the technical idea according to various embodiments of the present invention may be realized in the form of a computer-readable programming language code recorded on a computer-readable recording medium.
- the computer-readable recording medium is any data storage device that can be read by a computer and can store data.
- the computer-readable recording medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical disk, a hard disk drive, a flash memory, a solid state disk (SSD), or the like.
- the computer readable code or program stored in the computer readable recording medium may be transmitted through a network connected between the computers.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a stereo matching method, a stereo matching device, a recording medium, and an intelligent driver assistance system using the same. According to the present stereo matching method, since an edge image can be extracted from an input stereo image and matching and disparity values can be calculated from it, the memory usage and processing time required to obtain a three-dimensional image from a stereo image can be reduced. Consequently, stereo matching can easily be applied to automotive electrical and electronic systems.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020170086334A KR101920159B1 (ko) | 2017-07-07 | 2017-07-07 | 지원점 보간법을 이용한 스테레오 매칭 방법 및 장치 |
| KR10-2017-0086334 | 2017-07-07 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019009579A1 true WO2019009579A1 (fr) | 2019-01-10 |
Family
ID=64561969
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/KR2018/007496 Ceased WO2019009579A1 (fr) | 2017-07-07 | 2018-07-03 | Procédé et appareil de correspondance stéréo utilisant une interpolation à points de support |
Country Status (2)
| Country | Link |
|---|---|
| KR (1) | KR101920159B1 (fr) |
| WO (1) | WO2019009579A1 (fr) |
-
2017
- 2017-07-07 KR KR1020170086334A patent/KR101920159B1/ko active Active
-
2018
- 2018-07-03 WO PCT/KR2018/007496 patent/WO2019009579A1/fr not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR100762670B1 (ko) * | 2006-06-07 | 2007-10-01 | 삼성전자주식회사 | 스테레오 이미지로부터 디스패리티 맵을 생성하는 방법 및장치와 그를 위한 스테레오 매칭 방법 및 장치 |
| JP2009176087A (ja) * | 2008-01-25 | 2009-08-06 | Fuji Heavy Ind Ltd | 車両用環境認識装置 |
| KR100943635B1 (ko) * | 2008-11-06 | 2010-02-24 | 삼성에스디에스 주식회사 | 디지털카메라의 이미지를 이용한 디스패리티 맵 생성 장치 및 방법 |
| KR101265020B1 (ko) * | 2012-02-29 | 2013-05-24 | 성균관대학교산학협력단 | 스테레오 이미지에서 중복된 영역의 히스토그램 분석에 의한 고해상도 디스패리티 맵 생성 장치 및 방법 |
| KR101265021B1 (ko) * | 2012-02-29 | 2013-05-24 | 성균관대학교산학협력단 | 고정 또는 가변 프레임 레이트를 위한 고속의 고해상도 디스패리티 맵 생성 방법 및 장치 |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113947625A (zh) * | 2021-10-15 | 2022-01-18 | 中国矿业大学 | 一种视差面精细建模的双目图像视差计算优化方法 |
| CN115689965A (zh) * | 2022-12-30 | 2023-02-03 | 武汉大学 | 面向整景卫星影像深度学习密集匹配的多级视差融合方法 |
| CN115689965B (zh) * | 2022-12-30 | 2023-03-21 | 武汉大学 | 面向整景卫星影像深度学习密集匹配的多级视差融合方法 |
Also Published As
| Publication number | Publication date |
|---|---|
| KR101920159B1 (ko) | 2018-11-19 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2020027607A1 (fr) | Dispositif de détection d'objets et procédé de commande | |
| WO2021085784A1 (fr) | Procédé d'apprentissage d'un modèle de détection d'objet et dispositif de détection d'objet dans lequel un modèle de détection d'objet est exécuté | |
| WO2012064010A1 (fr) | Appareil de conversion d'image, appareil d'affichage et procédés utilisant ces appareils | |
| WO2011087289A2 (fr) | Procédé et système pour réaliser le rendu de vues tridimensionnelles d'une scène | |
| JP2023517365A (ja) | 駐車スペース検出方法、装置、デバイス及び記憶媒体 | |
| WO2020017825A1 (fr) | Procédé de combinaison de contenu provenant de multiples trames et dispositif électronique associé | |
| WO2019139234A1 (fr) | Appareil et procédé pour supprimer la distorsion d'un objectif ultra-grand-angulaire et d'images omnidirectionnelles | |
| WO2014010820A1 (fr) | Procédé et appareil d'estimation de mouvement d'image à l'aide d'informations de disparité d'une image multivue | |
| WO2019147024A1 (fr) | Procédé de détection d'objet à l'aide de deux caméras aux distances focales différentes, et appareil associé | |
| WO2021101045A1 (fr) | Appareil électronique et procédé de commande associé | |
| CN113538269B (zh) | 图像处理方法及装置、计算机可读存储介质和电子设备 | |
| WO2011136407A1 (fr) | Appareil et procédé de reconnaissance d'image à l'aide d'un appareil photographique stéréoscopique | |
| WO2021241804A1 (fr) | Dispositif et procédé d'interpolation d'image basée sur des flux multiples | |
| WO2019009579A1 (fr) | Procédé et appareil de correspondance stéréo utilisant une interpolation à points de support | |
| WO2020111311A1 (fr) | Système et procédé d'amélioration de qualité d'image d'objet d'intérêt | |
| WO2022014831A1 (fr) | Procédé et dispositif de détection d'objet | |
| WO2014051309A1 (fr) | Appareil de stéréocorrespondance utilisant une propriété d'image | |
| CN102917234B (zh) | 图像处理装置和方法以及程序 | |
| WO2019098421A1 (fr) | Dispositif de reconstruction d'objet au moyen d'informations de mouvement et procédé de reconstruction d'objet l'utilisant | |
| WO2016104842A1 (fr) | Système de reconnaissance d'objet et procédé de prise en compte de distorsion de caméra | |
| WO2018008871A1 (fr) | Dispositif et procédé de génération de vidéo compacte, et support d'enregistrement dans lequel un programme informatique est enregistré | |
| WO2018131729A1 (fr) | Procédé et système de détection d'un objet mobile dans une image à l'aide d'une seule caméra | |
| WO2017086522A1 (fr) | Procédé de synthèse d'image d'incrustation couleur sans écran d'arrière-plan | |
| JPH1172387A (ja) | 色抽出装置及び色抽出方法 | |
| WO2019139303A1 (fr) | Procédé et dispositif de synthèse d'image |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18828457 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 18828457 Country of ref document: EP Kind code of ref document: A1 |