WO2016146559A1 - Method for determining a position of an object in a three-dimensional coordinate system, computer program product, camera system and motor vehicle - Google Patents
- Publication number
- WO2016146559A1 (PCT/EP2016/055393)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- center
- ray
- coordinate system
- determined
- Prior art date
- Legal status
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/586—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of parking space
Definitions
- the invention relates to a method for determining a position of an object located in an environmental region of a motor vehicle in a three-dimensional world coordinate system.
- a first image having the object and a second image having the object of an image sequence are provided by means of a camera of the motor vehicle.
- the invention also relates to a computer program product, to a camera system for a motor vehicle as well as to a motor vehicle with a camera system.
- 3D camera systems such as for example TOF (time of flight) cameras can for example be used to provide the position of the object in the three-dimensional world coordinate system.
- Based on the position of the object in the three-dimensional world coordinate system, for example a distance from the object to a motor vehicle, on which the camera system is disposed, can be determined.
- the position of the object and/or of the motor vehicle is described in location and height with respect to a reference surface such as, for example, the earth's surface.
- the position of the object in the three-dimensional world coordinate system is preferably determined depending on the principle of stereoscopy.
- at least two images are captured from different sites.
- the object is presented in each of the images with at least slightly different views.
- Known methods for determining the position of the object in the three-dimensional world coordinate system according to this principle are usually computationally intensive and slow.
- this object is achieved by a method, by a computer program product, by a camera system as well as by a motor vehicle having the features according to the respective independent claims.
- a position of an object located in an environmental region of the motor vehicle in a three-dimensional world coordinate system is determined.
- a first image having the object and a second image having the object of an image sequence are provided by means of a camera of the motor vehicle.
- a first position of the camera during the capture of the first image is in particular different from a second position of the camera during the capture of the second image.
- a first characteristic pixel of the object in the first image and a second characteristic pixel of the object in the second image are determined.
- the first characteristic pixel present in a two-dimensional image coordinate system is transformed into the three-dimensional world coordinate system as a first ray and the second characteristic pixel present in a two-dimensional image coordinate system is transformed into the three-dimensional world coordinate system as a second ray.
- a connecting straight line oriented perpendicularly to the first ray and perpendicularly to the second ray is determined.
- a center of the connecting straight line is determined as the position of the object in the environmental region.
- the determination of the position of the object located in the environmental region of the motor vehicle in the three-dimensional world coordinate system can be effected fast and with little computation.
- the position can be provided with little effort.
- the position of the object is only determined based on two images of the image sequence, which are captured at different sites.
- the determination of the first characteristic pixel can for example be initially effected by an interest point operator.
- the first characteristic pixel as well as the second characteristic pixel can be determined depending on a characteristic pixel determined, for example, at an earlier time by means of a method for tracking the characteristic pixels.
- the first characteristic pixel and the second characteristic pixel can for example be determined by an optical flow method.
- the first characteristic pixel and the second characteristic pixel are in particular each present in a two-dimensional image coordinate system.
- the characteristic pixel is for example described by two coordinates in the image plane.
- the transformation of the first characteristic pixel and the second characteristic pixel into the three-dimensional world coordinate system is in particular effected with knowledge of calibration parameters of the camera and of a determined position and orientation of the camera at the time of capture of the first image and/or of the second image.
- the transformation can then be performed with a translation vector and a rotation matrix.
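As a minimal sketch of such a back-projection of a pixel into a world ray (assuming a pinhole camera with intrinsic matrix K; the parameter names and the world-to-camera rotation convention are assumptions, not taken from the patent):

```python
import numpy as np

def pixel_to_ray(pixel, K, R, cam_center):
    """Back-project a 2D pixel into a 3D ray in world coordinates.

    K: 3x3 intrinsic matrix (assumed pinhole model),
    R: 3x3 world-to-camera rotation matrix,
    cam_center: camera centre in world coordinates.
    Returns (origin, direction); the direction is normalised.
    """
    # Homogeneous pixel -> viewing direction in camera coordinates
    p = np.array([pixel[0], pixel[1], 1.0])
    d_cam = np.linalg.inv(K) @ p
    # Rotate the direction into the world coordinate system
    d_world = R.T @ d_cam
    d_world /= np.linalg.norm(d_world)
    return np.asarray(cam_center, dtype=float), d_world
```

The ray origin corresponds to the camera position at the time of capture; the direction points toward the object, as described for the rays v1 and v2.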
- the first characteristic pixel is present in the three-dimensional world coordinate system as a first ray
- the second characteristic pixel is present as a second ray.
- a positional uncertainty is described, which is caused by the fact that a two-dimensional coordinate is transformed into a three-dimensional coordinate system.
- the third dimension, which is in particular characterized by a distance from the motor vehicle or the camera to the object, cannot be determined based on a single image.
- ideally, the first ray and the second ray intersect in the position of the object in the three-dimensional world coordinate system. In reality, however, the first ray and the second ray mostly do not intersect.
- the connecting straight line is determined perpendicularly to the first ray and perpendicularly to the second ray.
- the connecting straight line is in particular the shortest connection between the first ray and the second ray, which is perpendicularly oriented to the two rays.
- the connecting straight line thus is particularly located where the first ray and the second ray can be connected by the shortest possible connecting straight line, which is preferably perpendicular to the two rays.
- the connecting straight line is in particular formed as a line segment having a start point on the first ray or the second ray and an end point on the second ray or on the first ray.
- the line segment is in particular formed straight and not curved.
- the center is determined on the connecting straight line.
- the intersection of the connecting straight line with the first ray is equally distant from the center as the intersection of the connecting straight line with the second ray.
- the position of the object in the environmental region is determined by the center.
- the position of the object is therefore described by the center in the three-dimensional world coordinate system.
- a distance from the object to the motor vehicle or to the camera can be determined.
- a 3D reconstruction of the object can for example be performed based on the determined position.
- the motor vehicle can for example be assisted in a parking procedure.
- the transformation from the two-dimensional image coordinate system into the three-dimensional world coordinate system is performed depending on orientation parameters of the motor vehicle determined by odometry and/or visual odometry.
- the odometry denotes a method of estimating position and orientation of a mobile system based on the data of its propulsion.
- the data of the odometry can for example be provided by a CAN bus of the motor vehicle.
- the visual odometry can for example be performed based on the first image and/or the second image and/or further images of cameras of the motor vehicle. It is also advantageous that the odometry and the visual odometry can be combined. Thus, it can for example be that the odometry is supplemented or improved by the visual odometry.
- the orientation parameters of the motor vehicle can therefore be accurately and reliably determined.
- yaw information and/or pitch information and/or roll information of the motor vehicle are described by the orientation parameters of the motor vehicle.
- a rotation around a vertical axis of the motor vehicle is described by the yaw information.
- a rotation around the transverse axis of the motor vehicle is described by the pitch information.
- a rotation around the longitudinal axis of the motor vehicle is described by the roll information.
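The three rotations described above can be combined into a single rotation matrix for the transformation. A sketch assuming a Z-Y-X (yaw-pitch-roll) convention, which the patent does not specify:

```python
import numpy as np

def rotation_from_ypr(yaw, pitch, roll):
    """Rotation matrix from yaw (vertical axis), pitch (transverse axis)
    and roll (longitudinal axis); angles in radians, Z-Y-X convention."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    return Rz @ Ry @ Rx
```

With zero yaw, pitch and roll the result is the identity matrix, i.e. no rotation between the vehicle and world coordinate systems.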
- a ray pair is determined by the first ray and the second ray and the center of the connecting straight line is determined only depending on the ray pair.
- the center and thus the position of the object are in particular determined exclusively by the first characteristic pixel and the second characteristic pixel.
- the center and thus the position of the object is in particular determined exclusively based on two images of the image sequence.
- the first image and the second image are provided as images of the image sequence immediately consecutive in time.
- the position of the object can be determined within a short period of time.
- the position of the object in the environmental region characterized by the center is checked by an error checking method.
- by the error checking method, it can be determined how reliable the determined position of the object is.
- it can thereby also be determined whether the center is incorrect.
- An incorrect center incorrectly describing the determined position of the object in the environmental region can for example be excluded from the further procedure.
- the incorrect center is not taken into account in further processing of the information.
- the incorrect center cannot be taken into account for example in 3D reconstruction of the object.
- the error checking method is in particular performed in steps, wherein the evidence of an error of the center in one of the steps can already be sufficient to classify the center as incorrect.
- the error checking method is further advantageous in that a position of the object checked for errors multiple times with different approaches can be provided by the center.
- the center is retransformed into the first image and/or into the second image and/or into a third image of the image sequence, and the error checking method is performed depending on the retransformation of the center. A first error value is provided by the retransformation of the center into the first image and/or into the second image, and a second error value is provided by the retransformation of the center into the third image. The position of the object characterized by the center is incorrect if the first error value is determined as greater than a predetermined first error limit value and/or the second error value is determined as greater than a predetermined second error limit value.
- the retransformation can be effected by inverting the transformation of the first characteristic pixel and/or of the second characteristic pixel into the three-dimensional world coordinate system.
- the center is retransformed into the first image and/or the second image and the position thereof is assessed depending on the first error limit value.
- the center is assessed as incorrect if a distance of the center from the first characteristic pixel in the first image and/or from the second characteristic pixel in the second image is determined as larger than the first error limit value.
- the center can be retransformed into the third image.
- a third characteristic pixel of the object is determined.
- the third characteristic pixel can for example also be determined by means of an optical flow method and for example be a continuation of the tracking of the first characteristic pixel and the second characteristic pixel.
- the center can now be assessed as incorrect if a distance from the center to the third characteristic pixel is larger than the predetermined second error limit value.
- the determination of the incorrectness of the center depending on the third characteristic pixel or the third image is more reliable than a determination based on the first image and/or the second image. This is because the third characteristic pixel was not used for calculating the center.
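The back-projection check described above can be sketched as follows, assuming a pinhole projection with intrinsic matrix K and world-to-camera rotation R (the parameter names are hypothetical, not taken from the patent):

```python
import numpy as np

def backprojection_error(P, pixel, K, R, cam_center):
    """Distance in the image plane between the retransformed centre P
    and the characteristic pixel it should coincide with.

    K: 3x3 intrinsics, R: world-to-camera rotation,
    cam_center: camera centre in world coordinates.
    """
    P = np.asarray(P, float)
    C = np.asarray(cam_center, float)
    x_cam = R @ (P - C)            # world point in camera coordinates
    uvw = K @ x_cam
    reproj = uvw[:2] / uvw[2]      # retransformed pixel (u, v)
    return np.linalg.norm(reproj - np.asarray(pixel, float))

def is_incorrect(P, pixel, K, R, cam_center, error_limit):
    """A centre is assessed as incorrect if the error exceeds the limit."""
    return backprojection_error(P, pixel, K, R, cam_center) > error_limit
```

Applied to the first or second image this yields the first error value; applied to a third image, whose characteristic pixel was not used for the triangulation, it yields the more reliable second error value.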
- a reliability assessment of the center is provided based on the first error limit value and/or the second error limit value.
- a length of the connecting straight line is determined, and the error checking method is performed depending on the length of the connecting straight line, and the position of the object characterized by the center is assessed as incorrect if the length is larger than a predetermined length limit value.
- the distance of the intersection of the connecting straight line with the first ray and the intersection with the second ray is determined by the length.
- the length is the magnitude of the line segment of the connecting straight line.
- an angle between the first ray and the second ray is determined, and the error checking method is performed depending on the angle, and the position of the object characterized by the center is assessed as incorrect if the angle is larger than an angle limit value.
- the angle limit value can for example be determined depending on a position of the camera during the capture of the first image and a position of the camera during the capture of the second image. In particular, the difference between the positions is known by the orientation parameters of the motor vehicle.
- by the angle, it is preferably determined how close the first ray and the second ray are to a parallel state of the first ray and the second ray. The closer the first ray and the second ray are to the parallel state, the smaller the angle between the first ray and the second ray.
- if the angle is larger than the angle limit value, the center and therefore the position of the object are assessed as incorrect.
- otherwise, the center is considered as reliable.
- the reliability of the center or of the position of the object can be determined depending on the angle.
- the reliability of the center can be further increased.
- a plurality of characteristic pixels of the object are determined in a plurality of images of the image sequence, a plurality of centers are determined each based on two of the characteristic pixels, and the error checking method is performed depending on the plurality of the centers; the position of the object characterized by the center is assessed as incorrect if fewer than a predetermined number of the plurality of the centers are less than a predetermined distance apart from the center.
- the reliability can be determined based on a local distribution of the centers.
- if the predetermined number of the plurality of the centers is present within the range set by the predetermined distance, the center can be assessed as reliable and thus not incorrect.
- the error checking method is performed depending on a plurality of distances of the plurality of the centers and/or an arithmetic mean of the plurality of the distances and/or a standard deviation of the plurality of the distances.
- the reliability or the non-present incorrectness of the center can be determined depending on the other centers.
- the other centers can in particular be provided fast since they too are in particular determined only based on two images of the image sequence.
- the center can be assessed as reliable based on the arithmetic mean and/or the standard deviation and/or the distances of the centers to the center.
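A sketch of such a distribution-based check on a centre, using the distances of the other centres together with their mean and standard deviation (the threshold parameters are hypothetical):

```python
import numpy as np

def center_supported_by_cluster(center, other_centers, max_distance, min_count):
    """Assess a centre as reliable if at least `min_count` of the other
    centres lie within `max_distance` of it (local distribution check).

    Returns (reliable, mean_distance, std_distance) so the arithmetic
    mean and standard deviation can also enter the error checking.
    """
    center = np.asarray(center, float)
    dists = np.linalg.norm(np.asarray(other_centers, float) - center, axis=1)
    reliable = int(np.sum(dists <= max_distance)) >= min_count
    return reliable, dists.mean(), dists.std()
```

Because every centre is triangulated from only two images, the other centres of the same object accumulate quickly over the image sequence, so this check adds little latency.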
- the position of the object in the three-dimensional world coordinate system can also be assessed as reliable.
- a last ray and/or a next to last ray provided by two images at the end of the image sequence are additionally or alternatively used for assessing the reliability of the center.
- the last image of the image sequence is the image after which no further image of the image sequence is provided.
- a next to last center of the next to last ray and a last center of the last ray are assessed as incorrect by the error checking method.
- This assessment can for example be effected by visualization of the center in preferably a motor vehicle coordinate system. This error checking is advantageous in case of sudden deceleration of the motor vehicle.
- the invention also relates to a computer program product formed for performing a method according to the invention if the computer program product is executed on a computer.
- the invention relates to a camera system with a camera and an evaluation unit, wherein the camera system is adapted to perform a method according to the invention.
- the evaluation unit can for example be integrated in the camera or be present as a separate unit.
- the camera is preferably connected to the evaluation unit.
- a motor vehicle according to the invention, in particular a passenger car, includes a camera system according to the invention or an advantageous implementation thereof.
- Fig. 1 in schematic plan view an embodiment of a motor vehicle according to the invention with a camera system
- Fig. 2 a schematic illustration of a center of a connecting straight line in a three-dimensional world coordinate system
- Fig. 3 a flow diagram of a method according to the invention for determining a position of an object located in an environmental region of the motor vehicle in the three-dimensional world coordinate system;
- Fig. 4 a flow diagram of an error checking method to check the center
- Fig. 5 a schematic illustration of an error checking method of the center based on a first error limit value and a second error limit value
- Fig. 6 a schematic illustration of an environmental region of the motor vehicle with a position of an object in the three-dimensional world coordinate system
- Fig. 7 a schematic illustration of a plan view image of the motor vehicle with a position of an object in the three-dimensional world coordinate system.
- In Fig. 1, a plan view of a motor vehicle 1 with a camera system 2 is schematically illustrated.
- the camera system 2 includes a camera 3 and an evaluation unit 4.
- the camera 3 is disposed on a rear 5 of the motor vehicle 1 in the embodiment.
- the arrangement of the camera 3 is variously possible on the motor vehicle 1, however, preferably such that an environmental region 6 of the motor vehicle 1 can be at least partially captured.
- the part of the environmental region located on the rear 5 of the motor vehicle 1 is captured by the camera 3.
- the arrangement of the evaluation unit 4 is also variously possible on the motor vehicle 1, however preferably such that the evaluation unit 4 can be connected to the camera 3.
- the evaluation unit 4 can be integrated in the camera 3 or be formed as a separate unit.
- the camera system 2 can also include multiple cameras 3 and/or multiple evaluation units 4.
- An object 7 is located in the environmental region 6.
- the object 7 is captured by the camera 3.
- Fig. 2 shows a three-dimensional world coordinate system 8.
- the three-dimensional world coordinate system 8 provides a three-dimensional coordinate with for example an x-value, a y-value and a z-value for each point.
- the relation of the motor vehicle 1 or of the camera system 2 to the three-dimensional world coordinate system 8 can be determined based on orientation parameters, for example provided by odometry and/or visual odometry of the motor vehicle 1.
- Fig. 2 shows a first position O1 of the camera 3 and a second position O2 of the camera 3. In the first position O1, a first image 9 is provided by the camera 3, and in the second position O2, a second image 10 is provided by the camera 3.
- a first characteristic pixel I1 is determined in the first image 9.
- a second characteristic pixel I2 is determined in the second image 10.
- the characteristic pixels I1, I2 can for example be determined by an optical flow method.
- An initial determination of the first characteristic pixel can for example be performed by an interest operator, for example a Harris operator.
- the characteristic pixels I1, I2 are present in the respective image 9, 10 in a two-dimensional image coordinate system.
- the characteristic pixels I1, I2 are thus transformed from a two-dimensional coordinate system into a three-dimensional coordinate system.
- the transformation can for example be effected based on orientation parameters of the motor vehicle 1, in particular yaw information and/or pitch information and/or roll information of the motor vehicle 1.
- the orientation parameters of the motor vehicle 1 can for example be determined by odometry and/or by visual odometry.
- the orientation parameters of the motor vehicle 1 determined by the odometry can for example be read from the CAN bus of the motor vehicle 1.
- those orientation parameters of the motor vehicle 1 which are determined by the visual odometry can for example be determined based on images of cameras of the motor vehicle 1.
- the images for performing the visual odometry in particular show the environmental region 6 of the motor vehicle 1.
- calibration of the camera system 2 or of the camera 3 is known.
- the calibration includes calibration parameters, which can be present in the form of an external orientation and/or an internal orientation.
- the first characteristic pixel I1 is present as a first ray v1 in the three-dimensional world coordinate system 8.
- the second characteristic pixel I2 is present as a second ray v2 in the three-dimensional world coordinate system 8 after the transformation.
- the first ray v1 and the second ray v2 are in particular present as half-lines and have their origin in the first position O1 and in the second position O2, respectively.
- the characteristic pixels I1, I2 are thus present in the three-dimensional world coordinate system 8 as rays v1, v2.
- the first ray v1 and the second ray v2 therefore point from the respective position O1, O2 of the camera 3 to the object 7 in the three-dimensional world coordinate system 8.
- a connecting straight line w is determined between the first ray v1 and the second ray v2.
- the connecting straight line w is oriented perpendicularly to the first ray v1 and perpendicularly to the second ray v2.
- the connecting straight line w intersects the first ray v1 in a first object point P1 and the second ray v2 in a second object point P2.
- the first object point P1 and the second object point P2 coarsely describe the position of the object 7.
- a center P is determined on the connecting straight line w.
- the center P is in the middle between the first object point P1 and the second object point P2.
- the position of the object 7 in the three-dimensional world coordinate system 8 is described by the center P. It is assumed that the position of the object 7 is located in the middle between the first object point P1 and the second object point P2.
- the determination of the center P can be mathematically described as follows:
- auxiliary variables a1, a2, a3, a4 and a5 are introduced and defined by known parameters.
- the center P is now determined as follows:
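The formulas themselves are not reproduced in this text. One common closed form for the midpoint of the common perpendicular, sketched under the assumption that the five auxiliary variables correspond to the usual dot products of the ray directions and the offset between the camera positions:

```python
import numpy as np

def midpoint_of_common_perpendicular(o1, v1, o2, v2):
    """Centre P of the shortest connecting segment between the rays
    o1 + s*v1 and o2 + t*v2 (closest-point triangulation).

    Returns (P, length), where length is the length of the
    connecting straight line w (usable for the error checking).
    """
    o1, v1, o2, v2 = (np.asarray(x, float) for x in (o1, v1, o2, v2))
    w0 = o1 - o2
    # Auxiliary variables defined by the known parameters
    a1 = v1 @ v1
    a2 = v1 @ v2
    a3 = v2 @ v2
    a4 = v1 @ w0
    a5 = v2 @ w0
    denom = a1 * a3 - a2 * a2          # zero only for parallel rays
    s = (a2 * a5 - a3 * a4) / denom    # parameter on the first ray
    t = (a1 * a5 - a2 * a4) / denom    # parameter on the second ray
    p1 = o1 + s * v1                   # intersection with connecting line w
    p2 = o2 + t * v2
    return (p1 + p2) / 2.0, np.linalg.norm(p2 - p1)
```

For rays that actually intersect, the returned length is zero and P is the intersection point; otherwise P is the middle of the object points P1 and P2, as described above.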
- Fig. 3 shows an exemplary procedure of the method according to the invention.
- in a step S1, a plurality of characteristic pixels is provided.
- among them are the first characteristic pixel I1 and the second characteristic pixel I2.
- the characteristic pixels are each determined from the object 7, but each provided by different images of an image sequence of the camera 3.
- the image sequence in particular includes the first image 9 and the second image 10.
- the characteristic pixels are transformed into the three-dimensional world coordinate system 8 by a step S3 with the aid of orientation parameters of the motor vehicle 1 from a step S2.
- a transformation is in particular effected with a translation vector and at least one rotation matrix.
- the first ray v1 and the second ray v2 are then present.
- ray pairs are determined.
- a ray pair is determined with the first ray v1 and the second ray v2.
- the ray pair is composed of two rays in the three-dimensional world coordinate system 8.
- the center P is provided based on the ray pair, thus for example on the first ray v1 and the second ray v2.
- the center P is provided as already described above.
- the position of the object 7 in the three-dimensional world coordinate system 8 is described by the center P.
- step S6 is performed, in which the position of the object 7 in the environmental region 6 characterized by the center P is checked by an error checking method. If the center P passes the error checking method without error, it is provided as a reliable position of the object 7 in a step S7. If the center P is assessed as incorrect in the step S6, it is not considered as reliable and is in particular not further taken into account in determining the position of the object 7.
- after step S6, upon positive outcome of the error checking, a step S7 follows; upon negative outcome of the error checking, a step S8 directly follows. The step S8 also follows after step S7. In the step S8, further positions of further objects in the environmental region 6 of the motor vehicle 1 are provided for passing the steps S1 up to at least step S6.
- in a step S9 it is determined whether all of the positions of the further objects in the environmental region 6 have been processed. If all of the positions of the further objects are processed, the method is terminated in a step S10. If this is not the case, the method is continued with step S1.
- a first result Y of the error checking method or a second result N of the error checking method follows after step S6.
- according to the first result Y, the respective center P has been assessed as correct, while according to the second result N the respective center P has been assessed as incorrect.
- Fig. 4 shows the detailed procedure of the error checking method according to step S6.
- in a step S11, a loop over the centers P is generated. Thus, in particular all of the centers P are processed by the loop.
- in a step S12 subsequent to step S11, it is checked if all of the centers P have been passed. If this is the case, a step S13 follows, and the error checking method is terminated. If this is not the case, a step S14 follows.
- in a step S14a, a procedure described in more detail in Fig. 5 is performed.
- in a step S14b, a length of the connecting straight line w is determined; if the length is larger than the predetermined length limit value, the position of the object 7 characterized by the center P is assessed as incorrect.
- a step S14c follows. In the step S14c, an angle α between the first ray v1 and the second ray v2 is determined. If the angle α is larger than an angle limit value, the position of the object 7 characterized by the center P is assessed as incorrect. In the step S14c, it is thus examined how close the first ray v1 and the second ray v2 are to a parallel state PZ.
- the parallelism or the state PZ close to the parallelism of the rays v1, v2 can be mathematically checked via the angle α enclosed by the two ray directions, for example via cos α = (v1 · v2) / (|v1| |v2|).
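The angle test of step S14c can be sketched as follows, assuming the check simply compares the enclosed angle against the angle limit value:

```python
import numpy as np

def ray_angle(v1, v2):
    """Angle (radians) enclosed by the two ray directions v1, v2."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against rounding slightly outside [-1, 1]
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def center_incorrect_by_angle(v1, v2, angle_limit):
    """Assess the centre as incorrect if the angle exceeds the limit value."""
    return ray_angle(v1, v2) > angle_limit
```

An angle close to zero indicates rays close to the parallel state PZ.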
- the center P is compared to a plurality of centers P.
- the center P is considered as correct and reliable if a predetermined number of the plurality of the centers P is less than a predetermined distance apart from the center P.
- the entire distribution of the centers P provided over the image sequence from the object 7 is used to check the reliability or correctness of the respective center P.
- the error checking method according to step S14d can for example be effected based on the distances of the plurality of the centers P to the respectively examined center P.
- an arithmetic mean of the plurality of the distances and/or a standard deviation of the plurality of the distances can be used to perform the error checking method on the respective center P.
- in a step S15, it is checked if the respective center P has been assessed as correct by the error checking method according to step S14.
- the center P is in particular assessed as correct if the steps S14a to S14d have been passed successfully and thus within the respective limit values. If the error checking of the center P is not passed, the method is continued with the step S11.
- step S16 follows, by which the center P is provided as a candidate for the position of the object 7.
- in a step S17, the number of the centers P for the position of the object 7 is checked. If a sufficient number of candidates of the centers P is present, a step S18 follows and step S6 is terminated. If this is not the case, step S11 follows.
- Fig. 5 shows the error checking method according to step S14a.
- the first characteristic pixel I1 is determined in the first image 9 and the second characteristic pixel I2 is determined in the second image 10.
- the first characteristic pixel I1 and the second characteristic pixel I2 are transformed from the two-dimensional image coordinate system into the three-dimensional world coordinate system 8.
- the first characteristic pixel I1 is present as the first ray v1.
- the second characteristic pixel I2 is present in the three-dimensional world coordinate system 8, as already described, as the second ray v2.
- the center P is retransformed into the first image 9 and/or the second image 10.
- the center P retransformed into the first image 9 is present there as a first retransformed pixel Ir1.
- the retransformed center P is present in the second image 10 as a second retransformed pixel Ir2.
- if a distance between a retransformed pixel Ir1, Ir2 and the respective characteristic pixel I1, I2 is larger than the predetermined first error limit value, the center P is assessed as incorrect.
- the error checking or the error checking method based on the retransformation of the center P into the first image 9 and/or the second image 10 can be described as a self back projection error checking method since the center P has also been determined based on the first image 9 and the second image 10.
- the image sequence includes more than two images.
- at least a third image 11 is present.
- the third image 11 has a third characteristic pixel I3 of the object 7.
- the third characteristic pixel I3 was in particular not taken into account in determining the center P.
- the center P is back projected into the third image 11.
- the back projected center P is present as a third retransformed pixel Ir3.
- the camera 3 is in a third position O3.
- the error checking method is now performed depending on a distance between the third characteristic pixel I3 and the third retransformed pixel Ir3. If this distance is equal to or less than a predetermined second error limit value, the center P is assessed as correct. However, if this distance is larger than the second error limit value, the position of the object 7 characterized by the center P is assessed as incorrect. In this case, the center P is preferably no longer taken into account in the further procedure.
- the distances between the characteristic pixels I1, I2, I3 and the retransformed pixels Ir1, Ir2, Ir3 are determined in the respective image plane of the images 9, 10, 11.
- Fig. 6 exemplarily shows the second image 10.
- the second image 10 is captured by the camera 3 according to Fig. 6.
- the second image 10 shows the environmental region 6.
- the second image 10 shows the object 7 and further objects 12.
- centers P are shown, by which the position of the object 7 and/or positions of further objects in the three-dimensional world coordinate system 8 are characterized.
- the camera system 2 can for example also be used by a driver assistance system of the motor vehicle 1, which is formed as a parking assistant.
- a distance from the motor vehicle 1 or from the camera 3 to the object 7 can be determined.
- a height of the object 7 above the ground of the environmental region 6 can for example be determined.
- it can for example also be determined if the motor vehicle 1 with its known height can pass below the object 7.
- Fig. 7 shows a plan view image 13 of the environmental region 6.
- a user of a driver assistance system of the motor vehicle 1 can for example recognize whether the object 7 and/or the further objects 12 have to be considered as an obstacle when reversing the motor vehicle 1.
- the centers P describing the position of the object 7 and/or of the further objects 12 in Fig. 7 are in particular present in the three-dimensional world coordinate system 8.
- a distance from the position of the object 7 to the motor vehicle 1 can be determined, and a collision warning can for example be output if the motor vehicle 1 falls below this distance.
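The distance-based collision warning described above can be sketched as a simple threshold check on the determined centers P. This is a minimal illustration; the function name, the vehicle reference point, and the warning distance are assumptions for the sketch:

```python
import numpy as np

def collision_check(centers, vehicle_pos, warn_distance):
    """Return (warn, closest) for a set of object centers P in the
    three-dimensional world coordinate system: warn is True when the
    closest center falls below the warning distance to the vehicle."""
    vehicle = np.asarray(vehicle_pos, dtype=float)
    closest = min(np.linalg.norm(np.asarray(c, dtype=float) - vehicle)
                  for c in centers)
    return closest < warn_distance, closest
```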
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a method for determining a position of an object (7), located in an environmental region (6) of a motor vehicle (1), in a three-dimensional coordinate system (8), the method comprising: providing a first image (9) comprising the object (7) and a second image (10) comprising the object (7) of an image sequence by means of a camera (3) of the motor vehicle (1); determining a first characteristic pixel (I1) of the object (7) in the first image (9) and a second characteristic pixel (I2) of the object (7) in the second image (10); transforming the first characteristic pixel (I1), present in a two-dimensional image coordinate system, into the three-dimensional coordinate system (8) as a first ray (v1), and the second characteristic pixel (I2), present in a two-dimensional image coordinate system, into the three-dimensional coordinate system (8) as a second ray (v2); determining a connecting straight line (w) oriented perpendicularly to the first ray (v1) and to the second ray (v2); and determining a center (P) of the connecting straight line (w) as the position of the object (7) in the environmental region (6).
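The triangulation in the abstract, taking the center P of the connecting line w that is perpendicular to both viewing rays, has a standard closed-form solution. The sketch below is a minimal illustration assuming known ray origins and directions in the world coordinate system; the function and variable names are not taken from the patent:

```python
import numpy as np

def midpoint_of_common_perpendicular(o1, v1, o2, v2):
    """Return the midpoint P of the common perpendicular w between
    ray 1 (origin o1, direction v1) and ray 2 (origin o2, direction v2)."""
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    # Solve for t1, t2 minimising |(o1 + t1*v1) - (o2 + t2*v2)|:
    # the residual is then perpendicular to both v1 and v2.
    a, b, c = v1 @ v1, v1 @ v2, v2 @ v2
    d = o1 - o2
    denom = a * c - b * b              # ~0 for (near-)parallel rays
    if abs(denom) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    t1 = (b * (v2 @ d) - c * (v1 @ d)) / denom
    t2 = (a * (v2 @ d) - b * (v1 @ d)) / denom
    p1 = o1 + t1 * v1                  # closest point on ray 1
    p2 = o2 + t2 * v2                  # closest point on ray 2
    return 0.5 * (p1 + p2)             # center P of the connecting line w
```

For rays that actually intersect, the connecting line degenerates to a point and the returned center P coincides with the intersection.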
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102015104065.4A DE102015104065A1 (de) | 2015-03-18 | 2015-03-18 | Verfahren zum Bestimmen einer Position eines Objekts in einem dreidimensionalen Weltkoordinatensystem, Computerprogrammprodukt, Kamerasystem und Kraftfahrzeug |
| DE102015104065.4 | 2015-03-18 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2016146559A1 true WO2016146559A1 (fr) | 2016-09-22 |
Family
ID=55642408
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2016/055393 Ceased WO2016146559A1 (fr) | 2015-03-18 | 2016-03-14 | Procédé permettant de déterminer une position d'un objet dans un système de coordonnées tridimensionnelles, produit-programme informatique, système de caméra et véhicule à moteur |
Country Status (2)
| Country | Link |
|---|---|
| DE (1) | DE102015104065A1 (fr) |
| WO (1) | WO2016146559A1 (fr) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106527487A (zh) * | 2016-12-23 | 2017-03-22 | 北京理工大学 | Autonomous precision landing system and landing method for a UAV on a moving platform |
| EP3388972A1 (fr) * | 2017-04-13 | 2018-10-17 | Delphi Technologies, Inc. | Procédé et dispositif de production d'une carte d'occupation d'un environnement d'un véhicule |
| DE102019102561A1 (de) | 2019-02-01 | 2020-08-06 | Connaught Electronics Ltd. | Method for recognizing a pavement marking |
| CN113538578A (zh) * | 2021-06-22 | 2021-10-22 | 恒睿(重庆)人工智能技术研究院有限公司 | Target positioning method and apparatus, computer device, and storage medium |
| CN115830125A (zh) * | 2022-12-26 | 2023-03-21 | 成都地平线征程科技有限公司 | Method and apparatus for determining the position of a rod-shaped object, electronic device, and medium |
| CN116129087A (zh) * | 2021-11-30 | 2023-05-16 | 北京百度网讯科技有限公司 | Positioning method, visual map generation method, and apparatus therefor |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10846541B2 (en) | 2017-01-04 | 2020-11-24 | Qualcomm Incorporated | Systems and methods for classifying road features |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090243889A1 (en) * | 2008-03-27 | 2009-10-01 | Mando Corporation | Monocular motion stereo-based free parking space detection apparatus and method |
| DE102012023060A1 (de) * | 2012-11-24 | 2014-06-12 | Connaught Electronics Ltd. | Method for detecting a moving object with the aid of a histogram based on images of a camera, and camera system for a motor vehicle |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE102005010225A1 (de) * | 2005-03-05 | 2006-09-07 | Daimlerchrysler Ag | Method for comparing a real object with a digital model |
| US10254118B2 (en) * | 2013-02-21 | 2019-04-09 | Regents Of The University Of Minnesota | Extrinsic parameter calibration of a vision-aided inertial navigation system |
- 2015
- 2015-03-18 DE DE102015104065.4A patent/DE102015104065A1/de active Pending
- 2016
- 2016-03-14 WO PCT/EP2016/055393 patent/WO2016146559A1/fr not_active Ceased
Non-Patent Citations (1)
| Title |
|---|
| BEARDSLEY P A ET AL: "Navigation using affine structure from motion", CORRECT SYSTEM DESIGN; [LECTURE NOTES IN COMPUTER SCIENCE; LECT.NOTES COMPUTER], SPRINGER INTERNATIONAL PUBLISHING, CHAM, vol. 801 Chap.8, no. 558, 2 May 1994 (1994-05-02), pages 85 - 96, XP047289425, ISSN: 0302-9743, ISBN: 978-3-642-24570-1, [retrieved on 20050616] * |
Also Published As
| Publication number | Publication date |
|---|---|
| DE102015104065A1 (de) | 2016-09-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2016146559A1 (fr) | Method for determining a position of an object in a three-dimensional coordinate system, computer program product, camera system and motor vehicle | |
| US10424081B2 (en) | Method and apparatus for calibrating a camera system of a motor vehicle | |
| US9736460B2 (en) | Distance measuring apparatus and distance measuring method | |
| CN111414794B (zh) | 计算一拖车挂接点位置的方法 | |
| CN107230218B (zh) | 用于生成对从安装在运载工具上的摄像机捕捉的图像导出的估计的置信度测量的方法和设备 | |
| EP3193306B1 (fr) | Procédé et dispositif permettant d'estimer l'orientation d'une caméra par rapport à une surface de route | |
| US10484665B2 (en) | Camera parameter set calculation method, recording medium, and camera parameter set calculation apparatus | |
| US9862318B2 (en) | Method to determine distance of an object from an automated vehicle with a monocular device | |
| US11783507B2 (en) | Camera calibration apparatus and operating method | |
| CN104732196B (zh) | 车辆检测方法及系统 | |
| US9661319B2 (en) | Method and apparatus for automatic calibration in surrounding view systems | |
| EP3086284A1 (fr) | Estimation de paramètres de caméra extrinsèques provenant de lignes d'image | |
| JP4803449B2 (ja) | Calibration device and calibration method for an on-vehicle camera, and vehicle production method using the calibration method | |
| JP4803450B2 (ja) | Calibration device for an on-vehicle camera and vehicle production method using the device | |
| JP6584208B2 (ja) | Information processing apparatus, information processing method, and program | |
| CN107209930B (zh) | 环视图像稳定方法和装置 | |
| CN105205459B (zh) | 一种图像特征点类型的识别方法和装置 | |
| CN111753605A (zh) | 车道线定位方法、装置、电子设备及可读介质 | |
| US20170161912A1 (en) | Egomotion estimation system and method | |
| US9892519B2 (en) | Method for detecting an object in an environmental region of a motor vehicle, driver assistance system and motor vehicle | |
| KR20140054710A (ko) | 3차원 지도 생성 장치 및 3차원 지도 생성 방법 | |
| WO2018222122A1 (fr) | Procédés de correction de perspective, produits programmes d'ordinateur et systèmes | |
| JP2020123342A (ja) | Method and device for estimating passenger statuses in a 2-dimension image shot by using a 2-dimension camera with a fisheye lens | |
| JP4943034B2 (ja) | ステレオ画像処理装置 | |
| EP3227827B1 (fr) | Driver assistance system, vehicle and method of classifying a flow vector | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16712748 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 16712748 Country of ref document: EP Kind code of ref document: A1 |