EP2932434A1 - Method and device for analyzing trafficability
Method and device for analyzing trafficability
- Publication number
- EP2932434A1 (application EP13826921.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- trafficability
- image data
- vehicle
- segmentation
- pixels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
Definitions
- The invention relates to a method and a device for trafficability analysis which are particularly suitable for use in driver assistance systems.
- Camera-based driver assistance systems that recognize the course of the vehicle's own lane on the basis of lane markings are now established in the market, and their use is already required by law in certain application areas.
- These driver assistance systems recognize the course of the markings of the vehicle's own lane and of neighboring lanes and estimate from them the position of the vehicle relative to the lane markings. Unintentional departure from the lane can thus be detected early, and the system can initiate a suitable reaction, for example warning the driver before the lane is left or preventing the departure by actuating the steering.
- Driver assistance systems that not only warn of or prevent a lane departure but also support the driver, for example in an evasive maneuver, need more information about the possible path of the vehicle than the pure lane-marking detection systems mentioned above can determine. If, for example, the goal of a driver assistance system is to prevent an accident by suitable automatic evasion, such a system also requires, in addition to information about the vehicle's own lane, reliable information about whether a possible avoidance path is passable at all, so that the evasion does not damage the vehicle or cause damage as great as the accident would have in the case of non-evasion. The determination of such information is referred to herein as trafficability analysis.
- The object of the present invention is to propose a method and a device for trafficability analysis. This object is achieved by the subject matter of the independent claims. Further embodiments of the invention emerge from the dependent claims.
- Image data is understood to mean not only data generated by a camera-based system but also data from any environment detection system, for example radar- or lidar-based systems, that can supply data about the environment.
- The detection of different areas is performed on the basis of an estimated ground plane of the environment, whereby computing time can be saved and, as a rule, faster analysis results are obtained.
- Driving activities recognized during the trafficability analysis are taken into account for the different areas, as a result of which a more reliable analysis result can be obtained.
- An embodiment of the invention relates to a trafficability analysis method using a computer, comprising the steps of: receiving image data of an environment in front of a vehicle, analyzing the image data to recognize different areas in an environment image, and analyzing the detected different areas with regard to their trafficability by the vehicle. Analyzing the image data to detect different regions may include the following steps:
- A method based on environment images acquired with a plurality of camera optics or beam paths may be employed, or a method based on recordings made with one camera optic at various positions, utilizing motion stereo; a minimal sketch of such a stereo computation follows below.
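- The following is a minimal Python sketch of this step, assuming a calibrated, rectified stereo pair and OpenCV; the matcher parameters are illustrative choices, not values from the patent.

```python
import cv2
import numpy as np

def pixel_positions_in_space(left_img, right_img, Q):
    """Compute a 3D position for each pixel of a rectified stereo pair.

    Q is the 4x4 disparity-to-depth matrix from stereo calibration
    (e.g. as returned by cv2.stereoRectify); it is an input assumption here.
    """
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,
                                    blockSize=5)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_img, right_img).astype(np.float32) / 16.0
    # Reproject the disparities to (X, Y, Z) coordinates in the camera frame.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0  # pixels with a usable stereo match
    return points_3d, valid
```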
- The segmentation of the estimated ground plane over the relevant pixels can be performed on the basis of color, saturation, intensity and/or texture information.
- The segmentation of the estimated ground plane over the relevant pixels can also be performed on the basis of variance information of the calculated positions of the pixels in space.
- The analysis of the detected different areas with regard to their trafficability by the vehicle may include the detection of obstacles, in particular raised objects, and/or the detection of driving activities.
- A driving activity is understood in particular to mean that a vehicle other than one's own is currently driving in an area or has already driven there.
- The driving activity can include information about the direction of travel of the other vehicle, whereby oncoming or crossing traffic can be taken into account.
- An area can thereby be excluded from trafficability. This applies, for example, to an area with oncoming traffic: even if driving there would be possible in principle, actually driving there carries a great risk of a frontal accident.
- The recognition of driving activities may include receiving and evaluating data from a camera-, radar- and/or lidar-based object recognition, and/or receiving and evaluating an object list generated by such an object recognition.
- The recognition of driving activities can also comprise long-term observation of driving activities to increase recognition reliability, transfer of a trafficability classification to similar image areas, and/or dynamic exclusion of trafficability when a danger is detected by observing driving activity; a sketch of such bookkeeping follows below.
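- A hedged sketch of this driving-activity bookkeeping per image area: repeated observation raises confidence, and detected oncoming traffic dynamically excludes an area. The RegionActivity type and the observation threshold are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class RegionActivity:
    observations: int = 0   # how often driving activity was seen in this area
    oncoming: bool = False  # was any of that activity oncoming traffic?

    def update(self, same_direction: bool) -> None:
        """Record one observed driving activity in this area."""
        self.observations += 1
        if not same_direction:
            self.oncoming = True  # dynamic exclusion: risk of frontal accident

    def trafficable(self, min_observations: int = 3) -> bool:
        # Treat the area as drivable only after repeated observations of
        # same-direction traffic, and never when oncoming traffic was seen.
        return self.observations >= min_observations and not self.oncoming
```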
- A further embodiment of the invention relates to a trafficability analysis apparatus using a computer, having first means for receiving image data of an environment in front of a vehicle, second means for analyzing the image data to detect different areas in an environment image, and third means for analyzing the detected different areas with regard to their trafficability by the vehicle.
- The second means may be arranged to carry out a method according to the invention as described above, and the third means may likewise be designed to carry out a method according to the invention as described above.
- Another embodiment of the invention relates to a driver assistance system having a device according to the invention as described herein. Further advantages and possible applications of the present invention emerge from the following description in conjunction with the embodiment(s) illustrated in the drawing(s).
- FIG. 1 shows a flow chart of an embodiment of a method for trafficability analysis according to the invention.
- Fig. 2 shows an example of an environment image in front of a vehicle, captured with a digital camera and segmented by a trafficability analysis method according to the invention.
- Fig. 3 shows another example of an environment image in front of a vehicle, captured with a digital camera and segmented by a trafficability analysis method according to the invention.
- Fig. 4 shows a block diagram of an embodiment of an apparatus for trafficability analysis according to the invention.
- The flowchart in FIG. 1 shows a program, executed by a computer, that analyzes image data with regard to trafficability by the vehicle. The image data are generated, for example, by a stereo-vision camera that captures images of the surroundings in front of a vehicle and may belong to a camera-based driver assistance system; the analysis serves, for example, to quickly determine a suitable alternative route in the event of an evasive maneuver.
- In step S10, digital image data of the surroundings in front of the vehicle are received from the stereo-vision camera for the trafficability analysis, for example via a dedicated image data transmission line, a vehicle bus or a radio link.
- The received image data are then analyzed in the subsequent steps S12-S20 to detect different regions in the environment image.
- In steps S22-S24, the different areas identified in the preceding steps are analyzed with regard to their trafficability by the vehicle, and trafficable areas are recognized.
- The areas recognized as passable can then be output for further processing by a driver assistance system that is intended to assist the driver in an evasive maneuver and to signal drivable alternative routes to him.
- A drivable area can typically be detected by analyzing changing or consistent textures, for example by detecting the transition from an asphalt surface to turf at the roadside.
- An assessment of the trafficability of the areas adjacent to the vehicle's own roadway from image information alone is often not reliably possible: an adjacent strip could be surfaced differently than the vehicle's own lane, appearing in the image, for example, as an unpaved strip of sand.
- This problem of separating and detecting different areas can also occur with the stereo-vision methods often used today, which calculate a spatial (3D) coordinate for each pixel of the images captured with a 3D camera. With the aid of these methods, it is basically possible to separate raised objects from the ground plane, as indicated by way of example in FIG. 2.
- In FIG. 2, the raised objects 20 and 22 can be separated from the ground plane 12.
- If the ground plane 12 consists, for example, of an asphalt road 14 bounded by meadows (right and left side areas 16 and 18), however, a reliable separation of these areas into trafficable and non-trafficable is often impossible with stereo information alone.
- A separation of different areas even within a plane such as the ground plane 12 could in principle be carried out by means of, for example, color-, intensity- or texture-based segmentation of mono images, i.e. a separation of areas within the ground plane 12 such as the asphalt road 14 and the adjacent meadows 16 and 18, or the object/obstacle 20 (FIG. 2) and the obstacles 30, 32 (FIG. 3), which do not protrude above the ground plane 12.
- A major disadvantage of such segmentation is the high computational effort required, which has so far argued against series production use, especially in driver assistance systems.
- The method according to the invention therefore combines, in the following steps, texture-based segmentation with stereo vision in order to obtain the advantages of both methods at reduced computational cost.
- The segmentation can be carried out only on the area that cannot be subdivided further by the stereo-vision method (in FIG. 2, only the ground plane 12 instead of the complete framed area including the objects).
- The result is a segmented ground plane in which passable areas are separated from non-passable areas, with the computational outlay reduced compared to a segmentation of the full image.
- In step S12, the positions of pixels in space are calculated from a plurality of images captured by the stereo-vision camera with the aid of a stereo-vision approach. Based on these space points, an estimate of the ground plane 12 is made in the next step S14. With the help of the estimated ground plane, the pixels relevant for the segmentation of the ground plane 12 are determined in step S16. On these pixels, a segmentation of the ground plane 12 is performed in step S18. Since the number of pixels to be segmented is significantly lower than in the original image, the computational outlay of the segmentation step S18 is significantly reduced. The result is a segmentation of the ground plane 12 that provides additional information about passable and non-passable areas.
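- As an illustration of steps S14 to S18, the following Python sketch estimates the ground plane from the space points (applied, for example, to points_3d[valid] from the earlier sketch), selects the pixels lying on it, and segments only those pixels. RANSAC plane fitting and k-means color clustering are plausible stand-ins chosen for the sketch; the patent does not prescribe specific algorithms, so every concrete choice here is an assumption.

```python
import numpy as np

def estimate_ground_plane(points, iterations=200, threshold=0.05):
    """Step S14: fit a plane n.x + d = 0 to an Nx3 point set with RANSAC."""
    rng = np.random.default_rng(0)
    best_plane, best_count = None, -1
    for _ in range(iterations):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(normal) < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= np.linalg.norm(normal)
        d = -normal.dot(sample[0])
        count = np.count_nonzero(np.abs(points @ normal + d) < threshold)
        if count > best_count:
            best_plane, best_count = (normal, d), count
    return best_plane

def select_ground_pixels(points_3d, valid, plane, threshold=0.05):
    """Step S16: boolean mask of the pixels lying on the estimated plane."""
    normal, d = plane
    return valid & (np.abs(points_3d @ normal + d) < threshold)

def segment_ground_pixels(image, ground_mask, k=3):
    """Step S18: cluster only the ground-plane pixels, here by color."""
    from sklearn.cluster import KMeans  # illustrative choice of clusterer
    colors = image[ground_mask].astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(colors)
    label_img = np.full(ground_mask.shape, -1)  # -1 = not on the ground plane
    label_img[ground_mask] = labels
    return label_img
```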
- The segmentation of the selected pixels can be performed on the basis of color, intensity or texture.
- As additional information for the segmentation, for example, the variance of the height of the space points can be used (the variance is higher, for example, for a meadow next to the road surface than for a flat road surface), or a requirement of only small height deviations.
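- A short illustration of this height-variance cue, assuming the points_3d and valid arrays from the earlier stereo sketch; the grid cell size, the height axis (Y) and the minimum number of valid matches per cell are assumptions for the sketch.

```python
import numpy as np

def height_variance_grid(points_3d, valid, cell=16):
    """Per-cell variance of the height coordinate of the stereo space points."""
    h, w = valid.shape
    var = np.full((h // cell, w // cell), np.nan)
    for i in range(h // cell):
        for j in range(w // cell):
            mask = valid[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            if np.count_nonzero(mask) > 8:  # need enough valid stereo matches
                heights = points_3d[i*cell:(i+1)*cell, j*cell:(j+1)*cell, 1]
                var[i, j] = heights[mask].var()
    return var  # high variance suggests vegetation, low variance a flat road
```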
- The technical advantage of this procedure is a segmentation of points in the ground plane on the basis of features in the image (e.g. intensity, color, texture); compared to a segmentation of the whole frame, the suitable selection of pixels (the stereo ground plane) saves computing time, and additional information for the segmentation (e.g. the variance of the height of the space points) becomes available.
- Decisive here is the selection of the pixels to be segmented from the image with the aid of an estimate of the (relevant) ground plane, which is carried out using a stereo method.
- The different regions 14, 16, 18 of the ground plane 12 (see FIGS. 2 and 3) obtained by the segmentation in step S18 are output in a subsequent step S20 for further processing by a driver assistance system.
- The output areas are analyzed in the subsequent steps S22 and S24 with regard to their trafficability.
- In step S22, obstacles 20 and 22 in the right and left side regions 16 and 18 of the road 14 (FIG. 2) and obstacles 30 and 32 in the right side region 16 of the road 14 (FIG. 3) are recognized, for example by a texture, color or intensity analysis. If obstacles 20 and 22 or 30 and 32 are detected in an area, the corresponding areas, i.e. the areas 16 and 18 with obstacles 20 and 22 (FIG. 2) and the area 16 with obstacles 30 and 32 (FIG. 3), are marked as "not passable"; areas without obstacles are marked as "passable".
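- A hedged sketch of this marking step, combined with the driving-activity bookkeeping sketched earlier: an area counts as "passable" unless an obstacle was detected in it or observed oncoming traffic excludes it. The data shapes (area identifiers, a set of areas with detected obstacles) are illustrative assumptions.

```python
def classify_areas(area_ids, obstacle_areas, activity):
    """Return a 'passable'/'not passable' verdict per segmented area.

    area_ids: iterable of area identifiers; obstacle_areas: set of area ids
    with raised objects; activity: dict mapping area id to RegionActivity.
    """
    verdicts = {}
    for area_id in area_ids:
        act = activity.get(area_id)
        if area_id in obstacle_areas or (act is not None and act.oncoming):
            verdicts[area_id] = "not passable"
        else:
            verdicts[area_id] = "passable"
    return verdicts
```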
- The trafficability assessment of the areas around the vehicle also includes information as to whether driving activity has already been, or is currently being, perceived in the areas investigated.
- Driving activity could be determined by the vehicle's own sensors, e.g. by camera-based object recognition, or else by fusion with other sensors, e.g. with the object list of a radar-based sensor.
- If a driving activity 28 has been detected in an area 18 of the image, then this area can with high probability be considered drivable.
- Observing the area over a longer period of time can contribute to a correspondingly higher confidence in this assessment.
- Step S24 is not limited to camera-based systems; the analysis and fused consideration of driving activity can be used in all trafficability assessment systems.
- The areas marked as "not passable" and "passable" can be further processed by the driver assistance system; in particular, they can be used to determine a possible alternative route in the event of an obstacle on the road 14. If an evasive maneuver is necessary, a determined alternative route can either be passively signaled to the driver, for example by a visual display or by a voice output similar to that of a navigation system, or it can be used for an active intervention in the vehicle control, for example to generate autonomous steering interventions for initiating and, if necessary, carrying out the evasive maneuver.
- FIG. 4 shows a block diagram of a trafficability analysis device 100 according to the invention, which processes data from a stereo-vision camera with a first and a second camera 102 and 104. The two cameras 102 and 104 supply image data of the surroundings in front of a vehicle.
- These image data are supplied to a stereo-vision processing unit 106, which calculates the positions of pixels in space, i.e. executes method step S12 explained above.
- The calculated pixel positions in space are transmitted to a ground plane estimation unit 108, which estimates a ground plane in the environment images based on the obtained space points, in accordance with method step S14 explained above.
- A relevant-pixel selection unit 110 determines, on the basis of the ground plane estimated by the unit 108 and the image data from the two cameras 102 and 104, the pixels relevant for a segmentation of the ground plane (corresponding to method step S16 explained above).
- Based on the relevant pixels determined by the unit 110, an image segmentation unit 112 performs a segmentation of the ground plane (method step S18).
- The different areas of the ground plane determined by the unit 112 are output by a ground plane area output unit 114, in a form suitable for further processing, to a trafficability analysis unit 116, which analyzes each of the output areas with regard to trafficability (corresponding to method steps S22 and S24 explained above) and outputs the result of the analysis, for example in the form of a list.
- The output lists can be further processed by a driver assistance system as described above.
- The device shown in FIG. 4 may be implemented in hardware and/or software.
- It may be implemented in the form of an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a microprocessor or controller that executes firmware implementing the method shown in FIG. 1.
- The present invention thus enables a computationally efficient trafficability analysis, especially for use in driver assistance systems.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DE102012112104.4A DE102012112104A1 (en) | 2012-12-11 | 2012-12-11 | METHOD AND DEVICE FOR TRAFFICABILITY ANALYSIS |
| PCT/DE2013/200336 WO2014090245A1 (en) | 2012-12-11 | 2013-12-06 | Method and device for analyzing trafficability |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP2932434A1 (en) | 2015-10-21 |
Family
ID=50064320
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP13826921.2A (Ceased) | Method and device for analyzing trafficability | 2012-12-11 | 2013-12-06 |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US9690993B2 (en) |
| EP (1) | EP2932434A1 (en) |
| DE (2) | DE102012112104A1 (en) |
| WO (1) | WO2014090245A1 (en) |
Families Citing this family (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP6156400B2 (en) * | 2015-02-09 | 2017-07-05 | トヨタ自動車株式会社 | Traveling road surface detection device and traveling road surface detection method |
| US10222932B2 (en) | 2015-07-15 | 2019-03-05 | Fyusion, Inc. | Virtual reality environment based manipulation of multilayered multi-view interactive digital media representations |
| US10147211B2 (en) | 2015-07-15 | 2018-12-04 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
| US12261990B2 (en) | 2015-07-15 | 2025-03-25 | Fyusion, Inc. | System and method for generating combined embedded multi-view interactive digital media representations |
| US11006095B2 (en) | 2015-07-15 | 2021-05-11 | Fyusion, Inc. | Drone based capture of a multi-view interactive digital media |
| US11095869B2 (en) | 2015-09-22 | 2021-08-17 | Fyusion, Inc. | System and method for generating combined embedded multi-view interactive digital media representations |
| US10242474B2 (en) | 2015-07-15 | 2019-03-26 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
| US11783864B2 (en) * | 2015-09-22 | 2023-10-10 | Fyusion, Inc. | Integration of audio into a multi-view interactive digital media representation |
| US11202017B2 (en) | 2016-10-06 | 2021-12-14 | Fyusion, Inc. | Live style transfer on a mobile device |
| US10437879B2 (en) | 2017-01-18 | 2019-10-08 | Fyusion, Inc. | Visual search using multi-view interactive digital media representations |
| US20180227482A1 (en) | 2017-02-07 | 2018-08-09 | Fyusion, Inc. | Scene-aware selection of filters and effects for visual digital media content |
| US10313651B2 (en) | 2017-05-22 | 2019-06-04 | Fyusion, Inc. | Snapshots at predefined intervals or angles |
| US11069147B2 (en) | 2017-06-26 | 2021-07-20 | Fyusion, Inc. | Modification of multi-view interactive digital media representation |
| US10592747B2 (en) | 2018-04-26 | 2020-03-17 | Fyusion, Inc. | Method and apparatus for 3-D auto tagging |
| WO2020146418A1 (en) * | 2019-01-07 | 2020-07-16 | Ainstein Ai, Inc. | Radar-camera detection system and methods |
| CN114051631A (en) * | 2019-06-27 | 2022-02-15 | 哲内提 | Method and system for estimating drivable surfaces |
| CN112396051B (en) * | 2019-08-15 | 2024-05-03 | 纳恩博(北京)科技有限公司 | Determination method and device for passable area, storage medium and electronic device |
| EP4357944A1 (en) * | 2022-10-20 | 2024-04-24 | Zenseact AB | Identification of unknown traffic objects |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120070071A1 (en) * | 2010-09-16 | 2012-03-22 | California Institute Of Technology | Systems and methods for automated water detection using visible sensors |
Family Cites Families (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7630806B2 (en) * | 1994-05-23 | 2009-12-08 | Automotive Technologies International, Inc. | System and method for detecting and protecting pedestrians |
| US7783403B2 (en) * | 1994-05-23 | 2010-08-24 | Automotive Technologies International, Inc. | System and method for preventing vehicular accidents |
| US8140358B1 (en) * | 1996-01-29 | 2012-03-20 | Progressive Casualty Insurance Company | Vehicle monitoring system |
| US6104812A (en) * | 1998-01-12 | 2000-08-15 | Juratrade, Limited | Anti-counterfeiting method and apparatus using digital screening |
| US20040247157A1 (en) * | 2001-06-15 | 2004-12-09 | Ulrich Lages | Method for preparing image information |
| JP3960092B2 (en) * | 2001-07-12 | 2007-08-15 | 日産自動車株式会社 | Image processing apparatus for vehicle |
| JP4612635B2 (en) * | 2003-10-09 | 2011-01-12 | 本田技研工業株式会社 | Moving object detection using computer vision adaptable to low illumination depth |
| US20100013615A1 (en) | 2004-03-31 | 2010-01-21 | Carnegie Mellon University | Obstacle detection having enhanced classification |
| DE102005002719A1 (en) | 2005-01-20 | 2006-08-03 | Robert Bosch Gmbh | Course prediction method in driver assistance systems for motor vehicles |
| DE102005045017A1 (en) | 2005-09-21 | 2007-03-22 | Robert Bosch Gmbh | Method and driver assistance system for sensor-based approach control of a motor vehicle |
| FR2898986B1 (en) * | 2006-03-24 | 2008-05-23 | Inrets | OBSTACLE DETECTION |
| US8340421B2 (en) * | 2008-02-04 | 2012-12-25 | Eyep Inc. | Three-dimensional system and method for connection component labeling |
| US8605947B2 (en) | 2008-04-24 | 2013-12-10 | GM Global Technology Operations LLC | Method for detecting a clear path of travel for a vehicle enhanced by object detection |
| JP5216010B2 (en) * | 2009-01-20 | 2013-06-19 | 本田技研工業株式会社 | Method and apparatus for identifying raindrops on a windshield |
| JP4788798B2 (en) * | 2009-04-23 | 2011-10-05 | トヨタ自動車株式会社 | Object detection device |
| JP2012253690A (en) * | 2011-06-06 | 2012-12-20 | Namco Bandai Games Inc | Program, information storage medium, and image generation system |
| CN103177236B (en) * | 2011-12-22 | 2016-06-01 | 株式会社理光 | Road area detection method and device, lane line detection method and apparatus |
| EP2863374A4 (en) * | 2012-06-14 | 2016-04-20 | Toyota Motor Co Ltd | CIRCULATION PATH SEPARATION MARKING DETECTION APPARATUS, AND DRIVER ASSISTANCE SYSTEM |
| JP5829980B2 (en) * | 2012-06-19 | 2015-12-09 | トヨタ自動車株式会社 | Roadside detection device |
| US9488483B2 (en) * | 2013-05-17 | 2016-11-08 | Honda Motor Co., Ltd. | Localization using road markings |
| WO2015024257A1 (en) * | 2013-08-23 | 2015-02-26 | Harman International Industries, Incorporated | Unstructured road boundary detection |
- 2012
- 2012-12-11 DE DE102012112104.4A patent/DE102012112104A1/en not_active Withdrawn
- 2013
- 2013-12-06 EP EP13826921.2A patent/EP2932434A1/en not_active Ceased
- 2013-12-06 DE DE112013005909.6T patent/DE112013005909A5/en active Pending
- 2013-12-06 WO PCT/DE2013/200336 patent/WO2014090245A1/en active Application Filing
- 2013-12-06 US US14/647,958 patent/US9690993B2/en active Active
Also Published As
| Publication number | Publication date |
|---|---|
| US20150324649A1 (en) | 2015-11-12 |
| WO2014090245A1 (en) | 2014-06-19 |
| DE102012112104A1 (en) | 2014-06-12 |
| US9690993B2 (en) | 2017-06-27 |
| DE112013005909A5 (en) | 2015-09-10 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
| | 17P | Request for examination filed | Effective date: 20150713 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | AX | Request for extension of the european patent | Extension state: BA ME |
| | DAX | Request for extension of the european patent (deleted) | |
| | 17Q | First examination report despatched | Effective date: 20180130 |
| | 17Q | First examination report despatched | Effective date: 20180205 |
| | REG | Reference to a national code | Ref country code: DE; Ref legal event code: R003 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
| | 18R | Application refused | Effective date: 20190624 |