
WO2025078129A1 - Method and processor for recognizing a stationary traffic sign - Google Patents


Info

Publication number
WO2025078129A1
Authority
WO
WIPO (PCT)
Prior art keywords
traffic sign
point cloud
information
vehicle
lidar sensor
Prior art date
Legal status
Pending
Application number
PCT/EP2024/076223
Other languages
French (fr)
Inventor
Antoine RAYNARD
Umair Nasir
Bharath Amarendra
Current Assignee
Valeo Detection Systems GmbH
Original Assignee
Valeo Detection Systems GmbH
Application filed by Valeo Detection Systems GmbH
Publication of WO2025078129A1


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/64 Three-dimensional objects

Definitions

  • the recognized visual information 34, comprising its meaning, e. g. speed limit "80", is output.
  • FIG. 2 schematically illustrates the vehicle 20, for example a passenger car, comprising the LiDAR sensor 10.
  • the LiDAR sensor 10 is located in a front area of the vehicle 20.
  • the environment 22 detected by the LiDAR sensor 10 is located in front of the vehicle 20 in the direction of travel.
  • An object O is schematically illustrated in the environment 22.
  • the LiDAR sensor 10 comprises an optical transmission device 12 emitting an optical signal L.
  • the optical transmission device 12 may comprise a light source for emitting laser light.
  • the LiDAR sensor 10 also comprises an optical reception device 14 receiving the reflected optical signal L.
  • the optical transmission device 12 may emit the optical signal L in pulses PL.
  • the pulsed optical signal L comprises short periods of time during which the optical signal L is transmitted. This may be referred to as the pulse PL. In between the pulses PL, no optical signal L is transmitted by the optical transmission device 12. The echoes of the sent pulses PL are then received by the optical reception device 14.
  • the optical reception device 14 comprises an opto-electronic detector, for example a point sensor, line sensor or area sensor, in particular an avalanche photodiode, a photodiode cell, a CCD sensor, an active pixel sensor, for example a CMOS sensor or the like. The detector converts the received optical signals L, in particular laser signals, into electrical signals.
  • the electrical signals can be processed, for example, by a control unit 18.
  • the point cloud 32 may be comprised in the raw data of the LiDAR sensor 10.
  • the raw data comprising the point cloud 32 may be output to the computing unit 24.
  • the raw data comprising the point cloud 32 may also be further processed within the LiDAR sensor 10.
  • the point cloud 32 generated by the LiDAR sensor 10 may be used to recognize traffic signs 30 as described with respect to figure 1 and figures 3 and 4 below.
  • the point cloud 32 may comprise measured information like depth and intensity associated with each point.
  • the point cloud 32 may further be used to detect further stationary or moving objects O, in particular vehicles, persons, animals, plants, obstacles, unevenness in the road, in particular potholes or stones, lane boundaries, open spaces, in particular parking spaces, precipitation or the like, in the environment 22.
  • the computing unit 24 may be designed as a central vehicle computer of the vehicle 20, in which data from several sensor systems of the vehicle 20 can be received, evaluated and/or further processed.
  • the computing unit 24 can be used, for example, to implement autonomous or semi-autonomous driving functions.
  • In figure 4 it is shown how the region of the point cloud 32 comprising the segregated board of the traffic sign 30 is further processed.
  • the embodiment shown in figure 4 illustrates the identification of the highly reflective traffic sign 30 showing the speed limit "80".
  • the traffic sign 30 is identified, classified, and the meaning is extracted.
  • the method allows the LiDAR sensor 10 on a vehicle 20 to be used to read the text and symbols of the roadside traffic sign 30 in a reliable manner, also in difficult light conditions, e. g. low light conditions or bright sunlight conditions.
  • In figure 5, an example of a received pulse PL of the optical signal L is shown in an amplitude Amp over time t diagram.
  • such a pulse PL may be received by the optical reception device 14 in case the optical transmission device 12 emits the optical signal in pulses PL.
  • the width WI of an emitted pulse PL may be in the range of 1-5 ns, e. g. 2-3 ns.
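The relation between such a received pulse and the measured values can be sketched in a few lines of Python (purely illustrative; the function names and the sampled amplitudes are assumptions, not part of the disclosure). The intensity may, for example, be taken as the integral of the amplitude over the pulse, and the time of the amplitude maximum may feed a time-of-flight evaluation:

```python
def pulse_intensity(samples, dt):
    # Integrate the sampled amplitude over the pulse duration to
    # obtain one possible intensity value for the measured point.
    return sum(samples) * dt

def pulse_peak_time(samples, dt):
    # Time of the amplitude maximum, usable as the reception time
    # in a time-of-flight evaluation.
    peak_index = max(range(len(samples)), key=samples.__getitem__)
    return peak_index * dt

# A short pulse sampled at interval dt (arbitrary units).
samples = [0.0, 1.0, 3.0, 1.0, 0.0]
```

Other schemes (e. g. taking only the peak amplitude as intensity) are equally possible; which one is used is an implementation detail of the sensor.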


Abstract

The present disclosure relates to a method for recognizing a stationary traffic sign (30), wherein a point cloud (32) of an environment (22) is obtained by a LiDAR sensor (10) comprised in a vehicle (20), the stationary traffic sign (30) is identified within the point cloud (32) and visual information (34) of the traffic sign (30) is extracted using contrast of intensity information, wherein the intensity information has been obtained by the LiDAR sensor (10). The disclosure further relates to a processor, a LiDAR sensor (10), a computing unit (24), a vehicle (20) and a computer program product.

Description

METHOD AND PROCESSOR FOR RECOGNIZING A STATIONARY TRAFFIC SIGN
Field
The present disclosure relates to a method and processor for recognizing a stationary traffic sign. The disclosure further relates to a LiDAR sensor, a computing unit, a vehicle, and a computer program product.
Background
Modern vehicles like cars, vans, trucks, motorcycles, etc. may comprise sensor systems, whose data are used for driver information and/or are used by driver assistance systems.
Sensor systems are constantly being developed for various functions, e. g. for the acquisition of environmental information in the near and far range of vehicles, such as passenger cars or commercial vehicles. Based on the acquired data, a model of the vehicle environment may be generated and a reaction to changes in this vehicle environment is possible. Sensor systems may also serve as sensors for driver assistance systems, in particular assistance systems for autonomous or semi- autonomous vehicle control. They may for example be used to detect obstacles and/or other road users in the front, rear, or blind spot areas of a vehicle. Sensor systems may be based on different sensor principles, such as radar, ultrasound, optics.
An important optical sensor principle for environment detection, e. g. of vehicles, is the LiDAR technology (LiDAR: Light Detection And Ranging). A LiDAR sensor comprises an optical transmission device and an optical reception device. The transmission device emits an optical signal, which can be continuous or pulsed. In addition, the optical signal may be modulated. Electromagnetic waves in the form of laser beams in the ultraviolet, visible or infrared range may be used as optical signals in a LiDAR sensor. The light of the optical signal is received by the optical reception device after reflection from an object in a detection area of the LiDAR sensor. The optical signal can for example be evaluated according to a time-of-flight method, and the spatial position and distance of the object on which the reflection occurred can be determined. In addition, it may be possible to determine a relative velocity. Reflection or reflected light is understood to mean any light that is reflected back and should also include, in particular, light that is reflected back by scattering or by absorption and re-emission.
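The time-of-flight evaluation mentioned above reduces to a simple relation: the signal travels to the object and back, so the radial distance is half the round-trip path. A minimal sketch (the function name is an illustrative assumption):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second, in vacuum

def tof_distance(round_trip_time_s):
    # The optical signal travels to the object and back, so the
    # radial distance is half the round-trip path length.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0
```

For example, a reflection received 200 ns after emission corresponds to a radial distance of roughly 30 m.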
Scanning LiDAR sensors emit optical signals that move in a scanning direction. The scanning movement may be achieved by steering the light beam of the optical signal using a beam steering device.
CN108009474A describes a method and a device for extracting text written on a vehicle. The extraction is based on using a stationary laser ranging device.
Summary
A method for recognizing a stationary traffic sign comprises:
• A LiDAR sensor comprised in a vehicle obtains a point cloud of an environment of the vehicle.
• The stationary traffic sign is identified within the point cloud.
• Visual information of the traffic sign is extracted using contrast of intensity information, wherein the intensity information has been obtained by the LiDAR sensor.
The LiDAR sensor may provide raw data comprising a point cloud comprising measured values, in particular depth values, for the environment of the vehicle. For creating the point cloud, the LiDAR sensor comprises an optical transmission device which emits an optical signal. The optical signal is reflected in the environment of the LiDAR sensor. An optical reception device of the LiDAR sensor receives the reflection of the optical signal. The sent and received optical signal can be evaluated to obtain the measured values. The measurements comprising the transmission, reception and evaluation may be repeated. The collection of the measured values for the various points of the environment forms the point cloud. A pulsed LiDAR sensor will emit the optical signal in pulses. Each pulse of the optical signal may provide a measurement value of a point. A scanning LiDAR sensor will cause the optical signal to perform a scanning movement, thereby successively scanning points of the environment.
The point cloud may be understood as a set of points, wherein each point comprises respective coordinates in a two-dimensional or three-dimensional coordinate system. In case of a three-dimensional point cloud, the three-dimensional coordinates may, for example, be determined by the direction of incidence of a reflected optical signal and the corresponding time-of-flight or radial distance measured for this respective point. In other words, the three-dimensional coordinate system may be a three-dimensional polar coordinate system. However, the information may also be given in Cartesian coordinates for each of the points. In addition to the spatial information, namely the two-dimensional or three-dimensional coordinates, the point cloud may also store additional information or measurement values for the individual points such as an intensity of the respective received optical signal.
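The conversion from the polar measurement (direction of incidence plus radial distance) to Cartesian point coordinates can be sketched as follows (an illustrative snippet; the names and the dictionary layout for a point are assumptions):

```python
import math

def polar_to_cartesian(radial_distance, azimuth, elevation):
    # Direction of incidence (azimuth and elevation, in radians)
    # plus the measured radial distance give the Cartesian
    # coordinates of the reflecting point.
    x = radial_distance * math.cos(elevation) * math.cos(azimuth)
    y = radial_distance * math.cos(elevation) * math.sin(azimuth)
    z = radial_distance * math.sin(elevation)
    return (x, y, z)

# One point of the cloud: spatial coordinates plus an optional
# additional measurement value such as intensity.
point = {"xyz": polar_to_cartesian(10.0, 0.0, 0.0), "intensity": 0.8}
```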
The method offers reliable traffic sign identification and recognition of the visual information on the traffic sign. The traffic sign identification refers in particular to the identification of the traffic sign board within the point cloud. The use of the point cloud information obtained by the LiDAR sensor allows for a good performance in a broad spectrum of light conditions, including low light conditions and excessive sunlight.
In the stepwise approach, the point cloud information is used to identify the traffic sign board in the environment. Then, in a further step, the visual traffic sign information on the traffic sign board is extracted using the contrast of intensity information. Intensity information is sometimes referred to as greyscale information or a greyscale image. The LiDAR sensor may measure the intensity information of the received light. Using contrast information of such intensities allows the visual information, which comprises the shapes of the information appearing on the traffic sign, to be extracted. The intensity information and/or the contrast of intensity information may be stored in the point cloud.
Contrast of intensity refers to the difference in intensity between elements, e. g. points, pixels or regions, in an image. For point clouds, the contrast information may refer to the difference in intensity between the points or groups of points of the point cloud. Histograms may be used to analyse the contrasts. Using the contrast of intensity information makes the visual information of the traffic sign stand out more distinctly from its background. This is in particular useful for traffic signs with highly reflective portions, where for example the visual information is printed on a highly reflective background.
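As one concrete way to exploit such a histogram, a threshold separating dark symbols from a highly reflective background can be chosen where the between-class contrast is maximal, e. g. with Otsu's method. The following is an illustrative sketch only; the disclosure does not prescribe this particular method, and the function name and bin count are assumptions:

```python
def otsu_threshold(intensities, bins=64):
    # Build an intensity histogram and pick the threshold that
    # maximises the between-class variance (Otsu's method), i.e.
    # the split with the strongest contrast between the two groups.
    lo, hi = min(intensities), max(intensities)
    if hi == lo:
        return lo
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in intensities:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    centers = [lo + (i + 0.5) * width for i in range(bins)]
    total = len(intensities)
    sum_all = sum(c * h for c, h in zip(centers, hist))
    best_t, best_var, w0, sum0 = lo, -1.0, 0, 0.0
    for i in range(bins - 1):
        w0 += hist[i]
        sum0 += centers[i] * hist[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, lo + (i + 1) * width
    return best_t
```

Points above the threshold would then belong to the reflective board background, points below to the printed symbols (or vice versa, depending on the sign).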
In an embodiment of the method, the traffic sign board is identified within the point cloud using depth information comprised in the point cloud. For example, based on the depth information for each echo pixel, the traffic sign board is identified from the rest of the environment. The depth information is accurately and directly available in the point cloud and can advantageously be used for the identification of the traffic sign board.
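A simple way to illustrate depth-based identification is a one-dimensional gap segmentation over the echo depths: a sign board shows up as a compact cluster clearly separated in depth from the rest of the environment. This is a sketch under assumptions (a real system would cluster in 3D; the names, the point layout and the gap value are illustrative):

```python
def split_by_depth(points, gap=0.5):
    # Sort points by depth and start a new cluster wherever two
    # consecutive depths differ by more than `gap` (metres).
    # Assumes a non-empty list of points.
    ordered = sorted(points, key=lambda p: p["depth"])
    clusters = [[ordered[0]]]
    for p in ordered[1:]:
        if p["depth"] - clusters[-1][-1]["depth"] > gap:
            clusters.append([p])
        else:
            clusters[-1].append(p)
    return clusters
```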
In an embodiment of the method, the identification of the stationary traffic sign comprises tracking the position of the traffic sign over a plurality of frames captured by the LiDAR sensor and confirming the stationary property of the traffic sign based on the plurality of captured frames.
A frame of a LiDAR sensor may refer to one scan or snapshot of the environment captured by the LiDAR sensor. It represents the data captured within a specific timeframe, typically a fraction of a second. A single frame may contain multiple data points, each of which may comprise the 3D coordinate including the depth or distance information and optional additional information like e. g. intensity or the like. For a scanning LiDAR sensor, a frame may refer to one scan period of the environment.
Tracking over the plurality of frames captured from a moving vehicle may make it possible to confirm the stationary property of the traffic sign. This may help to confirm that it is indeed a roadside traffic signpost. To this end, the position of an object, a tentative traffic sign, across multiple scan frames may be used to detect that it is static in the scene of the environment.
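One possible stationarity check can be sketched as follows: after compensating the ego motion of the vehicle, a static sign keeps (almost) the same world position across frames, while a moving object drifts. The function name, the 2D position representation and the tolerance are illustrative assumptions:

```python
def is_stationary(world_positions, tolerance=0.3):
    # `world_positions` holds the tracked (x, y) position of the
    # tentative sign in ego-motion-compensated world coordinates,
    # one entry per frame. A spread below `tolerance` (metres)
    # is consistent with a static object; the tolerance absorbs
    # measurement noise.
    xs = [p[0] for p in world_positions]
    ys = [p[1] for p in world_positions]
    spread = max(max(xs) - min(xs), max(ys) - min(ys))
    return spread <= tolerance
```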
According to an embodiment of the method, the identification of the traffic sign comprises identifying a region within the point cloud comprising the traffic sign and extracting a shape of the traffic sign. This identification relates to the identification of the board of the traffic sign and the identification of its shape. In this embodiment, the region of a potential traffic sign is identified using the point cloud information, e. g. the depth information, and then the shape of the object in the identified region is extracted. The extracted shape may then be used in further steps of the method.
In embodiments of the method, the identification of the traffic sign, in particular the traffic sign board, in the point cloud is performed using the extracted shape of the traffic sign. Traffic signs may have distinct shapes that may be comprised in a limited set of possible shapes of traffic signs. This property may be made use of when identifying the traffic sign within the point cloud. The possible shapes of the traffic sign may be stored in a storage and read from this storage. Only objects whose shape is among the possible shapes for traffic signs may then be extracted as a traffic sign, in particular a traffic sign board.
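Checking an extracted shape against the stored set of possible sign shapes can be sketched like this (a hypothetical heuristic: both the shape set and the corner-count classification are illustrative assumptions, not taken from the disclosure):

```python
# Hypothetical stored set of possible traffic sign shapes.
KNOWN_SIGN_SHAPES = {"circle", "triangle", "rectangle", "octagon"}

def classify_outline(corner_count):
    # Very rough mapping from the number of corners found on the
    # extracted outline to a shape label.
    return {0: "circle", 3: "triangle", 4: "rectangle",
            8: "octagon"}.get(corner_count)

def accept_as_sign(corner_count):
    # Keep an object as a traffic sign candidate only if its shape
    # belongs to the stored set of possible sign shapes.
    shape = classify_outline(corner_count)
    return shape if shape in KNOWN_SIGN_SHAPES else None
```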
In an embodiment of the method, the visual information is extracted using contrast of intensity information, wherein the intensity information is comprised in the point cloud. The point cloud provided as raw data by the LiDAR sensor comprises intensity information. The intensity information is used to obtain contrast of intensity information, for example giving the contrast in between the points or groups of points of the point cloud. Using intensity information comprised within the point cloud is a particularly efficient way to obtain the contrast of intensity information between the points of the point cloud.
The visual information extracted from the traffic sign may for example comprise characters, letters and/or symbols.
Image recognition may be applied to the visual information extracted from the traffic sign. The recognized visual information may then be output. Recognition of the visual information of the traffic sign may comprise extracting the meaning of the traffic sign. This information may be output and/or be used by other devices, e. g. a computing unit of the vehicle. The meaning of traffic signs may for example comprise information on a speed limit that may be signalled to the driver or used by a computing unit or other devices of the vehicle.
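As a toy illustration of such a recognition step, binarized glyphs obtained from the contrast of intensity could be matched against stored templates; a real system would use a trained classifier or a much richer template set, and the templates below are purely illustrative assumptions:

```python
# Tiny illustrative 5x3 binary glyph templates ("1" = bright cell).
TEMPLATES = {
    "0": ("111", "101", "101", "101", "111"),
    "8": ("111", "101", "111", "101", "111"),
}

def match_glyph(glyph):
    # Return the template label with the fewest differing cells
    # (Hamming distance over the binary grid).
    def distance(a, b):
        return sum(ca != cb for ra, rb in zip(a, b)
                   for ca, cb in zip(ra, rb))
    return min(TEMPLATES, key=lambda label: distance(TEMPLATES[label], glyph))

# A glyph binarized from the contrast of intensity, one noisy cell.
noisy_eight = ("111", "101", "110", "101", "111")
```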
In embodiments of the method, the extraction and/or the recognition of the visual information is performed using the extracted shape of the traffic sign. The shape of the traffic sign that has been extracted from the point cloud in one of the previous steps of the method is usually combined with a limited number of characters, letters and/or symbols on a traffic sign. This additional information on the visual information that is possible in combination with the extracted shape of the traffic sign may be used to extract the visual information of the traffic sign board and/or recognize the extracted visual information. This may further improve the reliability and speed of the proposed method.
The described method may at least partially be performed while the vehicle comprising the LiDAR sensor is in motion. The method may be used for autonomous and/or semi-autonomous driving operations and is particularly adapted to be used on a vehicle while in motion to identify, extract and recognize roadside traffic signs. The output of the method may advantageously be combined with the output of other sensors, e. g. a camera, in order to further improve the reliability and the level of redundancy.
A processor for recognizing a stationary traffic sign is configured to perform the steps of the described method. The processor may be comprised in a control unit of the described LiDAR sensor. The processor may be comprised in a computing unit of the vehicle. The vehicle may comprise the LiDAR sensor, which comprises the processor, and/or the vehicle may comprise the computing unit, which comprises the processor.
The processor may have an output interface via which information can be output, which depends on the recognized traffic sign. The output information may for example comprise the speed limit shown on the traffic sign. The described processor may be used, for example, in a vehicle which is in motion and performing an autonomous or semi-autonomous driving function. The information output by the processor can then be further used within the vehicle. In particular, further processing and use can take place in autonomous or semi-autonomous driving systems. In particular, use within the computing unit, in particular central computing unit, of the vehicle is advantageous, where information about the vehicle and its environment may be processed centrally.
A computer program product comprises instructions that, when executed by the processor, cause the processor to perform the described method. In embodiments, the instructions may be executed by the control unit of the LiDAR sensor and cause the control unit to perform the described method. In other embodiments, the instructions may be executed by the computing unit of the vehicle and cause the computing unit to perform the described method. The computer program product may for example correspond to a computer program comprising the instructions or may correspond to a computer-readable storage medium storing the computer program comprising the instructions.
Brief description of the figures
Embodiments will now be described with reference to the attached drawing figures by way of example only. Like reference numerals are used to refer to like elements throughout. The illustrated structures and devices are not necessarily drawn to scale.
Fig. 1 schematically illustrates a method to recognize a stationary traffic sign. Fig. 2 schematically illustrates a vehicle comprising a LiDAR sensor.
Fig. 3 schematically illustrates a point cloud with an identified traffic sign.
Fig. 4 schematically illustrates a point cloud with extracted visual information.
Fig. 5 schematically illustrates how intensity values of a pulse of an optical signal may be obtained.
Detailed Description
In figure 1, a method for recognizing a stationary traffic sign 30 is illustrated.
In A, a point cloud 32 of an environment 22 is obtained by a LiDAR sensor 10 comprised in a vehicle 20. The point cloud 32 may be comprised in raw data of the LiDAR sensor 10 that may be output by the LiDAR sensor 10 to a computing unit 24 of the vehicle 20 or further processed within the LiDAR sensor 10.
In B, the stationary traffic sign 30 is identified within the point cloud 32 using depth information comprised in the point cloud 32. The identification of the stationary traffic sign 30 comprises tracking the position of the traffic sign 30 over a plurality of frames captured by the LiDAR sensor 10 and confirming the stationary property of the traffic sign 30 based on the plurality of captured frames. If the vehicle 20 is in motion while this method is performed, this may give a reliable confirmation of the stationary property of the traffic sign 30. The traffic sign 30 recognition in the next step may then only be performed if the stationary condition has been confirmed. This may help to avoid identifying writing on other, moving objects. The identification of the traffic sign 30 further comprises identifying a region within the point cloud 32 comprising the traffic sign 30 and extracting a shape of the traffic sign 30. The shape of the traffic sign 30 may then be used for the identification of the traffic sign 30 as well, as the possible shapes of traffic signs 30 form a limited set.
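Purely as an illustration of how the stationary property could be confirmed over a plurality of frames, the sketch below transforms per-frame sign positions into a common world frame using the ego pose of the vehicle 20 and checks that the resulting positions do not spread. The function name, the ego-pose representation and the tolerance value are assumptions of this sketch, not part of the disclosed method:

```python
import numpy as np

def is_stationary(sign_positions, ego_poses, tol=0.5):
    """Confirm the stationary property of a tracked sign candidate.

    sign_positions: per-frame sign centroids in the sensor frame, one per frame.
    ego_poses: list of (R, t) ego poses mapping the sensor frame to the world frame.
    tol: maximum allowed spread (metres) of the world-frame positions.
    """
    # Transform each sensor-frame position into the world frame.
    world = np.array([R @ p + t for p, (R, t) in zip(sign_positions, ego_poses)])
    # A stationary sign stays at (almost) the same world-frame position.
    spread = np.linalg.norm(world - world.mean(axis=0), axis=1).max()
    return spread <= tol
```

A sign that only appears to move because the vehicle moves passes this check, while an object whose world-frame position drifts between frames does not.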
In C, visual information 34 of the traffic sign 30 is extracted using contrast of intensity information, wherein the underlying intensity information has been obtained by the LiDAR sensor 10. The contrast of intensity information may in particular be calculated using the intensity information comprised in the point cloud 32. In other embodiments, the contrast of intensity information may already be comprised in the point cloud 32. The extracted visual information 34 comprises for example characters, letters and/or symbols. The shape of the traffic sign 30 may then also be used for the extraction of the visual information 34, as the visual information 34 possibly correlating with a certain shape of a traffic sign 30 is limited.
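As an illustrative sketch of step C, a contrast of intensity information may be calculated from differences between neighbouring pixels of the sign region and used to separate the visual information 34 from the board background. The function name and the threshold values are assumptions of this sketch, not part of the disclosed method:

```python
import numpy as np

def extract_visual_information(intensity, contrast_threshold=0.3):
    """Separate high-reflectivity markings from the sign background.

    intensity: 2-D array of per-pixel LiDAR intensity for the sign region,
               normalised to [0, 1].
    Returns a boolean mask of pixels belonging to the visual information.
    """
    # Contrast as the largest absolute difference to the 4 direct neighbours.
    padded = np.pad(intensity, 1, mode="edge")
    diffs = [np.abs(intensity - padded[1:-1, :-2]),   # left neighbour
             np.abs(intensity - padded[1:-1, 2:]),    # right neighbour
             np.abs(intensity - padded[:-2, 1:-1]),   # upper neighbour
             np.abs(intensity - padded[2:, 1:-1])]    # lower neighbour
    contrast = np.max(diffs, axis=0)
    # Bright pixels inside high-contrast areas are taken as the
    # (retro-reflective) characters, letters or symbols.
    return (contrast > contrast_threshold) & (intensity > intensity.mean())
```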
In D, the visual information 34 of the traffic sign 30 is recognized based on the extracted visual information 34 using image recognition. The recognition comprises obtaining the meaning of the traffic sign 30. The shape of the traffic sign 30 may then also be used for the recognition of the visual information 34, as the meaning of a traffic sign 30 possibly correlating with a certain shape of a traffic sign 30 is limited.
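One simple, hypothetical way to realize the recognition of step D is pixel-wise template matching of the extracted binary visual information 34 against a library of known signs. The function and the template library are assumptions of this sketch; a production system would use more robust image recognition:

```python
import numpy as np

def recognize_sign(mask, templates):
    """Match the extracted binary visual information against known signs.

    mask: boolean array from the extraction step.
    templates: dict mapping a meaning (e.g. "speed limit 80") to a boolean
               template of the same shape as mask.
    Returns the meaning whose template has the highest pixel-wise agreement.
    """
    def score(template):
        return np.mean(mask == template)  # fraction of matching pixels
    return max(templates, key=lambda name: score(templates[name]))
```

Restricting the template library to the meanings that are possible for the extracted shape of the traffic sign 30, as described above, would shrink the search space.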
In E, the recognized visual information 34, comprising its meaning, e.g. the speed limit "80", is output.
Figure 2 schematically illustrates the vehicle 20, for example a passenger car, comprising the LiDAR sensor 10. The LiDAR sensor 10 is located in a front area of the vehicle 20. The environment 22 detected by the LiDAR sensor 10 is located in front of the vehicle 20 in the direction of travel. An object O is schematically illustrated in the environment 22.
The LiDAR sensor 10 comprises an optical transmission device 12 emitting an optical signal L. The optical transmission device 12 may comprise a light source for emitting laser light. The LiDAR sensor 10 also comprises an optical reception device 14 receiving the reflected optical signal L. The optical transmission device 12 may emit the optical signal L in pulses PL: the optical signal L is then transmitted only during short periods of time, referred to as the pulses PL, and in between the pulses PL no optical signal L is transmitted. The echoes of the emitted pulses PL are then received by the optical reception device 14.
Preferably, the optical reception device 14 comprises an opto-electronic detector, for example a point sensor, line sensor or area sensor, in particular an avalanche photodiode, a photodiode cell, a CCD sensor, an active pixel sensor, for example a CMOS sensor or the like. With the opto-electronic detector, optical signals L, in particular laser signals, can be received and converted into electrical signals. The electrical signals can be processed, for example, by a control unit 18.
In the control unit 18 of the LiDAR sensor 10, the emitted and received optical signal L can be evaluated to generate a point cloud 32 of the environment 22 of the vehicle 20. The point cloud 32 may be used, e.g., to detect the object O and/or to determine the distance to the object O. The control unit 18 may also be used to monitor and control the transmitting process in the optical transmission device 12 and the receiving process in the optical reception device 14. For example, a time-of-flight method, in particular a direct or an indirect time-of-flight method, may be used to determine the distance to individual points in the point cloud 32.
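For a direct time-of-flight method, the distance computation reduces to halving the round-trip travel path of the light. As a minimal sketch (function name assumed):

```python
def distance_from_time_of_flight(round_trip_time_s, c=299_792_458.0):
    """Direct time-of-flight: the pulse travels to the target and back,
    so the distance is half the round-trip path length c * t."""
    return c * round_trip_time_s / 2.0
```

A round-trip time of 100 ns thus corresponds to a target roughly 15 m away.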
The point cloud 32 may be comprised in the raw data of the LiDAR sensor 10. The raw data comprising the point cloud 32 may be output to the computing unit 24. The raw data comprising the point cloud 32 may also be further processed within the LiDAR sensor 10.
In the example shown, a point cloud 32 of the environment 22 that is monitored in the direction of travel in front of the vehicle 20 may be generated. It is also possible to arrange the LiDAR sensor 10 in other areas of the vehicle 20, for example in the rear area and/or in lateral areas. It is also possible to arrange several LiDAR sensors 10 on the vehicle 20, especially in corner areas of the vehicle 20.
The point cloud 32 generated by the LiDAR sensor 10 may be used to recognize traffic signs 30 as described with respect to figure 1 and figures 3 and 4 below. The point cloud 32 may comprise measured information like depth and intensity associated with each point. The point cloud 32 may further be used to detect further stationary or moving objects O, in particular vehicles, persons, animals, plants, obstacles, unevenness in the road, in particular potholes or stones, lane boundaries, open spaces, in particular parking spaces, precipitation or the like, in the environment 22.
Figure 2 also shows the computing unit 24 comprised in the vehicle 20. The computing unit 24 onboard the vehicle 20 is connected to the LiDAR sensor 10 via a data interface, over which output data from the LiDAR sensor 10 can be transmitted to the computing unit 24.
For example, the computing unit 24 may be designed as a central vehicle computer of the vehicle 20, in which data from several sensor systems of the vehicle 20 can be received, evaluated and/or further processed. The computing unit 24 can be used, for example, to implement autonomous or semi-autonomous driving functions.
A processor may be configured to execute the steps of the described method to recognize stationary traffic signs 30. The processor may in particular receive the raw data from the LiDAR sensor 10 as input to the method. The processor may be comprised in the LiDAR sensor 10 itself, e.g. within the control unit 18. The raw data comprising the point cloud 32 may then be transmitted internally within the LiDAR sensor 10. The processor may also be comprised in the computing unit 24. The raw data comprising the point cloud 32 may then be transmitted from the control unit 18 of the LiDAR sensor 10 to the computing unit 24. The described method to recognize stationary traffic signs 30 may then be executed on the computing unit 24.
Figure 3 schematically illustrates two embodiments of point clouds 32 as obtained by the LiDAR sensor 10 as shown in figure 2, for example. Associated with each point, also called pixel, in the environment 22 is a depth or distance value, giving a three-dimensional point cloud 32. The depth or distance value has been obtained by the LiDAR sensor 10 by evaluating the sent and received optical signal L for each point or pixel of the point cloud 32. Associated with each point, other measurement values may optionally also be obtained by the LiDAR sensor 10 and be comprised in the point cloud 32.
From the depth information of the point cloud 32, regions are extracted where traffic signs 30 are recognizable in each of the two embodiments of the three-dimensional point cloud 32. Here, the traffic sign 30 board with its outer shape is segregated from the rest of the environment 22 based on the depth information for each pixel in the point cloud 32. The depth information is readily available in the point cloud 32 in an accurate way.
In figure 4, it is shown how the region of the point cloud 32 comprising the segregated traffic sign 30 board is further processed. The embodiment shown in figure 4 illustrates the identification of the high reflectivity traffic sign 30 giving the speed limit "80".
The traffic sign 30 is read in the following further steps:
Once the traffic sign 30 has been segregated as shown in figure 3, the intensity information for each pixel of the point cloud 32 is used to separate the visual information 34 of the traffic sign 30 on the board. Shown in figure 4 is a high reflectivity traffic sign 30. For recognizing such a high reflectivity traffic sign 30, using contrast of intensity information is particularly useful. For obtaining the contrast of intensity information, the intensity information stored in the point cloud 32 may be used. From the stored intensity information, a contrast may be calculated, e.g. as the difference between neighbouring pixels or groups of pixels.
Then, using image processing techniques on the extracted visual information 34, the traffic sign 30 is identified, classified, and the meaning is extracted.
The method allows the LiDAR sensor 10 on a vehicle 20 to be used to read the text and symbols of road-side traffic signs 30 in a reliable manner, also in difficult light conditions, e.g. low light or bright sunlight.
Recognition of traffic signs 30 using the described method increases the confidence level. It performs very well in low light conditions and offers a long detection range for the recognition of road traffic signs 30, in particular of high reflectivity road traffic signs 30. The LiDAR sensor 10 provides highly accurate depth information for entities on the road, including the road traffic signs 30.
Recognition of traffic signs 30 using LiDAR technology increases the confidence level also for the case of sensor fusion. For example, there may be two sensors like camera and LiDAR as an input to the sensor fusion. The camera may be used for traffic sign 30 recognition. The LiDAR sensor 10 may be used in addition to detect and recognize the road traffic signs 30 to improve the redundancy of the fusion. This may have advantages for the development towards fully autonomous systems. Furthermore, the available local information around the vehicle 20 may be improved by the sensor fusion involving the traffic sign 30 recognition, e. g. information about speed limits, road quality, hazards etc.
In figure 5, an example of a received pulse PL of the optical signal L is shown in an amplitude Amp over time t diagram. Such a pulse PL may be received by the optical reception device 14 for the case the optical transmission device 12 emits the optical signal in pulses PL. The width WI of an emitted pulse PL may be in the range of 1-5 ns, e. g. 2-3 ns.
The pulses PL may each be emitted at a slightly different angle, such that the sequence of pulses PL performs a scanning movement over the environment 22. For such an optical signal L, each pulse PL of the optical signal L may provide a measurement value of a point in the point cloud 32. The measurement value includes the depth or distance information and may include the intensity information. The intensity information may be measured with three different parameters. It may correspond to the peak PE value of the received pulse PL, the area AR of the received pulse PL and/or the width WI of the received pulse PL.
As shown in figure 5, the measurement may be triggered at the point START, when the amplitude Amp of a received optical signal L passes above a threshold TH value with the rising edge. The measurement may be stopped at the point STOP, when the amplitude value falls below the same threshold TH value and the threshold TH is passed in the other direction.
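For a sampled pulse amplitude, the three intensity measures described above (peak PE, area AR and width WI between the START and STOP threshold crossings) could be computed as in the following sketch; the function name and interface are assumptions of this illustration:

```python
import numpy as np

def pulse_intensity_parameters(amplitude, dt, threshold):
    """Derive the three intensity measures of a received pulse.

    amplitude: sampled amplitude values Amp of the received optical signal.
    dt: sampling interval in seconds.
    threshold: TH value; the measurement runs from the rising-edge
               crossing (START) to the falling-edge crossing (STOP).
    Returns (peak PE, area AR, width WI).
    """
    amp = np.asarray(amplitude, dtype=float)
    above = amp > threshold
    if not above.any():
        return 0.0, 0.0, 0.0
    start = np.argmax(above)                      # first sample above TH
    stop = len(above) - np.argmax(above[::-1])    # one past last sample above TH
    segment = amp[start:stop]
    peak = segment.max()             # PE: maximum amplitude of the pulse
    area = segment.sum() * dt        # AR: approximate integral over the pulse
    width = (stop - start) * dt      # WI: duration between START and STOP
    return peak, area, width
```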
One or more of these measured values for the intensity may also be stored in the point cloud 32 associated with each point. The measured intensity may then be used in the described way to recognize the traffic sign 30.

Claims

1. Method for recognizing a stationary traffic sign (30), wherein a point cloud (32) of an environment (22) is obtained by a LiDAR sensor (10) comprised in a vehicle (20), the stationary traffic sign (30) is identified within the point cloud (32), visual information (34) of the traffic sign (30) is extracted using contrast of intensity information, wherein the intensity information has been obtained by the LiDAR sensor (10).
2. Method according to claim 1, wherein the traffic sign (30) is identified within the point cloud (32) using depth information comprised in the point cloud (32).
3. Method according to claim 1 or 2, wherein the identification of the stationary traffic sign (30) comprises tracking the position of the traffic sign (30) over a plurality of frames captured by the LiDAR sensor (10) and confirming the stationary property of the traffic sign (30) based on the plurality of captured frames.
4. Method according to one of the preceding claims, wherein the identification of the traffic sign (30) comprises identifying a region within the point cloud comprising the traffic sign (30) and extracting a shape of the traffic sign (30).
5. Method according to claim 4, wherein the identification of the traffic sign (30) is performed using the extracted shape of the traffic sign (30).
6. Method according to one of the preceding claims, wherein the visual information (34) is extracted using contrast of intensity information, wherein the intensity information is comprised in the point cloud (32).
7. Method according to one of the preceding claims, wherein the extracted visual information (34) comprises characters, letters and/or symbols.
8. Method according to one of the preceding claims, wherein the visual information (34) of the traffic sign (30) is recognized based on the extracted visual information (34) using image recognition and wherein the recognized visual information (34) is output.
9. Method according to claim 8, wherein the extraction and/or the recognition of the visual information (34) is performed using the extracted shape of the traffic sign (30).
10. Method according to one of the preceding claims, wherein the method is at least partially performed while the vehicle (20) comprising the LiDAR sensor (10) is in motion.
11. Processor for recognizing a stationary traffic sign (30) configured to perform the steps of the method according to one of claims 1 to 10.
12. LiDAR sensor (10) comprised in a vehicle (20), the LiDAR sensor (10) comprising a control unit (18), the control unit (18) comprising the processor according to claim 11.
13. Computing unit (24) comprised in a vehicle (20), the computing unit (24) comprising the processor according to claim 11.
14. Vehicle (20) comprising the LiDAR sensor (10) according to claim 12 and/or the computing unit according to claim 13.
15. Computer program product comprising instructions that, when executed by a processor, cause the processor to perform a method according to any one of claims 1 to 10.
PCT/EP2024/076223 2023-10-11 2024-09-19 Method and processor for recognizing a stationary traffic sign Pending WO2025078129A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102023127698.0A DE102023127698A1 (en) 2023-10-11 2023-10-11 Method and processor for detecting a stationary traffic sign
DE102023127698.0 2023-10-11

Publications (1)

Publication Number Publication Date
WO2025078129A1 true WO2025078129A1 (en) 2025-04-17

Family

ID=92882881

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2024/076223 Pending WO2025078129A1 (en) 2023-10-11 2024-09-19 Method and processor for recognizing a stationary traffic sign

Country Status (2)

Country Link
DE (1) DE102023127698A1 (en)
WO (1) WO2025078129A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009474A (en) 2017-11-01 2018-05-08 武汉万集信息技术有限公司 A kind of surface of vehicle picture and text extracting method and device based on laser ranging
CN110618434B (en) * 2019-10-30 2021-11-16 北京航空航天大学 Tunnel positioning system based on laser radar and positioning method thereof
CN115512317A (en) * 2022-08-09 2022-12-23 岚图汽车科技有限公司 A road sign recognition method, device, electronic equipment and storage medium
CN116468933A (en) * 2023-03-24 2023-07-21 北京航空航天大学 A Traffic Sign Classification Method Based on LiDAR Intensity Correction and Point Cloud Upsampling


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WENG SHENGXIA ET AL: "Road traffic sign detection and classification from mobile LiDAR point clouds", BIOMEDICAL PHOTONICS AND OPTOELECTRONIC IMAGING : 8 - 10 NOVEMBER 2000, BEIJING, CHINA; [PROCEEDINGS // SPIE, INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING : P, ISSN 0038-7355], SPIE, BELLINGHAM, WASH., US, vol. 9901, 2 March 2016 (2016-03-02), pages 99010A - 99010A, XP060063826, ISBN: 978-1-62841-832-3, DOI: 10.1117/12.2234911 *

Also Published As

Publication number Publication date
DE102023127698A1 (en) 2025-04-17

Similar Documents

Publication Publication Date Title
US11821988B2 (en) Ladar system with intelligent selection of shot patterns based on field of view data
EP3885794A1 (en) Track and road obstacle detecting method
US20150336575A1 (en) Collision avoidance with static targets in narrow spaces
CN112379674B (en) Automatic driving equipment and system
CN113888463B (en) Wheel rotation angle detection method and device, electronic equipment and storage medium
US20210018611A1 (en) Object detection system and method
US8885889B2 (en) Parking assist apparatus and parking assist method and parking assist system using the same
Zhang et al. Lidar degradation quantification for autonomous driving in rain
KR20120072131A (en) Context-aware method using data fusion of image sensor and range sensor, and apparatus thereof
CN112784679A (en) Vehicle obstacle avoidance method and device
Zhou A review of LiDAR sensor technologies for perception in automated driving
US20190187253A1 (en) Systems and methods for improving lidar output
EP3769114A1 (en) Methods and systems for identifying material composition of objects
CN113376643B (en) Distance detection method and device and electronic equipment
Godfrey et al. Evaluation of flash LiDAR in adverse weather conditions toward active road vehicle safety
Li et al. Composition and application of current advanced driving assistance system: A review
JP3954053B2 (en) Vehicle periphery monitoring device
US20220179077A1 (en) Method for supplementary detection of objects by a lidar system
WO2025078129A1 (en) Method and processor for recognizing a stationary traffic sign
Cowan et al. Investigation of adas/ads sensor and system response to rainfall rate
CN114954442A (en) A vehicle control method, system and vehicle
JP7673764B2 (en) Noise removal device, object detection device, and noise removal method
CN112016496A (en) Method, device and equipment for target detection
US20250346223A1 (en) Vehicular sensing system with height detection and object tracking
JPH08503772A (en) Vehicle reverse assist device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24775878

Country of ref document: EP

Kind code of ref document: A1