WO2022246851A1 - Testing method and system using aerial survey data for an autonomous driving perception system, and storage medium
- Publication number
- WO2022246851A1 (PCT/CN2021/096993)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- test
- vehicle
- test vehicle
- automatic driving
- scene data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
Definitions
- the present application relates to the technical field of automatic driving, and in particular to a testing method, system and storage medium for an automatic driving perception system based on aerial survey data.
- Autonomous driving means that the vehicle does not need to be operated by the driver; instead, the vehicle automatically collects environmental information through its on-board sensors and drives automatically according to the environmental information.
- the current method is to install high-precision sensors on the roof of the test vehicle.
- Both the driver of the test vehicle and the drivers of ordinary vehicles can notice that the test vehicle is special, which affects the naturalness of the measured data. Therefore, none of the current test methods can complete the performance test of the automatic driving perception system in a natural state, which may lead to failures of the automatic driving system and cause traffic accidents.
- The embodiment of the present application provides a test method, system and storage medium for an automatic driving perception system based on aerial survey data, and specifically provides a test system, a test device, a drone, a vehicle, test methods and storage media, so as to improve the safety of automatic driving systems.
- the embodiment of the present application provides a test system, the test system is used to test the automatic driving perception system in the automatic driving system, and the test system includes:
- test vehicle includes an automatic driving system
- the automatic driving system includes an automatic driving perception system
- the automatic driving perception system is used for surrounding environment perception to generate perception results
- an unmanned aerial vehicle, where the unmanned aerial vehicle can follow the test vehicle in flight and collect traffic scene data during the driving of the test vehicle;
- a test device in communication with the UAV and the test vehicle, where the test device is used to acquire the traffic scene data and to determine the accuracy of the perception result according to the traffic scene data.
- The embodiment of the present application also provides a test system, which is likewise used to test the automatic driving perception system in the automatic driving system, and the test system includes:
- test vehicle includes an automatic driving system
- the automatic driving system includes an automatic driving perception system
- the automatic driving perception system is used to perceive the surrounding environment of the test vehicle to obtain a perception result
- an unmanned aerial vehicle in communication with the test vehicle, where the unmanned aerial vehicle can follow the test vehicle in flight, collect traffic scene data during the driving of the test vehicle, and determine the accuracy of the perception result of the automatic driving perception system according to the traffic scene data.
- The embodiment of the present application further provides a test system used to test the automatic driving perception system in the automatic driving system, and the test system includes:
- test vehicle includes an automatic driving system
- the automatic driving system includes an automatic driving perception system
- the automatic driving perception system is used to perceive the surrounding environment of the test vehicle to obtain a perception result
- the unmanned aerial vehicle can follow the flight of the test vehicle, and collect traffic scene data during the driving process of the test vehicle,
- test vehicle is connected in communication with the UAV, and the test vehicle is used to obtain the traffic scene data, and determine the accuracy of the perception result of the automatic driving perception system according to the traffic scene data.
- the embodiment of the present application also provides a testing method for testing the automatic driving perception system in the automatic driving system, the testing method comprising:
- the test vehicle includes an automatic driving system
- the automatic driving system includes an automatic driving perception system
- the automatic driving perception system is used to perceive the surrounding environment of the test vehicle to generate a perception result
- the traffic scene data is collected by the unmanned aerial vehicle following the flight of the test vehicle;
- the accuracy of the perception result of the automatic driving perception system is determined according to the traffic scene data.
- the embodiment of the present application also provides an unmanned aerial vehicle, the unmanned aerial vehicle includes:
- a gimbal arranged on the body;
- a photographing device arranged on the gimbal, for capturing images;
- the drone further includes a processor and a memory
- the memory is used to store a computer program
- the processor is used to execute the computer program and, when executing the computer program, implement the test method described in any one of the embodiments provided by the present application.
- the embodiment of the present application further provides a test device, the test device includes a processor and a memory;
- the memory is used to store computer programs
- the processor is configured to execute the computer program and implement any one of the testing methods provided in the embodiments of the present application when executing the computer program.
- the embodiment of the present application also provides a vehicle, the vehicle comprising:
- an automatic driving system, where the automatic driving system is connected to the vehicle platform, the automatic driving system includes an automatic driving perception system, and the automatic driving perception system is used to perceive the surrounding environment of the vehicle to obtain a perception result;
- the automatic driving system includes a processor and a memory
- the memory is used to store a computer program
- the processor is used to execute the computer program and, when executing the computer program, implement the test method described in any one of the embodiments provided in the present application.
- The embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements any one of the testing methods provided in the embodiments of the present application.
- The test system, test device, unmanned aerial vehicle, test method, and storage medium disclosed in the embodiments of the present application can use the aerial survey data of the unmanned aerial vehicle to realize the performance test of the automatic driving perception system of the automatic driving system in a natural state, thereby improving the safety of the automatic driving system in practical applications.
- Fig. 1 is a schematic structural view of a test vehicle provided in the embodiment of the present application.
- Fig. 2 is a schematic diagram of a test system provided by the embodiment of the present application.
- Fig. 3 is a schematic structural diagram of an unmanned aerial vehicle provided by an embodiment of the present application.
- Fig. 4 is a schematic block diagram of an unmanned aerial vehicle provided by an embodiment of the present application.
- Fig. 5 is a schematic structural view of another test vehicle provided in the embodiment of the present application.
- Fig. 6 is a schematic diagram of another test system provided by the embodiment of the present application.
- FIG. 7 is a schematic diagram of a traffic scene provided by an embodiment of the present application.
- Fig. 8 is a schematic flowchart of determining a perception result provided by an embodiment of the present application.
- FIG. 9 is a schematic flow chart of an optimized target detection algorithm provided by an embodiment of the present application.
- FIG. 10 is a schematic flow chart of a testing method for an automatic driving perception system provided by an embodiment of the present application.
- FIG. 11 is a schematic flow chart of another testing method for an automatic driving perception system provided by an embodiment of the present application.
- Fig. 12 is a schematic block diagram of a test device provided by an embodiment of the present application.
- Fig. 13 is a schematic block diagram of a vehicle provided by an embodiment of the present application.
- Driving automation is generally divided into six levels, L0 to L5: No Automation (L0), Driver Assistance (L1), Partial Automation (L2), Conditional Automation (L3), High Automation (L4), and Full Automation (L5).
- the automatic driving perception system in the automatic driving system includes setting sensors on the automatic driving vehicle to perceive the surrounding environment of the vehicle, such as lidar and vision sensors.
- the test method of the automatic driving perception system is mainly to test by installing high-precision sensors on the test vehicle, as shown in Figure 1.
- The roof of the test vehicle is equipped with high-precision sensors, and the data collected by these high-precision sensors is used as the true value of the automatic driving perception system to verify the perception results of the automatic driving perception system.
- the high-precision sensor refers to higher precision than the sensor actually used in the automatic driving perception system.
- The test vehicle has a large number of external sensors, so both the driver of the test vehicle and the drivers of ordinary vehicles can notice that the test vehicle is special, which affects the naturalness of the collected test data.
- The collected test data may be data in which drivers of ordinary vehicles deliberately avoid the test vehicle, so the test of the automatic driving perception system cannot be completed in a natural state, resulting in an inaccurate test of the automatic driving perception system.
- In addition, although the road sections traveled by the test vehicle are sufficiently random, it is impossible to conduct a long-term test on a certain target road section, such as a sharp-turn segment or a roundabout segment; the test vehicle cannot be tested for a long time when passing through these road sections, so it cannot provide enough data support.
- The high-precision sensors installed on the test vehicle have a limited measurement range and low collection efficiency; they can only collect data from vehicles around the test vehicle, focus only on the safety performance of the automatic driving system, and do not take into account the impact of the automatic driving system on traffic efficiency.
- To this end, the embodiments of the present application provide a test method, system and storage medium for an automatic driving perception system based on aerial survey data, and more specifically provide a test system, a test device, a drone, a test method and a storage medium, which are used to complete the performance test of the automatic driving perception system in a natural state, thereby improving the safety of the automatic driving system and reducing traffic accidents.
- FIG. 2 shows a schematic diagram of a testing system provided by an embodiment of the present application.
- the test system 100 includes a UAV 10 , a test device 20 and a test vehicle 30 , and the test device 20 is in communication connection with the UAV 10 and the test vehicle 30 .
- the test vehicle 30 includes an automatic driving system, the automatic driving system includes an automatic driving perception system, and the automatic driving perception system is used to perceive the surrounding environment of the test vehicle 30 to generate a perception result. It should be noted that the test vehicle 30 in the test system 100 provided in the embodiment of the present application may not be equipped with high-precision sensors.
- the unmanned aerial vehicle 10 can follow the test vehicle 30 to fly, and collect the traffic scene data during the test vehicle 30 driving process.
- the testing device 20 is used to obtain the traffic scene data collected by the UAV 10, and determine the accuracy of the perception result according to the traffic scene data.
- the unmanned aerial vehicle 10 can follow the test vehicle 30 to fly, so it can pay attention to a certain target road section and improve the flexibility of data collection.
- The unmanned aerial vehicle 10 includes a body 11, a gimbal 12, a photographing device 13, a power system 14, and a control system 15.
- Airframe 11 may include a fuselage and an undercarriage (also referred to as landing gear).
- the fuselage may include a center frame and one or more arms connected to the center frame, and the one or more arms extend radially from the center frame.
- The landing gear is connected with the fuselage and is used to support the UAV 10 when it lands.
- The gimbal 12 is installed on the body 11 and is used to carry the photographing device 13.
- The gimbal 12 can include three motors, that is, the gimbal 12 is a three-axis gimbal; under the control of the control system 15 of the drone 10, it can adjust the shooting angle of the photographing device 13, where the shooting angle can be understood as the angle, relative to the horizontal or vertical direction, at which the lens of the photographing device 13 points towards the target to be photographed.
- The gimbal 12 may further include a controller, which controls the movement of the gimbal 12 by controlling the gimbal motors, thereby adjusting the shooting angle of the photographing device 13.
- the gimbal 12 may be independent of the UAV 10 , or may be a part of the UAV 10 .
- the motor may be a DC motor or an AC motor; or, the motor may be a brushless motor or a brushed motor.
- the photographing device 13 can be, for example, a camera or a video camera or other equipment for capturing images.
- the photographing device 13 can communicate with the control system 15 and take pictures under the control of the control system 15 .
- the photographing device 13 is mounted on the body 11 of the UAV 10 through the platform 12 . It can be understood that the camera device 13 can also be directly fixed on the body 11 of the UAV 10, so that the pan-tilt 12 can be omitted.
- the photographing device 13 can be controlled to photograph the test vehicle driving on the target road section from a bird's-eye view to obtain video data of the test vehicle, which can be used as traffic scene data of the test vehicle.
- The bird's-eye view means that the optical axis of the camera lens of the photographing device 13 is perpendicular, or approximately perpendicular, to the target road section to be photographed.
- The approximately perpendicular angle is, for example, 88 degrees or 92 degrees; of course, other angle values can also be used, which is not limited here.
- the photographing device 13 may include a monocular camera or a binocular camera for shooting of different functions.
- A monocular camera is used to capture images of the test vehicle driving on the target road section, and the binocular camera can obtain a depth image of the target objects on the target road section, which includes distance information of the test vehicle and the target objects on the target road section.
- the target object is such as other ordinary vehicles or pedestrians.
- the depth image can also be used as a kind of traffic scene data.
- The power system 14 may include one or more electronic governors (referred to as ESCs for short), one or more propellers, and one or more motors corresponding to the one or more propellers, where the motor is connected between the electronic governor and the propeller, and the motor and the propeller are arranged on the arm of the UAV 10.
- the electronic governor is used to receive the drive signal generated by the control system 15, and provide a drive current to the motor according to the drive signal to control the speed of the motor and then drive the propeller to rotate, thereby providing power for the flight of the UAV 10.
- The UAV 10 is capable of movement in one or more degrees of freedom. In some embodiments, the drone 10 may rotate about one or more axes of rotation.
- the rotation axis may include a roll axis (Roll), a yaw axis (Yaw) and a pitch axis (pitch).
- the motor may be a DC motor or an AC motor.
- the motor can be a brushless motor or a brushed motor.
- Control system 15 may include a controller and a sensing system. Wherein, the controller is used to control the flight of the UAV 10, for example, the flight of the UAV 10 can be controlled according to the attitude information measured by the sensing system. It should be understood that the controller can control the UAV 10 according to pre-programmed instructions, or can control the UAV 10 by responding to one or more control instructions from the control terminal.
- the sensing system is used to measure the attitude information of the UAV 10, that is, the position information and state information of the UAV 10 in space, such as three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration and three-dimensional angular velocity, etc.
- the sensing system may include, for example, at least one of sensors such as a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (Inertial Measurement Unit, IMU), a visual sensor, a global navigation satellite system, and a barometer.
- the global navigation satellite system may be the Global Positioning System (GPS).
- In this way, the location information of the test vehicle or of the target objects in the image can be calculated.
- a controller may include one or more processors and memory.
- the processor may be, for example, a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU), or a digital signal processor (Digital Signal Processor, DSP), etc.
- The memory can be a Flash chip, a read-only memory (ROM, Read-Only Memory), an optical disk, a USB flash drive or a removable hard disk.
- The UAV 10 may also include a radar device, which is installed on the UAV 10, specifically on the body 11 of the UAV 10, and is used during the flight of the UAV 10 to measure the surrounding environment of the UAV 10, such as obstacles, to ensure flight safety.
- The radar device is installed on the landing gear of the UAV 10 and communicates with the control system 15, transmitting the collected observation data to the control system 15 for processing.
- The drone 10 may include two or more landing gears, and the radar device is mounted on one of them.
- the radar device can also be mounted on other positions of the UAV 10, which is not specifically limited.
- the radar device can specifically be a laser radar, and can also be used to collect point cloud data of the test vehicle driving on the target road, and use the point cloud data as traffic scene data.
- the UAV is used to collect traffic scene data during the driving process of the test vehicle
- the data form of the traffic scene data may include video data, point cloud data, depth images, position information of the UAV, and attitude information, etc.
- The unmanned aerial vehicle 10 may be a rotary-wing unmanned aerial vehicle, such as a four-rotor, six-rotor or eight-rotor unmanned aerial vehicle, a fixed-wing unmanned aerial vehicle, or a combination of rotary-wing and fixed-wing aircraft, which is not limited here.
- the test device 20 may be a server or a terminal device, wherein the terminal device may be, for example, a desktop computer, a notebook, a tablet, or a smart phone.
- the test vehicle 30 includes an automatic driving system, which includes an automatic driving perception system, and the automatic driving perception system includes at least one of the following: a visual sensor, a radar or an inertial sensor, wherein the visual sensor includes a monocular camera or a binocular camera .
- The automatic driving perception system includes laser radars 301 and visual sensors 302; there may be multiple laser radars 301 and visual sensors 302, and they are arranged at different positions of the test vehicle 30.
- a perception result is obtained, such as the distance between the test vehicle 30 and other surrounding vehicles, or the distance to roadside facilities.
- the autonomous driving perception system may of course also include other types of radar, such as millimeter-wave radar, etc.
- the visual sensor may include a monocular camera or a binocular camera.
- The accuracy of the lidars 301 at different positions on the test vehicle 30 can be different.
- Fig. 6 shows another test system for testing the automatic driving perception system provided by the present application; this test system 100 includes the unmanned aerial vehicle 10 and the test vehicle 30, and the test vehicle 30 is connected in communication with the unmanned aerial vehicle 10.
- the test vehicle 30 in FIG. 6 also includes an automatic driving system
- the automatic driving system includes an automatic driving perception system
- the automatic driving perception system is used to perceive the surrounding environment of the test vehicle 30 to generate a perception result.
- The unmanned aerial vehicle 10 can follow the test vehicle 30 in flight and collect traffic scene data during the driving of the test vehicle 30; the unmanned aerial vehicle 10 is also used to obtain the perception result of the automatic driving perception system of the test vehicle 30 and determine the accuracy of the perception result according to the traffic scene data.
- The unmanned aerial vehicle 10 can also send the collected traffic scene data, or the detection results of the target objects around the test vehicle 30 obtained from the traffic scene data, to the test vehicle 30, so that the test vehicle 30 determines the accuracy of the perception result of the automatic driving perception system according to the traffic scene data or the detection results.
- the test of the autonomous driving perception system is completed according to the traffic scene data collected by the UAV 10 , which can be completed by the UAV 10 , the test device 20 or the test vehicle 30 .
- That is, the UAV 10 communicates with the test vehicle 30, obtains the perception result of the automatic driving perception system of the test vehicle 30, and determines the accuracy of the perception result according to the traffic scene data; or the test device 20 is connected in communication with the UAV 10 and the test vehicle 30, and is used to obtain the traffic scene data and the perception result of the automatic driving perception system and to determine the accuracy of the perception result according to the traffic scene data; or the test vehicle 30 obtains the traffic scene data and determines the accuracy of the perception result according to the traffic scene data.
- The test of the autonomous driving perception system according to the traffic scene data collected by the UAV 10 can be completed by the UAV 10, the test device 20 or the test vehicle 30, and includes processing the traffic scene data on the corresponding UAV 10, test device 20 or test vehicle 30, such as identifying the motion state of the test vehicle in the traffic scene data, the target objects around the test vehicle, and the target information of those target objects.
- the target information includes the relative position and motion information of the target object relative to the test vehicle.
- test device 20 is used to determine the accuracy of the sensing result of the automatic driving sensing system of the test vehicle according to the traffic scene data.
- In some embodiments, the unmanned aerial vehicle 10 can be controlled to follow the test vehicle 30 in flight, hover over the test vehicle 30, and collect traffic scene data during the driving of the test vehicle 30.
- the hovering over the test vehicle 30 means that the UAV 10 is relatively stationary relative to the test vehicle 30 , or it can be understood that the UAV 10 and the test vehicle 30 have the same movement speed and direction.
- The following function of the UAV 10 can be used to hover over the test vehicle 30 and maintain a certain hover height, thereby minimizing how much of the side walls of the vehicles around the test vehicle is captured, facilitating subsequent data processing, and improving the testing efficiency and accuracy of the autonomous driving perception system.
- the hovering position and/or hovering height of the UAV 10 over the test vehicle are related to the traffic scene of the test vehicle.
- the traffic scene may be a vehicle scene or a facility scene where the test vehicle travels on the target road.
- For example, when a truck appears in the traffic scene, the hovering height can be increased to prevent the truck from blocking other vehicles, so that higher-quality traffic scene data can be collected.
- the UAV 10 can hover over the test vehicle 30 within a certain period of time, and collect traffic scene data during the test vehicle 30 driving process.
- the wind force in a certain time period is smaller than that in other time periods, and/or, the light intensity in a certain time period is greater than that in other time periods.
- Collecting during a specific period of time, such as 8:00 am to 5:00 pm in sunny and windless weather, can minimize picture shaking caused by the movement of the drone, thereby improving the test accuracy of the automatic driving perception system.
- In order to improve the quality of traffic scene data collection, and thereby improve the accuracy of the automatic driving perception system test, the UAV 10 can adjust its flight attitude and/or the shooting angle of the photographing device 13 it carries while collecting traffic scene data during the driving of the test vehicle 30.
- the UAV 10 can adjust its flight attitude and/or the shooting angle of the shooting device 13 carried by it according to the road information of the test vehicle 30 traveling on the target road section, and collect traffic scene data during the driving of the test vehicle 30 .
- the target road section may be any road section that the test vehicle may travel on, such as any road section among expressways, urban roads, and urban and rural roads.
- the target road section may also be a road section with frequent traffic accidents, such as a long downhill road section, an urban-suburban junction road section, a sharp turning road section, an "S"-shaped road section or a circular island road section.
- the target road section may also be a special road section, such as any road section in a tunnel, a sea-crossing bridge, or a viaduct. It can be seen that the test system can measure the traffic scene data of any road section, and is not limited by the terrain, and the measurement cost is low.
- The road information of the target road section may be the shape of the road section and the facilities on the road section that affect the driving of the test vehicle.
- the road information data includes road facility information and/or road type information.
- the road facility information may include, for example, traffic signs, traffic markings, traffic lights and/or road auxiliary facilities.
- the road type information is, for example, urban roads or highways.
- urban roads may include arterial roads, express roads, secondary arterial roads and/or branch roads, etc.
- In some embodiments, the unmanned aerial vehicle 10 can adjust its flight attitude and/or the shooting angle of the photographing device it carries according to the operating conditions of the test vehicle 30, such as the speed and steering of the test vehicle, and collect traffic scene data during the driving of the test vehicle 30.
- the content of the traffic scene data may include road information data, vehicle information data, environment information data, and traffic participant information data.
- the road information data includes road facility information and/or road type information
- the road facility information may include, for example, traffic signs, traffic markings, traffic lights and/or road auxiliary facilities, etc.
- the road type information is, for example, an urban road or a highway
- Urban roads may include arterial roads, express roads, secondary arterial roads and/or branch roads, and the like.
- the vehicle information data includes at least one of the following: vehicle type information, vehicle location information, vehicle speed information, vehicle driving direction information and vehicle size information.
- the vehicle type information is, for example, M1 passenger car, N1 passenger car, trailer, two-wheeler or tricycle.
- the environment information data includes weather information and/or road surrounding environment information, such as daytime, night, sunny, rainy, snowy or foggy weather information, and road surrounding environment information such as buildings or flowers and trees around the road.
- Traffic participant information data includes pedestrian information and/or non-motor vehicle information. Pedestrian information includes, for example, the walking speed, direction and location of children, adults or the elderly, and non-motor vehicle information includes, for example, the speed, direction and location information of bicycles and electric two-wheelers.
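- As an illustration only, the categories of traffic scene data listed above could be organized as a simple record structure for downstream processing; the patent does not prescribe a data format, and all field names and types below are assumptions.

```python
# Illustrative only: one possible record layout for the traffic scene data
# categories described above. Field names and types are assumptions, not
# a format specified by the patent.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ParticipantInfo:
    kind: str                      # e.g. "M1 passenger car", "pedestrian", "bicycle"
    position: Tuple[float, float]  # (x, y) on the road, meters
    speed: float                   # m/s
    heading: float                 # driving/walking direction, degrees
    size: Optional[Tuple[float, float]] = None  # (length, width), meters

@dataclass
class TrafficSceneFrame:
    timestamp: float
    road_type: str                                              # e.g. "urban arterial road", "highway"
    road_facilities: List[str] = field(default_factory=list)    # signs, markings, traffic lights
    weather: str = "sunny, daytime"                              # environment information data
    vehicles: List[ParticipantInfo] = field(default_factory=list)
    pedestrians: List[ParticipantInfo] = field(default_factory=list)
    non_motor_vehicles: List[ParticipantInfo] = field(default_factory=list)
```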
- the high-precision sensor installed on the roof of the test vehicle shown in Figure 1 has a limited installation height, resulting in a small field of view and a large blind area.
- The height and attitude of the UAV in the test system of the present application can be adaptively adjusted based on the perception range, and the resulting maneuverability can eliminate blind spots and perceive all targets around the test vehicle, including the surrounding vehicles of the test vehicle and the blind-spot vehicles of the test vehicle, for example vehicles that are partially blocked in the blind spot; a blind-spot vehicle is specifically a vehicle that is blocked by the surrounding vehicles.
- Using blind-spot vehicle data to test the autonomous driving perception system can also improve the traffic throughput rate of the autonomous driving system.
- In FIG. 7, the Ego car is the test vehicle, vehicle 1 to vehicle 8 are surrounding vehicles of the test vehicle, and vehicle 9 to vehicle 24 are blind-spot vehicles.
- Although the UAV has the function of flight stability control, there are still many factors that prevent the collected traffic scene data from being used directly for the performance verification of the automatic driving system. Owing to various influences, the pixel matrix of the collected video data has problems such as unclear targets, weak contrast, and overexposed bright areas, which increase the difficulty of detection. Therefore, it is also necessary to preprocess the collected traffic scene data, where the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing and setting the region of interest; the specific processing method can be selected according to the actual processing effect.
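- As a non-authoritative illustration of such a preprocessing chain (the patent does not fix a specific implementation), an OpenCV sketch might look like the following; the region-of-interest crop, thresholds and kernel size are assumptions.

```python
# A minimal preprocessing sketch, assuming OpenCV and a BGR video frame from the drone.
# ROI coordinates, Canny thresholds and the kernel size are illustrative assumptions.
import cv2
import numpy as np

def preprocess_frame(frame: np.ndarray):
    roi = frame[200:900, 300:1600]                        # set the region of interest (assumed crop)
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)          # color space conversion
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # image binarization
    edges = cv2.Canny(gray, 50, 150)                      # edge detection
    kernel = np.ones((3, 3), np.uint8)
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)      # morphological processing
    return cleaned, edges

# Foreground extraction over the whole video would typically use a background
# model such as cv2.createBackgroundSubtractorMOG2(), applied frame by frame.
```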
- The traffic scene data includes the driving status of the test vehicle 30, such as road information, and the vehicle type, position, speed and driving direction information of the test vehicle and of the surrounding vehicles, as well as road surrounding environment information such as buildings or flowers and trees around the road.
- The traffic scene data also includes traffic participant information data, that is, pedestrian information and/or non-motor vehicle information, such as the walking speed, direction and location of children, adults or the elderly, and the speed, direction and location information of bicycles and electric two-wheelers, etc.
- The perception result of the automatic driving perception system of the test vehicle 30 also includes this information, so the traffic scene data collected by the UAV 10 can be used to determine the accuracy of the perception result of the automatic driving perception system.
- the source data required for the perception of the automatic driving perception system can first be obtained, such as the image data output by the vehicle camera, the point cloud data output by the lidar, the radar data, as well as the positioning data output by the positioning module, etc., and then input the source data to the perception algorithm module adopted by the automatic driving perception system of the automatic driving system.
- This perception algorithm module can also be called the functional module of the automatic driving perception system.
- This module calculates the perception result; as shown in Figure 8, the perception result specifically refers to the relative position information between the target objects around the test vehicle and the test vehicle.
- the sensing results at least include relative position information of objects around the test vehicle 30 relative to the test vehicle 30 , such as relative distances.
- the perception result further includes attitude information of the test vehicle itself and/or target information of the target object, and the target information includes the type of the target object and/or the moving speed of the target object.
- the attitude information is such as driving speed and driving direction.
- the accuracy of the perception result is determined according to the traffic scene data.
- In some embodiments, the detection result of the target objects around the test vehicle can be determined according to the traffic scene data, where the detection result at least includes the position information of the target objects relative to the test vehicle; the corresponding perception result of the automatic driving perception system of the test vehicle is then obtained, and the detection result and the perception result are compared to obtain the test result. Specifically, the detection result is taken as the true value, and the perception result is compared with it to obtain the difference result (i.e., the test result), which indicates whether the accuracy of the perception result meets the design requirements.
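- A minimal sketch of this comparison step is given below, assuming both results are reduced to per-object relative positions in the test vehicle's frame; the nearest-neighbour matching and the 0.5 m pass threshold are assumptions, not values from the patent.

```python
# Sketch: take the drone-derived detection result as the true value and compare
# the perception result against it. Matching and threshold are assumptions.
import math

def position_error(detections, perceptions, pass_threshold_m=0.5):
    """detections/perceptions: lists of (x, y) positions relative to the test vehicle, meters."""
    errors = []
    for dx, dy in detections:
        # match each true-value object to the closest perceived object
        px, py = min(perceptions, key=lambda p: math.hypot(p[0] - dx, p[1] - dy))
        errors.append(math.hypot(px - dx, py - dy))
    mean_error = sum(errors) / len(errors)
    return mean_error, mean_error <= pass_threshold_m

truth = [(12.0, 3.5), (-6.2, 0.4)]       # from the UAV traffic scene data
perceived = [(11.6, 3.8), (-6.5, 0.2)]   # from the automatic driving perception system
print(position_error(truth, perceived))  # e.g. (0.43..., True)
```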
- the perception algorithm module of the automatic driving perception system includes a target recognition module, a drivable area module, a multi-target tracking module and a self-attitude detection module
- the perception results of the automatic driving perception system include the recognition results of these functional modules .
- The traffic scene data collected by the UAV can be used to determine the detection results of these functional modules, and the two are compared to complete the test of the automatic driving perception system. For specific comparisons, please refer to Table 1: the scene elements identified from the traffic scene data captured by the UAV can be used as the true values of the perception algorithm modules in the autonomous driving perception system.
- Table 1 shows the correspondence between the detection results of the UAV and the perception algorithm modules in the perception system
- The perception algorithm modules shown in Table 1 and the detection results based on the UAV's detection targets do not constitute a limitation on the perception algorithm modules of the automatic driving perception system of the embodiment of the present application; in practical applications, more perception algorithm modules may be included.
- the objects can be divided into dynamic traffic elements and static traffic elements, that is, the objects include dynamic traffic elements and static traffic elements.
- the dynamic traffic element in addition to identifying the relative position information of the dynamic traffic element to the test vehicle, it is also necessary to identify the motion information of the dynamic traffic element, such as the motion speed and direction.
- the dynamic traffic elements include at least one of the following: motor vehicles, non-motor vehicles, pedestrians or animals, and the static traffic elements include at least one of the following: lane lines, obstacles or road edges on the road.
- The test device 20 is specifically configured to perform target recognition on the images of the traffic scene data according to the image features of the target object, to obtain the target objects around the test vehicle, where the image features include one or more of the color information, size information and texture information of the target object.
- target information such as lane lines, obstacles, and road edges can be identified through target recognition algorithms.
- In some embodiments, the line type features of the lane lines on the road where the test vehicle is traveling can be acquired, where the line type features include line type information and color information, and the lane lines in the traffic scene data are then identified according to the line type features.
- Compared with the surrounding road environment, lane lines have significantly different image features, and these differences are embodied in edge features such as gradient and grayscale; the detection and positioning of lane lines can be realized according to these edge features, and the recognition and positioning of the road can then be realized.
- lane lines in the traffic scene data may be identified according to edge features, where the edge features include gradient features and/or color features. Specifically, calculate the gradient of the image in the traffic scene data, determine the edge of the lane line according to the change of the gradient, and obtain the lane line.
- Compared with the road surface, lane lines have edge features such as gradients and grayscale that are easier to distinguish, so lane line extraction can be realized through pre-designed features. The feature design mainly uses the edge features of the lane lines relative to the road surface environment, and the commonly used edge features are mainly gradient features and color features. For example, the gradient of the image in the traffic scene data can be calculated and the lane line determined from the change of the gradient; alternatively, the first-order or higher-order derivatives in different directions of the gradient map corresponding to the image can be calculated, the peaks of the first-order or higher-order derivatives can be searched to locate the edges, and the direction of each edge can then be determined from the direction of the gradient to determine the lane line.
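- The gradient-based extraction described above could be sketched roughly as follows with OpenCV, assuming a grayscale bird's-eye frame as input; the Canny thresholds and Hough parameters are assumptions.

```python
# Sketch of gradient/edge-based lane line extraction from a top-down drone frame.
# Thresholds and Hough parameters are illustrative assumptions.
import cv2
import numpy as np

def extract_lane_lines(gray: np.ndarray):
    # gradient magnitude highlights lane-line edges against the road surface
    grad_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    grad_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = cv2.convertScaleAbs(cv2.magnitude(grad_x, grad_y))

    # locate edges, then fit straight segments to the strong, lane-like edges
    edges = cv2.Canny(magnitude, 80, 160)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=20)
    return lines  # array of (x1, y1, x2, y2) segments, or None if nothing is found
```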
- A pre-trained image recognition model can also be pre-stored in the test device 20, where the image recognition model is used to identify objects around the test vehicle, so that the test device 20 can also input the images of the traffic scene data into the pre-trained image recognition model to obtain the target objects included in the traffic scene data.
- different targets correspond to different image recognition models, for example, the image recognition model corresponding to lane lines is used to recognize lane lines, and other targets are similar.
- The image recognition model used to recognize lane lines can be obtained by training a convolutional neural network (Convolutional Neural Networks, CNN); CNNs are an important tool that is widely used in the fields of image recognition and target detection.
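- A minimal sketch of such a CNN is shown below; it classifies small image patches as lane line versus background. The architecture, patch size and class layout are illustrative assumptions and not the patent's model.

```python
# Illustrative patch classifier, assuming 32x32 RGB patches cut from aerial frames.
import torch
import torch.nn as nn

class LanePatchCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = LanePatchCNN()
logits = model(torch.randn(4, 3, 32, 32))   # dummy batch -> logits of shape (4, 2)
```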
- the UAV includes a binocular camera
- The binocular camera is used to capture left-eye images and right-eye images while the test vehicle is driving; therefore, the test device can obtain a depth image of the scene around the test vehicle from the left-eye image and the right-eye image, and then segment the left-eye image or the right-eye image according to the differences in the depth information of the depth image to determine obstacles.
- obstacle detection can be performed based on binocular stereo vision.
- The basic principle of binocular stereo vision is to use the left and right cameras to image the target scene from their respective angles to obtain an image pair (a left-eye image and a right-eye image), and to match the corresponding image points in the left-eye image and the right-eye image through a corresponding algorithm, so as to calculate the disparity between the image pair and obtain its depth information; either image of the pair can then be segmented using the differences in depth information to identify obstacles.
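- A minimal sketch of this disparity-to-depth computation, assuming rectified left/right grayscale images from the drone's binocular camera; the matcher parameters, focal length and baseline are assumptions.

```python
# Sketch: compute disparity from a rectified stereo pair, convert it to depth,
# and segment near objects as obstacle candidates. Parameters are assumptions.
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=1400.0, baseline_m=0.12):
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = focal_px * baseline_m / disparity[valid]   # depth = f * B / d
    return depth_m

def segment_by_depth(depth_m, near_limit_m=30.0):
    # pixels clearly closer than the far background stand out as obstacle candidates
    return ((depth_m > 0) & (depth_m < near_limit_m)).astype(np.uint8) * 255
```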
- The target object can also be identified by means of image processing, for example morphological processing; specifically, the areas corresponding to the target object in the images of the traffic scene data can be connected to obtain connected regions, and the target object is determined according to the connected regions.
- the regions corresponding to the objects in the images of the traffic scene data may also be connected to obtain the connected regions, and the size information and centroid position of the connected regions may be determined.
- the size information can be used to determine the type of object, and the center of mass position can be used to calculate the speed of the object.
- the region corresponding to the target object in the image of the traffic scene data is the region located in the foreground image of the image, and the foreground image has undergone morphological processing.
- The foreground image of an image is obtained by performing foreground extraction on the image.
- the region of interest (Region Of Interest, ROI) can also be extracted in the foreground image.
- the region of interest is the key part of the target recognition.
- Various operators, such as the operators provided in OpenCV, can be used: the image in the region of interest can be binarized, and a more complete foreground image can then be obtained by using morphological operations to remove interference and repair the foreground. Connected areas are marked as a whole to obtain connected regions, and the length, width and centroid position of each connected region are calculated for the subsequent calculation of the speed of the target object.
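- A sketch of this connected-region step, assuming a binary foreground mask after morphological cleanup; the minimum-area filter is an assumption.

```python
# Sketch: label connected regions in the cleaned foreground mask and collect
# the length, width and centroid of each region for later type and speed estimation.
import cv2
import numpy as np

def connected_regions(foreground_mask: np.ndarray, min_area_px: int = 150):
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(foreground_mask)
    regions = []
    for i in range(1, num):                     # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area_px:                  # drop small noise blobs (assumed threshold)
            continue
        regions.append({"bbox": (int(x), int(y), int(w), int(h)),
                        "length_px": int(max(w, h)),
                        "width_px": int(min(w, h)),
                        "centroid": (float(centroids[i][0]), float(centroids[i][1]))})
    return regions
```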
- The type of the target object can also be determined according to the size of the connected region, where the target object includes a vehicle or an interference object, the type of the target object includes at least a first type, a second type and a third type, and the sizes of the vehicles corresponding to the first type, the second type and the third type are different.
- the first type is a small car
- the second type is a medium-sized car
- the third type is a large-sized car.
- Small-sized cars include motorcycles, bicycles, etc.; medium-sized cars include cars, commercial vehicles, small trucks, etc.; and large-sized cars include trucks, buses, tourist buses, large trucks, etc.
- the type of the target object can also be determined according to the width of the connected area and the road width in the image of the traffic scene data.
- For example, when identifying the type of a vehicle, it is judged whether the width of the connected region is greater than a quarter of the road width; if so, the aspect ratio of the connected region is further judged to determine whether it is interference. For instance, if the width of the connected region is greater than a quarter of the road width and the aspect ratio of the connected region is between 1.5 and 3, the target object corresponding to the connected region is determined to be a vehicle.
- In some embodiments, the type of vehicle can also be determined: for example, if the width of the connected region is greater than half of the road width and its length exceeds 3.5 times the vehicle width, it is determined to be a large vehicle, and other widths are regarded as interference and excluded. By using the width of the connected region together with the road width, most of the ghosting caused by unstable drone footage or by the shaking branches and leaves of roadside trees can be eliminated, thereby improving the detection accuracy for the automatic driving perception system test.
- the road width mentioned above is based on the average road width obtained after horizontal projection processing.
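- The width and aspect-ratio rules above can be sketched roughly as follows; the quarter/half road-width fractions, the 1.5-3 aspect-ratio band and the 3.5x length rule come from the text, while the function shape and labels are assumptions.

```python
# Sketch of the type-classification rules described above. road_width_px is the
# averaged, projection-corrected road width in pixels; labels are illustrative.
def classify_region(length_px: float, width_px: float, road_width_px: float) -> str:
    if width_px <= road_width_px / 4:
        return "interference"                  # too narrow to be a vehicle here
    if width_px > road_width_px / 2 and length_px > 3.5 * width_px:
        return "large vehicle"                 # trucks, buses, large trucks
    aspect = length_px / max(width_px, 1e-6)
    if 1.5 <= aspect <= 3.0:
        return "vehicle"                       # ordinary car-sized target
    return "interference"                      # ghosting, swaying foliage, etc.
```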
- For the target objects in the traffic scene, that is, the dynamic traffic elements and the static traffic elements, the relative position of the target object to the test vehicle can also be calculated from the image information.
- the motion information of the dynamic traffic element such as information such as motion speed and motion direction, and of course motion information such as deceleration or acceleration can also be determined in a similar manner.
- the moving direction can be determined according to the moving track of the dynamic traffic element in the multi-frame images, of course, other ways can also be used, which are not limited here, such as determining the direction of the dynamic traffic element relative to the lane line to determine the moving direction.
- Specifically, the pixel positions of a reference point in at least two frames of images in the traffic scene data can be determined, and the relative distance information of that reference point relative to another reference point can be obtained, where the relative distance information is the distance between the two reference points in the real scene; the at least two frames of images are converted from two-dimensional coordinates to three-dimensional coordinates, the actual displacement of the target object in the images is determined according to the pixel positions and the relative distance information, and finally the change time corresponding to the actual displacement is determined according to the frame rate corresponding to the at least two frames of images, and the movement speed of the target object is determined according to the change time and the actual displacement.
- The reference point can be a fixed point in the traffic scene, such as a lamp post or a lane line, whose distance to the other selected reference point in the real scene is known. The pixel positions of the reference point in at least two frames of images in the traffic scene data are determined, and from these pixel positions the change in position of the target object (such as a vehicle) across the at least two frames can be determined, because the vehicle moves relative to the reference point while the images are captured, so the position of the target object relative to the reference point changes between frames. The relative distance information between the reference point and the other reference point input by the user is then obtained, the at least two frames of images are converted from two-dimensional coordinates to three-dimensional coordinates, and the actual displacement corresponding to the change in position of the target object in the images can be determined, where the actual displacement is the displacement in the real scene.
- Then, the change time corresponding to the actual displacement is determined according to the frame rate corresponding to the at least two frames of images, so that the movement speed can be determined from the actual displacement and the change time, for example by dividing the actual displacement by the change time.
- Since the moving distance of the center of mass spans multiple pixels, the distance corresponding to each pixel in the image is determined according to the number of pixels corresponding to the road in the image and the actual width of the road, and the actual distance traveled by the target object is then determined from the moving distance of the center of mass and the distance corresponding to each pixel. That is, multiplying the number of pixels included in the moving distance of the center of mass by the distance corresponding to each pixel gives the actual driving distance of the target object.
- In some embodiments, correction processing can also be performed on the number of pixels corresponding to the road in the image, where the correction processing includes binarization processing and/or horizontal projection processing, both of which operate on the image information corresponding to the road in the image.
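- A worked sketch of this speed calculation, assuming the pixel-to-meter scale is recovered from the known road width and the centroid is tracked across frames; all numeric inputs are illustrative.

```python
# Sketch: convert centroid displacement in pixels to speed, using the road width
# as the scale reference and the video frame rate as the clock.
import math

def speed_from_centroids(c_prev, c_curr, road_width_px, road_width_m,
                         frames_elapsed, fps):
    meters_per_pixel = road_width_m / road_width_px          # scale from the known road width
    dx = (c_curr[0] - c_prev[0]) * meters_per_pixel
    dy = (c_curr[1] - c_prev[1]) * meters_per_pixel
    displacement_m = math.hypot(dx, dy)                      # actual displacement on the road
    elapsed_s = frames_elapsed / fps                         # change time from the frame rate
    return displacement_m / elapsed_s                        # speed in m/s

# e.g. a centroid that moves 25 px over 5 frames at 30 fps, on a 7.5 m wide road
# imaged 300 px wide, corresponds to 3.75 m/s.
print(speed_from_centroids((410, 220), (435, 220), 300, 7.5, 5, 30))
```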
- When testing the perception results, it is also possible to determine, according to the test results, the functional modules in the automatic driving perception system that do not meet the requirements, such as the functional modules that do not meet the performance standards; specifically, if a certain performance value in the perception results is lower than the performance standard, or if its difference from the performance standard exceeds a preset range, it can be considered as not meeting the requirements. After the functional modules in the automatic driving perception system that do not meet the requirements are determined, those functional modules are optimized.
- the functional module includes one or more of a target recognition module, a drivable area recognition module, a multi-target tracking module, and a self-attitude detection module
- The optimization includes at least one of the following: optimizing the perception algorithm of the functional module, or selecting a sensor with higher precision.
- That is, the functional modules with unsatisfactory perception effects in the autonomous driving perception system are found from the test results and optimized: the perception algorithm is optimized at the software level, and cost-effective, higher-precision sensors are selected at the hardware level.
- For example, the target detection algorithm in the target recognition module can be optimized, such as by using another target detection algorithm to replace the mixed Gaussian model as the target detection algorithm, or by optimizing the mixed Gaussian model itself.
- the mixed Gaussian model method is an adaptive background modeling method, for the case of a fixed camera, the mixed Gaussian model will gradually reach a stable state after being established for a period of time. Then, as time changes, the background image continues to change slowly, and the mixed Gaussian model also needs to be continuously updated.
- the mixed Gaussian model does not require manual intervention in the initialization process, the accumulation of background calculation errors is small, and it has good adaptability to background changes.
- the mixed Gaussian model method also has some shortcomings when dealing with factors such as noise and sudden changes in illumination.
- Therefore, the embodiment of the present application also proposes a mixed Gaussian model based on edge-detection images, uses an improved neighborhood-difference method to remove noise, and then applies the combined processing of the two as the target detection algorithm in the target recognition module, which can improve the target detection effect.
- the image sequence of the original video frame in the traffic scene data is used to establish a general mixed Gaussian model, and an operator is used to find the edge of the video frame, and then the edge mixed Gaussian model is established.
- the edge mixed Gaussian model on the one hand, can use the edge image to reduce part of the noise and improve the ability to resist sudden changes in light, on the other hand, it uses the improved neighborhood difference method to denoise the detection results of the ordinary mixed Gaussian model.
- the foreground image of the ordinary Gaussian model processed by the neighborhood difference method is expanded by the morphological method, and then intersected with the foreground image of the edge-mixed Gaussian model, so that the denoising effects of the two methods can be combined, and the denoised image can be obtained edge image.
- Use the obtained edge image to expand again, and then intersect with the foreground image of the ordinary Gaussian model. In this way, some target images that were mistakenly removed as the background can be retrieved when the improved neighborhood difference method is used for denoising.
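- A rough sketch of the combined detector described above (an ordinary mixed Gaussian model plus an edge-image mixed Gaussian model, fused by morphological dilation and intersection), assuming OpenCV's MOG2 background subtractor; the improved neighbourhood-difference denoising is approximated here by a median filter, which is an assumption.

```python
# Sketch of the edge-assisted mixed Gaussian target detection described above.
# MOG2 stands in for the "ordinary" and "edge" mixed Gaussian models; the
# improved neighbourhood-difference denoising is approximated by a median blur.
import cv2
import numpy as np

bg_plain = cv2.createBackgroundSubtractorMOG2(detectShadows=False)  # on raw frames
bg_edge = cv2.createBackgroundSubtractorMOG2(detectShadows=False)   # on edge frames
kernel = np.ones((3, 3), np.uint8)

def detect_targets(frame: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    fg_plain = bg_plain.apply(frame)     # foreground from the ordinary model
    fg_edge = bg_edge.apply(edges)       # foreground from the edge model

    # denoise the ordinary foreground, dilate it, and intersect with the edge
    # foreground to combine the denoising effects of the two models
    denoised = cv2.medianBlur(fg_plain, 5)
    fused_edges = cv2.bitwise_and(cv2.dilate(denoised, kernel), fg_edge)

    # dilate the fused edge image and intersect with the ordinary foreground to
    # recover target pixels that the denoising step removed as background
    return cv2.bitwise_and(cv2.dilate(fused_edges, kernel), fg_plain)
```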
- Since the data is collected in a natural state, the performance test of the automatic driving perception system in the automatic driving system is realized in a natural state, which improves the safety of the automatic driving system in practical applications.
- Since the shooting angle of the UAV is from directly above the lane, there is no situation in which a large vehicle blocks a small one, and the data error rate is low, which can improve the accuracy of the automatic driving perception system test. Since the above-mentioned test system only needs one test vehicle and one drone to complete tests on different road sections at different times, the test coverage can be improved while reducing costs.
- FIG. 10 shows a schematic flowchart of a testing method for an automatic driving perception system provided by an embodiment of the present application.
- the test method of the automatic driving perception system can be applied to the unmanned aerial vehicle, test device or test vehicle provided in the above embodiment to complete the performance test of the automatic driving perception system in a natural state.
- the testing method of the automatic driving system includes step S101 and step S102.
- the test vehicle includes an automatic driving system, the automatic driving system includes an automatic driving perception system, and the automatic driving perception system is used to perceive the surrounding environment of the test vehicle to generate a perception result; the traffic scene data is collected by the unmanned aerial vehicle following the flight of the test vehicle.
- the drone is capable of following and hovering over the test vehicle. This minimizes the capture of the side surfaces of vehicles around the test vehicle, facilitates subsequent data processing, and improves the testing efficiency and accuracy of the automatic driving perception system.
- the hovering position and/or hovering height of the drone over the test vehicle is related to the traffic scene of the test vehicle.
- the traffic scene may be a vehicle scene or a facility scene in which the test vehicle travels on the target road, so that higher-quality traffic scene data can be collected.
- the drone can hover over the test vehicle during a specific time period and collect traffic scene data while the test vehicle is driving.
- the wind force during the specific time period is smaller than during other time periods, and/or the light intensity during the specific time period is greater than during other time periods. This improves the test accuracy of the automatic driving perception system.
- in order to improve the quality of the collected traffic scene data and thus the accuracy of the automatic driving perception system test, the UAV can adjust its flight attitude and/or the shooting angle of the camera it carries while collecting traffic scene data during the driving of the test vehicle.
- the UAV can adjust its flight attitude and/or the shooting angle of its camera device according to the road information of the target road section on which the test vehicle is driving, and collect traffic scene data during the driving of the test vehicle.
- the UAV can adjust its flight attitude and/or the shooting angle of its mounted camera device according to the operating conditions of the test vehicle, and collect traffic scene data during the driving of the test vehicle.
- the sensing result at least includes relative position information of objects around the test vehicle relative to the test vehicle.
- the perception result further includes attitude information of the test vehicle and/or target information of the target, and the target information includes the type of the target and/or the moving speed of the target.
- the objects around the test vehicle include surrounding vehicles and blind spot vehicles; the surrounding vehicles are vehicles adjacent to the test vehicle, and the blind spot vehicles are vehicles blocked by the surrounding vehicles. In this way, the influence of blind spots can be eliminated.
- the automatic driving perception system includes at least one of the following: a visual sensor, a radar or an inertial sensor, wherein the visual sensor includes a monocular camera or a binocular camera.
- although the UAV has a flight stability control function, many factors still prevent the collected traffic scene data from being used directly for performance verification of the automatic driving system. Owing to these influences, the pixel matrix of the collected video data suffers from problems such as unclear targets, weak contrast and over-exposed bright areas, which increases the difficulty of detection. Therefore, in some embodiments, the collected traffic scene data also needs to be preprocessed, where the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing and setting a region of interest; specifically, the corresponding processing method can be selected according to the actual processing effect.
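- a minimal sketch of such a preprocessing chain is shown below, assuming Python with OpenCV; the specific thresholds, the Otsu binarization and the polygonal region of interest are illustrative choices rather than the embodiment's prescribed parameters.

```python
import cv2
import numpy as np

def preprocess(frame, roi_polygon):
    """Illustrative preprocessing chain (an embodiment may use only a subset,
    chosen according to the actual processing effect)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)                  # color space conversion
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # image binarization
    edges = cv2.Canny(gray, 50, 150)                                # edge detection
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)      # morphological processing

    # Region of interest: keep only the road area covered by the polygon.
    mask = np.zeros_like(gray)
    cv2.fillPoly(mask, [np.asarray(roi_polygon, dtype=np.int32)], 255)
    roi = cv2.bitwise_and(cleaned, mask)
    return gray, binary, edges, roi
```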
- the accuracy of the perception result is determined according to the traffic scene data.
- the detection result of the target object around the test vehicle can be determined according to the traffic scene data.
- the detection result at least includes the position information of the target object relative to the test vehicle; the perception result generated by the automatic driving perception system of the test vehicle is then obtained, and the detection result is compared with the perception result to obtain the test result. Specifically, the detection result is taken as the ground-truth value, and the perception result is compared with it to obtain the difference (i.e., the test result), so as to determine whether the accuracy of the perception result meets the design requirements.
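- the comparison of the detection result (taken as ground truth) with the perception result can be sketched as follows; the data layout (object identifiers mapped to positions relative to the test vehicle, in metres), the 0.5 m error threshold and the reported metrics are assumptions for illustration only.

```python
import numpy as np

def evaluate_perception(detections, perceptions, max_error_m=0.5):
    """Compare aerial-survey detections (ground truth) with the vehicle's
    perception results; field names and threshold are illustrative."""
    errors = []
    for obj_id, true_pos in detections.items():   # {id: (x, y) in metres relative to the test vehicle}
        if obj_id not in perceptions:
            errors.append(np.inf)                  # missed object
            continue
        diff = np.asarray(perceptions[obj_id], float) - np.asarray(true_pos, float)
        errors.append(float(np.linalg.norm(diff)))
    errors = np.asarray(errors)
    finite = np.isfinite(errors)
    return {
        "mean_position_error_m": float(errors[finite].mean()) if finite.any() else None,
        "miss_rate": float((~finite).mean()) if errors.size else 0.0,
        "meets_requirement": bool(finite.all() and np.all(errors <= max_error_m)),
    }
```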
- the objects around the test vehicle include dynamic traffic elements and static traffic elements, wherein the dynamic traffic elements include at least one of the following: motor vehicles, non-motor vehicles, pedestrians or animals, and the static traffic elements include at least one of the following: lane markings, barriers, or road edges.
- target recognition can be performed on the image of the traffic scene data according to the image features of the target objects to obtain the target objects around the test vehicle, wherein the image features include one or more of the color information, size information and texture information of the target objects.
- the area corresponding to the target object in the image of the traffic scene data is the area located in the foreground image of the image, and the foreground image has undergone morphological processing.
- the type of the target object can be determined according to the size of its connected area, wherein the target object includes a vehicle or an interference object, and the type of the target object includes at least a first type, a second type and a third type, the first type, the second type and the third type corresponding to vehicles of different sizes.
- the type of the target can be determined according to the width of the connected area and the road width in the image of the traffic scene data.
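- a possible sketch of this connected-area classification is given below, assuming a nadir view and using the road width in pixels to obtain a metres-per-pixel scale; the size thresholds, class labels and the 3.5 m lane width default are illustrative assumptions.

```python
import cv2

def classify_targets(foreground_mask, road_width_px, road_width_m=3.5):
    """Classify connected regions into three size classes by comparing their
    extent with the road width; thresholds are illustrative."""
    m_per_px = road_width_m / float(road_width_px)
    n, _, stats, _ = cv2.connectedComponentsWithStats(foreground_mask)
    targets = []
    for i in range(1, n):                                # label 0 is the background
        w_m = stats[i, cv2.CC_STAT_WIDTH] * m_per_px
        h_m = stats[i, cv2.CC_STAT_HEIGHT] * m_per_px
        size = max(w_m, h_m)
        if size < 1.0:
            kind = "interference"                        # too small to be a vehicle
        elif size < 5.5:
            kind = "first type (small vehicle)"
        elif size < 10.0:
            kind = "second type (medium vehicle)"
        else:
            kind = "third type (large vehicle)"
        targets.append((stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP], kind))
    return targets
```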
- the pixel positions of a reference point in at least two frames of images in the traffic scene data can be determined; relative distance information of the reference point with respect to another reference point is obtained, where the relative distance information is the distance between the two reference points in the real scene; the at least two frames of images are converted from two-dimensional coordinates to three-dimensional coordinates, and the actual displacement of the target object in the images is determined according to the pixel positions and the relative distance information; the change time corresponding to the actual displacement is determined according to the frame rate of the at least two frames of images, and the moving speed of the target is determined according to the change time and the actual displacement.
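- a simplified sketch of this speed estimation is shown below, assuming a flat road viewed from directly above so that a single metres-per-pixel scale derived from the two reference points suffices; the function arguments and example values are hypothetical.

```python
import numpy as np

def speed_from_reference(p1_px, p2_px, ref_a_px, ref_b_px, ref_dist_m, frame_gap, fps):
    """Estimate target speed from its pixel positions in two frames, using two
    reference points whose real-world separation ref_dist_m is known."""
    scale = ref_dist_m / np.linalg.norm(np.asarray(ref_a_px, float) - np.asarray(ref_b_px, float))
    displacement_m = np.linalg.norm(np.asarray(p2_px, float) - np.asarray(p1_px, float)) * scale
    elapsed_s = frame_gap / float(fps)        # change time derived from the frame rate
    return displacement_m / elapsed_s

# e.g. speed_from_reference((310, 420), (310, 372), (100, 50), (100, 530), 30.0, 12, 30.0)
```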
- the actual travel distance of the target object between two adjacent images of the traffic scene data can also be determined according to the width of the road on which the test vehicle is driving; the travel time of the target is determined according to the time difference between the two images, and the moving speed of the target is then determined according to the actual travel distance and the travel time.
- when determining the actual travel distance of the target object between the two images of the traffic scene data, specifically: the moving distance of the centroid of the target object is determined according to its centroid position in the two adjacent images, the moving distance of the centroid covering a number of pixels; the distance corresponding to each pixel in the image is determined according to the number of pixels corresponding to the road in the image and the actual width of the road; and the actual travel distance of the target is determined according to the moving distance of the centroid and the distance corresponding to each pixel.
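- the road-width-based variant can be sketched as follows; the argument names, units and the assumption of a uniform metres-per-pixel scale across the image are illustrative only.

```python
import numpy as np

def speed_from_road_width(centroid_1, centroid_2, road_width_px, road_width_m, dt_s):
    """Scale the centroid displacement between two adjacent frames by a
    metres-per-pixel factor derived from the known road width."""
    m_per_px = road_width_m / float(road_width_px)
    travel_m = np.linalg.norm(np.asarray(centroid_2, float) - np.asarray(centroid_1, float)) * m_per_px
    return travel_m / dt_s                      # moving speed = distance / time difference
```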
- correction processing can also be performed on the number of pixels corresponding to the road in the image, wherein the correction processing includes binarization processing and/or horizontal projection processing, both of which operate on the image information corresponding to the road in the image.
- the line type feature of the lane line on the road where the test vehicle is traveling can be obtained, the line type feature including line type information and color information; the lane line in the traffic scene data is then identified according to the line type feature.
- the lane line in the traffic scene data may also be identified based on edge detection based on edge features, wherein the edge features include gradient features and/or color features.
- the lane line is obtained by calculating the gradient of the image in the traffic scene data, and determining the edge of the lane line according to the change of the gradient.
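- a gradient-based lane line detection step along these lines can be sketched with Canny edges followed by a probabilistic Hough transform; all thresholds below are illustrative assumptions rather than the embodiment's parameters.

```python
import cv2
import numpy as np

def detect_lane_lines(image):
    """Find lane line segments where the image gradient changes sharply."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                 # lane line edges from gradient changes
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```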
- the targets around the test vehicle can also be identified by a pre-trained image recognition model: the image of the traffic scene data is input into the pre-trained image recognition model to obtain the target objects contained in the traffic scene data.
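- since the embodiment does not specify which pre-trained image recognition model is used, the sketch below uses an off-the-shelf torchvision detector purely as a stand-in; the model choice and the score threshold are assumptions for illustration.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Generic pre-trained detector standing in for the image recognition model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def recognize_targets(image_bgr, score_thresh=0.6):
    """Run the (assumed) pre-trained model on one traffic-scene image (a
    BGR numpy array) and keep confident detections."""
    rgb = image_bgr[:, :, ::-1].copy()               # BGR -> RGB
    with torch.no_grad():
        pred = model([to_tensor(rgb)])[0]
    keep = pred["scores"] >= score_thresh
    return pred["boxes"][keep].tolist(), pred["labels"][keep].tolist()
```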
- the binocular camera is used to capture a left-eye image and a right-eye image while the test vehicle is driving, and a depth image can be obtained from the left-eye image and the right-eye image; the left-eye image or the right-eye image is then segmented according to differences in the depth information of the depth image to determine obstacles.
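- a sketch of this binocular step is given below, assuming rectified grayscale image pairs and using semi-global block matching to obtain the disparity (depth) image; the matcher parameters and the crude road-plane estimate are illustrative assumptions.

```python
import cv2
import numpy as np

def obstacles_from_stereo(left_gray, right_gray, max_depth_jump=8.0):
    """Compute a disparity image from the left/right pair and segment regions
    whose disparity differs sharply from the estimated road surface."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    road_level = np.median(disparity[disparity > 0])      # crude road-plane estimate
    obstacle_mask = (np.abs(disparity - road_level) > max_depth_jump) & (disparity > 0)
    return obstacle_mask.astype(np.uint8) * 255
```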
- when testing the perception results, it is also possible to determine, according to the test results, the functional modules in the automatic driving perception system that do not meet the requirements, for example functional modules that do not meet the performance standard. Specifically, if the performance value of a perception result is lower than the performance standard, or its deviation from the performance standard exceeds a preset range, the corresponding functional module can be considered as not meeting the requirements. After determining the functional modules in the automatic driving perception system that do not meet the requirements, those functional modules are optimized.
- the functional module includes one or more of a target recognition module, a drivable area recognition module, a multi-target tracking module and a self-attitude detection module, and the optimization includes at least one of the following: optimizing the perception algorithm of the functional module, or selecting higher-precision sensors.
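- a minimal sketch of flagging functional modules for optimization is shown below; the metric names, the performance standards and the tolerance are placeholders, not the patent's own indicator set.

```python
def modules_to_optimize(test_results, standards, tolerance=0.05):
    """Flag functional modules whose measured performance falls below the
    performance standard by more than the allowed tolerance."""
    flagged = []
    for module, measured in test_results.items():    # e.g. {"target_recognition": 0.82, ...}
        required = standards.get(module)
        if required is not None and measured < required - tolerance:
            flagged.append(module)
    return flagged

# e.g. modules_to_optimize({"target_recognition": 0.82, "multi_target_tracking": 0.95},
#                          {"target_recognition": 0.90, "multi_target_tracking": 0.90})
```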
- FIG. 11 shows a schematic flowchart of another testing method for an automatic driving perception system provided by an embodiment of the present application.
- the test method of the automatic driving perception system can be applied to the unmanned aerial vehicle, test device or test vehicle provided in the above embodiment to complete the performance test of the automatic driving perception system in a natural state.
- the testing method of the automatic driving perception system includes steps S201 to S207.
- the traffic scene data is collected by the unmanned aerial vehicle following the flight of the test vehicle, the test vehicle includes an automatic driving system, and the automatic driving system includes an automatic driving perception system, and the automatic driving perception system is used to perceive the test vehicle surrounding environment to generate perception results.
- the preprocessing includes at least one of foreground extraction, color space conversion, image binarization, edge detection, morphological processing and setting the region of interest.
- the preprocessed traffic scene data is identified to obtain the dynamic traffic elements and static traffic elements; for the specific identification methods, refer to the above-mentioned embodiments.
- when the key indicators of the automatic driving perception system are qualified, step S207 is executed to determine that the automatic driving perception system is qualified; when the key indicators of the automatic driving perception system are not qualified, step S206 is executed.
- the hardware corresponding to the functional module can be optimized, such as selecting a higher-precision sensor, or the software algorithm can be optimized.
- the collected data is gathered in a natural state, thus realizing the performance test of the automatic driving perception system in the automatic driving system under natural conditions, thereby improving the safety of the automatic driving system in practical applications.
- the unmanned aerial vehicle 10 comprises a body 11, a gimbal 12 and a photographing device 13; the gimbal 12 is arranged on the body 11, the photographing device 13 is arranged on the gimbal 12, and the photographing device 13 is used for capturing images.
- the UAV 10 also includes a processor and a memory, the memory is used to store a computer program, and the processor is used to execute the computer program and, when executing the computer program, implement any one of the testing methods for the automatic driving perception system provided in the embodiments of the present application.
- FIG. 12 is a schematic block diagram of a testing device provided by an embodiment of the present application. As shown in FIG. 12, the test device 200 includes one or more processors 201 and a memory 202.
- the processor 201 may be, for example, a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU) or a digital signal processor (Digital Signal Processor, DSP), etc.
- the memory 202 can be a Flash chip, a read-only memory (ROM) disk, an optical disc, a USB flash drive, or a portable hard disk.
- the memory 202 is used to store a computer program; the processor 201 is used to execute the computer program and, when executing the computer program, perform any one of the testing methods for the automatic driving perception system provided in the embodiments of the present application, so as to realize the performance test of the automatic driving perception system in a natural state, thereby improving the safety of the automatic driving system in practical applications.
- the processor 201 is configured to execute the computer program and implement the following operations when executing the computer program:
- the test vehicle includes an automatic driving system, the automatic driving system includes an automatic driving perception system, and the automatic driving perception system is used to perceive the surrounding environment of the test vehicle to generate a perception result; the traffic scene data is collected by the unmanned aerial vehicle following the flight of the test vehicle, and the accuracy of the perception result of the automatic driving perception system is determined according to the traffic scene data.
- FIG. 13 is a schematic diagram of a vehicle provided by an embodiment of the present application.
- the vehicle 400 includes an automatic driving system 40 and a vehicle platform 41, the automatic driving system 40 is connected to the vehicle platform 41, the automatic driving system includes an automatic driving perception system, and the vehicle platform 41 includes various equipment and components of the vehicle body, etc.
- the automatic driving system 40 also includes one or more processors 401 and a memory 402.
- the processor 401 may be, for example, a micro-controller unit (Micro-controller Unit, MCU), a central processing unit (Central Processing Unit, CPU) or a digital signal processor (Digital Signal Processor, DSP), etc.
- the memory 402 can be a Flash chip, a read-only memory (ROM) disk, an optical disc, a USB flash drive, or a portable hard disk.
- the memory 402 is used to store a computer program; the processor 401 is used to execute the computer program and, when executing the computer program, perform any one of the testing methods for the automatic driving perception system provided in the embodiments of the present application, so as to realize the performance test of the automatic driving perception system in a natural state, thereby improving the safety of the automatic driving system in practical applications.
- the processor 401 is configured to execute the computer program and implement the following operations when executing the computer program:
- the test vehicle includes an automatic driving system, the automatic driving system includes an automatic driving perception system, and the automatic driving perception system is used to perceive the surrounding environment of the test vehicle to generate a perception result; the traffic scene data is collected by the unmanned aerial vehicle following the flight of the test vehicle, and the accuracy of the perception result of the automatic driving perception system is determined according to the traffic scene data.
- an embodiment of the present application also provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, the computer program includes program instructions, and a processor executes the program instructions to implement the steps of any one of the testing methods for the automatic driving perception system provided in the above-mentioned embodiments.
- the computer-readable storage medium may be an internal storage unit of the test device, drone or vehicle described in any of the foregoing embodiments, such as the memory or internal memory of the test device.
- the computer-readable storage medium can also be an external storage device of the test device, such as a plug-in hard disk equipped on the test device, a smart memory card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), etc.
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Aviation & Aerospace Engineering (AREA)
- Traffic Control Systems (AREA)
Abstract
The present invention relates to a test method and system (100) using aerial survey data for an autonomous driving perception system, and a storage medium. The test system (100) comprises an unmanned aerial vehicle (10), a test device (20) and a test vehicle (30); the test vehicle (30) comprises an automatic driving system, and the automatic driving system comprises an automatic driving perception system; the automatic driving perception system is used for perceiving the surrounding environment so as to generate a perception result; the unmanned aerial vehicle (10) can follow the test vehicle (30) in flight and collect traffic scene data while the test vehicle (30) is travelling; and the test device (20) communicates with the unmanned aerial vehicle (10) and the test vehicle (30), and is used for acquiring the traffic scene data and determining, according to the traffic scene data, the accuracy of the perception result.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2021/096993 WO2022246851A1 (fr) | 2021-05-28 | 2021-05-28 | Procédé et système de test faisant appel à des données de levé aérien pour système de perception de conduite autonome, et support de stockage |
| CN202180087990.5A CN116802581A (zh) | 2021-05-28 | 2021-05-28 | 基于航测数据的自动驾驶感知系统测试方法、系统及存储介质 |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2021/096993 WO2022246851A1 (fr) | 2021-05-28 | 2021-05-28 | Procédé et système de test faisant appel à des données de levé aérien pour système de perception de conduite autonome, et support de stockage |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022246851A1 true WO2022246851A1 (fr) | 2022-12-01 |
Family
ID=84229469
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2021/096993 Ceased WO2022246851A1 (fr) | 2021-05-28 | 2021-05-28 | Procédé et système de test faisant appel à des données de levé aérien pour système de perception de conduite autonome, et support de stockage |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN116802581A (fr) |
| WO (1) | WO2022246851A1 (fr) |
Cited By (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115840883A (zh) * | 2022-12-21 | 2023-03-24 | 上海高德威智能交通系统有限公司 | 感知信息验证方法、装置、电子设备及存储介质 |
| CN116088389A (zh) * | 2023-02-10 | 2023-05-09 | 深圳海星智驾科技有限公司 | 检测装置的控制方法及控制器、调控系统及工程车 |
| CN116488762A (zh) * | 2023-04-06 | 2023-07-25 | 北京易路行技术有限公司 | 一种路侧感知设备时钟同步精度检测方法、系统及装置 |
| CN118038386A (zh) * | 2024-04-12 | 2024-05-14 | 成都航空职业技术学院 | 一种高密度复杂交通场景下动态目标检测系统 |
| CN118484024A (zh) * | 2024-04-30 | 2024-08-13 | 广东警官学院(广东省公安司法管理干部学院) | 一种用于交通事故现场的无人机航拍参数计算方法 |
| CN118675136A (zh) * | 2024-07-25 | 2024-09-20 | 山东德鲁泰信息科技股份有限公司 | 一种基于物联网的车辆全息感知模型生成管理系统 |
| WO2024221150A1 (fr) * | 2023-04-24 | 2024-10-31 | 深圳市大疆创新科技有限公司 | Aéronef, ainsi que procédé et appareil de commande s'y rapportant |
| CN118991741A (zh) * | 2023-05-16 | 2024-11-22 | 比亚迪股份有限公司 | 一种自动泊车方法、电子设备、车辆及可读存储介质 |
| CN119049301A (zh) * | 2024-08-28 | 2024-11-29 | 山东交通学院 | 一种基于无人机航拍视频的车辆瞬时速度检测方法及设备 |
| CN119782762A (zh) * | 2024-12-02 | 2025-04-08 | 大卓智能科技有限公司 | 智能驾驶系统感知性能的评估方法和装置 |
| CN119806109A (zh) * | 2024-12-26 | 2025-04-11 | 酷睿程(北京)科技有限公司 | 检测自动控制系统的方法、控制方法、芯片、设备及介质 |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117148315B (zh) * | 2023-10-31 | 2024-01-26 | 上海伯镭智能科技有限公司 | 一种无人驾驶汽车运行检测方法及系统 |
| CN119023299B (zh) * | 2024-09-27 | 2025-07-01 | 广东汽车检测中心有限公司 | 一种汽车自动驾驶功能测试方法及系统 |
-
2021
- 2021-05-28 WO PCT/CN2021/096993 patent/WO2022246851A1/fr not_active Ceased
- 2021-05-28 CN CN202180087990.5A patent/CN116802581A/zh active Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070250260A1 (en) * | 2006-04-25 | 2007-10-25 | Honeywell International Inc. | Method and system for autonomous tracking of a mobile target by an unmanned aerial vehicle |
| KR20180083712A (ko) * | 2017-01-13 | 2018-07-23 | 주식회사 블루젠드론 | 무인비행기를 이용한 다 차로 차량 검지 시스템 및 방법 |
| CN108414238A (zh) * | 2018-03-09 | 2018-08-17 | 孙会鸿 | 自动泊车功能实车测试系统及测试方法 |
| CN110347182A (zh) * | 2019-07-23 | 2019-10-18 | 广汽蔚来新能源汽车科技有限公司 | 辅助驾驶装置、系统、无人机以及车辆 |
| CN112558608A (zh) * | 2020-12-11 | 2021-03-26 | 重庆邮电大学 | 一种基于无人机辅助的车机协同控制及路径优化方法 |
| CN112735164A (zh) * | 2020-12-25 | 2021-04-30 | 北京智能车联产业创新中心有限公司 | 测试数据构建方法及测试方法 |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115840883A (zh) * | 2022-12-21 | 2023-03-24 | 上海高德威智能交通系统有限公司 | 感知信息验证方法、装置、电子设备及存储介质 |
| CN116088389A (zh) * | 2023-02-10 | 2023-05-09 | 深圳海星智驾科技有限公司 | 检测装置的控制方法及控制器、调控系统及工程车 |
| CN116488762A (zh) * | 2023-04-06 | 2023-07-25 | 北京易路行技术有限公司 | 一种路侧感知设备时钟同步精度检测方法、系统及装置 |
| WO2024221150A1 (fr) * | 2023-04-24 | 2024-10-31 | 深圳市大疆创新科技有限公司 | Aéronef, ainsi que procédé et appareil de commande s'y rapportant |
| CN118991741A (zh) * | 2023-05-16 | 2024-11-22 | 比亚迪股份有限公司 | 一种自动泊车方法、电子设备、车辆及可读存储介质 |
| CN118038386A (zh) * | 2024-04-12 | 2024-05-14 | 成都航空职业技术学院 | 一种高密度复杂交通场景下动态目标检测系统 |
| CN118038386B (zh) * | 2024-04-12 | 2024-06-07 | 成都航空职业技术学院 | 一种高密度复杂交通场景下动态目标检测系统 |
| CN118484024A (zh) * | 2024-04-30 | 2024-08-13 | 广东警官学院(广东省公安司法管理干部学院) | 一种用于交通事故现场的无人机航拍参数计算方法 |
| CN118675136A (zh) * | 2024-07-25 | 2024-09-20 | 山东德鲁泰信息科技股份有限公司 | 一种基于物联网的车辆全息感知模型生成管理系统 |
| CN119049301A (zh) * | 2024-08-28 | 2024-11-29 | 山东交通学院 | 一种基于无人机航拍视频的车辆瞬时速度检测方法及设备 |
| CN119782762A (zh) * | 2024-12-02 | 2025-04-08 | 大卓智能科技有限公司 | 智能驾驶系统感知性能的评估方法和装置 |
| CN119806109A (zh) * | 2024-12-26 | 2025-04-11 | 酷睿程(北京)科技有限公司 | 检测自动控制系统的方法、控制方法、芯片、设备及介质 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN116802581A (zh) | 2023-09-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2022246851A1 (fr) | Procédé et système de test faisant appel à des données de levé aérien pour système de perception de conduite autonome, et support de stockage | |
| JP7073315B2 (ja) | 乗物、乗物測位システム、及び乗物測位方法 | |
| US12222218B2 (en) | Techniques for collaborative map construction between an unmanned aerial vehicle and a ground vehicle | |
| TWI703064B (zh) | 用於在不良照明狀況下定位運輸工具的系統和方法 | |
| US20230274653A1 (en) | Techniques for sharing mapping data between an unmanned aerial vehicle and a ground vehicle | |
| US10936869B2 (en) | Camera configuration on movable objects | |
| CN112740225B (zh) | 一种路面要素确定方法及装置 | |
| JP7252943B2 (ja) | 航空機のための対象物検出及び回避 | |
| CN108647638B (zh) | 一种车辆位置检测方法及装置 | |
| CN111874006B (zh) | 路线规划处理方法和装置 | |
| US20190101649A1 (en) | Systems, devices, and methods for autonomous vehicle localization | |
| WO2022246852A1 (fr) | Procédé de test de système de conduite automatique basé sur des données d'études aériennes, système de test et support de stockage | |
| CN110648389A (zh) | 基于无人机和边缘车辆协同的城市街景3d重建方法和系统 | |
| CN111856491A (zh) | 用于确定车辆的地理位置和朝向的方法和设备 | |
| CN111275015A (zh) | 一种基于无人机的电力巡线电塔检测识别方法及系统 | |
| CN117576652B (zh) | 道路对象的识别方法、装置和存储介质及电子设备 | |
| CN114998436B (zh) | 对象标注方法、装置、电子设备及存储介质 | |
| KR20200142315A (ko) | 도로 네트워크를 갱신하는 방법 및 장치 | |
| WO2018149539A1 (fr) | Procédé et appareil d'estimation d'une plage d'un objet mobile | |
| CN116935281A (zh) | 基于雷达和视频的机动车道异常行为在线监测方法及设备 | |
| CN112446915A (zh) | 一种基于图像组的建图方法及装置 | |
| CN119007132A (zh) | 基于无人机的道路交通巡检方法、无人机、设备和介质 | |
| US12135565B2 (en) | Adaptive sensor control | |
| CN111833443A (zh) | 自主机器应用中的地标位置重建 | |
| CN114240769A (zh) | 一种图像处理方法以及装置 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21942422 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 202180087990.5 Country of ref document: CN |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 21942422 Country of ref document: EP Kind code of ref document: A1 |