
WO2024016877A1 - Roadside sensing simulation system for vehicle-road collaboration - Google Patents


Info

Publication number
WO2024016877A1
Authority
WO
WIPO (PCT)
Prior art keywords
simulation
unit
data
vehicle
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2023/098821
Other languages
French (fr)
Chinese (zh)
Inventor
王亚飞
周志松
邬明宇
刘旭磊
李泽星
张睿韬
章翼辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiao Tong University
Original Assignee
Shanghai Jiao Tong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiao Tong University
Publication of WO2024016877A1


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • In reality, due to the high cost of lidar, the roadside layout must be optimized so that the effective coverage of a single lidar is used as fully as possible. On roads where RSUs are installed on only one side, the large differences in vehicle shapes mean that small vehicles may be occluded by large ones, which challenges the beyond-line-of-sight capability that vehicle-road collaboration is meant to provide. The simplest and most effective way to reduce the lidar blind spots caused by such occlusion is to raise the lidar installation height. Determining the minimum installation height requires comprehensive testing across lidar parameters, road environment parameters, vehicle parameters, and other conditions; doing this through real road tests is unrealistic, but it can be done simply and intuitively with the roadside perception simulation system proposed here.
  • Object recognition based on lidar point cloud data is unaffected by ambient light and therefore more robust, so it plays an important role in vehicle-road collaboration.
  • However, recognition from point clouds using deep learning methods is even more difficult than image recognition.
  • The process of manually producing label data is extremely difficult. A simulation system can generate large amounts of label data quickly and accurately, but whether simulated data can replace real data still needs to be verified experimentally.
  • Four groups of experiments are designed to verify the simulated data: the first group trains on real data and tests on real data.
  • The second group trains on simulated data and tests on simulated data.
  • The third group trains on real data and tests on simulated data.
  • The fourth group trains on simulated data and tests on real data.
  • All four experiments used the same training network, and in each case the training set and test set were split at a ratio of 4:1. The final results are shown in Table 1 below:
  • Precision is the recognition precision, measured relative to the samples detected in the test set;
  • Recall is the recall rate, measured relative to the entire test set;
  • The F1 score is the harmonic mean of the precision and recall rates, from Table 2
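The metric definitions above are the standard ones. As a quick sketch (the counts here are illustrative, not values from the patent's tables):

```python
def precision_recall_f1(true_pos: int, false_pos: int, false_neg: int):
    """Precision over the detected samples, recall over the whole test set,
    and F1 as the harmonic mean of the two."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 8 correct detections, 2 false alarms, 2 missed objects
p, r, f1 = precision_recall_f1(8, 2, 2)
```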
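The occlusion analysis above reduces to similar-triangle geometry. As an illustrative sketch (not taken from the patent; the flat-road, single-occluder model is an assumption), the ground-level blind zone behind an occluding vehicle shrinks as the lidar is mounted higher:

```python
def blind_zone_length(lidar_height_m: float,
                      occluder_height_m: float,
                      occluder_distance_m: float) -> float:
    """Length of the lidar shadow on the ground behind an occluding vehicle.

    A ray from the sensor (height H) grazing the occluder's top (height h,
    at horizontal distance d) reaches the ground at d*H/(H-h), so the
    shadow beyond the occluder has length d*h/(H-h).
    """
    if lidar_height_m <= occluder_height_m:
        return float("inf")  # sensor not above the occluder: unbounded shadow
    return (occluder_distance_m * occluder_height_m
            / (lidar_height_m - occluder_height_m))

# Raising the sensor from 3 m to 6 m: a 2 m-tall truck 10 m away casts a
# 20 m shadow at 3 m mounting height, but only a 5 m shadow at 6 m.
```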

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to the technical field of roadside sensing simulation, and provides a roadside sensing simulation system for vehicle-road collaboration. The roadside sensing simulation system for vehicle-road collaboration comprises a simulation platform module, a simulation framework module, a middleware module, and a node module; the simulation platform module comprises a graphics engine unit and a physics engine unit, the physics engine unit is connected to the graphics engine unit, and the graphics engine unit and the physics engine unit are both connected to the simulation framework module; the simulation framework module comprises a simulation environment unit, a dynamic scene unit, a roadside sensor unit, a positioning simulation unit, a communication simulation unit, a dynamics simulation unit, etc. According to the system, sensors are deployed on a roadside to send collected road surface information to a vehicle via V2X communication, so that the vehicle has a beyond-visual-range sensing capability; and the problems of RSU configuration and sample data generation can be well solved by constructing the roadside sensing simulation system.

Description

A Roadside Perception Simulation System for Vehicle-Road Collaboration

Technical Field

The present invention relates to the technical field of roadside perception simulation, and specifically to a roadside perception simulation system for vehicle-road collaboration.

Background Art

Intelligent transportation systems (ITS) can effectively improve the safety and efficiency of road traffic through artificial intelligence and information and communication technology, and have been widely recognized. An ITS consists of two parts, the "smart car" and the "smart road". Vehicle-road collaboration is the advanced stage of ITS development: it realizes communication between vehicles and between vehicles and roadside systems, so that vehicles can better perceive the surrounding environment and receive assisted-driving information, and road supervision departments can handle traffic accidents more effectively.

Roadside perception is an important part of vehicle-road collaborative application development. By deploying sensors on the roadside and sending the collected road information to vehicles via V2X communication, vehicles gain beyond-line-of-sight perception. In practice, achieving optimal roadside perception often requires a different RSU configuration for each scenario, and RSU selection and installation is a time-consuming and labor-intensive process. In addition, the identification of traffic participants is the core of roadside perception: machine-learning-based recognition algorithms require large amounts of labeled data, and manual labeling has proven to be extremely inefficient. With the continuous improvement of computer hardware in recent years, applying simulation technology to intelligent transportation has become a necessary means for R&D institutions to accelerate development.

At present, with the rapid development of intelligent transportation, simulation technology plays an increasingly important role in the field. Many simulation applications and studies already target autonomous driving and vehicle-road collaboration, yet simulation for roadside perception remains rarely explored, even though it is an indispensable technology for vehicle-road collaborative application development.

Summary of the Invention

(1) Technical Problems Solved

In view of the shortcomings of the prior art, the present invention provides a roadside perception simulation system for vehicle-road collaboration. The system deploys sensors on the roadside and sends the collected road information to vehicles via V2X communication, giving vehicles beyond-line-of-sight perception. Building this roadside perception simulation system solves the problems of RSU configuration and sample data generation.

(2) Technical Solutions

To achieve the above objectives, the present invention is realized through the following technical solution: a roadside perception simulation system for vehicle-road collaboration, comprising a simulation platform module, a simulation framework module, a middleware module, and a node module. The simulation platform module comprises a graphics engine unit and a physics engine unit; the physics engine unit is connected to the graphics engine unit, and both are connected to the simulation framework module. The simulation framework module comprises a simulation environment unit, a dynamic scene unit, a roadside sensor unit, a positioning simulation unit, a communication simulation unit, a dynamics simulation unit, etc. The middleware module comprises external communication units such as ROS and YARP. The node module comprises a vehicle control unit, a data processing unit, etc. There is a two-way communication connection between the middleware module and the simulation framework module, and between the middleware module and each unit in the node module.
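The two-way middleware connections described above can be sketched as a minimal publish/subscribe bus (illustrative names only; this is not LGSVL's, ROS's, or YARP's actual API):

```python
class MiddlewareBus:
    """Toy stand-in for the middleware layer (e.g. ROS/YARP topics)."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers.get(topic, []):
            callback(message)


# The simulation framework publishes roadside sensor frames; a node-module
# data-processing unit consumes them, and could publish vehicle-control
# commands back over the same bus (hence the two-way connection).
bus = MiddlewareBus()
frames = []
bus.subscribe("roadside/point_cloud", frames.append)
bus.publish("roadside/point_cloud", {"frame_id": 0, "points": []})
```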

Preferably, the roadside perception simulation system is developed on the basis of LGSVL. The custom scene function is used to build a simulation environment suited to roadside perception, the custom vehicle and sensor model functions are used to create the roadside perception unit, and custom communication content is used to collect and transmit roadside perception data.

Preferably, the roadside perception simulation system includes simulation scene construction, which is composed of a static environment unit, a dynamic traffic unit, and a roadside unit;

The static environment unit mainly includes the lanes used by vehicles, the buildings in the scene, and the green plants, street lights, etc. in the area. These constitute the objective environment of the simulation scene and do not change with other conditions during simulation testing;

The dynamic traffic unit is a key component of the simulation test scenario. It mainly refers to the parts of the simulation with dynamic characteristics, such as traffic control, vehicle flow, and pedestrian flow, including traffic light simulation, motor vehicle simulation, and pedestrian simulation;

The roadside unit is the core component of vehicle-road collaboration, responsible for the collection, processing, and transmission of vehicle-road information. It is also the key research object of the roadside perception simulation system for vehicle-road collaboration.

Preferably, the static environment unit is modeled in Blender and then rendered in Unity with high-definition rendering to obtain the static environment of the simulation system.

Preferably, the dynamic traffic simulation scenes realized by the dynamic traffic unit are constructed mainly in three ways: from real traffic case data, by generalization of real case data, and from a microscopic traffic simulation system.

Preferably, the roadside unit includes a camera, a lidar, a millimeter-wave radar, an industrial computer, etc.

Preferably, a data collection and construction method for the roadside perception simulation system for vehicle-road collaboration includes the following steps:

S1. Simulated point cloud data generation

Following the scanning pattern of a real lidar, the emission of each lidar ray is simulated and intersected with all objects in the scene; if an intersection point exists within the lidar's maximum detection range, the corresponding point cloud coordinates are returned. Assuming the simulated lidar has L scan lines, a horizontal resolution of R, and a horizontal scanning range of 360°, the number N of rays emitted per frame is:

N = L × 360 / R

If the detection distance is D, the pseudocode for generating simulated point cloud data in the scene is:

From the above formula and pseudocode it can be seen that when the lidar frequency is high, the scene is complex, and the models are sufficiently detailed, the amount of computation for simulated ray intersection is extremely large. Taking a 64-line lidar with a horizontal resolution of 0.4° at 10 Hz as an example, 576,000 lidar rays are emitted per second, and each ray must additionally be tested against every object model in the scene apart from the lidar itself. To achieve real-time simulation, CPU parallelism or GPU computing can be used to improve efficiency; LGSVL uses the GPU to compute point cloud data;
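A sketch of the ray-budget arithmetic behind the worked example above (the helper names are ours, not the patent's):

```python
def rays_per_frame(lines: int, horizontal_resolution_deg: float) -> int:
    """N = L * 360 / R for a full 360-degree horizontal scan."""
    return round(lines * 360 / horizontal_resolution_deg)


def rays_per_second(lines: int, horizontal_resolution_deg: float,
                    hz: float) -> int:
    return round(rays_per_frame(lines, horizontal_resolution_deg) * hz)


# The example from the text: a 64-line lidar at 0.4 deg resolution emits
# 57,600 rays per frame, i.e. 576,000 rays per second at 10 Hz, and every
# ray must be intersection-tested against the whole scene.
```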

S2. Ground-truth data generation and processing

Once simulated point cloud data is available, matching ground-truth data is generally also needed to serve as the data set for recognition model training. The ground-truth data corresponds to the manual label data of real data; its content includes the position, orientation, bounding box size, speed, and type of each identifiable object. Unlike the manual labeling process, the ground truth is already known to the simulation system, so it only needs to be synchronized and output together with the point cloud data, which greatly improves labeling efficiency;

S3. Simulation data output

Since the simulated point cloud data and the ground-truth data are collected by different sensors, the current ROS time is used to name each frame of point cloud data and ground-truth data so that the files of each frame can be matched. For example, if the current ROS time is n.m seconds, the point cloud data file collected at that moment is saved as nm.pcd and the ground-truth data file as nm.txt. Importing the simulated point cloud data and ground-truth data of the same frame into RViz for display yields the output simulation data.
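A sketch of that timestamp-based naming scheme (a hypothetical helper; the millisecond precision is an assumption):

```python
def frame_filenames(ros_time_s: float, precision: int = 3) -> tuple:
    """ROS time n.m -> basename 'nm', pairing nm.pcd (point cloud) with
    nm.txt (ground truth) for the same frame."""
    base = f"{ros_time_s:.{precision}f}".replace(".", "")
    return (base + ".pcd", base + ".txt")


# e.g. a frame stamped at ROS time 1234.500 s pairs '1234500.pcd'
# with '1234500.txt', so the two files match frame-for-frame.
```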

Preferably, besides the position coordinates, the real point cloud data in step S1 carries another key piece of information: the reflection intensity. The reflection intensity mainly reflects how strongly different physical materials reflect the near-infrared light used by the lidar. The simulated point cloud data must therefore also provide an intensity value; in LGSVL, the metallic and color values of the model material are obtained and normalized to yield an intensity value in the range 0 to 255.
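How LGSVL weighs the metallic and color values is not specified here; as a sketch under an assumed equal weighting (an illustration, not LGSVL's actual formula):

```python
def simulated_intensity(metallic: float, rgb: tuple) -> int:
    """Map a material's metallic value and color (all components in 0..1)
    to a lidar intensity in 0..255.

    The 50/50 mix of metallic value and color luminance below is an
    assumption made for illustration only.
    """
    luminance = sum(rgb) / 3.0
    mixed = 0.5 * metallic + 0.5 * luminance
    return round(255 * min(max(mixed, 0.0), 1.0))
```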

Preferably, the ground-truth data in step S2 is generated by creating a new ground-truth sensor in LGSVL. To match the ground-truth data with the point cloud data, the configuration parameters of the ground-truth sensor must be kept consistent with those of the lidar sensor, such as position and attitude, effective range, and frequency.
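That consistency requirement is easiest to enforce by deriving both sensor configurations from a single source (parameter names and values below are hypothetical, not LGSVL's configuration schema):

```python
# One shared definition keeps the ground-truth sensor aligned with the lidar.
LIDAR_CONFIG = {
    "position_xyz": (0.0, 6.0, 0.0),   # mount pose (hypothetical values)
    "rotation_rpy": (0.0, 0.0, 0.0),
    "max_range_m": 100.0,              # effective range
    "frequency_hz": 10,                # frames per second
}

# Copy, never retype: any mismatch in pose, range, or frequency would
# break the frame-level matching of nm.pcd and nm.txt in step S3.
GROUND_TRUTH_CONFIG = dict(LIDAR_CONFIG)
```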

(3) Beneficial Effects

The present invention provides a roadside perception simulation system for vehicle-road collaboration, with the following beneficial effects:

1. The present invention provides a roadside perception simulation system for vehicle-road collaboration, developed on top of the autonomous driving simulation software LGSVL. The development covers the simulation environment, the roadside unit, and data collection and communication. With the help of the simulation environment, the relationship between lidar mounting height and road point cloud coverage is analyzed, providing a reference for the actual installation position of the lidar. Cross-validation between a vehicle recognition model trained on point cloud data output by the simulation environment and a model trained on real data shows that the system's simulation of the lidar and the environment reproduces reality to a high degree, yielding better simulation results.

2. The present invention provides a roadside perception simulation system for vehicle-road collaboration. By deploying sensors at the roadside and sending the collected road information to vehicles via V2X communication, vehicles gain beyond-line-of-sight perception. Building this roadside perception simulation system effectively solves the problems of RSU configuration and sample data generation.

Description of the Drawings

Figure 1 is a schematic diagram of the structure of the roadside perception simulation system of the present invention;

Figure 2 is an overall plan view of the simulated scene of the present invention.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

Embodiment:

As shown in Figures 1-2, an embodiment of the present invention provides a roadside perception simulation system for vehicle-road collaboration, comprising a simulation platform module, a simulation framework module, a middleware module, and a node module. The simulation platform module comprises a graphics engine unit and a physics engine unit; the physics engine unit is connected to the graphics engine unit, and both the graphics engine unit and the physics engine unit are connected to the simulation framework module. The simulation framework module comprises a simulated environment unit, a dynamic scene unit, a roadside sensor unit, a positioning simulation unit, a communication simulation unit, a dynamics simulation unit, and the like. The middleware module comprises external communication units such as ROS and YARP, and the node module comprises a vehicle control unit, a data processing unit, and the like. The middleware module is connected for two-way communication with the simulation framework module and with each unit within the node module.

The roadside perception simulation system is developed on the basis of LGSVL and adapted for roadside perception: the custom-scene function is used to develop a simulated environment suited to roadside perception, the custom vehicle and sensor model functions are used to create roadside perception units, and custom communication content is used to collect and transmit roadside perception data.

The roadside perception simulation system includes simulated scene construction, which consists of a static environment unit, a dynamic traffic unit, and a roadside unit.

The static environment unit mainly comprises the lanes in which vehicles travel, the buildings in the scene, and the greenery, street lights, and the like within the area. These constitute the objective environment of the simulated scene and do not change as other conditions change during simulation testing.

The dynamic traffic unit is a key component of the simulation test scenario. It mainly refers to the parts of the simulation with dynamic characteristics, such as traffic control, vehicle flow, and pedestrian flow, including traffic-light simulation, motor vehicle simulation, and pedestrian simulation.

The roadside unit is the core component of vehicle-road collaboration, responsible for the collection, processing, and transmission of vehicle-road information; it is also the main research object of the roadside perception simulation system for vehicle-road collaboration.

The static environment unit is modeled in Blender and rendered with Unity's high-definition rendering to obtain the static environment of the simulation system. The dynamic traffic simulation scenes realized by the dynamic traffic unit are constructed mainly in three ways: from real traffic case data, from generalization of real case data, and from a microscopic traffic simulation system. The roadside unit comprises a camera, a lidar, a millimeter-wave radar, an industrial PC, and the like.

The data collection and construction method of this roadside perception simulation system for vehicle-road collaboration comprises the following steps:

S1. Simulated point cloud data generation

Following the scanning pattern of a real lidar, the emission of each real radar ray is simulated and intersected with all objects in the scene; if an intersection exists within the lidar's maximum detection range, the corresponding point cloud coordinates are returned. Assuming the simulated lidar has L scan lines, a horizontal resolution of R, and a horizontal scanning range of 360°, the number of rays N emitted per frame is:

N = L × 360 / R

If the detection distance is D, the pseudocode for generating simulated point cloud data within the scene is as follows (the pseudocode is given as a figure in the original):

From the formula and pseudocode above, it can be seen that when the lidar frequency is high, the scene is complex, and the models are sufficiently detailed, the computational cost of simulated ray intersection is enormous. Taking a 64-line lidar with a horizontal resolution of 0.4° at 10 Hz as an example, 576,000 lidar rays are emitted every second, and each ray must additionally be tested against every object model in the scene other than the lidar itself. To achieve real-time simulation, CPU parallelism or GPU computation can be used to improve computational efficiency; LGSVL computes point cloud data on the GPU;
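As a minimal CPU-side illustration of the ray-casting procedure above, the following sketch casts L × 360/R rays per frame against a toy scene of spheres. The vertical field of view and the sphere-based scene are assumptions for illustration only; LGSVL performs the equivalent intersection tests on the GPU against full scene meshes.

```python
import math

def ray_sphere_hit(origin, direction, center, radius, max_dist):
    """Distance to the nearest hit within max_dist, or None on a miss.

    `direction` must be a unit vector, so the quadratic's leading
    coefficient is 1.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2  # nearest intersection
    return t if 0 < t <= max_dist else None

def scan_frame(origin, lines, res_deg, max_dist, spheres,
               vfov=(-15.0, 15.0)):
    """Cast lines * (360 / res_deg) rays and return the hit points."""
    points = []
    for i in range(lines):
        pitch = math.radians(
            vfov[0] + i * (vfov[1] - vfov[0]) / (lines - 1))
        for j in range(int(360 / res_deg)):
            yaw = math.radians(j * res_deg)
            d = (math.cos(pitch) * math.cos(yaw),
                 math.cos(pitch) * math.sin(yaw),
                 math.sin(pitch))
            # nearest hit over all scene objects, if any
            t = min((h for s in spheres
                     if (h := ray_sphere_hit(origin, d, s[0], s[1],
                                             max_dist)) is not None),
                    default=None)
            if t is not None:
                points.append(tuple(origin[k] + t * d[k]
                                    for k in range(3)))
    return points
```

Even this toy version makes the cost visible: every one of the N rays per frame loops over every object, which is why LGSVL moves the computation to the GPU.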

S2. Ground-truth data generation and processing

Once simulated point cloud data is available, it generally needs to be paired with ground-truth data to serve as a dataset for recognition-model training. Ground-truth data corresponds to the manually labeled data in real datasets; it includes each identifiable object's position, orientation, bounding-box size, velocity, type, and so on. Unlike the manual labeling process, the ground truth is already known to the simulation system, so it only needs to be output in sync with the point cloud data, which greatly improves labeling efficiency;

S3. Simulation data output

Since the simulated point cloud data and the ground-truth data are collected by different sensors, the current ROS time is used to name each frame of point cloud data and ground-truth data so that the files of each frame can be matched to one another. For example, if the current ROS time is n.m s, the point cloud data file collected at that moment is saved as nm.pcd and the ground-truth data file as nm.txt. Importing the simulated point cloud data and ground-truth data of the same frame into RViz for display yields the output simulation data.
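The frame-matching scheme above can be sketched as follows: both files of a frame share a basename derived from the ROS timestamp, so a .pcd can always be paired with its .txt label. The exact concatenation format (seconds followed by zero-padded nanoseconds) is an assumption based on the "n.m s → nm.*" example in the text.

```python
# Hedged sketch of timestamp-based frame naming; the padding scheme
# is an assumption, not taken from the patent.
def frame_basename(secs, nsecs):
    """Concatenate ROS seconds and zero-padded nanoseconds."""
    return f"{secs}{nsecs:09d}"

def frame_filenames(secs, nsecs):
    """Matching point cloud and ground-truth filenames for one frame."""
    base = frame_basename(secs, nsecs)
    return base + ".pcd", base + ".txt"
```

Because both names are built from the same timestamp, pairing a point cloud with its label reduces to stripping the extension.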

Experimental Testing

In reality, because lidar is expensive, the roadside layout must be optimized so that the effective coverage area of each lidar is used as fully as possible. On roads where RSUs are deployed on one side only, the large differences in vehicle shapes mean that a small vehicle may be occluded by a large one, challenging the beyond-line-of-sight capability that vehicle-road collaboration provides. The simplest and most effective way to reduce such lidar blind zones caused by large-vehicle occlusion is to increase the lidar's mounting height. Determining the minimum mounting height requires testing under a combination of conditions, including lidar parameters, road environment parameters, and vehicle parameters; doing so through real road tests is impractical, but it can be done simply and intuitively with the roadside perception simulation system proposed here.
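Although the patent determines the minimum mounting height through simulation, a rough geometric lower bound for the simplest case can be sketched with similar triangles; this formula and its variable names are illustrative and not part of the original method.

```python
# Hypothetical back-of-envelope check (not from the patent): the
# minimum lidar mounting height h needed to see over an occluding
# vehicle of height h_v at lateral distance d_v, so that a target at
# ground level at distance d_t (d_t > d_v) remains visible. The sight
# line from (0, h) grazing the occluder's top at (d_v, h_v) reaches
# the ground at d_t, which by similar triangles gives:
#     h = h_v * d_t / (d_t - d_v)
def min_lidar_height(h_v, d_v, d_t):
    assert d_t > d_v > 0
    return h_v * d_t / (d_t - d_v)
```

For instance, seeing past a 4 m truck 20 m away to a point 40 m away already requires an 8 m mast, which is why the minimum height has to be checked against realistic vehicle and road parameters rather than guessed.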

Compared with object recognition from two-dimensional camera images, object recognition based on lidar point cloud data is unaffected by ambient light and is therefore more robust, giving it an important role in vehicle-road collaboration. Correspondingly, because a single frame contains a huge amount of point cloud data, point-cloud-based recognition with deep learning methods is at least as difficult as image recognition; in particular, producing label data manually is extremely difficult. A simulation system can generate large amounts of label data quickly and accurately, but whether simulated data can substitute for real data still needs to be verified experimentally.

Four groups of experiments were designed to validate the simulated data: group 1 trains on real data and tests on real data; group 2 trains on simulated data and tests on simulated data; group 3 trains on real data and tests on simulated data; group 4 trains on simulated data and tests on real data. All four groups use the same training network, with the training and test sets split 4:1. The final results are shown in Table 2 below:

Table 2: Test comparison between simulated data and real data

Here, Precision is the recognition precision, computed over the samples detected in the test set; Recall is the recall rate, computed over the entire test set; and the F1 score is the harmonic mean of precision and recall. Table 2 shows that whether models trained on real data are tested on simulated data or models trained on simulated data are tested on real data, the resulting evaluation metrics are close to those of the purely real-data case. It follows that the simulated point cloud data output by the simulation system reproduces the characteristics of real data well.
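The three metrics named above have their standard definitions in terms of true positives (tp), false positives (fp), and false negatives (fn):

```python
# Standard detection metrics as used in the evaluation above.
def precision(tp, fp):
    """Fraction of detected samples that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of all ground-truth samples that were detected."""
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)
```

The harmonic mean penalizes imbalance: a model with high precision but poor recall (or vice versa) scores markedly lower than its arithmetic average would suggest.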

Although embodiments of the present invention have been shown and described, a person of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principles and spirit of the invention; the scope of the invention is defined by the appended claims and their equivalents.

Claims (9)

1. A roadside perception simulation system for vehicle-road collaboration, characterized by comprising a simulation platform module, a simulation framework module, a middleware module, and a node module, wherein the simulation platform module comprises a graphics engine unit and a physics engine unit; the physics engine unit is connected to the graphics engine unit, and both the graphics engine unit and the physics engine unit are connected to the simulation framework module; the simulation framework module comprises a simulated environment unit, a dynamic scene unit, a roadside sensor unit, a positioning simulation unit, a communication simulation unit, a dynamics simulation unit, and the like; the middleware module comprises external communication units such as ROS and YARP; the node module comprises a vehicle control unit, a data processing unit, and the like; and the middleware module is connected for two-way communication with the simulation framework module and with each unit within the node module.

2. The roadside perception simulation system for vehicle-road collaboration according to claim 1, characterized in that the system is developed on the basis of LGSVL and adapted for roadside perception, wherein the custom-scene function is used to develop a simulated environment suited to roadside perception, the custom vehicle and sensor model functions are used to create roadside perception units, and custom communication content is used to collect and transmit roadside perception data.

3. The roadside perception simulation system for vehicle-road collaboration according to claim 1, characterized in that the system includes simulated scene construction consisting of a static environment unit, a dynamic traffic unit, and a roadside unit; the static environment unit mainly comprises the lanes in which vehicles travel, the buildings in the scene, and the greenery, street lights, and the like within the area, which constitute the objective environment of the simulated scene and do not change as other conditions change during simulation testing; the dynamic traffic unit is a key component of the simulation test scenario and mainly refers to the parts of the simulation with dynamic characteristics, such as traffic control, vehicle flow, and pedestrian flow, including traffic-light simulation, motor vehicle simulation, and pedestrian simulation; and the roadside unit is the core component of vehicle-road collaboration, responsible for the collection, processing, and transmission of vehicle-road information, and is the main research object of the roadside perception simulation system for vehicle-road collaboration.

4. The roadside perception simulation system for vehicle-road collaboration according to claim 3, characterized in that the static environment unit is modeled in Blender and rendered with Unity's high-definition rendering to obtain the static environment of the simulation system.

5. The roadside perception simulation system for vehicle-road collaboration according to claim 3, characterized in that the dynamic traffic simulation scenes realized by the dynamic traffic unit are constructed mainly from real traffic case data, from generalization of real case data, and from a microscopic traffic simulation system.

6. The roadside perception simulation system for vehicle-road collaboration according to claim 3, characterized in that the roadside unit comprises a camera, a lidar, a millimeter-wave radar, an industrial PC, and the like.

7. A data collection and construction method for a roadside perception simulation system for vehicle-road collaboration, characterized by comprising the following steps:

S1. Simulated point cloud data generation

Following the scanning pattern of a real lidar, the emission of each real radar ray is simulated and intersected with all objects in the scene; if an intersection exists within the lidar's maximum detection range, the corresponding point cloud coordinates are returned. Assuming the simulated lidar has L scan lines, a horizontal resolution of R, and a horizontal scanning range of 360°, the number of rays N emitted per frame is:

N = L × 360 / R

If the detection distance is D, the pseudocode for generating simulated point cloud data within the scene is as follows (the pseudocode is given as a figure in the original).

From the formula and pseudocode above, when the lidar frequency is high, the scene is complex, and the models are sufficiently detailed, the computational cost of simulated ray intersection is enormous: for a 64-line lidar with a horizontal resolution of 0.4° at 10 Hz, 576,000 lidar rays are emitted every second, and each ray must additionally be tested against every object model in the scene other than the lidar itself. To achieve real-time simulation, CPU parallelism or GPU computation can be used to improve computational efficiency; LGSVL computes point cloud data on the GPU;

S2. Ground-truth data generation and processing

Once simulated point cloud data is available, it generally needs to be paired with ground-truth data to serve as a dataset for recognition-model training. Ground-truth data corresponds to the manually labeled data in real datasets; it includes each identifiable object's position, orientation, bounding-box size, velocity, type, and so on. Unlike the manual labeling process, the ground truth is already known to the simulation system, so it only needs to be output in sync with the point cloud data, which greatly improves labeling efficiency;

S3. Simulation data output

Since the simulated point cloud data and the ground-truth data are collected by different sensors, the current ROS time is used to name each frame of point cloud data and ground-truth data so that the files of each frame can be matched to one another; for example, if the current ROS time is n.m s, the point cloud data file collected at that moment is saved as nm.pcd and the ground-truth data file as nm.txt. Importing the simulated point cloud data and ground-truth data of the same frame into RViz for display yields the output simulation data.

8. The data collection and construction method according to claim 7, characterized in that in addition to position coordinates, the real point cloud data in step S1 contains a further key piece of information, the reflection intensity, which mainly reflects the reflectivity of different physical materials to the near-infrared light used by the lidar; the simulated point cloud data therefore also requires an intensity value, which LGSVL obtains by normalizing the metallic and color values of the model material into the range 0 to 255.

9. The data collection and construction method according to claim 7, characterized in that the ground-truth data in step S2 is generated by creating a new ground-truth data sensor in LGSVL; to match the ground-truth data with the point cloud data, the configuration parameters of the ground-truth sensor, such as position and attitude, effective range, and frequency, are kept consistent with those of the lidar sensor.
PCT/CN2023/098821 2022-07-22 2023-06-07 Roadside sensing simulation system for vehicle-road collaboration Ceased WO2024016877A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210871266.8 2022-07-22
CN202210871266.8A CN115292913A (en) 2022-07-22 2022-07-22 Vehicle-road-cooperation-oriented drive test perception simulation system

Publications (1)

Publication Number Publication Date
WO2024016877A1 true WO2024016877A1 (en) 2024-01-25

Family

ID=83824956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/098821 Ceased WO2024016877A1 (en) 2022-07-22 2023-06-07 Roadside sensing simulation system for vehicle-road collaboration

Country Status (2)

Country Link
CN (1) CN115292913A (en)
WO (1) WO2024016877A1 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117970832A (en) * 2024-01-31 2024-05-03 哈尔滨工业大学 A hybrid scenario simulation system for heterogeneous multi-unmanned systems
CN118070571A (en) * 2024-04-19 2024-05-24 中汽智联技术有限公司 Simulation method, device and storage medium of laser sensor
CN118191785A (en) * 2024-02-22 2024-06-14 江苏优探智能科技有限公司 Laser radar detection method and related equipment
CN118714704A (en) * 2024-08-28 2024-09-27 深圳市凯铭智慧建设科技有限公司 A method and system for energy-saving control of urban lighting based on edge computing
CN119212183A (en) * 2024-11-26 2024-12-27 常州星宇车灯股份有限公司 Test system and test method for intelligent headlights
CN119272535A (en) * 2024-12-06 2025-01-07 江汉大学 Method and device for constructing simulation scene based on intelligent agent and simulation physics engine
CN119475962A (en) * 2024-09-29 2025-02-18 合肥工业大学 A method for optimizing the deployment of roadside perception units under partial vehicle-road collaboration conditions
CN119918169A (en) * 2024-11-29 2025-05-02 西部科学城智能网联汽车创新中心(重庆)有限公司 An optimization method and device for autonomous driving simulation test
CN120183196A (en) * 2025-04-08 2025-06-20 中山大学·深圳 Connected vehicle-road collaborative testing system and testing method thereof
CN120186657A (en) * 2025-03-20 2025-06-20 北京市计量检测科学研究院 A wireless communication static testing method and system for vehicle-road collaboration

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115292913A (en) * 2022-07-22 2022-11-04 上海交通大学 Vehicle-road-cooperation-oriented drive test perception simulation system
CN116719054B (en) * 2023-08-11 2023-11-17 光轮智能(北京)科技有限公司 Virtual laser radar point cloud generation method, computer equipment and storage medium
CN118314424B (en) * 2024-06-05 2024-08-20 武汉理工大学 Vehicle-road collaborative self-advancing learning multi-mode verification method based on edge scene

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111443620A (en) * 2020-04-30 2020-07-24 重庆车辆检测研究院有限公司 Test equipment and test vehicle for intelligent vehicle-road coordination system
CN112199991A (en) * 2020-08-27 2021-01-08 广州中国科学院软件应用技术研究所 A simulation point cloud filtering method and system for vehicle-road cooperative roadside perception
CN112382079A (en) * 2020-09-21 2021-02-19 广州中国科学院软件应用技术研究所 Road side perception analog simulation method and system for vehicle-road cooperation
US20210406562A1 (en) * 2020-06-24 2021-12-30 Keysight Technologies, Inc. Autonomous drive emulation methods and devices
CN115292913A (en) * 2022-07-22 2022-11-04 上海交通大学 Vehicle-road-cooperation-oriented drive test perception simulation system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112198859B (en) * 2020-09-07 2022-02-11 西安交通大学 Method, system and device for testing automatic driving vehicle in vehicle ring under mixed scene
CN116529784A (en) * 2020-11-05 2023-08-01 德斯拜思有限公司 Method and system for adding lidar data
CN113256976B (en) * 2021-05-12 2022-08-05 中移智行网络科技有限公司 Vehicle-road cooperative system, analog simulation method, vehicle-mounted equipment and road side equipment
CN113868862A (en) * 2021-09-28 2021-12-31 一汽解放汽车有限公司 Vehicle in-loop testing method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUO YUN-PENG; ZOU KAI; CHEN SHENG-DONG; YUAN FENG: "Road-Side Sensing Simulation Toward Cooperative Vehicle Infrastructure System", COMPUTER SYSTEMS AND APPLICATIONS, ZHONGGUO KEXUEYUAN RUANJIAN YANJIUSUO, CN, vol. 30, no. 5, 28 April 2021 (2021-04-28), CN , pages 92 - 98, XP009552168, ISSN: 1003-3254, DOI: 10.15888/j.cnki.csa.007907 *


Also Published As

Publication number Publication date
CN115292913A (en) 2022-11-04


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23841944

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 23841944

Country of ref document: EP

Kind code of ref document: A1