CN118405130A - A method, system and medium for off-road environment perception and tracking and guiding vehicles - Google Patents
A method, system and medium for off-road environment perception and tracking and guiding vehicles
- Publication number
- CN118405130A (application CN202410834127.7A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- information
- point cloud
- target
- tracking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/14—Adaptive cruise control
- B60W30/16—Control of distance between vehicles, e.g. keeping a distance to preceding vehicle
- B60W30/165—Automatically following the path of a preceding lead vehicle, e.g. "electronic tow-bar"
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W2420/00—Indexing codes relating to the type of sensors based on the principle of their operation
- B60W2420/40—Photo, light or radio wave sensitive means, e.g. infrared sensors
- B60W2420/403—Image sensing, e.g. optical camera
- B60W2420/408—Radar; Laser, e.g. lidar
Abstract
The present invention discloses a method, system and medium for perceiving and tracking a guide vehicle in an off-road environment, in the technical field of vehicle-following control. The method is applied in off-road environments without positioning signals such as GPS: it accurately perceives the guide vehicle ahead and, on that basis, follows it autonomously with the guide vehicle as the moving target. The method comprises the following steps: providing perception sensors mounted on the autonomous following vehicle, the sensors comprising a camera, a 4D millimeter-wave radar and an FMCW lidar; acquiring detection information from the sensors; and executing, according to the detection information, an initial recognition flow, a continuous tracking flow or a target retrieval flow for the guide vehicle. Because tracking combines a camera, a 4D millimeter-wave radar and an FMCW lidar, the invention can follow the guide vehicle effectively under demanding conditions and harsh environments such as GPS denial, high speed, short following distance and heavy dust, remedying the deficiencies of the prior art.
Description
Technical Field
The present invention relates to the technical field of vehicle-following control, and in particular to a method, system and medium for perceiving an off-road environment and tracking a guide vehicle.
Background Art
An unmanned-vehicle target recognition and following system is an intelligent vehicle control system that uses advanced sensors, computer vision and artificial intelligence to let an unmanned vehicle autonomously follow a guide vehicle. Autonomous following can support autonomous logistics transport, reducing the number of drivers or remote operators while raising the level of automation in long-distance transport, and it has wide use in special transport applications.
In the prior art, guide-vehicle recognition and tracking is mainly implemented with TOF lidar, cameras, millimeter-wave radar, radio communication or GPS. Each of these following technologies has the following defects:
1. Recognition and tracking based on TOF lidar: TOF lidar identifies and tracks targets through geometric modeling, but its point cloud becomes sparse with distance, limiting long-range detection, while close-range tracking at high speed poses safety risks. Moreover, laser light cannot penetrate dust; in off-road environments the large dust clouds raised by a guide vehicle driving at speed can cause the TOF lidar to misidentify the target, or even fully occlude the guide vehicle so that target recognition fails entirely. See Figure 6, where a large number of points interfering with laser detection are clearly visible; under worse conditions the points representing dust and the points representing the guide vehicle merge into a single mass that cannot be segmented or detected.
2. Recognition and tracking based on visual images: visual recognition requires a transformation from the image coordinate system to the vehicle coordinate system, which introduces errors and is unfavorable for precise tracking control and high-speed following control. Vision sensors are strongly affected by illumination and occlusion, and especially by the heavy dust of off-road environments, so they are hard to apply in a high-speed, high-precision off-road following system; these adverse visual conditions are also shown in Figure 6. At night, conventional cameras are unreliable and expensive night-vision cameras would be required. Yet omitting cameras entirely deprives the system of stable feature recognition, and in particular makes self-recovery after target loss very unstable, so visual images must still be used in some reasonable form. The present application uses a 4D millimeter-wave radar combined with an FMCW lidar as the primary detection means, which already achieves a very high rate of stable tracking; to save cost, no expensive night-vision camera is added. Relying on visual images alone is also prone to false detections: a production car on a highway once mistook a photograph of a car on a billboard for a real vehicle and braked automatically, causing an accident. In this application, vision is therefore mainly auxiliary, used for start-up and for retrieval after target loss.
3. Recognition and tracking based on millimeter-wave radar: millimeter-wave radar tracks dynamic targets effectively, but its resolution is too low to resolve the target's outline, so it cannot accurately perceive the guide vehicle's size and center, severely degrading steering accuracy during high-speed following. It also has difficulty detecting and distinguishing static vehicles, so tracking-target initialization often fails, and once the target is lost it is hard to retrieve. Figure 7 shows a millimeter-wave detection case in which the lidar point cloud can still detect the rear of the lead vehicle while the millimeter-wave radar has lost the target. The millimeter-wave radar itself does not output a point cloud; in such cases the sensor's built-in algorithm may discard the lead vehicle's information, so the sensor's raw output is unreliable and cannot report all candidate guide-vehicle targets.
4. Target tracking based on position information transmitted by radio: this requires major modification of the guide vehicle, including installing a positioning system and a radio communication system, which limits generality and economy. There are also potential safety risks, such as radio transmission delay and electromagnetic interference; because of the communication delay, the reported guide-vehicle position is less accurate, which in turn degrades the following vehicle's control accuracy. Radio therefore cannot be used in the off-road conditions targeted by this application.
5. Target tracking based on the Global Positioning System: GPS can be used to track the guide vehicle, but the GPS signal is unstable off-road; in tunnels, forests and valleys it attenuates or drops out noticeably, making satellite positioning inaccurate and unable to satisfy the high-speed, high-precision following required in complex off-road environments. Under some conditions the satellite positioning signal is so distorted as to be unusable, so GPS also cannot be used in the off-road conditions targeted by this application. Other positioning systems, including BeiDou, suffer the same problem.
In summary: TOF lidar cannot handle the dust raised by high-speed driving off-road; cameras have difficulty determining target distance and also suffer from the dust problem; millimeter-wave radar has too low a resolution and tends to merge nearby objects into one; radio communication is expensive and suffers electromagnetic interference and latency; and GPS is unstable off-road and prone to positioning errors, degrading position accuracy.
When following at high speed and short distance off-road, the unmanned vehicle's trajectory control accuracy must be very high, because the road surface is poor: even a small trajectory deviation may lead into hazardous terrain such as puddles or cliffs, with serious consequences. Although the sensors above can be fused to some degree, fused perception still performs poorly in dust, rain, snow, fog and other harsh conditions.
Summary of the Invention
In view of the above problems in the prior art, the purpose of the present invention is to provide a method, system and medium for perceiving an off-road environment and tracking a guide vehicle that combines a camera, a 4D millimeter-wave radar and an FMCW lidar, solving the prior art's tendency to lose the target and its low accuracy in off-road conditions such as high speed, short distance and heavy dust.
To solve the above technical problems, the specific technical solutions of the present invention are as follows:
In one aspect, the present invention provides a method for perceiving an off-road environment and tracking a guide vehicle, comprising the following steps:
confirming the autonomous following vehicle and the guide vehicle;
providing sensors mounted on the autonomous following vehicle, the sensors comprising a camera, a 4D millimeter-wave radar and an FMCW lidar;
acquiring detection information from the sensors;
executing, according to the detection information, an initial recognition flow, a continuous tracking flow or a target retrieval flow for the guide vehicle.
In an improved solution, the initial recognition flow comprises: the guide vehicle enters the initial detection range and the guide vehicle's feature information is entered manually, whereupon the following vehicle's perception process enters the initial recognition state; if recognition fails, the feature information is re-entered or the guide vehicle's position is adjusted; if recognition succeeds, the continuous tracking state is entered and the guide vehicle is defined as the tracking target.
In an improved solution, the continuous tracking flow comprises: while in the continuous tracking state, if the target is not lost the state is maintained; if the target is lost, the target retrieval state is entered.
In an improved solution, the target retrieval flow comprises: if retrieval succeeds, the continuous tracking state is maintained; if retrieval fails, the vehicle stops and the flow returns to the initial recognition flow.
In an improved solution, the camera acquires image information and passes it to a computer, which implements target detection through deep learning, using a pre-trained data set and a target detection algorithm; the detection function recognizes the guide vehicle's features and generates and saves feature data;
the 4D millimeter-wave radar acquires 4D millimeter-wave point cloud data, yielding the target's range, bearing, velocity, width and height information; millimeter waves penetrate non-metallic matter such as dust, rain, snow and fog, which helps lock onto the guide vehicle;
the FMCW lidar acquires FMCW lidar point cloud data containing the range, bearing, velocity, width and height of every point. FMCW lidar outperforms TOF lidar across ranging, velocimetry, interference rejection, power (eye safety), signal-to-noise ratio and other aspects; it uses the Doppler effect to measure radial velocity directly and can obtain the velocity of every point in a cloud of millions. Thus, although FMCW lidar cannot penetrate dust, rain, snow or fog, the per-point velocity it reports effectively distinguishes dust from the guide vehicle, which also helps lock onto the guide vehicle.
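The dust-rejection idea above — dust quickly decelerates to the static background, so its Doppler velocity clusters near the apparent velocity of stationary scenery, while the guide vehicle's body points share a distinct, coherent radial velocity — can be sketched roughly as follows. This is a minimal illustration under assumed data layout, thresholds and median-based clustering, not the patented procedure.

```python
import numpy as np

def split_dust_and_vehicle(points, ego_speed, v_tol=1.5):
    """Separate airborne dust from a moving guide vehicle in one FMCW
    lidar frame using per-point radial (Doppler) velocity.

    points: (N, 4) array of [x, y, z, radial_velocity] in the sensor frame.
    ego_speed: forward speed of the following vehicle (m/s).
    v_tol: tolerance band (m/s) around the dominant non-static velocity.
    Returns (vehicle_points, other_points).
    """
    v = points[:, 3]
    static_v = -ego_speed                      # apparent velocity of stationary scene
    moving = points[np.abs(v - static_v) > v_tol]
    if moving.shape[0] == 0:
        return moving, points                  # nothing moving: all background/dust
    # Take the dominant coherent velocity among moving points as the vehicle.
    v_mode = np.median(moving[:, 3])
    vehicle_mask = np.abs(points[:, 3] - v_mode) <= v_tol
    return points[vehicle_mask], points[~vehicle_mask]
```

In practice the cluster would also be gated spatially; the velocity-only split here is just the core of why FMCW lidar can tell a dust cloud from a moving vehicle.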
In an improved solution, executing the initial recognition flow according to the detection information further comprises:
processing the FMCW lidar point cloud data and the 4D millimeter-wave point cloud data to remove the dust portion of the cloud, then projecting them into the image information for an initial fusion of image and point cloud, yielding fused point cloud data;
matching the manually entered guide-vehicle feature information against the fused point cloud data; if the match succeeds, acquiring the guide vehicle's position, structure, speed and direction information; if it fails, repositioning the guide vehicle within the initial detection range or re-entering the feature information.
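The projection step of this initial fusion can be sketched with a standard pinhole model: 3-D points from the lidar/radar are transformed into the camera frame and projected to pixels, and a fused cluster is associated with a camera detection when enough of its points fall inside the detection box. The intrinsics, extrinsics and overlap criterion below are illustrative assumptions, not the patent's actual calibration or matching rule.

```python
import numpy as np

def project_points_to_image(points_xyz, K, R, t):
    """Project 3-D points (sensor frame) into pixel coordinates.
    K: 3x3 camera intrinsics; R, t: sensor-to-camera extrinsics."""
    cam = (R @ points_xyz.T).T + t             # transform into camera frame
    cam = cam[cam[:, 2] > 0]                   # keep points in front of camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3], cam         # perspective divide

def points_in_bbox(uv, bbox):
    """Fraction of projected points inside a detection box
    (u_min, v_min, u_max, v_max) — used to associate a camera
    detection of the guide vehicle with a fused point cluster."""
    if len(uv) == 0:
        return 0.0
    u, v = uv[:, 0], uv[:, 1]
    inside = (u >= bbox[0]) & (v >= bbox[1]) & (u <= bbox[2]) & (v <= bbox[3])
    return inside.mean()
```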
In an improved solution, executing the continuous tracking flow according to the detection information further comprises:
acquiring the guide vehicle's direction, speed and position information at the previous instant and determining a region of interest from them;
acquiring the guide vehicle's current position information, the FMCW lidar point cloud data and the 4D millimeter-wave point cloud data and matching them against the region of interest; if they match, planning a route from the guide vehicle's current fused point cloud data and continuing to follow; if not, entering the target retrieval state.
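A region of interest derived from the previous state can be sketched as a constant-velocity prediction with a speed-dependent gate, so a faster guide vehicle still falls inside the search region after one perception cycle. The gate sizes and growth factor are assumptions for illustration only.

```python
import math

def predict_roi(prev_pos, prev_heading, prev_speed, dt, base_radius=2.0, k=1.5):
    """Predict the guide vehicle's search region from its last known
    position, heading (rad) and speed (m/s), assuming roughly constant
    velocity over one perception cycle dt.  Returns (cx, cy, radius)."""
    cx = prev_pos[0] + prev_speed * math.cos(prev_heading) * dt
    cy = prev_pos[1] + prev_speed * math.sin(prev_heading) * dt
    radius = base_radius + k * prev_speed * dt   # widen the gate with speed
    return cx, cy, radius

def in_roi(candidate, roi):
    """True if a candidate (x, y) detection lies inside the region of interest."""
    cx, cy, r = roi
    return math.hypot(candidate[0] - cx, candidate[1] - cy) <= r
```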
In an improved solution, executing the target retrieval flow according to the detection information further comprises:
acquiring the guide vehicle's position, direction and speed information at the instant before loss, estimating its position at the next instant, and filtering the fused point cloud data by that estimated position; matching the filtered fused data against the saved feature data; if they match, entering the continuous following state; if not, stopping the vehicle and entering the initial recognition state.
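The retrieval step above — dead-reckon the lost target, gate the fused clusters by the predicted position, then re-check the saved feature template — might be sketched as below. The cluster/feature representation, gate and tolerance are hypothetical simplifications of the patented flow.

```python
import math

def retrieve_target(last_state, clusters, saved_feature, dt, gate=4.0, feat_tol=0.5):
    """Attempt to re-acquire a lost guide vehicle.

    last_state: (x, y, heading_rad, speed) at the moment of loss.
    clusters: list of dicts with 'pos' (x, y) and 'feature' (width, height)
              taken from the fused lidar/radar data.
    saved_feature: (width, height) template stored at initial recognition.
    Returns the matching cluster, or None (the caller then stops the
    vehicle and falls back to the initial-recognition flow)."""
    x, y, hdg, spd = last_state
    px = x + spd * math.cos(hdg) * dt          # dead-reckoned position
    py = y + spd * math.sin(hdg) * dt
    best, best_d = None, float("inf")
    for c in clusters:
        d = math.hypot(c["pos"][0] - px, c["pos"][1] - py)
        if d > gate:
            continue                           # outside the predicted region
        feat_ok = all(abs(a - b) <= feat_tol
                      for a, b in zip(c["feature"], saved_feature))
        if feat_ok and d < best_d:             # closest feature-consistent cluster
            best, best_d = c, d
    return best
```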
In another aspect, the present invention further provides a system for perceiving an off-road environment and tracking a guide vehicle, comprising:
sensors arranged on the autonomous following vehicle, the sensors comprising a camera, a 4D millimeter-wave radar and an FMCW lidar; and
a control unit for acquiring the sensors' detection information and executing, according to it, the initial recognition flow, the continuous tracking flow or the target retrieval flow.
In another aspect, the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method for perceiving an off-road environment and tracking a guide vehicle.
The beneficial effects of the technical solution of the present invention are:
The invention uses the camera to initialize the lock onto the guide vehicle and verifies it with the FMCW lidar point cloud and the 4D millimeter-wave radar point cloud, greatly improving recognition accuracy.
After the lock, continuous following combines the FMCW lidar and the 4D millimeter-wave radar, enabling sustained following in harsh conditions such as high speed, short distance and dust.
After target loss, the guide vehicle's position is estimated from its last perceived state for a brief period of following, and the vehicle locked by the FMCW lidar and 4D millimeter-wave point clouds is projected into the camera image to search for it, giving good loss-recovery capability.
Brief Description of the Drawings
To explain the embodiments of the present invention or the prior art more clearly, the drawings needed in their description are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; a person of ordinary skill in the art can derive other drawings from them without creative effort.
Figure 1 is a flow diagram of the method for perceiving an off-road environment and tracking a guide vehicle of Embodiment 1 of the present invention;
Figure 2 is a detailed flow diagram of initial recognition in the method of Embodiment 1;
Figure 3 is a detailed flow diagram of continuous tracking in the method of Embodiment 1;
Figure 4 is a detailed flow diagram of target retrieval in the method of Embodiment 1;
Figure 5 is an architecture diagram of the system for perceiving an off-road environment and tracking a guide vehicle of Embodiment 2 of the present invention;
Figure 6 shows the performance of prior-art recognition and tracking based on TOF lidar;
Figure 7 shows the performance of prior-art recognition and tracking based on millimeter-wave radar.
Detailed Description
Preferred embodiments of the present invention are described in detail below with reference to the drawings, so that its advantages and features can be more easily understood by those skilled in the art and the scope of protection can be defined more clearly.
It should be noted that the embodiments described are only some, not all, of the embodiments of the present invention; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
Embodiment 1: This embodiment provides a method for perceiving an off-road environment and tracking a guide vehicle, as shown in Figure 1, comprising the following steps:
Configuring perception sensors: a camera, a 4D millimeter-wave radar and an FMCW lidar are mounted on the unmanned vehicle; the camera acquires image information, the 4D millimeter-wave radar acquires 4D millimeter-wave point cloud data, and the FMCW lidar acquires FMCW lidar point cloud data.
Initial recognition: the guide vehicle first enters the initial detection range, then its features are entered manually and the initial recognition state is entered; if recognition fails the features are re-entered, and if it succeeds the continuous tracking state is entered.
Continuous tracking: while in the continuous tracking state, the state is maintained if the target is not lost; if the target is lost, the target retrieval state is entered.
Target retrieval: if retrieval succeeds, continuous tracking is maintained; if it fails, the vehicle stops, the guide vehicle is brought back into the initial detection range, its feature information is re-entered, and initial recognition is performed again.
In one embodiment, the camera provides high-resolution image information and recognizes the guide vehicle precisely through target detection. Specifically, the camera passes captured images to a computer, which implements detection through deep learning using a pre-trained data set and a target detection algorithm; the detection function identifies guide-vehicle features such as a specific model, color, license plate or special marker, so the guide vehicle can be identified precisely and its feature data saved. During tracking, the feature data can be checked continuously to confirm that the correct guide vehicle is being followed; when target tracking fails, the camera can be combined with the fused 4D millimeter-wave and FMCW lidar point cloud data to recompute the guide vehicle's position and estimate where it is likely to be. It should be noted that detecting vehicles and their features from camera images with deep learning is already a routine, basic technique in autonomous driving, with many general open-source detectors such as the YOLO family. This application does not claim any new contribution in camera-based vehicle detection and does not seek protection for it; it is merely one means employed here. What this application protects is the perception method as a whole, namely: (1) the selection of the most effective sensors; (2) the overall perception flow, especially the mutual correction and fusion of the guide-vehicle perception results across different sensors and the handling after target loss; and (3) the specific procedure of each sub-flow. The application is chiefly devoted to making the best use of the optimal sensors for detection, associating them most effectively with one another, minimizing the failure rate, and achieving stable tracking with the highest probability.
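The feature-matching role of the camera described above can be sketched independently of any particular detector: an off-the-shelf model (e.g. a YOLO-family network) yields per-frame detections, and the operator-entered template selects the guide vehicle among them. The dictionary fields and the color attribute below are illustrative assumptions about post-processed detector output, not a real detector API.

```python
def match_guide_vehicle(detections, template):
    """Pick the detection matching the operator-entered guide-vehicle
    features.  `detections`: list of dicts such as
    {'cls': 'truck', 'conf': 0.9, 'bbox': (u1, v1, u2, v2), 'color': 'green'};
    `template`: manually entered features, e.g. {'cls': 'truck',
    'color': 'green'}.  Returns the most confident matching detection,
    or None, in which case recognition failed and the operator must
    re-enter features or reposition the guide vehicle."""
    candidates = [d for d in detections
                  if d["cls"] == template["cls"]
                  and d.get("color") == template.get("color")]
    return max(candidates, key=lambda d: d["conf"], default=None)
```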
作为本发明的一种实施方式,4D毫米波雷达较于传统毫米波雷达可以多获取目标的高度信息,利用4D毫米波雷达可以感知到点云信息,获得目标的距离、方位、速度和高度信息,可以用来区分路障、人行道、低矮的建筑物等,4D毫米波的数据高度信息还可以帮助系统构建更准确的环境三维模型,完成自主导航系统跟车任务。As an embodiment of the present invention, 4D millimeter-wave radar can obtain more target height information than traditional millimeter-wave radar. 4D millimeter-wave radar can sense point cloud information and obtain target distance, direction, speed and height information, which can be used to distinguish roadblocks, sidewalks, low buildings, etc. The 4D millimeter-wave data height information can also help the system build a more accurate three-dimensional model of the environment and complete the autonomous navigation system's vehicle following task.
As an embodiment of the present invention, FMCW lidar provides very high close-range measurement accuracy, typically within a few millimeters to a few centimeters. It generates high-density point clouds, returns data for every point, captures fine detail of the target and environment, and provides precise spatial information suitable for accurate mapping and target positioning. Because its point cloud density is high, FMCW lidar offers high resolution. It can, however, degrade in harsh conditions: water vapor and particles in haze scatter the laser beam, reducing the FMCW lidar's detection range and resolution and making obstacle detection difficult, while sand and dust suspended in a sandstorm interfere with the propagation path, causing signal attenuation and unstable data. In such conditions, 4D millimeter-wave radar is used as a supplement: the millimeter-wave signal's wavelength is far longer than the lidar's optical wavelength, so it penetrates non-metallic obscurants relatively easily without much attenuation. Combining it with the FMCW lidar point cloud data significantly improves perception in these special meteorological or environmental conditions.
As an embodiment of the present invention, as shown in Figure 2, the initial identification includes: projecting the FMCW lidar point cloud data and the 4D millimeter-wave point cloud data into the image information and performing an initial fusion of image and point cloud information to obtain point cloud fusion data; then manually inputting the feature information of the guiding vehicle and matching it against the point cloud fusion data. If the match succeeds, the guiding vehicle's position, structure, speed, and direction are obtained; if it fails, the vehicle is repositioned into the detection range or the guiding vehicle's feature information is re-entered.
As an embodiment of the present invention, the initial identification further includes: first fusing the 4D millimeter-wave radar point cloud and the FMCW lidar point cloud and removing dust and similar noise, then projecting the point cloud onto the image to further confirm the target. This comprises the following steps:
1. Establish the vehicle coordinate system, taking care to unify the coordinate systems of the FMCW lidar and the 4D millimeter-wave radar. The origin is at the middle of the front of the unmanned vehicle, with the positive x-axis pointing right, the positive y-axis pointing forward, and the positive z-axis pointing up, following the right-hand rule.
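Unifying the two sensors' coordinate systems amounts to applying each sensor's mounting transform so its points land in the common vehicle frame. A minimal sketch, assuming a planar (yaw-only) mounting rotation; the yaw angle and offsets below are made-up illustrative values, not calibration data from this application:

```python
import math

def to_vehicle_frame(point, yaw, offset):
    """Rotate a sensor point about z by `yaw`, then translate by the mount offset,
    expressing it in the vehicle frame (x right, y forward, z up)."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    return (c * x - s * y + offset[0],
            s * x + c * y + offset[1],
            z + offset[2])

# A radar mounted 0.2 m left of the origin and 0.3 m up, aligned with the vehicle:
p = to_vehicle_frame((1.0, 5.0, 0.5), yaw=0.0, offset=(-0.2, 0.0, 0.3))
assert all(math.isclose(a, b) for a, b in zip(p, (0.8, 5.0, 0.8)))
```

A full calibration would use a 3D rotation (roll, pitch, yaw), but the structure is the same: one rigid transform per sensor into the shared frame.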
2. For each detection cycle, first obtain the previous moment's guiding-vehicle perception result point cloud Pk-1, together with the current complete 4D millimeter-wave radar point cloud P'kr and the current complete FMCW lidar point cloud P'kl. Note in particular that FMCW lidar and 4D millimeter-wave radar can sense the radial velocity vr of every point, a parameter that existing ordinary lidar and millimeter-wave radar cannot provide.
From Pk-1, obtain the vehicle's position at the previous moment, i.e., pk-1.x, pk-1.y, pk-1.z. Perform point cloud clustering in P'kr and P'kl and select the clusters near that position, giving the current 4D millimeter-wave radar cluster Pkr and the current FMCW lidar cluster Pkl. Note that deep-learning-based lidar point cloud recognition, such as the PointPillars method, is not suitable here: in the presence of dust and similar obscurants, existing point cloud deep-learning methods perform poorly. Then apply the following screening:
1) Use Pkr to segment and filter the point cloud, i.e., traverse Pkl:

select pi in Pkl which satisfy
    max distance(pi, points in Pkr) < threshold_1
return pi

This screening exploits the 4D millimeter-wave point cloud and its penetration of dust to remove noise points such as dust; the selected pi points are recombined into Pkl.
2) Using every point pk-1 of the previous moment's point cloud, compute the guiding vehicle's radial velocity v:

v = average(pk-1.vr)

Then segment and filter Pkl, i.e., traverse Pkl:

select pi in Pkl which satisfy
    distance(pi.vr, v) < threshold_2
return pi

This screening uses the per-point velocity information of the FMCW lidar to further remove noise points such as dust; the selected pi points are recombined into Pkl.
3) Use Pkr to supplement the point cloud, i.e., traverse Pkr:

select pi in Pkr which satisfy
    max distance(pi, points in Pkl) < threshold_3
return pi

This step adds points the lidar never detected: dust can block part of the laser from reaching the guiding vehicle, so those regions have no lidar points but do have millimeter-wave points. Adding the selected pi points to Pkl yields the current guiding-vehicle point cloud Pk.
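The three screening steps above can be sketched in a few lines of pure Python. The thresholds and toy point clouds are illustrative assumptions; points are modeled as (x, y, z, vr) tuples, where vr is the per-point radial velocity:

```python
import math

def dist(p, q):
    """Euclidean distance on the spatial part of (x, y, z, vr) tuples."""
    return math.dist(p[:3], q[:3])

def filter_by_radar(Pkl, Pkr, threshold_1):
    """1) Keep lidar points lying within the radar cluster's extent."""
    return [p for p in Pkl if max(dist(p, q) for q in Pkr) < threshold_1]

def filter_by_velocity(Pkl, v, threshold_2):
    """2) Keep lidar points whose radial velocity matches the guide vehicle's."""
    return [p for p in Pkl if abs(p[3] - v) < threshold_2]

def supplement_from_radar(Pkl, Pkr, threshold_3):
    """3) Add radar points near the lidar cluster (regions the dust occluded)."""
    extra = [p for p in Pkr if max(dist(p, q) for q in Pkl) < threshold_3]
    return Pkl + extra

Pkl = [(0.0, 10.0, 1.0, 5.0), (0.1, 10.1, 1.0, 5.1), (4.0, 14.0, 1.0, 0.2)]  # last point is dust
Pkr = [(0.0, 10.0, 1.0, 5.0), (0.5, 10.5, 1.2, 5.0)]

Pkl = filter_by_radar(Pkl, Pkr, threshold_1=2.0)   # drops the far dust point
v = 5.0                                            # average vr of the previous cloud
Pkl = filter_by_velocity(Pkl, v, threshold_2=0.5)
Pk = supplement_from_radar(Pkl, Pkr, threshold_3=2.0)
assert len(Pk) == 4
```

A production version would deduplicate radar points that coincide with lidar points in step 3; the sketch keeps them to stay close to the pseudocode.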
3. Under most working conditions of this application, the fused 4D millimeter-wave radar and FMCW lidar data alone accurately capture the guiding vehicle's outline. The camera signal is used for further matching of the point cloud only during initial vehicle acquisition, verification, or recovery after tracking loss. In those cases:
1) Use the intrinsic and extrinsic matrices to convert the 3D point cloud into 2D image coordinates in the camera frame. This involves transforming the point cloud from its own coordinate system into the camera coordinate system and then projecting the 3D coordinates onto the 2D image plane with the camera intrinsics, which is accomplished through the projection matrix.
2) Run deep-learning detection on the image to determine whether a vehicle is present in the point cloud projection region and whether its features are consistent with the predefined or recorded vehicle features.
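Step 1) reduces, for a point already expressed in the camera frame, to a pinhole perspective projection. A minimal sketch; the focal lengths and principal point below are made-up values, not calibration from this application:

```python
def project(point_cam, fx, fy, cx, cy):
    """Perspective projection: (X, Y, Z) in the camera frame -> (u, v) pixels.
    fx, fy are focal lengths in pixels; (cx, cy) is the principal point."""
    X, Y, Z = point_cam
    if Z <= 0:
        return None  # behind the camera, not visible
    return (fx * X / Z + cx, fy * Y / Z + cy)

uv = project((1.0, 0.5, 10.0), fx=800.0, fy=800.0, cx=640.0, cy=360.0)
assert uv == (720.0, 400.0)
```

In the full pipeline, the extrinsic (rotation plus translation) transform is applied first to move each lidar or radar point into the camera frame; the projected (u, v) then indexes the detector's output region.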
As an embodiment of the present invention, as shown in Figure 3, the continuous tracking includes: determining a region of interest (ROI) from the guiding vehicle's direction, speed, and position at the previous moment, then matching the guiding vehicle's current position, the FMCW lidar point cloud data, and the 4D millimeter-wave point cloud data against the ROI. If the match succeeds, a driving route is planned from the guiding vehicle's point cloud data and following continues; if the guiding vehicle's point cloud data does not match the ROI, the system enters the target-retrieval state.
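The ROI gating above can be sketched as constant-velocity prediction followed by a distance test on the new cluster centroid. The state layout and ROI radius are illustrative assumptions, not claim language:

```python
import math

def predict_position(pos, vel, dt):
    """Constant-velocity prediction of the guide vehicle's next position."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def in_roi(cluster_centroid, predicted, radius):
    """Does the observed cluster fall inside the predicted region of interest?"""
    return math.dist(cluster_centroid, predicted) < radius

predicted = predict_position(pos=(0.0, 20.0), vel=(0.0, 8.0), dt=0.1)  # (0.0, 20.8)
assert in_roi((0.2, 21.0), predicted, radius=1.5)       # match: keep following
assert not in_roi((5.0, 30.0), predicted, radius=1.5)   # mismatch: target retrieval
```

A real tracker would also gate on the cluster's velocity and extent, but the predict-then-gate structure is the same.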
As an embodiment of the present invention, as shown in Figure 4, the target retrieval includes: estimating the guiding vehicle's next position from its position, direction, and speed at the moment before it was lost; filtering the point cloud data according to the position estimate; and matching the filtered point cloud against the image detection result. If they match, the system returns to the continuous-following state; if not, the vehicle stops and enters the initial identification state.
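The retrieval decision can be sketched as: accept a candidate cluster only if it is near the predicted position and the camera confirms the recorded vehicle features there. All helper names and inputs here are illustrative assumptions:

```python
import math

def retrieve(candidates, predicted, radius, image_confirms):
    """Try to recover the guide vehicle after tracking loss.

    candidates     : list of (centroid, cluster) pairs filtered from the point cloud
    image_confirms : callable(cluster) -> bool, the camera feature check
    Returns (matched cluster or None, next tracker state).
    """
    for centroid, cluster in candidates:
        if math.dist(centroid, predicted) < radius and image_confirms(cluster):
            return cluster, "FOLLOWING"
    return None, "INITIAL"  # stop and re-identify

cands = [((0.1, 21.0), "cluster_a"), ((9.0, 40.0), "cluster_b")]
cluster, state = retrieve(cands, predicted=(0.0, 20.8), radius=1.5,
                          image_confirms=lambda c: c == "cluster_a")
assert (cluster, state) == ("cluster_a", "FOLLOWING")
```

Requiring both the geometric gate and the image confirmation matches the text's point that retrieval fuses point cloud filtering with camera detection rather than trusting either alone.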
Embodiment 2. Based on the same inventive concept as the method for off-road environment perception and guided-vehicle tracking described in Embodiment 1, this embodiment provides a system for off-road environment perception and guided-vehicle tracking. It combines a camera, FMCW lidar, and 4D millimeter-wave radar into a vehicle-following perception system that perceives the position and contour of the guiding vehicle with high precision in high-speed, close-range off-road conditions and passes the results to the downstream automatic vehicle-following motion control unit.
The system architecture is shown in Figure 5. The camera's image information, the FMCW lidar point cloud, and the 4D millimeter-wave point cloud are combined to initialize and lock onto the guiding vehicle; the 4D millimeter-wave radar then keeps the guiding vehicle locked for continuous following, and the vehicle position information is passed to the vehicle motion control unit. If the target is lost, the guiding vehicle's likely position is predicted from its last known position, direction, and speed, and retrieval is attempted; on success, following continues, and on failure the vehicle stops.
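The architecture's control flow amounts to a small state machine. A minimal sketch; the state names and event labels paraphrase the text and are not claim language:

```python
# Initial lock -> continuous following -> retrieval on loss -> stop on failure.
TRANSITIONS = {
    ("INITIAL",   "locked"):    "FOLLOWING",
    ("FOLLOWING", "tracked"):   "FOLLOWING",
    ("FOLLOWING", "lost"):      "RETRIEVAL",
    ("RETRIEVAL", "recovered"): "FOLLOWING",
    ("RETRIEVAL", "failed"):    "STOPPED",
}

def step(state, event):
    """Advance the tracker state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "INITIAL"
for event in ["locked", "tracked", "lost", "recovered", "lost", "failed"]:
    state = step(state, event)
assert state == "STOPPED"
```

Keeping the transitions in one table makes the loss-handling path (FOLLOWING, lost, RETRIEVAL, failed, STOPPED) explicit and easy to audit against Figure 5.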
Embodiment 3. This embodiment provides a computer-readable storage medium storing the computer software instructions that implement the method for off-road environment perception and guided-vehicle tracking of Embodiment 1, including the program configured to execute that method. Specifically, the executable program can be built into the device of Embodiment 2, so that the multi-mode autonomous following control device implements the method of Embodiment 1 by executing the built-in executable program.
The camera offers high definition but insufficiently accurate position information, so it is used to lock onto the guiding vehicle. Lidar offers high resolution but degrades in rain, snow, fog, and dust. 4D millimeter-wave radar penetrates rain, snow, fog, and dust and can take over following when the FMCW lidar degrades. Unlike the prior art, this application combines a camera, FMCW lidar, and 4D millimeter-wave radar for vehicle-following perception. The camera's high resolution is used to initialize the lock on the guiding vehicle, with the lidar and 4D millimeter-wave point clouds used for verification, greatly improving the accuracy of vehicle identification. After the lock, FMCW lidar and 4D millimeter-wave radar carry out continuous following, enabling sustained tracking at high speed, at close range, and in harsh environments such as dust. After target loss, the system enters the target-retrieval state: it estimates the guiding vehicle's position and matches the 4D millimeter-wave data, the FMCW lidar data, and the camera detection results to recover the guiding vehicle, giving the system a degree of loss-recovery capability.
Those of ordinary skill in the art will appreciate that the units and steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To illustrate the interchangeability of hardware and software clearly, the composition and steps of each example have been described above in general terms of their functions. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled practitioners may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of this document.
Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above can be found in the corresponding processes of the foregoing method embodiments and are not repeated here.
In the several embodiments provided herein, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into units is only a logical functional division, and other divisions are possible in practice; multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments herein.
In addition, the functional units in the embodiments herein may be integrated into one processing unit, may each exist physically on their own, or two or more may be integrated into one unit. The integrated unit may be implemented in hardware or as a software functional unit.
If the integrated unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. On this understanding, the technical solution herein, in essence or in the part contributing over the prior art, or in whole or in part, may be embodied as a software product stored on a storage medium and containing instructions that cause a computer device (a personal computer, server, network device, or the like) to perform all or some of the steps of the methods described in the embodiments herein. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, removable hard disk, read-only memory (ROM), random access memory (RAM), magnetic disk, or optical disc.
The above are merely embodiments of the present invention and do not thereby limit its patent scope. Any equivalent structural or process transformation made using the contents of this specification and drawings, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present invention.
Claims (5)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410834127.7A CN118405130B (en) | 2024-06-26 | 2024-06-26 | A method, system and medium for off-road environment perception and tracking and guiding vehicles |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN118405130A true CN118405130A (en) | 2024-07-30 |
| CN118405130B CN118405130B (en) | 2024-09-03 |
Family
ID=92001138
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410834127.7A Active CN118405130B (en) | 2024-06-26 | 2024-06-26 | A method, system and medium for off-road environment perception and tracking and guiding vehicles |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119348469A (en) * | 2024-12-23 | 2025-01-24 | 浙江宇视科技有限公司 | Charging pile dust prevention method, device, equipment and medium |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2018066716A (en) * | 2016-10-14 | 2018-04-26 | 国立大学法人金沢大学 | Object tracking device |
| CN111352112A (en) * | 2020-05-08 | 2020-06-30 | 泉州装备制造研究所 | Target detection method based on vision, lidar and millimeter wave radar |
| CN111368938A (en) * | 2020-03-19 | 2020-07-03 | 南京因果人工智能研究院有限公司 | Multi-target vehicle tracking method based on MDP |
| CN114675275A (en) * | 2022-03-21 | 2022-06-28 | 北京航空航天大学合肥创新研究院(北京航空航天大学合肥研究生院) | A target detection method based on the fusion of 4D millimeter wave radar and lidar |
| WO2023121657A1 (en) * | 2021-12-21 | 2023-06-29 | Intel Corporation | Radar apparatus, system, and method |
| KR20240007459A (en) * | 2022-07-08 | 2024-01-16 | 국민대학교산학협력단 | Method and apparatus for processing mimo fmcw radar signal |
Non-Patent Citations (1)
| Title |
|---|
| Wang Shifeng; Dai Xiang; Xu Ning; Zhang Pengfei: "A Review of Environment Perception Technology for Driverless Vehicles", Journal of Changchun University of Science and Technology (Natural Science Edition), No. 01, 15 February 2017 (2017-02-15) * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN118405130B (en) | 2024-09-03 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| TR01 | Transfer of patent right | ||
Effective date of registration: 20250819
Address after: 719000, Innovation Space, 3rd Floor, Qinchuangyuan Building, Kaiyuan Avenue, High-tech Industrial Park, Yulin City, Shaanxi Province
Patentee after: Yulin Saiyi Intelligent Technology Co.,Ltd.
Country or region after: China
Address before: Building 28, No. 618 Wharf West Street, Kunlun Street, Liyang City, Changzhou City, Jiangsu Province, 213300
Patentee before: Jiangsu intelligent unmanned Equipment Industry Innovation Center Co.,Ltd.
Country or region before: China