CN118031963A - Underground positioning method and system based on comprehensive point and line features - Google Patents
Underground positioning method and system based on comprehensive point and line features
- Publication number: CN118031963A
- Application number: CN202410146865.2A
- Authority
- CN
- China
- Prior art keywords
- point
- line
- features
- pose
- current frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G01C21/20: Instruments for performing navigational calculations
- E21F17/00: Methods or devices for use in mines or tunnels, not covered elsewhere
- G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/269: Analysis of motion using gradient-based methods
- G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/30244: Camera pose
- Y02T10/40: Engine management systems
Abstract
The present invention discloses an underground positioning method and system based on comprehensive point and line features. In the point-line visual odometry part, the raw images of the stereo camera are first undistorted and stereo-rectified so that the epipolar lines of the two images lie on the same horizontal line. Point and line features are then extracted and matched, after which the point-line reprojection error function of the system is constructed and the pose is estimated. A local optimization thread then further refines the pose. To eliminate accumulated error, loop detection is performed; once a loop is detected, loop correction is applied and a global pose graph optimization is run. The invention achieves high positioning accuracy in underground scenes with dim or changing light, and because the time-consuming point and line feature processing algorithms are improved, it avoids the high runtime cost of traditional point-line visual positioning methods.
Description
Technical Field
The invention belongs to the technical field of robot positioning, and in particular relates to an underground positioning method and system based on comprehensive point and line features.
Background Art
With the development of science and technology, mine equipment is becoming increasingly intelligent, and firefighting, rescue, and inspection robots are ever more widely deployed. The basic prerequisite for these robots to complete their tasks is determining their own position accurately and in real time. Positioning solutions used in indoor environments include ultra-wideband (UWB) technology and Simultaneous Localization And Mapping (SLAM) technology. Although UWB is sufficiently accurate, it is expensive and its coverage is limited.
SLAM is currently a widely used positioning solution, offering high precision, low cost, and the ability to operate over large areas. However, traditional SLAM algorithms rely mainly on point features; in underground scenes with dim or changing light, they fail to extract enough distinctive features and feature matching breaks down, resulting in poor positioning.
Summary of the Invention
The technical problem to be solved by the present invention is to address the deficiencies of the above prior art by providing an underground positioning method and system based on comprehensive point and line features, so as to overcome the low positioning accuracy of traditional point-feature-based SLAM algorithms in underground scenes with dim or changing light.
The present invention adopts the following technical solutions:
An underground positioning method based on comprehensive point and line features comprises the following steps:
S1. Perform undistortion and stereo rectification on the two raw images of the stereo camera, so that the epipolar lines of the two processed images lie on the same horizontal line.
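As a non-limiting illustration of the undistortion in step S1: the patent does not specify the camera model, so the sketch below assumes a standard pinhole camera with two radial distortion coefficients k1 and k2 (an assumption, not the patent's calibration), and inverts the distortion on normalized image coordinates by fixed-point iteration.

```python
import numpy as np

def distort(p, k1, k2):
    """Apply an assumed radial distortion model (k1, k2) to a normalized point."""
    r2 = p[0] ** 2 + p[1] ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    return np.array([p[0] * scale, p[1] * scale])

def undistort(pd, k1, k2, iters=20):
    """Invert the distortion iteratively: re-evaluate the distortion factor
    at the current estimate and divide the observed point by it."""
    p = np.array(pd, dtype=float)
    for _ in range(iters):
        r2 = p[0] ** 2 + p[1] ** 2
        scale = 1.0 + k1 * r2 + k2 * r2 ** 2
        p = np.array(pd, dtype=float) / scale
    return p
```

After undistortion, stereo rectification rotates both views onto a common image plane so that corresponding points share the same row, which is what makes the epipolar lines horizontal.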
S2. Use the improved point and line feature processing algorithm to extract and match point and line features in the two images obtained in step S1.
S3. Derive the reprojection error function of the point and line features from the features obtained in step S2.
S4. Solve the reprojection error function obtained in step S3 with the Levenberg-Marquardt method to obtain the camera pose to be estimated, and refine the obtained camera pose with a local optimization thread to obtain the optimized pose.
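The Levenberg-Marquardt solver used in step S4 can be sketched in a generic, non-limiting form: damp the Gauss-Newton normal equations with a factor lam, shrink lam when a step lowers the cost, and grow it otherwise. The toy problem in the usage below (fitting an exponential curve) is only an illustration; in the patent the residuals would be the point and line reprojection errors.

```python
import numpy as np

def levenberg_marquardt(residual_fn, jac_fn, x0, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt: solve (J^T J + lam*I) dx = -J^T r,
    accept the step only if the cost decreases."""
    x = np.array(x0, dtype=float)
    cost = 0.5 * np.sum(residual_fn(x) ** 2)
    for _ in range(iters):
        r, J = residual_fn(x), jac_fn(x)
        H = J.T @ J + lam * np.eye(len(x))
        dx = np.linalg.solve(H, -J.T @ r)
        new_cost = 0.5 * np.sum(residual_fn(x + dx) ** 2)
        if new_cost < cost:
            x, cost, lam = x + dx, new_cost, lam * 0.5  # accept, trust more
        else:
            lam *= 10.0  # reject, damp harder toward gradient descent
    return x
```

The damping term interpolates between Gauss-Newton (small lam) and gradient descent (large lam), which is what makes the method robust far from the optimum.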
S5. Judge from the pose obtained in S4 whether the current frame is a keyframe; if it is, perform loop detection based on the similarity between two image frames, correct any detected loop, and obtain the poses that need correction.
S6. Use the poses that need correction obtained in step S5 to globally optimize the pose graph, output the pose of each image frame, and obtain the positioning trajectory.
Preferably, step S2 is specifically:
S201. Use the GPU to accelerate the feature extraction process, and apply the improved optical flow matching strategy with the bidirectional and ring operations respectively, obtaining the image features.
S202. Use a line segment detection algorithm to detect line segments and recover broken ones; then construct point-line invariants from the point features and complete line segment matching on the basis of the existing feature points. If the number of segments matched via point-line invariants is insufficient, compute LBD descriptors for the segments that do not satisfy the invariant construction conditions and match them, obtaining the matched features.
More preferably, in step S201, the bidirectional operation is specifically:
When tracking point features between images I1 and I2: track features from I1 to I2 with the LK optical flow method; check the tracking result, keeping correctly tracked point features and discarding wrongly tracked ones; track the retained point features back from I2 to I1 with the LK optical flow method; check the tracking result again; and take the point features that survive as the matching point pairs between I1 and I2.
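The forward-backward check at the heart of the bidirectional operation can be sketched as a simple filter. The LK tracking itself is omitted here (in practice it would come from an optical flow routine); the function only shows the consistency test, and the threshold max_err is an illustrative pixel value, not one stated in the patent.

```python
import numpy as np

def forward_backward_filter(pts1, bwd, max_err=0.5):
    """pts1: Nx2 original points in image I1; bwd: Nx2 positions obtained by
    tracking pts1 into I2 and then back into I1. A feature is kept only if
    the round trip returns close to where it started."""
    err = np.linalg.norm(np.asarray(bwd, float) - np.asarray(pts1, float), axis=1)
    return err < max_err
```

Features whose round-trip error exceeds the threshold are treated as wrongly tracked and removed before the surviving pairs are used as matches.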
More preferably, in step S201, the ring operation is specifically:
Apply the bidirectional optical flow method to the previous left image I_{k-1}^L and the current left image I_k^L; the point feature set of I_{k-1}^L is x_{k-1}^L, and the set of matched points obtained by tracking is recorded as x_k^L. Apply the bidirectional optical flow method to the current left image I_k^L and the current right image I_k^R, with point feature set x_k^L; the set of matched points obtained by tracking is recorded as x_temp1. Apply the bidirectional optical flow method to the current right image I_k^R and the previous right image I_{k-1}^R, with point feature set x_temp1; the set of matched points obtained by tracking is recorded as x_temp2. The original point feature set of the previous right image I_{k-1}^R is x_{k-1}^R, while the temporary point feature set obtained for it through tracking is x_temp2; the features in x_temp2 are checked, and only those falling within the neighborhood of the corresponding feature in x_{k-1}^R are retained. The temporary point feature set of the current right image is x_temp1 and that of the previous right image is x_temp2; according to the correspondence between x_temp1 and x_temp2, the features eliminated from x_temp2 are also removed from x_temp1, and the remaining features are retained and recorded as x_k^R.
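The final consistency test of the ring operation can be sketched as follows. The sketch assumes the three arrays are index-aligned along the tracking chain (left-previous to left-current to right-current to right-previous), and the neighborhood radius is an illustrative value not specified in the patent.

```python
import numpy as np

def ring_consistency(x_temp1, x_temp2, x_prev_right, radius=2.0):
    """Keep a chain-tracked feature only when the position it reaches in the
    previous right image (x_temp2) lands near the feature originally detected
    there (x_prev_right); drop the same indices from x_temp1."""
    x_temp1, x_temp2, x_prev_right = (
        np.asarray(a, float) for a in (x_temp1, x_temp2, x_prev_right))
    keep = np.linalg.norm(x_temp2 - x_prev_right, axis=1) <= radius
    return x_temp1[keep], x_temp2[keep]
```

Because a feature must survive the full loop of four images, isolated tracking errors in any single leg of the ring are filtered out.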
More preferably, step S202 is specifically:
S2021. Discard segments shorter than a set threshold, then merge segments by jointly evaluating the angle between their principal directions, the distance between the midpoints of the two segments, the distances between their endpoints, and the ratio of the endpoint distance to the average length of the two segments.
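A non-limiting sketch of the merge decision in S2021: the four criteria named above are checked with illustrative thresholds (the patent does not give numeric values, so max_angle_deg, max_mid_dist, and max_endpoint_ratio here are assumptions).

```python
import numpy as np

def should_merge(seg_a, seg_b, max_angle_deg=5.0, max_mid_dist=10.0,
                 max_endpoint_ratio=0.2):
    """seg = ((x1, y1), (x2, y2)). Candidates must have nearly equal direction,
    close midpoints, and a small endpoint gap relative to the average length."""
    a, b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    da, db = a[1] - a[0], b[1] - b[0]
    ang = np.degrees(abs(np.arctan2(da[1], da[0]) - np.arctan2(db[1], db[0]))) % 180.0
    ang = min(ang, 180.0 - ang)  # undirected angle between principal directions
    mid_dist = np.linalg.norm((a[0] + a[1]) / 2 - (b[0] + b[1]) / 2)
    gap = min(np.linalg.norm(p - q) for p in a for q in b)  # closest endpoints
    avg_len = (np.linalg.norm(da) + np.linalg.norm(db)) / 2
    return (ang <= max_angle_deg and mid_dist <= max_mid_dist
            and gap / avg_len <= max_endpoint_ratio)
```

Segments that pass all tests would then be replaced by a single segment spanning their extreme endpoints.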
S2022. For projections of points and lines lying on the same plane in space, the ratio of the distances from two point features to a line feature is unchanged between consecutive frames; this is called the point-line invariant. The homography matrix is solved from 4 pairs of matched feature points by the direct linear transformation (DLT) method.
S2023. When the angle between the principal directions of two segments is no greater than a set threshold and there are at least 2 matched point features in the segment support region, the corresponding point-line invariant is computed. For segments l1 and l2, the final similarity Sim(l1, l2) is the maximum of AffSim(l1, l2); if Sim(l1, l2) > 0.95, the segments are matched successfully.
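The invariant itself can be sketched as below. The patent does not give the exact form of AffSim, so invariant_similarity here is a simplified stand-in: it compares the two-point-to-line distance ratios across frames and returns 1.0 when the invariant is perfectly preserved.

```python
import numpy as np

def point_line_dist(p, l1, l2):
    """Perpendicular distance from point p to the infinite line through l1, l2."""
    p, l1, l2 = (np.asarray(v, float) for v in (p, l1, l2))
    d, v = l2 - l1, p - l1
    return abs(d[0] * v[1] - d[1] * v[0]) / np.linalg.norm(d)

def invariant_similarity(p1, p2, seg1, q1, q2, seg2):
    """Ratio-of-ratios similarity: (p1, p2, seg1) come from the first frame,
    (q1, q2, seg2) are their matches in the second frame."""
    r1 = point_line_dist(p1, *seg1) / point_line_dist(p2, *seg1)
    r2 = point_line_dist(q1, *seg2) / point_line_dist(q2, *seg2)
    return min(r1, r2) / max(r1, r2)
```

For coplanar features related by a homography, the ratio is preserved, so matched point pairs in a segment's support region let the segment be matched without computing a descriptor.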
Preferably, in step S3, the reprojection error function F is specifically:
F = Σ_{j∈p_l} ρ_p((e_j^p)^T Σ_j^{-1} e_j^p) + Σ_{k∈I_l} ρ_l((e_k^l)^T Σ_k^{-1} e_k^l)
where p_l and I_l denote the point and line feature sets respectively, ρ_p and ρ_l denote the Huber robust kernel functions for the point and line features, Σ_j and Σ_k denote the Gaussian covariance matrices of the point and line features, and e_j^p and e_k^l denote the reprojection errors of the j-th point feature and the k-th line feature respectively.
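Evaluating a cost of this form can be sketched as follows; the Huber threshold delta is an illustrative value (the patent does not specify it), and the information matrices are the inverses of the covariances Σ.

```python
import numpy as np

def huber(s, delta):
    """Huber robust kernel applied to a squared error s = e^T Sigma^-1 e:
    quadratic below delta^2, linear in sqrt(s) above it."""
    return s if s <= delta ** 2 else 2.0 * delta * np.sqrt(s) - delta ** 2

def total_cost(point_errs, line_errs, point_infos, line_infos, delta=1.345):
    """Sum of robustified Mahalanobis reprojection errors over the point and
    line feature sets, mirroring the structure of F in step S3."""
    c = 0.0
    for e, info in zip(point_errs, point_infos):
        c += huber(float(e @ info @ e), delta)
    for e, info in zip(line_errs, line_infos):
        c += huber(float(e @ info @ e), delta)
    return c
```

The robust kernel caps the influence of gross outliers (mismatched features), which would otherwise dominate the squared-error sum during pose estimation.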
Preferably, in step S4, optimizing the obtained camera pose with the local optimization thread is specifically:
Build a local map from the currently processed keyframe, the keyframes connected to the current frame in the covisibility graph, and the landmark points and lines observed by those keyframes. Take the camera poses and the positions of the spatial points and lines as the state variables to be optimized, use as constraints the reprojection error function between the spatial points and lines in the local map and the keyframes that observe them, and exploit the sparsity of the graph model to accelerate the solution, obtaining the optimized camera pose.
Preferably, in step S5, the keyframe selection principles are as follows:
at least 20 ordinary frames have passed since the last keyframe; at least 55 point features and 26 line features are tracked; the ratio of the point and line features observed jointly by the current frame and the last keyframe to those observed in the current frame is below 0.7.
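The selection rules can be sketched as a predicate. The patent does not state how the three principles combine, so this sketch makes the assumption that any one of them firing is sufficient; an implementation could equally combine them conjunctively.

```python
def is_keyframe(frames_since_kf, n_points, n_lines, covisible_ratio):
    """Assumed combination of the three rules from step S5: enough ordinary
    frames since the last keyframe, enough tracked point/line features, or a
    low shared-feature ratio with the last keyframe."""
    return (frames_since_kf >= 20
            or (n_points >= 55 and n_lines >= 26)
            or covisible_ratio < 0.7)
```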
Preferably, in step S5, correcting a detected loop is specifically:
Determine the new pose of the current frame from the computed relative pose between the current frame and the loop frame together with the pose of the loop frame; update the poses of the keyframes around the current frame using their relative poses to the current frame before its update; after the pose update, update the landmarks associated with each keyframe according to the relative position between each landmark and the keyframe that generated it; project the landmarks observed by the loop frame and its adjacent keyframes into the current frame and its adjacent keyframes, and update the current frame's landmarks on the basis of the historical data.
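The pose-update part of this correction can be sketched with 4x4 homogeneous transforms. The composition conventions below (poses map camera to world, relative poses are right-multiplied) are assumptions for illustration, not stated in the patent.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def correct_poses(T_loop, T_rel_cur_loop, T_old_cur, neighbors_old):
    """T_loop: pose of the loop frame; T_rel_cur_loop: measured relative pose
    of the current frame w.r.t. the loop frame; T_old_cur: current frame pose
    before correction. Each neighbor keyframe is moved so as to keep its old
    relative pose to the current frame."""
    T_new_cur = T_loop @ T_rel_cur_loop
    corrected = []
    for T_old_n in neighbors_old:
        T_rel = np.linalg.inv(T_old_cur) @ T_old_n  # neighbor in old current frame
        corrected.append(T_new_cur @ T_rel)
    return T_new_cur, corrected
```

Landmarks would then be moved with their generating keyframes in the same way, after which the global pose graph optimization of step S6 distributes the remaining residual over the whole trajectory.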
In a second aspect, an embodiment of the present invention provides an underground positioning system based on comprehensive point and line features, comprising:
a preprocessing module, which performs undistortion and stereo rectification on the two raw images of the stereo camera so that the epipolar lines of the two processed images lie on the same horizontal line;
an extraction module, which uses the improved point and line feature processing algorithm to extract and match point and line features in the two images obtained by the preprocessing module;
a calculation module, which derives the reprojection error function of the point and line features from the features obtained by the extraction module;
an optimization module, which solves the reprojection error function obtained by the calculation module with the Levenberg-Marquardt method to obtain the camera pose to be estimated, refines it with a local optimization thread to obtain the optimized pose, judges from the pose whether the current frame is a keyframe, and, if so, performs loop detection based on the similarity between two image frames and corrects any detected loop to obtain the poses that need correction;
a positioning module, which uses the poses that need correction obtained by the optimization module to globally optimize the pose graph, outputs the pose of each image frame, and obtains the positioning trajectory.
Compared with the prior art, the present invention has at least the following beneficial effects:
The underground positioning method based on comprehensive point and line features improves on traditional SLAM algorithms that rely solely on point features, whose positioning performance degrades in dim scenes. On top of the original algorithm, a visual positioning algorithm combining point and line features is designed by introducing line features and an image preprocessing algorithm. To address the increased difficulty of point feature extraction in dim scenes, the invention applies a low-cost image preprocessing algorithm to the image frames that need enhancement and introduces the line features that are abundant in real scenes. To address the extra runtime introduced by line features, the invention designs an improved optical flow method and a line segment matching algorithm based on point-line invariants, alleviating the high cost of descriptor-based feature matching.
Furthermore, real scenes contain many line features, and combining point and line features lowers the difficulty of feature extraction, addressing low positioning accuracy and outright positioning failure.
Furthermore, the purpose of the bidirectional optical flow method is to obtain the motion of objects in the scene by observing how pixels move between two frames. Optical flow represents the displacement of each pixel over time, i.e., the motion trajectory of the object. Because the bidirectional method considers the flow in both directions between two frames, it captures object motion more comprehensively.
Furthermore, the ring arrangement reduces, to a certain extent, errors caused by noise or outliers, and therefore improves the robustness of the system.
Furthermore, step S202 discards the unrecognizable parts of line segments; removing these outliers improves the accuracy of the system and further speeds up line segment matching.
Furthermore, the reprojection error function F measures the gap between the spatial points and lines in the local map and the keyframes that observe them, and the positioning is solved with this as the constraint.
Furthermore, the local optimization thread mainly consists of keyframe selection, local map maintenance, and pose adjustment. Because it optimizes the camera poses and spatial points and lines within the local map, this process eliminates accumulated error to a certain extent and improves positioning accuracy.
Furthermore, representing the scene with keyframes reduces the number of image frames to be processed. By focusing only on keyframes, the algorithm reduces computation and improves the real-time performance of the system.
Furthermore, loop detection recognizes scenes the system has visited before; due to sensor and algorithm errors, the previously recorded positions may deviate. Through loop correction, the system corrects these errors, maintains global consistency, and improves positioning accuracy. Loop correction also improves map consistency: without it, different parts of the map may fail to match, whereas loop correction aligns these parts and builds a more consistent map.
It can be understood that the beneficial effects of the second aspect can be found in the relevant description of the first aspect above and are not repeated here.
In summary, by combining point and line features, the present invention provides more comprehensive, multi-dimensional information, maintaining high-precision positioning even in low light. Compared with traditional point-feature-based SLAM algorithms, the method is more adaptable and copes with the challenges of changing illumination, making high-precision positioning possible in dim, highly variable underground scenes.
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
Figure 1 shows scene views of the test sequences, where (a) is the MH_03_medium sequence and (b) is the MH_04_difficult sequence;
Figure 2 is the basic framework diagram of the method;
Figure 3 shows the optical flow matching strategy;
Figure 4 shows the line feature matching process based on point-line invariants;
Figure 5 is a schematic diagram of line segment merging;
Figure 6 is a schematic diagram of the point-line invariant;
Figure 7 illustrates the line segment matching results;
Figure 8 is a schematic diagram of the reprojection error of line features;
Figure 9 shows feature extraction and matching on the original images;
Figure 10 shows feature extraction and matching after image preprocessing;
Figure 11 shows the trajectories on the MH_04_difficult sequence;
Figure 12 shows the absolute pose error (APE) plot.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In the description of the present invention, it should be understood that the terms "include" and "comprise" indicate the presence of the described features, wholes, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or collections thereof.
It should also be understood that the terms used in this specification are for the purpose of describing specific embodiments only and are not intended to limit the present invention. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
It should further be understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items. For example, "A and/or B" covers three cases: A alone, both A and B, and B alone. In addition, the character "/" in the present invention generally indicates an "or" relationship between the associated objects.
It should be understood that although the terms first, second, third, etc. may be used in the embodiments of the present invention to describe preset ranges and the like, these preset ranges should not be limited by the terms, which serve only to distinguish the preset ranges from one another. For example, without departing from the scope of the embodiments of the present invention, a first preset range may also be called a second preset range, and similarly, a second preset range may also be called a first preset range.
Depending on the context, the word "if" as used herein may be interpreted as "at the time of", "when", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if (the stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (the stated condition or event) is detected", or "in response to detecting (the stated condition or event)".
The accompanying drawings show various structural schematic diagrams according to the disclosed embodiments of the present invention. The figures are not drawn to scale; certain details are enlarged and some details may be omitted for clarity of presentation. The shapes of the various regions and layers shown in the figures and their relative sizes and positional relationships are exemplary only, may deviate in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may design regions/layers with different shapes, sizes, and relative positions according to actual needs.
The present invention provides an underground positioning method based on comprehensive point and line features that achieves high positioning accuracy in underground scenes with dim or changing light and, owing to the improvements made to the time-consuming point and line feature processing algorithms, avoids the high runtime of traditional point-line visual positioning methods.
Referring to Figure 2, the underground positioning method based on comprehensive point and line features of the present invention comprises a point-line visual odometry part, i.e., the front-end thread, and a loop detection and optimization part. In the visual odometry part, the raw images of the stereo camera are first undistorted and stereo-rectified so that the epipolar lines of the two images lie on the same horizontal line. Point and line features are then extracted and matched; afterwards, the point-line reprojection error function of the system is constructed and the pose is estimated. A local optimization thread then further refines the pose. To eliminate accumulated error, loop detection is performed; once a loop is detected, loop correction is applied and a global pose graph optimization is run on the poses. The specific steps are as follows:
S1、读取双目相机的原始图像,进行去畸变和双目校正操作;S1, read the original image of the binocular camera, and perform dedistortion and binocular correction operations;
S2、使用改进的点线特征处理算法;S2, using improved point and line feature processing algorithm;
S201、考虑到传统LK光流法中存在大量的并行运算,使用GPU进行加速计算,该思路通过CUDA在OpenCV中进行实现。S201. Considering the large amount of parallel computing in the traditional LK optical flow method, GPU is used for accelerated computing. This idea is implemented in OpenCV through CUDA.
请参阅图3,为了提高特征跟踪的准确率,本发明设计了改进的光流法匹配策略,匹配策略中双向是指当对图像I1和I2进行点特征跟踪时,按如下步骤进行:Please refer to Figure 3. In order to improve the accuracy of feature tracking, the present invention designs an improved optical flow matching strategy. "Bidirectional" in the matching strategy means that when point feature tracking is performed on images I1 and I2, the following steps are performed:
从左至右的箭头所示,从图像I1到图像I2使用LK光流法进行特征跟踪;As shown by the arrows from left to right, feature tracking is performed from image I 1 to image I 2 using the LK optical flow method;
检查第一步的跟踪结果,保留跟踪正确的点特征,对跟踪错误的进行剔除;Check the tracking results of the first step, keep the correctly tracked point features, and remove the incorrectly tracked ones;
将第二步保留的点特征从图像I2到图像I1使用LK光流法,如图3中从右至左的箭头所示;The point features retained in the second step are transferred from image I 2 to image I 1 using the LK optical flow method, as shown by the arrow from right to left in Figure 3;
检查第三步的跟踪结果,执行与第二步类似的操作;Check the tracking results in step 3 and perform similar operations as in step 2;
将保留的点特征作为图像I1与图像I2的匹配点对。The retained point features are used as matching point pairs between image I 1 and image I 2 .
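上述双向匹配步骤可以用如下示意代码表示(假设性示例:track_fwd/track_bwd代替LK光流跟踪,实际中可用cv2.calcOpticalFlowPyrLK实现,阈值dist_thresh为假设值)。The bidirectional matching steps above can be sketched as follows (a hypothetical example: `track_fwd`/`track_bwd` stand in for LK optical-flow tracking, e.g. `cv2.calcOpticalFlowPyrLK`, and `dist_thresh` is an assumed threshold):

```python
def bidirectional_match(pts1, track_fwd, track_bwd, dist_thresh=1.0):
    """Return (index, point-in-I2) pairs that survive forward-backward checking."""
    matches = []
    for i, p1 in enumerate(pts1):
        p2 = track_fwd(p1)          # step 1: track I1 -> I2
        if p2 is None:              # step 2: drop failed forward tracks
            continue
        p1_back = track_bwd(p2)     # step 3: track I2 -> I1
        if p1_back is None:         # step 4: drop failed backward tracks
            continue
        # step 5: keep the pair only if the round trip returns near p1
        err = ((p1[0] - p1_back[0]) ** 2 + (p1[1] - p1_back[1]) ** 2) ** 0.5
        if err < dist_thresh:
            matches.append((i, p2))
    return matches
```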
匹配策略中环形操作如下:The ring operation in the matching strategy is as follows:
对上一帧左目图像与当前帧左目图像使用双向光流法,以上一帧左目图像的点特征集合为起点,将跟踪得到的匹配点集合作为当前帧左目图像的点特征集合;Apply the bidirectional optical flow between the previous left image and the current left image, starting from the point feature set of the previous left image; the tracked matching point set becomes the point feature set of the current left image;
对当前帧左目图像与当前帧右目图像使用双向光流法,以上一步得到的当前帧左目点特征集合为输入,将跟踪得到的匹配点集合记为xtemp1;Apply the bidirectional optical flow between the current left image and the current right image, taking the point feature set obtained in the previous step as input; the tracked matching point set is recorded as xtemp1;
对当前帧右目图像与上一帧右目图像使用双向光流法,以点特征集合xtemp1为输入,将跟踪得到的匹配点集合记为xtemp2;Apply the bidirectional optical flow between the current right image and the previous right image, taking xtemp1 as input; the tracked matching point set is recorded as xtemp2;
上一帧右目图像原本的点特征集合已知,上一步得到的临时点特征集合为xtemp2;对xtemp2中的特征进行检测,判断其是否落入原始集合中对应特征的邻域内,若是则保留,否则剔除;The original point feature set of the previous right image is known, and the temporary point feature set obtained in the previous step is xtemp2; each feature in xtemp2 is checked against the neighborhood of its corresponding feature in the original set, and is kept if it falls inside, otherwise removed;
当前帧右目图像的临时点特征集合为xtemp1,上一帧右目图像的临时点特征集合为xtemp2;根据xtemp1与xtemp2的对应关系,从xtemp1中剔除上一步中被剔除的特征,保留剩余特征作为当前帧右目图像的点特征集合。The temporary point feature set of the current right image is xtemp1 and that of the previous right image is xtemp2; according to the correspondence between xtemp1 and xtemp2, the features removed in the previous step are also removed from xtemp1, and the remaining features are kept as the point feature set of the current right image.
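环形操作末尾的邻域校验可示意如下(半径radius为假设参数):绕环一圈跟踪回到上一帧右目图像的点,只有落在原始特征附近时才保留。The neighborhood check at the end of the ring operation can be sketched as follows (`radius` is an assumed parameter): a point tracked around the loop back to the previous right image is kept only if it lands near the original feature.

```python
def circular_check(prev_right_feats, cycled_feats, radius=2.0):
    """For each original feature and its loop-tracked counterpart,
    return True if the cycled point falls within `radius` pixels."""
    keep = []
    for orig, cyc in zip(prev_right_feats, cycled_feats):
        d = ((orig[0] - cyc[0]) ** 2 + (orig[1] - cyc[1]) ** 2) ** 0.5
        keep.append(d <= radius)
    return keep
```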
针对在快速运动场景中,特征跟踪易失败的问题,通过前一帧的运动对光流点进行预测,对光流跟踪提供较好的初值。To address the problem that feature tracking is prone to failure in fast-motion scenes, the optical flow points are predicted through the motion of the previous frame to provide a better initial value for optical flow tracking.
S202、设计一种基于点线不变量的线特征匹配方法,用于相邻帧与左右目图像之间的线特征匹配;S202, designing a line feature matching method based on point-line invariants for line feature matching between adjacent frames and left and right eye images;
首先使用LSD(Line Segment Detector)算法进行线段检测,并对破损线段进行恢复,随后基于点特征构造出点线不变量,在现有特征点的基础上完成线段匹配;若在个别场景下基于点线不变量的匹配线段数量不足,则对不满足点线不变量构造条件的线段计算LBD描述子并进行线段匹配,能够有效提高线段匹配的速度和准确度。First, the LSD (Line Segment Detector) algorithm is used to detect line segments and restore broken line segments. Then, point-line invariants are constructed based on point features, and line segment matching is completed on the basis of the existing feature points. If the number of line segments matched by point-line invariants is insufficient in some scenarios, the LBD descriptor is calculated for the line segments that do not meet the point-line invariant construction conditions and used for matching. This can effectively improve the speed and accuracy of line segment matching.
请参阅图4,具体步骤如下:Please refer to Figure 4, the specific steps are as follows:
S2021、线特征预处理S2021, Line feature preprocessing
LSD算法在进行线段检测时,会将一条连续的线段识别为多段;针对此问题,本发明进行线段合并处理。首先剔除长度过短的线段,然后综合线段主方向的角度、两条线段中点之间的距离、线段端点与端点之间的距离、端点距离与两线段平均长度的比值等因素进行评判,效果如图5所示。When performing line segment detection, the LSD algorithm will identify a continuous line segment as multiple segments. To address this problem, the present invention performs line segment merging processing. First, line segments that are too short are eliminated, and then the angle of the main direction of the line segment, the distance between the midpoints of the two line segments, the distance between the endpoints of the line segment, and the ratio of the endpoint distance to the average length of the two line segments are comprehensively evaluated. The effect is shown in Figure 5.
设置一个阈值,用于剔除长度低于该阈值的线段。长度过短的线段可能是噪声或者无关紧要的线段,剔除它们有助于提高系统的鲁棒性;计算每条线段的主方向角度。可以使用线段两端点的坐标信息来计算线段的方向。主方向角度有助于后续的线段合并判断,可以通过方向角度的差异来评估两条线段是否趋于平行;计算两条线段的中点之间的距离。如果两条线段的中点距离较近,可能表明它们在空间中是相邻的,有可能进行合并;计算每条线段的两个端点之间的距离。这个距离可以用来评估线段的长度,同时也可以用于判断线段是否趋于平行;将端点之间的距离与两条线段的平均长度的比值作为一个判定因子。如果这个比值较小,表示两条线段的端点距离相对较小,可能表明它们在同一条直线上。Set a threshold to remove line segments whose length is lower than the threshold. Line segments that are too short may be noise or insignificant. Removing them helps improve the robustness of the system. Calculate the main direction angle of each line segment. The coordinate information of the two endpoints of the line segment can be used to calculate the direction of the line segment. The main direction angle is helpful for subsequent segment merging judgment. The difference in direction angles can be used to evaluate whether two line segments tend to be parallel. Calculate the distance between the midpoints of the two line segments. If the midpoints of the two line segments are close, it may indicate that they are adjacent in space and may be merged. Calculate the distance between the two endpoints of each line segment. This distance can be used to evaluate the length of the line segment, and it can also be used to determine whether the line segments tend to be parallel. The ratio of the distance between the endpoints to the average length of the two line segments is used as a judgment factor. If this ratio is small, it means that the distance between the endpoints of the two line segments is relatively small, which may indicate that they are on the same straight line.
通过综合考虑上述因素,可以得到一个综合评判的指标,用于决定是否对两条线段进行合并。整个过程旨在提高线段检测的准确性和可靠性。By comprehensively considering the above factors, a comprehensive evaluation index can be obtained, which is used to decide whether to merge two line segments. The whole process aims to improve the accuracy and reliability of line segment detection.
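上述线段合并的综合评判可示意如下(各阈值均为假设值,实际应按场景调参)。The combined merge decision above can be sketched as follows (all thresholds are assumed values and would need tuning in practice):

```python
import math

def should_merge(seg1, seg2, ang_thresh=math.radians(5),
                 mid_thresh=20.0, gap_ratio_thresh=0.2):
    """Decide whether two segments ((x1,y1),(x2,y2)) should be merged,
    combining direction angle, midpoint distance, endpoint gap,
    and the gap-to-average-length ratio."""
    def angle(s):
        (x1, y1), (x2, y2) = s
        return math.atan2(y2 - y1, x2 - x1)
    def length(s):
        (x1, y1), (x2, y2) = s
        return math.hypot(x2 - x1, y2 - y1)
    def midpoint(s):
        (x1, y1), (x2, y2) = s
        return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    # 1) main-direction angle difference, folded into [0, pi/2]
    da = abs(angle(seg1) - angle(seg2)) % math.pi
    da = min(da, math.pi - da)
    if da > ang_thresh:
        return False
    # 2) midpoint distance
    m1, m2 = midpoint(seg1), midpoint(seg2)
    if math.hypot(m1[0] - m2[0], m1[1] - m2[1]) > mid_thresh:
        return False
    # 3) smallest endpoint-to-endpoint gap between the two segments
    gap = min(math.hypot(p[0] - q[0], p[1] - q[1]) for p in seg1 for q in seg2)
    # 4) gap relative to the average length of the two segments
    avg_len = (length(seg1) + length(seg2)) / 2.0
    return gap / avg_len < gap_ratio_thresh
```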
S2022、点线不变量S2022, Point-Line Invariants
对于空间中同一平面上点线的投影,前后两帧中两个点特征与线特征的距离比值不变,称为点线不变量,单应矩阵用4对匹配特征点通过直接线性变换法求解。构造点线不变量的前提是点线位于同一平面,本文认为位于线段支持域内的点线符合这一要求。线段支持域以点和线段之间的距离作为确定原则,主要分为点沿着平行线段与垂直线段两个方向的距离,具体是指点距离线段的垂直平分线的距离和点距离线段的距离,对其要求分别是小于线段长度的0.5倍和小于线段长度的2倍。For the projection of points and lines on the same plane in space, the ratio of the distance between the two point features and the line features in the previous and next frames remains unchanged, which is called the point-line invariant. The homography matrix is solved by direct linear transformation using 4 pairs of matching feature points. The premise for constructing point-line invariants is that the points and lines are located in the same plane. This paper believes that the points and lines located in the support domain of the line segment meet this requirement. The support domain of the line segment uses the distance between the point and the line segment as the determination principle, which is mainly divided into the distance of the point along the parallel line segment and the perpendicular line segment. Specifically, it refers to the distance between the point and the perpendicular bisector of the line segment and the distance between the point and the line segment. The requirements are less than 0.5 times the length of the line segment and less than 2 times the length of the line segment, respectively.
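线段支持域的判定可示意如下(0.5倍与2倍的阈值取自前文,坐标约定为假设)。The support-region test can be sketched as follows (the 0.5L and 2L thresholds come from the text above; the coordinate conventions are assumed):

```python
import math

def in_support_region(pt, seg):
    """A point is in the support region of segment seg = ((x1,y1),(x2,y2)) if
    its distance to the perpendicular bisector is < 0.5 * L and its distance
    to the supporting line is < 2 * L, where L is the segment length."""
    (x1, y1), (x2, y2) = seg
    px, py = pt
    dx, dy = x2 - x1, y2 - y1
    L = math.hypot(dx, dy)
    ux, uy = dx / L, dy / L                      # unit direction of the segment
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0    # midpoint
    # distance to the perpendicular bisector = |projection of (pt - mid) on u|
    along = abs((px - mx) * ux + (py - my) * uy)
    # distance to the supporting line = |cross product of (pt - p1) with u|
    perp = abs((px - x1) * uy - (py - y1) * ux)
    return along < 0.5 * L and perp < 2.0 * L
```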
请参阅图6,由位于同一平面上的线段及其附近的2个特征点组成。设空间点P1、P2以及空间线段L位于平面β上,X1、X2、l1与Y1、Y2、l2是空间点P1、P2以及空间线段L在前后两帧上的投影。Please refer to Figure 6, which consists of a line segment located on the same plane and two feature points nearby. Assume that the spatial points P 1 , P 2 and the spatial line segment L are located on the plane β, and X 1 , X 2 , l 1 and Y 1 , Y 2 , l 2 are the projections of the spatial points P 1 , P 2 and the spatial line segment L on the previous and next frames.
对于线段l1和l2,使用单应矩阵描述其映射关系。假设该映射关系为H,线段l1和l2所在直线的系数向量为p和q,点X1、X2与Y1、Y2对应的齐次坐标为X1、X2和Y1、Y2。For line segments l 1 and l 2 , the homography matrix is used to describe their mapping relationship. Assume that the mapping relationship is H, the coefficient vectors of the straight line where line segments l 1 and l 2 are located are p and q, and the homogeneous coordinates corresponding to points X 1 , X 2 and Y 1 , Y 2 are X 1 , X 2 and Y 1 , Y 2 .
特征点和特征线满足如下关系:The feature points and feature lines satisfy the following relationship:
q=Hp (1)q=Hp (1)
Yi=HXi,i=1,2 (2) Yi = HXi , i = 1, 2 (2)
记:Denote:
D(X1,X2,l1)=d(X1,l1)/d(X2,l1) (3)
D(Y1,Y2,l2)=d(Y1,l2)/d(Y2,l2) (4)
其中,d(X,l)表示点X到直线l的距离。Among them, d(X,l) denotes the distance from point X to line l.
将式(1)和式(2)带入式(4)得:Substituting equation (1) and equation (2) into equation (4), we get:
D(X1,X2,l1)=D(Y1,Y2,l2) (5)D(X 1 ,X 2 ,l 1 )=D(Y 1 ,Y 2 ,l 2 ) (5)
由式(5)可知,对于空间中同一平面上点线的投影,前后两帧中两个点特征与线特征的距离比值是不变的,并将其称为点线不变量。其中单应矩阵可以用4对匹配特征点通过直接线性变换法求解。From formula (5), we can see that for the projection of points and lines on the same plane in space, the distance ratio between two point features and line features in the previous and next frames is constant, and it is called point-line invariant. The homography matrix can be solved by direct linear transformation method using 4 pairs of matching feature points.
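点线不变量可用如下数值示例验证(示意:为便于自含验证,此处取仿射形式的单应矩阵,在该情形下距离比值严格不变;一般情形中单应矩阵由匹配点求解)。The point-line invariant can be checked numerically as follows (a sketch: for a self-contained check an affine-form homography is used here, under which the distance ratio is exactly preserved; in general H is solved from matched points):

```python
def line_through(a, b):
    """Homogeneous line through two homogeneous points via the cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dist(pt, line):
    """Euclidean distance from homogeneous point (x, y, w) to line (a, b, c)."""
    x, y, w = pt
    a, b, c = line
    return abs(a * x + b * y + c * w) / (abs(w) * (a * a + b * b) ** 0.5)

def D(p1, p2, line):
    """Point-line invariant: ratio of the two point-to-line distances."""
    return dist(p1, line) / dist(p2, line)

def apply_h(H, p):
    """Apply a 3x3 homography (tuple of rows) to a homogeneous point."""
    return tuple(sum(H[i][j] * p[j] for j in range(3)) for i in range(3))
```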
S2023、线段匹配S2023, line segment matching
对所有的线段直接计算点线不变量进行匹配会造成计算资源浪费,本文对线段主方向的夹角设置一个范围,若夹角大于该范围,则不进行下一步匹配,减少匹配耗时。Directly calculating the point-line invariants for all line segments for matching will result in a waste of computing resources. This paper sets a range for the angle of the main direction of the line segment. If the angle is larger than the range, the next step of matching will not be performed, thus reducing the matching time.
对于符合角度限制的线段,验证其是否满足构造点线不变量的条件,即线段支持域内至少有2个匹配的点特征,若满足则计算其点线不变量,定义两条线段的不变量误差为:For the line segments that meet the angle restrictions, verify whether they meet the conditions for constructing point-line invariants, that is, there are at least two matching point features in the line segment support domain. If they meet the conditions, calculate their point-line invariants and define the invariant error of the two line segments as:
AffSim(l1,l2)=exp(−|D(Xi,Xj,l1)−D(Yi,Yj,l2)|), i,j∈[1,n] (6)
其中,(Xi,Xj)是线段l1支持域内的2个特征点,(Yi,Yj)是线段l2支持域内的2个特征点,(Xi,Yi)是一对匹配的特征点,n表示线段支持域内匹配的特征点的对数。Among them, (X i ,X j ) are two feature points in the support domain of line segment l 1 , (Y i ,Y j ) are two feature points in the support domain of line segment l 2 , (X i ,Y i ) is a pair of matching feature points, and n represents the number of matching feature points in the support domain of the line segment.
对于线段l1和l2,其最终的相似性Sim(l1,l2)为AffSim(l1,l2)的最大值;若Sim(l1,l2)>0.95,则线段匹配成功,总体匹配效果如图7所示。For line segments l1 and l2, the final similarity Sim(l1,l2) is the maximum value of AffSim(l1,l2); if Sim(l1,l2)>0.95, the line segments are matched successfully, and the overall matching effect is shown in Figure 7.
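线段相似性的计算可示意如下(输入为两线段支持域内由匹配点对算得的不变量值列表,阈值0.95取自前文)。The similarity computation can be sketched as follows (the inputs are lists of invariant values computed from matched point pairs in the two segments' support regions; the 0.95 threshold comes from the text above):

```python
import math

def aff_sim(d1, d2):
    """Invariant-error similarity of two segments: exp(-|D1 - D2|)."""
    return math.exp(-abs(d1 - d2))

def match_lines(invariants1, invariants2, thresh=0.95):
    """Sim is the maximum AffSim over all invariant pairings;
    Sim > thresh declares a successful match."""
    sim = max(aff_sim(a, b) for a in invariants1 for b in invariants2)
    return sim > thresh, sim
```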
S3、推导点线特征的重投影误差函数S3. Derivation of the reprojection error function of point and line features
S301、点线特征的参数化方法S301. Parameterization method of point and line features
点特征相对线特征较为简单,可以在三维空间中使用欧式坐标来表示。本发明采用普吕克坐标和正交方法对直线进行表示。Point features are relatively simpler than line features and can be represented in three-dimensional space using Euclidean coordinates. The present invention uses Plücker coordinates and orthogonal methods to represent straight lines.
S302、点线特征的重投影误差函数S302, reprojection error function of point and line features
使用重投影误差构造点线误差函数,首次投影指空间点线投影到图像平面上形成的特征。重投影指利用估计的三维空间路标和相机位姿进行再次投影。重投影误差指重投影时估计到的像素点和初次投影得到的像素点之间的差值。可构造出综合点线特征的重投影误差函数,如公式(7)所示:The point and line error function is constructed using the reprojection error. The first projection refers to the feature formed by projecting the spatial point and line onto the image plane. Reprojection refers to reprojection using the estimated three-dimensional space landmarks and camera pose. The reprojection error refers to the difference between the estimated pixel point during reprojection and the pixel point obtained by the first projection. The reprojection error function of the comprehensive point and line features can be constructed as shown in formula (7):
E=∑j∈pl ρp(ep,jTΣj−1ep,j)+∑k∈Il ρl(el,kTΣk−1el,k) (7)
其中,pl和Il分别表示点、线特征集,ρp和ρl分别表示点、线特征的Huber鲁棒核函数,Σj和Σk表示点、线特征的高斯分布协方差矩阵,ep,j和el,k分别表示第j个点特征和第k个线特征的重投影误差。Among them, pl and Il represent the point and line feature sets respectively, ρp and ρl represent the Huber robust kernel functions of the point and line features respectively, Σj and Σk represent the Gaussian covariance matrices of the point and line features, and ep,j and el,k represent the reprojection errors of the j-th point feature and the k-th line feature respectively.
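公式(7)的误差累加可示意如下(假设协方差取单位阵、Huber阈值为假设值,误差以二维向量表示)。The cost accumulation in formula (7) can be sketched as follows (assumptions: identity covariances, an assumed Huber threshold, errors given as 2-D vectors):

```python
def huber(e2, delta):
    """Huber robust kernel applied to a squared error e2."""
    e = e2 ** 0.5
    return e2 if e <= delta else 2.0 * delta * e - delta * delta

def total_cost(point_errs, line_errs, dp=1.345, dl=1.345):
    """Sum of Huber-weighted squared reprojection errors of point and
    line features (identity covariance assumed for both)."""
    c = sum(huber(ex * ex + ey * ey, dp) for (ex, ey) in point_errs)
    c += sum(huber(e1 * e1 + e2 * e2, dl) for (e1, e2) in line_errs)
    return c
```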
S4、对误差函数使用列文伯格-马夸尔特法(Levenberg-Marquardt,LM)进行求解便可得到要估计的相机位姿,使用局部优化线程对相机位姿进行优化;S4. The error function is solved by using the Levenberg-Marquardt method (LM) to obtain the camera pose to be estimated, and the camera pose is optimized using a local optimization thread;
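列文伯格-马夸尔特法的核心迭代可用如下标量残差示例说明(假设性的自含示例,采用H+λ·diag(H)的Marquardt阻尼变体;实际系统作用于高维位姿变量并利用稀疏性)。The core Levenberg-Marquardt iteration can be illustrated with a scalar-residual example (a hypothetical self-contained sketch using the Marquardt damping variant H + λ·diag(H); the real system operates on high-dimensional pose variables and exploits sparsity):

```python
def levenberg_marquardt(residual, jacobian, x0, iters=50, lam=1e-3):
    """Minimize sum(r_i(x)^2) over a scalar x: residual/jacobian return
    lists of residuals and their derivatives at x."""
    x = x0
    cost = sum(r * r for r in residual(x))
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        g = sum(Ji * ri for Ji, ri in zip(J, r))   # gradient J^T r
        H = sum(Ji * Ji for Ji in J)               # Gauss-Newton Hessian J^T J
        dx = -g / (H + lam * H + 1e-12)            # damped step
        new_cost = sum(ri * ri for ri in residual(x + dx))
        if new_cost < cost:                        # accept: shrink damping
            x, cost, lam = x + dx, new_cost, lam * 0.5
        else:                                      # reject: grow damping
            lam *= 2.0
    return x
```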
得到的相机位姿是依靠相邻两帧之间信息得到的,随着系统的运行,结果的可靠性逐步降低。为了实现更高的定位精度,本发明借鉴ORBSLAM2算法的共视图思想,使用当前处理的关键帧、共视图中与当前帧相连的关键帧,以及这些关键帧观测到的路标空间点线构建局部地图,以相机位姿和空间点线位置作为待优化的状态变量,将局部地图内的空间点线与能观测到该点线的关键帧之间的重投影误差函数作为约束条件,利用图模型的稀疏性加速求解。通过对局部地图内的相机位姿和空间点线进行优化,在一定程度上可以消除累积误差,提高定位精度。The obtained camera pose is obtained by relying on the information between two adjacent frames. As the system runs, the reliability of the results gradually decreases. In order to achieve higher positioning accuracy, the present invention draws on the co-viewing idea of the ORBSLAM2 algorithm, uses the key frames currently processed, the key frames connected to the current frame in the co-viewing, and the landmark spatial points and lines observed by these key frames to construct a local map, and uses the camera pose and spatial point and line positions as state variables to be optimized. The reprojection error function between the spatial points and lines in the local map and the key frames that can observe the points and lines is used as a constraint condition, and the sparsity of the graph model is used to accelerate the solution. By optimizing the camera pose and spatial points and lines in the local map, the cumulative error can be eliminated to a certain extent and the positioning accuracy can be improved.
S5、若当前帧是关键帧,则进行回环检测,并对检测到的回环进行校正;S5. If the current frame is a key frame, loop detection is performed and the detected loop is corrected;
本发明选取关键帧的原则为:当前帧与上一个关键帧之间的普通帧不少于20个;追踪到不少于55个点特征和26条线特征;当前帧与上一个关键帧共同观测到的点线特征占当前帧观测到的点线特征的比值小于0.7。The principles for selecting key frames in the present invention are: there are no less than 20 ordinary frames between the current frame and the previous key frame; no less than 55 point features and 26 line features are tracked; the ratio of the point and line features observed jointly by the current frame and the previous key frame to the point and line features observed in the current frame is less than 0.7.
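关键帧选取原则可示意如下(原文未写明三项条件的组合逻辑,此处假设为同时满足)。The keyframe selection principles can be sketched as follows (the text does not state how the three conditions combine; a conjunction is assumed here):

```python
def is_keyframe(frames_since_last_kf, n_points, n_lines, covis_ratio):
    """Keyframe test per the stated principles: at least 20 ordinary frames
    since the last keyframe, at least 55 tracked point features and 26 line
    features, and a co-observed feature ratio below 0.7."""
    return (frames_since_last_kf >= 20
            and n_points >= 55
            and n_lines >= 26
            and covis_ratio < 0.7)
```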
本发明通过词袋模型,以两帧图像之间的相似度作为回环检测的依据。一般可基于词袋模型判断两帧之间的相似程度。词袋模型由单词节点组成,单词节点则由图像点、线特征的描述子组成。通过该模型得到每一帧图像对应的单词向量,然后对比两帧的单词向量,确定其相似程度。The present invention uses the bag-of-words model to take the similarity between two frames of images as the basis for loop detection. Generally, the similarity between two frames can be judged based on the bag-of-words model. The bag-of-words model is composed of word nodes, and the word nodes are composed of descriptors of image points and line features. The word vector corresponding to each frame of the image is obtained through the model, and then the word vectors of the two frames are compared to determine their similarity.
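两帧单词向量的相似度对比可示意如下(此处以余弦相似度作为假设性度量;DBoW2等实际实现常用L1评分)。Comparing the word vectors of two frames can be sketched as follows (cosine similarity is used here as an assumed measure; practical implementations such as DBoW2 often use an L1-based score):

```python
def bow_similarity(v1, v2):
    """Cosine similarity of two sparse bag-of-words vectors
    given as {word_id: weight} dictionaries."""
    words = set(v1) | set(v2)
    dot = sum(v1.get(w, 0.0) * v2.get(w, 0.0) for w in words)
    n1 = sum(x * x for x in v1.values()) ** 0.5
    n2 = sum(x * x for x in v2.values()) ** 0.5
    return dot / (n1 * n2) if n1 and n2 else 0.0
```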
校正的主要目的是将当前帧漂移的位姿更新为更准确的数据。通过计算得到的当前帧和回环帧的相对位姿以及回环帧的位姿,即可得到当前帧新的位姿。同时,当前帧周围的关键帧可以通过当前帧未更新前与周围关键帧的相对位姿进行位姿更新。在进行位姿更新后,也需要对和这些关键帧有关的路标进行更新。对于每个路标来说,其与生成它的关键帧的相对位置没有发生改变,可根据此特点对路标点线进行更新。当前帧观测到的路标也需要更新,可以将回环帧及其邻接关键帧观测到的路标投影到当前帧及其邻接关键帧中,一般认为历史数据的可靠性更高,因此将历史数据作为依据对当前帧路标进行更新。The main purpose of the correction is to update the drifted pose of the current frame to more accurate data. The new pose of the current frame can be obtained by calculating the relative pose of the current frame and the loop frame and the pose of the loop frame. At the same time, the key frames around the current frame can be updated with their relative poses to the surrounding key frames before the current frame is updated. After the pose is updated, the landmarks related to these key frames also need to be updated. For each landmark, its relative position with the key frame that generated it has not changed, and the landmark points and lines can be updated based on this feature. The landmarks observed in the current frame also need to be updated. The landmarks observed in the loop frame and its adjacent key frames can be projected into the current frame and its adjacent key frames. It is generally believed that historical data is more reliable, so the historical data is used as a basis to update the landmarks of the current frame.
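回环校正中当前帧位姿的更新可示意如下(假设以齐次变换矩阵表示位姿并采用左乘约定)。The current-frame pose update in loop correction can be sketched as follows (assuming poses are homogeneous transform matrices with a left-multiplication convention):

```python
def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def correct_current_pose(T_loop, T_rel):
    """Given the loop frame's pose T_loop and the current frame's pose
    relative to the loop frame T_rel, the corrected current pose is
    T_cur = T_loop * T_rel."""
    return mat_mul(T_loop, T_rel)
```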
S6、位姿图优化,输出图像帧的位姿,得到定位轨迹。S6. Optimize the pose graph, output the pose of the image frame, and obtain the positioning trajectory.
为了得到更高的定位精度,在回环检测后,需要进行全局优化。随着系统的持续运行,待估计变量的规模越来越庞大,会造成计算效率降低,耗时增加的问题。事实上,空间点线坐标在局部优化后,趋于收敛,再次进行后端优化的必要性不大,因此本发明构建仅对相机位姿进行优化的位姿图模型。In order to obtain higher positioning accuracy, global optimization is required after loop detection. As the system continues to run, the scale of the variables to be estimated becomes larger and larger, which will cause the problem of reduced computational efficiency and increased time consumption. In fact, after local optimization, the spatial point and line coordinates tend to converge, and there is little need for back-end optimization again. Therefore, the present invention constructs a posture graph model that only optimizes the camera posture.
根据位姿图模型进行轨迹定位时通过以下步骤:The following steps are performed to locate the trajectory based on the pose graph model:
1)获取位姿图;1) Obtain the pose graph;
2)设置轨迹约束:在位姿图中,节点表示相机的位姿,而边表示相邻节点之间的相对运动关系。这些约束是由传感器提供的,可以是视觉特征匹配、IMU测量等。这些约束连接了不同时刻的位姿,构成轨迹的基础。2) Set trajectory constraints: In the pose graph, nodes represent the pose of the camera, and edges represent the relative motion relationship between adjacent nodes. These constraints are provided by sensors, which can be visual feature matching, IMU measurement, etc. These constraints connect the poses at different times and form the basis of the trajectory.
3)非线性优化:利用非线性优化算法,最小化位姿图中所有约束的误差项。这个过程会调整相机的位姿,使得整个位姿图更符合实际观测,从而得到更准确的轨迹估计。3) Nonlinear optimization: Use nonlinear optimization algorithms to minimize the error terms of all constraints in the pose graph. This process adjusts the camera's pose so that the entire pose graph is more consistent with actual observations, thereby obtaining a more accurate trajectory estimate.
4)轨迹估计:优化完成后,可以从位姿图中提取相机在不同时刻的位姿信息,从而得到整个轨迹的估计。这个轨迹估计可以用于定位、导航或其他应用。其中位姿图是一个图结构,其中节点表示相机的位姿,边表示两个位姿之间的约束关系。4) Trajectory Estimation: After the optimization is completed, the pose information of the camera at different times can be extracted from the pose graph to obtain an estimate of the entire trajectory. This trajectory estimate can be used for positioning, navigation or other applications. The pose graph is a graph structure in which nodes represent the poses of the camera and edges represent the constraints between two poses.
这样的模型用于最小化相机在不同时间或位置的估计误差,从而提高SLAM系统的准确性。Such a model is used to minimize the estimation error of the camera at different times or positions, thereby improving the accuracy of the SLAM system.
本发明再一个实施例中,提供一种基于综合点线特征的井下定位系统,该系统能够用于实现上述基于综合点线特征的井下定位方法,具体的,该基于综合点线特征的井下定位系统包括预处理模块、提取模块、计算模块、优化模块以及定位模块。In yet another embodiment of the present invention, a downhole positioning system based on comprehensive point and line features is provided, which can be used to implement the above-mentioned downhole positioning method based on comprehensive point and line features. Specifically, the downhole positioning system based on comprehensive point and line features includes a preprocessing module, an extraction module, a calculation module, an optimization module and a positioning module.
其中,预处理模块,对双目相机的两幅原始图像进行去畸变和双目校正操作,使得处理后的两幅图像的对极线在同一水平线上;The preprocessing module performs dedistortion and binocular correction operations on the two original images of the binocular camera, so that the epipolar lines of the two processed images are on the same horizontal line;
提取模块,使用改进的点线特征处理算法对预处理模块得到的两幅图像进行点和线特征的提取与匹配;The extraction module uses an improved point and line feature processing algorithm to extract and match point and line features of the two images obtained by the preprocessing module;
计算模块,根据提取模块得到的特征推导点线特征的重投影误差函数;A calculation module, which derives a reprojection error function of point and line features based on the features obtained by the extraction module;
优化模块,使用列文伯格-马夸尔特法对计算模块得到的重投影误差函数进行求解,得到要估计的相机位姿,使用局部优化线程对得到的相机位姿进行优化,得到优化后的位姿;根据位姿判断是否为关键帧,若当前帧是关键帧,根据两帧图像之间的相似度进行回环检测,并对检测到的回环进行校正,得到需要校正的位姿;The optimization module uses the Levenberg-Marquardt method to solve the reprojection error function obtained by the calculation module to obtain the camera pose to be estimated, and uses the local optimization thread to optimize the obtained camera pose to obtain the optimized pose; determines whether it is a key frame according to the pose, and if the current frame is a key frame, performs loop detection according to the similarity between the two frames, and corrects the detected loop to obtain the pose that needs to be corrected;
定位模块,利用优化模块得到的需要校正的位姿对位姿图进行全局优化,输出图像帧的位姿,得到定位轨迹。The positioning module uses the posture that needs to be corrected obtained by the optimization module to globally optimize the posture graph, output the posture of the image frame, and obtain the positioning trajectory.
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。通常在此处附图中的描述和所示的本发明实施例的组件可以通过各种不同的配置来布置和设计。因此,以下对在附图中提供的本发明的实施例的详细描述并非旨在限制要求保护的本发明的范围,而是仅仅表示本发明的选定实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。In order to make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below in conjunction with the drawings in the embodiments of the present invention. Obviously, the described embodiments are part of the embodiments of the present invention, rather than all of the embodiments. The components of the embodiments of the present invention described and shown in the drawings here can usually be arranged and designed in various different configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. Based on the embodiments in the present invention, all other embodiments obtained by ordinary technicians in this field without making creative work are within the scope of protection of the present invention.
本发明在国际公认的EuRoC数据集的MH_04_difficult序列对本发明方法进行试验,MH_04_difficult序列中存在一些光线较暗、光线变化、纹理不丰富的环境,可以模拟井下的环境,该序列的片段如图1所示。The method of the present invention is tested in the MH_04_difficult sequence of the internationally recognized EuRoC data set. The MH_04_difficult sequence contains some environments with dim light, changing light, and poor texture, which can simulate the underground environment. A fragment of the sequence is shown in Figure 1.
对本方法的性能进行评估,使用绝对位姿误差(Absolute Pose Error,APE)作为定位算法精度的评价指标。APE通过计算估计位姿与真实位姿之间的距离,对算法的定位精度进行评价。The performance of this method is evaluated by using Absolute Pose Error (APE) as an evaluation indicator of the accuracy of the positioning algorithm. APE evaluates the positioning accuracy of the algorithm by calculating the distance between the estimated pose and the true pose.
APE的计算公式为:The calculation formula of APE is:
APEi=Ti(Ti′)-1 (8)APE i = Ti (T i ′) -1 (8)
其中,Ti′∈SE(3)表示在i时刻的估计位姿,Ti∈SE(3)表示在i时刻的真实位姿。Among them, Ti′∈SE (3) represents the estimated pose at time i, and Ti∈SE (3) represents the true pose at time i.
计算出所有时刻的APE后,可以计算其均方根误差(Root Mean Squared Error,RMSE),通过RMSE整体评估在整条轨迹上的算法定位精度,计算公式如下:After calculating the APE at all times, its root mean square error (RMSE) can be computed; the RMSE evaluates the positioning accuracy of the algorithm over the entire trajectory. The calculation formula is as follows:
RMSE=√((1/N)∑i=1N‖trans(APEi)‖2) (9)
上式中,trans(APE)表示APE的平移部分。In the above formula, trans(APE) represents the translation part of APE.
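APE平移部分的RMSE计算可示意如下(以估计轨迹与真实轨迹的平移分量为输入)。The RMSE of the translational APE can be sketched as follows (taking the translation components of the estimated and ground-truth trajectories as input):

```python
def ape_rmse(est_positions, gt_positions):
    """RMSE of translational APE: sqrt(mean of squared translation errors)."""
    n = len(est_positions)
    s = 0.0
    for e, g in zip(est_positions, gt_positions):
        s += sum((ei - gi) ** 2 for ei, gi in zip(e, g))
    return (s / n) ** 0.5
```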
对MH_04_difficult序列中的原始图像进行线特征提取与匹配的效果如图9所示,可以发现提取出的特征较少。对图像增强后的特征提取与匹配效果如图10所示,可以发现,经过图像增强后,特征提取与匹配的效果有较大提高。The effect of line feature extraction and matching of the original image in the MH_04_difficult sequence is shown in Figure 9, and it can be found that the extracted features are relatively few. The effect of feature extraction and matching after image enhancement is shown in Figure 10, and it can be found that after image enhancement, the effect of feature extraction and matching is greatly improved.
对比了本发明方法和ORBSLAM2算法在MH_04_difficult序列定位精度,如图11和图12所示,其中PL-SLAM表示本发明方法的结果,将图中信息进行总结,如表1所示。The positioning accuracy of the method of the present invention and the ORBSLAM2 algorithm in the MH_04_difficult sequence is compared, as shown in Figures 11 and 12, where PL-SLAM represents the result of the method of the present invention. The information in the figure is summarized as shown in Table 1.
表1定位精度对比(米)Table 1 Comparison of positioning accuracy (meters)
在MH_04_difficult序列中,由于纹理不够丰富,点特征不足,在引入线特征后,PL-SLAM算法相比ORBSLAM2算法的定位精度提高明显。PL-SLAM算法的APE误差最大值约为0.106米,比ORBSLAM2算法减小了0.048米,APE均方根误差为0.043米,比ORBSLAM2算法减少了50.6%的误差。本发明的APE误差最大值不足0.1米,APE均方根误差仅为0.043米,在运动范围宽约为6米,距离为58.6米的场景中,定位精度相当高。In the MH_04_difficult sequence, due to the insufficient texture and insufficient point features, after the introduction of line features, the positioning accuracy of the PL-SLAM algorithm is significantly improved compared with the ORBSLAM2 algorithm. The maximum APE error of the PL-SLAM algorithm is about 0.106 meters, which is 0.048 meters less than that of the ORBSLAM2 algorithm, and the APE root mean square error is 0.043 meters, which is 50.6% less than that of the ORBSLAM2 algorithm. The maximum APE error of the present invention is less than 0.1 meters, and the APE root mean square error is only 0.043 meters. In the scene with a motion range of about 6 meters wide and a distance of 58.6 meters, the positioning accuracy is quite high.
综上所述,本发明一种基于综合点线特征的井下定位方法及系统,解决传统算法在光线较暗等场景下,定位精度不够甚至定位失败的问题。在传统算法的基础上,通过引入线特征和图像预处理算法,设计了综合点线特征的视觉定位算法。结果表明,本文算法提升明显,具有较高的定位精度。In summary, the present invention provides an underground positioning method and system based on comprehensive point and line features, which solves the problem that the traditional algorithm has insufficient positioning accuracy or even fails to locate in scenes with low light. On the basis of the traditional algorithm, a visual positioning algorithm based on comprehensive point and line features is designed by introducing line features and image preprocessing algorithms. The results show that the algorithm in this paper has obvious improvements and has higher positioning accuracy.
本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。Those skilled in the art will appreciate that the embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, the present application may adopt the form of a complete hardware embodiment, a complete software embodiment, or an embodiment in combination with software and hardware. Moreover, the present application may adopt the form of a computer program product implemented in one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) that contain computer-usable program code.
本申请是参照根据本申请实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。The present application is described with reference to the flowchart and/or block diagram of the method, device (system) and computer program product according to the embodiment of the present application. It should be understood that each process and/or box in the flowchart and/or block diagram, and the combination of the process and/or box in the flowchart and/or block diagram can be realized by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce a device for realizing the function specified in one process or multiple processes in the flowchart and/or one box or multiple boxes in the block diagram.
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce a manufactured product including an instruction device that implements the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。These computer program instructions may also be loaded onto a computer or other programmable data processing device so that a series of operational steps are executed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more boxes in the block diagram.
The above content merely illustrates the technical concept of the present invention and shall not be used to limit its protection scope. Any modification made on the basis of the technical solution in accordance with the technical concept proposed by the present invention falls within the protection scope of the claims of the present invention.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410146865.2A CN118031963A (en) | 2024-02-01 | 2024-02-01 | Underground positioning method and system based on comprehensive dotted line characteristics |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118031963A true CN118031963A (en) | 2024-05-14 |
Family
ID=90998568
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410146865.2A (CN118031963A, Pending) | Underground positioning method and system based on comprehensive dotted line characteristics | 2024-02-01 | 2024-02-01 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118031963A (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119477095A (en) * | 2024-11-30 | 2025-02-18 | Northwestern Polytechnical University | A method for evaluating the stability of rescue well positioning results based on the Raida criterion |
- 2024-02-01: CN application CN202410146865.2A, published as CN118031963A (en); status: active, Pending
Similar Documents
| Publication | Title |
|---|---|
| CN114862949B (en) | A structured scene visual SLAM method based on point, line and surface features |
| CN111899334B (en) | Visual synchronous positioning and map building method and device based on point-line characteristics |
| CN110458161B (en) | Mobile robot doorplate positioning method combined with deep learning |
| CN111462207A (en) | RGB-D simultaneous positioning and map creation method integrating direct method and feature method |
| CN113537208A (en) | Visual positioning method and system based on semantic ORB-SLAM technology |
| CN110070615A (en) | A panoramic vision SLAM method based on multi-camera collaboration |
| CN111462135A (en) | Semantic mapping method based on visual SLAM and 2D semantic segmentation |
| US9299161B2 (en) | Method and device for head tracking and computer-readable recording medium |
| CN112085790A (en) | Point-line combined multi-camera visual SLAM method, equipment and storage medium |
| CN118521653B (en) | Positioning and mapping method and system based on fusion of LiDAR and inertial measurement in complex scenes |
| CN113570713B (en) | A semantic map construction method and device for dynamic environments |
| CN111998862A (en) | Dense binocular SLAM method based on BNN |
| Liu et al. | Visual SLAM based on dynamic object removal |
| CN112419497A (en) | Monocular vision-based SLAM method combining feature method and direct method |
| Yang et al. | CubeSLAM: Monocular 3D object detection and SLAM without prior models |
| CN116128966A (en) | A semantic localization method based on environmental objects |
| CN117253003A (en) | Indoor RGB-D SLAM method integrating direct method and point-plane characteristic method |
| CN118225096A (en) | Multi-sensor SLAM method based on dynamic feature point elimination and loop detection |
| CN117593650A (en) | Moving point filtering visual SLAM method based on 4D millimeter wave radar and SAM image segmentation |
| CN118031963A (en) | Underground positioning method and system based on comprehensive dotted line characteristics |
| CN113465617B (en) | A map construction method, device and electronic equipment |
| CN112907633B (en) | Dynamic feature point identification method and its application |
| Shao | A monocular SLAM system based on the ORB features |
| CN119206203A (en) | A semantic visual SLAM method based on deep mask segmentation in dynamic environment |
| CN118887353A (en) | A SLAM mapping method integrating points, lines and visual labels |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |