
CN112486172B - Road edge detection method and robot - Google Patents


Info

Publication number: CN112486172B
Application number: CN202011380356.4A
Authority: CN (China)
Prior art keywords: robot, map, static obstacle, road edge, obstacle map
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN112486172A
Inventors: 黄寅, 张涛, 吴翔, 郭璁
Current Assignee: Shenzhen Pudu Technology Co Ltd
Original Assignee: Shenzhen Pudu Technology Co Ltd
Application filed by Shenzhen Pudu Technology Co Ltd
Priority: CN202011380356.4A, priority date 2020-11-30
Published as CN112486172A; granted and published as CN112486172B
PCT application: PCT/CN2021/134282

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231: using optical position detecting means
    • G05D1/0246: using a video camera in combination with image processing means
    • G05D1/0257: using a radar
    • G05D1/0268: using internal positioning means
    • G05D1/0274: using mapping information stored in a memory device

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a road edge detection method and a robot. The road edge detection method comprises the following steps: acquiring depth data, a robot pose, and a topological map; establishing a static obstacle map according to the depth data and the robot pose; calculating the gray value of the static obstacle map; and calculating the road edge according to the robot pose, the topological map, and the gray value. According to the road edge detection method and the robot, the depth data, the robot pose, the topological map, and the static obstacle map are fused, so that the road edge is calculated more accurately and the possibility of the robot colliding with an obstacle is reduced.

Description

Road edge detection method and robot
Technical Field
The invention relates to the technical field of the Internet, and in particular to a road edge detection method and a robot.
Background
A mobile robot moves in a specific scene to autonomously perform tasks such as delivery, guidance, inspection, and disinfection. Such scenes include restaurants, hotels, office buildings, and hospitals. In these scenes, the robot needs to build a map and plan a path; many obstacles exist near the path along which the robot travels, and the robot needs to avoid them while moving. In the prior art, the robot detects the edge of the road it travels on with a laser radar, but this approach has a large error, and the robot easily collides with obstacles.
Disclosure of Invention
The present invention has been made in view of the above circumstances, and its object is to provide a road edge detection method and a robot capable of accurately identifying the road edge and avoiding collisions.
In order to achieve the above object, the present invention provides the following technical solutions:
The invention provides a road edge detection method, which comprises the following steps:
acquiring depth data, a robot pose, and a topological map;
establishing a static obstacle map according to the depth data and the robot pose;
calculating the gray value of the static obstacle map; and
calculating the road edge according to the robot pose, the topological map, and the gray value.
In this case, the depth data, the robot pose, the topological map, and the static obstacle map are fused, so that the road edge is calculated more accurately and the possibility of the robot colliding with an obstacle is reduced.
After the step of establishing the static obstacle map according to the depth data and the robot pose, the method comprises the following steps:
setting a plurality of measurement grid cells on the static obstacle map, wherein the resolution of the measurement grid is the same as that of the static obstacle map, and the measurement grid is aligned with the static obstacle map;
converting the depth point cloud of the depth data from the robot coordinate system to the world coordinate system using the robot pose, and projecting the depth point cloud onto the ground;
marking each measurement grid cell according to whether it contains the depth point cloud; and
when a measurement grid cell contains the depth point cloud, increasing the grid value of the static obstacle map in the area corresponding to that cell by a first characteristic value, and when a measurement grid cell contains no depth point cloud, decreasing the grid value of the static obstacle map in the area corresponding to that cell by a second characteristic value.
In this way, the depth point cloud is fused with the measurement grid, so that the cells of the static obstacle map that contain depth points are quantified and distinguished more clearly.
Calculating the gray value of the static obstacle map specifically includes:
fusing the grid values of the static obstacle map over a plurality of consecutive frames, and calculating the gray value after the first characteristic value has been added or the second characteristic value subtracted.
In this case, the gray value incorporates how the grid values change over consecutive frames, so obstacle recognition is more accurate.
Marking each measurement grid cell according to whether it contains the depth point cloud specifically includes:
marking a measurement grid cell that contains the depth point cloud as 1, and marking a measurement grid cell that contains no depth point cloud as 0.
Calculating the road edge according to the robot pose, the topological map, and the gray value specifically comprises the following steps:
finding, in the topological map, the topological path of the road where the robot is currently located according to the robot pose;
sampling at specific spatial intervals along the topological path;
querying the gray value along the normal direction of the topological path, taking each sampling position as a starting point;
when the gray value is greater than a threshold, recording the corresponding coordinate position; and
fitting a plurality of the coordinate positions to a straight line.
In this case, when the gray value is greater than the threshold, the corresponding pixel is a peak pixel and can be regarded as the position of an obstacle. The road edge can then be fitted from the obstacle positions combined with the topological path, which improves the accuracy of road edge detection.
Fitting the coordinate positions to a straight line specifically includes:
fitting the straight line using a random sample consensus (RANSAC) algorithm.
Fitting the straight line using the random sample consensus algorithm specifically comprises the following steps:
calculating the confidence coefficient of each fitted straight line; and
selecting the straight line with the highest score on each side of the topological map as the road edge.
Calculating the confidence coefficient of the straight line specifically includes:
calculating the confidence coefficient from n, the number of pixels in the static obstacle map covered by the straight line, and V, the pixel value of each covered pixel.
In this way, the interference resistance of the fitting calculation is improved, and the accuracy of the calculation increases.
The depth data is acquired by a depth camera.
The invention also provides a robot applying the above road edge detection method.
According to the road edge detection method and the robot, the depth data, the robot pose, the topological map, and the static obstacle map are fused, so that the road edge is calculated more accurately and the possibility of the robot colliding with an obstacle is reduced.
Drawings
Fig. 1 is a schematic flow chart of the road edge detection method according to the present invention;
Fig. 2 is a schematic flow chart of an embodiment of the road edge detection method according to the present invention;
Fig. 3 is a schematic flow chart of a further embodiment of the road edge detection method according to the present invention.
Detailed Description
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. In the following description, the same members are denoted by the same reference numerals, and duplicate descriptions are omitted. The drawings are schematic; the relative sizes, shapes, and proportions of components may differ from the actual ones.
As shown in fig. 1, an embodiment of the present invention relates to a road edge detection method, including:
101. acquiring depth data, a robot pose, and a topological map;
102. establishing a static obstacle map according to the depth data and the robot pose;
103. calculating the gray value of the static obstacle map;
104. calculating the road edge according to the robot pose, the topological map, and the gray value.
In this case, the depth data, the robot pose, the topological map, and the static obstacle map are fused, so that the road edge is calculated more accurately and the possibility of the robot colliding with an obstacle is reduced.
In the present embodiment, the depth data may be acquired by scanning the surroundings of the robot with a laser radar provided on the robot. The robot pose includes the position information and orientation information of the robot, and can be obtained through a laser radar, an IMU, an odometer, or the like. The topological map consists of robot movement paths that are specified manually.
As shown in fig. 2, in the present embodiment, after step 102, the method includes:
1021. setting a plurality of measurement grid cells on the static obstacle map, wherein the resolution of the measurement grid is the same as that of the static obstacle map, and the measurement grid is aligned with the static obstacle map;
1022. converting the depth point cloud of the depth data from the robot coordinate system to the world coordinate system using the robot pose, and projecting the depth point cloud onto the ground;
1023. marking each measurement grid cell according to whether it contains the depth point cloud;
1024. when a measurement grid cell contains the depth point cloud, increasing the grid value of the static obstacle map in the area corresponding to that cell by a first characteristic value, and when a measurement grid cell contains no depth point cloud, decreasing the grid value of the static obstacle map in the area corresponding to that cell by a second characteristic value.
In this way, the depth point cloud is fused with the measurement grid, so that the cells of the static obstacle map that contain depth points are quantified and distinguished more clearly.
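The update in steps 1021 to 1024 can be pictured with a short sketch. The following Python is a minimal illustration only, assuming a square grid whose resolution matches the static obstacle map; the grid size, resolution, characteristic values, and function name are illustrative assumptions, not values from the patent.

```python
import numpy as np

RESOLUTION = 0.05          # meters per grid cell (assumed)
GRID_SIZE = 400            # 400 x 400 cells, i.e. a 20 m x 20 m map (assumed)
FIRST_FEATURE_VALUE = 20   # added where a cell contains projected depth points (assumed)
SECOND_FEATURE_VALUE = 5   # subtracted where a cell contains none (assumed)

def update_static_obstacle_map(obstacle_map, depth_points_robot, pose):
    """One measurement update of the static obstacle map (steps 1021-1024).

    obstacle_map:       (GRID_SIZE, GRID_SIZE) array of grid values.
    depth_points_robot: (N, 3) depth point cloud in the robot coordinate system.
    pose:               (x, y, yaw) of the robot in the world coordinate system.
    """
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    # Step 1022: transform to the world frame, then project to the ground (drop z).
    world_xy = depth_points_robot[:, :2] @ np.array([[c, s], [-s, c]]) + np.array([x, y])
    # Step 1023: mark measurement grid cells that contain depth points with 1.
    measurement = np.zeros_like(obstacle_map, dtype=np.uint8)
    cols = np.floor(world_xy[:, 0] / RESOLUTION).astype(int)
    rows = np.floor(world_xy[:, 1] / RESOLUTION).astype(int)
    inside = (rows >= 0) & (rows < GRID_SIZE) & (cols >= 0) & (cols < GRID_SIZE)
    measurement[rows[inside], cols[inside]] = 1
    # Step 1024: raise marked cells by the first characteristic value and lower
    # unmarked cells by the second, clamped to the gray range [0, 255].
    occupied = measurement == 1
    obstacle_map[occupied] = np.minimum(obstacle_map[occupied] + FIRST_FEATURE_VALUE, 255)
    obstacle_map[~occupied] = np.maximum(obstacle_map[~occupied] - SECOND_FEATURE_VALUE, 0)
    return obstacle_map
```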
In this embodiment, step 103 specifically includes:
fusing the grid values of the static obstacle map over a plurality of consecutive frames, and calculating the gray value after the first characteristic value has been added or the second characteristic value subtracted.
In this case, the gray value incorporates how the grid values change over consecutive frames, so obstacle recognition is more accurate.
In this embodiment, the gray value is the gray value of each pixel.
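Continuing the sketch above, the per-pixel gray value then falls out of repeating the update over consecutive frames; depth_frames and poses are hypothetical input sequences, not names from the patent.

```python
def compute_gray_map(depth_frames, poses):
    """Fuse consecutive frames (step 103): the accumulated, clamped grid
    value of each cell is that pixel's gray value."""
    gray = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.int32)
    for depth_points, pose in zip(depth_frames, poses):
        gray = update_static_obstacle_map(gray, depth_points, pose)
    return gray.astype(np.uint8)   # one gray value per pixel, 0..255
```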
In this embodiment, step 1023 specifically includes:
marking a measurement grid cell that contains the depth point cloud as 1, and marking a measurement grid cell that contains no depth point cloud as 0.
As shown in fig. 3, in this embodiment, step 104 specifically includes:
1041. finding, in the topological map, the topological path of the road where the robot is currently located according to the robot pose;
1042. sampling at specific spatial intervals along the topological path;
1043. querying the gray value along the normal direction of the topological path, taking each sampling position as a starting point;
1044. when the gray value is greater than a threshold, recording the corresponding coordinate position;
1045. fitting a plurality of the coordinate positions to a straight line.
In this case, when the gray value is greater than the threshold, the corresponding pixel is a peak pixel and can be regarded as the position of an obstacle. The road edge can then be fitted from the obstacle positions combined with the topological path, which improves the accuracy of road edge detection.
In some examples, the sampling positions may be equally spaced.
In this embodiment, sampling at specific spatial intervals reduces the amount of calculation and improves calculation efficiency.
In one example, the gray values queried along a normal are [0, 0, 10, 15, 20, 200, 215, 170, 120, 180, 100, 50]; with a threshold of 190, the coordinate positions corresponding to the two gray values 200 and 215 are recorded.
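Steps 1042 to 1044 can be sketched the same way, assuming the topological path is a straight segment and reusing the constants above; the step, range, and threshold values are illustrative only. The sketch queries one side of the path; the other side is handled symmetrically with the opposite normal.

```python
def find_edge_candidates(gray, path_start, path_end,
                         step=0.2, max_range=2.0, threshold=190):
    """Sample along the topological path and record obstacle positions
    (steps 1042-1044); returns candidate coordinates in meters."""
    p0 = np.asarray(path_start, dtype=float)
    p1 = np.asarray(path_end, dtype=float)
    direction = (p1 - p0) / np.linalg.norm(p1 - p0)
    normal = np.array([-direction[1], direction[0]])     # left-hand normal
    candidates = []
    for i in range(int(np.linalg.norm(p1 - p0) / step) + 1):
        origin = p0 + i * step * direction               # step 1042: sampling
        for r in np.arange(0.0, max_range, RESOLUTION):  # step 1043: query along normal
            pt = origin + r * normal
            row = int(pt[1] / RESOLUTION)
            col = int(pt[0] / RESOLUTION)
            if not (0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE):
                break
            if gray[row, col] > threshold:               # step 1044: peak pixel found
                candidates.append(pt)
                break
    return np.array(candidates)
```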
In this embodiment, step 1045 specifically includes:
fitting the straight line using a random sample consensus (RANSAC) algorithm.
In this embodiment, fitting the straight line using the random sample consensus algorithm specifically includes:
calculating the confidence coefficient of each fitted straight line; and
selecting the straight line with the highest score on each side of the topological map as the road edge.
In this embodiment, a plurality of straight lines are fitted on each side of the topological map; the confidence of each straight line is calculated, and the straight lines serving as the road edges are selected accordingly.
In this embodiment, calculating the confidence coefficient of the straight line specifically includes:
calculating the confidence coefficient from n, the number of pixels in the static obstacle map covered by the straight line, and V, the pixel value of each covered pixel.
In this way, the interference resistance of the fitting calculation is improved, and the accuracy of the calculation increases.
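For step 1045, a minimal random sample consensus sketch follows. The patent's confidence formula appears only as an image in the published text, so the score below, which combines the n covered pixel values V into a per-pixel average, is an assumed form consistent with the stated variables rather than the patent's exact expression.

```python
def ransac_line(points, iterations=100, inlier_dist=0.05, seed=0):
    """Fit a straight line to the (M, 2) edge candidates with RANSAC."""
    rng = np.random.default_rng(seed)
    best_line, best_count = None, 0
    for _ in range(iterations):
        a, b = points[rng.choice(len(points), size=2, replace=False)]
        d = b - a
        if np.linalg.norm(d) < 1e-9:             # degenerate sample, skip
            continue
        n_vec = np.array([-d[1], d[0]]) / np.linalg.norm(d)
        dist = np.abs((points - a) @ n_vec)      # point-to-line distances
        count = int(np.count_nonzero(dist < inlier_dist))
        if count > best_count:
            best_line, best_count = (a, b), count
    return best_line

def line_confidence(gray, line):
    """Score a line from the n pixels it covers and their values V
    (assumed form: mean covered pixel value)."""
    a, b = line
    n = max(int(np.linalg.norm(b - a) / RESOLUTION), 1)
    ts = np.linspace(0.0, 1.0, n)
    pts = a + ts[:, None] * (b - a)              # rasterize the segment
    rows = np.clip((pts[:, 1] / RESOLUTION).astype(int), 0, GRID_SIZE - 1)
    cols = np.clip((pts[:, 0] / RESOLUTION).astype(int), 0, GRID_SIZE - 1)
    V = gray[rows, cols].astype(float)
    return V.sum() / n                           # higher means stronger edge support
```

On each side of the topological map, the candidate line with the highest confidence would then be kept as that side's road edge.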
In this embodiment, the depth data is acquired by a depth camera. The depth data includes a depth map.
In this embodiment, the topological map includes a number of topological paths.
In some examples, the topological map may be drawn manually. The topological map includes the path information along which the robot can travel. A topological path may be a straight line.
The embodiment of the invention also relates to a robot applying the above road edge detection method. The robot may include a depth camera that acquires the depth data, and may further include at least one of a laser radar, an IMU, and an odometer for acquiring the robot pose.
The above-described embodiments do not limit the scope of the present invention. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the above embodiments should be included in the scope of the present invention.

Claims (8)

1. A road edge detection method, the method comprising:
acquiring depth data, a robot pose, and a topological map;
establishing a static obstacle map according to the depth data and the robot pose;
after the step of establishing a static obstacle map according to the depth data and the robot pose, the method comprises the following steps:
setting a plurality of measurement grid cells on the static obstacle map, wherein the resolution of the measurement grid is the same as that of the static obstacle map, and the measurement grid is aligned with the static obstacle map;
converting the depth point cloud of the depth data from a robot coordinate system to a world coordinate system using the robot pose, and projecting the depth point cloud onto the ground;
marking each measurement grid cell according to whether it contains the depth point cloud;
when a measurement grid cell contains the depth point cloud, increasing the grid value of the static obstacle map in the area corresponding to that cell by a first characteristic value, and when a measurement grid cell contains no depth point cloud, decreasing the grid value of the static obstacle map in the area corresponding to that cell by a second characteristic value;
calculating the gray value of the static obstacle map, wherein calculating the gray value of the static obstacle map specifically comprises:
fusing the grid values of the static obstacle map over a plurality of consecutive frames, and calculating the gray value after the first characteristic value has been added or the second characteristic value subtracted; and
calculating the road edge according to the robot pose, the topological map, and the gray value.
2. The road edge detection method according to claim 1, wherein marking each measurement grid cell according to whether it contains the depth point cloud comprises:
marking a measurement grid cell that contains the depth point cloud as 1, and marking a measurement grid cell that contains no depth point cloud as 0.
3. The road edge detection method according to claim 1, wherein calculating the road edge according to the robot pose, the topological map, and the gray value specifically comprises:
finding, in the topological map, the topological path of the road where the robot is currently located according to the robot pose;
sampling at specific spatial intervals along the topological path;
querying the gray value along the normal direction of the topological path, taking each sampling position as a starting point;
when the gray value is greater than a threshold, recording the corresponding coordinate position; and
fitting a plurality of the coordinate positions to a straight line.
4. The road edge detection method according to claim 3, wherein fitting a plurality of the coordinate positions to a straight line comprises:
fitting the straight line using a random sample consensus algorithm.
5. The road edge detection method according to claim 4, wherein fitting the straight line using a random sample consensus algorithm comprises:
calculating the confidence coefficient of the straight line; and
selecting the straight line with the highest score on each side of the topological map as the road edge.
6. The road edge detection method according to claim 5, wherein the confidence coefficient of the straight line is calculated from n, the number of pixels in the static obstacle map covered by the straight line, and V, the pixel value of each covered pixel.
7. The road edge detection method of claim 1, wherein the depth data is acquired by a depth camera.
8. A robot, characterized by applying the road edge detection method according to any one of claims 1-7.
Application CN202011380356.4A (priority date 2020-11-30, filing date 2020-11-30): Road edge detection method and robot - Active, granted as CN112486172B

Priority Applications (2)

  • CN202011380356.4A (priority date 2020-11-30, filing date 2020-11-30): Road edge detection method and robot - granted as CN112486172B
  • PCT/CN2021/134282 (filed 2021-11-30): Road edge detection method and robot - published as WO2022111723A1

Applications Claiming Priority (1)

  • CN202011380356.4A (priority date 2020-11-30, filing date 2020-11-30): Road edge detection method and robot - granted as CN112486172B

Publications (2)

  • CN112486172A (en) - published 2021-03-12
  • CN112486172B (en) - published 2024-08-02

Family

ID=74938491

Family Applications (1)

  • CN202011380356.4A (priority/filing date 2020-11-30): Road edge detection method and robot - Active, granted as CN112486172B

Country Status (2)

  • CN: CN112486172B (en)
  • WO: WO2022111723A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112486172B (en) * 2020-11-30 2024-08-02 深圳市普渡科技有限公司 Road edge detection method and robot
CN115330969A (en) * 2022-10-12 2022-11-11 之江实验室 A vectorized description method of local static environment for ground unmanned vehicles
CN116597001A (en) * 2023-04-21 2023-08-15 杭州萤石软件有限公司 Indoor top boundary position detection method, device, robot and storage medium
CN118550305B (en) * 2024-07-29 2024-10-01 上海擎朗智能科技有限公司 Control method for robot edge cleaning, robot and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109765901A (en) * 2019-02-18 2019-05-17 华南理工大学 Dynamic cost map navigation method based on line laser and binocular vision
CN109993780A (en) * 2019-03-07 2019-07-09 深兰科技(上海)有限公司 A kind of three-dimensional high-precision ground drawing generating method and device

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4282662B2 (en) * 2004-12-14 2009-06-24 本田技研工業株式会社 Moving path generation device for autonomous mobile robot
CN103247040B (en) * 2013-05-13 2015-11-25 北京工业大学 Based on the multi-robot system map joining method of hierarchical topology structure
CN103400392B (en) * 2013-08-19 2016-06-22 山东鲁能智能技术有限公司 Binocular vision navigation system and method based on Intelligent Mobile Robot
CN103456182B (en) * 2013-09-06 2015-10-21 浙江大学 A kind of road edge detection method based on distance measuring sensor and system thereof
CN105511457B (en) * 2014-09-25 2019-03-01 科沃斯机器人股份有限公司 Robot static path planning method
US9630319B2 (en) * 2015-03-18 2017-04-25 Irobot Corporation Localization and mapping using physical features
KR101748632B1 (en) * 2015-10-29 2017-06-20 한국과학기술연구원 Robot control system and method for planning driving path of robot
CN107544501A (en) * 2017-09-22 2018-01-05 广东科学技术职业学院 A kind of intelligent robot wisdom traveling control system and its method
KR102466940B1 (en) * 2018-04-05 2022-11-14 한국전자통신연구원 Topological map generation apparatus for traveling robot and method thereof
CN109074668B (en) * 2018-08-02 2022-05-20 达闼机器人股份有限公司 Path navigation method, related device and computer readable storage medium
CN111679664A (en) * 2019-02-25 2020-09-18 北京奇虎科技有限公司 3D map construction method based on depth camera and sweeping robot
CN109895100B (en) * 2019-03-29 2020-10-16 深兰科技(上海)有限公司 Navigation map generation method and device and robot
CN110147748B (en) * 2019-05-10 2022-09-30 安徽工程大学 Mobile robot obstacle identification method based on road edge detection
CN111161334B (en) * 2019-12-31 2023-06-02 南通大学 Semantic map construction method based on deep learning
CN112486172B (en) * 2020-11-30 2024-08-02 深圳市普渡科技有限公司 Road edge detection method and robot


Also Published As

Publication number Publication date
CN112486172A (en) 2021-03-12
WO2022111723A1 (en) 2022-06-02

Similar Documents

Publication Publication Date Title
CN112486172B (en) Road edge detection method and robot
US20230236280A1 (en) Method and system for positioning indoor autonomous mobile robot
CN111220993B (en) Target scene positioning method and device, computer equipment and storage medium
CN106548173B (en) An Improved UAV 3D Information Acquisition Method Based on Hierarchical Matching Strategy
CN107850449B (en) Method and system for generating and using positioning reference data
US8588471B2 (en) Method and device of mapping and localization method using the same
CN112346463B (en) A Path Planning Method for Unmanned Vehicles Based on Velocity Sampling
JP4409035B2 (en) Image processing apparatus, singular part detection method, and recording medium recording singular part detection program
CN113074727A (en) Indoor positioning navigation device and method based on Bluetooth and SLAM
KR20190053217A (en) METHOD AND SYSTEM FOR GENERATING AND USING POSITIONING REFERENCE DATA
CN112464812A (en) Vehicle-based sunken obstacle detection method
US12122413B2 (en) Method for estimating distance to and location of autonomous vehicle by using mono camera
KR102626574B1 (en) Method for calibration of camera and lidar, and computer program recorded on record-medium for executing method therefor
CN109815831B (en) Vehicle orientation obtaining method and related device
CN114721001B (en) A mobile robot positioning method based on multi-sensor fusion
Konrad et al. Localization in digital maps for road course estimation using grid maps
US20240427019A1 (en) Visual mapping method, and computer program recorded on recording medium for executing method therefor
CN117152210B (en) Image dynamic tracking method based on dynamic observation field of view and related device
KR102675138B1 (en) Method for calibration of multiple lidars, and computer program recorded on record-medium for executing method therefor
CN115930946A (en) Method for describing multiple characteristics of dynamic barrier in indoor and outdoor alternating environment
KR102616437B1 (en) Method for calibration of lidar and IMU, and computer program recorded on record-medium for executing method therefor
Burger et al. Unstructured road slam using map predictive road tracking
CN111239761B (en) Method for indoor real-time establishment of two-dimensional map
CN118565457A (en) Grid map construction method and device based on observation direction and intelligent mobile device
US20230168688A1 (en) Sequential mapping and localization (smal) for navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant