WO2019127445A1 - Three-dimensional mapping method, apparatus and system, cloud platform, electronic device and computer program product - Google Patents
Three-dimensional mapping method, apparatus and system, cloud platform, electronic device and computer program product
- Publication number
- WO2019127445A1 (PCT/CN2017/120059)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- visual positioning
- positioning device
- information
- pose information
- pose
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10044—Radar image
Definitions
- The present application relates to the field of map reconstruction technologies, and in particular to a three-dimensional mapping method, device, system, cloud platform, electronic device and computer program product.
- 3D map reconstruction builds a mathematical model of the three-dimensional environment that is suitable for computer representation and understanding. It is the basis for computer processing, manipulation and analysis of the 3D spatial environment, and a key technology for expressing the objective world in virtual reality.
- A mobile platform first needs to determine its own location and then stitch together the three-dimensional map models acquired at different locations to achieve real-time three-dimensional mapping. Real-time positioning and 3D scene reconstruction are therefore two key technologies for 3D mapping. At present, high-precision mobile positioning and 3D reconstruction rely mainly on two approaches: lidar-based and vision-based positioning and reconstruction.
- 3D reconstruction methods based on machine vision can be used for real-time robot mapping, 3D map reconstruction, and terrain and landform modelling.
- There are three main approaches: 3D reconstruction based on monocular/binocular vision, 3D reconstruction based on depth cameras, and 3D reconstruction based on lidar.
- Monocular/binocular reconstruction uses parallax for 3D reconstruction.
- Monocular reconstruction exploits the parallax produced by observing the same scene from different positions as a single camera moves.
- Binocular reconstruction pre-calibrates the relative pose of the two cameras and then reconstructs the scene from binocular parallax.
- Monocular/binocular 3D reconstruction requires matching images taken at different locations. This is computationally expensive and typically requires devices such as GPUs; it is difficult to achieve real-time reconstruction on a CPU or on mobile platforms. In addition, the method is strongly affected by the texture and brightness of the real scene, so the quality of the 3D reconstruction is poor.
- 3D reconstruction based on depth cameras directly acquires 3D information using structured-light coding or time-of-flight (TOF) principles, but such methods adapt poorly and cannot achieve satisfactory results in outdoor environments. Moreover, most depth-sensor-based 3D mapping must process large amounts of point cloud data, resulting in excessive resource consumption.
- Lidar-based 3D reconstruction mostly uses multi-line 3D lidars, which are expensive. 3D reconstruction based on single-line lidar mostly relies on a rotating mount that continuously spins the single-line laser point by point to acquire a three-dimensional point cloud; this approach adapts poorly to the environment.
- The embodiments provide a three-dimensional mapping method, device, system, cloud platform, electronic device and computer program product that combine a visual positioning method with single-line lidar motion reconstruction, exploiting the flexibility of visual positioning together with the stable three-dimensional point clouds produced by moving a single-line lidar.
- This effectively realizes online real-time reconstruction of indoor and outdoor scenes.
- The approach adapts well to the environment and has low computational complexity, which effectively improves the practicability and robustness of three-dimensional scene reconstruction.
- The embodiment provides a three-dimensional mapping method, and the method includes:
- performing positioning with a visual positioning device to obtain pose information of the visual positioning device; calibrating a pose relationship between the visual positioning device and a single-line lidar, and calculating the pose information of the single-line lidar according to the pose information of the visual positioning device and the pose relationship; and mapping, according to the pose information of the single-line lidar, the laser three-dimensional point clouds collected by the single-line lidar at different positions into the same reference coordinate system to obtain a three-dimensional map.
- the embodiment provides a three-dimensional mapping device, where the device includes: a positioning unit, a calculation unit, and a three-dimensional reconstruction unit;
- the positioning unit is configured to perform positioning by using a visual positioning device to obtain pose information of the visual positioning device;
- the calculating unit is configured to calibrate a pose relationship between the visual positioning device and the single-line lidar, and to calculate the pose information of the single-line lidar according to the pose information of the visual positioning device and the pose relationship;
- the three-dimensional reconstruction unit is configured to map the three-dimensional point cloud of the laser collected by the single-line lidar at different positions according to the pose information of the single-line lidar to the same reference coordinate system to obtain a three-dimensional map.
- the embodiment provides a three-dimensional mapping system, the system comprising: a visual positioning device and at least two single-line laser radars;
- the visual positioning device is located at the center of the system to generate pose information of the visual positioning device;
- the single-line lidars are located on both sides of the visual positioning device, on the same horizontal plane.
- The laser three-dimensional point clouds are collected at different positions and, according to the calibrated pose relationship with the visual positioning device, are mapped into the same reference coordinate system to obtain a three-dimensional map.
- an embodiment of the present disclosure provides a cloud platform, where the cloud platform includes the foregoing three-dimensional mapping device, and stores a three-dimensional map uploaded by the three-dimensional mapping device.
- the embodiment provides an electronic device, where the electronic device includes:
- a communication device, a memory, one or more processors, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors,
- the one or more modules including instructions for performing the various steps in any of the above three-dimensional mapping methods.
- The present embodiment provides a computer program product for use in conjunction with an electronic device, the computer program product comprising a computer program embedded in a computer-readable storage medium, the computer program comprising instructions for causing the electronic device to execute the steps of any of the above three-dimensional mapping methods.
- The flexibility of visual positioning, combined with the stable three-dimensional point clouds generated by single-line lidar motion, allows three-dimensional maps to be constructed while moving, with few demands on the environment. The method can be applied to robotics and to the construction of three-dimensional environment maps for virtual reality and augmented reality.
- The present disclosure uses single-line lidar motion to generate the three-dimensional contours of a map, which can be applied to various complex and uncertain indoor and outdoor scenes, is independent of light intensity and scene texture, and acquires a point cloud of a three-dimensional scene at relatively low cost. The solution can also realize real-time 3D scene reconstruction on a CPU with low computational complexity. Because visual positioning is used to locate the moving position in real time, the 3D mapping system can be mounted on a mobile cart, carried in a backpack or used as a handheld scanner, making operation very flexible and convenient.
- FIG. 1 is a schematic structural diagram of a three-dimensional mapping system in the embodiment;
- FIG. 2 is another schematic structural diagram of a three-dimensional mapping system in the embodiment;
- FIG. 3 is a schematic diagram of scanning by a single-line lidar in the embodiment;
- FIG. 4 is a flowchart of a three-dimensional mapping method in the embodiment;
- FIG. 5 is a schematic structural diagram of a three-dimensional mapping device in the embodiment;
- FIG. 6 is a schematic structural diagram of an electronic device in the embodiment.
- When a single-line lidar is used for positioning, it adapts poorly to the environment and cannot be used on uneven ground.
- A rotating mount that continuously spins the single-line laser point by point to acquire a three-dimensional point cloud produces reconstruction errors when the device moves quickly.
- the present disclosure provides a three-dimensional mapping method, which is based on a visual positioning device, combines the characteristics of a single-line lidar motion to generate a stable three-dimensional point cloud, and performs three-dimensional mapping, which can be applied in indoor and outdoor scenes.
- the present disclosure provides a three-dimensional mapping method, which is applied to the three-dimensional mapping system shown in FIG. 1, the system comprising a visual positioning device and at least two single-line laser radars;
- the visual positioning device is located at the center of the system to generate pose information of the visual positioning device
- the single-line laser radar is located on both sides of the visual positioning device and is on the same horizontal plane.
- The laser three-dimensional point clouds are collected at different positions by scanning the surroundings and, according to the calibrated pose relationship with the visual positioning device, the multiple laser point cloud slices are mapped into the same reference coordinate system to obtain a three-dimensional map.
- The system further comprises at least two panoramic cameras, located on both sides of the visual positioning device on the same horizontal plane, which acquire color information of the environment image; the three-dimensional map is colored according to the calibrated pose relationship between the panoramic cameras and the single-line lidars.
- the single-line laser radar includes a first single-line laser radar 102 and a second single-line laser radar 103
- the panoramic camera includes a first panoramic camera 104 and a second panoramic camera 105.
- the first single-line laser radar 102 is located on the first plane with the first panoramic camera 104.
- the second single-line laser radar 103 and the second panoramic camera 105 are located on the second plane.
- Both the first plane and the second plane form an angle with the plane in which the visual positioning device 101 is located.
- The angle is in the range of 135 to 165 degrees, so that the scenes to the front-left and front-right are reconstructed respectively. Outside this range, gaps or excessive overlap in the reconstructed scene waste resources and degrade the reconstruction.
- The first single-line lidar 102 and the first panoramic camera 104 form one group that scans the front-right scene for three-dimensional mapping; the first single-line lidar 102 scans its surroundings, as shown in the single-line lidar scanning diagram of FIG. 3.
- The first panoramic camera 104 records the color information of the point cloud scanned by the lidar and performs color rendering on the laser-scanned map, improving the fidelity of the map;
- the second single-line lidar 103 and the second panoramic camera 105 form another group
- that scans the front-left scene; the visual positioning device 101 is located at the center of the system and performs real-time positioning based on the scene information in front of it.
- The three-dimensional mapping system is an integrated unit that, depending on the scenario, can be mounted on a mobile cart or carried in a body-worn backpack, making it flexible to use.
- A single-line lidar acquires distance information to obstacles in its scanning plane based on time of flight. Using this property, the lidar scanning plane is oriented vertically to obtain the positions of surrounding objects in the scanning plane, and the three-dimensional model is then recovered from the motion of the lidar.
- the visual positioning device may be a monocular camera or a binocular camera, or may be a combination of an inertial navigation unit and a monocular camera or a binocular camera.
- As shown in FIG. 4, the method specifically includes:
- Step 201 Perform positioning by using a visual positioning device to obtain pose information of the visual positioning device;
- the visual positioning device is used to acquire the environment image
- the visual positioning unit is used to obtain the pose information of the visual positioning device
- the visual positioning unit is combined with the inertial navigation unit to obtain the pose information of the visual positioning device.
- Image matching and positioning may be performed based on local feature points in the real scene, or the pose information of the visual positioning device may be obtained by a direct matching method, or the inertial navigation unit and the visual positioning unit may be fused to obtain the pose information of the visual positioning device. There are two ways to fuse the inertial navigation unit and the visual positioning unit: tight coupling and loose coupling.
- the manner of obtaining the pose information of the visual positioning device based on the local feature points includes:
- Determine the search area of the target in the current frame of the environment image; extract local feature points in the search area; match the local feature points of the search area with the local feature points of the target in the previous frame to obtain successfully matched feature points; and adaptively track the matched feature points to obtain the pose information of the visual positioning device.
- Relocalization and loop-closure detection for mobile positioning are also realized, which enhances the stability and accuracy of position tracking.
- The local feature points may be ORB (Oriented FAST and Rotated BRIEF) features. ORB is a fast feature point extraction and description method built on improved and optimized FAST (Features from Accelerated Segment Test) feature point extraction and BRIEF (Binary Robust Independent Elementary Features) feature descriptors. A minimal sketch of this feature-based tracking is given below.
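- The sketch assumes OpenCV, a calibrated monocular camera with intrinsic matrix K, and grayscale images; the function name `track_pose` and all variable names are hypothetical and not taken from the patent.

```python
# Hypothetical sketch: ORB matching between two frames, then relative pose
# recovery from the essential matrix. Assumes OpenCV and known intrinsics K.
import cv2
import numpy as np

def track_pose(prev_img, curr_img, K):
    """Estimate the relative camera pose from ORB feature matches."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_img, None)   # previous frame
    kp2, des2 = orb.detectAndCompute(curr_img, None)   # current frame

    # Brute-force Hamming matching of the binary BRIEF descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC essential-matrix estimation rejects bad matches, then recover (R, t)
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # rotation and scale-ambiguous translation of the camera
```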
- The manner of obtaining the pose information of the visual positioning device by the direct matching method includes: acquiring the current frame image and the initial pose information of the visual positioning device, establishing a photometric error function between the current frame image and the previous frame image, and nonlinearly optimizing the photometric error function to obtain optimized pose information;
- the optimized pose information is the pose information of the visual positioning device.
- The direct method estimates camera motion from the brightness information of image pixels.
- The direct method may be the sparse, semi-dense or dense direct method; no limitation is imposed here. A minimal sketch of the photometric error evaluated by such methods is given below.
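- The sketch assumes pinhole intrinsics K, a sparse set of pixels with known depth in the previous frame, and a candidate relative pose (R, t); all names are hypothetical, not from the patent.

```python
# Hypothetical sketch: photometric (brightness-constancy) residual of a
# candidate relative pose, the quantity a direct method minimizes
# (e.g. by Gauss-Newton over R and t).
import numpy as np

def photometric_error(I_prev, I_curr, pixels, depths, K, R, t):
    """Sum of squared brightness residuals for a candidate pose (R, t)."""
    K_inv = np.linalg.inv(K)
    h, w = I_curr.shape
    err = 0.0
    for (u, v), d in zip(pixels, depths):
        # Back-project the pixel of the previous frame into 3D using its depth
        p = d * (K_inv @ np.array([u, v, 1.0]))
        # Transform into the current frame and project back onto the image
        q = K @ (R @ p + t)
        u2, v2 = q[0] / q[2], q[1] / q[2]
        if 0 <= u2 < w and 0 <= v2 < h:
            # Brightness-constancy residual (nearest-neighbour lookup)
            r = float(I_curr[int(v2), int(u2)]) - float(I_prev[int(v), int(u)])
            err += r * r
    return err
```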
- the method of integrating the inertial navigation unit and the visual positioning unit to obtain the pose information of the visual positioning device is divided into a tight coupling method and a loose coupling method.
- Tight coupling: the image features in the environment image acquired by the visual positioning unit, together with the pose information and velocity information of the visual positioning unit obtained from the inertial navigation unit, are optimized nonlinearly to obtain optimized pose information;
- the optimized pose information is the pose information of the visual positioning device. The tight coupling method achieves pose estimation with higher precision and robustness.
- Loose coupling: the inertial navigation unit dynamically calculates the motion increment of the visual positioning device, and the visual positioning unit obtains first pose information of the visual positioning device; a first gain matrix of the multi-information fusion filter is calculated for the first pose information, and a second gain matrix of the multi-information fusion filter is calculated for the motion increment; the multi-information fusion filter then fuses the two to obtain the pose information of the visual positioning device.
- The multi-information fusion filter may be a Kalman filter, a particle filter or the like; a minimal sketch of such loose-coupling fusion is given below.
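- The sketch restricts the state to the 3D position for brevity: the inertial navigation unit supplies a motion increment (prediction) and the visual positioning unit supplies an absolute position (measurement). The class name and the standard Kalman notation (Q, R, K) are assumptions, not the patent's wording.

```python
# Hypothetical sketch: loose coupling of IMU motion increments and visual
# position fixes with a simple Kalman filter over the 3D position only.
import numpy as np

class LooseCouplingFilter:
    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(3)      # fused position estimate
        self.P = np.eye(3)        # estimate covariance
        self.Q = q * np.eye(3)    # process noise (IMU increment)
        self.R = r * np.eye(3)    # measurement noise (visual fix)

    def predict(self, delta_p):
        """Propagate the state with the IMU motion increment."""
        self.x = self.x + delta_p
        self.P = self.P + self.Q

    def update(self, p_visual):
        """Correct the prediction with the visual position fix."""
        K = self.P @ np.linalg.inv(self.P + self.R)   # Kalman gain
        self.x = self.x + K @ (p_visual - self.x)
        self.P = (np.eye(3) - K) @ self.P
        return self.x
```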
- Step 202 calibrate the pose relationship between the visual positioning device and the single-line lidar, and calculate the pose information of the single-line lidar;
- the pose relationship between the visual positioning device and the single-line laser radar is calibrated, and the pose information of the single-line lidar is calculated according to the pose information and the pose relationship of the visual positioning device.
- The pose between the three-dimensional coordinate systems of the visual positioning device and the single-line lidar is calibrated; the x and y axes may be defined in the single-line lidar scanning plane, and the z axis is established according to the right-hand rule.
- the pose information of the single-line laser radar can be obtained according to the calibration result.
- the pose relationship between the visual positioning device and the single-line laser radar is calibrated. Since the pose information of the visual positioning device has been obtained in step 201, the pose information of the single-line lidar can be calculated according to the calibration relationship.
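- As an illustration of this step, the minimal sketch below composes the calibrated extrinsics with the pose of the visual positioning device to obtain the lidar pose, assuming 4x4 homogeneous transforms; the function names are hypothetical.

```python
# Hypothetical sketch: lidar pose = (pose of the visual positioning device in
# the world frame) composed with the calibrated device-to-lidar extrinsics.
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def lidar_pose(T_world_device, T_device_lidar):
    """Pose of the single-line lidar in the world (reference) frame."""
    return T_world_device @ T_device_lidar
```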
- Step 203 Acquire laser point cloud data by using a single-line lidar
- This scheme uses single-line laser radar to acquire point cloud data at different locations, and combines the pose information of the single-line lidar to construct a 3D point cloud in a 3D scene.
- Step 204 Process the laser three-dimensional point clouds collected at different positions to obtain a three-dimensional map
- The laser three-dimensional point clouds collected by the single-line lidar at different positions are mapped into the same reference coordinate system to obtain an ordered 3D point cloud.
- The visual positioning in step 201 provides the motion parameters of the three-dimensional mapping system, and the pose transformation parameters of the single-line lidar are calculated from the calibration relationship of step 202; combined with the precise motion parameters from visual positioning, the laser three-dimensional point clouds obtained at different positions
- are mapped into the same reference coordinate system to form an ordered three-dimensional point cloud.
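- The minimal sketch below illustrates this mapping step, assuming each single-line scan is a set of 2D points in the lidar's scanning plane and that the lidar pose for each scan is a 4x4 world-frame transform; all names are hypothetical.

```python
# Hypothetical sketch: map scans taken at different positions into one
# reference (world) frame to form an accumulated, ordered point cloud.
import numpy as np

def scan_to_world(scan_xy, T_world_lidar):
    """Lift one single-line scan (N x 2, in the scan plane) into the world frame."""
    n = scan_xy.shape[0]
    # z = 0: the scan lies in the lidar's own scanning plane
    pts_lidar = np.hstack([scan_xy, np.zeros((n, 1)), np.ones((n, 1))])
    return (T_world_lidar @ pts_lidar.T).T[:, :3]

def build_point_cloud(scans, poses):
    """Accumulate all scans, taken at different positions, into one cloud."""
    return np.vstack([scan_to_world(s, T) for s, T in zip(scans, poses)])
```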
- At this point the present disclosure has constructed a three-dimensional map, but the map contains no color;
- the three-dimensional point cloud is therefore rendered and meshed.
- Step 205 Perform a rendering process on the three-dimensional point cloud map to obtain a true color three-dimensional map
- After the 3D point cloud has been acquired and the 3D model constructed, the point cloud acquired by the lidar carries no real color information, so this step uses the panoramic camera to obtain the color information of the external scene and adds color information to the constructed 3D model.
- The step specifically includes: collecting the color information of the environment image with the panoramic camera, calibrating the pose relationship between the panoramic camera and the single-line lidar, mapping the color information of the panoramic camera onto the three-dimensional map, and rendering the color of the three-dimensional map to obtain a true-color three-dimensional map.
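- The minimal sketch below illustrates the coloring step, assuming the panoramic camera produces an equirectangular image and that `T_pano_lidar` denotes the calibrated lidar-to-panorama extrinsic transform; both the projection model and all names are assumptions, not details from the patent.

```python
# Hypothetical sketch: attach an RGB color to every lidar point by projecting
# it into an equirectangular panoramic image.
import numpy as np

def colorize(points_lidar, pano_img, T_pano_lidar):
    """Return an N x 3 array of RGB colors, one per lidar point."""
    h, w, _ = pano_img.shape
    pts = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_pano_lidar @ pts.T).T[:, :3]            # points in camera frame

    x, y, z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]
    lon = np.arctan2(x, z)                                # azimuth in [-pi, pi]
    lat = np.arcsin(y / np.linalg.norm(pts_cam, axis=1))  # elevation in [-pi/2, pi/2]

    u = ((lon / np.pi + 1.0) * 0.5 * (w - 1)).astype(int)       # equirectangular u
    v = ((lat / (np.pi / 2) + 1.0) * 0.5 * (h - 1)).astype(int) # equirectangular v
    return pano_img[v, u]
```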
- Step 206 Perform mesh processing on the rendered true color three-dimensional map or three-dimensional map to obtain a gridded three-dimensional map.
- The meshing may be performed first and color then added to the three-dimensional point cloud map, or the three-dimensional point cloud may first be color-rendered and then meshed; both the color rendering and the meshing
- operations are optimization measures for the three-dimensional map.
- this step meshes the point cloud to obtain a gridded three-dimensional map.
- Treating the point cloud as one huge grid, the extrema of each coordinate axis in the three-dimensional object space are found; according to a division count a, the X, Y and Z axes are each divided into intervals so that the whole grid is partitioned into cubic sub-grids, and each sub-grid is numbered.
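- The minimal sketch below illustrates this gridding step: the bounding box of the cloud is found from the per-axis extrema, each axis is split into a fixed number of intervals, and every point is assigned the number of the cubic sub-grid it falls into; the division count and all names are assumptions.

```python
# Hypothetical sketch: partition the point cloud's bounding box into cubic
# sub-grids and number the cell that each point belongs to.
import numpy as np

def grid_point_cloud(points, divisions=8):
    """Return, for every point (N x 3), the integer ID of its sub-grid cell."""
    mins = points.min(axis=0)
    maxs = points.max(axis=0)
    cell = (maxs - mins) / divisions                 # cell size per axis

    idx = np.floor((points - mins) / cell).astype(int)
    idx = np.clip(idx, 0, divisions - 1)             # points on the max face
    # Cell numbering: i + j*divisions + k*divisions^2
    return idx[:, 0] + idx[:, 1] * divisions + idx[:, 2] * divisions ** 2
```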
- The present disclosure uses a visual positioning device to perform positioning first, obtains the pose information of the single-line lidar by calibrating the pose relationship between the visual positioning device and the single-line lidar, and combines it with the point cloud data collected by the single-line lidar to obtain a three-dimensional point cloud map.
- The 3D map is also rendered and meshed, so that the constructed 3D map carries color attributes, and the gridding operation facilitates locating the coordinates of target points.
- The embodiment provides a three-dimensional mapping device whose principle of solving the problem is similar to that of the three-dimensional mapping method; the implementation of the device may therefore refer to the implementation of the method, and repeated details are not described again.
- a three-dimensional mapping device includes a positioning unit 301, a computing unit 302, and a three-dimensional reconstruction unit 303:
- the positioning unit 301 is configured to perform positioning by using a visual positioning device to obtain pose information of the visual positioning device;
- the calculating unit 302 is configured to calibrate the pose relationship between the visual positioning device and the single-line lidar, and calculate the pose information of the single-line lidar according to the pose information and the pose relationship of the visual positioning device;
- the three-dimensional reconstruction unit 303 is configured to map the three-dimensional point cloud of the laser collected by the single-line lidar at different positions according to the pose information of the single-line lidar to the same reference coordinate system to obtain a three-dimensional map.
- the positioning unit may adopt four positioning implementation manners, and the positioning unit includes a first positioning unit, a second positioning unit, a third positioning unit or a fourth positioning unit:
- the first positioning unit obtains the pose information of the visual positioning device by using feature point matching or direct method matching according to the image gray level information;
- the second positioning unit acquires the current frame image and the initial pose information of the visual positioning device, establishes a luminosity error function between the current frame image and the previous frame image, and performs nonlinear optimization on the luminosity error function to obtain the optimized pose information.
- the optimized pose information is the pose information of the visual positioning device;
- the third positioning unit acquires the image features in the environment image collected by the visual positioning unit, and the pose information and the velocity information of the visual positioning unit obtained by the inertial navigation unit, and nonlinearly optimizes the image features, the pose information and the velocity information.
- the optimized pose information is obtained, and the optimized pose information is the pose information of the visual positioning device;
- the fourth positioning unit dynamically calculates the motion increment of the visual positioning device by using the inertial navigation unit, obtains first pose information of the visual positioning device by using the visual positioning unit, and fuses the first pose information and the motion increment to obtain the pose information of the visual positioning device.
- The first positioning unit includes an extracting subunit and a positioning subunit, specifically:
- the extracting subunit extracts local feature points in the collected environment image;
- the positioning subunit determines the target search area in the current frame of the environment image, extracts the local feature points of the search area, matches them with the local feature points of the target in the previous frame to obtain successfully matched feature points, and adaptively tracks the matched feature points to obtain the pose information of the visual positioning device.
- The fourth positioning unit includes a first calculating subunit, a second calculating subunit, and a fusion subunit;
- the first calculating subunit uses the inertial navigation unit to dynamically calculate the motion increment of the visual positioning device, and uses the visual positioning unit to obtain first pose information of the visual positioning device;
- the second calculating subunit calculates a first gain matrix of the multi-information fusion filter for the first pose information, and calculates a second gain matrix of the multi-information fusion filter for the motion increment;
- the fusion subunit fuses the first gain matrix and the second gain matrix by multi-information fusion filtering to obtain the pose information of the visual positioning device.
- The apparatus further includes a rendering unit that acquires color information of the environment image using the panoramic camera, calibrates the pose relationship between the panoramic camera and the single-line lidar, maps the color information of the panoramic camera onto the three-dimensional map, and renders the color of the three-dimensional map to obtain a true-color three-dimensional map.
- The device further includes a meshing unit configured to mesh the point cloud in the true-color three-dimensional map obtained by the rendering unit, or the 3D point cloud obtained by the three-dimensional reconstruction unit, to obtain a gridded three-dimensional map.
- The disclosure uses the positioning unit to perform positioning first, obtains the pose information of the single-line lidar by calibrating the pose relationship between the visual positioning device and the single-line lidar, and combines it with the point cloud data collected by the single-line lidar to obtain a three-dimensional map.
- The 3D map is also rendered and meshed, so that the constructed 3D map carries color attributes, and the meshing operation facilitates locating the coordinates of target points.
- the embodiment of the present disclosure provides a cloud platform, which includes any three-dimensional mapping device in the above embodiment, and stores a three-dimensional map uploaded by the three-dimensional mapping device.
- the cloud platform further includes a processing device for processing the three-dimensional map uploaded by the three-dimensional drawing device.
- The processing operation includes storing the three-dimensional map: the three-dimensional maps reported by different three-dimensional mapping devices may be stored in different cloud storage spaces, or in one cloud storage space, or the three-dimensional maps reported by three-dimensional mapping devices in different regions may be stored in different cloud storage spaces.
- The processing operation further includes splicing a plurality of the three-dimensional maps to obtain a larger three-dimensional map, for example combining a plurality of three-dimensional maps reported by different three-dimensional mapping devices in the same area and splicing them into a three-dimensional map covering a large region.
- The embodiment further provides an electronic device. Since its principle is similar to that of the above three-dimensional mapping method, its implementation may refer to the implementation of the method, and repeated details are not described again.
- the electronic device 600 includes: a communication device 601, a memory 602, one or more processors 603; and one or more modules, the one or more modules being stored in the memory and being Configured to be executed by the one or more processors, the one or more modules including instructions for performing the various steps in any of the above three-dimensional mapping methods.
- the electronic device is a robot.
- The embodiment further provides a computer program product for use in conjunction with an electronic device, the computer program product comprising a computer program embedded in a computer-readable storage medium, the computer program comprising instructions for causing the electronic device to execute the steps of any of the above three-dimensional mapping methods.
- embodiments of the present application can be provided as a method, system, or computer program product.
- the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment in combination of software and hardware.
- the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
- These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
- These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- Traffic Control Systems (AREA)
Abstract
Disclosed are a three-dimensional mapping method, apparatus and system, a cloud platform, an electronic device and a computer program product. The three-dimensional mapping method comprises the steps of: acquiring pose information of a visual positioning device; calibrating a pose relationship between the visual positioning device and a single-line lidar; calculating the pose information of the single-line lidar according to the pose information of the visual positioning device and the pose relationship; and, according to the pose information of the single-line lidar, mapping the laser three-dimensional point clouds collected by the single-line lidar at different positions into the same reference coordinate system to obtain an ordered three-dimensional point cloud. The three-dimensional mapping method accurately acquires position information through visual positioning, combines it with the point cloud data acquired by the single-line lidar, and can thus obtain an accurate three-dimensional map. In addition, rendering and meshing are provided, yielding a three-dimensional map with color and meshing capability.
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2017/120059 WO2019127445A1 (fr) | 2017-12-29 | 2017-12-29 | Procédé, appareil et système de cartographie tridimensionnelle, plateforme en nuage, dispositif électronique et produit programme informatique |
| CN201780002708.2A CN108401461B (zh) | 2017-12-29 | 2017-12-29 | 三维建图方法、装置、系统、云端平台、电子设备和计算机程序产品 |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2017/120059 WO2019127445A1 (fr) | 2017-12-29 | 2017-12-29 | Procédé, appareil et système de cartographie tridimensionnelle, plateforme en nuage, dispositif électronique et produit programme informatique |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019127445A1 true WO2019127445A1 (fr) | 2019-07-04 |
Family
ID=63095112
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2017/120059 Ceased WO2019127445A1 (fr) | 2017-12-29 | 2017-12-29 | Procédé, appareil et système de cartographie tridimensionnelle, plateforme en nuage, dispositif électronique et produit programme informatique |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN108401461B (fr) |
| WO (1) | WO2019127445A1 (fr) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111580129A (zh) * | 2020-04-07 | 2020-08-25 | 华南理工大学 | 一种基于单线激光雷达获取3d激光点云的方法 |
| CN111750805A (zh) * | 2020-07-06 | 2020-10-09 | 山东大学 | 一种基于双目相机成像和结构光技术的三维测量装置及测量方法 |
| CN113495281A (zh) * | 2021-06-21 | 2021-10-12 | 杭州飞步科技有限公司 | 可移动平台的实时定位方法及装置 |
| CN115585818A (zh) * | 2022-10-31 | 2023-01-10 | 中国星网网络应用有限公司 | 一种地图构建方法、装置、电子设备及存储介质 |
Families Citing this family (34)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108401461B (zh) * | 2017-12-29 | 2021-06-04 | 达闼机器人有限公司 | 三维建图方法、装置、系统、云端平台、电子设备和计算机程序产品 |
| CN109211241B (zh) * | 2018-09-08 | 2022-04-29 | 天津大学 | 基于视觉slam的无人机自主定位方法 |
| CN110895833B (zh) * | 2018-09-13 | 2025-02-21 | 北京京东尚科信息技术有限公司 | 一种室内场景三维建模的方法和装置 |
| CN109297510B (zh) * | 2018-09-27 | 2021-01-01 | 百度在线网络技术(北京)有限公司 | 相对位姿标定方法、装置、设备及介质 |
| CN109358342B (zh) * | 2018-10-12 | 2022-12-09 | 东北大学 | 基于2d激光雷达的三维激光slam系统及控制方法 |
| AU2018282435B1 (en) * | 2018-11-09 | 2020-02-06 | Beijing Didi Infinity Technology And Development Co., Ltd. | Vehicle positioning system using LiDAR |
| CN111174788B (zh) * | 2018-11-13 | 2023-05-02 | 北京京东乾石科技有限公司 | 一种室内二维建图方法和装置 |
| CN109887057B (zh) * | 2019-01-30 | 2023-03-24 | 杭州飞步科技有限公司 | 生成高精度地图的方法和装置 |
| CN109725330A (zh) * | 2019-02-20 | 2019-05-07 | 苏州风图智能科技有限公司 | 一种车体定位方法及装置 |
| CN109887087B (zh) * | 2019-02-22 | 2021-02-19 | 广州小鹏汽车科技有限公司 | 一种车辆的slam建图方法及系统 |
| CN111735439B (zh) * | 2019-03-22 | 2022-09-30 | 北京京东乾石科技有限公司 | 地图构建方法、装置和计算机可读存储介质 |
| CN110007300B (zh) * | 2019-03-28 | 2021-08-06 | 东软睿驰汽车技术(沈阳)有限公司 | 一种得到点云数据的方法及装置 |
| CN110118554B (zh) * | 2019-05-16 | 2021-07-16 | 达闼机器人有限公司 | 基于视觉惯性的slam方法、装置、存储介质和设备 |
| CN110276834B (zh) * | 2019-06-25 | 2023-04-11 | 达闼科技(北京)有限公司 | 一种激光点云地图的构建方法、终端和可读存储介质 |
| CN112400122B (zh) * | 2019-08-26 | 2024-07-16 | 北京航迹科技有限公司 | 定位目标对象的系统和方法 |
| CN110580740B (zh) * | 2019-08-27 | 2021-08-20 | 清华大学 | 多智能体协同三维建模方法及装置 |
| CN112907659B (zh) * | 2019-11-19 | 2024-07-12 | 浙江菜鸟供应链管理有限公司 | 移动设备定位系统、方法及设备 |
| CN111080784B (zh) * | 2019-11-27 | 2024-04-19 | 贵州宽凳智云科技有限公司北京分公司 | 一种基于地面图像纹理的地面三维重建方法和装置 |
| WO2021121306A1 (fr) | 2019-12-18 | 2021-06-24 | 北京嘀嘀无限科技发展有限公司 | Procédé et système de localisation visuelle |
| CN111862337B (zh) * | 2019-12-18 | 2024-05-10 | 北京嘀嘀无限科技发展有限公司 | 视觉定位方法、装置、电子设备和计算机可读存储介质 |
| WO2021128297A1 (fr) * | 2019-12-27 | 2021-07-01 | 深圳市大疆创新科技有限公司 | Procédé, système et dispositif permettant de construire une carte de nuage de points tridimensionnel |
| CN111199578B (zh) * | 2019-12-31 | 2022-03-15 | 南京航空航天大学 | 基于视觉辅助激光雷达的无人机三维环境建模方法 |
| CN111325796B (zh) * | 2020-02-28 | 2023-08-18 | 北京百度网讯科技有限公司 | 用于确定视觉设备的位姿的方法和装置 |
| CN111340834B (zh) * | 2020-03-10 | 2023-05-12 | 山东大学 | 基于激光雷达和双目相机数据融合的衬板装配系统及方法 |
| CN111721281B (zh) * | 2020-05-27 | 2022-07-15 | 阿波罗智联(北京)科技有限公司 | 位置识别方法、装置和电子设备 |
| CN111983639B (zh) * | 2020-08-25 | 2023-06-02 | 浙江光珀智能科技有限公司 | 一种基于Multi-Camera/Lidar/IMU的多传感器SLAM方法 |
| CN112630745B (zh) * | 2020-12-24 | 2024-08-09 | 深圳市大道智创科技有限公司 | 一种基于激光雷达的环境建图方法和装置 |
| CN112634357B (zh) * | 2020-12-30 | 2022-12-23 | 哈尔滨工业大学芜湖机器人产业技术研究院 | 用于机器人二维视觉系统的通讯数据处理方法及系统 |
| CN112927362B (zh) * | 2021-04-07 | 2024-08-27 | Oppo广东移动通信有限公司 | 地图重建方法及装置、计算机可读介质和电子设备 |
| CN114063099B (zh) * | 2021-11-10 | 2025-05-23 | 厦门大学 | 基于rgbd的定位方法及装置 |
| CN114170361B (zh) * | 2021-12-07 | 2025-12-05 | 阿波罗智能技术(北京)有限公司 | 三维地图元素生成方法、装置、设备及存储介质 |
| CN114283193B (zh) * | 2021-12-24 | 2024-11-22 | 长三角哈特机器人产业技术研究院 | 一种栈板三维视觉定位方法及系统 |
| CN114594489A (zh) * | 2022-02-16 | 2022-06-07 | 北京天玛智控科技股份有限公司 | 一种矿用三维彩色点云重建系统及方法 |
| CN114895276B (zh) * | 2022-04-13 | 2025-04-11 | 深圳市普蓝机器人有限公司 | 一种用于移动机器人的定位建图方法及系统 |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20140003987A (ko) * | 2012-06-25 | 2014-01-10 | 서울대학교산학협력단 | 비젼 센서 정보와 모션 센서 정보를 융합한 모바일 로봇용 slam 시스템 |
| CN104374376A (zh) * | 2014-11-05 | 2015-02-25 | 北京大学 | 一种车载三维测量系统装置及其应用 |
| CN106324616A (zh) * | 2016-09-28 | 2017-01-11 | 深圳市普渡科技有限公司 | 一种基于惯性导航单元与激光雷达的地图构建方法 |
| CN106443687A (zh) * | 2016-08-31 | 2017-02-22 | 欧思徕(北京)智能科技有限公司 | 一种基于激光雷达和全景相机的背负式移动测绘系统 |
| CN107167141A (zh) * | 2017-06-15 | 2017-09-15 | 同济大学 | 基于双一线激光雷达的机器人自主导航系统 |
| CN108401461A (zh) * | 2017-12-29 | 2018-08-14 | 深圳前海达闼云端智能科技有限公司 | 三维建图方法、装置、系统、云端平台、电子设备和计算机程序产品 |
- 2017-12-29 CN CN201780002708.2A patent/CN108401461B/zh active Active
- 2017-12-29 WO PCT/CN2017/120059 patent/WO2019127445A1/fr not_active Ceased
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20140003987A (ko) * | 2012-06-25 | 2014-01-10 | 서울대학교산학협력단 | 비젼 센서 정보와 모션 센서 정보를 융합한 모바일 로봇용 slam 시스템 |
| CN104374376A (zh) * | 2014-11-05 | 2015-02-25 | 北京大学 | 一种车载三维测量系统装置及其应用 |
| CN106443687A (zh) * | 2016-08-31 | 2017-02-22 | 欧思徕(北京)智能科技有限公司 | 一种基于激光雷达和全景相机的背负式移动测绘系统 |
| CN106324616A (zh) * | 2016-09-28 | 2017-01-11 | 深圳市普渡科技有限公司 | 一种基于惯性导航单元与激光雷达的地图构建方法 |
| CN107167141A (zh) * | 2017-06-15 | 2017-09-15 | 同济大学 | 基于双一线激光雷达的机器人自主导航系统 |
| CN108401461A (zh) * | 2017-12-29 | 2018-08-14 | 深圳前海达闼云端智能科技有限公司 | 三维建图方法、装置、系统、云端平台、电子设备和计算机程序产品 |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111580129A (zh) * | 2020-04-07 | 2020-08-25 | 华南理工大学 | 一种基于单线激光雷达获取3d激光点云的方法 |
| CN111580129B (zh) * | 2020-04-07 | 2022-05-24 | 华南理工大学 | 一种基于单线激光雷达获取3d激光点云的方法 |
| CN111750805A (zh) * | 2020-07-06 | 2020-10-09 | 山东大学 | 一种基于双目相机成像和结构光技术的三维测量装置及测量方法 |
| CN113495281A (zh) * | 2021-06-21 | 2021-10-12 | 杭州飞步科技有限公司 | 可移动平台的实时定位方法及装置 |
| CN113495281B (zh) * | 2021-06-21 | 2023-08-22 | 杭州飞步科技有限公司 | 可移动平台的实时定位方法及装置 |
| CN115585818A (zh) * | 2022-10-31 | 2023-01-10 | 中国星网网络应用有限公司 | 一种地图构建方法、装置、电子设备及存储介质 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN108401461A (zh) | 2018-08-14 |
| CN108401461B (zh) | 2021-06-04 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17936972 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02.03.2021) |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 17936972 Country of ref document: EP Kind code of ref document: A1 |