WO2022205750A1 - Point cloud data generation method and apparatus, electronic device, and storage medium - Google Patents
- Publication number
- WO2022205750A1 (PCT/CN2021/114435)
- Authority
- WO
- WIPO (PCT)
- Legal status: Ceased (as listed by Google Patents; an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Definitions
- the embodiments of the present disclosure relate to the field of positioning technologies, and in particular to a method, an apparatus, an electronic device, and a storage medium for generating point cloud data.
- Simultaneous localization and mapping (SLAM) refers to a mobile device starting to move from an unknown location in an unknown environment, positioning itself according to pose estimates and the map built during movement, and constructing an incremental map on the basis of its own positioning, thereby enabling autonomous positioning and navigation of the mobile device.
- a SLAM system running on a mobile device accumulates a large error during long-distance tracking, which reduces the accuracy and stability of the SLAM system.
- embodiments of the present disclosure provide at least a method, apparatus, electronic device, and storage medium for generating point cloud data.
- an embodiment of the present disclosure provides a method for generating point cloud data, including:
- determining, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in a pre-built three-dimensional scene map, and the three-dimensional position information of the three-dimensional feature points;
- point cloud data corresponding to the current scene is generated.
- In this way, the feature point extraction method corresponding to the SLAM system can be used to extract at least one two-dimensional feature point from the collected scene image, and, according to the obtained positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in the three-dimensional scene map, as well as the three-dimensional position information of the three-dimensional feature points, can be determined.
- Based on the matched three-dimensional feature points and their three-dimensional position information, relatively accurate point cloud data corresponding to the current scene can then be generated.
- Because the feature point extraction method corresponding to the SLAM system is used to extract the two-dimensional feature points in the scene image, the two-dimensional feature points match the SLAM system, so the generated point cloud data of the current scene also matches the SLAM system, and the accumulated error of the SLAM system can subsequently be corrected using the point cloud data corresponding to the current scene.
- the method further includes:
- the generating point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points including:
- the point cloud data corresponding to the current scene is generated.
- By determining the semantic information of the three-dimensional feature points and/or the position confidence corresponding to their three-dimensional position information, and generating the point cloud data corresponding to the current scene based on the three-dimensional feature points, their three-dimensional position information, and the determined semantic information and/or position confidence, the generated point cloud data contains rich information about the three-dimensional feature points.
- the method further includes:
- the generating point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points including:
- point cloud data corresponding to the current scene is generated.
- Credible three-dimensional feature points can be determined from the semantic information and/or the position confidence of the three-dimensional feature points, screening out unreliable three-dimensional feature points from the at least one three-dimensional feature point.
- Based on the credible three-dimensional feature points and their three-dimensional position information, more accurate point cloud data corresponding to the current scene can be generated, which alleviates the adverse influence of unreliable three-dimensional feature points on the point cloud data.
- the method further includes:
- the current positioning result of the SLAM system is adjusted to obtain the adjusted current positioning result.
- At least one two-dimensional feature point is extracted from the scene image using the feature point extraction method corresponding to the SLAM system, so the type of the obtained two-dimensional feature points is the same as that of the feature points extracted by the SLAM system. For example, if the feature point extraction method corresponding to the SLAM system is the FAST feature point extraction algorithm, the FAST algorithm is used to extract at least one two-dimensional feature point from the scene image; the obtained two-dimensional feature points are FAST corner points, and the feature points extracted in the SLAM system are also FAST corner points. Because the extracted two-dimensional feature points are of the same type as the feature points extracted in the SLAM system, the generated point cloud data corresponding to the current scene can be used to adjust the current positioning result of the SLAM system more accurately.
- the stability of the positioning results of the SLAM system can be improved.
- acquiring the positioning pose information of the target device includes:
- the positioning pose information of the target device is determined.
- setting up multiple ways to obtain the positioning pose information of the target device can improve the flexibility of determining the positioning pose information.
- the obtaining of the scene image corresponding to the current scene collected by the target device and the positioning pose information of the target device include:
- that the target device satisfies the set movement condition includes: the movement distance of the target device reaches the set distance threshold; or, the movement time of the target device reaches the set time threshold.
- an embodiment of the present disclosure provides an apparatus for generating point cloud data, including:
- an acquisition part configured to acquire the scene image corresponding to the current scene collected by the target device, and the positioning pose information of the target device
- the extraction part is configured to extract at least one two-dimensional feature point from the scene image by using the feature point extraction method corresponding to the synchronous positioning and the mapping SLAM system;
- a first determining part, configured to determine, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in a pre-built three-dimensional scene map, and the three-dimensional position information of the three-dimensional feature points;
- the generating part is configured to generate point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
- the apparatus further includes: a second determining part configured to:
- the generation section is also configured to:
- the point cloud data corresponding to the current scene is generated.
- the apparatus further includes: a third determining part configured to:
- the generation section is also configured to:
- point cloud data corresponding to the current scene is generated.
- the device further includes: an adjustment part configured to:
- the current positioning result of the SLAM system is adjusted to obtain the adjusted current positioning result.
- the obtaining part is further configured as:
- the positioning pose information of the target device is determined.
- the obtaining part is further configured as:
- that the target device satisfies the set movement condition includes: the movement distance of the target device reaches the set distance threshold; or, the movement time of the target device reaches the set time threshold.
- an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the method for generating point cloud data according to the first aspect or any one of the embodiments.
- an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of the point cloud data generation method described in the first aspect or any one of the above embodiments.
- an embodiment of the present disclosure provides a computer program, including computer-readable code; when the computer-readable code is executed in an electronic device, a processor in the electronic device performs the steps of the method for generating point cloud data according to the first aspect or any one of the above embodiments.
- FIG. 1 shows a schematic flowchart of a method for generating point cloud data provided by an embodiment of the present disclosure
- FIG. 2 shows a schematic flowchart of a method for determining credible three-dimensional feature points in a method for generating point cloud data provided by an embodiment of the present disclosure
- FIG. 3 shows a schematic flowchart of another method for generating point cloud data provided by an embodiment of the present disclosure;
- FIG. 4 shows a schematic diagram of the architecture of an apparatus for generating point cloud data provided by an embodiment of the present disclosure
- FIG. 5 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- Real-time positioning and map construction means that a mobile device starts moving from an unknown position in an unknown environment, performs its own positioning according to pose estimates and the map during movement, and builds an incremental map on the basis of its own positioning, so as to realize autonomous positioning and navigation of the mobile device.
- a SLAM system running on a mobile device accumulates a large error during long-distance tracking, which reduces the accuracy and stability of the SLAM system.
- the offline maps generated by lidar or three-dimensional reconstruction have high accuracy and global consistency
- high-precision offline map point information can be integrated into the tracking process of the SLAM system to effectively reduce the error of the SLAM system.
- the local image can be uploaded to the cloud for visual positioning, and the inliers can be screened out according to the positioning results of the current image and the offline map, and the inliers can be returned to the SLAM system.
- this method usually results in a limited number of inliers after screening, and it is difficult to continuously apply them to the SLAM system.
- the embodiments of the present disclosure provide a method, apparatus, electronic device, and storage medium for generating point cloud data.
- the method for generating point cloud data provided by the embodiments of the present disclosure can be applied to a mobile computer device with a certain computing capability; for example, the mobile computer device can be a mobile phone, a computer, a tablet, an Augmented Reality (AR) device, a robot, etc.
- the method for generating point cloud data may be implemented by a processor invoking computer-readable instructions stored in a memory.
- FIG. 1 is a schematic flowchart of a method for generating point cloud data provided by an embodiment of the present disclosure
- the method includes S101-S104, wherein:
- S102: extract at least one two-dimensional feature point from the scene image using the feature point extraction method corresponding to the simultaneous localization and mapping (SLAM) system;
- In this way, the feature point extraction method corresponding to the SLAM system can be used to extract at least one two-dimensional feature point from the collected scene image, and, according to the obtained positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in the three-dimensional scene map, as well as the three-dimensional position information of the three-dimensional feature points, can be determined.
- Based on the matched three-dimensional feature points and their three-dimensional position information, relatively accurate point cloud data corresponding to the current scene can then be generated.
- Because the feature point extraction method corresponding to the SLAM system is used to extract the two-dimensional feature points in the scene image, the two-dimensional feature points match the SLAM system, so the generated point cloud data of the current scene also matches the SLAM system, and the accumulated error of the SLAM system can subsequently be corrected using the point cloud data corresponding to the current scene.
- the target device may be any movable device including an image acquisition device, for example, the target device may be a robot, an AR device, a mobile phone, a computer, and the like.
- the scene image of the current scene collected by the image collection device set on the target device may be acquired; the image collection device may be a camera or the like.
- a scene image corresponding to the current scene collected by the target device can be obtained, and the positioning pose information of the target device in the process of collecting the scene image can be obtained.
- the positioning pose information may include position information and orientation information, for example, the position information may be three-dimensional position information; the orientation information may be represented by Euler angles.
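The pose representation described above, a 3D position plus an Euler-angle orientation, can be sketched as follows. This is an illustrative, non-limiting example: the Z-Y-X (yaw-pitch-roll) convention and the function names are assumptions, not part of the disclosure.

```python
import math

def euler_to_rotation(yaw: float, pitch: float, roll: float):
    """Build a 3x3 rotation matrix from Z-Y-X (yaw-pitch-roll) Euler angles, in radians."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def apply_pose(position, rotation, point):
    """Transform a point from the device frame into the world frame: p_world = R @ p + t."""
    return [
        sum(rotation[i][j] * point[j] for j in range(3)) + position[i]
        for i in range(3)
    ]
```

With zero angles the rotation is the identity, so a device-frame point maps to the device position plus the point; a quaternion representation would serve equally well here.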
- obtaining the positioning pose information of the target device in the above S101 may be implemented by at least one of the following implementation manners:
- Manner 1: determine the positioning pose information of the target device based on the scene image.
- Manner 2: acquire detection data of a positioning sensor included on the target device, and determine the positioning pose information of the target device based on the detection data.
- a visual positioning algorithm may be used to determine the positioning pose information of the target device based on the scene image corresponding to the current scene. For example, feature point extraction can be performed on the scene image to obtain multiple feature point information included in the scene image, and the positioning pose information of the target device can be determined by using the multiple feature point information and the constructed offline map.
- the positioning sensor may include: a radar device, an inertial measurement unit (Inertial Measurement Unit, IMU), a gyroscope and other sensors capable of measuring the position and attitude of the device.
- the radar device can collect the point cloud data of the current scene, and then match the collected point cloud data with the high-precision map to determine the positioning pose information of the target device.
- the method for determining the positioning pose information of the target device may also include other positioning methods, which are only illustratively described herein.
- the positioning pose information of the target device can also be determined by positioning methods such as the Global Positioning System (GPS), WiFi positioning, and Real-Time Kinematic (RTK) positioning.
- setting up multiple ways to obtain the positioning pose information of the target device can improve the flexibility of determining the positioning pose information.
- acquiring the scene image corresponding to the current scene collected by the target device and the positioning pose information of the target device may include:
- that the target device satisfies the set movement condition includes: the movement distance of the target device reaches the set distance threshold; or, the movement time of the target device reaches the set time threshold.
- the scene image corresponding to the current scene collected by the target device can be acquired when the moving distance of the target device reaches the set distance threshold, or when the moving time of the target device reaches the set time threshold.
- the distance threshold and time threshold can be set as required, for example, the distance threshold can be 20 meters, 30 meters, 50 meters, etc.; the time threshold can be 30 seconds, 1 minute, and so on.
- For example, each time the target device moves 20 meters (the distance threshold), the scene image of the current scene collected by the target device and the positioning pose information of the target device may be acquired once; or, each time the target device moves for 20 seconds (the time threshold), the scene image of the current scene collected by the target device and the positioning pose information of the target device are acquired once.
- the movement distance of the target device may be determined by using a displacement sensor provided on the target device for measuring the movement distance.
- the set positioning algorithm can be used to perform real-time detection on the moving distance of the target device, etc.
- the moving time of the target device can be determined using the clock set on the target device.
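The set movement condition described above, capturing when the moved distance reaches the distance threshold or the elapsed time reaches the time threshold, can be sketched roughly as follows; the `CaptureTrigger` class and the threshold values are illustrative assumptions, not the disclosed implementation.

```python
class CaptureTrigger:
    """Fires when the device has moved far enough, or long enough has elapsed,
    since the last scene-image capture."""

    def __init__(self, distance_threshold: float, time_threshold: float):
        self.distance_threshold = distance_threshold  # e.g. 20 m (source mentions 20/30/50 m)
        self.time_threshold = time_threshold          # e.g. 30 s (source mentions 30 s / 1 min)
        self.distance_since_capture = 0.0
        self.last_capture_time = None

    def update(self, moved_distance: float, now: float) -> bool:
        """Accumulate movement; return True when a scene image + pose should be captured."""
        self.distance_since_capture += moved_distance
        if self.last_capture_time is None:
            self.last_capture_time = now
        if (self.distance_since_capture >= self.distance_threshold
                or now - self.last_capture_time >= self.time_threshold):
            # reset both counters after a capture
            self.distance_since_capture = 0.0
            self.last_capture_time = now
            return True
        return False
```

In practice the moved distance would come from the displacement sensor or positioning algorithm mentioned above, and the timestamps from the device clock.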
- At least one two-dimensional feature point can be extracted from the acquired scene image of the current scene by using the feature point extraction method corresponding to the SLAM system.
- the two-dimensional feature points may be feature points on the target object included in the scene image.
- the feature point extraction method may be a feature point extraction algorithm deployed in the SLAM system.
- the feature point extraction algorithm may include, but is not limited to, the Scale Invariant Feature Transform (SIFT) algorithm, the SURF algorithm (an accelerated variant of the SIFT algorithm), the FAST feature point extraction algorithm, etc.
- the FAST feature point extraction algorithm may be used to extract at least one two-dimensional feature point from the scene image.
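A simplified version of the FAST segment test named above can be sketched as follows. This is a minimal illustration under stated assumptions, not the disclosed implementation: production FAST detectors add a high-speed pre-test and non-maximum suppression, both omitted here.

```python
# Bresenham circle of radius 3 around a pixel, as used by FAST (16 offsets).
CIRCLE16 = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
            (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, threshold=20, n=9):
    """Segment test: (x, y) is a corner if >= n contiguous circle pixels are all
    brighter than center + threshold, or all darker than center - threshold."""
    center = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE16]
    for sign in (1, -1):          # +1: brighter arc, -1: darker arc
        run = 0
        for v in ring + ring:     # doubled ring handles wrap-around of the run
            if sign * (v - center) > threshold:
                run += 1
                if run >= n:
                    return True
            else:
                run = 0
    return False
```

Running the same detector on-device and in the SLAM front end is what keeps the extracted corners type-compatible, as the surrounding text emphasizes.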
- the step of extracting at least one two-dimensional feature point from the scene image using the feature point extraction algorithm corresponding to the SLAM system can be executed on the mobile device, or can be executed on the server.
- At least one two-dimensional feature point can be extracted from the scene image by using the feature point extraction algorithm corresponding to the SLAM system set on the movable device.
- the collected scene image can be sent to the server, so that at least one two-dimensional feature point can be extracted from the scene image by using the feature point extraction algorithm corresponding to the SLAM system set on the server.
- the three-dimensional feature points matching the two-dimensional feature points, and the three-dimensional position information corresponding to the three-dimensional feature points, can be determined in the pre-built three-dimensional scene map according to the positioning pose information and the position information of the two-dimensional feature points in the scene image.
- a ray casting algorithm can be used to determine, according to the positioning pose information, the position information of the two-dimensional feature points in the scene image, and the pre-built three-dimensional scene map, the three-dimensional feature points matching the two-dimensional feature points and the three-dimensional position information corresponding to the three-dimensional feature points.
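The ray casting step can be sketched roughly as follows: a 2D feature point is back-projected into a world-space ray using the positioning pose, and the map point lying closest along that ray (within a tolerance) is taken as the matching 3D feature point. The pinhole parameters, tolerance value, and function names are illustrative assumptions.

```python
import math

def pixel_to_ray(u, v, fx, fy, cx, cy, rotation, position):
    """Back-project a pixel into a world-space ray (origin, unit direction)
    using a pinhole model and the device pose (rotation matrix R, position t)."""
    d_cam = [(u - cx) / fx, (v - cy) / fy, 1.0]   # viewing ray in the camera frame
    d_world = [sum(rotation[i][j] * d_cam[j] for j in range(3)) for i in range(3)]
    norm = math.sqrt(sum(c * c for c in d_world))
    return position, [c / norm for c in d_world]

def cast_ray(origin, direction, map_points, tolerance=0.05):
    """Return the map point nearest along the ray (within tolerance), i.e. the
    3D feature point hit by the 2D feature point's viewing ray."""
    best, best_t = None, math.inf
    for p in map_points:
        rel = [p[i] - origin[i] for i in range(3)]
        t = sum(rel[i] * direction[i] for i in range(3))   # distance along the ray
        if t <= 0:
            continue                                       # behind the camera
        closest = [origin[i] + t * direction[i] for i in range(3)]
        dist = math.sqrt(sum((p[i] - closest[i]) ** 2 for i in range(3)))
        if dist <= tolerance and t < best_t:
            best, best_t = p, t
    return best
```

A real implementation would cast against map geometry via a spatial index rather than scanning every point, but the hit criterion is the same.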
- the point cloud data corresponding to the current scene can be generated by using the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
- the method further includes: determining semantic information of the three-dimensional feature point, and/or position confidence corresponding to the three-dimensional position information of the three-dimensional feature point.
- the generating the point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points includes: based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or the position confidence, to generate point cloud data corresponding to the current scene.
- By determining the semantic information of the three-dimensional feature points and/or the position confidence corresponding to their three-dimensional position information, and generating the point cloud data corresponding to the current scene based on the three-dimensional feature points, their three-dimensional position information, and the determined semantic information and/or position confidence, the generated point cloud data contains rich information about the three-dimensional feature points.
- the information corresponding to each three-dimensional feature point may include semantic information and/or position confidence information.
- the semantic information corresponding to the three-dimensional feature point and/or the position confidence corresponding to the three-dimensional position information of the three-dimensional feature point are obtained from the three-dimensional scene map.
- the position confidence can be used to represent the reliability of the three-dimensional position information.
- the semantic information and the position confidence of the three-dimensional feature points may be determined in the pre-built three-dimensional scene map; that is, in the process of determining the semantic information or the position confidence, the semantic information and the position confidence of the three-dimensional feature points in the pre-built three-dimensional scene map are determined.
- the three-dimensional scene map can be constructed according to the following steps: obtaining a video corresponding to the scene and sampling multiple frames of scene samples from the video, or obtaining collected multiple frames of scene samples corresponding to the scene; extracting a plurality of pieces of three-dimensional feature point information from the scene samples; and constructing the three-dimensional scene map based on the extracted three-dimensional feature point information.
- the trained semantic segmentation neural network can be used to detect the constructed 3D scene map, and determine the semantic information of each 3D feature point in the 3D scene map.
- the semantic information of the three-dimensional feature point may be used to represent the type of the target object corresponding to the three-dimensional feature point.
- the semantic information of the three-dimensional feature point may include walls, tables, cups, leaves, animals, and the like.
- the semantic information of the three-dimensional feature points can be set as required.
- the trained neural network can be used to detect the constructed 3D scene map to determine the position confidence of each 3D feature point in the 3D scene map.
- the position confidence of each three-dimensional feature point in the three-dimensional scene map can also be determined according to the semantic information of the three-dimensional feature point. For example, if the semantic information of a three-dimensional feature point is a table, since a table is an object that is not easy to move, the position confidence of the three-dimensional feature point can be set higher; if the semantic information of a three-dimensional feature point is a leaf, since a leaf is an object that is easy to move, the position confidence of the three-dimensional feature point can be set lower.
- point cloud data corresponding to the current scene may be generated based on the three-dimensional feature points, the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or position confidence. For example, when the three-dimensional feature points include semantic information, the generated point cloud data corresponding to the current scene includes the semantic information of each point cloud point.
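Attaching semantic information and position confidence to the generated point cloud points, as described above, might look like the following sketch. The label set, the label-to-confidence table, and the `CloudPoint` record are hypothetical illustrations of the table/leaf example, not the disclosed data format.

```python
from dataclasses import dataclass

# Assumed label-to-confidence table: hard-to-move objects get high confidence,
# easily moved objects get low confidence (per the table/leaf example above).
SEMANTIC_CONFIDENCE = {"wall": 0.95, "table": 0.9, "cup": 0.3, "leaf": 0.1}

@dataclass
class CloudPoint:
    position: tuple      # 3D position of the matched feature point
    semantic: str        # semantic label taken from the 3D scene map
    confidence: float    # position confidence of the 3D position information

def build_point_cloud(matched_points):
    """Combine 3D positions with semantic info and position confidence into
    enriched point cloud records."""
    cloud = []
    for position, semantic in matched_points:
        confidence = SEMANTIC_CONFIDENCE.get(semantic, 0.5)  # default for unknown labels
        cloud.append(CloudPoint(position, semantic, confidence))
    return cloud
```

In a full system the confidence could equally come from a trained neural network, as the preceding paragraphs note; the record layout would be unchanged.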
- the method further includes:
- S202 Determine a credible three-dimensional feature point in at least one three-dimensional feature point based on the semantic information and/or the position confidence of the three-dimensional feature point.
- the generating point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points includes: generating the point cloud data corresponding to the current scene based on the credible three-dimensional feature points and the three-dimensional position information corresponding to the credible three-dimensional feature points.
- Credible three-dimensional feature points can be determined from the semantic information and/or the position confidence of the three-dimensional feature points, screening out unreliable three-dimensional feature points from the at least one three-dimensional feature point.
- Based on the credible three-dimensional feature points and their three-dimensional position information, more accurate point cloud data corresponding to the current scene can be generated, which alleviates the adverse influence of unreliable three-dimensional feature points on the point cloud data.
- the trusted three-dimensional feature points may be determined based on semantic information and/or position confidence of the three-dimensional feature points.
- when determining credible three-dimensional feature points among the at least one three-dimensional feature point based on semantic information, whether the object corresponding to a three-dimensional feature point belongs to a movable category can be determined according to its semantic information; if so, the three-dimensional feature point is determined not to be a credible three-dimensional feature point; if not, the three-dimensional feature point is determined to be a credible three-dimensional feature point.
- a mapping relationship table of movable categories and immovable categories can be preset, and then whether the object corresponding to a three-dimensional feature point belongs to the movable category or the immovable category can be determined according to the semantic information of the three-dimensional feature point and the set mapping relationship table.
- a confidence threshold may be set; three-dimensional feature points whose position confidence is greater than or equal to the confidence threshold are determined as credible three-dimensional feature points, and three-dimensional feature points whose position confidence is less than the set confidence threshold are determined as unreliable three-dimensional feature points.
- candidate credible three-dimensional feature points among the at least one three-dimensional feature point can first be determined based on the semantic information of the three-dimensional feature points, and then the credible three-dimensional feature points among the candidates can be determined based on the position confidence.
- a candidate credible 3D feature point in at least one 3D feature point may be determined based on the position confidence of the 3D feature point; then based on semantic information, a credible 3D feature point in the candidate credible 3D feature point may be determined.
- point cloud data corresponding to the current scene may be generated based on the trusted three-dimensional feature points and the three-dimensional position information corresponding to the trusted three-dimensional feature points.
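The two-stage screening described above, dropping movable-category points by semantic information and then dropping low-confidence candidates, can be sketched as follows; the movable-category table and the threshold value are illustrative assumptions.

```python
MOVABLE = {"cup", "leaf", "animal"}   # assumed movable-category mapping table
CONFIDENCE_THRESHOLD = 0.5            # assumed confidence threshold

def credible_points(points, threshold=CONFIDENCE_THRESHOLD):
    """Two-stage screening: first drop points whose semantic label is a movable
    category, then drop candidates whose position confidence is below threshold.
    Each point is a dict with 'position', 'semantic', and 'confidence' keys."""
    candidates = [p for p in points if p["semantic"] not in MOVABLE]
    return [p for p in candidates if p["confidence"] >= threshold]
```

The reverse ordering the text also allows (confidence first, semantics second) yields the same surviving set here, since the two filters are independent predicates.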
- the method further includes S301, as shown in FIG. 3:
- the accumulated error of the SLAM system is corrected to obtain the adjusted current positioning result, so that the adjusted current positioning result has higher accuracy.
- At least one two-dimensional feature point is extracted from the scene image using the feature point extraction method corresponding to the SLAM system, so the type of the obtained two-dimensional feature points is the same as that of the feature points extracted by the SLAM system. For example, if the feature point extraction method corresponding to the SLAM system is the FAST feature point extraction algorithm, the FAST algorithm is used to extract at least one two-dimensional feature point from the scene image; the obtained two-dimensional feature points are FAST corner points, and the feature points extracted in the SLAM system are also FAST corner points. Because the extracted two-dimensional feature points are of the same type as the feature points extracted in the SLAM system, the generated point cloud data corresponding to the current scene can be used to adjust the current positioning result of the SLAM system more accurately. At the same time, compared with using the acquired pose data of the target device to eliminate the cumulative error of the SLAM system, the stability of the positioning results of the SLAM system can be improved.
- the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible intrinsic logic.
- an embodiment of the present disclosure also provides an apparatus for generating point cloud data.
- a schematic diagram of the architecture of the apparatus for generating point cloud data provided by an embodiment of the present disclosure includes an acquisition part 401 and an extraction part 402 , a first determining part 403, and a generating part 404, wherein:
- the acquisition part 401 is configured to acquire the scene image corresponding to the current scene collected by the target device, and the positioning pose information of the target device;
- the extraction part 402 is configured to extract at least one two-dimensional feature point from the scene image by using a feature point extraction method corresponding to the synchronous positioning and mapping SLAM system;
- the first determining part 403 is configured to determine, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points in the pre-built three-dimensional scene map, and the three-dimensional position information of the three-dimensional feature points;
- the generating part 404 is configured to generate point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
- the apparatus further includes: a second determining part 405 configured to:
- the generating section 404 is further configured to:
- the point cloud data corresponding to the current scene is generated.
- the apparatus further includes: a third determining part 406 configured to:
- the generating section 404 is further configured to:
- point cloud data corresponding to the current scene is generated.
- the device further includes: an adjustment part 407 configured to:
- the current positioning result of the SLAM system is adjusted to obtain the adjusted current positioning result.
- the obtaining part 401 is further configured to:
- Acquire detection data of a positioning sensor included on the target device and determine the positioning pose information of the target device based on the detection data.
- the obtaining part 401 is further configured to:
- that the target device satisfies the set movement condition includes: the movement distance of the target device reaches the set distance threshold; or, the movement time of the target device reaches the set time threshold.
- the functions or modules included in the apparatus provided by the embodiments of the present disclosure may be configured to execute the methods described in the above method embodiments; for implementation details, reference may be made to the above method embodiments.
- an embodiment of the present disclosure also provides an electronic device.
- a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure includes a processor 401 , a memory 402 , and a bus 403 .
- the memory 402 is configured to store execution instructions, and includes an internal memory 4021 and an external memory 4022; the internal memory 4021, also called main memory, is configured to temporarily store operation data in the processor 401 and data exchanged with the external memory 4022, such as a hard disk.
- the processor 401 exchanges data with the external memory 4022 through the internal memory 4021.
- the processor 401 and the memory 402 communicate through the bus 403, so that the processor 401 executes the following instructions:
- according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, determine the three-dimensional feature points matching the two-dimensional feature points in the pre-built three-dimensional scene map, and the three-dimensional position information of the three-dimensional feature points;
- point cloud data corresponding to the current scene is generated.
- embodiments of the present disclosure further provide a computer-readable storage medium, on which a computer program is stored; when the computer program is run by a processor, the steps of the method for generating point cloud data described in the foregoing method embodiments are executed.
- Embodiments of the present disclosure further provide a computer program product, where the computer program product carries program code, and the instructions included in the program code can be used to execute the steps of the point cloud data generation method described in the foregoing method embodiments.
- the above-mentioned computer program product can be realized by means of hardware, software or a combination thereof.
- in one optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK) and the like.
- the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the technical solutions of the embodiments of the present disclosure, in essence, or the parts contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
- the computer-readable storage medium may be a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, a CD-ROM, or other memory; it may also be any device including one of, or any combination of, the above memories.
- FRAM Ferroelectric Random Access Memory
- ROM Read-Only Memory
- PROM Programmable Read-Only Memory
- EPROM Erasable Programmable Read-Only Memory
- EEPROM Electrically Erasable Programmable Read-Only Memory
- Flash Memory, Magnetic Surface Memory, Optical Disc, CD-ROM
- the present disclosure provides a point cloud data generation method and apparatus, an electronic device, and a storage medium.
- the method includes: acquiring a scene image corresponding to a current scene collected by a target device, and positioning pose information of the target device; extracting at least one two-dimensional feature point from the scene image by using a feature point extraction method corresponding to a simultaneous localization and mapping (SLAM) system; determining, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, three-dimensional feature points matching the two-dimensional feature points in a pre-built three-dimensional scene map, and three-dimensional position information of the three-dimensional feature points; and generating point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
- the embodiments of the present disclosure can accurately determine the three-dimensional feature points matching the two-dimensional feature points and their three-dimensional position information, and can further generate correspondingly accurate point cloud data. Because the feature point extraction method corresponding to the SLAM system is used to extract the two-dimensional feature points in the scene image, the two-dimensional feature points match the SLAM system, so the generated point cloud data of the current scene also matches the SLAM system, and the accumulated error of the SLAM system can subsequently be corrected using the point cloud data corresponding to the current scene.
Abstract
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This disclosure is based on, and claims priority to, the Chinese patent application with application number 202110348215.2, filed on March 31, 2021 and entitled "Point cloud data generation method and apparatus, electronic device, and storage medium", the entire content of which is incorporated herein by reference.
The embodiments of the present disclosure relate to the field of positioning technologies, and relate to a point cloud data generation method and apparatus, an electronic device, and a storage medium.
Simultaneous localization and mapping (SLAM) means that a movable device starts moving from an unknown position in an unknown environment, localizes itself during movement according to position estimates and a map, and simultaneously builds an incremental map on the basis of its own localization, thereby achieving autonomous positioning and navigation of the movable device. In general, a SLAM system running on a movable device accumulates large errors during long-distance tracking, which reduces the accuracy and stability of the SLAM system.
SUMMARY OF THE INVENTION
In view of this, embodiments of the present disclosure provide at least a point cloud data generation method and apparatus, an electronic device, and a storage medium.
In one aspect, an embodiment of the present disclosure provides a point cloud data generation method, including:
acquiring a scene image corresponding to a current scene collected by a target device, and positioning pose information of the target device;
extracting at least one two-dimensional feature point from the scene image by using a feature point extraction method corresponding to a simultaneous localization and mapping (SLAM) system;
determining, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, three-dimensional feature points matching the two-dimensional feature points in a pre-built three-dimensional scene map, and three-dimensional position information of the three-dimensional feature points;
generating point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
With the above method, the feature point extraction method corresponding to the SLAM system can be used to extract at least one two-dimensional feature point from the collected scene image, and, according to the acquired positioning pose information and the position of the two-dimensional feature points in the scene image, the three-dimensional feature points matching the two-dimensional feature points and their three-dimensional position information can be determined in the three-dimensional scene map. Since the feature point information included in the three-dimensional scene map is relatively accurate, the matching three-dimensional feature points and their three-dimensional position information can be determined accurately, and relatively accurate point cloud data corresponding to the current scene can then be generated. Because the feature point extraction method corresponding to the SLAM system is used to extract the two-dimensional feature points in the scene image, the two-dimensional feature points match the SLAM system, so the generated point cloud data of the current scene also matches the SLAM system, and the accumulated error of the SLAM system can subsequently be corrected using this point cloud data.
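As an illustrative sketch of the 2D-to-3D matching step described above (not taken from the disclosure): 3D map points are projected into the image using the localization pose and a pinhole camera model, and each extracted 2D feature is matched to the map point whose projection falls closest within a pixel radius. The intrinsics, radius, and function names are assumptions; a real system would also use feature descriptors and more robust data association.

```python
import math
from typing import Dict, List, Optional, Tuple

def project(point3d, rotation, translation,
            fx=500.0, fy=500.0, cx=320.0, cy=240.0) -> Optional[Tuple[float, float]]:
    # Pinhole projection of a world point into the image, given a
    # world-to-camera pose (rotation as a 3x3 nested list, translation
    # as a length-3 list). Intrinsics are assumed example values.
    xc = [sum(rotation[i][j] * point3d[j] for j in range(3)) + translation[i]
          for i in range(3)]
    if xc[2] <= 0:
        return None  # behind the camera
    return (fx * xc[0] / xc[2] + cx, fy * xc[1] / xc[2] + cy)

def match_2d_to_3d(features2d, map_points: Dict[int, Tuple[float, float, float]],
                   rotation, translation, radius=3.0):
    # For each extracted 2D feature, pick the map point whose projection
    # under the localization pose lands closest, within `radius` pixels.
    matches = []
    for (u, v) in features2d:
        best_id, best_d = None, radius
        for pid, pt3d in map_points.items():
            proj = project(pt3d, rotation, translation)
            if proj is None:
                continue
            d = math.hypot(proj[0] - u, proj[1] - v)
            if d < best_d:
                best_id, best_d = pid, d
        if best_id is not None:
            matches.append(((u, v), best_id, map_points[best_id]))
    return matches
```

The matched (2D feature, 3D point, 3D position) triples are exactly what is needed to assemble the scene's point cloud data.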
In a possible implementation, the method further includes:
determining semantic information of the three-dimensional feature points, and/or a position confidence corresponding to the three-dimensional position information of the three-dimensional feature points;
the generating of the point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points includes:
generating the point cloud data corresponding to the current scene based on the three-dimensional feature points, the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or position confidence.
In the above implementation, by determining the semantic information of the three-dimensional feature points and/or the position confidence corresponding to their three-dimensional position information, and generating the point cloud data corresponding to the current scene based on the three-dimensional feature points, their three-dimensional position information, and the determined semantic information and/or position confidence, the generated point cloud data contains relatively rich information about the three-dimensional feature points.
In a possible implementation, the method further includes:
determining semantic information of the three-dimensional feature points, and/or a position confidence corresponding to the three-dimensional position information of the three-dimensional feature points;
determining trusted three-dimensional feature points among the at least one three-dimensional feature point based on the semantic information and/or the position confidence of the three-dimensional feature points;
the generating of the point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points includes:
generating the point cloud data corresponding to the current scene based on the trusted three-dimensional feature points and the three-dimensional position information corresponding to the trusted three-dimensional feature points.
With the above method, the trusted three-dimensional feature points can be determined through the semantic information and/or the position confidence of the three-dimensional feature points, and the untrusted three-dimensional feature points among the at least one three-dimensional feature point can be filtered out. Based on the trusted three-dimensional feature points and their corresponding three-dimensional position information, relatively accurate point cloud data corresponding to the current scene can be generated, which alleviates the adverse influence of untrusted three-dimensional feature points on the point cloud data.
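The trusted-point screening described above might look like the following hypothetical sketch: candidates are first kept by position confidence, then points whose semantic label marks a dynamic object are dropped. The threshold, the label set, and the dictionary layout are all assumptions for illustration.

```python
from typing import Dict, FrozenSet, List

def filter_credible(points: List[Dict],
                    conf_threshold: float = 0.6,
                    dynamic_labels: FrozenSet[str] = frozenset({"person", "vehicle"})) -> List[Dict]:
    # Step 1: keep candidates whose position confidence clears the threshold.
    candidates = [p for p in points if p["confidence"] >= conf_threshold]
    # Step 2: drop candidates whose semantic label marks a dynamic object,
    # since points on moving objects are unreliable for a static scene map.
    return [p for p in candidates if p["label"] not in dynamic_labels]
```

The two filtering criteria can also be applied in the opposite order, matching the two orderings described in the embodiments.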
In a possible implementation, after generating the point cloud data corresponding to the current scene based on the three-dimensional feature points and their three-dimensional position information, the method further includes:
adjusting the current positioning result of the SLAM system by using the point cloud data corresponding to the current scene, to obtain an adjusted current positioning result.
Here, at least one two-dimensional feature point is extracted from the scene image using the feature point extraction method corresponding to the SLAM system, so the obtained two-dimensional feature points are of the same type as the feature points extracted by the SLAM system. For example, if the feature point extraction method corresponding to the SLAM system is the FAST feature point extraction algorithm, the FAST algorithm is used to extract at least one two-dimensional feature point from the scene image; the obtained two-dimensional feature points are FAST corner points, and the feature points extracted in the SLAM system are also FAST corner points. Since the extracted two-dimensional feature points are of the same type as those extracted in the SLAM system, the generated point cloud data corresponding to the current scene can be used to adjust the current positioning result of the SLAM system more accurately.
At the same time, compared with using the acquired pose data of the target device to eliminate the accumulated error of the SLAM system, this approach improves the stability of the positioning results of the SLAM system.
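One minimal way to picture "adjusting the current positioning result" (purely illustrative, not the disclosed method): blend the SLAM position toward the position implied by the high-accuracy point cloud. Real systems would instead feed the matched map points into pose-graph optimization or a filter update; the blending weight here is an assumed parameter.

```python
from typing import List

def correct_drift(slam_position: List[float],
                  map_position: List[float],
                  weight: float = 0.5) -> List[float]:
    # Convex blend of the SLAM position toward the position implied by the
    # high-accuracy point cloud; `weight` controls how much the map is trusted.
    return [(1 - weight) * s + weight * m
            for s, m in zip(slam_position, map_position)]
```

A gradual blend avoids the pose jumps that a hard reset to the map-based position would cause, which is one reason this style of correction is more stable.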
In a possible implementation, acquiring the positioning pose information of the target device includes:
determining the positioning pose information of the target device based on the scene image; or,
acquiring detection data of a positioning sensor included on the target device;
determining the positioning pose information of the target device based on the detection data.
Here, providing multiple ways to acquire the positioning pose information of the target device improves the flexibility of determining the positioning pose information.
In a possible implementation, acquiring the scene image corresponding to the current scene collected by the target device and the positioning pose information of the target device includes:
in a case where it is detected that the target device satisfies a set movement condition, acquiring the scene image of the current scene collected by the target device and the positioning pose information of the target device;
where the target device satisfying the set movement condition includes: the movement distance of the target device reaching a set distance threshold; or, the movement time of the target device reaching a set time threshold.
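The movement condition above can be sketched as a small gate object (hypothetical; the thresholds are example values): a capture is triggered when either the accumulated movement distance or the elapsed time reaches its threshold, after which both counters reset.

```python
class MovementGate:
    # Decides when to capture a new scene image: either the accumulated
    # movement distance or the elapsed time reaching its threshold triggers
    # a capture, after which both counters reset. Thresholds are example values.
    def __init__(self, distance_threshold: float = 5.0, time_threshold: float = 10.0):
        self.distance_threshold = distance_threshold
        self.time_threshold = time_threshold
        self.moved = 0.0
        self.elapsed = 0.0

    def update(self, step_distance: float, step_time: float) -> None:
        self.moved += step_distance
        self.elapsed += step_time

    def should_capture(self) -> bool:
        if self.moved >= self.distance_threshold or self.elapsed >= self.time_threshold:
            self.moved = 0.0
            self.elapsed = 0.0
            return True
        return False
```

Gating captures this way avoids uploading redundant images when the device is stationary, while the time threshold still bounds how stale the last capture can become.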
For descriptions of the effects of the following apparatus, electronic device, and the like, refer to the descriptions of the above method.
In another aspect, an embodiment of the present disclosure provides a point cloud data generation apparatus, including:
an acquisition part, configured to acquire a scene image corresponding to a current scene collected by a target device, and positioning pose information of the target device;
an extraction part, configured to extract at least one two-dimensional feature point from the scene image by using a feature point extraction method corresponding to a simultaneous localization and mapping (SLAM) system;
a first determining part, configured to determine, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, three-dimensional feature points matching the two-dimensional feature points in a pre-built three-dimensional scene map, and three-dimensional position information of the three-dimensional feature points;
a generating part, configured to generate point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
In a possible implementation, the apparatus further includes a second determining part configured to:
determine semantic information of the three-dimensional feature points, and/or a position confidence corresponding to the three-dimensional position information of the three-dimensional feature points;
the generating part is further configured to:
generate the point cloud data corresponding to the current scene based on the three-dimensional feature points, the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or position confidence.
In a possible implementation, the apparatus further includes a third determining part configured to:
determine semantic information of the three-dimensional feature points, and/or a position confidence corresponding to the three-dimensional position information of the three-dimensional feature points;
determine trusted three-dimensional feature points among the at least one three-dimensional feature point based on the semantic information and/or the position confidence of the three-dimensional feature points;
the generating part is further configured to:
generate the point cloud data corresponding to the current scene based on the trusted three-dimensional feature points and the three-dimensional position information corresponding to the trusted three-dimensional feature points.
In a possible implementation, the apparatus further includes an adjustment part configured to:
adjust the current positioning result of the SLAM system by using the point cloud data corresponding to the current scene, to obtain an adjusted current positioning result.
In a possible implementation, the acquisition part is further configured to:
determine the positioning pose information of the target device based on the scene image; or,
acquire detection data of a positioning sensor included on the target device;
determine the positioning pose information of the target device based on the detection data.
In a possible implementation, the acquisition part is further configured to:
in a case where it is detected that the target device satisfies a set movement condition, acquire the scene image of the current scene collected by the target device and the positioning pose information of the target device;
where the target device satisfying the set movement condition includes: the movement distance of the target device reaching a set distance threshold; or, the movement time of the target device reaching a set time threshold.
In another aspect, an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the point cloud data generation method described in the above first aspect or any of its implementations.
In another aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the computer program, when run by a processor, performs the steps of the point cloud data generation method described in the above first aspect or any of its implementations.
In another aspect, an embodiment of the present disclosure provides a computer program including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device performs the steps of the point cloud data generation method described in the above first aspect or any of its implementations.
In order to explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below; the drawings here are incorporated into and constitute a part of the specification. These drawings illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the embodiments of the present disclosure. It should be understood that the following drawings show only some embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a schematic flowchart of a point cloud data generation method provided by an embodiment of the present disclosure;
FIG. 2 shows a schematic flowchart of a manner of determining trusted three-dimensional feature points in a point cloud data generation method provided by an embodiment of the present disclosure;
FIG. 3 shows another point cloud data generation method provided by an embodiment of the present disclosure;
FIG. 4 shows a schematic architecture diagram of a point cloud data generation apparatus provided by an embodiment of the present disclosure;
FIG. 5 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated in the drawings herein may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present disclosure.
Simultaneous localization and mapping means that a movable device starts moving from an unknown position in an unknown environment, localizes itself during movement according to position estimates and a map, and simultaneously builds an incremental map on the basis of its own localization, thereby achieving autonomous positioning and navigation of the movable device. In general, a SLAM system running on a movable device accumulates large errors during long-distance tracking, which reduces the accuracy and stability of the SLAM system.
Since offline maps generated by lidar or three-dimensional reconstruction (structure from motion, SFM) methods have high accuracy and global consistency, high-precision offline map point information can be integrated into the tracking process of the SLAM system to effectively reduce the error of the SLAM system. In general, a local image can be uploaded to the cloud for visual localization, inliers can be screened out according to the localization results of the current image against the offline map, and the inliers can be returned to the SLAM system. However, this approach usually yields a limited number of inliers after screening, which makes it difficult to apply them to the SLAM system continuously.
To solve the above problems, the embodiments of the present disclosure provide a point cloud data generation method and apparatus, an electronic device, and a storage medium.
It should be noted that like numerals and letters refer to like items in the following figures; therefore, once an item is defined in one figure, it does not require further definition or explanation in subsequent figures.
To facilitate understanding of the embodiments of the present disclosure, a point cloud data generation method disclosed in the embodiments of the present disclosure is first introduced in detail. The point cloud data generation method provided by the embodiments of the present disclosure can be applied to a movable computer device with a certain computing capability; for example, the movable computer device may be a mobile phone, a computer, a tablet, an Augmented Reality (AR) device, a robot, or the like. In some possible implementations, the point cloud data generation method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to FIG. 1, which is a schematic flowchart of a point cloud data generation method provided by an embodiment of the present disclosure, the method includes S101-S104:
S101: acquiring a scene image corresponding to a current scene collected by a target device, and positioning pose information of the target device;
S102: extracting at least one two-dimensional feature point from the scene image by using a feature point extraction manner corresponding to a simultaneous localization and mapping (SLAM) system;
S103: determining, according to the positioning pose information and position information of the two-dimensional feature points in the scene image, three-dimensional feature points matching the two-dimensional feature points in a pre-built three-dimensional scene map, and three-dimensional position information of the three-dimensional feature points;
S104: generating point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
In the above method, the feature point extraction manner corresponding to the SLAM system can be used to extract at least one two-dimensional feature point from the collected scene image, and the three-dimensional feature points in the three-dimensional scene map that match the two-dimensional feature points, together with their three-dimensional position information, can be determined according to the acquired positioning pose information and the positions of the two-dimensional feature points in the scene image. Since the feature point information included in the three-dimensional scene map is relatively accurate, the matching three-dimensional feature points and their three-dimensional position information can be determined accurately, and relatively accurate point cloud data corresponding to the current scene can further be generated. Because the feature point extraction manner corresponding to the SLAM system is used to extract the two-dimensional feature points from the scene image, the two-dimensional feature points match the SLAM system, so the generated point cloud data of the current scene also matches the SLAM system; the point cloud data corresponding to the current scene can subsequently be used to correct the accumulated error of the SLAM system.
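As an illustration only, and not as part of the claimed method, the data flow of S101-S104 can be sketched as follows. The helper functions and the dictionary standing in for the pre-built three-dimensional scene map are hypothetical placeholders:

```python
def extract_features_slam(scene_image):
    """S102 stand-in: the SLAM system's own extractor (e.g. FAST) would run
    here; this placeholder simply returns coordinates of bright pixels."""
    return [(x, y) for y, row in enumerate(scene_image)
            for x, v in enumerate(row) if v > 128]

def match_to_map(point_2d, pose, scene_map_3d):
    """S103 stand-in: a real matcher would use the pose to cast a ray into
    the 3D scene map; this placeholder looks the pixel up in a dict."""
    return scene_map_3d.get(point_2d)

def generate_point_cloud(scene_image, pose, scene_map_3d):
    points_2d = extract_features_slam(scene_image)                       # S102
    matches = (match_to_map(p, pose, scene_map_3d) for p in points_2d)   # S103
    return [m for m in matches if m is not None]                         # S104
```

The output is a list of 3D positions, which is the point cloud handed back to the SLAM system.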
S101-S104 are described below.
For S101:
The target device may be any movable device that includes an image acquisition apparatus; for example, the target device may be a robot, an AR device, a mobile phone, a computer, or the like. Here, the scene image of the current scene collected by the image acquisition apparatus provided on the target device may be acquired; the image acquisition apparatus may be a camera or the like.
Here, the scene image corresponding to the current scene collected by the target device can be acquired, together with the positioning pose information of the target device while it collects the scene image. The positioning pose information may include position information and orientation information; for example, the position information may be three-dimensional position information, and the orientation information may be expressed with Euler angles.
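For illustration, a pose expressed as a three-dimensional position plus Euler-angle orientation can be converted into a rotation matrix as sketched below. The Z-Y-X (yaw-pitch-roll) axis convention is an assumption; the disclosure only states that orientation may be expressed with Euler angles:

```python
import math

def euler_to_rotation(yaw, pitch, roll):
    """Convert Z-Y-X Euler angles (radians) to a 3x3 rotation matrix.
    The Z-Y-X convention is an assumed choice for this sketch."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
```

The full pose is then the pair (position, rotation matrix), which is what the later map-matching step consumes.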
In an optional embodiment, acquiring the positioning pose information of the target device in S101 may be implemented in at least one of the following manners:
Manner 1: determining the positioning pose information of the target device based on the scene image.
Manner 2: acquiring detection data of a positioning sensor included on the target device, and determining the positioning pose information of the target device based on the detection data.
In Manner 1, a visual positioning algorithm may be used to determine the positioning pose information of the target device based on the scene image corresponding to the current scene. For example, feature point extraction may be performed on the scene image to obtain information on multiple feature points included in the scene image, and the positioning pose information of the target device may be determined using this feature point information together with a constructed offline map.
In Manner 2, the positioning sensor may include a radar device, an Inertial Measurement Unit (IMU), a gyroscope, or another sensor capable of measuring the pose of the device. For example, when the positioning sensor is a radar device, the radar device can collect point cloud data of the current scene and then match the collected point cloud data against a high-precision map to determine the positioning pose information of the target device.
The method for determining the positioning pose information of the target device may also include other positioning methods; the above is only an exemplary description. For example, the positioning pose information of the target device may also be determined by positioning methods such as the Global Positioning System (GPS), WiFi positioning, and Real-Time Kinematic (RTK) positioning.
Here, providing multiple manners of acquiring the positioning pose information of the target device improves the flexibility of determining the positioning pose information.
In an optional embodiment, acquiring the scene image corresponding to the current scene collected by the target device and the positioning pose information of the target device in S101 may include:
when it is detected that the target device satisfies a set movement condition, acquiring the scene image of the current scene collected by the target device and the positioning pose information of the target device;
wherein the target device satisfying the set movement condition includes: the movement distance of the target device reaching a set distance threshold; or the movement time of the target device reaching a set time threshold.
In an optional embodiment, the scene image corresponding to the current scene collected by the target device and the positioning pose information of the target device may be acquired when the movement distance of the target device reaches the set distance threshold, or when the movement time of the target device reaches the set time threshold. The distance threshold and the time threshold may be set as required; for example, the distance threshold may be 20 meters, 30 meters, 50 meters, and so on, and the time threshold may be 30 seconds, 1 minute, and so on.
For example, each time the target device moves 20 meters (the distance threshold), the scene image of the current scene collected by the target device and the positioning pose information of the target device may be acquired once. Alternatively, for every 20 seconds of movement (the time threshold), the scene image of the current scene collected by the target device and the positioning pose information of the target device may be acquired once.
Exemplarily, the movement distance of the target device may be determined using a displacement sensor provided on the target device for measuring movement distance, or the movement distance of the target device may be detected in real time using a configured positioning algorithm. The movement time of the target device may be determined using a clock provided on the target device.
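A minimal sketch of this trigger logic follows; the class name and the update interface are illustrative assumptions, and the 20 m / 20 s defaults are taken from the examples above:

```python
class CaptureTrigger:
    """Fires a capture when the device has moved a set distance or a set
    time has elapsed, matching the movement condition described above."""

    def __init__(self, distance_threshold=20.0, time_threshold=20.0):
        self.distance_threshold = distance_threshold  # meters
        self.time_threshold = time_threshold          # seconds
        self.distance = 0.0
        self.elapsed = 0.0

    def update(self, moved, dt):
        """Accumulate movement and elapsed time; return True when either
        threshold is reached, resetting the counters for the next capture."""
        self.distance += moved
        self.elapsed += dt
        if (self.distance >= self.distance_threshold
                or self.elapsed >= self.time_threshold):
            self.distance = 0.0
            self.elapsed = 0.0
            return True   # acquire scene image and pose now
        return False
```

Each True result corresponds to one acquisition of a scene image plus the device's positioning pose information.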
For S102 and S103:
At least one two-dimensional feature point can be extracted from the acquired scene image of the current scene by using the feature point extraction manner corresponding to the SLAM system. For example, the two-dimensional feature points may be feature points on a target object included in the scene image.
The feature point extraction manner may be a feature point extraction algorithm deployed in the SLAM system; for example, the feature point extraction algorithm may include, but is not limited to, the Scale-Invariant Feature Transform (SIFT) algorithm, the SURF algorithm (an accelerated variant of SIFT), and the FAST feature point extraction algorithm.
For example, if the feature point extraction algorithm corresponding to the SLAM system is the FAST algorithm, the FAST algorithm may be used to extract at least one two-dimensional feature point from the scene image.
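As a hedged sketch of what a FAST-style extractor does: FAST compares each pixel against a 16-pixel Bresenham circle of radius 3 around it. Real FAST additionally requires the differing ring pixels to be contiguous and is usually followed by non-maximum suppression; this simplified version only counts them:

```python
# 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, thresh=20, n=12):
    """Simplified FAST test: the pixel is a corner if at least n of the 16
    ring pixels are all brighter or all darker than the center by thresh."""
    center = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    brighter = sum(1 for v in ring if v > center + thresh)
    darker = sum(1 for v in ring if v < center - thresh)
    return brighter >= n or darker >= n

def extract_2d_features(img, thresh=20):
    """Scan a grayscale image (list of rows) and return corner coordinates,
    skipping the 3-pixel border where the ring would fall outside."""
    h, w = len(img), len(img[0])
    return [(x, y) for y in range(3, h - 3) for x in range(3, w - 3)
            if is_fast_corner(img, x, y, thresh)]
```

The returned (x, y) pairs play the role of the two-dimensional feature points that S103 then matches against the three-dimensional scene map.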
Here, the step of extracting at least one two-dimensional feature point from the scene image by using the feature point extraction algorithm corresponding to the SLAM system may be executed on the movable device, or it may be executed on a server.
For example, after the target device collects the scene image, at least one two-dimensional feature point may be extracted from the scene image by using the feature point extraction algorithm corresponding to the SLAM system provided on the movable device. Alternatively, after the target device collects the scene image, the collected scene image may be sent to a server, so that at least one two-dimensional feature point can be extracted from the scene image by using the feature point extraction algorithm corresponding to the SLAM system provided on the server.
After the two-dimensional feature points are obtained, the three-dimensional feature points matching the two-dimensional feature points, together with the three-dimensional position information corresponding to the three-dimensional feature points, can be determined in the pre-built three-dimensional scene map according to the positioning pose information and the position information of the two-dimensional feature points in the scene image.
Exemplarily, a ray casting algorithm may be used to determine the three-dimensional feature points matching the two-dimensional feature points, and the corresponding three-dimensional position information, according to the positioning pose information, the position information of the two-dimensional feature points in the scene image, and the pre-built three-dimensional scene map.
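A minimal sketch of this back-projection step, assuming a pinhole camera model; the plane intersection merely stands in for the surface lookup that a real ray cast performs against the pre-built three-dimensional scene map:

```python
def pixel_to_world_ray(u, v, intrinsics, rotation, position):
    """Back-project pixel (u, v) into a world-space ray using the device's
    localization pose; the pinhole intrinsics (fx, fy, cx, cy) are assumed."""
    fx, fy, cx, cy = intrinsics
    d_cam = [(u - cx) / fx, (v - cy) / fy, 1.0]
    d_world = [sum(rotation[i][j] * d_cam[j] for j in range(3))
               for i in range(3)]
    return position, d_world

def cast_onto_plane(origin, direction, plane_z=0.0):
    """Intersect the ray with a horizontal plane; a real implementation
    would instead intersect the geometry of the 3D scene map."""
    if abs(direction[2]) < 1e-12:
        return None          # ray parallel to the plane
    t = (plane_z - origin[2]) / direction[2]
    if t < 0:
        return None          # intersection behind the camera
    return tuple(origin[i] + t * direction[i] for i in range(3))
```

The intersection point is the three-dimensional position assigned to the matched three-dimensional feature point.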
For S104:
The point cloud data corresponding to the current scene can be generated using the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
In a possible embodiment, the method further includes: determining semantic information of the three-dimensional feature points, and/or position confidence corresponding to the three-dimensional position information of the three-dimensional feature points.
Generating the point cloud data corresponding to the current scene based on the three-dimensional feature points and their three-dimensional position information then includes: generating the point cloud data corresponding to the current scene based on the three-dimensional feature points, the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or position confidence.
In the above embodiment, by determining the semantic information of the three-dimensional feature points and/or the position confidence corresponding to their three-dimensional position information, and generating the point cloud data corresponding to the current scene based on the three-dimensional feature points, their three-dimensional position information, and the determined semantic information and/or position confidence, the generated point cloud data contains relatively rich information about the three-dimensional feature points.
Exemplarily, in the pre-built three-dimensional scene map, the information corresponding to each three-dimensional feature point may include semantic information and/or position confidence information. After the three-dimensional feature point matching a two-dimensional feature point is determined, the semantic information corresponding to that three-dimensional feature point and/or the position confidence corresponding to its three-dimensional position information can be obtained from the three-dimensional scene map. The position confidence can be used to characterize the reliability of the three-dimensional position information.
In a possible embodiment, the semantic information and position confidence of the three-dimensional feature points in the pre-built three-dimensional scene map may be determined during the construction of the three-dimensional scene map; alternatively, they may be determined in the process of determining the semantic information or position confidence of the two-dimensional feature points.
The three-dimensional scene map can be constructed according to the following steps: acquiring a video corresponding to the scene and sampling multiple frames of scene samples from the video, or acquiring multiple collected frames of scene samples corresponding to the scene; extracting information on multiple three-dimensional feature points from the multiple frames of scene samples using a neural network algorithm; and constructing the three-dimensional scene map based on the extracted three-dimensional feature point information.
Where the three-dimensional feature points in the three-dimensional scene map contain semantic information, a trained semantic segmentation neural network can be used to analyze the constructed three-dimensional scene map and determine the semantic information of each three-dimensional feature point in it. The semantic information of a three-dimensional feature point may be used to characterize the category of the target object corresponding to that feature point; for example, the semantic information may include wall, table, cup, leaf, animal, and so on. The semantic information of the three-dimensional feature points can be set as required.
Where the three-dimensional feature points in the three-dimensional scene map contain position confidence, a trained neural network can be used to analyze the constructed three-dimensional scene map and determine the position confidence of each three-dimensional feature point in it. Alternatively, the position confidence of each three-dimensional feature point may be determined according to its semantic information. For example, if the semantic information of a three-dimensional feature point is "table", the position confidence of that feature point can be set relatively high, since a table is an object that does not move easily; if the semantic information is "leaf", the position confidence can be set relatively low, since a leaf moves easily.
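This rule can be sketched as a simple lookup; the table entries and numeric values below are hypothetical, chosen only to reflect the principle that hard-to-move objects receive higher confidence:

```python
# Hypothetical semantic-class -> position-confidence table.
SEMANTIC_CONFIDENCE = {
    "wall": 0.95,
    "table": 0.9,
    "cup": 0.3,
    "leaf": 0.2,
    "animal": 0.1,
}

def position_confidence(semantic_label, default=0.5):
    """Position confidence for a 3D feature point, derived from its semantic
    label; unknown labels fall back to a neutral default."""
    return SEMANTIC_CONFIDENCE.get(semantic_label, default)
```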
Point cloud data corresponding to the current scene can then be generated based on the three-dimensional feature points, their three-dimensional position information, and the determined semantic information and/or position confidence. For example, where the three-dimensional feature points include semantic information, the generated point cloud data corresponding to the current scene includes the semantic information of each point cloud point.
In a possible embodiment, as shown in FIG. 2, the method further includes:
S201: determining semantic information of the three-dimensional feature points, and/or position confidence corresponding to the three-dimensional position information of the three-dimensional feature points;
S202: determining credible three-dimensional feature points among the at least one three-dimensional feature point based on the semantic information and/or the position confidence of the three-dimensional feature points.
Generating the point cloud data corresponding to the current scene based on the three-dimensional feature points and their three-dimensional position information then includes: generating the point cloud data corresponding to the current scene based on the credible three-dimensional feature points and the three-dimensional position information corresponding to the credible three-dimensional feature points.
With the above method, the credible three-dimensional feature points can be determined from the semantic information and/or position confidence of the three-dimensional feature points, and the non-credible three-dimensional feature points among the at least one three-dimensional feature point can be screened out. Based on the credible three-dimensional feature points and their corresponding three-dimensional position information, relatively accurate point cloud data corresponding to the current scene can be generated, mitigating the adverse effect that non-credible three-dimensional feature points would have on the point cloud data.
For the manner of determining the semantic information of the three-dimensional feature points and the position confidence corresponding to their three-dimensional position information, reference may be made to the above process.
The credible three-dimensional feature points may be determined based on the semantic information and/or the position confidence of the three-dimensional feature points.
When determining the credible three-dimensional feature points among the at least one three-dimensional feature point based on semantic information, it can be determined, according to the semantic information of a three-dimensional feature point, whether the object corresponding to that feature point belongs to a movable category. If so, the feature point is determined not to be a credible three-dimensional feature point; if not, it is determined to be a credible three-dimensional feature point. A mapping table of movable and immovable categories can be set in advance, and the object corresponding to a three-dimensional feature point can then be classified as movable or immovable according to its semantic information and the mapping table.
When determining the credible three-dimensional feature points based on position confidence, a confidence threshold can be set: three-dimensional feature points whose position confidence is greater than or equal to the threshold are determined to be credible, and those whose position confidence is below the threshold are determined to be non-credible.
When determining the credible three-dimensional feature points based on both semantic information and position confidence, candidate credible three-dimensional feature points may first be determined from the semantic information, and the credible feature points among the candidates may then be determined from the position confidence; alternatively, the candidates may first be determined from the position confidence and then filtered by semantic information.
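The two-stage screening described above can be sketched as follows; the movable-category table, the point tuple layout, and the 0.5 threshold are illustrative assumptions:

```python
# Hypothetical movable-category table.
MOVABLE = {"cup", "leaf", "animal"}

def credible_points(points, conf_threshold=0.5):
    """Two-stage screening: first drop points whose semantic class is
    movable, then drop points whose position confidence is below the
    threshold. Each point is (position, semantic_label, confidence)."""
    candidates = [p for p in points if p[1] not in MOVABLE]
    return [p for p in candidates if p[2] >= conf_threshold]
```

Swapping the two filters, as the alternative ordering above allows, yields the same result here because the filters are independent predicates.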
Point cloud data corresponding to the current scene can then be generated based on the credible three-dimensional feature points and their corresponding three-dimensional position information.
In a possible embodiment, based on the method provided in FIG. 1, after the point cloud data corresponding to the current scene is generated from the three-dimensional feature points and their three-dimensional position information, the method further includes S301, as shown in FIG. 3:
S301: adjusting the current positioning result of the SLAM system using the point cloud data corresponding to the current scene, to obtain an adjusted current positioning result.
Here, the point cloud data corresponding to the current scene can be input into the SLAM system, and the SLAM system can be controlled to add the received point cloud data to its tracking process and adjust its current positioning result, so as to eliminate the accumulated error of the SLAM system and obtain an adjusted current positioning result with higher accuracy.
Because the feature point extraction manner corresponding to the SLAM system is used to extract the at least one two-dimensional feature point from the scene image, the extracted two-dimensional feature points are of the same type as the feature points extracted by the SLAM system itself. For example, if the feature point extraction manner corresponding to the SLAM system is the FAST algorithm, then the two-dimensional feature points extracted from the scene image with the FAST algorithm are FAST corners, and the feature points extracted within the SLAM system are also FAST corners. Since the extracted two-dimensional feature points are of the same type as those extracted in the SLAM system, the generated point cloud data corresponding to the current scene can adjust the current positioning result of the SLAM system relatively accurately. Moreover, compared with eliminating the accumulated error of the SLAM system using acquired pose data of the target device, this approach improves the stability of the positioning results of the SLAM system.
Those skilled in the art can understand that, in the above method of the specific embodiments, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the execution order of the steps should be determined by their functions and possible internal logic.
Based on the same concept, an embodiment of the present disclosure further provides a point cloud data generation apparatus. Referring to FIG. 4, which is a schematic architectural diagram of the point cloud data generation apparatus provided by an embodiment of the present disclosure, the apparatus includes an acquisition part 401, an extraction part 402, a first determination part 403, and a generation part 404, wherein:
the acquisition part 401 is configured to acquire a scene image corresponding to a current scene collected by a target device, and positioning pose information of the target device;
the extraction part 402 is configured to extract at least one two-dimensional feature point from the scene image by using a feature point extraction manner corresponding to a simultaneous localization and mapping (SLAM) system;
the first determination part 403 is configured to determine, according to the positioning pose information and position information of the two-dimensional feature points in the scene image, three-dimensional feature points in a pre-built three-dimensional scene map that match the two-dimensional feature points, and three-dimensional position information of the three-dimensional feature points;
the generation part 404 is configured to generate point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
In a possible embodiment, the apparatus further includes a second determination part 405 configured to:
determine semantic information of the three-dimensional feature points, and/or position confidence corresponding to the three-dimensional position information of the three-dimensional feature points;
and the generation part 404 is further configured to:
generate the point cloud data corresponding to the current scene based on the three-dimensional feature points, the three-dimensional position information of the three-dimensional feature points, and the determined semantic information and/or position confidence.
In a possible implementation, the apparatus further includes a third determining part 406, configured to:

determine semantic information of the three-dimensional feature points, and/or a position confidence corresponding to the three-dimensional position information of the three-dimensional feature points; and

determine credible three-dimensional feature points among the at least one three-dimensional feature point based on the semantic information and/or the position confidence of the three-dimensional feature points.

The generating part 404 is further configured to:

generate the point cloud data corresponding to the current scene based on the credible three-dimensional feature points and the three-dimensional position information corresponding to the credible three-dimensional feature points.
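A minimal sketch of selecting credible points, assuming one plausible credibility rule: confidence thresholding combined with excluding semantic classes that tend to be dynamic. The threshold value and the label set are illustrative; the disclosure does not fix a particular rule.

```python
def select_credible(points, min_confidence=0.6, dynamic_labels=("person", "vehicle")):
    """Keep points whose confidence clears the threshold and whose semantic
    label does not mark a dynamic (hence unreliable) object."""
    credible = []
    for p in points:
        if p.get("confidence", 0.0) < min_confidence:
            continue  # too uncertain to trust
        if p.get("semantic") in dynamic_labels:
            continue  # dynamic objects make poor map anchors
        credible.append(p)
    return credible

points = [
    {"xyz": (1.0, 2.0, 0.5), "semantic": "wall",   "confidence": 0.90},
    {"xyz": (3.0, 1.0, 0.5), "semantic": "person", "confidence": 0.95},
    {"xyz": (2.0, 4.0, 1.0), "semantic": "floor",  "confidence": 0.30},
]
kept = select_credible(points)  # only the static, high-confidence point survives
```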
In a possible implementation, the apparatus further includes an adjusting part 407, configured to:

adjust the current localization result of the SLAM system by using the point cloud data corresponding to the current scene, to obtain an adjusted current localization result.
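One simple way such an adjustment could look, assuming purely translational drift: compare the SLAM system's estimates of the matched points with their map-anchored positions from the generated point cloud, and subtract the mean offset from the current pose estimate. A real correction would typically solve a full rigid (rotation plus translation) alignment; this reduced form is only a sketch.

```python
def correct_localization(slam_position, slam_points, map_points):
    """Estimate translational drift as the mean offset between the SLAM
    system's estimates of matched points and their map-anchored positions,
    then subtract it from the current position estimate (rotation ignored)."""
    n = len(slam_points)
    drift = tuple(
        sum(s[i] - m[i] for s, m in zip(slam_points, map_points)) / n
        for i in range(3)
    )
    return tuple(p - d for p, d in zip(slam_position, drift))

# SLAM estimates are shifted ~(0.2, 0.1, 0.0) from the map-anchored truth.
slam_pts = [(1.2, 2.1, 0.5), (3.2, 1.1, 0.5)]
map_pts = [(1.0, 2.0, 0.5), (3.0, 1.0, 0.5)]
corrected = correct_localization((5.2, 4.1, 1.0), slam_pts, map_pts)
```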
In a possible implementation, the acquiring part 401 is further configured to:

determine the positioning pose information of the target device based on the scene image; or

acquire detection data of a positioning sensor included in the target device, and determine the positioning pose information of the target device based on the detection data.
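The two alternative pose sources can be sketched as a small selector. Here `visual_localize` is a hypothetical placeholder for image-based localization, and the `(x, y, z, yaw)` pose tuple is an assumed representation; neither is an API from the disclosure.

```python
def visual_localize(scene_image):
    """Placeholder for image-based localization against the 3D scene map;
    returns a fixed pose here purely for illustration."""
    return (0.0, 0.0, 0.0, 0.0)

def get_positioning_pose(scene_image=None, sensor_data=None):
    """Return the device pose either from a positioning sensor (e.g. fused
    GPS/IMU output, already expressed as (x, y, z, yaw)) or estimated from
    the scene image, mirroring the two alternatives above."""
    if sensor_data is not None:
        return sensor_data
    if scene_image is not None:
        return visual_localize(scene_image)
    raise ValueError("need either a scene image or sensor data")
```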
In a possible implementation, the acquiring part 401 is further configured to:

acquire the scene image of the current scene collected by the target device, and the positioning pose information of the target device, when it is detected that the target device satisfies a set movement condition;

wherein the target device satisfying the set movement condition includes: the movement distance of the target device reaching a set distance threshold; or, the movement time of the target device reaching a set time threshold.
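A sketch of this trigger, assuming the thresholds are expressed in metres and seconds; the disclosure fixes neither the units nor the values.

```python
def should_capture(moved_distance, elapsed_time,
                   distance_threshold=0.5, time_threshold=2.0):
    """Trigger a new scene-image capture when either the set distance
    threshold or the set time threshold has been reached."""
    return moved_distance >= distance_threshold or elapsed_time >= time_threshold
```

Either condition alone suffices, so a stationary device still captures once the time threshold elapses, and a fast-moving device captures on distance before the timer fires.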
In some embodiments, the functions of, or the modules included in, the apparatus provided by the embodiments of the present disclosure may be configured to execute the methods described in the foregoing method embodiments; for implementation details, reference may be made to the description of the foregoing method embodiments.
Based on the same technical concept, an embodiment of the present disclosure further provides an electronic device. Referring to FIG. 4, a schematic structural diagram of the electronic device provided by the embodiment of the present disclosure, the device includes a processor 401, a memory 402, and a bus 403. The memory 402 is configured to store execution instructions, and includes an internal memory 4021 and an external memory 4022. The internal memory 4021, also called internal storage, is configured to temporarily store operation data in the processor 401 and data exchanged with the external memory 4022 such as a hard disk; the processor 401 exchanges data with the external memory 4022 through the internal memory 4021. When the electronic device 400 runs, the processor 401 communicates with the memory 402 through the bus 403, so that the processor 401 executes the following instructions:
acquire a scene image corresponding to the current scene collected by a target device, and positioning pose information of the target device;

extract at least one two-dimensional feature point from the scene image by using a feature point extraction method corresponding to a simultaneous localization and mapping (SLAM) system;

determine, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, three-dimensional feature points in a pre-built three-dimensional scene map that match the two-dimensional feature points, and three-dimensional position information of the three-dimensional feature points; and

generate point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points.
In addition, embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the point cloud data generation method described in the foregoing method embodiments are executed.

Embodiments of the present disclosure further provide a computer program product carrying program code, where the instructions included in the program code can be used to execute the steps of the point cloud data generation method described in the foregoing method embodiments; reference may be made to the foregoing method embodiments.

The above computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the working processes of the systems and apparatuses described above. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a division by logical function; in actual implementation there may be other division manners. As another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling, or communication connections shown or discussed may be indirect coupling or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present disclosure, in essence, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present disclosure.
In some possible implementations, the computer-readable storage medium may be a ferroelectric random access memory (FRAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, a CD-ROM, or other memory; it may also be any of various devices including one or any combination of the above memories.
The above are merely implementations of the embodiments of the present disclosure, but the protection scope of the embodiments of the present disclosure is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the embodiments of the present disclosure, and all such changes and substitutions shall fall within the protection scope of the embodiments of the present disclosure. Therefore, the protection scope of the embodiments of the present disclosure shall be subject to the protection scope of the claims.
The present disclosure provides a point cloud data generation method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a scene image corresponding to the current scene collected by a target device, and positioning pose information of the target device; extracting at least one two-dimensional feature point from the scene image by using a feature point extraction method corresponding to a simultaneous localization and mapping (SLAM) system; determining, according to the positioning pose information and the position information of the two-dimensional feature points in the scene image, three-dimensional feature points in a pre-built three-dimensional scene map that match the two-dimensional feature points, and three-dimensional position information of the three-dimensional feature points; and generating point cloud data corresponding to the current scene based on the three-dimensional feature points and the three-dimensional position information of the three-dimensional feature points. Since the feature point information included in the three-dimensional scene map is relatively accurate, the embodiments of the present disclosure can accurately determine the three-dimensional feature points matching the two-dimensional feature points and their three-dimensional position information, and can therefore generate relatively accurate point cloud data corresponding to the current scene. Since the feature point extraction method corresponding to the SLAM system is used to extract the two-dimensional feature points from the scene image, the two-dimensional feature points match the SLAM system, so that the generated point cloud data of the current scene also matches the SLAM system; subsequently, the accumulated error of the SLAM system can be corrected by using the point cloud data corresponding to the current scene.
Claims (11)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110348215.2A CN112907671B (en) | 2021-03-31 | 2021-03-31 | Point cloud data generation method and device, electronic equipment and storage medium |
| CN202110348215.2 | 2021-03-31 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022205750A1 true WO2022205750A1 (en) | 2022-10-06 |
Family
ID=76109691
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2021/114435 Ceased WO2022205750A1 (en) | 2021-03-31 | 2021-08-25 | Point cloud data generation method and apparatus, electronic device, and storage medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN112907671B (en) |
| WO (1) | WO2022205750A1 (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112907671B (en) * | 2021-03-31 | 2022-08-02 | 深圳市慧鲤科技有限公司 | Point cloud data generation method and device, electronic equipment and storage medium |
| CN113470181B (en) * | 2021-07-14 | 2025-05-30 | 浙江商汤科技开发有限公司 | Plane construction method, device, electronic equipment and storage medium |
| CN113741698B (en) * | 2021-09-09 | 2023-12-15 | 亮风台(上海)信息科技有限公司 | A method and device for determining and presenting target mark information |
| CN114495072A (en) * | 2022-01-29 | 2022-05-13 | 上海商汤临港智能科技有限公司 | Occupant state detection method and device, electronic device and storage medium |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170076491A1 (en) * | 2015-09-14 | 2017-03-16 | Fujitsu Limited | Operation support method, operation support program, and operation support system |
| CN108734654A (en) * | 2018-05-28 | 2018-11-02 | 深圳市易成自动驾驶技术有限公司 | It draws and localization method, system and computer readable storage medium |
| CN110288710A (en) * | 2019-06-26 | 2019-09-27 | Oppo广东移动通信有限公司 | A three-dimensional map processing method, processing device and terminal equipment |
| CN110487274A (en) * | 2019-07-30 | 2019-11-22 | 中国科学院空间应用工程与技术中心 | SLAM method, system, navigation vehicle and storage medium for weak texture scene |
| CN112269851A (en) * | 2020-11-16 | 2021-01-26 | Oppo广东移动通信有限公司 | Map data update method, device, storage medium and electronic device |
| CN112907671A (en) * | 2021-03-31 | 2021-06-04 | 深圳市慧鲤科技有限公司 | Point cloud data generation method and device, electronic equipment and storage medium |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6580821B1 (en) * | 2000-03-30 | 2003-06-17 | Nec Corporation | Method for computing the location and orientation of an object in three dimensional space |
| US10839556B2 (en) * | 2018-10-23 | 2020-11-17 | Microsoft Technology Licensing, Llc | Camera pose estimation using obfuscated features |
| CN111260538B (en) * | 2018-12-03 | 2023-10-03 | 北京魔门塔科技有限公司 | Positioning and vehicle-mounted terminal based on long-baseline binocular fisheye camera |
| CN109887032B (en) * | 2019-02-22 | 2021-04-13 | 广州小鹏汽车科技有限公司 | Monocular vision SLAM-based vehicle positioning method and system |
| CN110084272B (en) * | 2019-03-26 | 2021-01-08 | 哈尔滨工业大学(深圳) | Cluster map creation method and repositioning method based on cluster map and position descriptor matching |
| CN111586360B (en) * | 2020-05-14 | 2021-09-10 | 佳都科技集团股份有限公司 | Unmanned aerial vehicle projection method, device, equipment and storage medium |
| CN111862180B (en) * | 2020-07-24 | 2023-11-17 | 盛景智能科技(嘉兴)有限公司 | Camera set pose acquisition method and device, storage medium and electronic equipment |
2021
- 2021-03-31 CN CN202110348215.2A patent/CN112907671B/en active Active
- 2021-08-25 WO PCT/CN2021/114435 patent/WO2022205750A1/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| CN112907671B (en) | 2022-08-02 |
| CN112907671A (en) | 2021-06-04 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2022205750A1 (en) | Point cloud data generation method and apparatus, electronic device, and storage medium | |
| US11295472B2 (en) | Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing offline map database | |
| CN104145294B (en) | Self-Pose Estimation Based on Scene Structure | |
| CN105009120B (en) | Client-server based dynamic search | |
| EP3309751B1 (en) | Image processing device, method, and program | |
| CN107850673B (en) | Visual-Inertial Ranging Attitude Drift Calibration | |
| WO2022188094A1 (en) | Point cloud matching method and apparatus, navigation method and device, positioning method, and laser radar | |
| CN108871311B (en) | Pose determination method and device | |
| US20150092048A1 (en) | Off-Target Tracking Using Feature Aiding in the Context of Inertial Navigation | |
| US20160005164A1 (en) | Extrinsic parameter calibration of a vision-aided inertial navigation system | |
| CN110322500A (en) | Immediately optimization method and device, medium and the electronic equipment of positioning and map structuring | |
| CN104662435A (en) | Method of determining a position and orientation of a device associated with a capturing device for capturing at least one image | |
| US20150199556A1 (en) | Method of using image warping for geo-registration feature matching in vision-aided positioning | |
| EP2989481A1 (en) | Localization systems and methods | |
| EP2927635B1 (en) | Feature set optimization in vision-based positioning | |
| CN111721283A (en) | Accuracy detection method, device, computer equipment and storage medium of positioning algorithm | |
| Fissore et al. | Towards surveying with a smartphone | |
| CN110794434A (en) | A pose determination method, device, device and storage medium | |
| CN107683496B (en) | The mapping of hypothesis line and verifying for 3D map | |
| US9245343B1 (en) | Real-time image geo-registration processing | |
| Liu et al. | Evaluation of different SLAM algorithms using Google tangle data | |
| CN117152569B (en) | Tracking algorithm accuracy detection device | |
| HK40044579B (en) | Point cloud data generation method, device, electronic device and storage medium | |
| HK40044579A (en) | Point cloud data generation method, device, electronic device and storage medium | |
| JP2023045010A (en) | Information processing system and information processing method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21934393; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22.01.2024) |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21934393; Country of ref document: EP; Kind code of ref document: A1 |