CN116817892A - Cloud-integrated unmanned aerial vehicle route positioning method and system based on a visual semantic map
- Publication number
- CN116817892A (application CN202311083802.9A)
- Authority
- CN
- China
- Prior art keywords
- semantic
- route
- unmanned aerial
- aerial vehicle
- semantic map
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The application discloses a cloud-integrated unmanned aerial vehicle route positioning method and system based on a visual semantic map. In the method, the unmanned aerial vehicle collects nadir images along the route in real time and preprocesses them to construct a route local semantic map, which is uploaded to a cloud server; the cloud server fuses and updates the route local semantic maps to obtain a route global semantic map; the unmanned aerial vehicle acquires the latest route global semantic map and, on encountering a satellite-denied environment, performs semantic feature matching between the real-time route nadir image and the route global semantic map to obtain its positioning information. In the application, the fusion-update module, which must run continuously and carries a heavy computational load, is placed on the cloud server, while the unmanned aerial vehicle only executes lightweight local mapping and semantic matching, so computing tasks are allocated reasonably. The application can accurately describe the ground information below the route and rapidly incorporate changes to it, which helps improve the positioning robustness of the unmanned aerial vehicle in satellite-denied environments.
Description
Technical Field
The application relates to the technical field of unmanned aerial vehicle positioning, and in particular to a cloud-integrated unmanned aerial vehicle route positioning method and system based on a visual semantic map.
Background
With the continuous progress of unmanned aerial vehicle technology, large-scale commercial applications have emerged, such as agriculture and forestry plant protection, power line inspection, urban express delivery and medical material transportation, which require unmanned aerial vehicles to perform tens, hundreds or even thousands of flights per day along predetermined routes. During route flight, the positioning and navigation information of the unmanned aerial vehicle comes mainly from a global navigation satellite system, such as GPS, BeiDou, GLONASS or Galileo. Satellite navigation signals are subject to environmental interference, for example from urban high-rise buildings, severe weather or electromagnetic radiation, which can cause the signals to be lost. To improve the robustness of unmanned aerial vehicle positioning and navigation in large-scale commercial applications, research on route positioning in satellite-denied environments is therefore critical. Among the candidate technologies, visual navigation has attracted wide attention for its strong anti-interference capability, low power consumption, low cost, small size, simple equipment structure, passive operation and high positioning accuracy.
Visual navigation mainly refers to photographing the ground with imaging equipment on board the aircraft, such as visible-light, infrared or SAR sensors, and then processing the images to obtain navigation parameters. Current visual navigation can be divided into map-based and map-free navigation according to whether a reference database is required. Map-based navigation needs a pre-stored digital scene reference database containing accurate geographic information, also called a navigation map; absolute positioning of the aircraft can then be achieved by image matching or feature matching between a real-shot image frame and the navigation map. Some unmanned aerial vehicle vision-aided navigation methods and systems use a geographic information system as the onboard reference database: real-time images are processed to extract visual features with real geographic meaning, such as village and road information, and the extracted features are matched against the geographic information system to obtain a visual positioning result. Some monocular-vision autonomous positioning methods take satellite remote sensing images as the reference database, register the remote sensing images against the images acquired by the unmanned aerial vehicle, compute corresponding feature points, and solve for the geographic coordinates of the unmanned aerial vehicle. Methods based on a geographic information system or a satellite remote sensing image database can solve the unmanned aerial vehicle positioning problem to a certain extent, but have three disadvantages. First, the update frequency of geographic information systems and satellite remote sensing images is too low, with data generally refreshed on a yearly basis, so they cannot reflect in time the urban and rural changes caused by the current large-scale construction in China. Second, because a geographic information system or satellite image differs from the unmanned aerial vehicle imagery in shooting time, season and weather, images of the same ground target can differ considerably, making image matching difficult and inaccurate. Third, for a specific unmanned aerial vehicle route, the coverage of a geographic information system or satellite imagery is too large and cannot be customized or cropped; it contains a large amount of redundant information, so image matching takes a long time, which is unfavorable for real-time positioning.
In an unmanned aerial vehicle visual relocalization method based on object semantics, object semantic information is extracted in advance from 800 images using a YOLOv3 network, topological graphs are built from the semantic information, and a scene map library is formed. A YOLOv3 network then extracts semantic information from the real-time images to be matched, a random-walk algorithm matches those images against the map library, and finally the EPnP algorithm solves the pose of the unmanned aerial vehicle. This method can effectively mitigate the loss of image-matching precision caused by ambient illumination change, but the matching algorithm is complex and no method for updating the map library is given, so it is unsuitable for the large-scale, multi-sortie outdoor flight found in commercial unmanned aerial vehicle applications.
A typical representative of map-free navigation is visual SLAM on board the unmanned aerial vehicle. With the vigorous development of computer vision, visual SLAM has gradually become a mature navigation scheme and achieves good results in small, narrow and enclosed complex spaces. However, visual SLAM is suited to exploring unknown flight environments; it is not suited to unmanned aerial vehicle navigation over wide, known outdoor areas along known routes with high robustness requirements.
In summary, existing visual positioning and navigation methods have clear limitations: they are designed around an unmanned aerial vehicle flying single sorties and do not consider system architecture optimization for large-scale unmanned aerial vehicle flight. The application therefore provides a cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map for satellite-denied environments.
Disclosure of Invention
The application aims to provide a cloud-integrated unmanned aerial vehicle route positioning method and system based on a visual semantic map, addressing the problems in the prior art that the reference databases of unmanned aerial vehicle visual positioning methods are unsuitable and that the advantages of large-scale flight are not exploited.
The aim of the application is achieved by the following technical scheme. The first aspect of the embodiments of the application provides a cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map, comprising the following steps:
s1, when the satellite navigation signal is good, acquiring nadir images of the route in real time during each flight of the unmanned aerial vehicle, preprocessing the route nadir images to construct the route local semantic map corresponding to that flight sortie, and uploading the route local semantic map corresponding to the flight sortie to a cloud server; wherein the preprocessing comprises semantic segmentation, inverse perspective transformation, scaling and rasterization;
s2, the cloud server performs fusion updating on the route local semantic maps received from the unmanned aerial vehicle side to obtain the route global semantic map;
s3, before the unmanned aerial vehicle executes a flight task, acquiring the current latest route global semantic map from the cloud server; when the unmanned aerial vehicle encounters a satellite-denied environment during flight, performing semantic feature matching between the real-time route nadir image and the route global semantic map to acquire the current positioning information of the unmanned aerial vehicle.
Further, constructing the route local semantic map corresponding to the flight sortie in step S1 comprises the following sub-steps:
s11, when the satellite navigation signal is good, acquiring real-time route nadir images, the real-time flight height, the real-time flight attitude and the real-time longitude and latitude during each flight of the unmanned aerial vehicle; wherein the flight attitude comprises a pitch angle, a roll angle and a yaw angle;
s12, performing semantic segmentation on the real-time route nadir image with a semantic segmentation network to extract semantic tags of the ground targets in the image and obtain a semantic image; wherein the semantic tags include roads, rivers, buildings and ground markings;
s13, performing inverse perspective transformation on the semantic image according to the real-time flight attitude to obtain an inverse-perspective-transformed semantic image; wherein the transformed semantic image plane is parallel to the ground, and the image display direction is due north;
s14, scaling the semantic image subjected to the inverse perspective transformation according to the real-time flying height and the preset shooting height to obtain a scaled semantic image; the shooting height of the semantic image after the scaling processing is a preset shooting height;
s15, rasterizing the scaled semantic image according to the real-time longitude and latitude and the preset grid size to obtain a rasterized semantic image, and calculating the longitude and latitude and semantic tag score of each grid;
s16, constructing a route local semantic map according to the rasterized semantic image, the longitude and latitude of each grid and the semantic tag score.
Further, calculating the longitude and latitude of each grid in step S15 is specifically: the longitude and latitude of the central grid of the semantic image are the real-time longitude and latitude of the unmanned aerial vehicle, and the longitude and latitude of the remaining grids are derived from their distance to the central grid and the central grid's longitude and latitude.
Further, calculating the semantic tag score of each grid in step S15 is specifically: first all semantic tag scores are set to zero; then, for each pixel in the grid, the semantic tag that pixel belongs to is counted and the score of that semantic tag is incremented by one.
Further, the step S2 includes the following sub-steps:
s21, the cloud server receives the route local semantic maps from the unmanned aerial vehicle side, rasterizes the route global semantic map using the grid size preset in step S15, and initializes the route global semantic map; all grids in the initialized route global semantic map contain only longitude and latitude information, with semantic tag scores set to zero;
s22, loading a route local semantic map onto the route global semantic map for fusion;
s23, judging whether the route local semantic map loaded in the step S22 is the last route local semantic map, and if so, jumping to the step S26; otherwise, jumping to step S24;
s24, adding the semantic tag score of each grid in the route local semantic map to the corresponding grid of the route global semantic map according to the grid's longitude and latitude, so as to update the semantic tag scores of the route global semantic map;
s25, returning to the step S22 after finishing fusion updating of all grids in the current route local semantic map;
s26, taking, for each grid in the route global semantic map, the semantic tag with the maximum score as that grid's semantic tag, so as to acquire the route global semantic map.
Further, the route local semantic map and the route global semantic map are two-dimensional occupancy grid maps, and each grid contains longitude and latitude information, semantic tags and their scores; the route local semantic map and the route global semantic map are both constructed according to a preset grid size and a preset shooting height.
Further, the step S3 includes the following substeps:
s31, before executing a flight task, the unmanned aerial vehicle acquires a current latest route global semantic map from a cloud server;
s32, when the unmanned aerial vehicle encounters a satellite-denied environment during flight, acquiring the real-time route nadir image, real-time flight height, real-time flight attitude and real-time longitude and latitude of the unmanned aerial vehicle in the satellite-denied environment; wherein the flight attitude comprises a pitch angle, a roll angle and a yaw angle;
s33, performing semantic segmentation on the real-time route nadir image acquired in step S32 with a semantic segmentation network to acquire a second semantic image;
s34, performing inverse perspective transformation on the second semantic image according to the real-time flight attitude obtained in step S32 to obtain an inverse-perspective-transformed second semantic image; wherein the transformed second semantic image plane is parallel to the ground, and the image display direction is due north;
s35, scaling the second semantic image after the inverse perspective transformation according to the real-time flying height acquired in the step S32 and the shooting height preset in the step S14 so as to acquire a scaled second semantic image; the shooting height of the second semantic image after the scaling processing is a preset shooting height;
s36, performing semantic feature matching between the scaled second semantic image and the route global semantic map to obtain a plurality of matching similarities;
s37, searching the route global semantic map for the region with the maximum similarity to the scaled second semantic image, and taking the longitude and latitude of the centre of that region as the current positioning information of the unmanned aerial vehicle.
The second aspect of the embodiments of the application provides a cloud-integrated unmanned aerial vehicle route positioning system based on a visual semantic map, used to implement the above cloud-integrated unmanned aerial vehicle route positioning method, comprising:
the unmanned aerial vehicle is used for constructing the route local semantic map when the satellite navigation signal is good and acquiring positioning information in real time based on semantic feature matching when a satellite-denied environment is encountered; and
the cloud server is used for receiving the route local semantic maps, performing fusion updating on them, and generating the route global semantic map.
Further, the unmanned aerial vehicle includes:
the satellite navigation system is used for unmanned aerial vehicle flight navigation when the satellite navigation signal is good;
the downward-looking camera is used for photographing the ground directly below the unmanned aerial vehicle's route and acquiring real-time route nadir images;
the height measurement module is used for acquiring the real-time flying height of the unmanned aerial vehicle;
the attitude measurement module is used for acquiring the real-time flight attitude of the unmanned aerial vehicle;
the onboard computer is used for running the semantic mapping module and the semantic feature matching module and storing the route local semantic map and the route global semantic map;
the data communication module is used for uploading the local semantic map of the route to the cloud server and downloading the global semantic map of the route from the cloud server;
the semantic mapping module is used for constructing the route local semantic map according to the real-time route nadir image, the real-time flight height, the real-time flight attitude and the real-time longitude and latitude; and
the semantic feature matching module is used for performing semantic feature matching between the route global semantic map and the unmanned aerial vehicle's real-time nadir image to obtain a plurality of matching similarities.
Further, the cloud server includes: the fusion update module, used for performing fusion updating on the route local semantic maps to generate the route global semantic map.
Compared with the prior art, the application has the beneficial effects that:
(1) When the satellite navigation signal is good, a large number of route local semantic maps are constructed over the unmanned aerial vehicle's many flights and uploaded to the cloud server for processing; the cloud server fuses and updates this large number of route local semantic maps to generate the route global semantic map; the unmanned aerial vehicle downloads the latest route global semantic map before each flight, and if it encounters a satellite-denied environment in flight, real-time route positioning is achieved by semantic feature matching between real-time nadir images and the route global semantic map. The application builds an integrated mapping-matching-positioning architecture spanning the unmanned aerial vehicle side and the cloud server side and allocates computing tasks reasonably: the semantic map fusion-update module, which must run continuously and is computationally heavy, is placed on the cloud server side with its abundant computing resources, while the unmanned aerial vehicle side only executes the lightweight local mapping and semantic matching modules, so through this architecture the unmanned aerial vehicle can achieve real-time positioning in satellite-denied environments;
(2) The application can position unmanned aerial vehicles along their routes in large-scale applications, making full use of the large number of flight sorties for map construction and updating; the route global semantic map built by the application can accurately describe the ground information below the route and quickly incorporate changes to it, providing a systematic and robust solution for unmanned aerial vehicle positioning in satellite-denied environments.
Drawings
FIG. 1 is a flow chart of the cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map;
FIG. 2 is a flow chart of the UAV side constructing the route local semantic map in an embodiment of the application;
FIG. 3 is a schematic diagram of the effect of the UAV-side semantic mapping module in an embodiment of the application;
FIG. 4 is a flow chart of the cloud side acquiring the route global semantic map in an embodiment of the present application;
FIG. 5 is a flow chart of UAV-side semantic feature matching in an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may be referred to as first information, without departing from the scope of the application. The word "if" as used herein may be interpreted as "when", "upon" or "in response to determining", depending on the context.
The present application will be described in detail with reference to the accompanying drawings. The features of the examples and embodiments described below may be combined with each other without conflict.
Referring to fig. 1, the cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map specifically comprises the following steps:
S1, when the satellite navigation signal is good, acquiring nadir images of the route in real time during each flight of the unmanned aerial vehicle, preprocessing the route nadir images to construct the route local semantic map corresponding to that flight sortie, and uploading it to a cloud server. The preprocessing includes semantic segmentation, inverse perspective transformation, scaling and rasterization.
It should be understood that the method of the application belongs to map-based visual navigation, where the navigation map used is constructed from the unmanned aerial vehicle's many nadir shots along the route.
Specifically, when the satellite navigation signal is good and the unmanned aerial vehicles execute route flight tasks using the satellite navigation system, each unmanned aerial vehicle acquires route nadir images in real time on each flight task and preprocesses them to construct the route local semantic map corresponding to the current flight sortie; the route local semantic map corresponding to that sortie is then uploaded to the cloud server for subsequent processing.
In this embodiment, the UAV side builds the route local semantic map as shown in fig. 2, comprising the following steps:
S11, when the satellite navigation signal is good, acquiring real-time route nadir images, the real-time flight height, the real-time flight attitude and the real-time longitude and latitude during each flight of the unmanned aerial vehicle; the flight attitude includes pitch angle, roll angle and yaw angle.
S12, performing semantic segmentation on the real-time route nadir image with a semantic segmentation network to extract semantic tags of the ground targets in the image and obtain a semantic image; the semantic tags include roads, rivers, buildings, ground markings and the like.
It should be understood that the extracted semantic tags are per-pixel labels, usually in image form, so a semantic image is obtained after extracting the semantic tags of the ground targets in the real-time route nadir image.
Semantic features can effectively overcome the feature-extraction deviation and insufficient positioning accuracy caused by imaging differences of the same ground target under different weather, seasons, illumination and other conditions. Semantic tags describe ground information accurately and uniquely, so tags such as mountains may also be used.
In this embodiment, the semantic segmentation network is DDRNet (Deep Dual-resolution Networks), which is used to segment the route nadir images because its dual-stream structure simultaneously maintains high-resolution image features and high-level semantic features; DDRNet is also a lightweight network with fast inference.
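As a minimal sketch of this step, the snippet below runs a generic PyTorch segmentation model on a nadir image and returns a per-pixel label map; `model` is assumed to be any network (such as DDRNet) that outputs class logits, and the tensor layout is an assumption for illustration, not the patent's implementation.

```python
import torch
import torch.nn.functional as F

def segment(model, rgb):
    """rgb: (H, W, 3) uint8 nadir image -> (H, W) integer label image."""
    x = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        logits = model(x)                     # (1, n_classes, h', w')
        logits = F.interpolate(logits, size=rgb.shape[:2],
                               mode="bilinear", align_corners=False)
    return logits.argmax(dim=1).squeeze(0).cpu().numpy()
```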
S13, performing inverse perspective transformation on the semantic image according to the real-time flight attitude to obtain an inverse-perspective-transformed semantic image; the transformed semantic image plane is parallel to the ground, and the image display direction is due north.
During shooting, the flight attitude of the unmanned aerial vehicle, i.e. its pitch, roll and yaw angles, changes in real time, so the acquired route nadir images undergo perspective distortion of varying degrees; the inverse perspective transformation removes the perspective effect of the flight attitude on the route nadir images.
Specifically, a transformation matrix is computed from the real-time flight attitude of the unmanned aerial vehicle, i.e. from the real-time pitch, roll and yaw data, and the semantic image is inverse-perspective-transformed according to that matrix so that the transformed image is parallel to the ground.
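One common way to realize this, sketched below under assumptions the patent does not spell out, is a rotation-only homography H = K·R·K⁻¹ that cancels the measured attitude; K is an assumed camera intrinsic matrix, angles are in radians, the ZYX rotation convention is an assumption, and nearest-neighbour interpolation preserves the discrete labels.

```python
import numpy as np
import cv2

def attitude_rotation(pitch, roll, yaw):
    """Rotation that undoes the measured attitude (ZYX convention assumed)."""
    cp, sp = np.cos(-pitch), np.sin(-pitch)
    cr, sr = np.cos(-roll), np.sin(-roll)
    cy, sy = np.cos(-yaw), np.sin(-yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch
    Ry = np.array([[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]])   # roll
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw -> north-up
    return Rz @ Ry @ Rx

def inverse_perspective(label_img, pitch, roll, yaw, K):
    """Warp the semantic image so its plane is parallel to the ground, north-up."""
    H = K @ attitude_rotation(pitch, roll, yaw) @ np.linalg.inv(K)
    h, w = label_img.shape[:2]
    return cv2.warpPerspective(label_img, H, (w, h), flags=cv2.INTER_NEAREST)
```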
S14, scaling the inverse-perspective-transformed semantic image according to the real-time flight height and the preset shooting height to obtain a scaled semantic image; after scaling, the effective shooting height of every semantic image is the preset shooting height.
It should be noted that the flight height of the unmanned aerial vehicle differs from flight to flight, so the scales of the route nadir images are inconsistent, which hinders subsequent unified processing.
Specifically, the shooting height may be preset as H. According to the real-time flight height h of the unmanned aerial vehicle, and taking pinhole camera imaging as the model, the inverse-perspective-transformed semantic image is scaled; the image scaling ratio is k = h / H,
where k represents the scaling ratio of the image: an image shot at height h is resized by the factor k so that it appears as if shot at the preset height H.
It should be understood that the scaling ratio of each route nadir image is different; after the inverse-perspective-transformed semantic images are scaled by their respective ratios, the effective shooting height of the route nadir images of all flight sorties is the preset shooting height, and correspondingly, the shooting height of the route local semantic maps generated on all flight sorties is also the preset shooting height.
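A minimal sketch of the rescaling, assuming the pinhole-model ratio k = h/H reconstructed above; `cv2.resize` and the parameter names are illustrative choices, not the patent's code, and nearest-neighbour interpolation again preserves label values.

```python
import cv2

def rescale_to_reference(label_img, h_now, h_ref):
    """Make the image look as if shot from the preset height h_ref."""
    k = h_now / h_ref          # k > 1 when flying above the reference height
    return cv2.resize(label_img, None, fx=k, fy=k,
                      interpolation=cv2.INTER_NEAREST)
```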
The application scenario of this embodiment is logistics delivery by unmanned aerial vehicle within a park. Since the ground roads can accurately and uniquely represent the ground information below the route, the semantic segmentation network extracts only two semantic tags: road and other. The semantic image after scaling is shown in fig. 3.
S15, rasterizing the scaled semantic image according to the real-time longitude and latitude and the preset grid size to obtain a rasterized semantic image, and calculating the longitude and latitude and semantic tag score of each grid.
To improve the robustness of the semantic map generated later, the semantic image must be rasterized to filter out noise introduced by semantic segmentation. Each grid projected onto the ground is a square of fixed physical size, so the grid size must be preset manually.
Illustratively, in this embodiment, the grid size is set to be 1m×1m, and then the scaled semantic image is rasterized according to the real-time longitude and latitude, and the longitude and latitude and semantic tag score of each grid are calculated.
Further, calculating the longitude and latitude of each grid is specifically: the longitude and latitude of the central grid of the semantic image are the real-time longitude and latitude of the unmanned aerial vehicle, and the longitude and latitude of the remaining grids are derived from their distance to the central grid and the central grid's longitude and latitude.
Further, calculating the semantic tag score of each grid is specifically: first all semantic tag scores are set to zero; then the semantic tag of every pixel in the grid is counted, and the score of that tag is incremented by one. That is, whichever semantic tag a pixel in the grid belongs to, the score of that semantic tag is increased by one.
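The sketch below is a non-authoritative reading of step S15 under assumed parameters (a 1 m grid, a known ground sampling distance, the WGS-84 radius): it bins the scaled label image into grids, accumulates per-grid tag scores, and assigns each grid a longitude and latitude by small-angle offsets from the centre grid.

```python
import numpy as np

EARTH_R = 6378137.0  # WGS-84 equatorial radius, metres (assumed constant)

def rasterize(label_img, px_per_m, grid_m, uav_lat, uav_lon, n_labels):
    """Grid the scaled semantic image; return tag scores and grid lat/lon."""
    px = int(round(px_per_m * grid_m))              # pixels per grid cell
    gh, gw = label_img.shape[0] // px, label_img.shape[1] // px
    scores = np.zeros((gh, gw, n_labels), dtype=np.int32)
    for gi in range(gh):                            # count tags per cell
        for gj in range(gw):
            cell = label_img[gi*px:(gi+1)*px, gj*px:(gj+1)*px]
            scores[gi, gj] = np.bincount(cell.ravel(), minlength=n_labels)
    ci, cj = gh // 2, gw // 2                       # centre grid = UAV fix
    dn = (ci - np.arange(gh)) * grid_m              # metres north of centre
    de = (np.arange(gw) - cj) * grid_m              # metres east of centre
    lat = uav_lat + np.degrees(dn / EARTH_R)        # per-row latitudes
    lon = uav_lon + np.degrees(de / (EARTH_R * np.cos(np.radians(uav_lat))))
    return scores, lat, lon                         # lon: per-column values
```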
S16, constructing a route local semantic map according to the rasterized semantic image, the longitude and latitude of each grid and the semantic tag score.
It should be noted that the route local semantic map is a two-dimensional occupancy grid map, and each grid contains longitude and latitude information, semantic tags and their scores. The route local semantic map is constructed according to the preset shooting height, with due north as the default display direction; the grid size and shooting height are both set manually in advance.
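For concreteness, one plausible in-memory layout for such a grid map is sketched below, consistent with the rasterization sketch above; the field names and default values are illustrative assumptions only.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SemanticGridMap:
    """Two-dimensional occupancy-style semantic grid map (illustrative)."""
    lat: np.ndarray        # (H,) grid-centre latitudes, one per row
    lon: np.ndarray        # (W,) grid-centre longitudes, one per column
    scores: np.ndarray     # (H, W, n_labels) accumulated tag scores
    grid_m: float = 1.0    # grid edge length in metres (assumed default)
    shoot_h: float = 100.0 # preset shooting height in metres (assumed)

    @property
    def labels(self) -> np.ndarray:
        """Winning semantic tag per grid."""
        return self.scores.argmax(axis=-1)
```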
S2, the cloud server performs fusion updating on the route local semantic maps received from the unmanned aerial vehicle side to obtain the route global semantic map.
Specifically, the cloud server receives the large number of route local semantic maps constructed by multiple unmanned aerial vehicles over many flights while the satellite navigation signal is good, and fuses and updates them to obtain the route global semantic map, as shown in fig. 4, comprising the following steps:
S21, the cloud server receives the route local semantic maps from the unmanned aerial vehicles, rasterizes the route global semantic map using the grid size preset in step S15, and initializes it; all grids in the initialized route global semantic map contain only longitude and latitude information, with semantic tag scores set to zero.
It should be understood that the route global semantic map is acquired according to the route local semantic map, and thus, the grid size of the route global semantic map is consistent with the grid size preset for the route local semantic map in step S15.
S22, loading a route local semantic map onto the route global semantic map for fusion.
S23, judging whether the route local semantic map loaded in the step S22 is the last route local semantic map, and if so, jumping to the step S26; otherwise, the process goes to step S24.
S24, adding the semantic tag score of each grid in the route local semantic map to the corresponding grid of the route global semantic map according to the grid's longitude and latitude, so as to update the semantic tag scores of the route global semantic map.
S25, returning to the step S22 after finishing the fusion updating of all grids in the current route local semantic map.
It should be understood that after the fusion update of one route local semantic map is completed, the process returns to step S22, where the next route local semantic map is loaded to continue the fusion update until the last route local semantic map.
S26, taking, for each grid in the route global semantic map, the semantic tag with the maximum score as that grid's semantic tag, so as to acquire the route global semantic map.
Specifically, after the fusion update of all route local semantic maps is complete, the semantic tag with the highest score in each grid of the route global semantic map is taken as that grid's semantic tag, and the route global semantic map is output.
It should be understood that each grid in the route global semantic map may hold several semantic tags with their scores, for example semantic tag 1 with its score and semantic tag 2 with its score. If the maximum of all tag scores in a grid is the score of semantic tag 2, then semantic tag 2 is the semantic tag of that grid; in the same way the semantic tags of all grids are obtained, and thereby the route global semantic map.
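A compact sketch of the fusion update (S22 to S26) under the data layout assumed above: local scores are accumulated into the global score array at the grid cells their latitude and longitude resolve to, and the per-grid winner is taken at the end. The helper names and argument layout are hypothetical.

```python
import numpy as np

def fuse_local_maps(local_maps, global_scores):
    """Accumulate local tag scores into the global map, then take the argmax.

    local_maps: iterable of (rows, cols, scores), where rows/cols index the
    global grid cells each local grid falls into (resolved from lat/lon),
    and scores has shape (n_cells, n_labels).
    global_scores: (H, W, n_labels) int array, initialised to zero (S21).
    """
    for rows, cols, scores in local_maps:
        np.add.at(global_scores, (rows, cols), scores)   # S24 accumulation
    return global_scores.argmax(axis=-1)                 # S26 winning tags
```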
It should be noted that the route global semantic map is a two-dimensional occupancy grid map, and each grid contains longitude and latitude information, semantic tags and their scores. The route global semantic map is constructed according to the preset shooting height, with due north as the default display direction; the grid size and shooting height are both set manually in advance.
S3, before the unmanned aerial vehicle executes a flight task, acquiring the current latest route global semantic map from the cloud server; when the unmanned aerial vehicle encounters a satellite-denied environment during flight, performing semantic feature matching between the real-time route nadir image and the route global semantic map to acquire the current positioning information of the unmanned aerial vehicle.
Specifically, before executing a flight task, each unmanned aerial vehicle acquires the current latest route global semantic map. During flight, when the satellite navigation signal is blocked or jammed, i.e. when a satellite-denied environment is encountered, the unmanned aerial vehicle activates visual route positioning: its real-time route nadir image is semantically matched against the route global semantic map, and the longitude and latitude of the region with the highest matching similarity give the current positioning information. As shown in fig. 5, this specifically comprises the following steps:
s31, before executing the flight task, the unmanned aerial vehicle acquires a current latest route global semantic map from a cloud server.
S32, when the unmanned aerial vehicle encounters a satellite-denied environment during flight, acquiring the real-time route nadir image, real-time flight height, real-time flight attitude and real-time longitude and latitude of the unmanned aerial vehicle in the satellite-denied environment; the flight attitude includes pitch angle, roll angle and yaw angle.
S33, performing semantic segmentation on the real-time route nadir image acquired in step S32 with a semantic segmentation network to acquire a second semantic image.
It should be understood that the route nadir image acquired in step S32 is segmented with the method of step S12, yielding the semantic image in the satellite-denied environment.
S34, performing inverse perspective transformation on the second semantic image according to the real-time flight attitude obtained in step S32 to obtain an inverse-perspective-transformed second semantic image; the transformed second semantic image plane is parallel to the ground, and the image display direction is due north.
It should be understood that the second semantic image acquired in step S33 is subjected to inverse perspective transformation by the method in step S13 such that the photographing plane is parallel to the ground and the display direction is converted to north.
S35, scaling the inverse-perspective-transformed second semantic image according to the real-time flight height acquired in step S32 and the shooting height preset in step S14 to acquire a scaled second semantic image; the effective shooting height of the scaled second semantic image is the preset shooting height.
It should be understood that the inverse-perspective-transformed semantic image acquired in step S34 is scaled with the method of step S14, converting the real-time shooting height to the preset shooting height H.
S36, carrying out semantic feature matching on the second semantic image after the scaling processing and the route global semantic map so as to obtain a plurality of matching similarities.
In this embodiment, a nearest-neighbour data association (NN) algorithm may be used to perform semantic feature matching between the scaled semantic image and the route global semantic map. The nearest-neighbour data association algorithm has a low computational cost and is easy to implement in hardware, making it suitable for real-time positioning on the UAV side.
S37, searching the route global semantic map for the region with the maximum similarity to the scaled second semantic image, and taking the longitude and latitude of the centre of that region as the current positioning information of the unmanned aerial vehicle.
It should be understood that the route global semantic map contains many candidate regions; matching each region against the scaled second semantic image yields a plurality of matching similarities, and the longitude and latitude of the region with the largest matching similarity are the current position of the unmanned aerial vehicle.
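As a simplified stand-in for the matching step (the patent names nearest-neighbour data association; the exhaustive label-agreement search below is an assumed brute-force variant), the query label image is slid over the global label map and each placement is scored by the fraction of agreeing grid tags.

```python
import numpy as np

def match_position(query, global_labels):
    """Return the centre grid index and similarity of the best placement."""
    qh, qw = query.shape
    gh, gw = global_labels.shape
    best, best_ij = -1.0, (0, 0)
    for i in range(gh - qh + 1):
        for j in range(gw - qw + 1):
            window = global_labels[i:i+qh, j:j+qw]
            sim = float(np.mean(window == query))   # matching similarity
            if sim > best:
                best, best_ij = sim, (i, j)
    ci, cj = best_ij[0] + qh // 2, best_ij[1] + qw // 2
    return (ci, cj), best   # look up the centre grid's lat/lon for the fix
```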
It is worth mentioning that the embodiments of the application also provide a cloud-integrated unmanned aerial vehicle route positioning system based on a visual semantic map, used to implement the cloud-integrated unmanned aerial vehicle route positioning method described above.
In this embodiment, the unmanned aerial vehicle route positioning system includes a plurality of unmanned aerial vehicles and a cloud server. The unmanned aerial vehicle is used for constructing the route local semantic map when the satellite navigation signal is good and acquiring positioning information in real time based on semantic feature matching when a satellite-denied environment is encountered; the cloud server is used for receiving the route local semantic maps and fusing and updating them to generate the route global semantic map.
Further, the number of the unmanned aerial vehicles can be one, two or more, and the number of the unmanned aerial vehicles can be specifically set according to actual needs.
Further, the unmanned aerial vehicle for constructing the route local semantic map and the unmanned aerial vehicle for acquiring the positioning information in real time can be the same unmanned aerial vehicle or different unmanned aerial vehicles.
Specifically, the unmanned aerial vehicle comprises at least a satellite navigation system, a downward-looking camera, a height measurement module, an attitude measurement module, an onboard computer, a data communication module, a semantic mapping module and a semantic feature matching module. The satellite navigation system handles flight navigation when the satellite navigation signal is good; in this embodiment a differential GPS module is used. The downward-looking camera photographs the ground directly below the route and acquires real-time route nadir images. The height measurement module acquires the real-time flight height of the unmanned aerial vehicle; in this embodiment a barometer and a long-range laser rangefinder are fused to compute it. The attitude measurement module acquires the real-time flight attitude; in this embodiment an inertial measurement unit (IMU) and a magnetometer are used. The onboard computer runs the semantic mapping module and the semantic feature matching module and stores the route local and global semantic maps. The data communication module uploads route local semantic maps to the cloud server and downloads the route global semantic map from it. The semantic mapping module constructs the route local semantic map from the real-time route nadir image, flight height, flight attitude and longitude and latitude. The semantic feature matching module matches the route global semantic map against the unmanned aerial vehicle's real-time nadir image to obtain a plurality of matching similarities.
Further, the data communication module may adopt a wired transmission mode or a wireless communication mode, and in this embodiment, a 4G/5G wireless mobile network is adopted.
Specifically, the cloud server comprises a fusion updating module, and the fusion updating module is used for carrying out fusion updating processing on the local semantic map of the route to generate the global semantic map of the route.
The application makes full use of the unmanned aerial vehicle's large number of flight sorties for navigation map construction and updating, and builds an integrated mapping, matching and positioning architecture spanning the unmanned aerial vehicle side and the cloud server side. Computing tasks are allocated reasonably: the semantic map fusion-update module, which must run continuously and is computationally heavy, is placed on the cloud server side with its abundant computing resources, while the unmanned aerial vehicle side only executes the lightweight local mapping and semantic matching modules. The route global semantic map built by the application can accurately describe the ground information below the route and quickly incorporate changes to it, providing a systematic and robust solution for unmanned aerial vehicle positioning in satellite-denied environments.
The above embodiments are merely for illustrating the design concept and features of the present application, and are intended to enable those skilled in the art to understand the content of the present application and implement the same, the scope of the present application is not limited to the above embodiments. Therefore, all equivalent changes or modifications according to the principles and design ideas of the present application are within the scope of the present application.
Claims (10)
1. A cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map, characterized by comprising the following steps:
s1, when the satellite navigation signal is good, acquiring nadir images of the route in real time during each flight of the unmanned aerial vehicle, preprocessing the route nadir images to construct the route local semantic map corresponding to that flight sortie, and uploading the route local semantic map corresponding to the flight sortie to a cloud server; wherein the preprocessing comprises semantic segmentation, inverse perspective transformation, scaling and rasterization;
s2, the cloud server performs fusion updating on the route local semantic maps received from the unmanned aerial vehicle side to obtain the route global semantic map;
s3, before the unmanned aerial vehicle executes a flight task, acquiring the current latest route global semantic map from the cloud server; when the unmanned aerial vehicle encounters a satellite-denied environment during flight, performing semantic feature matching between the real-time route nadir image and the route global semantic map to acquire the current positioning information of the unmanned aerial vehicle.
2. The cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map according to claim 1, wherein constructing the route local semantic map corresponding to the flight sortie in step S1 comprises the following substeps:
s11, when the satellite navigation signal is good, acquiring real-time route nadir images, the real-time flight height, the real-time flight attitude and the real-time longitude and latitude during each flight of the unmanned aerial vehicle; wherein the flight attitude comprises a pitch angle, a roll angle and a yaw angle;
s12, performing semantic segmentation on the real-time route nadir image with a semantic segmentation network to extract semantic tags of the ground targets in the image and obtain a semantic image; wherein the semantic tags include roads, rivers, buildings and ground markings;
s13, performing inverse perspective transformation on the semantic image according to the real-time flight attitude to obtain an inverse-perspective-transformed semantic image; wherein the transformed semantic image plane is parallel to the ground, and the image display direction is due north;
s14, scaling the semantic image subjected to the inverse perspective transformation according to the real-time flying height and the preset shooting height to obtain a scaled semantic image; the shooting height of the semantic image after the scaling processing is a preset shooting height;
s15, rasterizing the scaled semantic image according to the real-time longitude and latitude and the preset grid size to obtain a rasterized semantic image, and calculating the longitude and latitude and semantic tag score of each grid;
s16, constructing a route local semantic map according to the rasterized semantic image, the longitude and latitude of each grid and the semantic tag score.
3. The cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map according to claim 2, wherein calculating the longitude and latitude of each grid in step S15 is specifically: the longitude and latitude of the central grid of the semantic image are the real-time longitude and latitude of the unmanned aerial vehicle, and the longitude and latitude of the remaining grids are derived from their distance to the central grid and the central grid's longitude and latitude.
4. The cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map according to claim 1, wherein calculating the semantic tag score of each grid in step S15 is specifically: first all semantic tag scores are set to zero; then, for each pixel in the grid, the semantic tag that pixel belongs to is counted and the score of that semantic tag is incremented by one.
5. The cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map according to claim 1, wherein step S2 comprises the following substeps:
s21, the cloud server receives the route local semantic maps from the unmanned aerial vehicle side, rasterizes the route global semantic map using the grid size preset in step S15, and initializes the route global semantic map; all grids in the initialized route global semantic map contain only longitude and latitude information, with semantic tag scores set to zero;
s22, loading a route local semantic map onto the route global semantic map for fusion;
s23, judging whether the route local semantic map loaded in the step S22 is the last route local semantic map, and if so, jumping to the step S26; otherwise, jumping to step S24;
s24, adding the semantic tag score of each grid in the route local semantic map to the corresponding grid of the route global semantic map according to the grid's longitude and latitude, so as to update the semantic tag scores of the route global semantic map;
s25, returning to the step S22 after finishing fusion updating of all grids in the current route local semantic map;
s26, taking, for each grid in the route global semantic map, the semantic tag with the maximum score as that grid's semantic tag, so as to acquire the route global semantic map.
6. The cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map according to claim 1, wherein the route local semantic map and the route global semantic map are two-dimensional occupancy grid maps, and each grid contains longitude and latitude information, semantic tags and their scores; the route local semantic map and the route global semantic map are both constructed according to a preset grid size and a preset shooting height.
7. The cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map according to claim 1, wherein step S3 comprises the following substeps:
s31, before executing a flight task, the unmanned aerial vehicle acquires a current latest route global semantic map from a cloud server;
s32, when the unmanned aerial vehicle encounters a satellite-denied environment during flight, acquiring the real-time route nadir image, real-time flight height, real-time flight attitude and real-time longitude and latitude of the unmanned aerial vehicle in the satellite-denied environment; wherein the flight attitude comprises a pitch angle, a roll angle and a yaw angle;
s33, performing semantic segmentation on the real-time route nadir image acquired in step S32 with a semantic segmentation network to acquire a second semantic image;
s34, performing inverse perspective transformation on the second semantic image according to the real-time flight attitude obtained in step S32 to obtain an inverse-perspective-transformed second semantic image; wherein the transformed second semantic image plane is parallel to the ground, and the image display direction is due north;
s35, scaling the second semantic image after the inverse perspective transformation according to the real-time flying height acquired in the step S32 and the shooting height preset in the step S14 so as to acquire a scaled second semantic image; the shooting height of the second semantic image after the scaling processing is a preset shooting height;
s36, performing semantic feature matching between the scaled second semantic image and the route global semantic map to obtain a plurality of matching similarities;
s37, searching the route global semantic map for the region with the maximum similarity to the scaled second semantic image, and taking the longitude and latitude of the centre of that region as the current positioning information of the unmanned aerial vehicle.
8. A cloud-integrated unmanned aerial vehicle route positioning system based on a visual semantic map, for implementing the cloud-integrated unmanned aerial vehicle route positioning method based on a visual semantic map according to any one of claims 1 to 6, comprising:
the unmanned aerial vehicle is used for constructing the route local semantic map when the satellite navigation signal is good and acquiring positioning information in real time based on semantic feature matching when a satellite-denied environment is encountered; and
the cloud server is used for receiving the route local semantic maps, performing fusion updating on them, and generating the route global semantic map.
9. The cloud-integrated unmanned aerial vehicle route positioning system based on a visual semantic map according to claim 8, wherein the unmanned aerial vehicle comprises:
the satellite navigation system is used for unmanned aerial vehicle flight navigation when the satellite navigation signal is good;
the downward-looking camera is used for photographing the ground directly below the unmanned aerial vehicle's route and acquiring real-time route nadir images;
the height measurement module is used for acquiring the real-time flying height of the unmanned aerial vehicle;
the attitude measurement module is used for acquiring the real-time flight attitude of the unmanned aerial vehicle;
the onboard computer is used for running the semantic mapping module and the semantic feature matching module and storing the route local semantic map and the route global semantic map;
the data communication module is used for uploading the local semantic map of the route to the cloud server and downloading the global semantic map of the route from the cloud server;
the semantic map building module is used for building a local semantic map of the route according to the real-time route nodding image, the real-time flying height, the real-time flying attitude and the real-time longitude and latitude; and
the semantic feature matching module is used for carrying out semantic feature matching on the route global semantic map and the unmanned aerial vehicle real-time nodding image so as to obtain a plurality of matching similarities.
10. The visual semantic map-based cloud integrated unmanned aerial vehicle route positioning system of claim 8, wherein the cloud server comprises: a fusion updating module, which is used for performing fusion updating processing on the route local semantic map to generate the route global semantic map (a minimal sketch of one possible fusion strategy follows below).
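Claim 10 leaves the fusion strategy open. The sketch below assumes georeferenced label tiles carrying a pixel offset and a timestamp, and applies a simple newest-wins per-cell update so that fresh local maps replace stale ground information in the global map; the tile format is an assumption, not taken from the patent text.

```python
# Sketch of the cloud-side fusion updating module of claim 10.
import numpy as np

class FusionUpdater:
    def __init__(self, height, width):
        self.labels = np.zeros((height, width), dtype=np.uint8)    # global map
        self.stamps = np.zeros((height, width), dtype=np.float64)  # cell ages

    def fuse(self, tile, row0, col0, timestamp):
        """Merge a route local semantic map tile whose top-left corner lands
        at global pixel (row0, col0)."""
        h, w = tile.shape
        labels = self.labels[row0:row0 + h, col0:col0 + w]
        stamps = self.stamps[row0:row0 + h, col0:col0 + w]
        newer = timestamp > stamps           # update only out-of-date cells
        labels[newer] = tile[newer]
        stamps[newer] = timestamp
```

Keeping a per-cell timestamp rather than a per-tile one lets overlapping tiles from different flights each contribute their most recent observations.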
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311083802.9A CN116817892B (en) | 2023-08-28 | 2023-08-28 | Cloud integrated unmanned aerial vehicle route positioning method and system based on visual semantic map |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN116817892A (en) | 2023-09-29 |
| CN116817892B CN116817892B (en) | 2023-12-19 |
Family ID: 88139555
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311083802.9A Active CN116817892B (en) | 2023-08-28 | 2023-08-28 | Cloud integrated unmanned aerial vehicle route positioning method and system based on visual semantic map |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116817892B (en) |
Patent Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200364554A1 (en) * | 2018-02-09 | 2020-11-19 | Baidu Usa Llc | Systems and methods for deep localization and segmentation with a 3d semantic map |
| CN111551167A (en) * | 2020-02-10 | 2020-08-18 | 江苏盖亚环境科技股份有限公司 | Global navigation auxiliary method based on unmanned aerial vehicle shooting and semantic segmentation |
| US20220057213A1 (en) * | 2020-07-31 | 2022-02-24 | Abhay Singhal | Vision-based navigation system |
| CN112258618A (en) * | 2020-11-04 | 2021-01-22 | 中国科学院空天信息创新研究院 | Semantic mapping and localization method based on fusion of prior laser point cloud and depth map |
| CN113313824A (en) * | 2021-04-13 | 2021-08-27 | 中山大学 | Three-dimensional semantic map construction method |
| WO2023056698A1 (en) * | 2021-10-09 | 2023-04-13 | 广东汇天航空航天科技有限公司 | Positioning navigation method and system of aircraft, and computing device |
| CN114216454A (en) * | 2021-10-27 | 2022-03-22 | 湖北航天飞行器研究所 | Unmanned aerial vehicle autonomous navigation positioning method based on heterogeneous image matching in GPS rejection environment |
| CN114964236A (en) * | 2022-05-25 | 2022-08-30 | 重庆长安汽车股份有限公司 | Mapping and vehicle positioning system and method for underground parking lot environment |
| CN116628115A (en) * | 2023-03-24 | 2023-08-22 | 四川腾盾科技有限公司 | Semantic map database and semantic segmentation map generation method applied to unmanned aerial vehicle |
Non-Patent Citations (2)
| Title |
|---|
| 孙曼晖; 杨绍武; 易晓东; 刘衡竹: "Autonomous navigation of robots in large-scale environments based on GIS and SLAM" (基于GIS和SLAM的机器人大范围环境自主导航), Chinese Journal of Scientific Instrument (仪器仪表学报), no. 03 * |
| 陈国军; 陈巍; 郁汉琪; 王涵立: "Research on autonomous navigation methods for mobile robots based on the semantic ORB-SLAM2 algorithm" (基于语义ORB-SLAM2算法的移动机器人自主导航方法研究), Machine Tool & Hydraulics (机床与液压), no. 09 * |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119810193A (en) * | 2024-12-16 | 2025-04-11 | 西安工业大学 | A UAV visual positioning method based on a Siamese network |
| CN120120994A (en) * | 2025-05-14 | 2025-06-10 | 山东黄河建工有限公司 | A land area measurement method and system for spatial planning |
Also Published As
| Publication number | Publication date |
|---|---|
| CN116817892B (en) | 2023-12-19 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Xing et al. | | Multi-UAV cooperative system for search and rescue based on YOLOv5 |
| Sotnikov et al. | | Methods for ensuring the accuracy of radiometric and optoelectronic navigation systems of flying robots in a developed infrastructure |
| Carlevaris-Bianco et al. | | University of Michigan North Campus long-term vision and lidar dataset |
| Ma et al. | | Using unmanned aerial vehicle for remote sensing application |
| Li et al. | | MARS-LVIG dataset: A multi-sensor aerial robots SLAM dataset for LiDAR-visual-inertial-GNSS fusion |
| Cappelle et al. | | Virtual 3D city model for navigation in urban areas |
| US7630797B2 | | Accuracy enhancing system for geospatial collection value of an image sensor aboard an airborne platform and associated methods |
| CN116817892B (en) | 2023-12-19 | Cloud integrated unmanned aerial vehicle route positioning method and system based on visual semantic map |
| KR102557775B1 | | 3D mapping method using a drone |
| US7603208B2 | | Geospatial image change detecting system with environmental enhancement and associated methods |
| CN112381935A | | Synthetic vision generation and multi-element fusion device |
| US8433457B2 | | Environmental condition detecting system using geospatial images and associated methods |
| Zhang et al. | | Review of the light-weighted and small UAV system for aerial photography and remote sensing |
| Chaudhry et al. | | A comparative study of modern UAV platform for topographic mapping |
| Pathak et al. | | UAV-based topographical mapping and accuracy assessment of orthophoto using GCP |
| Small | | AggieAir: Towards low-cost cooperative multispectral remote sensing using small unmanned aircraft systems |
| CN117036461A | | Unmanned aerial vehicle absolute position rapid positioning method and device based on environment semantic information |
| Liu et al. | | A vision-inertial interaction-based autonomous UAV positioning algorithm |
| Simon et al. | | 3D mapping of a village with a WingtraOne VTOL tailsitter drone using Pix4D Mapper |
| Starek et al. | | Application of unmanned aircraft systems for coastal mapping and resiliency |
| Chen et al. | | Tightly coupled lidar-inertial-GPS environment detection and landing area selection based on powered parafoil UAV |
| Pi et al. | | Deep neural networks for drone view localization and mapping in GPS-denied environments |
| KR20240005607A | | Image matching method for images taken by an unmanned aerial vehicle, and device therefor |
| Sambolek et al. | | Determining the geolocation of a person detected in an image taken with a drone |
| Rodriguez-Galvis et al. | | Development of low-cost ground control system for UAV-based mapping |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |