WO2023030062A1 - Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program
- Publication number: WO2023030062A1 (application PCT/CN2022/113856)
- Authority: WIPO (PCT)
- Prior art keywords: image, processed, flight, feature point, image feature
- Legal status: Ceased (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/08—Control of attitude, i.e. control of roll, pitch, or yaw
- G05D1/10—Simultaneous control of position or course in three dimensions
Definitions
- the present application relates to the field of information technology, and in particular to a flight control method, device, equipment, medium and program of an unmanned aerial vehicle.
- the embodiments of the present application provide a flight control method, apparatus, device, medium and program for an unmanned aerial vehicle. A three-dimensional map is built from the three-dimensional coordinate information, which can efficiently and accurately restore the actual flight environment information and construct a three-dimensional topographic map with height information; at the same time, the flight trajectory is determined based on the three-dimensional map to achieve obstacle-avoiding flight, which reduces the influence of the actual flight environment on the UAV during flight.
- An embodiment of the present application provides a flight control method of an unmanned aerial vehicle, the method comprising: acquiring an image to be processed;
- wherein the picture content of the image to be processed includes information about the flight environment ahead;
- An embodiment of the present application provides a flight control device for an unmanned aerial vehicle, the device comprising:
- the acquisition part is configured to acquire the image to be processed; wherein, the picture content of the image to be processed includes information about the flight environment ahead;
- the first determining part is configured to determine image feature point pairs satisfying preset conditions based on two temporally adjacent frames of images to be processed;
- the second determination part is configured to determine the three-dimensional coordinate information associated with the image feature point pair in the forward flight environment;
- the adjustment part is configured to adjust the map to be adjusted corresponding to the image to be processed based on the three-dimensional coordinate information to obtain a three-dimensional map;
- the third determining part is configured to determine the flight track of the UAV based on the three-dimensional map.
- An embodiment of the present application also provides an electronic device, the electronic device including: a processor, a memory, and a communication bus; wherein the communication bus is used to implement a communication connection between the processor and the memory;
- the processor is used to execute the program in the memory, so as to realize the flight control method of the drone as described above.
- the embodiment of the present application also provides a computer-readable storage medium; the computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to implement the flight control method of the unmanned aerial vehicle as described above.
- the embodiment of the present application also provides a computer program; the computer program includes computer-readable codes, and when the computer-readable codes run in an electronic device, the processor of the electronic device executes the flight control method of the unmanned aerial vehicle described above.
- in the flight control method, device, equipment, medium and program of the unmanned aerial vehicle provided by the embodiments of the present application, firstly, the image to be processed is obtained, the picture content of which includes information about the flight environment ahead; secondly, image feature point pairs satisfying preset conditions are determined based on two temporally adjacent frames of images to be processed, and the three-dimensional coordinate information associated with the image feature point pairs in the forward flight environment is determined; finally, based on the three-dimensional coordinate information, the map to be adjusted corresponding to the image to be processed is adjusted to obtain a three-dimensional map, and the flight trajectory of the UAV is determined based on the three-dimensional map.
- the three-dimensional map is constructed from the three-dimensional coordinate information associated with the image feature point pairs in the two temporally adjacent frames of images to be processed, which can efficiently and accurately restore the actual flight environment information and construct a three-dimensional topographic map with height information;
- the flight trajectory is determined based on the three-dimensional map to achieve obstacle avoidance flight, which can reduce the influence of the actual flight environment on the UAV during flight.
- FIG. 1 is a schematic flow chart of a flight control method for an unmanned aerial vehicle provided in an embodiment of the present application
- FIG. 2 is a schematic flow chart of another unmanned aerial vehicle flight control method provided by the embodiment of the present application.
- FIG. 3 is a schematic flow diagram of another flight control method for a drone provided in an embodiment of the present application.
- FIG. 4 is a schematic flow chart of building a three-dimensional map during a flight provided by an embodiment of the present application
- FIG. 5 is a schematic diagram representing the corresponding relationship between pairs of image feature points provided by the embodiment of the present application.
- FIG. 6 is a schematic structural diagram of a flight control device for a drone provided in an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- references to "an embodiment of the present application” or “the foregoing embodiment” throughout the specification mean that a specific feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, appearances of "in the embodiment of the present application” or “in the foregoing embodiment” throughout the specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
- the serial numbers of the above-mentioned processes do not imply an order of execution; the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application. The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
- when the information processing device executes any step in the embodiments of the present application, it may be the processor of the information processing device that executes the step. It is also worth noting that the embodiment of the present application does not limit the order in which the information processing device executes the following steps. In addition, the methods used to process data in different embodiments may be the same method or different methods. It should also be noted that any step in the embodiments of the present application can be executed independently by the information processing device; that is, when the information processing device executes any step in the following embodiments, it may not depend on the execution of other steps.
- UAV obstacle avoidance: UAVs are gradually replacing humans in various special tasks, such as search, rescue, firefighting and data collection. When completing such tasks, UAVs are often in environments with complex terrain, such as buildings, narrow indoor spaces, and rugged mountains and forests; obstacles in the environment pose a constant collision threat to the UAV and make it difficult to complete the task. A UAV must collect as much environmental information as possible through its limited sensors and storage space in order to detect obstacles in space in time and avoid obstacles on the original route by updating the flight path.
- CV Computer Vision
- CV is the simulation of biological vision using computers and related equipment; its main task is to obtain various kinds of information about the corresponding scene by processing collected pictures or videos.
- the main goal of traditional computer vision systems is to extract features from images, including edge detection, corner detection, and image segmentation. Depending on the type and quality of the input image, different algorithms perform differently.
- VSLAM: vision-based simultaneous localization and mapping
- drones or vehicles use onboard equipment to perceive obstacles in the environment ahead, obtain information such as relative distance and their own angle and attitude, and process this information to update dynamic environment information in real time.
- the UAV autonomously formulates obstacle avoidance routes based on dynamic environmental information and its own flight status.
- the flight control system adjusts the flight speed and direction of the UAV to achieve obstacle avoidance.
- the existing UAV obstacle recognition is usually implemented in the following two ways:
- the first relies mainly on manual identification by the drone operator: through the picture transmitted in real time by the drone, the operator manually flies the drone around obstacles in the space. This places high demands on the operator's skill, accidents may occur due to misoperation, and there is a shortage of manpower when multiple drones must work at the same time.
- the second uses algorithms to enable UAVs to autonomously identify and avoid obstacles. Usually, obstacles in real space are mapped onto a two-dimensional virtual map to realize the UAV's perception of obstacles; this compresses the point clouds of obstacles, including their height information, into two-dimensional space, so the original shape of an obstacle cannot be perceived, and irregularly shaped obstacles lead to modeling errors. Although this method is feasible for avoiding relatively small obstacles in the space, for large-scale obstacles such as forests, mountains and buildings, controlling the UAV to avoid obstacles only on a two-dimensional horizontal plane inevitably increases the distance traveled for obstacle avoidance, which is clearly disadvantageous for the many small UAVs with limited battery capacity.
- the embodiment of the present application provides a flight control method of a drone, which is applied to electronic equipment.
- the method includes the following steps:
- Step 101 acquire an image to be processed.
- the picture content of the image to be processed includes the information of the flight environment ahead.
- the electronic device may be any device with data processing capabilities; wherein, the electronic device may be a data processing device installed inside the drone, or it may be an electronic device capable of information interaction with the drone.
- the device can also be a cloud processor for managing drones, etc.
- the electronic device may receive images to be processed sent by at least one of an image acquisition part and a video acquisition part arranged on the UAV; correspondingly, in the embodiment of the present application, the UAV is provided with at least one of an image acquisition part and a video acquisition part, which may be a monocular camera or a binocular camera.
- the forward flight environment information refers to the forward environment information where the UAV is located during flight.
- the unmanned aerial vehicle may perform flight missions in environments such as mountains, forests, buildings, or indoors.
- the number of images to be processed can be one, or two or more;
- the picture format of the image to be processed can be bitmap (BMP) format, Joint Photographic Experts Group (JPEG) format, Portable Network Graphics (PNG) format, etc.
- Step 102 based on two temporally adjacent frames of images to be processed, determine image feature point pairs satisfying preset conditions.
- the electronic device determines image feature point pairs satisfying preset conditions based on two temporally adjacent frames of images to be processed; wherein, temporally adjacent may mean that the times at which the two frames of images to be processed are respectively collected are adjacent, and may also mean that the times at which the two frames of images to be processed are respectively acquired by the electronic device are adjacent.
- the preset condition can be set in advance; for example, it may refer to the similarity of related attribute information between the two image feature points in an image feature point pair, it may mean that the distance between the two image feature points in the pair is less than or equal to a preset threshold, or it may mean that the two image feature points have the same position information in their respective images to be processed.
- the electronic device first extracts at least one image feature point from each of the two temporally adjacent frames of images to be processed; secondly, it selects any image feature point as the first image feature point and calculates the Hamming distance between it and each image feature point that is not in the same image to be processed as the first image feature point; if the Hamming distance is less than or equal to a preset distance, the first image feature point and the corresponding other image feature point are determined as an image feature point pair.
- the electronic device may describe the image feature point pair based on the position information of the image feature points in the corresponding images to be processed, or based on their feature information.
- the two image feature points in the image feature point pair are obtained by collecting the same space point in the front flight environment by the image acquisition device installed inside the UAV based on adjacent time points. That is to say, each image feature point in the image feature point pair is mapped to the same spatial point in the forward flight environment.
- the number of feature point pairs may be one, two or more in the embodiment of the present application.
- Step 103 in the forward flight environment, determine the three-dimensional coordinate information associated with the image feature point pairs.
- the electronic device determines the three-dimensional coordinate information associated with the image feature point pair in the forward flight environment; wherein, the three-dimensional coordinate information is the coordinate information of the spatial point in the forward flight environment that has a mapping relationship with the image feature point pair.
- the three-dimensional coordinate information may refer to a three-dimensional coordinate parameter of a spatial point having a mapping relationship with the image feature point pair in the world coordinate system.
- the electronic device determines the three-dimensional coordinate information based on the coordinate position of each image feature point of the image feature point pair in the corresponding image to be processed; the coordinate position may be coordinate position information obtained in a camera coordinate system established by the electronic device with the image acquisition device in the drone as reference; then, based on the two coordinate positions, the three-dimensional coordinate information associated with the image feature point pair is determined by a geometric algorithm.
- Step 104 based on the three-dimensional coordinate information, adjust the map to be adjusted corresponding to the image to be processed to obtain a three-dimensional map.
- based on the determined three-dimensional coordinate information, the electronic device adjusts or corrects the map to be adjusted corresponding to the image to be processed to obtain a three-dimensional map with height information; the three-dimensional map is a virtual three-dimensional image corresponding to the forward flight environment.
- the map to be adjusted corresponding to the image to be processed may be a two-dimensional topographical map, or a three-dimensional topographical map.
- when the map to be adjusted corresponding to the image to be processed is a two-dimensional planar map, the electronic device can fuse the determined three-dimensional coordinate information with the two-dimensional coordinate information in the two-dimensional planar map to construct a three-dimensional map with height information.
- when the map to be adjusted corresponding to the image to be processed is a three-dimensional map, the electronic device may adjust or correct the coordinate information in that map based on the determined three-dimensional coordinate information to obtain an updated three-dimensional map.
- the map to be adjusted corresponding to the image to be processed may be a two-dimensional planar map that the electronic device generates for each frame of the image to be processed after acquiring it; it may also be a preset three-dimensional stereoscopic image corresponding to multiple frames of images to be processed, which the electronic device determines from the acquired frames based on a correlation algorithm.
- Step 105 based on the three-dimensional map, determine the flight track of the drone.
- the electronic device determines the flight trajectory of the drone based on the obtained three-dimensional map; wherein, the flight trajectory may refer to an actual obstacle avoidance path in the forward flight environment.
- the electronic device obtains the obstacle information existing in the flight environment ahead, and then determines the trajectory along which the UAV can avoid obstacles during flight, that is, the UAV's flight trajectory.
- the image feature point pairs, the three-dimensional coordinate information associated with them in the forward flight environment, and the three-dimensional map are determined in sequence from the collected images to be processed; in this way, the state of obstacles in the forward flight environment can be restored more accurately, giving a more accurate three-dimensional map with height information, which in turn enables the electronic device to give a more accurate obstacle avoidance path for the drone based on the determined three-dimensional map, that is, to ensure as far as possible that the flight path of the UAV is not affected by obstacles in the flight environment ahead.
- in the flight control method of the unmanned aerial vehicle provided by the embodiment of the present application, firstly, the image to be processed is obtained, the picture content of which includes information about the flight environment ahead; secondly, based on two temporally adjacent frames of images to be processed, image feature point pairs satisfying preset conditions are determined, and the three-dimensional coordinate information associated with the image feature point pairs in the forward flight environment is determined; finally, based on the three-dimensional coordinate information, the map to be adjusted corresponding to the image to be processed is adjusted to obtain a three-dimensional map, and the flight trajectory of the drone is determined based on the three-dimensional map.
- the three-dimensional map is constructed from the three-dimensional coordinate information associated with the image feature point pairs in the two temporally adjacent frames of images to be processed, which can efficiently and accurately restore the actual flight environment information and construct a three-dimensional topographic map with height information;
- the flight trajectory is determined based on the three-dimensional map to achieve obstacle avoidance flight, which can reduce the influence of the actual flight environment on the UAV during flight.
- the embodiment of the present application also provides a flight control method of a drone, which is applied to electronic equipment, as shown in Figure 1 and Figure 2, the method includes the following steps:
- Step 201 collecting information about the flight environment ahead to obtain a preset image.
- the electronic device collects the forward flight environment information to obtain a preset image, that is, the picture content collected by the electronic device includes the forward flight environment information; the preset image may be obtained by an electronic device that is arranged on the drone and includes at least an image collection part, by collecting the flight environment information in front of the UAV during flight.
- the preset image may be an image directly collected by the electronic device during the flight of the drone without any data processing; correspondingly, the number of preset images and the frequency of collection are not restricted in any way in this embodiment.
- Step 202 Adjust the image contrast of the preset image to obtain an image to be processed.
- the electronic device adjusts the image contrast of the preset image to obtain the image to be processed; wherein, the contrast adjustment may be correcting or optimizing the pixel values of the preset image, or enhancing the image contrast of the preset image.
- image contrast adjustment refers to image contrast enhancement.
- the electronic device may enhance the image contrast of the preset image directly, or through an indirect method; the image contrast may be enhanced based on at least one of histogram stretching and histogram equalization, the specific implementation of which is not described in detail in the embodiments of this application.
- the image to be processed is obtained by enhancing the image contrast of the acquired preset image; in this way, the feature information in the image to be processed becomes more prominent and the intensity gradient of pixel values at key points increases, so that more prominent image feature points can be extracted from the image to be processed later.
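As an illustration of this contrast-enhancement step, the following is a minimal sketch in Python, assuming OpenCV as the image library; the function name and parameter values are illustrative, since the application does not prescribe a concrete implementation:

```python
import cv2

def enhance_contrast(preset_image_bgr, use_clahe=True):
    """Enhance the contrast of a preset image before feature extraction.

    Hedged sketch: the application only requires contrast enhancement,
    e.g. via histogram stretching or histogram equalization; OpenCV's
    equalizeHist/CLAHE are one possible realization, not mandated here.
    """
    gray = cv2.cvtColor(preset_image_bgr, cv2.COLOR_BGR2GRAY)
    if use_clahe:
        # Contrast-limited adaptive equalization copes better with weak
        # light and natural shadows than a single global equalization.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(gray)
    return cv2.equalizeHist(gray)  # plain global histogram equalization
```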
- the electronic device determines image feature point pairs that meet the preset conditions based on the two temporally adjacent frames of images to be processed, that is, the electronic device executes step 102 provided in the above embodiment, which can be implemented through the following steps 203 to 205:
- Step 203 Determine at least one image feature point of each frame of the image to be processed in the two frames of images to be processed that are temporally adjacent.
- the electronic device determines at least one image feature point of each frame of the image to be processed in the two temporally adjacent frames of images to be processed; wherein, the number of image feature points and the corresponding parameter information may be the same or different for different images to be processed.
- the electronic device determines at least one image feature point of each frame of the image to be processed in the two temporally adjacent frames of images to be processed, that is, the electronic device executes the above step 203, which can be implemented through the following steps 203a and 203b:
- Step 203a Perform image downsampling on each frame of the image to be processed according to the image resolution gradient, and generate an image pyramid corresponding to each frame of the image to be processed.
- the electronic device performs image down-sampling on each frame of the image to be processed among the two adjacent frames of the image to be processed in time sequence according to the image resolution gradient, and obtains an image pyramid corresponding to each frame of the image to be processed.
- the image pyramid is a kind of multi-scale representation of an image: an effective but conceptually simple structure for interpreting an image at multiple resolutions;
- the image pyramid of an image is a series of progressively reduced images arranged in a pyramid shape (bottom-up), all derived from the same original image at a set of decreasing resolutions; it is obtained by stepwise downsampling, which stops when a certain termination condition is reached;
- the layered images are compared to a pyramid: the higher the level, the smaller the image and the lower the resolution.
- Step 203b performing feature extraction on images at each level in the image pyramid corresponding to each frame of image to be processed, to obtain at least one image feature point of each frame of image to be processed.
- the electronic device performs a feature extraction operation, that is, it performs feature extraction on the images at each level of the image pyramid corresponding to each of the two temporally adjacent frames of images to be processed, and obtains at least one image feature point of each frame of the image to be processed.
- the electronic device may use a relevant neural network to perform feature extraction on images at each level; it may also perform feature extraction on images at each level based on a target detection algorithm.
- the electronic device can perform feature extraction from images at each level in the image pyramid corresponding to each image to be processed based on the fast feature point extraction and description algorithm (Oriented FAST and Rotated BRIEF, ORB).
- the electronic device fuses image feature points corresponding to images at each level in the image pyramid of the image to be processed, and combines them to obtain a set of image feature points of each image to be processed.
- the electronic device performs feature extraction on the image at each level in the image pyramid of each frame of the image to be processed, and obtains the image feature points of each frame; thus, the collected real-time image undergoes image-enhancement preprocessing and multi-layer downsampling, and feature points are then extracted from each layer of the sampled image. This improves the quantity and quality of feature point extraction, is strongly robust in complex flight environments, and improves the UAV's ability to identify obstacles, as sketched below.
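A minimal sketch of steps 203a and 203b, assuming OpenCV; the per-level feature budget and the scale factor are assumptions (the application elsewhere mentions an 8-layer pyramid). Note that cv2.ORB_create also supports an internal pyramid via its nlevels/scaleFactor options; the explicit loop below simply mirrors the two steps described here:

```python
import cv2
import numpy as np

def pyramid_orb_features(image_gray, num_levels=8, scale=1.2):
    """Build a downsampled image pyramid and extract ORB features per level."""
    orb = cv2.ORB_create(nfeatures=500)  # per-level budget (assumed value)
    keypoints, descriptor_blocks = [], []
    level_img = image_gray
    for level in range(num_levels):
        kps, desc = orb.detectAndCompute(level_img, None)
        if desc is not None:
            s = scale ** level
            for kp in kps:
                # Map keypoint coordinates back to the base-image frame.
                kp.pt = (kp.pt[0] * s, kp.pt[1] * s)
            keypoints.extend(kps)
            descriptor_blocks.append(desc)
        # Downsample for the next (smaller, lower-resolution) level.
        level_img = cv2.resize(level_img, None, fx=1.0 / scale, fy=1.0 / scale)
    descriptors = np.vstack(descriptor_blocks) if descriptor_blocks else None
    return keypoints, descriptors
```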
- Step 204 Determine the binary parameter corresponding to at least one image feature point.
- the electronic device may use ORB to describe at least one image feature point of each obtained image to be processed, that is, describe each image feature point with a binary string.
- Step 205 among the image feature points of each frame of the image to be processed in the two frames of images to be processed adjacent in time sequence, based on the binary parameter corresponding to at least one image feature point, determine the image feature point pair.
- the electronic device performs feature matching on the image feature points based on the binary parameters corresponding to each image feature point, among the image feature points of each frame of the image to be processed in the two temporally adjacent frames of images to be processed, to obtain image feature point pairs.
- the corresponding image feature points are obtained by performing feature extraction on each frame of the image to be processed, and the points are then matched to obtain image feature point pairs; in this way, image feature point pairs can be determined efficiently and accurately, which in turn improves the accuracy of the drone's real-time perception of the flight environment ahead during flight.
- the electronic device determines the image feature point pairs based on the binary parameters corresponding to the image feature points, among at least one image feature point of each of the two temporally adjacent frames of images to be processed; that is, the electronic device executes step 205, which may be implemented by the following steps 205a and 205b:
- Step 205a based on the binary parameters corresponding to the image feature points, determine the Hamming distance between two image feature points located in the two temporally adjacent frames of images to be processed.
- the electronic device calculates the Hamming distance between binary parameters corresponding to two image feature points in different images to be processed in two consecutive frames of images to be processed in time sequence.
- the Hamming distance is used in error-control coding for data transmission;
- the Hamming distance represents the number of bit positions at which two (same-length) words differ; d(x, y) denotes the Hamming distance between two words x and y. Performing an XOR operation on the two strings and counting the number of 1s gives the Hamming distance.
- Step 205b if the Hamming distance is less than a preset threshold, determine two image feature points as an image feature point pair.
- when the Hamming distance is less than the preset threshold, the corresponding two image feature points are considered approximately similar, that is, a matching feature point pair, and the two image feature points then form an image feature point pair.
- the number of image feature point pairs existing in two consecutive frames of images to be processed in time sequence may be one, two or more, which is not limited in this embodiment of the present application.
- the matching relationship between two image feature points is determined by calculating the Hamming distance between two image feature points located in different, temporally adjacent frames of images to be processed; in this way, image feature point pairs can be determined efficiently and accurately.
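A sketch of steps 205a and 205b using OpenCV's brute-force Hamming matcher; the threshold of 50 bits is an assumed value for the preset threshold, which the application leaves open:

```python
import cv2

def match_feature_pairs(desc_prev, desc_curr, max_hamming=50):
    """Match binary ORB descriptors between two temporally adjacent frames."""
    # NORM_HAMMING counts differing bits, i.e. the XOR popcount described above.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_prev, desc_curr)
    # Keep only pairs whose Hamming distance is below the preset threshold.
    return [m for m in matches if m.distance < max_hamming]
```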
- the flight control method of the UAV provided by the embodiment of the present application can efficiently and accurately obtain image feature point pairs by preprocessing the image, extracting image feature points based on the image pyramid, and determining image feature point pairs based on feature matching;
- a three-dimensional map is constructed from the three-dimensional coordinate information associated with the image feature point pairs in the two temporally adjacent images to be processed, which can efficiently and accurately restore the actual flight environment information and construct a three-dimensional topographic map with height information;
- Determining the flight trajectory based on the three-dimensional map to achieve obstacle avoidance flight can reduce the influence of the actual flight environment on the flight process of the UAV.
- the embodiment of the present application also provides a flight control method of a drone, which is applied to electronic equipment, as shown in Figure 1 and Figure 3, the method includes the following steps:
- Step 301 Obtain the two-dimensional coordinate information of each image feature point in the image feature point pair in the corresponding image to be processed.
- the electronic device acquires the two-dimensional coordinate information of each image feature point of the image feature point pair in the corresponding image to be processed; wherein, the two-dimensional coordinate information may be the corresponding coordinate parameters in a camera coordinate system established with the image acquisition part on the drone as the reference object.
- the two-dimensional coordinate information of each image feature point of the image feature point pair in the corresponding image to be processed may be identical or different; correspondingly, in the embodiment of the present application, for an image feature point pair the electronic device marks the first coordinate of image feature point A in the first image to be processed as (x1, y1), and the second coordinate of image feature point B in the second image to be processed as (x2, y2); wherein, the first image to be processed and the second image to be processed are temporally adjacent, and their temporal order is not limited in this embodiment of the present application.
- Step 302 based on the two-dimensional coordinate information, determine the spatial position relationship between two image feature points in the image feature point pair.
- based on the first coordinates of the first image feature point and the second coordinates of the second image feature point, and according to the epipolar geometric relationship between the two temporally adjacent frames of images to be processed, that is, between the first image to be processed and the second image to be processed, the electronic device calculates an essential matrix or a fundamental matrix that characterizes the spatial position relationship between the two image feature points.
- the electronic device can jointly determine the spatial position relationship between two image feature points in the image feature point pair based on the acquisition parameters of the image acquisition part set on the drone and the two-dimensional coordinate information.
- Step 303 based on the spatial position relationship and the two-dimensional coordinate information, determine the three-dimensional coordinate information in the forward flight environment.
- the electronic device determines relevant three-dimensional coordinate information based on the spatial position relationship, that is, based on the essential matrix or fundamental matrix representing the spatial position relationship between two image feature points, and the two-dimensional coordinate information.
- the corresponding spatial position relationship is determined, and then the corresponding three-dimensional coordinate information is determined; in this way, three-dimensional coordinate points in the actual flight environment can be determined efficiently and accurately, which further enables the electronic device to construct a more accurate topographic map with height information based on the three-dimensional coordinate information at a later stage.
- the electronic device determines the three-dimensional coordinate information in the forward flight environment based on the spatial position relationship and the two-dimensional coordinate information, that is, the electronic device executes step 303, which can be achieved by the following steps 303a and 303b:
- Step 303a analyzing the spatial position relationship to obtain the rotation matrix parameters and translation matrix parameters representing the flight change parameters.
- the electronic device analyzes the spatial position relationship, which may be decomposing the essential matrix parameters or fundamental matrix parameters representing the spatial position relationship to obtain the rotation matrix parameters and translation matrix parameters representing the flight change parameters.
- Step 303b based on the rotation matrix parameters, the translation matrix parameters and the two-dimensional coordinate information, determine the three-dimensional coordinate information in the forward flight environment.
- based on the rotation matrix parameters, the translation matrix parameters and the two-dimensional coordinate information, the electronic device determines, through geometric operations, the three-dimensional coordinate information of the spatial point associated with the image feature point pair in the forward flight environment.
- the geometric operation may use the coincidence relationship, in the camera coordinate system, between the ray formed from at least one of the first image feature point and the second image feature point toward the point it maps to in the forward flight environment and the corresponding spatial ray, to determine the three-dimensional coordinate parameters of the spatial point associated with the image feature point pair in the forward flight environment.
- the electronic device determines, from the relevant geometric calculation and the two-dimensional coordinate information of the image feature point pair, the three-dimensional coordinate information of the spatial point associated with the pair in the forward flight environment; thus, the coordinate parameters of spatial points with height information can be determined efficiently and accurately, enabling the electronic device to construct a three-dimensional topographic map based on these coordinate parameters at a later stage.
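The following sketch, assuming OpenCV, strings steps 302 to 303b together: estimate the essential matrix from matched pixel coordinates, decompose it into R and t, and triangulate the 3D points. In a monocular setup the result is defined only up to scale:

```python
import cv2
import numpy as np

def recover_structure(pts1, pts2, K):
    """pts1, pts2: Nx2 float pixel coordinates of matched feature points in
    two temporally adjacent frames; K: 3x3 camera intrinsic matrix."""
    # Essential matrix characterizing the spatial relation between the views.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    # Decompose E into rotation R and translation t (the flight change).
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Projection matrices, placing the first camera at the origin.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    # Triangulate: intersect the rays through corresponding image points.
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return R, t, (pts4d[:3] / pts4d[3]).T  # Nx3 points, up to scale
```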
- the electronic device adjusts the map to be adjusted corresponding to the image to be processed based on the three-dimensional coordinate information, that is, it executes step 104, which can be realized through the following steps 304 to 307:
- Step 304 acquiring the initial position information and initial flight attitude parameters of the UAV flight.
- the electronic device acquires and determines the initial position information and initial flight attitude parameters of the UAV during flight; wherein, the initial position parameters can be represented by three-dimensional coordinate parameters in world coordinates, and the initial flight attitude parameter can be the angle difference relative to the UAV's flight origin, etc.
- Step 305 Determine the distance between the initial position information and the three-dimensional coordinate information.
- the electronic device calculates and determines the distance difference between the three-dimensional coordinate information and the initial position information.
- each image feature point pair corresponds to a space point and the three-dimensional coordinate information of the space point in the forward flight environment, wherein the distance between different three-dimensional coordinate information and the initial position information is different.
- the distance may be a distance difference in any direction of the x-axis, y-axis and z-axis.
- Step 306 based on the distance, initial position information, and initial flight attitude parameters, construct coordinate vector parameters with preset dimensions that match the three-dimensional coordinate information.
- the electronic device can calculate the reciprocal of the distance, and fuse the reciprocal of the distance, the initial position information and the initial flight attitude parameters to generate a coordinate vector parameter of a preset dimension that matches the three-dimensional coordinate information; wherein, the preset dimension can be six.
- the electronic device may use inverse depth parameterization to perform rapid depth convergence on the extracted three-dimensional coordinate information, so as to improve calculation efficiency.
- Step 307 based on the coordinate vector parameters, adjust the coordinates to be adjusted in the map to be adjusted to obtain a three-dimensional map.
- the electronic device adjusts the coordinate information to be adjusted in the map to be adjusted, that is, the coordinate information of actual spatial points in the flight environment ahead, based on the extracted coordinate vector parameters, to obtain a three-dimensional map; wherein, the coordinates to be adjusted can be two-dimensional coordinates or three-dimensional coordinates.
- the convergence speed and calculation efficiency of the extended Kalman filter for the depth calculation of the feature points can be improved, and a small UAV can also quickly update obstacle depth information when moving at high speed; at the same time, inverse depth parameterization enables the algorithm to handle long-distance features, including feature points so far away that their parallax is very small during the UAV's movement, thereby enhancing obstacle perception efficiency. A sketch of the parameterization follows.
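A sketch of the inverse-depth representation under one common convention (azimuth/elevation angles defining a unit ray from the anchor); the exact axis convention is an assumption, not fixed by the source:

```python
import numpy as np

def inverse_depth_to_point(anchor, theta, phi, rho):
    """Convert a six-dimensional inverse-depth feature back to Cartesian.

    anchor: [x_a, y_a, z_a] anchor position; theta: azimuth; phi: elevation;
    rho: reciprocal of the anchor-to-feature distance. Small rho means a
    far-away feature, whose uncertainty stays close to Gaussian.
    """
    m = np.array([np.cos(phi) * np.cos(theta),   # unit ray toward the feature
                  np.cos(phi) * np.sin(theta),
                  np.sin(phi)])
    return np.asarray(anchor, dtype=float) + (1.0 / rho) * m
```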
- the electronic device adjusts the coordinates to be adjusted in the map to be adjusted based on the coordinate vector parameters to obtain a three-dimensional map, that is, the electronic device executes step 307, which can be achieved through the following steps 307a to 307c:
- Step 307a Based on the coordinate vector parameters, an updated covariance matrix is constructed.
- the electronic device determines parameters for the correction matrix based on the coordinate vector parameters, that is, constructs an updated covariance matrix; wherein, the extended Kalman filter may be used to update the covariance matrix.
- Step 307b Based on the updated covariance matrix, adjust the coordinates of the map to be adjusted to obtain corrected three-dimensional coordinate information.
- the electronic device corrects the coordinate parameters associated with the three-dimensional coordinate parameters in the map to be adjusted based on the updated covariance matrix, to obtain the corrected three-dimensional coordinate information; the height information of the coordinates can be increased or decreased, and missing height information can also be filled in.
- Step 307c constructing a three-dimensional map based on the corrected three-dimensional coordinate information.
- the electronic device constructs and generates a three-dimensional map that matches the forward flight environment information based on the corrected three-dimensional coordinate information.
- the electronic device obtains a three-dimensional map with height information by optimizing the map to be adjusted associated with the image to be processed; in this way, the state of obstacles can be restored more accurately, ensuring as far as possible that the flight path of the UAV is not affected.
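Steps 307a to 307c amount to a standard extended-Kalman-filter measurement update; a generic sketch follows, with the concrete state layout and measurement model left open, as they are in the application:

```python
import numpy as np

def ekf_update(x, P, z, h_x, H, R):
    """One EKF measurement update over the map state.

    x: state vector (e.g. pose plus inverse-depth features); P: covariance;
    z: measurement; h_x: predicted measurement h(x); H: Jacobian of h at x;
    R: measurement noise covariance.
    """
    y = z - h_x                               # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ y                         # corrected state (step 307b)
    P_new = (np.eye(len(x)) - K @ H) @ P      # updated covariance (step 307a)
    return x_new, P_new
```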
- the flight control method of the UAV provided in the embodiment of the present application, based on the image feature point pairs in the two temporally adjacent images to be processed, determines through geometric operations the three-dimensional coordinate information of the spatial points associated with the image feature point pairs in the forward flight environment, and then optimizes the initial map based on the three-dimensional coordinate information; in this way, the actual flight environment information can be restored efficiently and accurately, and a three-dimensional topographic map with height information can be constructed; at the same time, the flight trajectory is determined based on the three-dimensional map to achieve obstacle-avoiding flight, which reduces the impact of the actual flight environment on the UAV during flight.
- the electronic device determines the flight trajectory of the drone based on the three-dimensional map, which can be achieved by the following steps A1 and A2:
- Step A1 Determine the avoidance route based on the three-dimensional map.
- the electronic device senses information of obstacles in the flying environment ahead in advance, and then determines an avoidance route to circumvent the obstacles.
- Step A2 based on the avoidance route, determine the flight trajectory of the UAV.
- the electronic device determines the flight track of the drone based on the avoidance route.
- based on the three-dimensional map, the electronic device knows the obstacles in the flight environment ahead and then determines the corresponding avoidance route to perform the relevant flight tasks, reducing the impact of obstacles on the flight.
- Step 1 The UAV executes the flight mission, that is, starts to operate, which corresponds to 401 in FIG. 4 .
- Step 2 collecting real-time images, which corresponds to 402 in FIG. 4 ; wherein, a monocular camera may be used to collect real-time images.
- Step 3 Enhance the collected real-time image to highlight the feature information in the image, which corresponds to 403 in Figure 4. The UAV obstacle avoidance algorithms in the related art collect real-time images in an ideal environment and perform obstacle recognition and space perception, but in actual UAV application scenarios there are often many visual disturbances in the environment, such as weak light, natural shadows and haze; such disturbances may have a large impact on machine vision images and directly lead to errors or insufficiency in feature extraction of the spatial environment, so the problem of blurred image feature information in some special environments needs to be addressed.
- the collected real-time image is nonlinearly shifted and the pixel values in the image are redistributed to ensure that the number of all pixel values of the real-time image within a certain gray scale range is roughly equal.
- increase the contrast of pixel values in the middle peak part of the image, reduce the contrast of the valley parts on both sides, and output the flattened histogram corresponding to the image.
- the feature information in the real-time image can thereby be highlighted, the intensity gradient of key pixels increases, and more prominent feature points can be extracted when performing feature extraction on the real-time image.
- Step 4 Downsample the image according to the resolution gradient, construct an image pyramid, extract ORB feature points from each layer of the image pyramid, and perform feature point matching between image frames, which corresponds to 404 in FIG. 4, feature point extraction and feature point matching.
- the embodiment of the present application can downsample the image to be processed based on the image resolution to form an 8-layer image pyramid and extract ORB feature points at each level of the image pyramid; the number of feature points in each grid cell is checked, and if it is insufficient, the corner calculation threshold is adjusted until at least 5 feature points can be extracted from the cell; extracting 5 feature points per grid cell gives a better feature description effect.
- the ORB feature algorithm may be used as the feature extraction and description algorithm of the image frame.
- FAST corners are used to detect feature points with intensity differences in the image, and then the BRIEF (Binary Robust Independent Elementary Features) descriptor algorithm is used to calculate the descriptors of the feature points.
- for a candidate pixel p, 16 pixels are taken on a circle of radius 3 around it; if there are n consecutive pixels on the circle whose absolute gray difference from pixel p is greater than a threshold t, pixel p is selected as a candidate corner for screening. If the final calculation shows that 10 or more pixels on the circle satisfy the condition, the point is considered a FAST corner, as in the sketch below.
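A didactic sketch of the FAST segment test just described (not the optimized OpenCV detector); the circle offsets are the standard 16-point circle of radius 3, and the pixel (x, y) is assumed to lie at least 3 pixels from the image border:

```python
import numpy as np

# Offsets of the 16 pixels on a circle of radius 3 around the candidate p.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=10):
    """p is a corner if >= n contiguous circle pixels differ from it by > t."""
    p = int(img[y, x])
    # 1 = brighter than p by more than t, -1 = darker, 0 = similar.
    flags = []
    for dx, dy in CIRCLE:
        q = int(img[y + dy, x + dx])
        flags.append(1 if q - p > t else (-1 if p - q > t else 0))
    # Look for n contiguous equal non-zero flags; doubling handles wrap-around.
    run, prev = 0, 0
    for f in flags + flags:
        run = run + 1 if (f != 0 and f == prev) else (1 if f != 0 else 0)
        prev = f
        if run >= n:
            return True
    return False
```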
- the ORB algorithm describes the feature points by using the improved BRIEF algorithm.
- Gaussian filtering is used to remove noise from the image and an integral image is used for smoothing; then a window of preset size S × S is taken with the image feature point at its center, two pixels x and y are randomly selected from the window as a point pair, their pixel values are compared, and a binary value is assigned.
- the most notable features of the ORB algorithm are its fast computation speed and good scale and rotation invariance, mainly due to the extremely high speed of the FAST corner detection algorithm; the unique binary string representation of the BRIEF algorithm not only saves storage space but also greatly shortens matching time.
- the use of the ORB feature algorithm saves a lot of computing space for the entire obstacle avoidance algorithm.
- the ORB algorithm is more robust than other feature point algorithms, and can continuously extract stable features. All feature points in the image will be used for feature matching in subsequent frames.
- feature matching between images ensures that the UAV can continuously perceive the surrounding environment in real time during flight, and if an unknown obstacle appears in the flight path, it can be detected in time and its position accurately located. That is, after feature point extraction is completed, the feature points in the image are described in the form of binary strings, and feature matching between image frames can then be completed according to the described feature information. The main idea of this part is to traverse all the map points in the previous image frame, project them all into the current frame, and then find the feature point in the current frame with the closest descriptor distance as the matching point, as sketched below.
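A sketch of this projection-based matching, assuming OpenCV; the search radius and Hamming threshold are illustrative values, and map_pts/map_desc are hypothetical names for the stored 3D map points and their descriptors:

```python
import cv2
import numpy as np

def match_by_projection(map_pts, map_desc, curr_kps, curr_desc, R, t, K,
                        radius=15.0, max_hamming=50):
    """Project 3D map points into the current frame and match each to the
    keypoint with the closest descriptor inside a small search window."""
    rvec = cv2.Rodrigues(R)[0]
    proj, _ = cv2.projectPoints(map_pts, rvec, t, K, None)
    matches = []
    for i, (u, v) in enumerate(proj.reshape(-1, 2)):
        best_j, best_d = -1, max_hamming
        for j, kp in enumerate(curr_kps):
            # Restrict the search to a window around the projected position.
            if abs(kp.pt[0] - u) < radius and abs(kp.pt[1] - v) < radius:
                d = cv2.norm(map_desc[i], curr_desc[j], cv2.NORM_HAMMING)
                if d < best_d:
                    best_j, best_d = j, d
        if best_j >= 0:
            matches.append((i, best_j))
    return matches
```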
- Step 5 Restoring the depth information of the relevant feature points in the actual space through geometric calculation, that is, corresponding to 405 in FIG. 4 , calculating the depth of the feature points.
- FIG. 5 it is a schematic diagram representing a corresponding relationship between pairs of image feature points provided by the embodiment of the present application.
- p1 and p2 are located in image frame I1 and image frame I2 respectively, and form a pair of feature points; p1 and p2 are the projections of the spatial point P onto image frame I1 and image frame I2.
- the plane formed by point P and the camera optical centers O1 and O2 is called the epipolar plane.
- the intersection points e1 and e2 of the line O1O2 with I1 and I2 respectively are called the epipoles.
- point P may lie at any position on the ray O1p1, that is, p2 may lie anywhere on the corresponding epipolar line in I2; the coordinates of the spatial point P can therefore be determined by finding the exact position of p2 in image frame I2 through feature matching.
- the epipolar constraint is satisfied, as shown in formula (1):
- K is the camera intrinsic matrix; the constraint can also be converted into formula (2) and formula (3);
- E is the essential matrix
- F is the fundamental matrix
- epipolar constraint can be simplified as formula (4):
- the problem of camera movement and pose change can thus be transformed into: calculating the matrix E or F from the pixel coordinates of the paired feature points, and then recovering the rotation matrix R and translation matrix t from the calculated E or F.
- the three-dimensional coordinates of the point P in space are determined by using the coincidence relationship between the two-dimensional coordinate point ray in the image and the space point ray under the camera coordinate system, and the calculation formula is shown in (5):
- x represents p1 and p2;
- X represents the three-dimensional coordinates of the spatial point P in the world coordinate system.
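Since the equation images for formulas (1) to (5) are not reproduced in this text, the following is a reconstruction in their standard textbook form, consistent with the symbol definitions above (p_i homogeneous pixel coordinates, x_i normalized camera coordinates, t^∧ the skew-symmetric matrix of t, s a scale factor); it should be read as a hedged reconstruction rather than the application's literal equations:

```latex
p_2^{\top} K^{-\top} t^{\wedge} R K^{-1} p_1 = 0 \quad (1)
E = t^{\wedge} R \quad (2)
F = K^{-\top} E K^{-1} \quad (3)
x_2^{\top} E x_1 = 0 \quad (4)
s \, x = K \, [\, R \mid t \,] \, X \quad (5)
```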
- Step 6 Perform inverse depth parameterization on the depth information of the feature points, and optimize the spatial point cloud by using the extended Kalman filter, which corresponds to 406 in FIG. 4 , optimizing the depth information.
- inverse depth parameterization combined with the extended Kalman filter can be used to optimize the camera pose data, that is, to optimize the three-dimensional coordinate information of the spatial point P. Because small UAVs fly fast in narrow spaces, the calculation efficiency of the obstacle perception algorithm is critical; in the embodiment of this application, the pose data stored in the database can be used to continuously optimize and correct the existing UAV pose and the coordinates of three-dimensional spatial points.
- the vision-based perception method uses the extended Kalman filter to optimize the coordinates of the feature points in the image in the environment space to minimize the accumulated errors during the flight.
- the embodiment of this application uses the inverse depth parameterization method to perform rapid depth convergence on the extracted feature points; inverse depth parameterization converges faster than Cartesian parameterization, since the uncertainty in the inverse depth is closer to a Gaussian distribution than that of the standard depth.
- the feature points stored in the database are represented by a six-dimensional vector, jointly defined by the Cartesian coordinates of an anchor point [x_a, y_a, z_a]^T, the azimuth angle θ, the elevation angle φ, and the reciprocal ρ of the distance from the feature point P to the anchor point, where the anchor point is the spatial position of the drone when the database is initialized.
- R is the rotation matrix from the spatial coordinate system to the camera coordinates.
- the system executes the mapping algorithm according to the image sequence. Each feature point is regarded as an independent measurement data, and the correlation between the measured value and the real value is ignored.
- Step 7 Establish a topographic map with height information according to the depth information of feature points in space, which corresponds to 408 in Figure 4; using the three-dimensional coordinates of the converged points in space, a terrain grid represented by height can be generated. The height information for a specific position is updated from the point coordinates in the database, and the height of that location in the grid is raised or lowered when a newly converged point is received, as in the sketch below.
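A minimal sketch of the height-grid update; grid extent, cell size and the update rule are assumptions (the source only says heights are raised or lowered as points converge):

```python
import numpy as np

class HeightGrid:
    """Terrain grid storing one height value per cell."""

    def __init__(self, size_m=200.0, cell_m=1.0):
        n = int(size_m / cell_m)
        self.cell_m = cell_m
        self.height = np.full((n, n), -np.inf)  # -inf marks unknown cells

    def update(self, point_xyz):
        """Raise or lower a cell's height from a newly converged 3D point."""
        i = int(point_xyz[0] / self.cell_m)
        j = int(point_xyz[1] / self.cell_m)
        if 0 <= i < self.height.shape[0] and 0 <= j < self.height.shape[1]:
            # Overwrite with the latest converged estimate (one simple rule;
            # keeping the maximum would be a more conservative alternative).
            self.height[i, j] = point_xyz[2]
```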
- Step 8 the UAV performs obstacle avoidance flight according to the constructed three-dimensional terrain map.
- the grid terrain map generated by the filter can be used for UAV obstacle avoidance.
- the obstacle avoidance algorithm decides the next action by considering the grid heights of the topographic map in the direction of the UAV's horizontal velocity vector: first, the altitude of the drone over the specified grid cells is compared with the minimum required height of those cells. If this minimum altitude would obstruct the UAV's original trajectory, the UAV performs a smooth pull-up maneuver by itself. In a similar way, the algorithm enables the drone to quickly return to the desired altitude after passing an obstacle.
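A sketch of that look-ahead check; the clearance margin and look-ahead distance are assumed values, and the grid layout matches the HeightGrid sketch above:

```python
import numpy as np

def needs_pull_up(height_grid, cell_m, pos_xyz, vel_xy,
                  clearance=2.0, lookahead=20.0):
    """Walk the terrain grid along the horizontal velocity vector and report
    whether the terrain ahead would intrude on the current flight altitude."""
    direction = np.asarray(vel_xy, dtype=float)
    direction /= np.linalg.norm(direction) + 1e-9
    for d in np.arange(0.0, lookahead, cell_m):
        x = pos_xyz[0] + d * direction[0]
        y = pos_xyz[1] + d * direction[1]
        i, j = int(x / cell_m), int(y / cell_m)
        if 0 <= i < height_grid.shape[0] and 0 <= j < height_grid.shape[1]:
            if height_grid[i, j] + clearance > pos_xyz[2]:
                return True   # obstacle ahead: perform a smooth pull-up
    return False
```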
- the embodiment of the present application proposes a monocular-camera-based method for sensing obstacles during UAV flight, which can sense obstacle height information and ensure that the UAV can avoid an obstacle by flying over it, shortening the drone's obstacle avoidance distance and improving its perception of obstacles.
- this application designs a multi-scale feature extraction method that extracts ORB features from image layers of different resolutions, ensuring a uniform distribution of feature points across the image and thereby a better obstacle perception effect.
- common visual environment perception methods may also suffer from problems such as misrecognition, large errors, or interruptions.
- the processing steps above improve the robustness of the obstacle perception method as much as possible.
- computing the depth information of image feature points with inverse depth parameterization combined with the extended Kalman filter speeds up the convergence of the depth information and better recovers the depth of distant points in space.
- this embodiment of the present application also provides a flight control device 6 for a drone, which can be applied to the UAV provided in the embodiments corresponding to FIG. 1 to FIG. 3.
- the flight control device 6 of the UAV includes: an acquisition part 61, a first determination part 62, a second determination part 63, an adjustment part 64 and a third determination part 65, wherein:
- the acquisition part 61 is configured to acquire the image to be processed; wherein, the screen content of the image to be processed includes the information of the flight environment ahead;
- the first determining part 62 is configured to determine image feature point pairs satisfying preset conditions based on two temporally adjacent frames of images to be processed;
- the second determining part 63 is configured to determine the three-dimensional coordinate information associated with the image feature point pair in the forward flight environment;
- the adjusting part 64 is configured to adjust the map to be adjusted corresponding to the image to be processed based on the three-dimensional coordinate information to obtain a three-dimensional map;
- the third determining part 65 is configured to determine the flight track of the UAV based on the three-dimensional map.
- the acquiring part 61 is further configured to capture the forward flight environment information to obtain a preset image, and to adjust the image contrast of the preset image to obtain the image to be processed; one plausible realization is sketched below.
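- the application does not fix a particular contrast operator; as one plausible choice, this sketch applies OpenCV's contrast-limited adaptive histogram equalization (CLAHE) to the preset image:

```python
import cv2

def preprocess_image(preset_image):
    """Adjust the contrast of the preset image to obtain the image to be processed."""
    gray = cv2.cvtColor(preset_image, cv2.COLOR_BGR2GRAY)
    # Clip limit and tile size are illustrative assumptions.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)
```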
- the first determining part 62 is further configured to determine at least one image feature point in each frame of the two temporally adjacent frames of images to be processed; to determine the binary parameter corresponding to each image feature point; and to determine the image feature point pairs among the image feature points of the two frames based on the corresponding binary parameters.
- the first determining part 62 is further configured to downsample each frame of the image to be processed according to an image resolution gradient, generating an image pyramid for each frame; feature extraction is then performed on the image at each level of the pyramid to obtain the at least one image feature point of each frame, as in the sketch below.
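- a sketch of this multi-scale extraction, assuming four pyramid levels and a fixed per-level feature budget (both our assumptions); OpenCV's ORB also has an internal pyramid (nlevels), which this sketch makes explicit to mirror the description above:

```python
import cv2
import numpy as np

def extract_multiscale_orb(image, num_levels=4, per_level=250):
    """Extract ORB features at every level of an image pyramid.

    Keypoint coordinates are scaled back to the original resolution so
    that features from all levels share one image frame.
    """
    orb = cv2.ORB_create(nfeatures=per_level)
    keypoints, descriptors = [], []
    level_img, scale = image, 1.0
    for _ in range(num_levels):
        kps, des = orb.detectAndCompute(level_img, None)
        if des is not None:
            for kp in kps:
                kp.pt = (kp.pt[0] * scale, kp.pt[1] * scale)
            keypoints.extend(kps)
            descriptors.append(des)
        level_img = cv2.pyrDown(level_img)   # halve resolution for next level
        scale *= 2.0
    if descriptors:
        return keypoints, np.vstack(descriptors)
    return keypoints, None
```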
- the first determining part 62 is further configured to determine, based on the binary parameters of the image feature points, the Hamming distance between two image feature points drawn from the two temporally adjacent frames of images to be processed; if the Hamming distance is less than a preset threshold, the two image feature points are determined as an image feature point pair, as sketched below.
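- a sketch of the pairing step using brute-force Hamming matching; the threshold value is an assumption, since the application only requires the distance to be below a preset threshold:

```python
import cv2

def match_feature_pairs(des_prev, des_curr, max_hamming=40):
    """Match binary ORB descriptors between two temporally adjacent frames.

    Cross-checked brute-force matching under the Hamming metric; a match
    whose distance falls below the threshold is kept as an image feature
    point pair.
    """
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_prev, des_curr)
    return [m for m in matches if m.distance < max_hamming]
```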
- the second determining part 63 is further configured to obtain the two-dimensional coordinate information of each image feature point of the image feature point pair in its corresponding image to be processed; to determine, based on the two-dimensional coordinate information, the spatial position relationship between the two image feature points of the pair; and to determine, based on the spatial position relationship and the two-dimensional coordinate information, the three-dimensional coordinate information in the forward flight environment.
- the second determining part 63 is further configured to analyze the spatial position relationship to obtain the rotation matrix parameters and translation matrix parameters that characterize the flight change, and to determine the three-dimensional coordinate information in the forward flight environment based on the rotation matrix parameters, the translation matrix parameters and the two-dimensional coordinate information; a sketch of this recovery step follows.
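- a sketch of this step using standard epipolar geometry routines from OpenCV; the RANSAC settings are assumptions, and for a monocular camera the translation, and hence the 3-D points, are recovered only up to scale:

```python
import cv2
import numpy as np

def recover_structure(pts_prev, pts_curr, K):
    """Recover R, t and 3-D points from matched 2-D feature pairs.

    pts_prev / pts_curr: Nx2 float arrays of matched pixel coordinates in
    the two temporally adjacent frames; K: 3x3 camera intrinsic matrix.
    """
    E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)

    # Triangulate the correspondences; the first camera sits at the origin.
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P0, P1, pts_prev.T, pts_curr.T)
    return R, t, (pts4d[:3] / pts4d[3]).T   # Nx3 points in space
```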
- the adjustment part 64 is further configured to obtain the initial position information and initial flight attitude parameters of the UAV; to determine the distance between the initial position information and the three-dimensional coordinate information; to construct, based on that distance, the initial position information and the initial flight attitude parameters, a coordinate vector parameter of a preset dimension matching the three-dimensional coordinate information; and to adjust the coordinates to be adjusted of the map to be adjusted based on the coordinate vector parameter, obtaining the three-dimensional map.
- the adjustment part 64 is further configured to construct an updated covariance matrix based on the coordinate vector parameters; to adjust the coordinates to be adjusted of the map to be adjusted based on the updated covariance matrix, obtaining corrected three-dimensional coordinate information; and to construct the three-dimensional map based on the corrected three-dimensional coordinate information. A generic form of this correction is sketched below.
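- the application does not spell out the filter matrices; a generic extended Kalman filter measurement update of the kind described might look like this, with the state, Jacobian, and noise shapes as placeholders:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update on the map state.

    x: state vector (UAV pose plus feature coordinates); P: its covariance;
    z: observed feature position; h: measurement function; H: its Jacobian
    evaluated at x; R: measurement noise covariance.
    """
    y = z - h(x)                               # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + K @ y                          # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P       # updated covariance
    return x_new, P_new
```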
- the third determining part 65 is further configured to determine an avoidance route based on the three-dimensional map; and determine the flight trajectory of the UAV based on the avoidance route.
- the flight control device of the UAV determines, through geometric calculation based on the image feature point pairs in two temporally adjacent frames of images to be processed, the three-dimensional coordinate information of the spatial points associated with those pairs in the forward flight environment, and then optimizes the initial map based on the three-dimensional coordinate information; in this way, the actual flight environment information can be restored efficiently and accurately, and a three-dimensional topographic map with height information can be constructed. At the same time, the flight trajectory is determined based on the three-dimensional map to achieve obstacle avoidance flight, reducing the impact of the actual flight environment on the UAV during flight.
- this embodiment of the present application also provides an electronic device 7, to which the flight control method for a drone provided in the embodiments corresponding to FIG. 1 to FIG. 3 can be applied; as shown in FIG. 7, the electronic device 7 includes: a processor 71, a memory 72 and a communication bus 73, wherein:
- the communication bus 73 is used to implement the communication connection between the processor 71 and the memory 72.
- the processor 71 is used to execute the program of the UAV flight control method stored in the memory 72, so as to implement the UAV flight control method provided in the embodiments corresponding to FIG. 1 to FIG. 3.
- the electronic device determines, through geometric calculation based on the image feature point pairs in two temporally adjacent frames of images to be processed, the three-dimensional coordinate information of the spatial points associated with those pairs in the forward flight environment, and then optimizes the initial map based on the three-dimensional coordinate information; in this way, the actual flight environment information can be restored efficiently and accurately, and a three-dimensional topographic map with height information can be constructed. At the same time, the flight trajectory is determined based on the three-dimensional map to achieve obstacle avoidance flight, reducing the impact of the actual flight environment on the UAV during flight.
- the embodiments of the present application further provide a computer-readable storage medium storing one or more programs, and the one or more programs can be executed by one or more processors to implement the flight control method of the unmanned aerial vehicle provided by the embodiments corresponding to FIG. 1 to FIG. 3.
- the embodiment of the present application also provides a computer program comprising computer-readable codes; when the computer-readable codes run in an electronic device, the processor of the electronic device executes them to implement the flight control method of the UAV provided by the embodiments corresponding to FIG. 1 to FIG. 3.
- the disclosed devices and methods can be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical function division.
- the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
- the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can stand alone as a single unit, or two or more units can be integrated into one unit; the above-mentioned integrated unit can be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
- when the above-mentioned integrated units of the present application are implemented in the form of software functional parts and sold or used as independent products, they may also be stored in a computer-readable storage medium.
- the technical solution of the embodiments of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions for making a computer device (which may be a personal computer, a server, a network device, or the like) execute all or part of the methods described in the various embodiments of the present application.
- the aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
- the embodiment of the present application discloses a flight control method, device, equipment, medium, and program for a UAV, the method including: acquiring an image to be processed, whose screen content includes information about the flight environment ahead; determining, based on two temporally adjacent frames of images to be processed, image feature point pairs that meet preset conditions; determining, in the forward flight environment, the three-dimensional coordinate information associated with the image feature point pairs; adjusting, based on the three-dimensional coordinate information, the map to be adjusted corresponding to the image to be processed to obtain a three-dimensional map; and determining the flight trajectory of the UAV based on the three-dimensional map.
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111019049.8 | 2021-09-01 | ||
| CN202111019049.8A CN115729250A (zh) | 2021-09-01 | 2021-09-01 | 一种无人机的飞行控制方法、装置、设备及存储介质 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2023030062A1 true WO2023030062A1 (fr) | 2023-03-09 |
Family
ID=85292015
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2022/113856 Ceased WO2023030062A1 (fr) | 2021-09-01 | 2022-08-22 | Procédé et appareil de commande de vol pour véhicule aérien sans pilote, et dispositif, support et programme |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN115729250A (fr) |
| WO (1) | WO2023030062A1 (fr) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119887856A (zh) * | 2025-01-10 | 2025-04-25 | 天津理工大学 | 一种基于目标语义和空间位置信息的多无人机协同跟踪方法 |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106204516B (zh) * | 2015-05-06 | 2020-07-03 | Tcl科技集团股份有限公司 | 一种机器人的自动充电方法及装置 |
| CN105678754B (zh) * | 2015-12-31 | 2018-08-07 | 西北工业大学 | 一种无人机实时地图重建方法 |
| CN105865454B (zh) * | 2016-05-31 | 2019-09-24 | 西北工业大学 | 一种基于实时在线地图生成的无人机导航方法 |
| CN108062109B (zh) * | 2017-12-13 | 2020-09-11 | 天津萨瑞德科技有限公司 | 无人机避障方法 |
| CN108682027A (zh) * | 2018-05-11 | 2018-10-19 | 北京华捷艾米科技有限公司 | 基于点、线特征融合的vSLAM实现方法及系统 |
| CN109358638B (zh) * | 2018-09-10 | 2021-07-27 | 南京航空航天大学 | 基于分布式地图的无人机视觉避障方法 |
| CN110673632A (zh) * | 2019-09-27 | 2020-01-10 | 中国船舶重工集团公司第七0九研究所 | 一种基于视觉slam的无人机自主避障方法及装置 |
- 2021-09-01: CN CN202111019049.8A, published as CN115729250A (zh), active, Pending
- 2022-08-22: WO PCT/CN2022/113856, published as WO2023030062A1 (fr), not active, Ceased
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2017134617A (ja) * | 2016-01-27 | 2017-08-03 | 株式会社リコー | 位置推定装置、プログラム、位置推定方法 |
| CN106501829A (zh) * | 2016-09-26 | 2017-03-15 | 北京百度网讯科技有限公司 | 一种无人机导航方法和装置 |
| CN107656545A (zh) * | 2017-09-12 | 2018-02-02 | 武汉大学 | 一种面向无人机野外搜救的自主避障与导航方法 |
| CN108917753A (zh) * | 2018-04-08 | 2018-11-30 | 中国人民解放军63920部队 | 基于从运动恢复结构的飞行器位置确定方法 |
| US20210141378A1 (en) * | 2018-07-18 | 2021-05-13 | SZ DJI Technology Co., Ltd. | Imaging method and device, and unmanned aerial vehicle |
| CN109407705A (zh) * | 2018-12-14 | 2019-03-01 | 厦门理工学院 | 一种无人机躲避障碍物的方法、装置、设备和存储介质 |
| CN110047142A (zh) * | 2019-03-19 | 2019-07-23 | 中国科学院深圳先进技术研究院 | 无人机三维地图构建方法、装置、计算机设备及存储介质 |
| CN112434709A (zh) * | 2020-11-20 | 2021-03-02 | 西安视野慧图智能科技有限公司 | 基于无人机实时稠密三维点云和dsm的航测方法及系统 |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116678405A (zh) * | 2023-06-14 | 2023-09-01 | 广东汇天航空航天科技有限公司 | 视觉惯性里程计参数值更新方法、装置、设备及存储介质 |
| CN117058209A (zh) * | 2023-10-11 | 2023-11-14 | 山东欧龙电子科技有限公司 | 一种基于三维地图的飞行汽车视觉图像深度信息计算方法 |
| CN117058209B (zh) * | 2023-10-11 | 2024-01-23 | 山东欧龙电子科技有限公司 | 一种基于三维地图的飞行汽车视觉图像深度信息计算方法 |
| CN119512198A (zh) * | 2024-11-20 | 2025-02-25 | 拓恒技术有限公司 | 一种用于输电线路的无人机低空巡检的控制系统 |
| CN119810205A (zh) * | 2024-11-28 | 2025-04-11 | 贵州电网有限责任公司 | 基于三维地形数据的无人机视频传感器在线标定方法及系统 |
| CN119690129A (zh) * | 2025-02-24 | 2025-03-25 | 北京理工大学 | 无人机的飞行路径的规划方法、装置、无人机及存储介质 |
| CN119987409A (zh) * | 2025-04-17 | 2025-05-13 | 四川中研新材料科技有限责任公司 | 一种无人机实时路径规划与避障方法、装置、介质及设备 |
| CN120043445A (zh) * | 2025-04-23 | 2025-05-27 | 中国建筑第四工程局有限公司 | 一种多功能自动化空间点位追踪测量方法及装置 |
Also Published As
| Publication number | Publication date |
|---|---|
| CN115729250A (zh) | 2023-03-03 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US12504286B2 (en) | Simultaneous location and mapping (SLAM) using dual event cameras | |
| WO2023030062A1 (fr) | Procédé et appareil de commande de vol pour véhicule aérien sans pilote, et dispositif, support et programme | |
| US12198364B2 (en) | Computer vision systems and methods for detecting and modeling features of structures in images | |
| CN112902953B (zh) | 一种基于slam技术的自主位姿测量方法 | |
| WO2021233029A1 (fr) | Procédé de localisation et de cartographie simultanées, dispositif, système et support de stockage | |
| CN106940704B (zh) | 一种基于栅格地图的定位方法及装置 | |
| CN113359782B (zh) | 一种融合lidar点云与图像数据的无人机自主选址降落方法 | |
| CN112258618A (zh) | 基于先验激光点云与深度图融合的语义建图与定位方法 | |
| CN105865454B (zh) | 一种基于实时在线地图生成的无人机导航方法 | |
| CN113985445A (zh) | 一种基于相机与激光雷达数据融合的3d目标检测算法 | |
| CN106595659A (zh) | 城市复杂环境下多无人机视觉slam的地图融合方法 | |
| CN112802096A (zh) | 实时定位和建图的实现装置和方法 | |
| CN119478297B (zh) | 一种基于5g飞控的实时三维建模方法、系统及介质 | |
| Eynard et al. | Real time UAV altitude, attitude and motion estimation from hybrid stereovision | |
| CN115773759B (zh) | 自主移动机器人的室内定位方法、装置、设备及存储介质 | |
| CN118189959A (zh) | 一种基于yolo姿态估计的无人机目标定位方法 | |
| CN118225096A (zh) | 基于动态特征点剔除和回环检测的多传感器slam方法 | |
| CN117253003A (zh) | 一种融合直接法与点面特征法的室内rgb-d slam方法 | |
| Zhang et al. | A stereo SLAM system with dense mapping | |
| CN113158816B (zh) | 面向室外场景物体的视觉里程计二次曲面路标构建方法 | |
| CN114993293B (zh) | 室内弱纹理环境下移动无人系统同步定位与建图方法 | |
| Pal et al. | Evolution of simultaneous localization and mapping framework for autonomous robotics—a comprehensive review | |
| CN113011212A (zh) | 图像识别方法、装置及车辆 | |
| Hong et al. | A Method for Enhancing the Positioning Accuracy and Robustness of Indoor Inspection by Unmanned Aerial Vehicles through Multi-modal Fusion | |
| CN120869183A (zh) | 视觉里程计计算方法、存储介质、电子设备及程序产品 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22863201; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 22863201; Country of ref document: EP; Kind code of ref document: A1 |
| | 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 23/04/2024) |