
WO2021159603A1 - Indoor navigation method and apparatus for unmanned aerial vehicle, device and storage medium - Google Patents


Info

Publication number
WO2021159603A1
Authority
WO
WIPO (PCT)
Prior art keywords
drone
model
information
point cloud
navigation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2020/085853
Other languages
French (fr)
Chinese (zh)
Inventor
冼志海
陈东亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Smart Technology Co Ltd
Original Assignee
OneConnect Smart Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Smart Technology Co Ltd filed Critical OneConnect Smart Technology Co Ltd
Publication of WO2021159603A1 publication Critical patent/WO2021159603A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to an indoor navigation method, apparatus, device and storage medium for an unmanned aerial vehicle.
  • drones that can currently realize autonomous flight rely on satellite signals for position information and then use an existing map to plan a route based on that position.
  • however, a drone cannot receive GPS signals in sheltered environments such as indoor spaces and tunnels; therefore, indoor positioning and navigation of UAVs is usually performed in one of the following ways:
  • positioning based on the cellular network does not require any prior network deployment, and any ordinary device that can receive mobile-phone signals can be positioned by comparing signals from multiple base stations, but the positioning accuracy is difficult to guarantee;
  • visual positioning systems can use a camera to collect images of the surrounding environment and determine the position by comparing them with information entered in advance; this approach cannot distinguish different partitions with the same layout (such as different hotel rooms with identical decoration), and because the environment is changeable, a drone may be unable to avoid obstacles smoothly in a relatively confined space.
  • the main purpose of this application is to provide a UAV indoor navigation method, apparatus, device and storage medium, aiming to solve the technical problem that current UAV indoor navigation is costly and of low accuracy.
  • the indoor navigation method for a UAV includes the following steps:
  • the point cloud model and the preset BIM model are superimposed to obtain a superimposed model, a navigation route to the destination location is generated according to the superimposed model, and the drone is controlled to operate according to the navigation route.
  • the indoor navigation device for an unmanned aerial vehicle includes:
  • the request receiving module is used to obtain the initial position and destination position of the drone when receiving the drone navigation request;
  • the information collection module is used to collect the operating state information of the UAV through the first collection device in the UAV, and to collect surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the UAV;
  • a model construction module configured to construct a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and process the feature point cloud to obtain a point cloud model of the shooting object;
  • the route generation module is used to superimpose the point cloud model and the preset BIM model to obtain a superimposed model, generate a navigation route to the destination location according to the superimposed model, and control the drone to operate according to the navigation route.
  • this application also provides an indoor navigation device for drones
  • the UAV indoor navigation device includes: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the UAV indoor navigation method described above;
  • the UAV indoor navigation method includes at least the following steps: when receiving the UAV navigation request, obtain the initial position and the destination position of the UAV;
  • collect the operating state information of the drone through the first collection device in the drone, and collect surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone;
  • construct a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and process the feature point cloud to obtain a point cloud model of the shooting object; superimpose the point cloud model with a preset BIM model to obtain a superimposed model, generate a navigation route to the destination position according to the superimposed model, and control the drone to operate according to the navigation route.
  • this application also provides a computer storage medium; the computer storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the above-mentioned indoor navigation method for drones are realized, wherein:
  • the indoor navigation method of the UAV includes at least the following steps: when receiving the UAV navigation request, obtain the initial position and the target position of the UAV;
  • the operating state information of the UAV is collected through the first collection device in the UAV, and the surrounding environment information at different heights and different shooting angles at the initial position is collected through the second collection device in the UAV; a feature point cloud of the shooting object is constructed according to the operating state information and the surrounding environment information, and
  • the feature point cloud is processed to obtain the point cloud model of the shooting object; the point cloud model and the preset BIM model are superimposed to obtain a superimposed model, a navigation route to the destination position is generated according to the superimposed model, and the UAV is controlled to operate according to the navigation route.
  • the embodiment of the application proposes an indoor navigation method, device, equipment, and storage medium for a UAV.
  • when a terminal receives a UAV navigation request, it obtains the initial position and the destination position of the UAV;
  • the first collection device in the UAV collects the operating state information of the UAV, and the second collection device in the UAV collects the surrounding environment information at different heights and different shooting angles at the initial position; a feature point cloud of the shooting object is constructed according to the operating state information and the surrounding environment information, and the feature point cloud is processed to obtain a point cloud model of the shooting object; the point cloud model is superimposed with a preset BIM model to obtain a superimposed model, and
  • a navigation route to the destination position is generated according to the superimposed model, and the UAV is controlled to operate according to the navigation route.
  • the technical solution of this embodiment requires no additional hardware cost, and information about building components, doors, windows and other obstacles to UAV navigation can be accurately determined through the superimposed model, so that the UAV can better avoid obstacles and the accident rate is reduced.
  • the indoor automatic navigation of the UAV is realized, and the accuracy of the indoor navigation of the UAV is improved.
  • FIG. 1 is a schematic diagram of a device structure of a hardware operating environment involved in a solution of an embodiment of the present application
  • FIG. 2 is a schematic flowchart of the first embodiment of the UAV indoor navigation method of this application;
  • FIG. 3 is a schematic diagram of the functional modules of an embodiment of the UAV indoor navigation apparatus of this application.
  • Figure 1 is a schematic structural diagram of the terminal of the hardware operating environment involved in the solutions of the embodiments of this application (also called the UAV indoor navigation device; the UAV indoor navigation device may consist of a standalone UAV indoor navigation apparatus, or may be formed by combining other apparatuses with the UAV indoor navigation apparatus).
  • the terminal in the embodiments of this application may be a fixed terminal or a mobile terminal, such as a smart air conditioner with networking functions, a smart light, a smart power supply, a smart speaker, an autonomous vehicle, a personal computer (PC), a smart phone, a tablet computer, an e-book reader, a portable computer, and so on.
  • the terminal may include a processor 1001, for example, a central processing unit (CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as WIreless-FIdelity, WIFI interface).
  • the memory 1005 may be a high-speed RAM memory, or a stable (non-volatile) memory, for example, a magnetic disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • optionally, the terminal may also include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, and a WiFi module; input units such as a display screen and a touch screen; and, in addition to WiFi, the wireless network interface may optionally be Bluetooth, a probe, and so on.
  • sensors such as light sensors, motion sensors and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor; of course, the mobile terminal may also be equipped with other sensors such as gyroscope, barometer, hygrometer, thermometer, infrared sensor, etc., which will not be repeated here.
  • those skilled in the art can understand that the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal, which may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
  • the computer software product is stored in a storage medium (a storage medium is also called a computer storage medium, computer medium, readable medium, readable storage medium, computer-readable storage medium, or simply a medium; the storage medium may be a non-volatile readable storage medium, such as a RAM, a magnetic disk or an optical disk) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the methods described in the embodiments of this application.
  • the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module, and a computer program.
  • the network interface 1004 is mainly used to connect to a back-end server and communicate with the back-end server;
  • the user interface 1003 is mainly used to connect to a client (user side) to communicate with the client;
  • the processor 1001 can be used to call a computer program stored in the memory 1005 and execute the steps in the UAV indoor navigation method provided in the following embodiments of the present application.
  • based on the point cloud model generated in real time, the UAV indoor navigation route can be planned automatically, so that the UAV can navigate. Specifically:
  • the indoor navigation method for an unmanned aerial vehicle includes:
  • Step S10 When the drone navigation request is received, the initial position and the destination position of the drone are acquired.
  • the UAV indoor navigation method in this embodiment is applied to a terminal (also called UAV indoor navigation equipment).
  • the terminal communicates with the UAV, and the terminal can control the UAV.
  • the UAV has a preset collection device, and the collection device is used to collect information about the surrounding environment of the drone and the operating status information of the drone (the operating status of the drone and the relative displacement information of the drone).
  • the number of inertial measurement devices, perception cameras, and depth cameras is not limited.
  • the drone sends the collected surrounding environment information and drone operating status information to the terminal, and the terminal processes the surrounding environment information and the drone operating status information to obtain the UAV's indoor navigation route and realize accurate navigation of the UAV.
  • the terminal receives the drone navigation request
  • the triggering method of the drone navigation request is not specifically limited; that is, the drone navigation request can be triggered actively by the user, for example, the user clicks the corresponding drone navigation button on the terminal display interface to trigger the drone navigation request; in addition, the drone navigation request can also be triggered automatically by the terminal.
  • for example, a trigger condition of the drone navigation request is preset in the terminal: trigger the drone navigation request in the early morning every day to collect information about the xxx floor, so that the terminal automatically triggers the drone navigation request when the early morning arrives, as in the sketch below.
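  • purely as an illustration (not part of the original disclosure), such a time-based automatic trigger could look like the following Python sketch; the early-morning window and the function name are assumptions:

```python
# Hypothetical sketch of an automatic, time-based trigger for the drone
# navigation request; the 1:00-5:00 "early morning" window is an assumption.
from datetime import datetime, time


def should_trigger_navigation(now: datetime) -> bool:
    """Return True when the preset trigger condition (early morning) is met."""
    return time(1, 0) <= now.time() <= time(5, 0)


if should_trigger_navigation(datetime.now()):
    print("Triggering drone navigation request for scheduled floor inspection")
```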
  • when the terminal receives the drone navigation request, the terminal obtains the initial position and destination position of the drone.
  • the initial position and destination position of the drone can be set by the user; for example, the initial position of the drone entered on the user terminal is Room 1 on the 3rd floor of the xxx building, and the destination position is Room 3 on the 2nd floor of the xxx building.
  • a specific implementation method for determining the initial position and the target position of the drone includes the following steps:
  • Step a1 when receiving the drone navigation request, obtain the building identifier of the building where the drone is currently located, and the preset BIM model associated with the building identifier;
  • Step a2 construct a three-dimensional coordinate system based on the preset BIM model, use the three-dimensional coordinates of the current position of the drone as the initial position of the drone, obtain the navigation destination corresponding to the drone navigation request, and use the three-dimensional coordinates of the navigation destination as the destination position of the drone.
  • when the terminal receives the drone navigation request, the terminal obtains the building identifier of the building where the drone is currently located (the building identifier refers to identification information that uniquely identifies the building, such as the name of the building or the location information of the building), and the terminal obtains the preset BIM model associated with the building identifier (the preset BIM model refers to a preset Building Information Modeling model associated with the building identifier, and the BIM model contains the building's component information); a three-dimensional coordinate system is constructed based on the preset BIM model, the terminal uses the three-dimensional coordinates of the current position of the drone as the initial position of the drone, the terminal obtains the navigation destination corresponding to the drone navigation request, and uses the three-dimensional coordinates of the navigation destination as the destination position of the drone.
  • a three-dimensional coordinate system is constructed based on the BIM model to accurately determine the initial position and target position of the drone, so as to realize accurate navigation of the drone.
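  • as a simple illustration only, the lookup from a building identifier to its preset BIM model, and from named positions to coordinates in that three-dimensional coordinate system, could be sketched as follows; the building identifier, room names and coordinate values are hypothetical and not taken from the patent:

```python
# Hypothetical mapping from a building identifier to its preset BIM model,
# and from named rooms to 3D coordinates in the BIM coordinate system.
import numpy as np

PRESET_BIM_MODELS = {
    "xxx-building": {                                 # building identifier (assumed)
        "floor3-room1": np.array([12.0, 4.5, 9.0]),   # x, y, z in metres (assumed)
        "floor2-room3": np.array([20.0, 7.5, 6.0]),
    }
}


def resolve_positions(building_id: str, start_room: str, goal_room: str):
    """Map the navigation request onto 3D coordinates in the BIM frame."""
    bim = PRESET_BIM_MODELS[building_id]
    return bim[start_room], bim[goal_room]


initial_position, destination_position = resolve_positions(
    "xxx-building", "floor3-room1", "floor2-room3")
```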
  • Step S20 Collect the operating state information of the drone through the first collection device in the drone, and collect surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone.
  • the collection device is preset in the drone, and the collection device is divided into a first collection device and a second collection device according to the purpose of the collection device.
  • the first collection device is used to collect the operating state information of the drone, and the first collection device includes at least one inertial measurement device and at least one sensing camera;
  • the second collecting device is used to collect information about the surrounding environment of the drone, and the second collecting device includes at least one depth camera.
  • for example, the drone carries one inertial measurement device, one depth camera and four environment-aware cameras; the inertial measurement device is responsible for sensing the direction information of the drone, the environment-aware cameras are responsible for obtaining the relative displacement of the drone, and the depth camera is responsible for sensing the depth image information of the object photographed by the drone.
  • Step b1 Adjust the height and shooting angle of the drone at the initial position, collect the direction information of the drone through the inertial measurement device, collect feature images through the sensing camera, analyze the feature images to obtain the relative position information of the drone, and use the direction information and the relative position information as the operating state information of the drone;
  • Step b2 Transmit infrared pulses to the shooting object through the depth camera, receive the infrared pulses reflected by the shooting object and the reflection time of the infrared pulses, process the reflection time to obtain the depth image information of the shooting object, and use the depth image information as the surrounding environment information.
  • the terminal controls the drone to autonomously adjust the height and shooting angle near the initial position, and performs multi-angle shooting to obtain feature images.
  • the terminal extracts feature points of the feature image, and the terminal analyzes the feature image to obtain the relative position information of the drone.
  • the direction information and relative position information are used as the operating status information of the UAV.
  • the terminal emits infrared pulses through the depth camera, and obtains the depth image information by calculating the reflection time, that is, the distance from the surface of the object to the camera.
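  • as a purely numerical illustration of this time-of-flight principle (the pulse travels to the object surface and back, so distance = c·t/2), a minimal sketch with made-up reflection times:

```python
# Convert measured infrared-pulse reflection times into a depth image
# (distance from the object surface to the camera). Sample values are assumed.
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s


def reflection_time_to_depth(reflection_time_s: np.ndarray) -> np.ndarray:
    """Round-trip reflection time -> one-way distance in metres."""
    return SPEED_OF_LIGHT * reflection_time_s / 2.0


# A tiny 2x2 "depth image" of round-trip times (about 3 m and 5 m away).
times = np.array([[2.0e-8, 2.0e-8],
                  [3.3e-8, 3.3e-8]])
depth_image = reflection_time_to_depth(times)   # ~[[3.0, 3.0], [4.95, 4.95]]
```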
  • Step S30 Construct a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and process the feature point cloud to obtain a point cloud model of the shooting object.
  • the terminal converts the depth image information in the surrounding environment information into a three-dimensional feature point cloud, and the terminal fuses the three-dimensional feature point cloud into a three-dimensional grid to obtain the point cloud model; that is, the terminal casts rays from the current position of the collection device to intersect the three-dimensional feature point cloud of the previous step, obtains the point cloud under the current frame's perspective, and calculates its normal vectors to register the depth image information input in the next frame; this process loops continuously to obtain feature point clouds under different perspectives, and the scene surface of the shooting object is reconstructed to form the point cloud model.
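  • one common way to obtain such a per-frame feature point cloud is to back-project every depth pixel through pinhole camera intrinsics; the sketch below illustrates this under assumed intrinsics (the patent does not specify a camera model, so fx, fy, cx, cy here are illustrative):

```python
# Back-project a depth image into camera-frame 3D points using assumed
# pinhole intrinsics fx, fy, cx, cy; this is one way to obtain the
# per-frame feature point cloud described above.
import numpy as np


def depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=None, cy=None):
    h, w = depth.shape
    cx = (w - 1) / 2.0 if cx is None else cx
    cy = (h - 1) / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]          # drop invalid (zero-depth) pixels


frame_cloud = depth_to_point_cloud(np.full((4, 4), 3.0))  # toy 4x4 frame, 3 m away
```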
  • step S30 includes:
  • Step b1 extracting direction information and relative position information in the operating state information, and iterating the direction information and the relative position information to obtain the attitude change value of the first collecting device in the UAV;
  • Step b2 extracting the depth image information in the surrounding environment information, and iterating the depth image information according to the attitude change value to obtain the feature point cloud of the drone photographed object;
  • Step b3 processing the characteristic point cloud by a preset SLAM algorithm to obtain a point cloud model of the shooting object.
  • the terminal extracts the direction information and relative position information from the operating status information, and the terminal iterates the direction information and relative position information to obtain the attitude change value of the first collection device in the drone; the terminal extracts the depth image information from the surrounding environment information, and the terminal iterates the depth image information according to the attitude change value to obtain the feature point cloud of the object photographed by the drone; the terminal processes the feature point cloud using the preset SLAM algorithm (Simultaneous Localization and Mapping) to obtain the point cloud model of the shooting object.
  • the terminal determines the attitude change value of the drone to construct the characteristic point cloud of the shooting object, and obtains the approximate point cloud model of the shooting target according to the SLAM algorithm.
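  • the sketch below illustrates one simple interpretation of this accumulation step: each frame's points are transformed by the pose (attitude change) estimated for that frame and merged into a single feature point cloud; the yaw-only rotation and the sample values are assumptions, not the patent's exact SLAM procedure:

```python
# Accumulate per-frame point clouds into one feature point cloud by applying
# each frame's estimated pose (attitude change): p_world = R @ p_cam + t.
# The rotation/translation values below are illustrative assumptions.
import numpy as np


def yaw_rotation(theta: float) -> np.ndarray:
    """Rotation about the vertical axis by theta radians (drone yaw)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])


def accumulate_point_cloud(frames):
    """frames: list of (points Nx3, yaw, translation 3-vector) per capture."""
    merged = []
    for points, yaw, translation in frames:
        R = yaw_rotation(yaw)
        merged.append(points @ R.T + translation)
    return np.vstack(merged)


frames = [
    (np.array([[1.0, 0.0, 1.5]]), 0.0,        np.zeros(3)),
    (np.array([[1.0, 0.0, 1.5]]), np.pi / 2,  np.array([0.5, 0.0, 0.0])),
]
feature_cloud = accumulate_point_cloud(frames)
```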
  • the terminal recognizes the corresponding obstacle information of non-building components in real time according to the point cloud model.
  • Step S40 Superimpose the point cloud model and the preset BIM model to obtain a superimposed model, generate a navigation route to the destination location according to the superimposed model, and control the drone to operate according to the navigation route.
  • the BIM model is preset in the terminal, and the terminal determines the building component information according to the preset BIM model.
  • the terminal can determine the non-building component information based on the point cloud model.
  • the terminal superimposes the BIM model and the point cloud model to obtain a superimposed model.
  • the superimposed model contains UAV navigation obstacles such as building component information and non-building component information;
  • the terminal generates a navigation route to the destination position according to the superimposed model, and controls the UAV to operate according to the navigation route.
  • step S40 includes:
  • Step c1 Determine the reference position corresponding to the initial position in the preset BIM model, and compare the edge information in the point cloud model with the edge information at the reference position in the preset BIM model to obtain the minimum distance between the point cloud model and the preset BIM model;
  • Step c2 superimpose the point cloud model and the preset BIM model according to the minimum distance to obtain a superimposed model
  • Step c3 trace back the path from the initial position according to the superimposed model, obtain a navigation route to the destination position, and control the UAV to operate according to the navigation route.
  • the terminal determines the reference position corresponding to the initial position in the preset BIM model.
  • the terminal compares the image of the subject in the point cloud model with the image at the reference position in the preset BIM model to obtain the minimum distance between the edge feature points of the two images.
  • the point cloud model and the preset BIM model are superimposed at the minimum distance to obtain the superimposed model.
  • the terminal traces the path from the initial position according to the superimposed model, obtains the navigation route to the destination position, and controls the drone to operate according to the navigation route.
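  • a minimal sketch of this superposition step, under the simplifying assumption that the edge information of both models is available as point sets near the reference position and that the alignment offset is the one realising the minimum edge-to-edge distance; this is an illustration, not the patent's exact procedure:

```python
# Superimpose a point cloud model on the BIM model by comparing edge points:
# find the pair of cloud/BIM edge points with the minimum distance, take the
# offset between them, and shift the cloud by that offset. Illustrative only.
import numpy as np


def superimpose(cloud_edges: np.ndarray, bim_edges: np.ndarray):
    """cloud_edges, bim_edges: Nx3 / Mx3 edge points near the reference position."""
    # Pairwise distances between every cloud edge point and every BIM edge point.
    diffs = cloud_edges[:, None, :] - bim_edges[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    i, j = np.unravel_index(np.argmin(dists), dists.shape)
    offset = bim_edges[j] - cloud_edges[i]          # offset giving the minimum distance
    return cloud_edges + offset, offset             # aligned cloud and applied shift


cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
bim = np.array([[0.2, 0.1, 0.0], [1.2, 0.1, 0.0]])
aligned, shift = superimpose(cloud, bim)            # shift ~= [0.2, 0.1, 0.0]
```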
  • the technical solution of this embodiment requires no additional hardware cost, and information about building components, doors, windows and other obstacles to UAV navigation can be accurately determined through the superimposed model, so that the UAV can better avoid obstacles and the accident rate is reduced.
  • the indoor automatic navigation of the UAV is realized, and the accuracy of the indoor navigation of the UAV is improved.
  • This embodiment is a refinement of step S40 in the first embodiment.
  • the difference between this embodiment and the first embodiment of this application lies in:
  • Step S41 Perform path tracing from the initial position toward the destination position in the superimposed model, and determine whether there are obstacles in the tracing path, whether the tracing-path repetition rate is greater than a preset repetition rate, and/or whether there are at least two tracing paths;
  • the terminal traces paths from the initial position toward the destination position in the superimposed model; that is, the superimposed model contains building component information and non-building component information, the terminal treats the building component information and non-building component information as obstacles, and the terminal performs path tracing while avoiding these obstacles.
  • specifically, the terminal determines whether there are obstacles in the tracing path (obstacles can be walls, lamps, decorations, etc.), whether the repetition rate of the tracing path is greater than the preset repetition rate (the preset repetition rate refers to a preset proportion of repeated path in the full path; for example, the preset repetition rate is set to 30%), and/or whether there are at least two tracing paths.
  • Step S42 If there are obstacles in the tracing path, change the tracing direction; if the tracing-path repetition rate is greater than the preset repetition rate, discard the tracing path; and/or if at least two tracing paths are obtained, use the tracing path with the shortest distance as the UAV navigation route, and control the UAV to operate according to the navigation route.
  • if there are obstacles in the tracing path, the terminal determines that the path has reached its end and changes the path tracing direction; if the repetition rate of the tracing path is greater than the preset repetition rate, the terminal determines that the path is a repetitive path and abandons the tracing path; and/or if at least two tracing paths are obtained,
  • the terminal uses the shortest tracing path as the drone navigation route and controls the drone to operate according to the navigation route.
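  • the sketch below illustrates these rules on a 2D occupancy grid derived from the superimposed model; breadth-first search is used here purely as one concrete way to obtain the shortest obstacle-free route (the patent does not prescribe a particular search algorithm), and marking visited cells keeps the repetition rate at zero:

```python
# Trace a route from the initial cell to the destination cell on an occupancy
# grid built from the superimposed model: cells marked 1 are obstacles
# (building components, furniture, ...). BFS returns the shortest free path.
from collections import deque


def trace_route(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path                          # shortest path found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in visited:
                visited.add((nr, nc))            # never revisit: repetition rate stays 0
                queue.append(path + [(nr, nc)])
    return None                                  # no obstacle-free route exists


grid = [[0, 0, 0],
        [1, 1, 0],                               # wall between start and goal
        [0, 0, 0]]
route = trace_route(grid, start=(0, 0), goal=(2, 0))
# route: [(0,0), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0)]
```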
  • the path generation method is given in this embodiment, which effectively ensures the rationality of the UAV navigation route and makes the UAV navigation more accurate.
  • This embodiment is a step after step S40 in the first embodiment.
  • the difference between this embodiment and the first embodiment of the application lies in:
  • Step S50 When it is detected that the UAV deviates from the navigation route, a route control instruction is sent to the UAV to make the UAV return to the navigation route.
  • the terminal monitors the drone's running path information in real time.
  • the running path information includes the running speed, running route, and running time.
  • the terminal judges whether the drone deviates from the navigation route based on the drone's running route information. If it deviates from the navigation route, the terminal sends a route control instruction to the UAV so that the UAV can return to the navigation route according to the route control instruction.
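  • a minimal sketch of such a deviation check, assuming the navigation route is stored as a list of 3D waypoints and that "deviating" means being farther than a tolerance from the nearest waypoint (the tolerance value is an assumption):

```python
# Detect whether the drone has left the navigation route: if its distance to
# the nearest route waypoint exceeds a tolerance, a route-control instruction
# (here just a corrective target) is issued. The threshold is an assumed value.
import numpy as np


def check_deviation(position, route, tolerance_m=0.5):
    route = np.asarray(route, dtype=float)
    distances = np.linalg.norm(route - position, axis=1)
    nearest = int(np.argmin(distances))
    deviated = distances[nearest] > tolerance_m
    # When deviated, steer back towards the nearest point on the route.
    return deviated, route[nearest]


route = [(0.0, 0.0, 1.5), (1.0, 0.0, 1.5), (2.0, 0.0, 1.5)]
deviated, target = check_deviation(np.array([1.1, 0.9, 1.5]), route)
# deviated is True (about 0.9 m off the route); target is waypoint (1.0, 0.0, 1.5)
```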
  • Step S60 If the UAV does not return to the navigation route within a preset time period, send an information collection instruction to the UAV so that the UAV feeds back current operating parameters.
  • the terminal sends an information collection instruction to the drone; upon receiving the information collection instruction sent by the terminal, the UAV obtains the current operating parameters.
  • the operating parameters include operating time and operating route.
  • the UAV feeds back the current operating parameters to the mobile terminal.
  • Step S70 Receive current operating parameters fed back by the drone, and if the current operating parameters are abnormal, output prompt information.
  • the terminal receives the current operating parameters fed back by the drone.
  • the terminal compares the current operating parameters with preset standard operating parameters to determine whether the current operating parameters meet the standard operating parameters; if the current operating parameters do not meet the standard operating parameters, the terminal determines that the current operating parameters are abnormal, judges that the drone is faulty, and outputs prompt information.
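  • one simple way to realise this comparison, assuming the standard operating parameters are expressed as allowed ranges (the parameter names and ranges below are made up for illustration):

```python
# Compare the operating parameters fed back by the drone against preset
# standard ranges; any out-of-range value is treated as abnormal and a
# prompt is output. The parameter names and ranges are assumptions.
STANDARD_RANGES = {
    "battery_pct": (20.0, 100.0),
    "speed_mps": (0.0, 2.0),
    "altitude_m": (0.2, 3.0),
}


def check_operating_parameters(current: dict) -> list[str]:
    abnormal = []
    for name, value in current.items():
        low, high = STANDARD_RANGES.get(name, (float("-inf"), float("inf")))
        if not low <= value <= high:
            abnormal.append(f"{name}={value} outside [{low}, {high}]")
    return abnormal


issues = check_operating_parameters({"battery_pct": 12.0, "speed_mps": 1.0})
if issues:
    print("Drone may be faulty:", "; ".join(issues))  # prompt information
```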
  • the terminal monitors the operating status of the drone. When the drone fails, the terminal can output prompt information in real time to perform timely maintenance of the drone.
  • an embodiment of the present application also proposes an indoor navigation device for a drone, and the indoor navigation device for a drone includes:
  • the request receiving module 10 is used to obtain the initial position and the target position of the UAV when the UAV navigation request is received;
  • the information collection module 20 is configured to collect the operating state information of the drone through the first collection device in the drone, and to collect surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone;
  • the model construction module 30 is configured to construct a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and process the feature point cloud to obtain a point cloud model of the shooting object;
  • the route generation module 40 is configured to superimpose the point cloud model and the preset BIM model to obtain a superimposed model, generate a navigation route to the destination position according to the superimposed model, and control the drone to operate according to the navigation route.
  • the first acquisition device includes at least one inertial measurement device and at least one perception camera, and the second acquisition device includes at least one depth camera;
  • the information collection module 20 includes:
  • the first collection module is used to adjust the height and shooting angle of the drone at the initial position, collect the direction information of the drone through the inertial measurement device, collect feature images through the sensing camera, analyze the feature images to obtain the relative position information of the drone, and use the direction information and the relative position information as the operating state information of the drone;
  • the second collection module is used to transmit infrared pulses to the shooting object through the depth camera, receive the infrared pulses reflected by the shooting object and the reflection time of the infrared pulses, process the reflection time to obtain the depth image information of the shooting object, and use the depth image information as the surrounding environment information.
  • the model construction module 30 includes:
  • the attitude calculation unit is used to extract the direction information and relative position information from the operating state information, and iterate the direction information and the relative position information to obtain the attitude change value of the first collection device in the UAV;
  • a point cloud determining unit configured to extract the depth image information in the surrounding environment information, iterate the depth image information according to the attitude change value, and obtain the characteristic point cloud of the drone photographed object;
  • the model generating unit is configured to process the characteristic point cloud by using a preset SLAM algorithm to obtain a point cloud model of the shooting object.
  • the route generation module 40 includes:
  • the information comparison sub-module is used to determine the reference position corresponding to the initial position in the preset BIM model, and compare the edge information in the point cloud model with the edge information at the reference position in the preset BIM model to obtain the minimum distance between the point cloud model and the preset BIM model;
  • the model superimposition sub-module is used to superimpose the point cloud model and the preset BIM model according to the minimum distance to obtain a superimposed model
  • the route generation sub-module is used to trace the path from the initial position according to the overlay model to obtain the navigation route to the destination position, and control the drone to operate according to the navigation route.
  • the route generation sub-module includes:
  • the retrospective judgment unit is used to trace the path from the initial position in the superimposition model along the destination location to determine whether there are obstacles in the retrospective path, whether the retrospective path repetition rate is greater than a preset repetition rate and/or whether there is At least two tracing paths;
  • the control operation unit is used to change the path tracing direction if there are obstacles in the tracing path, abandon the tracing path if the repetition rate of the tracing path is greater than the preset repetition rate, and/or, if at least two tracing paths are obtained, use the tracing path with the shortest distance as the drone navigation route and control the drone to operate according to the navigation route.
  • the indoor navigation device for drones includes:
  • a route monitoring module configured to send a route control instruction to the drone when it is detected that the drone deviates from the navigation route, so that the drone returns to the navigation route;
  • An instruction sending module configured to send an information collection instruction to the drone if the drone does not return to the navigation route within a preset time period, so that the drone can feed back current operating parameters
  • the prompt output module is used to receive the current operating parameters fed back by the drone, and output prompt information if the current operating parameters are abnormal.
  • the steps implemented by the various functional modules of the UAV indoor navigation device can refer to the various embodiments of the UAV indoor navigation method of the present application, which will not be repeated here.
  • the embodiment of the present application also proposes a computer storage medium, and the computer-readable storage medium may be non-volatile or volatile.
  • the computer storage medium stores a computer program that, when executed by a processor, implements the operations in the UAV indoor navigation method provided in the above embodiments, wherein the UAV indoor navigation method includes the following steps: when receiving the drone navigation request, obtain the initial position and the destination position of the drone; collect the operating state information of the drone through the first collection device in the drone, and collect surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone; construct a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and process the feature point cloud to obtain the point cloud model of the shooting object; superimpose the point cloud model and the preset BIM model to obtain a superimposed model, generate a navigation route to the destination position according to the superimposed model, and control the drone to operate according to the navigation route.
  • the description of the device embodiments is relatively simple; for related parts, please refer to the description of the method embodiments.
  • the device embodiments described above are merely illustrative, and the units described as separate components may or may not be physically separate. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solution of the present application. Those of ordinary skill in the art can understand and implement without creative work.
  • the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disk), and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the method described in each embodiment of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Navigation (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

An indoor navigation method and apparatus for an unmanned aerial vehicle, a device and a storage medium. The method comprises: upon receiving an unmanned aerial vehicle navigation request, obtaining an initial position and a destination position of an unmanned aerial vehicle (S10); acquiring operating state information of the unmanned aerial vehicle by means of a first acquisition apparatus of the unmanned aerial vehicle, and acquiring surrounding environment information of different heights and different photographing angles at the initial position by means of a second acquisition apparatus of the unmanned aerial vehicle (S20); constructing a feature point cloud of a photographed object according to the operating state information and the surrounding environment information, and processing the feature point cloud to obtain a point cloud model of the photographed object (S30); and superposing the point cloud model and a preset BIM model to obtain a superposed model, generating a navigation route to the destination position according to the superposed model, and controlling the unmanned aerial vehicle to operate according to the navigation route (S40). Indoor automatic navigation for unmanned aerial vehicles is achieved, and the accuracy of indoor navigation for unmanned aerial vehicles is improved.

Description

UAV indoor navigation method, apparatus, device and storage medium

This application claims priority to the Chinese patent application filed with the Chinese Patent Office on February 12, 2020, with application number 202010089061.5 and invention title "UAV indoor navigation method, apparatus, device and storage medium", the entire content of which is incorporated into this application by reference.

Technical Field

This application relates to the field of artificial intelligence technology, and in particular to an indoor navigation method, apparatus, device and storage medium for an unmanned aerial vehicle.

Background

Drones currently capable of autonomous flight rely on satellite signals for position information and then use an existing map to plan a route based on that position. However, a drone cannot receive GPS signals in sheltered environments such as indoor spaces and tunnels. Therefore, indoor positioning and navigation of UAVs is usually performed in one of the following ways:

1. Positioning based on Bluetooth Low Energy or a wireless network: to achieve high accuracy, this technique requires a grid of Bluetooth signal sources to be deployed indoors in advance, with positioning then performed by computing signal strength on the client, which makes indoor positioning costly;

2. Positioning based on the cellular network: this technique requires no prior network deployment, and any ordinary device that can receive mobile-phone signals can be positioned by comparing signals from multiple base stations, but the positioning accuracy is difficult to guarantee;

3. Positioning based on radio-frequency identification: this technique requires RFID equipment to be deployed in advance, and the accuracy of indoor navigation and positioning is still not ideal;

4. Visual positioning: this kind of system uses a camera to capture images of the surrounding environment and determines the position by comparing them with information entered in advance. This approach cannot distinguish different partitions with the same layout (for example, different hotel rooms with identical decoration), and because the environment is changeable, a drone may be unable to avoid obstacles smoothly in a relatively confined space.

The inventor realized that with the above approaches a drone cannot obtain the actual state of itself and its surroundings during flight; when an unexpected situation occurs around the drone, it cannot respond in time, which makes it difficult for the drone to perform autonomous positioning and navigation in an indoor environment.

Summary of the Invention

Technical Problem

Solution to the Problem

Technical Solution

The main purpose of this application is to provide a UAV indoor navigation method, apparatus, device and storage medium, aiming to solve the technical problem that current UAV indoor navigation is costly and of low accuracy.

In order to achieve the above objective, this application provides a UAV indoor navigation method. The UAV indoor navigation method includes the following steps:

when receiving a drone navigation request, obtaining the initial position and destination position of the drone;

collecting the operating state information of the drone through a first collection device in the drone, and collecting surrounding environment information at different heights and different shooting angles at the initial position through a second collection device in the drone;

constructing a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and processing the feature point cloud to obtain a point cloud model of the shooting object;

superimposing the point cloud model and a preset BIM model to obtain a superimposed model, generating a navigation route to the destination position according to the superimposed model, and controlling the drone to operate according to the navigation route.

In addition, in order to achieve the above objective, this application also provides a UAV indoor navigation apparatus. The UAV indoor navigation apparatus includes:

a request receiving module, configured to obtain the initial position and destination position of the drone when a drone navigation request is received;

an information collection module, configured to collect the operating state information of the drone through the first collection device in the drone, and to collect surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone;

a model construction module, configured to construct a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and to process the feature point cloud to obtain a point cloud model of the shooting object;

a route generation module, configured to superimpose the point cloud model and the preset BIM model to obtain a superimposed model, generate a navigation route to the destination position according to the superimposed model, and control the drone to operate according to the navigation route.

In addition, in order to achieve the above objective, this application also provides a UAV indoor navigation device;

the UAV indoor navigation device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the UAV indoor navigation method described above, and the UAV indoor navigation method includes at least the following steps: when receiving a drone navigation request, obtaining the initial position and destination position of the drone; collecting the operating state information of the drone through the first collection device in the drone, and collecting surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone; constructing a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and processing the feature point cloud to obtain a point cloud model of the shooting object; and superimposing the point cloud model with the preset BIM model to obtain a superimposed model, generating a navigation route to the destination position according to the superimposed model, and controlling the drone to operate according to the navigation route.

In addition, in order to achieve the above objective, this application also provides a computer storage medium; the computer storage medium stores a computer program, and when the computer program is executed by a processor, the steps of the UAV indoor navigation method described above are implemented, wherein the UAV indoor navigation method includes at least the following steps: when receiving a drone navigation request, obtaining the initial position and destination position of the drone; collecting the operating state information of the drone through the first collection device in the drone, and collecting surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone; constructing a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and processing the feature point cloud to obtain a point cloud model of the shooting object; and superimposing the point cloud model with the preset BIM model to obtain a superimposed model, generating a navigation route to the destination position according to the superimposed model, and controlling the drone to operate according to the navigation route.

In the UAV indoor navigation method, apparatus, device and storage medium proposed in the embodiments of this application, when a terminal receives a drone navigation request, it obtains the initial position and destination position of the drone; collects the operating state information of the drone through the first collection device in the drone, and collects surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone; constructs a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and processes the feature point cloud to obtain a point cloud model of the shooting object; superimposes the point cloud model with the preset BIM model to obtain a superimposed model, generates a navigation route to the destination position according to the superimposed model, and controls the drone to operate according to the navigation route. The technical solution of this embodiment requires no additional hardware cost, and information about building components, doors, windows and other obstacles to UAV navigation can be accurately determined through the superimposed model, so that the drone can better avoid obstacles and the accident rate is reduced. Indoor automatic navigation of the UAV is thus realized, and the accuracy of UAV indoor navigation is improved.

Beneficial Effects of the Invention

Brief Description of the Drawings

Description of the Drawings

FIG. 1 is a schematic diagram of the device structure of the hardware operating environment involved in the solutions of the embodiments of this application;

FIG. 2 is a schematic flowchart of a first embodiment of the UAV indoor navigation method of this application;

FIG. 3 is a schematic diagram of the functional modules of an embodiment of the UAV indoor navigation apparatus of this application.

The realization of the objectives, functional characteristics and advantages of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.

Embodiments of the Invention

Embodiments of the Present Invention

It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit this application.

如图1所示,图1是本申请实施例方案涉及的硬件运行环境的终端(又叫无人机室内导航设备,其中,无人机室内导航设备可以是由单独的无人机室内导航装置构成,也可以是由其他装置与无人机室内导航装置组合形成)结构示意图。As shown in Figure 1, Figure 1 is the terminal of the hardware operating environment involved in the solution of the embodiment of the application (also called the UAV indoor navigation device, where the UAV indoor navigation device can be a separate UAV indoor navigation device The structure can also be formed by combining other devices with the UAV indoor navigation device) schematic diagram of the structure.

本申请实施例终端可以固定终端,也可以是移动终端,如,带联网功能的智能空调、智能电灯、智能电源、智能音箱、自动驾驶汽车、PC(personal computer)个人计算机、智能手机、平板电脑、电子书阅读器、便携计算机等。The terminal in the embodiments of this application can be a fixed terminal or a mobile terminal, such as smart air conditioners with networking functions, smart lights, smart power supplies, smart speakers, autonomous vehicles, PC (personal computer) personal computers, smart phones, and tablet computers. , E-book readers, portable computers, etc.

As shown in FIG. 1, the terminal may include a processor 1001 (for example, a central processing unit, CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wireless-Fidelity, WiFi, interface). The memory 1005 may be a high-speed RAM memory, or a stable non-volatile memory such as a magnetic disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.

Optionally, the terminal may further include a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit and a WiFi module; input units such as a display screen and a touch screen; and, besides WiFi, other wireless network interfaces such as Bluetooth and probe interfaces. The sensors include, for example, light sensors, motion sensors and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; of course, the mobile terminal may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which will not be repeated here.

本领域技术人员可以理解,图1中示出的终端结构并不构成对终端的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。Those skilled in the art can understand that the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal, and may include more or less components than shown in the figure, or combine some components, or arrange different components.

As shown in FIG. 1, the computer software product is stored in a storage medium (a storage medium is also called a computer storage medium, computer medium, readable medium, readable storage medium, computer-readable storage medium, or simply a medium; the storage medium may be a non-volatile readable storage medium such as a RAM, a magnetic disk or an optical disk) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present application. The memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a computer program.

在图1所示的终端中,网络接口1004主要用于连接后台服务器,与后台服务器进行数据通信;用户接口1003主要用于连接客户端(用户端),与客户端进行数据通信;而处理器1001可以用于调用存储器1005中存储的计算机程序,并执行本申请以下实施例提供的无人机室内导航方法中的步骤。In the terminal shown in FIG. 1, the network interface 1004 is mainly used to connect to a back-end server and communicate with the back-end server; the user interface 1003 is mainly used to connect to a client (user side) to communicate with the client; and the processor 1001 can be used to call a computer program stored in the memory 1005 and execute the steps in the UAV indoor navigation method provided in the following embodiments of the present application.

基于上述硬件运行环境的提出了本申请无人机室内导航方法的实施例。Based on the above hardware operating environment, an embodiment of the UAV indoor navigation method of the present application is proposed.

In the embodiments of the indoor navigation method for a drone of the present application, given two points in the indoor space and a point cloud model generated in real time, an indoor navigation route for the drone can be planned automatically so that the drone can navigate by itself. Specifically:

Referring to FIG. 2, in the first embodiment of an indoor navigation method for a drone of the present application, the indoor navigation method for a drone includes:

步骤S10,在接收到无人机导航请求时,获取无人机的初始位置和目的位置。In step S10, when the drone navigation request is received, the initial position and the target position of the drone are acquired.

The indoor navigation method for a drone in this embodiment is applied to a terminal (also called a drone indoor navigation device). The terminal is in communication connection with the drone and can control the drone. Collection devices are preset in the drone and are used to collect the surrounding environment information of the drone and the operating state information of the drone (the operating state information includes the running direction and the relative displacement information of the drone). The collection devices include, but are not limited to, inertial measurement units, perception cameras and depth cameras, and the specific numbers of inertial measurement units, perception cameras and depth cameras are not limited. The drone sends the collected surrounding environment information and operating state information to the terminal, and the terminal processes the surrounding environment information and the operating state information to obtain an indoor navigation route for the drone, thereby realizing accurate navigation of the drone.

Specifically, the terminal receives the drone navigation request. The manner in which the drone navigation request is triggered is not specifically limited; that is, the request may be actively triggered by the user, for example, the user clicks a button corresponding to drone navigation on the terminal display interface to trigger the drone navigation request. In addition, the drone navigation request may also be triggered automatically by the terminal; for example, a trigger condition is preset in the terminal, such as triggering a drone navigation request in the early morning of every day in order to collect the information of floor xxx, and the terminal automatically triggers the drone navigation request when the early morning arrives.
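
As an illustration of such a preset trigger condition, the sketch below (a hypothetical Python fragment; the function name and the one-o'clock trigger hour are assumptions, not part of this application) shows how a terminal might decide when to fire the scheduled navigation request:

```python
from datetime import datetime

def should_trigger_navigation_request(now=None, trigger_hour=1):
    """Illustrative preset trigger condition: fire the drone navigation request
    once the terminal clock reaches the configured early-morning hour."""
    now = now or datetime.now()
    return now.hour == trigger_hour

if should_trigger_navigation_request():
    pass  # build the request here, e.g. to collect the floor information mentioned above
```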

When the terminal receives the drone navigation request, the terminal acquires the initial position and the destination position of the drone; the initial position and the destination position of the drone may be set by the user. For example, the user enters on the terminal that the initial position of the drone is room 1 on floor 3 of building xxx and that the destination position is room 3 on floor 2 of building xxx.

本实施例中给出了一种确定无人机初始位置和目的位置的具体实现方式,包括以下步骤:In this embodiment, a specific implementation method for determining the initial position and the target position of the drone is provided, which includes the following steps:

步骤a1,在接收到无人机导航请求时,获取所述无人机当前所处建筑的建筑标识,及所述建筑标识关联的预设BIM模型;Step a1, when receiving the drone navigation request, obtain the building identifier of the building where the drone is currently located, and the preset BIM model associated with the building identifier;

Step a2: constructing a three-dimensional coordinate system based on the preset BIM model, taking the three-dimensional coordinates of the current position of the drone as the initial position of the drone, acquiring the navigation destination corresponding to the drone navigation request, and taking the three-dimensional coordinates of the navigation destination as the destination position of the drone.

That is, when the terminal receives the drone navigation request, the terminal acquires the building identifier of the building where the drone is currently located (the building identifier is identification information that uniquely identifies the building, for example, the building name or the building location information), and the terminal acquires the preset BIM model associated with the building identifier (the preset BIM model is a pre-configured BIM model, i.e. a Building Information Modeling model, associated with the building identifier, which contains the information of the building). The terminal constructs a three-dimensional coordinate system based on the preset BIM model, takes the three-dimensional coordinates of the current position of the drone as the initial position of the drone, acquires the navigation destination corresponding to the drone navigation request, and takes the three-dimensional coordinates of the navigation destination as the destination position of the drone.

本实施例中基于BIM模型构建三维坐标系,准确地确定无人机的初始位置和目的位置,以实现无人机的准确导航。In this embodiment, a three-dimensional coordinate system is constructed based on the BIM model to accurately determine the initial position and target position of the drone, so as to realize accurate navigation of the drone.
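
A minimal sketch of how steps a1 and a2 could be realized is given below; the lookup table, the room labels, the coordinate values and the function name are illustrative assumptions and do not come from this application:

```python
# Hypothetical lookup structures: a preset BIM model keyed by building identifier,
# with room labels resolved to 3D coordinates (metres) in the model's coordinate system.
PRESET_BIM_MODELS = {
    "building_xxx": {
        "rooms": {
            "floor3_room1": (12.0, 4.5, 9.0),
            "floor2_room3": (30.0, 8.0, 6.0),
        },
    },
}

def resolve_navigation_request(building_id, current_room, destination_room):
    """Step a1: look up the preset BIM model by building identifier.
    Step a2: return the drone's current position and the navigation destination
    as 3D coordinates in the coordinate system built from that model."""
    model = PRESET_BIM_MODELS[building_id]
    initial_position = model["rooms"][current_room]
    destination_position = model["rooms"][destination_room]
    return initial_position, destination_position

start, goal = resolve_navigation_request("building_xxx", "floor3_room1", "floor2_room3")
```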

Step S20: collecting the operating state information of the drone through the first collection device in the drone, and collecting the surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone.

Collection devices are preset in the drone and are divided, according to their purposes, into a first collection device and a second collection device. The first collection device is used to collect the operating state information of the drone and includes at least one inertial measurement unit and at least one perception camera; the second collection device is used to collect the surrounding environment information of the drone and includes at least one depth camera. For example, the drone carries one inertial measurement unit, one depth camera and four environment perception cameras, where the inertial measurement unit is responsible for sensing the direction information of the drone, the environment perception cameras are responsible for obtaining the relative displacement of the drone, and the depth camera is responsible for sensing the depth image information of the object photographed by the drone. Specifically, the step includes:

步骤b1,在所述初始位置处调整无人机的高度和拍摄角度,通过所述惯性测量器采集所述无人机的方向信息,通过所述感知摄像头采集特征图像,分析所述特征图像得到所述无人机的相对位置信息,将所述方向信息和所述相对位置信息作为所述无人机的运行状态信息;Step b1: Adjust the height and shooting angle of the drone at the initial position, collect the direction information of the drone through the inertial measurement device, collect characteristic images through the sensing camera, and analyze the characteristic images to obtain The relative position information of the drone, using the direction information and the relative position information as operating state information of the drone;

Step b2: emitting infrared pulses to the photographed object through the depth camera, receiving the infrared pulses reflected by the photographed object and the reflection time of the infrared pulses, processing the reflection time to obtain the depth image information of the photographed object, and taking the depth image information as the surrounding environment information.

本实施例中终端控制无人机在初始位置附近自主调整高度和拍摄角度,进行多角度拍摄得到特征图像,终端提取特征图像的特征点,终端分析特征图像得到无人机的相对位置信息,终端将方向信息和相对位置信息作为无人机的运行状态信息。终端通过深度摄像头发射红外线脉冲,通过计算反射时间来获得深度图像信息,也就是物体表面到摄像头的距离。In this embodiment, the terminal controls the drone to autonomously adjust the height and shooting angle near the initial position, and performs multi-angle shooting to obtain feature images. The terminal extracts feature points of the feature image, and the terminal analyzes the feature image to obtain the relative position information of the drone. The direction information and relative position information are used as the operating status information of the UAV. The terminal emits infrared pulses through the depth camera, and obtains the depth image information by calculating the reflection time, that is, the distance from the surface of the object to the camera.
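
The depth recovery described here follows the usual time-of-flight relation; the sketch below is an illustration only, with the factor of two accounting for the round trip of the infrared pulse:

```python
# Illustrative time-of-flight depth calculation: the emitted infrared pulse travels
# to the object and back, so the one-way object-to-camera distance is c * t / 2.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def depth_from_reflection_time(reflection_time_s):
    """Convert a measured round-trip reflection time into distance (metres)."""
    return SPEED_OF_LIGHT * reflection_time_s / 2.0

# Example: a reflection time of 20 nanoseconds corresponds to roughly 3 metres.
print(depth_from_reflection_time(20e-9))  # ≈ 2.998 m
```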

步骤S30,根据所述运行状态信息和所述周边环境信息构建拍摄对象的特征点云,处理所述特征点云得到所述拍摄对象的点云模型。Step S30: Construct a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and process the feature point cloud to obtain a point cloud model of the shooting object.

The terminal converts the depth image information in the surrounding environment information into a three-dimensional feature point cloud, and fuses the three-dimensional feature point cloud into a three-dimensional grid to obtain the point cloud model. That is, the terminal casts rays from the current position of the collection device and intersects them with the three-dimensional feature point cloud of the previous step to obtain the point cloud under the viewing angle of the current frame; at the same time, the terminal calculates its normal vectors, which are used to register the input depth image information of the next frame. This loop is repeated continuously to obtain feature point clouds under different viewing angles, so that the scene surface of the photographed object is reconstructed and the point cloud model is formed. Specifically, step S30 includes:

步骤b1,提取所述运行状态信息中的方向信息和相对位置信息,将所述方向信息和所述相对位置信息进行迭代,得到所述无人机中第一采集装置的姿态变化值;Step b1, extracting direction information and relative position information in the operating state information, and iterating the direction information and the relative position information to obtain the attitude change value of the first collecting device in the UAV;

步骤b2,提取所述周边环境信息中的深度图像信息,按所述姿态变化值迭代所述深度图像信息,获得所述无人机拍摄对象的特征点云;Step b2, extracting the depth image information in the surrounding environment information, and iterating the depth image information according to the attitude change value to obtain the feature point cloud of the drone photographed object;

步骤b3,通过预设的SLAM算法处理所述特征点云,获得所述拍摄对象的点云模型。Step b3, processing the characteristic point cloud by a preset SLAM algorithm to obtain a point cloud model of the shooting object.

Specifically, the terminal extracts the direction information and the relative position information from the operating state information, and iterates the direction information and the relative position information to obtain the attitude change value of the first collection device in the drone; the terminal extracts the depth image information from the surrounding environment information and iterates the depth image information according to the attitude change value to obtain the feature point cloud of the object photographed by the drone; the terminal processes the feature point cloud through a preset SLAM algorithm (Simultaneous Localization and Mapping) to obtain the point cloud model of the photographed object.

In this embodiment, the terminal determines the attitude change value of the drone, which is used to construct the feature point cloud of the photographed object, and obtains an approximate point cloud model of the photographed target according to the SLAM algorithm; based on the point cloud model, the terminal identifies in real time the obstacle information corresponding to non-building components.
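
As a hedged illustration of steps b1 and b2, the sketch below back-projects a depth image into a 3D point cloud using assumed pinhole intrinsics and then transforms it by the estimated pose change; an actual system would hand these points to the preset SLAM algorithm for registration and fusion, which is not reproduced here:

```python
import numpy as np

def depth_image_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-frame 3D points using
    assumed pinhole intrinsics; pixels with zero depth are discarded."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

def transform_points(points, rotation, translation):
    """Apply the attitude change (rotation matrix and translation vector) estimated
    from the IMU direction info and the perception cameras' relative displacement."""
    return points @ rotation.T + translation

# Illustrative use: a flat wall 2 m in front of a 640x480 depth camera, then shifted
# by an assumed pose change of 0.5 m along the camera's z axis.
depth = np.full((480, 640), 2.0)
cam_points = depth_image_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
world_points = transform_points(cam_points, np.eye(3), np.array([0.0, 0.0, 0.5]))
```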

步骤S40,将所述点云模型与预设BIM模型叠加得到叠加模型,根据所述叠加模型生成到达所述目的位置的导航路线,并控制所述无人机按所述导航路线运行。Step S40: Superimpose the point cloud model and the preset BIM model to obtain a superimposed model, generate a navigation route to the destination location according to the superimposed model, and control the drone to operate according to the navigation route.

A BIM model is preset in the terminal. The terminal determines the building component information according to the preset BIM model and determines the non-building component information according to the point cloud model. The terminal superimposes the BIM model and the point cloud model to obtain a superimposed model, and the superimposed model contains drone navigation obstacles such as the building component information and the non-building component information. The terminal generates a navigation route to the destination position according to the superimposed model and controls the drone to operate along the navigation route. Specifically, step S40 includes:

Step c1: determining the reference position corresponding to the initial position in the preset BIM model, and comparing the edge information in the point cloud model with the edge information at the reference position in the preset BIM model to obtain the minimum distance between the point cloud model and the preset BIM model;

步骤c2,将所述点云模型与预设BIM模型按所述最小距离进行叠加,得到叠加模型;Step c2, superimpose the point cloud model and the preset BIM model according to the minimum distance to obtain a superimposed model;

步骤c3,根据所述叠加模型从所述初始位置处进行路径追溯,得到到达所述目的位置处的导航路线,并控制所述无人机按所述导航路线运行。Step c3, trace back the path from the initial position according to the superimposed model, obtain a navigation route to the destination position, and control the UAV to operate according to the navigation route.

终端确定预设BIM模型中初始位置对应的基准位置,终端将点云模型中拍摄对象图像与预设BIM模型中基准位置处图像进行比对,得到两张图象边缘特征点距离最小距离,终端将点云模型与预设BIM模型按最小距离进行叠加,得到叠加模型。终端根据叠加模型从初始位置处进行路径追溯,得到到达目的位置处的导航路线,并控制无人机按导航路线运行。The terminal determines the reference position corresponding to the initial position in the preset BIM model. The terminal compares the image of the subject in the point cloud model with the image at the reference position in the preset BIM model to obtain the minimum distance between the edge feature points of the two images. The point cloud model and the preset BIM model are superimposed at the minimum distance to obtain the superimposed model. The terminal traces the path from the initial position according to the superimposed model, obtains the navigation route to the destination position, and controls the drone to operate according to the navigation route.
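
The comparison of edge information can be read as a registration step. The following sketch is a simplification under the assumption that candidate offsets are tested exhaustively: each offset is scored by the summed distance between edge feature points, and the offset with the minimum distance is kept (the application itself does not prescribe this particular matcher):

```python
import numpy as np

def best_overlay_offset(cloud_edges, bim_edges, candidate_offsets):
    """Pick the translation that minimises the summed nearest-neighbour distance
    between point-cloud edge points and BIM edge points at the reference position."""
    best_offset, best_cost = None, float("inf")
    for offset in candidate_offsets:
        shifted = cloud_edges + offset
        # distance from every shifted edge point to its closest BIM edge point
        dists = np.linalg.norm(shifted[:, None, :] - bim_edges[None, :, :], axis=-1)
        cost = dists.min(axis=1).sum()
        if cost < best_cost:
            best_offset, best_cost = offset, cost
    return best_offset

# Toy example: the point cloud edges match the BIM edges after shifting by -0.1 m in x.
cloud = np.array([[0.1, 0.0, 0.0], [1.1, 0.0, 0.0]])
bim = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
offsets = [np.array([dx, 0.0, 0.0]) for dx in (-0.2, -0.1, 0.0, 0.1)]
print(best_overlay_offset(cloud, bim, offsets))   # -> [-0.1  0.   0. ]
# The superimposed model then places the point cloud model into the BIM model at this offset.
```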

本实施例的技术方案中不需要额外的硬件费用,通过叠加模型准确地确定建筑构件、门窗等无人机导航障碍物的信息,使得无人机可以更好的避障,减少事故发生率,实现了无人机室内自动导航,提高了无人机室内导航的精度。The technical solution of this embodiment does not require additional hardware costs, and the information of building components, doors and windows and other UAV navigation obstacles can be accurately determined through the superimposed model, so that the UAV can better avoid obstacles and reduce the incidence of accidents. The indoor automatic navigation of the UAV is realized, and the accuracy of the indoor navigation of the UAV is improved.

进一步地,在本申请第一实施例的基础上,提出了本申请无人机室内导航方法的第二实施例。Further, on the basis of the first embodiment of the present application, a second embodiment of the indoor navigation method for drones of the present application is proposed.

本实施例是第一实施例中步骤S40的细化,本实施例与本申请第一实施例的区别在于:This embodiment is a refinement of step S40 in the first embodiment. The difference between this embodiment and the first embodiment of this application lies in:

Step S41: performing path tracing from the initial position toward the destination position in the superimposed model, and judging whether there is an obstacle on the traced path, whether the repetition rate of the traced path is greater than a preset repetition rate, and/or whether there are at least two traced paths.

The terminal performs path tracing from the initial position toward the destination position in the superimposed model; that is, the superimposed model contains building component information and non-building component information, the terminal treats the building component information and the non-building component information as obstacles, and the terminal avoids these obstacles while tracing paths. Specifically, the terminal judges whether there is an obstacle on the traced path (an obstacle may be a wall, a lamp, a decoration, and so on), whether the repetition rate of the traced path is greater than a preset repetition rate (the preset repetition rate is a pre-configured ratio of the repeated path to the full path, for example, 30%), and/or whether there are at least two traced paths.

Step S42: if there is an obstacle on the traced path, changing the path tracing direction; if the repetition rate of the traced path is greater than the preset repetition rate, abandoning the traced path; and/or, if at least two traced paths are obtained, taking the traced path with the shortest distance as the drone navigation route and controlling the drone to operate along the navigation route.

If there is an obstacle on the traced path, the terminal determines that the path has reached a dead end and changes the path tracing direction; if the repetition rate of the traced path is greater than the preset repetition rate, the terminal determines that the path is a repetitive path and abandons it; and/or, if at least two traced paths are obtained, the terminal takes the traced path with the shortest distance as the drone navigation route and controls the drone to operate along the navigation route. This embodiment provides a route generation method that effectively ensures the reasonableness of the drone navigation route and makes drone navigation more accurate.
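
A hedged sketch of the selection rules of steps S41 and S42 follows; it assumes the traced paths are already available as lists of grid cells, and the 30% threshold merely mirrors the example given above:

```python
def select_navigation_route(candidate_paths, obstacles, preset_repetition_rate=0.3):
    """Apply the rules of steps S41/S42: drop paths that hit an obstacle, drop paths
    whose repeated-cell ratio exceeds the preset repetition rate, and among the
    survivors keep the shortest one as the navigation route."""
    valid = []
    for path in candidate_paths:
        if any(cell in obstacles for cell in path):     # obstacle on the traced path
            continue                                    # trace in another direction instead
        repetition = 1 - len(set(path)) / len(path)     # share of revisited cells
        if repetition > preset_repetition_rate:         # repetitive path, abandon it
            continue
        valid.append(path)
    return min(valid, key=len) if valid else None       # shortest surviving path

route = select_navigation_route(
    candidate_paths=[[(0, 0), (0, 1), (1, 1)], [(0, 0), (1, 0), (1, 1), (1, 1)]],
    obstacles={(1, 0)},
)   # -> the first path: the second one passes through the obstacle cell (1, 0)
```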

进一步地,在本申请上述实施例的基础上,提出了本申请无人机室内导航方法的第三实施例。Further, on the basis of the foregoing embodiments of the present application, a third embodiment of the indoor navigation method for drones of the present application is proposed.

本实施例是第一实施例中步骤S40之后的步骤,本实施例与本申请第一实施例的区别在于:This embodiment is a step after step S40 in the first embodiment. The difference between this embodiment and the first embodiment of the application lies in:

步骤S50,在监测到所述无人机偏离所述导航路线时,发送路线控制指令至所述无人机,以使所述无人机回归所述导航路线。Step S50: When it is detected that the UAV deviates from the navigation route, a route control instruction is sent to the UAV to make the UAV return to the navigation route.

终端实时地监测无人机的运行路径信息,运行路径信息包括运行速度,运行路线和运行时间等等,终端根据无人机的运行路径信息,判断无人机是否偏离导航路线,若无人机偏离导航路线,终端发送路线控制指令至无人机,以使无人机根据路线控制指令回归导航路线。The terminal monitors the drone's running path information in real time. The running path information includes the running speed, running route, and running time. The terminal judges whether the drone deviates from the navigation route based on the drone's running route information. If it deviates from the navigation route, the terminal sends a route control instruction to the UAV so that the UAV can return to the navigation route according to the route control instruction.

步骤S60,若预设时间段内所述无人机没有回归所述导航路线,则发送信息采集指令至所述无人机,以使所述无人机反馈当前运行参数。Step S60: If the UAV does not return to the navigation route within a preset time period, send an information collection instruction to the UAV so that the UAV feeds back current operating parameters.

If the drone has not returned to the navigation route within a preset time period (the preset time period is set according to the specific scenario, for example, 2 minutes), the terminal sends an information collection instruction to the drone; the drone receives the collection instruction sent by the terminal, acquires its current operating parameters (the operating parameters include the running time and the running route), and feeds the current operating parameters back to the terminal.

步骤S70,接收所述无人机反馈的当前运行参数,若所述当前运行参数异常,则输出提示信息。Step S70: Receive current operating parameters fed back by the drone, and if the current operating parameters are abnormal, output prompt information.

The terminal receives the current operating parameters fed back by the drone, compares the current operating parameters with preset standard operating parameters, and judges whether the current operating parameters conform to the standard operating parameters. If the current operating parameters do not conform to the standard operating parameters, the terminal determines that the current operating parameters are abnormal, judges that the drone is faulty, and outputs prompt information. In this embodiment the terminal monitors the operating state of the drone; when the drone fails, the terminal can output prompt information in real time so that the drone can be repaired promptly.
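
A compact sketch of the monitoring flow of steps S50 to S70 is given below; the drone interface methods (deviates_from, send_route_control, request_parameters and so on) and the standard-parameter values are hypothetical stand-ins for whatever link and thresholds the terminal actually uses:

```python
import time

PRESET_TIME_PERIOD_S = 120                               # the 2-minute example used above
STANDARD_OPERATING_PARAMETERS = {"max_runtime_s": 600}   # illustrative standard values

def monitor_drone(drone, navigation_route):
    """Step S50: send a route control instruction when the drone deviates.
    Step S60: if it has not returned within the preset period, request its parameters.
    Step S70: compare them with the standard parameters and prompt on an anomaly."""
    if drone.deviates_from(navigation_route):
        drone.send_route_control(navigation_route)       # route control instruction
        deadline = time.time() + PRESET_TIME_PERIOD_S
        while time.time() < deadline:
            if drone.on_route(navigation_route):
                return
            time.sleep(1)
        params = drone.request_parameters()              # information collection instruction
        if params.get("runtime_s", 0) > STANDARD_OPERATING_PARAMETERS["max_runtime_s"]:
            print("Prompt: current operating parameters are abnormal, the drone may be faulty")
```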

此外,参照图3,本申请实施例还提出一种无人机室内导航装置,所述无人机室内导航装置包括:In addition, referring to Fig. 3, an embodiment of the present application also proposes an indoor navigation device for a drone, and the indoor navigation device for a drone includes:

请求接收模块10,用于在接收到无人机导航请求时,获取无人机的初始位置和目的位置;The request receiving module 10 is used to obtain the initial position and the target position of the UAV when the UAV navigation request is received;

The information collection module 20 is configured to collect the operating state information of the drone through the first collection device in the drone, and to collect the surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone;

模型构建模块30,用于根据所述运行状态信息和所述周边环境信息构建拍摄对象的特征点云,处理所述特征点云得到所述拍摄对象的点云模型;The model construction module 30 is configured to construct a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and process the feature point cloud to obtain a point cloud model of the shooting object;

路线生成模块40,用于将所述点云模型与预设BIM模型叠加得到叠加模型,根据所述叠加模型生成到达所述目的位置的导航路线,并控制所述无人机按所述导航路线运行。The route generation module 40 is configured to superimpose the point cloud model and the preset BIM model to obtain a superimposed model, generate a navigation route to the destination location according to the superimposed model, and control the drone to follow the navigation route run.

在一实施例中,所述第一采集装置包括至少一个惯性测量器和至少一个感知摄像头,所述第二采集装置包括至少一个深度摄像头;In an embodiment, the first acquisition device includes at least one inertial measurement device and at least one perception camera, and the second acquisition device includes at least one depth camera;

所述信息采集模块20,包括:The information collection module 20 includes:

The first collection module is configured to adjust the height and the shooting angle of the drone at the initial position, collect the direction information of the drone through the inertial measurement unit, collect feature images through the perception camera, analyze the feature images to obtain the relative position information of the drone, and take the direction information and the relative position information as the operating state information of the drone;

第二采集模块,用于通过所述深度摄像头发射红外线脉冲至拍摄对象,接收所 述拍摄对象反射的红外线脉冲及所述红外线脉冲的反射时间,处理所述反射时间获得拍摄对象的深度图像信息,将所述深度图像信息作为周边环境信息。The second acquisition module is used to transmit infrared pulses to the subject through the depth camera, receive the infrared pulse reflected by the subject and the reflection time of the infrared pulse, process the reflection time to obtain the depth image information of the subject, The depth image information is used as surrounding environment information.

在一实施例中,所述模型构建模块30,包括:In an embodiment, the model construction module 30 includes:

姿态计算单元,用于提取所述运行状态信息中的方向信息和相对位置信息,将所述方向信息和所述相对位置信息进行迭代,得到所述无人机中第一采集装置的姿态变化值;The attitude calculation unit is used to extract the direction information and relative position information in the operating state information, and iterate the direction information and the relative position information to obtain the attitude change value of the first collecting device in the UAV ;

点云确定单元,用于提取所述周边环境信息中的深度图像信息,按所述姿态变化值迭代所述深度图像信息,获得所述无人机拍摄对象的特征点云;A point cloud determining unit, configured to extract the depth image information in the surrounding environment information, iterate the depth image information according to the attitude change value, and obtain the characteristic point cloud of the drone photographed object;

模型生成单元,用于通过预设的SLAM算法处理所述特征点云,获得所述拍摄对象的点云模型。The model generating unit is configured to process the characteristic point cloud by using a preset SLAM algorithm to obtain a point cloud model of the shooting object.

在一实施例中,所述路线生成模块40,包括:In an embodiment, the route generation module 40 includes:

The information comparison sub-module is configured to determine the reference position corresponding to the initial position in the preset BIM model, and to compare the edge information in the point cloud model with the edge information at the reference position in the preset BIM model to obtain the minimum distance between the point cloud model and the preset BIM model;

模型叠加子模块,用于将所述点云模型与预设BIM模型按所述最小距离进行叠加,得到叠加模型;The model superimposition sub-module is used to superimpose the point cloud model and the preset BIM model according to the minimum distance to obtain a superimposed model;

路线生成子模块,用于根据所述叠加模型从所述初始位置处进行路径追溯,得到到达所述目的位置处的导航路线,并控制所述无人机按所述导航路线运行。The route generation sub-module is used to trace the path from the initial position according to the overlay model to obtain the navigation route to the destination position, and control the drone to operate according to the navigation route.

在一实施例中,所述路线生成子模块,包括:In an embodiment, the route generation sub-module includes:

追溯判断单元,用于从所述叠加模型中所述初始位置处沿所述目的位置进行路径追溯,判断追溯路径中是否存在障碍物、追溯路径重复率是否大于预设重复率和/或是否存在至少两条追溯路径;The retrospective judgment unit is used to trace the path from the initial position in the superimposition model along the destination location to determine whether there are obstacles in the retrospective path, whether the retrospective path repetition rate is greater than a preset repetition rate and/or whether there is At least two tracing paths;

The control operation unit is configured to change the path tracing direction if there is an obstacle on the traced path; to abandon the traced path if the repetition rate of the traced path is greater than the preset repetition rate; and/or, if at least two traced paths are obtained, to take the traced path with the shortest distance as the drone navigation route and control the drone to operate along the navigation route.

在一实施例中,所述的无人机室内导航装置,包括:In an embodiment, the indoor navigation device for drones includes:

路线监测模块,用于在监测到所述无人机偏离所述导航路线时,发送路线控制 指令至所述无人机,以使所述无人机回归所述导航路线;A route monitoring module, configured to send a route control instruction to the drone when it is detected that the drone deviates from the navigation route, so that the drone returns to the navigation route;

指令发送模块,用于若预设时间段内所述无人机没有回归所述导航路线,则发送信息采集指令至所述无人机,以使所述无人机反馈当前运行参数;An instruction sending module, configured to send an information collection instruction to the drone if the drone does not return to the navigation route within a preset time period, so that the drone can feed back current operating parameters;

提示输出模块,用于接收所述无人机反馈的当前运行参数,若所述当前运行参数异常,则输出提示信息。The prompt output module is used to receive the current operating parameters fed back by the drone, and output prompt information if the current operating parameters are abnormal.

其中,无人机室内导航装置的各个功能模块实现的步骤可参照本申请无人机室内导航方法的各个实施例,此处不再赘述。Among them, the steps implemented by the various functional modules of the UAV indoor navigation device can refer to the various embodiments of the UAV indoor navigation method of the present application, which will not be repeated here.

In addition, an embodiment of the present application further provides a computer storage medium; the computer-readable storage medium may be non-volatile or volatile. A computer program is stored on the computer storage medium, and when the computer program is executed by a processor, the operations in the indoor navigation method for a drone provided by the above embodiments are implemented, where the indoor navigation method for a drone includes the following steps: when a drone navigation request is received, acquiring the initial position and the destination position of the drone; collecting the operating state information of the drone through the first collection device in the drone, and collecting the surrounding environment information at different heights and different shooting angles at the initial position through the second collection device in the drone; constructing a feature point cloud of the photographed object according to the operating state information and the surrounding environment information, and processing the feature point cloud to obtain a point cloud model of the photographed object; superimposing the point cloud model and the preset BIM model to obtain a superimposed model, generating a navigation route to the destination position according to the superimposed model, and controlling the drone to operate along the navigation route.

需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体/操作/对象与另一个实体/操作/对象区分开来,而不一定要求或者暗示这些实体/操作/对象之间存在任何这种实际的关系或者顺序;术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者系统不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者系统所固有的要素。在没有更多限制的情况下,由语句“包括一个......”限定的要素,并不排除在包括该要素的过程、方法、物品或者系统中还存在另外的相同要素。It should be noted that in this article, relational terms such as first and second are only used to distinguish one entity/operation/object from another entity/operation/object, and do not necessarily require or imply these There is any such actual relationship or order between entities/operations/objects; the terms "include", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that the process, method, An article or system includes not only those elements, but also other elements that are not explicitly listed, or include elements inherent to the process, method, article, or system. Without more restrictions, the element defined by the sentence "including a..." does not exclude the existence of other identical elements in the process, method, article, or system that includes the element.

对于装置实施例而言,由于其基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示 意性的,其中作为分离部件说明的单元可以是或者也可以不是物理上分开的。可以根据实际的需要选择中的部分或者全部模块来实现本申请方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。As for the device embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for related parts, please refer to the part of the description of the method embodiment. The device embodiments described above are merely illustrative, and the units described as separate components may or may not be physically separate. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solution of the present application. Those of ordinary skill in the art can understand and implement without creative work.

上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。The serial numbers of the foregoing embodiments of the present application are for description only, and do not represent the superiority or inferiority of the embodiments.

Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disk) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present application.

以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。The above are only the preferred embodiments of the application, and do not limit the scope of the patent for this application. Any equivalent structure or equivalent process transformation made using the content of the description and drawings of the application, or directly or indirectly applied to other related technical fields , The same reason is included in the scope of patent protection of this application.

Claims (20)

一种无人机室内导航方法,其中,所述无人机室内导航方法包括以下步骤:An indoor navigation method for an unmanned aerial vehicle, wherein the indoor navigation method for an unmanned aerial vehicle includes the following steps: 在接收到无人机导航请求时,获取无人机的初始位置和目的位置;When receiving the drone navigation request, get the initial position and destination position of the drone; 通过所述无人机中的第一采集装置采集所述无人机的运行状态信息,通过所述无人机中的第二采集装置采集所述初始位置处不同高度和不同拍摄角度的周边环境信息;Collect the operating state information of the drone through the first collection device in the drone, and collect the surrounding environment at different heights and different shooting angles at the initial position through the second collection device in the drone information; 根据所述运行状态信息和所述周边环境信息构建拍摄对象的特征点云,处理所述特征点云得到所述拍摄对象的点云模型;Constructing a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and processing the feature point cloud to obtain a point cloud model of the shooting object; 将所述点云模型与预设BIM模型叠加得到叠加模型,根据所述叠加模型生成到达所述目的位置的导航路线,并控制所述无人机按所述导航路线运行。The point cloud model and the preset BIM model are superimposed to obtain a superimposed model, a navigation route to the destination location is generated according to the superimposed model, and the drone is controlled to operate according to the navigation route. 如权利要求1所述的无人机室内导航方法,其中,所述在接收到无人机导航请求时,获取无人机的初始位置和目的位置的步骤,包括:The indoor navigation method for a UAV according to claim 1, wherein the step of obtaining the initial position and the target position of the UAV when receiving the UAV navigation request comprises: 在接收到无人机导航请求时,获取所述无人机当前所处建筑的建筑标识,及所述建筑标识关联的预设BIM模型;When receiving the drone navigation request, obtain the building identifier of the building where the drone is currently located, and the preset BIM model associated with the building identifier; 基于所述预设BIM模型构建三维坐标系,将所述无人机当前位置的三维坐标作为所述无人机的初始位置,获取所述无人机导航请求对应的导航目的地,将所述导航目的地的三维作为所述无人机的目的位置。Construct a three-dimensional coordinate system based on the preset BIM model, use the three-dimensional coordinates of the current position of the drone as the initial position of the drone, obtain the navigation destination corresponding to the drone navigation request, and set the The three-dimensional navigation destination is used as the destination position of the drone. 如权利要求1所述的无人机室内导航方法,其中,所述第一采集装置包括至少一个惯性测量器和至少一个感知摄像头,所述第二采集装置包括至少一个深度摄像头;The indoor navigation method for an unmanned aerial vehicle according to claim 1, wherein the first acquisition device includes at least one inertial measurement device and at least one perception camera, and the second acquisition device includes at least one depth camera; 所述通过所述无人机中的第一采集装置采集所述无人机的运行状态信息,通过所述无人机中的第二采集装置采集所述初始位置处不同高度和不同拍摄角度的周边环境信息的步骤,包括:The first collecting device in the unmanned aerial vehicle collects the operating state information of the unmanned aerial vehicle, and the second collecting device in the unmanned aerial vehicle collects the information of different heights and different shooting angles at the initial position. 
The steps of surrounding environment information include: 在所述初始位置处调整无人机的高度和拍摄角度,通过所述惯性测量器采集所述无人机的方向信息,通过所述感知摄像头采集特征图像,分析所述特征图像得到所述无人机的相对位置信息,将所述方向信息和所述相对位置信息作为所述无人机的运行状态信息;The height and shooting angle of the drone are adjusted at the initial position, the direction information of the drone is collected by the inertial measurement device, the characteristic image is collected by the sensing camera, and the characteristic image is analyzed to obtain the The relative position information of the man-machine, using the direction information and the relative position information as the operating state information of the drone; 通过所述深度摄像头发射红外线脉冲至拍摄对象,接收所述拍摄对象反射的红外线脉冲及所述红外线脉冲的反射时间,处理所述反射时间获得拍摄对象的深度图像信息,将所述深度图像信息作为周边环境信息。Transmit infrared pulses to the subject through the depth camera, receive the infrared pulse reflected by the subject and the reflection time of the infrared pulse, process the reflection time to obtain the depth image information of the subject, and use the depth image information as Surrounding environment information. 如权利要求1所述的无人机室内导航方法,其中,所述根据所述运行状态信息和所述周边环境信息构建拍摄对象的特征点云,处理所述特征点云得到所述拍摄对象的点云模型的步骤,包括:The UAV indoor navigation method according to claim 1, wherein the feature point cloud of the shooting object is constructed according to the operating state information and the surrounding environment information, and the feature point cloud is processed to obtain the image of the shooting object The steps of the point cloud model include: 提取所述运行状态信息中的方向信息和相对位置信息,将所述方向信息和所述相对位置信息进行迭代,得到所述无人机中第一采集装置的姿态变化值;Extracting direction information and relative position information in the operating state information, and iterating the direction information and the relative position information to obtain the attitude change value of the first collecting device in the drone; 提取所述周边环境信息中的深度图像信息,按所述姿态变化值迭代所述深度图像信息,获得所述无人机拍摄对象的特征点云;Extracting the depth image information in the surrounding environment information, and iterating the depth image information according to the attitude change value to obtain a feature point cloud of the drone photographed object; 通过预设的SLAM算法处理所述特征点云,获得所述拍摄对象的点云模型。The characteristic point cloud is processed by a preset SLAM algorithm to obtain a point cloud model of the shooting object. 如权利要求1所述的无人机室内导航方法,其中,所述将所述点云模型与预设BIM模型叠加得到叠加模型,根据所述叠加模型生成到达所述目的位置的导航路线,并控制所述无人机按所述导航路线运行的步骤,包括:The UAV indoor navigation method according to claim 1, wherein said superimposing said point cloud model and a preset BIM model to obtain a superimposed model, and generating a navigation route to said destination position according to said superimposed model, and The steps of controlling the drone to operate according to the navigation route include: 确定预设BIM模型中所述初始位置对应的基准位置,将所述点云模型中的边缘信息与所述预设BIM模型中所述基准位置处的边缘信息进行比对,得到所述点云模型与所述预设BIM模型的最小距离;Determine the reference position corresponding to the initial position in the preset BIM model, and compare the edge information in the point cloud model with the edge information at the reference position in the preset BIM model to obtain the point cloud The minimum distance between the model and the preset BIM model; 将所述点云模型与预设BIM模型按所述最小距离进行叠加,得到 叠加模型;Superimposing the point cloud model and the preset BIM model according to the minimum distance to obtain a superimposed model; 根据所述叠加模型从所述初始位置处进行路径追溯,得到到达所述目的位置处的导航路线,并控制所述无人机按所述导航路线运行。Trace the path from the initial position according to the superimposed model, obtain a navigation route to the destination position, and control the drone to operate according to the navigation route. 
如权利要求5所述的无人机室内导航方法,其中,所述根据所述叠加模型从所述初始位置处进行路径追溯,得到到达所述目的位置处的导航路线,并控制所述无人机按所述导航路线运行的步骤,包括:The indoor navigation method for drones according to claim 5, wherein the path traceability is performed from the initial position according to the superimposed model to obtain a navigation route to the destination position, and control the unmanned The steps for the machine to operate according to the navigation route include: 从所述叠加模型中所述初始位置处沿所述目的位置进行路径追溯,判断追溯路径中是否存在障碍物、追溯路径重复率是否大于预设重复率和/或是否存在至少两条追溯路径;Perform path tracing along the destination position from the initial position in the superimposition model to determine whether there are obstacles in the tracing path, whether the repetition rate of the tracing path is greater than a preset repetition rate, and/or whether there are at least two tracing paths; 若追溯路径中存在障碍物,则更换路径追溯方向;若追溯路径重复率大于预设重复率,则放弃追溯路径;和/或若得到至少两条追溯路径,则将距离最短的追溯路径作为无人机导航路线,并控制所述无人机按所述导航路线运行。If there are obstacles in the tracing path, the path tracing direction is changed; if the repetition rate of the tracing path is greater than the preset repetition rate, the tracing path is abandoned; and/or if at least two tracing paths are obtained, the tracing path with the shortest distance is regarded as none The man-machine navigates the route, and controls the UAV to operate according to the navigation route. 如权利要求1至6任意一项所述的无人机室内导航方法,其中,所述将所述点云模型与预设BIM模型叠加得到叠加模型,根据所述叠加模型生成到达所述目的位置的导航路线,并控制所述无人机按所述导航路线运行的步骤之后,包括:The UAV indoor navigation method according to any one of claims 1 to 6, wherein the point cloud model and the preset BIM model are superimposed to obtain a superimposed model, and the destination position is generated according to the superimposed model. After the steps of controlling the UAV to operate according to the navigation route, the steps include: 在监测到所述无人机偏离所述导航路线时,发送路线控制指令至所述无人机,以使所述无人机回归所述导航路线;When it is detected that the UAV deviates from the navigation route, sending a route control instruction to the UAV to make the UAV return to the navigation route; 若预设时间段内所述无人机没有回归所述导航路线,则发送信息采集指令至所述无人机,以使所述无人机反馈当前运行参数;If the UAV does not return to the navigation route within the preset time period, sending an information collection instruction to the UAV so that the UAV feeds back current operating parameters; 接收所述无人机反馈的当前运行参数,若所述当前运行参数异常,则输出提示信息。Receive the current operating parameters fed back by the drone, and output prompt information if the current operating parameters are abnormal. 一种无人机室内导航装置,其中,所述无人机室内导航装置包括:An indoor navigation device for an unmanned aerial vehicle, wherein the indoor navigation device for an unmanned aerial vehicle includes: 请求接收模块,用于在接收到无人机导航请求时,获取无人机的 初始位置和目的位置;The request receiving module is used to obtain the initial position and destination position of the drone when receiving the drone navigation request; 信息采集模块,用于通过所述无人机中的第一采集装置采集所述无人机的运行状态信息,通过所述无人机中的第二采集装置采集所述初始位置处不同高度和不同拍摄角度的周边环境信息;The information collection module is used to collect the operating state information of the UAV through the first collection device in the UAV, and collect the different heights and heights at the initial position through the second collection device in the UAV. 
Surrounding environment information at different shooting angles; 模型构建模块,用于根据所述运行状态信息和所述周边环境信息构建拍摄对象的特征点云,处理所述特征点云得到所述拍摄对象的点云模型;A model construction module, configured to construct a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and process the feature point cloud to obtain a point cloud model of the shooting object; 路线生成模块,用于将所述点云模型与预设BIM模型叠加得到叠加模型,根据所述叠加模型生成到达所述目的位置的导航路线,并控制所述无人机按所述导航路线运行。The route generation module is used to superimpose the point cloud model and the preset BIM model to obtain a superimposed model, generate a navigation route to the destination location according to the superimposed model, and control the drone to operate according to the navigation route . 一种无人机室内导航设备,其中,所述无人机室内导航设备包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,其中:An indoor navigation device for an unmanned aerial vehicle, wherein the indoor navigation device for an unmanned aerial vehicle includes: a memory, a processor, and a computer program stored on the memory and running on the processor, wherein: 所述计算机程序被所述处理器执行时实现一种无人机室内导航方法的步骤,其中,所述无人机室内导航方法包括以下步骤:When the computer program is executed by the processor, the steps of a UAV indoor navigation method are realized, wherein the UAV indoor navigation method includes the following steps: 在接收到无人机导航请求时,获取无人机的初始位置和目的位置;When receiving the drone navigation request, get the initial position and destination position of the drone; 通过所述无人机中的第一采集装置采集所述无人机的运行状态信息,通过所述无人机中的第二采集装置采集所述初始位置处不同高度和不同拍摄角度的周边环境信息;Collect the operating state information of the drone through the first collection device in the drone, and collect the surrounding environment at different heights and different shooting angles at the initial position through the second collection device in the drone information; 根据所述运行状态信息和所述周边环境信息构建拍摄对象的特征点云,处理所述特征点云得到所述拍摄对象的点云模型;Constructing a feature point cloud of the shooting object according to the operating state information and the surrounding environment information, and processing the feature point cloud to obtain a point cloud model of the shooting object; 将所述点云模型与预设BIM模型叠加得到叠加模型,根据所述叠加模型生成到达所述目的位置的导航路线,并控制所述无人机按所述导航路线运行。The point cloud model and the preset BIM model are superimposed to obtain a superimposed model, a navigation route to the destination location is generated according to the superimposed model, and the drone is controlled to operate according to the navigation route. 如权利要求9所述的无人机室内导航设备,其中,所述在接收到无人机导航请求时,获取无人机的初始位置和目的位置的步骤,包括:The UAV indoor navigation device according to claim 9, wherein the step of obtaining the initial position and the target position of the UAV when the UAV navigation request is received comprises: 在接收到无人机导航请求时,获取所述无人机当前所处建筑的建筑标识,及所述建筑标识关联的预设BIM模型;When receiving the drone navigation request, obtain the building identifier of the building where the drone is currently located, and the preset BIM model associated with the building identifier; 基于所述预设BIM模型构建三维坐标系,将所述无人机当前位置的三维坐标作为所述无人机的初始位置,获取所述无人机导航请求对应的导航目的地,将所述导航目的地的三维作为所述无人机的目的位置。Construct a three-dimensional coordinate system based on the preset BIM model, use the three-dimensional coordinates of the current position of the drone as the initial position of the drone, obtain the navigation destination corresponding to the drone navigation request, and set the The three-dimensional navigation destination is used as the destination position of the drone. 
如权利要求9所述的无人机室内导航设备,其中,所述第一采集装置包括至少一个惯性测量器和至少一个感知摄像头,所述第二采集装置包括至少一个深度摄像头;The UAV indoor navigation device according to claim 9, wherein the first acquisition device includes at least one inertial measurement device and at least one perception camera, and the second acquisition device includes at least one depth camera; 所述通过所述无人机中的第一采集装置采集所述无人机的运行状态信息,通过所述无人机中的第二采集装置采集所述初始位置处不同高度和不同拍摄角度的周边环境信息的步骤,包括:The first collecting device in the unmanned aerial vehicle collects the operating state information of the unmanned aerial vehicle, and the second collecting device in the unmanned aerial vehicle collects the information of different heights and different shooting angles at the initial position. The steps of surrounding environment information include: 在所述初始位置处调整无人机的高度和拍摄角度,通过所述惯性测量器采集所述无人机的方向信息,通过所述感知摄像头采集特征图像,分析所述特征图像得到所述无人机的相对位置信息,将所述方向信息和所述相对位置信息作为所述无人机的运行状态信息;The height and shooting angle of the drone are adjusted at the initial position, the direction information of the drone is collected by the inertial measurement device, the characteristic image is collected by the sensing camera, and the characteristic image is analyzed to obtain the The relative position information of the man-machine, using the direction information and the relative position information as the operating state information of the drone; 通过所述深度摄像头发射红外线脉冲至拍摄对象,接收所述拍摄对象反射的红外线脉冲及所述红外线脉冲的反射时间,处理所述反射时间获得拍摄对象的深度图像信息,将所述深度图像信息作为周边环境信息。Transmit infrared pulses to the subject through the depth camera, receive the infrared pulse reflected by the subject and the reflection time of the infrared pulse, process the reflection time to obtain the depth image information of the subject, and use the depth image information as Surrounding environment information. 如权利要求9所述的无人机室内导航设备,其中,所述根据所述运行状态信息和所述周边环境信息构建拍摄对象的特征点云,处理所述特征点云得到所述拍摄对象的点云模型的步骤,包括:The UAV indoor navigation device according to claim 9, wherein the feature point cloud of the shooting object is constructed according to the operating state information and the surrounding environment information, and the feature point cloud is processed to obtain the image of the shooting object The steps of the point cloud model include: 提取所述运行状态信息中的方向信息和相对位置信息,将所述方向信息和所述相对位置信息进行迭代,得到所述无人机中第一采集装置的姿态变化值;Extracting direction information and relative position information in the operating state information, and iterating the direction information and the relative position information to obtain the attitude change value of the first collecting device in the drone; 提取所述周边环境信息中的深度图像信息,按所述姿态变化值迭 代所述深度图像信息,获得所述无人机拍摄对象的特征点云;Extracting the depth image information in the surrounding environment information, and iterating the depth image information according to the attitude change value to obtain a feature point cloud of the object photographed by the drone; 通过预设的SLAM算法处理所述特征点云,获得所述拍摄对象的点云模型。The characteristic point cloud is processed by a preset SLAM algorithm to obtain a point cloud model of the shooting object. 
如权利要求9所述的无人机室内导航设备,其中,所述将所述点云模型与预设BIM模型叠加得到叠加模型,根据所述叠加模型生成到达所述目的位置的导航路线,并控制所述无人机按所述导航路线运行的步骤,包括:The UAV indoor navigation device according to claim 9, wherein the superimposed model is obtained by superimposing the point cloud model and the preset BIM model, and generating a navigation route to the destination location according to the superimposed model, and The steps of controlling the drone to operate according to the navigation route include: 确定预设BIM模型中所述初始位置对应的基准位置,将所述点云模型中的边缘信息与所述预设BIM模型中所述基准位置处的边缘信息进行比对,得到所述点云模型与所述预设BIM模型的最小距离;Determine the reference position corresponding to the initial position in the preset BIM model, and compare the edge information in the point cloud model with the edge information at the reference position in the preset BIM model to obtain the point cloud The minimum distance between the model and the preset BIM model; 将所述点云模型与预设BIM模型按所述最小距离进行叠加,得到叠加模型;Superimposing the point cloud model and the preset BIM model according to the minimum distance to obtain a superimposed model; 根据所述叠加模型从所述初始位置处进行路径追溯,得到到达所述目的位置处的导航路线,并控制所述无人机按所述导航路线运行。Trace the path from the initial position according to the superimposed model, obtain a navigation route to the destination position, and control the drone to operate according to the navigation route. 如权利要求13所述的无人机室内导航设备,其中,所述根据所述叠加模型从所述初始位置处进行路径追溯,得到到达所述目的位置处的导航路线,并控制所述无人机按所述导航路线运行的步骤,包括:The UAV indoor navigation device according to claim 13, wherein the path is traced from the initial position according to the superimposed model to obtain a navigation route to the destination position, and control the unmanned The steps for the machine to operate according to the navigation route include: 从所述叠加模型中所述初始位置处沿所述目的位置进行路径追溯,判断追溯路径中是否存在障碍物、追溯路径重复率是否大于预设重复率和/或是否存在至少两条追溯路径;Perform path tracing along the destination position from the initial position in the superimposition model to determine whether there are obstacles in the tracing path, whether the repetition rate of the tracing path is greater than a preset repetition rate, and/or whether there are at least two tracing paths; 若追溯路径中存在障碍物,则更换路径追溯方向;若追溯路径重复率大于预设重复率,则放弃追溯路径;和/或若得到至少两条追溯路径,则将距离最短的追溯路径作为无人机导航路线,并控制所述无人机按所述导航路线运行。If there are obstacles in the tracing path, the path tracing direction is changed; if the repetition rate of the tracing path is greater than the preset repetition rate, the tracing path is abandoned; and/or if at least two tracing paths are obtained, the tracing path with the shortest distance is regarded as none The man-machine navigates the route, and controls the UAV to operate according to the navigation route. 
A computer storage medium, wherein a computer program is stored on the computer storage medium, and when the computer program is executed by a processor, the steps of a UAV indoor navigation method are implemented, the UAV indoor navigation method comprising the following steps:

obtaining an initial position and a destination position of a drone when a drone navigation request is received;

collecting operating state information of the drone through a first acquisition device in the drone, and collecting surrounding environment information at different heights and different shooting angles at the initial position through a second acquisition device in the drone;

constructing a feature point cloud of a photographed object according to the operating state information and the surrounding environment information, and processing the feature point cloud to obtain a point cloud model of the photographed object; and

superimposing the point cloud model and a preset BIM model to obtain a superimposed model, generating a navigation route to the destination position according to the superimposed model, and controlling the drone to operate according to the navigation route.

The computer storage medium according to claim 15, wherein the step of obtaining the initial position and the destination position of the drone when the drone navigation request is received comprises:

obtaining, when the drone navigation request is received, a building identifier of the building where the drone is currently located and a preset BIM model associated with the building identifier; and

constructing a three-dimensional coordinate system based on the preset BIM model, taking the three-dimensional coordinates of the current position of the drone as the initial position of the drone, obtaining a navigation destination corresponding to the drone navigation request, and taking the three-dimensional coordinates of the navigation destination as the destination position of the drone.

The computer storage medium according to claim 15, wherein the first acquisition device comprises at least one inertial measurement device and at least one perception camera, and the second acquisition device comprises at least one depth camera; and the step of collecting the operating state information of the drone through the first acquisition device in the drone and collecting the surrounding environment information at different heights and different shooting angles at the initial position through the second acquisition device in the drone comprises:

adjusting the height and shooting angle of the drone at the initial position, collecting direction information of the drone through the inertial measurement device, collecting characteristic images through the perception camera, analyzing the characteristic images to obtain relative position information of the drone, and taking the direction information and the relative position information as the operating state information of the drone; and

emitting infrared pulses to a photographed object through the depth camera, receiving the infrared pulses reflected by the photographed object and the reflection time of the infrared pulses, processing the reflection time to obtain depth image information of the photographed object, and taking the depth image information as the surrounding environment information.

The computer storage medium according to claim 15, wherein the step of constructing the feature point cloud of the photographed object according to the operating state information and the surrounding environment information and processing the feature point cloud to obtain the point cloud model of the photographed object comprises:

extracting the direction information and the relative position information from the operating state information, and iterating the direction information and the relative position information to obtain an attitude change value of the first acquisition device in the drone;

extracting the depth image information from the surrounding environment information, and iterating the depth image information according to the attitude change value to obtain the feature point cloud of the object photographed by the drone; and

processing the feature point cloud with a preset SLAM algorithm to obtain the point cloud model of the photographed object.

The computer storage medium according to claim 15, wherein the step of superimposing the point cloud model and the preset BIM model to obtain the superimposed model, generating the navigation route to the destination position according to the superimposed model, and controlling the drone to operate according to the navigation route comprises:

determining a reference position corresponding to the initial position in the preset BIM model, and comparing edge information in the point cloud model with edge information at the reference position in the preset BIM model to obtain a minimum distance between the point cloud model and the preset BIM model;

superimposing the point cloud model and the preset BIM model according to the minimum distance to obtain the superimposed model; and

performing path tracing from the initial position according to the superimposed model, obtaining the navigation route to the destination position, and controlling the drone to operate according to the navigation route.
The computer storage medium according to claim 19, wherein the step of performing path tracing from the initial position according to the superimposed model, obtaining the navigation route to the destination position, and controlling the drone to operate according to the navigation route comprises:

performing path tracing from the initial position toward the destination position in the superimposed model, and determining whether an obstacle exists in a traced path, whether a repetition rate of the traced path is greater than a preset repetition rate, and/or whether at least two traced paths exist; and

if an obstacle exists in the traced path, changing the path tracing direction; if the repetition rate of the traced path is greater than the preset repetition rate, abandoning the traced path; and/or if at least two traced paths are obtained, taking the traced path with the shortest distance as the drone navigation route, and controlling the drone to operate according to the navigation route.
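Purely as an illustration of the path-tracing logic in the claims above (the patent does not specify an algorithm), the sketch below runs a breadth-first search over a 2-D occupancy grid assumed to have been rasterised from the superimposed model. Blocked cells force a change of tracing direction, the visited set stands in for the repetition-rate test, and because breadth-first search expands paths in order of length, the first traced path that reaches the destination is also the shortest candidate route. The grid representation and four-connected neighbourhood are assumptions made for this sketch.

```python
# Illustrative sketch only -- not the claimed path-tracing procedure.
from collections import deque

def trace_route(grid, start, goal):
    """Breadth-first path tracing over an occupancy grid derived from the
    superimposed model.  Cells marked True are obstacles, so the tracing
    direction changes whenever a neighbour is blocked; the visited set keeps
    a traced path from repeating cells; the first path that reaches the
    destination is the shortest and is returned as the navigation route."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path                            # shortest traced path -> route
        r, c = path[-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not grid[nr][nc]           # obstacle: change tracing direction
                    and (nr, nc) not in visited):  # avoid re-traced cells
                visited.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None                                    # no traced path reaches the goal
```

Called as, for example, `trace_route(grid, (0, 0), (5, 7))`, it returns the sequence of grid cells for the drone to follow, or `None` if every tracing direction is blocked.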
PCT/CN2020/085853 2020-02-12 2020-04-21 Indoor navigation method and apparatus for unmanned aerial vehicle, device and storage medium Ceased WO2021159603A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010089061.5A CN111272172A (en) 2020-02-12 2020-02-12 Unmanned aerial vehicle indoor navigation method, device, equipment and storage medium
CN202010089061.5 2020-02-12

Publications (1)

Publication Number Publication Date
WO2021159603A1 true WO2021159603A1 (en) 2021-08-19

Family

ID=70997022

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/085853 Ceased WO2021159603A1 (en) 2020-02-12 2020-04-21 Indoor navigation method and apparatus for unmanned aerial vehicle, device and storage medium

Country Status (2)

Country Link
CN (1) CN111272172A (en)
WO (1) WO2021159603A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706716A (en) * 2021-10-21 2021-11-26 湖南省交通科学研究院有限公司 Highway BIM modeling method utilizing unmanned aerial vehicle oblique photography

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111741263B (en) * 2020-06-18 2021-08-31 广东电网有限责任公司 A multi-eye situational awareness navigation method for substation inspection drones
CN111880566B (en) * 2020-07-28 2024-08-09 中国银行股份有限公司 Method, device, storage medium and equipment for receiving and sending money by going to door based on unmanned aerial vehicle
CN115639832A (en) * 2021-07-19 2023-01-24 久瓴(江苏)数字智能科技有限公司 Unmanned aerial vehicle automatic cruise control method and device, computer equipment and storage medium
CN113485438B (en) * 2021-07-30 2022-03-18 南京石知韵智能科技有限公司 Intelligent planning method and system for space monitoring path of unmanned aerial vehicle
CN114384541A (en) * 2021-12-15 2022-04-22 武汉万集光电技术有限公司 Point cloud target detection method, terminal device and computer-readable storage medium
CN114527790A (en) * 2022-01-19 2022-05-24 歌尔科技有限公司 Method and device for acquiring flight indication data and electronic equipment
WO2023173409A1 (en) * 2022-03-18 2023-09-21 深圳市大疆创新科技有限公司 Display method and apparatus for information, comparison method and apparatus for models, and unmanned aerial vehicle system
CN117130392B (en) * 2023-10-26 2024-02-20 深圳森磊弘泰消防科技有限公司 Unmanned aerial vehicle for indoor positioning navigation based on BIM data and control method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160034013A (en) * 2014-09-19 2016-03-29 한국건설기술연구원 System and method for construction site management by using unmaned aerial vehicle
CN106441286B (en) * 2016-06-27 2019-11-19 上海大学 UAV tunnel inspection system based on BIM technology
CN109410330A (en) * 2018-11-12 2019-03-01 中国十七冶集团有限公司 One kind being based on BIM technology unmanned plane modeling method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104236548A (en) * 2014-09-12 2014-12-24 清华大学 Indoor autonomous navigation method for micro unmanned aerial vehicle
US20170193830A1 (en) * 2016-01-05 2017-07-06 California Institute Of Technology Controlling unmanned aerial vehicles to avoid obstacle collision
US20170206648A1 (en) * 2016-01-20 2017-07-20 Ez3D, Llc System and method for structural inspection and construction estimation using an unmanned aerial vehicle
WO2018048353A1 (en) * 2016-09-09 2018-03-15 Nanyang Technological University Simultaneous localization and mapping methods and apparatus
CN108303099A (en) * 2018-06-14 2018-07-20 江苏中科院智能科学技术应用研究院 Autonomous navigation method in unmanned plane room based on 3D vision SLAM
CN109410327A (en) * 2018-10-09 2019-03-01 鼎宸建设管理有限公司 A kind of three-dimension tidal current method based on BIM and GIS
CN109540142A (en) * 2018-11-27 2019-03-29 达闼科技(北京)有限公司 A kind of method, apparatus of robot localization navigation calculates equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113706716A (en) * 2021-10-21 2021-11-26 湖南省交通科学研究院有限公司 Highway BIM modeling method utilizing unmanned aerial vehicle oblique photography
CN113706716B (en) * 2021-10-21 2022-01-07 湖南省交通科学研究院有限公司 A Highway BIM Modeling Method Using UAV Oblique Photography

Also Published As

Publication number Publication date
CN111272172A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
WO2021159603A1 (en) Indoor navigation method and apparatus for unmanned aerial vehicle, device and storage medium
Chen et al. DroneTalk: An Internet-of-Things-based drone system for last-mile drone delivery
US11320834B2 (en) Methods and systems for mapping, localization, navigation and control and mobile robot
KR102289745B1 (en) System and method for real-time monitoring field work
AU2023275894A1 (en) Automated control of image acquisition via use of mobile device user interface
WO2021103987A1 (en) Control method for sweeping robot, sweeping robot, and storage medium
US10896327B1 (en) Device with a camera for locating hidden object
CN106168805A (en) The method of robot autonomous walking based on cloud computing
CN103389699A (en) Robot monitoring and automatic mobile system operation method based on distributed intelligent monitoring controlling nodes
US20250200800A1 (en) Information processing device, mobile device, information processing system, and method
CN110375739A (en) A kind of mobile terminal vision fusion and positioning method, system and electronic equipment
Ye et al. 6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features
JP2019041261A (en) Image processing system and image processing system setting method
Cui et al. Search and rescue using multiple drones in post-disaster situation
WO2019051832A1 (en) Movable object control method, device and system
CN112050814A (en) Unmanned aerial vehicle visual navigation system and method for indoor transformer substation
CN112991440A (en) Vehicle positioning method and device, storage medium and electronic device
US20230052360A1 (en) Information processing apparatus, information processing system, and information processing method, and program
WO2025037291A2 (en) Enhancement of the 3d indoor positioning by augmenting a multitude of 3d imaging, lidar distance corrections, imu sensors and 3-d ultrasound
Strecker et al. MR object identification and interaction: Fusing object situation information from heterogeneous sources
US12087172B2 (en) Image processing device and image processing method
EP4207100A1 (en) Method and system for providing user interface for map target creation
Canh et al. Multisensor data fusion for reliable obstacle avoidance
Kato A remote navigation system for a simple tele-presence robot with virtual reality
CN204495357U (en) A kind of many Quito module net merges indoor occupant navigation positioning system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20918418

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09/12/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20918418

Country of ref document: EP

Kind code of ref document: A1