
WO2023120908A1 - Drone having onboard flight control computer, and system for obtaining positional coordinates of drone camera video object by using drone - Google Patents


Info

Publication number
WO2023120908A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
drone
camera
information
flight control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/KR2022/015356
Other languages
French (fr)
Korean (ko)
Inventor
정진호
박용희
박진모
박성현
정유미
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dusitech Co Ltd
Original Assignee
Dusitech Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dusitech Co Ltd filed Critical Dusitech Co Ltd
Publication of WO2023120908A1 publication Critical patent/WO2023120908A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C: AEROPLANES; HELICOPTERS
    • B64C39/00: Aircraft not otherwise provided for
    • B64C39/02: Aircraft not otherwise provided for, characterised by special use
    • B64D: EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENT OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D43/00: Arrangements or adaptations of instruments
    • B64D47/00: Equipment not otherwise provided for
    • B64D47/08: Arrangements of cameras
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots

Definitions

  • The present invention relates to a drone equipped with an on-board flight control computer, and more particularly to acquiring the coordinates of an object seen in the drone camera's video during high-speed flight and determining the position of an object extracted by artificial intelligence.
  • The real-time video, drone flight information, and gimbal attitude information are processed by the on-board flight control computer, which converts them into global coordinates and standard attitude values and forms frames in a container format that fuses the video with its meta information.
  • The invention thus relates to an on-board flight control computer that separates the resulting MPEG stream, identifies objects in the video through AI-trained object recognition, and synchronizes each identified object's time and location coordinates calculated from the meta information, and to a drone camera video object location coordinate acquisition system using the same.
  • Image acquisition technology using drones is continuously evolving.
  • As artificial intelligence technology is applied to drones, the importance of identifying objects together with the location and time information of real-time drone video is growing, and demand for such service technology is expanding.
  • Conventionally, a drone's video and GPS flight information are received separately, the drone's GPS value is taken as the filming location, and the two are matched in storage to obtain the location coordinates filmed by the drone.
  • However, in the drone's real-time video frames, the flight attitude values of the drone and the attitude values of the camera gimbal change continuously and are output in different reference frames; they must be converted into a global standard attitude value and stored as meta information, and the altitude of the point the camera is looking at is especially important.
  • DGPS correction refines GPS altitude to within 1 m, and RTK correction to within 10 cm; however, the additional equipment these corrections require is difficult to miniaturize and lighten for drones, so they have not become widespread.
  • Drone video changes more severely than ordinary fixed-camera video as the flight path changes, and each video frame is compressed and transmitted as only the difference between the previous frame and the current frame.
  • As the scene changes, the frame data grows, and when transmitted wirelessly from the drone the irregular data volume causes transmission delay, so even if the drone operator identifies an object in the received video, the exact moment it was filmed cannot be determined.
  • The video obtained by the drone must be transmitted wirelessly to the drone operator.
  • Because video transmission causes large swings in data volume, the air-to-ground link must carry video together with position and attitude meta information, including camera specification information, at a constant rate within limited frequency resources; data transmission control is therefore required to ensure lossless transmission.
  • The video MPEG transmitted from the drone is received by the on-board computer of the smart controller and simultaneously forwarded to the ground control system (GCS) and the artificial intelligence processor. Because the meta information is used together with the geographic information system, the artificial intelligence processor can recognize objects in the video and acquire their location coordinates from the meta information through a learning engine. Since the method expresses the identified video location synchronized to the drone camera's position at the moment of filming, the frames in which the real-time video is expressed in actual location coordinates must be handled by on-board servers in the drone's on-board flight control computer and the smart controller, and the service must be provided with coordinate synchronization at the time of video shooting.
  • GCS ground control system
  • Patent No. 10-1943823 relates to a method for synchronizing UAV mission equipment to obtain precise position data during high-speed flight, and to UAV mission equipment for the same; this prior art is a synchronization method for photography used in 3D modeling with a drone.
  • The existing shooting method fixes the camera gimbal mounted on the drone to point straight down and synchronizes the drone's GPS coordinates and the gimbal's attitude information with the photos as meta information.
  • 3D modeling then generates orthoimages with stereo imaging techniques according to the degree of overlap between the photos.
  • For post-processing correction, the operator surveys the coordinates of ground control points (GCP) on the ground with the drone and commercial 3D modeling software.
  • GCP ground control point
  • This method determines the position of drone imagery by synchronizing the shooting position with GPS information along a photographing route, and is a technology for increasing the accuracy of the shooting position.
  • Registered Patent No. 10-2242366 is a drone-based automation device for placing ground control points for digital map creation at earthwork sites. Because the GPS error is large, RTK correction technology with centimeter precision is used to place virtual control points and create a 3D digital map; a method of taking pictures using RTK technology is presented.
  • Analog and HDMI communication are widely used as existing video transmission methods.
  • These are image dump methods: they transmit video in a fixed format to cope with the data growth during drone movement, but they cannot carry meta information.
  • Any meta information included in the video frame is deleted, so these video formats cannot be used for coordinate recognition.
  • RTK synchronization, a separate satellite navigation (GNSS) correction technology, is applied to increase the accuracy of location information in drone video, but it only improves the precision of the drone's own position; it does not calculate the location the camera is looking at in real time.
  • GNSS satellite navigation
  • In the prior art, the current video frame is transmitted as a compressed difference from the previous frame, so the exact time and place of each frame are unknown; when the data volume of a particular frame grows during wireless transmission from the drone, delay occurs, making it difficult to identify anomalies accurately even after an object is detected, with errors reaching hundreds of meters. Even with a continuous image stream, the operator cannot determine the location and time from the received video frames alone, and this problem keeps growing.
  • Fusing video processing with meta information requires a high-performance computer on the drone, so synchronization that includes precise attitude and angle of view, fused with the meta information, is difficult to implement with conventional on-board computing; when signal processing is dispersed, delays arise, and when large video frames change greatly in size, the video is delayed by the limited wireless frequency or by the CPU limits of the video transmitter.
  • The present invention therefore seeks a redundant server computer configuration that transmits the video from the drone through the smart controller to the ground control equipment and the artificial intelligence processor without losing the time coordinates of the video's digital information in each frame. The ground control system in the smart controller displays the drone's field of view on the geographic information system (GIS) with the video expressed in real-time coordinates, while the artificial intelligence processor identifies learned objects from the video and meta information in the MPEG stream, calculates their coordinates, and provides the service as artificial-intelligence mixed reality through smart glasses or monitors.
  • GIS geographic information system
  • The present invention solves the above disadvantages and problems of the prior art by configuring a Linux operating system server on the on-board flight controller (FC) and flight control computer board (FCC).
  • Real-time camera image acquisition, gimbal control information, flight controller attitude information, and airframe altitude and angle sensor information are clock-synchronized with GPS time information to process real-time video optimized for the camera and to optimize the signal synchronization of the on-board flight controller.
  • Through the interface between the camera gimbal and the flight controller, the present invention links the attitude of the drone body and the attitude of the camera gimbal in real time, obtains standard attitude information in the Earth coordinate system through an algorithm, and obtains the camera's angle of view and resolution. Because the drone's altimeter is an inexpensive sensor, RTK correction of less than 10 cm or DGPS correction of less than 1 m is linked to the GPS signal. The flight control computer creates meta information in each frame based on the real-time camera video, producing an MPEG structure in which a container structure is inserted into each time-synchronized frame.
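  • The real-time fusion of body attitude and gimbal attitude into a standard Earth-frame camera orientation described above can be sketched as rotation-matrix composition. The following is an illustrative sketch only, not the patent's actual algorithm: the NED (north-east-down) frame, the ZYX Euler convention, and all function names are assumptions.

```python
import math

def rot_x(a):  # roll about the forward axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # pitch about the right axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):  # yaw about the down axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def euler_to_matrix(yaw, pitch, roll):
    # ZYX convention: yaw, then pitch, then roll
    return matmul(matmul(rot_z(yaw), rot_y(pitch)), rot_x(roll))

def camera_boresight_ned(body_ypr, gimbal_ypr):
    """Compose the drone body attitude (in NED) with the gimbal attitude
    (relative to the body) and return the camera boresight as a unit
    vector in the NED Earth frame."""
    R = matmul(euler_to_matrix(*body_ypr), euler_to_matrix(*gimbal_ypr))
    # The camera looks along the body x (forward) axis, so the Earth-frame
    # boresight is the first column of the composed rotation matrix.
    return [R[0][0], R[1][0], R[2][0]]
```

  • For example, a level drone with the gimbal pitched 90 degrees down yields a boresight of approximately (0, 0, 1) in NED, i.e. pointing straight at the ground.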
  • When a frame is large, an image processing algorithm runs that automatically adjusts the frame so that the stream does not exceed the configured frequency bandwidth.
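  • The patent does not disclose the adjustment algorithm itself. The token-bucket sketch below illustrates one way such frame-level bandwidth regulation could work, under the assumption that frame skipping (rather than re-encoding) is the adjustment mechanism; the class and parameter names are hypothetical.

```python
class BandwidthLimiter:
    """Token-bucket sketch: admit a frame only when enough transmit
    budget has accumulated, so the stream never exceeds the configured
    link bandwidth (assumption: oversized frames are skipped)."""

    def __init__(self, bytes_per_sec, frame_interval_s):
        self.rate = bytes_per_sec
        self.dt = frame_interval_s
        self.tokens = bytes_per_sec * frame_interval_s  # one interval of credit

    def admit(self, frame_size):
        # Refill the budget for one frame interval, capped at 1 s of credit.
        self.tokens = min(self.tokens + self.rate * self.dt, self.rate)
        if frame_size <= self.tokens:
            self.tokens -= frame_size
            return True
        return False
```

  • With a 1000 B/s budget and 10 frames per second, frames of 150 bytes are periodically skipped so that the long-run transmitted rate stays at the budget; a frame larger than one second of budget is never admitted.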
  • The smart controller operated by the drone operator receives and processes the MPEG video from the drone and simultaneously transmits it to the ground control system (GCS) and the artificial intelligence processor. The ground control system receives the video MPEG, separates the video and meta information, interprets the meta information on the GIS map, mimics the drone's camera direction and geographical position, and displays the video in real time at that location.
  • GCS ground control system
  • GIS geographic information system
  • It is an object of the present invention to provide a drone equipped with an on-board flight control computer and a drone camera video object location coordinate acquisition system using the same.
  • To achieve this, the present invention uses a global navigation satellite system (GNSS) receiver and is configured to receive RTK/DGPS correction information and perform GPS correction.
  • GNSS global navigation satellite system
  • A Linux operating system server is configured on the flight controller (FC) and flight control computer board (FCC); real-time camera image acquisition, gimbal control information, flight controller attitude information, and airframe altitude and angle sensor information are clock-synchronized with GPS time information; real-time video optimized for the camera is processed; and a signal processing board is integrated to optimize the signal synchronization of the on-board flight controller.
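  • The clock synchronization with GPS time described above can be sketched as mapping the local clock onto the GPS time base so that camera frames, gimbal samples, and attitude samples all share one time base. This sketch assumes a simple offset-only correction with no drift model; the class name is hypothetical.

```python
class GpsClockSync:
    """Map the local monotonic clock onto GPS time using the most recent
    (local_time, gps_time) fix pair, so every sensor sample and video
    frame can be stamped on the same GPS time base."""

    def __init__(self):
        self.offset = None  # gps_time - local_time, from the last fix

    def update_fix(self, local_t, gps_t):
        self.offset = gps_t - local_t

    def to_gps(self, local_t):
        if self.offset is None:
            raise RuntimeError("no GPS fix received yet")
        return local_t + self.offset
```

  • A sample taken 0.5 s after a fix is stamped 0.5 s after the fix's GPS time; a real implementation would also model clock drift between fixes.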
  • There is provided a drone in which the on-board flight control computer 200 uses an image processing algorithm that automatically adjusts frames so that they do not exceed the set frequency bandwidth.
  • The on-board flight control computer 200 is configured to include an attitude/altitude/angle sensor unit, an ESC (Electronic Speed Controller) power control unit for driving the motors, a video communication unit for transmitting image data to the smart controller 400, a control communication unit for receiving control signals from the smart controller 400, and a flight control unit that controls flight under the control of the control communication unit.
  • ESC Electronic Speed Controller
  • The present invention provides a GPS unit 100 that uses a global navigation satellite system (GNSS) receiver and is configured to receive RTK/DGPS correction information and perform GPS correction;
  • an on-board flight control computer 200 on which a Linux operating system server is configured on the on-board flight controller (FC) and flight control computer board (FCC), and which synchronizes real-time camera image acquisition, gimbal control information, flight controller attitude information, and airframe altitude and angle sensor information with GPS time information, processes real-time video optimized for the camera, integrates the flight control computer onto a single signal processing board to optimize the signal synchronization of the on-board flight controller, links the attitude of the drone body and the attitude of the camera gimbal 300 in real time through the interface between the camera gimbal 300 and the flight controller, obtains standard attitude information in the Earth coordinate system through a preset algorithm, obtains the angle of view and resolution of the camera of the camera gimbal 300, and creates meta information in each frame based on the real-time camera video to produce an MPEG structure with a container structure inserted into each time-synchronized frame; and which, for video transmission time synchronization when transmitting to the smart controller 400, uses an image processing algorithm that automatically adjusts large frames so that they do not exceed the set frequency bandwidth;
  • a smart controller 400 whose on-board computer is driven by a Linux operating system, includes a control controller and a ground control system, and comprises a control communication unit for controlling the on-board flight control computer 200, a video communication unit performing video communication with the on-board flight control computer 200, and a dual IP relay unit;
  • an artificial intelligence learner 500 that receives the MPEG data from the smart controller 400, separates the video and meta information, recognizes objects in the video with a trained artificial intelligence algorithm, calculates the time and location coordinates of each recognized object based on the pre-learned meta information, and provides the drone camera video object position coordinates to the mixed reality monitor 600;
  • and a mixed reality monitor 600 comprising smart glasses or a monitor and displaying the drone camera video object position coordinates provided by the artificial intelligence learner 500; whereby a camera video object location coordinate acquisition system using a drone is provided.
  • The smart controller 400 is operated by the drone pilot; its on-board computer, which receives and processes the MPEG video from the on-board flight control computer 200, comprises a Linux-based dual IP relay server.
  • The MPEG stream is simultaneously transmitted to the ground control system (GCS) and to the artificial intelligence processor associated with the mixed reality monitor 600.
  • GCS ground control system
  • The ground control system is configured to receive the video MPEG, separate the video and meta information, interpret the meta information on a GIS map, simulate the drone's camera direction and geographical position, and display the video in real time at that location.
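  • The dual IP relay described above fans one incoming stream out to two consumers. The sketch below assumes a UDP transport (the patent does not specify one) and illustrates only the duplication step; function names are hypothetical.

```python
import socket

def dual_relay(rx_sock, dests, count):
    """Dual-IP relay sketch: each datagram received from the drone is
    duplicated to every destination, e.g. the ground control system and
    the AI processor, so both consumers see identical MPEG packets."""
    for _ in range(count):
        packet, _ = rx_sock.recvfrom(65535)
        for dest in dests:
            rx_sock.sendto(packet, dest)
```

  • A production relay would run this loop continuously on the smart controller's on-board server and add sequencing and loss monitoring so the meta information in each frame is never dropped.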
  • A redundant server computer is configured so that the time coordinates of the video's digital information are transmitted from the drone through the smart controller to the ground control equipment and the artificial intelligence processor without loss in the video frames; the smart controller's ground control system (GCS) displays the drone's viewing angle on the geographic information system (GIS) with the video expressed in real-time coordinates; the artificial intelligence processor identifies learned objects and calculates their coordinates from the video and meta information in the MPEG stream; and, by realizing artificial-intelligence mixed reality through smart glasses or monitors, video object location coordinates are stably acquired for video captured and transmitted by a drone camera moving through the air.
  • GIS geographic information system
  • FIG. 1 is a diagram for explaining the operation outline of a drone equipped with an on-board flight control computer and a system for obtaining location coordinates of a drone camera video object using the same according to the present invention.
  • FIG. 2 is a block diagram for explaining an embodiment of a drone camera video object location coordinate acquisition system according to the present invention.
  • FIG. 3 is a diagram for explaining an embodiment of a technology for precise position estimation and change detection based on learned object recognition in the drone camera video object location coordinate acquisition system according to the present invention.
  • FIGS. 4 and 5 are views for explaining a drone equipped with an on-board flight control computer according to the present invention.
  • FIG. 6 is a diagram for explaining artificial intelligence learning object recognition and artificial intelligence learning results in the drone camera video object location coordinate acquisition system according to the present invention.
  • FIG. 7 is a diagram for explaining the MPEG dual service without meta information loss in the smart controller and artificial intelligence learner of the drone camera video object location coordinate acquisition system according to the present invention.
  • FIG. 8 is a diagram showing an example of executing a tank object recognition learning engine in the artificial intelligence learner of the drone camera video object position coordinate acquisition system according to the present invention and displaying it on the mixed reality monitor.
  • FIGS. 9 and 10 are diagrams showing an example of executing a human object recognition learning engine in the artificial intelligence learner of the drone camera video object position coordinate acquisition system according to the present invention and displaying it on the mixed reality monitor.
  • The present invention configures a Linux operating system server on the on-board flight controller (FC) and flight control computer board (FCC); synchronizes real-time camera image acquisition, gimbal control information, flight controller attitude information, and airframe altitude and angle sensor information with GPS time information; processes real-time video optimized for the camera; and integrates the flight control computer onto one signal processing board to optimize the signal synchronization of the on-board flight controller. It has an interface that synchronizes and integrates all signal processing operated in the drone under a Linux-based operating system and obtains the camera's angle of view and resolution. Since the drone's altimeter uses a low-cost sensor, RTK correction of less than 10 cm is linked to the GPS signal to solve the large-error problem.
  • DGPS correction information of less than 1 m is also linked. The flight control computer generates meta information in each frame based on the real-time camera video; the MPEG structure, with the container structure inserted into each time-synchronized frame, includes the information needed to calculate the coordinates the camera is looking at from the drone's flight position. To solve the problem that the data transmission speed drops with frame size when transmitting video from the flight control computer to the smart controller, the frame size is matched to the video transmission time synchronization: when a frame is large, the image processing algorithm automatically adjusts it so that the stream does not exceed the set frequency bandwidth. The smart controller operated by the drone operator is a controller that receives and processes the MPEG video from the drone.
  • The MPEG data is transmitted to the artificial intelligence processor.
  • The processor separates the video and meta information, recognizes objects in the video with the trained artificial intelligence algorithm (to find a person, a tank, etc.), and calculates the location coordinates at the time of capture for each recognized object based on the meta information.
  • The invention thus provides a drone equipped with an on-board flight control computer that displays object position coordinates and provides a time-based positioning method for artificial intelligence object identification, and a drone camera video object position coordinate acquisition system using the same.
  • FIG. 1 is a diagram for explaining the operation outline of a drone equipped with an on-board flight control computer and a drone camera video object location coordinate acquisition system using the same according to the present invention, and FIG. 2 is a block configuration diagram for explaining an embodiment of that system.
  • As shown in FIG. 1, the system comprises a GPS unit 100, an on-board flight control computer 200, a camera gimbal 300, a smart controller 400, an artificial intelligence learner 500, and a mixed reality monitor 600.
  • Coordinates for the camera gimbal 300 are obtained from the on-board flight control computer 200, the video and meta information are synchronized, and the time-synchronized MPEG is transmitted from the mission computer (the on-board flight control computer 200) on the drone. Transmission management of the MPEG traffic is performed, the smart controller 400 comprises dual IP relay servers for dual video streaming, and the mixed reality monitor 600 provides the object recognition coordinate service by monitoring the mixed reality produced by artificial intelligence learning of the meta information.
  • An embodiment of the drone camera video object position coordinate acquisition system is shown in FIG. 2; the GPS unit 100 is configured to receive RTK/DGPS correction information and perform GPS correction.
  • The on-board flight control computer 200 configures a Linux operating system server on the on-board flight controller (FC) and flight control computer board (FCC), clock-synchronizes real-time camera image acquisition, gimbal control information, flight controller attitude information, and airframe altitude and angle sensor information with the GPS time information, processes real-time video optimized for the camera, and optimizes the signal synchronization of the on-board flight controller.
  • FC on-board flight controller
  • FCC flight control computer board
  • It is a Linux-based operating system and has an interface that synchronizes and integrates all signal processing operated in drones.
  • the posture of the drone body and the posture of the camera gimbal are interlocked in real time, and standard posture information in the Earth coordinate system is obtained through a preset algorithm, and information on angle of view and resolution of the camera of the camera gimbal 300 are obtained.
  • The on-board flight control computer 200 generates meta information in frames based on the real-time camera video, and the MPEG structure, with a container structure inserted into each time-synchronized frame, carries what is needed to calculate the coordinates the camera is looking at from the drone's flight position.
  • When a frame is large, an image processing algorithm is used that automatically adjusts the frame, according to the video transmission time synchronization, so that the stream does not exceed the set frequency bandwidth.
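  • The patent does not specify the container layout that fuses a video frame with its meta information (in practice, MISB-style KLV metadata multiplexed into an MPEG transport stream is a common approach). The fixed binary layout below is purely illustrative: the field list, format string, and function names are assumptions.

```python
import struct

# Hypothetical per-frame meta layout: GPS time, latitude, longitude,
# altitude (doubles), then body yaw/pitch/roll, gimbal yaw/pitch, and
# horizontal/vertical field of view (floats).
META_FMT = "<4d7f"
META_SIZE = struct.calcsize(META_FMT)

def pack_frame(meta, h264_payload):
    """Fuse meta information and one compressed video frame into a single
    length-prefixed container frame."""
    blob = struct.pack(META_FMT, *meta)
    return struct.pack("<I", len(h264_payload)) + blob + h264_payload

def unpack_frame(frame):
    """Split a container frame back into (meta tuple, H.264 payload)."""
    (plen,) = struct.unpack_from("<I", frame, 0)
    meta = struct.unpack_from(META_FMT, frame, 4)
    payload = frame[4 + META_SIZE: 4 + META_SIZE + plen]
    return meta, payload
```

  • Because the meta information travels inside the same frame as the video payload, separating the stream downstream (in the GCS or the AI processor) recovers both without loss of time coordinates.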
  • This on-board flight control computer 200 comprises an attitude/altitude/angle sensor unit, an ESC (Electronic Speed Controller) power control unit for driving the motors, a video communication unit, a control communication unit, and a flight control unit.
  • The on-board flight control computer 200 receives precise location/altitude/time information from the GPS unit 100 and gimbal information and video information (H.264 video) from the camera gimbal 300, and performs video communication and control communication with the smart controller 400 on the ground.
  • GPS global positioning system
  • H.264 video information
  • the on-board flight control computer 200 includes an attitude/altitude/angle sensor unit, an ESC (Electronic Speed Controller) power control unit for driving a motor, a video communication unit for transmitting image data to the smart controller 400, It is configured to include a control communication unit for receiving a control signal from the smart controller 400 and a flight control unit for controlling flight by control of the control communication unit.
  • ESC Electronic Speed Controller
  • The smart controller 400 is operated by the drone operator; its on-board computer, which receives and processes the MPEG video from the drone's on-board flight control computer 200, comprises a Linux-based dual IP relay server, and the received MPEG is streamed simultaneously to the ground control system (GCS) provided in the smart controller 400 and to the artificial intelligence processor associated with the mixed reality monitor 600.
  • The ground control system (GCS) receives the video MPEG, separates the video and meta information, interprets the meta information on the GIS map, simulates the drone's camera direction and geographical position, and displays the video in real time at that location.
  • For this purpose, the smart controller 400 is driven by a Linux operating system as a remote-controller on-board computer, includes a control controller, includes a ground control system (GCS, e.g. a tablet PC), and is configured to include a control communication unit for controlling the on-board flight control computer 200, a video communication unit performing video communication with the on-board flight control computer 200, and a dual IP relay unit.
  • GCS ground control system
  • The artificial intelligence learner 500 receives the MPEG data from the smart controller 400, separates the video and meta information, recognizes objects in the video with a trained artificial intelligence algorithm (e.g., to find people, tanks, etc.), calculates the location coordinates at the time of capture for each recognized object based on the pre-learned meta information, and provides the drone camera video object location coordinates to the mixed reality monitor 600.
  • The mixed reality monitor 600 comprises smart glasses or a monitor and displays the object location coordinates of the drone camera video provided by the artificial intelligence learner 500.
  • FIG. 3 is a diagram for explaining an embodiment of a technology for estimating a precise position and detecting a change based on object recognition learned in a drone camera video object position coordinate acquisition system according to the present invention.
  • In the embodiment of the learned object-recognition-based precise position estimation and change detection technology of the drone camera video object position coordinate acquisition system, a deep learning object is first recognized, its distance is estimated, and a relative/absolute position is reported.
  • The detection network is trained with a multi-task loss of the form L(p, u, t^u, v) = L_cls(p, u) + λ[u ≥ 1]·L_loc(t^u, v), where:
  • L: total loss
  • p: predicted class scores
  • u: true class
  • t^u: predicted box coordinates for class u
  • v: true box coordinates
  • The formula expresses learning that lowers the difference between the actual position and the estimated position.
  • Distance on ground denotes the searched ground distance to the target.
  • Target global position denotes the object-recognition location coordinates.
  • The relative/absolute position report estimates the precise position based on object recognition and detects changes.
  • FIGS. 4 and 5 are views for explaining a drone equipped with an on-board flight control computer according to the present invention.
  • As shown in FIG. 4, the sensor unit, flight control unit, flight control computer running a Linux operating system, control communication unit, video communication unit, and power control unit configured in the on-board flight control computer 200 of FIG. 2 are integrated on a single board.
  • In FIG. 5, the left side shows a conventional drone whose core parts are built from several boards, such as a sensor/main board, a flight control computer board, a gimbal control board (gimbal controller), a battery pack made of battery cells, and a power control board (power controller), which makes the drone large; the right side shows that in the present invention these are on-boarded onto one on-board flight control computer 200, enabling small and lightweight on-board flight control.
  • FIG. 6 is a diagram for explaining artificial intelligence learning object recognition and artificial intelligence learning result processing in the drone camera video object location coordinate acquisition system according to the present invention.
  • FIG. 7 is a diagram for explaining the drone camera video object location coordinate acquisition system according to the present invention.
  • FIG. 8 is a diagram showing an example of the MPEG dual service without meta information loss in the smart controller and artificial intelligence learner of the present invention.
  • Artificial intelligence learning object recognition and processing of the artificial intelligence learning results are as in FIGS. 6 and 7: the GPS location information and altitude/angle information from the GPS 100, the gimbal posture information (camera view angle information), and the video information from the camera gimbal 300 are synchronized in the on-board flight control computer 200; the on-board flight control computer 200 creates meta information per frame based on the synchronized meta information and the real-time camera video, and the MPEG structure, with a container structure inserted into the time-synchronized frame, carries the information for calculating the coordinates the camera views at the drone's flight position (mission computer operation unit).
  • The image processing algorithm runs to automatically adjust the frames so that they do not exceed the bandwidth set for the frequency.
  • The controller on-board computer consists of a Linux-based dual IP relay server (dual IP relay unit); the smart controller 400 receives and processes the MPEG frames, and the MPEG stream received by the smart controller 400 is simultaneously transmitted to the ground control system (GCS, e.g., a tablet PC) and the artificial intelligence learner.
  • The ground control system receives the MPEG video, separates the video and meta information, interprets the meta information on the GIS map, simulates the drone's camera direction and terrain location, and displays the video in real time at that location.
  • The artificial intelligence learner 500 receives the MPEG data and separates the video (H.264) and meta information (parsing the MPEG packets); objects in the video are recognized by the trained artificial intelligence algorithm (to find people, tanks, etc.), and the time-stamped location coordinates of each recognized object are calculated based on the meta information and displayed as the drone camera video object location coordinates.
  • The mixed reality monitor 600 detects an object with the tablet GIS through mixed reality (MR) artificial intelligence and displays it on the monitor (smart glasses) according to the tablet GIS coordinate information.
  • MR mixed reality artificial intelligence
  • In step 1, the real-time camera gimbal 300 video and drone flight meta information synchronization software is implemented:
  • real-time video acquisition is synchronized, and
  • the real-time camera view angle (gimbal attitude) plus position, angle, and altitude meta information are synchronized.
  • In step 2, the mission computer generates the synchronized H.26x video + meta information MPEG frames:
  • an H.26x video + meta information container MPEG frame is generated, and encryption encoding and decoding operations are performed on the MPEG by applying an encryption algorithm.
  • In step 3, the video dual IP relay server is ported (based on Linux Ubuntu), and the smart controller relays the MPEG video to the ground control system (GCS) and the artificial intelligence learner.
  • In step 4, the artificial intelligence learner's coordinate acquisition algorithm for object identification for surveillance and reconnaissance is performed, and pre-trained object recognition deep learning performs artificial intelligence surveillance and reconnaissance object recognition (real-time battlefield object identification and location estimation).
  • In step 5, the video object identification mixed reality smart glasses content service is created:
  • video object mixed reality (MR) content for smart glasses is created (including the UI).
  • GIS geographic information system
  • FIG. 8 shows an example of executing a tank object recognition learning engine in the artificial intelligence learner and displaying the result on the mixed reality monitor.
  • Object recognition is performed for three tanks, and the frame meta information coordinates are calculated and shown in the lower left corner.
  • FIGS. 9 and 10 are diagrams showing an example of executing a human object recognition learning engine in the artificial intelligence learner and displaying the result on the mixed reality monitor.
  • Object recognition for three people is performed in FIG. 9 and for one person in FIG. 10; each recognized object is marked with a red circle.
  • The present invention is a camera video object location coordinate acquisition system using a drone. Specifically, it comprises a satellite navigation device (GPS) 100 configured to use a global navigation satellite system (GNSS) receiver and to receive RTK/DGPS correction information and perform GPS correction; an on-board flight control computer 200 that receives gimbal information and video information (H.264 video) from the camera gimbal 300 and transmits them using a frame-adjusting image processing algorithm; a smart controller 400 composed of, among others, a control communication unit, a video communication unit, and a dual IP relay unit; an artificial intelligence learner 500 that receives MPEG data from the smart controller and provides drone camera video object location coordinates to the mixed reality monitor; and a mixed reality monitor 600 that displays the drone camera video object location coordinates provided by the artificial intelligence learner 500, whereby the coordinates are obtained.
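The object-position computation summarized above can be illustrated with a flat-ground line-of-sight intersection: the camera's fused heading and depression angle define a ray from the drone's RTK/DGPS-corrected position, and the ray is intersected with the ground plane. This is a minimal sketch under stated assumptions (spherical Earth, flat terrain at zero altitude); the function and parameter names are illustrative, not the patent's.

```python
import math

# Mean Earth radius in metres (spherical approximation).
EARTH_RADIUS_M = 6_371_000.0

def target_ground_position(lat_deg, lon_deg, alt_m, yaw_deg, pitch_deg):
    """Estimate the ground coordinate the camera is looking at.

    lat/lon/alt: drone position (RTK/DGPS-corrected GPS).
    yaw_deg:   camera heading, clockwise from true north.
    pitch_deg: camera depression angle below the horizon (0..90].
    Assumes flat ground at altitude 0 and a small-area planar approximation.
    Returns (target_lat_deg, target_lon_deg, ground_distance_m).
    """
    if not 0.0 < pitch_deg <= 90.0:
        raise ValueError("camera must point below the horizon")
    # Horizontal distance from the drone to the viewed ground point.
    ground_dist = alt_m / math.tan(math.radians(pitch_deg))
    # Decompose the distance along north/east using the camera heading.
    north = ground_dist * math.cos(math.radians(yaw_deg))
    east = ground_dist * math.sin(math.radians(yaw_deg))
    # Convert metre offsets to degree offsets (planar approximation).
    dlat = math.degrees(north / EARTH_RADIUS_M)
    dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
    return lat_deg + dlat, lon_deg + dlon, ground_dist
```

With the camera 100 m up and depressed 45 degrees, the viewed point lies 100 m along the heading, which is why precise altitude (RTK below 10 cm) matters so much to the final coordinate.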

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present invention provides a system for obtaining positional coordinates of a camera video object by using a drone having an onboard flight control computer, the system comprising: a global positioning system (GPS) configured to use a global navigation satellite system (GNSS) receiver and receive RTK/DGPS calibration information to perform GPS calibration; an onboard flight control computer which receives gimbal information and video information (H.264 image) from a camera gimbal, obtains coordinates of the camera gimbal to generate meta information in a frame when image-synchronized MPEG transmission is performed, and transmits the meta information by using a frame-adjusted image processing algorithm; a smart controller configured to include the onboard flight control computer, an image communication unit, and a dual IP relay unit; an artificial intelligence learner which calculates positional coordinates of the time of a video and provides positional coordinates of a drone camera video object to a mixed reality monitor; and the mixed reality monitor configured to include smart glasses and a monitor to display the positional coordinates of the drone camera video object.

Description

A drone equipped with an on-board flight control computer and a drone camera video object location coordinate acquisition system using the same

The present invention relates to a drone equipped with an on-board flight control computer, and more particularly to a drone and system in which, while acquiring the coordinates of the video object viewed by the drone camera during high-speed flight and determining the position of an object extracted by artificial intelligence, the real-time video acquired by the drone camera, the drone flight information, and the gimbal attitude information are processed by the on-board flight control computer; the on-board flight control computer converts these into global-coordinate standard attitude values and composes frames in a container format that fuses video and meta information, which are stored and transmitted to the smart controller; the ground control system (GCS) of the smart controller separates (parses) the received MPEG video format into video and meta information and displays it on geographic information; and the MPEG stream forwarded from the smart controller to the artificial intelligence processor is likewise separated, the video is identified by AI-trained object recognition, and the identified location coordinates are determined by synchronizing the time coordinates of the object calculated from the meta information, so that the determined position and time information can be processed.

Image acquisition technology using drones is continuously evolving. As artificial intelligence is applied to drones, object identification with the real-time position and time information of drone video is becoming increasingly important, and demand is growing for services that provide, together, the coordinates and the time information of the location the camera is viewing for natural and artificial objects while the drone moves in real time.

Until now, the drone's video and its GPS flight information have generally been received separately, and the GPS flight information is stored and matched against the stored video in order to obtain the location coordinates at which the drone filmed.

However, errors in the drone position and in the distance, angle, and attitude values of the camera's line of sight, together with transmission delays caused by the irregular size of video data, have produced errors too large for practical use.

To address this, mounting a computer or expensive image and signal processors on the drone to synchronize the camera's viewing position in real time has been studied, but this requires the drone to be large enough to carry the computer, and signal delays arise in connecting the individual devices, so the systems become too large and are difficult to operate with expensive equipment.

In addition, in the real-time frames of drone video, the drone flight attitude values and the camera gimbal attitude values are output while moving differently from each other, so the two attitudes must be integrated into the final attitude of the camera's line of sight, converted into a global standard attitude value, and stored in the meta information. The altitude the camera is viewing is also critical: a drone altimeter can err by several meters in flight, producing large errors in the viewed position. DGPS correction of the GPS brings the error within 1 m and RTK correction within 10 cm of altitude, and the angles require a time-synchronized implementation combining GPS, IMU, and geomagnetic sensors; because miniaturizing and lightening such an implementation on a drone is difficult, it has not become widespread.

In particular, drone video changes far more between frames than video from a fixed camera as the flight path changes, and because a video frame compresses and transmits only the difference from the previous frame, the frame data grows when that difference is large; when transmitted wirelessly from the drone, the irregular data size causes transmission delay. Moreover, even if the drone operator identifies an object in the received video, it is difficult to determine from the frame image alone at what point in time the video was taken.

Recently, as drone video has been combined with artificial intelligence, demand for the real-time position and time of objects identified in drone video has further increased. Therefore, acquiring the time-stamped position coordinates of drone video in flight requires a method of synchronizing, in real time, the gimbal attitude information of the camera's line of sight and the drone's flight attitude, together with the camera angle and precise altitude information, to each video frame.
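The per-frame synchronization described above can be sketched as a container that fuses the meta information with each H.264 access unit before MPEG transmission. The field names and byte layout here are illustrative assumptions, not the patent's actual container format.

```python
import json
import struct
from dataclasses import dataclass, asdict

@dataclass
class FrameMeta:
    """Per-frame meta information synchronized to GPS time (illustrative fields)."""
    gps_time_s: float      # GPS-disciplined timestamp of the frame
    lat_deg: float         # RTK/DGPS-corrected position
    lon_deg: float
    alt_m: float
    yaw_deg: float         # fused drone + gimbal attitude (camera line of sight)
    pitch_deg: float
    roll_deg: float
    hfov_deg: float        # camera horizontal field of view
    width: int             # video resolution
    height: int

def pack_frame(meta: FrameMeta, h264_payload: bytes) -> bytes:
    """Fuse meta information and an H.264 access unit into one container frame.

    Layout (assumption): 4-byte big-endian meta length, JSON meta, raw payload.
    """
    meta_bytes = json.dumps(asdict(meta)).encode()
    return struct.pack(">I", len(meta_bytes)) + meta_bytes + h264_payload

def unpack_frame(frame: bytes) -> tuple[FrameMeta, bytes]:
    """Split a container frame back into meta information and video payload."""
    (meta_len,) = struct.unpack(">I", frame[:4])
    meta = FrameMeta(**json.loads(frame[4:4 + meta_len]))
    return meta, frame[4 + meta_len:]
```

Because the meta information travels inside the same frame as the video, the receiver can recover the exact position and time of every frame without a separate telemetry channel.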

The video acquired by the drone must also be transmitted wirelessly to the drone operator. Video transmission fluctuates widely in data volume, while the radio link has limited frequency resources, so a data-rate control function is required to transmit the video and the position/attitude meta information (including camera specification information) at a constant rate without loss.
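The constant-rate requirement above can be sketched as a per-second byte budget: frames that would exceed the configured radio bandwidth are rejected so the sender can re-encode them smaller before transmitting. The one-second windowing policy is an assumption for illustration.

```python
class FrameRateController:
    """Keep the video stream under a configured radio bandwidth (sketch).

    Uses a simple byte budget per one-second window: a frame that would
    exceed the remaining budget is refused, signalling the caller to
    shrink or re-encode it rather than delay the whole stream.
    """
    def __init__(self, bandwidth_bytes_per_s: int):
        self.budget = bandwidth_bytes_per_s
        self.window_start = 0.0
        self.sent_in_window = 0

    def admit(self, frame_size: int, now_s: float) -> bool:
        """Return True if the frame fits the current window's budget."""
        if now_s - self.window_start >= 1.0:
            # New one-second window: reset the byte counter.
            self.window_start = now_s
            self.sent_in_window = 0
        if self.sent_in_window + frame_size <= self.budget:
            self.sent_in_window += frame_size
            return True
        return False  # caller should shrink/re-encode the frame and retry
```

A real implementation would adjust the encoder's quantization instead of dropping data, but the admission test is the same shape.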

The MPEG video transmitted from the drone is received by the on-board computer of the smart controller, which simultaneously forwards it to the ground control system (GCS) and the artificial intelligence processor. The GCS parses the MPEG into video and meta information and uses them together with the geographic information system, while the artificial intelligence processor performs object recognition on the video through a learning engine and obtains the location coordinates from the meta information. Since this expresses the video position synchronized not to the current time but to the time at which the moving camera viewed that angle, what is needed is a method of realizing frames in which real-time video is expressed at its actual position coordinates, implemented with the on-board flight control computer mounted on the drone and the on-board server of the smart controller, and served with coordinate synchronization at the moment of filming.
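The simultaneous forwarding above can be sketched as a relay that duplicates each received MPEG datagram to the two consumers. The transport (UDP) and the helper names are assumptions for illustration; the patent only specifies a Linux-based dual IP relay.

```python
import socket

def open_relay(listen_port: int = 0) -> socket.socket:
    """Bind the relay's receive socket; port 0 lets the OS pick one."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", listen_port))
    return rx

def relay_once(rx: socket.socket, gcs_addr, ai_addr) -> bytes:
    """Receive one MPEG datagram and duplicate it to both consumers.

    gcs_addr / ai_addr are (host, port) tuples for the ground control
    system and the artificial intelligence processor (deployment-specific).
    """
    packet, _src = rx.recvfrom(65535)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(packet, gcs_addr)  # ground control system (GCS)
    tx.sendto(packet, ai_addr)   # artificial intelligence processor
    tx.close()
    return packet
```

In operation this runs in a loop on the controller's on-board Linux computer; duplicating at the controller keeps the drone's radio link carrying each frame only once.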

Meanwhile, Korean Patent No. 10-1943823 discloses a UAV for acquiring precise position data during high-speed flight and a method for determining mission-equipment synchronization for acquiring precise position data during high-speed flight; it concerns the causes of position error during photography by a drone (UAV), and this prior art is a synchronization method for the still photography needed for 3D modeling using a drone.

The existing photographing method generally fixes the camera gimbal mounted on the drone to point straight down and synchronizes the drone's GPS coordinates and the gimbal attitude information with the photo as meta information. 3D modeling generates orthoimages by stereo imaging techniques according to the overlap of the photos; to improve precision, the operator has the drone also photograph ground control point (GCP) marker coordinates for post-processing correction, and commercial 3D modeling software improves accuracy through orthomosaic processing based on the ground control points.

This method determines the photo's image position by synchronizing the photographing position with GPS information along the flight route, and is a technique for increasing the positional precision of still photography.

Korean Registered Patent No. 10-2242366 discloses a drone-based ground-control-point placement automation device for generating digital maps of earthwork sites; because GPS error is large, it presents a photographing method that places virtual control points using RTK correction technology with centimeter precision for generating 3D digital maps.

In particular, existing video transmission has mostly used analog or HDMI communication, an image-dump approach that transmits video at a fixed format to cope with the data growth during drone movement but cannot carry meta information; when information including meta information is converted to analog or HDMI, the meta information contained in the video frames is deleted, so these are video formats not usable for coordinate recognition.

RTK synchronization, a separate GNSS correction technique, has also been applied to drones to raise the precision of position information in video, but this only improves the precision of the drone's own position. To determine, without signal-processing loss, the position coordinates at the GPS time needed to compute the camera's real-time viewing position, together with the camera information, the standard attitude fusing the drone and gimbal attitudes, and the precise angle and altitude, connecting separate devices such as the camera, gimbal, airframe, and flight controller accumulates integration errors in the acquired information and suffers data delays, so the approach has technical limits.

Furthermore, in the prior art, since the current video frame is transmitted by compressing only the changes from the previous frame, the exact time and place of each frame are unknown, and the increased data volume of particular frames causes delay in wireless transmission from the drone; even when an object is detected, accurate position identification is difficult and errors of hundreds of meters occur. Without meta information, even though the camera's video frames carry a continuous image of changes from previous frames, the operator viewing the frames after transmission cannot determine the position and point in time from the frames alone, and this problem keeps arising.

Fusing such video processing with meta information requires a high-performance computer on the drone for synchronization that includes precise attitude and angle of view; implementing a server configuration operable on a small drone, with computer fusion capable of video generation and meta-information processing, is difficult. When signal processing is distributed, delays occur, and when the volume of video frames changes greatly, the video is delayed over the limited frequency of the wireless section or delayed by the CPU processing limits of video transmission.

Thus the prior art suffers from unsynchronized image processing due to integrating data through various peripheral interfaces on the drone, from damage to video and meta information when a meta-information-container digital video is converted to HDMI or analog format, and from communication delays in the automatic control of video radio traffic when large data volumes are transmitted over limited communication frequencies.

The present invention addresses: a redundant (dual) server computer configuration by which the video received at the smart controller from the drone is forwarded to the ground control equipment and the artificial intelligence processor without losing the time coordinates of the video digital information in the video frames; a method by which the ground control system (GCS) of the smart controller displays the drone camera's viewing angle and the video at real-time coordinates on the geographic information system (GIS); and a method by which the artificial intelligence processor, via MPEG, identifies learned objects from the video and meta information, calculates the objects' coordinates, and serves the result as artificial-intelligence mixed reality through smart glasses or a monitor.

Therefore, the present invention solves the above disadvantages and problems of the prior art, and its object is to provide a drone equipped with an on-board flight control computer, and a drone camera video object location coordinate acquisition system using the same, in which a Linux operating system server is configured on the on-board flight controller (FC) and flight control computer board (FCC); real-time camera image acquisition, gimbal control information, flight-controller attitude information, and airframe altitude/angle sensor information are clock-synchronized to GPS time information to process real-time video optimized for the camera; and, for signal-synchronization optimization of the on-board flight controller, the flight control computer is included on a single integrated signal-processing board, with the Linux-based operating system providing an interface that synchronizes and integrates all signal processing operated on the drone.
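The GPS-time clock synchronization described above implies that attitude samples must be aligned to each video frame's timestamp; once all sensors share the GPS clock, this reduces to interpolation between the bracketing samples. A minimal sketch under stated assumptions (the sample layout and linear interpolation are illustrative; yaw wrap-around at 360 degrees is ignored):

```python
from bisect import bisect_left

def interpolate_attitude(samples, frame_time_s):
    """Interpolate (yaw, pitch, roll) attitude samples to a frame's GPS time.

    samples: list of (gps_time_s, yaw, pitch, roll) tuples, sorted by time.
    All sensors are assumed already clock-disciplined to GPS time, so
    synchronization reduces to interpolation between bracketing samples.
    """
    times = [s[0] for s in samples]
    i = bisect_left(times, frame_time_s)
    if i == 0:
        return samples[0][1:]        # before first sample: hold first value
    if i == len(samples):
        return samples[-1][1:]       # after last sample: hold last value
    t0, *a0 = samples[i - 1]
    t1, *a1 = samples[i]
    w = (frame_time_s - t0) / (t1 - t0)
    return tuple(x0 + w * (x1 - x0) for x0, x1 in zip(a0, a1))
```

This is the step that lets a frame captured between two attitude reports still be tagged with a consistent camera line of sight.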

A further object of the present invention is to provide a drone equipped with an on-board flight control computer and a drone camera video object location coordinate acquisition system using the same, in which: the drone airframe attitude and camera gimbal attitude are linked in real time through the camera gimbal and flight controller interface, earth-coordinate standard attitude information is obtained through an algorithm, and the camera's angle of view and resolution information are obtained; because the drone's altimeter uses an inexpensive sensor with large error, RTK correction of less than 10 cm or DGPS correction of less than 1 m is linked to the GPS; the flight control computer generates meta information per frame based on the real-time camera video, in an MPEG structure with a container inserted into the time-synchronized frame, including the information for calculating the coordinates the camera views at the drone's flight position; when the flight control computer transmits video to the smart controller, an image processing algorithm runs that, in step with the video transmission timing, automatically adjusts large frames so that they do not exceed the configured frequency bandwidth, solving the delay caused by frame size; the controller on-board computer of the operator's smart controller, which receives and processes the MPEG video from the drone, is configured as a Linux-based dual IP relay server, so the MPEG stream received by the smart controller is transmitted simultaneously to the ground control system (GCS) and the artificial intelligence processor; the GCS receives the MPEG video, separates the video and meta information, interprets the meta information on the GIS map, simulates the drone's camera direction and terrain position, and displays the video in real time at that position; and the artificial intelligence processor receives the MPEG data, separates the video and meta information, recognizes objects in the video with a trained artificial intelligence algorithm (finding what is requested, such as people or tanks), calculates the time-stamped position coordinates of recognized objects based on the meta information, and displays each drone camera video object location coordinate, thereby providing a method of time-stamped position determination for artificial intelligence object identification.
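The time-stamped object localization described above can be sketched as mapping a detection's pixel centre to a ground offset using the camera field of view and depression angle from the frame meta information. A pin-hole, flat-ground sketch; the function and parameter names are illustrative assumptions.

```python
import math

def object_ground_offset(px, py, width, height, hfov_deg, pitch_deg, alt_m):
    """Map a detected object's pixel centre to forward/right ground offsets (m).

    Pin-hole model with flat ground; the vertical FOV is derived from the
    aspect ratio. Returns (forward_m, right_m) in the camera heading frame.
    """
    # Angular offset of the pixel from the image centre.
    vfov_deg = hfov_deg * height / width
    az_off = (px / width - 0.5) * hfov_deg      # positive to the right
    el_off = (0.5 - py / height) * vfov_deg     # positive upward
    # Depression angle of the ray passing through this pixel.
    ray_pitch = pitch_deg - el_off
    if ray_pitch <= 0:
        raise ValueError("ray does not intersect the ground")
    forward = alt_m / math.tan(math.radians(ray_pitch))
    right = forward * math.tan(math.radians(az_off))
    return forward, right
```

The returned offsets would then be rotated by the camera heading and added to the drone's RTK/DGPS-corrected position to give the object's global coordinates at the frame's GPS time.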

To achieve the above objects, the present invention provides a drone equipped with an on-board flight control computer, comprising: a satellite navigation device (GPS) 100 that uses a global navigation satellite system (GNSS) receiver and is configured to receive RTK/DGPS correction information and perform GPS correction; and an on-board flight control computer 200 in which a Linux operating system server is configured on the on-board flight controller (FC) and flight control computer board (FCC), and which clock-synchronizes real-time camera image acquisition, gimbal control information, flight-controller attitude information, and airframe altitude/angle sensor information to GPS time information to process real-time video optimized for the camera; includes the flight control computer on a single signal-processing board integrated for signal-synchronization optimization of the on-board flight controller; links the drone airframe attitude and the camera gimbal attitude in real time through the camera gimbal 300 and flight controller interface, obtains earth-coordinate standard attitude information through a preset algorithm, and obtains the angle of view and resolution information of the camera of the camera gimbal 300; generates meta information per frame based on the real-time camera video, in an MPEG structure with a container inserted into the time-synchronized frame, including the information for calculating the coordinates the camera views at the drone's flight position; and, when transmitting video to the smart controller 400, uses an image processing algorithm that automatically adjusts large frames in step with the video transmission timing so that they do not exceed the configured frequency bandwidth.

Here, the on-board flight control computer (200) is characterized in that it comprises an attitude/altitude/angle sensor unit, an ESC (Electronic Speed Controller) power control unit for driving the motors, a video communication unit for transmitting video data to the smart controller (400), a control communication unit for receiving control signals from the smart controller (400), and a flight control unit that controls flight under the control of the control communication unit.

Further, to achieve the above object, the present invention provides a camera video object position coordinate acquisition system using a drone equipped with an on-board flight control computer, characterized by comprising: a satellite navigation unit (GPS) (100) that uses a global navigation satellite system (GNSS) receiver and is configured to receive RTK/DGPS correction information and perform GPS correction; an on-board flight control computer (200) in which a Linux operating system server is configured on the on-board flight controller (FC) and flight control computer board (FCC), which synchronizes real-time camera image acquisition, gimbal control information, flight controller attitude information, and airframe altitude and angle sensor information to the clock of the GPS time information, processes real-time video optimized for the camera, includes the flight control computer on a single integrated signal processing board to optimize signal synchronization of the on-board flight controller, links the attitude of the drone airframe and the attitude of the camera gimbal in real time through the camera gimbal (300) and flight controller interface, obtains standard attitude information in the Earth coordinate system through a preset algorithm, obtains the angle of view and resolution information of the camera of the camera gimbal (300), generates meta information for each frame based on the real-time camera video, includes, in an MPEG structure in which a container structure is inserted into the time-synchronized frames, the information for calculating the coordinates viewed by the camera at the position where the drone is flying, and, when transmitting the video to the smart controller (400), uses an image processing algorithm that automatically adjusts oversized frames in step with the video transmission time synchronization so that they do not exceed the bandwidth set for the frequency; a smart controller (400), driven by a Linux operating system as a controller on-board computer, which includes a pilot controller and a ground control system and comprises a control communication unit for controlling the on-board flight control computer (200), a video communication unit for performing video communication with the on-board flight control computer (200), and a dual IP relay unit; an artificial intelligence learner (500) that receives the MPEG data from the smart controller (400), separates the video from the meta information, recognizes objects in the video with a trained artificial intelligence algorithm, calculates the real-time position coordinates of each recognized object based on the pre-learned meta information, and provides the drone camera video object position coordinates to a mixed reality monitor (600); and the mixed reality monitor (600), which comprises smart glasses or a monitor and displays the drone camera video object position coordinates provided by the artificial intelligence learner (500).

Here, the smart controller (400) is a smart controller operated by the drone pilot; the controller's on-board computer, which receives and processes the MPEG video from the on-board flight control computer (200), is configured as a Linux-based dual IP relay server (dual IP relay unit), so that the MPEG stream is simultaneously transmitted to the ground control system (GCS) and to the artificial intelligence processor configured in the mixed reality monitor (600); and the ground control system (GCS) is configured to receive the MPEG video, separate the video from the meta information, interpret the meta information on a GIS map, simulate the camera direction and geographic position of the drone, and display the video in real time at the corresponding location.

According to the embodiments of the present invention, the following effects are obtained.

First, by configuring a Linux operating system server on board the flight controller (FC) and flight control computer board (FCC), stable real-time camera image acquisition is possible; gimbal control information, flight controller attitude information, and airframe altitude and angle sensor information are clock-synchronized to the GPS time information so that real-time video optimized for the camera can be processed; and, owing to the on-board integration, the size of the drone can be dramatically reduced.

Second, so that the real-time coordinates of the video digital information are not lost from the video frames, the video transmitted from the drone to the smart controller is relayed by a redundant (dual) server computer to the ground control equipment and the artificial intelligence processor. The ground control system (GCS) of the smart controller displays the viewing angle of the drone on the geographic information system (GIS) and expresses the video in real-time coordinates, while the artificial intelligence processor identifies learned objects from the video and meta information carried in the MPEG stream, calculates the coordinates of each object, and implements artificial intelligence mixed reality through smart glasses or a monitor. As a result, object position coordinates are obtained reliably even for video captured and transmitted by a drone camera moving through the air.

FIG. 1 is a diagram for explaining the operational outline of a drone equipped with an on-board flight control computer and a drone camera video object position coordinate acquisition system using the same according to the present invention;

FIG. 2 is a block diagram for explaining an embodiment of the drone camera video object position coordinate acquisition system according to the present invention;

FIG. 3 is a diagram for explaining an embodiment of the trained-object-recognition-based precise position estimation and change detection technique in the drone camera video object position coordinate acquisition system according to the present invention;

FIGS. 4 and 5 are views for explaining a drone equipped with an on-board flight control computer according to the present invention;

FIG. 6 is a diagram for explaining artificial intelligence learning object recognition and artificial intelligence learning results in the drone camera video object position coordinate acquisition system according to the present invention;

FIG. 7 is a diagram for explaining the MPEG dual service without meta information loss in the smart controller and artificial intelligence learner of the drone camera video object position coordinate acquisition system according to the present invention;

FIG. 8 is a diagram showing an example in which the tank object recognition learning engine is executed in the artificial intelligence learner of the drone camera video object position coordinate acquisition system according to the present invention and the result is displayed on the mixed reality monitor; and

FIGS. 9 and 10 are diagrams showing examples in which the human object recognition learning engine is executed in the artificial intelligence learner of the drone camera video object position coordinate acquisition system according to the present invention and the results are displayed on the mixed reality monitor.

Preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings as follows.

The terms used in the present invention have been selected, as far as possible, from general terms currently in wide use; in certain cases, however, terms arbitrarily chosen by the applicant are used, and in such cases their meanings are described in detail in the corresponding part of the description, so the present invention should be understood by the meaning each term carries rather than by its mere name. In describing the embodiments, descriptions of technical details that are well known in the art to which the present invention pertains and that are not directly related to the present invention are omitted. This is to convey the gist of the present invention more clearly, without obscuring it with unnecessary description.

The present invention configures a Linux operating system server on the on-board flight controller (FC) and flight control computer board (FCC); synchronizes real-time camera image acquisition, gimbal control information, flight controller attitude information, and airframe altitude and angle sensor information to the clock of the GPS time information; processes real-time video optimized for the camera; includes the flight control computer on a single integrated signal processing board to optimize signal synchronization of the on-board flight controller; and has, under a Linux-based operating system, an interface that synchronizes and integrally operates all signal processing performed on the drone. The attitude of the drone airframe and the attitude of the camera gimbal are linked in real time through the camera gimbal and flight controller interface, standard attitude information in the Earth coordinate system is obtained through an algorithm, and the angle of view and resolution information of the camera is obtained. Because the altimeter of the drone uses a low-cost sensor and therefore suffers from large errors, RTK correction information accurate to under 10 cm, or DGPS correction information accurate to under 1 m, is linked so that a corrected signal is applied to the GPS. The flight control computer generates meta information for each frame based on the real-time camera video and includes, in an MPEG structure in which a container structure is inserted into the time-synchronized frames, the information for calculating the coordinates viewed by the camera at the position where the drone is flying. To solve the problem of data transmission delays caused by frame size when the flight control computer transmits the video to the smart controller, an image processing algorithm runs that automatically adjusts oversized frames in step with the video transmission time synchronization so that they do not exceed the bandwidth set for the frequency. The smart controller operated by the drone pilot receives and processes the MPEG video from the drone; the controller's on-board computer is configured as a Linux-based dual IP relay server, so the MPEG stream received by the smart controller is simultaneously transmitted to the ground control system (GCS) and the artificial intelligence processor. The ground control system (GCS) receives the MPEG video, separates the video from the meta information, interprets the meta information on a GIS map, simulates the camera direction and terrain position of the drone, and displays the video in real time at the corresponding location. The artificial intelligence processor receives the MPEG data, separates the video from the meta information, recognizes objects in the video with a trained artificial intelligence algorithm (instructed to find, for example, people or tanks), calculates the real-time position coordinates of each recognized object based on the meta information, and displays the object position coordinates of the drone camera video, thereby providing a real-time positioning method for artificial intelligence object identification. In this way, the present invention provides a drone equipped with an on-board flight control computer and a drone camera video object position coordinate acquisition system using the same.

FIG. 1 is a diagram for explaining the operational outline of a drone equipped with an on-board flight control computer and a drone camera video object position coordinate acquisition system using the same according to the present invention, and FIG. 2 is a block diagram for explaining an embodiment of the drone camera video object position coordinate acquisition system according to the present invention.

The operational outline of a drone equipped with an on-board flight control computer and a drone camera video object position coordinate acquisition system using the same according to the present invention is as shown in FIG. 1. In the drone camera video object position coordinate acquisition system, which comprises the satellite navigation unit (GPS) 100, the on-board flight control computer 200, the camera gimbal 300, the smart controller 400, the artificial intelligence learner 500, and the mixed reality monitor 600, the flight control computer 200 obtains coordinates for the camera gimbal 300 so that the video and meta information are synchronized, and the video-synchronized MPEG is transmitted from the mission computer (the on-board flight control computer 200) provided in the drone. That is, transmission management of the MPEG traffic is performed; the smart controller 400 is configured as a dual IP relay server for dual video streaming; and the mixed reality monitor 600 provides an object recognition coordinate service by monitoring the mixed reality produced by artificial intelligence learning (meta).

An embodiment of the drone camera video object position coordinate acquisition system for this purpose is as shown in FIG. 2. The satellite navigation unit (GPS) 100 uses a global navigation satellite system (GNSS) receiver and is configured to receive RTK/DGPS correction information and perform GPS correction.

The on-board flight control computer 200 configures a Linux operating system server on the on-board flight controller (FC) and flight control computer board (FCC); synchronizes real-time camera image acquisition, gimbal control information, flight controller attitude information, and airframe altitude and angle sensor information to the clock of the GPS time information; processes real-time video optimized for the camera; includes the flight control computer on a single integrated signal processing board to optimize signal synchronization of the on-board flight controller; and has, under a Linux-based operating system, an interface that synchronizes and integrally operates all signal processing performed on the drone.

In particular, the attitude of the drone airframe and the attitude of the camera gimbal are linked in real time through the camera gimbal 300 and flight controller interface, standard attitude information in the Earth coordinate system is obtained through a preset algorithm, and the angle of view and resolution information of the camera of the camera gimbal 300 is obtained. Because the altimeter of the drone uses a low-cost sensor and therefore suffers from large errors, the system is configured to link RTK correction information accurate to under 10 cm, or DGPS correction information accurate to under 1 m, so that a corrected signal is applied to the GPS.
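
The correction-selection idea described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the fallback policy, and the uncorrected-GPS accuracy figure are assumptions for illustration; only the RTK (under 10 cm) and DGPS (under 1 m) accuracy classes come from the text.

```python
# Hypothetical sketch: choosing the best available GNSS correction source.
# RTK corrections give better than 10 cm accuracy, DGPS better than 1 m;
# with neither available, only uncorrected single-point GPS remains.

RTK_ACCURACY_M = 0.10   # RTK: better than 10 cm (from the description)
DGPS_ACCURACY_M = 1.0   # DGPS: better than 1 m (from the description)

def select_correction(rtk_available: bool, dgps_available: bool):
    """Return (source name, expected horizontal accuracy in metres)."""
    if rtk_available:
        return ("RTK", RTK_ACCURACY_M)
    if dgps_available:
        return ("DGPS", DGPS_ACCURACY_M)
    return ("GPS", 5.0)  # assumed typical accuracy of uncorrected GPS
```

In practice the flight control computer would apply the chosen correction stream to the GNSS solution before clock-synchronizing position and altitude into the video metadata.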

In addition, the on-board flight control computer 200 generates meta information for each frame based on the real-time camera video and includes, in an MPEG structure in which a container structure is inserted into the time-synchronized frames, the information for calculating the coordinates viewed by the camera at the position where the drone is flying. To solve the problem of data transmission delays caused by frame size when the on-board flight control computer 200 transmits the video to the smart controller 400, an image processing algorithm is used that automatically adjusts oversized frames in step with the video transmission time synchronization so that they do not exceed the bandwidth set for the frequency.
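
The frame-adjustment idea can be sketched as below. This is an illustrative model only: the per-frame bit budget, the halving policy as a stand-in for re-encoding at lower quality, and all names are assumptions, not the patent's algorithm.

```python
# Illustrative sketch: if an encoded frame would exceed the bits available in
# one transmission slot (channel bandwidth divided by frame rate), shrink it
# until it fits, so the stream never exceeds the configured bandwidth.
# Assumes bandwidth_bps >= fps > 0.

def fit_frame_to_bandwidth(frame_bits: int, bandwidth_bps: int, fps: int) -> int:
    """Return an adjusted frame size (bits) not exceeding the per-frame budget."""
    budget = bandwidth_bps // fps      # bits available per frame slot
    adjusted = frame_bits
    while adjusted > budget:
        adjusted //= 2                 # stand-in for re-encoding at lower quality
    return adjusted
```

For example, with an 8 Mbit/s channel at 30 frames per second, a 1 Mbit frame exceeds the roughly 267 kbit per-frame budget and is reduced until it fits, while frames already under budget pass through unchanged.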

The on-board flight control computer 200 comprises an attitude/altitude/angle sensor unit and an ESC (Electronic Speed Controller) power control unit for driving the motors, together with a video communication unit, a control communication unit, and a flight control unit. The on-board flight control computer 200 receives precise position/altitude/time information from the satellite navigation unit (GPS) 100 and gimbal information and video information (H.264 video) from the camera gimbal 300, and performs video communication and control communication with the smart controller 400 on the ground. That is, the on-board flight control computer 200 comprises the attitude/altitude/angle sensor unit, the ESC (Electronic Speed Controller) power control unit for driving the motors, the video communication unit for transmitting video data to the smart controller 400, the control communication unit for receiving control signals from the smart controller 400, and the flight control unit that controls flight under the control of the control communication unit.

The smart controller 400 is a smart controller operated by the drone pilot. The controller's on-board computer, which receives and processes the MPEG video from the on-board flight control computer 200 of the drone, is configured as a Linux-based dual IP relay server (dual IP relay unit), so the MPEG stream received by the smart controller is simultaneously transmitted to the ground control system (GCS) provided in the smart controller 400 and to the artificial intelligence processor configured in the mixed reality monitor 600. The ground control system (GCS) receives the MPEG video, separates the video from the meta information, interprets the meta information on a GIS map, simulates the camera direction and terrain position of the drone, and displays the video in real time at the corresponding location.

For this purpose, the smart controller 400 is driven by a Linux operating system as a controller on-board computer and comprises a pilot controller, a ground control system (GCS, such as a tablet PC), a control communication unit for controlling the on-board flight control computer 200, a video communication unit for performing video communication with the on-board flight control computer 200, and a dual IP relay unit.

The artificial intelligence learner 500 receives the MPEG data from the smart controller 400, separates the video from the meta information, recognizes objects in the video with a trained artificial intelligence algorithm (instructed to find, for example, people, tanks, or pine wilt nematode damage), calculates the real-time position coordinates of each recognized object based on the pre-learned meta information, and provides the drone camera video object position coordinates to the mixed reality monitor 600.

Here, the mixed reality monitor 600 is composed of smart glasses, a monitor, or the like, and displays the drone camera video object position coordinates provided by the artificial intelligence learner 500.

FIG. 3 is a diagram for explaining an embodiment of the trained-object-recognition-based precise position estimation and change detection technique in the drone camera video object position coordinate acquisition system according to the present invention.

As shown in FIG. 3, in this embodiment a deep learning object is first recognized, the distance is then estimated, and the relative and absolute positions are reported.

Here, the deep learning object recognition is given by Equation 1:

Figure PCTKR2022015356-appb-img-000001

Here, in the deep learning object recognition loss, L is the loss, p is the predicted class scores, u is the true class scores, t<sup>u</sup> is the true box coordinates, and v is the predicted box coordinates; the loss expresses the difference between the actual position and the estimated position, and the formula means that the network is trained to reduce this difference.
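
The variable definitions above match the standard multi-task detection loss used in Fast R-CNN-style detectors. As a hedged reconstruction (the patent's own equation is embedded as an image and may differ in detail), the loss plausibly takes the form:

```latex
L(p, u, t^{u}, v) = L_{\mathrm{cls}}(p, u) + \lambda\,[u \geq 1]\,L_{\mathrm{loc}}(t^{u}, v)
```

where L_cls is the classification loss over the class scores p and u, L_loc is the box regression loss between t^u and v, λ balances the two terms, and the indicator [u ≥ 1] disables the localization term for the background class.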

The distance estimation is given by Equation 2.

Figure PCTKR2022015356-appb-img-000002

Here, Distance on ground (D) is the distance to the target on the ground, and Target global position is the position coordinate of the recognized object.
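
A minimal sketch of the idea behind Equation 2, under a flat-ground assumption: the ground distance D follows from the drone altitude and the camera's angle from nadir, and the target's global position is the drone position displaced by D along the camera heading. The flat-earth degree conversion and all names are illustrative assumptions, not the patent's exact formula (which is embedded as an image).

```python
import math

def distance_on_ground(altitude_m: float, tilt_deg: float) -> float:
    """D = h * tan(angle from nadir); tilt_deg = 0 means looking straight down."""
    return altitude_m * math.tan(math.radians(tilt_deg))

def target_global_position(lat: float, lon: float, altitude_m: float,
                           heading_deg: float, tilt_deg: float):
    """Approximate target lat/lon using a local flat-earth model (assumption)."""
    d = distance_on_ground(altitude_m, tilt_deg)
    north = d * math.cos(math.radians(heading_deg))
    east = d * math.sin(math.radians(heading_deg))
    dlat = north / 111_320.0                                  # metres per degree of latitude
    dlon = east / (111_320.0 * math.cos(math.radians(lat)))   # shrinks with latitude
    return lat + dlat, lon + dlon
```

For example, at 100 m altitude with the camera tilted 45 degrees from nadir, the ground distance is about 100 m; with the camera pointing straight down, the target position coincides with the drone position.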

The relative and absolute position report performs object-recognition-based precise position estimation and detects changes.

FIGS. 4 and 5 are views for explaining a drone equipped with an on-board flight control computer according to the present invention.

FIGS. 4 and 5 show that the sensor unit, the flight control functions, the flight control computer running the Linux operating system, the control communication unit, the video communication unit, and the power control unit that make up the on-board flight control computer 200 of FIG. 2 are carried on a single board. As shown on the left of FIG. 5, a conventional drone becomes large because its core components occupy several boards: a sensor and main board, a flight control computer board, a gimbal control board (gimbal controller), a battery pack composed of battery cells, a power control board (power controller), and so on. In the present invention, as shown on the right, all of these are integrated on the single on-board flight control computer 200, enabling compact, lightweight on-board flight control.

FIG. 6 is a diagram for explaining artificial intelligence learning object recognition and the processing of artificial intelligence learning results in the drone camera video object position coordinate acquisition system according to the present invention; FIG. 7 is a diagram for explaining the MPEG dual service without meta information loss in the smart controller and artificial intelligence learner of the drone camera video object position coordinate acquisition system according to the present invention; and FIG. 8 is a diagram showing an example of the result of executing the tank object recognition learning engine in the artificial intelligence learner of the drone camera video object position coordinate acquisition system according to the present invention.

Artificial intelligence learning object recognition and the processing of the learning results in the drone camera video object position coordinate acquisition system according to the present invention are as shown in FIGS. 6 and 7. The GPS position information and altitude/angle information from the satellite navigation unit (GPS) 100 and the gimbal attitude information (camera angle-of-view information) and video information from the camera gimbal 300 are synchronized in the on-board flight control computer 200. The on-board flight control computer 200 generates meta information for each frame from the synchronized meta information and video information based on the real-time camera video, and calculates the coordinates viewed by the camera at the position where the drone is flying in an MPEG structure in which a container structure is inserted into the time-synchronized frames (mission computer operating unit). At this time, as described above, to solve the problem of data transmission delays caused by frame size, an image processing algorithm runs that automatically adjusts oversized frames in step with the video transmission time synchronization so that they do not exceed the bandwidth set for the frequency.
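
The per-frame pairing of video and time-synchronized metadata can be sketched as below. The simple length-prefixed layout and JSON metadata are assumptions for illustration; the patent uses an MPEG container structure, whose exact byte layout is not given in the text. The point illustrated is only that each frame travels with its own GPS-time-stamped metadata and the pair can be recovered losslessly on the ground.

```python
import json
import struct

def pack_frame(frame: bytes, meta: dict) -> bytes:
    """Pack one video frame with its synchronized metadata record.

    Layout (illustrative): [meta length][frame length][meta JSON][frame bytes].
    """
    meta_blob = json.dumps(meta, sort_keys=True).encode()
    return struct.pack(">II", len(meta_blob), len(frame)) + meta_blob + frame

def unpack_frame(packet: bytes):
    """Recover the (frame, metadata) pair from a packed record."""
    meta_len, frame_len = struct.unpack(">II", packet[:8])
    meta = json.loads(packet[8:8 + meta_len])
    frame = packet[8 + meta_len:8 + meta_len + frame_len]
    return frame, meta
```

A metadata record would carry at least the GPS time, position, altitude, gimbal attitude, and camera field of view, which is what the ground side needs to compute the coordinates the camera is viewing.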

When this stream is transmitted to the smart controller 400, the controller's onboard computer — configured as a Linux-based dual IP relay server (dual IP relay unit) — receives and processes the MPEG frames, and the MPEG stream received at the smart controller 400 is transmitted simultaneously to the ground control system (GCS) (a tablet PC or the like) and to the AI learner.
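The dual relay described above simply forwards each received packet, unchanged, to two endpoints at once. A minimal sketch in that spirit (hypothetical, not the patent's implementation; UDP and the endpoint names are assumptions):

```python
import socket

def make_dual_relay(dest_gcs, dest_ai):
    """Return a function that forwards each packet to both endpoints unchanged."""
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    def relay(packet: bytes) -> None:
        out.sendto(packet, dest_gcs)  # ground control system
        out.sendto(packet, dest_ai)   # AI learner
    return relay

# Demo: two local UDP listeners stand in for the GCS and the AI learner.
gcs = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
gcs.bind(("127.0.0.1", 0))
gcs.settimeout(2.0)
ai = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ai.bind(("127.0.0.1", 0))
ai.settimeout(2.0)

relay = make_dual_relay(gcs.getsockname(), ai.getsockname())
relay(b"mpeg-frame-0001")
got_gcs = gcs.recv(2048)
got_ai = ai.recv(2048)
```

Because the packet is duplicated at the relay rather than decoded and re-encoded, the meta-information embedded in the MPEG frames reaches both consumers without loss, which is the property FIG. 7 emphasizes.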

The ground control system (GCS) receives the MPEG video, separates the video from the meta-information, interprets the meta-information on a GIS map to reproduce the drone's camera direction and terrain position, and displays the video in real time at the corresponding location.

The AI learner 500 receives the MPEG data, separates the video (H.264) from the meta-information (MPEG packet separation), recognizes objects in the video with a trained AI algorithm (for designated targets such as people or tanks), and, for each recognized object, computes its position coordinates at that time from the meta-information and displays the drone camera video object position coordinates.
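Computing an object's ground coordinates from the frame's meta-information amounts to projecting the camera's line of sight onto the terrain using the drone's altitude and the gimbal angles. A simplified flat-ground sketch (a hypothetical illustration, not the patent's algorithm; the angle conventions are assumptions):

```python
import math

def object_ground_offset(alt_m: float, tilt_deg: float, heading_deg: float):
    """Project the camera line of sight onto flat ground.

    tilt_deg is measured from nadir (0 = straight down); heading_deg is the
    compass direction the camera faces. Returns (north_m, east_m) offsets
    from the point directly beneath the drone. Flat-terrain assumption only;
    a real system would intersect the ray with a terrain model.
    """
    d = alt_m * math.tan(math.radians(tilt_deg))   # ground distance to target
    return (d * math.cos(math.radians(heading_deg)),
            d * math.sin(math.radians(heading_deg)))

# A camera at 100 m altitude, tilted 45 degrees from nadir, facing due east:
north, east = object_ground_offset(alt_m=100.0, tilt_deg=45.0, heading_deg=90.0)
```

The resulting offsets would then be added to the drone's GPS position (converted to meters) to obtain the object's map coordinates shown on the GIS display.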

The mixed reality monitor 600 performs object detection together with the tablet GIS through mixed reality (MR) processing and displays the result on the monitor (smart glasses) according to the tablet GIS coordinate information.

Described step by step, step 1 implements software that synchronizes the real-time camera gimbal 300 video with the drone flight meta-information: real-time video acquisition is synchronized with the real-time camera field-of-view, gimbal attitude, position, angle, and altitude meta-information.

In step 2, the mission computer generates MPEG frames from the synchronized H.26x video and meta-information: an H.26x video + meta-information container MPEG frame is created, and encryption encoding and decoding are performed on the MPEG stream with an encryption algorithm.
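The container framing in step 2 — and its inverse, the packet separation performed later by the GCS and the AI learner — can be sketched as a simple length-prefixed mux/demux. This is a hypothetical illustration of the idea, not the patent's actual MPEG container layout; the JSON encoding and 4-byte length prefix are assumptions for the sketch.

```python
import json
import struct

def mux(meta: dict, h26x_payload: bytes) -> bytes:
    """Pack one packet: [4-byte big-endian meta length][meta as JSON][video payload]."""
    m = json.dumps(meta).encode("utf-8")
    return struct.pack(">I", len(m)) + m + h26x_payload

def demux(packet: bytes):
    """Split a packet back into (meta dict, video payload)."""
    (n,) = struct.unpack(">I", packet[:4])
    meta = json.loads(packet[4:4 + n].decode("utf-8"))
    return meta, packet[4 + n:]

# Round trip: the meta-information travels inside the same packet as the
# encoded frame, so the two can never drift out of time synchronization.
meta = {"t": 12.34, "lat": 37.5001, "lon": 127.0001, "alt": 120.2, "yaw": 10.5}
packet = mux(meta, b"\x00\x00\x00\x01fake-h264-nal")
meta_out, video_out = demux(packet)
```

An encryption step, as the patent mentions, would be applied to the packed packet before transmission and reversed after reception, leaving the mux/demux logic unchanged.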

In step 3, the dual-service IP relay server is ported: the video dual IP relay server is ported on a Linux Ubuntu base, and the smart controller serves both the MPEG video ground control system (GCS) and the AI learner.

In step 4, the AI learner's coordinate acquisition algorithm for surveillance and reconnaissance object identification is executed: pre-trained deep-learning object recognition performs AI surveillance/reconnaissance object recognition (real-time battlefield object identification and position estimation).

In step 5, the mixed reality smart-glasses content service for video object identification is produced: mixed reality (MR) content for video objects on smart glasses is created, including the UI.

In step 6, geographic information system (GIS)-based AI object-information monitoring is implemented, thereby providing the video mixed reality content service.

FIG. 8 shows an example in which the tank object recognition learning engine is executed in the AI learner and the result is displayed on the mixed reality monitor.

In this example, three tanks are recognized as objects, and the frame meta-information coordinates computed for them are shown at the lower left.

FIGS. 9 and 10 are diagrams showing examples in which the human object recognition learning engine is executed in the AI learner and the results are displayed on the mixed reality monitor.

In these examples, three people are recognized as objects in FIG. 9 and one person in FIG. 10, with each recognized object marked by a red circle.

Although the present invention has been described with the above examples, it is not limited to them and may be variously modified without departing from its technical spirit. The examples disclosed herein are intended to illustrate, not to limit, the technical idea of the invention, and the scope of that technical idea is not limited by them. The scope of protection of the present invention should be construed according to the claims below, and all technical ideas within their equivalent range should be construed as falling within the scope of the invention.

The present invention is a camera video object position coordinate acquisition system using a drone, comprising: a satellite navigation device (GPS) 100 that uses a GNSS receiver and receives RTK/DGPS correction information to perform GPS correction; an onboard flight control computer 200 that receives gimbal information and video information (H.264 video) from the camera gimbal 300 and transmits them using a frame-adjusting image-processing algorithm; a smart controller 400 comprising, besides its interface to the onboard flight control computer 200, a video communication unit and a dual IP relay unit; an AI learner 500 that receives MPEG data from the smart controller and provides drone camera video object position coordinates to a mixed reality monitor; and a mixed reality monitor 600 that displays the drone camera video object position coordinates provided by the AI learner 500. Because the drone equipped with an onboard flight control computer and the drone camera video object positioning system using it allow coordinate acquisition to be provided as a stable service, the invention has industrial applicability.

Claims (2)

A camera video object position coordinate acquisition system using a drone equipped with an onboard flight control computer, comprising:

a satellite navigation device (GPS) 100 that uses a GNSS receiver and receives RTK/DGPS correction information to perform GPS correction;

an onboard flight control computer 200 comprising an ESC (Electronic Speed Controller) power control unit for driving motors, a video communication unit for transmitting video data, a control communication unit for receiving control signals, a flight control unit that controls flight under the control of the control communication unit, and an attitude/altitude/angle sensor unit, wherein the onboard flight control computer 200 receives precise position/altitude/time information from the satellite navigation device (GPS) 100 and gimbal information and video information (H.264 video) from a camera gimbal 300, performs video communication and control communication with a smart controller 400 on the ground, acquires coordinates for the camera gimbal 300 so that the video and meta-information are synchronized, generates, when transmitting the video-synchronized MPEG, meta-information per frame keyed to the real-time camera video, includes, in an MPEG structure with a container structure inserted into the time-synchronized frames, the information for computing the coordinates the camera is viewing at the drone's flight position, and transmits using an image-processing algorithm that, in step with the video transmission timing, automatically adjusts oversized frames so that the stream does not exceed the configured bandwidth;

the smart controller 400, comprising a controller onboard computer running a Linux operating system, a pilot controller, a ground control system, a control communication unit for controlling the onboard flight control computer 200, a video communication unit that performs video communication with the onboard flight control computer 200, and a dual IP relay unit;

an AI learner 500 that receives MPEG data from the smart controller 400, separates the video from the meta-information, recognizes objects in the video with a trained AI algorithm, computes, for each recognized object, its position coordinates at that time based on the pre-learned meta-information, and provides the drone camera video object position coordinates to a mixed reality monitor 600; and

the mixed reality monitor 600, comprising smart glasses or a monitor, that displays the drone camera video object position coordinates provided by the AI learner 500.
The system of claim 1, wherein the smart controller 400 is a smart controller operated by the drone pilot, comprising a controller onboard computer running a Linux operating system, a pilot controller, a ground control system (GCS), a control communication unit for controlling the onboard flight control computer 200, a video communication unit that performs video communication with the onboard flight control computer 200, and a dual IP relay unit, wherein the MPEG video received from the onboard flight control computer 200 is streamed simultaneously to the ground control system (GCS) and to an AI processor configured in the mixed reality monitor 600, and the ground control system (GCS) receives the MPEG video, separates the video from the meta-information, reproduces the drone's camera direction and terrain position on a GIS map, and displays the video in real time at the corresponding location.
PCT/KR2022/015356 2021-12-23 2022-10-12 Drone having onboard flight control computer, and system for obtaining positional coordinates of drone camera video object by using drone Ceased WO2023120908A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2021-0186392 2021-12-23
KR1020210186392A KR102417591B1 (en) 2021-12-23 2021-12-23 Drone equipped with on-board flight control computer and drone camera video object position coordinate acquisition system using the same

Publications (1)

Publication Number Publication Date
WO2023120908A1 true WO2023120908A1 (en) 2023-06-29

Family

ID=82400274

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/015356 Ceased WO2023120908A1 (en) 2021-12-23 2022-10-12 Drone having onboard flight control computer, and system for obtaining positional coordinates of drone camera video object by using drone

Country Status (2)

Country Link
KR (1) KR102417591B1 (en)
WO (1) WO2023120908A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119497150A (en) * 2024-11-25 2025-02-21 南京航空航天大学 Low-altitude intelligent network multi-modal communication dynamic switching and adaptive compression system and method

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102417591B1 (en) * 2021-12-23 2022-07-06 주식회사 두시텍 Drone equipped with on-board flight control computer and drone camera video object position coordinate acquisition system using the same
KR20240023926A (en) 2022-08-16 2024-02-23 (주)메타파스 System and method for Unmanned Aerial Data Mapping of Unmanned Aerial Based Thermal Imaging Inspection Platform
KR102620116B1 (en) 2023-04-28 2024-01-02 (주)네온테크 Target Coordinate Acquisition System by Drone and Target Coordinate Acquisition Method by the Same
CN116543323A (en) * 2023-05-30 2023-08-04 赣南师范大学 Monitoring and emergency prevention and control system for citrus yellow dragon disease media psyllids
KR102810079B1 (en) 2024-08-23 2025-05-20 주식회사 두시텍 Autonomous flight system using artificial intelligence-based edge computing
KR102782897B1 (en) 2024-08-23 2025-03-18 주식회사 두시텍 Mission apparatus equipped with image processing and object detection functions

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180035606A1 (en) * 2016-08-05 2018-02-08 Romello Burdoucci Smart Interactive and Autonomous Robotic Property Maintenance Apparatus, System, and Method
KR20200048615A (en) * 2018-10-30 2020-05-08 (주)메타파스 Realtime inspecting drone for solar photovoltaic power station basen on machine learning
KR20200058079A (en) * 2018-11-19 2020-05-27 네이버시스템(주) Apparatus and method for aerial photographing to generate three-dimensional modeling and orthoimage
EP3619695B1 (en) * 2017-05-05 2021-07-28 Tg-17, Llc System and method for threat monitoring, detection, and response
KR102417591B1 (en) * 2021-12-23 2022-07-06 주식회사 두시텍 Drone equipped with on-board flight control computer and drone camera video object position coordinate acquisition system using the same

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170083979A (en) * 2017-05-31 2017-07-19 주식회사 하우앳 System for controlling radio-controlled flight vehicle and its carmera gimbal for aerial tracking shot
KR101943823B1 (en) 2017-10-31 2019-01-31 주식회사 두시텍 UAV for accurate position data acquisition during high-speed flight and Determination of Unmanned Mission Equipment Synchronization for Accurate Position Data Acquisition during High Speed Flight
KR102260372B1 (en) 2020-10-29 2021-06-03 한국건설기술연구원 RTK drone based GRP auto-arrangement method for digital map creation of earthwork site


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KIM DONG-HYUN, GO YONG-GUK, CHOI SOO-MI: "An Aerial Mixed-Reality Environment for First-Person-View Drone Flying", APPLIED SCIENCES, vol. 10, no. 16, pages 5436, XP093076043, DOI: 10.3390/app10165436 *


Also Published As

Publication number Publication date
KR102417591B1 (en) 2022-07-06

Similar Documents

Publication Publication Date Title
WO2023120908A1 (en) Drone having onboard flight control computer, and system for obtaining positional coordinates of drone camera video object by using drone
US12129026B2 (en) Method and system for controlling aircraft
JP6583840B1 (en) Inspection system
CN111307291B (en) Method, device and system for detecting and locating abnormal surface temperature based on UAV
CN105116907A (en) Method for designing data transmission and control system of miniature unmanned aerial vehicle
KR102267764B1 (en) Group drone based broadband reconnaissance and surveillance system, broadband reconnaissance and surveillance method
KR101943823B1 (en) UAV for accurate position data acquisition during high-speed flight and Determination of Unmanned Mission Equipment Synchronization for Accurate Position Data Acquisition during High Speed Flight
CN108733064A (en) A kind of the vision positioning obstacle avoidance system and its method of unmanned plane
KR20110134076A (en) 3D spatial information construction method using attitude control of unmanned aerial vehicle
US20190373184A1 (en) Image display method, image display system, flying object, program, and recording medium
CN106094876A (en) A kind of unmanned plane target locking system and method thereof
CN109665099A (en) Stringing camera chain and stringing method for imaging
JP2025041720A (en) Inspection Systems
WO2020048365A1 (en) Flight control method and device for aircraft, and terminal device and flight control system
CN109521785B (en) A portable camera intelligent rotorcraft system
US10412372B2 (en) Dynamic baseline depth imaging using multiple drones
US10557718B2 (en) Auxiliary control method and system for unmanned aerial vehicle
CN113063401A (en) Unmanned aerial vehicle aerial survey system
US12266168B2 (en) Method for controlling lens module, aerial vehicle, and aircraft system
CN204515536U (en) A kind of autonomous cruise camera system based on four rotors
US10509819B2 (en) Comparative geolocation system
CN115493594A (en) Patrol control system and method based on BIM
Bakirci et al. Avionics system development for a rotary wing unmanned combat aerial vehicle
JP6681101B2 (en) Inspection system
CN107036602A (en) Autonomous navigation system and method in mixing unmanned plane room based on environmental information code

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22911547

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22911547

Country of ref document: EP

Kind code of ref document: A1