
WO2018090505A1 - Drone and control method thereof - Google Patents

Drone and control method thereof

Info

Publication number
WO2018090505A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
virtual reality
drone
reality device
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/075303
Other languages
English (en)
French (fr)
Inventor
刘均
宋朝忠
欧阳张鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Launch Technology Co Ltd
Original Assignee
Shenzhen Launch Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Launch Technology Co Ltd filed Critical Shenzhen Launch Technology Co Ltd
Publication of WO2018090505A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231: Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/25: Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N 21/258: Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N 21/25808: Management of client data
    • H04N 21/25816: Management of client data involving client authentication
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47: End-user applications
    • H04N 21/472: End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N 21/47202: End-user interface for requesting content, additional data or services, for requesting content on demand, e.g. video on demand

Definitions

  • The invention relates to the technical field of drones, and in particular to a drone and a control method thereof.
  • With the development of technology, drones are increasingly used in aerial photography, disaster rescue, film and television production, and similar fields, and users can employ drones for remote video and image capture.
  • When a user wants to view a video taken by a drone, the user usually has to connect the drone to a mobile terminal such as a smartphone and then play the captured video through the mobile terminal. Because of the limited screen size of the mobile terminal, the playback effect is poor.
  • The main object of the present invention is to provide a drone and a control method thereof, aiming to solve the technical problem in the prior art that videos captured by a drone play back poorly.
  • To achieve the above object, the present invention provides a control method for a drone, the control method including: when the drone receives a shooting instruction, capturing, by the drone, a video of the current first scene, the video being a video in a virtual reality video format; and sending the video to a virtual reality device that has established a wireless connection with the drone, for the virtual reality device to play the video, wherein the virtual reality device is located at a second site.
  • Preferably, after the step of capturing the video of the current first scene when the shooting instruction is received, the method further includes: saving the captured video; and the step of sending the video to a virtual reality device that has established a wireless connection with the drone includes: upon receiving a video acquisition instruction, sending the saved video to the virtual reality device that has established the wireless connection with the drone.
  • Preferably, the step of sending the saved video to the virtual reality device upon receiving a video acquisition instruction includes: acquiring the video identification information carried in the video acquisition instruction; querying the saved videos, according to the video identification information, for the video corresponding to that identification information; and sending the queried video to the virtual reality device.
  • Preferably, the step of sending the video to the virtual reality device includes: sending the video and first identity information of the drone to a server, so that the server queries the stored associations between drone identity information and virtual reality device identity information, determines the second identity information associated with the first identity information, and sends the video to the virtual reality device corresponding to the second identity information.
  • Preferably, the step of sending the video to the virtual reality device includes: determining whether the video can be played by the virtual reality device and, if so, sending the video to the virtual reality device.
  • Preferably, after the step of capturing the video, the method further includes: sending the video to a server for the server to save the video; and the step of sending the video to the virtual reality device includes: upon receiving a video acquisition instruction, sending a corresponding video acquisition request to the server so that the server returns the saved video; and, upon receiving the video returned by the server, sending the video to the virtual reality device.
  • In addition, the present invention also provides a drone, the drone comprising:
  • a shooting module configured to capture a video of the current first scene when a shooting instruction is received, the video being a video in a virtual reality video format; and
  • a processing module configured to send the video to a virtual reality device that has established a wireless connection with the drone, for the virtual reality device to play the video, wherein the virtual reality device is located at a second site.
  • Preferably, the drone further comprises a sending module configured to send the video to a server, for the server to save the video; and the processing module is configured to, upon receiving a video acquisition instruction, send a corresponding video acquisition request to the server and, upon receiving the saved video returned by the server, send the video to the virtual reality device.
  • Preferably, the drone further comprises a saving module configured to save the captured video; and the processing module is configured to, upon receiving a video acquisition instruction, send the saved video to the virtual reality device that has established a wireless connection with the drone.
  • Preferably, the processing module comprises: an acquiring unit configured to acquire the video identification information carried in a video acquisition instruction when the instruction is received; a querying unit configured to query the saved videos, according to the video identification information, for the video corresponding to that identification information; and a processing unit configured to send the queried video to the virtual reality device.
  • Preferably, the processing module is configured to send the video and first identity information of the drone to a server, so that the server queries the stored associations between drone identity information and virtual reality device identity information, determines the second identity information associated with the first identity information, and sends the video to the virtual reality device corresponding to the second identity information.
  • Preferably, the processing module is specifically configured to determine whether the video can be played by the virtual reality device and, if so, to send the video to the virtual reality device.
  • Preferably, the saving module is further configured to send the video to a server, for the server to save the video; and the processing module is configured to, upon receiving a video acquisition instruction, send a corresponding video acquisition request to the server and, upon receiving the saved video returned by the server, send the video to the virtual reality device.
  • According to the drone and the control method thereof proposed by the present invention, after the drone captures a video of the first scene, the video is sent to a virtual reality device located at a second site with which the drone has established a wireless connection, and the virtual reality device plays the video captured by the drone. In this way, when the user at the second site watches the video through the virtual reality device, the user experiences the effect of being present at the first scene, which improves the playback effect of videos captured by the drone.
  • FIG. 1 is a schematic flowchart of a first embodiment of the control method for a drone according to the present invention;
  • FIG. 2 is a schematic flowchart of a second embodiment of the control method for a drone according to the present invention;
  • FIG. 3 is a schematic flowchart of a third embodiment of the control method for a drone according to the present invention;
  • FIG. 4 is a schematic diagram of the functional modules of a first embodiment of the drone of the present invention;
  • FIG. 5 is a schematic diagram of the functional modules of a second embodiment of the drone of the present invention;
  • FIG. 6 is a schematic diagram of the refined functional units of the processing module in the second embodiment of the drone of the present invention.
  • The invention provides a control method for a drone. Referring to FIG. 1, in the first embodiment the control method of the drone includes the following steps.
  • Step S10: when the drone receives a shooting instruction, capturing a video of the current first scene, the video being a video in a virtual reality video format.
  • With the development of technology, drones are increasingly used in aerial photography, disaster rescue, film and television production, and similar fields, and users can employ drones for remote video and image capture.
  • In this embodiment, the user uses a drone for video shooting. When the user wants to shoot a video of the first scene, the user can control the drone to fly to the first scene. Once the drone is at the first scene, the user performs a shooting operation that triggers a shooting instruction for the drone, for example by sending the shooting instruction to the drone through a control device such as a mobile terminal. When the drone receives the shooting instruction, it calls its on-board camera to capture a video of the current first scene.
  • Those skilled in the art will understand that the camera on the drone may be a monocular camera or a multi-lens camera, and that the video captured by the drone is a video in a VR (virtual reality) video format that can be played by a VR device.
  • It can be understood that the first scene may be the place where the user currently is, or some other place where the user is not currently located. For example, a user who is at home but wants a video of a lesson in a school classroom can control the drone to fly to the school and then shoot the classroom video with the drone.
  • Further, to ensure that the VR device can play the video, the drone determines, before transmitting the captured video to the VR device, whether the VR device can play the video directly. If it can, the drone sends the video to the VR device as-is. If it cannot, the drone first converts the video into a format suitable for playback on the VR device, for example a side-by-side (left-right) format, and then sends the converted video to the VR device.
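  • Purely as an illustration (not part of the disclosed embodiments), the playability check and format conversion described above could be sketched as follows in Python; the class, function, and layout names are invented placeholders, and the conversion itself is stubbed out:

    from dataclasses import dataclass

    @dataclass
    class Video:
        frames: list   # placeholder for decoded frame data
        layout: str    # e.g. "equirectangular" or "side_by_side"

    def convert_to_side_by_side(video: Video) -> Video:
        # Placeholder conversion: a real implementation would re-project each
        # frame into a left/right (side-by-side) stereo layout.
        return Video(frames=video.frames, layout="side_by_side")

    def prepare_for_vr_device(video: Video, supported_layouts: set) -> Video:
        # Send the video as-is if the device can play it directly,
        # otherwise convert it to a layout the device supports first.
        if video.layout in supported_layouts:
            return video
        return convert_to_side_by_side(video)

    # Example: a headset that only accepts side-by-side video.
    clip = Video(frames=[], layout="equirectangular")
    print(prepare_for_vr_device(clip, {"side_by_side"}).layout)  # side_by_side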
  • Step S20: sending the video to a virtual reality device that has established a wireless connection with the drone, for the virtual reality device to play the video, wherein the virtual reality device is located at a second site.
  • In this embodiment, the drone establishes a wireless communication connection with the corresponding VR device in advance; for example, the drone establishes a wireless communication connection with a pair of VR glasses.
  • The VR device is located at the second site. It can be understood that the second site may be a different place from the first site or the same place as the first site.
  • After the drone has captured the video of the first scene, it sends the video to the VR device with which the wireless connection has been established. When the VR device receives the video sent by the drone, it can play the video captured by the drone.
  • Optionally, the drone may send the captured video to the VR device in real time as it shoots; alternatively, the drone may first save the captured video of the first scene and, when the user later wants to watch it, send the saved video to the VR device for playback.
  • Because the VR device shuts out the user's sight and hearing of the outside world and guides the user into the feeling of being inside a virtual environment, a user who wears the VR device to watch the video can become immersed in the filmed environment, which greatly enriches the viewing experience.
  • For example, when a user is travelling, a scenery video can be captured by the drone and then sent to a VR device located at the user's home, so that other users at home can wear the VR device to watch the scenery video and experience the scenery for themselves.
  • In the solution of this embodiment, after the drone captures a video of the current first scene, it sends the video to the virtual reality device at the second site with which the wireless connection has been established, and the virtual reality device plays the video captured by the drone. In this way, when the user at the second site watches the video through the virtual reality device, the user experiences the effect of being present at the first scene, which improves the playback effect of videos captured by the drone and the user's viewing experience.
  • Further, as shown in FIG. 2, a second embodiment of the control method for a drone is proposed on the basis of the first embodiment. In this embodiment, after step S10, the method further includes:
  • Step S30: saving the captured video.
  • Step S20 then includes:
  • Step S21: upon receiving a video acquisition instruction, sending the saved video to the virtual reality device that has established a wireless connection with the drone.
  • In this embodiment, after the drone captures the video of the current first scene, it saves the video. Later, when the user wants to watch the video taken by the drone at the first scene, the user performs a corresponding video acquisition operation, for example a video acquisition control operation on the control device, which triggers a corresponding video acquisition instruction. Optionally, the video acquisition instruction may also be sent to the drone through the VR device. When the drone receives the video acquisition instruction, it sends the saved video to the VR device.
  • Specifically, step S21 includes:
  • Step a: upon receiving a video acquisition instruction, acquiring the video identification information carried in the video acquisition instruction;
  • Step b: querying the saved videos, according to the video identification information, for the video corresponding to the video identification information;
  • Step c: sending the queried video to the virtual reality device.
  • Because the drone saves the video each time it shoots, the number of videos saved on the drone grows as the number of shoots increases.
  • When the user wants to view a particular video captured by the drone, the user performs a video acquisition operation that triggers a video acquisition instruction, and the instruction carries the video identification information of the video to be viewed.
  • In this embodiment, each video has its own unique video identification information, for example the video's ID number or name.
  • When the drone receives the video acquisition instruction, it obtains the video identification information contained in the instruction, queries the saved videos according to that identification information for the corresponding video and, once the video is found, sends the queried video to the VR device.
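  • A minimal sketch of the lookup described above, assuming (only for illustration) that the saved videos are kept in an in-memory dictionary keyed by their identification information; the names, message fields, and delivery callback are hypothetical and not taken from the embodiments:

    # Saved videos indexed by their identification information (ID number or name).
    saved_videos = {
        "video_001": b"...vr video bytes...",
        "classroom_morning": b"...vr video bytes...",
    }

    def handle_acquisition_instruction(instruction: dict, send_to_vr_device) -> bool:
        # The instruction carries the identification information of the wanted video.
        video = saved_videos.get(instruction.get("video_id"))
        if video is None:
            return False          # no saved video matches the identification info
        send_to_vr_device(video)  # forward the queried video to the VR device
        return True

    # Example: request the classroom video and "send" it by printing its size.
    handle_acquisition_instruction({"video_id": "classroom_morning"},
                                   lambda v: print(len(v), "bytes sent"))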
  • In the solution of this embodiment, the drone saves the captured video of the first scene and, upon receiving a video acquisition instruction, sends the saved video to the virtual reality device, which plays it. The user can therefore play the video whenever needed, which makes playback of drone-captured video more flexible and further improves the viewing experience.
  • Further, a third embodiment of the control method for a drone is proposed on the basis of the first or second embodiment. In this embodiment, step S20 includes:
  • Step d: sending the video and the first identity information of the drone to a server, so that the server queries the stored associations between drone identity information and virtual reality device identity information, determines the second identity information associated with the first identity information, and sends the video to the virtual reality device corresponding to the second identity information.
  • To avoid the situation in which the drone cannot send the captured video to the VR device because the wireless communication connection between them has been interrupted, in this embodiment the drone sends the captured video of the current first scene to a corresponding server, and the server then sends the video to the VR device.
  • Specifically, identity information is preset for each drone and each VR device; for example, an ID number is preset for the drone and for the VR device.
  • The server stores the identity information of each drone in association with the identity information of its corresponding VR device, that is, it associates each drone with its corresponding VR device. For example, if the identity information of a drone is the first identity information and the identity information of its corresponding VR device is the second identity information, the server stores the first identity information and the second identity information in association with each other.
  • After the drone captures the video of the current first scene, it sends the captured video together with its first identity information to the server. When the server receives the video and the drone's first identity information, it queries the stored associations between drone identity information and VR device identity information and determines the second identity information associated with the first identity information, which is the identity information of the VR device corresponding to that drone. The server then sends the received video to the VR device corresponding to the second identity information, that is, to the VR device associated with the drone, and the video is played through that VR device.
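  • The server-side association and routing could look roughly like the sketch below; this assumes an in-memory mapping from drone identity information to VR device identity information, and none of the identifiers or function names come from the patent:

    # Pre-stored associations: drone identity info -> VR device identity info.
    associations = {
        "drone-001": "vr-glasses-001",
        "drone-002": "vr-glasses-002",
    }

    def route_video(drone_id: str, video: bytes, deliver) -> str:
        # Determine the second identity information associated with the drone's
        # first identity information, then deliver the video to that VR device.
        vr_device_id = associations.get(drone_id)
        if vr_device_id is None:
            raise KeyError("no VR device associated with " + drone_id)
        deliver(vr_device_id, video)
        return vr_device_id

    # Example delivery callback that just reports where the video would go.
    route_video("drone-001", b"...vr video bytes...",
                lambda device, v: print(len(v), "bytes routed to", device))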
  • Further, as shown in FIG. 3, in this embodiment, after step S10, the method further includes:
  • Step S40: sending the video to a server, for the server to save the video.
  • Step S20 then includes:
  • Step S22: upon receiving a video acquisition instruction, sending a corresponding video acquisition request to the server, so that the server returns the saved video;
  • Step S23: upon receiving the video returned by the server, sending the video to the virtual reality device.
  • Further, in this embodiment, after the drone captures the video of the current first scene, it either sends the captured video directly to the server, or saves the captured video and also sends it to the server. When the server receives the video sent by the drone, it saves the video.
  • Later, when the user wants to view the video, the user performs a corresponding video acquisition operation, for example through the control device or through the VR device, which triggers a video acquisition instruction. When the drone receives the video acquisition instruction, it sends a corresponding video acquisition request to the server.
  • When the server receives the video acquisition request, it sends the saved video to the drone; after receiving the video from the server, the drone sends it to the VR device with which the wireless connection has been established, and the VR device plays the video.
  • Optionally, the video acquisition instruction contains the video identification information of the requested video. When the drone receives the video acquisition instruction, it obtains the video identification information contained in the instruction, generates a video acquisition request containing that identification information, and sends the request to the server. When the server receives the video acquisition request, it queries its saved videos, according to the video identification information contained in the request, for the corresponding video, and sends the queried video to the drone, which forwards the received video to the VR device.
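  • The request/response exchange described above might be sketched as below, with the server reduced to a plain function and the message fields invented for illustration; the embodiments do not specify the transport, message format, or storage:

    # Hypothetical server-side store of videos keyed by identification information.
    server_store = {"classroom_morning": b"...vr video bytes..."}

    def server_handle_request(request: dict) -> dict:
        # The server queries its saved videos with the identification information
        # carried in the acquisition request and returns the match, if any.
        return {"video": server_store.get(request["video_id"])}

    def drone_fetch_and_forward(video_id: str, forward_to_vr_device) -> bool:
        # The drone turns the acquisition instruction into a request, sends it to
        # the server, and forwards the returned video to the VR device.
        response = server_handle_request({"video_id": video_id})
        if response["video"] is None:
            return False
        forward_to_vr_device(response["video"])
        return True

    drone_fetch_and_forward("classroom_morning",
                            lambda v: print(len(v), "bytes forwarded to the VR device"))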
  • Further, to improve playback efficiency, after the server has saved the video captured by the drone, the VR device may send a video acquisition request containing the video identification information directly to the server when the user wants to play the video. When the server receives the video acquisition request sent by the VR device, it queries its saved videos, according to the video identification information contained in the request, for the corresponding video, and sends the queried video to the VR device, which plays it.
  • In the solution of this embodiment, the drone sends the captured video to the server for storage, and when the user later wants to watch a video captured by the drone, the saved video only needs to be retrieved from the server. This not only lets the user play the video whenever needed but also avoids occupying the drone's own storage space.
  • The invention further provides a drone.
  • FIG. 4 is a schematic diagram of functional modules of a first embodiment of the drone of the present invention.
  • It should be emphasized that the functional block diagram shown in FIG. 4 is merely an example of a preferred embodiment; those skilled in the art can easily add new functional modules around the modules of the drone shown in FIG. 4. The names of the functional modules are custom names used only to aid understanding of the drone's program function blocks and are not intended to limit the technical solution of the present invention; the core of the technical solution is the function that each named module is intended to achieve.
  • In this embodiment, the drone includes:
  • a shooting module 10 configured to capture a video of the current first scene upon receiving a shooting instruction.
  • In this embodiment, the user uses a drone for video shooting. When the user wants to shoot a video of the first scene, the user can control the drone to fly to the first scene. Once the drone is at the first scene, the user performs a shooting operation that triggers a shooting instruction for the drone, for example by sending the shooting instruction to the drone through a control device such as a mobile terminal. When the shooting instruction is received, the shooting module 10 calls the camera on the drone to capture a video of the current first scene.
  • Those skilled in the art will understand that the camera on the drone may be a monocular camera or a multi-lens camera, and that the video captured by the shooting module 10 is a video in a VR (virtual reality) video format that can be played by a VR device.
  • It can be understood that the first scene may be the place where the user currently is, or some other place where the user is not currently located. For example, a user who is at home but wants a video of a lesson in a school classroom can control the drone to fly to the school and then shoot the classroom video through the shooting module 10.
  • The drone also includes a processing module 20 configured to send the captured video to a virtual reality device that has established a wireless connection with the drone, for the virtual reality device to play the video, wherein the virtual reality device is located at a second site.
  • In this embodiment, the drone establishes a wireless communication connection with the corresponding VR device in advance; for example, the drone establishes a wireless communication connection with a pair of VR glasses.
  • The VR device is located at the second site. It can be understood that the second site may be a different place from the first site or the same place as the first site.
  • After the shooting module 10 has captured the video of the first scene, the processing module 20 sends the video to the VR device with which the wireless connection has been established. When the VR device receives the video sent by the drone, it can play the video captured by the drone.
  • Optionally, the processing module 20 may send the captured video to the VR device in real time as the shooting module 10 shoots; alternatively, the video of the first scene captured by the shooting module 10 may first be saved and, when the user later wants to watch it, the processing module 20 sends the saved video to the VR device for playback.
  • Further, to ensure that the VR device can play the video, the processing module 20 determines, before transmitting the captured video to the VR device, whether the VR device can play the video directly. If it can, the processing module 20 sends the video to the VR device as-is. If it cannot, the processing module 20 first converts the video into a format suitable for playback on the VR device, for example a side-by-side (left-right) format, and then sends the converted video to the VR device.
  • Because the VR device shuts out the user's sight and hearing of the outside world and guides the user into the feeling of being inside a virtual environment, a user who wears the VR device to watch the video can become immersed in the filmed environment, which greatly enriches the viewing experience.
  • For example, when a user is travelling, a scenery video can be captured by the shooting module 10 of the drone, and the processing module 20 then sends the scenery video to a VR device located at the user's home, so that other users at home can wear the VR device to watch the scenery video and experience the scenery for themselves.
  • In the solution of this embodiment, after the shooting module 10 captures a video of the current first scene, the processing module 20 sends the video to the virtual reality device at the second site with which the wireless connection has been established, and the virtual reality device plays the video captured by the drone. In this way, when the user at the second site watches the video through the virtual reality device, the user experiences the effect of being present at the first scene, which improves the playback effect of videos captured by the drone and the user's viewing experience.
  • Further, as shown in FIG. 5, a second embodiment of the drone is proposed on the basis of the first embodiment. In this embodiment, the drone further includes:
  • a saving module 30 configured to save the captured video.
  • The processing module 20 is configured to, upon receiving a video acquisition instruction, send the saved video to the virtual reality device that has established a wireless connection with the drone.
  • In this embodiment, after the shooting module 10 captures the video of the current first scene, the saving module 30 saves the captured video. Later, when the user wants to watch the video taken by the drone at the first scene, the user performs a corresponding video acquisition operation, for example a video acquisition control operation on the control device, which triggers a corresponding video acquisition instruction. Optionally, the video acquisition instruction may also be sent to the drone through the VR device. Upon receiving the video acquisition instruction, the processing module 20 sends the saved video to the VR device.
  • Specifically, as shown in FIG. 6, the processing module 20 includes:
  • an acquiring unit 21 configured to acquire the video identification information carried in a video acquisition instruction upon receiving the instruction;
  • a querying unit 22 configured to query the saved videos, according to the video identification information, for the video corresponding to the video identification information; and
  • a processing unit 23 configured to send the queried video to the virtual reality device.
  • Because the saving module 30 saves the video each time the shooting module 10 shoots, the number of videos saved on the drone grows as the number of shoots increases.
  • When the user wants to view a particular video captured by the drone, the user performs a video acquisition operation that triggers a video acquisition instruction, and the instruction carries the video identification information of the video to be viewed.
  • In this embodiment, each video has its own unique video identification information, for example the video's ID number or name.
  • Upon receiving the video acquisition instruction, the acquiring unit 21 obtains the video identification information contained in the instruction; the querying unit 22 then queries the saved videos according to that identification information for the corresponding video; and once the querying unit 22 has found the video corresponding to the video identification information, the processing unit 23 sends the queried video to the VR device.
  • Further, the saving module 30 is further configured to send the video to a server, for the server to save the video; and the processing module 20 is configured to, upon receiving a video acquisition instruction, send a corresponding video acquisition request to the server and, upon receiving the saved video returned by the server, send the video to the virtual reality device.
  • Further, after the shooting module 10 captures the video of the current first scene, the saving module 30 saves the captured video and also sends it to the server. When the server receives the video sent by the drone, it saves the video.
  • Later, when the user wants to view the video, the user performs a corresponding video acquisition operation, for example through the control device or through the VR device, which triggers a video acquisition instruction. Upon receiving the instruction, the processing module 20 sends a corresponding video acquisition request to the server.
  • When the server receives the video acquisition request, it sends the saved video to the drone; the processing module 20 then sends the video to the VR device with which the wireless connection has been established, and the VR device plays the video.
  • In the solution of this embodiment, the saving module 30 saves the captured video of the first scene and, upon receipt of a video acquisition instruction, the processing module 20 sends the saved video to the virtual reality device, which plays it. The user can therefore play the video whenever needed, which makes playback of drone-captured video more flexible and further improves the viewing experience.
  • Further, a third embodiment of the drone is proposed on the basis of the first or second embodiment. In this embodiment, the processing module 20 is configured to send the video and the first identity information of the drone to a server, so that the server queries the stored associations between drone identity information and virtual reality device identity information, determines the second identity information associated with the first identity information, and sends the video to the virtual reality device corresponding to the second identity information.
  • To avoid the situation in which the captured video cannot be sent to the VR device because the wireless communication connection between the drone and the VR device has been interrupted, in this embodiment the processing module 20 sends the captured video to the corresponding server, and the server then sends the video to the VR device.
  • Specifically, identity information is preset for each drone and each VR device; for example, an ID number is preset for the drone and for the VR device.
  • The server stores the identity information of each drone in association with the identity information of its corresponding VR device, that is, it associates each drone with its corresponding VR device. For example, if the identity information of a drone is the first identity information and the identity information of its corresponding VR device is the second identity information, the server stores the first identity information and the second identity information in association with each other.
  • After the shooting module 10 captures the video of the current first scene, the processing module 20 sends the captured video together with the drone's first identity information to the server. When the server receives the video and the drone's first identity information, it queries the stored associations between drone identity information and VR device identity information and determines the second identity information associated with the first identity information, which is the identity information of the VR device corresponding to that drone. The server then sends the received video to the VR device corresponding to the second identity information, that is, to the VR device associated with the drone, and the video is played through that VR device.
  • Further, the drone further includes a sending module configured to send the video to a server, for the server to save the video; and the processing module 20 is configured to, upon receiving a video acquisition instruction, send a corresponding video acquisition request to the server and, upon receiving the saved video returned by the server, send the video to the virtual reality device.
  • Further, after the shooting module 10 captures the video of the current first scene, the video is not saved by the saving module 30; instead, the captured video of the current first scene is sent directly to the server through the sending module. When the server receives the video sent by the drone, it saves the video.
  • Later, when the user wants to view the video, the user performs a corresponding video acquisition operation, for example through the control device or through the VR device, which triggers a video acquisition instruction. Upon receiving the instruction, the processing module 20 sends a corresponding video acquisition request to the server.
  • When the server receives the video acquisition request, it sends the saved video to the drone; the processing module 20 then sends the video to the VR device with which the wireless connection has been established, and the VR device plays the video.
  • Optionally, the video acquisition instruction contains the video identification information of the requested video. Upon receiving the video acquisition instruction, the processing module 20 obtains the video identification information contained in the instruction, generates a video acquisition request containing that identification information, and sends the request to the server. When the server receives the video acquisition request, it queries its saved videos, according to the video identification information contained in the request, for the corresponding video, and sends the queried video to the drone; the processing module 20 then forwards the received video to the VR device.
  • Further, to improve playback efficiency, after the server has saved the video captured by the drone, the VR device may send a video acquisition request containing the video identification information directly to the server when the user wants to play the video. When the server receives the video acquisition request sent by the VR device, it queries its saved videos, according to the video identification information contained in the request, for the corresponding video, and sends the queried video to the VR device, which plays it.
  • In the solution of this embodiment, the processing module 20 sends the captured video to the server for storage, and when the user later wants to watch a video captured by the drone, the saved video only needs to be retrieved from the server. This not only lets the user play the video whenever needed but also avoids occupying the drone's own storage space.
  • Through the description of the above embodiments, those skilled in the art will understand that the methods of the foregoing embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present invention that is essential, or that contributes over the prior art, can be embodied in the form of a software product stored on a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and containing a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the various embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Graphics (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Processing Or Creating Images (AREA)
  • Studio Devices (AREA)

Abstract

The present invention discloses a control method for a drone, comprising the steps of: when the drone receives a shooting instruction, capturing a video of the current first scene, the video being a video in a virtual reality video format; and sending the video to a virtual reality device that has established a wireless connection with the drone, for the virtual reality device to play the video, wherein the virtual reality device is located at a second site. The present invention also discloses a drone. The present invention improves the playback effect of videos captured by a drone.

Description

无人机及其控制方法
技术领域
本发明涉及无人机技术领域,尤其涉及一种无人机及其控制方法。
背景技术
随着科技的发展,无人机在航拍、灾难救援、影视拍摄等领域的应用越来越广,用户可以采用无人机进行远程的视频和图像拍摄。当用户要查看无人机拍摄的视频时,通常需要用户先将无人机与智能手机等移动终端进行连接,然后通过移动终端播放无人机拍摄的视频。由于移动终端屏幕大小的限制,视频播放的效果不佳。
发明内容
本发明的主要目的在于提出一种无人机及其控制方法,旨在解决现有技术中无人机拍摄的视频的播放效果不佳的技术问题。
为实现上述目的,本发明提供的一种无人机的控制方法,所述无人机的控制方法包括:
无人机在接收到拍摄指令时,拍摄当前第一现场的视频,所述视频为虚拟现实视频格式的视频;
将所述视频发送至与所述无人机建立无线连接的虚拟现实设备,以供所述虚拟现实设备播放所述视频,其中,所述虚拟现实设备位于第二现场。
优选地,所述无人机在接收到拍摄指令时,拍摄当前第一现场的视频的步骤之后,还包括:
保存拍摄的所述视频;
其中,所述将所述视频发送至与所述无人机建立无线连接的虚拟现实设备的步骤包括:
在接收到视频获取指令时,将保存的所述视频发送至与所述无人机建立无线连接的虚拟现实设备。
优选地,所述在接收到视频获取指令时,将保存的所述视频发送至与所述无人机建立无线连接的虚拟现实设备的步骤包括:
在接收到视频获取指令时,获取所述视频获取指令中携带的视频标识信息;
根据所述视频标识信息,从保存的视频中查询所述视频标识信息对应的视频;
将查询的所述视频发送至所述虚拟现实设备。
优选地,所述将所述视频发送至与所述无人机建立无线连接的虚拟现实设备的步骤包括:
将所述视频以及所述无人机的第一身份标识信息发送至服务器,以供所述服务器查询关联保存的无人机身份标识信息和虚拟现实设备身份标识信息,确定与所述第一身份标识信息关联的第二身份标识信息,并将所述视频发送至所述第二身份标识信息对应的虚拟现实设备。
优选地,所述将所述视频发送至与所述无人机建立无线连接的虚拟现实设备的步骤包括:
判断所述视频是否可被所述虚拟现实设备播放;
若是,则将所述视频发送至所述虚拟现实设备。
优选地,所述无人机在接收到拍摄指令时,拍摄当前第一现场的视频的步骤之后,还包括:
将所述视频发送至服务器,以供所述服务器保存所述视频;
其中,所述将拍摄的所述视频发送至与所述无人机建立无线连接的虚拟现实设备的步骤包括:
在接收到视频获取指令时,发送相应的视频获取请求至所述服务器,以供所述服务器反馈保存的所述视频;
在接收到所述服务器反馈的所述视频时,将所述视频发送至所述虚拟现实设备。
此外,为实现上述目的,本发明还提出一种无人机,所述无人机包括:
拍摄模块,用于在接收到拍摄指令时,拍摄当前第一现场的视频,所述视频为虚拟现实视频格式的视频;
处理模块,用于将所述视频发送至与所述无人机建立无线连接的虚拟现实设备,以供所述虚拟现实设备播放所述视频,其中,所述虚拟现实设备位于第二现场。
优选地,所述无人机还包括:
发送模块,用于将所述视频发送至服务器,以供所述服务器保存所述视频;
其中,所述处理模块用于在接收到视频获取指令时,发送相应的视频获取请求至所述服务器,并在接收到所述服务器反馈的保存的所述视频时,将所述视频发送至所述虚拟现实设备。
优选地,所述无人机还包括:
保存模块,用于保存拍摄的所述视频;
其中,所述处理模块,用于在接收到视频获取指令时,将保存的所述视频发送至与所述无人机建立无线连接的虚拟现实设备。
优选地,所述处理模块包括:
获取单元,用于在接收到视频获取指令时,获取所述视频获取指令中携带的视频标识信息;
查询单元,用于根据所述视频标识信息,从保存的视频中查询所述视频标识信息对应的视频;
处理单元,用于将查询的所述视频发送至所述虚拟现实设备。
优选地,所述处理模块用于:
将所述视频以及所述无人机的第一身份标识信息发送至服务器,以供所述服务器查询关联保存的无人机身份标识信息和虚拟现实设备身份标识信息,确定与所述第一身份标识信息关联的第二身份标识信息,并将所述视频发送至所述第二身份标识信息对应的虚拟现实设备。
优选地,所述处理模块具体用于:
判断所述视频是否可被所述虚拟现实设备播放,若是,则将所述视频发送至所述虚拟现实设备。
优选地,所述保存模块还用于:
将所述视频发送至服务器,以供所述服务器保存所述视频;
其中,所述处理模块用于在接收到视频获取指令时,发送相应的视频获取请求至所述服务器,并在接收到所述服务器反馈的保存的所述视频时,将所述视频发送至所述虚拟现实设备。
本发明提出的无人机及其控制方法,无人机拍摄了当前第一现场的视频后,将该视频发送至与其建立无线连接的位于第二现场的虚拟现实设备,通过虚拟现实设备播放无人机拍摄的该视频,这样,用户在第二现场通过佩戴虚拟现实设备观看该视频时,就可以达到身临第一现场的效果,从而提高了无人机拍摄的视频的播放效果。
附图说明
图1为本发明无人机的控制方法第一实施例的流程示意图;
图2为本发明无人机的控制方法第二实施例的流程示意图;
图3为本发明无人机的控制方法第三实施例的流程示意图;
图4为本发明无人机第一实施例的功能模块示意图;
图5为本发明无人机第二实施例的功能模块示意图;
图6为本发明无人机第二实施例中处理模块的细化功能模块示意图。
本发明目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。
本发明提供一种无人机的控制方法。
参照图1,图1为本发明无人机的控制方法第一实施例的流程示意图。在本实施例中,所述无人机的控制方法包括:
步骤S10,无人机在接收到拍摄指令时,拍摄当前第一现场的视频,所述视频为虚拟现实视频格式的视频;
随着科技的发展,无人机在航拍、灾难救援、影视拍摄等领域的应用越来越广,用户可以采用无人机进行远程的视频、图像拍摄。在本实施例中,用户采用无人机进行视频拍摄。当用户想要拍摄第一现场的视频时,用户可以操控无人机到达第一现场,当无人机位于第一现场时,用户执行无人机拍摄操作,触发无人机的拍摄指令,比如通过移动终端等控制设备发送拍摄指令至无人机。当无人机接收到该拍摄指令时,无人机调用其配置的摄像头拍摄当前第一现场的视频。本领域技术人员可以理解的是,无人机配置的摄像头可以为单目摄像头,也可以为多目摄像头,无人机拍摄的视频为VR(Virtual Reality)虚拟现实视频格式的视频,VR虚拟现实设备可以播放该视频。可以理解的是,该第一现场可以是用户当前所在的地方,也可以是其他并不是用户当前所在的地方。比如,当用户本人位于家中,但想要拍摄学校教室的上课视频时,用户可以控制无人机到达学校,然后采用无人机拍摄教室上课的视频。
进一步地,为了确保VR虚拟现实设备可以播放该视频,无人机在将拍摄的视频发送至VR虚拟现实设备之前,先判断VR虚拟现实设备是否能够直接播放该视频。若VR虚拟现实设备能够直接播放该视频,则无人机将该视频发送至VR虚拟现实设备。若VR虚拟现实设备不能够直接播放该视频,则无人机先将该视频进行视频格式转换,将该视频转换成适宜在VR虚拟现实设备上播放的格式,比如,将该视频转换成左右格式的视频。然后将视频格式转换后的该视频发送至VR虚拟现实设备。
步骤S20,将所述视频发送至与所述无人机建立无线连接的虚拟现实设备,以供所述虚拟现实设备播放所述视频,其中,所述虚拟现实设备位于第二现场。
本实施例中,无人机与相应的VR虚拟现实设备预先建立无线通信连接,比如,无人机与VR眼镜建立无线通信连接。该VR虚拟现实设备位于第二现场,可以理解的是,该第二现场可以是与第一现场不同的地方,也可以是与第一现场相同的地方。
当无人机拍摄了第一现场的视频之后,无人机将该视频发送至与其建立无线连接的VR虚拟现实设备。当VR虚拟现实设备接收到无人机发送的视频时,即可播放无人机拍摄的该视频。可选地,当无人机拍摄第一现场的视频时,无人机可实时将拍摄的视频发送至与其建立无线连接的VR虚拟现实设备;或者,无人机也可以先将拍摄的第一现场的视频进行保存,之后当用户要观看该视频时,再将保存的该视频发送至VR虚拟现实设备,供VR虚拟现实设备进行播放。
由于VR虚拟现实设备将用户的对外界的视觉、听觉封闭,引导用户产生一种身在虚拟环境中的感觉,因此,当用户佩戴VR虚拟现实设备观看该视频时,就能沉浸到视频环境中,极大地丰富了用户的观看视频体验。比如,当某用户在外旅游时,可以通过无人机拍摄风景视频,然后将该风景视频发送至位于家中的VR虚拟现实设备,这样,在家中的其他用户通过佩戴该VR虚拟现实设备,就可以观看到该风景视频,从而给在家中的用户也身临风景环境的体验。
本实施例提出的方案,无人机拍摄了当前第一现场的视频后,将该视频发送至与其建立无线连接的位于第二现场的虚拟现实设备,通过虚拟现实设备播放无人机拍摄的该视频,这样,用户在第二现场通过佩戴虚拟现实设备观看该视频时,就可以达到身临第一现场的效果,从而提高了无人机拍摄的视频的播放效果,提高了用户的观看体验。
进一步地,如图2所示,基于第一实施例提出本发明无人机的控制方法第二实施例,在本实施例中,所述步骤S10之后,还包括步骤:
步骤S30,保存拍摄的所述视频;
所述步骤S20包括:
步骤S21,在接收到视频获取指令时,将保存的所述视频发送至与所述无人机建立无线连接的虚拟现实设备。
在本实施例中,当无人机拍摄了当前第一现场的视频之后,无人机将拍摄的该视频保存。之后,当用户要观看无人机在第一现场拍摄的视频时,用户执行相应的视频获取操作,比如,通过控制设备执行视频获取的控制操作,触发相应的视频获取指令。可选地,也可以通过VR虚拟现实设备发送视频获取指令至无人机。当无人机接收到视频获取指令时,无人机将保存的视频发送至VR虚拟现实设备。具体地,所述步骤S21包括:
步骤a,在接收到视频获取指令时,获取所述视频获取指令中携带的视频标识信息;
步骤b,根据所述视频标识信息,从保存的视频中查询所述视频标识信息对应的视频;
步骤c,将查询的所述视频发送至所述虚拟现实设备。
由于无人机在每次拍摄视频后,将拍摄的视频保存,这样随着拍摄次数增加,无人机中保存到视频的数量也会增加。当用户要观看无人机某一次拍摄的视频时,用户执行视频获取操作,触发视频获取指令,其中,该视频获取指令中包含有要观看的该视频对应的视频标识信息。本实施例中,每个视频都有唯一对应的视频标识信息,比如,视频标识信息可以为视频对应的视频ID号、视频名称等。当无人机接收到视频获取指令时,获取该视频获取指令中所包含的视频标识信息,然后无人机根据获取的该视频标识信息,从保存的各视频中,查询出与获取的该视频标识信息对应的视频。在查询到该视频标识信息对应的视频后,将查询的视频发送至VR虚拟现实设备。
本实施例提出的方案,无人机将拍摄的第一现场的视频保存,之后当接收到视频获取指令时,将保存的该视频发送至虚拟现实设备,通过虚拟现实设备播放该视频,因此,用户可以根据需要随时播放该视频,使得无人机拍摄的视频播放更加灵活,从而进一步提高了用户的观看体验。
进一步地,基于第一实施例或第二实施例提出本发明无人机的控制方法第三实施例,在本实施例中,所述步骤S20包括:
步骤d,将所述视频以及所述无人机的第一身份标识信息发送至服务器,以供所述服务器查询关联保存的无人机身份标识信息和虚拟现实设备身份标识信息,确定与所述第一身份标识信息关联的第二身份标识信息,并将所述视频发送至所述第二身份标识信息对应的虚拟现实设备。
为了避免在无人机与VR虚拟现实设备的无线通信连接发生中断时,无人机无法将拍摄的视频发送至VR虚拟现实设备的问题,本实施例中,当无人机拍摄了当前第一现场的视频之后,无人机将拍摄的该视频发送至相应的服务器,再由服务器将该视频发送至VR虚拟现实设备。具体地,预先设置每一个无人机和VR虚拟现实设备对应的身份标识信息。比如,预先设置无人机和VR虚拟现实设备设置对应的身份ID号。服务器将每一个无人机的身份标识信息和其对应的VR虚拟现实设备的身份标识信息关联保存,也即将每个无人机和其对应的VR虚拟现实设备进行关联。例如,若无人机的身份标识信息是第一身份标识信息,其对应的VR虚拟现实设备的身份标识信息是第二身份标识信息,服务器将第一身份标识信息和第二身份标识信息关联保存。
当无人机拍摄了当前第一现场的视频之后,无人机将拍摄的该视频以及无人机的第一身份标识信息发送至服务器。当服务器接收到无人机发送的该视频和无人机的第一身份标识信息时,服务器查询关联保存的无人机的身份标识信息和其对应的VR虚拟现实设备的身份标识信息,确定与第一身份标识信息关联的第二身份标识信息,该第二身份标识信息就是无人机所对应的VR虚拟现实设备的身份标识信息。然后,服务器将接收到的视频发送至该第二身份标识信息所对应的VR虚拟现实设备,也即将该视频发送至与无人机关联的VR虚拟现实设备,通过VR虚拟现实设备播放该视频。
进一步地,如图3所示,本实施例中,所述步骤S10之后,还包括:
步骤S40,将所述视频发送至服务器,以供所述服务器保存所述视频;
所述步骤S20包括:
步骤S22,在接收到视频获取指令时,发送相应的视频获取请求至所述服务器,以供所述服务器反馈保存的所述视频;
步骤S23,在接收到所述服务器反馈的所述视频时,将所述视频发送至所述虚拟现实设备。
进一步地,本实施例中,当无人机拍摄了当前第一现场的视频之后,无人机直接将拍摄的当前第一现场的视频发送至服务器;或者,无人机将拍摄的该视频保存,并将拍摄的当前第一现场的视频发送至服务器。当服务器接收到无人机发送的该视频时,将该视频进行保存。之后,当用户要观看该视频时,用户执行相应的视频获取操作,触发视频获取指令,比如,当用户通过控制设备或者VR虚拟现实设备执行相应的视频获取操作,触发视频获取指令。当无人机接收到视频获取指令时,发送相应的视频获取请求至服务器。当服务器接收到该视频获取请求时,发送保存的该视频至无人机,无人机在接收到服务器发送的该视频后,将该视频发送至与其建立无线连接的VR虚拟现实设备,通过该VR虚拟现实设备播放该视频。
可选地,视频获取指令中包含有该视频对应的视频标识信息,无人机在接收到视频获取指令时,获取该视频获取指令中包含的视频标识信息,然后根据该视频标识信息,生成包含该视频标识信息的视频获取请求,并发送该视频获取请求至服务器。当服务器接收到该视频获取请求时,根据该视频获取请求中包含的视频标识信息,从保存的视频中查询出该视频标识信息对应的视频,然后将查询到的视频发送至无人机,无人机将接收到的视频发送至VR虚拟现实设备。
进一步地,为了提高视频播放的效率,在服务器保存了无人机拍摄的视频之后,当用户要播放无人机拍摄的视频时,可通过VR虚拟现实设备发送视频获取请求至服务器,其中,该视频获取请求中包含有视频标识信息。当服务器接收到VR虚拟现实设备发送的视频获取请求时,根据该视频获取请求中包含的视频标识信息,从保存的视频中查询出该视频标识信息对应的视频,然后将查询到的视频发送至VR虚拟现实设备,通过VR虚拟现实设备播放该视频。
本实施例提出的方案,无人机将拍摄的视频发送至服务器保存,之后当用户要观看无人机拍摄的视频时,只需要从服务器中获取保存的该视频,因此,不仅实现了用户可以根据需要随时播放该视频,而且还避免了占用无人机的存储空间。
本发明进一步提供一种无人机。
参照图4,图4为本发明无人机第一实施例的功能模块示意图。
需要强调的是,对本领域的技术人员来说,图4所示功能模块图仅仅是一个较佳实施例的示例图,本领域的技术人员围绕图4所示的无人机的功能模块,可轻易进行新的功能模块的补充;各功能模块的名称是自定义名称,仅用于辅助理解该无人机的各个程序功能块,不用于限定本发明的技术方案,本发明技术方案的核心是,各自定义名称的功能模块所要达成的功能。
在本实施例中,所述无人机包括:
拍摄模块10,用于在接收到拍摄指令时,拍摄当前第一现场的视频;
随着科技的发展,无人机在航拍、灾难救援、影视拍摄等领域的应用越来越广,用户可以采用无人机进行远程的视频、图像拍摄。在本实施例中,用户采用无人机进行视频拍摄。当用户想要拍摄第一现场的视频时,用户可以操控无人机到达第一现场,当无人机位于第一现场时,用户执行无人机拍摄操作,触发无人机的拍摄指令,比如通过移动终端等控制设备发送拍摄指令至无人机。当接收到该拍摄指令时,拍摄模块10调用无人机配置的摄像头拍摄当前第一现场的视频。本领域技术人员可以理解的是,无人机配置的摄像头可以为单目摄像头,也可以为多目摄像头,拍摄模块10拍摄的视频为VR(Virtual Reality)虚拟现实视频格式的视频,VR虚拟现实设备可以播放该视频。可以理解的是,该第一现场可以是用户当前所在的地方,也可以是其他并不是用户当前所在的地方。比如,当用户本人位于家中,但想要拍摄学校教室的上课视频时,用户可以控制无人机到达学校,然后通过拍摄模块10拍摄教室上课的视频。
处理模块20,用于将拍摄的所述视频发送至与所述无人机建立无线连接的虚拟现实设备,以供所述虚拟现实设备播放所述视频,其中,所述虚拟现实设备位于第二现场。
本实施例中,无人机与相应的VR虚拟现实设备预先建立无线通信连接,比如,无人机与VR眼镜建立无线通信连接。该VR虚拟现实设备位于第二现场,可以理解的是,该第二现场可以是与第一现场不同的地方,也可以是与第一现场相同的地方。
当拍摄模块10拍摄了第一现场的视频之后,处理模块20将该视频发送至与其建立无线连接的VR虚拟现实设备。当VR虚拟现实设备接收到无人机发送的视频时,即可播放无人机拍摄的该视频。可选地,当拍摄模块10拍摄第一现场的视频时,处理模块20可实时将拍摄的视频发送至与其建立无线连接的VR虚拟现实设备;或者,也可以先将拍摄模块10拍摄的第一现场的视频进行保存,之后当用户要观看该视频时,处理模块20再将保存的该视频发送至VR虚拟现实设备,供VR虚拟现实设备进行播放。
进一步地,为了确保VR虚拟现实设备可以播放该视频,处理模块20在将拍摄的视频发送至VR虚拟现实设备之前,先判断VR虚拟现实设备是否能够直接播放该视频。若VR虚拟现实设备能够直接播放该视频,则处理模块20将该视频发送至VR虚拟现实设备。若VR虚拟现实设备不能够直接播放该视频,则处理模块20先将该视频进行视频格式转换,将该视频转换成适宜在VR虚拟现实设备上播放的格式,比如,将该视频转换成左右格式的视频。然后处理模块20将视频格式转换后的该视频发送至VR虚拟现实设备。
由于VR虚拟现实设备将用户的对外界的视觉、听觉封闭,引导用户产生一种身在虚拟环境中的感觉,因此,当用户佩戴VR虚拟现实设备观看该视频时,就能沉浸到视频环境中,极大地丰富了用户的观看视频体验。比如,当某用户在外旅游时,可以通过无人机的拍摄模块10拍摄风景视频,然后处理模块20将该风景视频发送至位于家中的VR虚拟现实设备,这样,在家中的其他用户通过佩戴该VR虚拟现实设备,就可以观看到该风景视频,从而给在家中的用户也身临风景环境的体验。
本实施例提出的方案,通过拍摄模块10拍摄了当前第一现场的视频后,处理模块20将该视频发送至与其建立无线连接的位于第二现场的虚拟现实设备,通过虚拟现实设备播放无人机拍摄的该视频,这样,用户在第二现场通过佩戴虚拟现实设备观看该视频时,就可以达到身临第一现场的效果,从而提高了无人机拍摄的视频的播放效果,提高了用户的观看体验。
进一步地,如图5所示,基于第一实施例提出本发明无人机第二实施例,在本实施例中,所述无人机还包括:
保存模块30,用于保存拍摄的所述视频;
所述处理模块20用于:
在接收到视频获取指令时,将保存的所述视频发送至与所述无人机建立无线连接的虚拟现实设备。
在本实施例中,当拍摄模块10拍摄了当前第一现场的视频之后,保存模块30将拍摄的该视频保存。之后,当用户要观看无人机在第一现场拍摄的视频时,用户执行相应的视频获取操作,比如,通过控制设备执行视频获取的控制操作,触发相应的视频获取指令。可选地,也可以通过VR虚拟现实设备发送视频获取指令至无人机。当接收到视频获取指令时,处理模块20将保存的视频发送至VR虚拟现实设备。具体地,如图6所示,所述处理模块20包括:
获取单元21,用于在接收到视频获取指令时,获取所述视频获取指令中携带的视频标识信息;
查询单元22,用于根据所述视频标识信息,从保存的视频中查询所述视频标识信息对应的视频;
处理单元23,用于将查询的所述视频发送至所述虚拟现实设备。
由于在拍摄模块10每次拍摄视频后,保存模块30将拍摄的视频保存,这样随着拍摄次数增加,无人机中保存到视频的数量也会增加。当用户要观看无人机某一次拍摄的视频时,用户执行视频获取操作,触发视频获取指令,其中,该视频获取指令中包含有要观看的该视频对应的视频标识信息。本实施例中,每个视频都有唯一对应的视频标识信息,比如,视频标识信息可以为视频对应的视频ID号、视频名称等。当接收到视频获取指令时,获取单元21获取该视频获取指令中所包含的视频标识信息,然后查询单元22根据获取的该视频标识信息,从保存的各视频中,查询出与获取的该视频标识信息对应的视频。在查询单元22查询到该视频标识信息对应的视频后,处理单元23将查询的视频发送至VR虚拟现实设备。
进一步地,所述保存模块30还用于:
将所述视频发送至服务器,以供所述服务器保存所述视频;
所述处理模块20用于:
在接收到视频获取指令时,发送相应的视频获取请求至所述服务器,并在接收到所述服务器反馈的保存的所述视频时,将所述视频发送至所述虚拟现实设备。
进一步地,当拍摄模块10拍摄了当前第一现场的视频之后,保存模块30将拍摄的该视频保存,并将拍摄的当前第一现场的视频发送至服务器。当服务器接收到无人机发送的该视频时,将该视频进行保存。之后,当用户要观看该视频时,用户执行相应的视频获取操作,触发视频获取指令,比如,当用户通过控制设备或者VR虚拟现实设备执行相应的视频获取操作,触发视频获取指令。当接收到视频获取指令时,处理模块20发送相应的视频获取请求至服务器。当服务器接收到该视频获取请求时,发送保存的该视频至无人机。在接收到服务器发送的该视频后,处理模块20将该视频发送至与其建立无线连接的VR虚拟现实设备,通过该VR虚拟现实设备播放该视频。
本实施例提出的方案,保存模块30将拍摄的第一现场的视频保存,之后当接收到视频获取指令时,处理模块20将保存的该视频发送至虚拟现实设备,通过虚拟现实设备播放该视频,因此,用户可以根据需要随时播放该视频,使得无人机拍摄的视频播放更加灵活,从而进一步提高了用户的观看体验。
进一步地,基于第一实施例或第二实施例提出本发明无人机第三实施例,在本实施例中,所述处理模块20用于:
将所述视频以及所述无人机的第一身份标识信息发送至服务器,以供所述服务器查询关联保存的无人机身份标识信息和虚拟现实设备身份标识信息,确定与所述第一身份标识信息关联的第二身份标识信息,并将所述视频发送至所述第二身份标识信息对应的虚拟现实设备。
为了避免在无人机与VR虚拟现实设备的无线通信连接发生中断时,无法将拍摄的视频发送至VR虚拟现实设备的问题,本实施例中,当拍摄模块10拍摄了当前第一现场的视频之后,处理模块20将拍摄的该视频发送至相应的服务器,再由服务器将该视频发送至VR虚拟现实设备。具体地,预先设置每一个无人机和VR虚拟现实设备对应的身份标识信息。比如,预先设置无人机和VR虚拟现实设备设置对应的身份ID号。服务器将每一个无人机的身份标识信息和其对应的VR虚拟现实设备的身份标识信息关联保存,也即将每个无人机和其对应的VR虚拟现实设备进行关联。例如,若无人机的身份标识信息是第一身份标识信息,其对应的VR虚拟现实设备的身份标识信息是第二身份标识信息,服务器将第一身份标识信息和第二身份标识信息关联保存。
当拍摄模块10拍摄了当前第一现场的视频之后,处理模块20将拍摄的该视频以及无人机的第一身份标识信息发送至服务器。当服务器接收到无人机发送的该视频和无人机的第一身份标识信息时,服务器查询关联保存的无人机的身份标识信息和其对应的VR虚拟现实设备的身份标识信息,确定与第一身份标识信息关联的第二身份标识信息,该第二身份标识信息就是无人机所对应的VR虚拟现实设备的身份标识信息。然后,服务器将接收到的视频发送至该第二身份标识信息所对应的VR虚拟现实设备,也即将该视频发送至与无人机关联的VR虚拟现实设备,通过VR虚拟现实设备播放该视频。
进一步地,所述无人机还包括:
发送模块,用于将所述视频发送至服务器,以供所述服务器保存所述视频;
所述处理模块20用于:
在接收到视频获取指令时,发送相应的视频获取请求至所述服务器,并在接收到所述服务器反馈的保存的所述视频时,将所述视频发送至所述虚拟现实设备。
进一步地,当拍摄模块10拍摄了当前第一现场的视频之后,并不通过保存模块30保存该视频,而是通过发送模块直接将拍摄的当前第一现场的视频发送至服务器。当服务器接收到无人机发送的该视频时,将该视频进行保存。之后,当用户要观看该视频时,用户执行相应的视频获取操作,触发视频获取指令,比如,当用户通过控制设备或者VR虚拟现实设备执行相应的视频获取操作,触发视频获取指令。当接收到视频获取指令时,处理模块20发送相应的视频获取请求至服务器。当服务器接收到该视频获取请求时,发送保存的该视频至无人机。在接收到服务器发送的该视频后,处理模块20将该视频发送至与其建立无线连接的VR虚拟现实设备,通过该VR虚拟现实设备播放该视频。
可选地,视频获取指令中包含有该视频对应的视频标识信息,在接收到视频获取指令时,处理模块20获取该视频获取指令中包含的视频标识信息,然后根据该视频标识信息,生成包含该视频标识信息的视频获取请求,并发送该视频获取请求至服务器。当服务器接收到该视频获取请求时,根据该视频获取请求中包含的视频标识信息,从保存的视频中查询出该视频标识信息对应的视频,然后将查询到的视频发送至无人机,处理模块20将接收到的视频发送至VR虚拟现实设备。
进一步地,为了提高视频播放的效率,在服务器保存了无人机拍摄的视频之后,当用户要播放无人机拍摄的视频时,可通过VR虚拟现实设备发送视频获取请求至服务器,其中,该视频获取请求中包含有视频标识信息。当服务器接收到VR虚拟现实设备发送的视频获取请求时,根据该视频获取请求中包含的视频标识信息,从保存的视频中查询出该视频标识信息对应的视频,然后将查询到的视频发送至VR虚拟现实设备,通过VR虚拟现实设备播放该视频。
本实施例提出的方案,处理模块20将拍摄的视频发送至服务器保存,之后当用户要观看无人机拍摄的视频时,只需要从服务器中获取保存的该视频,因此,不仅实现了用户可以根据需要随时播放该视频,而且还避免了占用无人机的存储空间。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其它变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其它要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本发明各个实施例所述的方法。
以上仅为本发明的优选实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其它相关的技术领域,均同理包括在本发明的专利保护范围内。

Claims (16)

  1. A control method for a drone, characterized in that the control method comprises the following steps:
    when the drone receives a shooting instruction, capturing a video of the current first scene, the video being a video in a virtual reality video format;
    sending the video to a virtual reality device that has established a wireless connection with the drone, for the virtual reality device to play the video, wherein the virtual reality device is located at a second site.
  2. The control method for a drone according to claim 1, characterized in that, after the step of capturing the video of the current first scene when the drone receives a shooting instruction, the method further comprises:
    sending the video to a server, for the server to save the video;
    wherein the step of sending the video to the virtual reality device that has established a wireless connection with the drone comprises:
    upon receiving a video acquisition instruction, sending a corresponding video acquisition request to the server, for the server to return the saved video;
    upon receiving the video returned by the server, sending the video to the virtual reality device.
  3. The control method for a drone according to claim 1, characterized in that, after the step of capturing the video of the current first scene when the drone receives a shooting instruction, the method further comprises:
    saving the captured video;
    wherein the step of sending the video to the virtual reality device that has established a wireless connection with the drone comprises:
    upon receiving a video acquisition instruction, sending the saved video to the virtual reality device that has established a wireless connection with the drone.
  4. The control method for a drone according to claim 3, characterized in that, after the step of capturing the video of the current first scene when the drone receives a shooting instruction, the method further comprises:
    sending the video to a server, for the server to save the video;
    wherein the step of sending the video to the virtual reality device that has established a wireless connection with the drone comprises:
    upon receiving a video acquisition instruction, sending a corresponding video acquisition request to the server, for the server to return the saved video;
    upon receiving the video returned by the server, sending the video to the virtual reality device.
  5. The control method for a drone according to claim 3, characterized in that the step of sending the saved video to the virtual reality device that has established a wireless connection with the drone upon receiving a video acquisition instruction comprises:
    upon receiving the video acquisition instruction, acquiring the video identification information carried in the video acquisition instruction;
    querying the saved videos, according to the video identification information, for the video corresponding to the video identification information;
    sending the queried video to the virtual reality device.
  6. The control method for a drone according to claim 1, characterized in that the step of sending the video to the virtual reality device that has established a wireless connection with the drone comprises:
    sending the video and the first identity information of the drone to a server, for the server to query the stored associations of drone identity information and virtual reality device identity information, determine the second identity information associated with the first identity information, and send the video to the virtual reality device corresponding to the second identity information.
  7. The control method for a drone according to claim 1, characterized in that the step of sending the video to the virtual reality device that has established a wireless connection with the drone comprises:
    determining whether the video can be played by the virtual reality device;
    if so, sending the video to the virtual reality device.
  8. The control method for a drone according to claim 5, characterized in that, after the step of capturing the video of the current first scene when the drone receives a shooting instruction, the method further comprises:
    sending the video to a server, for the server to save the video;
    wherein the step of sending the video to the virtual reality device that has established a wireless connection with the drone comprises:
    upon receiving a video acquisition instruction, sending a corresponding video acquisition request to the server, for the server to return the saved video;
    upon receiving the video returned by the server, sending the video to the virtual reality device.
  9. A drone, characterized in that the drone comprises:
    a shooting module configured to capture a video of the current first scene upon receiving a shooting instruction, the video being a video in a virtual reality video format;
    a processing module configured to send the video to a virtual reality device that has established a wireless connection with the drone, for the virtual reality device to play the video, wherein the virtual reality device is located at a second site.
  10. The drone according to claim 9, characterized in that the drone further comprises:
    a sending module configured to send the video to a server, for the server to save the video;
    wherein the processing module is configured to, upon receiving a video acquisition instruction, send a corresponding video acquisition request to the server and, upon receiving the saved video returned by the server, send the video to the virtual reality device.
  11. The drone according to claim 9, characterized in that the drone further comprises:
    a saving module configured to save the captured video;
    wherein the processing module is configured to, upon receiving a video acquisition instruction, send the saved video to the virtual reality device that has established a wireless connection with the drone.
  12. The drone according to claim 11, characterized in that the saving module is further configured to:
    send the video to a server, for the server to save the video;
    wherein the processing module is configured to, upon receiving a video acquisition instruction, send a corresponding video acquisition request to the server and, upon receiving the saved video returned by the server, send the video to the virtual reality device.
  13. The drone according to claim 11, characterized in that the processing module comprises:
    an acquiring unit configured to acquire the video identification information carried in a video acquisition instruction upon receiving the video acquisition instruction;
    a querying unit configured to query the saved videos, according to the video identification information, for the video corresponding to the video identification information;
    a processing unit configured to send the queried video to the virtual reality device.
  14. The drone according to claim 9, characterized in that the processing module is configured to:
    send the video and the first identity information of the drone to a server, for the server to query the stored associations of drone identity information and virtual reality device identity information, determine the second identity information associated with the first identity information, and send the video to the virtual reality device corresponding to the second identity information.
  15. The drone according to claim 9, characterized in that the processing module is specifically configured to:
    determine whether the video can be played by the virtual reality device and, if so, send the video to the virtual reality device.
  16. The drone according to claim 13, characterized in that the saving module is further configured to:
    send the video to a server, for the server to save the video;
    wherein the processing module is configured to, upon receiving a video acquisition instruction, send a corresponding video acquisition request to the server and, upon receiving the saved video returned by the server, send the video to the virtual reality device.
PCT/CN2017/075303 2016-11-16 2017-03-01 Drone and control method thereof Ceased WO2018090505A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611018366.7 2016-11-16
CN201611018366.7A CN106791599B (zh) 2016-11-16 2016-11-16 Drone and control method thereof

Publications (1)

Publication Number Publication Date
WO2018090505A1 true WO2018090505A1 (zh) 2018-05-24

Family

ID=58969793

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/075303 Ceased WO2018090505A1 (zh) 2016-11-16 2017-03-01 无人机及其控制方法

Country Status (2)

Country Link
CN (1) CN106791599B (zh)
WO (1) WO2018090505A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111901333A (zh) * 2020-07-27 2020-11-06 广州卓远虚拟现实科技有限公司 VR human-computer interaction method based on the MQTT protocol, and readable storage medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108347626A (zh) * 2018-03-06 2018-07-31 深圳春沐源控股有限公司 Method and system for pushing planting videos
CN113242382A (zh) * 2020-01-22 2021-08-10 中移智行网络科技有限公司 Photographing method and apparatus for railway line patrol operations, storage medium, and computer device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160330405A1 (en) * 2013-09-27 2016-11-10 Intel Corporation Ambulatory system to communicate visual projections
CN105373137A (zh) * 2015-11-03 2016-03-02 上海酷睿网络科技股份有限公司 Unmanned driving system
CN205210690U (zh) * 2015-11-03 2016-05-04 上海酷睿网络科技股份有限公司 Unmanned driving system
CN105704501A (zh) * 2016-02-06 2016-06-22 普宙飞行器科技(深圳)有限公司 Virtual reality live broadcast system based on drone panoramic video
CN105828062A (zh) * 2016-03-23 2016-08-03 常州视线电子科技有限公司 Drone 3D virtual reality shooting system

Also Published As

Publication number Publication date
CN106791599A (zh) 2017-05-31
CN106791599B (zh) 2019-09-20

Similar Documents

Publication Publication Date Title
WO2018006489A1 (zh) 终端的语音交互方法及装置
WO2019051902A1 (zh) 终端控制方法、空调器及计算机可读存储介质
WO2014187158A1 (zh) 终端数据云分享的控制方法、服务器及终端
WO2018023926A1 (zh) 电视与移动终端的互动方法及系统
WO2017191978A1 (en) Method, apparatus, and recording medium for processing image
WO2016029594A1 (zh) 终端连接显示设备的方法及系统
WO2018166224A1 (zh) 全景视频的目标追踪显示方法、装置及存储介质
WO2016175424A1 (ko) 이동 단말기 및 그 제어 방법
WO2018103187A1 (zh) 监控装置的监控画面形成方法和系统
WO2016101698A1 (zh) 基于dlna技术实现屏幕推送的方法及系统
WO2015046747A1 (ko) Tv 및 그 동작 방법
WO2017020649A1 (zh) 音视频播放控制方法及装置
WO2017045441A1 (zh) 基于智能电视的音频播放方法及装置
WO2016058258A1 (zh) 终端远程控制方法和系统
WO2017206377A1 (zh) 同步播放节目的方法和装置
WO2017045435A1 (zh) 控制电视播放方法和装置
WO2019051900A1 (zh) 智能家居设备的控制方法、装置及可读存储介质
WO2019051903A1 (zh) 终端控制方法、装置及计算机可读存储介质
WO2019061546A1 (zh) 移动终端的拍照方法、装置及计算机可读存储介质
WO2019085543A1 (zh) 电视机系统及电视机控制方法
WO2018090505A1 (zh) 无人机及其控制方法
WO2018023925A1 (zh) 拍摄方法及系统
WO2015158032A1 (zh) 通过视网膜信息匹配实现移动终端屏幕解锁的方法及系统
WO2019112182A1 (ko) 디스플레이 장치 및 음향 출력 방법
WO2017113600A1 (zh) 视频播放方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17872820

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17872820

Country of ref document: EP

Kind code of ref document: A1