WO2017035960A1 - A mobile-camera-based video shooting method and mobile camera - Google Patents
A mobile-camera-based video shooting method and mobile camera
- Publication number
- WO2017035960A1 · PCT/CN2015/095347 · CN2015095347W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- time
- real
- coordinate information
- trigger signal
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/278—Subtitling
Definitions
- the present invention relates to the field of photography, and in particular, to a video shooting method based on a mobile camera and a moving camera.
- the video information captured by existing cameras cannot be used to display the route along which the camera shot it. How to obtain a video file that can display the camera's shooting route is therefore a technical problem to be solved.
- the video file obtained by this shooting method can display both the content captured by the camera and the camera's shooting route.
- a moving camera that implements the shooting method is also provided.
- a video shooting method based on a mobile camera, comprising the following steps:
- the first trigger signal is used to trigger the mobile camera to start shooting.
- initial position coordinate information and initial time information are acquired.
- Real-time time information, real-time coordinate information and real-time video frames are acquired at each time point, and real-time time information and real-time coordinate information are integrated into real-time video frames.
- the second trigger signal is used to trigger the mobile camera to end shooting.
- the end position coordinate information and the end time information are acquired so that the video file is obtained.
- real-time time information, real-time coordinate information and real-time video frames are acquired at each time point, and the real-time time information and the real-time coordinate information are integrated into the real-time video frame, including:
- the third trigger signal is used to set a starting identification point.
- first position coordinate information and first time information of the starting identification point are acquired.
- Real-time time information, real-time coordinate information and real-time video frames are acquired at each time point, and real-time time information and real-time coordinate information are integrated into real-time video frames.
- the fourth trigger signal of the external input is received, and the fourth trigger signal is used to set the end identification point.
- second position coordinate information and second time information of the end identification point are acquired.
- the mobile camera comprises a mobile terminal
- the mobile terminal comprises an imaging device
- the imaging device comprises an acquisition module
- the acquisition module is configured to acquire coordinate information and time information.
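The capture flow defined above (first trigger, initial position and time, per-time-point integration of time and coordinates into frames, second trigger, end position and time) can be sketched as follows. This is a minimal illustration, not the patented implementation: `FakeGPS`, the `clock` iterator and the `capture` callable are hypothetical stand-ins for the positioning hardware and camera device.

```python
import itertools

class FakeGPS:
    """Stand-in coordinate source (the patent assumes a real positioning module)."""
    def __init__(self):
        self._ticks = itertools.count()
    def position(self):
        t = next(self._ticks)
        return (float(t), float(t) * 2.0)

def shoot(gps, clock, capture, num_frames):
    """Sketch of the claimed flow: on the first trigger record the initial
    position/time, at each time point integrate real-time time and coordinate
    information into the frame record, and on the second trigger record the
    end position/time so that a video file is obtained."""
    start_coord, start_time = gps.position(), next(clock)
    frames = []
    for _ in range(num_frames):
        frames.append({
            "pixels": capture(),      # real-time video frame
            "time": next(clock),      # real-time time information
            "coord": gps.position(),  # real-time coordinate information
        })
    end_coord, end_time = gps.position(), next(clock)
    return {"frames": frames,
            "start": (start_coord, start_time),
            "end": (end_coord, end_time)}

video = shoot(FakeGPS(), itertools.count(100), lambda: b"frame", 3)
```

Storing time and coordinate information alongside each frame record is what later allows a player to reconstruct the shooting route.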
- a mobile camera includes a receiving module, an acquisition module, and a processing module.
- the receiving module is configured to receive a first trigger signal of the external input, where the first trigger signal is used to trigger the mobile camera to start shooting.
- the obtaining module is configured to obtain initial position coordinate information and initial time information.
- the acquisition module is also used to obtain real-time time information, real-time coordinate information and real-time video frames at each time point.
- the processing module is configured to integrate the real-time time information and real-time coordinate information into the real-time video frame.
- the receiving module is further configured to receive an externally input second trigger signal, and the second trigger signal is used to trigger the mobile camera to end shooting. The acquisition module acquires end position coordinate information and end time information, so that a video file is obtained.
- the receiving module is configured to receive an externally input third trigger signal, and the third trigger signal is used to set a starting identification point.
- the acquiring module is configured to obtain first position coordinate information and first time information of the starting identification point.
- the acquisition module is also used to obtain real-time time information, real-time coordinate information and real-time video frames at each time point.
- the processing module is further configured to integrate real-time time information and real-time coordinate information into a real-time video frame.
- the receiving module is configured to receive an externally input fourth trigger signal, and the fourth trigger signal is used to set an end identification point.
- the acquiring module acquires second position coordinate information and second time information of the ending identification point.
- the mobile camera comprises a mobile terminal
- the mobile terminal comprises an imaging device
- the imaging device comprises an acquisition module
- the acquisition module is configured to acquire coordinate information and time information.
- a video shooting method based on a mobile camera, comprising the following steps:
- the first trigger signal is used to trigger the mobile camera to start shooting.
- initial position coordinate information is acquired.
- real-time coordinate information and real-time video frames are acquired at each time point, and the real-time coordinate information is integrated into the real-time video frames.
- the second trigger signal is used to trigger the mobile camera to end shooting.
- the end position coordinate information is obtained so that the video file is obtained.
- the real-time coordinate information and the real-time video frame are acquired at each time point, and the real-time coordinate information is integrated into the real-time video frame, including:
- the third trigger signal is used to set a starting identification point.
- first position coordinate information of the starting identification point is acquired.
- Real-time coordinate information and real-time video frames are acquired at each time point, and real-time coordinate information is integrated into real-time video frames.
- the fourth trigger signal of the external input is received, and the fourth trigger signal is used to set the end identification point.
- second position coordinate information of the end identification point is acquired.
- a mobile camera includes a receiving module, an acquisition module, and a processing module.
- the receiving module is configured to receive a first trigger signal of the external input, where the first trigger signal is used to trigger the mobile camera to start shooting.
- the acquisition module is configured to obtain initial position coordinate information.
- the acquisition module is also used to acquire real-time coordinate information and real-time video frames at each time point.
- the processing module is also used to integrate real-time coordinate information into real-time video frames.
- the receiving module is further configured to receive a second trigger signal that is externally input, and the second trigger signal is used to trigger the mobile camera to end shooting.
- the acquisition module is also used to obtain the end position coordinate information, so that the video file is obtained.
- the receiving module is further configured to receive an externally input third trigger signal, where the third trigger signal is used to set a starting identification point.
- the acquisition module is further configured to acquire first position coordinate information of the starting identification point.
- the acquisition module is further configured to acquire real-time coordinate information and real-time video frames at each time point.
- the processing module is also used to integrate real-time coordinate information into real-time video frames.
- the receiving module is further configured to receive an externally input fourth trigger signal, and the fourth trigger signal is used to set an end identification point.
- the obtaining module is further configured to obtain second position coordinate information of the ending identification point.
- the mobile camera of the present invention acquires real-time time information, real-time coordinate information and real-time video frames at each time point, and integrates the real-time time information and real-time coordinate information into the real-time video frames to obtain a video file, so that the shooting route of the video file can subsequently be determined from the time information and coordinate information in the video file, combined with a GIS map.
- FIG. 1 is a schematic diagram of functional modules of an embodiment video processing system.
- FIG. 2 is a schematic flow chart of a video camera acquiring a video file in an embodiment of a video processing method.
- FIG. 3 is a schematic flowchart of a video playing device playing a video file according to an embodiment of the video processing method.
- FIG. 4 is a schematic diagram of functional modules of another embodiment video processing system.
- FIG. 5 is a schematic flowchart of a video camera acquiring a video file in another embodiment of a video processing method.
- FIG. 6 is a schematic flowchart of playing a video file by a video playing device in another embodiment of a video processing method.
- FIG. 7 is a schematic diagram of a GIS display window including a trajectory line of an embodiment.
- FIG. 8 is a schematic diagram of a video play window that matches FIG. 7 and includes a video progress bar.
- FIG. 9 is a schematic diagram of a GIS display window including a trajectory line of another embodiment.
- FIG. 10 is a schematic diagram of a video play window that matches FIG. 9 and includes a video progress bar.
- the video processing system includes a mobile camera 10 and a video playback device 20.
- the mobile camera 10 is communicatively coupled to the video playback device 20 over a network.
- the mobile camera 10 includes a second receiving module 101, an obtaining module 102, and a processing module 103.
- the mobile camera 10 includes a mobile terminal; the mobile terminal includes a camera device; the camera device includes the acquisition module 102, and the acquisition module 102 acquires coordinate information.
- the video playback device 20 includes a GIS display window 202, a first receiving module 201, a trajectory line generating module 203, a trajectory line marking module 204, and a trajectory line output module 205.
- the video playback device 20 includes a terminal device on which video playback software is installed, which can be used to output a track line on a GIS display window.
- the second receiving module 101 of the mobile camera 10 is configured to receive an externally input first trigger signal, and the first trigger signal is used to trigger the mobile camera 10 to start shooting.
- the acquisition module 102 of the mobile camera 10 is configured to acquire initial position coordinate information.
- the second receiving module 101 of the mobile camera 10 is further configured to receive an externally input third trigger signal, where the third trigger signal is used to set a starting identification point.
- the acquiring module 102 of the mobile camera 10 is further configured to acquire first position coordinate information of the starting identification point.
- the acquisition module 102 of the mobile camera 10 is further configured to acquire real-time coordinate information and real-time video frames at each time point.
- the processing module 103 of the mobile camera 10 is also used to integrate real-time coordinate information into real-time video frames.
- the second receiving module 101 of the mobile camera 10 is further configured to receive an externally input fourth trigger signal, and the fourth trigger signal is used to set an end identification point.
- the acquiring module 102 of the mobile camera 10 is further configured to acquire second position coordinate information of the ending identification point.
- the second receiving module 101 of the mobile camera 10 is further configured to receive an externally input second trigger signal, and the second trigger signal is used to trigger the mobile camera 10 to end shooting.
- the acquisition module 102 of the mobile camera 10 is further configured to acquire end position coordinate information, so as to obtain a video file.
- the first receiving module 201 of the video playback device 20 is configured to receive a video file of the mobile camera 10, where the video file includes a plurality of consecutive video frames, initial identification coordinate information, end identification coordinate information, and coordinate information corresponding to each video frame.
- the trajectory line generating module 203 of the video playback device 20 is configured to generate a trajectory line according to coordinate information of all video frames.
- the trajectory line marking module 204 of the video playback device 20 is used to mark the initial position point and the end position point of the trajectory line.
- the trajectory line marking module 204 of the video playback device 20 is further configured to mark the starting identification point on the trajectory line according to the initial identification coordinate information.
- the trajectory line marking module 204 of the video playback device 20 is further configured to mark the end identification point on the trajectory line according to the end identification coordinate information.
- the trajectory line output module 205 of the video playback device 20 outputs and displays the trajectory line on the GIS display window 202.
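The playback-side modules above (trajectory line generation, marking of the initial/end position points and of the identification points) amount to deriving a polyline from the per-frame coordinates. A minimal sketch, assuming a hypothetical dict layout for the video file rather than any real container format:

```python
def build_trajectory(video_file):
    """Generate a trajectory line from the coordinate information of all
    frames and mark its notable points, as the trajectory line generating
    and marking modules would."""
    coords = [f["coord"] for f in video_file["frames"]]
    return {
        "polyline": coords,                        # ordered position points
        "initial_point": coords[0],                # initial position point
        "end_point": coords[-1],                   # end position point
        # identification points set by the third/fourth trigger signals
        "start_marker": video_file.get("start_id_coord"),
        "end_marker": video_file.get("end_id_coord"),
    }

video_file = {
    "frames": [{"coord": (0, 0)}, {"coord": (1, 2)}, {"coord": (3, 4)}],
    "start_id_coord": (1, 2),
    "end_id_coord": (3, 4),
}
line = build_trajectory(video_file)
```

The polyline plus its marked points is exactly what the trajectory line output module would hand to the GIS display window.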
- FIG. 2 is a schematic flowchart of a mobile camera acquiring a video file in an embodiment of the video processing method.
- the steps for the mobile camera to obtain video files include:
- step S1 the mobile camera receives an externally input first trigger signal, and the first trigger signal is used to trigger the mobile camera to start shooting.
- step S2 the mobile camera acquires initial position coordinate information.
- step S3 the mobile camera acquires real-time coordinate information and real-time video frames at each time point, and integrates the real-time coordinate information into the real-time video frame.
- step S4 the mobile camera receives an externally input third trigger signal, and the third trigger signal is used to set a starting identification point.
- step S5 the mobile camera acquires first position coordinate information of the starting identification point.
- step S6 the mobile camera receives an externally input fourth trigger signal, and the fourth trigger signal is used to set an end identification point.
- step S7 the mobile camera acquires second position coordinate information of the end identification point.
- step S8 the mobile camera receives a second trigger signal externally input, and the second trigger signal is used to trigger the mobile camera to end shooting.
- step S9 the mobile camera acquires the end position coordinate information, so that the video file is obtained.
- FIG. 3 is a schematic flowchart of a video playing device playing a video file in an embodiment of the video processing method.
- the steps of playing the video file by the video playing device include:
- Step S10 The video playback device receives the video file sent by the mobile camera, where the video file includes a plurality of consecutive video frames, initial identification coordinate information, end identification coordinate information, and coordinate information corresponding to each of the video frames.
- step S11 the video playing device generates a trajectory line according to the coordinate information of all the video frames, and marks the initial position point and the end position point of the trajectory line.
- Step S12 The video playing device marks the starting identification point on the trajectory line according to the initial identification coordinate information and marks the ending identification point on the trajectory line according to the ending identification coordinate information.
- step S13 the trajectory line is output and displayed on the GIS display window.
- the video processing system of the present invention acquires position coordinate information while acquiring the video frame at each time point, so that the shooting route of the video file can subsequently be obtained from the coordinate information, thereby improving the user experience and making it easy for other users to understand the shooting route of the video file.
- the video processing system includes a mobile camera 10 and a video playback device 30.
- the mobile camera 10 is communicatively coupled to the video playback device 30 via a network.
- the video playback device 30 includes a first receiving module 301, a GIS display window 302, a video play window 303, a trajectory line generating module 304, a video progress bar generating module 305, a linkage module 306, a trajectory line marking module 307, a video progress bar marking module 308, a trajectory line output module 309, a video progress bar output module 310, and a video frame output module 311.
- the mobile camera 10 includes a second receiving module 101, an acquisition module 102, and a processing module 103.
- the mobile camera 10 includes a mobile terminal; the mobile terminal includes a camera device; the camera device includes the acquisition module 102, and the acquisition module 102 acquires coordinate information and time information.
- the video playback device 30 includes a terminal device on which video playback software is installed. The video playback software can be used to output a trajectory line in the GIS display window, and at the same time, output a video frame corresponding to the trajectory line in the video playback window.
- the second receiving module 101 of the mobile camera 10 is configured to receive an externally input first trigger signal, and the first trigger signal is used to trigger the mobile camera 10 to start shooting.
- the acquisition module 102 of the mobile camera 10 is configured to acquire initial position coordinate information and initial time information.
- the second receiving module 101 of the mobile camera 10 is further configured to receive an externally input third trigger signal; the third trigger signal is used to set a starting identification point.
- the acquisition module 102 of the mobile camera 10 is configured to acquire first position coordinate information and first time information of the starting identification point.
- the acquisition module 102 of the mobile camera 10 is further configured to acquire real-time time information, real-time coordinate information and real-time video frames at each time point.
- the processing module 103 of the mobile camera 10 is configured to integrate the real-time time information and real-time coordinate information into the real-time video frames.
- the second receiving module 101 of the mobile camera 10 is further configured to receive an externally input fourth trigger signal; the fourth trigger signal is used to set an end identification point.
- the acquisition module 102 of the mobile camera 10 acquires second position coordinate information and second time information of the end identification point.
- the second receiving module 101 of the mobile camera 10 is configured to receive an externally input second trigger signal, and the second trigger signal is used to trigger the mobile camera 10 to end shooting.
- the acquisition module 102 of the mobile camera 10 acquires end position coordinate information and end time information so that a video file is obtained.
- the first receiving module 301 of the video playback device 30 is configured to receive the video file sent by the mobile camera 10, where the video file includes multiple consecutive video frames, and coordinate information and time information corresponding to each video frame; the coordinate information uniquely corresponds to the time information; the video file further includes start identification coordinate information with corresponding start identification time information, and end identification coordinate information with corresponding end identification time information.
- the trajectory line generating module 304 of the video playback device 30 is configured to generate a trajectory line according to the coordinate information of all video frames.
- the trajectory line marking module 307 of the video playback device 30 is configured to mark the initial position point and the end position point of the trajectory line.
- the trajectory line marking module 307 of the video playback device 30 is further configured to mark the starting identification point on the trajectory line according to the initial identification coordinate information.
- the trajectory line marking module 307 of the video playback device 30 is further configured to mark the ending identification point on the trajectory line according to the end identification coordinate information.
- the trajectory line output module 309 of the video playback device 30 outputs and displays the trajectory line on the GIS display window 302.
- the video progress bar generating module 305 of the video playback device 30 generates a video progress bar based on the time information of all the video frames.
- the video progress bar marking module 308 of the video playback device 30 is further configured to mark a start identification cursor position on the video progress bar according to the start identification time information, where the start identification cursor position corresponds to the starting identification point.
- the video progress bar marking module 308 of the video playback device 30 is further configured to mark an end identification cursor position on the video progress bar according to the end identification time information, where the end identification cursor position corresponds to the end identification point.
- the video progress bar output module 310 of the video playback device 30 is further configured to output and display the video progress bar in the video play window 303.
- the linkage module 306 of the video playback device 30 is configured to associate each position point on the trajectory line with a cursor position on the video progress bar according to the correspondence between the coordinate information and the time information.
- the video progress bar output module 310 of the video playback device 30 is configured to synchronously output the cursor position in the video play window 303.
- the video frame output module 311 of the video playback device 30 is configured to synchronously output a video frame corresponding to the cursor position in the video play window 303.
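The linkage module's job rests on the stated one-to-one correspondence between each frame's coordinate information and its time information. A sketch of building that mapping, using the same hypothetical per-frame dict layout as above:

```python
def link_trajectory_to_progress_bar(frames):
    """Because each frame's coordinate information uniquely corresponds to
    its time information, position points on the trajectory line map
    one-to-one onto cursor positions (times) on the video progress bar."""
    coord_to_time = {f["coord"]: f["time"] for f in frames}
    time_to_coord = {t: c for c, t in coord_to_time.items()}
    return coord_to_time, time_to_coord

frames = [
    {"coord": (0.0, 0.0), "time": 0.0},  # initial position point A
    {"coord": (1.0, 1.0), "time": 1.0},  # real-time position point S
    {"coord": (2.0, 2.0), "time": 2.0},  # end position point B
]
coord_to_time, time_to_coord = link_trajectory_to_progress_bar(frames)
```

Because the mapping is a bijection, selecting a position point can seek the progress bar, and scrubbing the progress bar can highlight the matching position point on the trajectory line.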
- FIG. 5 is a schematic flowchart of a mobile camera acquiring a video file in another embodiment of the video processing method.
- the steps for the mobile camera to obtain video files include:
- step S20 the mobile camera receives an externally input first trigger signal, and the first trigger signal is used to trigger the mobile camera to start shooting.
- step S21 the mobile camera acquires initial position coordinate information and initial time information.
- Step S22 the mobile camera acquires real-time time information, real-time coordinate information and real-time video frames at each time point, and integrates real-time time information and real-time coordinate information into the real-time video frame.
- step S23 the mobile camera receives an externally input third trigger signal, and the third trigger signal is used to set a starting identification point.
- Step S24 the mobile camera acquires first position coordinate information of the initial identification point and first time information.
- step S25 the mobile camera receives an externally input fourth trigger signal, and the fourth trigger signal is used to set an end identification point.
- Step S26 the mobile camera acquires second position coordinate information and second time information of the end identification point.
- step S27 the mobile camera receives a second trigger signal externally input, and the second trigger signal is used to trigger the mobile camera to end shooting.
- step S28 the mobile camera acquires the end position coordinate information and the end time information, so that the video file is obtained.
- FIG. 6 is a schematic flowchart of a video playing device playing a video file in another embodiment of the video processing method.
- the steps of playing the video file by the video playing device include:
- Step S30 the video playing device receives the video file sent by the mobile camera, where the video file includes a plurality of consecutive video frames, and coordinate information and time information corresponding to each video frame; the coordinate information uniquely corresponds to the time information; the coordinate information includes start identification coordinate information and end identification coordinate information, and the time information includes start identification time information and end identification time information.
- Step S31 the video playing device generates a trajectory line according to the coordinate information of all the video frames, the trajectory line includes a plurality of position points, and marks the initial position point and the end position point of the trajectory line.
- Step S32 the video playing device generates a video progress bar according to the time information of all the video frames, the video progress bar includes a plurality of cursor positions, and marks the initial cursor position and the ending cursor position of the video progress bar.
- step S33 the video playback device associates each location point on the trajectory line with the cursor position of the video progress bar according to the correspondence between the coordinate information and the time information.
- Step S34 the video playing device marks the starting identification point on the trajectory line according to the initial identification coordinate information and marks the ending identification point on the trajectory line according to the ending identification coordinate information.
- Step S35 the video playback device marks the start identification cursor position on the video progress bar according to the start identification time information, where the start identification cursor position corresponds to the starting identification point, and marks the end identification cursor position on the video progress bar according to the end identification time information, where the end identification cursor position corresponds to the end identification point.
- Step S36 the video playing device outputs the position points on the trajectory line to the GIS display window, and synchronously outputs the cursor position and the video frame corresponding to the cursor position in the video play window.
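Step S36's synchronized output can be illustrated as a lookup from a progress-bar cursor position (a time) to the matching frame, whose coordinate goes to the GIS window and whose pixels go to the play window. A sketch under the same hypothetical frame layout used above:

```python
import bisect

def frame_at_cursor(frames, cursor_time):
    """For a cursor position on the progress bar, find the frame whose time
    information is the latest one not after the cursor, and return its
    coordinate (for the GIS window) and pixels (for the play window)."""
    times = [f["time"] for f in frames]       # ordered by capture time
    i = bisect.bisect_right(times, cursor_time) - 1
    f = frames[max(i, 0)]
    return f["coord"], f["pixels"]

frames = [
    {"time": 0.0, "coord": (0, 0), "pixels": b"f0"},
    {"time": 1.0, "coord": (1, 2), "pixels": b"f1"},
    {"time": 2.0, "coord": (3, 4), "pixels": b"f2"},
]
coord, pixels = frame_at_cursor(frames, 1.5)
```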
- the mobile camera of the present invention acquires real-time time information, real-time coordinate information and real-time video frames at each time point, and integrates the real-time time information and real-time coordinate information into the real-time video frames to obtain a video file, so that the shooting route of the video file, and the video frame content at each point on that route, can subsequently be determined from the time information and coordinate information in the video file.
- FIG. 7 is a schematic diagram of a GIS display window including a trajectory line according to an embodiment.
- FIG. 8 is a schematic diagram of a video play window that matches FIG. 7 and includes a video progress bar.
- a GIS map 1 and a trajectory line 12 are shown, with the initial position point A (Xa, Ya), the end position point B (Xb, Yb) and the real-time position point S (Xs, Ys) of the trajectory line 12.
- the initial position point A (Xa, Ya) corresponds to the initial cursor position A (Ta)
- the end position point B (Xb, Yb) corresponds to the end cursor position B (Tb)
- the real-time position point S (Xs, Ys) corresponds to the cursor position S (Ts).
- FIG. 9 is a schematic diagram of a GIS display window including a trajectory line according to another embodiment.
- FIG. 10 is a schematic diagram of a video play window that matches FIG. 9 and includes a video progress bar.
- a GIS map 1 and a trajectory line 12 are shown, with the initial position point A (Xa, Ya), the end position point B (Xb, Yb), the start marker point M (Xm, Ym) and the end marker point N (Xn, Yn) of the trajectory line 12.
- the play control area 22 includes a fast forward key, a play key, a pause key, a stop key, a fast reverse key, and the like.
- the initial position point A (Xa, Ya) corresponds to the initial cursor position A (Ta)
- the end position point B (Xb, Yb) corresponds to the end cursor position B (Tb)
- the real-time position point S (Xs, Ys) corresponds to the cursor position S (Ts); in addition, the start marker point M (Xm, Ym) corresponds to the start marker cursor position M (Tm), and the end marker point N (Xn, Yn) corresponds to the end marker cursor position N (Tn).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
A video shooting method based on a mobile camera, comprising: receiving an externally input first trigger signal, the first trigger signal being used to trigger the mobile camera to start shooting; acquiring initial position coordinate information and initial time information; acquiring real-time time information, real-time coordinate information and real-time video frames at each time point, and integrating the real-time time information and real-time coordinate information into the real-time video frames; receiving an externally input second trigger signal, the second trigger signal being used to trigger the mobile camera to end shooting; and acquiring end position coordinate information and end time information, so that a video file is obtained. A mobile camera is also disclosed. The mobile camera of the present invention integrates the real-time time information and real-time coordinate information into the real-time video frames to obtain a video file, so that the shooting route of the video file can subsequently be determined from the time information and coordinate information in the video file, combined with a GIS map.
Description
Technical Field
The present invention relates to the field of photography, and in particular to a video shooting method based on a mobile camera, and a mobile camera.
Background Art
The video information captured by existing cameras cannot be used to display the route along which the camera shot that video information. How to obtain a video file that can display the camera's shooting route is therefore a technical problem urgently awaiting a solution.
Summary of the Invention
In view of this, it is necessary to provide a method for shooting a video file, such that the video file obtained by the shooting method can display both the content captured by the camera and the camera's shooting route. In addition, a mobile camera implementing the shooting method is also provided.
A video shooting method based on a mobile camera comprises the following steps:
Receive an externally input first trigger signal; the first trigger signal is used to trigger the mobile camera to start shooting.
Acquire initial position coordinate information and initial time information.
Acquire real-time time information, real-time coordinate information and real-time video frames at each time point, and integrate the real-time time information and real-time coordinate information into the real-time video frames.
Receive an externally input second trigger signal; the second trigger signal is used to trigger the mobile camera to end shooting.
Acquire end position coordinate information and end time information, so that a video file is obtained.
Preferably, the step of acquiring real-time time information, real-time coordinate information and real-time video frames at each time point, and integrating the real-time time information and real-time coordinate information into the real-time video frames, includes:
Receive an externally input third trigger signal; the third trigger signal is used to set a starting identification point.
Acquire first position coordinate information and first time information of the starting identification point.
Acquire real-time time information, real-time coordinate information and real-time video frames at each time point, and integrate the real-time time information and real-time coordinate information into the real-time video frames.
Receive an externally input fourth trigger signal; the fourth trigger signal is used to set an end identification point.
Acquire second position coordinate information and second time information of the end identification point.
Preferably, the mobile camera includes a mobile terminal, the mobile terminal includes a camera device, the camera device includes an acquisition module, and the acquisition module is used to acquire coordinate information and time information.
A mobile camera includes a receiving module, an acquisition module and a processing module. The receiving module is used to receive an externally input first trigger signal; the first trigger signal is used to trigger the mobile camera to start shooting. The acquisition module is used to acquire initial position coordinate information and initial time information. The acquisition module is further used to acquire real-time time information, real-time coordinate information and real-time video frames at each time point. The processing module is used to integrate the real-time time information and real-time coordinate information into the real-time video frames. The receiving module is used to receive an externally input second trigger signal; the second trigger signal is used to trigger the mobile camera to end shooting. The acquisition module acquires end position coordinate information and end time information, so that a video file is obtained.
Preferably, the receiving module is used to receive an externally input third trigger signal; the third trigger signal is used to set a starting identification point. The acquisition module is used to acquire first position coordinate information and first time information of the starting identification point. The acquisition module is further used to acquire real-time time information, real-time coordinate information and real-time video frames at each time point. The processing module is further used to integrate the real-time time information and real-time coordinate information into the real-time video frames. The receiving module is used to receive an externally input fourth trigger signal; the fourth trigger signal is used to set an end identification point. The acquisition module acquires second position coordinate information and second time information of the end identification point.
Preferably, the mobile camera includes a mobile terminal, the mobile terminal includes a camera device, the camera device includes an acquisition module, and the acquisition module is used to acquire coordinate information and time information.
A video shooting method based on a mobile camera comprises the following steps:
Receive an externally input first trigger signal; the first trigger signal is used to trigger the mobile camera to start shooting.
Acquire initial position coordinate information.
Acquire real-time coordinate information and real-time video frames at each time point, and integrate the real-time coordinate information into the real-time video frames.
Receive an externally input second trigger signal; the second trigger signal is used to trigger the mobile camera to end shooting.
Acquire end position coordinate information, so that a video file is obtained.
Preferably, the step of acquiring real-time coordinate information and real-time video frames at each time point, and integrating the real-time coordinate information into the real-time video frames, includes:
Receive an externally input third trigger signal; the third trigger signal is used to set a starting identification point.
Acquire first position coordinate information of the starting identification point.
Acquire real-time coordinate information and real-time video frames at each time point, and integrate the real-time coordinate information into the real-time video frames.
Receive an externally input fourth trigger signal; the fourth trigger signal is used to set an end identification point.
Acquire second position coordinate information of the end identification point.
A mobile camera includes a receiving module, an acquisition module and a processing module. The receiving module is used to receive an externally input first trigger signal; the first trigger signal is used to trigger the mobile camera to start shooting. The acquisition module is used to acquire initial position coordinate information. The acquisition module is further used to acquire real-time coordinate information and real-time video frames at each time point. The processing module is further used to integrate the real-time coordinate information into the real-time video frames. The receiving module is further used to receive an externally input second trigger signal; the second trigger signal is used to trigger the mobile camera to end shooting. The acquisition module is further used to acquire end position coordinate information, so that a video file is obtained.
Preferably, the receiving module is further used to receive an externally input third trigger signal; the third trigger signal is used to set a starting identification point. The acquisition module is further used to acquire first position coordinate information of the starting identification point. The acquisition module is further used to acquire real-time coordinate information and real-time video frames at each time point. The processing module is further used to integrate the real-time coordinate information into the real-time video frames. The receiving module is further used to receive an externally input fourth trigger signal; the fourth trigger signal is used to set an end identification point. The acquisition module is further used to acquire second position coordinate information of the end identification point.
The mobile camera of the present invention acquires real-time time information, real-time coordinate information and real-time video frames at each time point, and integrates the real-time time information and real-time coordinate information into the real-time video frames to obtain a video file, so that the shooting route of the video file can subsequently be determined from the time information and coordinate information in the video file, combined with a GIS map.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the functional modules of a video processing system according to an embodiment.
FIG. 2 is a schematic flowchart of a mobile camera acquiring a video file in an embodiment of the video processing method.
FIG. 3 is a schematic flowchart of a video playing device playing a video file in an embodiment of the video processing method.
FIG. 4 is a schematic diagram of the functional modules of a video processing system according to another embodiment.
FIG. 5 is a schematic flowchart of a mobile camera acquiring a video file in another embodiment of the video processing method.
FIG. 6 is a schematic flowchart of a video playing device playing a video file in another embodiment of the video processing method.
FIG. 7 is a schematic diagram of a GIS display window including a trajectory line according to an embodiment.
FIG. 8 is a schematic diagram of a video play window that matches FIG. 7 and includes a video progress bar.
FIG. 9 is a schematic diagram of a GIS display window including a trajectory line according to another embodiment.
FIG. 10 is a schematic diagram of a video play window that matches FIG. 9 and includes a video progress bar.
Detailed Description of the Embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
As shown in FIG. 1, it is a schematic diagram of the functional modules of a video processing system according to an embodiment. The video processing system includes a mobile camera 10 and a video playback device 20. The mobile camera 10 is communicatively connected to the video playback device 20 through a network. The mobile camera 10 includes a second receiving module 101, an acquisition module 102 and a processing module 103. The mobile camera 10 includes a mobile terminal, the mobile terminal includes a camera device, the camera device includes the acquisition module 102, and the acquisition module 102 acquires coordinate information. The video playback device 20 includes a GIS display window 202, a first receiving module 201, a trajectory line generating module 203, a trajectory line marking module 204 and a trajectory line output module 205. The video playback device 20 includes a terminal device on which video playback software is installed; the video playback software can be used to output a trajectory line on the GIS display window.
The second receiving module 101 of the mobile camera 10 is used to receive an externally input first trigger signal; the first trigger signal is used to trigger the mobile camera 10 to start shooting. The acquisition module 102 of the mobile camera 10 is used to acquire initial position coordinate information. The second receiving module 101 of the mobile camera 10 is further used to receive an externally input third trigger signal; the third trigger signal is used to set a starting identification point. The acquisition module 102 of the mobile camera 10 is further used to acquire first position coordinate information of the starting identification point. The acquisition module 102 of the mobile camera 10 is further used to acquire real-time coordinate information and real-time video frames at each time point. The processing module 103 of the mobile camera 10 is further used to integrate the real-time coordinate information into the real-time video frames. The second receiving module 101 of the mobile camera 10 is further used to receive an externally input fourth trigger signal; the fourth trigger signal is used to set an end identification point. The acquisition module 102 of the mobile camera 10 is further used to acquire second position coordinate information of the end identification point. The second receiving module 101 of the mobile camera 10 is further used to receive an externally input second trigger signal; the second trigger signal is used to trigger the mobile camera 10 to end shooting. The acquisition module 102 of the mobile camera 10 is further used to acquire end position coordinate information, so that a video file is obtained.
The first receiving module 201 of the video playback device 20 is configured to receive the video file from the mobile camera 10; the video file includes a plurality of consecutive video frames, start marker coordinate information, end marker coordinate information, and the coordinate information corresponding to each video frame. The track line generation module 203 is configured to generate a track line from the coordinate information of all the video frames. The track line marking module 204 is configured to mark the initial position point and the end position point of the track line, to mark the start marker point on the track line according to the start marker coordinate information, and to mark the end marker point on the track line according to the end marker coordinate information. The track line output module 205 outputs the track line for display in the GIS display window 202.
As shown in Fig. 2, which is a flowchart of a mobile camera obtaining a video file in a video processing method according to one embodiment, the steps by which the mobile camera obtains the video file include:
Step S1: the mobile camera receives an externally input first trigger signal, the first trigger signal being used to trigger the mobile camera to start shooting.
Step S2: the mobile camera acquires initial position coordinate information.
Step S3: the mobile camera acquires, at each time point, real-time coordinate information and a real-time video frame, and integrates the real-time coordinate information into the real-time video frame.
Step S4: the mobile camera receives an externally input third trigger signal, the third trigger signal being used to set a start marker point.
Step S5: the mobile camera acquires first position coordinate information of the start marker point.
Step S6: the mobile camera receives an externally input fourth trigger signal, the fourth trigger signal being used to set an end marker point.
Step S7: the mobile camera acquires second position coordinate information of the end marker point.
Step S8: the mobile camera receives an externally input second trigger signal, the second trigger signal being used to trigger the mobile camera to stop shooting.
Step S9: the mobile camera acquires end position coordinate information, so as to obtain the video file.
As shown in Fig. 3, which is a flowchart of a video playback device playing a video file in a video processing method according to one embodiment, the steps by which the video playback device plays the video file include:
Step S10: the video playback device receives the video file sent by the mobile camera; the video file includes a plurality of consecutive video frames, start marker coordinate information, end marker coordinate information, and the coordinate information corresponding to each video frame.
Step S11: the video playback device generates a track line from the coordinate information of all the video frames, and marks the initial position point and the end position point of the track line.
Step S12: the video playback device marks the start marker point on the track line according to the start marker coordinate information, and marks the end marker point on the track line according to the end marker coordinate information.
Step S13: the track line is output for display in the GIS display window.
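Steps S10–S13 can be sketched as a small routine that turns per-frame coordinates into a marked track line. Plain tuples and a dict stand in for real GIS objects; the display window itself is out of scope, and the names used here are assumptions for illustration.

```python
def build_track(frame_coords, start_marker=None, end_marker=None):
    """frame_coords: per-frame coordinates from the video file, in order.
    Returns a track structure with endpoints and optional marker points."""
    if not frame_coords:
        raise ValueError("a track line needs at least one coordinate")
    track = {"line": list(frame_coords)}
    track["initial_point"] = track["line"][0]    # S11: mark initial position point
    track["end_point"] = track["line"][-1]       # S11: mark end position point
    if start_marker is not None:
        track["start_marker"] = start_marker     # S12: from start marker coordinate info
    if end_marker is not None:
        track["end_marker"] = end_marker         # S12: from end marker coordinate info
    return track                                  # S13 would hand this to the GIS window
```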
In the video processing system of the present invention, position coordinate information is acquired together with the video frame at each time point, so that the shooting route of the video file can subsequently be derived from that coordinate information, improving the user experience and making it easy for other users to learn the shooting route of the video file.
As shown in Fig. 4, which is a functional module diagram of a video processing system according to another embodiment, the video processing system includes a mobile camera 10 and a video playback device 30. The mobile camera 10 is communicatively connected to the video playback device 30 via a network. The video playback device 30 includes a first receiving module 301, a GIS display window 302, a video playback window 303, a track line generation module 304, a video progress bar generation module 305, a linkage module 306, a track line marking module 307, a video progress bar marking module 308, a track line output module 309, a video progress bar output module 310 and a video frame output module 311. The mobile camera 10 includes a second receiving module 101, an acquisition module 102 and a processing module 103. The mobile camera 10 may be a mobile terminal; the mobile terminal includes a camera device, and the camera device includes the acquisition module 102, which acquires coordinate information and time information. The video playback device 30 may be a terminal device installed with video playback software, which can output a track line in the GIS display window while simultaneously outputting the video frames corresponding to the track line in the video playback window.
The second receiving module 101 of the mobile camera 10 is configured to receive an externally input first trigger signal, the first trigger signal being used to trigger the mobile camera 10 to start shooting. The acquisition module 102 is configured to acquire initial position coordinate information and initial time information. The second receiving module 101 is configured to receive an externally input third trigger signal, the third trigger signal being used to set a start marker point. The acquisition module 102 is configured to acquire first position coordinate information and first time information of the start marker point. The acquisition module 102 is further configured to acquire, at each time point, real-time time information, real-time coordinate information and a real-time video frame. The processing module 103 is configured to integrate the real-time time information and the real-time coordinate information into the real-time video frame. The second receiving module 101 is configured to receive an externally input fourth trigger signal, the fourth trigger signal being used to set an end marker point. The acquisition module 102 acquires second position coordinate information and second time information of the end marker point. The second receiving module 101 is configured to receive an externally input second trigger signal, the second trigger signal being used to trigger the mobile camera 10 to stop shooting. The acquisition module 102 acquires end position coordinate information and end time information, so as to obtain a video file.
The first receiving module 301 of the video playback device 30 is configured to receive the video file sent by the mobile camera 10; the video file includes a plurality of consecutive video frames and, for each video frame, corresponding coordinate information and time information, the coordinate information corresponding uniquely to the time information. The video file further includes start marker coordinate information with its corresponding start marker time information, and end marker coordinate information with its corresponding end marker time information. The track line generation module 304 is configured to generate a track line from the coordinate information of all the video frames. The track line marking module 307 is configured to mark the initial position point and the end position point of the track line, to mark the start marker point on the track line according to the start marker coordinate information, and to mark the end marker point on the track line according to the end marker coordinate information. The track line output module 309 outputs the track line for display in the GIS display window 302. The video progress bar generation module 305 generates a video progress bar from the time information of all the video frames. The video progress bar marking module 308 is configured to mark the initial cursor position and the end cursor position of the video progress bar, to mark the start marker cursor position on the video progress bar according to the start marker time information (the start marker cursor position corresponding to the start marker point), and to mark the end marker cursor position on the video progress bar according to the end marker time information (the end marker cursor position corresponding to the end marker point). The video progress bar output module 310 is further configured to output the video progress bar for display in the video playback window 303. The linkage module 306 is configured to associate each position point on the track line with a cursor position on the video progress bar according to the correspondence between coordinate information and time information. The track line output module 309 is configured to output position points on the track line in the GIS display window 302; the video progress bar output module 310 is configured to synchronously output the cursor position in the video playback window 303; and the video frame output module 311 is configured to synchronously output, in the video playback window 303, the video frame corresponding to the cursor position.
As shown in Fig. 5, which is a flowchart of a mobile camera obtaining a video file in a video processing method according to another embodiment, the steps by which the mobile camera obtains the video file include:
Step S20: the mobile camera receives an externally input first trigger signal, the first trigger signal being used to trigger the mobile camera to start shooting.
Step S21: the mobile camera acquires initial position coordinate information and initial time information.
Step S22: the mobile camera acquires, at each time point, real-time time information, real-time coordinate information and a real-time video frame, and integrates the real-time time information and real-time coordinate information into the real-time video frame.
Step S23: the mobile camera receives an externally input third trigger signal, the third trigger signal being used to set a start marker point.
Step S24: the mobile camera acquires first position coordinate information and first time information of the start marker point.
Step S25: the mobile camera receives an externally input fourth trigger signal, the fourth trigger signal being used to set an end marker point.
Step S26: the mobile camera acquires second position coordinate information and second time information of the end marker point.
Step S27: the mobile camera receives an externally input second trigger signal, the second trigger signal being used to trigger the mobile camera to stop shooting.
Step S28: the mobile camera acquires end position coordinate information and end time information, so as to obtain the video file.
As shown in Fig. 6, which is a flowchart of a video playback device playing a video file in a video processing method according to another embodiment, the steps by which the video playback device plays the video file include:
Step S30: the video playback device receives the video file sent by the mobile camera; the video file includes a plurality of consecutive video frames and, for each video frame, corresponding coordinate information and time information, the coordinate information corresponding uniquely to the time information; the coordinate information includes start marker coordinate information and end marker coordinate information, and the time information includes start marker time information and end marker time information.
Step S31: the video playback device generates a track line, comprising a plurality of position points, from the coordinate information of all the video frames, and marks the initial position point and the end position point of the track line.
Step S32: the video playback device generates a video progress bar, comprising a plurality of cursor positions, from the time information of all the video frames, and marks the initial cursor position and the end cursor position of the video progress bar.
Step S33: the video playback device associates each position point on the track line with a cursor position on the video progress bar according to the correspondence between coordinate information and time information.
Step S34: the video playback device marks the start marker point on the track line according to the start marker coordinate information, and marks the end marker point on the track line according to the end marker coordinate information.
Step S35: the video playback device marks the start marker cursor position on the video progress bar according to the start marker time information, the start marker cursor position corresponding to the start marker point, and marks the end marker cursor position on the video progress bar according to the end marker time information, the end marker cursor position corresponding to the end marker point.
Step S36: the video playback device outputs the position point on the track line in the GIS display window, and synchronously outputs, in the video playback window, the cursor position and the video frame corresponding to the cursor position.
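The linkage of step S33 rests on the unique pairing of time and coordinate information: a progress-bar cursor time can be mapped to the track position point shown in the GIS window, and back. The sketch below illustrates this with exact dict lookups, an assumption for clarity; a real player would interpolate along the timeline between sampled points.

```python
def link_track_and_progress(frames):
    """frames: list of (time, coord) pairs with strictly increasing times and
    a one-to-one time<->coordinate correspondence, as in the video file."""
    time_to_coord = {t: c for t, c in frames}
    coord_to_time = {c: t for t, c in frames}

    def cursor_to_point(t):
        # given a cursor time, return the position point on the track line
        return time_to_coord[t]

    def point_to_cursor(c):
        # given a position point, return the cursor time on the progress bar
        return coord_to_time[c]

    return cursor_to_point, point_to_cursor
```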
The mobile camera of the present invention acquires, at each time point, real-time time information, real-time coordinate information and a real-time video frame, and integrates the real-time time information and real-time coordinate information into the real-time video frame to obtain a video file, so that the shooting route of the video file, as well as the video frame content at each point along that route, can subsequently be determined from the time information and coordinate information in the video file, in combination with a GIS map.
As shown in Figs. 7 and 8, Fig. 7 is a schematic diagram of a GIS display window including a track line according to one embodiment, and Fig. 8 is a schematic diagram of a video playback window matching Fig. 7 and including a video progress bar. Fig. 7 shows the GIS map 1 and the track line 12, with the track line 12's initial position point A(Xa, Ya), end position point B(Xb, Yb) and real-time position point S(Xs, Ys). Fig. 8 shows the video playback window 2, the playback control area 22 and the video progress bar 21, with the progress bar 21's initial cursor position A(Ta), end cursor position B(Tb) and real-time cursor position S(Ts); the playback control area 22 includes fast-forward, play, pause, stop and rewind keys. The initial position point A(Xa, Ya) corresponds to the initial cursor position A(Ta), the end position point B(Xb, Yb) to the end cursor position B(Tb), and the real-time position point S(Xs, Ys) to the real-time cursor position S(Ts).
As shown in Figs. 9 and 10, Fig. 9 is a schematic diagram of a GIS display window including a track line according to another embodiment, and Fig. 10 is a schematic diagram of a video playback window matching Fig. 9 and including a video progress bar. Fig. 9 shows the GIS map 1 and the track line 12, with the track line 12's initial position point A(Xa, Ya), end position point B(Xb, Yb), start marker point M(Xm, Ym) and end marker point N(Xn, Yn). Fig. 10 shows the video playback window 2, the playback control area 22 and the video progress bar 21, with the progress bar 21's initial cursor position A(Ta), end cursor position B(Tb), start marker cursor position M(Tm) and end marker cursor position N(Tn). The playback control area 22 includes fast-forward, play, pause, stop and rewind keys. The initial position point A(Xa, Ya) corresponds to the initial cursor position A(Ta), the end position point B(Xb, Yb) to the end cursor position B(Tb), and the real-time position point S(Xs, Ys) to the real-time cursor position S(Ts); in addition, the start marker point M(Xm, Ym) corresponds to the start marker cursor position M(Tm), and the end marker point N(Xn, Yn) to the end marker cursor position N(Tn).
The specific embodiments of the invention have been described in detail above, but they serve only as examples, and the present invention is not limited to the specific embodiments described above. For those skilled in the art, any equivalent modification or substitution of the invention also falls within the scope of the present invention; accordingly, equivalent transformations, modifications and improvements made without departing from the spirit and principles of the present invention shall all be covered by the scope of the present invention.
Claims (10)
- A video shooting method based on a mobile camera, characterized in that it comprises the following steps: receiving an externally input first trigger signal, the first trigger signal being used to trigger the mobile camera to start shooting; acquiring initial position coordinate information and initial time information; acquiring, at each time point, real-time time information, real-time coordinate information and a real-time video frame, and integrating the real-time time information and real-time coordinate information into the real-time video frame; receiving an externally input second trigger signal, the second trigger signal being used to trigger the mobile camera to stop shooting; and acquiring end position coordinate information and end time information, so as to obtain a video file.
- The video shooting method based on a mobile camera according to claim 1, characterized in that the step of acquiring, at each time point, real-time time information, real-time coordinate information and a real-time video frame and integrating the real-time time information and real-time coordinate information into the real-time video frame includes: receiving an externally input third trigger signal, the third trigger signal being used to set a start marker point; acquiring first position coordinate information and first time information of the start marker point; acquiring, at each time point, real-time time information, real-time coordinate information and a real-time video frame, and integrating the real-time time information and real-time coordinate information into the real-time video frame; receiving an externally input fourth trigger signal, the fourth trigger signal being used to set an end marker point; and acquiring second position coordinate information and second time information of the end marker point.
- The video shooting method based on a mobile camera according to claim 2, characterized in that the mobile camera comprises a mobile terminal, the mobile terminal comprises a camera device, and the camera device comprises an acquisition module configured to acquire coordinate information and time information.
- A mobile camera, characterized in that it comprises a receiving module, an acquisition module and a processing module; the receiving module is configured to receive an externally input first trigger signal, the first trigger signal being used to trigger the mobile camera to start shooting; the acquisition module is configured to acquire initial position coordinate information and initial time information; the acquisition module is further configured to acquire, at each time point, real-time time information, real-time coordinate information and a real-time video frame; the processing module is configured to integrate the real-time time information and real-time coordinate information into the real-time video frame; the receiving module is configured to receive an externally input second trigger signal, the second trigger signal being used to trigger the mobile camera to stop shooting; and the acquisition module acquires end position coordinate information and end time information, so as to obtain a video file.
- The mobile camera according to claim 4, characterized in that the receiving module is configured to receive an externally input third trigger signal, the third trigger signal being used to set a start marker point; the acquisition module is configured to acquire first position coordinate information and first time information of the start marker point; the acquisition module is further configured to acquire, at each time point, real-time time information, real-time coordinate information and a real-time video frame; the processing module is further configured to integrate the real-time time information and real-time coordinate information into the real-time video frame; the receiving module is configured to receive an externally input fourth trigger signal, the fourth trigger signal being used to set an end marker point; and the acquisition module acquires second position coordinate information and second time information of the end marker point.
- The mobile camera according to claim 4, characterized in that the mobile camera comprises a mobile terminal, the mobile terminal comprises a camera device, and the camera device comprises an acquisition module configured to acquire coordinate information and time information.
- A video shooting method based on a mobile camera, characterized in that it comprises the following steps: receiving an externally input first trigger signal, the first trigger signal being used to trigger the mobile camera to start shooting; acquiring initial position coordinate information; acquiring, at each time point, real-time coordinate information and a real-time video frame, and integrating the real-time coordinate information into the real-time video frame; receiving an externally input second trigger signal, the second trigger signal being used to trigger the mobile camera to stop shooting; and acquiring end position coordinate information, so as to obtain a video file.
- The video shooting method based on a mobile camera according to claim 7, characterized in that the step of acquiring, at each time point, real-time coordinate information and a real-time video frame and integrating the real-time coordinate information into the real-time video frame includes: receiving an externally input third trigger signal, the third trigger signal being used to set a start marker point; acquiring first position coordinate information of the start marker point; acquiring, at each time point, real-time coordinate information and a real-time video frame, and integrating the real-time coordinate information into the real-time video frame; receiving an externally input fourth trigger signal, the fourth trigger signal being used to set an end marker point; and acquiring second position coordinate information of the end marker point.
- A mobile camera, characterized in that it comprises a receiving module, an acquisition module and a processing module; the receiving module is configured to receive an externally input first trigger signal, the first trigger signal being used to trigger the mobile camera to start shooting; the acquisition module is configured to acquire initial position coordinate information; the acquisition module is further configured to acquire, at each time point, real-time coordinate information and a real-time video frame; the processing module is further configured to integrate the real-time coordinate information into the real-time video frame; the receiving module is further configured to receive an externally input second trigger signal, the second trigger signal being used to trigger the mobile camera to stop shooting; and the acquisition module is further configured to acquire end position coordinate information, so as to obtain a video file.
- The mobile camera according to claim 9, characterized in that the receiving module is further configured to receive an externally input third trigger signal, the third trigger signal being used to set a start marker point; the acquisition module is further configured to acquire first position coordinate information of the start marker point; the acquisition module is further configured to acquire, at each time point, real-time coordinate information and a real-time video frame; the processing module is further configured to integrate the real-time coordinate information into the real-time video frame; the receiving module is further configured to receive an externally input fourth trigger signal, the fourth trigger signal being used to set an end marker point; and the acquisition module is further configured to acquire second position coordinate information of the end marker point.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201510552635.7 | 2015-09-01 | | |
| CN201510552635.7A CN105120168A (zh) | 2015-09-01 | 2015-09-01 | Video shooting method based on mobile camera, and mobile camera |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017035960A1 (zh) | 2017-03-09 |
Family
ID=54668044
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2015/095347 Ceased WO2017035960A1 (zh) | 2015-11-23 | Video shooting method based on mobile camera, and mobile camera |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN105120168A (zh) |
| WO (1) | WO2017035960A1 (zh) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105677159B (zh) * | 2016-01-14 | 2019-01-18 | 深圳市至壹科技开发有限公司 | Video display method and video display device |
| CN118400483A (zh) * | 2024-04-24 | 2024-07-26 | 维沃移动通信有限公司 | Video recording method and apparatus, electronic device, and storage medium |
| CN119628891A (zh) * | 2024-11-25 | 2025-03-14 | 武汉中科通达高新技术股份有限公司 | GB35114-based mobile video coordinate encoding method and system |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040027365A1 (en) * | 2002-08-07 | 2004-02-12 | Sayers Craig P. | Controlling playback of a temporal stream with a user interface device |
| CN101640775A (zh) * | 2009-08-24 | 2010-02-03 | 深圳华为通信技术有限公司 | Video recording method, photo taking method and mobile terminal |
| CN102244740A (zh) * | 2011-06-28 | 2011-11-16 | 青岛海信移动通信技术股份有限公司 | Method and device for adding video information |
| CN103780928A (zh) * | 2012-10-26 | 2014-05-07 | 中国电信股份有限公司 | Method and system for adding position information to video information, and video management server |
| CN105163080A (zh) * | 2015-09-01 | 2015-12-16 | 上海由零网络科技有限公司 | Video playing method and video playing device |
| CN105227830A (zh) * | 2015-09-01 | 2016-01-06 | 上海由零网络科技有限公司 | Video processing method and video processing system |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103167395B (zh) * | 2011-12-08 | 2015-08-12 | 腾讯科技(深圳)有限公司 | Photo positioning method and system based on the navigation function of a mobile terminal |
| CN103870599A (zh) * | 2014-04-02 | 2014-06-18 | 联想(北京)有限公司 | Method and device for collecting shooting data, and electronic device |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109961457A (zh) * | 2017-12-25 | 2019-07-02 | 北京顺智信科技有限公司 | Cash register loss prevention method and system |
| CN109961457B (zh) * | 2017-12-25 | 2023-06-23 | 百融至信(北京)科技有限公司 | Cash register loss prevention method and system |
| CN113190040A (zh) * | 2021-04-29 | 2021-07-30 | 集展通航(北京)科技有限公司 | Method and system for line inspection based on UAV video and railway BIM |
| CN113190040B (zh) * | 2021-04-29 | 2021-10-08 | 集展通航(北京)科技有限公司 | Method and system for line inspection based on UAV video and railway BIM |
Also Published As
| Publication number | Publication date |
|---|---|
| CN105120168A (zh) | 2015-12-02 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15902767; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 15902767; Country of ref document: EP; Kind code of ref document: A1 |