CN111601160A - Method and device for editing video - Google Patents
- Publication number
- CN111601160A (application number CN202010476849.1A)
- Authority
- CN
- China
- Prior art keywords
- video
- scene
- feature
- processed
- segment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/47—Detecting features for summarising video content
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
Abstract
Description
Technical Field
Embodiments of the present disclosure relate to the field of artificial intelligence and computer vision, and in particular to a method and apparatus for editing video.
Background
With the development of technology, various smart devices are used in people's work, study, and daily life, improving the efficiency of work and study and the convenience of daily life. Smart devices usually have a video capture function: people can record videos related to work and life and upload them to a video server to share with other users. Moreover, video is rich in information and spreads easily, which further drives its popularity.

After a video is uploaded to the video server, technicians need to review its content and then classify, edit, and otherwise process it.
Summary of the Invention
Embodiments of the present disclosure propose a method and apparatus for editing video.
In a first aspect, embodiments of the present disclosure provide a method for editing video, the method including: recognizing video frames of a video to be processed to obtain scene segments of the video; determining content information of the video according to the scene segments, and extracting a cover image according to the content information; extracting feature segments of a set time length from the scene segments; and combining the cover image and the feature segments to form a feature video.
In some embodiments, recognizing the video frames of the video to be processed and obtaining the scene segments of the video includes: performing image recognition on the video frames to obtain scene information corresponding to the video, where the scene information characterizes the scene in which the video content takes place; and dividing the video into scene segments according to the scene information.

In some embodiments, determining the content information of the video to be processed according to the scene segments includes: recognizing initial object images in the scene segments; counting the number of appearances and the appearance time of each initial object image in the video, and determining target object information according to those counts and times; and querying attribute information of the target object information and determining the content information of the video according to the attribute information, where the attribute information includes at least one of: the name of the target object and the purpose of the target object.

In some embodiments, extracting the cover image according to the content information includes: obtaining feature images in the scene segments, where a feature image characterizes its scene segment; and matching the feature images against the content information to determine the cover image.

In some embodiments, combining the cover image and the feature segments to form a feature video includes: in response to there being multiple feature segments, deleting duplicate feature segments.

In some embodiments, combining the cover image and the feature segments to form a feature video includes: performing transition processing on adjacent feature segments in the feature video, where the transition processing includes at least one of: color transition, scene transition, and light transition.

In some embodiments, the method further includes: adding a title to the feature video according to the content information.
In a second aspect, embodiments of the present disclosure provide an apparatus for editing video, the apparatus including: a scene segment acquisition unit configured to recognize video frames of a video to be processed and obtain scene segments of the video; a cover image acquisition unit configured to determine content information of the video according to the scene segments and extract a cover image according to the content information; a feature segment extraction unit configured to extract feature segments of a set time length from the scene segments; and a feature video acquisition unit configured to combine the cover image and the feature segments to form a feature video.

In some embodiments, the scene segment acquisition unit includes: a scene information acquisition subunit configured to perform image recognition on the video frames of the video to obtain scene information corresponding to the video, where the scene information characterizes the scene in which the video content takes place; and a scene segment division subunit configured to divide the video into scene segments according to the scene information.

In some embodiments, the cover image acquisition unit includes: an initial object image recognition subunit configured to recognize initial object images in the scene segments; a target object information determination subunit configured to count the number of appearances and the appearance time of each initial object image in the video and determine target object information according to those counts and times; and a content information determination subunit configured to query attribute information of the target object information and determine the content information of the video according to the attribute information, where the attribute information includes at least one of: the name of the target object and the purpose of the target object.

In some embodiments, the cover image acquisition unit includes: a feature image acquisition subunit configured to obtain feature images in the scene segments, where a feature image characterizes its scene segment; and a cover image determination subunit configured to match the feature images against the content information to determine the cover image.

In some embodiments, the feature video acquisition unit includes: a video deletion subunit configured to, in response to there being multiple feature segments, delete duplicate feature segments.

In some embodiments, the feature video acquisition unit includes: a transition processing subunit configured to perform transition processing on adjacent feature segments in the feature video, where the transition processing includes at least one of: color transition, scene transition, and light transition.

In some embodiments, the apparatus further includes: a title adding unit configured to add a title to the feature video according to the content information.
In a third aspect, embodiments of the present disclosure provide an electronic device, including: one or more processors; and a memory storing one or more programs which, when executed by the one or more processors, cause the one or more processors to perform the method for editing video of the first aspect.

In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium storing a computer program which, when executed by a processor, implements the method for editing video of the first aspect.

According to the method and apparatus for editing video provided by embodiments of the present disclosure, video frames of a video to be processed are first recognized to obtain scene segments of the video; content information of the video is then determined according to the scene segments, and a cover image is extracted according to the content information, which improves the accuracy with which users select videos; feature segments of a set time length are then extracted from the scene segments, which helps improve the watchability of the feature video; finally, the cover image and the feature segments are combined to form a feature video. The present application improves the accuracy and effectiveness of obtaining feature videos.

It should be understood that the content described in this section is not intended to identify key or critical features of the embodiments of the present disclosure, nor to limit the scope of the present disclosure. Other features of the present disclosure will become easy to understand from the following description.
Brief Description of the Drawings
The accompanying drawings are provided for a better understanding of the solution and do not limit the present application. In the drawings:

FIG. 1 is a schematic diagram according to a first embodiment of the present application;

FIG. 2 is a schematic diagram according to a second embodiment of the present application;

FIG. 3 is a schematic diagram according to a third embodiment of the present application;

FIG. 4 is a schematic diagram according to a fourth embodiment of the present application;

FIG. 5 is a block diagram of an electronic device used to implement the method for editing video according to an embodiment of the present application;

FIG. 6 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present application are described below with reference to the accompanying drawings. Various details of the embodiments are included to facilitate understanding and should be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and structures are omitted below for clarity and conciseness.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the method or apparatus for editing video of the present disclosure may be applied.

As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 serves as the medium providing communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired links, wireless communication links, or fiber optic cables.

A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like. Various video client applications may be installed on the terminal devices 101, 102, 103, such as video browser applications, video playback plug-ins, video search applications, video download tools, and video playback clients.

The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices with a display screen that support web browsing, including but not limited to smartphones, tablet computers, laptop portable computers, and desktop computers. When they are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module, which is not specifically limited here.

The server 105 may be a server providing various services, for example, a server that processes the videos to be processed sent by the terminal devices 101, 102, 103. The server may analyze and otherwise process received data such as the videos to be processed, and obtain feature videos.

It should be noted that the method for editing video provided by embodiments of the present disclosure is generally performed by the server 105; accordingly, the apparatus for editing video is generally provided in the server 105.

It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module, which is not specifically limited here.

It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
With continued reference to FIG. 2, a flow 200 of one embodiment of the method for editing video according to the present disclosure is shown. The method for editing video includes the following steps:

Step 201: Recognize video frames of a video to be processed and obtain scene segments of the video.

In this embodiment, the execution body of the method for editing video (for example, the server 105 shown in FIG. 1) may receive the video to be processed from the terminal devices 101, 102, 103 through a wired or wireless connection. It should be noted that the wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (Ultra Wideband), and other wireless connections now known or developed in the future.

Existing video classification and editing are usually done manually and cannot cope with massive numbers of videos. Video reviewers differ in their review standards and editing abilities, which makes classification and editing inconsistent. Manual review and editing usually require watching the entire video, which takes a great deal of time; processing efficiency is low, the key content of a video is hard to capture, and the video content is not prominent enough.

To this end, the execution body may recognize the video frames of the video to be processed to obtain the video frame content, which may include person images, animal images, background images, and the like. The execution body may then determine scene segments according to the video frame content. For example, if the video frame content includes a certain basketball player and the background image is a basketball court, the execution body can determine that the video frame corresponds to a basketball-playing scene; it then divides the video according to the video frame content to obtain the basketball-playing scene segment. In this way, the scene segments of the video to be processed can be obtained, which helps the accuracy and effectiveness of obtaining the feature video.
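The segmentation just described can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: it assumes a per-frame scene classifier has already produced one label per frame, and all function and variable names are assumptions introduced here.

```python
from itertools import groupby

def split_into_scene_segments(frame_labels):
    """Group consecutive frames sharing a scene label into segments.

    frame_labels: per-frame scene labels, e.g. from an image classifier
    run on each video frame (the classifier itself is out of scope).
    Returns a list of (label, start_index, end_index_inclusive) tuples.
    """
    segments = []
    i = 0
    for label, run in groupby(frame_labels):
        n = len(list(run))
        segments.append((label, i, i + n - 1))
        i += n
    return segments

labels = ["basketball"] * 4 + ["crowd"] * 2 + ["basketball"] * 3
print(split_into_scene_segments(labels))
# → [('basketball', 0, 3), ('crowd', 4, 5), ('basketball', 6, 8)]
```

Note that a recurring scene (here, "basketball") yields multiple segments; deduplication of the resulting feature segments is handled later, in step 305.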
Step 202: Determine content information of the video to be processed according to the scene segments, and extract a cover image according to the content information.

After obtaining the scene segments, the execution body may determine the content information of the video according to the scene segments, and then, according to the content information, select from the video a video frame that can represent its content as the cover image. For example, when the scene segment is basketball playing, the corresponding content information may be "basketball video", and the execution body may select a video frame of a basketball or a basketball player from the scene segment as the cover image. The cover image visually indicates the content of the video, which improves the accuracy with which users select videos.

Step 203: Extract feature segments of a set time length from the scene segments.

To obtain a feature video, the execution body may extract from each scene segment a feature segment of a set time length. A feature segment represents the representative content of its scene segment. For example, if the scene segment is basketball playing, the corresponding feature segment may be footage of a famous basketball player scoring. The specific content of the feature segments may differ across scene segments. This helps improve the watchability of the feature video.

Step 204: Combine the cover image and the feature segments to form a feature video.

After obtaining the cover image and the feature segments, the execution body may use the cover image as the cover of the feature segments to form a feature video. As can be seen from the above description, the feature video contains the feature segments of each scene segment and can highlight the content of the video to be processed. In this way, a feature video containing the main content of the video to be processed is obtained from that video, the content of the video is highlighted, the efficiency of obtaining feature videos is improved, and the effectiveness of the feature video is improved.
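Steps 203 and 204 can be illustrated with a small sketch. This is an assumption-laden simplification: a "video" is modeled as frame-index ranges, the feature segment is taken as the leading window of each scene segment (a stand-in for the "set time length" rule; any selection heuristic could replace it), and all names are invented here for illustration.

```python
def extract_feature_segment(segment, clip_len):
    """Take the first clip_len frames of a scene segment as its feature
    segment (a stand-in for the set-time-length extraction of step 203)."""
    label, start, end = segment
    return (label, start, min(end, start + clip_len - 1))

def build_feature_video(cover_frame_index, scene_segments, clip_len=3):
    """Step 204 analogue: pair the chosen cover frame with the ordered
    feature segments. A real implementation would concatenate decoded
    clips; here the result is just a description of the feature video."""
    clips = [extract_feature_segment(s, clip_len) for s in scene_segments]
    return {"cover": cover_frame_index, "clips": clips}

video = build_feature_video(5, [("basketball", 0, 9), ("crowd", 10, 12)])
print(video)
# → {'cover': 5, 'clips': [('basketball', 0, 2), ('crowd', 10, 12)]}
```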
With continued reference to FIG. 3, a flow 300 of one embodiment of the method for editing video according to the present disclosure is shown. The method for editing video includes the following steps:

Step 301: Perform image recognition on the video frames of the video to be processed to obtain scene information corresponding to the video.

After acquiring the video to be processed, the execution body may perform image recognition on its video frames to obtain the video frame content, which may include person images, animal images, background images, and the like. The video frame content is then analyzed to determine scene information, which characterizes the scene in which the video content takes place. For example, if the video frame content includes a certain basketball player and the background image is a basketball court, the execution body can determine that the video frame corresponds to a basketball-playing scene, and the corresponding scene information may be "basketball".

Step 302: Divide the video to be processed into scene segments according to the scene information.

After the scene information is determined, the execution body may divide the content of the video according to the scene information to obtain scene segments. In practice, the video to be processed may contain one scene segment or multiple scene segments. In this way, the video is divided by scene segment, which improves the accuracy of recognizing the content of the video to be processed.
Step 303: Determine content information of the video to be processed according to the scene segments, and extract a cover image according to the content information.

In some optional implementations of this embodiment, determining the content information of the video according to the scene segments may include the following steps:

First, recognize the initial object images in the scene segments.

The execution body may perform image recognition on the video frames contained in a scene segment and determine at least one initial object image contained in each video frame. Taking the basketball-playing scene segment above as an example, the initial object images recognized in the video frames of the scene segment may include: basketball player images, basketball images, basketball court images, basketball hoop images, team logo images, basketball arena images, audience images, and so on. The initial object images may differ for different scene segments.

Second, count the number of appearances and the appearance time of each initial object image in the video to be processed, and determine target object information according to the number of appearances and the appearance time.

Usually, the main content of a video appears most often and for the longest time in the video. For this reason, the execution body may count the number of appearances and the appearance time of each initial object image in the video, and then determine target object information accordingly. For example, if the scene segment is basketball playing and, among the initial object images, a certain player appears the most often and for the longest time, the execution body may determine the target object information to be the name of that player. In practice, there may also be a case where one initial object image appears the most times but not for the longest time; the execution body may still mark that initial object image as a target object, and also mark the initial object image with the longest appearance time as a target object. That is, there may be multiple target objects, and correspondingly multiple pieces of target object information.
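The counting rule above (most appearances, plus longest total appearance time, possibly two different objects) can be sketched as follows. This is an illustrative sketch only; the detection list format and every name below are assumptions, not part of the disclosure.

```python
from collections import defaultdict

def pick_target_objects(detections):
    """detections: list of (object_name, duration_seconds), one entry per
    appearance of an object in the video. Marks as targets both the object
    that appears most often and the object seen for the longest total time
    (these may be the same object or two different ones).
    """
    count = defaultdict(int)
    total_time = defaultdict(float)
    for name, duration in detections:
        count[name] += 1
        total_time[name] += duration
    most_frequent = max(count, key=count.get)
    longest_seen = max(total_time, key=total_time.get)
    return {most_frequent, longest_seen}

dets = [("player_A", 5), ("player_A", 4), ("player_A", 1), ("ball", 20)]
print(sorted(pick_target_objects(dets)))
# → ['ball', 'player_A']  (player_A appears most often; ball is seen longest)
```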
Third, query the attribute information of the target object information, and determine the content information of the video to be processed according to the attribute information.

Target object information usually describes the target object itself, while in most cases the video is about the attribute information of the target object. The attribute information may include at least one of: the name of the target object, the purpose of the target object, and the occupation of the target object, and may differ for different target objects. For example, when the target object is a person, the attribute information may include the target object's name and occupation but not a purpose; when the target object is a device (for example, a mobile phone), the attribute information may include the target object's name and purpose but not an occupation. The execution body may therefore determine the content information of the video according to the attribute information. For example, if the target object information is the name of a certain basketball player, the attribute information queried for that name may include the target object's occupation: basketball. In that case, the execution body may set the content information of the video to "basketball video".
In some optional implementations of this embodiment, extracting the cover image according to the content information may include the following steps:

First, obtain the feature images in the scene segments.

For example, when the scene segment is basketball playing, the feature images obtainable by the execution body may be images of a player shooting, images of the basketball court, images of the players, and so on. When the scene is football playing, the feature images may be images of a player shooting at goal, images of a player dribbling, images of the football pitch, and so on. That is, a feature image characterizes its scene segment.

Second, match the feature images against the content information to determine the cover image.

After obtaining the feature images, the execution body may match them against the content information to determine the cover image. For example, suppose the content information is "basketball video" and the feature images include images of a player shooting, images of the basketball court, and images of the players. Since an image of a player shooting contains the basketball player, the basketball, and the action of the game, it can be considered more relevant to a basketball video, and that image can be determined as the cover image.
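One simple way to realize this matching is tag overlap between each candidate frame and the content keywords, as in the sketch below. The disclosure does not specify a matching algorithm, so this overlap score, and every name here, should be read as an assumption for illustration.

```python
def pick_cover_image(candidates, content_keywords):
    """candidates: {frame_id: set of tags recognized in that frame}.
    Picks the frame whose tags overlap the content keywords the most —
    a simple stand-in for the matching step described above.
    """
    def score(frame_id):
        return len(candidates[frame_id] & content_keywords)
    return max(candidates, key=score)

frames = {
    "shot":   {"player", "basketball", "hoop"},
    "court":  {"court"},
    "player": {"player"},
}
print(pick_cover_image(frames, {"basketball", "player"}))
# → shot  (overlaps 2 keywords, versus 0 and 1 for the others)
```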
Step 304: Extract feature segments of a set time length from the scene segments.

Step 304 is the same as step 203 and is not repeated here.

Step 305: In response to there being multiple feature segments, delete duplicate feature segments.

Multiple different scenes may appear in the video frames of the video to be processed, and these scenes may recur throughout the video, while one feature segment is enough to represent a given scene. Therefore, to avoid duplicate feature segments and improve the effectiveness of the feature video, the execution body may delete duplicate feature segments, keeping only one of each kind. Meanwhile, to prevent the file from becoming too large and consuming too many resources, the execution body may also adjust the duration of each feature segment so that the playback time of the resulting feature video falls within a set time range and the video file size stays within a set limit.
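The deduplication rule of step 305 ("keep only one of each kind") can be sketched as follows, treating "kind" as the scene label of each segment. This equality-by-label criterion is an assumption; the disclosure leaves the duplicate-detection criterion open.

```python
def deduplicate_segments(segments):
    """Keep only the first feature segment per scene label; duplicates of
    a recurring scene add nothing to the highlight reel."""
    seen, result = set(), []
    for label, start, end in segments:
        if label not in seen:
            seen.add(label)
            result.append((label, start, end))
    return result

segs = [("dunk", 0, 2), ("crowd", 3, 4), ("dunk", 9, 11)]
print(deduplicate_segments(segs))
# → [('dunk', 0, 2), ('crowd', 3, 4)]
```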
Step 306: Perform transition processing on adjacent feature segments in the feature video.
Different feature segments correspond to different scenes and therefore differ in color, brightness, and other characteristics. To improve the playback effect, the execution body may apply transition processing to adjacent feature segments so that the feature video presents the same or similar visual characteristics during playback. The transition processing may include at least one of the following: a color transition, a scene transition, and a light transition.
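One simple form of the transition processing in step 306 is a linear crossfade. The sketch below reduces a frame to a single quantity (e.g. brightness); a real color or light transition would interpolate whole frames, so this is only an illustrative assumption.

```python
# Hypothetical sketch: linear interpolation between the end of one
# feature segment and the start of the next (per-frame brightness).

def crossfade(end_value, start_value, steps):
    """Generate `steps` intermediate values moving linearly from the
    last value of one segment toward the first value of the next."""
    return [end_value + (start_value - end_value) * i / (steps + 1)
            for i in range(1, steps + 1)]

print(crossfade(0.0, 10.0, 4))  # → [2.0, 4.0, 6.0, 8.0]
```

Applying the same interpolation per color channel would give a color transition; applying it to exposure would give a light transition.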
Continuing to refer to FIG. 4, a flow 400 of an embodiment of the method for editing a video according to the present disclosure is shown. The method for editing a video includes the following steps:
Step 401: Identify the video frames of the video to be processed, and acquire the scene segments of the video to be processed.
The content of step 401 is the same as that of step 201 and is not repeated here.
Step 402: Determine the content information of the video to be processed according to the scene segments, and extract a cover image according to the content information.
The content of step 402 is the same as that of step 202 and is not repeated here.
Step 403: Extract a feature segment of a set time length from the scene segment.
The content of step 403 is the same as that of step 203 and is not repeated here.
Step 404: Combine the cover image and the feature segments into a feature video.
The content of step 404 is the same as that of step 204 and is not repeated here.
Step 405: Add a title to the feature video according to the content information.
After the feature video is obtained, in order to further improve its readability, the execution body may also add a title to the feature video according to the content information, so that users can understand the content of the feature video from the text. For example, if the content information indicates a basketball video and the target object corresponding to the content information is a certain athlete A, the title may be "Athlete A's basketball highlights" or the like.
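The title generation in step 405 can be sketched as template filling: a template is chosen from the content type and filled with the target object's name. The template strings and the fallback are assumptions for illustration.

```python
# Hypothetical sketch: build a feature-video title from content
# information (content type) and the target object name.

def make_title(content_type, target_object):
    """Pick a title template by content type and fill in the target
    object; fall back to a generic template for unknown types."""
    templates = {
        "basketball": "{}'s basketball highlights",
        "football": "{}'s football highlights",
    }
    template = templates.get(content_type, "{}'s highlights")
    return template.format(target_object)

print(make_title("basketball", "Athlete A"))  # → Athlete A's basketball highlights
```

This matches the example in the paragraph above, where a basketball video featuring athlete A yields the title "Athlete A's basketball highlights".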
Referring further to FIG. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for editing a video. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be applied to various electronic devices.
As shown in FIG. 5, the apparatus 500 for editing a video of this embodiment may include: a scene segment acquisition unit 501, a cover image acquisition unit 502, a feature segment extraction unit 503, and a feature video acquisition unit 504. The scene segment acquisition unit 501 is configured to identify the video frames of the video to be processed and acquire the scene segments of the video to be processed; the cover image acquisition unit 502 is configured to determine the content information of the video to be processed according to the scene segments and extract a cover image according to the content information; the feature segment extraction unit 503 is configured to extract feature segments of a set time length from the scene segments; the feature video acquisition unit 504 is configured to combine the cover image and the feature segments into a feature video.
In some optional implementations of this embodiment, the scene segment acquisition unit 501 may include a scene information acquisition subunit (not shown) and a scene segment division subunit (not shown). The scene information acquisition subunit is configured to perform image recognition on the video frames of the video to be processed to obtain scene information corresponding to the video, the scene information characterizing the scene in which the video content takes place; the scene segment division subunit is configured to divide the video to be processed into scene segments according to the scene information.
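The scene segment division described above can be sketched as grouping consecutive frames that share a recognized scene label into one segment. The frame-level labels are assumed to come from a prior image-recognition step; the run-length grouping is an illustrative assumption.

```python
# Hypothetical sketch: divide a video into scene segments by grouping
# runs of consecutive frames with the same recognized scene label.

def split_into_scene_segments(frame_scenes):
    """Return (scene, first_frame, last_frame) tuples for each run of
    consecutive frames sharing the same scene label."""
    segments = []
    for i, scene in enumerate(frame_scenes):
        if segments and segments[-1][0] == scene:
            segments[-1] = (scene, segments[-1][1], i)  # extend current run
        else:
            segments.append((scene, i, i))              # start a new run
    return segments

print(split_into_scene_segments(["court", "court", "dunk", "dunk", "court"]))
# → [('court', 0, 1), ('dunk', 2, 3), ('court', 4, 4)]
```

Note that the same scene label ("court") can yield two separate segments when it reappears later, which is exactly the repetition that step 305 later deduplicates.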
In some optional implementations of this embodiment, the cover image acquisition unit 502 may include an initial object image recognition subunit (not shown), a target object information determination subunit (not shown), and a content information determination subunit (not shown). The initial object image recognition subunit is configured to recognize initial object images in the scene segments; the target object information determination subunit is configured to count the number of occurrences and the on-screen time of the initial object images in the video to be processed and to determine the target object information according to the number of occurrences and the on-screen time; the content information determination subunit is configured to query the attribute information of the target object information and determine the content information of the video to be processed according to the attribute information, where the attribute information includes at least one of the following: the name of the target object and the purpose of the target object.
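The target object determination described above can be sketched as scoring each recognized object by its occurrence count and on-screen time and picking the top scorer. The patent only states that both quantities are used; the multiplicative weighting below is an assumption.

```python
# Hypothetical sketch: choose the target object from per-object
# occurrence counts and total on-screen time.

def pick_target_object(detections):
    """detections maps object name -> (occurrence_count, on_screen_seconds).
    The object with the highest combined score is the target object."""
    return max(detections, key=lambda name: detections[name][0] * detections[name][1])

stats = {
    "athlete_A": (40, 120.0),
    "referee":   (5,  10.0),
    "ball":      (35, 90.0),
}
print(pick_target_object(stats))  # → athlete_A
```

The target object's attributes (name, purpose) would then be looked up to derive the content information, as the subunit description above specifies.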
In some optional implementations of this embodiment, the cover image acquisition unit 502 may include a feature image acquisition subunit (not shown) and a cover image determination subunit (not shown). The feature image acquisition subunit is configured to acquire feature images from the scene segments, the feature images characterizing the features of the respective scene segments; the cover image determination subunit is configured to match the feature images against the content information to determine the cover image.
In some optional implementations of this embodiment, the feature video acquisition unit 504 may include a video deletion subunit (not shown) configured to delete duplicate feature segments in response to there being multiple feature segments.
In some optional implementations of this embodiment, the feature video acquisition unit 504 may include a transition processing subunit (not shown) configured to perform transition processing on adjacent feature segments in the feature video, the transition processing including at least one of the following: a color transition, a scene transition, and a light transition.
In some optional implementations of this embodiment, the apparatus 500 for editing a video may further include a title addition unit (not shown) configured to add a title to the feature video according to the content information.
According to the embodiments of the present application, the present application further provides an electronic device and a readable storage medium.
FIG. 6 is a block diagram of an electronic device for the method for editing a video according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The components shown here, their connections and relationships, and their functions are meant as examples only and are not intended to limit the implementations of the application described and/or claimed herein.
As shown in FIG. 6, the electronic device includes one or more processors 601, a memory 602, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common motherboard or in other ways as required. The processor can process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, with each device providing a portion of the necessary operations (for example, as a server array, a set of blade servers, or a multiprocessor system). In FIG. 6, one processor 601 is taken as an example.
The memory 602 is the non-transitory computer-readable storage medium provided by the present application. The memory stores instructions executable by at least one processor, so that the at least one processor performs the method for editing a video provided by the present application. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the method for editing a video provided by the present application.
As a non-transitory computer-readable storage medium, the memory 602 can be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for editing a video in the embodiments of the present application (for example, the scene segment acquisition unit 501, the cover image acquisition unit 502, the feature segment extraction unit 503, and the feature video acquisition unit 504 shown in FIG. 5). By running the non-transitory software programs, instructions, and modules stored in the memory 602, the processor 601 performs the various functional applications and data processing of the server, that is, implements the method for editing a video in the above method embodiments.
The memory 602 may include a program storage area and a data storage area. The program storage area may store an operating system and application programs required by at least one function; the data storage area may store data created according to the use of the electronic device for editing a video, and the like. In addition, the memory 602 may include a high-speed random-access memory and may also include a non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 602 may optionally include memories remotely located with respect to the processor 601, and these remote memories may be connected over a network to the electronic device for editing a video. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for the method for editing a video may further include an input device 603 and an output device 604. The processor 601, the memory 602, the input device 603, and the output device 604 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 6.
The input device 603 can receive input numeric or character information and generate key signal inputs related to the user settings and function control of the electronic device for editing a video; examples of such input devices include a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 604 may include a display device, an auxiliary lighting device (for example, an LED), a haptic feedback device (for example, a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose and which can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor and may be implemented using high-level procedural and/or object-oriented programming languages and/or assembly/machine languages. As used here, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (for example, magnetic disks, optical disks, memories, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including machine-readable media that receive machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (for example, a CRT (cathode-ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (for example, a mouse or a trackball) through which the user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form, including acoustic input, voice input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (for example, as a data server), or a computing system that includes a middleware component (for example, an application server), or a computing system that includes a front-end component (for example, a user computer having a graphical user interface or a web browser through which the user can interact with an implementation of the systems and techniques described here), or a computing system that includes any combination of such back-end, middleware, and front-end components. The components of the system can be interconnected by any form or medium of digital data communication (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
A computer system can include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The client-server relationship arises by virtue of computer programs that run on the respective computers and have a client-server relationship with each other.
According to the technical solutions of the embodiments of the present application, the video frames of the video to be processed are first identified to obtain the scene segments of the video; the content information of the video is then determined according to the scene segments, and a cover image is extracted according to the content information, which improves the accuracy with which users select videos; feature segments of a set time length are then extracted from the scene segments, which helps improve the viewing quality of the feature video; finally, the cover image and the feature segments are combined into a feature video. The present application thus improves the accuracy and effectiveness of obtaining the feature video.
It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present application may be performed in parallel, sequentially, or in different orders; as long as the desired results of the technical solutions disclosed in the present application can be achieved, no limitation is imposed here.
The specific embodiments above do not limit the protection scope of the present application. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.
Claims (16)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010476849.1A CN111601160A (en) | 2020-05-29 | 2020-05-29 | Method and device for editing video |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010476849.1A CN111601160A (en) | 2020-05-29 | 2020-05-29 | Method and device for editing video |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN111601160A true CN111601160A (en) | 2020-08-28 |
Family
ID=72191453
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010476849.1A Pending CN111601160A (en) | 2020-05-29 | 2020-05-29 | Method and device for editing video |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111601160A (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113301430A (en) * | 2021-07-27 | 2021-08-24 | 腾讯科技(深圳)有限公司 | Video clipping method, video clipping device, electronic equipment and storage medium |
| CN114205671A (en) * | 2022-01-17 | 2022-03-18 | 百度在线网络技术(北京)有限公司 | Video content editing method and device based on scene alignment |
| CN115407912A (en) * | 2021-05-26 | 2022-11-29 | 奥多比公司 | Interact with semantic video clips through interactive tiles |
| US12468760B2 (en) | 2021-10-28 | 2025-11-11 | Adobe Inc. | Customizable framework to extract moments of interest |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090153648A1 (en) * | 2007-12-13 | 2009-06-18 | Apple Inc. | Three-dimensional movie browser or editor |
| CN104284241A (en) * | 2014-09-22 | 2015-01-14 | 北京奇艺世纪科技有限公司 | Video editing method and device |
| CN106803987A (en) * | 2015-11-26 | 2017-06-06 | 腾讯科技(深圳)有限公司 | The acquisition methods of video data, device and system |
| CN108259990A (en) * | 2018-01-26 | 2018-07-06 | 腾讯科技(深圳)有限公司 | A kind of method and device of video clipping |
| CN108419145A (en) * | 2018-05-04 | 2018-08-17 | 腾讯科技(深圳)有限公司 | The generation method and device and computer readable storage medium of a kind of video frequency abstract |
| CN110263213A (en) * | 2019-05-22 | 2019-09-20 | 腾讯科技(深圳)有限公司 | Video pushing method, device, computer equipment and storage medium |
| CN110399848A (en) * | 2019-07-30 | 2019-11-01 | 北京字节跳动网络技术有限公司 | Video cover generation method, device and electronic equipment |
| CN110602554A (en) * | 2019-08-16 | 2019-12-20 | 华为技术有限公司 | Cover image determining method, device and equipment |
| CN110798735A (en) * | 2019-08-28 | 2020-02-14 | 腾讯科技(深圳)有限公司 | Video processing method and device and electronic equipment |
| CN111107392A (en) * | 2019-12-31 | 2020-05-05 | 北京百度网讯科技有限公司 | Video processing method, apparatus and electronic device |
| CN111143613A (en) * | 2019-12-30 | 2020-05-12 | 携程计算机技术(上海)有限公司 | Method, system, electronic device and storage medium for selecting video cover |
- 2020-05-29: Application CN202010476849.1A filed; published as CN111601160A, status Pending
Patent Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090153648A1 (en) * | 2007-12-13 | 2009-06-18 | Apple Inc. | Three-dimensional movie browser or editor |
| CN104284241A (en) * | 2014-09-22 | 2015-01-14 | 北京奇艺世纪科技有限公司 | Video editing method and device |
| CN106803987A (en) * | 2015-11-26 | 2017-06-06 | 腾讯科技(深圳)有限公司 | The acquisition methods of video data, device and system |
| CN108259990A (en) * | 2018-01-26 | 2018-07-06 | 腾讯科技(深圳)有限公司 | A kind of method and device of video clipping |
| CN108419145A (en) * | 2018-05-04 | 2018-08-17 | 腾讯科技(深圳)有限公司 | The generation method and device and computer readable storage medium of a kind of video frequency abstract |
| CN110263213A (en) * | 2019-05-22 | 2019-09-20 | 腾讯科技(深圳)有限公司 | Video pushing method, device, computer equipment and storage medium |
| CN110399848A (en) * | 2019-07-30 | 2019-11-01 | 北京字节跳动网络技术有限公司 | Video cover generation method, device and electronic equipment |
| CN110602554A (en) * | 2019-08-16 | 2019-12-20 | 华为技术有限公司 | Cover image determining method, device and equipment |
| CN110798735A (en) * | 2019-08-28 | 2020-02-14 | 腾讯科技(深圳)有限公司 | Video processing method and device and electronic equipment |
| CN111143613A (en) * | 2019-12-30 | 2020-05-12 | 携程计算机技术(上海)有限公司 | Method, system, electronic device and storage medium for selecting video cover |
| CN111107392A (en) * | 2019-12-31 | 2020-05-05 | 北京百度网讯科技有限公司 | Video processing method, apparatus and electronic device |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115407912A (en) * | 2021-05-26 | 2022-11-29 | 奥多比公司 | Interact with semantic video clips through interactive tiles |
| CN113301430A (en) * | 2021-07-27 | 2021-08-24 | 腾讯科技(深圳)有限公司 | Video clipping method, video clipping device, electronic equipment and storage medium |
| CN113301430B (en) * | 2021-07-27 | 2021-12-07 | 腾讯科技(深圳)有限公司 | Video clipping method, video clipping device, electronic equipment and storage medium |
| US12468760B2 (en) | 2021-10-28 | 2025-11-11 | Adobe Inc. | Customizable framework to extract moments of interest |
| CN114205671A (en) * | 2022-01-17 | 2022-03-18 | 百度在线网络技术(北京)有限公司 | Video content editing method and device based on scene alignment |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111107392B (en) | Video processing method, device and electronic equipment | |
| CN111626202B (en) | Method and device for recognizing video | |
| US11250066B2 (en) | Method for processing information, electronic device and storage medium | |
| CN107633066B (en) | Information display method and device, electronic equipment and storage medium | |
| US9972360B2 (en) | Computerized system and method for automatically generating high-quality digital content thumbnails from digital video | |
| CN112328816B (en) | Media information display method and device, electronic equipment and storage medium | |
| CN111597433B (en) | Resource searching method and device and electronic equipment | |
| CN111601160A (en) | Method and device for editing video | |
| CN111782977B (en) | Point-of-interest processing method, device, equipment and computer readable storage medium | |
| JP6986187B2 (en) | Person identification methods, devices, electronic devices, storage media, and programs | |
| CN111901615A (en) | Live video playing method and device | |
| CN113746874A (en) | Voice packet recommendation method, device, equipment and storage medium | |
| EP3890294B1 (en) | Method and apparatus for extracting hotspot segment from video | |
| CN104602128A (en) | Video processing method and device | |
| CN111737501A (en) | A content recommendation method and device, electronic device, and storage medium | |
| CN111225236B (en) | Method and device for generating video cover, electronic equipment and computer-readable storage medium | |
| CN111967493A (en) | Image auditing method and device, electronic equipment and storage medium | |
| CN111158924B (en) | Content sharing method and device, electronic equipment and readable storage medium | |
| CN111984825A (en) | Method and apparatus for searching video | |
| CN111949820B (en) | Processing method, device and electronic equipment for video associated points of interest | |
| CN113542725B (en) | Video auditing method, video auditing device and electronic equipment | |
| CN111309200A (en) | Method, device, equipment and storage medium for determining extended reading content | |
| CN111447507B (en) | Video production method and device, electronic equipment and storage medium | |
| CN111726682A (en) | Video segment generating method, apparatus, device and computer storage medium | |
| CN111444819A (en) | Cut frame determination method, network training method, device, equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| RJ01 | Rejection of invention patent application after publication | ||
| RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200828 |