CN114640803A - Video clip processing system and method - Google Patents

Video clip processing system and method

Info

Publication number
CN114640803A
Authority
CN
China
Prior art keywords: video, processed, image, frame image, edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011474505.3A
Other languages
Chinese (zh)
Inventor
孙涛 (Sun Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Wanluo Culture Media Co ltd
Original Assignee
Shanghai Wanluo Culture Media Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Wanluo Culture Media Co ltd
Priority to CN202011474505.3A
Publication of CN114640803A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/278 Subtitling
    • H04N5/76 Television signal recording

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a video clip processing system and method, relating to the technical field of video processing. The system comprises: a display module for displaying a content identification set and a style identification set of a first video to be clipped, wherein the content identification set comprises at least one content identification, each content identification corresponds to at least one video segment in the first video, the style identification set comprises at least one style identification, each style identification corresponds to at least one material combination, and each material combination comprises at least one clipping material; a first receiving module for receiving a first input of a user on the content identification set; and a first video segmentation processing module for acquiring, from the first video and in response to the first input, a frame image to be processed that contains the object corresponding to the target content identification selected by the first input. The system offers a high degree of intelligence and a good video clipping effect.

Description

Video clip processing system and method
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to a video clip processing system and method.
Background
Video editing technology processes the video segments of a video by clipping to generate video works with different expressive power; it is often applied in scenarios such as short-video production and video compilations.
With the development of science and technology, image acquisition equipment improves by the day: recorded video is clearer, and its resolution and display quality have greatly improved. However, recorded video by itself is only monotonous raw material and cannot meet users' increasingly personalized demands. In the prior art, a user can process the video manually after recording, but such processing requires advanced image-processing skills and a long processing time, making it cumbersome and technically demanding.
In the prior art, video clipping is mainly performed manually: the user must spend a great deal of time aligning tracks for video speed and length, screening transition effects, matching the audio rhythm, and so on. The operation is cumbersome and clipping efficiency is low.
Disclosure of Invention
In view of this, the present invention provides a video clip processing system and method that offer a high degree of intelligence and a good video clipping effect.
In order to achieve the above purpose, the invention adopts the following technical solution:
a video clip processing system, the system comprising: the device comprises a display module, a storage module and a processing module, wherein the display module is used for displaying a content identification set and a style identification set of a first video to be clipped, the content identification set comprises at least one content identification, each content identification corresponds to at least one video segment in the first video, the style identification set comprises at least one style identification, each style identification corresponds to at least one material combination, and each material combination comprises at least one clipping material; the first receiving module is used for receiving a first input of a user to the content identification set; a first video segmentation processing module, configured to, in response to the first input, acquire, from the first video, a frame image to be processed that includes an object corresponding to a target content identifier selected by the first input; the second receiving module is used for receiving a second input of the style identification set by the user; the second video segmentation processing module is used for responding to the second input and acquiring a frame image to be processed of an object corresponding to the target content identification selected by the second input; and the synthesis module is used for synthesizing the frame image to be processed and the clip material in the target material combination to generate a second video.
Further, the first video segmentation processing module and the second video segmentation processing module each include: an acquisition unit for acquiring video data; a screening unit for screening the video data to acquire a frame image to be processed that contains a specific object; a segmentation processing unit for performing image segmentation on the frame image to be processed to obtain a foreground image for the specific object; a blurring processing unit for blurring the edge of the foreground image; an edge optimization processing unit for performing edge optimization on the blurred foreground image using a covariance matrix extracted from the frame image to be processed; a combination processing unit for combining the edge-optimized foreground image with a preset background image to obtain a processed frame image; and a covering unit for overwriting the frame image to be processed with the processed frame image to obtain the processed video data.
Further, the blurring processing unit is further configured to: for any one of the pixel points at the edge of the foreground image, select a pixel value from a preset pixel-value range and assign it to that edge pixel point. The edge optimization processing unit is further configured to: determine, according to the covariance matrix, whether the color information of the edge pixel points of the blurred foreground image is more similar to the color information of the foreground pixels of the frame image to be processed than to the color information of its background pixels; if so, update the pixel values of the edge pixel points of the blurred foreground image; if not, keep those pixel values unchanged.
Further, before the display module displays the content identification set and the style identification set of the first video to be clipped, the method further includes: classifying each video frame in the first video to obtain at least one video segment, wherein the video frames within each video segment belong to the same category; extracting the subtitle fragment of each video segment; extracting at least one keyword from each subtitle fragment according to the frequency of occurrence of each word in the fragment; and determining the at least one keyword of each subtitle fragment as the content identification of the corresponding video segment.
A video clip processing method, the method performing the following steps: Step 1: displaying a content identification set and a style identification set of a first video to be clipped, wherein the content identification set comprises at least one content identification, each content identification corresponds to at least one video segment in the first video, the style identification set comprises at least one style identification, each style identification corresponds to at least one material combination, and each material combination comprises at least one clipping material; Step 2: receiving a first input of a user on the content identification set; Step 3: in response to the first input, acquiring from the first video a frame image to be processed of the object corresponding to the target content identification selected by the first input; Step 4: receiving a second input of the user on the style identification set; Step 5: in response to the second input, acquiring a frame image to be processed of the object corresponding to the target content identification selected by the second input; Step 6: synthesizing the frame image to be processed with the clipping material in the target material combination to generate a second video.
Further, step 3 and step 5 each include the following: acquiring video data; screening the video data to acquire a frame image to be processed that contains a specific object; performing image segmentation on the frame image to be processed to obtain a foreground image for the specific object; blurring the edge of the foreground image; performing edge optimization on the blurred foreground image using a covariance matrix extracted from the frame image to be processed; combining the edge-optimized foreground image with a preset background image to obtain a processed frame image; and overwriting the frame image to be processed with the processed frame image to obtain processed video data.
Further, blurring the edge of the foreground image further includes: for any one of the pixel points at the edge of the foreground image, selecting a pixel value from a preset pixel-value range and assigning it to that edge pixel point. The edge optimization further includes: determining, according to the covariance matrix, whether the color information of the edge pixel points of the blurred foreground image is more similar to the color information of the foreground pixels of the frame image to be processed than to the color information of its background pixels; if so, updating the pixel values of the edge pixel points of the blurred foreground image; if not, keeping those pixel values unchanged.
Compared with the prior art, the invention has the following beneficial effects: a high degree of intelligence and a good video clipping effect.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
FIG. 1 is a system diagram of a video clip processing system according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a video clip processing method according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided for illustrative purposes, and other advantages and effects of the present invention will become apparent to those skilled in the art from the present disclosure.
It should be understood that the structures, ratios, and sizes shown in the drawings and described in this specification are provided only to accompany the disclosure so that those skilled in the art can understand and read it; they are not intended to limit the conditions under which the invention can be implemented and therefore carry no technical significance. Any structural modification, change of ratio, or adjustment of size that does not affect the efficacy or achievable purpose of the invention still falls within its scope. Likewise, terms such as "upper", "lower", "left", "right", "middle", and "one" are used in this specification for clarity of description only and are not intended to limit the implementable scope of the invention; changes or adjustments of their relative relationships, absent substantial technical changes, are also considered within the scope in which the invention can be implemented.
Example 1
As shown in FIG. 1, a video clip processing system, the system comprising: a display module for displaying a content identification set and a style identification set of a first video to be clipped, wherein the content identification set comprises at least one content identification, each content identification corresponds to at least one video segment in the first video, the style identification set comprises at least one style identification, each style identification corresponds to at least one material combination, and each material combination comprises at least one clipping material; a first receiving module for receiving a first input of a user on the content identification set; a first video segmentation processing module for acquiring, from the first video and in response to the first input, a frame image to be processed that contains the object corresponding to the target content identification selected by the first input; a second receiving module for receiving a second input of the user on the style identification set; a second video segmentation processing module for acquiring, in response to the second input, the frame image to be processed of the object corresponding to the target content identification selected by the second input; and a synthesis module for synthesizing the frame image to be processed with the clipping material in the target material combination to generate a second video.
Specifically, video clipping software performs nonlinear editing on a video source and belongs to the field of multimedia production software. It remixes added materials such as pictures, background music, special effects, and scenes with the video, cuts and combines video sources, and generates new videos with different expressive power through secondary encoding.
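As a minimal illustration of the cutting and secondary-encoding step just described, the following Python sketch drives the ffmpeg command-line tool through subprocess. It is a sketch under stated assumptions, not the patent's implementation: the file names and the five-second window are placeholders, ffmpeg must be available on the PATH, and the patent does not prescribe any particular tool.

    import subprocess

    def cut_and_reencode(src, dst, start="00:00:05", duration="00:00:05"):
        # Cut a segment out of the source and re-encode ("secondary
        # encoding") the result with common video/audio codecs.
        subprocess.run([
            "ffmpeg", "-y",
            "-ss", start, "-t", duration, "-i", src,
            "-c:v", "libx264", "-c:a", "aac",
            dst,
        ], check=True)

    cut_and_reencode("source.mp4", "clip.mp4")  # placeholder file names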
Example 2
On the basis of the above embodiment, the first video segmentation processing module and the second video segmentation processing module each include: an acquisition unit for acquiring video data; a screening unit for screening the video data to acquire a frame image to be processed that contains a specific object; a segmentation processing unit for performing image segmentation on the frame image to be processed to obtain a foreground image for the specific object; a blurring processing unit for blurring the edge of the foreground image; an edge optimization processing unit for performing edge optimization on the blurred foreground image using a covariance matrix extracted from the frame image to be processed; a combination processing unit for combining the edge-optimized foreground image with a preset background image to obtain a processed frame image; and a covering unit for overwriting the frame image to be processed with the processed frame image to obtain the processed video data.
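These units can be pictured as the following OpenCV sketch, which assumes the screening and segmentation steps have already produced a foreground mask for the specific object; the function name, the mask convention, and the 7x7 blur kernel are illustrative assumptions rather than the patent's specification (the covariance-based edge optimization is sketched separately under Example 3).

    import cv2
    import numpy as np

    def compose_frame(frame, fg_mask, background):
        # frame, background: HxWx3 uint8; fg_mask: HxW uint8, 255 = foreground.
        # Blurring processing unit: soften the mask edge so the cut-out
        # blends into the background instead of showing a hard seam.
        alpha = cv2.GaussianBlur(fg_mask, (7, 7), 0).astype(np.float32) / 255.0
        alpha = alpha[..., None]  # HxWx1 alpha channel
        # Combination processing unit: alpha-blend the foreground of the
        # original frame over the preset background image.
        composed = (frame.astype(np.float32) * alpha
                    + background.astype(np.float32) * (1.0 - alpha))
        # Covering unit: the caller overwrites the original frame with this.
        return composed.astype(np.uint8)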
Specifically, a single frame is a still picture; successive frames form a moving picture, such as a television image. The frame rate is simply the number of picture frames transmitted in one second, or equivalently how many times per second the graphics processor can refresh, and is commonly denoted FPS (frames per second). Each frame is a still image, and displaying frames in rapid succession creates the illusion of motion; the higher the frame rate, the smoother and more realistic the displayed motion.
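For instance, the frame rate and frame count of a source clip can be read with OpenCV as follows; "input.mp4" is a placeholder file name:

    import cv2

    cap = cv2.VideoCapture("input.mp4")
    fps = cap.get(cv2.CAP_PROP_FPS)                  # frames per second
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))   # number of frames
    print(f"{fps:.2f} fps, {total} frames, {total / fps:.1f} s of video")
    cap.release()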
Example 3
On the basis of the above embodiment, the blurring processing unit is further configured to: for any one of the pixel points at the edge of the foreground image, select a pixel value from a preset pixel-value range and assign it to that edge pixel point. The edge optimization processing unit is further configured to: determine, according to the covariance matrix, whether the color information of the edge pixel points of the blurred foreground image is more similar to the color information of the foreground pixels of the frame image to be processed than to the color information of its background pixels; if so, update the pixel values of the edge pixel points of the blurred foreground image; if not, keep those pixel values unchanged.
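One plausible reading of this embodiment, sketched in NumPy: edge pixels first receive values drawn from a preset range, then a covariance-weighted (Mahalanobis-style) color distance decides whether each blurred edge pixel is closer to the foreground colors or the background colors of the frame. The value range, the regularization term, and the exact meaning of "updating" are assumptions, since the text does not spell them out.

    import numpy as np

    rng = np.random.default_rng(0)

    def blur_edge(edge_pixels, value_range=(100, 156)):
        # Blurring: assign each edge pixel a value from a preset range.
        lo, hi = value_range
        return rng.integers(lo, hi, size=edge_pixels.shape).astype(np.uint8)

    def mahalanobis_sq(x, mean, inv_cov):
        # Squared covariance-weighted distance of each color to a cluster.
        d = x.astype(np.float64) - mean
        return np.einsum("...i,ij,...j->...", d, inv_cov, d)

    def optimize_edge(blurred_px, original_px, fg_colors, bg_colors):
        # fg_colors / bg_colors: (N, 3) color samples from the frame itself.
        fg_inv = np.linalg.inv(np.cov(fg_colors.T) + 1e-6 * np.eye(3))
        bg_inv = np.linalg.inv(np.cov(bg_colors.T) + 1e-6 * np.eye(3))
        closer_to_fg = (mahalanobis_sq(blurred_px, fg_colors.mean(0), fg_inv)
                        < mahalanobis_sq(blurred_px, bg_colors.mean(0), bg_inv))
        # "Update" pixels that read as foreground (restore the original
        # foreground color); keep the blurred value everywhere else.
        return np.where(closer_to_fg[..., None], original_px, blurred_px)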
In particular, edge detection is a fundamental problem in image processing and computer vision; its purpose is to identify points in a digital image where the brightness changes significantly. Significant changes in image attributes typically reflect important events and changes in properties, including (i) discontinuities in depth, (ii) discontinuities in surface orientation, (iii) changes in material properties, and (iv) changes in scene illumination. Edge detection is an active research area within image processing and computer vision, especially feature extraction.
Edges may be viewpoint-dependent, varying from one viewpoint to another and typically reflecting the geometry of the scene, such as objects occluding one another; or viewpoint-independent, generally reflecting properties of the viewed object, such as surface texture and surface shape. In two-dimensional and higher-dimensional spaces, the effect of perspective projection must also be taken into account.
A typical edge might be, for example, the border between a block of red and a block of yellow; by contrast, an edge line might be a small number of differently colored points on an otherwise constant background, with one edge on each side of the line. Edges play a very important role in many image processing applications, although in recent years substantial research has also gone into computer vision methods that do not rely on edge detection as a preprocessing step.
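By way of illustration only (the patent does not prescribe a particular detector), a standard edge map can be computed with OpenCV's Canny detector; the file name and the two thresholds below are placeholder values:

    import cv2

    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
    edges = cv2.Canny(img, 100, 200)  # low / high hysteresis thresholds
    cv2.imwrite("frame_edges.png", edges)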
Example 4
On the basis of the above embodiment, before the display module displays the content identification set and the style identification set of the first video to be clipped, the method further includes: classifying each video frame in the first video to obtain at least one video segment, wherein the video frames within each video segment belong to the same category; extracting the subtitle fragment of each video segment; extracting at least one keyword from each subtitle fragment according to the frequency of occurrence of each word in the fragment; and determining the at least one keyword of each subtitle fragment as the content identification of the corresponding video segment.
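A compact sketch of the keyword step, under two simplifying assumptions: the subtitles have already been extracted as text, and a regex tokenizer with a toy stop-word list suffices (real Chinese subtitles would need a proper word segmenter such as jieba). The top-k cutoff is likewise illustrative.

    import re
    from collections import Counter

    STOP_WORDS = frozenset({"the", "a", "is"})  # toy list, not the patent's

    def keywords_for_segment(subtitle_text, k=3):
        # Rank words by how often they occur in the subtitle fragment.
        words = re.findall(r"\w+", subtitle_text.lower())
        counts = Counter(w for w in words if w not in STOP_WORDS)
        return [word for word, _ in counts.most_common(k)]

    # Each segment's top keywords become its content identifications.
    print(keywords_for_segment("the cat chases the ball, the cat jumps"))
    # -> ['cat', 'chases', 'ball']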
Example 5
A video clip processing method, the method performing the following steps: Step 1: displaying a content identification set and a style identification set of a first video to be clipped, wherein the content identification set comprises at least one content identification, each content identification corresponds to at least one video segment in the first video, the style identification set comprises at least one style identification, each style identification corresponds to at least one material combination, and each material combination comprises at least one clipping material; Step 2: receiving a first input of a user on the content identification set; Step 3: in response to the first input, acquiring from the first video a frame image to be processed of the object corresponding to the target content identification selected by the first input; Step 4: receiving a second input of the user on the style identification set; Step 5: in response to the second input, acquiring a frame image to be processed of the object corresponding to the target content identification selected by the second input; Step 6: synthesizing the frame image to be processed with the clipping material in the target material combination to generate a second video.
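Chaining steps 1 to 6 end to end might look as follows; the dict-based identifier sets, the scripted user choices, and the trivial compose step are all stand-ins for illustration, not interfaces taken from the patent.

    def clip_video(segments, content_ids, style_ids, pick_content, pick_style):
        # Step 1: display both identifier sets (stdout stands in for a UI).
        print("content:", list(content_ids), "| style:", list(style_ids))
        # Steps 2-3: the first input selects a content identification;
        # gather the frames of the segments it corresponds to.
        frames = [f for i in content_ids[pick_content] for f in segments[i]]
        # Steps 4-5: the second input selects a style identification,
        # which resolves to a material combination.
        materials = style_ids[pick_style]
        # Step 6: synthesize the second video (here: pair each frame with
        # the clipping materials).
        return [(frame, materials) for frame in frames]

    segments = {0: ["f0", "f1"], 1: ["f2"]}
    second_video = clip_video(
        segments,
        content_ids={"cat": [0]},
        style_ids={"retro": ["film-grain", "jazz-loop"]},
        pick_content="cat", pick_style="retro",
    )
    print(second_video)  # [('f0', [...]), ('f1', [...])]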
Example 6
On the basis of the above embodiment, step 3 and step 5 each include: acquiring video data; screening the video data to acquire a frame image to be processed that contains a specific object; performing image segmentation on the frame image to be processed to obtain a foreground image for the specific object; blurring the edge of the foreground image; performing edge optimization on the blurred foreground image using a covariance matrix extracted from the frame image to be processed; combining the edge-optimized foreground image with a preset background image to obtain a processed frame image; and overwriting the frame image to be processed with the processed frame image to obtain processed video data.
Example 7
On the basis of the previous embodiment, blurring the edge of the foreground image further includes: for any one of the pixel points at the edge of the foreground image, selecting a pixel value from a preset pixel-value range and assigning it to that edge pixel point. The edge optimization further includes: determining, according to the covariance matrix, whether the color information of the edge pixel points of the blurred foreground image is more similar to the color information of the foreground pixels of the frame image to be processed than to the color information of its background pixels; if so, updating the pixel values of the edge pixel points of the blurred foreground image; if not, keeping those pixel values unchanged.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the modules and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
Those of skill in the art will appreciate that the various illustrative units and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both, and that the programs corresponding to these units and steps may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether a function is performed in hardware or software depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or unit that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or unit.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not to be construed as limiting it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present invention.

Claims (7)

1. A video clip processing system, the system comprising:
a display module for displaying a content identification set and a style identification set of a first video to be clipped, wherein the content identification set comprises at least one content identification, each content identification corresponds to at least one video segment in the first video, the style identification set comprises at least one style identification, each style identification corresponds to at least one material combination, and each material combination comprises at least one clipping material;
the first receiving module is used for receiving a first input of a user to the content identification set;
the first video segmentation processing module is used for responding to the first input and acquiring a frame image to be processed of an object corresponding to a target content identifier selected by the first input from the first video;
the second receiving module is used for receiving a second input of the style identification set by the user;
the second video segmentation processing module is used for responding to the second input and acquiring a frame image to be processed of an object corresponding to the target content identification selected by the second input;
and the synthesis module is used for synthesizing the frame image to be processed and the clipping material in the target material combination to generate a second video.
2. The system of claim 1, wherein the first video segmentation processing module and the second video segmentation processing module each comprise: an acquisition unit for acquiring video data; a screening unit for screening the video data to acquire a frame image to be processed that contains a specific object; a segmentation processing unit for performing image segmentation on the frame image to be processed to obtain a foreground image for the specific object; a blurring processing unit for blurring the edge of the foreground image; an edge optimization processing unit for performing edge optimization on the blurred foreground image using a covariance matrix extracted from the frame image to be processed; a combination processing unit for combining the edge-optimized foreground image with a preset background image to obtain a processed frame image; and a covering unit for overwriting the frame image to be processed with the processed frame image to obtain the processed video data.
3. The system of claim 2, wherein the blurring processing unit is further configured to: for any one of the pixel points at the edge of the foreground image, select a pixel value from a preset pixel-value range and assign it to that edge pixel point; and the edge optimization processing unit is further configured to: determine, according to the covariance matrix, whether the color information of the edge pixel points of the blurred foreground image is more similar to the color information of the foreground pixels of the frame image to be processed than to the color information of its background pixels; if so, update the pixel values of the edge pixel points of the blurred foreground image; and if not, keep those pixel values unchanged.
4. The system of claim 3, wherein, before the display module displays the content identification set and the style identification set of the first video to be clipped, the method further comprises: classifying each video frame in the first video to obtain at least one video segment, wherein the video frames in each video segment have the same category; extracting a subtitle fragment of each video segment; extracting at least one keyword from each subtitle fragment according to the frequency of occurrence of each word in the fragment; and determining the at least one keyword of each subtitle fragment as the content identification of the corresponding video segment.
5. A video clip processing method based on the system of any of claims 1 to 4, wherein the method performs the following steps: Step 1: displaying a content identification set and a style identification set of a first video to be clipped, wherein the content identification set comprises at least one content identification, each content identification corresponds to at least one video segment in the first video, the style identification set comprises at least one style identification, each style identification corresponds to at least one material combination, and each material combination comprises at least one clipping material; Step 2: receiving a first input of a user on the content identification set; Step 3: in response to the first input, acquiring from the first video a frame image to be processed of the object corresponding to the target content identification selected by the first input; Step 4: receiving a second input of the user on the style identification set; Step 5: in response to the second input, acquiring a frame image to be processed of the object corresponding to the target content identification selected by the second input; Step 6: synthesizing the frame image to be processed with the clipping material in the target material combination to generate a second video.
6. The method of claim 5, wherein step 3 and step 5 each comprise the following: acquiring video data; screening the video data to acquire a frame image to be processed that contains a specific object; performing image segmentation on the frame image to be processed to obtain a foreground image for the specific object; blurring the edge of the foreground image; performing edge optimization on the blurred foreground image using a covariance matrix extracted from the frame image to be processed; combining the edge-optimized foreground image with a preset background image to obtain a processed frame image; and overwriting the frame image to be processed with the processed frame image to obtain processed video data.
7. The method of claim 6, wherein blurring the edge of the foreground image further comprises: for any one of the pixel points at the edge of the foreground image, selecting a pixel value from a preset pixel-value range and assigning it to that edge pixel point; and the edge optimization further comprises: determining, according to the covariance matrix, whether the color information of the edge pixel points of the blurred foreground image is more similar to the color information of the foreground pixels of the frame image to be processed than to the color information of its background pixels; if so, updating the pixel values of the edge pixel points of the blurred foreground image; and if not, keeping those pixel values unchanged.
CN202011474505.3A 2020-12-15 2020-12-15 Video clip processing system and method Pending CN114640803A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011474505.3A CN114640803A (en) 2020-12-15 2020-12-15 Video clip processing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011474505.3A CN114640803A (en) 2020-12-15 2020-12-15 Video clip processing system and method

Publications (1)

Publication Number Publication Date
CN114640803A (en) 2022-06-17

Family

ID=81945513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011474505.3A Pending CN114640803A (en) 2020-12-15 2020-12-15 Video clip processing system and method

Country Status (1)

Country Link
CN (1) CN114640803A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024217226A1 (en) * 2023-04-21 2024-10-24 北京字跳网络技术有限公司 Video producing method and apparatus, device, and storage medium


Similar Documents

Publication Publication Date Title
US11601630B2 (en) Video processing method, electronic device, and non-transitory computer-readable medium
AU2007345938B2 (en) Method and system for video indexing and video synopsis
US7760956B2 (en) System and method for producing a page using frames of a video stream
US9443555B2 (en) Multi-stage production pipeline system
US7383509B2 (en) Automatic generation of multimedia presentation
US7894633B1 (en) Image conversion and encoding techniques
CN104272377B (en) Moving picture project management system
EP3238213B1 (en) Method and apparatus for generating an extrapolated image based on object detection
US7904815B2 (en) Content-based dynamic photo-to-video methods and apparatuses
US20120229489A1 (en) Pillarboxing Correction
JP2000030040A (en) Image processing apparatus and computer-readable storage medium
US20130163961A1 (en) Video summary with depth information
JP2006333453A (en) Method and system for spatially and temporally summarizing video
JP2004505394A (en) Image conversion and coding technology
US7848567B2 (en) Determining regions of interest in synthetic images
KR20150112535A (en) Representative image managing apparatus and method
CN111652024B (en) Face display and live broadcast method and device, electronic equipment and storage medium
CN114640803A (en) Video clip processing system and method
WO2012153744A1 (en) Information processing device, information processing method, and information processing program
JP2025143234A (en) Personalized Video Mechanism
CN119580164A (en) A target segmentation and tracking, material generation method and naked eye 3D scene card material
CN120091185A (en) Multimedia processing method and device
Chapdelaine et al. Designing caption production rules based on face, text, and motion detection
KR20210075445A (en) Apparatus for editing hologram video
IL199678A (en) Method and system for video indexing and video synopsis

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20220617