CN119893221A - Animation generation method and device, computer equipment and storage medium
- Publication number: CN119893221A
- Application number: CN202311388610.9A
- Authority: CN (China)
- Prior art keywords: animation, editing, data, sub, dimensions
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H04N21/47205—End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
- H04N21/2187—Live feed
- H04N21/43072—Synchronising the rendering of multiple content streams or additional data on the same device
- H04N21/44016—Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
- H04N21/816—Monomedia components involving special video data, e.g. 3D video
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Computer Security & Cryptography (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The present disclosure provides an animation generation method and apparatus, a computer device, and a storage medium. The method includes: obtaining animation sub-data in a plurality of animation editing dimensions; displaying editing tracks respectively corresponding to the plurality of animation editing dimensions on an animation editing interface; determining the edited animation sub-data in the plurality of animation editing dimensions in response to editing operations performed, through the editing tracks, on the animation sub-data in the plurality of animation editing dimensions; and generating target animation data based on the edited animation sub-data, and generating a target animation based on the target animation data. Compared with editing operations performed on the video layer of recorded animation content, editing operations on the acquired animation sub-data in the plurality of animation editing dimensions act on the animation content at the lower, animation sub-data layer, so the editing effect is better and higher-quality animation works can be obtained.
Description
Technical Field
The present disclosure relates to the field of animation technologies, and in particular, to an animation generation method, an animation generation device, a computer device, and a storage medium.
Background
With the development of virtual reality and augmented reality technology, virtual live broadcast, in which virtual characters are used to host a live stream, has emerged. During a virtual live broadcast, a virtual character driven by a virtual character control object can take the place of a real-person host and give performances. To allow viewers to watch a playback, the virtual live broadcast may be recorded to generate an animation containing the virtual character.
In the related art, after a virtual live broadcast is recorded, the highlight content is typically clipped out of the complete recording to produce an animation clip work featuring the virtual character and increase the appeal of the final result. However, because clipping can only operate on the picture content of the virtual live broadcast, the achievable editing effect is limited and the quality of the generated animation works needs to be improved.
Disclosure of Invention
The embodiment of the disclosure at least provides an animation generation method, an animation generation device, computer equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an animation generation method, including:
Obtaining animation sub-data under a plurality of animation editing dimensions;
displaying the editing tracks respectively corresponding to the animation editing dimensions on an animation editing interface;
determining the edited animation sub-data in the plurality of animation editing dimensions in response to editing operations performed, through the editing tracks, on the animation sub-data in the plurality of animation editing dimensions;
and generating target animation data based on the animation sub-data in the edited multiple animation editing dimensions, and generating target animation based on the target animation data.
In one possible implementation, the plurality of animation editing dimensions include at least two of an action control data editing dimension, a sub-mirror editing dimension, and a sub-mirror tracking target editing dimension, wherein:
The action control data editing dimension is used for editing action control data of the virtual character, and the action control data is used for driving the virtual character to execute corresponding actions;
The sub-mirror editing dimension is used for editing each sub-mirror in the animation;
and the sub-mirror tracking target editing dimension is used for editing the tracked target virtual character.
In a possible implementation manner, the acquiring animation sub-data in multiple animation editing dimensions includes:
determining, in response to an animation data recording operation, a recording object and a recording period indicated by the animation data recording operation, wherein the recording object comprises a target virtual character among the virtual characters displayed in a live broadcast picture of a virtual scene;
and acquiring the animation recording data of the target virtual character in the plurality of animation editing dimensions in the recording period, and taking the animation recording data in the plurality of animation editing dimensions as the animation sub-data.
In one possible embodiment, the animation data recording operation includes a plurality of animation data recording operations;
displaying the editing tracks respectively corresponding to the animation editing dimensions on an animation editing interface comprises the following steps:
displaying the animation sub-data recorded in the multiple animation data recording operations in the editing tracks of the animation editing interface, wherein the animation sub-data recorded in each animation data recording operation comprises animation sub-data under the plurality of editing tracks.
In a possible embodiment, the method further comprises:
generating, in response to a copy operation on the animation sub-data recorded by the animation data recording operation, at least one set of duplicate animation sub-data corresponding to the animation sub-data in the plurality of animation editing dimensions;
displaying the editing tracks respectively corresponding to the animation editing dimensions on an animation editing interface comprises the following steps:
and displaying the animation sub-data recorded by the animation data recording operation and the at least one set of duplicate animation sub-data in an editing track of the animation editing interface, wherein the animation sub-data recorded by the animation data recording operation and the at least one set of duplicate animation sub-data can be edited synchronously.
In a possible embodiment, the method further comprises:
in response to an editing trigger operation on the animation sub-data in any animation editing dimension, scaling the editing area of the animation sub-data in that animation editing dimension within the animation editing page, based on the display size of the animation sub-data in that animation editing dimension in the animation editing page.
In a possible embodiment, the method further comprises:
determining a target animation file to be played in response to a triggering operation on any one of a plurality of animation files displayed on a live broadcast control page during the live broadcast of a virtual scene, wherein the plurality of animation files comprise the animation file corresponding to the target animation;
and sending the animation data corresponding to the target animation file to a live broadcast server, so that the live broadcast server pushes the animation data corresponding to the target animation file to a live broadcast client and the animation content corresponding to the target animation file is displayed on the live broadcast client.
In a second aspect, an embodiment of the present disclosure further provides an animation generating apparatus, including:
the acquisition module is used for acquiring animation sub-data under a plurality of animation editing dimensions;
the display module is used for displaying the editing tracks corresponding to the animation editing dimensions respectively on the animation editing interface;
A determining module, configured to determine, in response to editing operations performed through the editing tracks on the animation sub-data in the plurality of animation editing dimensions, the edited animation sub-data in the plurality of animation editing dimensions;
and the generating module is used for generating target animation data based on the animation sub-data in the edited multiple animation editing dimensions and generating target animation based on the target animation data.
In one possible implementation, the plurality of animation editing dimensions include at least two of an action control data editing dimension, a sub-mirror editing dimension, and a sub-mirror tracking target editing dimension, wherein:
The action control data editing dimension is used for editing action control data of the virtual character, and the action control data is used for driving the virtual character to execute corresponding actions;
The sub-mirror editing dimension is used for editing each sub-mirror in the animation;
and the sub-mirror tracking target editing dimension is used for editing the tracked target virtual character.
In a possible implementation manner, the acquiring module is configured to, when acquiring animation sub-data in multiple animation editing dimensions:
determining, in response to an animation data recording operation, a recording object and a recording period indicated by the animation data recording operation, wherein the recording object comprises a target virtual character among the virtual characters displayed in a live broadcast picture of a virtual scene;
and acquiring the animation recording data of the target virtual character in the plurality of animation editing dimensions in the recording period, and taking the animation recording data in the plurality of animation editing dimensions as the animation sub-data.
In one possible embodiment, the animation data recording operation includes a plurality of animation data recording operations;
the display module is configured to, when displaying the editing tracks corresponding to the plurality of animation editing dimensions on the animation editing interface:
display the animation sub-data recorded in the multiple animation data recording operations in the editing tracks of the animation editing interface, wherein the animation sub-data recorded in each animation data recording operation comprises animation sub-data under the plurality of editing tracks.
In a possible implementation manner, the obtaining module is further configured to:
generate, in response to a copy operation on the animation sub-data recorded by the animation data recording operation, at least one set of duplicate animation sub-data corresponding to the animation sub-data in the plurality of animation editing dimensions;
the display module is configured to, when displaying the editing tracks corresponding to the plurality of animation editing dimensions on the animation editing interface:
display the animation sub-data recorded by the animation data recording operation and the at least one set of duplicate animation sub-data in the editing tracks of the animation editing interface, wherein the animation sub-data recorded by the animation data recording operation and the at least one set of duplicate animation sub-data can be edited synchronously.
In a possible embodiment, the display module is further configured to:
in response to an editing trigger operation on the animation sub-data in any animation editing dimension, scale the editing area of the animation sub-data in that animation editing dimension within the animation editing page, based on the display size of the animation sub-data in that animation editing dimension in the animation editing page.
In a possible implementation manner, the generating module is further configured to:
determine a target animation file to be played in response to a triggering operation on any one of a plurality of animation files displayed on a live broadcast control page during the live broadcast of a virtual scene, wherein the plurality of animation files comprise the animation file corresponding to the target animation;
and send the animation data corresponding to the target animation file to a live broadcast server, so that the live broadcast server pushes the animation data corresponding to the target animation file to a live broadcast client and the animation content corresponding to the target animation file is displayed on the live broadcast client.
In a third aspect, the disclosed embodiments further provide a computer device comprising a processor, a memory, and a bus. The memory stores machine-readable instructions executable by the processor; when the computer device runs, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect or of any of its possible implementations.
In a fourth aspect, the presently disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the first aspect, or any of the possible implementations of the first aspect.
According to the animation generation method and apparatus, the computer device, and the storage medium described above, after the animation sub-data in the plurality of animation editing dimensions is obtained, the editing tracks respectively corresponding to the plurality of animation editing dimensions can be displayed on an animation editing interface; the edited animation sub-data in the plurality of animation editing dimensions is determined in response to editing operations performed on the animation sub-data through the editing tracks; target animation data can then be generated based on the edited animation sub-data, and the target animation generated from the target animation data. Compared with editing operations performed on the video layer of recorded animation content, editing operations on the acquired animation sub-data in the plurality of animation editing dimensions act on the animation content at the lower, animation sub-data layer, so the editing effect is better and higher-quality animation works can be obtained.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below; these drawings are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure, and together with the description serve to illustrate the technical solutions of the present disclosure. It is to be understood that the following drawings illustrate only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of an animation generation method provided by some embodiments of the present disclosure;
FIG. 2 illustrates a schematic diagram of an animation editing page in an animation generation method provided by some embodiments of the present disclosure;
FIG. 3 is a schematic diagram of an animation editing page after scaling in the animation generation method according to some embodiments of the present disclosure;
FIG. 4 illustrates an architectural diagram of an animation generation device provided by some embodiments of the present disclosure;
Fig. 5 illustrates a schematic diagram of a computer device provided in some embodiments of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It should be noted that like reference numerals and letters refer to like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, "A and/or B" covers three cases: A alone, both A and B, and B alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner in accordance with the relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, a prompt is sent to the user to explicitly inform the user that the requested operation will require obtaining and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server, or storage medium, that executes the operations of the technical solution of the present disclosure.
As an alternative but non-limiting implementation, in response to receiving an active request from a user, the manner in which the prompt information is sent to the user may be, for example, a popup, in which the prompt information may be presented in a text manner. In addition, a selection control for the user to select to provide personal information to the electronic device in a 'consent' or 'disagreement' manner can be carried in the popup window.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
Research has shown that, after a virtual live broadcast is recorded, the highlight content is typically clipped out of the complete recording to produce an animation clip work featuring the virtual character and increase the appeal of the final result; however, because clipping can only operate on the picture content of the virtual live broadcast, the achievable editing effect is limited and the quality of the generated animation works needs to be improved.
Based on the above research, the present disclosure provides an animation generation method and apparatus, a computer device, and a storage medium. After animation sub-data in a plurality of animation editing dimensions is obtained, editing tracks respectively corresponding to the plurality of animation editing dimensions are displayed on an animation editing interface, and the edited animation sub-data in the plurality of animation editing dimensions is determined in response to editing operations performed on the animation sub-data through the editing tracks; target animation data can then be generated based on the edited animation sub-data, and the target animation generated from the target animation data. Compared with editing operations performed on the video layer of recorded animation content, editing operations on the acquired animation sub-data in the plurality of animation editing dimensions act on the animation content at the lower, animation sub-data layer, so the editing effect is better and higher-quality animation works can be obtained.
To facilitate understanding of the present embodiment, the animation generation method disclosed in the embodiments of the present disclosure is first described in detail. The execution subject of the animation generation method provided in the embodiments of the present disclosure is generally a computer device with a certain computing capability, for example a terminal device, a server, or another processing device; the terminal device may be user equipment (UE), a mobile device, a user terminal, a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the animation generation method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of an animation generation method according to an embodiment of the disclosure is shown, where the method includes S101 to S104, where:
s101, acquiring animation sub-data under a plurality of animation editing dimensions.
S102, displaying the editing tracks corresponding to the animation editing dimensions respectively on an animation editing interface.
And S103, determining the edited animation sub-data in the plurality of animation editing dimensions in response to editing operations performed, through the editing tracks, on the animation sub-data in the plurality of animation editing dimensions.
S104, generating target animation data based on the animation sub-data in the edited multiple animation editing dimensions, and generating target animation based on the target animation data.
The following is a detailed description of the above steps.
Acquisition procedure for S101:
An animation editing dimension represents a dimension along which the animation sub-data is edited. The plurality of animation editing dimensions may include at least two of an action control data editing dimension, a sub-mirror editing dimension, and a sub-mirror tracking target editing dimension, and may further include an audio editing dimension, wherein:
Dimension 1: action control data editing dimension
Here, the action control data editing dimension may be used to edit the action control data of the virtual character, where the action control data is used to drive the virtual character to perform corresponding actions. The action control data may be, for example, virtual character skeleton control data streamed via LiveLink (a plug-in provided by the engine that offers a universal interface for streaming animation data from an external source into the engine), and may be real-time motion capture data; such data is collected by a motion capture device and characterizes the actions made by the virtual character control object in the real scene.
Specifically, by editing the motion control data, the motion of the virtual character in the finally generated target animation can be edited, so that the virtual character in the finally obtained target animation can perform the motion meeting the requirement.
Dimension 2: sub-mirror editing dimension
Here, the animation sub-data corresponding to the virtual character can be shot by different virtual cameras, each shooting the virtual character from a different angle so as to obtain the corresponding animation sub-data. The sub-mirror editing dimension may be used to edit each sub-mirror (camera shot) in the animation, where one sub-mirror corresponds to one virtual camera.
Specifically, editing the sub-mirrors means editing the animation sub-data shot by each virtual camera, so that animation data containing the virtual character at different shooting angles can be obtained.
Dimension 3: sub-mirror tracking target editing dimension
Here, the sub-mirror tracking target editing dimension may be used to edit the tracked target virtual character.
Specifically, since a virtual camera may shoot multiple virtual characters simultaneously, in order to ensure that the target virtual character is tracked and thus highlighted in the finally generated target animation, editing operations in the sub-mirror tracking target editing dimension can be performed to determine the tracked target virtual character in the target animation and achieve a highlighted display effect for the target virtual character.
Dimension 4: audio editing dimension
Here, the audio editing dimension may be used to edit audio data in an animation, which may include data such as background music data.
Specifically, by editing the audio data in the animation, the audio data in the finally obtained target animation can be enabled to meet the audio requirement of the animation.
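To make the four dimensions concrete, the following is a minimal sketch of how the per-dimension animation sub-data might be represented; every class and field name here is a hypothetical illustration, not a structure defined by the disclosure (Python):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MotionControlFrame:
    """One frame of skeleton control data, e.g. streamed through a
    LiveLink-style interface from motion-capture hardware."""
    timestamp: float              # seconds from the start of recording
    bone_transforms: List[float]  # flattened per-bone transform data

@dataclass
class ShotClip:
    """One sub-mirror (shot): footage from a single virtual camera."""
    camera_id: int
    start: float
    end: float

@dataclass
class TrackingClip:
    """Sub-mirror tracking target: which virtual character is followed."""
    target_character_id: str
    start: float
    end: float

@dataclass
class AudioClip:
    """Audio sub-data, e.g. background music."""
    source: str                   # file path or asset id
    start: float
    end: float

@dataclass
class AnimationSubData:
    """One group of sub-data from a single recording operation,
    with one track per animation editing dimension."""
    motion: List[MotionControlFrame] = field(default_factory=list)
    shots: List[ShotClip] = field(default_factory=list)
    tracking: List[TrackingClip] = field(default_factory=list)
    audio: List[AudioClip] = field(default_factory=list)
```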
In a possible implementation manner, when obtaining animation sub-data in multiple animation editing dimensions, the following steps A1-A2 may be used:
A1, in response to an animation data recording operation, determining a recording object and a recording period indicated by the operation, wherein the recording object comprises a target virtual character among the virtual characters displayed in a live broadcast picture of the virtual scene.
A2, acquiring animation recording data of the target virtual character in the plurality of animation editing dimensions during the recording period, and taking the animation recording data in the plurality of animation editing dimensions as the animation sub-data.
The animation data recording operation may include an animation data recording start operation and an animation data recording end operation, in which case the time between the start operation and the end operation constitutes the recording period; alternatively, the recording period may be indicated directly by the animation data recording operation, for example a recording period of 10 minutes. The recording object may be edited on a recording operation editing page corresponding to the animation data recording operation, and recording may start once editing on that page is completed.
In addition, the recording operation editing page may further indicate the types of the recorded animation sub-data (i.e., the animation editing dimensions to be edited later); for example, it may indicate that the recorded animation sub-data corresponds to the audio editing dimension, the action control data editing dimension, and the sub-mirror editing dimension. By indicating that animation sub-data is to be recorded in multiple types of animation editing dimensions, the corresponding types of animation recording data can be obtained during recording, thereby completing the recording of the animation sub-data.
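As an illustration of steps A1-A2, the sketch below records per-dimension data for a target virtual character over a fixed recording period; the `capture` object and its `sample` method are assumptions made for the example, not an interface from the disclosure:

```python
import time

def record_animation_sub_data(capture, character_id, duration_s, dimensions):
    """Collect per-dimension animation recording data for the target
    virtual character over the recording period, and return it as the
    editable animation sub-data."""
    recorded = {dim: [] for dim in dimensions}
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        for dim in dimensions:
            recorded[dim].append(capture.sample(character_id, dim))
        time.sleep(1 / 60)  # sample at roughly the animation frame rate
    return recorded

# Usage sketch: record audio, motion, and shot data for ten minutes.
# sub_data = record_animation_sub_data(capture, "character_1", 600,
#                                      ["audio", "motion", "shots"])
```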
Editing track presentation process for S102:
here, the editing track corresponding to the animation editing dimension is used to show the animation sub-data in the animation editing dimension.
A schematic diagram of the animation editing page is shown in FIG. 2. The animation editing page may include an animation file selection area and an animation sub-data editing area. Files in the animation file selection area may be displayed as a structure tree or the like, to facilitate operations such as storing and selecting the animation data to be edited and the animation data whose editing has been completed. The animation sub-data editing area may include a time axis indicating when the animation sub-data is displayed, a plurality of animation editing dimensions, and the animation sub-data in each animation editing dimension, displayed in editable editing tracks. The animation editing dimensions in FIG. 2 include an action control data editing dimension, an audio editing dimension, a sub-mirror tracking target editing dimension, and a sub-mirror editing dimension, where the six differently colored areas in the sub-mirror editing dimension represent the animation sub-data corresponding to sub-mirrors 1 to 6, respectively.
In a possible implementation, the animation data recording operation may include multiple animation data recording operations. When the editing tracks respectively corresponding to the plurality of animation editing dimensions are displayed on the animation editing interface, the animation sub-data recorded in the multiple animation data recording operations may be displayed in the editing tracks of the animation editing interface, where the animation sub-data recorded in each animation data recording operation includes animation sub-data under the plurality of editing tracks.
Here, the animation sub-data recorded in the plurality of animation editing dimensions by each animation data recording operation may form one group of animation sub-data, so that multiple groups of animation sub-data are obtained after multiple animation data recording operations.
Specifically, when the animation editing page displays the multiple groups of animation sub-data obtained after multiple animation data recording operations, the animation sub-data recorded in each animation data recording operation can be displayed sequentially in the editing tracks of the animation editing interface, with each recorded group comprising animation sub-data under the plurality of editing tracks, thereby achieving an orderly display of each group of animation sub-data.
In this way, by displaying multiple groups of animation sub-data simultaneously in the animation editing page, the animation data can be edited conveniently across the groups, so that a target animation meeting the animation editing requirements can be obtained.
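A minimal sketch of laying out the groups produced by successive recording operations one after another on the editing tracks; the group representation (a dict mapping dimension names to clip dicts with an `end` field) is a hypothetical choice for illustration:

```python
def group_duration(group):
    """Assumed helper: a group's duration is the latest clip end time
    across all of its dimensions."""
    return max((clip["end"] for clips in group.values() for clip in clips),
               default=0.0)

def lay_out_recordings(groups):
    """Place each recorded group of sub-data one after another on the
    editing tracks, so that groups from successive recording operations
    are shown in order."""
    cursor = 0.0
    placements = []
    for group in groups:
        placements.append({"group": group, "track_offset": cursor})
        cursor += group_duration(group)
    return placements
```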
Further, the recorded animation sub-data can be copied and edited by the following steps:
B1, in response to a copy operation on the animation sub-data recorded by the animation data recording operation, generating at least one set of duplicate animation sub-data corresponding to the animation sub-data in the plurality of animation editing dimensions.
B2, displaying the animation sub-data recorded by the animation data recording operation and the at least one set of duplicate animation sub-data in the editing tracks of the animation editing interface, wherein the animation sub-data recorded by the animation data recording operation and the at least one set of duplicate animation sub-data can be edited synchronously.
Here, when the animation sub-data recorded by the animation data recording operation and the at least one set of duplicate animation sub-data are displayed in the editing tracks of the animation editing interface, the duplicate animation sub-data corresponding to the copy operation may be added, in response to a paste operation on the at least one set of duplicate animation sub-data, at the paste position indicated by the operation. This places the duplicate animation sub-data in the animation editing page, so that the recorded animation sub-data and the duplicate animation sub-data are displayed simultaneously in the editing tracks of the animation editing interface.
Specifically, that the animation sub-data recorded by the animation data recording operation and the at least one set of duplicate animation sub-data can be edited synchronously means that an editing operation applied to the recorded animation sub-data is also performed synchronously on the at least one set of duplicate animation sub-data, so that multiple identical groups of animation sub-data can be edited with a single editing operation.
In addition, if animation sub-data already exists at the paste position, the duplicate animation sub-data and the animation sub-data existing at the paste position may be merged into a new group of animation sub-data, and editing operations on the merged animation sub-data and on the animation sub-data recorded by the animation data recording operation (i.e., the animation sub-data indicated by the copy operation) are independent of each other.
Therefore, by setting corresponding copy, paste and synchronous editing mechanisms for the animation editing page, an editor can conveniently edit the animation sub-data, so that the animation data editing efficiency and the animation generation efficiency are improved.
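The copy-and-synchronized-editing mechanism could be modeled as linked duplicates that share every edit, as in this sketch (the class, method names, and clip representation are illustrative assumptions):

```python
class LinkedClipGroup:
    """A recorded group plus its linked duplicates: an edit applied to
    the group is propagated to every member, so the original sub-data
    and its copies stay synchronized."""

    def __init__(self, clips):
        self.members = [clips]          # the originally recorded clips

    def copy(self):
        duplicate = [dict(clip) for clip in self.members[0]]
        self.members.append(duplicate)  # linked duplicate set
        return duplicate

    def apply_edit(self, edit):
        # Synchronized editing: one operation edits all linked members.
        for member in self.members:
            for clip in member:
                edit(clip)

# Usage sketch: trim the original and its copy with a single operation.
group = LinkedClipGroup([{"start": 0.0, "end": 10.0}])
group.copy()
group.apply_edit(lambda clip: clip.update(end=8.0))
```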
Animation editing and animation generation processes for S103 and S104:
Here, the editing operations may include a move operation for editing the presentation start time and presentation end time of the animation sub-data, a scaling operation for controlling the presentation duration of the animation sub-data, a cropping operation for cropping the animation content corresponding to the animation sub-data, and the like.
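For example, the move, scale, and crop operations just described might act on a clip's timeline fields as follows; the plain-dict clip representation is an assumption made for illustration:

```python
def move_clip(clip, delta_s):
    """Move: shift the clip's presentation start and end on the timeline."""
    clip["start"] += delta_s
    clip["end"] += delta_s

def scale_clip(clip, factor):
    """Scale: stretch or shrink the clip's presentation duration."""
    clip["end"] = clip["start"] + (clip["end"] - clip["start"]) * factor

def crop_clip(clip, new_start, new_end):
    """Crop: keep only the content between new_start and new_end."""
    clip["start"] = max(clip["start"], new_start)
    clip["end"] = min(clip["end"], new_end)
```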
In addition, to improve animation editing efficiency, editing operations may be initiated from multiple editing terminals, which may all edit the same animation sub-data; the animation editing page being edited can be displayed synchronously on the screens of the editing terminals, allowing several people to edit the same animation sub-data together and thereby improving animation editing efficiency and animation generation efficiency.
In a possible implementation, in response to an editing trigger operation on the animation sub-data in any animation editing dimension, the editing area of the animation sub-data in that animation editing dimension within the animation editing page is scaled, based on the display size of the animation sub-data in that animation editing dimension in the animation editing page.
Here, because the animation editing page may display animation sub-data under many editing tracks, too many tracks make it difficult for an editor to focus on the animation sub-data currently being edited. Therefore, in response to an editing trigger operation on the animation sub-data in any animation editing dimension, the triggered dimension may be taken as the target animation editing dimension; based on the display size of the animation sub-data in the target animation editing dimension in the animation editing page, the editing area of that animation sub-data may be scaled to enlarge its display size in the page, making it easier for the editor to perform finer editing operations on the animation data.
For example, the animation editing page before scaling may be as shown in FIG. 2, and the page after scaling as shown in FIG. 3; in FIG. 3, the editing area of the animation sub-data corresponding to sub-mirror 1 in the sub-mirror editing dimension is enlarged, facilitating editing operations on the corresponding animation sub-data.
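A sketch of the scaling step: enlarge the track area of the dimension being edited based on its current display size, and shrink the rest to fit. The sizing policy (doubling, capped at half the page) is an assumption made for the example, not a rule from the disclosure:

```python
def zoom_editing_area(track_heights, target_dim, page_height):
    """Enlarge the editing area of the dimension being edited, based on
    its current display size, and share the rest of the page among the
    other tracks."""
    enlarged = min(track_heights[target_dim] * 2, page_height * 0.5)
    others = [dim for dim in track_heights if dim != target_dim]
    shared = (page_height - enlarged) / max(len(others), 1)
    return {dim: (enlarged if dim == target_dim else shared)
            for dim in track_heights}

# Example: enlarging the sub-mirror track on an 800 px tall editing page.
print(zoom_editing_area({"motion": 200, "audio": 200,
                         "tracking": 200, "shots": 200},
                        "shots", 800))
```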
In addition, marks can be added in the animation sub-data editing area during editing; for example, a mark can be added on the time axis to record information such as content that still needs to be edited. An editing operation can also be undone after execution: after any editing operation is undone, the animation data displayed in the animation editing page is restored to its state before that operation. Furthermore, to prevent the animation sub-data in any animation editing dimension from being edited by mistake, the editing state of the corresponding animation editing dimension can be set to a locked state upon receiving an edit-lock operation for that dimension, so that the animation sub-data in that dimension can no longer be edited.
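The undo and edit-locking behavior just described might look like the following sketch; the `EditSession` class and its snapshot-based undo are illustrative assumptions rather than the patent's implementation:

```python
import copy

class EditSession:
    """Editing session with snapshot-based undo and per-dimension
    edit locking; all names are illustrative."""

    def __init__(self, state):
        self.state = state   # {dimension: sub-data}
        self.history = []    # snapshots taken before each edit, for undo
        self.locked = set()  # dimensions locked against accidental edits

    def apply(self, dimension, edit):
        if dimension in self.locked:
            return False     # a locked dimension cannot be edited
        self.history.append(copy.deepcopy(self.state))
        edit(self.state[dimension])
        return True

    def undo(self):
        # Restore the page to its state before the most recent edit.
        if self.history:
            self.state = self.history.pop()

    def lock(self, dimension):
        self.locked.add(dimension)
```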
Further, live broadcasting of the animation content can be performed through the following steps C1-C2:
And C1, determining a target animation file to be played in response to a triggering operation on any one of a plurality of animation files displayed on a live broadcast control page during the live broadcast of a virtual scene, wherein the plurality of animation files include the animation file corresponding to the target animation.
And C2, sending the animation data corresponding to the target animation file to a live broadcast server, so that the live broadcast server pushes the animation data corresponding to the target animation file to a live broadcast client and the animation content corresponding to the target animation file is displayed on the live broadcast client.
The live broadcast server may be a server in a content delivery network (CDN); sending the animation data corresponding to the target animation file to a CDN server improves the transmission stability of the live broadcast data.
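Steps C1-C2 might reduce to a push call like the sketch below; the ingest URL and JSON payload layout are assumptions made for illustration, not a protocol specified in the disclosure:

```python
import json
import urllib.request

def push_animation_to_live_server(animation_data, ingest_url):
    """Send the animation data for the selected target animation file to
    the live broadcast (CDN) server, which then distributes it to the
    live clients."""
    payload = json.dumps({"animation": animation_data}).encode("utf-8")
    request = urllib.request.Request(
        ingest_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status == 200
```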
According to the animation generation method provided by the embodiments of the present disclosure, after the animation sub-data in the plurality of animation editing dimensions is obtained, the editing tracks respectively corresponding to the plurality of animation editing dimensions can be displayed on the animation editing interface; the edited animation sub-data in the plurality of animation editing dimensions is determined in response to editing operations performed on the animation sub-data through the editing tracks; target animation data can then be generated based on the edited animation sub-data, and the target animation generated from the target animation data. Compared with editing operations performed on the video layer of recorded animation content, editing operations on the acquired animation sub-data in the plurality of animation editing dimensions act on the animation content at the lower, animation sub-data layer, so the editing effect is better and higher-quality animation works can be obtained.
It will be appreciated by those skilled in the art that, in the methods of the specific embodiments above, the written order of the steps does not imply a strict order of execution; the actual execution order of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, the embodiments of the present disclosure further provide an animation generation device corresponding to the animation generation method. Since the principle by which the device solves the problem is similar to that of the animation generation method in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated description is omitted.
Referring to fig. 4, an architecture diagram of an animation generating device according to an embodiment of the present disclosure is provided, where the device includes an obtaining module 401, a displaying module 402, a determining module 403, and a generating module 404, where:
an obtaining module 401, configured to obtain animation sub-data in multiple animation editing dimensions;
The display module 402 is configured to display, on an animation editing interface, editing tracks corresponding to the multiple animation editing dimensions respectively;
A determining module 403, configured to determine, in response to editing operations performed through the editing tracks on the animation sub-data in the plurality of animation editing dimensions, the edited animation sub-data in the plurality of animation editing dimensions;
And the generating module 404 is used for generating target animation data based on the animation sub-data in the edited multiple animation editing dimensions and generating target animation based on the target animation data.
In one possible implementation, the plurality of animation editing dimensions include at least two of an action control data editing dimension, a sub-mirror editing dimension, and a sub-mirror tracking target editing dimension, wherein:
The action control data editing dimension is used for editing action control data of the virtual character, and the action control data is used for driving the virtual character to execute corresponding actions;
The sub-mirror editing dimension is used for editing each sub-mirror in the animation;
and the sub-mirror tracking target editing dimension is used for editing the tracked target virtual character.
In a possible implementation manner, the obtaining module 401 is configured to, when obtaining animation sub-data in multiple animation editing dimensions:
determining, in response to an animation data recording operation, a recording object and a recording period indicated by the animation data recording operation, wherein the recording object comprises a target virtual character among the virtual characters displayed in a live broadcast picture of a virtual scene;
and acquiring the animation recording data of the target virtual character in the plurality of animation editing dimensions in the recording period, and taking the animation recording data in the plurality of animation editing dimensions as the animation sub-data.
In one possible embodiment, the animation data recording operation includes a plurality of animation data recording operations;
The display module 402 is configured to, when displaying the edit tracks corresponding to the plurality of animation editing dimensions on the animation editing interface:
display the animation sub-data recorded in the multiple animation data recording operations in the editing tracks of the animation editing interface, wherein the animation sub-data recorded in each animation data recording operation comprises animation sub-data under the plurality of editing tracks.
In a possible implementation manner, the obtaining module 401 is further configured to:
generate, in response to a copy operation on the animation sub-data recorded by the animation data recording operation, at least one set of duplicate animation sub-data corresponding to the animation sub-data in the plurality of animation editing dimensions;
The display module 402 is configured to, when displaying the edit tracks corresponding to the plurality of animation editing dimensions on the animation editing interface:
and displaying the animation sub-data recorded by the animation data recording operation and the at least one set of duplicate animation sub-data in an editing track of the animation editing interface, wherein the animation sub-data recorded by the animation data recording operation and the at least one set of duplicate animation sub-data can be edited synchronously.
In a possible implementation manner, the display module 402 is further configured to:
in response to an editing trigger operation on the animation sub-data in any animation editing dimension, scale the editing area of the animation sub-data in that animation editing dimension within the animation editing page, based on the display size of the animation sub-data in that animation editing dimension in the animation editing page.
In a possible implementation manner, the generating module 404 is further configured to:
determine a target animation file to be played in response to a triggering operation on any one of a plurality of animation files displayed on a live broadcast control page during the live broadcast of a virtual scene, wherein the plurality of animation files comprise the animation file corresponding to the target animation;
and send the animation data corresponding to the target animation file to a live broadcast server, so that the live broadcast server pushes the animation data corresponding to the target animation file to a live broadcast client and the animation content corresponding to the target animation file is displayed on the live broadcast client.
According to the animation generation device provided by the embodiments of the present disclosure, after the animation sub-data in the plurality of animation editing dimensions is obtained, the editing tracks respectively corresponding to the plurality of animation editing dimensions can be displayed on the animation editing interface; the edited animation sub-data in the plurality of animation editing dimensions is determined in response to editing operations performed on the animation sub-data through the editing tracks; target animation data can then be generated based on the edited animation sub-data, and the target animation generated from the target animation data. Compared with editing operations performed on the video layer of recorded animation content, editing operations on the acquired animation sub-data in the plurality of animation editing dimensions act on the animation content at the lower, animation sub-data layer, so the editing effect is better and higher-quality animation works can be obtained.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Based on the same technical concept, the embodiments of the present disclosure further provide a computer device. Referring to FIG. 5, a schematic structural diagram of a computer device 500 provided by an embodiment of the present disclosure includes a processor 501, a memory 502, and a bus 503. The memory 502 is configured to store execution instructions and includes an internal memory 5021 and an external memory 5022; the internal memory 5021 temporarily stores operation data for the processor 501 and data exchanged with the external memory 5022, such as a hard disk, and the processor 501 exchanges data with the external memory 5022 through the internal memory 5021. When the computer device 500 runs, the processor 501 and the memory 502 communicate through the bus 503, so that the processor 501 executes the following instructions:
Obtaining animation sub-data under a plurality of animation editing dimensions;
displaying the editing tracks respectively corresponding to the animation editing dimensions on an animation editing interface;
determining the edited animation sub-data in the plurality of animation editing dimensions in response to editing operations performed, through the editing tracks, on the animation sub-data in the plurality of animation editing dimensions;
and generating target animation data based on the animation sub-data in the edited multiple animation editing dimensions, and generating target animation based on the target animation data.
In a possible implementation manner, in the instructions of the processor 501, the plurality of animation editing dimensions include at least two of an action control data editing dimension, a sub-mirror editing dimension, and a sub-mirror tracking target editing dimension, wherein:
The action control data editing dimension is used for editing action control data of the virtual character, and the action control data is used for driving the virtual character to execute corresponding actions;
The sub-mirror editing dimension is used for editing each sub-mirror in the animation;
and the sub-mirror tracking target editing dimension is used for editing the tracked target virtual character.
In a possible implementation manner, in the instruction of the processor 501, the acquiring animation sub-data in multiple animation editing dimensions includes:
determining, in response to an animation data recording operation, a recording object and a recording period indicated by the animation data recording operation, wherein the recording object comprises a target virtual character among the virtual characters displayed in a live broadcast picture of a virtual scene;
and acquiring the animation recording data of the target virtual character in the plurality of animation editing dimensions in the recording period, and taking the animation recording data in the plurality of animation editing dimensions as the animation sub-data.
In a possible implementation manner, in the instruction of the processor 501, the animation data recording operation includes multiple animation data recording operations;
displaying the editing tracks respectively corresponding to the animation editing dimensions on an animation editing interface comprises the following steps:
displaying the animation sub-data recorded in the multiple animation data recording operations in the editing tracks of the animation editing interface, wherein the animation sub-data recorded in each animation data recording operation comprises animation sub-data under the plurality of editing tracks.
In a possible implementation manner, the instructions of the processor 501 further include:
in response to a copy operation on the animation sub-data recorded by the animation data recording operation, generating at least one duplicate set of animation sub-data corresponding to the animation sub-data in the plurality of animation editing dimensions;
the displaying, on the animation editing interface, of the editing tracks respectively corresponding to the animation editing dimensions includes:
displaying, in the editing tracks of the animation editing interface, the animation sub-data recorded by the animation data recording operation together with the at least one duplicate set of animation sub-data, wherein the recorded animation sub-data and the at least one duplicate set can be edited synchronously (sketched below).
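The copy-and-synchronized-edit behaviour might look as follows; the deep-copy linkage is an assumption about one way to keep the original and its duplicates editable together:

```python
import copy

def copy_recording(recording, n_copies=1):
    # Duplicate the recorded sub-data across all dimensions.
    group = [recording] + [copy.deepcopy(recording) for _ in range(n_copies)]

    def edit_synchronously(dimension, edit_fn):
        # Apply the same edit to the original and every duplicate at once.
        for member in group:
            member[dimension] = edit_fn(member[dimension])

    return group, edit_synchronously
```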
In a possible implementation manner, the instructions of the processor 501 further include:
in response to an editing trigger operation on the animation sub-data in any animation editing dimension, scaling the editing area of the animation sub-data of that animation editing dimension in the animation editing page, based on the display size of that animation sub-data in the animation editing page (see the sketch below).
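A sketch of that zooming step, with a hypothetical page/region model (regions, width, height, and apply_scale are assumed attributes):

```python
def on_edit_trigger(page, dimension):
    # Scale the editing area of the triggered dimension so that its sub-data
    # fits the editing page, based on the sub-data's current display size.
    region = page.regions[dimension]
    scale = min(page.width / region.width, page.height / region.height)
    region.apply_scale(scale)
```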
In a possible implementation manner, the instructions of the processor 501 further include:
determining a target animation file to be played in response to a trigger operation on any one of a plurality of animation files displayed on a live broadcast control page during live broadcast of a virtual scene, wherein the plurality of animation files include the animation file corresponding to the target animation;
and sending the animation data corresponding to the target animation file to a live broadcast server, so that the live broadcast server pushes the animation data corresponding to the target animation file to a live broadcast client, and the live broadcast client displays the animation content corresponding to the target animation file (sketched below).
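The push to the live broadcast server could be sketched with the standard library as below; the endpoint path and the JSON payload shape are assumptions, not part of the disclosure:

```python
import json
import urllib.request

def play_animation_file(animation_files, selected_index, live_server_url):
    # The triggered file is the target animation file; its animation data is
    # sent to the live broadcast server, which pushes it to live clients.
    target = animation_files[selected_index]
    payload = json.dumps({"file_id": target["id"],
                          "animation_data": target["data"]}).encode("utf-8")
    request = urllib.request.Request(
        live_server_url + "/push_animation", data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status == 200
```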
The disclosed embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the animation generation method described in the method embodiments above. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure further provide a computer program product carrying program code. The instructions included in the program code may be used to perform the steps of the animation generation method described in the above method embodiments; reference may be made to those embodiments, and details are not repeated here.
The above-mentioned computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, it is embodied as a software product, such as a software development kit (Software Development Kit, SDK).
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the system and apparatus described above may refer to the corresponding procedures in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed between components may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
It should be noted that the foregoing embodiments are merely specific implementations of the disclosure and are not intended to limit its scope. Although the disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications, variations, or substitutions of some of the technical features described therein may still be made without departing from the spirit and scope of the technical solutions of the embodiments of the disclosure, and such changes fall within the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. An animation generation method, comprising:
obtaining animation sub-data under a plurality of animation editing dimensions;
displaying, on an animation editing interface, editing tracks respectively corresponding to the plurality of animation editing dimensions;
determining the edited animation sub-data in the plurality of animation editing dimensions in response to an editing operation performed, through the editing tracks, on the animation sub-data in the plurality of animation editing dimensions;
and generating target animation data based on the edited animation sub-data in the plurality of animation editing dimensions, and generating a target animation based on the target animation data.
2. The method of claim 1, wherein the plurality of animation editing dimensions comprise at least two of an action control data editing dimension, a sub-mirror editing dimension, and a sub-mirror tracking target editing dimension, wherein:
the action control data editing dimension is used for editing action control data of a virtual character, and the action control data is used for driving the virtual character to execute corresponding actions;
the sub-mirror editing dimension is used for editing each sub-mirror in the animation;
and the sub-mirror tracking target editing dimension is used for editing the target virtual character tracked by the sub-mirror.
3. The method of claim 1, wherein the obtaining of animation sub-data in a plurality of animation editing dimensions comprises:
in response to an animation data recording operation, determining a recording object and a recording period indicated by the animation data recording operation, wherein the recording object comprises a target virtual character among the virtual characters displayed in a live broadcast picture of a virtual scene;
and acquiring animation recording data of the target virtual character in the plurality of animation editing dimensions within the recording period, and taking the animation recording data in the plurality of animation editing dimensions as the animation sub-data.
4. The method according to claim 3, wherein the animation data recording operation comprises a plurality of animation data recording operations;
the displaying, on the animation editing interface, of the editing tracks respectively corresponding to the animation editing dimensions comprises:
displaying, in the editing tracks of the animation editing interface, the animation sub-data recorded in the plurality of animation data recording operations, wherein the animation sub-data recorded in each animation data recording operation comprises animation sub-data under the plurality of editing tracks.
5. The method according to claim 3, wherein the method further comprises:
generating, in response to a copy operation on the animation sub-data recorded by the animation data recording operation, at least one duplicate set of animation sub-data corresponding to the animation sub-data in the plurality of animation editing dimensions;
the displaying, on the animation editing interface, of the editing tracks respectively corresponding to the animation editing dimensions comprises:
displaying, in the editing tracks of the animation editing interface, the animation sub-data recorded by the animation data recording operation together with the at least one duplicate set of animation sub-data, wherein the recorded animation sub-data and the at least one duplicate set of animation sub-data can be edited synchronously.
6. The method according to claim 1, wherein the method further comprises:
in response to an editing trigger operation on the animation sub-data in any animation editing dimension, scaling the editing area of the animation sub-data of that animation editing dimension in the animation editing page, based on the display size of that animation sub-data in the animation editing page.
7. The method according to claim 1, wherein the method further comprises:
determining a target animation file to be played in response to a trigger operation on any one of a plurality of animation files displayed on a live broadcast control page during live broadcast of a virtual scene, wherein the plurality of animation files comprise the animation file corresponding to the target animation;
and sending the animation data corresponding to the target animation file to a live broadcast server, so that the live broadcast server pushes the animation data corresponding to the target animation file to a live broadcast client, and the live broadcast client displays the animation content corresponding to the target animation file.
8. An animation generation device, comprising:
an acquisition module, configured to acquire animation sub-data under a plurality of animation editing dimensions;
a display module, configured to display, on an animation editing interface, editing tracks respectively corresponding to the plurality of animation editing dimensions;
a determining module, configured to determine the edited animation sub-data in the plurality of animation editing dimensions in response to an editing operation performed, through the editing tracks, on the animation sub-data in the plurality of animation editing dimensions;
and a generating module, configured to generate target animation data based on the edited animation sub-data in the plurality of animation editing dimensions, and to generate a target animation based on the target animation data.
9. A computer device comprising a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the computer device is in operation, the machine-readable instructions when executed by the processor performing the steps of the animation generation method of any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the animation generation method according to any of claims 1 to 7.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311388610.9A (published as CN119893221A) | 2023-10-24 | 2023-10-24 | Animation generation method and device, computer equipment and storage medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN119893221A (en) | 2025-04-25 |
Family
ID=95424686
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311388610.9A (CN119893221A, pending) | Animation generation method and device, computer equipment and storage medium | 2023-10-24 | 2023-10-24 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN119893221A (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20100281378A1 (en) * | 2009-05-01 | 2010-11-04 | Colleen Pendergast | Media editing application with capability to focus on graphical composite elements in a media compositing area |
| CN110049266A (en) * | 2019-04-10 | 2019-07-23 | 北京字节跳动网络技术有限公司 | Video data issues method, apparatus, electronic equipment and storage medium |
| CN110225224A (en) * | 2019-07-05 | 2019-09-10 | 北京乐元素文化发展有限公司 | Director method, the apparatus and system of virtual image |
| CN115393484A (en) * | 2022-08-30 | 2022-11-25 | 厦门黑镜科技有限公司 | Method and device for generating virtual image animation, electronic equipment and storage medium |
| CN116309969A (en) * | 2023-03-14 | 2023-06-23 | 网易(杭州)网络有限公司 | Method and device for generating scenario animation in game, storage medium and electronic equipment |
| CN116740236A (en) * | 2022-03-01 | 2023-09-12 | 腾讯科技(深圳)有限公司 | Animation multiplexing method and device, storage medium and electronic equipment |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11482192B2 (en) | Automated object selection and placement for augmented reality | |
| CN106210861B (en) | Method and system for displaying bullet screen | |
| US9374552B2 (en) | Streaming game server video recorder | |
| US5768447A (en) | Method for indexing image information using a reference model | |
| CN111083515B (en) | Method, device and system for processing live broadcast content | |
| US20120008003A1 (en) | Apparatus and method for providing augmented reality through generation of a virtual marker | |
| CN106204168A (en) | Commodity barrage display system, unit and method | |
| CN107493228A (en) | A kind of social interaction method and system based on augmented reality | |
| CN114615513B (en) | Video data generation method and device, electronic equipment and storage medium | |
| US12315203B2 (en) | Image data encoding method and apparatus, display method and apparatus, and electronic device | |
| CN104969264A (en) | Method and apparatus for adding annotations to a plenoptic light field | |
| US8610713B1 (en) | Reconstituting 3D scenes for retakes | |
| CN114332417A (en) | Method, device, storage medium and program product for multi-person scene interaction | |
| CN110102057B (en) | Connecting method, device, equipment and medium for cut-scene animations | |
| CN112153472A (en) | Method and device for generating special picture effect, storage medium and electronic equipment | |
| CN114189704B (en) | Video generation method, device, computer equipment and storage medium | |
| CN119893221A (en) | Animation generation method and device, computer equipment and storage medium | |
| JP4321751B2 (en) | Drawing processing apparatus, drawing processing method, drawing processing program, and electronic conference system including the same | |
| CN117641070A (en) | Video editing method, device, computer equipment and storage medium | |
| CN113559503A (en) | Video generation method, apparatus and computer readable medium | |
| CN114630141B (en) | Video processing method and related equipment | |
| CN112804551A (en) | Live broadcast method and device, computer equipment and storage medium | |
| CN111935493B (en) | Anchor photo album processing method and device, storage medium and electronic equipment | |
| CN111314793B (en) | Video processing method, apparatus and computer readable medium | |
| US20230195856A1 (en) | Method for media creation, sharing, and communication and associated system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |