
CN114860358B - Object processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114860358B
CN114860358B (application CN202210344776.XA)
Authority
CN
China
Prior art keywords: scene, target event, configuration data, event, displayed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210344776.XA
Other languages
Chinese (zh)
Other versions
CN114860358A (en)
Inventor
蔡晓华
李伟鹏
杨小刚
胡方正
鞠达豪
杨凯丽
朱彤
孙弘法
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210344776.XA
Publication of CN114860358A
Application granted
Publication of CN114860358B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to an object processing method and device, an electronic device, and a storage medium. The object processing method includes the following steps: acquiring configuration data of an object to be displayed, where the configuration data includes scene configuration data and view configuration data; determining a scene sequence having scene information based on the scene configuration data; rendering the views in each scene of the scene sequence based on the view configuration data to obtain a rendered scene sequence; adding the rendered scene sequence to a view container to obtain the object to be displayed; playing the object to be displayed in response to a playing instruction for the object to be displayed; and, during playback of the object to be displayed, if a target event in the object to be displayed is monitored, executing the behavior corresponding to the target event based on the trigger corresponding to the target event. The application forms a closed loop for object processing, so that different platforms can generate completely consistent objects to be displayed from the same configuration data, unifying interface display and logic and reducing later maintenance cost.

Description

Object processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of the internet, and in particular to an object processing method and device, an electronic device, and a storage medium.
Background
With the development of the mobile internet, users can browse information through applications and websites. Providing information to users through an application or website involves considerable interaction logic: front-end links such as information display, user clicks, and playback viewing, as well as back-end functions such as delivering interaction logic. The application or website therefore needs the logical capability to parse, process, and execute every link and scene the information involves.
In the prior art, developers generally choose different solutions to the above information-processing problem on different platforms. For example, on some platforms the information is built purely on a third-party framework, while on others it requires a combination of the native framework and a third-party framework. As a result, the information is displayed inconsistently across platforms, so more time and effort must be invested later to maintain it on each platform, which increases maintenance cost.
Disclosure of Invention
The disclosure provides an object processing method, an object processing device, electronic equipment and a storage medium, and the technical scheme of the disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided an object processing method, including:
Acquiring configuration data of an object to be displayed; the configuration data includes scene configuration data and view configuration data;
Determining a scene sequence having scene information based on the scene configuration data;
Rendering views in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence;
adding the rendered scene sequence to a view container to obtain an object to be displayed;
Playing the object to be displayed in response to a playing instruction of the object to be displayed;
And in the playing process of the object to be displayed, if the target event in the object to be displayed is monitored, executing the action corresponding to the target event based on the trigger corresponding to the target event.
In some possible embodiments, before the acquiring of the configuration data of the object to be displayed, the method further includes:
Creating a scene controller;
determining a scene sequence with scene information based on scene configuration data, rendering views in each scene in the scene sequence based on view configuration data to obtain a rendered scene sequence, comprising:
Transmitting configuration data of the object to be displayed to a scene controller;
Analyzing the configuration data of the object to be displayed by using the scene controller to obtain scene configuration data and view configuration data;
determining, with a scene controller, a scene sequence having scene information based on the scene configuration data;
And rendering the view in each scene in the scene sequence based on the view configuration data by using the scene controller to obtain a rendered scene sequence.
In some possible embodiments, determining a scene sequence having scene information based on scene configuration data includes:
determining feature information of scenes, display feature information of the scenes and feature information among the scenes based on the scene configuration data;
and determining a scene sequence based on the feature information of the scenes, the display feature information of the scenes and the feature information among the scenes.
In some possible embodiments, rendering views in each scene in the sequence of scenes based on the view configuration data to obtain a rendered sequence of scenes comprises:
Determining image-text parameters, control parameters and animation effect parameters in each scene in the scene sequence based on the view configuration data;
Rendering the corresponding scene based on the image-text parameter, the control parameter and the animation effect parameter in each scene to obtain a rendered scene sequence.
In some possible embodiments, the method further comprises:
determining a target event on a scene in a sequence of scenes;
Configuring a trigger of a target event;
and configuring a behavior operation instruction corresponding to the trigger on the instruction translator based on the trigger identification of the trigger.
In some possible embodiments, in a playing process of an object to be displayed, if a target event in the object to be displayed is monitored, executing a behavior corresponding to the target event based on a trigger corresponding to the target event includes:
During the playing process of the object to be displayed, monitoring the event on the scene in the scene sequence through an event monitor;
if the target event is monitored, sending a trigger corresponding to the target event through an event monitor;
determining a trigger identification of the trigger through the instruction translator, and determining a behavior operation instruction based on the trigger identification;
And executing the behaviors in the received behavior operation instructions through a behavior executor.
In some possible embodiments, if the target event is monitored, sending, by the event monitor, a trigger corresponding to the target event includes:
If the first target event is monitored, an original trigger corresponding to the first target event is sent through an event monitor;
or
If the first target event is monitored and the condition parameter of the first target event meets the preset condition parameter, determining to generate a second target event, and sending a condition trigger corresponding to the second target event through the event monitor;
or
if the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending a time trigger corresponding to the third target event through the event monitor.
In some possible embodiments, if the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending, by the event monitor, a time trigger corresponding to the third target event includes:
If the first target event is monitored, a delay timer is generated;
when the timing time parameter on the delay timer meets the first preset time parameter, determining that a third target event is generated;
And sending a first time trigger corresponding to the third target event through the event monitor.
In some possible embodiments, if the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending, by the event monitor, a time trigger corresponding to the third target event includes:
if the first target event is monitored, generating an interval timer;
generating a third target event and resetting the interval timer whenever the interval time parameter on the interval timer meets a second preset time parameter;
and sending a second time trigger corresponding to the third target event through the event monitor.
According to a second aspect of embodiments of the present disclosure, there is provided an object processing apparatus comprising:
the data acquisition module is configured to acquire configuration data of an object to be displayed; the configuration data includes scene configuration data and view configuration data;
a scene construction module configured to perform determining a scene sequence having scene information based on the scene configuration data;
The view construction module is configured to perform rendering of views in each scene in the scene sequence based on view configuration data to obtain a rendered scene sequence;
The object generation module is configured to execute the addition of the rendered scene sequence to the view container to obtain an object to be displayed;
The playing module is configured to play the object to be displayed in response to a playing instruction for the object to be displayed;
the event execution module is configured to execute a behavior corresponding to a target event based on a trigger corresponding to the target event if the target event in the object to be displayed is monitored in the playing process of the object to be displayed.
In some possible embodiments, the apparatus further includes:
a controller creation module configured to create a scene controller before the configuration data of the object to be displayed is obtained;
a scene building module configured to perform:
Transmitting configuration data of the object to be displayed to a scene controller;
Analyzing the configuration data of the object to be displayed by using the scene controller to obtain scene configuration data and view configuration data;
determining, with a scene controller, a scene sequence having scene information based on the scene configuration data;
And the view construction module is configured to perform rendering of views in each scene in the scene sequence based on view configuration data by using the scene controller to obtain a rendered scene sequence.
In some possible embodiments, the scene construction module is configured to perform:
determining feature information of scenes, display feature information of the scenes and feature information among the scenes based on the scene configuration data;
and determining a scene sequence based on the feature information of the scenes, the display feature information of the scenes and the feature information among the scenes.
In some possible embodiments, the view construction module is configured to perform:
Determining image-text parameters, control parameters and animation effect parameters in each scene in the scene sequence based on the view configuration data;
Rendering the corresponding scene based on the image-text parameter, the control parameter and the animation effect parameter in each scene to obtain a rendered scene sequence.
In some possible embodiments, the apparatus further comprises:
An event determination module configured to perform determining a target event on a scene in a sequence of scenes;
a trigger configuration module configured to execute a trigger that configures a target event;
the instruction configuration module is configured to configure, based on the trigger identification of the trigger, the behavior operation instruction corresponding to the trigger on the instruction translator.
In some possible embodiments, the event execution module includes:
the monitoring module is configured to monitor events on scenes in the scene sequence through the event monitor in the playing process of the object to be displayed;
the sending module is configured to execute sending a trigger corresponding to the target event through the event monitor if the target event is monitored;
An instruction determination module configured to determine the trigger identification of the trigger through the instruction translator and determine a behavior operation instruction based on the trigger identification;
and the execution module is configured to execute the behavior in the received behavior operation instruction through the behavior executor.
In some possible embodiments, the sending module is configured to perform:
If the first target event is monitored, an original trigger corresponding to the first target event is sent through an event monitor;
or
If the first target event is monitored and the condition parameter of the first target event meets the preset condition parameter, determining to generate a second target event, and sending a condition trigger corresponding to the second target event through the event monitor;
or
if the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending a time trigger corresponding to the third target event through the event monitor.
In some possible embodiments, the sending module is configured to perform:
If the first target event is monitored, a delay timer is generated;
when the timing time parameter on the delay timer meets the first preset time parameter, determining that a third target event is generated;
And sending a first time trigger corresponding to the third target event through the event monitor.
In some possible embodiments, the sending module is configured to perform:
if the first target event is monitored, generating an interval timer;
generating a third target event and resetting the interval timer whenever the interval time parameter on the interval timer meets a second preset time parameter;
and sending a second time trigger corresponding to the third target event through the event monitor.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to execute instructions to implement the method as in any of the first aspects above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium; when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the method of any one of the first aspects of embodiments of the present disclosure.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium, the computer program being read from the readable storage medium by at least one processor of the computer device and executed, such that the computer device performs the method of any one of the first aspects of embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
Acquiring configuration data of an object to be displayed, where the configuration data includes scene configuration data and view configuration data; determining a scene sequence having scene information based on the scene configuration data; rendering the views in each scene of the scene sequence based on the view configuration data to obtain a rendered scene sequence; adding the rendered scene sequence to a view container to obtain the object to be displayed; playing the object to be displayed in response to a playing instruction for the object to be displayed; and, during playback of the object to be displayed, if a target event in the object to be displayed is monitored, executing the behavior corresponding to the target event based on the trigger corresponding to the target event. The application forms a closed loop of the object processing scheme (covering both object construction and display), so that different platforms can generate completely consistent objects to be displayed from the same configuration data, unifying interface display and logic and reducing later maintenance cost.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an application environment shown in accordance with an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method of object processing according to an exemplary embodiment;
FIG. 3 is an implementation flow diagram of a rendered sequence of scenes, shown in accordance with an exemplary embodiment;
FIG. 4 is a flowchart illustrating an implementation of playing an object to be presented according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating a first time trigger implementation according to an example embodiment;
FIG. 6 is a flowchart illustrating a second time trigger implementation according to an example embodiment;
FIG. 7 is a block diagram of an object processing apparatus, according to an example embodiment;
FIG. 8 is a block diagram of an electronic device for object processing, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used to distinguish between similar objects and not necessarily to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the disclosure described herein can also operate in sequences other than those illustrated or described here. The implementations described in the following exemplary examples do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for presentation, analyzed data, etc.) related to the present disclosure are information and data authorized by the user or sufficiently authorized by each party.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment of an object processing method according to an exemplary embodiment, and as shown in fig. 1, the application environment may include a terminal 01, and an object browser 011, a scene controller 012, and an instruction translator 013 located in a certain application program of the terminal 01.
In some possible embodiments, the terminal 01 may include, but is not limited to, a smart phone, a desktop computer, a tablet computer, a notebook computer, a smart speaker, a digital assistant, an augmented reality (augmented reality, AR)/Virtual Reality (VR) device, a smart wearable device, and other types of clients. Or may be software running on the client, such as an application, applet, etc. Alternatively, the operating system running on the client may include, but is not limited to, an android system, an IOS system, linux, windows, unix, and the like.
The object browser 011, the scene controller 012, and the instruction translator 013 may belong to the same application function on the terminal 01, and the scene controller 012 and the instruction translator 013 may be created by the object browser 011.
In some possible embodiments, the object browser 011 obtains configuration data of the object to be displayed; the configuration data includes scene configuration data and view configuration data. The object browser determines a scene sequence having scene information based on the scene configuration data, renders the views in each scene of the scene sequence based on the view configuration data to obtain a rendered scene sequence, adds the rendered scene sequence to a view container to obtain the object to be displayed, and plays the object to be displayed in response to a playing instruction for it. During playback of the object to be displayed, if a target event in the object to be displayed is monitored, the behavior corresponding to the target event is executed based on the trigger corresponding to the target event.
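For orientation only, the following Kotlin sketch compresses this flow into a few lines. The names (Config, Scene, ViewContainer, objectBrowserFlow) and the comma-separated configuration format are assumptions made for illustration; the disclosure does not specify an implementation.
```kotlin
data class Config(val sceneConfig: String, val viewConfig: String)
data class Scene(val name: String)

class ViewContainer(val scenes: List<Scene>) {
    fun play() = scenes.forEach { println("playing scene ${it.name}") }
}

fun objectBrowserFlow(config: Config) {
    // determine a scene sequence from the scene configuration data
    val sequence = config.sceneConfig.split(",").map { Scene(it.trim()) }
    // (per-scene view rendering from config.viewConfig is elided here)
    val toDisplay = ViewContainer(sequence)   // the object to be displayed
    toDisplay.play()                          // in response to a play instruction
}

fun main() = objectBrowserFlow(Config("intro,main,end", "view-config"))
```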
In addition, it should be noted that, fig. 1 is only one application environment of the object processing method provided in the present disclosure, and other application environments may also be included in practical applications.
Fig. 2 is a flowchart illustrating an object processing method according to an exemplary embodiment, and as shown in fig. 2, the object processing method may be applied to a server or a client, mainly an application program in the client, and includes the following steps:
in step S201, configuration data of an object to be displayed is obtained; the configuration data includes scene configuration data and view configuration data.
In the embodiment of the application, the object processing method of steps S201 to S211, together with the embodiments extended from it, is developed on the native capabilities of the terminal and applied within the terminal's application programs. Development based on the terminal's native capabilities means that the application is developed on the platform using the development language, class libraries, development tools, and the like provided by that platform.
The benefits of native development are as follows:
1. Based on the stability of native development, the application can use all of the terminal's native capabilities.
2. Because it is built natively, no additional virtual machine or similar layer needs to load and start when the application's functions are used, so functions start as fast as native ones, performance is higher, and an excellent user experience can be provided.
3. On the native framework, the terminal can support a large number of graphics and animations without stuttering and with fast response.
4. Functions inside the application are highly compatible because they are built on the native framework.
5. The terminal calls its own native interfaces relatively quickly, which gives an advantage in processing speed.
In the embodiment of the application, the object in the object processing method can be information displayed in an application program.
In order that devices running different systems, devices running the same system but with different sizes, and different applications on the same device can all obtain a usable, fault-free object to be displayed using their own capabilities, the object to be displayed in the embodiment of the application can be generated by an object browser based on configuration data.
Optionally, the object browser in the application may first obtain configuration data of the object to be displayed from other devices, and generate the object to be displayed based on the configuration data.
In an alternative embodiment, the object browser may include an instruction translator and a behavior executor. In the embodiment of the application, before the object browser in the application acquires the configuration data of the object to be displayed from other devices, the object browser must do some preliminary preparation: initializing the object browser; registering some general processors, routes, and buried points (tracking points) on the instruction translator, that is, registering some general functions on the instruction translator; and creating a view container for the information style in the object browser.
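A minimal Kotlin sketch of this preparation step, assuming a simple handler table as the instruction translator; all identifiers ("route", "track", the class shapes) are invented for illustration.
```kotlin
class InstructionTranslator {
    private val handlers = mutableMapOf<String, (String) -> Unit>()
    fun register(id: String, handler: (String) -> Unit) { handlers[id] = handler }
    fun dispatch(id: String, payload: String) { handlers[id]?.invoke(payload) }
}

class ViewContainer  // will later hold the rendered scene sequence

class ObjectBrowser {
    val translator = InstructionTranslator()
    lateinit var container: ViewContainer

    fun initialize() {
        // register some general-purpose capabilities up front
        translator.register("route") { url -> println("navigate to $url") }
        translator.register("track") { event -> println("report buried point: $event") }
        container = ViewContainer()  // the view container for the information style
    }
}

fun main() {
    val browser = ObjectBrowser()
    browser.initialize()
    browser.translator.dispatch("track", "page_view")
}
```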
In step S203, a scene sequence having scene information is determined based on the scene configuration data.
In step S205, views in each scene in the scene sequence are rendered based on the view configuration data, resulting in a rendered scene sequence.
In an alternative embodiment, the object browser may determine a sequence of scenes with scene information based on the scene configuration data, and render views in each scene in the sequence of scenes based on the view configuration data, resulting in a rendered sequence of scenes.
In an alternative embodiment, to avoid taking on too many tasks itself, the object browser may create a separate module to offload some of the work, lightening the object browser's load and speeding up the overall processing. Optionally, the object browser may create the scene controller at the same time as it creates the view container for the information style.
FIG. 3 is an implementation flow diagram of a rendered sequence of scenes, as shown in FIG. 3, according to an exemplary embodiment, including:
in step S301, configuration data of an object to be displayed is sent to a scene controller.
In the embodiment of the application, the object browser can send the configuration data of the object to be displayed to the scene controller.
In step S302, the scene controller is used to parse the configuration data of the object to be displayed, so as to obtain scene configuration data and view configuration data.
Optionally, after the scene controller receives the configuration data, the configuration data may be parsed to obtain scene configuration data and view configuration data.
In step S303, a scene sequence having scene information is determined based on the scene configuration data using the scene controller.
Alternatively, the scene controller may determine a scene sequence having scene information based on the scene configuration data.
In an alternative embodiment, the scene controller may determine, based on the scene configuration data, the feature information of the scenes, the display feature information of the scenes, and the feature information between scenes; determine each scene from that information; and then compose the scenes into an ordered scene sequence. The feature information of the scenes can include the number of scenes and the display position of each scene on a preset page. The display feature information of the scenes includes the order in which the scenes are displayed, the display timing of the scenes, and the display duration of each scene. The feature information between scenes includes the switching content between scenes.
The display timing of a scene refers to the condition under which the scene is triggered for display: some scenes can be triggered by controls, and some can be triggered by time. The switching content between scenes can be a switching animation or a switching special effect (such as pop-up, fly-in, or fade-out).
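As a concrete illustration of these three kinds of feature information, the following Kotlin sketch models them as data classes and assembles a scene sequence; all type and field names are assumptions, not terms from the disclosure.
```kotlin
// hypothetical data model for the scene configuration data
data class SceneFeature(val sceneCount: Int, val positions: List<String>)                   // count + display position per scene
data class DisplayFeature(val order: List<Int>, val trigger: String, val durationMs: Long)  // order, timing condition, duration
data class TransitionFeature(val from: Int, val to: Int, val effect: String)                // e.g. "pop-up", "fly-in", "fade-out"

data class Scene(val index: Int, val position: String, val durationMs: Long, val trigger: String)

// compose an ordered scene sequence from the feature information
fun buildSceneSequence(f: SceneFeature, d: DisplayFeature): List<Scene> =
    d.order.map { i -> Scene(i, f.positions[i], d.durationMs, d.trigger) }

fun main() {
    val sequence = buildSceneSequence(
        SceneFeature(sceneCount = 2, positions = listOf("top", "bottom")),
        DisplayFeature(order = listOf(0, 1), trigger = "onLoad", durationMs = 3_000)
    )
    println(sequence)
}
```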
In step S304, the view in each scene in the scene sequence is rendered based on the view configuration data by using the scene controller, so as to obtain a rendered scene sequence.
Optionally, the scene controller may render views in each scene in the scene sequence based on the view configuration data, resulting in a rendered scene sequence.
In an alternative embodiment, the scene controller may determine, based on the view configuration data, an image-text parameter, a control parameter, and an animation effect parameter in each scene in the scene sequence, and render the corresponding scene based on the image-text parameter, the control parameter, and the animation effect parameter in each scene, to obtain a rendered scene sequence.
The image-text parameters in each scene include picture parameters, text parameters, and the like. Animation effect parameters may include animation parameters, expression parameters, background parameters, watermark parameters, and so forth.
The picture parameter may be the address of a picture. The text parameter describes the specific content of the text to be displayed. A view may also include rich text, described by text parameters, watermark parameters, background parameters, symbol parameters, segmentation parameters, and the like, and the scene controller may generate the rich text in the view based on the parameters it includes. Further, the control parameters may include button parameters that describe the position of a button in the scene and the button's shape, color, and so on.
Optionally, the view may also include an animation, which may correspond to animation parameters that may be used to describe the transparency of the animation, the location in the scene, whether and how to rotate, the degree of zoom, and so forth.
In this way, the scene controller renders the corresponding scene based on the picture parameter, the text parameter, the watermark parameter, the background parameter, the control parameter and the animation parameter in each scene to obtain the text, the rich text, the picture, the button or the animation in the scene, and further obtain the rendered scene sequence.
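A minimal sketch of this rendering step, assuming the three parameter groups above; the data classes and the string-based "rendering" are illustrative stand-ins for real view construction.
```kotlin
data class ImageTextParams(val pictureUrl: String, val text: String)
data class ControlParams(val buttonLabel: String, val x: Int, val y: Int, val color: String)
data class AnimationParams(val alpha: Float, val rotation: Float, val scale: Float)
data class RenderedView(val summary: String)

fun renderScene(img: ImageTextParams, ctrl: ControlParams, anim: AnimationParams) = RenderedView(
    "picture=${img.pictureUrl} text='${img.text}' " +
    "button='${ctrl.buttonLabel}'@(${ctrl.x},${ctrl.y},${ctrl.color}) " +
    "anim(alpha=${anim.alpha}, rotate=${anim.rotation}, scale=${anim.scale})"
)

fun main() {
    val view = renderScene(
        ImageTextParams("https://example.com/pic.png", "hello"),
        ControlParams("OK", x = 10, y = 20, color = "#FF0000"),
        AnimationParams(alpha = 1.0f, rotation = 0f, scale = 1.2f)
    )
    println(view.summary)  // a real implementation would produce native views, not a string
}
```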
In step S207, the rendered scene sequence is added to the view container, resulting in an object to be displayed.
In an alternative embodiment, before adding the rendered scene sequence to the view container to obtain the object to be displayed, the scene controller may register the events in the scenes on the instruction translator. In this way, when an event in a scene occurs, the processor of that event can be found in the instruction translator from the pre-registered event, and the behavior operation instruction corresponding to the event can be determined, so that the behavior executor can later execute the behavior in that behavior operation instruction.
In an alternative embodiment, the object browser may determine a target event on a scene in the sequence of scenes via the scene controller, configuring a trigger for the target event. And configuring a behavior operation instruction corresponding to the trigger on the instruction translator based on the trigger identification of the trigger.
Specifically, for a target event on a scene in a scene sequence, the object browser can determine the target event through the scene controller, package the target event, configure a trigger of the target event, and configure a behavior operation instruction corresponding to the trigger on the instruction translator based on the trigger identifier. Therefore, when the target event is monitored, the trigger identification corresponding to the target event can be acquired, the corresponding behavior operation instruction is determined based on the trigger identification, and the behavior corresponding to the behavior operation instruction is executed.
The following describes trigger configuration by example. Suppose a certain behavior should be executed after the object to be displayed finishes playing: the scene controller determines that completion of playback of the object to be displayed is a target event, and configures a trigger for the moment playback completes. Or suppose a back (return) behavior should be executed when a button on a scene is clicked: the scene controller determines that the click on the button is a target event, and configures a trigger for the moment the button is clicked.
Of course, the above two examples are only possible embodiments in each scene of the object to be displayed, and do not limit other embodiments of the present application.
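The two example configurations above can be sketched as follows; the trigger identifiers, action names, and target addresses are invented for illustration.
```kotlin
data class BehaviorInstruction(val action: String, val target: String)

class InstructionTranslator {
    private val table = mutableMapOf<String, BehaviorInstruction>()
    fun configure(triggerId: String, instr: BehaviorInstruction) { table[triggerId] = instr }
    fun resolve(triggerId: String): BehaviorInstruction? = table[triggerId]
}

fun main() {
    val translator = InstructionTranslator()
    // target event: playback of the object to be displayed completes -> pop up the end page
    translator.configure("t-play-complete", BehaviorInstruction("popUpPage", "end-page-url"))
    // target event: a button on the scene is clicked -> execute the back behavior
    translator.configure("t-button-click", BehaviorInstruction("navigateBack", "current-scene"))
    println(translator.resolve("t-play-complete"))
}
```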
In step S209, the object to be displayed is played in response to the play instruction of the object to be displayed.
Thus, the application completes the construction of the object to be displayed, and when receiving the playing instruction of the object to be displayed, the object to be displayed can be played.
In step S211, if the target event in the object to be displayed is monitored during the playing process of the object to be displayed, the action corresponding to the target event is executed based on the trigger corresponding to the target event.
Fig. 4 is a flowchart illustrating an implementation of playing an object to be presented, as shown in fig. 4, including:
in step S401, during the playing process of the object to be displayed, monitoring, by an event monitor, an event on a scene in the scene sequence;
In step S402, if the event monitor monitors the target event, the process goes to step S403; otherwise, go to step S401;
in step S403, the event listener sends a trigger corresponding to the target event;
In step S404, the instruction translator determines the trigger identification of the trigger;
In step S405, the instruction translator determines a behavior operation instruction based on the trigger identification;
In step S406, the instruction translator sends the behavior operation instruction to the behavior executor;
In step S407, the behavior executor executes the behavior in the behavior operation instruction.
Specifically, the terminal may monitor the events on the scenes in the scene sequence through the event monitor. If the event monitor monitors a target event, it sends the trigger corresponding to that target event to the instruction translator; if it does not, it continues listening for other events on the scenes in the scene sequence. When the instruction translator receives the trigger sent by the event monitor, it can determine the trigger identification of the trigger, determine a behavior operation instruction based on the trigger identification, and send the behavior operation instruction to the behavior executor. When the behavior executor receives the behavior operation instruction, the terminal can parse the instruction through the behavior executor to obtain the execution behavior and the execution object, and apply the execution behavior to the execution object.
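The listener-translator-executor chain just described might look like the following sketch; the class shapes and the trigger table are assumptions, not the disclosed implementation.
```kotlin
data class Trigger(val id: String)
data class BehaviorInstruction(val action: String, val target: String)

class BehaviorExecutor {
    fun execute(instr: BehaviorInstruction) = println("executing '${instr.action}' on '${instr.target}'")
}

class InstructionTranslator(private val executor: BehaviorExecutor) {
    private val table = mutableMapOf<String, BehaviorInstruction>()
    fun configure(triggerId: String, instr: BehaviorInstruction) { table[triggerId] = instr }
    // resolve the trigger identification to an instruction and forward it
    fun onTrigger(trigger: Trigger) { table[trigger.id]?.let(executor::execute) }
}

class EventListener(private val translator: InstructionTranslator,
                    private val triggers: Map<String, Trigger>) {
    // only target events have a configured trigger; anything else is ignored
    fun onEvent(event: String) { triggers[event]?.let(translator::onTrigger) }
}

fun main() {
    val translator = InstructionTranslator(BehaviorExecutor())
    translator.configure("t-1", BehaviorInstruction("popUpPage", "end-page-url"))
    val listener = EventListener(translator, mapOf("playComplete" to Trigger("t-1")))
    listener.onEvent("scroll")        // not a target event: nothing happens
    listener.onEvent("playComplete")  // target event: the behavior runs
}
```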
In the embodiment of the present application, the types of triggers are numerous, and different triggers are described herein in connection with examples.
In an alternative embodiment, the trigger is an original trigger. When the terminal monitors a first target event through the event monitor, an original trigger can be generated, and the original trigger corresponding to the first target event is sent through the event monitor, wherein the original trigger can carry a trigger identifier.
For example, assume that completion of playback of the object to be presented is the first target event. When the terminal monitors, through the event monitor, that playback of the object to be displayed on the scene has completed, it can generate the original trigger corresponding to this first target event and send it to the instruction translator. When the instruction translator receives the original trigger, it can parse out the trigger identifier and determine the behavior operation instruction based on it; the behavior operation instruction may include behavior description information. The instruction translator may then send the behavior operation instruction to the behavior executor. Correspondingly, the behavior executor can parse the behavior operation instruction to obtain the execution behavior and the execution object in the behavior description information. Assuming the execution behavior is "pop up a page", the execution object may be the link address of the "end page". In this way, the behavior executor pops up the end page on the scene.
In another alternative embodiment, the flip-flop is a conditional flip-flop. When the terminal monitors a first target event through the event monitor, and the condition parameters of the first target event meet the preset condition parameters, determining to generate a second target event, generating a condition trigger corresponding to the second target event, and sending the condition trigger corresponding to the second target event through the event monitor. Wherein the conditional triggers may carry trigger identifications.
For example, assume that completion of playback of the object to be presented is the first target event. When the terminal monitors, through the event monitor, that playback of the object to be displayed on the scene has completed, it can determine whether the play count of the object to be displayed meets the preset condition parameter. If so (for example, the current play count equals 1), the terminal determines that the second target event has occurred (that is, playback of the object to be displayed on the scene completed, and the number of completions is 1). It then generates the condition trigger corresponding to the second target event and sends it to the instruction translator. When the instruction translator receives the condition trigger, it can parse out the trigger identifier and determine the behavior operation instruction based on it; the behavior operation instruction may include behavior description information. The instruction translator may then send the behavior operation instruction to the behavior executor. Correspondingly, the behavior executor can parse the behavior operation instruction to obtain the execution behavior and the execution object in the behavior description information. Assuming the execution behavior is "report", the execution object may be a "buried point" (a tracking event). Thus, the behavior executor reports the buried point.
In the embodiment of the present application, when this condition trigger is configured, its execution condition may be set to a play count equal to 1.
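A minimal sketch of such a condition trigger, assuming the "play count equals 1" condition from the example; identifiers are invented for illustration.
```kotlin
class ConditionalTrigger(val id: String, private val condition: (Int) -> Boolean) {
    fun check(playCount: Int): Boolean = condition(playCount)
}

fun main() {
    val firstPlayOnly = ConditionalTrigger("t-first-play") { count -> count == 1 }
    var playCount = 0
    repeat(3) {
        playCount++                              // a playback of the object completes
        if (firstPlayOnly.check(playCount)) {
            println("send condition trigger: report buried point")  // fires only on the first completion
        }
    }
}
```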
In another alternative embodiment, the trigger is a time trigger. When the terminal monitors a first target event through the event monitor, and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, generating a time trigger corresponding to the third target event, and sending the time trigger corresponding to the third target event through the event monitor. Wherein the time trigger may carry a trigger identification.
In an alternative embodiment of the time trigger, the time trigger is a first time trigger. FIG. 5 is a flowchart of a first time trigger implementation, as shown in FIG. 5, including:
in step S501, if the first target event is monitored, a delay timer is generated.
In step S502, when the timing time parameter on the delay timer satisfies the first preset time parameter, it is determined that the third target event is generated.
In step S503, the first time trigger corresponding to the third target event is transmitted through the event listener.
In the embodiment of the application, when the terminal monitors the first target event through the event monitor, a delay timer can be generated, the timing time parameter on the delay timer meets the first preset time parameter, the third target event is determined to be generated, the first time trigger corresponding to the third target event is generated, and the first time trigger corresponding to the third target event is sent through the event monitor. Wherein the first time trigger may carry a trigger identification.
For example, assume that the playing of the object to be presented in the scene is the first target event. When the terminal monitors, through the event monitor, that the object to be displayed is playing in the scene, it can generate a delay timer and start timing. It determines whether the timing time parameter on the delay timer meets the first preset time parameter; if so (for example, the timing time parameter reaches 3 seconds), it determines that the third target event has occurred (that is, the object to be displayed is playing in the scene and the playing time has reached 3 seconds). The terminal then generates the first time trigger corresponding to the third target event and sends it to the instruction translator. When the instruction translator receives the first time trigger, it can parse out the trigger identifier and determine the behavior operation instruction based on it; the behavior operation instruction may include behavior description information. The instruction translator may then send the behavior operation instruction to the behavior executor. Correspondingly, the behavior executor can parse the behavior operation instruction to obtain the execution behavior and the execution object in the behavior description information. Assuming the execution behavior is "play a certain animation scene", the execution object may be the link address of that animation scene. Thus, the behavior executor plays the animation scene.
In the embodiment of the present application, when the first time trigger is configured, the execution time of the first time trigger may be set to 3 seconds.
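A minimal sketch of the first time trigger, using a JVM Timer as the delay timer with the 3-second parameter from the example; the function names are assumptions.
```kotlin
import java.util.Timer
import kotlin.concurrent.schedule

// the delay timer starts when the first target event is observed; once its
// timing parameter reaches 3 s, the third target event is considered generated
fun onFirstTargetEvent(onThirdTargetEvent: () -> Unit) {
    Timer("delay-trigger", true).schedule(3_000L) { onThirdTargetEvent() }
}

fun main() {
    onFirstTargetEvent { println("send first time trigger: play the animation scene") }
    Thread.sleep(3_500)  // keep the JVM alive long enough to observe the firing
}
```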
In an alternative embodiment of the time trigger, the time trigger is a second time trigger. FIG. 6 is a flowchart of a second time trigger implementation, as shown in FIG. 6, including:
In step S601, if the first target event is monitored, an interval timer is generated.
In step S602, a third target event is generated and the interval timer is reset each time the interval time parameter on the interval timer satisfies the second preset time parameter.
In step S603, a second time trigger corresponding to the third target event is transmitted through the event listener.
In the embodiment of the application, when the terminal monitors the first target event through the event monitor, an interval timer can be generated; whenever the interval time parameter on the interval timer meets the second preset time parameter, a third target event is generated and the interval timer is reset. A second time trigger corresponding to the third target event is then generated and sent through the event monitor. The second time trigger may carry a trigger identifier.
For example, assume that the playing of the object to be presented in the scene is the first target event. When the terminal monitors, through the event monitor, that the object to be displayed is playing in the scene, it can generate an interval timer and start timing. Whenever the interval time parameter on the interval timer meets the second preset time parameter (5 seconds), the terminal determines that a third target event has occurred (that is, the object to be displayed is playing in the scene and another 5 seconds of playing time has elapsed), and the interval timer is reset. The terminal then generates the second time trigger corresponding to the third target event and sends it to the instruction translator. When the instruction translator receives the second time trigger, it can parse out the trigger identifier and determine the behavior operation instruction based on it; the behavior operation instruction may include behavior description information. The instruction translator may then send the behavior operation instruction to the behavior executor. Correspondingly, the behavior executor can parse the behavior operation instruction to obtain the execution behavior and the execution object in the behavior description information. Assuming the execution behavior is "play a certain special effect", the execution object may be the link address of that special effect. Thus, the behavior executor plays the special effect (such as a fireworks animation), and as long as the object to be displayed is playing on the scene, the fireworks animation plays every 5 seconds.
In the embodiment of the present application, when the second time trigger is configured, the execution time interval of the second time trigger may be set to 5 seconds, and the execution times may be set to infinity.
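A minimal sketch of the second time trigger, using a fixed-rate JVM Timer as the interval timer with the 5-second parameter from the example; the fixed-rate schedule plays the role of resetting the interval after each firing.
```kotlin
import java.util.Timer
import kotlin.concurrent.scheduleAtFixedRate

// the interval timer starts on the first target event; every 5 s it
// generates a third target event and sends the second time trigger
fun onFirstTargetEvent(onThirdTargetEvent: () -> Unit): Timer {
    val timer = Timer("interval-trigger", true)
    timer.scheduleAtFixedRate(5_000L, 5_000L) { onThirdTargetEvent() }
    return timer
}

fun main() {
    val timer = onFirstTargetEvent { println("send second time trigger: play the fireworks effect") }
    Thread.sleep(16_000)  // observe roughly three firings
    timer.cancel()        // e.g. playback stopped: destroy the trigger
}
```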
Optionally, in a specific embodiment, the behavior executor may be the scene controller; for example, when the end page is popped up, the behavior executor is the scene controller, which controls the pop-up of the end page. Alternatively, the behavior executor may be another processor.
In the embodiment of the application, triggers for events are configured on each interface of the application program through a central control function. One event listener may listen for the occurrence of several different events at the same time, or different event listeners may each listen for different events at the same time.
It has been mentioned above that, when executing the behavior in a behavior operation instruction, the terminal may parse the behavior operation instruction through the behavior executor, determine the execution behavior and the execution object, and apply the execution behavior to the execution object. The application thus completes, entirely on native development, the front-end links such as information display, user clicks, playback viewing, and information dismissal, as well as the simpler back-end functions such as statistics reporting, user behavior response, and interaction logic delivery.
Optionally, when the terminal parses a behavior operation instruction, a sub-trigger identifier may be obtained from the parse. In that case, the parsing of the behavior operation instruction can itself serve as another target event: when it is monitored that the behavior operation instruction has been parsed, the sub-trigger corresponding to that event is sent to the instruction translator. When the instruction translator receives the sub-trigger, it can parse out the sub-trigger identifier and determine a sub-behavior operation instruction based on it; the sub-behavior operation instruction may include behavior description information. The instruction translator may then send the sub-behavior operation instruction to the behavior executor. Correspondingly, the behavior executor can parse the sub-behavior operation instruction to obtain the execution behavior and the execution object in the behavior description information. In this way, the behavior executor executes the execution object in the sub-behavior operation instruction based on the execution behavior.
In the above embodiment, only one level of trigger is nested; in practical applications, several triggers can be nested in the same way as described in the previous paragraph, which is not repeated here. Thus, the embodiment of the application can complete the interaction of complex scenes through combinations of triggers, as sketched below, and has strong applicability.
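Trigger nesting of this kind can be sketched as a recursive lookup; the instruction table and identifiers below are illustrative assumptions.
```kotlin
// executing one instruction yields a sub-trigger id, which resolves
// to a sub-instruction, and so on to any depth
data class Instruction(val action: String, val subTriggerId: String? = null)

class InstructionTranslator {
    private val table = mutableMapOf<String, Instruction>()
    fun configure(id: String, instr: Instruction) { table[id] = instr }
    fun onTrigger(id: String) {
        val instr = table[id] ?: return
        println("executing: ${instr.action}")
        instr.subTriggerId?.let(::onTrigger)  // parsing yielded a sub-trigger: recurse
    }
}

fun main() {
    val translator = InstructionTranslator()
    translator.configure("t-outer", Instruction("pop up the end page", subTriggerId = "t-inner"))
    translator.configure("t-inner", Instruction("report buried point"))
    translator.onTrigger("t-outer")  // the outer behavior runs, then the nested one
}
```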
Alternatively, the first target event may be a single event, such as completion of playing of the object to be displayed.
Alternatively, the first target event may be a set of events, such as "playing of the object to be presented is completed AND a click on a preset button is detected". In this case, the terminal determines that the first target event is monitored only when both the first sub-target event and the second sub-target event are monitored.
Optionally, the first target event may be any event of a plurality of events, for example, playing of the object to be displayed is completed or the preset button is detected to be clicked, so that when determining the first target event, the terminal monitors the first sub-target event or the second sub-target event, and can determine that the first target event is monitored.
Therefore, different events that lead to the same behavior can be associated with the same behavior operation instruction, which saves software resources.
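The three forms of first target event just described (a single event, all of a set, or any one of a set) can be sketched with a small composite-event helper; the names and structure are assumptions.
```kotlin
class CompositeEvent(private val subEvents: Set<String>, private val requireAll: Boolean) {
    private val seen = mutableSetOf<String>()
    // returns true once the composite event counts as monitored
    fun onSubEvent(name: String): Boolean {
        if (name in subEvents) seen.add(name)
        return if (requireAll) seen == subEvents else seen.isNotEmpty()
    }
}

fun main() {
    val both = CompositeEvent(setOf("playComplete", "buttonClicked"), requireAll = true)
    println(both.onSubEvent("playComplete"))   // false: still waiting for the click
    println(both.onSubEvent("buttonClicked"))  // true: both sub-events observed

    val either = CompositeEvent(setOf("playComplete", "buttonClicked"), requireAll = false)
    println(either.onSubEvent("buttonClicked"))  // true: any one sub-event suffices
}
```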
Optionally, if an instruction for stopping playing the object to be displayed is received during the playing process of the object to be displayed, all the subsequent triggers can be destroyed.
In summary, a closed loop of the object processing scheme is formed through the object browser, so that devices running different systems, devices running the same system but with different sizes, or different applications on the same device can generate completely consistent objects to be displayed based on the configuration data, unifying interface display and logic and reducing later maintenance cost. In addition, the object browser is based entirely on native development: it is highly stable, does not need to rely on a third-party framework, carries no audit risk, and is simple and convenient.
FIG. 7 is a block diagram of an object processing apparatus according to an example embodiment. Referring to fig. 7, the object processing apparatus is built on the native development of a terminal and applied to an application program of the terminal, and includes a data acquisition module 701, a scene construction module 702, a view construction module 703, an object generation module 704, a playing module 705, and an event execution module 706:
A data acquisition module 701 configured to perform acquisition of configuration data of an object to be displayed; the configuration data includes scene configuration data and view configuration data;
A scene construction module 702 configured to perform determining a scene sequence having scene information based on the scene configuration data;
A view construction module 703 configured to perform rendering of the view in each scene in the scene sequence based on the view configuration data, resulting in a rendered scene sequence;
an object generation module 704 configured to perform adding the rendered scene sequence to the view container, resulting in an object to be presented;
a playing module 705 configured to play the object to be displayed in response to a playing instruction for the object to be displayed;
The event execution module 706 is configured to execute, during the playing process of the object to be displayed, if the target event in the object to be displayed is monitored, the action corresponding to the target event based on the trigger corresponding to the target event.
In some possible embodiments, the apparatus further includes:
a controller creation module configured to create a scene controller before the configuration data of the object to be displayed is obtained;
a scene building module configured to perform:
Transmitting configuration data of the object to be displayed to a scene controller;
Analyzing the configuration data of the object to be displayed by using the scene controller to obtain scene configuration data and view configuration data;
determining, with a scene controller, a scene sequence having scene information based on the scene configuration data;
And the view construction module is configured to perform rendering of views in each scene in the scene sequence based on view configuration data by using the scene controller to obtain a rendered scene sequence.
In some possible embodiments, the scene construction module is configured to perform:
determining feature information of scenes, display feature information of the scenes and feature information among the scenes based on the scene configuration data;
and determining a scene sequence based on the feature information of the scenes, the display feature information of the scenes and the feature information among the scenes.
In some possible embodiments, the view construction module is configured to perform:
Determining image-text parameters, control parameters and animation effect parameters in each scene in the scene sequence based on the view configuration data;
Rendering the corresponding scene based on the image-text parameter, the control parameter and the animation effect parameter in each scene to obtain a rendered scene sequence.
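The following sketch shows one hypothetical shape for these three groups of view parameters; all field names are assumptions, and real code would construct native views rather than a string.

```kotlin
// Hypothetical view-configuration fields; names and structure are assumptions.
data class ViewParams(
    val graphicText: List<String>, // image-text parameters (labels, images)
    val controls: List<String>,    // control parameters (buttons, sliders)
    val animation: String          // animation effect parameter
)

// Render one scene from its view parameters.
fun renderScene(sceneId: String, params: ViewParams): String =
    "scene=$sceneId content=${params.graphicText + params.controls} anim=${params.animation}"

fun main() {
    val params = ViewParams(listOf("title", "cover.png"), listOf("playButton"), "fadeIn")
    println(renderScene("opening", params))
}
```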
In some possible embodiments, the apparatus further comprises:
An event determination module configured to perform determining a target event on a scene in a sequence of scenes;
a trigger configuration module configured to configure a trigger of the target event;
and an instruction configuration module configured to configure, on the instruction relay, a behavior operation instruction corresponding to the trigger based on the trigger identification of the trigger.
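A configuration-time sketch of this binding follows, under the assumption that the instruction relay is a simple lookup table keyed by trigger identification; all names are illustrative.

```kotlin
// Configuration-time sketch: bind a trigger to a target event, then register the
// behavior operation instruction under the trigger identification on the relay.
data class TriggerConfig(val triggerId: String, val targetEvent: String)

class InstructionRelay {
    private val instructions = mutableMapOf<String, String>()
    fun configure(triggerId: String, instruction: String) {
        instructions[triggerId] = instruction
    }
    fun instructionFor(triggerId: String): String? = instructions[triggerId]
}

fun main() {
    val trigger = TriggerConfig(triggerId = "t-001", targetEvent = "scene2.buttonTap")
    val relay = InstructionRelay()
    relay.configure(trigger.triggerId, "openLandingPage")
    println(relay.instructionFor(trigger.triggerId))  // -> openLandingPage
}
```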
In some possible embodiments, the event execution module includes:
a monitoring module configured to monitor, during the playing of the object to be displayed, events on the scenes in the scene sequence through an event monitor;
a sending module configured to send, through the event monitor, a trigger corresponding to the target event if the target event is monitored;
an instruction determination module configured to determine a trigger identification of the trigger through the instruction relay, and to determine a behavior operation instruction based on the trigger identification;
and an execution module configured to execute, through the behavior executor, the behavior in the received behavior operation instruction.
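The runtime chain can be sketched as follows; the class names (EventMonitor, DispatchRelay, BehaviorExecutor) are illustrative stand-ins for the event monitor, instruction relay, and behavior executor described above, not identifiers from the disclosure.

```kotlin
// Runtime sketch of the chain: monitor -> trigger -> relay -> executor.
data class Trigger(val id: String)

class DispatchRelay(private val table: Map<String, () -> Unit>) {
    // Resolve the behavior operation instruction from the trigger identification.
    fun lookup(trigger: Trigger): (() -> Unit)? = table[trigger.id]
}

class BehaviorExecutor {
    fun run(action: () -> Unit) = action()
}

class EventMonitor(private val send: (Trigger) -> Unit) {
    // When a target event is observed, send the corresponding trigger.
    fun onEvent(name: String) = send(Trigger(id = name))
}

fun main() {
    val relay = DispatchRelay(mapOf("tap-button" to { println("open detail page") }))
    val executor = BehaviorExecutor()
    val monitor = EventMonitor { trigger ->
        relay.lookup(trigger)?.let(executor::run)
    }
    monitor.onEvent("tap-button")  // simulated monitored target event
}
```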
In some possible embodiments, the sending module is configured to perform:
If the first target event is monitored, an original trigger corresponding to the first target event is sent through an event monitor;
or,
If the first target event is monitored and the condition parameter of the first target event meets the preset condition parameter, determining to generate a second target event, and sending a condition trigger corresponding to the second target event through the event monitor;
or,
if the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending a time trigger corresponding to the third target event through the event monitor.
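A compact way to picture the three resulting trigger kinds is sketched below; the event names and boolean parameters are assumptions for illustration.

```kotlin
// Illustrative classification of the three trigger kinds described above.
sealed interface TriggerKind
data class OriginalTrigger(val event: String) : TriggerKind
data class ConditionTrigger(val event: String) : TriggerKind
data class TimeTrigger(val event: String) : TriggerKind

fun classify(firstEvent: String, conditionMet: Boolean, timeMet: Boolean): TriggerKind =
    when {
        timeMet -> TimeTrigger("third-target-event")             // time parameter satisfied
        conditionMet -> ConditionTrigger("second-target-event")  // condition parameter satisfied
        else -> OriginalTrigger(firstEvent)                      // plain first target event
    }

fun main() {
    println(classify("scene1.tap", conditionMet = true, timeMet = false))
}
```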
In some possible embodiments, the sending module is configured to perform:
if the first target event is monitored, generating a delay timer;
determining that the third target event is generated when the timing time parameter on the delay timer meets the first preset time parameter;
and sending a first time trigger corresponding to the third target event through the event monitor.
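A delay-timer sketch using java.util.Timer is shown below; the 3-second delay standing in for the first preset time parameter is an arbitrary illustrative value.

```kotlin
import java.util.Timer
import kotlin.concurrent.schedule

// Delay-timer sketch: after the first target event, wait a fixed delay, then emit
// the first time trigger once.
fun onFirstTargetEvent(sendTimeTrigger: () -> Unit): Timer =
    Timer().apply {
        schedule(3_000) {        // first preset time parameter (assumed 3 s)
            sendTimeTrigger()    // third target event -> first time trigger
        }
    }

fun main() {
    val timer = onFirstTargetEvent { println("first time trigger sent") }
    Thread.sleep(3_500)          // keep the demo alive until the one-shot fires
    timer.cancel()               // corresponds to destroying triggers when playback stops
}
```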
In some possible embodiments, the sending module is configured to perform:
if the first target event is monitored, generating an interval timer;
generating a third target event and resetting the interval timer whenever the interval time parameter on the interval timer meets a second preset time parameter;
and sending a second time trigger corresponding to the third target event through the event monitor.
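An interval-timer sketch, again using java.util.Timer; the 1-second period standing in for the second preset time parameter is an arbitrary illustrative value.

```kotlin
import java.util.Timer
import kotlin.concurrent.scheduleAtFixedRate

// Interval-timer sketch: after the first target event, emit the second time trigger
// every fixed period until playback stops.
fun onFirstTargetEvent(sendTimeTrigger: () -> Unit): Timer =
    Timer().apply {
        scheduleAtFixedRate(1_000, 1_000) {  // second preset time parameter (assumed 1 s)
            sendTimeTrigger()                // third target event, re-generated each interval
        }
    }

fun main() {
    val timer = onFirstTargetEvent { println("second time trigger sent") }
    Thread.sleep(3_500)                      // lets the trigger fire about three times
    timer.cancel()                           // destroy the timer when playback stops
}
```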
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method, and will not be repeated here.
Fig. 8 is a block diagram illustrating an apparatus 2000 for object processing according to an example embodiment. For example, apparatus 2000 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and so forth.
Referring to fig. 8, apparatus 2000 may include one or more of the following components: a processing component 2002, a memory 2004, a power component 2006, a multimedia component 2008, an audio component 2010, an input/output (I/O) interface 2012, a sensor component 2014, and a communication component 2016.
The processing component 2002 generally controls overall operation of the device 2000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 2002 may include one or more processors 2020 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 2002 may include one or more modules that facilitate interactions between the processing component 2002 and other components. For example, the processing component 2002 can include a multimedia module to facilitate interaction between the multimedia component 2008 and the processing component 2002.
The memory 2004 is configured to store various types of data to support operation at the device 2000. Examples of such data include instructions for any application or method operating on the device 2000, contact data, phonebook data, messages, pictures, videos, and the like. The memory 2004 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
The power component 2006 provides power to the various components of the device 2000. The power component 2006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 2000.
The multimedia component 2008 includes a screen that provides an output interface between the device 2000 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 2008 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 2000 is in an operational mode, such as a photographing mode or a video mode. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 2010 is configured to output and/or input audio signals. For example, the audio component 2010 includes a Microphone (MIC) configured to receive external audio signals when the device 2000 is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signals may be further stored in the memory 2004 or transmitted via the communication component 2016. In some embodiments, the audio component 2010 further includes a speaker for outputting audio signals.
The I/O interface 2012 provides an interface between the processing component 2002 and peripheral interface modules, such as a keyboard, a click wheel, or buttons. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 2014 includes one or more sensors for providing status assessment of various aspects of the apparatus 2000. For example, the sensor assembly 2014 may detect an on/off state of the apparatus 2000, a relative positioning of the components, such as a display and keypad of the device 2000, a change in position of the device 2000 or a component of the device 2000, the presence or absence of a user contact with the device 2000, an orientation or acceleration/deceleration of the device 2000, and a change in temperature of the device 2000. The sensor assembly 2014 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 2014 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 2014 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 2016 is configured to facilitate communication between the apparatus 2000 and other devices, either wired or wireless. The device 2000 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 2016 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 2016 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 2000 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a storage medium is also provided, such as the memory 2004 including instructions executable by the processor 2020 of the apparatus 2000 to perform the above-described method. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.

Claims (21)

1. An object processing method, comprising:
Acquiring configuration data of an object to be displayed; the configuration data comprises scene configuration data and view configuration data; the scene configuration data comprises at least one of feature information of scenes, display feature information of the scenes and feature information among the scenes;
Determining a scene sequence having scene information based on the scene configuration data;
rendering views in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence;
Adding the rendered scene sequence to a view container to obtain the object to be displayed;
Responding to a playing instruction of the object to be displayed, and playing the object to be displayed;
and in the playing process of the object to be displayed, if a target event in the object to be displayed is monitored, executing a behavior corresponding to the target event based on a trigger corresponding to the target event.
2. The object processing method according to claim 1, further comprising, before the acquiring of the configuration data of the object to be displayed:
Creating a scene controller;
The determining a scene sequence with scene information based on the scene configuration data, rendering views in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence, including:
transmitting the configuration data of the object to be displayed to the scene controller;
analyzing the configuration data of the object to be displayed by using the scene controller to obtain the scene configuration data and the view configuration data;
determining, with the scene controller, the scene sequence having scene information based on the scene configuration data;
And rendering views in each scene in the scene sequence based on the view configuration data by using the scene controller to obtain the rendered scene sequence.
3. The object processing method according to claim 2, wherein the determining the scene sequence having scene information based on the scene configuration data includes:
determining feature information of scenes, display feature information of the scenes and feature information among the scenes based on the scene configuration data;
And determining the scene sequence based on the feature information of the scenes, the display feature information of the scenes and the feature information among the scenes.
4. The method according to claim 2, wherein rendering the view in each scene in the scene sequence based on the view configuration data to obtain the rendered scene sequence comprises:
determining image-text parameters, control parameters and animation effect parameters in each scene in the scene sequence based on the view configuration data;
Rendering the corresponding scene based on the image-text parameters, the control parameters and the animation effect parameters in each scene to obtain the rendered scene sequence.
5. The object processing method according to any one of claims 1 to 4, further comprising:
Determining a target event on a scene in the sequence of scenes;
configuring a trigger of the target event;
and configuring, on an instruction relay, a behavior operation instruction corresponding to the trigger based on a trigger identification of the trigger.
6. The method for processing an object according to claim 5, wherein, in the playing process of the object to be displayed, if a target event in the object to be displayed is monitored, executing a behavior corresponding to the target event based on a trigger corresponding to the target event, includes:
During the playing process of the object to be displayed, monitoring an event on a scene in the scene sequence through an event monitor;
if the target event is monitored, sending a trigger corresponding to the target event through the event monitor;
Determining a trigger identification of the trigger through an instruction relay, and determining a behavior operation instruction based on the trigger identification;
And executing the behaviors in the received behavior operation instructions through a behavior executor.
7. The method according to claim 6, wherein the sending, by the event monitor, of the trigger corresponding to the target event if the target event is monitored includes:
if a first target event is monitored, sending an original trigger corresponding to the first target event through the event monitor;
or,
if a first target event is monitored and the condition parameters of the first target event meet preset condition parameters, determining to generate a second target event, and sending a condition trigger corresponding to the second target event through the event monitor;
or,
If the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending a time trigger corresponding to the third target event through the event monitor.
8. The method according to claim 7, wherein if the first target event is monitored, and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending, by the event monitor, a time trigger corresponding to the third target event, includes:
If the first target event is monitored, a delay timer is generated;
determining that the third target event is generated when the timing time parameter on the delay timer meets a first preset time parameter;
and sending a first time trigger corresponding to the third target event through the event monitor.
9. The method according to claim 7, wherein if the first target event is monitored, and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending, by the event monitor, a time trigger corresponding to the third target event, includes:
if the first target event is monitored, generating an interval timer;
generating the third target event and resetting the interval timer every time the interval time parameter on the interval timer meets a second preset time parameter;
and sending a second time trigger corresponding to the third target event through the event monitor.
10. An object processing apparatus, comprising:
The data acquisition module is configured to acquire configuration data of an object to be displayed; the configuration data comprises scene configuration data and view configuration data; the scene configuration data comprises at least one of feature information of scenes, display feature information of the scenes and feature information among the scenes;
a scene construction module configured to perform determining a scene sequence having scene information based on the scene configuration data;
the view construction module is configured to perform rendering of views in each scene in the scene sequence based on the view configuration data to obtain a rendered scene sequence;
the object generation module is configured to add the rendered scene sequence to a view container to obtain the object to be displayed;
the playing module is configured to play the object to be displayed in response to a playing instruction for the object to be displayed;
And the event execution module is configured to execute the action corresponding to the target event based on the trigger corresponding to the target event if the target event in the object to be displayed is monitored in the playing process of the object to be displayed.
11. The object processing apparatus according to claim 10, further comprising:
a controller creation module configured to create a scene controller before the configuration data of the object to be displayed is acquired;
The scene building module is configured to perform:
transmitting the configuration data of the object to be displayed to the scene controller;
analyzing the configuration data of the object to be displayed by using the scene controller to obtain the scene configuration data and the view configuration data;
determining, with the scene controller, the scene sequence having scene information based on the scene configuration data;
The view construction module is configured to perform rendering of views in each scene in the scene sequence based on the view configuration data by using the scene controller, so as to obtain the rendered scene sequence.
12. The object processing apparatus of claim 11, wherein the scene building module is configured to perform:
determining feature information of scenes, display feature information of the scenes and feature information among the scenes based on the scene configuration data;
And determining the scene sequence based on the feature information of the scenes, the display feature information of the scenes and the feature information among the scenes.
13. The object processing apparatus of claim 11, wherein the view construction module is configured to perform:
determining image-text parameters, control parameters and animation effect parameters in each scene in the scene sequence based on the view configuration data;
Rendering the corresponding scene based on the image-text parameters, the control parameters and the animation effect parameters in each scene to obtain the rendered scene sequence.
14. The object handling device according to any of claims 10-13, wherein the device further comprises:
an event determination module configured to perform determining a target event on a scene in the sequence of scenes;
a trigger configuration module configured to configure a trigger of the target event;
and an instruction configuration module configured to configure, on the instruction relay, a behavior operation instruction corresponding to the trigger based on a trigger identification of the trigger.
15. The object processing apparatus of claim 14, wherein the event execution module includes:
a monitoring module configured to monitor, during the playing of the object to be displayed, events on the scenes in the scene sequence through an event monitor;
a sending module configured to send, through the event monitor, a trigger corresponding to the target event if the target event is monitored;
an instruction determination module configured to determine a trigger identification of the trigger through the instruction relay, and to determine a behavior operation instruction based on the trigger identification;
and an execution module configured to execute, through a behavior executor, the behavior in the received behavior operation instruction.
16. The object processing apparatus according to claim 15, wherein the transmission module is configured to perform:
if a first target event is monitored, sending an original trigger corresponding to the first target event through the event monitor;
or,
if a first target event is monitored and the condition parameters of the first target event meet preset condition parameters, determining to generate a second target event, and sending a condition trigger corresponding to the second target event through the event monitor;
or,
If the first target event is monitored and the time parameter of the first target event meets the preset time parameter, determining to generate a third target event, and sending a time trigger corresponding to the third target event through the event monitor.
17. The object processing apparatus according to claim 16, wherein the transmission module is configured to perform:
If the first target event is monitored, a delay timer is generated;
determining that the third target event is generated when the timing time parameter on the delay timer meets a first preset time parameter;
and sending a first time trigger corresponding to the third target event through the event monitor.
18. The object processing apparatus according to claim 16, wherein the transmission module is configured to perform:
if the first target event is monitored, generating an interval timer;
generating the third target event and resetting the interval timer every time the interval time parameter on the interval timer meets a second preset time parameter;
and sending a second time trigger corresponding to the third target event through the event monitor.
19. An electronic device, comprising:
A processor;
A memory for storing the processor-executable instructions;
Wherein the processor is configured to execute the instructions to implement the object processing method of any of claims 1 to 9.
20. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the object processing method of any one of claims 1 to 9.
21. A computer program product, characterized in that the computer program product comprises a computer program stored in a readable storage medium, from which at least one processor of a computer device reads and executes the computer program, such that the computer device performs the object processing method according to any of claims 1 to 9.
CN202210344776.XA 2022-03-31 2022-03-31 Object processing method and device, electronic equipment and storage medium Active CN114860358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210344776.XA CN114860358B (en) 2022-03-31 2022-03-31 Object processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210344776.XA CN114860358B (en) 2022-03-31 2022-03-31 Object processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114860358A CN114860358A (en) 2022-08-05
CN114860358B true CN114860358B (en) 2024-06-21

Family

ID=82630262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210344776.XA Active CN114860358B (en) 2022-03-31 2022-03-31 Object processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114860358B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114911477A (en) * 2022-03-31 2022-08-16 北京达佳互联信息技术有限公司 Event execution method and device, electronic equipment and storage medium
CN117119099A (en) * 2023-07-31 2023-11-24 珠海格力电器股份有限公司 Application information display method and device, user terminal and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109040822A (en) * 2018-07-16 2018-12-18 北京奇艺世纪科技有限公司 Player configuration method and device, storage medium
CN112235604A (en) * 2020-10-20 2021-01-15 广州博冠信息科技有限公司 Rendering method and device, computer readable storage medium and electronic device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106792188B (en) * 2016-12-06 2020-06-02 腾讯数码(天津)有限公司 Data processing method, device and system for live broadcast page and storage medium
CN108961380B (en) * 2017-05-26 2022-06-14 创新先进技术有限公司 Graph rendering method and device
CN112070863B (en) * 2019-06-11 2025-05-30 腾讯科技(深圳)有限公司 Animation file processing method, device, computer-readable storage medium and computer equipment
CN112150586B (en) * 2019-06-11 2024-11-22 腾讯科技(深圳)有限公司 Animation processing method, device, computer readable storage medium and computer equipment
CN112135161A (en) * 2020-09-25 2020-12-25 广州华多网络科技有限公司 Dynamic effect display method and device of virtual gift, storage medium and electronic equipment
CN113204722B (en) * 2021-03-30 2022-11-22 北京达佳互联信息技术有限公司 Page display method and device, electronic equipment and storage medium
CN113850898B (en) * 2021-10-18 2024-08-06 深圳追一科技有限公司 Scene rendering method and device, storage medium and electronic equipment
CN114040240B (en) * 2021-11-18 2024-09-20 北京达佳互联信息技术有限公司 Button configuration method, device, server and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109040822A (en) * 2018-07-16 2018-12-18 北京奇艺世纪科技有限公司 Player configuration method and device, storage medium
CN112235604A (en) * 2020-10-20 2021-01-15 广州博冠信息科技有限公司 Rendering method and device, computer readable storage medium and electronic device

Also Published As

Publication number Publication date
CN114860358A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN107729522B (en) Multimedia resource fragment intercepting method and device
EP4376423A1 (en) Virtual object interaction method and device, and storage medium and computer program product
US20200007944A1 (en) Method and apparatus for displaying interactive attributes during multimedia playback
CN110874217A (en) Interface display method, device and storage medium for quick application
CN110865863B (en) Interface display method and device for fast application and storage medium
CN114860358B (en) Object processing method and device, electronic equipment and storage medium
CN103886025A (en) Method and device for displaying pictures in webpage
CN112508020A (en) Labeling method and device, electronic equipment and storage medium
CN112616053B (en) Transcoding method and device for live video and electronic equipment
US10613622B2 (en) Method and device for controlling virtual reality helmets
CN110971974B (en) Configuration parameter creating method, device, terminal and storage medium
CN115278273A (en) Resource display method and device, electronic equipment and storage medium
CN111596980B (en) Information processing method and device
CN113031781A (en) Augmented reality resource display method and device, electronic equipment and storage medium
CN112733058A (en) Data processing system, method, device, electronic equipment and storage medium
CN110769311A (en) Method, device and system for processing live data stream
CN111338961A (en) Application debugging method and device, electronic device and storage medium
CN112732250A (en) Interface processing method, device and storage medium
CN110908904A (en) Method and device for debugging fast application and electronic equipment
CN117376627A (en) Resource interaction method and device, electronic equipment and storage medium
US11496787B2 (en) Information processing method and device, electronic device, and storage medium
CN111290944B (en) Script generation method, script generation device and storage medium
CN109947640B (en) Regression test-based core function coverage statistical method and device
CN114911477A (en) Event execution method and device, electronic equipment and storage medium
CN111131000A (en) Information transmission method, device, server and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant