Disclosure of Invention
In view of the above problems, the present application provides a model rendering method, device, equipment and medium, which can improve the loading speed of a target three-dimensional model, reduce the rendering time of the target three-dimensional model and improve the rendering efficiency of the target three-dimensional model.
The embodiment of the application discloses the following technical scheme:
In a first aspect, the present application discloses a model rendering method, the method comprising:
Acquiring an original model information set of a target three-dimensional model, wherein the original model information set comprises an original texture information set and an original geometric information set, the target three-dimensional model is used for rendering a target scene, the original texture information set comprises at least one of color information and texture map information, and the original geometric information set comprises at least one of vertex coordinate information and side information;
selecting, from the original geometric information set, updated geometric information for rendering the target scene, wherein the similarity between the updated geometric information and other original geometric information is lower than a first threshold and the distance between the updated geometric information and the other original geometric information is greater than a second threshold; and acquiring updated texture information for rendering the target scene by removing repeated texture blocks from the original texture information set;
Rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the method further comprises:
and if the similarity between the first original texture information and the second original texture information in the original texture information set is higher than a third threshold value, obtaining updated texture information for rendering the target scene by combining the first original texture information and the second original texture information.
Optionally, the rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model includes:
Determining distance information between the target three-dimensional model and a current camera viewpoint;
Selecting a detail level model matched with the distance information from a plurality of detail level models;
And rendering the updated texture information and the updated geometric information corresponding to the detail level model to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model includes:
determining rendering load information of the target scene;
according to the rendering load information, adjusting the resolution of the updated texture information and/or adjusting the level of detail of the updated geometric information;
Rendering the adjusted updated texture information and/or the adjusted updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the acquiring the original model information set of the target three-dimensional model includes:
Determining a to-be-loaded range of the target three-dimensional model;
and acquiring an original model information set of the target three-dimensional model in the range to be loaded.
In a second aspect, the application discloses a model rendering device, which comprises an acquisition module, a selection module and a rendering module;
The acquisition module is used for acquiring an original model information set of a target three-dimensional model, wherein the original model information set comprises an original texture information set and an original geometric information set, the target three-dimensional model is used for rendering a target scene, the original texture information set comprises at least one of color information and texture map information, and the original geometric information set comprises at least one of vertex coordinate information and side information;
The selecting module is configured to select, from the original set of geometric information, updated geometric information for rendering the target scene, where a similarity to other original geometric information is lower than a first threshold and a distance from the other original geometric information is greater than a second threshold, and acquire updated texture information for rendering the target scene by removing repeated texture blocks from the original set of texture information;
And the rendering module is used for rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the selecting module is further configured to obtain updated texture information for rendering the target scene by merging the first original texture information and the second original texture information if the similarity between the first original texture information and the second original texture information in the original texture information set is higher than a third threshold.
Optionally, the rendering module comprises a first rendering sub-module, a second rendering sub-module and a third rendering sub-module;
The first rendering sub-module is used for determining the distance information between the target three-dimensional model and the current camera viewpoint;
The second rendering sub-module is used for selecting a detail level model matched with the distance information from a plurality of detail level models;
And the third rendering sub-module is used for rendering the updated texture information and the updated geometric information corresponding to the detail level model to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the rendering module comprises a fourth rendering sub-module, a fifth rendering sub-module and a sixth rendering sub-module;
the fourth rendering sub-module is used for determining rendering load information of the target scene;
The fifth rendering sub-module is used for adjusting the resolution of the updated texture information and/or adjusting the level of detail of the updated geometric information according to the rendering load information;
And the sixth rendering sub-module is used for rendering the adjusted updated texture information and/or the adjusted updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
Optionally, the acquisition module is specifically configured to determine a to-be-loaded range of the target three-dimensional model, and acquire an original model information set of the target three-dimensional model within the to-be-loaded range.
In a third aspect, the present application discloses a computer device comprising a processor and a memory:
The memory is used for storing program codes and transmitting the program codes to the processor;
The processor is configured to execute the steps of the model rendering method according to the first aspect according to instructions in the program code.
In a fourth aspect, the present application discloses a computer readable storage medium for storing program code for performing the steps of the model rendering method according to the first aspect.
Compared with the prior art, the application has the following beneficial effects:
The embodiment of the application provides a model rendering method, device, equipment and medium, wherein the method comprises the steps of obtaining an original model information set of a target three-dimensional model, wherein the original model information set comprises an original texture information set and an original geometric information set, and the target three-dimensional model is used for rendering a target scene; selecting updated geometric information for rendering the target scene, wherein the similarity with other original geometric information is lower than a first threshold value and the distance with other original geometric information is greater than a second threshold value, from the original geometric information set, and acquiring updated texture information for rendering the target scene by removing repeated texture blocks from the original texture information set; rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model. Therefore, by only loading the necessary parts (namely updating texture information and updating geometric information) in the target three-dimensional model, the data volume to be read and processed can be obviously reduced, and the data transmission time from a storage device (such as a hard disk or a network) to a memory is reduced, so that the loading speed of the target three-dimensional model is increased. In addition, unnecessary details can be removed by selecting updated texture information and updated geometric information for rendering, so that the complexity of the target three-dimensional model is reduced, the data volume required to be processed by a rendering engine is reduced, the rendering time of the target three-dimensional model is further shortened, and the rendering efficiency of the target three-dimensional model is improved.
Detailed Description
As described above, OpenGL may involve reading, parsing, and processing a large amount of data when handling the loading and rendering tasks of a three-dimensional model; because the data volume is very large and the processing is complex, loading and rendering times tend to be long, thereby affecting the overall experience of the user.
The inventor has studied this problem and proposes a model rendering method, apparatus, device and medium. By loading only the necessary parts of the target three-dimensional model (namely the updated texture information and the updated geometric information), the method can significantly reduce the amount of data that needs to be read and processed, and shorten the data transmission time from a storage device (such as a hard disk or a network) to memory, thereby accelerating the loading speed of the target three-dimensional model. In addition, unnecessary details can be removed by selecting updated texture information and updated geometric information for rendering, so that the complexity of the target three-dimensional model is reduced, the data volume required to be processed by a rendering engine is reduced, the rendering time of the target three-dimensional model is further shortened, and the rendering efficiency of the target three-dimensional model is improved.
In order to make the present application better understood by those skilled in the art, the following description will clearly and completely describe the technical solutions in the embodiments of the present application with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Referring to fig. 1, the flow chart of a model rendering method according to an embodiment of the present application is shown. The method comprises the following steps:
S101, acquiring an original model information set of a target three-dimensional model, wherein the original model information set comprises an original texture information set and an original geometric information set, the target three-dimensional model is used for rendering a target scene, the original texture information set comprises at least one of color information and texture map information, and the original geometric information set comprises at least one of vertex coordinate information and side information.
The original texture information set describes visual characteristics of the target three-dimensional model surface, including but not limited to color information and texture map information. Wherein the color information defines the colors of the respective parts of the target three-dimensional model. The color information can be simple pure color or gradient color or pattern color, and is used for distinguishing different parts of the target three-dimensional model or representing different material effects. Texture map information provides more complex surface details of the target three-dimensional model, such as texture (e.g., wood grain, stone grain, cloth texture, etc., used to simulate the surface texture of an object), reflectivity (used to simulate the reflection effects of the object surface), glossiness (used to define the smoothness of the object surface), etc., which can greatly enhance the realism of the rendered target scene.
The original set of geometric information describes the spatial shape and structure of the three-dimensional model of the object, including but not limited to vertex coordinate information and side information. Wherein the vertex coordinate information defines three-dimensional positions of respective vertices of the target three-dimensional model. Each vertex has a three-dimensional coordinate (X, Y, Z) that together form the three-dimensional shape of the three-dimensional model of the object. By adjusting the position of the vertices, the shape and structure of the target three-dimensional model can be changed. The side information defines how the vertices are connected to form the basic structure of the target three-dimensional model.
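The two information sets described above can be sketched as simple data structures. The following Python sketch is illustrative only; the field names (`color`, `texture_map`, `vertices`, `edges`) are assumptions, not part of the disclosed method.

```python
from dataclasses import dataclass, field

# Hypothetical minimal data structures for the "original model information
# set": a texture information set plus a geometric information set.
@dataclass
class TextureInfo:
    color: tuple          # e.g. an (R, G, B) base color
    texture_map: str      # e.g. a path or identifier of a texture map

@dataclass
class GeometryInfo:
    vertices: list        # list of (x, y, z) vertex coordinates
    edges: list           # list of (i, j) index pairs connecting vertices

@dataclass
class ModelInfoSet:
    textures: list = field(default_factory=list)
    geometries: list = field(default_factory=list)

# A unit square as a tiny example model: four vertices joined by four edges.
square = GeometryInfo(
    vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    edges=[(0, 1), (1, 2), (2, 3), (3, 0)],
)
model = ModelInfoSet(
    textures=[TextureInfo((200, 180, 150), "wood.png")],
    geometries=[square],
)
```

Adjusting the entries of `vertices` changes the shape of the model, while `edges` records how those vertices connect, mirroring the description above.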
S102, selecting updated geometric information for rendering the target scene, wherein the similarity with other original geometric information is lower than a first threshold value and the distance with other original geometric information is greater than a second threshold value, from the original geometric information set, and acquiring updated texture information for rendering the target scene by removing repeated texture blocks from the original texture information set.
In some specific implementations, to reduce unnecessary data processing in the rendering process, thereby reducing the amount of computation in rendering and improving the rendering efficiency, updated texture information for rendering the target scene may be selected from the original texture information set.
Specifically, the original texture information set is first analyzed to find and identify repeated texture blocks therein. These repeated texture blocks may be identical texture blocks of texture, reflectivity, gloss, etc., or similar texture blocks that are visually indistinguishable. Once the duplicate texture blocks are identified, their excess copies are removed from the original texture information set, resulting in updated texture information for rendering the target scene. The updated texture information includes all the texture data that is necessary, without repetition, when rendering the target scene. Because redundant original texture information is removed, the updated texture information obtained by screening is smaller in data size and higher in loading speed, and meanwhile, memory and cache resources can be utilized more efficiently during rendering, so that the rendering efficiency is improved.
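One minimal way to realize this deduplication is to hash each texture block and keep only the first occurrence of each digest. The sketch below covers only exact duplicates; detecting "visually indistinguishable" near-duplicates would require a perceptual similarity measure, which is omitted here.

```python
import hashlib

def deduplicate_texture_blocks(blocks):
    """Remove exact duplicate texture blocks, keeping the first occurrence.

    `blocks` is a list of bytes objects (raw texture block data).  Hashing
    each block lets duplicates be detected without pairwise comparison.
    """
    seen = set()
    updated = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:
            seen.add(digest)
            updated.append(block)
    return updated

blocks = [b"wood", b"stone", b"wood", b"wood"]
print(deduplicate_texture_blocks(blocks))  # [b'wood', b'stone']
```

The surviving blocks form the updated texture information: every block needed for rendering is present exactly once.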
In some specific implementations, in order to reduce unnecessary data processing in the rendering process, thereby reducing the amount of computation in rendering and improving the rendering efficiency, the unique geometric information (i.e., the geometric information which is not highly similar to other geometric information and is far away from other geometric information) may be selected from the original geometric information set as updated geometric information in rendering the target scene.
Specifically, first, a first threshold value and a second threshold value are set. The first threshold is used to measure the degree of similarity between two geometric information. When the similarity of the two geometric information is below the first threshold, they are considered unique and need to be processed separately during the rendering process to preserve the details and diversity of the rendered target scene. When the similarity of two geometric information is higher than or equal to the first threshold, they are considered similar, so only one of them may be rendered during the rendering process, or the geometric information may be optimized by other means (e.g., merging, instancing, etc.), thereby improving the rendering efficiency.
The second threshold is used to measure the relative positional relationship of the geometric information in space. When the distance between two geometric information is greater than the second threshold, they are considered unrelated and need to be processed separately in the rendering process to preserve the details and diversity of the rendered target scene. When the distance between two geometric information is less than or equal to the second threshold, they are considered close together, so only one of them may be rendered during the rendering process, or the geometric information may be optimized by other means (e.g., merging, instancing, etc.), thereby improving the rendering efficiency.
Thus, if one geometry information has a similarity to all other geometry information below a first threshold and its distance from all other geometry information is greater than a second threshold, then this geometry information is considered as updated geometry information for rendering the target scene.
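The selection rule of step S102 can be written directly as a filter. In the sketch below, the `similarity` and `distance` callables stand in for whatever mesh similarity metric and spatial distance the implementation uses; the toy 1-D metrics at the bottom exist only to make the example runnable.

```python
def select_unique_geometry(geoms, similarity, distance,
                           first_threshold, second_threshold):
    """Keep only geometry items whose similarity to EVERY other item is
    below `first_threshold` AND whose distance to every other item is
    greater than `second_threshold` -- a direct reading of step S102."""
    updated = []
    for i, g in enumerate(geoms):
        if all(similarity(g, o) < first_threshold
               and distance(g, o) > second_threshold
               for j, o in enumerate(geoms) if j != i):
            updated.append(g)
    return updated

# Toy 1-D stand-ins for a real mesh-similarity metric and spatial distance.
sim = lambda a, b: 1.0 if a == b else 0.0
dist = lambda a, b: abs(a - b)
print(select_unique_geometry([0, 5, 5], sim, dist, 0.5, 1))  # [0]
```

Here the two identical items (`5`, `5`) fail the similarity test against each other and are dropped, while the unique item (`0`) survives as updated geometric information.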
Therefore, through the screening of the texture information and the geometric information provided by the embodiment of the application, the calculated amount during rendering can be reduced, and the rendering efficiency is improved. Meanwhile, as the unique geometric information is reserved, the rendered target scene can be ensured to have good visual effect while keeping details and sense of reality.
It should be noted that the present application does not limit the order in which the updated texture information is selected from the original texture information set and the updated geometric information is selected from the original geometric information set.
And S103, rendering the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
In summary, the embodiment of the present application provides a model rendering method, which can significantly reduce the amount of data to be read and processed by loading only the necessary parts (i.e. updating texture information and updating geometric information) in the target three-dimensional model, and reduce the data transmission time from the storage device (such as a hard disk or a network) to the memory, thereby accelerating the loading speed of the target three-dimensional model. In addition, unnecessary details can be removed by selecting updated texture information and updated geometric information for rendering, so that the complexity of the target three-dimensional model is reduced, the data volume required to be processed by a rendering engine is reduced, the rendering time of the target three-dimensional model is further shortened, and the rendering efficiency of the target three-dimensional model is improved.
Referring to fig. 2, a flowchart of another model rendering method according to an embodiment of the present application is shown.
It should be noted that, in the model rendering method disclosed in the present application, the target three-dimensional model file can be read through the Assimp library to obtain the original model information of the target three-dimensional model. Compared with OpenGL or other large graphics libraries, the Assimp library is smaller and more convenient to integrate into a project, without significantly increasing the size of the final application. Moreover, the Assimp library supports multiple three-dimensional model file formats, which enables developers to easily load and process model data from different sources.
Furthermore, the Assimp library not only reads the model data, but also performs certain preprocessing, such as coordinate conversion, skeleton animation data analysis and the like, so that the data processing work in the subsequent rendering process is greatly simplified. The method is particularly suitable for processing complex scenes and large-scale data sets, can effectively reduce the burden of a CPU and improves the overall performance of software.
The method comprises the following steps:
S201, determining a to-be-loaded range of the target three-dimensional model.
Because the target three-dimensional model may be very large and complex, loading the entire target three-dimensional model at a time may result in problems such as insufficient memory, reduced rendering performance, and the like. Therefore, a reasonable target three-dimensional model to be loaded is generally required to be determined according to factors such as a current view angle, user interaction, scene complexity and the like.
Specifically, determining the range to be loaded of the target three-dimensional model according to the current viewing angle refers to determining, according to the position and orientation of the current camera viewpoint, which parts of the target three-dimensional model are located in or near the view frustum of the camera; these parts of the target three-dimensional model need to be loaded preferentially.
Determining the scope to be loaded of the target three-dimensional model according to user interaction refers to predicting which parts of the target three-dimensional model may enter the field of view of the user according to the operation (such as rotation, scaling, translation and the like) of the user, wherein the parts of the target three-dimensional model need to be loaded preferentially.
Determining the scope to load of the target three-dimensional model according to scene complexity refers to that for complex scenes, it may be necessary to assign priorities to load the target three-dimensional model according to importance and urgency of the scene to ensure that critical portions of the target three-dimensional model are loaded first.
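A minimal sketch of such a loading-range decision is shown below. It uses a simple distance-to-camera test as a stand-in for a real view-frustum or priority computation; the part dictionary layout and the `max_distance` cutoff are illustrative assumptions, not the disclosed method itself.

```python
def parts_to_load(parts, camera_pos, max_distance):
    """Hypothetical range check: a model part is scheduled for loading if
    its centroid lies within `max_distance` of the camera viewpoint.  A
    real engine would instead test against the camera's view frustum and
    factor in predicted user interaction and scene priority."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [p for p in parts
            if dist(p["centroid"], camera_pos) <= max_distance]

parts = [{"name": "roof", "centroid": (0, 0, 0)},
         {"name": "gate", "centroid": (100, 0, 0)}]
print([p["name"] for p in parts_to_load(parts, (0, 0, 5), 50)])  # ['roof']
```

Only the parts that pass the check enter the range to be loaded; the rest can be deferred until the viewpoint or user interaction brings them closer.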
It should be noted that, the present application is not limited to a specific method for determining the to-be-loaded range of the target three-dimensional model.
S202, acquiring an original model information set of a target three-dimensional model in a range to be loaded, wherein the original model information set comprises an original texture information set and an original geometric information set, the target three-dimensional model is used for rendering a target scene, the original texture information set comprises at least one of color information and texture map information, and the original geometric information set comprises at least one of vertex coordinate information and side information.
In some specific implementations, in order to improve the rendering efficiency and the response speed, part of the original model information of the target three-dimensional model may be selectively loaded according to actual requirements. For example, if the current scene only needs to show the appearance of the static target three-dimensional model, and does not need to process complex functions such as animation or physical simulation, only the original texture information set and the original geometric information set of the target three-dimensional model can be obtained, and data (such as the original animation information set and the original physical simulation information set) related to the complex functions are ignored, so that memory occupation is reduced, and rendering performance is improved.
In other specific implementations, in order to reduce memory occupation, lower storage requirements, and optimize network transmission bandwidth, the acquired original texture information set may be compressed, thereby avoiding insufficient memory as well as increased latency and bandwidth consumption.
It is understood that the step S202 is similar to the step S101, and will not be described here.
And S203, acquiring updated texture information for rendering the target scene by removing the repeated texture blocks from the original texture information set.
It is understood that the step S203 is similar to the step S102, and will not be described here.
S204, selecting, from the original geometric information set, updated geometric information for rendering the target scene, wherein the similarity between the updated geometric information and other original geometric information is lower than a first threshold and the distance between the updated geometric information and the other original geometric information is greater than a second threshold.
It is understood that the step S204 is similar to the step S102, and will not be described here.
S205, if the similarity between the first original texture information and the second original texture information in the original texture information set is higher than a third threshold value, obtaining updated texture information for rendering the target scene by combining the first original texture information and the second original texture information.
In order to reduce the memory occupation during rendering and improve the rendering performance, similar original texture information can be detected and combined. Specifically, the similarity between the first original texture information and the second original texture information in the original texture information set is first checked. Such similarity is typically calculated by comparing pixel data, color distribution, texture features, etc. of the two original texture information. If the calculated similarity is higher than the third threshold, the two original texture information are sufficiently similar in vision, a merging process can be performed, and the merged texture information is taken as updated texture information for rendering the target scene.
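The merge step can be sketched as follows. The similarity measure here is a crude pixelwise comparison on toy 1-D grayscale textures, and the merge is a simple average; both are illustrative assumptions standing in for the pixel-data, color-distribution, or texture-feature comparisons mentioned above.

```python
def merge_similar_textures(tex_a, tex_b, third_threshold):
    """If the pixelwise similarity of two textures exceeds the third
    threshold, merge them into one texture by averaging; otherwise keep
    both.  Textures are toy 1-D lists of grayscale values (0..255)."""
    matches = sum(abs(a - b) <= 8 for a, b in zip(tex_a, tex_b))
    similarity = matches / len(tex_a)
    if similarity > third_threshold:
        merged = [(a + b) // 2 for a, b in zip(tex_a, tex_b)]
        return [merged]            # one updated texture replaces two
    return [tex_a, tex_b]          # too different: keep both

print(merge_similar_textures([100, 100], [102, 104], 0.9))  # [[101, 102]]
```

When the two textures are visually close, one merged texture replaces both in the updated texture information, halving the memory those two entries would otherwise occupy.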
It will be appreciated that in most cases, merging similar texture information does not have a significant impact on the final rendering effect, as they are already sufficiently close in visual sense. Therefore, by combining similar original texture information, the quantity of updated texture information stored in the memory can be reduced, so that memory resources are saved.
And S206, determining the distance information between the target three-dimensional model and the current camera viewpoint.
In some specific implementations, a level of detail (Level of Detail, LOD) technique may also be applied to select a detail level model of appropriate complexity according to the distance information between the target three-dimensional model and the observer (i.e., the current camera viewpoint), achieving the technical effect of reducing the amount of computation while maintaining the visual quality.
S207, selecting a detail level model matched with the distance information from a plurality of detail level models.
And selecting a best matching detail level model from the predefined multiple detail level models according to the distance information obtained in the step S206.
In particular, when the distance information is large (i.e., the target three-dimensional model is far from the current camera viewpoint), a lower level of detail model (i.e., simpler, fewer polygons) is typically selected. This is because the visual details of distant objects are greatly simplified, and the use of high detail models not only wastes computational resources, but may also lead to visual unrealistic sensations due to perspective effects. When the distance information is small (i.e., the target three-dimensional model is closer to the current camera viewpoint), a higher level of detail model (i.e., more complex, more polygons) is typically selected, thereby ensuring that the target three-dimensional model provides enough detail to maintain visual realism and clarity when viewed at close distances.
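This distance-to-LOD mapping reduces to a threshold lookup. In the sketch below, the cutoff distances are illustrative assumptions; level 0 is the most detailed model and higher levels are progressively simpler.

```python
def select_lod(distance, lod_cutoffs):
    """Pick a level-of-detail index from the camera distance.

    `lod_cutoffs` is a sorted list of cutoff distances; the first cutoff
    the distance fits under determines the level.  Level 0 = full detail.
    """
    for level, cutoff in enumerate(lod_cutoffs):
        if distance <= cutoff:
            return level
    return len(lod_cutoffs)  # beyond all cutoffs: simplest model

print(select_lod(5, [10, 50, 200]))    # 0  (close: full-detail model)
print(select_lod(120, [10, 50, 200]))  # 2  (far: simplified model)
```

A near model thus gets the polygon-rich version, while a distant one gets the cheap version whose simplification the perspective projection hides.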
Thus, it can be ensured that a simplified version of the level of detail model is sufficient to provide sufficient visual information while reducing the rendering burden when the object to be rendered is far, while the level of detail model can provide finer detail when the object to be rendered is near.
And S208, rendering the updated texture information and the updated geometric information corresponding to the detail level model to obtain a target scene corresponding to the target three-dimensional model.
In some specific implementations, the rendering load information of the target scene can also be dynamically evaluated by analyzing the usage rate of the GPU during the rendering process. The rendering load information refers to a measure of the computing resources and time required for rendering the target scene, and depends on the complexity of the target three-dimensional model in the target scene (such as the number of vertices, texture resolution, number of materials, etc.), the complexity of illumination and shadow calculation, the view resolution (i.e., the number of pixels of the rendering output), and the frame rate requirement during real-time rendering. If the rendering load value is high, an attempt may be made to reduce the resolution of the texture to reduce the memory usage of the GPU and the texture sampling time. Conversely, if the rendering load is low and performance allows, the texture resolution may be increased to improve image quality. Alternatively, if the rendering load value is high, an attempt may be made to select a lower level of detail model (i.e., simpler, with fewer polygons), thereby conserving computational resources. Conversely, if the rendering load is low and performance allows, a higher level of detail model (i.e., more complex, with more polygons) may be selected, thereby ensuring that the target three-dimensional model provides enough detail to maintain visual realism and clarity. The target scene is then rendered using the adjusted updated texture information and/or the adjusted updated geometric information. Through such adjustments, the rendering load can be reduced as much as possible while keeping the visual effect acceptable, improving rendering efficiency and performance.
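The load-driven adjustment described above can be sketched as a small feedback rule. The GPU-usage thresholds (0.85 / 0.5) and the resolution bounds are illustrative assumptions; a real implementation would tune them against the target frame rate.

```python
def adjust_for_load(gpu_usage, texture_resolution, lod_level, max_lod):
    """Sketch of load-driven adjustment: when GPU usage is high, halve the
    texture resolution and step to a simpler LOD; when usage is low and
    performance allows, do the reverse.  Higher lod_level = fewer polygons."""
    if gpu_usage > 0.85:                                   # overloaded
        texture_resolution = max(64, texture_resolution // 2)
        lod_level = min(max_lod, lod_level + 1)
    elif gpu_usage < 0.5:                                  # headroom left
        texture_resolution = min(4096, texture_resolution * 2)
        lod_level = max(0, lod_level - 1)
    return texture_resolution, lod_level

print(adjust_for_load(0.9, 1024, 0, 3))  # (512, 1): shed load
print(adjust_for_load(0.3, 512, 2, 3))   # (1024, 1): restore quality
```

Running this each frame (or every few frames) keeps the texture resolution and detail level tracking the measured rendering load, as the paragraph above describes.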
It will be appreciated that the rendering process also needs to respond to various user operations, such as mouse dragging and scroll-wheel scrolling, to generate the target scene desired by the user. For example, a user may wish to change the viewing angle by dragging the mouse, or to zoom the target scene corresponding to the target three-dimensional model in or out by scrolling the scroll wheel; the camera viewpoint and/or rendering parameters therefore need to be updated in real time according to these user operations to ensure that the rendered scene is consistent with the user's expectations.
In summary, the embodiment of the present application provides a model rendering method, which can significantly reduce the amount of data to be read and processed by loading only the necessary parts (i.e. updating texture information and updating geometric information) in the target three-dimensional model, and reduce the data transmission time from the storage device (such as a hard disk or a network) to the memory, thereby accelerating the loading speed of the target three-dimensional model. In addition, unnecessary details can be removed by selecting updated texture information and updated geometric information for rendering, so that the complexity of the target three-dimensional model is reduced, the data volume required to be processed by a rendering engine is reduced, the rendering time of the target three-dimensional model is further shortened, and the rendering efficiency of the target three-dimensional model is improved.
Referring to fig. 3, the present application provides a model rendering device. The model rendering device 300 comprises an acquisition module 301, a selection module 302 and a rendering module 303.
An obtaining module 301, configured to obtain an original model information set of a target three-dimensional model, where the original model information set includes an original texture information set and an original geometry information set, and the target three-dimensional model is used for rendering a target scene, the original texture information set includes at least one of color information and texture map information, and the original geometry information set includes at least one of vertex coordinate information and side information;
A selection module 302, configured to select, from the original geometric information set, updated geometric information for rendering the target scene whose similarity with other original geometric information is lower than a first threshold and whose distance from the other original geometric information is greater than a second threshold, and to obtain updated texture information for rendering the target scene by removing repeated texture blocks from the original texture information set;
and the rendering module 303 is configured to render the updated texture information and the updated geometric information to obtain a target scene corresponding to the target three-dimensional model.
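The selection step performed by module 302 can be sketched as follows. The similarity and distance metrics are deliberately left as caller-supplied functions, since the application does not fix a particular metric; the geometry and texture representations used here are illustrative assumptions.

```python
def select_model_info(geometries, textures, similarity, distance,
                      sim_threshold, dist_threshold):
    """Filter an original model information set down to the parts rendered.

    A geometry entry is kept only if, against every other entry, its
    similarity stays below `sim_threshold` (the first threshold) and its
    distance stays above `dist_threshold` (the second threshold).
    Texture blocks are deduplicated by exact repetition.
    """
    updated_geometry = [
        g for g in geometries
        if all(similarity(g, other) < sim_threshold
               and distance(g, other) > dist_threshold
               for other in geometries if other is not g)
    ]
    # Remove repeated texture blocks, keeping first occurrences in order.
    seen, updated_textures = set(), []
    for block in textures:
        key = tuple(block) if isinstance(block, list) else block
        if key not in seen:
            seen.add(key)
            updated_textures.append(block)
    return updated_geometry, updated_textures
```

In a real engine the similarity function might compare vertex-position hashes or feature descriptors, and the distance function might compare bounding-box centres; both are placeholders here.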
In some specific implementations, the selecting module 302 is further configured to obtain updated texture information for rendering the target scene by merging the first original texture information and the second original texture information if the similarity between the first original texture information and the second original texture information in the set of original texture information is higher than a third threshold.
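The merging of near-duplicate textures above the third threshold can be sketched as a greedy pass. The `similarity` and `merge` callbacks are assumptions: for example, similarity could compare colour histograms and merge could average two texture blocks.

```python
def merge_similar_textures(textures, similarity, merge, third_threshold):
    """Greedily merge texture entries whose pairwise similarity
    exceeds the third threshold; others are kept as-is."""
    merged = []
    for tex in textures:
        for i, kept in enumerate(merged):
            if similarity(kept, tex) > third_threshold:
                # Combine the first and second original texture information
                # into a single updated texture entry.
                merged[i] = merge(kept, tex)
                break
        else:
            merged.append(tex)
    return merged
```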
In some specific implementations, the rendering module 303 includes a first rendering sub-module, a second rendering sub-module, and a third rendering sub-module;
the first rendering sub-module is used for determining the distance information between the target three-dimensional model and the current camera viewpoint;
the second rendering sub-module is used for selecting a detail level model matched with the distance information from a plurality of detail level models;
And the third rendering sub-module is used for rendering the updated texture information and the updated geometric information corresponding to the detail level model to obtain a target scene corresponding to the target three-dimensional model.
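The distance-based selection performed by the second rendering sub-module can be sketched as a lookup over distance bands. The band boundaries and model labels below are illustrative assumptions.

```python
def select_lod(distance, lod_models):
    """Select the detail level model matching the camera distance.

    `lod_models` is a list of (max_distance, model) pairs sorted by
    max_distance ascending; the first band containing the distance
    wins, and the coarsest model is the fallback for anything farther.
    """
    for max_distance, model in lod_models:
        if distance <= max_distance:
            return model
    return lod_models[-1][1]   # beyond all bands: coarsest model
```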
In some specific implementations, the obtaining module 301 is specifically configured to determine a to-be-loaded range of the target three-dimensional model, and obtain an original model information set of the target three-dimensional model within the to-be-loaded range.
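One simple way to determine a to-be-loaded range is a sphere of fixed radius around the camera viewpoint, as sketched below; a production engine would more likely test model bounds against the view frustum. The data layout (model name mapped to a centre coordinate) is an assumption for illustration.

```python
import math

def models_in_range(viewpoint, models, radius):
    """Keep only the models whose centre falls inside a sphere of
    `radius` around the camera viewpoint; only their original model
    information sets then need to be obtained and loaded."""
    return [
        name for name, centre in models.items()
        if math.dist(viewpoint, centre) <= radius
    ]
```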
In summary, the embodiment of the present application provides a model rendering apparatus, which can significantly reduce the amount of data to be read and processed by loading only the necessary parts of the target three-dimensional model (i.e. the updated texture information and the updated geometric information), and reduce the data transmission time from the storage device (such as a hard disk or a network) to the memory, thereby accelerating the loading speed of the target three-dimensional model. In addition, unnecessary details can be removed by selecting updated texture information and updated geometric information for rendering, so that the complexity of the target three-dimensional model is reduced, the amount of data to be processed by the rendering engine is reduced, the rendering time of the target three-dimensional model is shortened, and the rendering efficiency of the target three-dimensional model is improved.
The embodiment of the application also provides corresponding computer equipment and a computer storage medium, which are used for realizing the model rendering method provided by the embodiment of the application.
The computer device comprises a memory for storing instructions or code and a processor for executing the instructions or code to cause the device to perform a model rendering method according to any of the embodiments of the present application.
The computer storage medium has code stored therein, and when the code is executed, the apparatus executing the code implements the model rendering method according to any of the embodiments of the present application.
The terms "first", "second", and the like (where present) in the embodiments of the present application are used for identification only and do not denote any order.
From the above description of the embodiments, it will be apparent to those skilled in the art that all or part of the steps of the above example methods may be implemented in software running on a general-purpose hardware platform. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a storage medium, such as a read-only memory (ROM)/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network communication device such as a router) to perform the method according to the embodiments, or parts thereof, of the present application.
It should be noted that, in the present specification, each embodiment is described in a progressive manner: identical and similar parts of the embodiments refer to each other, and each embodiment mainly describes its differences from the others. In particular, for the apparatus and system embodiments, since they are substantially similar to the method embodiments, the description is relatively brief, and reference may be made to the description of the method embodiments for the relevant parts. The above-described apparatus and system embodiments are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the solution without undue burden.
The foregoing is only one specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be included in the scope of the present application. Therefore, the protection scope of the present application should be subject to the protection scope of the claims.